How to verify that async has no effect in a specific case? - c#

While using async and await, I sometimes come to a spot where it bugs me to use them because I sense it's pointless. I haven't been able to prove that this is the case (and, admittedly, keeping the await doesn't hurt performance). How can I validate (or reject) my claim in the following example?
bool empty = await Context.Stuff.AnyAsync();
if (empty)
    throw new Exception();
My claim is that, since we're using the result of the check immediately to decide whether to leave the method, that call effectively has to run synchronously anyway. Hence, I believe the following performs no worse.
bool empty = Context.Stuff.Any();
if (empty)
    throw new Exception();
How can I verify my claim (other than empirically)?

I agree with all the comments; it's not about what you do with the result and when, it's about what the thread that was executing your code is allowed to go off and do while the async operation plays out. If Stuff is a complex view in the DB based on a query that takes 5 minutes to run, then Any will block your thread for 5 minutes. AnyAsync could let that thread serve tens of thousands of requests to your webserver in that time. If you've blocked one thread, the webserver has to spin up another to serve other people, and threads are expensive.
Async isn't about "better performance" in the sense of "make it async and it runs faster" - the code executes at the same rate. Async is about "better use of resources" - you need fewer threads, and they spend less time sitting around doing nothing while waiting for e.g. IO to complete.
If it were an office, it's analogous to making a coffee while you're on hold on the phone. Imagine you get put on hold by the gas company and your boss shouts that he wants a coffee. If you're async, you'll put the phone on speaker, get up while you're on hold and make the coffee, waiting to be called back by the sound of the hold music stopping and the gas company saying "hello". If you're sync, you'll sit there ignoring the boss's request while someone else makes the coffee (which means the boss has to employ someone else). It's more expensive to have you sitting around doing nothing, just waiting, and to have to hire someone else, than to have you reach a waiting point in job X and go do something else. If you're async, you'll go and refill the printer while you're waiting for the kettle to boil. If you're sync on hold and the office junior is sync waiting for the kettle to boil, the boss will have to employ yet another person to fill the printer.
Whether it's you or someone else that picks up the call to the gas company when they finally take you off hold depends on whether you're done making the coffee and available and/or whether you've ConfigureAwait'd to indicate it has to be you that picks up the call (true) or whether anyone in the office can continue it (false)
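To make that concrete, a minimal sketch of the two awaits (assuming the question's EF-style Context; this is illustrative, not the asker's code):
// The continuation may resume on any thread-pool thread; no captured context needed.
bool any = await Context.Stuff.AnyAsync().ConfigureAwait(false);

// Default behaviour (equivalent to ConfigureAwait(true)): the continuation is posted
// back to the captured context (e.g. a UI thread), if one exists.
bool anyDefault = await Context.Stuff.AnyAsync();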
comments: I'm comparing it to using an IEnumerable immediately followed by e.g. Count(), which will iterate through the whole shebang anyway. In that case, we might as well go straight to T[] with no deterioration in performance. What's your thought on that?
It depends on what else you will do with the result. If you need to repeatedly ask your result for its length and random-access it, then sure, use ToArrayAsync to turn it into an array and then do all your work with it as locally cached data. Unless it's a query with a two-terabyte result 😀
If you literally only need the count once, then it doesn't make sense to spend all that memory allocating an array and getting its length; just do the CountAsync
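A hedged sketch of those two options, again assuming an EF-style Context:
// Need the number once? Ask the database for just the count.
int count = await Context.Stuff.CountAsync();

// Need the items repeatedly? Materialize once, then reuse the local copy.
var items = await Context.Stuff.ToArrayAsync();
int cachedCount = items.Length; // no further round-trips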
Neither of these seems entirely relevant to the question of "async or no?" - if your IEnumerable is coming over a slow network and is some huge slow query, it still goes back to "let the thread go off and keep busy doing something else so you don't have to spin up more threads". Note that "slow" here could mean even tens of milliseconds. We don't have to be talking minute-long ops to see a benefit from async.
Very fast operations, sure, you can do them sync to save the minuscule cost of setting up the state machine, but be certain where the tipping point lies: the cost of setting up the state machine so the thread can do something else, versus the cost of making a thread wait that amount of time. The state machine costs very little. Faced with the choice, I'd generally choose async if available, especially if any IO is involved.
As for how to prove/refute whether it matters:
You'll have to race the horses for every case: how quickly does the op complete synchronously, how long does the async state management take. It'd probably be quite wearisome to do for an entire codebase, which is why I tend to proceed on an "if async is available and isn't just available for async's sake, then probably someone has reasoned that using async is sensible, so we should use it" basis. Async spreading all the way up through a codebase is perhaps a good thing, if you use its presence in a library as an indicator that you should leverage it in your code (which then indicates to users of your code that they should...).
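A minimal sketch of such a horse race, assuming the question's Context (a serious measurement would use a benchmarking harness like BenchmarkDotNet rather than a bare Stopwatch, and this would need to run inside an async method):
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 1000; i++)
    Context.Stuff.Any(); // synchronous: blocks the calling thread each time
sw.Stop();
Console.WriteLine($"sync:  {sw.ElapsedMilliseconds} ms");

sw.Restart();
for (int i = 0; i < 1000; i++)
    await Context.Stuff.AnyAsync(); // asynchronous: adds state-machine overhead
sw.Stop();
Console.WriteLine($"async: {sw.ElapsedMilliseconds} ms");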

Hence, the following has no worse performance, I believe.
How can I verify my claim (other than empirically)?
There is no other way to verify a claim than empirically. Anything else is just words. You have to do an experiment and see the difference with your own eyes, or see a screenshot with the results of an experiment conducted by someone else. At the end of the day, in order to verify something, someone has to run an experiment.
My guess is that if you do the experiment, you'll find that the synchronous Context.Stuff.Any() has equal or better performance than the asynchronous await Context.Stuff.AnyAsync(). If it's better, the difference might be significant. Asynchronous APIs have been shown to be slower than synchronous APIs on more than one occasion. Personally I am not aware of any API that has both a synchronous and an asynchronous version where the asynchronous one is faster.
You haven't asked which version is more scalable though, so you might not be interested in that side of the equation. In case you are, conducting an experiment that compares the scalability of the two options is much more involved. You can't just use a Stopwatch and measure the duration of a single operation. You'll have to launch a large number of operations concurrently and observe how the system behaves as a whole, obtaining metrics like CPU utilization, memory consumption, throughput, etc. My expectation is that under heavy load the asynchronous version should give better metrics than the synchronous one, and the difference might be substantial.
For what it's worth, you can see here a somewhat silly experiment of mine that demonstrates the asynchronous await Task.Delay() is vastly more scalable than the synchronous Thread.Sleep(). The latter requires one thread per operation. The former requires a handful of threads for 100,000 operations.
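A hedged sketch of that sort of experiment (ThreadPool.ThreadCount requires .NET Core 3.0 or later; the numbers are illustrative):
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// 100,000 concurrent awaited delays need only a handful of pool threads...
var tasks = Enumerable.Range(0, 100000)
    .Select(_ => Task.Delay(10000))
    .ToArray(); // all 100,000 delays are now in flight
await Task.Delay(1000); // let the pool settle
Console.WriteLine($"Pool threads in use: {ThreadPool.ThreadCount}"); // stays tiny
await Task.WhenAll(tasks);

// ...whereas the blocking equivalent needs one dedicated thread per operation:
// new Thread(() => Thread.Sleep(10000)).Start(); // x 100,000 - not viable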

Related

Generating cacheable data exactly once when needed and blocking otherwise?

I'm making a cool (imo) T4 template which will make caching a lot easier. One of the options I have in making this template is to allow for a "load once" type functionality, though I'm not sure how safe it is.
Basically, I want to make it so you can do something like this:
var post = MyCache.PostsCache.GetOrLockLoad(id, () => LoadPost(id));
and basically make it so that when the cache must be loaded, a blocking lock is placed across PostsCache. That way, other threads block until the LoadPost() function is done, so LoadPost is executed only once per cache miss. With the traditional approach, LoadPost is executed any time the cache is empty, possibly multiple times if multiple requests come in before the cache has been loaded the first time.
Is this a reasonable thing to do, or is blocking other threads for something like this dangerous or wasteful? I'm thinking the thread-locking overhead may be greater than the cost of most load operations, but maybe not?
Has anyone seen this kind of thing done and is it a good idea or just dangerous?
Also, although it's designed to run on any cache and application type, it's initially being targeted at ASP.NET's built-in caching mechanism.
This seems ok, since in theory the requests after the first will only wait about as long as it would have taken for them to load the data themselves anyway.
But it still feels a bit iffy - what if the first loader thread gets held up by some intermittent issue that wouldn't affect the other threads? It feels like it would be safer to let each thread try the load independently.
It's also adding the complexity and overhead of the locking mechanisms. Keep in mind the more locking you do, the more risk you introduce of getting a deadlock condition (in general). Although in your case, as long as there's no funky locking going on in the LoadPost method it shouldn't be an issue.
Given the risks, I think you would be better off going with a non-locking option.
After all, for any given thread the wait time is pretty much the same - either the time taken to load, or the time spent waiting for the first thread to load.
I'm always a little uncomfortable when a non-concurrent option is used over a concurrent one, especially if the gain seems marginal.
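For what it's worth, a common way to get the "run the loader exactly once, everyone else waits" behaviour without holding a lock across the whole cache is a ConcurrentDictionary of Lazy<T>. A minimal sketch, where Post is the question's hypothetical entity type:
using System;
using System.Collections.Concurrent;
using System.Threading;

static class MyCache
{
    private static readonly ConcurrentDictionary<int, Lazy<Post>> PostsCache =
        new ConcurrentDictionary<int, Lazy<Post>>();

    // Only one thread executes the loader; concurrent callers block on the
    // same Lazy<T> until the value (or the loader's exception) is published.
    public static Post GetOrLockLoad(int id, Func<Post> load)
    {
        var lazy = PostsCache.GetOrAdd(
            id, _ => new Lazy<Post>(load, LazyThreadSafetyMode.ExecutionAndPublication));
        return lazy.Value;
    }
}
Note that ExecutionAndPublication caches a loader exception inside the Lazy, so a failed first load keeps failing until the entry is evicted - which is exactly the "first loader gets held up" risk raised above.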

Inefficient Parallel.For?

I'm using a parallel for loop in my code to run a long running process on a large number of entities (12,000).
The process parses a string, goes through a number of input files (I've read that given the amount of IO involved the benefits of threading could be questionable, but it seems to have sped things up elsewhere), and outputs a matched result.
Initially, the process goes quite quickly; however, it ends up slowing to a crawl. It's possible that it has just hit some particularly tricky input data, but on closer inspection this seems unlikely.
Within the loop, I added some debug code that prints "Started Processing: " and "Finished Processing: " when it begins/ends an iteration and then wrote a program that pairs a start and a finish, initially in order to find which ID was causing a crash.
However, looking at the number of unmatched IDs, it looks like the program is processing in excess of 400 different entities at once. Given the large amount of IO, this seems like it could be the source of the issue.
So my question(s) is(are) this(these):
Am I interpreting the unmatched IDs properly, or is there some clever stuff going on behind the scenes that I'm missing, or even something obvious?
If you agree that what I've spotted is correct, how can I limit the number of entities it processes at once?
I realise this is perhaps a somewhat unorthodox question and may be tricky to answer given there is no code, but any help is appreciated and if there's any more info you'd like, let me know in the comments.
Without seeing some code, I can guess at the answers to your questions:
Unmatched IDs indicate to me that the thread processing that data is being de-prioritized. This could be due to IO or to the thread pool trying to optimize; however, if you are strongly IO-bound, then that is most likely your issue.
I would take a look at Parallel.For, specifically using ParallelOptions.MaxDegreeOfParallelism to limit the maximum number of concurrent tasks to a reasonable number. I would suggest trial and error to determine the optimal degree, starting around the number of processor cores you have.
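A minimal sketch of that cap (entities and Process stand in for the question's unseen code):
var options = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount // start here, then tune
};
Parallel.For(0, entities.Count, options, i =>
{
    Process(entities[i]); // the long-running parse/match work for one entity
});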
Good luck!
Let me start by confirming that it is indeed a very bad idea to read 2 files at the same time from a hard drive (at least until the majority of HDs out there are SSDs), let alone however many your whole process is using.
The use of parallelism serves to optimize processing using an actually parallelizable resource, which is CPU power. If your parallelized process reads from a hard drive, you're losing most of the benefit.
And even then, even CPU power is not amenable to infinite parallelization. A normal desktop CPU has the capacity to run up to about 10 threads at the same time (it depends on the model, obviously, but that's the order of magnitude).
So, two things:
First, I am going to assume that your entities use all your files, but that your files are not too big to be loaded into memory. If that's the case, you should read your files into objects (i.e. into memory), then parallelize the processing of your entities using those objects. If not, you're basically relying on your hard drive's cache not to reread your files every time you need them, and your hard drive's cache is far smaller than your memory (1000-fold).
Second, you shouldn't run Parallel.For unconstrained over 12,000 items. Parallel.For won't literally create 12,000 threads, but when iterations block on IO the thread pool keeps injecting more workers (consistent with the 400+ entities you observed in flight at once), and that oversubscription is worse than a handful of threads, because of the overhead it creates and because your CPU cannot run more than roughly 10 threads at a time anyway.
You should probably use a more efficient method: the IEnumerable<T>.AsParallel() extension (which comes with .NET 4.0). At runtime it determines a sensible number of threads to run and divides your enumerable into that many batches. Basically, it does the job for you - but it creates overhead of its own, so it's only worthwhile if the processing of one element is actually costly for the CPU.
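A hedged sketch of the PLINQ route, with the same placeholder names:
var results = entities
    .AsParallel()
    .WithDegreeOfParallelism(Environment.ProcessorCount) // optional explicit cap
    .Select(e => Process(e))
    .ToList();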
From my experience, using anything parallel should always be evaluated against not using it in real life, i.e. by actually profiling your application. Don't assume it's going to work better.

Highest Performance for Cross AppDomain Signaling

My performance-sensitive application uses MemoryMappedFiles for pushing bulk data between many AppDomains. I need the fastest mechanism to signal a receiving AD that there is new data to be read.
The design looks like this:
AD 1: Writer to MMF, when data is written it should notify the reader ADs
AD 2,3,N..: Reader of MMF
The readers do not need to know how much data is written, because each message written will start with a non-zero int and they will read until they hit zero; don't worry about partially written messages.
(I think) traditionally, within a single AD, Monitor.Wait/Pulse could be used for this, but I do not think it works across AppDomains.
A MarshalByRefObject remoting method or event could also be used, but I would like something faster. (I benchmarked 1,000,000 MarshalByRefObject calls/sec on my machine - not bad, but I want more.)
A named EventWaitHandle is about twice as fast from initial measurements.
Is there anything faster?
Note: The receiving ADs do not need to get every signal as long as the last signal is not dropped.
A thread context switch costs between 2000 and 10,000 machine cycles on Windows. If you want more than a million per second then you are going to have to solve the Great Silicon Speed Bottleneck. You are already on the very low end of the overhead.
Focus on switching less often and collecting more data in one whack. Nothing needs to switch at a microsecond.
The named EventWaitHandle is the way to go for a one-way signal (for lowest latency). From my measurements it is 2x faster than a cross-appdomain method call. The method call performance is very impressive in the latest version of the CLR to date (4) and should make the most sense for the large majority of cases, since it's possible to pass some information in the method call (in my case, how much data to read).
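A minimal sketch of the named-event approach (the event name and DrainMessages are made up; with several reader ADs you may want one event per reader, since an AutoReset event wakes only a single waiter per Set):
// Writer side: publish data to the MMF, then signal.
var signal = new EventWaitHandle(
    false, EventResetMode.AutoReset, @"Global\MyAppNewData");
signal.Set();

// Reader side: block cheaply until signalled, then read until the zero terminator.
var handle = EventWaitHandle.OpenExisting(@"Global\MyAppNewData");
while (true)
{
    handle.WaitOne();
    DrainMessages(); // hypothetical: reads messages until the zero int
}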
If it's OK to continuously burn a thread on the receiving end, and performance is that critical, a tight loop may be faster.
I hope Microsoft continues to improve the cross-appdomain functionality, as it can really help with application reliability and plug-ins.

What to make parallel? What will make me better? (.net Web Business Application, MVC+SL)

I'm working on a web application framework which uses MSSQL for data storage, mostly just does CRUD operations (but on arbitrarily complex structures), provides a WCF interface for rich Silverlight admin, and has an MVC3 display (plus some basic forms, like user settings, etc.).
It's getting quite good at being able to load, display, edit and save any (reasonably) complex data structure, in a user-friendly way.
But I'm looking towards the future and want to expand my capabilities (and it would be fun to learn new things along the way as well...), so I've decided (in light of what's coming for C# 5...) to try to get some parallel/async optimization in. Now, I haven't even learned TPL and PLINQ yet, so I'm happy for any advice there as well.
So my question is: what are the areas where parallel processing may be of help, and where do TPL and PLINQ help me with that?
My gut tells me I could try saving branches of a data structure to the database in parallel (this is where I'd expect the biggest performance gain), and I could perform some complex operations (file upload, mail sending maybe?) on multiple threads, etc. Can I build complex SL UI views in parallel on the client? (Creating 60 data-bound fields on a view can cause "blinking"...) Can I create partial views (menus, category trees, search forms, etc.) in MVC all at once?
ps: If this turns into a "tell me everything about parallel stuff" thread, I'm happy to make it community wiki...
Remember that an ASP.NET web application is intrinsically a parallel application in any case. Requests can be serviced in parallel, and this is all managed by the ASP.NET framework. So there are two cases:
You have lots of users all hitting the site at once. In which case the parallel processing capability of the server is probably being used to capacity in any case.
You don't have lots of users all hitting the site at once. In which case the server is probably quite capable of dealing with the responses without parallel processing in a suitable fast response time.
Any time you start thinking about optimising something just because it might be fun, or because you just think you should make stuff faster, you are almost certainly guilty of premature optimization. Your efforts could almost certainly be better spent enriching the functionality of the framework, rather than making what is probably a plenty-fast-enough solution a little bit faster (at the cost of significantly increased complexity).
In answer to the question of where TPL and PLINQ can really help: in my opinion, the main advantage of these technologies is in places in the application where you really do have long-running blocking processes. For example, if you have a situation where you call out several times to an external web service, it can be a significant advantage to make these calls in parallel. I would strongly question whether writing to a local database - or even a database on a different box on a local network - would count as a long-running blocking process to the extent that this kind of parallelisation is of any significant value.
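As an illustration, with the C# 5 async support the question mentions, those parallel web-service calls would look roughly like this (the URLs are placeholders):
using System.Net.Http;
using System.Threading.Tasks;

var http = new HttpClient();
// Start all three calls, then await them together:
// total time is roughly the slowest call, not the sum of all three.
var calls = new[]
{
    http.GetStringAsync("https://service-a.example/api"),
    http.GetStringAsync("https://service-b.example/api"),
    http.GetStringAsync("https://service-c.example/api"),
};
string[] responses = await Task.WhenAll(calls);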
Pretty much all the examples you list fall into the category of getting the PC to do something in parallel that it was previously doing in sequence. How many CPUs are on your server? How many are really free when the website is under load? Making something parallel does not necessarily equate to making it faster, unless the process involved spends some measure of time sitting around doing nothing, waiting for an external event.
The first thing to do is ask the users / testers which bits seem slow. The only way to know for sure what's slowing you down is to use a profiler like dotTrace. The results are sometimes surprising.
If you do find something, parallel processing may not be the answer. You need to remember that there is an overhead in splitting tasks up, so if the task is fairly quick in the first place, it could end up being slower. You also have to consider the added complexity, e.g. what happens if half a task succeeds and half fails? (Although TPL and PLINQ shield you from this to an extent.)
Have fun, but I wonder whether this is a case of 1) a solution chasing a problem, and 2) premature optimization.

What is wrong with polling?

I have heard a few developers recently say that they are simply polling stuff (databases, files, etc.) to determine when something has changed and then run a task, such as an import.
I'm really against this idea and feel that utilising available technology such as Remoting, WCF, etc. would be far better than polling.
However, I'd like to identify the reasons why other people prefer one approach over the other and more importantly, how can I convince others that polling is wrong in this day and age?
Polling is not "wrong" as such.
A lot depends on how it is implemented and for what purpose. If you really care about immediate notification of a change, it is very effective. Your code sits in a tight loop, constantly polling (asking) a resource whether it has changed or been updated. This means you are notified as soon as you can be that something is different. But your code is not doing anything else, and there is overhead in the many, many calls to the object in question.
If you are less concerned with immediate notification, you can increase the interval between polls, and this can also work well, but picking the correct interval can be difficult. Too long and you might miss critical changes; too short and you are back to the problems of the first method.
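In code, the whole trade-off lives in one constant. A minimal sketch, where HasChanged and HandleChange are hypothetical:
var interval = TimeSpan.FromSeconds(5); // the knob discussed above
while (!token.IsCancellationRequested)  // token: a CancellationToken you supply
{
    if (HasChanged())    // hypothetical: ask the resource whether it differs
        HandleChange();  // hypothetical: react to the change
    Thread.Sleep(interval); // shorter = fresher info, but more wasted calls
}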
Alternatives, such as interrupts or messages, can provide a better compromise in these situations. You are notified of a change as soon as is practically possible, but the delay is not something you control; it depends on the component itself being timely about passing on changes in state.
What is "wrong" with polling?
It can be resource hogging.
It can be limiting (especially if you have many things you want to know about / poll).
It can be overkill.
But...
It is not inherently wrong.
It can be very effective.
It is very simple.
Examples of things that use polling in this day and age:
Email clients poll for new messages (even with IMAP).
RSS readers poll for changes to feeds.
Search engines poll for changes to the pages they index.
StackOverflow users poll for new questions, by hitting 'refresh' ;-)
Bittorrent clients poll the tracker (and each other, I think, with DHT) for changes in the swarm.
Spinlocks on multi-core systems can be the most efficient synchronisation between cores, in cases where the delay is too short for there to be time to schedule another thread on this core, before the other core does whatever we're waiting for.
Sometimes there simply isn't any way to get asynchronous notifications: for example, to replace RSS with a push system, the server would have to know about everyone who reads the feed and have a way of contacting them. That is a mailing list - precisely one of the things RSS was designed to avoid. Hence the fact that most of my examples are network apps, where this is most likely to be an issue.
Other times, polling is cheap enough to work even where there is async notification.
For a local file, notification of changes is likely to be the better option in principle. For example, you might (might) prevent the disk spinning down if you're forever poking it, although then again the OS might cache. And if you're polling every second on a file which only changes once an hour, you might be needlessly occupying 0.001% (or whatever) of your machine's processing power. This sounds tiny, but what happens when there are 100,000 files you need to poll?
In practice, though, the overhead is likely to be negligible whichever you do, making it hard to get excited about changing code that currently works. Best thing is to watch out for specific problems that polling causes on the system you want to change - if you find any then raise those rather than trying to make a general argument against all polling. If you don't find any, then you can't fix what isn't broken...
There are two reasons why polling could be considered bad by principle.
It is a waste of resources. It is very likely that you will check for a change while no change has occurred. The CPU cycles/bandwidth spent on this action do not result in a change and thus could have been better spent on something else.
Polling is done on a certain interval. This means that you won’t know that a change has occurred until the next time that the interval has passed.
It would be better to be notified of changes. This way you’re not polling for changes that haven’t occurred and you’ll know of a change as soon as you receive the notification.
Polling is easy to do, very easy; it's as easy as any procedural code. Not polling means you enter the world of asynchronous programming, which isn't as brain-dead easy and might even become challenging at times.
And as with everything in any system, the path of least resistance is normally the more commonly taken one, so there will always be programmers using polling, even great programmers, because sometimes there is no need to complicate things with asynchronous patterns.
I for one always strive to avoid polling, but sometimes I poll anyway, especially when the actual gains of asynchronous handling aren't that great, such as when acting against some small piece of local data (of course you get a bit faster, but users won't notice the difference in a case like this). So there is room for both methodologies, IMHO.
Client polling doesn't scale as well as server notifications. Imagine thousands of clients asking the server "any new data?" every 5 seconds. Now imagine the server keeping a list of clients to notify of new data. Server notification scales better.
I think people should realize that in most cases, at some level there is polling being done, even in event- or interrupt-driven situations; you're just isolated from the actual code doing the polling. Really, this is the most desirable situation: isolate yourself from the implementation, and just deal with the event. Even if you must implement the polling yourself, write the code so that it's isolated and the results are dealt with independently of the implementation.
The thing about polling is that it works! It's reliable and simple to implement.
The costs of polling can be high - if you are scanning a database for changes every minute when there are only two changes a day, you are consuming a lot of resources for a very small result.
However, the problem with any notification technology is that it is much more complex to implement, and not only can it be unreliable but (and this is a big BUT) you cannot easily tell when it is not working.
So if you do drop polling for some other technology, make sure it is usable by average programmers and is ultra reliable.
It's simple: polling is bad - inefficient, wasteful of resources, etc. There is always some form of connectivity in place that is monitoring for an event of some sort anyway, even if 'polling' is not chosen.
So why go the extra mile and put additional polling in place?
Callbacks are the best option - you just need to worry about tying the callback in with your current process. Underneath, there is polling going on to see that the connection is still in place anyhow.
If you keep phoning/ringing your girlfriend and she never answers, why keep calling? Just leave a message and wait until she 'calls back' ;)
I use polling occasionally for certain situations (for example, in a game, I would poll the keyboard state every frame), but never in a loop that ONLY does polling, rather I would do polling as a check (has resource X changed? If yes, do something, otherwise process something else and check again later). Generally speaking though, I avoid polling in favor of asynchronous notifications.
The reason being that I do not want to spend resources (CPU time, whatever) waiting for something to happen (especially if those resources could speed up that thing happening in the first place). In the cases where I do use polling, I don't sit idle waiting; I use the resources elsewhere, so it's a non-issue (for me, at least).
If you are polling for changes to a file, then I agree that you should use the filesystem notifications that are available for when this happens, which are available in most operating systems now.
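On .NET that means FileSystemWatcher; a minimal sketch (the path and filter are placeholders):
var watcher = new FileSystemWatcher(@"C:\data", "*.csv");
watcher.Changed += (s, e) => Console.WriteLine(e.FullPath + " changed");
watcher.EnableRaisingEvents = true; // start receiving notifications
// (dispose the watcher when you're done with it)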
In a database you could trigger on update/insert and then call your external code to do something. However it might just be that you don't have a requirement for instant actions. For instance you might only need to get data from Database A to Database B on a different network within 15 minutes. Database B might not be accessible from Database A, so you end up doing the polling from, or as a standalone program running near, Database B.
Also, polling is a very simple thing to program. It is often a first-step implementation, done when time constraints are short, and because it works well enough, it remains.
I see many answers here, but I think the simplest answer is the answer itself:
Because it is (usually) much simpler to code a polling loop than to build the infrastructure for callbacks.
You then get simpler code, which, if it turns out to be a bottleneck later, can be easily understood and redesigned/refactored into something else.
This is not answering your question. But realistically, especially in this "day and age" where processor cycles are cheap, and bandwidth is large, polling is actually a pretty good solution for some tasks.
The benefits are:
Cheap
Reliable
Testable
Flexible
I agree that avoiding polling is a good policy. However, In reference to Robert's post, I would say that the simplicity of polling can make it a better approach in instances where the issues mentioned here are not such a big problem, as the asynchronous approach is often considerably less readable and harder to maintain, not to mention the bugs that can creep in to its implementation.
As with everything, it depends. A large high-transaction system I work on currently uses notification with SQL (a DLL loaded within SQL Server that is called by an extended SP from triggers on certain tables; the DLL then notifies other apps that there is work to do).
However we're moving away from this because we can practically guarantee that there will be work to do continuously. Therefore in order to reduce the complexity and actually speed things up a bit, the apps will process their work and immediately poll the DB again for new work. Should there be none it'll try again after a small interval.
This seems to work quicker and is much simpler. However, another part of the application which is much lower volume does not benefit from a speed increase using this method - unless the polling interval is very small, which leads to performance problems. So we're leaving it as is for this part. Therefore it's a good thing when it's appropriate, but everybody's needs are different.
Here is a good summary of relative merits of push and pull:
https://stpeter.im/index.php/2007/12/14/push-and-pull-in-application-architectures/
I wish I could summarize it further into this answer but some things are best left unabridged.
When thinking about SQL polling, back in the day of VB6 you used to be able to create recordsets using the WithEvents keyword which was an early incarnation of async "listening".
I personally would always look for a way of using an event-driven implementation before polling. Failing that, a manual implementation of any of the following might help:
SQL Service Broker / the SqlDependency class
Some kind of queue technology (RabbitMQ or similar)
UDP broadcast - an interesting technique that can be built with multiple node listeners. Not always possible on some networks, though.
Some of these may require a slight redesign of your project, but in an enterprise world might be the better route to go rather than a polling service.
Agree with most responses that async/messaging is usually better. I absolutely agree with Robert Gould's answer. But I'd like to add one more point.
That addition is that polling can kill two birds with one stone. In one particular use case, a project I was involved with used a message queue between databases but polled from an application server to one of the databases. Because the network from app server to DB was occasionally down, polling was additionally used to notify the app of network issues.
In the end, use what makes the most sense for the use case, with scalability in mind.
I'm using polling to check for updates on a file because I'm getting information about that file across a heterogeneous system with different OS types, one of which is very old. The notifications for Linux won't work if the file is on a remote system with a different OS, because that information is not transmitted, but polling works. It's a low bandwidth check, so it doesn't hurt anything.
