Options for calling sync code - c#

I feel like I have read thousands of SO questions and blog posts and MSDN articles about this stuff, but I'm still not "getting" it completely. So please help me out.
At my work, we have an ASP.NET application that HAS to get data from a sync method. I have no control over this method and I have no idea as to how it is implemented.
I have an async controller method that calls 3 other async services, so I can use async "all the way down" (yay!). But this last method is not async and likely never will be. Worse, it takes the longest. I would love to be able to start tasks for all the services and await them all. From what I have been able to piece together, I have the following options available to me, and I think I understand most of the consequences, but I would like confirmation and clarification where necessary.
So, given the method:
public async Task<Thing> GetThing()
{
    var task1 = GetProp1Async(); // etc.
    var syncResult = ......
    myThing.Prop1 = await task1;
    myThing.SyncProp = .......
    return myThing;
}
The options I have for filling in those dots could be:
Option 1:
Just call the method synchronously. This will block my current thread. My other tasks can all be running, and I can await them later, but I am going to block here, inside my async method.
var syncResult = GetSyncResult();
Option 2:
Use Task.Run().Result. I understand this is not ideal at all, as I will spawn a new thread, while also blocking my original thread, which I think is the worst case here, right? HOWEVER, I will NOT get the deadlock issue, since I'm forcing a new thread, right?
var syncResult = Task.Run(() => GetSyncResult()).Result;
Option 3:
I THINK this is the best option? My async code won't block a thread, but I'm still spawning a new thread that will go do sync work. So, this feels like a net-zero gain. I still have 1 thread that's doing stuff, and possibly waiting, but that still may be my best option?
var syncResult = await Task.Run(() => GetSyncResult());
Option 4:
I still don't understand ConfigureAwait(false), but I could try it. What does this get me? It seems that some posts say "do this in all the places!" Some say "don't do this to avoid deadlocks". Others say, if you do it, do it "all the way down". Well, I have no idea what GetSyncResult() does, so I have no idea if it calls any async stuff behind the scenes (I mean, it probably doesn't, but I don't KNOW for sure). So could that come back to bite me?
var syncResult = await Task.Run(() => GetSyncResult()).ConfigureAwait(false);
So what is my best option here? Did I miss any options? Worse, did I miss the best option?
I know these questions have been asked and answered to death, but, I'm just not getting my dots connected, and I definitely need some help.

The options depend a lot on the context of your code.
If your code (GetThing() in your example, which should really be named GetThingAsync) is either a classic ASP.NET action or an event handler in a UI application, which based on your question I think it is, then you are running on an important thread, usually limited in number, so it's important that you not block this particular thread, because it's special. Option 3 is the best answer here, because it frees up the important thread to get back to UI/Web work and shunts the expensive call off to a non-important thread pool thread. But, importantly, it re-enters the important context when the expensive work is done, so you can do things like access the HttpContext or UI elements.
If your code is in some library others are going to use and it's super important you never block the thread, option 4 is the "right" answer. As you've said, there are dozens of blog posts on the internet that go into varying levels of detail about what ConfigureAwait(false) does, so I won't repeat them all here, since you've already read them. It's up to your caller to decide whether to re-enter the important context when they call you, either by calling ConfigureAwait(false) themselves (to not re-enter it) or by just awaiting your method straight (which will re-enter it).
Option 1 is the easiest and least likely to have some unexpected bug. Unless you are having capacity issues on your ASP.NET application and you know this call is what's doing it, I'd recommend sticking with the simple one. You can always add complexity later, but bugs in async code can be very painful to track down, so trying to do it the "right" way first instead of the simple way might cost more than it's worth in the end until you know it's really a problem, and not just a theoretical one.
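For what it's worth, here is a rough sketch of how Option 3 can slot into the GetThing method from the question so that the sync call overlaps with the async ones. The second async service call (GetProp2Async / Prop2) and the parameterless Thing constructor are invented purely for illustration:

public async Task<Thing> GetThing()
{
    // Kick off the truly-async service calls without awaiting yet,
    // so they all run concurrently.
    var task1 = GetProp1Async();
    var task2 = GetProp2Async();

    // Option 3: shunt the sync call onto a thread pool thread so it can
    // overlap with the async work above instead of blocking here.
    var syncTask = Task.Run(() => GetSyncResult());

    var myThing = new Thing();
    myThing.Prop1 = await task1;
    myThing.Prop2 = await task2;
    myThing.SyncProp = await syncTask;
    return myThing;
}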

Related

Start a concurrent async task from within another async task

I have an I/O-bound task implemented with an async-await function.
At some point in this function, I receive some information which allows me to start a concurrent task, which is also I/O-bound (and so it is also implemented with an async-await function). My original task is not bound by this new task - I just need to set it going in the background.
If I do the following, I get warned that I'm not awaiting the call. I don't want to await the call! I want it to happen in the background!
async Task AnotherAsyncThing()
{
    // ...
}

async Task SomeAsyncThing()
{
    // ...
    // run concurrently - warning raised here
    Task.Run(async () => await AnotherAsyncThing());
    // ...
}
Am I missing something horribly obvious here? It feels like I am!
You can do something like this:
_ = AnotherAsyncThing();
This uses the discards feature added in C# 7.0; it's a way of telling the compiler that you're not interested in the return value.
Yes and no :)
Bugs often occur when people forget to wait for tasks, and in APIs, for instance, it's considered a risk to keep spinning up un-awaited tasks: badly performing client code can sometimes spin them up rapidly, and if that steals enough resources... well, I'm sure you can imagine.
But to signify that you know what you're doing and assume full responsibility, you can use the TPL like this and get rid of the warning:
_ = Task.Run(
() => _ = AnotherAsyncThing()
);
But each time this code is hit, it continues immediately and starts something that also keeps running in the background. So say your API gets a POST which accidentally arrives every 10 milliseconds instead of every 10 seconds as intended... there is a danger in making the use of these things a standard.
It is a tool for a specific purpose, not a default to reach for everywhere. But yes, what you may have missed is that we now signal, by using the discard underscore, that we know what we're doing this time and that the compiler should back off from helping.
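If you do go the fire-and-forget route, one common refinement is to wrap the call so that any exception is at least observed somewhere instead of vanishing silently. A minimal sketch, with Console logging standing in for whatever logging you actually use:

static async void FireAndForget(Func<Task> work)
{
    try
    {
        await work();
    }
    catch (Exception ex)
    {
        // An exception in a discarded task would otherwise disappear silently;
        // replace this with your real logging.
        Console.Error.WriteLine(ex);
    }
}

// usage:
FireAndForget(() => AnotherAsyncThing());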

How can I make an interrupt in C#? [duplicate]

I understand Thread.Abort() is evil, from the multitude of articles I've read on the topic, so I'm currently in the process of ripping out all of my aborts in order to replace them with a cleaner way. After comparing user strategies from people here on Stack Overflow and reading "How to: Create and Terminate Threads (C# Programming Guide)" from MSDN, both of which describe much the same approach -- a volatile bool flag-checking strategy -- I'm left with something that works, but I still have a few questions....
What immediately stands out to me is: what if you do not have a simple worker process that just runs a loop of crunching code? For instance, my process is a background file uploader. I do in fact loop through each file, so that's something, and sure, I could add my while (!_shouldStop) check at the top, which covers me on every loop iteration, but there are many more business processes that occur before it hits its next loop iteration. I want this cancel procedure to be snappy; don't tell me I need to sprinkle these checks every 4-5 lines throughout my entire worker function?!
I really hope there is a better way. Could somebody please advise me on whether this is in fact the correct (and only?) approach, or on strategies they have used in the past to achieve what I am after.
Thanks gang.
Further reading: All these SO responses assume the worker thread will loop. That doesn't sit comfortably with me. What if it is a linear, but timely background operation?
Unfortunately there may not be a better option. It really depends on your specific scenario. The idea is to stop the thread gracefully at safe points. That is the crux of the reason why Thread.Abort is not good: it is not guaranteed to occur at a safe point. By sprinkling the code with a stopping mechanism you are effectively defining the safe points manually. This is called cooperative cancellation. There are several broad mechanisms for doing this. You can choose the one that best fits your situation.
Poll a stopping flag
You have already mentioned this method. This is a pretty common one. Make periodic checks of the flag at safe points in your algorithm and bail out when it gets signalled. The standard approach is to mark the variable volatile. If that is not possible or inconvenient then you can use a lock. Remember, you cannot mark a local variable as volatile, so if a lambda expression captures it through a closure, for example, then you would have to resort to a different method for creating the memory barrier that is required. There is not a whole lot else that needs to be said for this method.
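A bare-bones version of that pattern might look like the sketch below; ProcessNextFile is just a stand-in for one bounded chunk of work:

class Worker
{
    private volatile bool _shouldStop;

    public void Stop()
    {
        _shouldStop = true;
    }

    public void DoWork()
    {
        while (!_shouldStop)
        {
            // One "safe point" per iteration: do a bounded chunk of work,
            // then come back around and check the flag again.
            ProcessNextFile();
        }
    }

    private void ProcessNextFile()
    {
        /* placeholder for the real work */
    }
}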
Use the new cancellation mechanisms in the TPL
This is similar to polling a stopping flag except that it uses the new cancellation data structures in the TPL. It is still based on cooperative cancellation patterns. You need to get a CancellationToken and then periodically check IsCancellationRequested. To request cancellation you would call Cancel on the CancellationTokenSource that originally provided the token. There is a lot you can do with the new cancellation mechanisms. You can read more about them here.
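A minimal sketch of the token-based version (DoSomeWork is a placeholder):

var cts = new CancellationTokenSource();
var token = cts.Token;

var worker = new Thread(() =>
{
    while (!token.IsCancellationRequested)
    {
        DoSomeWork(); // one bounded chunk of work per check
    }
});
worker.Start();

// Later, from whatever thread wants to stop the work:
cts.Cancel();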
Use wait handles
This method can be useful if your worker thread requires waiting for a specific interval or for a signal during its normal operation. You can Set a ManualResetEvent, for example, to let the thread know it is time to stop. You can test the event using the WaitOne function, which returns a bool indicating whether the event was signalled. WaitOne also takes a parameter that specifies how long to wait for the call to return if the event is not signalled in that amount of time. You can use this technique in place of Thread.Sleep and get the stopping indication at the same time. It is also useful if there are other WaitHandle instances that the thread may have to wait on. You can call WaitHandle.WaitAny to wait on any event (including the stop event) all in one call. Using an event can be better than calling Thread.Interrupt since you have more control over the flow of the program (Thread.Interrupt throws an exception, so you would have to strategically place try-catch blocks to perform any necessary cleanup).
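For example, a worker that used to Thread.Sleep between polling passes could do something like the following instead (PollForNewFiles is a placeholder):

private readonly ManualResetEvent _stopEvent = new ManualResetEvent(false);

public void Stop()
{
    _stopEvent.Set();
}

public void DoWork()
{
    // WaitOne returns true if the event was signalled and false if the
    // timeout (5 seconds here) elapsed first, so this loop doubles as
    // both the sleep and the stop check.
    while (!_stopEvent.WaitOne(5000))
    {
        PollForNewFiles();
    }
    // Signalled: clean up and exit.
}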
Specialized scenarios
There are several one-off scenarios that have very specialized stopping mechanisms. It is definitely outside the scope of this answer to enumerate them all (never mind that it would be nearly impossible). A good example of what I mean here is the Socket class. If the thread is blocked on a call to Send or Receive then calling Close will interrupt the socket on whatever blocking call it was in, effectively unblocking it. I am sure there are several other areas in the BCL where similar techniques can be used to unblock a thread.
Interrupt the thread via Thread.Interrupt
The advantage here is that it is simple and you do not have to focus on sprinkling your code with anything, really. The disadvantage is that you have little control over where the safe points are in your algorithm. The reason is that Thread.Interrupt works by injecting an exception inside one of the canned BCL blocking calls. These include Thread.Sleep, WaitHandle.WaitOne, Thread.Join, etc. So you have to be wise about where you place them. However, most of the time the algorithm dictates where they go and that is usually fine anyway, especially if your algorithm spends most of its time in one of these blocking calls. If your algorithm does not use one of the blocking calls in the BCL then this method will not work for you. The theory here is that the ThreadInterruptedException is only generated from a .NET blocking call, so it is likely at a safe point. At the very least you know that the thread cannot be in unmanaged code or bail out of a critical section leaving a dangling lock in an acquired state. Despite this being less invasive than Thread.Abort, I still discourage its use because it is not obvious which calls respond to it and many developers will be unfamiliar with its nuances.
Well, unfortunately in multithreading you often have to compromise "snappiness" for cleanliness... you can exit a thread immediately if you Interrupt it, but it won't be very clean. So no, you don't have to sprinkle the _shouldStop checks every 4-5 lines, but if you do interrupt your thread then you should handle the exception and exit out of the loop in a clean manner.
Update
Even if it's not a looping thread (i.e. perhaps it's a thread that performs some long-running asynchronous operation or some type of blocking input operation), you can Interrupt it, but you should still catch the ThreadInterruptedException and exit the thread cleanly. I think that the examples you've been reading are very appropriate.
Update 2.0
Yes I have an example... I'll just show you an example based on the link you referenced:
public class InterruptExample
{
    private Thread t;
    private volatile bool alive;

    public InterruptExample()
    {
        alive = false;
        t = new Thread(() =>
        {
            try
            {
                while (alive)
                {
                    /* Do work. */
                }
            }
            catch (ThreadInterruptedException)
            {
                /* Clean up. */
            }
        });
        t.IsBackground = true;
    }

    public void Start()
    {
        alive = true;
        t.Start();
    }

    public void Kill(int timeout = 0)
    {
        // Somebody tells you to stop the thread.
        // Clear the flag so a busy (non-blocked) loop also exits,
        // then interrupt any blocking call the thread may be in.
        alive = false;
        t.Interrupt();
        // Optionally you can block the caller
        // by making them wait until the thread exits.
        // If they leave the default timeout,
        // then they will not wait at all.
        t.Join(timeout);
    }
}
If cancellation is a requirement of the thing you're building, then it should be treated with as much respect as the rest of your code--it may be something you have to design for.
Let's assume that your thread is doing one of two things at all times.
Something CPU bound
Waiting for the kernel
If you're CPU bound in the thread in question, you probably have a good spot to insert the bail-out check. If you're calling into someone else's code to do some long-running CPU-bound task, then you might need to fix the external code, move it out of process (aborting threads is evil, but aborting processes is well-defined and safe), etc.
If you're waiting for the kernel, then there's probably a handle (or fd, or mach port, ...) involved in the wait. Usually if you destroy the relevant handle, the kernel will return with some failure code immediately. If you're in .net/java/etc. you'll likely end up with an exception. In C, whatever code you already have in place to handle system call failures will propagate the error up to a meaningful part of your app. Either way, you break out of the low-level place fairly cleanly and in a very timely manner without needing new code sprinkled everywhere.
A tactic I often use with this kind of code is to keep track of a list of handles that need to be closed and then have my abort function set a "cancelled" flag and then close them. When the function fails it can check the flag and report failure due to cancellation rather than due to whatever the specific exception/errno was.
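As a very rough sketch of that handle-tracking idea (the socket field, buffer handling, and cleanup are all heavily simplified, and the names are invented):

private volatile bool _cancelled;
private Socket _socket; // the handle the worker may be blocked on

public void Cancel()
{
    _cancelled = true;
    _socket.Close(); // unblocks a pending Send/Receive with an exception
}

// In the worker:
private int ReceiveOrCancel(byte[] buffer)
{
    try
    {
        return _socket.Receive(buffer);
    }
    catch (Exception ex) when (ex is SocketException || ex is ObjectDisposedException)
    {
        if (_cancelled)
        {
            // The failure was caused by our own Close, not a real network error.
            throw new OperationCanceledException();
        }
        throw;
    }
}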
You seem to be implying that an acceptable granularity for cancellation is at the level of a service call. This is probably not good thinking--you are much better off cancelling the background work synchronously and joining the old background thread from the foreground thread. It's way cleaner because:
It avoids a class of race conditions when old bgwork threads come back to life after unexpected delays.
It avoids potential hidden thread/memory leaks caused by hanging background processes, by making it impossible for the effects of a hanging background thread to hide.
There are two reasons to be scared of this approach:
You don't think you can abort your own code in a timely fashion. If cancellation is a requirement of your app, the decision you really need to make is a resource/business decision: do a hack, or fix your problem cleanly.
You don't trust some code you're calling because it's out of your control. If you really don't trust it, consider moving it out-of-process. You get much better isolation from many kinds of risks, including this one, that way.
The best answer largely depends on what you're doing in the thread.
Like you said, most answers revolve around polling a shared boolean every couple of lines. Even though you may not like it, this is often the simplest scheme. If you want to make your life easier, you can write a method like ThrowIfCancelled(), which throws some kind of exception if you're done. The purists will say this is (gasp) using exceptions for control flow, but then again cancelling is exceptional, IMO.
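Such a helper can be as small as the sketch below; the flag, the exception type, and the DoStepOne/DoStepTwo calls are whatever fits your codebase. (If you are on .NET 4 or later, CancellationToken.ThrowIfCancellationRequested already does essentially this for you.)

private volatile bool _cancelled;

public void Cancel()
{
    _cancelled = true;
}

private void ThrowIfCancelled()
{
    if (_cancelled)
        throw new OperationCanceledException();
}

// The worker body then reads as a straight sequence of steps:
DoStepOne();
ThrowIfCancelled();
DoStepTwo();
ThrowIfCancelled();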
If you're doing IO operations (like network stuff), you may want to consider doing everything using async operations.
If you're doing a sequence of steps, you could use the IEnumerable trick to make a state machine. Example:
abstract class StateMachine : IDisposable
{
    public abstract IEnumerable<object> Main();

    public virtual void Dispose()
    {
        // ... override with free-ing code ...
    }

    bool wasCancelled;

    public void Cancel()
    {
        // ... set wasCancelled using locking scheme of choice ...
        wasCancelled = true;
    }

    public Thread Run()
    {
        var thread = new Thread(() =>
        {
            try
            {
                if (wasCancelled) return;
                foreach (var x in Main())
                {
                    if (wasCancelled) return;
                }
            }
            finally { Dispose(); }
        });
        thread.Start();
        return thread;
    }
}

class MyStateMachine : StateMachine
{
    public override IEnumerable<object> Main()
    {
        DoSomething();
        yield return null;
        DoSomethingElse();
        yield return null;
    }
}

// then call new MyStateMachine().Run() to run.
Overengineering? It depends how many state machines you use. If you just have 1, yes. If you have 100, then maybe not. Too tricky? Well, it depends. Another bonus of this approach is that it lets you (with minor modifications) move your operation into a Timer.Tick callback and avoid threading altogether if it makes sense.
And do everything that blucz says, too.
Perhaps part of the problem is that you have such a long method / while loop. Whether or not you are having threading issues, you should break it down into smaller processing steps. Let's suppose those steps are Alpha(), Bravo(), Charlie() and Delta().
You could then do something like this:
public void MyBigBackgroundTask()
{
    Action[] tasks = new Action[] { Alpha, Bravo, Charlie, Delta };
    int workStepSize = 0;
    while (!_shouldStop)
    {
        tasks[workStepSize++]();
        workStepSize %= tasks.Length;
    }
}
So yes it loops endlessly, but checks if it is time to stop between each business step.
You don't have to sprinkle while loops everywhere. The outer while loop just checks if it's been told to stop and if so doesn't make another iteration...
If you have a straight "go do something and close out" thread (no loops in it) then you just check the _shouldStop boolean either before or after each major spot inside the thread. That way you know whether it should continue on or bail out.
for example:
public void DoWork() {
    RunSomeBigMethod();
    if (_shouldStop) { return; }
    RunSomeOtherBigMethod();
    if (_shouldStop) { return; }
    //....
}
Instead of adding a while loop where a loop doesn't otherwise belong, add something like if (_shouldStop) CleanupAndExit(); wherever it makes sense to do so. There's no need to check after every single operation or sprinkle the code all over with them. Instead, think of each check as a chance to exit the thread at that point and add them strategically with this in mind.
All these SO responses assume the worker thread will loop. That doesn't sit comfortably with me
There are not a lot of ways to make code take a long time. Looping is a pretty essential programming construct. Making code take a long time without looping takes a huge amount of statements. Hundreds of thousands.
Or calling some other code that is doing the looping for you. Yes, hard to make that code stop on demand. That just doesn't work.

The proper way to await databound property getters c#

What would be the most correct way to use an async method in a databound property getter? I am talking about solid, scientific arguments, not personal preferences. I've read many threads about the problem, but not about this specific case. Some of the solutions don't work in all cases, and some of the suggestions, well... they were just too subjective or just wrong.
What I don't accept and why:
You can't - Actually, it is possible. There are many posts saying things like "there is no such thing as async properties" or "it is against the design of the language", etc., but there are also many sensible explanations of why such statements are false
This should be a method, not a property - It can't be. I want to databind to it. I provide property "proxies" for people using this code because in the future there may be a different method to calculate this pseudo-property. And I want the View side of the binding to be as simple as possible
Use property to store the cached result of the method - that would defeat the purpose, it is actually something that changes dynamically and the class is an ORM Entity so it would store redundant data to the DB.
Use SomeTask.Result; or SomeTask.GetAwaiter().GetResult() - In most cases I would just use it. I've successfully used those in many cases, e.g. console applications. It's nice, clear and easily readable. But when I use it in a databound property I get a deadlock
Problem background (simplified)
Let's say that I am responsible for developing the ORM mechanism in a project. There was a first stable version, but now I want to add some properties to the Entities for the DataBinders who are responsible for the layout. I can edit the Entity layer, but I can't edit the Mapping and Repository layers. (I am not held against my will; this situation is a fictional simplification.) All the methods in the repositories are async. All I can do is ask someone responsible to provide identical synchronous methods for all of the methods, but it would be stupid to do this kind of redundant work.
Only solution I can use now
_something = Task.Run(async () => await AnotherRepository.CalculateStuff(this)).Result;
And it just doesn't look right to me. It works, but I have to await my method inside the lambda in Task.Run(). I am stuck with it for the time being, and I want to know the simplest and correct approach.
Repository method pseudo-code
public async static Task<IList<CalculatedStuff>> CalculateStuff(SomeClass someClass)
{
    return await Task.Run(() =>
    {
        using (var session = Helper.OpenSession())
            return session.CreateCriteria(typeof(CalculatedStuff))
                .Add(Restrictions.Eq("SomeClass", someClass))
                // ...
                .List<CalculatedStuff>();
    });
}
there are no such things like async properties
I have a blog post and MSDN article on "async properties" for data binding. I do take the stance that they are not natural, which is based on these (objective) observations:
Properties read by data binding must return immediately (synchronously).
Asynchronous operations are asynchronous (that is, they complete after some time).
Clearly, these are at complete odds with one another.
Now, there are a few different solutions, but any solution that attempts to violate one of these observations is going to be dubious, at best.
For example, you can attempt to violate the second observation by trying to run the asynchronous operation synchronously. As you discovered, Result / Wait / GetAwaiter().GetResult() will deadlock (for reasons described in detail on my blog). Task.Run(() => ...).GetAwaiter().GetResult() will avoid the deadlock but will execute the code in a free-threaded context (which is OK for most code but not all). These are two different kinds of sync-over-async; I call them the "Blocking Hack" and the "Thread Pool Hack" in my Async Brownfield article, which also covers two other kinds of sync-over-async patterns.
Unfortunately, there is no solution for sync-over-async that works in every scenario. Even if you get it to work, your users would get a substandard experience (blocking the UI thread for an indefinite amount of time), and you may have problems with app stores (I believe MS's at least will actively check for blocking the UI thread and auto-reject). IMO, sync-over-async is best avoided.
However, we obviously cannot violate the first observation, either. If we're data binding to the result of some asynchronous operation, we can't very well return it before the operation completes!
Or can we?
What if we change what the data binding is attaching to? Say, introduce a property that has a default value before the operation is completed, and changes (via INotifyPropertyChanged) to the result of the operation when the operation completes. That sounds reasonable... And we can stick in another property to indicate to the UI that the operation is in progress! And maybe another one to indicate if the operation failed...
This is the line of thinking that resulted in my NotifyTaskCompletion type in the article on data binding (updated NotifyTask type here). It is essentially a data-bindable wrapper for Task<T>, so the UI can respond dynamically to the asynchronous operation without trying to force it to be synchronous.
This does require some changes to the bindings, but you get a nice side effect that your UX is better (non-blocking).
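To make the idea concrete, here is a heavily stripped-down sketch of such a wrapper. To be clear, this is not the NotifyTask / NotifyTaskCompletion type from the article (which also exposes the exception details, cancellation, and more); it is only meant to show the shape of the approach:

public sealed class BindableTask<T> : INotifyPropertyChanged
{
    private readonly Task<T> _task;

    public BindableTask(Task<T> task)
    {
        _task = task;
        if (!task.IsCompleted)
            WatchAsync(task);
    }

    public T Result
    {
        get { return _task.Status == TaskStatus.RanToCompletion ? _task.Result : default(T); }
    }

    public bool IsNotCompleted { get { return !_task.IsCompleted; } }
    public bool IsFaulted { get { return _task.IsFaulted; } }

    public event PropertyChangedEventHandler PropertyChanged;

    private async void WatchAsync(Task task)
    {
        // Swallow the exception here; the UI observes failure via IsFaulted instead.
        try { await task; } catch { }

        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs("Result"));
            handler(this, new PropertyChangedEventArgs("IsNotCompleted"));
            handler(this, new PropertyChangedEventArgs("IsFaulted"));
        }
    }
}

The XAML then binds to Result and IsNotCompleted, much like the Entity example below.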
This should be a method, not a property
Well, you can do this as a property:
NotifyTask<TEntity> Entity { get { return NotifyTask.Create(() => Repository.GetEntityAsync()); } }
// Data bind to Entity.Result for the results.
// Data bind to Entity.IsNotCompleted for a busy spinner.
However, I would say that it's surprising behavior to have a property read kick off something significant like a database query or HTTP download. That's a pretty wide definition of "property". IMO, this would be better represented as a method, which connotes action more than a property does (or perhaps as part of an asynchronous initialization, which I also describe on my blog). Put another way: I prefer my properties without side effects. Reading a property more than once and having it return different values is counterintuitive. This final paragraph is entirely my own opinion. :)
If you have access to the source code of AnotherRepository.CalculateStuff, you can implement it in a way that won't deadlock when called from a bound property. First, a short summary of why it deadlocks: when you await something, the current synchronization context is remembered and the rest of the method (after the await) is executed on that context. For UI applications that means the rest of the method is executed on the UI thread. But in your case the UI thread is already blocked waiting for the Result of the task - hence the deadlock.
But there is a method on Task named ConfigureAwait. If you pass false for its only argument (named continueOnCapturedContext) and await what it returns, the method won't continue on the captured context, which will solve your problem. So suppose you have:
// this is UI bound
public string Data
{
    get { return GetData().Result; }
}

static async Task<string> GetData() {
    await Task.Run(() =>
    {
        Thread.Sleep(2000);
    });
    return "test!";
}
This will deadlock when called from UI thread. But if you change it:
static async Task<string> GetData() {
    await Task.Run(() =>
    {
        Thread.Sleep(2000);
    }).ConfigureAwait(false);
    return "test!";
}
It won't deadlock any more.
For those who might read this later - don't do it this way, except perhaps for temporary debugging purposes. Instead, return a dummy object from your property getter with some IsLoading flag set to true, load the data in the background, and fill in the dummy object's properties when done. That way you don't freeze your UI during a long blocking operation.

Best way to delay execution

Let's say I have a method that I run in a separate thread via Task.Factory.StartNew().
This method reports progress (via IProgress<T>) so frequently that it freezes my GUI.
I know that simply reducing the number of reports would be a solution, like reporting only 1 out of 10 but in my case, I really want to get all reports and display them in my GUI.
My first idea was to queue all reports and treat them one by one, pausing a little bit between each of them.
Firstly: Is it a good option?
Secondly: How to implement that? Using a timer or using some kind of Task.Delay()?
UPDATE:
I'll try to explain better. The progress sent to the GUI consists of geocoordinates that I display on a map. Displaying each progress report one after another provides a kind of animation on the map. That's why I don't want to skip any of them.
In fact, I don't mind if the method that I execute in another thread finishes way before the animation. All I want is to be sure that each point is displayed for at least a certain amount of time (let's say 200 ms).
Sounds like the whole point of having the process run in a separate thread is wasted if this is the result. As such, my first recommendation would be to reduce the number of updates if possible.
If that is out of the question, perhaps you could revise the data you are sending as part of each update. How large, and how complex, is the object or data structure used for reporting? Can performance be improved by reducing its complexity?
Finally, you might try another approach: what if you create a third thread that just handles the reporting and delivers it to your GUI in larger chunks? If you let your worker thread report its status to this reporter thread, and then let the reporter thread report back to your main GUI thread only occasionally (e.g. every 1 in 10, as you suggest yourself above, but then reporting 10 chunks of data at once), then you won't call on your GUI that often, yet you'll still be able to keep all the status data from the processing and make it available in the GUI.
I don't know how viable this will be for your particular situation, but it might be worth an experiment or two?
I have many concerns regarding your solution, but I can't say for sure which one can be a problem without code samples.
First of all, Stephen Cleary in his StartNew is Dangerous article points out the real problem with using this method with its default parameters:
Easy enough for the simple case, but let’s consider a more realistic example:
private void Form1_Load(object sender, EventArgs e)
{
    Compute(3);
}

private void Compute(int counter)
{
    // If we're done computing, just return.
    if (counter == 0)
        return;

    var ui = TaskScheduler.FromCurrentSynchronizationContext();
    Task.Factory.StartNew(() => A(counter))
        .ContinueWith(t =>
        {
            Text = t.Result.ToString(); // Update UI with results.
            // Continue working.
            Compute(counter - 1);
        }, ui);
}

private int A(int value)
{
    return value; // CPU-intensive work.
}
...
Now, the question returns: what thread does A run on? Go ahead and walk through it; you should have enough knowledge at this point to figure out the answer.
Ready? The method A runs on a thread pool thread the first time, and then it runs on the UI thread the last two times.
I strongly recommend you read the whole article for a better understanding of how the StartNew method should be used, but I want to point out the last piece of advice from there:
Unfortunately, the only overloads for StartNew that take a
TaskScheduler also require you to specify the CancellationToken and
TaskCreationOptions. This means that in order to use
Task.Factory.StartNew to reliably, predictably queue work to the
thread pool, you have to use an overload like this:
Task.Factory.StartNew(A, CancellationToken.None,
TaskCreationOptions.DenyChildAttach, TaskScheduler.Default);
And really, that’s kind of ridiculous. Just use Task.Run(() => A());.
So maybe your code can be easily improved simply by switching the method you use to create new tasks. But there are some other suggestions regarding your question:
Use a BlockingCollection for storing the reports, and write a simple consumer that drains this queue to the UI, so you'll always have a limited number of reports to render, but in the end all of them will be handled (a rough sketch follows below).
Use the ConcurrentExclusiveSchedulerPair class for your logic: use the ConcurrentScheduler property for generating the reports and the ExclusiveScheduler property for displaying them.
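Here is a rough sketch of the BlockingCollection approach, paced so each point stays visible for the 200 ms mentioned in the question; GeoPoint, ComputePoints, and DrawPointOnMap are invented names standing in for your coordinate type, worker logic, and map update:

var reports = new BlockingCollection<GeoPoint>();

// Created on the UI thread so Report marshals back to it.
IProgress<GeoPoint> uiProgress = new Progress<GeoPoint>(p => DrawPointOnMap(p));

// Producer: the worker adds reports as fast as it likes.
var producer = Task.Run(() =>
{
    foreach (var point in ComputePoints())
        reports.Add(point);
    reports.CompleteAdding();
});

// Consumer: drain the queue at a pace the UI (and the animation) can handle.
var consumer = Task.Run(async () =>
{
    foreach (var point in reports.GetConsumingEnumerable())
    {
        uiProgress.Report(point);
        await Task.Delay(200); // keep each point on screen for ~200 ms
    }
});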

In WinRT, why does calling Task.Result from a synchronous context block my app

I'm involved in porting some code over to a Windows Store Library.
It would be convenient for us to be able to sometimes wait for the results of an asynchronous operation before continuing. I thought we had the answer with Task's Result property, but for some reason it crashes my app.
I made a quick Windows Store test App like so:
public MainPage()
{
    this.InitializeComponent();
    MyTextBox.Text = "0";
    var a = ReturnIntMax();
    MyTextBox.Text = "1";
    MyTextBox.Text = a.Result.ToString();
}

public async Task<int> ReturnIntMax()
{
    await Task.Delay(1000);
    return 5;
}
As far as I understand the Result property, this should work, so I hope somebody can tell me what's going on. I'm not interested in why this is bad design. We're dealing with a lot of comm traffic and we can't allow conflicts. If necessary, we can work around this, but first I want to understand why it's not working. If it's just unsupported to call Result from a synchronous context, I would have thought it would give me a compiler error.
I describe the underlying cause of the deadlock on my blog. This is not a compiler error for several reasons; e.g., it's a violation of separation of concerns to ask a compiler (which is responsible for translating C# to IL) to understand the threading models of various user interface frameworks. Also, it's not practical to add an error for each call to Result that might be on a UI thread.
As a general rule, do not use Task.Wait or Task.Result at all. Instead, find a way to structure your code so that it's asynchronous. This can be awkward, but in the end you'll find that you end up with a better UI design (i.e., instead of blocking the UI while the main page loads, you need to design a blank or "loading..." page that can be shown immediately and then transition to show the data). My blog posts on constructors and properties have some ideas on how to make them async-friendly.
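Applied to the test app in the question, that restructuring might look something like the following sketch: move the work out of the constructor and into a Loaded handler so it can be awaited instead of blocked on.

public MainPage()
{
    this.InitializeComponent();
    MyTextBox.Text = "0";
    this.Loaded += MainPage_Loaded;
}

private async void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    MyTextBox.Text = "1";
    // await yields the UI thread instead of blocking it on .Result,
    // so the continuation runs when ReturnIntMax completes.
    MyTextBox.Text = (await ReturnIntMax()).ToString();
}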
