Evaluate NUnit assertion after a delay - c#

I'm trying to write a unit test with NUnit for a method that could take anywhere from 1 to 3 seconds to complete. In order to validate the test, all I need to do is check whether a List<string> called entries has gained an entry during that 1 to 3 second span.
My current solution is to use Thread.Sleep():
int currentEntries = entries.Count;
methodA();
Thread.Sleep(3000);
int updatedEntries = entries.Count;
Assert.That(updatedEntries, Is.EqualTo(currentEntries + 1));
This solution always takes at least 3 seconds, even when methodA() finishes faster.
I have tried using NUnit's delayed constraint:
Assert.That(entries.Count, Is.EqualTo(currentEntries+1).After(3).Seconds.PollEvery(250).MilliSeconds);
This would have been ideal, since it supports polling. But the After constraint is still evaluated instantly instead of after 3 seconds, as discussed here.
Is there a better solution to this issue?

Your call to Assert.That has an actual argument, which is evaluated immediately before the method is called. The value of entries.Count is taken and the resulting integer value is copied as the method argument. Within the method, we are dealing with a constant.
When the constraint is re-evaluated every 250 milliseconds, it is done each time against that same copied constant, which of course never changes.
When you use a delayed constraint, with or without polling, the actual argument has to be in the form of a delegate, a lambda or a reference to a field. The following simple modification should make it work.
Assert.That (() => entries.Count, Is.EqualTo(currentEntries+1).After(3).Seconds.PollEvery(250).MilliSeconds);
Alternatively, this should work as well
Assert.That (entries, Has.Count.EqualTo(currentEntries+1).After(3).Seconds.PollEvery(250).Milliseconds);
because the property is evaluated each time we poll.

Charlie's answer nails your actual problem (as his answers always do). However, I'd strongly recommend against having tests which depend upon delays. If your test fails, how will you know whether the problem was simply that you didn't wait long enough?
Also, if this is part of an automated test suite, the elapsed time for executing all the tests will become extremely expensive as you add more tests like this.
Good tests are deterministic. For example, see Martin Fowler's Eradicating Non-Determinism in Tests, which includes the phrase:
Never use bare sleeps to wait for asynchronous responses: use a callback or polling.
My suggestion would be to refactor your code so that the functionality is separated from the threading, and test the functionality, not the threading part.
Once this separation is achieved, the code which calls the functionality on a thread can be given a mock, and you can simply verify that the mock was called, since you have separately tested that the functionality itself behaves correctly. That still requires polling, though, so:
If you encapsulate the code which launches the separate thread, then in your test you can provide a mocked thread launcher which actually executes the code synchronously. As soon as the code being tested has completed, you can make the assertion, without waiting.
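A minimal sketch of that idea; the IWorkLauncher abstraction, the EntryWriter class and the test are all invented for illustration, not taken from your code:

using System;
using System.Collections.Generic;
using System.Threading;
using NUnit.Framework;

// Production code launches work through an abstraction instead of starting threads directly.
public interface IWorkLauncher
{
    void Launch(Action work);
}

public class ThreadPoolLauncher : IWorkLauncher
{
    public void Launch(Action work) => ThreadPool.QueueUserWorkItem(_ => work());
}

// Test double: runs the work inline, so the assertion can be made immediately.
public class SynchronousLauncher : IWorkLauncher
{
    public void Launch(Action work) => work();
}

// Hypothetical class under test: adds an entry via whatever launcher it was given.
public class EntryWriter
{
    private readonly List<string> _entries;
    private readonly IWorkLauncher _launcher;

    public EntryWriter(List<string> entries, IWorkLauncher launcher)
    {
        _entries = entries;
        _launcher = launcher;
    }

    public void MethodA() => _launcher.Launch(() => _entries.Add("done"));
}

[TestFixture]
public class EntryWriterTests
{
    [Test]
    public void MethodA_AddsOneEntry()
    {
        var entries = new List<string>();
        var sut = new EntryWriter(entries, new SynchronousLauncher());
        int before = entries.Count;

        sut.MethodA();

        Assert.That(entries.Count, Is.EqualTo(before + 1)); // no sleep, no polling
    }
}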

Related

Best way to delay execution

Let's say I have a method that I run in a separate thread via Task.Factory.StartNew().
This method reports progress (via IProgress<T>) so frequently that it freezes my GUI.
I know that simply reducing the number of reports would be a solution, like reporting only 1 out of 10, but in my case I really want to get all reports and display them in my GUI.
My first idea was to queue all reports and treat them one by one, pausing a little bit between each of them.
Firstly: Is it a good option?
Secondly: How to implement that? Using a timer or using some kind of Task.Delay()?
UPDATE:
I'll try to explain better. The progress sent to the GUI consists of geocoordinates that I display on a map. Displaying each report one after another provides a kind of animation on the map. That's why I don't want to skip any of them.
In fact, I don't mind if the method that I execute in another thread finishes well before the animation. All I want is to be sure that each point has been displayed for at least a certain amount of time (let's say 200 ms).
Sounds like the whole point of having the process run in a separate thread is wasted if this is the result. As such, my first recommendation would be to reduce the number of updates if possible.
If that is out of the question, perhaps you could revise the data you are sending as part of each update. How large and how complex is the object or data structure used for reporting? Can performance be improved by reducing its complexity?
Finally, you might try another approach: what if you create a third thread that just handles the reporting and delivers it to your GUI in larger chunks? Let your worker thread report its status to this reporter thread, and let the reporter thread report back to your main GUI thread only occasionally (e.g. every 1 in 10, as you suggest yourself above, but then reporting 10 chunks of data at once). That way you won't call on your GUI as often, yet you'll still keep all the status data from the processing and make it available in the GUI.
I don't know how viable this will be for your particular situation, but it might be worth an experiment or two?
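A rough sketch of that reporter-thread idea, assuming the worker pushes its reports into a BlockingCollection<string> and calls CompleteAdding() when finished; everything else here is invented for the example:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

// Hypothetical reporter thread: the worker enqueues every report, and this loop
// forwards them to a UI-bound IProgress<T> in chunks of up to 10.
public static class ChunkedReporter
{
    public static Thread Start(BlockingCollection<string> reports, IProgress<IReadOnlyList<string>> uiProgress)
    {
        var thread = new Thread(() =>
        {
            var chunk = new List<string>();
            // The loop ends once the worker calls reports.CompleteAdding().
            foreach (var item in reports.GetConsumingEnumerable())
            {
                chunk.Add(item);
                // Flush when we have 10 items, or when the queue has momentarily drained.
                if (chunk.Count >= 10 || reports.Count == 0)
                {
                    uiProgress.Report(chunk.ToArray());
                    chunk.Clear();
                }
            }
            if (chunk.Count > 0)
                uiProgress.Report(chunk.ToArray()); // flush the tail
        })
        { IsBackground = true };
        thread.Start();
        return thread;
    }
}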
I have many concerns regarding your solution, but I can't say for sure which of them is actually a problem without code samples.
First of all, Stephen Cleary, in his StartNew is Dangerous article, points out the real problem with using this method with its default parameters:
Easy enough for the simple case, but let’s consider a more realistic example:
private void Form1_Load(object sender, EventArgs e)
{
    Compute(3);
}

private void Compute(int counter)
{
    // If we're done computing, just return.
    if (counter == 0)
        return;

    var ui = TaskScheduler.FromCurrentSynchronizationContext();
    Task.Factory.StartNew(() => A(counter))
        .ContinueWith(t =>
        {
            Text = t.Result.ToString(); // Update UI with results.

            // Continue working.
            Compute(counter - 1);
        }, ui);
}

private int A(int value)
{
    return value; // CPU-intensive work.
}
...
Now, the question returns: what thread does A run on? Go ahead and walk through it; you should have enough knowledge at this point to figure out the answer.
Ready? The method A runs on a thread pool thread the first time, and then it runs on the UI thread the last two times.
I strongly recommend reading the whole article for a better understanding of how StartNew should be used, but I want to point out the final piece of advice from there:
Unfortunately, the only overloads for StartNew that take a TaskScheduler also require you to specify the CancellationToken and TaskCreationOptions. This means that in order to use Task.Factory.StartNew to reliably, predictably queue work to the thread pool, you have to use an overload like this:
Task.Factory.StartNew(A, CancellationToken.None,
    TaskCreationOptions.DenyChildAttach, TaskScheduler.Default);
And really, that’s kind of ridiculous. Just use Task.Run(() => A());.
So maybe your code can be improved simply by switching the method you use to create new tasks. But there are some other suggestions regarding your question:
Use a BlockingCollection<T> for storing the reports, and write a simple consumer that drains this queue to the UI. That way you'll always have a limited number of reports to present at once, but in the end all of them will be handled.
Use the ConcurrentExclusiveSchedulerPair class for your logic: generate the reports on its ConcurrentScheduler property and display them on its ExclusiveScheduler property.
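If it helps, here is a rough sketch of the BlockingCollection idea combined with the 200 ms pacing from your update. The GeoPoint record, the producePoints delegate and the Progress<GeoPoint> callback are placeholders, not your real code:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public record GeoPoint(double Lat, double Lon);   // stand-in for your real coordinate type

public static class PacedMapAnimation
{
    // producePoints = your long-running computation; uiProgress = a Progress<GeoPoint>
    // constructed on the UI thread, so Report(...) is marshalled back to the UI for you.
    public static void Run(Func<IEnumerable<GeoPoint>> producePoints, IProgress<GeoPoint> uiProgress)
    {
        var queue = new BlockingCollection<GeoPoint>();

        // Producer: reports as fast as it likes.
        Task.Run(() =>
        {
            foreach (var p in producePoints())
                queue.Add(p);
            queue.CompleteAdding();
        });

        // Consumer: a background task that paces delivery, so each point stays
        // visible for at least ~200 ms before the next one appears.
        Task.Run(() =>
        {
            foreach (var p in queue.GetConsumingEnumerable())
            {
                uiProgress.Report(p);
                Thread.Sleep(200);   // deliberate pacing on a background thread, not the UI
            }
        });
    }
}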

When is the GUI overloaded?

Suppose you are repeatedly invoking a method asynchronously on the UI thread/dispatcher with
while (true) {
    uiDispatcher.BeginInvoke(new Action<int, T>(insert_), DispatcherPriority.Normal, new object[] { });
}
On every run of the program you observe that the GUI of the application begins to freeze after about 90 seconds due to the flood of invocations (time varies but lies roughly between 1 and 2 minutes).
How could one determine (measure?) exactly when this overloading occurs, in order to stop it early enough?
Appendix I:
In my actual program I don't have an infinite loop. I have an algorithm that iterates several hundred times before terminating. In every iteration I am adding a string to a list control in my WPF application. I used the while (true) { ... } construct because it best matches what happens. In fact the algorithm terminates correctly and all (hundreds of) strings are added correctly to my list, but after some time I lose the ability to use my GUI until the algorithm terminates - then the GUI is responsive again.
Appendix II:
The purpose of my program is to observe a particular algorithm while it's running. The strings I am adding are log entries: one log string per iteration. The reason I am invoking these add operations is that the algorithm runs on a thread other than the UI thread. Because I can't do UI manipulation from any thread other than the UI thread, I built a kind of ThreadSafeObservableCollection. (I'm pretty sure that code is not worth posting because it would detract from the actual problem, which I think is that the UI can't handle such frequent, rapid invocations of methods.)
It's pretty straightforward: you are doing it wrong by the time you overload the user's eyeballs. Which happens pretty quickly as far as modern CPU cores are concerned: beyond 20 updates per second, the displayed information just starts to look like a blur. That's something cinema takes advantage of; movies play back at 24 frames per second.
Updating any faster than that is just a waste of resources. You still have an enormous amount of breathing room left before the UI thread starts to buckle. It depends on the amount of work you ask it to do, but a 50x safety margin is typical. A simple timer based on Environment.TickCount will get the job done; fire an update when the difference is >= 45 ms.
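A minimal sketch of that TickCount throttle, batching rather than dropping so no log line is lost. The class and the forwarding delegate are invented for illustration:

using System;
using System.Collections.Generic;

// Accumulates log lines on the worker thread and forwards them to the UI at most
// once every ~45 ms. Assumes a single worker thread calls Report(), so no locking.
public class BatchedUiReporter
{
    private readonly List<string> _pending = new List<string>();
    private readonly Action<IReadOnlyList<string>> _forwardToUi;   // e.g. wraps Dispatcher.BeginInvoke
    private int _lastTick = Environment.TickCount;

    public BatchedUiReporter(Action<IReadOnlyList<string>> forwardToUi)
    {
        _forwardToUi = forwardToUi;
    }

    public void Report(string line)
    {
        _pending.Add(line);
        if (Environment.TickCount - _lastTick >= 45)
            Flush();
    }

    // Call once more when the algorithm finishes, so the tail isn't lost.
    public void Flush()
    {
        if (_pending.Count == 0) return;
        _lastTick = Environment.TickCount;
        _forwardToUi(_pending.ToArray());
        _pending.Clear();
    }
}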
Posting that often to the UI is a red flag. Here is an alternative: Put new strings into a ConcurrentQueue and have a timer pull them out every 100ms.
Very simple and easy to implement, and the result is perfect.
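A sketch of that queue-plus-timer approach in WPF; the MainWindow members and the logList control are placeholders for your own:

using System;
using System.Collections.Concurrent;
using System.Windows.Controls;
using System.Windows.Threading;

public partial class MainWindow
{
    private readonly ConcurrentQueue<string> _logQueue = new ConcurrentQueue<string>();

    // Worker thread: just enqueue, never touch the UI.
    public void ReportFromWorker(string line) => _logQueue.Enqueue(line);

    // UI thread: a DispatcherTimer drains whatever has accumulated every 100 ms.
    private void StartDraining(ListBox logList)
    {
        var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(100) };
        timer.Tick += (s, e) =>
        {
            while (_logQueue.TryDequeue(out var line))
                logList.Items.Add(line);   // placeholder for your actual UI update
        };
        timer.Start();
    }
}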
I've not used WPF, just Windows Forms, but I would suggest that if there is a view-only control which needs to be updated asynchronously, the proper way to do it is to write the control so that its properties can be accessed freely from any thread, and to have an update BeginInvoke the refresh routine only if there isn't already an update pending. The latter determination can be made with an Int32 "flag" and Interlocked.Exchange: the property setter calls Interlocked.Exchange on the flag after changing the underlying field; if the flag had been clear, it does a BeginInvoke of the refresh routine; the refresh routine then clears the flag and performs the refresh. In some cases the pattern may be further enhanced by having the control's refresh routine check how much time has elapsed since it last ran and, if the answer is less than 20 ms or so, use a timer to trigger a refresh 20 ms after the previous one.
Even though .NET can handle having many BeginInvoke actions posted to the UI thread, it's often pointless to have more than one update for a single control pending at a time. Limit the pending actions to one (or at most a small number) per control, and there will be no danger of the queue overflowing.
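Here is a rough sketch of that coalescing pattern. It uses SynchronizationContext so it applies to both Windows Forms and WPF; all names are invented for the example:

using System;
using System.Threading;

// At most one refresh is ever pending on the UI thread, no matter how often SetText is called.
public class CoalescedText
{
    private readonly SynchronizationContext _ui;       // capture SynchronizationContext.Current on the UI thread
    private readonly Action<string> _applyToControl;   // runs on the UI thread
    private string _latest;
    private int _refreshPending;                       // 0 = nothing queued, 1 = refresh queued

    public CoalescedText(SynchronizationContext uiContext, Action<string> applyToControl)
    {
        _ui = uiContext;
        _applyToControl = applyToControl;
    }

    // May be called from any thread, as often as you like.
    public void SetText(string value)
    {
        Volatile.Write(ref _latest, value);
        if (Interlocked.Exchange(ref _refreshPending, 1) == 0)
            _ui.Post(_ => Refresh(), null);            // queue at most one refresh
    }

    private void Refresh()
    {
        Interlocked.Exchange(ref _refreshPending, 0);  // clear the flag first...
        _applyToControl(Volatile.Read(ref _latest));   // ...then paint the latest value
    }
}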
OK, sorry for the bad link earlier in the comments, but I kept reading and maybe this will be of help:
The DispatcherOperation object returned by BeginInvoke can be used in several ways to interact with the specified delegate, such as:
Changing the DispatcherPriority of the delegate as it is pending execution in the event queue.
Removing the delegate from the event queue.
Waiting for the delegate to return.
Obtaining the value that the delegate returns after it is executed.
If multiple BeginInvoke calls are made at the same DispatcherPriority, they will be executed in the order the calls were made.
If BeginInvoke is called on a Dispatcher which has shut down, the status property of the returned DispatcherOperation is set to Aborted.
Maybe you can do something with the number of delegates that you are waiting on...
To put supercat's solution in a more WPF-like way, aim for an MVVM pattern; then you can have a separate view-model class which you can share between threads, perhaps taking locks at appropriate points or using the concurrent collection classes. You implement an interface (I think it's INotifyPropertyChanged) and fire an event to say the collection has changed. This event must be fired from the UI thread, but only needs …
After going through the answers provided by others and your comments on them, your actual intent seems to be ensuring that the UI remains responsive. For this I think you have already received good proposals.
But still, to answer your question (how to detect and flag overloading of UI thread) verbatim, I can suggest the following:
First determine what the definition of 'overloading' should be (e.g. I can take it to be 'the UI thread stops rendering controls and stops processing user input' for a long enough duration).
Define this duration (e.g. if the UI thread continues to process render and input messages within at most 40 ms, I will say it is not overloaded).
Now initiate a DispatcherTimer with its DispatcherPriority set according to your definition of overloading (for my example it could be DispatcherPriority.Input or lower) and an Interval sufficiently smaller than your 'duration' for overloading.
Maintain a shared variable of type DateTime and on each Tick of the timer set its value to DateTime.Now.
In the delegate you pass to BeginInvoke, you can compute the difference between the current time and the last time Tick fired. If it exceeds your 'measure' of overloading, then the UI thread is 'overloaded' according to your definition. You can then set a shared flag which can be checked from inside your loop to take appropriate action.
Though I admit it is not foolproof, by empirically adjusting your 'measure' you should be able to detect overloading before it impacts you.
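A sketch of that heartbeat idea, using the 40 ms threshold from the example above; the class name and intervals are illustrative only:

using System;
using System.Threading;
using System.Windows.Threading;

// Runs a low-priority timer on the UI thread; if ticks stop arriving on time,
// the UI thread is too busy to render or process input, i.e. "overloaded".
public class UiLoadMonitor
{
    private DispatcherTimer _timer;
    private long _lastTickUtc = DateTime.UtcNow.Ticks;

    // Call from the UI thread.
    public void Start()
    {
        _timer = new DispatcherTimer(DispatcherPriority.Input)
        {
            Interval = TimeSpan.FromMilliseconds(10)
        };
        _timer.Tick += (s, e) => Interlocked.Exchange(ref _lastTickUtc, DateTime.UtcNow.Ticks);
        _timer.Start();
    }

    // Call from the worker loop before posting more work to the dispatcher.
    public bool IsOverloaded()
    {
        var last = new DateTime(Interlocked.Read(ref _lastTickUtc), DateTimeKind.Utc);
        return DateTime.UtcNow - last > TimeSpan.FromMilliseconds(40);
    }
}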
Use a Stopwatch to measure the minimum, maximum, average, first and last update durations. (You can output these to your UI.)
Your update frequency must be less than 1 / (the average update duration).
Change your algorithm's implementation so that its iterations are invoked by a multimedia timer, e.g. this .NET wrapper or this .NET wrapper. When the timer fires, use Interlocked to prevent starting a new iteration before the current iteration is complete. If you need to run iterations on the main thread, use a dispatcher. You can run more than one iteration per timer event; use a parameter for this and, together with your time measurements, determine how many iterations to run per timer event and how often you want the timer events.
I do not recommend using less than 5 ms for the timer, as the timer events will suffocate the CPU.
As I wrote earlier in my comment, use DispatcherPriority.Input when dispatching to the main thread; that way the UI's CPU time isn't suffocated by the dispatches. This is the same priority UI messages have, so they are not ignored.

Alternative to waiting in a method with a while loop

I'm writing a program that runs through my method possibly 50 times a second or more (this is necessary).
The method needs to follow this model:
Create boolean value.
Wait until the value changes.
Continue on in the method.
Simple, I know, but I don't want to use a while loop because it takes up about 3% more CPU than it should, and I imagine that if it had to wait any longer for the value to change, it could eat up all of my CPU cycles, which I don't want. Also, creating a new thread every time I execute the method, at 50 times per second, is a horrible idea.
So what could I do? If I need to provide any other kind of information feel free to ask.
Could a ManualResetEvent be of any use? Not sure how it would work with your system, but it might be something to look into.
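For example, a minimal sketch of that idea using ManualResetEventSlim (the lighter-weight variant); the class and method names are invented:

using System.Threading;

public class ValueGate
{
    private readonly ManualResetEventSlim _changed = new ManualResetEventSlim(false);

    // Called by whatever code changes the value.
    public void SignalChanged() => _changed.Set();

    // The method that used to spin in a while loop now blocks without burning CPU.
    public void RunStep()
    {
        _changed.Reset();   // arm the "boolean"
        _changed.Wait();    // sleep until SignalChanged() is called
        // ... continue on in the method ...
    }
}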
Depending on the nature of the method, you could make the rest of the method into an event handler, and have the place that changes the value fire a ValueChanged-type event.

One timer, many method calls or many timers, one method call?

I'm developing an application for WinCE 5.0 on .NET CF 2.0.
I was wondering what other people see as the best solution in the following two cases:
Assume that I have 10 methods that I need to run every 50 ms.
Solution 1:
Create one System.Threading.Timer that fires every 50 ms, and in its callback run the 10 methods mentioned above.
Solution 2:
Create 10 System.Threading.Timer instances that each fire every 50 ms, and in each of their callbacks call one of the above methods.
Which solution do you think is best? Some of the methods are cpu intensive, others are not.
Using multiple timers makes the calls independent. That matters most with respect to exceptions. Do you want the other methods to proceed if #2 throws? Are the methods otherwise dependent on each other?
With multiple timers on a multi-core machine you will profit from executing on the ThreadPool.
Under Win-CE you are probably running on a single core, making part of this reasoning academic.
I don't think I'd use a Timer at all. A Timer is going to run its callback on another thread when it fires, which a) takes more time and b) allows for reentrancy, meaning you could end up running your methods simultaneously, especially if they take a while.
I'd spawn a single thread at startup (you define what that means - app startup, some object creation, etc) that does all the reads sequentially, does a Sleep call and then repeats.
Something along these lines:
private void Startup()
{
    new Thread(WorkerProc) { IsBackground = true }
        .Start();
}

private void WorkerProc()
{
    int pollPeriod = 50;
    while (true)
    {
        var et = Environment.TickCount;
        // call your methods
        et = Environment.TickCount - et;
        var sleep = pollPeriod - et;
        if (sleep < 0) sleep = 0; // always yield
        Thread.Sleep(sleep);
    }
}
It boils down to how accurate those methods needs to be. Calling each method in sequence (using the same timer) will not run all methods every 50ms, since each method takes time to complete.
If all methods must run every 50 ms, use different timers; otherwise use the same timer.
As it looks, you don't depend on the order of your operations; otherwise you wouldn't be asking the question.
I would prefer the "one timer per operation" solution. If one operation occasionally takes more time (lots of data, whatever), at least the other operations will still get executed. But I don't know if that really helps you; it depends a lot on your needs and implementation.
I would go for Solution 1, at least for your CPU intensive methods.
Then you implicitly run your methods in sequence. I assume that since you are on WinCE you don't have that many cores or that much RAM, and the best trade-off is not to try to run more code in parallel than necessary.
In Solution 2 you run the risk of creating multiple threads executing your 10 methods at the same time. This might be good if you are waiting on I/O, especially network I/O.
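For completeness, here is a sketch of Solution 1 with a single System.Threading.Timer and a simple guard so slow methods never overlap. It is written in current C# against the full framework; the same idea ports to .NET CF 2.0 with the delegate types available there:

using System;
using System.Threading;

public class PollingHost
{
    private readonly Action[] _methods;   // your 10 methods
    private Timer _timer;
    private int _busy;                    // 0 = idle, 1 = a tick is running

    public PollingHost(params Action[] methods)
    {
        _methods = methods;
    }

    public void Start()
    {
        _timer = new Timer(Tick, null, 0, 50);   // fire every 50 ms
    }

    private void Tick(object state)
    {
        // If the previous tick is still running (a slow method), skip this one
        // instead of letting callbacks pile up.
        if (Interlocked.CompareExchange(ref _busy, 1, 0) != 0)
            return;
        try
        {
            foreach (Action method in _methods)
                method();
        }
        finally
        {
            Interlocked.Exchange(ref _busy, 0);
        }
    }
}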

Parallel.ForEach() .... how best to terminate loop externally?

I have an application in which I download and process approximately 7,800 URLs using a Parallel.ForEach() loop. Using this technique has allowed my computer to complete the operation in about 4.5 minutes (it used to take almost 28 minutes on average).
As this runs inside a WinForms application, I allow the user to stop the process early by clicking a stop button, which in turn sets a volatile boolean variable to false. At the top of my Parallel.ForEach() loop I check the state of that variable, and if it has been set to false I simply call the ParallelLoopState.Stop() method. The next code block inside the loop only runs if the ParallelLoopState has not been stopped.
Works great. Have experienced zero problems using this implementation for the 3 weeks I've been using it.
But I was just reading and came across the CancellationTokenSource and CancellationToken classes, and discovered these were designed to perform essentially the same act: to allow a parallel loop to be cancelled externally.
Can anyone tell me if they foresee a problem with me continuing to use my existing implementation?
Parallel.ForEach(searchList, (url, state) =>
{
    if (!this.isSearching)
    {
        state.Stop();
        OnSearchEvent(new SearchStatusEventArgs(SearchState.STOP_REQUESTED, ......));
    }

    if (!state.IsStopped)
    {
        // continue on with regular processing .......
    }
});
Looks fine to me! CancellationTokenSource and CancellationToken are really for use with Tasks, especially where tasks are linked together via ContinueWith. From there, you can interrogate the token (thread-safe) and throw an exception or quit the thread in exactly the same way you are already doing it.
Unless you go down the route of complicated task chaining and closures, then I'd say there is no need to complicate things!
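For reference, here is a sketch of what the CancellationToken version could look like if you ever want to switch. searchList, OnSearchEvent and the event args come from your example; the rest is standard TPL usage as I understand it:

using System;
using System.Threading;
using System.Threading.Tasks;

var cts = new CancellationTokenSource();
// Wire the Stop button's click handler to: cts.Cancel();

try
{
    Parallel.ForEach(
        searchList,
        new ParallelOptions { CancellationToken = cts.Token },
        url =>
        {
            // Checking the token inside the body ends the current item early as well.
            cts.Token.ThrowIfCancellationRequested();

            // ... regular processing ...
        });
}
catch (OperationCanceledException)
{
    // Parallel.ForEach throws this once the token is cancelled.
    OnSearchEvent(new SearchStatusEventArgs(SearchState.STOP_REQUESTED /* , ... */));
}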
