I have an "autocomplete" textbox that will invoke a WCF method each time a key is pressed.
The WCF server, in turn, will run an SQL query, return the first 15 results and send them.
However, this results in a noticeable latency when typing in the box.
What I'm about to do instead is this:
Create a new thread when the TextChanged event fires and make that thread wait 1000 ms (measured with Stopwatch.ElapsedMilliseconds). During this waiting time, the thread can be stopped permanently.
If it was not stopped, the thread will send the request to the server (and repopulate the autocomplete box).
As soon as a new TextChanged event fires, I will stop the current thread and start a new one.
Is there a better approach or is this the way to go?
So basically wait for 1 second for the user to stop typing before requesting results.
That's a good solution for conserving server resources, but you are actually adding latency by making the user wait for a minimum of 1000ms.
My guess is that your original issue was that this is a winforms app and the request you made was synchronous by default. As a result, the textbox wasn't accepting user input while the app was waiting for a response. Just making the call asynchronous should solve that issue without making the typing slower.
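The delay-then-send plan from the question can also be sketched without a dedicated thread, using Task.Delay plus a CancellationTokenSource (a common debounce pattern; the Debouncer class and its names here are illustrative, not from the original code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Debouncer
{
    private CancellationTokenSource _cts = new CancellationTokenSource();

    // Cancels any pending call and schedules a new one after the delay.
    public async Task DebounceAsync(Func<Task> action, int delayMs)
    {
        _cts.Cancel();                        // supersede the previous keystroke
        _cts = new CancellationTokenSource();
        CancellationToken token = _cts.Token;
        try
        {
            await Task.Delay(delayMs, token); // wait for typing to pause
            await action();                   // runs only if no newer keystroke arrived
        }
        catch (OperationCanceledException)
        {
            // a newer keystroke cancelled this one; nothing to do
        }
    }
}
```

From the TextChanged handler you would call something like DebounceAsync(() => CallWcfAsync(textBox.Text), 1000); the WCF call itself should also be asynchronous, as noted above.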
Another approach is to use Rx (Reactive Extensions) to encapsulate the autocomplete logic, with some very interesting characteristics.
With Rx you get the ability to compose multiple event sources, throttle the user input so that you don't overwhelm the source, and on top of that you can ignore an old result if the user has typed more into the search box (TakeUntil).
More info:
Rx: Curing your asynchronous programming blues
Curing the asynchronous blues with the Reactive Extensions for .NET
Example:
SO: RX AutoCompleteBox
RxProperty = Observable.FromEventPattern&lt;TextChangedEventHandler, TextChangedEventArgs&gt;(
        h => new TextChangedEventHandler(h),
        h => AssociatedObject.TextChanged += h,
        h => AssociatedObject.TextChanged -= h)
    .Select(e => ((TextBox)e.Sender).Text)
    .Throttle(TimeSpan.FromMilliseconds(400))
    .ObserveOnDispatcher();
Instead of FromEventPattern you can use FromAsync with the proxy's BeginXxx/EndXxx methods.
Related
I rewrote some old async code of mine that makes SOAP calls. The fetch() method would go out, get the result from the SOAP interface, and then add it to a DataTable that is bound to my WPF view. The new code uses Reactive Extensions to get a list of strings and creates an IObservable from the list. I thought it would return the results asynchronously, but the entire UI locks up until the entire result set is ready. I'm new to Reactive Extensions, so I'm hoping I'm just missing something simple.
The Code:
(from click event)
private void fetchSoapRows()
{
    var strings = txtInput.Text.Split('*').ToObservable();
    strings.Subscribe(s => SoapQueryEngine.Fetch(s));
}
Also, does anyone know how I could write a test to make certain this method doesn't block the application in the future?
There are two parts to an observable query, the Query itself and the Subscription.
Your Query is an IEnumerable<string> producing values as fast as the computer can do it.
Your Subscription is
SoapQueryEngine.Fetch(s);
This runs Fetch for each string produced by the Query on the subscriber thread, which tends to be the thread where you set up your Subscription (although not necessarily).
The issue has to do with the intention and design of Rx. The Query is meant to be the long-running process and the Subscription a short method that deals with the results. If you want to run a long-running function as an Rx observable, your best option is to use Observable.ToAsync.
You should also take a look at this question to see a similar problem which shows more of what's going on in the background.
There is nothing inherently concurrent about Rx. If you want to make your calls to Fetch you will need to change SoapQueryEngine so that it is async or call it on another thread and then bring the results back to the UI thread.
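As a sketch of the "call it on another thread" option, here is one way to hand the blocking call to the thread pool using plain Tasks rather than Observable.ToAsync (the Fetch method below is a stand-in for the real SoapQueryEngine.Fetch):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

static class SoapQueryEngineSketch
{
    // Stand-in for the real blocking SOAP call.
    public static string Fetch(string s) => "result:" + s;

    // Runs each blocking Fetch on the thread pool, so the calling
    // (UI) thread is never blocked, and gathers all the results.
    public static Task<string[]> FetchAllAsync(string input) =>
        Task.WhenAll(input.Split('*')
                          .Select(s => Task.Run(() => Fetch(s))));
}
```

Awaiting FetchAllAsync from the click handler brings the results back to the UI thread, where they can then be added to the bound DataTable.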
Try this instead. Rather than subscribing to the TextChanged event directly, create an observable from the event and observe it on the thread pool:
Observable.FromEventPattern(&lt;subscribe event&gt;, &lt;unsubscribe event&gt;)
    .Select(e => ((TextBox)e.Sender).Text)
    .ObserveOn(ThreadPoolScheduler.Instance)
    .SelectMany(text => text.Split('*'))
    .Subscribe(s => SoapQueryEngine.Fetch(s));
Suppose you are permanently invoking a method asynchronously onto the UI thread/dispatcher with
while (true)
{
    uiDispatcher.BeginInvoke(new Action&lt;int, T&gt;(insert_),
                             DispatcherPriority.Normal, new object[] { });
}
On every run of the program you observe that the GUI of the application begins to freeze after about 90 seconds due to the flood of invocations (the time varies, but lies roughly between 1 and 2 minutes).
How could one determine (measure?) exactly when this overloading occurs, in order to stop it early enough?
Appendix I:
In my actual program I don't have an infinite loop. I have an algorithm that iterates several hundred times before terminating. In every iteration I add a string to a List control in my WPF application. I used the while (true) { ... } construct because it best matches what happens. In fact the algorithm terminates correctly and all (hundreds of) strings are added correctly to my List, but after some time I lose the ability to use my GUI until the algorithm terminates - then the GUI is responsive again.
Appendix II:
The purpose of my program is to observe a particular algorithm while it's running. The strings I am adding are log entries: one log string per iteration. The reason I am invoking these add operations is that the algorithm runs on a thread other than the UI thread. To cope with the fact that I can't do UI manipulation from any thread other than the UI thread, I built a kind of ThreadSafeObservableCollection. (I am pretty sure that this code is not worth posting, because it would detract from the actual problem, which I think is that the UI can't handle such rapid, repeated method invocations.)
It's pretty straightforward: you are doing it wrong by the time you overload the user's eyeballs. Which happens pretty quickly as far as modern CPU cores are concerned; beyond 20 updates per second the displayed information just starts to look like a blur. Something the cinema takes advantage of: movies play back at 24 frames per second.
Updating any faster than that is just a waste of resources. You still have an enormous amount of breathing room before the UI thread starts to buckle. It depends on the amount of work you ask it to do, but a 50x safety margin is typical. A simple timer based on Environment.TickCount will get the job done; fire an update when the difference is >= 45 ms.
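The 45 ms gate described above can be sketched as a tiny helper (the UpdateGate class and its names are illustrative):

```csharp
using System;

class UpdateGate
{
    private int _last = Environment.TickCount - 1000; // let the first update through

    // Returns true at most once per 45 ms; callers skip the repaint otherwise.
    public bool ShouldUpdate()
    {
        int now = Environment.TickCount;
        if (now - _last >= 45)
        {
            _last = now;
            return true;
        }
        return false;
    }
}
```

The UI code calls ShouldUpdate() before each repaint and simply drops updates that arrive inside the 45 ms window.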
Posting that often to the UI is a red flag. Here is an alternative: Put new strings into a ConcurrentQueue and have a timer pull them out every 100ms.
Very simple and easy to implement, and the result is perfect.
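A minimal sketch of that queue-and-drain arrangement (class and method names are illustrative; Drain is what the 100 ms UI timer tick would call):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

class LogBuffer
{
    private readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();

    // Called from the worker thread, any number of times, at any rate.
    public void Add(string line) => _queue.Enqueue(line);

    // Called from a UI timer every ~100 ms; drains whatever has accumulated
    // so the UI posts one batched update instead of hundreds of BeginInvokes.
    public List<string> Drain()
    {
        var batch = new List<string>();
        while (_queue.TryDequeue(out var line))
            batch.Add(line);
        return batch;
    }
}
```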
I've not used WPF (just Windows Forms), but I would suggest that if there is a view-only control which needs to be updated asynchronously, the proper way to do it is to write the control so that its properties can be accessed freely from any thread, and have an update BeginInvoke the refresh routine only if there isn't already an update pending. The latter determination can be made with an Int32 flag and Interlocked.Exchange: the property setter calls Interlocked.Exchange on the flag after changing the underlying field; if the flag had been clear, it does a BeginInvoke on the refresh routine; the refresh routine then clears the flag and performs the refresh. In some cases the pattern may be further enhanced by having the control's refresh routine check how much time has elapsed since the last time it ran and, if the answer is less than 20 ms or so, use a timer to trigger a refresh 20 ms after the previous one.
Even though .NET can handle having many BeginInvoke actions posted on the UI thread, it's often pointless to have more than one update pending for a single control at a time. Limit the pending actions to one (or at most a small number) per control, and there will be no danger of the queue overflowing.
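A sketch of that pending-update flag (CoalescingRefresher is an illustrative name; the Action stands in for BeginInvoke of the control's refresh routine):

```csharp
using System;
using System.Threading;

class CoalescingRefresher
{
    private int _pending; // 0 = no refresh queued, 1 = refresh queued
    private readonly Action _beginInvokeRefresh;

    public CoalescingRefresher(Action beginInvokeRefresh)
        => _beginInvokeRefresh = beginInvokeRefresh;

    // Property setters call this after changing the underlying field.
    public void NotifyChanged()
    {
        // Only the first change since the last refresh posts to the UI thread;
        // further changes are coalesced into that one pending refresh.
        if (Interlocked.Exchange(ref _pending, 1) == 0)
            _beginInvokeRefresh();
    }

    // The refresh routine calls this first, then repaints from current state.
    public void OnRefresh() => Interlocked.Exchange(ref _pending, 0);
}
```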
OK, sorry for the bad link earlier in the comments, but I kept reading and maybe this will be of help:
The DispatcherOperation object returned by BeginInvoke can be used in several ways to interact with the specified delegate, such as:
Changing the DispatcherPriority of the delegate as it is pending execution in the event queue.
Removing the delegate from the event queue.
Waiting for the delegate to return.
Obtaining the value that the delegate returns after it is executed.
If multiple BeginInvoke calls are made at the same DispatcherPriority, they will be executed in the order the calls were made.
If BeginInvoke is called on a Dispatcher which has shut down, the status property of the returned DispatcherOperation is set to Aborted.
Maybe you can do something with the number of delegates that you are waiting on...
To put supercat's solution in a more WPF-like way, try the MVVM pattern; then you can have a separate view-model class which you can share between threads, perhaps taking locks at appropriate points or using the concurrent collection classes. You implement an interface (INotifyCollectionChanged for collections) and fire an event to say the collection has changed. This event must be fired from the UI thread, but it only needs to be raised when the collection has actually changed.
After going through the answers provided by others and your comments on them, your actual intent seems to be ensuring that the UI remains responsive. For this I think you have already received good proposals.
But still, to answer your question (how to detect and flag overloading of the UI thread) verbatim, I can suggest the following:
First, determine what the definition of 'overloading' should be (e.g. I can take it to mean 'the UI thread stops rendering controls and stops processing user input for a long enough duration').
Define this duration (e.g. if the UI thread continues to process render and input messages within at most 40 ms, I will say it is not overloaded).
Now initiate a DispatcherTimer with its DispatcherPriority set according to your definition of overloading (for my example it can be DispatcherPriority.Input or lower) and an Interval sufficiently smaller than your overloading 'duration'.
Maintain a shared variable of type DateTime and on each tick of the timer set its value to DateTime.Now.
In the delegate you pass to BeginInvoke, compute the difference between the current time and the last time the Tick fired. If it exceeds your 'measure' of overloading, then the UI thread is 'overloaded' according to your definition. You can then set a shared flag which can be checked from inside your loop to take appropriate action.
Though I admit it is not foolproof, by empirically adjusting your 'measure' you should be able to detect overloading before it impacts you.
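The heartbeat steps above can be sketched as a small class (illustrative names; Beat would be called from the DispatcherTimer's Tick handler and IsOverloaded from the worker loop before each BeginInvoke):

```csharp
using System;
using System.Threading;

class UiHeartbeat
{
    private long _lastTickTicks = DateTime.UtcNow.Ticks;
    private readonly TimeSpan _budget;

    public UiHeartbeat(TimeSpan budget) => _budget = budget;

    // Called by the low-priority DispatcherTimer on the UI thread.
    public void Beat() =>
        Interlocked.Exchange(ref _lastTickTicks, DateTime.UtcNow.Ticks);

    // Called from the worker thread; true means the UI thread has not
    // serviced the timer within the allowed responsiveness budget.
    public bool IsOverloaded()
    {
        var last = new DateTime(Interlocked.Read(ref _lastTickTicks), DateTimeKind.Utc);
        return DateTime.UtcNow - last > _budget;
    }
}
```

Interlocked reads and writes are used because the DateTime value is shared between the UI thread and the worker thread.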
Use a Stopwatch to measure the minimum, maximum, average, first and last update durations. (You can output these to your UI.)
Your update frequency must be less than 1/(the average update duration).
Change your algorithm's implementation so that its iterations are invoked by a multimedia timer, e.g. this .NET wrapper or this .NET wrapper. When the timer fires, use Interlocked to prevent starting a new iteration before the current iteration is complete. If you need to run iterations on the main thread, use a dispatcher. You can run more than one iteration per timer event; use a parameter for this and, together with the time measurements, determine how many iterations to run per timer event and how often you want the timer events.
I do not recommend using less than 5 ms for the timer, as the timer events would suffocate the CPU.
As I wrote earlier in my comment, use DispatcherPriority.Input when dispatching to the main thread; that way the UI's CPU time isn't suffocated by the dispatches. This is the same priority the UI messages have, so they are not ignored.
My WCF service has a Collection variable and an Add method. The Add method logs records to the Collection whenever something triggers it.
Here are the requirements: on the client side, a thread reads the Collection from the WCF service every second, removes the records it has already read, and displays them in the foreground.
My question is: how can I better control the concurrency of the reading and writing? (It is not thread safe, so I will get duplicate values while reading it.)
Currently I use a Timer to manage the concurrency: when the timer event is triggered, I stop the Timer until the read has ended, and then start the Timer again (to prevent the timer event from triggering the next operation before the previous one has completed). I use Monitor.TryEnter as a thread lock, and the code looks like the following:
void ReadFromService()
{
    // do ...
    timer.Start(); // restart the timer only after the read has finished
}

Timer timer = new Timer(1000);
timer.Elapsed += (s, e) =>
{
    timer.Stop();
    ReadFromService();
};
timer.Start();
I don't think this is a good solution; I hope someone can suggest something better.
It would be preferable not to have a timer in your Reader client. It's not that concurrency is especially difficult, it's just plain inefficient to poll over a communication channel.
You could instead have the Add method trigger an event in the Reader client to either pass it the added records, or have that event act as a trigger to read an updated snapshot of the collection, and then the Service locks the Collection appropriately before returning the new snapshot.
This is a limited form of the Observer pattern, and it is easily extensible if you add more full-collection readers, or new readers that want to see changes to different subsets of the data in the collection.
Info on WCF events (callbacks) here.
Have a look at the ConcurrentList&lt;T&gt; class at this link: http://www.deanchalk.me.uk/post/Task-Parallel-Concurrent-List-Implementation.aspx
My application handles a data feed. When a new packet comes in, a dispatcher collects it, and raises an event that a proper listener can pick up and do what it needs to do.
I am trying to simulate the live feed to perform some testing. I made a class that feeds the dispatcher with packets, based on the number of active listeners.
This is the code I use to start the Feed() method, which sits in memory and generates a packet every given interval:
foreach (var item in Listeners)
{
    object listener = item;
    Task.Factory.StartNew(() => Feed(listener), TaskCreationOptions.LongRunning);
}
The Feed() method works something like this:
while (run)
{
    packet = GenerateThePacket(listener.Id);    // make a packet with the listener id
    FeedHandler.OnPacketRecieved(this, packet); // raise the FeedHandler's event as if it came from outside
    Thread.Sleep(1000 / interval);              // interval determines how many packets per second
}
So, if I have 100 listeners, it starts 100 instances of Feed(), each with a different listener id, firing PacketRecieved events at the requested interval.
I guess many of you already know what's bad about it, but I'll explain the problem anyway:
When I use an interval of 1 or 2 it works great. When I choose 10 (that is, a packet every 100 ms) it doesn't work right. Each thread fires at a different rate: the latest ones created work well and fast (10/sec), while the first ones created work really slowly (1/sec or less).
I guess 100 threads can't operate at the same time, so they are just waiting. I think.
What exactly is happening, and how can I implement a true feed generator that simulates 10 packets per second simultaneously for 100 listeners?
I think you're approaching this from the wrong angle....
Have a read through these:
http://blogs.msdn.com/b/pfxteam/archive/2010/04/21/9997559.aspx (link to pdf on there)
http://www.sadev.co.za/content/pulled-apart-part-vii-plinq-not-easy-first-assumed
In a nutshell, the Task library will get a thread from the thread pool, and if one is not available the tasks will be queued until a thread frees up... so the number of threads that can run concurrently depends on your system and the size of your thread pool.
For me, there are two ways to go: use the Parallel.ForEach static method, or use the PLINQ AsParallel() option as described in the articles above. At the end of the day, it's down to you which one to use.
Using PLINQ, something like this:
var parallelQuery = Listeners.AsParallel().Select(item => Feed(item)); // creates the parallel query
parallelQuery.ForAll(item => &lt;dosomething&gt;); // begins the parallel process; do whatever you need with each result
Your Feed method/object can look like this:
while (run)
{
    packet = GenerateThePacket(listener.Id);
    FeedHandler.OnPacketRecieved(this, packet); // raise the FeedHandler's event as if it came from outside
    // No more Thread.Sleep
}
This is just a basic intro for you, but the links I've added above are quite helpful and informative. It's up to you which method to use.
Keep in mind there are additional options you can add.... all in the links above.
Hope this helps!
Often in my code I start threads which basically look like this:
void WatchForSomething()
{
    while (true)
    {
        if (SomeCondition)
        {
            // Raise event to handle condition
            OnSomeCondition();
        }
        Thread.Sleep(100);
    }
}
just to know whether some condition is true or not (for example, if I have a badly coded library with no events, just boolean variables, and I need a "live view" of them).
I wonder if there is a better way to accomplish this kind of work, like a Windows function I can hook into that runs my methods every x seconds. Or should I code a global event for my app, raised every x seconds, and let it call my methods like this:
//Event from Windows or selfmade
TicEvent += new TicEventHandler(WatchForSomething);
and then this method:
void WatchForSomething()
{
    if (SomeCondition)
    {
        // Raise event to handle condition
        OnSomeCondition();
    }
}
So, I hope this is not closed because of being a "subjective question" or something, I just want to know what the best practice for this kind of work is.
There isn't necessarily a "best way" to write long-running event processing code. It depends on what kind of application you are developing.
The first example you show is the idiomatic way in which you would often see the main method of a long-running thread written. While it's generally desirable to use a mutex or waitable event synchronization primitive rather than a call to Sleep() - it is otherwise a typical pattern used to implement event processing loops. The benefit of this approach is that it allows specialized processing to run on a separate thread - allowing your application's main thread to perform other tasks or remain responsive to user input. The downside of this approach is that it may require the use of memory barriers (such as locks) to ensure that shared resources are not corrupted. It also makes it more difficult to update your UI, since you must generally marshal such calls back to the UI thread.
The second approach is often used as well - particularly in systems that already have an event-driven API, such as WinForms, WPF, or Silverlight. Using a timer object or Idle event is the typical manner in which periodic background checks can be made if there is no user-initiated event that triggers your processing. The benefit here is that it's easy to interact with and update user interface objects (since they are directly accessible from the same thread), and it mitigates the need for locks and mutexes to protect data. One potential downside of this approach is that if the processing that must be performed is time-consuming, it can make your application unresponsive to user input.
If you are not writing applications that have a user interface (such as services) then the first form is used much more often.
As an aside ... when possible, it's better to use a synchronization object like an EventWaitHandle or Semaphore to signal when work is available to be processed. This allows you to avoid using Thread.Sleep and/or Timer objects. It reduces the average latency between when work is available to be performed and when event processing code is triggered, and it minimizes the overhead of using background threads, since they can be more efficiently scheduled by the runtime environment and won't consume any CPU cycles until there's work to do.
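A sketch of that signalled-worker idea, using an AutoResetEvent so the background thread consumes no CPU until work arrives (the class and its names are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class SignalledWorker
{
    private readonly ConcurrentQueue<string> _work = new ConcurrentQueue<string>();
    private readonly AutoResetEvent _signal = new AutoResetEvent(false);

    // Producers call this from any thread.
    public void Post(string item)
    {
        _work.Enqueue(item);
        _signal.Set(); // wake the worker only when there is something to do
    }

    // Runs on a background thread; blocks (no CPU use) until signalled.
    public void Run(Action<string> handle, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            _signal.WaitOne(100); // short timeout so cancellation is noticed
            while (_work.TryDequeue(out var item))
                handle(item);
        }
    }
}
```

Compared with a Sleep(100) polling loop, the worker reacts to new items almost immediately instead of up to 100 ms later, and it is idle the rest of the time.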
It's also worth mentioning that if the processing you do is in response to communications with external sources (MessageQueues, HTTP, TCP, etc) you can use technologies like WCF to provide the skeleton of your event handling code. WCF provides base classes that make it substantially easier to implement both Client and Server systems that asynchronously respond to communication event activity.
If you have a look at Reactive Extensions, it provides an elegant way of doing this using the observable pattern.
var timer = Observable.Interval(TimeSpan.FromMilliseconds(100));
timer.Subscribe(tick => OnSomeCondition());
A nice thing about observables is the ability to compose and combine further observables from existing ones, and even use LINQ expressions to create new ones. For example, if you wanted to have a second timer that was in sync with the first, but only triggering every 1 second, you could say
var seconds = from tick in timer where tick % 10 == 0 select tick;
seconds.Subscribe(tick => OnSomeOtherCondition());
By the way, Thread.Sleep is probably never a good idea.
A basic problem with Thread.Sleep that people are usually not aware of is that its internal implementation does not pump STA messages. The best and easiest alternative, if you have to wait a given time and can't use a kernel sync object, is to replace Thread.Sleep with Thread.Join on the current thread, with the wanted timeout. Thread.Join behaves the same, i.e. the thread waits the wanted time, but in the meantime STA messages are pumped.
Why is this important? (A detailed explanation follows.)
Sometimes, without you even knowing it, one of your threads may have created an STA COM object. (For example, this sometimes happens behind the scenes when you use Shell APIs.) Now suppose a thread of yours has created an STA COM object and is now in a call to Thread.Sleep.
If at some point the COM object has to be deleted (which can happen at an unexpected time via the GC), the finalizer thread will try to call the object's destructor. This call will be marshalled to the object's STA thread, which is blocked.
Now, in fact, you have a blocked finalizer thread. In this situation objects can't be freed from memory, and bad things will follow.
So the bottom line: Thread.Sleep = bad. Thread.Join = reasonable alternative.
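The suggested replacement is essentially a one-line change (a sketch; the helper name and the 500 ms value are illustrative):

```csharp
using System.Threading;

static class PumpingWait
{
    // Instead of: Thread.Sleep(millis);
    // Joining the current thread with a timeout waits the same amount of
    // time, but pumps STA messages while waiting. Since the current thread
    // can never terminate while we wait on it, Join always returns after
    // roughly 'millis' milliseconds (with the return value false).
    public static void Wait(int millis) => Thread.CurrentThread.Join(millis);
}
```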
The first example you show is a rather inelegant way to implement a periodic timer. .NET has a number of timer objects that make this kind of thing almost trivial. Look into System.Windows.Forms.Timer, System.Timers.Timer and System.Threading.Timer.
For example, here's how you'd use a System.Threading.Timer to replace your first example:
System.Threading.Timer MyTimer =
    new System.Threading.Timer(CheckCondition, null, 100, 100);

void CheckCondition(object state)
{
    if (SomeCondition())
    {
        OnSomeCondition();
    }
}
That code will call CheckCondition every 100 milliseconds (or thereabouts).
You don't provide a lot of background on why you're doing this, or what you're trying to accomplish, but if it's possible, you might want to look into creating a Windows service.
Use a BackgroundWorker for additional thread-safety measures:
BackgroundWorker bw = new BackgroundWorker();
bw.WorkerSupportsCancellation = true;
bw.WorkerReportsProgress = true;

// ...

private void bw_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = sender as BackgroundWorker;
    for (;;)
    {
        if (worker.CancellationPending == true)
        {
            e.Cancel = true;
            break;
        }
        else
        {
            // Perform a time-consuming operation and report progress.
            System.Threading.Thread.Sleep(100);
        }
    }
}
For more info visit: http://msdn.microsoft.com/en-us/library/cc221403%28v=vs.95%29.aspx
A very simple way to wait without blocking other threads/tasks is:
(new ManualResetEvent(false)).WaitOne(500); //Waits 500ms