I have a scenario that I've tried to solve with TPL. The result is decent but I'm wondering if my design is flawed and if there is room for any improvement or radical changes.
Scenario:
A user can "subscribe" to X number of items and can set a given interval for updates of each item. The user will then get notifications if an item's data has changed. Since time is
a vital factor, I want to show an item as updated straight away instead of waiting for all items to be updated and then notifying the user about all updated items in a batch. Or is this a bad idea?
My approach:
A user subscribes to an event called ItemUpdated.
A method called Process is invoked at the given interval. It is called in a fire-and-forget way by running it on a BackgroundWorker. The Process
method works in the following way (a sketch follows the steps below):
2.1 Retrieve JSON strings and post them to a BufferBlock which is linked to a TransformBlock.
2.2 The TransformBlock parses each JSON string into a domain object called Item. The TransformBlock is linked to an ActionBlock.
2.3 The ActionBlock invokes the event ItemUpdated for each Item it receives.
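For reference, here is a minimal sketch of what that pipeline looks like (the type and method names are simplified placeholders, not my exact code):

using System;
using System.Threading.Tasks.Dataflow;

// Minimal sketch of the pipeline from steps 2.1-2.3. Item and ParseItem
// stand in for the real domain object and JSON parsing.
public class ItemPipeline
{
    public event EventHandler<Item> ItemUpdated;

    private readonly BufferBlock<string> _input = new BufferBlock<string>();

    public ItemPipeline()
    {
        // 2.2: JSON string -> domain object
        var parse = new TransformBlock<string, Item>(json => ParseItem(json));

        // 2.3: raise the event for every parsed item as soon as it arrives
        var notify = new ActionBlock<Item>(item => ItemUpdated?.Invoke(this, item));

        _input.LinkTo(parse, new DataflowLinkOptions { PropagateCompletion = true });
        parse.LinkTo(notify, new DataflowLinkOptions { PropagateCompletion = true });
    }

    // 2.1: called by Process for every JSON string that was retrieved
    public void Post(string json) => _input.Post(json);

    private static Item ParseItem(string json)
    {
        // replace with the actual JSON parsing
        return new Item();
    }
}

public class Item { /* domain properties elided */ }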
My question is basically: Is this an optimal solution or should I re-think my strategy? My biggest concern is that I notify the user about updated items with an event. Should I instead use an async callback method that is given a list of all updated items, or is there an alternative solution?
Related
This question has been asked a lot, and I've read all the posts on Stack Overflow about it.
I wanted to confirm my conclusion though.
Where the source of the information is quite fast (i.e. it doesn't take much processing time to get the information itself), it seems to me that the processing that takes the time is actually feeding it into the DataGridView. However, that has to be done on the UI thread, as that's where the control lives.
If that's the case, there seems to be limited benefit in trying to do anything in the background, and the corollary is that there's no effective way to populate a DataGridView without bunging up the UI thread. Is that right?
Surely there must somehow be a way to populate a DataGridView entirely asynchronously while the user can still interact with the UI?
There are many ways it can be done.
Paging via Async/await Inlined with Regular Code
If whatever it is you are fetching the data from (whether it be a direct database connection, a REST call, or a WCF call) supports paging, then you could fetch pages of data via inline async/await and add rows for each item returned in the page.
e.g.
// somewhere in your UI code
async Task LoadAsync(List<Page> pages)
{
    foreach (var page in pages)
    {
        var stuff = await service.GetMovieSalesPagedAsync(page);
        foreach (var item in stuff)
        {
            _dataGrid.Rows.Add(/* convert item then add it here */);
        }
    }
}
This is faster than, say, requesting all the data in one go and then trying to add rows for each item; the latter would just block the UI.
The benefit of the above approach is that the code is inline and easier to read.
Dedicated Task with Progressive Filling During Application Idle
This technique is better for when there is a large amount of data to display and you want the best performance UI-wise. It's also useful if the source does not support paging.
Here you can spawn a Task whose job is to retrieve the data one page at a time (or all at once), then add each page's results to, say, a ConcurrentQueue<> for the benefit of the UI thread. If you have to retrieve everything in one go, break the results into pages manually.
Meanwhile, in your Application.Idle handler, try to pop an item off the queue and, if one is found, add the new items as rows to the DataGridView. Depending on your app, you may choose to process all available pages at once or wait for the next application idle event; it might take a bit of fine tuning. Waiting for the next idle event allows your app to play nice with UI responsiveness.
This will cause the datagrid to be filled progressively rather than all at once.
A con of this approach is that the code is no longer inline. You have one block of code responsible for fetching and storing data; another for pumping it into the UI.
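A rough sketch of the idea, assuming a WinForms form with a DataGridView and an invented FetchPageAsync data call (all names here are illustrative):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Windows.Forms;

public partial class MainForm : Form
{
    // the grid control on the form (normally created by the designer)
    private DataGridView _dataGrid;

    // pages produced by the background task, consumed by the UI thread
    private readonly ConcurrentQueue<List<DataRowItem>> _pending =
        new ConcurrentQueue<List<DataRowItem>>();

    private void StartLoading()
    {
        Application.Idle += OnApplicationIdle;

        // producer: fetch page after page on a background task
        Task.Run(async () =>
        {
            for (int page = 0; ; page++)
            {
                var items = await FetchPageAsync(page);
                if (items.Count == 0) break;
                _pending.Enqueue(items);
            }
        });
    }

    private void OnApplicationIdle(object sender, EventArgs e)
    {
        // consumer: drain one page per idle event so the UI stays responsive
        if (_pending.TryDequeue(out var items))
        {
            foreach (var item in items)
                _dataGrid.Rows.Add(item.Col1, item.Col2);
        }
    }

    private Task<List<DataRowItem>> FetchPageAsync(int page)
    {
        // replace with the real paged query (database, REST, WCF, ...)
        return Task.FromResult(new List<DataRowItem>());
    }
}

public class DataRowItem { public string Col1; public string Col2; }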
How do I set up an event loop (main loop) in a UWP app?
My goal is to have an app that has a main page with a continuously updating calculation, that then continuously updates an image on the page. While that is constantly happening, the user can click a button to change the calculation behavior, or to go to a second page and edit related data.
The user can then click back to the main page and see the image and calculation restart and continuously update. (The calculation is complex, so it should go as fast as possible and use up as much app time as possible).
If there is a way to accomplish this without an event loop I would like to know that also, but so far I have not found a way.
With an event loop, I could simply update the calculation and the UI each time through the loop, but I have not found any way to set one up. This question asked nearly the same thing, but was never directly answered and had a different use case anyway.
The closest I have found to a solution is to grab the CoreDispatcher from the CoreWindow and use the RunIdleAsync() method to create a loop:
public MainPage()
{
    this.InitializeComponent();

    Windows.UI.Core.CoreWindow appwindow = Windows.UI.Core.CoreWindow.GetForCurrentThread();
    Windows.UI.Core.CoreDispatcher appdispatcher = appwindow.Dispatcher;

    //create a continuously running idle task (the app loop)
    appdispatcher.RunIdleAsync((dummyt) =>
    {
        //do the event loop here
        // ...

        if (appdispatcher.ShouldYield()) //necessary to prevent blocking the UI
        {
            appdispatcher.ProcessEvents(Windows.UI.Core.CoreProcessEventsOption.ProcessAllIfPresent);
        }
    });
}
The main problem with this is that you can't switch between pages (you get a system exception from dispatching events within an already dispatched event).
Second, this is very messy and requires maintaining extra state in the event loop. Besides, why should I have to go through these contortions just to have some calculations happening while the app is waiting for user input?
Is there a way to do this (besides switching to a C++ DirectX app)?
I don't know about setting up your own event loop, but there is no reason to do so.
What you are talking about sounds like a great case for Tasks. You would start a calculation Task whenever your user did something, having it report its progress via standard C# events if you need mid-operation updates. Those updates would modify properties in your view model which the binding system would then pick up.
You could also make your calculation code cancellable so changes can abort a previous calculation.
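A hedged sketch of what that might look like; the view-model and method names below are invented for illustration:

using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch of a cancellable, progress-reporting calculation loop.
// ResultSnapshot and DoOneIteration are placeholders for your own code.
public class CalculationViewModel
{
    private CancellationTokenSource _cts;

    // bound to the UI; real code would raise PropertyChanged in the setter
    public ResultSnapshot LatestResult { get; private set; }

    public void RestartCalculation()
    {
        _cts?.Cancel();                        // abort any previous run
        _cts = new CancellationTokenSource();
        var token = _cts.Token;

        // Progress<T> created on the UI thread marshals reports back to it
        var progress = new Progress<ResultSnapshot>(r => LatestResult = r);

        Task.Run(() =>
        {
            var state = new ResultSnapshot();
            while (!token.IsCancellationRequested)
            {
                state = DoOneIteration(state);                  // the expensive part
                ((IProgress<ResultSnapshot>)progress).Report(state);
            }
        }, token);
    }

    private ResultSnapshot DoOneIteration(ResultSnapshot previous)
    {
        // your calculation step goes here
        return previous;
    }
}

public class ResultSnapshot { /* whatever the image/UI needs */ }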
All of this involves pretty standard UWP concepts; no need for a special event loop. That you are even considering that makes me think you need to study MVVM and multi-threading/tasks; you are still thinking in a very "Win-Forms" kind of way.
If we're talking about some event loop, or stream, .NET has a great library named Rx (Reactive Extensions) which may be helpful for you. You can set up a simple flow, something like this:
var source = Observable
    // fire an event every second
    .Interval(TimeSpan.FromSeconds(1), DispatcherScheduler.Current)
    // add a timestamp to each event
    .Timestamp()
    // gather the system information to calculate
    .Select(GetSystemInfo);
Note that right now the events are on the UI thread, as you need to access the controls. Now you have two options: use Rx for the background processing too, or use TPL Dataflow's TransformBlock to process your system information into a new image (it can act as an observer and an observable at the same time). After that you need to get back to the UI thread.
First option:
var source = Observable
    // fire an event every second
    .Interval(TimeSpan.FromSeconds(1), DispatcherScheduler.Current)
    // add a timestamp to each event
    .Timestamp()
    // gather the system information to calculate
    .Select(GetSystemInfo)
    // expensive calculations are done in the background
    .ObserveOn(DefaultScheduler.Instance)
    .Select(x => Expensive(x))
    // marshal back to the UI thread to update the controls
    .ObserveOn(DispatcherScheduler.Current)
    .Select(x => UpdateUI(x));
You should probably split this chain into several observers and observables, but the idea is the same; more information here: Rx Design Guidelines.
Second option:
var action = new TransformBlock<SystemInfo, ImageDelta>(CalculateDelta,
    new ExecutionDataflowBlockOptions
    {
        // process as many items in parallel as there are processors
        MaxDegreeOfParallelism = Environment.ProcessorCount,
    });

IDisposable subscription = source.Subscribe(action.AsObserver());

var uiObserver = action.AsObservable()
    // marshal the results back to the UI thread
    .ObserveOn(DispatcherScheduler.Current)
    .Select(x => UpdateUI(x));
I want to note that UWP and the MVVM pattern give you the option of binding the UI to an ObservableCollection, which will let you notify the user in the most natural way.
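For illustration only, a tiny sketch of that binding-friendly collection (names invented; the UI would bind its ItemsSource to Results):

using System.Collections.ObjectModel;

public class ResultsViewModel
{
    // adding items on the UI thread (e.g. from the UpdateUI step above)
    // makes a bound list control refresh automatically
    public ObservableCollection<string> Results { get; } =
        new ObservableCollection<string>();

    public void AddResult(string formattedResult)
    {
        Results.Add(formattedResult);
    }
}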
I am playing around with an idea in C#, and would like some advice on the best way to go about asynchronously updating a large number of nodes in a graph. I haven't read anything about how to do things like that; everything I've seen in textbooks/examples uses graphs whose nodes don't really change.
Suppose I have a graph of some large number of nodes (thousands). Each node has some internal state that depends on some public properties of each of its neighbors, as well as potentially some external input.
So schematically a node is simply:
class Node
{
    State internalState;
    public State exposedState;
    Input input;
    List<Node> neighbors;

    void Update()
    {
        while (true)
        {
            DoCalculations(input, internalState, neighbors);
            exposedState = ExposedState(internalState);
        }
    }

    State ExposedState(State state) { ... }
    void DoCalculations(Input input, State state, List<Node> neighbors) { ... }
}
The difficulty is that I would like nodes to be updated as soon as either their input state changes (by subscribing to an event or polling) or a neighbor's state changes. If I try to do this synchronously in the naive way, I have the obvious problem:
Node A updates when input changes
Its neighbor B sees A has changed, updates.
Node A sees its neighbor B has changed, updates
B updates
A updates
....
Stack overflows
If I instead update by enumerating through all nodes and calling their update methods, nodes may be updated inconsistently (e.g. A's input changes, B updates and doesn't see A's change, A updates and changes its exposed state).
I could update by trying to maintain a stack of nodes that want to be updated first, but then their neighbors need to be updated next, and theirs next, and so on, which means each update cycle I would need to carefully walk the graph and determine the right update order, which could be very slow...
The naive asynchronous way is to have each node in its own thread (or more simply, an initial asynchronous method call happens to each node's update method, which updates indefinitely in a while(true){...}). The problem with this is that having thousands of threads does not seem like a good idea!
It seems like this should have a simple solution; this isn't too different from cellular automata. But any synchronous solution I come up with either has to update a large number of times (compared to the number of nodes) to get a message from one end to the other, or has to solve some kind of complicated graph-walking problem with multiple starting points.
The async method seems trivially simple, if only I could have thousands of threads...
So what is the best way to go about doing something like this?
I would think that Rx (The Reactive Extensions) would be a good starting point.
Each piece of state that other nodes might need to depend on is exposed as an IObservable<TState>, and other nodes can then subscribe to those observables:
otherNode.PieceOfState.Subscribe(v => UpdateMyState(v));
Rx provides lots of filtering and processing functions for observables: these can be used to filter duplicate events (but you'll need to define "duplicate" of course).
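As a rough sketch of that idea (BehaviorSubject, DistinctUntilChanged and Subscribe are standard Rx; the node members below are invented placeholders, and State stands in for the type from the question):

using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;

// Each node publishes its exposed state as an observable and subscribes to
// its neighbours. DistinctUntilChanged suppresses duplicate updates, given a
// suitable equality on State; with cyclic graphs that is what lets the
// updates converge instead of ping-ponging forever.
public class ObservableNode
{
    private readonly BehaviorSubject<State> _exposedState;

    public ObservableNode(State initial)
    {
        _exposedState = new BehaviorSubject<State>(initial);
    }

    public IObservable<State> ExposedState => _exposedState.AsObservable();

    public void ListenTo(ObservableNode neighbour)
    {
        neighbour.ExposedState
                 .DistinctUntilChanged()
                 .Subscribe(s => OnNeighbourChanged(s));
    }

    private void OnNeighbourChanged(State neighbourState)
    {
        // recalculate the internal state, then publish the new exposed state
        var updated = Recalculate(neighbourState);
        _exposedState.OnNext(updated);
    }

    private State Recalculate(State neighbourState)
    {
        // your DoCalculations / ExposedState logic goes here
        return neighbourState;
    }
}

public class State { /* placeholder for the question's State type */ }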
Here's an introductory article: http://weblogs.asp.net/podwysocki/archive/2009/10/14/introducing-the-reactive-framework-part-i.aspx
First you need to make sure your updates converge. This has nothing to do with how you perform them (synchronously, asynchronously, serially or in parallel).
Suppose you have two nodes A and B that are connected. A changes, triggering a recalculation of B. B then changes, triggering a recalculation of A. If the recalculation of A changes A's value, it will trigger a recalculation of B, and so on. You need this sequence of triggers to stop at some point - you need your changes to converge. If they don't, no technique you use can fix it.
Once you are sure the calculations converge and you don't get into endless recalculations you should start with the simple single-threaded synchronous calculation and see if it performs well. If it's fast enough, stop there. If not, you can try to parallelize it.
I wouldn't create a thread per calculation; it doesn't scale at all. Instead, use a queue of the calculations that need to be performed, and each time you change the value of node A, put all its neighbors in the queue. You can have a few threads processing the queue, making it a much more scalable architecture.
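A rough sketch of that queue-based approach, with invented member names (a BlockingCollection and a few worker tasks stand in for "a few threads processing the queue"):

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// When a node's value changes, its neighbours are enqueued for recalculation.
// A small, fixed number of workers drains the queue.
public class GraphUpdater
{
    private readonly BlockingCollection<Node> _work = new BlockingCollection<Node>();

    public void Start(int workerCount)
    {
        for (int i = 0; i < workerCount; i++)
        {
            Task.Run(() =>
            {
                foreach (var node in _work.GetConsumingEnumerable())
                {
                    bool changed = node.Recalculate();
                    if (changed)
                    {
                        // only fan out when the value actually changed,
                        // otherwise the updates would never converge
                        foreach (var neighbour in node.Neighbors)
                            _work.Add(neighbour);
                    }
                }
            });
        }
    }

    public void OnInputChanged(Node node) => _work.Add(node);
}

// simplified stand-in for the question's Node class
public class Node
{
    public List<Node> Neighbors { get; } = new List<Node>();

    public bool Recalculate()
    {
        // update internal/exposed state; return true if the exposed value changed
        return false;
    }
}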
If this still isn't fast enough, you'll need to optimize what you put in the queue and how you handle it. I think it's way too early to worry about that now.
I have a checkbox list where, each time the user selects an item, my ViewModel asks my service to send the data related to that option.
_myService.GetAssetSpotDataCompleted += GetAssetSpotDataCompleted;
_myService.GetAssetSpotDataAsync(descItem);
Each selected item calls the same service method, and when debugging the service I can see it sends back the right data.
My problem appears when the user checks several items before the data has been received in my ViewModel. Example: the user selects item 1 and item 2, but my ViewModel has not yet received an answer from the service.
The problem comes when my ViewModel receives the information: I always receive the same data twice in e.Result.
That means the GetAssetSpotDataCompleted handler is entered twice, but always with the same result instead of the result for item 1 and then the result for item 2.
I have debugged everything and narrowed the problem down to these first two lines of the GetAssetSpotDataCompleted method:
((MyServiceClient)sender).GetAssetSpotDataCompleted -= GetAssetSpotDataCompleted;
if (e.Result != null)
Can anyone help me with this?
What is happening is that by the time the response to the first request arrives, the service finds two delegates listening on GetAssetSpotDataCompleted (one was added when the first request was made, the other when the second, still outstanding, request was made).
It will call both delegates; it has no way to know that the second delegate was only meant for the second outstanding request. When the first is called, its code removes one of the delegates from the event. When the second is called, it removes the remaining delegate, leaving GetAssetSpotDataCompleted null.
Now when the second request finally completes, the service finds that the GetAssetSpotDataCompleted event is null and does nothing.
One solution would be to only add the event handler once, perhaps at the same point that _myService gets assigned in the ViewModel.
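A rough sketch of that "subscribe once" idea (the event-args type name follows the usual generated-proxy naming convention and is an assumption, as are the member names):

// The handler is attached exactly once when the ViewModel is created and is
// never removed inside the callback, so every completed request reaches it.
public class AssetSpotViewModel
{
    private readonly MyServiceClient _myService;

    public AssetSpotViewModel()
    {
        _myService = new MyServiceClient();
        _myService.GetAssetSpotDataCompleted += GetAssetSpotDataCompleted;
    }

    public void RequestSpotData(string descItem)
    {
        // each checked item triggers its own request; the single handler
        // below will run once per completed request
        _myService.GetAssetSpotDataAsync(descItem);
    }

    private void GetAssetSpotDataCompleted(object sender, GetAssetSpotDataCompletedEventArgs e)
    {
        if (e.Error == null && e.Result != null)
        {
            // apply e.Result to the ViewModel here
        }
    }
}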
However, there may be another issue: there is no guarantee that the responses to the two outstanding requests will arrive in the same order they were sent (although it is highly likely that they will). It may be better, then, to add an IsBusy boolean property to the ViewModel, set it to true when an outstanding request is made, and clear it when the request completes. Bind this property to a BusyIndicator control (found in the Toolkit). This will prevent user interaction whilst an async operation that will ultimately change the state of the UI is in progress.
I have some nice, working edit-undo functionality in my winforms application. It works using a CommandStack class, which is two Stack<IStateCommand>s (one for undo, one for redo). Each command has an Execute and an Undo method, and the CommandStack object itself has an event that is fired when the stacks are changed.
The CommandStack also works out whether the LogCommand method is being called from its own Undo function, and if so adds the command to the redo stack rather than the undo stack. This is done by simply adding the current ManagedThreadId to a List<int>, then removing it after the Undo command has completed (as opposed to using the stack trace, which I believe would be much slower and a bit dirty).
There are a lot of different commands within my application, so this formula is sort of set in stone; it would take me a few days to redo all those IStateCommand implementations.
The only problem with this, currently, is that some UI events within my application also trigger other UI events, each of which logs an IStateCommand to the undo history. Is there any way in C# to detect whether the LogCommand function has already been called from the same UI event (Click, DragDrop, SelectedIndexChanged, TextChanged, etc.)? If so, I could combine the commands into one command (using my CommandList class, which also inherits IStateCommand).
I've thought of saving the current time when the undo event is called and then, if the next command is logged less than x milliseconds later, combining them in the history, but this seems a bit sloppy. I've also considered searching the stack trace, but I don't really know what to look for to find the root UI event, nor do I know how I would tell the difference between one button click and a subsequent click on the same button.
It may also be helpful to know that all of these commands are being called from the UI thread from event handlers (mostly from events from custom user controls). The only part of my application that uses another thread runs after most UI events, after the undo history is logged.
Thanks!
Short Version
The same method is being called twice from the same UI event (e.g. MouseUp, DragDrop). The second time this method is called, how do I check that it has already been called once by the same UI event?
Edit: The solution (sort of)
It's a bit of a dirty one, as I don't have the time to completely rewrite this system. However, I've implemented it in a way that leaves the option of being less dirty in the future.
The solution is based on one of Erno's comments on his answer (so I will mark his answer as accepted), where he suggests adding a parameter. I added another overload of my LogCommand(IStateCommand) method in the CommandStack class: LogCommand(IStateCommand, string). The string is the actionId, which is stored for each command; if this string is the same as the last one, the commands are combined. This gives the option of going through each event and giving it a unique ID.
However, here's the dirty part: to get it working before we have to show the client, the actionId defaults to System.Windows.Forms.Cursor.Position.ToString(). Ouch! Since the cursor position does not change while the UI thread is executing, this combines the commands raised by the same UI event. It even combines TextChanged commands (as long as the user doesn't move the mouse!).
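Roughly, the overload looks like this (an illustrative reconstruction rather than my exact code; the field names and the CommandList constructor are invented):

using System.Collections.Generic;

public partial class CommandStack
{
    private readonly Stack<IStateCommand> _undoStack = new Stack<IStateCommand>();
    private string _lastActionId;

    public void LogCommand(IStateCommand command, string actionId)
    {
        if (actionId != null && actionId == _lastActionId && _undoStack.Count > 0)
        {
            // same UI action as the previous command: merge into one undo step
            var previous = _undoStack.Pop();
            var combined = previous as CommandList ?? new CommandList(previous);
            combined.Add(command);
            _undoStack.Push(combined);
        }
        else
        {
            _undoStack.Push(command);
        }

        _lastActionId = actionId;
    }

    public void LogCommand(IStateCommand command)
    {
        // the dirty default from above: the cursor does not move while the UI
        // thread is executing, so commands raised by the same UI event share an id
        LogCommand(command, System.Windows.Forms.Cursor.Position.ToString());
    }
}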
It might be an option to add a local stack of called commands to a command.
When a command executes other commands, add those commands to its local stack so you can undo the commands on this local stack when the outer command must be undone or redone.
EDIT
I am not quite sure what you don't understand.
I would simply add a CommandList property to the StateCommand. Every time the StateCommand invokes/triggers another StateCommand, it should add the new StateCommand to that CommandList. So the global CommandList keeps track of the commands that can be undone from the UI, and each StateCommand keeps track of the StateCommands it invoked (so these are not added to the global undo CommandList).
EDIT 2
If you can't, or do not want to, change to that setup, you would have to pass a parameter to the execution of the commands that links them together.
Did you try to inspect the method stack and analyze it method-by-method:
StackTrace st = new StackTrace();
for (int i = 0; i < st.FrameCount; i++)
{
    StackFrame sf = st.GetFrame(i);
    MethodBase mb = sf.GetMethod();
    // do whatever you want
}
I don't know exactly what you need to achieve, but I implemented something similar; maybe you can get some ideas from it...
In summary, you can store some information in a ThreadStatic variable. Then, any time you want to log a command, inspect the thread static variable to find out the context in which you are logging the command. If it's empty, you are starting a new command logging sequence. If not, you are inside a sequence.
Maybe you can store the entry event (e.g. Click, DragDrop,...), or the command itself... It depends on your needs.
When the initial event callback is completed, clean the static variable to signal that the sequence has been completed.
I successfully implemented a similar strategy to track commands executed upon an object model. I encapsulated the logic within an IDisposable class that also implemented reference counting to handle nested usings. The first using started the sequence; subsequent using statements increased and decreased the reference count so the class knew when the sequence was completed. Disposing the outermost context fired an event containing all the nested commands. In my specific case it has worked perfectly; I don't know whether it will fulfill your needs...
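A sketch along those lines, with all names invented for illustration (a real implementation would use the command interface instead of object):

using System;
using System.Collections.Generic;

// A reference-counted, thread-bound command scope. The outermost "using"
// starts a sequence; nested usings only bump the counter; when the outermost
// scope is disposed, the collected commands are published as one batch.
public sealed class CommandScope : IDisposable
{
    [ThreadStatic] private static CommandScope _current;
    [ThreadStatic] private static int _depth;

    private readonly List<object> _commands = new List<object>();

    public static event Action<IReadOnlyList<object>> SequenceCompleted;

    public static CommandScope Enter()
    {
        if (_depth++ == 0)
            _current = new CommandScope();
        return _current;
    }

    public static void Log(object command)
    {
        // inside a scope: collect; outside any scope: a single-command sequence
        if (_current != null)
            _current._commands.Add(command);
        else
            SequenceCompleted?.Invoke(new[] { command });
    }

    public void Dispose()
    {
        if (--_depth == 0)
        {
            var commands = _commands;
            _current = null;
            SequenceCompleted?.Invoke(commands);
        }
    }
}

// Usage from a UI event handler:
//   using (CommandScope.Enter())
//   {
//       CommandScope.Log(commandA);   // possibly raised by nested handlers
//       CommandScope.Log(commandB);
//   }
//   SequenceCompleted fires once with both commands.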