I have to create a WPF UI which subscribes to real-time FX rate (currency + rate) updates and displays them in a grid, at roughly 1000 updates per second, which means each row in the grid could get updated up to 1000 times per second. The grid would have at least 50 rows at any point in time.
Towards this, I have created a view model which subscribes to the update events and stores those updates inside a concurrent dictionary, with the symbol as the key and a RateViewModel object as the value. Then I have an observable collection holding all those RateViewModel objects, and I bind that to a grid.
Code:
public class MyViewModel
{
    private readonly IRatesService ratesService;
    private readonly ConcurrentDictionary<string, RateViewModel> rateDictionary;
    private readonly object _locker = new object();

    public MyViewModel(IRatesService ratesService)
    {
        this.ratesService = ratesService;
        this.ratesService.OnUpdate += OnUpdate;
        rateDictionary = new ConcurrentDictionary<string, RateViewModel>();
        RateViewModels = new ObservableCollection<RateViewModel>();
    }

    private void OnUpdate(object sender, RateUpdateEventArgs e)
    {
        RateViewModel existingRate;
        if (!rateDictionary.TryGetValue(e.Update.Currency, out existingRate))
        {
            existingRate = new RateViewModel(new Rate(e.Update.Currency, e.Update.Rate));
            rateDictionary.TryAdd(e.Update.Currency, existingRate);
            return;
        }

        lock (_locker)
        {
            existingRate.UpdateRate(e.Update.Rate);
        }

        Application.Current.Dispatcher.BeginInvoke(new Action(() => SearchAndUpdate(existingRate)));
    }

    public ObservableCollection<RateViewModel> RateViewModels { get; set; }

    private void SearchAndUpdate(RateViewModel rateViewModel)
    {
        // Equals is based on Currency
        if (!RateViewModels.Contains(rateViewModel))
        {
            RateViewModels.Add(rateViewModel);
            return;
        }

        var index = RateViewModels.IndexOf(rateViewModel);
        RateViewModels[index] = rateViewModel;
    }
}
I have 4 questions about this:
Is there a way I can eliminate the ObservableCollection, as it leads to two different data structures storing the same items, but still have my updates relayed to the UI?
I have used ConcurrentDictionary, which leads to locking the whole update operation. Is there any cleverer way of handling this than locking the whole dictionary, or for that matter any data structure?
My UpdateRate method also locks. All the properties on my RateViewModel are read-only except the price, as this is what gets updated. Is there a way to make this atomic? Please note that the price comes in as a double.
Is there a way I can optimize the SearchAndUpdate method? This is related to the 1st question; at the moment I believe it's an O(n) operation.
Using .NET 4.0; I have omitted INPC for brevity.
EDIT: Could you please help me rewrite this in a better manner, taking all 4 points into account? Pseudocode will do.
Thanks,
-Mike
1) I wouldn't worry about 50 extra refs floating around.
2) Yes, lockless data structures are doable. Interlocked is your friend here, and these solutions are pretty much all one-offs. ReaderWriterLock is another good option if you aren't changing which items are in your dictionary often.
3) Generally, if you are dealing with more data than the UI can handle, you are going to want to do the updates in the background, only fire INPC on the UI thread, and more importantly have a facility to drop UI updates (while still updating the backing field). The basic approach is going to be something like:
Do an Interlocked.Exchange on the backing field.
Use Interlocked.CompareExchange to set a private flag field to 1; if this returns 1, exit, because there is still a pending UI update.
If Interlocked.CompareExchange returned 0, invoke to the UI, fire your property changed event, and reset your throttling field to 0 (technically there is more you need to do if you care about non-x86).
4) SearchAndUpdate seems superfluous... UpdateRate should bubble to the UI, and you only need to invoke to the UI thread when you need to add an item to or remove one from the observable collection.
Update: here is a sample implementation... things are a little more complicated because you are using doubles, which don't get atomicity for free on 32-bit CPUs.
class MyViewModel : INotifyPropertyChanged
{
    private System.Windows.Threading.Dispatcher dispatcher;

    public MyViewModel(System.Windows.Threading.Dispatcher dispatcher)
    {
        this.dispatcher = dispatcher;
    }

    int myPropertyUpdating; // needs to be marked volatile if you care about non-x86
    double myProperty;

    public double MyProperty
    {
        get
        {
            // Hack for missing Interlocked.Read for doubles;
            // if you are compiled for 64 bit you should be able to just do a read
            var retv = Interlocked.CompareExchange(ref myProperty, myProperty, -myProperty);
            return retv;
        }
        set
        {
            if (myProperty != value)
            {
                // if you are compiled for 64 bit you can just do an assignment here
                Interlocked.Exchange(ref myProperty, value);
                if (Interlocked.Exchange(ref myPropertyUpdating, 1) == 0)
                {
                    dispatcher.BeginInvoke((Action)(() =>
                    {
                        try
                        {
                            PropertyChanged(this, new PropertyChangedEventArgs("MyProperty"));
                        }
                        finally
                        {
                            myPropertyUpdating = 0;
                            Thread.MemoryBarrier(); // This flushes the store buffer, which is the technically correct thing to do... but I've never had problems without it
                        }
                    }));
                }
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged = delegate { };
}
Mike -
I would approach this a little differently. You really don't need an ObservableCollection unless new FX rows are being added; as you know, ObservableCollection only gives you built-in change notification in that scenario. If you have a list of 50 rows (for example) and the FX object (which represents each individual row) is updated 1000 times a second, then you can very well use INotifyPropertyChanged on the FX object's properties and let that mechanism update the UI as they change. My line of thought is that this is a simpler approach for UI updates than moving items from one collection to another.
Now with regards to your second point: 1000 updates per second to an existing FX object is technically unreadable from a UI perspective. The approach I have taken is freeze and thaw, which means you intercept INotifyPropertyChanged (as it fires to the UI) and make it frequency-based, so that, for example, every second the current state of all FX objects is refreshed to the UI. Within that second, whatever updates happen to the FX properties simply overwrite one another, and the latest value at the one-second interval is the one shown. That way the data shown to the UI is always correct and relevant at the moment it is displayed.
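To illustrate, here is a minimal sketch of that freeze-and-thaw idea; the class and member names are mine, not from the question, and it assumes the view model is created on the UI thread so the DispatcherTimer ticks there:

using System;
using System.ComponentModel;
using System.Windows.Threading;

public class FxRateViewModel : INotifyPropertyChanged
{
    private double _latestRate; // overwritten freely by the feed thread
    private readonly DispatcherTimer _timer;

    public FxRateViewModel()
    {
        // Only the timer tick raises PropertyChanged, once per second.
        _timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
        _timer.Tick += (s, e) => OnPropertyChanged("Rate");
        _timer.Start();
    }

    public double Rate
    {
        get { return _latestRate; }
    }

    // Called from the feed thread; no notification here, just overwrite.
    public void UpdateRate(double rate)
    {
        _latestRate = rate;
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}

(As the answer above notes, a raw double field is not guaranteed to be read and written atomically on 32-bit CPUs, so the same Interlocked tricks would apply to _latestRate.)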
There are a couple of factors to take into account, especially if the number of displayed rates will change dynamically. I'm assuming the 1000 updates/sec are coming from a thread other than the UI thread.
The first is that you will need to marshal the updates to the UI thread - done for you for updates to an existing ViewModel, not done for you for new/deleted ViewModels. With 1000 updates a second you probably want to control the granularity of the marshalling to the UI thread and the context switching that this entails. Ian Griffiths wrote a great blog series on this.
The second is that if you want your UI to remain responsive you probably want to avoid as many gen 2 garbage collections as possible, which means minimising the pressure on the GC. This might be an issue in your case, as you create a new Rate object for each update.
Once you have a few screens that do the same thing, you'll want to find a way to abstract this updating behaviour out into a common component. Otherwise you'll be sprinkling threading code through your ViewModels, which is error prone.
I've created an open source project, ReactiveTables, which addresses these three concerns and adds a couple of other features such as being able to filter, sort, join your model collections. Also there are demos showing how to use it with virtual grids to get the best performance. Maybe this can help you out/inspire you.
I have this static class
static class LocationMemoryCache
{
    public static readonly ConcurrentDictionary<int, LocationCityContract> LocationCities = new();
}
My process
Api starts and initializes an empty dictionary
A background job starts and runs once every day to reload the dictionary from the database
Requests come in to read from the dictionary or update a specific city in the dictionary
My problem
If a request comes in to update the city
I update the database
If the update was successful, update the city object in the dictionary
At the same time, the background job started and queried all cities before I updated the specific city
The request finishes and the dictionary city now has the old values because the background job finished last
My solution I thought about first
Is there a way to lock/reserve the concurrent dictionary from reads/writes and then release it when I am done?
This way when the background job starts, it can lock/reserve the dictionary only for itself and when it's done it will release it for other requests to be used.
Then a request might have been waiting for the dictionary to be released and update it with the latest values.
Any ideas on other possible solutions?
Edit
What is the purpose of the background job?
If I manually update/delete something in the database I want those changes to show up after the background job runs again. This could take a day for the changes to show up and I am okay with that.
What happens when the Api wants to access the cache but it's not loaded?
When the Api starts I block requests to this particular "Location" project until the background job marks IsReady as true. The cache I implemented is thread-safe until I add the background job.
How much time does it take to reload the cache?
I would say less than 10 seconds for a total of 310,000+ records in the "Location" project.
Why I chose the answer
I chose Xerillio's answer because it solves the background job problem by keeping track of date times, similar to an "object version" approach. I won't be taking this path, as I have decided that if I do a manual update in the database I might as well create an API route that does it for me, so that I can update the DB and the cache at the same time. So I might remove the background job after all, or just run it once a week. Thank you for all the answers. I am OK with a possible data inconsistency in the way I am updating the objects, because if one route updates 2 specific values and another route updates 2 different specific values, the possibility of a problem is very minimal.
Edit 2
Let's imagine I have this cache now and 10,000 active users
static class LocationMemoryCache
{
    public static readonly ConcurrentDictionary<int, LocationCityUserLogContract> LocationCityUserLogs = new();
}
Things I took into consideration
An update will only happen to objects that the user owns, and the rate at which a user might update those objects is most likely once every minute. That greatly reduces the possibility of a problem for this specific example.
Most of my cache objects are related only to a specific user, so this ties in with bullet point 1.
The application owns the data, I don't. So I should never manually update the database unless it's critical.
Memory might be a problem, but 1,000,000 normal-ish objects take somewhere between 80 MB and 150 MB. I can keep a lot of objects in memory to gain performance and reduce the load on the database.
Having a lot of objects in memory will put pressure on garbage collection, and that is not good, but I don't think it's a big problem for me: all I have to do is plan ahead to make sure there is enough memory. Yes, the GC will run because of day-to-day operations, but it won't have a big impact.
All of these considerations just so that I can have an in-memory cache right at my fingertips.
I would suggest adding an UpdatedAt/CreatedAt property to your LocationCityContract, or creating a wrapper object (CacheItem<LocationCityContract>) with such a property. That way you can check whether the item you're about to add/update with is newer than the existing object, like so:
public class CacheItem<T>
{
    public T Item { get; }
    public DateTime CreatedAt { get; }

    // In case of system clock synchronization, consider making CreatedAt
    // a long and using Environment.TickCount64. See comment from #Theodor
    public CacheItem(T item, DateTime? createdAt = null)
    {
        Item = item;
        CreatedAt = createdAt ?? DateTime.UtcNow;
    }
}

// Use it like...
static class LocationMemoryCache
{
    public static readonly
        ConcurrentDictionary<int, CacheItem<LocationCityContract>> LocationCities = new();
}
// From some request...
var newItem = new CacheItem<LocationCityContract>(newLocation);
// or the background job...
var newItem = new CacheItem<LocationCityContract>(newLocation, updateStart);

LocationMemoryCache.LocationCities
    .AddOrUpdate(
        newLocation.Id,
        newItem,
        (_, existingItem) =>
            newItem.CreatedAt > existingItem.CreatedAt
                ? newItem
                : existingItem);
When a request wants to update the cache entry, it does as above, with the timestamp of when it finished adding the item to the database (see notes below).
The background job should, as soon as it starts, save a timestamp (let's call it updateStart). It then reads everything from the database and adds the items to the cache like above, where CreatedAt for the newLocation is set to updateStart. This way, the background job only updates the cache items that haven't been updated since it started. Perhaps you're not reading all items from DB as the first thing in the background job, but instead you read them one at a time and update the cache accordingly. In that case updateStart should instead be set right before reading each value (we could call it itemReadStart instead).
Since this way of updating an item in the cache is a little more cumbersome and you might be doing it from a lot of places, you could make a helper method to make the call to LocationCities.AddOrUpdate a little easier.
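For example, such a helper could look like this (SetIfNewer is a hypothetical name):

using System;
using System.Collections.Concurrent;

public static class LocationCacheExtensions
{
    // Adds the item, or replaces the existing one only if ours is newer.
    public static void SetIfNewer(
        this ConcurrentDictionary<int, CacheItem<LocationCityContract>> cache,
        LocationCityContract location,
        DateTime? createdAt = null)
    {
        var newItem = new CacheItem<LocationCityContract>(location, createdAt);
        cache.AddOrUpdate(
            location.Id,
            newItem,
            (_, existing) => newItem.CreatedAt > existing.CreatedAt ? newItem : existing);
    }
}

// From a request:
// LocationMemoryCache.LocationCities.SetIfNewer(newLocation);
// From the background job:
// LocationMemoryCache.LocationCities.SetIfNewer(newLocation, updateStart);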
Note:
Since this approach does not synchronize (lock) updates to the database, there's a race condition that means you might end up with a slightly out-of-date item in the cache. This can happen if two requests want to update the same item simultaneously. You can't know for sure which one updated the DB last, so even if you set CreatedAt to the timestamp after each update, it might not truly reflect which one was updated last. Since you're OK with a 24-hour delay from manually updating the DB until the background job updates the cache, perhaps this race condition is not a problem for you, as the background job will fix it when run.
As #Theodor mentioned in the comments, you should avoid updating the object from the cache directly. Either use the C# 9 record type (as opposed to a class type) or clone the object if you want to cache new updates. That means: don't use LocationMemoryCache.LocationCities[locationId].Item.CityName = updatedName. Instead you should e.g. clone it like:
// You need to implement a constructor or similar to clone the object,
// depending on how complex it is
var newLoc = new LocationCityContract(LocationMemoryCache.LocationCities[locationId].Item);
newLoc.CityName = updatedName;
var newItem = new CacheItem<LocationCityContract>(newLoc);
LocationMemoryCache.LocationCities
    .AddOrUpdate(...); /* <- like above */
By not locking the whole dictionary you avoid requests being blocked by each other while they try to update the cache at the same time. If the first point is not acceptable, you can also introduce locking based on the location ID (or whatever you call it) when updating the database, so that DB and cache are updated atomically. This avoids blocking requests that are trying to update other locations, so you minimize the risk of requests affecting each other.
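A rough sketch of that per-ID locking (SaveToDatabaseAsync stands in for your data access code; a SemaphoreSlim is used because a lock statement cannot span an await):

using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public static class LocationUpdater
{
    // One gate per location ID: updates to different cities never block each other.
    private static readonly ConcurrentDictionary<int, SemaphoreSlim> _gates = new();

    public static async Task UpdateCityAsync(LocationCityContract updated)
    {
        var gate = _gates.GetOrAdd(updated.Id, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync();
        try
        {
            await SaveToDatabaseAsync(updated); // placeholder for the DB write
            LocationMemoryCache.LocationCities.AddOrUpdate(
                updated.Id,
                new CacheItem<LocationCityContract>(updated),
                (_, existing) => new CacheItem<LocationCityContract>(updated));
        }
        finally
        {
            gate.Release();
        }
    }

    private static Task SaveToDatabaseAsync(LocationCityContract city)
        => Task.CompletedTask; // stand-in for the real data access code
}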
No, there is no way to lock a ConcurrentDictionary on demand from reads/writes, and then release it when you are done. This class does not offer this functionality. You could manually use a lock every time you are accessing the ConcurrentDictionary, but by doing so you would lose all the advantages that this specialized class has to offer (low contention under heavy usage), while keeping all its disadvantages (awkward API, overhead, allocations).
My suggestion is to use a normal Dictionary protected with a lock. This is a pessimistic approach that will occasionally result in some threads being unnecessarily blocked, but it is also very simple and easy to reason about its correctness. Essentially all access to the dictionary and the database will be serialized:
Every time a thread wants to read an object stored in the dictionary, it will first have to take the lock, and keep the lock until it's done reading the object.
Every time a thread wants to update the database and then the corresponding object, it will first have to take the lock (before even updating the database), and keep the lock until all the properties of the object have been updated.
Every time the background job wants to replace the current dictionary with a new dictionary, it will first have to take the lock (before even querying the database), and keep the lock until the new dictionary has taken the place of the old one.
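A minimal sketch of that serialization (UpdateDatabase and QueryAllFromDatabase stand in for your data access code):

using System.Collections.Generic;

static class LocationMemoryCache
{
    private static readonly object _lock = new();
    private static Dictionary<int, LocationCityContract> _cities = new();

    public static LocationCityContract GetCity(int id)
    {
        lock (_lock) // serialize reads with writers and the background job
        {
            _cities.TryGetValue(id, out var city);
            return city;
        }
    }

    public static void UpdateCity(LocationCityContract city)
    {
        lock (_lock) // taken before the DB write, held until the cache is updated
        {
            UpdateDatabase(city);
            _cities[city.Id] = city;
        }
    }

    public static void ReloadAll()
    {
        lock (_lock) // taken before the DB query, held until the swap is done
        {
            _cities = QueryAllFromDatabase();
        }
    }

    private static void UpdateDatabase(LocationCityContract city) { /* DB write */ }
    private static Dictionary<int, LocationCityContract> QueryAllFromDatabase() => new();
}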
In case the performance of this simple approach proves to be unacceptable, you should look at more sophisticated solutions. But the complexity gap between this solution and the next simplest solution (that also offers guaranteed correctness) is likely to be quite significant, so you'd better have good reasons before going that route.
I have a WPF application that consists of two threads simulating an enterprise producing and selling items over 52 weeks (only one transaction is allowed per week). I also need to use a background worker so that I can display the data in a ListView. As of right now, my UI freezes when clicking on Simulate, but I can see that the output is still working in the debugging terminal. I have tried everything I can think of and, to be honest, I have had the help of my teacher and even he couldn't find a working solution.
What is freezing my UI when I call Simulate()?
When my code is different and my UI isn't freezing, my ListView never updates, because it seems that DataProgress() doesn't work: e.UserState is never iterating.
Simulate button calls :
private void Simulate(object sender, RoutedEventArgs e)
{
    // Declare BackgroundWorker
    Data = new ObservableCollection<Operations>();
    worker = new BackgroundWorker();
    worker.WorkerReportsProgress = true;
    worker.WorkerSupportsCancellation = true;
    worker.RunWorkerAsync(52);
    worker.DoWork += ShowData;
    worker.ProgressChanged += DataProgress;
    worker.RunWorkerCompleted += DataToDB;

    Production = new Production(qtyProduction, timeExecProd);
    Sales = new Sales(qtySales, timeExecSales);
    Thread prod = new Thread(Production.Product);
    prod.Start();
    Thread.Sleep(100);
    Thread sales = new Thread(Sales.Sell);
    sales.Start();
}
DoWork : ShowData() :
Console.WriteLine("Simulation started | Initial stock : 500");
Production = new Production(qtyProduction, timeExecProd);
Sales = new Sales(qtySales, timeExecSales);
while (Factory.Week < max) // max = 52
{
if (worker.CancellationPending) // also this isn't reacting to worker.CancelAsync();
e.Cancel = true;
// My teacher tried to call my threads from here, but it breaks the purpose of having
// two threads as he was just calling 52 times two functions back to back and therefore
// wasn't "randomizing" the transactions.
int progressPercentage = Convert.ToInt32(((double)(Factory.Week) / max) * 100);
(sender as BackgroundWorker).ReportProgress(progressPercentage, Factory.Week);
}
ProgressChanged : DataProgress() :
if (e.UserState != null) // While using the debugger, it looks like this is called over & over
{
    Data.Add(new Operations()
    {
        id = rnd.Next(1, 999),
        name = Factory.name,
        qtyStock = Factory.Stock,
        averageStock = Factory.AverageStock,
        week = Factory.Week
    });
    listview.ItemsSource = Data;
}
RunWorkerCompleted : DataToDB() :
// Outputs "Work done" for now.
In case you want to know what happens when I call my threads, it looks like this :
Sell() :
while (Factory.Week <= 52)
{
    lock (obj)
    {
        // some math function calls
        Factory.Week++;
    }
    Thread.Sleep(timeExecSales);
}
Should I use a third thread just for updating my ListView? I don't see how, as I need it to be synced with my static variables. This is my first project for learning multithreading... I'm kind of clueless and flabbergasted that even my teacher can't help.
On the one hand, there isn't enough context in the code posted to get a full picture and answer your questions accurately. We can, however, deduce what is going wrong just from the code you have posted.
First, let's try to answer your two questions. We can likely infer the following:
This code here:
if (e.UserState != null)
{
    Data.Add(new Operations()
    {
        id = rnd.Next(1, 999),
        name = Factory.name,
        qtyStock = Factory.Stock,
        averageStock = Factory.AverageStock,
        week = Factory.Week
    });
    listview.ItemsSource = Data;
}
You are using a BackgroundWorker (a component that predates WPF and is usually associated with Windows Forms) to try to update a WPF GUI object, which should only be done on the main GUI thread. There is also the obvious no-no of updating GUI objects from non-UI threads. BackgroundWorker also has its own issues with threading (foreground/background), contexts and execution, as it relies on the Dispatcher and SynchronizationContexts to get the job done.
Then there is the curiosity of setting the binding over and over in this line:
listview.ItemsSource = Data;
Let's put a pin in that for a moment...
There is, as the other commenter already pointed out, no exit strategy in your while loop:
while (Factory.Week < max) // max = 52
{
    if (worker.CancellationPending) // also this isn't reacting to worker.CancelAsync();
        e.Cancel = true;

    // My teacher tried to call my threads from here, but it breaks the purpose of having
    // two threads as he was just calling 52 times two functions back to back and therefore
    // wasn't "randomizing" the transactions.

    int progressPercentage = Convert.ToInt32(((double)(Factory.Week) / max) * 100);
    (sender as BackgroundWorker).ReportProgress(progressPercentage, Factory.Week);
}
But that's not the bigger problem... in addition to the misuse/misunderstanding of when, how many, and how to use threads, there doesn't seem to be any thread synchronization at all. There is no way to predict or track thread execution or lifetime this way.
At this point the question is technically more or less answered, but I feel like this would just leave you more frustrated and no better off than when you started. So maybe a quick crash course in basic design might help straighten out this mess; something your teacher should have done.
Assuming you are pursuing software development, and since you have chosen WPF here as your "breadboard" so to speak, you will likely come across terms such as MVC (model-view-controller) and MVVM (model-view-view-model). You will also likely come across design principles such as SOLID, separation of concerns, and grouping things into services.
Your code here is a perfect example of why all of these frameworks and principles exist. Let's look at some of the problems you have encountered and how to fix them:
You have threading code (logic and services: the controller, loosely speaking) mixed in with presentation code (the ListView update: the view) and collection updates (your ObservableCollection: the model). That's one reason (of many) you are having such a difficult time coding, fixing and maintaining the problem at hand. To clean it up, separate it out (separation of concerns). You might even move each operation into its own class with an interface/API to that class (service/micro-service).
Not everything needs to be solved with threads. But for now, let's learn to crawl, then walk, before we run. Before you start learning about async/await or the TPL (Task Parallel Library), go old school. Get a good book (something even 20 years old is fine) and learn how to use the ThreadPool and kernel synchronization objects such as mutexes, events, etc., and how to signal between threads. Once you master that, then learn about the TPL and async/await.
Don't cross the streams: don't mix WinForms and WPF (and I even saw Console.WriteLine).
Learn about data binding, and in particular how it works in WPF. ObservableCollection is your friend: bind your ItemsSource to it once, then update the ObservableCollection and leave the GUI object alone, as sketched below.
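For example, a sketch of that last point, assuming the worker passes a fully built Operations row as the user state (your posted code passes Factory.Week, so you would construct the row in the handler instead):

using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Windows;

public partial class MainWindow : Window
{
    private readonly ObservableCollection<Operations> _data =
        new ObservableCollection<Operations>();

    public MainWindow()
    {
        InitializeComponent();
        listview.ItemsSource = _data; // bind exactly once, then leave the ListView alone
    }

    // BackgroundWorker raises ProgressChanged on the UI thread,
    // so mutating the ObservableCollection here is safe.
    private void DataProgress(object sender, ProgressChangedEventArgs e)
    {
        var row = e.UserState as Operations;
        if (row != null)
            _data.Add(row); // the ListView updates itself via the collection's notifications
    }
}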
Hopefully this will help you straighten out the code and get things running.
Good luck!
I am tasked with writing a system to process result files created by a different process (which I have no control over) and am trying to modify my code to make use of Parallel.ForEach. The code works fine when just calling a foreach, but I have some concerns about thread safety when using the parallel version. The base question I need answered here is "Is the way I am doing this going to guarantee thread safety?", or is this going to cause everything to go sideways on me?
I have tried to make sure all calls are to instances, and have removed every static anything except the initial static void Main. It is my current understanding that this will do a lot towards assuring thread safety.
I have basically the following, edited for brevity
static void Main(string[] args)
{
    MyProcess process = new MyProcess();
    process.DoThings();
}
And then in the actual process to do stuff I have
public class MyProcess
{
    public void DoThings()
    {
        // Get some list of things
        List<Thing> things = getThings();
        Parallel.ForEach(things, item =>
        {
            // based on some criteria, take actions from MyActionClass
            MyActionClass myAct = new MyActionClass(item);
            string tempstring = myAct.DoOneThing();
            if (somecondition)
            {
                myAct.DoOtherThing();
            }
            // ...other similar calls to myAct below here
        });
    }
}
And over in the MyActionClass I have something like the following:
public class MyActionClass
{
    private Thing _thing;

    public MyActionClass(Thing item)
    {
        _thing = item;
    }

    public string DoOneThing()
    {
        return _thing.GetSubThings().FirstOrDefault();
    }

    public void DoOtherThing()
    {
        _thing.property1 = "Somenewvalue";
    }
}
If I can explain this any better I'll try, but I think that's the basics of my needs
EDIT:
Something else I just noticed: if I change the value of a property of the item I'm working with while inside the Parallel.ForEach (in this case, a string value that gets written to a database inside the loop), will that have any effect on the rest of the loop iterations, or just the one I'm on? Would it be better to create a new instance of Thing inside the loop to store the item I'm working with in this case?
There is no shared mutable state between actions in the Parallel.ForEach that I can see, so it should be thread-safe, because at most one thread can touch one object at a time.
But, as has been mentioned, there is nothing shared that can be seen here. That doesn't mean the actual code you use is as good as it seems in this excerpt.
Nor does it mean that nothing will be changed by you or a coworker that makes some state both shared and mutable (in Thing, for example), at which point you start getting difficult-to-reproduce crashes at best, or plain wrong behaviour that can go undetected for a long time at worst.
So, perhaps you should try to go fully immutable near threading code?
Perhaps.
Immutability is good, but it is not a silver bullet: it is not always easy to use and implement, and not every task can be reasonably expressed through immutable objects. And even that accidental "make it shared and mutable" change may happen to such code as well, though it is much less likely.
It should at least be considered as a possible option/alternative.
About the EDIT
If I change the value of a property of the item I'm working with while
inside the Parallel.Foreach (in this case, a string value that gets
written to a database inside the loop), will that have any affect on
the rest of the loop iterations or just the one I'm on?
If you change a property and that object is not used anywhere else, and it doesn't rely on some global mutable state (for example, something like a public static Int32 ChangesCount that increments with each state change), then you should be safe.
As for a string value that gets written to a database inside the loop: depending on the data access technology used and how you use it, you may be in trouble, because most of them are not designed for multithreaded environments (EF's DbContext, for example). And obviously do not forget that dealing with concurrent access in a database is not always easy either, though that is a bit away from our original theme.
Would it be better to create a new instance of Thing inside the loop: if there is no risk of external concurrent changes, then it is just unnecessary work. And if there is a chance of other threads (not the Parallel.ForEach) making changes to those objects while they are being persisted, then you already have bigger problems than Parallel.ForEach.
Objects should always have an observable, consistent state (unlike when half the properties are set by one thread and half by another while you try to persist that who-knows-what), and if they are used by many threads, then they should already be thread-safe: there should be no way to put them into an inconsistent state.
And if they want to be persisted by external code, such objects should probably provide:
Either a SyncRoot property to synchronize the property-reading code.
Or a snapshot DTO of the current state, created internally by some thread-safe method like ThingSnapshot Thing.GetCurrentData() { lock() {} }.
Or something more exotic.
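A rough sketch of the snapshot option (ThingSnapshot here is hypothetical): readers get a consistent copy taken under the same lock that writers use, so no half-updated state can ever be observed:

public class ThingSnapshot
{
    public string Property1 { get; private set; }

    public ThingSnapshot(string property1)
    {
        Property1 = property1;
    }
}

public class Thing
{
    private readonly object _sync = new object();
    private string _property1;

    public void SetProperty1(string value)
    {
        lock (_sync) { _property1 = value; }
    }

    // Thread-safe: the copy is taken atomically with respect to writers.
    public ThingSnapshot GetCurrentData()
    {
        lock (_sync) { return new ThingSnapshot(_property1); }
    }
}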
I have a question about ReactiveUI, its bindings, and how it handles UI updates. I always assumed that using ReactiveUI would take care of all UI updates on the UI thread, but I recently found out this isn't always the case.
In short, the question is: how can I use ReactiveUI to two-way-model-bind a view model and a view, and ensure that updating the ViewModel doesn't crash when run on a different thread than the UI thread? Without having to manually subscribe to changes and update explicitly on the UI thread, as that defeats the purpose of ReactiveUI, as well as making it harder to encapsulate all logic in the PCL.
Below I've provided a very simple (Android) project using Xamarin and ReactiveUI, to do the following:
Button with the text 'Hello World'
Clicking on it appends 'a' to the button's text.
I let the Activity implement IViewFor, and I use a ViewModel deriving from ReactiveObject, containing the text that I want to change.
I bind the Activity's button.Text to the ViewModel.Text, to let ReactiveUI deal with all changes and UI updates.
Finally, I add a function to the button's onclick to append 'a' to the ViewModel.
The issue I have is the following:
button.Click += delegate
{
    this.ViewModel.Text += "a"; // does not crash
    Task.Run(() => { this.ViewModel.Text += "a"; }); // crash
};
Directly appending 'a' is not an issue. However, adding 'a' on a different thread results in the well-known Java exception: Exception: Only the original thread that created a view hierarchy can touch its views.
I understand the exception and where it's coming from. In fact, for the case of appending the 'a' on a different thread, I already had it working by simply not binding the text, but instead subscribing to changes and using the RunOnUiThread method to make changes to the UI. But this scenario kind of defeats the purpose of using ReactiveUI. I really like the clean coding style of the simple statement 'this.Bind(ViewModel, x => x.Text, x => x.button.Text);', but if this has to run on the UI thread, I can't see how to make it work.
And naturally this is the bare minimum to show the problem. The actual reason I bring this up is that I want to use the GetAndFetchLatest method from Akavache. It gets data asynchronously and caches it, then executes a function (which updates the ViewModel). If the data is already in the cache, it will execute the ViewModel update with the cached result AND do the computation logic on a different thread, then call the same function again once it's done, which updates the ViewModel from that other thread and results in the crash.
Note that even though explicitly using RunOnUiThread works, I really don't want to (can't, even) call this within the ViewModel, because I have a more complex piece of code in which a button simply tells the ViewModel to go fetch data and update itself. If I were required to do this on the UI thread (i.e. after I got data back, I update the ViewModel), then I couldn't bind iOS to the same ViewModel anymore.
And lastly, here's the entire code to make it crash. I've seen the Task.Run part sometimes work, but if you add some more tasks and keep updating the ViewModel in them, it's bound to crash eventually on the UI thread.
public class MainActivity : Activity, IViewFor<MainActivity.RandomViewModel>
{
    public RandomViewModel ViewModel { get; set; }
    private Button button;

    protected override void OnCreate(Bundle bundle)
    {
        base.OnCreate(bundle);
        SetContentView(Resource.Layout.Main);
        this.button = FindViewById<Button>(Resource.Id.MyButton);
        this.ViewModel = new RandomViewModel { Text = "hello world" };
        this.Bind(ViewModel, x => x.Text, x => x.button.Text);
        button.Click += delegate
        {
            this.ViewModel.Text += "a"; // does not crash
            Task.Run(() => { this.ViewModel.Text += "a"; }); // crash
        };
    }

    public class RandomViewModel : ReactiveObject
    {
        private string text;

        public string Text
        {
            get { return text; }
            set { this.RaiseAndSetIfChanged(ref text, value); }
        }
    }

    object IViewFor.ViewModel
    {
        get { return ViewModel; }
        set { ViewModel = value as RandomViewModel; }
    }
}
This has already been discussed here and there, and the short answer is "as designed, for performance reasons".
I'm personally not really convinced by the latter (performance is usually a bad driver when designing an API), but I'll try to explain why I think this design is correct anyway.
When binding an object to a view, you usually expect the view to come and peek (read) at your object's properties, and it does so from the UI thread.
Once you acknowledge that, the only sane (as in thread-safe and guaranteed to work) way to modify this object (which is being peeked into from the UI thread) is to do so also from the UI thread.
Modifications from other threads may work, but only under specific conditions that devs usually don't care about (up until they get UI artifacts, at which point they... perform a refresh...).
For instance, it can work if you're using INPC, your property values are immutable (e.g. string), and your view won't feel bad about observing a value change before it receives the notification of it (simple controls are probably OK with that; grids with filtering/sorting capabilities are probably not, unless they completely deep-copy their source).
You should design your ViewModel with the fact that it lives in the UI context in mind.
With Rx, that means having .ObserveOn(/* UI scheduler */) right before the ViewModel modification code.
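For the GetAndFetchLatest case from the question, that would look roughly like this (a sketch: FetchFromServerAsync is a placeholder, cache is an Akavache IBlobCache, and it assumes using System.Reactive.Linq; RxApp.MainThreadScheduler is ReactiveUI's UI scheduler):

// The second callback from GetAndFetchLatest arrives on a background thread;
// ObserveOn moves both callbacks onto the UI scheduler before the ViewModel is touched.
cache.GetAndFetchLatest("my-key", FetchFromServerAsync)
     .ObserveOn(RxApp.MainThreadScheduler)
     .Subscribe(text => ViewModel.Text = text);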
History of the problem
This is continuation of my previous question
How to start a thread to keep GUI refreshed?
but since Jon shed new light on the problem, I would have had to completely rewrite the original question, which would have made that topic unreadable. So: a new, very specific question.
The problem
Two pieces:
CPU hungry heavy-weight processing as a library (back-end)
WPF GUI with databinding which serves as monitor for the processing (front-end)
Current situation -- the library sends so many notifications about data changes that, even though it works within its own thread, it completely jams the WPF data binding mechanism. As a result, not only does monitoring the data not work (it is not refreshed), but the entire GUI is frozen while processing the data.
The aim -- a well-designed, polished way to keep the GUI up to date. I am not saying it should display the data immediately (it can even skip some changes), but it cannot freeze while doing computation.
Example
This is simplified example, but it shows the problem.
XAML part:
<StackPanel Orientation="Vertical">
    <Button Click="Button_Click">Start</Button>
    <TextBlock Text="{Binding Path=Counter}"/>
</StackPanel>
C# part (please note this is one piece of code, but there are two sections to it):
public partial class MainWindow : Window, INotifyPropertyChanged
{
    // GUI part
    public MainWindow()
    {
        InitializeComponent();
        DataContext = this;
    }

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        var thread = new Thread(doProcessing);
        thread.IsBackground = true;
        thread.Start();
    }

    // this is the non-GUI part -- do not mess with GUI here
    public event PropertyChangedEventHandler PropertyChanged;

    public void OnPropertyChanged(string property_name)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(property_name));
    }

    long counter;
    public long Counter
    {
        get { return counter; }
        set
        {
            if (counter != value)
            {
                counter = value;
                OnPropertyChanged("Counter");
            }
        }
    }

    void doProcessing()
    {
        var tmp = 10000.0;
        for (Counter = 0; Counter < 10000000; ++Counter)
        {
            if (Counter % 2 == 0)
                tmp = Math.Sqrt(tmp);
            else
                tmp = Math.Pow(tmp, 2.0);
        }
    }
}
Known workarounds
(Please do not repost them as answers)
I sorted the list according to how much I like each workaround, i.e. how much work it requires, its limitations, etc.
This one is mine; it is ugly, but its simplicity kills -- before sending a notification, freeze the thread -- Thread.Sleep(1) -- to let the potential receiver "breathe". It works and it is minimalistic, but it is ugly, and it ALWAYS slows down the computation, even if no GUI is there.
Based on Jon's idea -- give up on data binding COMPLETELY (one widget with data binding is enough for jamming), and instead check the data from time to time and update the GUI manually -- well, I didn't learn WPF just to give up on it now ;-)
Thomas's idea -- insert a proxy between the library and the frontend which would receive all notifications from the library and pass some of them on to WPF, for example once every second -- the downside is that you have to duplicate all objects that send notifications.
Based on Jon's idea -- pass the GUI dispatcher to the library and use it for sending notifications -- why is it ugly? Because there could be no GUI at all.
My current "solution" is adding a Sleep in the main loop. The slowdown is negligible, but it is enough for WPF to be refreshed (so it is even better than sleeping before each notification).
I am all ears for real solutions, not some tricks.
Remarks
Remark on giving up on data binding -- for me its design is broken: in WPF you have a single channel of communication, and you cannot bind directly to the source of the change. The data binding filters the source based on a name (a string!). This requires some computation even if you use some clever structure to hold all the strings.
Edit: Remark on abstractions -- call me an old-timer, but I started learning computing convinced that computers should help humans. Repetitive tasks are the domain of computers, not humans. No matter what you call it -- MVVM, abstractions, interfaces, single inheritance -- if you write the same code over and over and have no way to automate what you do, you are using a broken tool. So, for example, lambdas are great (less work for me) but single inheritance is not (more work for me); data binding (as an idea) is great (less work), but needing a proxy layer for EVERY library I bind to is a broken idea, because it requires a lot of work.
In my WPF applications I don't send the property change directly from the model to the GUI. It always goes via a proxy (ViewModel).
The property change events are put in a queue which is read from the GUI thread on a timer.
I don't understand how that can be so much more work. You just need another listener for your model's PropertyChanged event.
Create a ViewModel class with a "Model" property which is your current DataContext. Change the data bindings to "Model.Property" and add some code to hook up the events.
It looks something like this:
public MyModel Model { get; private set; }
private DispatcherTimer _timer;

public MyViewModel()
{
    Model = new MyModel();
    Model.PropertyChanged += (s, e) => SomethingChangedInModel(e.PropertyName);
    _timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(100) }; // pick an interval that suits your UI
    _timer.Tick += TimerCallback;
}

private HashSet<string> _propertyChanges = new HashSet<string>();

public void SomethingChangedInModel(string propertyName)
{
    lock (_propertyChanges)
    {
        if (_propertyChanges.Count == 0)
            _timer.Start();
        _propertyChanges.Add(propertyName ?? "");
    }
}

// this is connected to the DispatcherTimer
private void TimerCallback(object sender, EventArgs e)
{
    List<string> changes = null;
    lock (_propertyChanges)
    {
        _timer.Stop(); // doing this in the callback is safe and disables the timer
        if (!_propertyChanges.Contains("")) // "" (from a null property name) means "refresh everything"
            changes = new List<string>(_propertyChanges);
        _propertyChanges.Clear();
    }
    if (changes == null)
        OnPropertyChanged(null);
    else
        foreach (string property in changes)
            OnPropertyChanged(property);
}
This isn't really a WPF issue per se. When you have a long-running operation that updates a set of data rapidly, keeping the UI updated - any UI, whether it's WPF or WinForms or just VT100 emulation - is going to present the same problem. UI updates are comparatively slow and complex, and integrating them with a fast-changing complex process without hurting that process requires a clean separation between the two.
That clean separation is even more important in WPF because the UI and the long-running operation need to run on separate threads so that the UI doesn't freeze while the operation is running.
How do you achieve that clean separation? By implementing them independently, providing a mechanism for periodically updating the UI from within the long-running process, and then testing everything to figure out how frequently that mechanism should be invoked.
In WPF, you'll have three components: 1) a view, which is the physical model of your UI, 2) a view model, which is the logical model of the data that is displayed in the UI, and that pushes changes in the data out to the UI through change notification, and 3) your long-running process.
The long-running process can be almost completely unaware of the UI, so long as it does two things. It needs to expose public properties and/or methods so that the view model can examine its state, and it needs to raise an event whenever the UI should be updated.
The view model listens to that event. When the event is raised, it copies state information from the process to its data model, and its built-in change notification pushes those out to the UI.
Multithreading complicates this, but only a bit. The process needs to run on a different thread than the UI, and when its progress-reporting event is handled, its data will be copied across threads.
Once you've built these three pieces, the multithreading is very straightforward to accomplish using a BackgroundWorker. You create the object that's going to run the process, call the BackgroundWorker's ReportProgress from the handler of the object's progress-reporting event, and marshal data from the object's properties to the view model in the ProgressChanged event handler. Then fire off the object's long-running method in the BackgroundWorker's DoWork event handler and you're good to go.
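The wiring might look roughly like this (a sketch: LongRunningProcess, ProgressNeeded, GetStateSnapshot, ProcessState and Update are all hypothetical names standing in for your own types):

using System.ComponentModel;

void StartProcessing(MyViewModel viewModel)
{
    var worker = new BackgroundWorker { WorkerReportsProgress = true };
    var process = new LongRunningProcess();

    // Raised on the worker thread; ReportProgress marshals the snapshot across.
    process.ProgressNeeded += (s, e) =>
        worker.ReportProgress(0, process.GetStateSnapshot());

    // ProgressChanged fires on the UI thread: copy state into the view model here.
    worker.ProgressChanged += (s, e) =>
        viewModel.Update((ProcessState)e.UserState);

    worker.DoWork += (s, e) => process.Run();
    worker.RunWorkerAsync();
}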
A user interface that changes faster than the human eye can observe (~25 updates/sec) is not a usable user interface. A typical user will observe the spectacle for at most a minute before giving up completely. You are well past this if you made the UI thread freeze.
You have to design for a human, not a machine.
Since there are too many notifications for the UI to handle, why not just throttle the notifications a bit? This seems to work fine:
if (value % 500 == 0)
    OnPropertyChanged("Counter");
You could also limit the frequency of the notifications, using a timer:
public SO4522583()
{
    InitializeComponent();
    _timer = new DispatcherTimer();
    _timer.Interval = TimeSpan.FromMilliseconds(50);
    _timer.Tick += new EventHandler(_timer_Tick);
    _timer.Start();
    DataContext = this;
}

private bool _notified = false;
private DispatcherTimer _timer;

void _timer_Tick(object sender, EventArgs e)
{
    _notified = false;
}

...

long counter;
public long Counter
{
    get { return counter; }
    set
    {
        if (counter != value)
        {
            counter = value;
            if (!_notified)
            {
                _notified = true;
                OnPropertyChanged("Counter");
            }
        }
    }
}
EDIT: if you cannot afford to skip notifications because they're used by other parts of your code, here's a solution that doesn't require big changes in your code:
create a new property UICounter, which throttles the notifications as shown above
in the Counter setter, update UICounter
in your UI, bind to UICounter rather than Counter
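Combined with the timer shown above, that split might look like this (a sketch):

long counter;
public long Counter
{
    get { return counter; }
    set
    {
        if (counter != value)
        {
            counter = value;
            OnPropertyChanged("Counter"); // other code still sees every change
            UICounter = value;            // the UI binds to the throttled copy
        }
    }
}

long uiCounter;
public long UICounter
{
    get { return uiCounter; }
    set
    {
        uiCounter = value;
        if (!_notified) // _notified is reset by the timer tick shown above
        {
            _notified = true;
            OnPropertyChanged("UICounter");
        }
    }
}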
A layer between the UI and the library is necessary. This will ensure that you are able to do interaction testing, and also allow you to swap out the library for another implementation in the future without much change. This isn't duplication, but a way of providing an interface for the UI layer to communicate with. This layer will accept objects from the library, convert them to specific data transfer objects, and pass them on to another layer whose responsibility is to throttle the updates and convert them to your specific VM objects.
My opinion is that VMs should be as dumb as possible and their only responsibility should be to provide data to views.
Your question sounds similar to slow-down-refresh-rate-of-bound-datagrid.
At least the answers are similar:
Have a shadow copy of your data bound to the GUI element instead of binding the original data.
Add an event handler that updates the shadow copy from the original data with a certain delay.
You need to disconnect the source of the notifications from the target for the notifications. The way you have it set up now, every time the value changes, you go through an entire refresh cycle (which I believe is blocking your processing function from continuing as well). This is not what you want.
Provide an output stream to your processing function, which it would use to write its notifications.
On the monitoring side, attach an input stream to that output stream and use it as the data source for your UI component. This way there isn't any notification event handling going on at all: the processing runs flat out as fast as it can, outputting monitor data to the output stream you provide, while your monitor UI simply renders whatever it receives on the input stream.
You will need a thread to continuously read from the input stream. If no data is available, it should block. If it reads some data, it should dump it into the UI.
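On .NET 4.0, one way to realize such a stream is a BlockingCollection (a sketch: dispatcher and viewModel are assumed to exist in scope):

using System;
using System.Collections.Concurrent;
using System.Threading;

var monitorStream = new BlockingCollection<long>();

// Producer, inside the processing loop: a cheap, non-blocking write.
// monitorStream.Add(counterValue);

// Consumer: blocks while the stream is empty, renders whatever arrives.
var reader = new Thread(() =>
{
    foreach (var value in monitorStream.GetConsumingEnumerable())
    {
        dispatcher.BeginInvoke(new Action(() => viewModel.Counter = value));
    }
});
reader.IsBackground = true;
reader.Start();

Using a bounded collection with TryAdd instead of Add would let the producer drop monitor updates rather than queue them when the UI falls behind.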
Regards,
Rodney