When to use MvvmCross data bindings? - C#

I'm pretty new to MvvmCross, and have started to data bind pretty much everything I display on the screen. Although it's usually not a lot of extra code, there is some extra effort involved, and it certainly makes some things more cumbersome (say, when you have conditional rules about screen layout that depend on the data itself).
There are quite a few cases in an app I'm implementing where the data really is mostly static. For example, when showing the times a restaurant is open. That data simply won't change, and it's known at the time we initialize the screen. Because I'm somewhat new to the pattern, I've been more or less blindly binding everything.
Now that I've done this quite a bit, I'm reflecting on how silly it is in certain situations to do the data binding at all, for what is essentially "read only" data that won't change.
I'm thinking I already know the answer here, but wanted to see what people think overall. Are there non-obvious reasons why it's "better" to just data bind everything?

Data binding everything is beneficial because it keeps your code consistent regardless of whether it is a one-way or two-way binding. There is no performance penalty for having a one-way binding.
You can also use data binding for hiding/showing, enabling/disabling, etc. elements in your layout. You can create ValueConverters to manipulate any property of your View.
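For instance, a minimal MvvmCross value converter for a show/hide scenario might look like this (a sketch; the class name is made up for illustration, and the converter namespace differs between MvvmCross versions):

using System;
using System.Globalization;
using MvvmCross.Converters; // older versions use e.g. Cirrious.CrossCore.Converters

// Returns true when the bound string has content, so a view can be shown/hidden via binding.
public class HasTextValueConverter : MvxValueConverter<string, bool>
{
    protected override bool Convert(string value, Type targetType, object parameter, CultureInfo culture)
    {
        return !string.IsNullOrWhiteSpace(value);
    }
}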
The more logic you move out of your View and into your ViewModel, the more portable your code will be. Data binding is the bridge between your View and ViewModel.
Given all that, ultimately the only true answer is "It depends." Each app is different, and you'll need to determine what is the best way for your situation.
Hope this helps.

Related

Issues with Orchard CMS

I have been researching Orchard CMS for some time and I'm pleased with some of its features, but I also have some issues that I don't know how to deal with:
All the items (content types) are linear and they don't support a tree-like data structure
(Ex: books > titles > web-links)
One of the big problems (depending on how you see things) is that the model and the view for the items are coupled (content part > driver with display / editor views)
So for a new page the model, view and position are locked and you can have only one view of the model.
Use of advanced language features that are not suited to beginner developers and are not very clear (dynamic functions, Clay objects - a nice feature, ...)
// Creating the VPlayerRecord table
SchemaBuilder.CreateTable("VPlayerRecord", table => table
    .ContentPartRecord()
    .Column("Title", DbType.String)
    .Column("VideoUrl", DbType.String)
    .Column("WidthPx", DbType.Double)
    .Column("HeightPx", DbType.Double)
);
This syntax is not very clear for beginner developers and is a bit over-engineered. Also, because the model is a dynamic object, in the view we don't have any IntelliSense support.
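For comparison, the strongly typed record and part classes that usually go with a migration like the one above look roughly like this (a sketch inferred from the table definition, not the actual project code):

using Orchard.ContentManagement;
using Orchard.ContentManagement.Records;

// Record persisted by the migration above (one virtual property per column).
public class VPlayerRecord : ContentPartRecord
{
    public virtual string Title { get; set; }
    public virtual string VideoUrl { get; set; }
    public virtual double WidthPx { get; set; }
    public virtual double HeightPx { get; set; }
}

// Content part exposing the record's data to drivers and templates.
public class VPlayerPart : ContentPart<VPlayerRecord>
{
    public string Title
    {
        get { return Record.Title; }
        set { Record.Title = value; }
    }

    public string VideoUrl
    {
        get { return Record.VideoUrl; }
        set { Record.VideoUrl = value; }
    }
}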
To build a new page we have something like three degrees of separation (3 projects):
Build a content part module
Build a content type
Build a theme module
How do you overcome these issues in your projects with Orchard CMS? And what other issues have you found, and how did you fix them? :)
Read this: http://orchardproject.net/docs/Creating-1-n-and-n-n-relations.ashx and this: http://orchardproject.net/docs/Creating-lists.ashx
How is this a problem and why do you see this as coupling? What alternative do you see?
Where do you see this as a problem and how has it blocked you?
Edit, on point 2: it is not true that you can have only one view of the model. You can have any number of display types. For example, the summary view of items is handled this way. You also have display types for admin views, and you can add your own.
Not sure what you mean by "position is locked". If we mean the same thing by position, I'm puzzled by how you could have gotten such an idea. Relative positioning of parts and fields can be changed through placement.info.
Edit, on point 3: even with this example, I'm not sure what would be difficult here. This is fairly expressive, IMO. Were you confused yourself, or are you just assuming people would be?
You are claiming that this is over-engineered. How would you simplify it then? What feature do you think is not needed?
You don't get IntelliSense on model objects in views, but the flexibility you gain justifies it by a very large margin. Ask anyone who's been making real use of it.
On the new 4th point: I can't see a reason why you would separate that into three modules, or why you think you should. I've certainly never seen an example of that. I would also point out that creating a part and a type are often done by two different people (a type creator is often just a consumer of existing parts). But again, you don't have to separate them into different modules.
A theme is clearly a different concern from the two others and makes sense to be a separate project but a theme can come with code and can actually in principle do everything a module is doing. So if you want to package a part, type and theme into a single package, you could do that. It wouldn't make a lot of sense but you could.
Finally, I don't see how any of those four points are related to page creation.
Orchard has to be taken as a challenge. As a beginner I have quickly built a few sites with ease. After that the learning curve became steeper. I've read many articles on the subject, numerous times.
I had used some CMSs before and had some knowledge of what managing content is about. Learning Orchard opened up a whole new definition of content management. Now I can think of solving everyday tasks and various business processes by implementing Orchard.
The whole thing is built at a very abstract level, forcing you to think abstractly too. If you go down this path, there are many blog posts, as well as official documentation, to help you.
There are a few basic building blocks and concepts that can be used like bricks. Sounds like a cliché; I've heard it hundreds of times. But I have also seen thousands of houses built from red square bricks, and they were all different, while the bricks were all the same. Such things can be accomplished with Orchard.
Read and understand the programming patterns. They are an essential part of the knowledge that will help you solve Orchard-based tasks. They will also help you change the way you accomplish your non-Orchard-related tasks.
I would say there are two basic areas one needs to understand. Storing and retrieving a piece of content is one, while presenting it to the crowd is the other. It might look difficult (it is difficult), but the goodies behind it are delightful. Not to mention the great guys, some from the evil empire, some not, who will certainly help you along the way. And don't forget, Git repos are your best friend: there are many wheels already invented. Caution: none of it comes as a free lunch.
P.S. I haven't written such a long post since Usenet times. It might not be suitable for a site like this, but it's kind of a way to give thanks to this French guy, and to all the other Orchard evangelists from Poland, over Cyprus, to the States. They have saved my ass on many occasions.

Undo Redo in WPF/C# in an already functional application

I have done some research already as to how I can achieve the title of this question. The app I am working on has been under development for a couple of years or so (slow progress though, you all know how it is in the real world). It is now a requirement for me to put in Undo/Redo multiple level functionality. It's a bit late to say "you should have thought about this before you started" ... well, we did think about it - and we did nothing about it and now here it is. From searching around SO (and external links) I can see that the two most common methods appear to be ...
Command Pattern
Memento Pattern
The command pattern looks like it would be a hell of a lot of work, and I can only imagine it throwing up thousands of bugs in the process, so I don't really fancy that one.
The Memento pattern is actually a lot like what I had in my head for this. I was thinking that if there were some way to quickly take a snapshot of the object model currently in memory, then I would be able to store it somewhere (maybe also in memory, maybe in a file). It seems like a great idea; the only problem I can see is how it will integrate with what we have already written. You see, the app as we have it draws images in a big panel (potentially hundreds) and then allows the user to manipulate them either via the UI or via a custom-built properties grid. The entire app is linked up with a big observer pattern. The second anything changes, events are fired and everything that needs to update does. This is nice, but I can't help thinking that if a user is entering text into a text field on the properties grid there will be a bit of delay before the UI catches up (since every time the user presses a key, a new snapshot will be added to the undo list). So my question to you is ...
Do you know of any good alternatives to the Memento pattern that might work?
Do you think the Memento pattern will fit in here, or will it slow the app down too much?
If the Memento pattern is the way to go, what is the most efficient way to make a snapshot of the object model (I was thinking of serialising it or something)?
Should the snapshots be stored in memory, or is it possible to put them into files?
If you have got this far, thank you kindly for reading. Any input you have will be valuable and very much appreciated.
Well, here are my thoughts on this problem.
1. You need multi-level undo/redo functionality, so you need to store the user actions performed, which can be kept in a stack.
2. Your second problem is identifying what has been changed by an operation; I think that with the Memento pattern this is quite a challenge, since Memento is all about storing the initial object state in memory.
Alternatively, you need to store what is changed by each operation so that you can use this information to undo the operations.
The command pattern is designed for undo/redo functionality, and I would say that, late as it is, it's worthwhile to implement a design that has been used for many years and works for most applications.
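As a rough sketch of that command-based design (names are illustrative, not taken from the application in question):

using System.Collections.Generic;

// Each user action knows how to apply and reverse itself.
public interface IUndoableCommand
{
    void Execute();
    void Undo();
}

public class UndoManager
{
    private readonly Stack<IUndoableCommand> _undo = new Stack<IUndoableCommand>();
    private readonly Stack<IUndoableCommand> _redo = new Stack<IUndoableCommand>();

    public void Do(IUndoableCommand command)
    {
        command.Execute();
        _undo.Push(command);
        _redo.Clear(); // a fresh action invalidates the redo history
    }

    public void Undo()
    {
        if (_undo.Count == 0) return;
        var command = _undo.Pop();
        command.Undo();
        _redo.Push(command);
    }

    public void Redo()
    {
        if (_redo.Count == 0) return;
        var command = _redo.Pop();
        command.Execute();
        _undo.Push(command);
    }
}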
If performance allows it you could serialize your domain before each action. A few hundred objects is not much if the objects aren't big themselves.
Since your object graph is probably non-trivial (i.e. it uses inheritance, cycles, ...), the built-in XmlSerializer and JSON serializers are out of the question. Json.NET supports these, but performs lossy conversions on some types (local DateTimes, numbers, ...), so it's problematic too.
I think the protobuf serializers need either some form of contract (a .proto file) or decoration of all properties with attributes mapping their names to numbers, so that might not be optimal either.
BinaryFormatter can serialize most things; you just need to decorate all classes with the [Serializable] attribute. But I haven't used it myself, so there might be pitfalls I'm not aware of, perhaps related to singletons or events.
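For the serialize-before-each-action idea, a snapshot helper can be as small as this (a sketch; every class in the graph must be marked [Serializable], and event fields usually need [field: NonSerialized] so that UI subscribers are not dragged into the snapshot):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

// Captures an in-memory snapshot of a serializable object graph and restores it later.
public static class Snapshot
{
    public static byte[] Capture(object documentRoot)
    {
        using (var stream = new MemoryStream())
        {
            new BinaryFormatter().Serialize(stream, documentRoot);
            return stream.ToArray();
        }
    }

    public static T Restore<T>(byte[] snapshot)
    {
        using (var stream = new MemoryStream(snapshot))
        {
            return (T)new BinaryFormatter().Deserialize(stream);
        }
    }
}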
The critical things for undo/redo are
knowing what state you need to save and restore
knowing when you need to save the state
Adding undo/redo after the fact is always a painful thing to do. (I know this comment is of no use to you now, but it's always best to design support into the application framework before you start, as it helps people use undo-friendly patterns throughout development.)
Possibly the simplest approach will be a memento-based one:
Locate all the data that makes up your "document". Can you unify this data in some way so that it forms a coherent whole? Usually, if you can serialise your document structure to a file, the logic you need is in the serialisation system, so that gives you a way in. The downside of using this directly is that you will usually have to serialise everything, so your undo will be huge and slow. If possible, refactor the code so that:
(a) there is a common serialisation interface used throughout the application, so that any and every part of your data can be saved/restored using a generic call;
(b) every sub-system is encapsulated, so that modifications to the data have to go through a common interface (rather than lots of code modifying member variables directly, it should all call an API provided by the object to request that it makes changes to itself);
(c) every sub-portion of the data keeps a "version number", incremented every time an alteration is made through the interface in (b).
This approach means you can now scan your entire document, use the version numbers to find just the parts that have changed since you last looked, and then serialise the minimal amount needed to save and restore the changed state.
Provide a mechanism whereby a single undo step can be recorded. This means allowing multiple systems to make changes to the data structure and then, when everything has been updated, triggering an undo recording. Working out when to do this may be tricky, but it can usually be accomplished by scanning your document for changes (see above) in your message loop, when your UI has finished processing each input event.
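A minimal sketch of the versioning idea from the two points above (interface and class names are invented for illustration):

using System.Collections.Generic;
using System.IO;

// Each encapsulated sub-system of the document exposes a version counter and a
// common save/restore entry point, as suggested in (a)-(c) above.
public interface IDocumentPart
{
    int Version { get; }          // incremented on every change made through the part's API
    void Save(Stream target);
    void Load(Stream source);
}

public class ChangeScanner
{
    private readonly Dictionary<IDocumentPart, int> _lastSeenVersion = new Dictionary<IDocumentPart, int>();

    // Called at the end of each input event: returns only the parts whose version moved,
    // so an undo step serialises the minimal amount of state.
    public List<IDocumentPart> FindChangedParts(IEnumerable<IDocumentPart> parts)
    {
        var changed = new List<IDocumentPart>();
        foreach (var part in parts)
        {
            int seen;
            if (!_lastSeenVersion.TryGetValue(part, out seen) || seen != part.Version)
            {
                changed.Add(part);
                _lastSeenVersion[part] = part.Version;
            }
        }
        return changed;
    }
}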
Beyond that, I'd advise going for a command based approach, because there are many benefits to it besides undo/redo.
You may find the Monitored Undo Framework to be useful. http://muf.codeplex.com/
It uses something similar to the memento pattern: it monitors for changes as they happen and allows you to put delegates on the undo stack that will reverse / redo the change.
I considered an approach that would serialize / deserialize the document, but was concerned about the overhead. Instead, I monitor for changes in the model (or view model) on a property-by-property basis. Then, as needed, I use the MUF library to "batch" related changes so that they undo / redo as a single unit of change.
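The general shape of "a delegate on the undo stack" is something like this (purely illustrative; this is not the Monitored Undo Framework's actual API):

using System;

// A change captured as a pair of closures over the old and new values.
public class DelegateChange
{
    private readonly Action _undo;
    private readonly Action _redo;

    public DelegateChange(Action undo, Action redo)
    {
        _undo = undo;
        _redo = redo;
    }

    public void Undo() { _undo(); }
    public void Redo() { _redo(); }
}

// Typical use inside a property setter, capturing old and new values:
//   var oldValue = _title;
//   _title = value;
//   undoStack.Push(new DelegateChange(() => Title = oldValue, () => Title = value));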
The fact that you have your UI setup to react to changes in the underlying model is good. It sounds like you could inject the undo / redo logic there and the changes would bubble up to the UI.
I don't think that you'd see much lag or performance degradation. I have a similar application, with a diagram that we render based on the data in the model. We've had good results with this so far.
You can find more info and documentation on the codeplex site at http://muf.codeplex.com/. The library is also available via NuGet, with support for .NET 3.5, 4.0, SL4 and WP7.

How do I avoid 'Binding to Large CLR Objects'?

This guide on optimizing DataBinding says:
There is a significant performance impact when you data bind to a single CLR object with thousands of properties. You can minimize this impact by dividing the single object into multiple CLR objects with fewer properties.
What does this mean? I am still trying to get familiar with DataBinding, but my analogy here is that properties are like SQL table fields, and objects are rows. This advice then translates to "to avoid problems with a large number of fields, use less fields and create more rows". As this doesn't make any sense to me, possibly my understanding of databinding is completely askew?
Does this advice actually apply? I am unsure if it is specific to .NET 4/WPF, as I am using 3.5 and a custom WinForms-based control library (DevExpress).
As an aside: am I correct in thinking data binding uses reflection when using an IList-style data source?
This is not just an academic question. I am currently trying to speed up loading an XtraGridView (DevExpress control) with ~100,000 objects with 50 or so properties each.
This advice then translates to "to avoid problems with a large number of fields, use less fields and create more rows"
I think it should translate to "use fewer fields and create smaller tables" (i.e. tables with fewer fields). And the original advice should read "[...] dividing the single class into multiple classes", each with fewer properties. As you correctly noted, it wouldn't make sense to create more "rows"...
Anyway, if you do have a class that exposes hundreds or thousands of properties, you have a far more serious problem than binding performance... This is a serious design flaw that you should fix after reading some OO principles.
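As a trivial illustration of that advice (hypothetical names; the point is simply to bind each region of the UI to a smaller object rather than to one giant one):

// Instead of one class exposing dozens of unrelated bindable properties...
public class ProductRowFlat
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public string WarehouseCode { get; set; }
    public int Quantity { get; set; }
    // ... and many more
}

// ...group related properties into smaller objects and bind each part of the UI only to what it needs.
public class ProductRow
{
    public ProductInfo Info { get; set; }
    public StockInfo Stock { get; set; }
}

public class ProductInfo
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class StockInfo
{
    public string WarehouseCode { get; set; }
    public int Quantity { get; set; }
}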
Does this advice actually apply? I am unsure if it is specific to .NET 4/WPF, as I am using 3.5 and a custom WinForms-based control library (DevExpress).
Well, the page you mentioned is about WPF, but I think the idea of binding to smaller objects can apply to WinForms too (because the more properties that need to be watched, the slower it will be).
As an aside: am I correct in thinking data binding uses reflection when using an IList-style data source?
You're partially correct... it actually uses TypeDescriptor, which in turn uses reflection to examine regular CLR objects. But this mechanism is much more flexible than reflection: a type can implement ICustomTypeDescriptor to provide its own description, list of members, etc. (DataTable is one example of such a type).
You are solving the wrong problem. It will take a typical user well over a week to find what she is looking for when she's got 5 million fields to search through. The speed of your UI becomes irrelevant; only a machine can do a better job of finding the data.
You've got one. Help the user narrow down what she is searching for by letting her enter search terms, so that the total query result doesn't contain more than, say, a hundred rows. The database engine helps you make that fast, and it automatically solves your grid performance problem.

WPF: UserControl vs. CustomControl performance issue

Which one is better from a performance point of view: a user control or a custom control?
Right now I am using a user control, and in a specific scenario I am creating around 200 (approx.) instances of this control, but it is a bit slow while loading and I need to wait at least 20-30 seconds for the operation to complete. What should I do to improve performance?
Edit:
The scenario is:
In my Window I have a TreeView, and each of its items represents a different user-defined type, so I have defined a DataTemplate for each type. These DataTemplates use user controls, and the user controls are bound to properties of the user-defined types. Put simply, the TreeView maps a hierarchical data structure of user-defined types. Now I read from XML, create the hierarchical structure, and assign it to the TreeView, and it takes a lot of time to load. Any help?
I have an application that loads around 500 small controls. We originally built these as user controls, but loading the BAML seems to cause the controls to load slowly (each one is really fast, but when we get to around 300, the total of all of them together starts to add up). The user controls also seem to use a good amount of memory. We switched these to custom controls and the app launches almost twice as fast and takes up about 1/3 of the RAM. Not saying this will always be the case, but custom controls made a big difference for us.
FYI: Here's a link on using a VirtualizingPanel with the TreeView: http://msdn.microsoft.com/en-us/library/cc716882.aspx
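If you go that route, virtualization can also be switched on from code via the attached properties shown below (a sketch; VirtualizationMode.Recycling needs .NET 3.5 SP1 or later, and depending on the framework version the TreeView's items panel may also need to be a VirtualizingStackPanel):

using System.Windows.Controls;

// Enables UI virtualization on an existing TreeView so that off-screen
// item containers are not generated up front.
public static class TreeViewTuning
{
    public static void EnableVirtualization(TreeView treeView)
    {
        VirtualizingStackPanel.SetIsVirtualizing(treeView, true);
        VirtualizingStackPanel.SetVirtualizationMode(treeView, VirtualizationMode.Recycling);
    }
}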
Make sure to SuspendLayout while adding controls en masse. Try to completely configure the control before adding it to any container.
Here is the follow-up article to my issues with WPF's VirtualizingStackPanel and TreeView. I hope this helps you.
http://lucisferre.net/2010/04/21/virtualizing-stack-panel-wpf-part-duex/
Long story short: It is possible to do the navigation with the current VSP, but it is a bit of a hack. The current VSP design needs a rework, as the way it currently virtualizes the View breaks the coupling between the View and ViewModel which, in turn, breaks the whole concept of MVVM.
I worked at Microsoft and was not allowed to use the UserControl because of its poor performance. We always created controls in C#. Not sure about the performance of DataTemplates, but am interested in knowing if it is better. I suspect that it is.

WPF advice on XML data binding: XmlDataProvider vs custom class?

I'm just starting out in WPF and I'm building a simple movie library as a test project to help me find my feet.
I'm storing my movie information in XML, for example:
<movie id="1" title="Back To The Future" date="1985" ... />
The movies appear in a list and when clicked the fields will become editable. I've got that working by using two Data Templates to style the list items, applied via a Trigger on the ListBoxItem.IsSelected property.
But for binding and editing the data, I'd be really grateful for some advice. I've tried each of the approaches below with varying degrees of success. Which do you think makes the most sense? Are there any other things I could try?
Approach 1: Bind to XML directly using XmlDataProvider in XAML. In this case the ListBox binds to the XmlDataProvider and the individual control elements are bound via XPath expressions. It's very simple to get started, but this approach seems to become tricky when editing the data. We need rather a lot of value converters just to ensure that the data is presented sensibly and saved back to the XML file in the appropriate format.
Approach 2: Create a custom Movie class that receives the XML and exposes all the movie attributes as public properties to which the controls are bound. This feels more robust and flexible, but seems to require a lot more work to implement. Every property needs to be explicitly exposed in code and validation is a pain.
Approach 3: A bit like approach 2 but with a Movie user control instead of the ListBox + Data Templates. I started with this and quickly abandoned it because I couldn't see any advantages over approach 2.
I realise it's not a straightforward question and that there are a lot of factors to consider, but any thoughts on the various pros/cons (or whether there are any tricks I'm missing) would be appreciated. Please let me know if you would like more info or code samples. And apologies if this question is inappropriate or unwelcome.
Many thanks.
If it's just for display, whether or not you use templates, approach one is more efficient, as the WPF engine works directly with the XML.
If you're going to do editing operations on the data, it is best to go with approach two, as you have far more flexibility to manipulate the data. Using a class, you can do exactly the same thing as in approach one with binding, since the engine reads from the objects just like it reads from the XML file. This approach also makes it easy to bind the elements two-way, and if the items live in an ObservableCollection you can handle the event that is raised when the data changes, so that the XML gets updated.
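As a rough sketch of approach two, a Movie class hydrated from the sample element above might look like this (names are illustrative; INotifyPropertyChanged is what makes the two-way editing scenario work):

using System.ComponentModel;
using System.Xml.Linq;

public class Movie : INotifyPropertyChanged
{
    private string _title;

    public int Id { get; set; }
    public int Year { get; set; }

    public string Title
    {
        get { return _title; }
        set { _title = value; OnPropertyChanged("Title"); }
    }

    // Builds a Movie from an element such as <movie id="1" title="Back To The Future" date="1985" />.
    public static Movie FromXml(XElement movie)
    {
        return new Movie
        {
            Id = (int)movie.Attribute("id"),
            Title = (string)movie.Attribute("title"),
            Year = (int)movie.Attribute("date")
        };
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}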
