I have done some research already as to how I can achieve the title of this question. The app I am working on has been under development for a couple of years or so (slow progress though, you all know how it is in the real world). It is now a requirement for me to put in multi-level Undo/Redo functionality. It's a bit late to say "you should have thought about this before you started" ... well, we did think about it - and we did nothing about it, and now here it is. From searching around SO (and external links) I can see that the two most common methods appear to be ...
Command Pattern
Memento Pattern
The Command pattern looks like it would be a huge amount of work, and I can only imagine it introducing thousands of bugs in the process, so I don't really fancy that one.
The Memento pattern is actually a lot like what I had in my head for this. I was thinking that if there were some way to quickly take a snapshot of the object model currently in memory, I would be able to store it somewhere (maybe also in memory, maybe in a file). It seems like a great idea; the only problem I can see with it is how it will integrate with what we have already written. You see, the app as we have it draws images in a big panel (potentially hundreds) and then allows the user to manipulate them either via the UI or via a custom-built properties grid. The entire app is linked up with a big observer pattern: the second anything changes, events are fired and everything that needs to update does. This is nice, but I can't help thinking that if a user is entering text into a text field on the properties grid there will be a bit of a delay before the UI catches up (since every time the user presses a key, a new snapshot will be added to the undo list). So my questions to you are ...
Do you know of any good alternatives to the Memento pattern that might work?
Do you think the Memento pattern will fit in here, or will it slow the app down too much?
If the Memento pattern is the way to go, what is the most efficient way to make a snapshot of the object model (I was thinking of serialising it, or something similar)?
Should the snapshots be stored in memory, or is it possible to put them into files?
If you have got this far, thank you kindly for reading. Any input you have will be valuable and very much appreciated.
Well, here are my thoughts on this problem.
1- You need multi-level undo/redo functionality, so you need to store the user actions performed; these can be kept in a stack.
2- Your second problem is how to identify what has been changed by an operation. I think doing this through the Memento pattern is quite a challenge; Memento is all about storing the initial object state in memory.
Either way, you need to store what is changed by an operation so that you can use this information to undo the operation.
The Command pattern is designed for undo/redo functionality, and I would say that, late as it is, it is worthwhile to implement the design that has been used for years and works for most applications.
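For illustration only, a minimal command-based undo/redo core might look something like the sketch below; the IUndoableCommand, MoveImageCommand and DrawnImage names are invented for the example and are not from your code base.

```csharp
using System.Collections.Generic;

// Hypothetical command abstraction: each user action knows how to apply and reverse itself.
public interface IUndoableCommand
{
    void Execute();
    void Undo();
}

// Stand-in for one of the drawn objects in the panel.
public class DrawnImage { public double X { get; set; } public double Y { get; set; } }

// Example command for the drawing panel described in the question.
public class MoveImageCommand : IUndoableCommand
{
    private readonly DrawnImage _image;
    private readonly double _dx, _dy;

    public MoveImageCommand(DrawnImage image, double dx, double dy)
    {
        _image = image; _dx = dx; _dy = dy;
    }

    public void Execute() { _image.X += _dx; _image.Y += _dy; }
    public void Undo()    { _image.X -= _dx; _image.Y -= _dy; }
}

// Central undo manager: every change goes through Do(), never directly to the model.
public class UndoManager
{
    private readonly Stack<IUndoableCommand> _undo = new Stack<IUndoableCommand>();
    private readonly Stack<IUndoableCommand> _redo = new Stack<IUndoableCommand>();

    public void Do(IUndoableCommand command)
    {
        command.Execute();
        _undo.Push(command);
        _redo.Clear();               // a new action invalidates the redo history
    }

    public void Undo() { if (_undo.Count > 0) { var c = _undo.Pop(); c.Undo();    _redo.Push(c); } }
    public void Redo() { if (_redo.Count > 0) { var c = _redo.Pop(); c.Execute(); _undo.Push(c); } }
}
```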
If performance allows it, you could serialize your domain before each action. A few hundred objects isn't much if the objects themselves aren't big.
Since your object graph is probably non-trivial (i.e. uses inheritance, cycles, ...), the built-in XmlSerializer and JSON serializers are out of the question. Json.net supports these, but does some lossy conversions on some types (local DateTimes, numbers, ...), so it's a poor fit too.
I think the protobuf serializers need either some form of DTD (a .proto file) or decoration of all properties with attributes mapping their names to numbers, so they might not be optimal.
BinaryFormatter can serialize most stuff; you just need to decorate all classes with the [Serializable] attribute. But I haven't used it myself, so there might be pitfalls I'm not aware of, perhaps related to singletons or events.
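If the BinaryFormatter route works for your object model, a memento can be as simple as round-tripping the document through a MemoryStream. A rough sketch, where Document is just a stand-in for your real model:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Document            // stand-in for the real object model
{
    public string Title { get; set; }
    // ... images, properties, etc., all marked [Serializable]
}

public static class Memento
{
    // Serialize the whole graph to a byte array; this is the snapshot pushed on the undo stack.
    public static byte[] Capture(Document doc)
    {
        using (var ms = new MemoryStream())
        {
            new BinaryFormatter().Serialize(ms, doc);
            return ms.ToArray();
        }
    }

    // Restore a previously captured snapshot.
    public static Document Restore(byte[] snapshot)
    {
        using (var ms = new MemoryStream(snapshot))
        {
            return (Document)new BinaryFormatter().Deserialize(ms);
        }
    }
}
```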
The critical things for undo/redo are
knowing what state you need to save and restore
knowing when you need to save the state
Adding undo/redo after the fact is always a painful thing to do - (I know this comment is of no use to you now, but it's always best to design support into the application framework before you start, as it helps people use undo-friendly patterns throughout development).
Possibly the simplest approach will be a memento-based one:
Locate all the data that makes up your "document". Can you unify this data in some way so that it forms a coherent whole? Usually, if you can serialise your document structure to a file, the logic you need is in the serialisation system, so that gives you a way in. The downside to using this directly is that you will usually have to serialise everything, so your undo will be huge and slow. If possible, refactor the code so that:
(a) there is a common serialisation interface used throughout the application (so any and every part of your data can be saved/restored using a generic call);
(b) every sub-system is encapsulated so that modifications to the data have to go through a common interface (rather than lots of people modifying member variables directly, they should all call an API provided by the object to request that it makes changes to itself); and
(c) every sub-portion of the data keeps a "version number". Every time an alteration is made (through the interface in (b)) it should increment that version number.
This approach means you can now scan your entire document and use the version numbers to find just the parts of it that have changed since you last looked, and then serialise the minimal amount to save and restore the changed state.
Provide a mechanism whereby a single undo step can be recorded. This means allowing multiple systems to make changes to the data structure, and then, when everything has been updated, triggering an undo recording. Working out when to do this may be tricky, but it can usually be accomplished by scanning your document for changes (see above) in your message loop, when your UI has finished processing each input event.
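As a very rough sketch of how (a)-(c) and the change scan could fit together, with entirely hypothetical names:

```csharp
using System.Collections.Generic;

// Every sub-portion of the document can save/restore itself (a, b)
// and carries a version number bumped on every change (c).
public interface IVersionedNode
{
    int Version { get; }
    byte[] SaveState();
    void RestoreState(byte[] state);
}

public class UndoRecorder
{
    // Last version and serialised state we saw for each node.
    private readonly Dictionary<IVersionedNode, int> _lastVersion = new Dictionary<IVersionedNode, int>();
    private readonly Dictionary<IVersionedNode, byte[]> _lastState = new Dictionary<IVersionedNode, byte[]>();

    // Called once per input event, after all observers have finished reacting.
    // Returns the pre-change state of only the nodes that actually changed.
    public Dictionary<IVersionedNode, byte[]> ScanForChanges(IEnumerable<IVersionedNode> document)
    {
        var undoStep = new Dictionary<IVersionedNode, byte[]>();
        foreach (var node in document)
        {
            int seen;
            if (_lastVersion.TryGetValue(node, out seen) && seen != node.Version)
            {
                undoStep[node] = _lastState[node];      // state *before* this change
            }
            _lastVersion[node] = node.Version;
            _lastState[node] = node.SaveState();        // refresh the cache
        }
        return undoStep.Count > 0 ? undoStep : null;
    }
}
```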
Beyond that, I'd advise going for a command based approach, because there are many benefits to it besides undo/redo.
You may find the Monitored Undo Framework to be useful. http://muf.codeplex.com/
It uses something similar to the memento pattern, monitoring for changes as they happen and allowing you to put delegates on the undo stack that will reverse/redo the change.
I considered an approach that would serialize/deserialize the document, but was concerned about the overhead. Instead, I monitor for changes in the model (or view model) on a property-by-property basis. Then, as needed, I use the MUF library to "batch" related changes so that they undo/redo as a unit of change.
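Independently of the MUF API itself, the underlying idea of putting delegates on the undo stack can be sketched like this (names are made up, and a real implementation would also suppress recording while an undo/redo is replaying):

```csharp
using System;
using System.Collections.Generic;

// Not the MUF API; just an illustration of the delegate-per-change idea.
public class DelegateUndoStack
{
    private class Change { public Action Undo; public Action Redo; }

    private readonly Stack<Change> _undo = new Stack<Change>();
    private readonly Stack<Change> _redo = new Stack<Change>();

    // Called from a property setter (or a PropertyChanged handler) with the old and new value.
    public void RecordPropertyChange<T>(Action<T> setter, T oldValue, T newValue)
    {
        _undo.Push(new Change { Undo = () => setter(oldValue), Redo = () => setter(newValue) });
        _redo.Clear();
    }

    public void Undo() { if (_undo.Count > 0) { var c = _undo.Pop(); c.Undo(); _redo.Push(c); } }
    public void Redo() { if (_redo.Count > 0) { var c = _redo.Pop(); c.Redo(); _undo.Push(c); } }
}
```

A view model setter would then call something like RecordPropertyChange(v => Name = v, oldName, newName) before raising PropertyChanged.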
The fact that you have your UI setup to react to changes in the underlying model is good. It sounds like you could inject the undo / redo logic there and the changes would bubble up to the UI.
I don't think that you'd see much lag or performance degradation. I have a similar application, with a diagram that we render based on the data in the model. We've had good results with this so far.
You can find more info and documentation on the codeplex site at http://muf.codeplex.com/. The library is also available via NuGet, with support for .NET 3.5, 4.0, SL4 and WP7.
Related
I am in need of tracking any changes done to a complex model (a very complex model, I must say, with all kinds of relationships). Once I have identified these changes, I must save them into a separate table, in order to be approved by an administrator at a later stage.
I've tried using the change tracker of Entity Framework and have even tried to customize it but it has just been giving me problem after problem.
What do you suggest I could use in order to track these changes, which does not involve Entity Framework?
UPDATE: I ended up solving this by creating my own custom checker. It took more time, but in the end it was worth it, as I had total control over the changes.
Thanks for your opinions,
Steve :)
Sorry for not providing a code example. As commented, this is more of an idea (too broad for this Exchange), but it is a high-level approach that I have used before. Back when "reflection" was highly frowned upon we called it "meta data", but it essentially employed reflection - and for that reason, today it is known as metaprogramming.
Your problem is a lovely use case for metaprogramming. Reflection used to be very slow back in the '80s, but only due to low memory and restricted CPUs.
Serialisers, such as JSON, use reflection, as does the infamously slow XML (though it is not slow any more).
Dependency Injection is the mother of metaprogramming.
Helpers like AutoMapper are mostly reflection too.
Today it has been highly optimised and works extremely well thanks to excellent computational power. As long as you do not write hacky code, or try to optimise it further yourself, you will be OK. You should trust the framework and compilers for that.
You can do some fancy things such as intercepting changes, but that can get quite complex. To keep it a bit simpler, all you have to do is follow a bit of DDD:
Your classes should only allow changes via the properties you expose. Each setter or operation that mutates the state can then be reported to your lovely state-tracking code.
In .NET 4.5 reflection is really fast, and metaprogramming is already used in Dependency Injection all over the show.
To remember changes, use an optimised collection such as a Dictionary or HashSet, depending on your needs. Using GetType, store that as the key, and the value can be the new value, or a class that holds metadata such as old value, new value, version (for rolling back), and so on.
Once you get that going in your class, you then move all the logic into a singleton and define some generic methods that you will reuse on all your "entities".
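A rough sketch of that idea, with hypothetical names: a singleton tracker keyed by type and property, storing old/new values and a version, fed from the property setters.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical change record, as described above.
public class ChangeRecord
{
    public object OldValue { get; set; }
    public object NewValue { get; set; }
    public int Version { get; set; }          // for rolling back
    public DateTime ChangedAtUtc { get; set; }
}

// Singleton tracker: entities report every mutation here from their property setters.
public sealed class ChangeTracker
{
    private static readonly ChangeTracker _instance = new ChangeTracker();
    public static ChangeTracker Instance { get { return _instance; } }
    private ChangeTracker() { }

    // Key: "TypeName.PropertyName"; value: the latest recorded change for that member.
    private readonly Dictionary<string, ChangeRecord> _changes = new Dictionary<string, ChangeRecord>();

    public void Track(object entity, string propertyName, object oldValue, object newValue)
    {
        var key = entity.GetType().FullName + "." + propertyName;
        ChangeRecord existing;
        var version = _changes.TryGetValue(key, out existing) ? existing.Version + 1 : 1;
        _changes[key] = new ChangeRecord
        {
            OldValue = oldValue,
            NewValue = newValue,
            Version = version,
            ChangedAtUtc = DateTime.UtcNow
        };
    }

    public IReadOnlyDictionary<string, ChangeRecord> Changes { get { return _changes; } }
}

// Example entity: changes are only possible through the property, which reports them.
public class Customer
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { ChangeTracker.Instance.Track(this, "Name", _name, value); _name = value; }
    }
}
```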
I'm developing a PC app in Visual Studio where I'm showing the status of hundreds of sensors that are connected via WiFi. The thing is that I need to hold on to the sensor data even after I close the app, so I'm considering some form of permanent storage. These are the options I've considered:
1) My Sensor object is relatively compact with only a few properties. I could serialize all the objects before closing the app and load them every time the app starts anew.
2) I could throw all the properties (which are mostly strings and doubles) into a simple text file and create a custom protocol for storage and retrieval.
3) I could integrate a database with my app. Someone told me this is the best way to go about it, but I'm a bit hesitant seeing as I'm not familiar with DBs.
Which method would yield the best results in terms of resource usage and speed? Or is there some other, better way to go about this?
The first thing you need to do is understand your problem. For example, when the program is running, do you need to have everything in memory at the same time, or do you work with your sensors one at a time?
What is a "large amount of data"? For example, to me that will never be less than a million records (or a billion in some cases).
Once you know that, you shouldn't be scared of using something just because you are not familiar with it. Otherwise you are not looking for the best solution to your problem, you are just hacking around it in a way that you feel comfortable with.
That being said, you have several ways of doing this. Like you said, you can serialize data, use JSON for storage, and a few other alternatives, but if we are talking about a "large amount of data that we want to persist" I would always call for the use of databases (the name says a lot). If you don't need to have everything in memory at the same time, then I believe that this is your best option.
I personally don't like them (again, personal choice), but one way of not learning SQL (much) while still using your objects is to use an ORM like NHibernate (you will also need to learn how to use it so you don't make things slower).
If you need to have everything loaded at the same time (most often that is not the case, so be sure of this), you need to know what you want to keep and serialize it. If you want that data to be readable by another tool or organized in a given way, consider a data format like XML or JSON.
Also, you can use a memory-mapped file.
The file is permanent and keeps the data between program runs.
So you just keep your data structures in the mmap-ed area, and that's all.
MSDN manual here:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366556%28v=vs.85%29.aspx
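In .NET this maps to the System.IO.MemoryMappedFiles API. A small sketch is below; note that this only works cleanly for fixed-size value types (no strings), and the record layout and slot scheme here are just an assumption for illustration:

```csharp
using System.IO;
using System.IO.MemoryMappedFiles;

// Fixed-size record that can live directly in the mapped file (no strings/references).
public struct SensorRecord
{
    public int Id;
    public double LastReading;
    public long LastSeenTicks;
}

public static class SensorStore
{
    // Persist one record at a given slot; the file survives program restarts.
    public static void Write(string path, int slot, SensorRecord record)
    {
        long size = System.Runtime.InteropServices.Marshal.SizeOf(typeof(SensorRecord));
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.OpenOrCreate, null, size * 1000))
        using (var accessor = mmf.CreateViewAccessor())
        {
            accessor.Write(slot * size, ref record);
        }
    }

    public static SensorRecord Read(string path, int slot)
    {
        long size = System.Runtime.InteropServices.Marshal.SizeOf(typeof(SensorRecord));
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
        using (var accessor = mmf.CreateViewAccessor())
        {
            SensorRecord record;
            accessor.Read(slot * size, out record);
            return record;
        }
    }
}
```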
Since you need to load all the data once at the start of the program, the database case seems doubtful. A database is necessary when you need to load a bit of data many times.
So the first two cases seem preferable. I would advise hiding the specific solution behind an interface; then you'll be able to change it later.
Standard .NET serialization of the sensor array is probably simpler, and it will be easier to extend.
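For example, you could hide the choice behind an interface and start with plain JSON serialisation; the ISensorStore name and the use of Json.NET below are purely illustrative:

```csharp
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;   // Json.NET, used here only as an example serializer

public class Sensor
{
    public string Name { get; set; }
    public double LastReading { get; set; }
}

// The rest of the app only talks to this interface, so the storage
// mechanism (file, database, ...) can be swapped later.
public interface ISensorStore
{
    void Save(IList<Sensor> sensors);
    IList<Sensor> Load();
}

public class JsonFileSensorStore : ISensorStore
{
    private readonly string _path;
    public JsonFileSensorStore(string path) { _path = path; }

    public void Save(IList<Sensor> sensors)
    {
        File.WriteAllText(_path, JsonConvert.SerializeObject(sensors));
    }

    public IList<Sensor> Load()
    {
        if (!File.Exists(_path)) return new List<Sensor>();
        return JsonConvert.DeserializeObject<List<Sensor>>(File.ReadAllText(_path));
    }
}
```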
I am working on essentially a drawing editor that allows you to define geometries based on key points on existing geometries. The user is then able to add some information about the thing they just added, such as name, expected size, etc. The API I am using to accomplish it is the awesome Reversible API, though I hope that the question extends beyond the API that I am using.
There are basically a couple of questions that I am seeking a little clarity on:
1) If you are supporting Undo/Redo with an application that supports selection in a Master/Detail manner, should changing the state of a drawing object also cause it to be selected? The example being that an undo operation changed the name of an element, and that change would not be obvious unless the element was selected. Is there a standard behavior for something like this?
2) When dealing with certain types of incremental changes (dragging a box, or using a numeric spinner), it seems to be standard form for a set of changes to be grouped into a single user interaction (a mouse swipe, or the act of releasing the spinner button), but when dealing with MVVM, I currently only know that the property has changed and not the source of the change. Is there a standard way for these types of interactions to propagate to the view model without completely disintegrating the pattern?
When in doubt, the best approach is to take a look at the typical behaviour of OS controls and other applications on the platform, in order to be consistent with what users will be familiar with. In particular, aim for consistency with the most commonly used applications. If you examine how other apps approach a UI issue you can often learn a lot, especially about subtle cases you may not have considered in your own design.
1) Conventionally, undoing tends to select the changed item(s), both to highlight what changed and to move the user's input focus back to the last edit so that they can continue. This works particularly well for content like text because if you undo/redo something you typed, chances are you want to continue editing in the area of the text you've just undone/redone. The main choice for you to make with master/detail is whether to select the master object only, or to select the precise detail that changed.
2) Your undo manager can use some intelligence to conglomerate similar actions into a single undo step. For example, if the user types several characters in a row, it could notice that these actions are all alike and concatenate them into a single undo step. Just how it does this depends on how you are storing and processing the undo, but with a decent object-oriented design this should be an easy option to add (i.e. ask undo records themselves if they can be conglomerated, so you can easily add new types of undo record in future). Beware though that accumulating too many changes into one step can be intensely irritating, so you may find the lazier implementation of one action = one step actually achieves a better UX than trying to be too clever. I'd start with brute force and add conglomeration only if you find you end up with lots of repetitive undo sequences (like 100 single pixel-left movements instead of just one 100-pixel jump).
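One way to express the conglomeration idea in code is to ask each undo record whether it can absorb the record that follows it; the names below are hypothetical:

```csharp
using System.Collections.Generic;

// Hypothetical undo record that can optionally merge with the record that follows it.
public abstract class UndoRecord
{
    public abstract void Undo();
    public abstract void Redo();

    // Return true if this record absorbed 'next' and 'next' should not be pushed separately.
    public virtual bool TryMergeWith(UndoRecord next) { return false; }
}

public class MoveRecord : UndoRecord
{
    public object Target;
    public double Dx, Dy;

    public override void Undo() { /* move Target back by (Dx, Dy) */ }
    public override void Redo() { /* move Target forward by (Dx, Dy) */ }

    // Consecutive moves of the same object collapse into one step (e.g. a drag).
    public override bool TryMergeWith(UndoRecord next)
    {
        var move = next as MoveRecord;
        if (move == null || !ReferenceEquals(move.Target, Target)) return false;
        Dx += move.Dx;
        Dy += move.Dy;
        return true;
    }
}

public class UndoStack
{
    private readonly Stack<UndoRecord> _records = new Stack<UndoRecord>();

    public void Push(UndoRecord record)
    {
        // Only push a new step if the previous record could not absorb it.
        if (_records.Count == 0 || !_records.Peek().TryMergeWith(record))
            _records.Push(record);
    }
}
```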
I'm building a WinRT/WP8 app using MVVM Cross, and one of the requirements is for the user to be able to upload images. As far as the main application is concerned, a "picture" is just a byte array with some meta-data - where it actually came from is none of its business. What I have then (so far only for WinRT; I haven't implemented the phone version at all yet) is an "IPictureSource" interface, with a GetBytes method, and two implementations - LivePicture and FileSystem. Each does what it needs to do to take/find an image, and returns it in the required format.
The app is a bit clunky at the moment, as the UI layer is sniffing device capabilities and only allowing the filesystem source if a camera isn't available.
What I want to do is abstract these a bit, possibly have two child viewmodels, one dedicated to the camera (that enables itself if available) and one for the filesystem, or maybe even a collection, if the device has more than one camera, to give the user the maximum choice.
Either way, I want to have a design whereby I have multiple sources for a picture, that are all capable of returning the appropriate data.
In the old days, I would expose a "PictureTaken" event on IPictureSource, cycle through the child objects from the parent, register each event and process them through a common handler.
I can't see why that wouldn't still work, but as I've got a bit of breathing room to make the most of the new technologies (particularly async/await) is there now a better way of doing that, particularly one I could unit test?
If you'd like to get rid of these event handlers, check my answer here. Perhaps your MVVM framework already provides an event aggregator.
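On the async/await side of the question, one option is to drop the PictureTaken event entirely and make the capture itself awaitable, which is also easy to unit test with fake sources. A sketch, with illustrative names only:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Task-based alternative to a PictureTaken event.
public interface IPictureSource
{
    bool IsAvailable { get; }
    Task<byte[]> GetBytesAsync();
}

public class PictureViewModel
{
    private readonly IEnumerable<IPictureSource> _sources;
    public PictureViewModel(IEnumerable<IPictureSource> sources) { _sources = sources; }

    // The view binds a command to this; in a unit test you just await it with fake sources.
    public async Task<byte[]> CapturePictureAsync()
    {
        var source = _sources.FirstOrDefault(s => s.IsAvailable);
        if (source == null) return null;
        return await source.GetBytesAsync();
    }
}
```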
Okay, I got this small program which tags (as in ID3v2.4 etc.) some music files. Now I want the user to have the option to move and/or rename those tagged files if he/she wishes to.
Considering that I am trying to keep a fairly clean and loosely coupled design in this system (even though extensibility is not really important here, it's just fun), would you just call someFileInfoObject.Move(someWhere) where someWhere is the applied pattern or would it be wise to implement some classes - maybe MoveFileStrategy, RenameFileStrategy (I know that moving/renaming can be considered the same in some systems, but I want them to be enabled separately) - which figure out the destination and whether the strategy should be applied when an Apply(FileInfo file) method or so is called.
If you think that some strategy classes may be useful, do you have any suggestion on a good implementation strategy?
As already said, over-engineering is not really an issue here, because it is a fun project mainly targeted at getting some programming and engineering practice. :)
Off the top of my head, building on #S.Lott: keep the commands themselves simple and atomic, and create a command queue. The UI adds commands to the queue, and the program executes the commands sequentially.
Additionally, you could hang onto mementos of executed commands and provide an undo facility.
You can make the case that you have a virtually unlimited number of commands for your files. Think of this class hierarchy:
Command
    CopyCommand
    RenameCommand
    MoveCommand
    DiffCommand
    CompressCommand
These aren't really strategies. They're just ordinary classes with a simple "execute" method. You provide the options and arguments through ordinary setters. Then you execute the method.
This borrows from Ant's design pattern for Tasks that can be plugged in.
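In C# that could look roughly like the sketch below (names are illustrative): options are supplied through ordinary setters and each command exposes a single Execute method.

```csharp
using System.IO;

// Base class for pluggable file commands, in the spirit of Ant tasks.
public abstract class FileCommand
{
    public string Source { get; set; }        // options/arguments set via ordinary setters
    public string Destination { get; set; }

    public abstract void Execute();
}

public class MoveCommand : FileCommand
{
    public override void Execute() { File.Move(Source, Destination); }
}

public class CopyCommand : FileCommand
{
    public override void Execute() { File.Copy(Source, Destination); }
}

// Usage: configure, then execute.
// var cmd = new MoveCommand { Source = @"C:\music\track.mp3", Destination = @"D:\tagged\track.mp3" };
// cmd.Execute();
```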