I want to make a backup List for Undo/Redo,
but the objects in the backup List change after I modify the objects in the original List.
How can I deal with this problem? "const" and "in" don't seem to help.
private List<GeminiFileStruct> BackUpForUndoRedo(List<GeminiFileStruct> gfl,
    ToolStripMenuItem tm)
{
    // Copies the list container only: the new list still holds
    // references to the same items (a shallow copy).
    var li = (from i in gfl
              select i).ToList();
    tm.Enabled = true;
    return li;
}
Sorry, it used to be a struct. That caused some problems, so I changed it to a class.
Can a struct have get/set properties?
I'm new to C#.
While Bernoulli IT describes the actual problem with using a shallow copy, I wanted to provide some more background on undo/redo. There are two main approaches for undo/redo:
Memento pattern. Before making a change to an object, a memento is created that can be used to restore the state of that object. This can be applied to the whole application: before any change, the application state is serialized, just as it would be if the user saved to a file, and that serialized state can later be restored, just as when loading a file. This assumes there is a save-to-file function and that the saved data represents the whole application state. Note that serialization/deserialization implicitly creates a deep copy (a minimal sketch follows this list).
Command pattern. Each change is done by a command object that knows how to reverse the change. A downside is that it can be complicated to make sure all actions generate these command objects correctly.
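A minimal sketch of the memento idea, assuming the application state is JSON-serializable. AppState, UndoStack, and the use of System.Text.Json are illustrative assumptions, standing in for whatever save-to-file code the application already has:

using System.Collections.Generic;
using System.Text.Json;

// Hypothetical application state; any serializable type works.
public class AppState
{
    public List<string> Documents { get; set; } = new();
}

public class UndoStack
{
    private readonly Stack<string> snapshots = new();

    // Memento: serialize the whole state before each change...
    public void PushSnapshot(AppState state) =>
        snapshots.Push(JsonSerializer.Serialize(state));

    // ...and deserialize to restore it. The round trip implicitly
    // produces a deep copy, so later edits can't corrupt the snapshot.
    public AppState PopSnapshot() =>
        JsonSerializer.Deserialize<AppState>(snapshots.Pop())!;
}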
What you need is a so-called deep copy of the list:
Items in the backup list will be clones of the items in the original
list. Fresh new instances of items with identical properties.
Not a shallow copy:
A backup list with "just" references to the items in the original list. Changes made to item A through the backup list will then show up in item A of the original list, because both lists reference the very same item 😉.
Have a look at this SO post or any of these web pages: tutorial 1, tutorial 2.
Deep copying is not a trivial programming technique as you will discover. But under the right assumptions in the right context it can be done safely.
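For the question's scenario, a deep copy can be as simple as a clone method on the item type plus one projection over the list. A minimal sketch, assuming GeminiFileStruct has become a class as the asker describes; the Name and Size properties are made up for illustration:

using System.Collections.Generic;
using System.Linq;

public class GeminiFile   // stand-in for the asker's type
{
    public string Name { get; set; } = "";
    public long Size { get; set; }

    // One fresh instance with identical property values.
    public GeminiFile DeepClone() => new GeminiFile { Name = Name, Size = Size };
}

public static class BackupHelper
{
    // Deep copy: clone every item, not just the list container.
    public static List<GeminiFile> DeepCopy(List<GeminiFile> original) =>
        original.Select(i => i.DeepClone()).ToList();
}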
Note
As #Llama points out, a deep copy of a list of structs is automagically obtained by doing new List<TStruct>(originalListWithStructs). A struct is a value type and behaves differently from a reference type.
Related
I have a private List<Experience> experiences; that tracks generic experiences and experience specific information. I am using Json Serialize and Deserialize to save and load my list. When you start the application the List populates itself with the current saved information automatically and when a new experience is added to the list it saves the new list to file.
A concern that is popping into my head that I would like to get ahead of is, there is nothing that would stop the user from at any point doing something like experiences = new List<Experience>(); and then adding new experiences to it. Saving this would result in a loss of all previous data as right now the file is overwritten with each save. In an ideal world, this wouldn't happen, but I would like to figure out how to better structure my code to guard against it. Essentially I want to disallow removing items from the List or setting the list to a new list after the list has already been populated from load.
I have toyed with the idea of just appending the newest addition to the file, but I also want to cover the case where you change properties of an existing item in the List, and given that the list will never be all that large of a file, I thought overwriting would be the simplest approach as the cost isn't a concern.
Any help in figuring out the best approach is greatly appreciated.
Edit: I looked into the repository pattern (https://www.infoworld.com/article/3107186/application-development/how-to-implement-the-repository-design-pattern-in-c.html) and it seems like a potential approach.
I'm making an assumption that your user in this case is a code-level consumer of your API and that they'll be using the results inside the same memory stack, which is making you concerned about reference mutation.
In this situation, I'd return a copy of the list rather than the list itself on read-operations, and on writes allow only add and remove as maccettura recommends in the comments. You could keep the references to the items in the list intact if you want the consumer to be able to mutate them, but I'd think carefully about whether that's appropriate for your use case and consider instead requiring the consumer to call an update function (which could be the same as your add function a-la HTTP PUT).
Sometimes, when you want to highlight that your collection should not be modified, exposing it as an IEnumerable instead of a List may be enough; but if you are writing a serious API, something like the repository pattern seems to be a good solution.
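A minimal sketch of that idea: hide the list behind a wrapper that exposes only reads and adds, and persist on every write. ExperienceLog and the Experience record are hypothetical names:

using System.Collections.Generic;

public record Experience(string Name, int Level);

public class ExperienceLog
{
    private readonly List<Experience> experiences = new();

    // Readers get a read-only view, never the list itself, so a consumer
    // cannot reassign, clear, or remove from the backing store.
    public IReadOnlyList<Experience> Experiences => experiences.AsReadOnly();

    // All writes go through the log, which saves after each change.
    public void Add(Experience e)
    {
        experiences.Add(e);
        Save();
    }

    private void Save()
    {
        // serialize `experiences` to file, as the question describes
    }
}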
I have a List with a large amount of elements in it. I need to create a copy of this list to perform operations on it without altering the original list. However, the operations typically only access a small proportion of the elements of the list, and so it is inefficient to copy the entire thing when most of it will go unused. Is there a simple way to create an object which is a clone of a list, but only clones elements when they are accessed? I have looked into the Lazy<T> class, which seems to be what I want, but I don't know how to apply it in this situation.
I want to be able to do something like this:
LazyListCopy<SomeType> lazyCopy = LazyListCopy.Copy(myList); // No elements have been copied at this point
DoSomethingWith(lazyCopy[34]); // 35th element has now been copied
And this:
foreach (SomeType listElement in LazyListCopy.Copy(myOtherList))
{
    if (!Check(listElement)) // Object corresponding to listElement has been cloned
        break;
}
I don't mind if the solution isn't generic enough to handle Lists of any type; I would be fine with it being specific to one or two classes I've created.
Preferably this would be a deep copy rather than a shallow copy, but a shallow copy would still be useful, and I would appreciate examples of that if it is shorter/simpler too.
Sounds like you want to end up with your original list plus a sparse collection of overrides.
Why not create a dictionary for the overrides, keyed on the index into the original list? You can then manually add values from your original list as they are needed.
You could wrap this functionality up into a class that wraps IList<T> if it's something you're going to use often.
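A minimal sketch of that wrapper, assuming the items implement ICloneable (whether the copy is deep then depends on each Clone implementation); LazyListCopy is a hypothetical name matching the question's desired usage:

using System;
using System.Collections.Generic;

// The original list plus a sparse dictionary of clones, keyed by index.
public class LazyListCopy<T> where T : ICloneable
{
    private readonly IReadOnlyList<T> original;
    private readonly Dictionary<int, T> copies = new();

    public LazyListCopy(IReadOnlyList<T> source) => original = source;

    public int Count => original.Count;

    public T this[int index]
    {
        get
        {
            // Clone on first access; later reads reuse the stored clone.
            if (!copies.TryGetValue(index, out var copy))
            {
                copy = (T)original[index].Clone();
                copies[index] = copy;
            }
            return copy;
        }
    }

    // A duck-typed enumerator is enough for foreach; elements are
    // cloned only as the iteration reaches them.
    public IEnumerator<T> GetEnumerator()
    {
        for (int i = 0; i < Count; i++)
            yield return this[i];
    }
}

public static class LazyListCopy
{
    // Factory with type inference, matching the question's sketch.
    public static LazyListCopy<T> Copy<T>(IReadOnlyList<T> source) where T : ICloneable
        => new LazyListCopy<T>(source);
}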
I've been reading Rockford Lhotka's "Expert C# 2008 Business Objects", where there is such a thing as a data portal which nicely abstracts where the data comes from. When using the DataPortal.Update(this), which as you might guess persists 'this' to the database, an object is returned - the persisted 'this' with any changes the db made to it, eg. a timestamp.
Lhotka has written, often and very casually, that you have to make sure to update all references to the old object to point to the new returned object. Makes sense, but is there an easy way to find all references to the old object and change them? Obviously the GC tracks references; is it possible to tap into that?
Cheers
There are profiling APIs to do this, but nothing for general consumption. One possible solution, and one I've used myself, is to implement in a base class a tracking mechanism where each instance of the object adds a WeakReference to itself to a static collection.
I have this conditionally compiled for DEBUG builds but it probably wouldn't be a good idea to rely on this in a release build.
// simplified example
// do not use: performance would suck
abstract class MyCommonBaseClass {
    static readonly List<WeakReference> instances = new List<WeakReference>();

    protected MyCommonBaseClass() {
        lock (instances) {
            RemoveDeadOnes();
            instances.Add(new WeakReference(this));
        }
    }

    // prune entries whose targets have already been collected
    static void RemoveDeadOnes() {
        instances.RemoveAll(wr => !wr.IsAlive);
    }
}
The GC doesn't actually track the references to objects. Instead, it calculates which objects are reachable at runtime, starting from global and stack objects and executing some variant of a "flood fill" algorithm.
Specifically for your problem: why not just have a proxy that holds the reference to the "real" object? That way you need to update it in only one place.
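A minimal sketch of such a proxy; EntityProxy is a hypothetical name:

// One level of indirection: consumers hold the proxy, and only the
// proxy holds the "real" object, so an update touches exactly one place.
public class EntityProxy<T> where T : class
{
    public T Target { get; private set; }

    public EntityProxy(T target) => Target = target;

    // After DataPortal.Update returns the persisted instance,
    // swap it in here instead of hunting down every reference.
    public void Replace(T updated) => Target = updated;
}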
There isn't a simple way to do this directly, however, Son of Strike has this capability. It allows you to delve into all object references tracked by the CLR, and look at what objects are referencing any specific object, etc.
Here is a good tutorial for learning CLR debugging via SoS.
If you are passing object references around and those object references remain unchanged, then any changes made to the object in a persistence layer will be instantly visible to any other consumers of the object. However if your object is crossing a service boundary then the assemblies on each side of the object will be viewing different objects that are just carbon copies. Also if you have made clones of the object, or have created anonymous types that incorporate properties from the original object, then those will be tough to track down - and of course to the GC these are new objects that have no tie-in to the original object.
If you have some sort of key or ID in the object then this becomes easier. The key doesn't have to be a database ID, it can be a GUID that is new'ed up when the object is instantiated, and does not get changed for the entire lifecycle of the object (i.e. it is a property that has a getter but no setter) - as it is a property it will persist across service boundaries, so your object will still be identifiable. You can then use LINQ or even old-fashioned loops (icky!) to iterate through any collection that is likely to hold a copy of the updated object, and if one is found you can then merge the changes back in.
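A minimal sketch of that identity-plus-merge idea; Order, Status, and MergeInto are hypothetical names:

using System;
using System.Collections.Generic;
using System.Linq;

public class Order   // hypothetical business object
{
    // Stable identity: assigned once at instantiation, getter only,
    // unchanged for the entire lifecycle of the object.
    public Guid Id { get; } = Guid.NewGuid();
    public string Status { get; set; } = "";
}

public static class OrderMerger
{
    // Locate any stale copy by Id and merge the updated values back in.
    public static void MergeInto(List<Order> orders, Order updated)
    {
        var stale = orders.FirstOrDefault(o => o.Id == updated.Id);
        if (stale != null && !ReferenceEquals(stale, updated))
            stale.Status = updated.Status;
    }
}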
Having said this, I wouldn't think that you have too many copies floating around. If you do, then the places where these copies live should be very localized. Ensuring that your object implements INotifyPropertyChanged will also help propagate change notifications if you hold a list in one spot which is then bound to, directly or indirectly, in several other spots.
When using Linq2sql everything automagically works. My experience is that going with the flow is not always the best solution, and that understanding how something works internally is better, so that you can use the technique optimally.
So, my question is about linq2sql.
If I do a query and get some database objects, or I create a new one, somehow the linqcontext object keeps references to these objects. If something changes in one of the objects, the context object 'knows' what has changed and needs updating.
If my references to the object are set to null, does this mean the context object also removes its link to this object? Or is the context object slowly getting filled with tons of references, keeping my database objects from being garbage collected?
If not, how does this work??
Also, is it not very slow for the database object to always go through the entire list to see what changed and to update it?
Any insight in how this works would be excellent!
thanks
Yes, the context keeps references to the loaded objects. That's one of the reasons why it isn't meant to be used as a single instance shared across different requests.
It keeps lists for the inserts/deletes. I am not sure whether it captures updates by adding them to a list, or loops over everything at the end. But you shouldn't be loading large sets of data at a time anyway, because that alone would be a bigger performance hit than any final check it might do on the list.
The DataContext subscribes to your objects' PropertyChanged event to know when they are modified. At that point it clones the original object and keeps the clone, so it can compare the two objects later when you call SubmitChanges().
If my references to the object are set to null, does this mean that the context object also removes it's link to this object?
Edit: No. Sorry, in my original answer I had misinterpreted what you had written. In that case the data context still has a reference to both objects, but it will remove the relationship between those two objects on the next SubmitChanges().
Be careful though. If you created your own objects instead of using the ones generated from the .dbml, the "magic" that the datacontext performs might not work properly.
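For illustration, here is a toy version of the clone-and-compare mechanism described above. This is not LINQ to SQL's actual code, just the same idea in miniature; TrackedEntity and MiniTracker are made-up names:

using System;
using System.Collections.Generic;
using System.ComponentModel;

public class TrackedEntity : INotifyPropertyChanged
{
    private string name = "";
    public string Name
    {
        get => name;
        set { name = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name))); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    public TrackedEntity ShallowClone() => (TrackedEntity)MemberwiseClone();
}

// On the first change to an entity, snapshot a clone of its original
// state; on "submit", diff the clone against the current values.
public class MiniTracker
{
    private readonly Dictionary<TrackedEntity, TrackedEntity> originals = new();

    public void Attach(TrackedEntity entity) =>
        entity.PropertyChanged += (sender, _) =>
        {
            var e = (TrackedEntity)sender;
            if (!originals.ContainsKey(e))
                originals[e] = e.ShallowClone();   // snapshot once
        };

    public void SubmitChanges()
    {
        foreach (var pair in originals)
            if (pair.Key.Name != pair.Value.Name)
                Console.WriteLine($"UPDATE: '{pair.Value.Name}' -> '{pair.Key.Name}'");
        originals.Clear();
    }
}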
Let's say I have a class Collection which holds a list of Items.
public class Collection
{
private List<Item> MyList;
//...
}
I have several instances of this Collection class which all have different MyLists but share some Items.
For example: There are 10 Items, Collection1 references Items 1-4, Collection2 has Items 2-8 and Collection3 4,7,8 and 10 on its List.
I implemented this as follows: I have one global List which holds any Items available. Before I create a new Collection I check if there are already Items I need in this list -- if not I create the Item and add it to the global List (and to the Collection of course).
The problem I see is that those Items will never be released - even if all Collections are gone, the memory they consume is still not freed because the global list still references them.
Is this something I need to worry about? If so, what should I do? I thought of adding a counter to the global list to see when an Item is not needed anymore and remove its reference.
Edit:
It is in fact a design problem, I think. I will discard the idea of a global list and instead loop through all Collections and see if they have the needed Item already.
If the global list needs references to the items then you can't realistically free them. Do you actually need references to the items in the global list? When should you logically be able to remove items from the global list?
You could consider using weak references in the global list, and periodically pruning the WeakReference values themselves if their referents have been collected.
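A minimal sketch of that weak-reference approach; Item and ItemRegistry stand in for the asker's types:

using System;
using System.Collections.Generic;

public class Item { public string Name { get; set; } }

public static class ItemRegistry
{
    // Weak references do not keep Items alive, so Items referenced by
    // no Collection can be garbage collected normally.
    private static readonly List<WeakReference<Item>> items = new();

    public static void Register(Item item) =>
        items.Add(new WeakReference<Item>(item));

    public static Item Find(Predicate<Item> match)
    {
        Prune();
        foreach (var wr in items)
            if (wr.TryGetTarget(out var item) && match(item))
                return item;
        return null;
    }

    // Periodically drop entries whose referents have been collected.
    private static void Prune() =>
        items.RemoveAll(wr => !wr.TryGetTarget(out _));
}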
It looks like a bit of a design problem: do you really need the global list?
Apart from the weak references that Jon mentions, you could also periodically rebuild the global list (for example, after deleting a collection), or only build it dynamically when you need it and release it again afterwards.
You'll have to decide which method is most appropriate; we don't have enough context here.