I'm looking for a way to sort a list of objects (of any possible type) so that, whatever happens to the objects, the order stays the same as long as they are not destroyed (so the hash code isn't a good idea, because in some classes it changes over time). For that reason I was thinking of using the address of the object in memory, but I'm not sure that it always stays the same (can the address change during a garbage collection, for instance?). In short, I'm looking for properties of objects (of any type) that stay the same as long as the object isn't destroyed. Are there any? And if so, what are they?
Yes, objects can be moved around in memory by the garbage collector, unless you specifically ask it not to (and it's generally recommended to let the GC do its thing).
What you need here is a side table: create a dictionary keyed by the objects themselves, and for the value put anything you like (could be the original hashcode of the object, or even a random number). When you sort, sort by that side key. Now if for example the object a has value "1" in this dictionary, it'll always be sorted first - regardless of what changes are made to a, because you'll look in your side dictionary for the key and the code for a doesn't know to go there and change it (and of course you are careful to keep that data immutable). You can use weak references to make sure that your dictionary entries go away if there is no other reference to the object a.
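If it helps, here is a minimal sketch of that side-table idea, assuming .NET's ConditionalWeakTable (System.Runtime.CompilerServices), which holds its keys weakly so entries disappear once the object is otherwise unreachable; the StableOrder name is mine:

using System.Runtime.CompilerServices;
using System.Threading;

static class StableOrder
{
    // Keys are held weakly: an entry lives only as long as its key object.
    private static readonly ConditionalWeakTable<object, object> keys =
        new ConditionalWeakTable<object, object>();
    private static int next;

    // Returns the same number for the same object for its entire lifetime.
    public static int KeyOf(object o)
    {
        return (int)keys.GetValue(o, _ => (object)Interlocked.Increment(ref next));
    }
}

You would then sort by the side key, e.g. list.Sort((x, y) => StableOrder.KeyOf(x).CompareTo(StableOrder.KeyOf(y))).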
Updated given the detail now added to the question (comments); just take a copy of the list contents before you sort it...
No, the address is not fixed. And for arbitrary objects, no, there is no sensible way of doing this. For your own objects you could add something common, like:
interface ISequence { int Order { get; } }

// Interlocked lives in System.Threading.
static class Sequence
{
    private static int next;

    public static int Next()
    {
        return Interlocked.Increment(ref next);
    }
}

class Foo : ISequence
{
    private readonly int sequence;

    int ISequence.Order { get { return sequence; } }

    public Foo()
    {
        sequence = Sequence.Next();
    }
}
A bit scrappy, but it should work, and could be used in a base-class. The Order is now non-changing and sequential. But only AppDomain-specific, and not all serialization APIs will respect it (you'd need to use serialization-callbacks to initialize the sequence in such cases).
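For the serialization case, a rough sketch of such a callback using the BinaryFormatter-style attributes (note this assumes the sequence field is made non-readonly, since callbacks run outside the constructor):

using System;
using System.Runtime.Serialization;

[Serializable]
class Foo : ISequence
{
    [NonSerialized] private int sequence;

    int ISequence.Order { get { return sequence; } }

    public Foo()
    {
        sequence = Sequence.Next();
    }

    // Constructors are skipped during deserialization, so reassign here.
    [OnDeserialized]
    private void OnDeserialized(StreamingContext context)
    {
        sequence = Sequence.Next();
    }
}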
Sorting by memory address would only be possible for reference types, of course, so not every type can be sorted this way: primitive types and structs cannot.
The other way is to depend on a certain interface, where you require that each of these instances can return a Guid that is created in the constructor and never changed.
public interface ISortable
{
    Guid SortId { get; }
}

class Foo : ISortable
{
    public Foo()
    {
        SortId = Guid.NewGuid();
    }

    // Must be public to satisfy ISortable.
    public Guid SortId { get; private set; }
}
The advantage of a Guid is that it can be created independently in each class: you need no synchronization, you just give every instance its own id.
By the way: if you are using the objects as keys in a Dictionary, they must not change their hash code; they must be immutable. This could probably be a constraint you could depend on.
Edit: you could write your specialized list that is able to keep orderings.
Either you store the original order when the list is created from another list, and then you're able to restore the order at any point in time. New items could be put at the end. (are there new items anyway?)
Or you do something more sophisticated and store the order of any object that has ever been seen by your list class in static memory. Then you can sort all the lists independently. But beware of the references you are holding, which will prevent the objects from being cleaned up by the GC. You'll need weak references; C# does have them (WeakReference), though I've never used them.
Even better would be to put this logic into a sorting class. So it works for every list that has been sorted by your sorting class.
Related
I have a class called "RelativeData" that is used for comparing values through a state machine. I am trying to build a class that takes snapshots of the data at given intervals, the intervals being: StateBegins, StateMachineBegins, and the one I'm having issues with, liveValue.
The challenge I'm having is that the liveValue(s) I wish to track are in a separate class. There is a large list of values I want to track with my RelativeData objects, so my class needs to be generic.
The point of this class is to take snapshots of data.
class RelativeData<T>
{
    T initialValue;
    T stateValue;
    T liveValue; // currently not implemented, since I can't store a ref

    public void StateMachineActive(T curValue)
    {
        initialValue = curValue;
    }

    public void StateUpdated(T curValue)
    {
        stateValue = curValue;
    }
}
ex) a float called "energy" which belongs to a spell. When the state machine activates, it calls StateMachineActive on my list of RelativeData, which preferably could update internally using liveValue, instead of having to pass curValue.
Current fix)
I have a manager that adds these values to dictionaries of the generic types.
Dictionary<myKey, RelativeData<bool>> relativeBoolData
Dictionary<myKey, RelativeData<float>> relativeFloatData
...and so on.
However, to call StateUpdated I need to pass the current value, which I need to get from a large switch case using myKey. Doable, but back in my C++ days I could just store a T*. I can't turn on unsafe mode since Unity doesn't seem to allow it. I could store the actual values in there instead, but then every time I want to get the currentValue (which happens a lot more often than getting a relative value) everywhere throughout the code, it would increase complexity (be more processing-heavy). At least I assume.
Question:
1) Can I store a pointer to an IComparable type? Can I have liveValue point to a value type/primitive data type (i.e. int, long, enum) elsewhere, like I could in C++ with T*? I know I can pass using "ref" but I can't seem to store it.
2) Should I just store the actual values in there instead, and have everything be retrieved through a two-tier switch case (first to know which dictionary to use: float, bool, etc., and then to retrieve the value)? Would that not increase runtime too heavily? The non-relative values (energy, stability, etc.) are used continuously in spells for internal calculations; the relative ones are only used for the state machine. (There may be 2~60 spells active at once.)
3) Is this just a bad pattern? Is there a better pattern I am just not seeing, a better way to take snapshots of generic data?
4) I know arrays are by ref, but that would mean my original data would all have to become arrays, right?
5) Should I just leave it the way it is? It works, just a tad messier and slower.
All of them are IComparable: ints, floats, bools, enums...
You can do one of the following:
Create a wrapper class for the values and keep a reference on the wrapper.
public class Wrapper<T>
{
    public T Value { get; set; }
}
Store the values in an array and remember the indexes instead of the values themselves.
Store the values in a dictionary and remember their keys instead of the values themselves.
Store an accessor delegate (Func<T>). Note: Lambda expressions automatically keep a reference on the context in which they were first declared. See The Beauty of Closures (by Jon Skeet).
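For instance, a rough sketch of the delegate option (the LiveValue name and the spell/Energy usage are made up):

using System;

class LiveValue<T>
{
    private readonly Func<T> getter;

    public LiveValue(Func<T> getter)
    {
        this.getter = getter;
    }

    // Reads through to wherever the real value lives right now.
    public T Current { get { return getter(); } }
}

// Usage: the lambda closes over the spell instance, so Current always
// reflects the spell's current energy:
//   var live = new LiveValue<float>(() => spell.Energy);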
When working with HashSets in C#, I recently came across an annoying problem: HashSets don't guarantee uniqueness of the elements; they are not really sets. What they do guarantee is that when Add(T item) is called, the item is not added if, for any item already in the set, item.Equals(that) is true. This no longer holds if you manipulate items already in the set. A small program that demonstrates it (copied from my LINQPad):
void Main()
{
    HashSet<Tester> testset = new HashSet<Tester>();
    testset.Add(new Tester(1));
    testset.Add(new Tester(2));

    foreach (Tester tester in testset)
    {
        tester.Dump();
    }
    foreach (Tester tester in testset)
    {
        tester.myint = 3;
    }
    foreach (Tester tester in testset)
    {
        tester.Dump();
    }

    HashSet<Tester> secondhashset = new HashSet<Tester>(testset);
    foreach (Tester tester in secondhashset)
    {
        tester.Dump();
    }
}
class Tester
{
    public int myint;

    public Tester(int i)
    {
        this.myint = i;
    }

    public override bool Equals(object o)
    {
        if (o == null) return false;
        Tester that = o as Tester;
        if (that == null) return false;
        return (this.myint == that.myint);
    }

    public override int GetHashCode()
    {
        return this.myint;
    }

    public override string ToString()
    {
        return this.myint.ToString();
    }
}
It will happily let you manipulate the items in the collection so they become equal, only filtering them out when a new HashSet is built. What is advisable when I want to work with sets where I need to know the entries are unique? Roll my own, where Add(T item) adds a copy of the item, and the enumerator enumerates over copies of the contained items? This presents the challenge that every contained element should be deep-copyable, at least in the members that influence its equality.
Another solution would be to roll my own that only accepts elements implementing INotifyPropertyChanged, and take action on the event to re-check for equality, but this seems severely limiting, not to mention a whole lot of work and performance loss under the hood.
Yet another possible solution I thought of is making sure that all fields are readonly or const in the constructor. All solutions seem to have very large drawbacks. Do I have any other options?
You're really talking about object identity. If you're going to hash items they need to have some kind of identity so they can be compared.
If that identity changes, it is not a valid identity. You currently have public int myint; it really should be readonly, and only set in the constructor.
If two objects are conceptually different (i.e. you want to treat them as different in your specific design) then their hash code should be different.
If you have two objects with the same content (i.e. two value objects that have the same field values) then they should have the same hash codes and should be equal.
If your data model says that you can have two objects with the same content but they can't be equal, you should use a surrogate id, not hash the contents.
Perhaps your objects should be immutable value types, so the object can't change.
If they are mutable types, you should assign a surrogate ID (i.e. one that is introduced externally, like an increasing counter id or the object's initial hash code) that never changes for the given object.
This is a problem with your Tester objects, not the set. You need to think hard about how you define identity. It's not an easy problem.
When I need a 1-dimensional collection of guaranteed-unique items I usually go with Dictionary<TKey, TValue>: you cannot add elements with the same Key, plus I usually need to attach some properties to the items and the Value comes in handy (my go-to value type is Tuple<> for many values...).
Of course, it's not the most performant nor the least memory-hungry solution, but I don't usually have performance/memory concerns.
You should implement your own IEqualityComparer and pass it to the constructor of the HashSet to get the equality behaviour you want.
And as Joe said, if you want the collection to remain unique even beyond Add(T item), you need to use value objects that are fully initialized by the constructor and have no publicly visible setters.
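For example, a sketch of such a comparer for the Tester type from the question, treating myint as the identity (the usual mutation caveat still applies):

using System.Collections.Generic;

class TesterComparer : IEqualityComparer<Tester>
{
    public bool Equals(Tester x, Tester y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return x.myint == y.myint;
    }

    public int GetHashCode(Tester obj)
    {
        return obj.myint;
    }
}

// HashSet<Tester> set = new HashSet<Tester>(new TesterComparer());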
Say I have a simple object such as
class Something
{
    public int SomeInt { get; set; }
}
I have read that using immutable objects is faster and a better means of working with business objects. If this is so, should I strive to make all my objects like this:
class ImmutableSomething
{
    public int SomeInt { get { return m_someInt; } }

    private int m_someInt = 0;

    public void ChangeSomeInt(int newValue)
    {
        m_someInt = newValue;
    }
}
What do you reckon?
What you depict is not an immutable object; simply moving the set code into a dedicated setter method doesn't make an object immutable. An immutable object, by definition, can't change, so the fact that you can alter the value of any of the object's properties or fields means that it isn't immutable.
In terms of "faster" or "better", immutable objects are not intrinsically faster or "better" than mutable objects; there isn't anything special about them, other than the fact that you can't change any values.
As others have said, what you've posted isn't immutable. This is what an immutable object looks like. The readonly keyword means that the only place the backing field for the property can be set is in the constructor. Essentially, after the object is constructed, that's it forever.
public class ImmutableSomething
{
    private readonly int _someInt;

    public int SomeInt
    {
        get { return _someInt; }
    }

    public ImmutableSomething(int i)
    {
        _someInt = i;
    }

    public ImmutableSomething Add(int i)
    {
        return new ImmutableSomething(_someInt + i);
    }
}
This is a big deal in functional programming because instead of talking about objects you get to talk about Values. Values never change, so you know that when you pass them to a function or method your original value will never be changed or updated. With mutable objects you can't make that guarantee.
Code built with immutable objects can be easily parallelized because there is no writable shared state. If some other thread gets your Value and wants to do something to it, it can't. "Changing" it in any way produces a new object with a brand new spot in memory just for that object.
So once you're dealing with values you can do some special things like interning, which is what .NET does with strings. Because "Hello World" is a value and will never change, one reference to "Hello World" is just as good as any other, so instead of having "Hello World" in a hundred slots in memory, you have it in one slot and set all the references to "Hello World" to point to that one slot. So you get a big win on the memory side, but you pay a performance penalty: whenever a string is interned, the intern pool has to be checked to see whether an equal string already exists. (In .NET, string literals are interned automatically; other strings are interned only if you ask via String.Intern.)
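A quick way to see interning in action; string literals are interned automatically, while other strings join the pool only via String.Intern:

string a = "Hello World";
string b = new string("Hello World".ToCharArray());             // equal contents, distinct object
Console.WriteLine(object.ReferenceEquals(a, b));                 // False: two objects
Console.WriteLine(object.ReferenceEquals(a, string.Intern(b)));  // True: the pooled instance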
The primary advantage of deeply immutable objects is that it's very easy to take a "snapshot" of their properties: simply copy a reference to the object. No matter how big or complicated the object might be, one can "snapshot" the whole thing simply by copying one reference.
By contrast, if one wants to take a snapshot of a mutable object's properties, it's necessary to copy all of them. If any of those properties are themselves mutable objects, it will be necessary to copy all of those as well. In some cases, making a usable copy of a mutable object's state can be very complicated or even impossible, since objects may have their state intertwined with those of singletons.
Although immutable objects are far easier to "snapshot" than mutable ones, it can sometimes be difficult to, given an immutable object, produce an instance which is similar to the first one except for some minor change. Mutable objects can sometimes be easier to work with in that regard. Sometimes it can be useful to copy data from an immutable object into a mutable object, change it, and then produce a new immutable object which holds the changed data. Unfortunately, there isn't any general means to automate such conversion with classes. There is, however, an alternative.
A struct with exposed fields which are all value primitives or immutable class references can offer a very convenient means of holding information in an immutable object, copying it to a mutable form, modifying it, and then making a new immutable object. Alternatively, one may copy the data in a struct easily by just copying the struct itself. Copying a struct which contains more than one or two int-sized fields is somewhat more expensive than copying an immutable-object reference, but copying a struct and changing it is generally much cheaper than making a new changed immutable object. It's important to note that because of some quirks in the way .NET languages handle structs, some people regard mutable structs as evil. I would recommend that in general it's best for structs to simply expose their fields, and avoid having any methods (other than constructors) which mutate "this". That will avoid most of the quirks associated with mutable structs, and will often offer better performance and clearer semantics than can be obtained with immutable classes, mutable classes, or so-called immutable (really "mutable by assignment only") structs.
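A sketch of that exposed-field struct pattern (all names are illustrative):

// Plain data as a struct with exposed fields; copying it copies the snapshot.
struct PointData
{
    public int X;
    public int Y;
}

class ImmutablePoint
{
    private readonly PointData data;

    public ImmutablePoint(PointData d) { data = d; }

    // Returning the struct hands out a copy, so callers can't mutate our state.
    public PointData Data { get { return data; } }

    public ImmutablePoint WithX(int x)
    {
        PointData copy = data; // cheap copy of the whole snapshot
        copy.X = x;            // mutate the copy only
        return new ImmutablePoint(copy);
    }
}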
Incidentally, it is sometimes useful to design an immutable object so that producing a slightly-different object will copy everything from the old object to a new one, but with the appropriate part changed, and it is sometimes useful to design an immutable object so that a slightly-different object will be able to "reuse" most of the original object. In many cases, the former approach will be the more efficient one if the object will be read much more often than it is written, and there isn't going to be any need to keep old snapshots around after an object has been changed. The latter approach can be the more efficient one in cases where one will want to keep around many snapshots, and updates are frequent relative to read accesses.
That's not an immutable object. An immutable version of this would be something like
class ImmutableSomething
{
    public readonly int SomeInt;

    public ImmutableSomething(int i)
    {
        SomeInt = i;
    }

    public ImmutableSomething AddValue(int add)
    {
        return new ImmutableSomething(this.SomeInt + add);
    }
}
The main benefit of an immutable object is that the object itself will never change, so you don't risk one part of your code changing the underlying values, especially in multithreading situations, but this applies in general. These guarantees often makes objects "better" in that you know what to expect, but there's nothing that makes immutables inherently "faster" than mutable objects.
For example, DateTimes are immutable, so you can do stuff like
DateTime someReferenceTime = DateTime.Now;
myBusinessLayer.DoABunchOfProcessingBasedOnTime(someReferenceTime);
// Here you are guaranteed that someReferenceTime has not changed, and you can do more with it.
Versus something like
StringBuilder sb = new StringBuilder("Seed");
myBusinessLayer.DoStuffBasedOnStringBuilder(sb);
// You have no guarantees about what sb contains here.
Leaving aside the point that the example doesn't actually show an immutable object, the main benefit for immutable objects is that they make certain multi-threaded operations dead simple and lock-free. For example, enumerating an immutable tree structure is possible without locks in a multi-threaded environment, whereas if the tree was mutable, you would need to introduce locks in order to safely enumerate it.
But there is nothing magical about immutable objects that makes them inherently faster.
So I'm thinking of using a reference type as a key to a .NET Dictionary...
Example:
class MyObj
{
    private int mID;

    public MyObj(int id)
    {
        this.mID = id;
    }
}

// whatever code here

static void Main(string[] args)
{
    Dictionary<MyObj, string> dictionary = new Dictionary<MyObj, string>();
}
My question is, how is the hash generated for custom objects (ie not int, string, bool etc)? I ask because the objects I'm using as keys may change before I need to look up stuff in the Dictionary again. If the hash is generated from the object's address, then I'm probably fine... but if it is generated from some combination of the object's member variables then I'm in trouble.
EDIT:
I should've originally made it clear that I don't care about the equality of the objects in this case... I was merely looking for a fast lookup (I wanted to do a 1-1 association without changing the code of the classes involved).
Thanks
The default implementation of GetHashCode/Equals basically deals with identity. You'll always get the same hash back from the same object, and it'll probably be different to other objects (very high probability!).
In other words, if you just want reference identity, you're fine. If you want to use the dictionary treating the keys as values (i.e. using the data within the object, rather than just the object reference itself, to determine the notion of equality) then it's a bad idea to mutate any of the equality-sensitive data within the key after adding it to the dictionary.
The MSDN documentation for object.GetHashCode is a little bit overly scary - basically you shouldn't use it for persistent hashes (i.e. saved between process invocations), but it will be consistent for the same object, which is all that's required for it to be a valid hash for a dictionary. While it's not guaranteed to be unique, I don't think you'll run into enough collisions to cause a problem.
The hash used is the return value of the GetHashCode method on the object. By default this is essentially a value representing the reference. It is not guaranteed to be unique for an object, and in fact likely won't be in many situations. But the value for a particular reference will not change over the lifetime of the object, even if you mutate it. So for this particular sample you will be OK.
In general though, it is a very bad idea to use objects which are not immutable as keys to a Dictionary. It's way too easy to fall into a trap where you override Equals and GetHashCode on an object and break code where the type was formerly used as a key in a Dictionary.
The dictionary will use the GetHashCode method defined on System.Object, which will not change over the object's lifetime regardless of field changes etc. So you won't encounter problems in the scenario you describe.
You can override GetHashCode, and should do so if you override Equals so that objects which are equal also return the same hash code. However, if the type is mutable then you must be aware that if you use the object as a key of a dictionary you will not be able to find it again if it is subsequently altered.
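A small sketch of that trap (the Box type is made up; it hashes on mutable state):

class Box
{
    public int Value;

    public override bool Equals(object o)
    {
        Box b = o as Box;
        return b != null && b.Value == Value;
    }

    public override int GetHashCode() { return Value; }
}

// var dict = new Dictionary<Box, string>();
// var key = new Box { Value = 1 };
// dict[key] = "hello";
// key.Value = 2;             // the hash code changes...
// dict.ContainsKey(key);     // ...so this is now false: lookup probes the wrong bucket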
The default implementation of the GetHashCode method does not guarantee unique return values for different objects. Furthermore, the .NET Framework does not guarantee the default implementation of the GetHashCode method, nor that the value it returns will be the same between different versions of the .NET Framework. Consequently, the default implementation of this method must not be used as a unique object identifier for hashing purposes.
http://msdn.microsoft.com/en-us/library/system.object.gethashcode.aspx
For a custom object derived from Object, the hash code is generated at the beginning of the object's life and remains the same throughout the life of the instance. Furthermore, the hash code will not change even as the values of internal fields/properties change.
A Hashtable/Dictionary does not use GetHashCode as a unique identifier; it only uses it to pick "hash buckets". For example, the strings "aaa123" and "aaa456" might both hash to "aaa", in which case all objects with hash "aaa" are stored in one bucket. Whenever you insert or retrieve an object, the Dictionary calls GetHashCode to find the bucket and then compares the individual objects within it for equality.
When a custom object is used as a Dictionary key, think of it this way: the Dictionary only stores its reference (address, or memory pointer); it doesn't know its contents. The contents of the object may change, but the reference never does. This also means that if two objects are exact replicas of each other but live at different addresses, the hashtable will not consider them the same, because their references differ.
The best way to guarantee value equality is to override the Equals method as follows (and, if you do, you should also override GetHashCode to match):
class MyObj
{
    private int mID;

    public MyObj(int id)
    {
        this.mID = id;
    }

    public override bool Equals(object obj)
    {
        MyObj mobj = obj as MyObj;
        if (mobj == null)
            return false;
        return this.mID == mobj.mID;
    }

    // Types that override Equals should override GetHashCode as well,
    // otherwise hash-based collections will misbehave.
    public override int GetHashCode()
    {
        return mID;
    }
}
What is the proper way to implement assignment by value for a reference type? I want to perform an assignment, but not change the reference.
Here is what I'm talking about:
void Main()
{
    A a1 = new A(1);
    A a2 = new A(2);

    a1 = a2;            // WRONG: changes the reference
    a1.ValueAssign(a2); // This works, but is it the best way?
}

class A
{
    int i;

    public A(int i)
    {
        this.i = i;
    }

    public void ValueAssign(A a)
    {
        this.i = a.i;
    }
}
Is there some sort of convention I should be using for this? I feel like I'm not the first person that has encountered this. Thanks.
EDIT:
Wow. I think I need to tailor my question more toward the actual problem I'm facing. I'm getting a lot of answers that do not meet the requirement of not changing the reference. Cloning is not the issue here. The problem lies in ASSIGNING the clone.
I have many classes that depend on A; they all share a reference to the same object of class A. So whenever one class changes A, the change is reflected in the others, right? That's all fine and well until one of the classes tries to do this:
myA = new A();
In reality I'm not doing new A(); I'm actually retrieving a serialized version of A off the hard drive. But anyway, doing this causes myA to receive a NEW REFERENCE. It no longer shares the same A as the rest of the classes that depend on A. This is the problem I am trying to address: I want all classes that hold the instance of A to be affected by the line of code above.
I hope this clarifies my question. Thank you.
It sounds like you're talking about cloning. Some objects will support this (via ICloneable) but most won't. In many cases it doesn't make sense anyway - what does it mean to copy a FileStream object? ICloneable is generally regarded as a bad interface to use, partly because it doesn't specify the depth of the clone.
It's better to try to change your way of thinking so this isn't necessary. My guess is that you're a C++ programmer - and without wishing to cast any judgements at all: don't try to write C# as if it's C++. You'll end up with unidiomatic C# which may not work terribly well, may be inefficient, and may be unintuitive for C# developers to understand.
One option is to try to make types immutable where possible - at that point it doesn't matter whether or not there's a copy, as you wouldn't be able to change the object anyway. This is the approach that String takes, and it works very well. It's just a shame that there aren't immutable collections in the framework (yet).
In your case, instead of having the ValueAssign method, you would have WithValue which would return a new instance with just the value changed. (Admittedly that's the only value available in your case...) I realise that this sort of copying (of all but the property that's about to change) goes against what I was saying about copying being somewhat unidiomatic in C#, but it's within the class rather than an outside body deciding when to copy.
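A minimal sketch of that WithValue idea, reusing the A class from the question:

class A
{
    private readonly int i;

    public A(int i) { this.i = i; }

    // Returns a new instance instead of mutating this one.
    public A WithValue(int newI)
    {
        return new A(newI);
    }
}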
I suspect I'm not explaining this terribly well, but my general advice is to design around it rather than to try to explicitly copy all over the place.
I believe you should be using a struct instead of a class then, as structs work by value and not by reference.
For what you want to do, I think A.ValueAssign(otherA) is the best way.
Given that you want to have one reference of A around, ensuring that the reference isn't destroyed is key.
Wouldn't you also be well served by a singleton pattern here?
One approach is to use a copy constructor. e.g.,
MyClass orig = ...;
MyClass copy = new MyClass(orig);
Where you copy the elements of MyClass. Depending on how many reference types the class contains this might involve recursive use of copy constructors.
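A sketch of what that can look like (the names are illustrative); note that reference-type members need their own copies:

using System.Collections.Generic;

class MyClass
{
    private int number;
    private List<int> items;

    public MyClass(int number, List<int> items)
    {
        this.number = number;
        this.items = items;
    }

    // Copy constructor: copies contents, not references.
    public MyClass(MyClass other)
    {
        number = other.number;
        items = new List<int>(other.items);
    }
}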
We have cases where we do exactly what you are talking about. We have many objects referencing a particular instance of an object and we want to change the instance of the object so that every object referencing that existing instance see the change.
The pattern we follow is almost what you have - just the names are different:
class A
{
    int i;

    public A(int i)
    {
        this.i = i;
    }

    public void Copy(A source)
    {
        this.i = source.i;
    }
}
Others have suggested cloning in their answer, but that's only part of the deal. You also want to use the results of a (possibly deep) clone to replace the contents of an existing object. That's a very C++-like requirement.
It just doesn't come up very often in C#, so there's no standard method name or operator meaning "replace the contents of this object with a copy of the contents of that object".
The reason it occurs so often in C++ is because of the need to track ownership so that cleanup can be performed. If you have a member:
std::vector<int> ints;
You have the advantage that it will be properly destroyed when the enclosing object is destroyed. But if you want to replace it with a new vector, you need swap to make that efficient. Alternatively you could have:
std::vector<int> *ints;
Now you can swap in a new one easily, but you have to remember to delete the old one first, and in the enclosing class's destructor.
In C# you don't need to worry about that. There's one right way:
List<int> ints = new List<int>();
You don't have to clean it up, and you can swap in a new one by reference. Best of both.
Edit:
If you have multiple "client" objects that need to hold a reference to an object and you want to be able to replace that object, you would make them hold a reference to an intermediate object that would act as a "wrapper".
class Replaceable<T>
{
    public T Instance { get; set; }
}
The other classes would hold a reference to the Replaceable<T>. So would the code that needs to swap in a replacement. e.g.
Replaceable<FileStream> _fileStream;
It might also be useful to declare an event, so clients could subscribe to find out when the stored instance was replaced. Reusable version here.
You could also define implicit conversion operators to remove some syntax noise.
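For instance, a sketch of such a conversion operator added to Replaceable<T>:

class Replaceable<T>
{
    public T Instance { get; set; }

    // Lets a Replaceable<T> be passed wherever a T is expected.
    public static implicit operator T(Replaceable<T> wrapper)
    {
        return wrapper.Instance;
    }
}

// Replaceable<FileStream> _fileStream = ...;
// FileStream stream = _fileStream; // implicit unwrap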
In several WinForms based applications, I've needed similar functionality, in my case to allow a data entry form to work on a copy of the object, information from which is copied onto the original object only if the user elects to save the changes.
To make this work, I brought across an idea from my Delphi days - the Assign() method.
Essentially, I wrote (well, ok, generated) a method that copies across properties (and list contents, etc etc) from one instance to another. This allowed me to write code like this:
var person = PersonRepository.FindByName("Bevan");
...
var copy = new Person();
copy.Assign(person);

using (var form = new PersonDataEntryForm(copy))
{
    if (form.ShowAsModelessDialog() == MessageReturn.Save)
    {
        person.Assign(copy);
    }
}
Changes made within the dialog are private until the user chooses to save them, then the public variable (person) is updated.
An Assign() method for Person might look like this:
public void Assign(Person source)
{
Name = source.Name;
Gender = source.Gender;
Spouse = source.Spouse;
Children.Clear();
Children.AddRange( source.Children);
}
As an aside, having an Assign() method makes a copy-constructor almost trivially easy to write:
public Person(Person original)
    : this()
{
    Assign(original);
}
I wish there was a "second best" answer option, because anyone who mentioned Observer deserves it. The observer pattern would work, however it is not necessary and in my opinion, is overkill.
If multiple objects need to maintain a reference to the same object ("MyClass", below) and you need to perform an assignment to the referenced object ("MyClass"), the easiest way to handle it is to create a ValueAssign function as follows:
public class MyClass
{
    private int a;
    private int b;

    public void ValueAssign(MyClass mo)
    {
        this.a = mo.a;
        this.b = mo.b;
    }
}
Observer would only be necessary if other action was required by the dependent objects at the time of assignment. If you wish to only maintain the reference, this method is adequate. This example here is the same as the example that I proposed in my question, but I feel that it better emphasizes my intent.
Thank you for all your answers. I seriously considered all of them.
I had the exact same problem.
The way I solved it was by putting the object that everything is referencing inside another object and had everything reference the outer object instead. Then you could change the inner object and everything would be able to reference the new inner object.
OuterObject.InnerObject.stuff
It's not possible to overload the assignment operator in C# as it is in C/C++. However, even if that were an option, I'd say you're trying to fix the symptom, not the problem. Your problem is that the assignment of a new reference is breaking code, so why not just assign the read values to the original reference? And if you are afraid that others might new-up and assign, make the object into a singleton or similar so that it can't be altered after creation but the reference stays the same.
If I got it right, you are talking about proper Singleton deserialization.
If you are using .NET native serialization then you might take a look at the MSDN ISerializable example. The example shows exactly that - how to make deserialization return the same instance on each call (via IObjectReference.GetRealObject; ISerializable.GetObjectData itself only writes out the data).
If you are using Xml serialization (XmlSerializer), then you manually implement IXmlSerializable in your object's parent class, and then take care to get a single instance each time.
The simplest way would be to ensure this in your parent property's setter, by consulting some kind of static cache. (I find this pretty dirty, but it's an easy way to do it.)
For example:
public class ParentClass
{
    private ReferencedClass _reference;

    public ReferencedClass Reference
    {
        get { return _reference; }
        set
        {
            // don't assign the value directly; consult the
            // static dictionary to see if we already have
            // the singleton
            _reference = StaticCache.GetSingleton(value);
        }
    }
}
And then you would have a static class with some kind of a dictionary where you could quickly retrieve the singleton instance (or create it if it doesn't exist).
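A sketch of what that static cache might look like; it assumes ReferencedClass exposes some stable Id to key on, so adapt it to whatever identity your data actually has:

using System;
using System.Collections.Generic;

static class StaticCache
{
    private static readonly Dictionary<Guid, ReferencedClass> instances =
        new Dictionary<Guid, ReferencedClass>();

    public static ReferencedClass GetSingleton(ReferencedClass candidate)
    {
        ReferencedClass existing;
        if (instances.TryGetValue(candidate.Id, out existing))
            return existing;                 // reuse the shared instance

        instances[candidate.Id] = candidate; // first time we've seen this id
        return candidate;
    }
}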
Although this may work for you, I also agree with the others that this is rarely the best (or only) way to do it. There is surely a way to refactor your code so that this becomes unnecessary, but you should provide some additional info about the intended usage, where this data is accessed from, or simply why your classes really need to reference a single object.
[Edit]
Since you are using a static cache similar to a singleton, you should take care to implement it properly. That means several things:
Your cache class should have a private constructor. You don't want anyone to create a new instance explicitly - this is not an option when using singletons. So this means you should expose some public static property like Cache.Instance which will always return the reference to the same private object.
Since the constructor is private, you are sure that only Cache class can create the instance during initialization (or first update).
Details on implementing this pattern can be found at http://www.yoda.arachsys.com/csharp/singleton.html (which is also a great thread-safe implementation).
When all your objects share the same instance, you can simply notify the cache to update the single private instance (e.g. call Cache.Update() from somewhere). This way you update the only instance that everyone is using.
But it's still not clear from your example how exactly you are notifying your clients that the data has been updated. An event-driven mechanism would work better, as it would allow you to decouple your code; Singletons are evil.
What if you did this:
public class B
{
    public B(int i) { A = new A(i); }
    public A A { get; set; }
}
...
void Main()
{
    B b1 = new B(1);
    A a2 = new A(2);
    b1.A = a2;
}
Throughout your program, references to A should only be accessed via an instance of B. When you reassign b.A, you're changing the reference of A, but that doesn't matter because all of your external references are still pointing to B. This has a lot of similarities with your original solution, but it allows you to change A any way you want without having to update its ValueAssign method.