I have a class library containing several structures each consisting of several value and reference types. Most of the value types are mandatory, a few value types and all reference types are optional. All structures are XmlSerializable (which is mandatory).
Since the class library is targeted at mobile devices, I want to reduce the memory footprint. My first idea was to use Nullable<T> for the value types, but this increases the memory size by 4 bytes per Nullable<T>. My second idea is to pack all optional value types into a separate structure that is only instantiated when any of its members is needed. But this would force me to implement IXmlSerializable on the "main" structure.
Are there any other approaches to "shrink" the structures?
[EDIT]
Beg your pardon for this bad question. I think I have to clarify some things and get more specific:
The class library is designed to serialize data into GPX (GPS Exchange Format). The structures are e.g. Waypoint or Track. They have mandatory fields such as latitude, longitude, etc. Optional fields are Vertical/Horizontal/Position Dilution of Precision, a description, and a link.
The library is mainly targeted at mobile devices such as PDAs. RAM is scarce, but plenty of non-volatile memory is available.
Code examples cannot be shown because there are none yet. I want to think through several pitfalls before starting the implementation.
Here is a technique to aggressively reduce in-memory overhead while still allowing XML serialization.
Update: the original inline linked-list idea is more efficient for one and two entries than a standard list-with-count construct, but the use of fixed-size optionals for the zero, one and two entry cases is even more efficient.
Proviso: this is predicated on you knowing that you really do need to shave the memory; as such (since you haven't done any coding yet) this may well be a massively premature optimization. This design is also predicated on the optional fields being very rare.
I use double as a placeholder; whatever format best allows you to represent the precision/units involved should be used.
public class WayPoint
{
// consumes IntPtr.Size fixed cost
private IOptional optional = OptionalNone.Default;
public double Latitude { get; set; }
public double Longitude { get; set; }
public double Vertical
{
get { return optional.Get<double>("Vertical") ?? 0.0; }
set { optional = optional.Set<double>("Vertical", value); }
}
[XmlIgnore] // need this pair for every value type
public bool VerticalSpecified
{
get { return optional.Get<double>("Vertical").HasValue; }
}
public void ClearVertical()
{
optional = optional.Clear("Vertical"); // IOptional.Clear is non-generic
}
public string Description // setting to null clears it
{
get { return optional.GetRef<string>("Description"); }
set { optional = optional.Set<string>("Description", value); }
}
// Horizontal, Position, DilutionOfPrecision etc.
}
The real heavy lifting is done here:
internal interface IOptional
{
T? Get<T>(string id) where T : struct;
T GetRef<T>(string id) where T : class;
IOptional Set<T>(string id, T value);
IOptional Clear(string id);
}
internal sealed class OptionalNone : IOptional
{
public static readonly OptionalNone Default = new OptionalNone();
public T? Get<T>(string id) where T : struct
{
return null;
}
public T GetRef<T>(string id) where T : class
{
return null;
}
public IOptional Set<T>(string id, T value)
{
if (value == null)
return Clear(id);
return new OptionalWithOne<T>(id, value);
}
public IOptional Clear(string id)
{
return this; // no effect
}
}
The fixed-size ones become more interesting to write. There is no point writing these as structs, as they would be boxed when placed in the IOptional field within the WayPoint class.
internal sealed class OptionalWithOne<X> : IOptional
{
private string id1;
private X value1;
public OptionalWithOne(string id, X value)
{
this.id1 = id;
this.value1 = value;
}
public T? Get<T>(string id) where T : struct
{
if (string.Equals(id, this.id1))
return (T)(object)this.value1;
return null;
}
public T GetRef<T>(string id) where T : class
{
if (string.Equals(id, this.id1))
return (T)(object)this.value1;
return null;
}
public IOptional Set<T>(string id, T value)
{
if (string.Equals(id, this.id1))
{
if (value == null)
return OptionalNone.Default;
this.value1 = (X)(object)value;
return this;
}
else
{
if (value == null)
return this;
return new OptionalWithTwo<X,T>(this.id1, this.value1, id, value);
}
}
public IOptional Clear(string id)
{
if (string.Equals(id, this.id1))
return OptionalNone.Default;
return this; // no effect
}
}
Then for two (you can extend this idea as far as you want, but as you can see the code gets unpleasant quickly):
internal sealed class OptionalWithTwo<X,Y> : IOptional
{
private string id1;
private X value1;
private string id2;
private Y value2;
public OptionalWithTwo(
string id1, X value1,
string id2, Y value2)
{
this.id1 = id1;
this.value1 = value1;
this.id2 = id2;
this.value2 = value2;
}
public T? Get<T>(string id) where T : struct
{
if (string.Equals(id, this.id1))
return (T)(object)this.value1;
if (string.Equals(id, this.id2))
return (T)(object)this.value2;
return null;
}
public T GetRef<T>(string id) where T : class
{
if (string.Equals(id, this.id1))
return (T)(object)this.value1;
if (string.Equals(id, this.id2))
return (T)(object)this.value2;
return null;
}
public IOptional Set<T>(string id, T value)
{
if (string.Equals(id, this.id1))
{
if (value == null)
return Clear(id);
this.value1 = (X)(object)value;
return this;
}
else if (string.Equals(id, this.id2))
{
if (value == null)
return Clear(id);
this.value2 = (Y)(object)value;
return this;
}
else
{
if (value == null)
return this;
return new OptionalWithMany(
this.id1, this.value1,
this.id2, this.value2,
id, value);
}
}
public IOptional Clear(string id)
{
if (string.Equals(id, this.id1))
return new OptionalWithOne<Y>(this.id2, this.value2);
if (string.Equals(id, this.id2))
return new OptionalWithOne<X>(this.id1, this.value1);
return this; // no effect
}
}
Before finally ending with the relatively inefficient
internal sealed class OptionalWithMany : IOptional
{
private List<string> ids = new List<string>();
// this boxes, if you had a restricted set of data types
// you could do a per type list and map between them
// it is assumed that this is sufficiently uncommon that you don't care
private List<object> values = new List<object>();
public OptionalWithMany(
string id1, object value1,
string id2, object value2,
string id3, object value3)
{
this.ids.Add(id1);
this.values.Add(value1);
this.ids.Add(id2);
this.values.Add(value2);
this.ids.Add(id3);
this.values.Add(value3);
}
public T? Get<T>(string id) where T : struct
{
for (int i= 0; i < this.values.Count;i++)
{
if (string.Equals(id, this.ids[i]))
return (T)this.values[i];
}
return null;
}
public T GetRef<T>(string id) where T : class
{
for (int i= 0; i < this.values.Count;i++)
{
if (string.Equals(id, this.ids[i]))
return (T)this.values[i];
}
return null;
}
public IOptional Set<T>(string id, T value)
{
for (int i= 0; i < this.values.Count;i++)
{
if (string.Equals(id, this.ids[i]))
{
if (value == null)
return Clear(id);
this.values[i] = value;
return this;
}
}
if (value != null)
{
this.ids.Add(id);
this.values.Add(value);
}
return this;
}
public IOptional Clear(string id)
{
for (int i= 0; i < this.values.Count;i++)
{
if (string.Equals(id, this.ids[i]))
{
this.ids.RemoveAt(i);
this.values.RemoveAt(i);
return ShrinkIfNeeded();
}
}
return this; // no effect
}
private IOptional ShrinkIfNeeded()
{
if (this.ids.Count == 2)
{
//return new OptionalWithTwo<X,Y>(
// this.ids[0], this.values[0],
// this.ids[1], this.values[1]);
return (IOptional)
typeof(OptionalWithTwo<,>).MakeGenericType(
// this is a bit risky.
// your value types may not use inheritance
this.values[0].GetType(),
this.values[1].GetType())
.GetConstructors().First().Invoke(
new object[]
{
this.ids[0], this.values[0],
this.ids[1], this.values[1]
});
}
return this;
}
}
OptionalWithMany could be written rather better than this but it gives you the idea.
With restricted type support you could do a global Key -> value map per type 'heap' like so:
internal struct Key
{
public readonly OptionalWithMany Owner; // field name added for illustration
public readonly string Id;
// define equality and hashcode as per usual
}
Then simply store the list of Ids currently in use within OptionalWithMany. Shrinking would be slightly more complex (but better from a type point of view), since you would scan each global 'heap' until you found the matching entry and use the type of that heap to construct the OptionalWithTwo. This would also allow polymorphism in the property values.
Regardless of the internals the primary benefit of this is that the public surface of the WayPoint class hides all this entirely.
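For illustration, typical usage might look like this (a sketch assuming the WayPoint shown above; the promotion and shrinking between the Optional* implementations is invisible to the caller):

var wp = new WayPoint { Latitude = 51.5074, Longitude = -0.1278 };
// only the shared OptionalNone.Default is referenced so far
wp.Vertical = 2.5;           // promotes to OptionalWithOne<double>
wp.Description = "summit";   // promotes to OptionalWithTwo<double, string>
wp.ClearVertical();          // shrinks back to OptionalWithOne<string>
Console.WriteLine(wp.VerticalSpecified); // False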
You can then set up the class however you want for serialization, through attributes or IXmlSerializable (which would remove the need for the annoying xxxSpecified properties).
I used strings as Id's for simplicity in my example.
If you really care about size and speed you should change the Id's to be enumerations. Given packing behaviour this won't save you much even if you can fit all needed values into a byte but it would give you compile time sanity checking. The strings are all compile time constants so occupy next to no space (but are slower for checking equality).
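A minimal sketch of that change, assuming a closed set of optional fields (the enum name and members are illustrative):

internal enum FieldId : byte
{
    Vertical,
    Horizontal,
    PositionDilution,
    Description,
    Link
}

// The IOptional members would then take a FieldId instead of a string, e.g.
// T? Get<T>(FieldId id) where T : struct;
// and comparisons become simple integer equality checks.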
I urge you to only do something like this after you check that it is needed. The plus side is that this does not limit your xml serialization so you can mould it to whatever format you desire. Also the public face of the 'data packet' can be kept clean (except for the xxxSpecified junk).
If you want to avoid the xxxSpecified hassles and you know you have some 'out of band' values you can use the following trick:
[DefaultValue(double.MaxValue)]
public double Vertical
{
get { return optional.Get<double>("Vertical") ?? double.MaxValue; }
set { optional = optional.Set<double>("Vertical", value); }
}
public void ClearVertical()
{
optional = optional.Clear("Vertical");
}
However, the rest of your API must be capable of detecting these special values. In general I would say that the Specified route is better.
If a particular set of properties becomes 'always available' on certain devices, or in certain modes, you should switch to alternate classes where those properties are plain ones. Since the XML form will be identical, the two can interoperate simply and easily, but memory usage in those cases will be much lower.
If the number of these groups becomes large you may even consider a code-gen scenario (at runtime even, though this increases your support burden considerably).
For some serious fun:
Apply the Flyweight pattern and store all instances in a bitmap? On a small-memory device you don't need 4-byte pointers.
[Edit] With Flyweight, you can have a separate storage strategy for each field. I do not suggest storing the string value directly in the bitmap, but you could store an index.
The type is not stored in the bitmap, but in the unique object factory.
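A minimal sketch of the idea, with illustrative names (the "bitmap" here is just a pre-allocated block of parallel arrays owned by a factory; callers hold a 2-byte handle instead of an object reference):

internal static class WaypointStore
{
    private static readonly double[] latitudes = new double[1024];
    private static readonly double[] longitudes = new double[1024];
    private static ushort count;

    // returns a small handle rather than a 4-byte reference
    public static ushort Add(double lat, double lon)
    {
        latitudes[count] = lat;
        longitudes[count] = lon;
        return count++;
    }

    public static double GetLatitude(ushort handle) { return latitudes[handle]; }
    public static double GetLongitude(ushort handle) { return longitudes[handle]; }
}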
It is probably good to know that the XmlSerializer doesn't care about your internal object layout, it only cares about your public fields and properties. You can hide the internal memory optimizations behind your property accessors, and the XmlSerializer wouldn't even know.
For instance, if you know that you usually have only 2 references set, but on occasion more, you can store the two frequent ones as part of your main object, and hide the infrequent ones inside an object[] or ListDictionary or a specialized private class of your own making. However, be careful that each indirect container object also contains overhead, as it needs to be a reference type. Or when you have 8 nullable integers as part of your public contract, internally you could use 8 regular integers and a single byte containing the is-this-int-null status as its bits.
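As a rough sketch of that last idea (names are illustrative, not from the original post):

public class Readings
{
    private int hdop;                // more ints as needed
    private byte presenceFlags;      // bit n set => value n is present

    public int? Hdop
    {
        get { return (presenceFlags & 0x01) != 0 ? hdop : (int?)null; }
        set
        {
            if (value.HasValue) { hdop = value.Value; presenceFlags |= 0x01; }
            else { presenceFlags &= unchecked((byte)~0x01); }
        }
    }
    // the next nullable int would use bit 0x02, and so on
}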
If you want to specialize even further, perhaps create specialized subclasses depending on the available data, you would have to go the route of IXmlSerializable, but usually that's not really needed.
You can do a couple of things:
Make sure to use the smallest type possible for a particular value. For example, if you look at the schema, dgpsStationType has a min value of 0, and a max of 1023. This can be stored as a ushort. Reduce the size of these items when possible.
Make sure that your fields are 4-byte aligned. The resulting size of your structure will be some multiple of 4 bytes (assuming 32-bit). Note that classes default to automatic layout, where the runtime may reorder fields, while structs default to sequential layout; with sequential layout, poorly ordered fields force padding to be inserted to keep them aligned. You can specify the layout explicitly using StructLayoutAttribute.
Bad example: laid out sequentially, these fields take up 12 bytes. The int must occupy 4 contiguous, 4-byte-aligned bytes, so padding is inserted around the other members.
public class Bad {
byte a;
byte b;
int c;
ushort u;
}
Better example: laid out sequentially, these fields pack into 8 bytes with no padding.
public class Better {
byte a;
byte b;
ushort u;
int c;
}
Reduce the size of your object graph. Each reference type instance carries about 8 bytes of overhead. If you've got a deep graph, that's a lot of overhead. Pull everything you can into functions that operate on data in your main class. Think more 'C'-like, and less OOD.
It's still a good idea to lazy-load some optional parameters, but you should draw your line clearly. Create one or maybe two sets of 'optional' values that can be loaded or left null; each set mandates one reference type, and its overhead (see the sketch below).
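Sketching that idea (field names are illustrative): the extra reference costs 4 bytes per waypoint, but the optional block is only allocated for waypoints that actually need it.

public class Waypoint
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }

    private OptionalData optional;   // stays null until an optional member is set

    private OptionalData Optional
    {
        get { return optional ?? (optional = new OptionalData()); }
    }

    public string Description
    {
        get { return optional == null ? null : optional.Description; }
        set { Optional.Description = value; }
    }

    private class OptionalData
    {
        public double Hdop;
        public double Vdop;
        public string Description;
        public string Link;
    }
}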
Use structs where you can. Be careful of value-type semantics though, they can be tricky.
Consider not implementing ISerializable or IXmlSerializable on the data classes at all; instead, implement the XML serialization manually in an external class. (Note that every .NET object instance already carries a method-table pointer, so implementing an interface or adding virtual methods does not by itself grow each instance; the win here is keeping the data classes themselves as small and simple as possible.)
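A minimal sketch of what the external serialization could look like, assuming a simple Waypoint class like the one sketched above (the element and attribute names are illustrative, not the full GPX schema):

using System.Globalization;
using System.Xml;

internal static class WaypointXmlWriter
{
    public static void Write(XmlWriter writer, Waypoint wpt)
    {
        writer.WriteStartElement("wpt");
        writer.WriteAttributeString("lat", wpt.Latitude.ToString(CultureInfo.InvariantCulture));
        writer.WriteAttributeString("lon", wpt.Longitude.ToString(CultureInfo.InvariantCulture));
        if (wpt.Description != null)
            writer.WriteElementString("desc", wpt.Description);
        writer.WriteEndElement();
    }
}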
Build your own serialization in order to minimize your structure, and serialize to binary rather than XML.
Something along the lines of:
internal void Save(BinaryWriter w)
{
w.Write(this.id);
w.Write(this.name);
byte[] bytes = Encoding.UTF8.GetBytes(this.MyString);
w.Write(bytes.Length);
w.Write(bytes);
w.Write(this.tags.Count); // nested struct/class
foreach (Tag tag in this.tags)
{
tag.Save(w);
}
}
and have a constructor which builds it back up
public MyClass(BinaryReader reader)
{
this.id = reader.ReadUInt32();
etc.
}
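For completeness, a fuller sketch of that constructor, mirroring the fields written by the Save example above (the Tag(BinaryReader) constructor and the tags list are assumptions for illustration):

public MyClass(BinaryReader reader)
{
    this.id = reader.ReadUInt32();
    this.name = reader.ReadString();   // BinaryWriter.Write(string) is length-prefixed
    int byteCount = reader.ReadInt32();
    this.MyString = Encoding.UTF8.GetString(reader.ReadBytes(byteCount));
    int tagCount = reader.ReadInt32();
    this.tags = new List<Tag>(tagCount);
    for (int i = 0; i < tagCount; i++)
        this.tags.Add(new Tag(reader));
}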
Some sort of binary serialization will often do much better than XML serialization. You'll have to try it out for your specific data structures to see if you gain much.
Check out MSDN for an example using BinaryFormatter.
I have already made the same bug twice in code like the following:
void Foo(Guid appId, Guid accountId, Guid paymentId, Guid whateverId)
{
...
}
Guid appId = ....;
Guid accountId = ...;
Guid paymentId = ...;
Guid whateverId =....;
//BUG - parameters are swapped - but compiler compiles it
Foo(appId, paymentId, accountId, whateverId);
OK, I want to prevent these bugs, so I created strongly typed GUIDs:
[ImmutableObject(true)]
public struct AppId
{
private readonly Guid _value;
public AppId(string value)
{
var val = Guid.Parse(value);
CheckValue(val);
_value = val;
}
public AppId(Guid value)
{
CheckValue(value);
_value = value;
}
private static void CheckValue(Guid value)
{
if(value == Guid.Empty)
throw new ArgumentException("Guid value cannot be empty", nameof(value));
}
public override string ToString()
{
return _value.ToString();
}
}
And another one for PaymentId:
[ImmutableObject(true)]
public struct PaymentId
{
private readonly Guid _value;
public PaymentId(string value)
{
var val = Guid.Parse(value);
CheckValue(val);
_value = val;
}
public PaymentId(Guid value)
{
CheckValue(value);
_value = value;
}
private static void CheckValue(Guid value)
{
if(value == Guid.Empty)
throw new ArgumentException("Guid value cannot be empty", nameof(value));
}
public override string ToString()
{
return _value.ToString();
}
}
These structs are almost the same; there is a lot of code duplication, isn't there?
I cannot figure out any elegant way to solve this except by using a class instead of a struct. I would rather use a struct because of the null checks, smaller memory footprint, no garbage collector overhead, etc.
Do you have some idea how to use struct without duplicating code?
First off, this is a really good idea. A brief aside:
I wish C# made it easier to create cheap typed wrappers around integers, strings, ids, and so on. We are very "string happy" and "integer happy" as programmers; lots of things are represented as strings and integers which could have more information tracked in the type system; we don't want to be assigning customer names to customer addresses. A while back I wrote a series of blog posts (never finished!) about writing a virtual machine in OCaml, and one of the best things I did was wrapped every integer in the virtual machine with a type that indicates its purpose. That prevented so many bugs! OCaml makes it very easy to create little wrapper types; C# does not.
Second, I would not worry too much about duplicating the code. It's mostly an easy copy-paste, and you are unlikely to edit the code much or make mistakes. Spend your time solving real problems. A little copy-pasted code is not a big deal.
If you do want to avoid the copy-pasted code, then I would suggest using generics like this:
struct App {}
struct Payment {}
public struct Id<T>
{
private readonly Guid _value;
public Id(string value)
{
var val = Guid.Parse(value);
CheckValue(val);
_value = val;
}
public Id(Guid value)
{
CheckValue(value);
_value = value;
}
private static void CheckValue(Guid value)
{
if(value == Guid.Empty)
throw new ArgumentException("Guid value cannot be empty", nameof(value));
}
public override string ToString()
{
return _value.ToString();
}
}
And now you're done. You have types Id<App> and Id<Payment> instead of AppId and PaymentId, but you still cannot assign an Id<App> to Id<Payment> or Guid.
Also, if you like using AppId and PaymentId then at the top of your file you can say
using AppId = MyNamespace.Whatever.Id<MyNamespace.Whatever.App>;
and so on.
Third, you will probably need a few more features in your type; I assume this is not done yet. For example, you'll probably need equality, so that you can check to see if two ids are the same.
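A sketch of what that could look like on the Id<T> struct above (only the extra members are shown; the existing constructors stay as they are):

public struct Id<T> : IEquatable<Id<T>>
{
    private readonly Guid _value;

    // ... constructors and ToString() as above ...

    public bool Equals(Id<T> other) { return _value == other._value; }
    public override bool Equals(object obj) { return obj is Id<T> && Equals((Id<T>)obj); }
    public override int GetHashCode() { return _value.GetHashCode(); }
    public static bool operator ==(Id<T> left, Id<T> right) { return left.Equals(right); }
    public static bool operator !=(Id<T> left, Id<T> right) { return !left.Equals(right); }
}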
Fourth, be aware that default(Id<App>) still gives you an "empty guid" identifier, so your attempt to prevent that does not actually work; it will still be possible to create one. There is not really a good way around that.
We do the same, it works great.
Yes, it's a lot of copy and paste, but that is exactly what code-generation is for.
In Visual Studio, you can use T4 templates for this. You basically write your class once and then have a template where you say "I want this class for App, Payment, Account,..." and Visual Studio will generate you one source code file for each.
That way you have one single source (The T4 template) where you can make changes if you find a bug in your classes and it will propagate to all your Identifiers without you having to think about changing all of them.
This has a nice side effect. You can have these overloads for an add:
void Add(Account account);
void Add(Payment payment);
However you cannot have overloads for the get:
Account Get(Guid id);
Payment Get(Guid id);
I always disliked this asymmetry. You have to do:
Account GetAccount(Guid id);
Payment GetPayment(Guid id);
With the above approach this is possible:
Account Get(Id<Account> id);
Payment Get(Id<Payment> id);
Symmetry achieved.
This is fairly easy to implement using a record struct.
public readonly record struct UserId(Guid Id)
{
public override string ToString() => Id.ToString();
public static implicit operator Guid(UserId userId) => userId.Id;
}
The implicit operator allows us to use the strongly typed UserId as a regular Guid where applicable.
var id = Guid.NewGuid();
GuidTypeImportant(id); // ERROR
GuidTypeImportant(new UserId(id)); // OK
DontCareAboutGuidType(new UserId(id)); // OK
DontCareAboutGuidType(id); // OK
void GuidTypeImportant(UserId id) { }
void DontCareAboutGuidType(Guid id) { }
You might be able to use subclassing with a different programming language.
I have a C# class which I need to use to instantiate millions of objects, so I need to make the class lightweight and fast. I have declared some functions in it, and my concern is whether declaring all those functions will make the class slower or consume more memory. I also have the choice to declare those functions in another class. Here is the class:
internal class Var
{
public dynamic data;
public int index;
public VarTypes type;
public bool doClone = false;
public Var Clone(bool doClone)
{
var tmpVar = Clone();
tmpVar.doClone = doClone;
return tmpVar;
}
public Var Clone()
{
if (doClone)
return new Var() { data = data, index = index, type = type };
else
return this;
}
public void Clone(Var old)
{
this.data = old.data;
this.index = old.index;
this.type = old.type;
}
public override string ToString()
{
if (type == VarTypes.Function)
{
StringBuilder builder = new StringBuilder("function ");
if (data.Count == 4)
builder.Append(data[3].ToString());
builder.Append("(");
for (int i = 1; i < data[1][1].Count; i++)
builder.Append(data[1][1][i].ToString() + ",");
if (builder[builder.Length - 1] == ',')
builder.Remove(builder.Length - 1, 1);
builder.Append(")");
return builder.ToString();
}
else
return data.ToString();
}
}
Your class instances will not consume more memory as a result of adding more methods to the class. A class instance has a constant minimum size and then its size increases only as you add fields (or autoproperties, in the sense that each autoproperty adds a field for you). This is because when you're instantiating a class, you're really instantiating a memory region that (for the most part) only contains the values of the fields of that instance.
The minimum size exists because each class instance stores some information that enables various operations of the runtime, such as the GC. This information is mainly stored in the form of pointers to type-wide internal structures of the runtime, which means that they don't scale with the number of class instances - you'll get the same flat overhead for storing a type's methods whether you instantiate zero or a thousand instances.
From another answer, if you're worried about function call overhead, turn on aggressive inlining for each method:
// in mscorlib.dll so should not need to include extra references
using System.Runtime.CompilerServices;
⋮
[MethodImpl(MethodImplOptions.AggressiveInlining)]
void MyMethod(...)
I'm running into an issue with XML serialization of my own class. It is a derived class, which doesn't naturally have a parameterless constructor - I had to add one just for the sake of serialization. Of course, because of that I'm running into a dependency/order issue.
Here's a simplification, which I hope still illustrates the problem (I reserve the right to augment the illustration if it turns out I didn't capture the problem - I just didn't want to dump a complicated Object Model on you :))
public class Base{
public virtual Vector Value{ get; set;}
}
public class Derived : Base{
public Vector Coefficient { get; set; }
public override Vector Value{
get { return base.Value * Coefficient; }
set { base.Value = value / Coefficient; }
}
}
EDIT: to avoid confusion, I substituted the value type double in the original post with a not-shown-here Vector type
When XmlSerializer de-serializes Derived, I run into a null value exception - both base.Value and this.Coefficient are null.
Is there any way to fix this?
It seems that a lot of the issues here stem from using your domain model for serialization. Now, this can work, but it can also be hugely problematic if your domain model deviates even slightly from what the serializer wants to do.
I strongly suggest trying to add a second, parallel representation of the data as a "DTO model" - meaning a set of objects whose job is to represent the data for serialization. So instead of a complicated property with calculations and dependencies, you just have:
public double SomeValue { get; set; }
etc. The key point is that it is simple and represents the data, not your system's rules. You serialize to/from this model - which should now be simple - and you map this to/from your domain model. Conversion operators can be useful, but a simple "ToDomainModel" / "FromDomainModel" method works fine too. Likewise, tools like AutoMapper might help, but 15 lines of DTO-to/from-domain code isn't going to hurt either.
This avoids issues with:
constructors
non-public members
assignment order
read-only members
versioning
And a range of other common pain points in serialization.
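For the Base/Derived example above, a sketch of the DTO shape and mapping might look like this (the names are illustrative, and the Vector operators are assumed from the question):

public class DerivedDto
{
    // plain, settable properties only - no calculations, no dependencies
    public Vector BaseValue { get; set; }
    public Vector Coefficient { get; set; }
}

public static class DerivedMapper
{
    public static DerivedDto ToDto(Derived d)
    {
        return new DerivedDto { Coefficient = d.Coefficient, BaseValue = d.Value / d.Coefficient };
    }

    public static Derived FromDto(DerivedDto dto)
    {
        var d = new Derived { Coefficient = dto.Coefficient };
        d.Value = dto.BaseValue * dto.Coefficient;   // the Value setter divides by Coefficient again
        return d;
    }
}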
You need to tell the serializer that your base object has derived items. Try:
[XmlInclude(typeof(Derived))]
public class Base {
Alternatively, you can explain this at run time with:
public XmlSerializer(Type type, Type[] extraTypes){..}
In your case: new XmlSerializer(typeof(Base), new Type[] { typeof(Derived), ..});
And to make things even more generic, if there is a huge hierarchy, you can use reflection to get a list of the derived types:
// You'll want to cache this result, and it could be a lot of work to run this
// multiple times if you have lots of classes
var knownTypes = Assembly.GetExecutingAssembly().GetTypes().Where(
t => typeof(Base).IsAssignableFrom(t)).ToArray();
var serializer = new XmlSerializer(typeof(Base), knownTypes);
One problem with your Value getter and setter is that if Coefficient is not loaded at the time Value is deserialized, you will get divide-by-zero errors. Even worse, it might not break but instead do the calculation against an incorrect value, since Coefficient may still hold a pre-deserialization value. The following solves the divide-by-zero situation and updates the value correctly if Coefficient loads second. In truth, these situations are usually handled better by serializing the non-calculated value and using [XmlIgnoreAttribute] on the derived property.
public class Derived : Base
{
    public override double Value
    {
        get { return base.Value * Coefficient; }
        set
        {
            if (Coefficient == 0)
            {
                base.Value = value;
            }
            else
            {
                base.Value = value / Coefficient;
            }
        }
    }

    private double _coefficient;
    public double Coefficient
    {
        get { return _coefficient; }
        set
        {
            if (Coefficient == 0)
            {
                double temp = base.Value;
                _coefficient = value;
                Value = temp;   // re-run the Value setter now that Coefficient is known
            }
            else
            {
                _coefficient = value;
            }
        }
    }
}
// Example serializing the unmodified value instead
public double Coefficient { get; set; }
public double BaseValue { get; set; }

[XmlIgnoreAttribute]
public double Value
{
    get { return BaseValue * Coefficient; }
    set
    {
        if (Coefficient != 0)
        {
            BaseValue = value / Coefficient;
        }
        else
        {
            BaseValue = value;
        }
    }
}
Solved. Thanks for the explanations, guys; I didn't fully understand the implications of using a value type in this situation.
I have a struct that I'm using from a static class, and it is showing unexpected behavior when I print its internal state at runtime. Here's my struct:
public struct VersionedObject
{
public VersionedObject(object o)
{
m_SelectedVer = 0;
ObjectVersions = new List<object>();
ObjectVersions.Add(o);
}
private int m_SelectedVer;
public int SelectedVersion
{
get
{
return m_SelectedVer;
}
}
public List<object> ObjectVersions;//Clarifying: This is only used to retrieve values, nothing is .Added from outside this struct in my code.
public void AddObject(object m)
{
ObjectVersions.Add(m);
m_SelectedVer = ObjectVersions.Count - 1;
}
}
Test code
VersionedObject vo = new VersionedObject(1);
vo.AddObject(2);//This is the second call to AddObject()
//Expected value of vo.SelectedVersion: 1
//Actual value of vo.SelectedVersion: 1
Now, if you test this code in isolation, i.e., copy it into your project to give it a whirl, it will return the expected result.
The problem: what I'm observing in my production code is this debug output:
objectName, ObjectVersions.Count:2, SelectedVer:0,
Why? From my understanding, and testing, this should be completely impossible under any circumstances.
My random guess is that there is some sort of immutability going on: that for some reason a new struct is being instanced via the default constructor, the ObjectVersions data is being copied over, but m_SelectedVer is private and cannot be copied into the new struct?
Does my use of Static classes and methods to manipulate the struct have anything to do with it?
I'm so stumped I'm just inventing wild guesses at this point.
A struct is a value type, so most likely you are creating multiple copies of your object in your actual code.
Consider simply changing the struct to a class, as the content of your struct is not really a good fit for a value type (it is mutable and also contains a mutable reference type).
More on "struct is value type":
First check the FAQ, which has many good answers already.
Value types are passed by value, so if you call a function to update such an object it will not update the original. You can treat it like passing an integer value to a function: would you expect SomeFunction(42) to be able to change the value of 42?
struct MyStruct { public int V;}
void UpdateStruct(MyStruct x)
{
x.V = 42; // updates copy of passed in object, changes will not be visible outside.
}
....
var local = new MyStruct { V = 13 };
UpdateStruct(local); // Hope to get local.V == 42
if (local.V == 13) {
// Expected. copy inside UpdateStruct updated,
// but this "local" is untouched.
}
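By contrast, a small added illustration (not part of the original answer): passing the struct by reference lets the callee update the caller's variable.

void UpdateStructByRef(ref MyStruct x)
{
    x.V = 42; // modifies the caller's variable directly
}

var local2 = new MyStruct { V = 13 };
UpdateStructByRef(ref local2);
// local2.V is now 42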
Why is this a struct and not a class? Even better, why are you tracking the size of the backing store (List<T>) rather than letting the List<T> track that for you? Since that underlying backing store is public, it can be manipulated without your struct's knowledge. I suspect something in your production code is adding to the backing store without going through your struct.
If it were me, I'd set it up something like this, though I'd make it a class...but that's almost certainly a breaking change:
public struct VersionedObject
{
public VersionedObject() // note: an explicit parameterless constructor on a struct requires C# 10 or later
{
this.ObjectVersions = new List<object>() ;
return ;
}
public VersionedObject(object o) : this()
{
ObjectVersions.Add(o);
return ;
}
public VersionedObject( params object[] o ) : this()
{
ObjectVersions.AddRange( o ) ;
return ;
}
public int SelectedVersion
{
get
{
int value = this.ObjectVersions.Count - 1 ;
return value ;
}
}
public List<object> ObjectVersions ;
public void AddObject(object m)
{
ObjectVersions.Add(m);
return ;
}
}
You'll note that this has the same semantics as your struct, but the SelectedVersion property now reflects what's actually in the backing store.
Assuming I have a struct:
struct Vector
{
public int X, Y;
// ...
// some other stuff
}
and a class:
class Map
{
public Vector this[int i]
{
get
{
return elements[i];
}
set
{
elements[i] = value;
}
}
private Vector[] elements;
// ...
// some other stuff
}
I want to be able to do something like: map[index].X = 0; but I can't, because the return value is not a variable.
How do I do this, if at all possible?
You should avoid mutable structs.
If you want your type to be mutable use a class instead.
class Vector
{
public int X { get; set; } // Use public properties instead of public fields!
public int Y { get; set; }
// ...
// some other stuff
}
If you want to use a struct, make it immutable:
struct Vector
{
private readonly int x; // Immutable types should have readonly fields.
private readonly int y;
public int X { get { return x; }} // No setter.
public int Y { get { return y; }}
// ...
// some other stuff
}
The compiler prevents you from doing this because the indexer returns a copy of the object, not a reference (structs are returned by value). You would be modifying that copy and simply never see the result, so the compiler helps you avoid this situation.
If you want to handle such a situation you should use a class instead, or change the way you deal with Vector: don't modify its values, but initialize them in the constructor. More on this topic: Why are mutable structs "evil"?
define Vector as a class,
or
store the value in a temporary variable:
var v = map[index];
v.X = 0;
map[index] = v;
or
add a function that returns a modified copy:
map[index] = map[index].Offset()
or
let the [] operator return a setter class:
class Setter
{
    public Vector[] Data;
    public int Index;
    public int X
    {
        // read and write the element in place via the captured array and index
        get { return Data[Index].X; }
        set { var v = Data[Index]; v.X = value; Data[Index] = v; }
    }
}
public Setter this[int i]
{
get
{
return new Setter() { Data = elements, Index= i };
}
}
Although generic collection classes work pretty well for many purposes, they do not provide any reasonable way to access structs by reference. This is unfortunate, since in many cases a collection of structs would offer better performance (both reduced memory footprint and improved cache locality) and clearer semantics than a collection of class objects. When using arrays of structs, one can use a statement like ArrayOfRectangle[5].X += 3; with very clear effect: it will update field X of ArrayOfRectangle[5] but it will not affect field X of any other storage location of type Rectangle. The only things one needs to know to be certain of that are that ArrayOfRectangle is a Rectangle[] and that Rectangle is a struct with a public int field X. If Rectangle were a class, and the instance held in ArrayOfRectangle[5] had ever been exposed to the outside world, it could be difficult or impossible to determine whether the instance referred to by ArrayOfRectangle[5] was also held by some other code which was expecting that field X of its instance wouldn't change. Such problems are avoided when using structures.
Given the way .net's collections are implemented, the best one can do is usually to make a copy of a struct, modify it, and store it back. Doing that is somewhat icky, but for structs that aren't too big, the improved memory footprint and cache locality achieved by using value types may outweigh the extra code to explicitly copy objects from and to the data structures. It will almost certainly be a major win compared with using immutable class types.
Incidentally, what I'd like to see would be for collections to expose methods like:
OperateOnElement<paramType>(int index, ref T element, ref paramType param, ActionByRef<T,paramType> proc) which would call proc with the appropriate element of the collection along with the passed-in parameter. Such routines could in many cases be called without having to create closures; if such a pattern were standardized, compilers could even use it to auto-generate field-update code nicely.
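As a sketch (under my own, assumed naming) of what such a method could look like for plain arrays - the standard List<T> API does not hand out references to its elements, but an array-backed collection can:

public delegate void ActionByRef<T, TParam>(ref T element, ref TParam param);

public static class ArrayExtensions
{
    public static void OperateOnElement<T, TParam>(
        this T[] array, int index, ref TParam param, ActionByRef<T, TParam> proc)
    {
        // the element is passed by reference, so proc updates it in place
        proc(ref array[index], ref param);
    }
}

// Usage, given a Rectangle[] where Rectangle has a public int field X - no
// copy-out/copy-back and no closure allocation:
// int delta = 3;
// rectangles.OperateOnElement(5, ref delta, (ref Rectangle r, ref int d) => r.X += d);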