I have a C# class that I need to use to instantiate millions of objects, so I need to keep it lightweight and fast. I have declared some functions in it, and my concern is whether declaring all those functions will make the class slower or consume more memory. I also have the option of declaring those functions in another class. Here is the class:
internal class Var
{
public dynamic data;
public int index;
public VarTypes type;
public bool doClone = false;
public Var Clone(bool doClone)
{
var tmpVar = Clone();
tmpVar.doClone = doClone;
return tmpVar;
}
public Var Clone()
{
if (doClone)
return new Var() { data = data, index = index, type = type };
else
return this;
}
public void Clone(Var old)
{
this.data = old.data;
this.index = old.index;
this.type = old.type;
}
public override string ToString()
{
if (type == VarTypes.Function)
{
StringBuilder builder = new StringBuilder("function ");
if (data.Count == 4)
builder.Append(data[3].ToString());
builder.Append("(");
for (int i = 1; i < data[1][1].Count; i++)
builder.Append(data[1][1][i].ToString() + ",");
if (builder[builder.Length - 1] == ',')
builder.Remove(builder.Length - 1, 1);
builder.Append(")");
return builder.ToString();
}
else
return data.ToString();
}
}
Your class instances will not consume more memory as a result of adding more methods to the class. A class instance has a constant minimum size and then its size increases only as you add fields (or autoproperties, in the sense that each autoproperty adds a field for you). This is because when you're instantiating a class, you're really instantiating a memory region that (for the most part) only contains the values of the fields of that instance.
The minimum size exists because each class instance stores some information that enables various operations of the runtime, such as the GC. This information is mainly stored in the form of pointers to type-wide internal structures of the runtime, which means that they don't scale with the number of class instances - you'll get the same flat overhead for storing a type's methods whether you instantiate zero or a thousand instances.
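If you want to verify this empirically, here is a quick sketch (not from the original answers; the instance count is arbitrary) that allocates a large batch of Var objects and compares GC.GetTotalMemory readings. Adding or removing methods won't change the per-instance figure, while adding fields will:
// Rough estimate only; assumes nothing else is allocating at the same time.
long before = GC.GetTotalMemory(true);
Var[] items = new Var[1000000];
for (int i = 0; i < items.Length; i++)
    items[i] = new Var();
long after = GC.GetTotalMemory(true);
Console.WriteLine("~" + ((after - before) / (double)items.Length) + " bytes per instance");
GC.KeepAlive(items);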
From another answer, if you're worried about function call overhead, turn on aggressive inlining for each method:
// in mscorlib.dll so should not need to include extra references
using System.Runtime.CompilerServices;
⋮
[MethodImpl(MethodImplOptions.AggressiveInlining)]
void MyMethod(...)
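For illustration, a complete method on the Var class with the attribute applied might look like this (the method itself is made up for the example):
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public bool IsFunction()
{
    // Small accessor, a typical inlining candidate.
    return type == VarTypes.Function;
}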
I have 2 functions that are returning an IEnumerable<Stat>.
This function returns all the individual properties belonging to the CharacterStats class:
private IEnumerable<Stat> GetAllStats()
{
yield return Level;
yield return Health;
yield return Damage;
yield return Defense;
//several others
yield return LifePerSecond;
}
This function gets the stored (base) values from my newly added sqlite db:
private IEnumerable<Stat> GetBaseStatsFromDB()
{
//connection code ...
while (rdr.Read())
{
yield return new Stat
{
Name = (Enums.StatName)Enum.Parse(typeof(Enums.StatName), rdr.GetString(1)),
MinValue = rdr.GetInt32(2),
MaxValue = rdr.GetInt32(3),
CurrentValue = rdr.GetInt32(4)
};
}
//close connection
}
What I would like to do is return a new instance of CharacterStats where each Stat is initialized from the DB and assigned to the instance, something like:
GetAllStats().ToList().ForEach( /* apply the matching stat from GetBaseStatsFromDB() */ );
I am able to successfully get the values from the DB and create a new Stat object out of each. However, I've been unable to apply these values to the CharacterStats class.
There are currently ~ 16 stats and these may be added/removed as I continue building this project, so ideally I'd like to find a method that will continue to work as this list changes.
How can I apply the result of GetBaseStatsFromDB() to all properties on CharacterStats by using the GetAllStats() method?
EDIT for clarification:
CharacterStats is a class that holds the different Stats that are generated. Example:
public class CharacterStats
{
public Stat Level;
public Stat Name;
...
public Stat LifePerSecond;
}
GetAllStats() is a function used to enumerate each of these properties one at a time (like a list)
GetBaseStatsFromDB() is my attempt at initializing these values from the DB. What I would like to do is take each individual property on CharacterStats and apply the matching stat from GetBaseStatsFromDB().
So ... GetBaseStatsFromDB() would return (in addition to others):
Stat
{
Name = "Level",
MinValue = 1,
MaxValue = 50,
CurrentValue = 1
}
I would like to take this result and apply it to the Stat:Level property on the CharacterStats class, then repeat for all additional stats.
Reflection is the obvious answer here (not the only one).
First, ensure that the values of Enums.StatName have names identical to the property names on CharacterStats (which I'm guessing you did anyway, because obvious). Note that the members of CharacterStats need to be properties rather than public fields for GetProperty to find them; if you keep them as fields, use GetField instead.
// Instance method of CharacterStats
public void SetStats(IEnumerable<Stat> stats)
{
var cstype = this.GetType();
foreach (var stat in stats) {
var prop = cstype.GetProperty(stat.Name.ToString());
prop.SetValue(this, stat);
}
}
// Toss in a constructor too.
public CharacterStats(IEnumerable<Stat> stats)
{
SetStats(stats);
}
// And a factory method
public static CharacterStats FromStats(IEnumerable<Stat> stats)
{
return new CharacterStats(stats);
}
Use like so:
var stats = new CharacterStats(GetBaseStatsFromDB());
var stats2 = CharacterStats.FromStats(GetBaseStatsFromDB());
// later on, maybe you want to copy stats from one to another...
stats.SetStats(stats2.GetStats());
But if we may share Stat instances (and nothing in this code prevents that), it would be safer to clone the Stat instances when we copy them:
public Stat() { ... }
public Stat(Stat copyMe)
{
this.Name = copyMe.Name;
this.MinValue = copyMe.MinValue;
this.MaxValue = copyMe.MaxValue;
this.CurrentValue = copyMe.CurrentValue;
}
...and then...
// This version is much safer.
public void SetStats(IEnumerable<Stat> stats)
{
var cstype = this.GetType();
foreach (var stat in stats) {
var prop = cstype.GetProperty(stat.Name.ToString());
prop.SetValue(this, new Stat(stat));
}
}
@PanagiotisKanavos notes that (at least based on what we've seen), the Enums.StatName enum isn't necessary at all. If you just use a string for Name, you can introduce new stats in the DB without having to alter the enum.
If you're using the enum to avoid "magic strings" (maybe some stats are referred to by name in hard-coded references), then an enum is the right thing to do, for static checking and IntelliSense. But you might want to think about keeping an enum or readonly globals for the stats the code needs to know by name, but then accept arbitrary string names from the database for stats that don't have any special status in your code. That's a bridge you don't need to cross today though.
Panagiotis also notes that you could make CharacterStats a subclass of Dictionary<String, Stat>, and avoid messing around with reflection at all. That's not a bad idea. One drawback would be the magic strings thing again: If, for example, every character has Level, Damage, etc., and the code specifically interacts with Level, then making Level a class property is sensible.
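For completeness, here is a minimal sketch of that dictionary-based alternative. It assumes the Stat copy constructor shown above; the Level accessor just shows how you would keep strongly-typed access for stats the code knows by name:
public class CharacterStats : Dictionary<string, Stat>
{
    public CharacterStats(IEnumerable<Stat> stats)
    {
        foreach (var stat in stats)
            this[stat.Name.ToString()] = new Stat(stat); // clone so instances aren't shared
    }
    // Strongly-typed accessor for a stat the code refers to directly.
    public Stat Level { get { return this["Level"]; } }
}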
I am currently working with C# using the Unity3D engine and have come upon the following problem:
I created a class that has two private references to instances of another class which it has to access. Once I created multiple instances of the class and set the references, I found out that all instances were using the same variables. I realized this as I was destroying an instance: just before that, I set the two variables holding the references to null, and immediately afterwards all other instances were throwing NullReferenceExceptions because they were still trying to access the references. The referenced objects themselves are fine; other scripts can still access them.
Here is some pseudo code illustrating the structure:
public class Character
{
// Character data
}
public class StatusEffect
{
private Character target;
private Character originator;
public void Init(Character _Target, Character _Originator)
{
target = _Target;
originator = _Originator;
}
public void Destroy()
{
target = null;
originator = null;
}
}
In the program it would be called like this:
StatusEffect effect = new StatusEffect();
effect.Init(player1, player2);
// Time goes by
effect.Destroy();
After calling Destroy() every StatusEffect's two references will be null.
This is not only an issue when destroying StatusEffects, but also when creating new ones. As soon as I touch the references from within a new instance all StatusEffects will reference the two Characters specified by the new StatusEffect.
I do not understand why or how I can fix this issue. Can someone enlighten me on this matter?
Cheers,
Valtaroth
EDIT:
Here is the real code as requested:
I have a container class holding several StatusEffects. As soon as it starts, it initializes all of them.
public class CElementTag
{
// ..Other data..
public float f_Duration; // Set in the editor
private CGladiator gl_target;
private CGladiator gl_originator;
private float f_currentDuration;
public CStatusEffect[] ar_statusEffects;
// Starts the effect of the element tag
public void StartEffect(CGladiator _Originator, CGladiator _Target)
{
gl_originator = _Originator;
gl_target = _Target;
f_currentDuration = f_Duration;
for(int i = 0; i < ar_statusEffects.Length; i++)
ar_statusEffects[i].Initialize(gl_originator, gl_target);
}
// Ends the effect of the element tag
public void EndEffect()
{
for(int i = 0; i < ar_statusEffects.Length; i++)
{
if(ar_statusEffects[i] != null)
ar_statusEffects[i].Destroy();
}
}
// Called every update, returns true if the tag can be destroyed
public bool ActivateEffect()
{
f_currentDuration -= Time.deltaTime;
if(f_currentDuration <= 0.0f)
{
EndEffect();
return true;
}
for(int i = 0; i < ar_statusEffects.Length; i++)
{
if(ar_statusEffects[i] != null && ar_statusEffects[i].Update())
RemoveStatusEffect(i);
}
return false;
}
// Removes expired status effects
private void RemoveStatusEffect(int _Index)
{
// Call destroy method
ar_statusEffects[_Index].Destroy();
// Remove effect from array
for(int i = _Index; i < ar_statusEffects.Length - 1; i++)
ar_statusEffects[i] = ar_statusEffects[i+1];
ar_statusEffects[ar_statusEffects.Length - 1] = null;
}
}
The actual StatusEffect class is holding the two references as well as some other data it needs to work. It has virtual methods because there are some classes inheriting from it.
public class CStatusEffect
{
// ..Necessary data..
// References
protected CGladiator gl_target;
protected CGladiator gl_originator;
virtual public void Initialize(CGladiator _Target, CGladiator _Originator)
{
gl_target = _Target;
gl_originator = _Originator;
// ..Initialize other necessary stuff..
}
virtual public void Destroy()
{
gl_target = null;
gl_originator = null;
// ..Tidy up other data..
}
virtual public bool Update()
{
// ..Modifying data of gl_target and gl_originator..
// Returns true as soon as the effect is supposed to end.
}
}
That should be all the relevant code concerning this problem.
EDIT2
@KeithPayne I have a static array of ElementTags defined in the editor and saved to XML. At the beginning of the program the static array is loaded from the XML and stores all element tags. When creating a new element tag to use, I utilize this constructor:
// Receives a static tag as parameter
public CElementTag(CElementTag _Tag)
{
i_ID = _Tag.i_ID;
str_Name = _Tag.str_Name;
enum_Type = _Tag.enum_Type;
f_Duration = _Tag.f_Duration;
ar_statusEffects = new CStatusEffect[_Tag.ar_statusEffects.Length];
Array.Copy(_Tag.ar_statusEffects, ar_statusEffects, _Tag.ar_statusEffects.Length);
}
Do I have to use a different method to copy the array to the new tag? I thought Array.Copy would make a deep copy of the source array and store it in the destination array. If it is in fact making a shallow copy, I understand where the problem is coming from now.
From Array.Copy Method (Array, Array, Int32):
If sourceArray and destinationArray are both reference-type arrays or are both arrays of type Object, a shallow copy is performed. A shallow copy of an Array is a new Array containing references to the same elements as the original Array. The elements themselves or anything referenced by the elements are not copied. In contrast, a deep copy of an Array copies the elements and everything directly or indirectly referenced by the elements.
Consider this fluent version of the StatusEffect class and its usage below:
public class StatusEffect
{
public Character Target { get; private set; }
public Character Originator { get; private set; }
public StatusEffect Init(Character target, Character originator)
{
Target = (Character)target.Clone();   // assumes Character implements ICloneable (see below)
Originator = (Character)originator.Clone();
return this;
}
//...
}
public CElementTag(CElementTag _Tag)
{
i_ID = _Tag.i_ID;
str_Name = _Tag.str_Name;
enum_Type = _Tag.enum_Type;
f_Duration = _Tag.f_Duration;
ar_statusEffects = _Tag.ar_statusEffects.Select(eff =>
new StatusEffect().Init(eff.Target, eff.Originator)).ToArray();
// ar_statusEffects = new CStatusEffect[_Tag.ar_statusEffects.Length];
// Array.Copy(_Tag.ar_statusEffects, ar_statusEffects, _Tag.ar_statusEffects.Length);
}
Because you're passing in references to the objects via your Init() method, you're not actually "copying" the objects, just maintaining a reference to the same underlying objects in memory.
If you have multiple players with the same references to the same underlying objects, then changes made by player 1 will affect the objects being used by player 2.
Having said all that, you're not actually disposing the objects in your Destroy method, just setting the local instance references to null, which shouldn't affect any other instances of StatusEffect. Are you sure something else isn't disposing the objects, or that you haven't properly initialized your other instances?
If you do want to take a full copy of the passed in objects, take a look at the ICloneable interface. It looks like you want to pass in a copy of the objects into each Player.
public class Character : ICloneable
{
// Character data
//Implement Clone Method
}
public class StatusEffect
{
private Character target;
private Character originator;
public void Init(Character _Target, Character _Originator)
{
target = (Character)_Target.Clone();
originator = (Character)_Originator.Clone();
}
}
The fields aren't shared (static) among instances, so calling target = null; in Destroy() won't affect other instances.
StatusEffect effect1 = new StatusEffect();
effect1.Init(player1, player2);
StatusEffect effect2 = new StatusEffect();
effect2.Init(player1, player2);
// Time goes by
effect2.Destroy();
// Some more time goes by
// accessing effect1.target won't give a `NullReferenceException` here unless player1 was null before passed to the init.
effect1.Destroy();
I think you forgot to call Init(...) on the other instances. Every time you create an instance of StatusEffect, you need to call Init(...).
Update:
This line will clear the reference to the effect, but you never recreate it:
ar_statusEffects[ar_statusEffects.Length - 1] = null;
so the next time you call ar_statusEffects[x].Update() or Initialize() etc. on that slot, it will throw a NullReferenceException.
If you want to clear out effects within your array, you could add an IsEnabled bool to the effect; that way you only have to set/reset it.
for(int i = 0; i < ar_statusEffects.Length; i++)
if(ar_statusEffects[i].IsEnabled)
ar_statusEffects[i].Update();
Why don't you use a List instead? Arrays will be faster as long as you don't have to shuffle items around in them (like circular buffers etc.).
Thanks to Keith Payne I figured out where the problem was. I was creating a deep copy of CElementTag, but not of my ar_statusEffects array. I wrongly assumed Array.Copy was creating a deep copy of an array when it actually was not.
I implemented the ICloneable interface on my CStatusEffect and use the Clone() method to create a true deep copy of each member of the static array, adding it to the new tag's ar_statusEffects array. This way I have separate instances of the effects instead of references to the same static effect.
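In case it helps someone else, here is a rough sketch of that change (the concrete fields are elided; MemberwiseClone is enough here because the gladiator references are re-assigned by Initialize on the copy anyway):
public class CStatusEffect : ICloneable
{
    // ..existing data and references..
    public object Clone()
    {
        // Shallow field copy; preserves the runtime type of derived effects.
        return MemberwiseClone();
    }
}
// In the CElementTag copy constructor, instead of Array.Copy:
ar_statusEffects = new CStatusEffect[_Tag.ar_statusEffects.Length];
for (int i = 0; i < _Tag.ar_statusEffects.Length; i++)
    ar_statusEffects[i] = (CStatusEffect)_Tag.ar_statusEffects[i].Clone();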
Thanks to everyone, especially Keith Payne, for their help and support!
Solved. Thanks for the explanations, guys; I didn't fully understand the implications of using a value type in this situation.
I have a struct that I'm using from a static class. However, it shows unexpected behavior when I print its internal state at runtime. Here's my struct:
public struct VersionedObject
{
public VersionedObject(object o)
{
m_SelectedVer = 0;
ObjectVersions = new List<object>();
ObjectVersions.Add(o);
}
private int m_SelectedVer;
public int SelectedVersion
{
get
{
return m_SelectedVer;
}
}
public List<object> ObjectVersions;//Clarifying: This is only used to retrieve values, nothing is .Added from outside this struct in my code.
public void AddObject(object m)
{
ObjectVersions.Add(m);
m_SelectedVer = ObjectVersions.Count - 1;
}
}
Test code
VersionedObject vo = new VersionedObject(1);
vo.AddObject(2); // the second object added (the first was added by the constructor)
//Expected value of vo.SelectedVersion: 1
//Actual value of vo.SelectedVersion: 1
Now, if you test this code in isolation, i.e., copy it into your project to give it a whirl, it will return the expected result.
The problem: what I'm observing in my production code is this debug output:
objectName, ObjectVersions.Count:2, SelectedVer:0,
Why? From my understanding, and testing, this should be completely impossible under any circumstances.
My random guess is that there is some sort of immutability going on: for some reason a new struct is being created via the default constructor and the ObjectVersions data is being copied over, but m_SelectedVer is private and cannot be copied into the new struct?
Does my use of Static classes and methods to manipulate the struct have anything to do with it?
I'm so stumped I'm just inventing wild guesses at this point.
A struct is a value type, so most likely you are creating multiple copies of your object in your actual code.
Consider simply changing the struct to a class, as the content of your struct is not really a good fit for a value type (it is mutable and also contains a mutable reference type).
More on "struct is value type":
First, check the FAQ, which already has many good answers.
Value types are passed by value, so if you call a function to update such an object, it will not update the original. You can treat them like passing an integer value to a function: would you expect SomeFunction(42) to be able to change the value of 42?
struct MyStruct { public int V;}
void UpdateStruct(MyStruct x)
{
x.V = 42; // updates copy of passed in object, changes will not be visible outside.
}
// ...
var local = new MyStruct { V = 13 };
UpdateStruct(local); // Hope to get local.V == 42
if (local.V == 13) {
// Expected. copy inside UpdateStruct updated,
// but this "local" is untouched.
}
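By contrast, a sketch of passing the struct by reference shows the update becoming visible to the caller:
void UpdateStructByRef(ref MyStruct x)
{
    x.V = 42; // writes into the caller's storage location, not a copy
}
var local2 = new MyStruct { V = 13 };
UpdateStructByRef(ref local2);
// local2.V is now 42, because the method received a reference to local2 itself.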
Why is this a struct and not a class? Even better, why are you tracking the size of the backing store (List<T>) yourself rather than letting the List<T> track that for you? Since that underlying backing store is public, it can be manipulated without your struct's knowledge. I suspect something in your production code is adding to the backing store without going through your struct.
If it were me, I'd set it up something like this, though I'd make it a class...but that's almost certainly a breaking change:
public struct VersionedObject
{
// Note: a parameterless constructor on a struct requires C# 10 or later;
// with an older compiler, make this a class (as suggested above).
public VersionedObject()
{
this.ObjectVersions = new List<object>() ;
return ;
}
public VersionedObject(object o) : this()
{
ObjectVersions.Add(o);
return ;
}
public VersionedObject( params object[] o ) : this()
{
ObjectVersions.AddRange( o ) ;
return ;
}
public int SelectedVersion
{
get
{
int value = this.ObjectVersions.Count - 1 ;
return value ;
}
}
public List<object> ObjectVersions ;
public void AddObject(object m)
{
ObjectVersions.Add(m);
return ;
}
}
You'll note that this has the same semantics as your struct, but the SelectedVersion property now reflects what's actually in the backing store.
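As a quick illustrative check (same test as in the question), the revised type reports the version consistently because it is derived from the list itself:
VersionedObject vo = new VersionedObject(1);
vo.AddObject(2);
// vo.ObjectVersions.Count == 2 and vo.SelectedVersion == 1,
// even if other code adds to ObjectVersions without going through AddObject.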
Assuming I have a struct:
struct Vector
{
public int X, Y;
// ...
// some other stuff
}
and a class:
class Map
{
public Vector this[int i]
{
get
{
return elements[i];
}
set
{
elements[i] = value;
}
}
private Vector[] elements;
// ...
// some other stuff
}
I want to be able to do something like: map[index].X = 0; but I can't, because the return value is not a variable.
How do I do this, if at all possible?
You should avoid mutable structs.
If you want your type to be mutable use a class instead.
class Vector
{
public int X { get; set; } // Use public properties instead of public fields!
public int Y { get; set; }
// ...
// some other stuff
}
If you want to use a struct, make it immutable:
struct Vector
{
private readonly int x; // Immutable types should have readonly fields.
private readonly int y;
public int X { get { return x; }} // No setter.
public int Y { get { return y; }}
// ...
// some other stuff
}
The compiler prevents you from doing this because the indexer returns a copy of an object not a reference (struct is passed by value). The indexer returns a copy, you modify this copy and you simply don't see any result. The compiler helps you avoid this situation.
If you want to handle such a situation you should use a class instead, or change the way you deal with Vector: don't modify its values after construction, but initialize them in the constructor. More on this topic: Why are mutable structs “evil”?
define Vector as class,
or
store value in a temporary variable
var v = map[index];
v.X = 0;
map[index] = v;
or
add function to change
map[index] = map[index].Offset(); // where Offset() returns a modified copy
or
let the [] operator return a setter class
class Setter { public Vector[] Data; public int Index; public int X { get { return Data[Index].X; } set { Data[Index] = new Vector(value, Data[Index].Y); } } } // assumes Vector has an (x, y) constructor
public Setter this[int i]
{
get
{
return new Setter() { Data = elements, Index= i };
}
}
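Usage of the setter approach then looks like ordinary indexing (note that each access allocates a small Setter object):
map[index].X = 0; // the Setter writes a whole new Vector back into the underlying array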
Although generic classes work pretty well for many purposes, they do not provide any reasonable way to access structs by reference. This is unfortunate, since in many cases a collection of structs would offer better performance (both reduced memory footprint and improved cache locality) and clearer semantics than a collection of class objects. When using arrays of structs, one can use a statement like ArrayOfRectangle[5].Width += 3; with a very clear effect: it will update the Width field of ArrayOfRectangle[5], but it will not affect the Width field of any other storage location of type Rectangle. The only things one needs to know to be certain of that are that ArrayOfRectangle is a Rectangle[] and that Rectangle is a struct with a public int field Width. If Rectangle were a class, and the instance held in ArrayOfRectangle[5] had ever been exposed to the outside world, it could be difficult or impossible to determine whether the instance referred to by ArrayOfRectangle[5] was also held by some other code which was expecting that the Width field of its instance wouldn't change. Such problems are avoided when using structures.
Given the way .net's collections are implemented, the best one can do is usually to make a copy of a struct, modify it, and store it back. Doing that is somewhat icky, but for structs that aren't too big, the improved memory footprint and cache locality achieved by using value types may outweigh the extra code to explicitly copy objects from and to the data structures. It will almost certainly be a major win compared with using immutable class types.
Incidentally, what I'd like to see would be for collections to expose methods like:
OperateOnElement<TParam>(int index, ref TParam param, ActionByRef<T, TParam> proc), which would call proc with a reference to the appropriate element of the collection along with the passed-in parameter. Such routines could in many cases be called without having to create closures; if such a pattern were standardized, compilers could even use it to auto-generate field-update code nicely.
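Nothing in the language prevents building that shape yourself on top of an array-backed collection; a minimal sketch (the names ActionByRef, StructList and OperateOnElement are invented for the example) could be:
public delegate void ActionByRef<T, TParam>(ref T element, ref TParam param);
public class StructList<T> where T : struct
{
    private T[] items = new T[16];
    public void OperateOnElement<TParam>(int index, ref TParam param, ActionByRef<T, TParam> proc)
    {
        // Hands the caller a ref to the element in place, so a field update
        // such as element.Width += amount hits the array slot directly.
        proc(ref items[index], ref param);
    }
}
A call site can then update a field in place without copying the struct out and back, and without allocating a closure if the extra state is passed through param.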
I have a class library containing several structures each consisting of several value and reference types. Most of the value types are mandatory, a few value types and all reference types are optional. All structures are XmlSerializable (which is mandatory).
Since the class library is targeted at mobile devices, I want to reduce the memory footprint. My first idea was to use Nullable<T> for the value types, but this increases the memory size by 4 bytes per Nullable<T>. My second idea is to pack all optional value types into a separate structure that is only instantiated when any of its members is needed, but this would force me to implement IXmlSerializable on the "main" structure.
Are there any other approaches to "shrink" the structures?
[EDIT]
Beg your pardon for this bad question. I think I have to clarify some things and get more specific:
The class library is designed to serialize data info GPX (GPS Exchange Format). The structures are e.g. Waypoint or Track. They have mandatory fields as latitude, longitude etc. Optional fields are Vertical/Horizontal/Position Dilution of Precision, a description, a link.
The library is mainly targeted at mobile devices such as PDAs. RAM is short, but plenty of non-volatile memory is available.
Code examples cannot be shown because there are none yet. I want to think about several pitfalls before starting the implementation.
Here is a technique to aggressively reduce in-memory overhead while still allowing XML serialization.
Update: the original inline linked-list idea is more efficient for 1 and 2 entries than a standard list-with-count construct, but the use of fixed-size optionals for the zero, one and two cases is even more efficient.
Proviso: this is predicated on you knowing that you really do need to shave the memory; since you haven't done any coding yet, this may well be a massively premature optimization. This design is also predicated on the optional fields being very rare. I use double as a placeholder; whatever format best allows you to represent the precision/units involved should be used.
public class WayPoint
{
// consumes IntPtr.Size fixed cost
private IOptional optional = OptionalNone.Default;
public double Latitude { get; set; }
public double Longitude { get; set; }
public double Vertical
{
get { return optional.Get<double>("Vertical") ?? 0.0; }
set { optional = optional.Set<double>("Vertical", value); }
}
[XmlIgnore] // need this pair for every value type
public bool VerticalSpecified
{
get { return optional.Get<double>("Vertical").HasValue; }
}
public void ClearVertical()
{
optional = optional.Clear("Vertical");
}
public string Description // setting to null clears it
{
get { return optional.GetRef<string>("Description"); }
set { optional = optional.SetRef<string>("Description", value); }
}
// Horizontal, Position, DilutionOfPrecision etc.
}
The real heavy lifting is done here:
internal interface IOptional
{
T? Get<T>(string id) where T : struct;
T GetRef<T>(string id) where T : class;
IOptional Set<T>(string id, T value);
IOptional Clear(string id);
}
internal sealed class OptionalNone : IOptional
{
public static readonly OptionalNone Default = new OptionalNone();
public T? Get<T>(string id) where T : struct
{
return null;
}
public T GetRef<T>(string id) where T : class
{
return null;
}
public IOptional Set<T>(string id, T value)
{
if (value == null)
return Clear(id);
return new OptionalWithOne<T>(id, value);
}
public IOptional Clear(string id)
{
return this; // no effect
}
}
The fixed-size ones become more interesting to write. There is no point writing these as structs, as they would be boxed to be placed in the IOptional field within the WayPoint class.
internal sealed class OptionalWithOne<X> : IOptional
{
private string id1;
private X value1;
public OptionalWithOne(string id, X value)
{
this.id1 = id;
this.value1 = value;
}
public T? Get<T>(string id) where T : struct
{
if (string.Equals(id, this.id1))
return (T)(object)this.value1;
return null;
}
public T GetRef<T>(string id) where T : class
{
if (string.Equals(id, this.id1))
return (T)(object)this.value1;
return null;
}
public IOptional Set<T>(string id, T value)
{
if (string.Equals(id, this.id1))
{
if (value == null)
return OptionalNone.Default;
this.value1 = (X)(object)value;
return this;
}
else
{
if (value == null)
return this;
return new OptionalWithTwo<X,T>(this.id1, this.value1, id, value);
}
}
public IOptional Clear(string id)
{
if (string.Equals(id, this.id1))
return OptionalNone.Default;
return this; // no effect
}
}
Then for two (you can extend this idea as far as you want, but as you can see the code gets unpleasant quickly):
internal sealed class OptionalWithTwo<X,Y> : IOptional
{
private string id1;
private X value1;
private string id2;
private Y value2;
public OptionalWithTwo(
string id1, X value1,
string id2, Y value2)
{
this.id1 = id1;
this.value1 = value1;
this.id2 = id2;
this.value2 = value2;
}
public T? Get<T>(string id) where T : struct
{
if (string.Equals(id, this.id1))
return (T)(object)this.value1;
if (string.Equals(id, this.id2))
return (T)(object)this.value2;
return null;
}
public T GetRef<T>(string id) where T : class
{
if (string.Equals(id, this.id1))
return (T)(object)this.value1;
if (string.Equals(id, this.id2))
return (T)(object)this.value2;
return null;
}
public IOptional Set<T>(string id, T value)
{
if (string.Equals(id, this.id1))
{
if (value == null)
return Clear(id);
this.value1 = (X)(object)value;
return this;
}
else if (string.Equals(id, this.id2))
{
if (value == null)
return Clear(id);
this.value2 = (Y)(object)value;
return this;
}
else
{
if (value == null)
return this;
return new OptionalWithMany(
this.id1, this.value1,
this.id2, this.value2,
id, value);
}
}
public IOptional Clear(string id)
{
if (string.Equals(id, this.id1))
return new OptionalWithOne<Y>(this.id2, this.value2);
if (string.Equals(id, this.id2))
return new OptionalWithOne<X>(this.id1, this.value1);
return this; // no effect
}
}
Before finally ending with the relatively inefficient
internal sealed class OptionalWithMany : IOptional
{
private List<string> ids = new List<string>();
// this boxes, if you had a restricted set of data types
// you could do a per type list and map between them
// it is assumed that this is sufficiently uncommon that you don't care
private List<object> values = new List<object>();
public OptionalWithMany(
string id1, object value1,
string id2, object value2,
string id3, object value3)
{
this.ids.Add(id1);
this.values.Add(value1);
this.ids.Add(id2);
this.values.Add(value2);
this.ids.Add(id3);
this.values.Add(value3);
}
public T? Get<T>(string id) where T : struct
{
for (int i= 0; i < this.values.Count;i++)
{
if (string.Equals(id, this.ids[i]))
return (T)this.values[i];
}
return null;
}
public T GetRef<T>(string id) where T : class
{
for (int i= 0; i < this.values.Count;i++)
{
if (string.Equals(id, this.ids[i]))
return (T)this.values[i];
}
return null;
}
public IOptional Set<T>(string id, T value)
{
for (int i= 0; i < this.values.Count;i++)
{
if (string.Equals(id, this.ids[i]))
{
if (value == null)
return Clear(id);
this.values[i] = value;
return this;
}
}
if (value != null)
{
this.ids.Add(id);
this.values.Add(value);
}
return this;
}
public IOptional Clear(string id)
{
for (int i= 0; i < this.values.Count;i++)
{
if (string.Equals(id, this.ids[i]))
{
this.ids.RemoveAt(i);
this.values.RemoveAt(i);
return ShrinkIfNeeded();
}
}
return this; // no effect
}
private IOptional ShrinkIfNeeded()
{
if (this.ids.Count == 2)
{
//return new OptionalWithTwo<X,Y>(
// this.ids[0], this.values[0],
// this.ids[1], this.values[1]);
return (IOptional)
typeof(OptionalWithTwo<,>).MakeGenericType(
// this is a bit risky.
// your value types may not use inheritance
this.values[0].GetType(),
this.values[1].GetType())
.GetConstructors().First().Invoke(
new object[]
{
this.ids[0], this.values[0],
this.ids[1], this.values[1]
});
}
return this;
}
}
OptionalWithMany could be written rather better than this but it gives you the idea.
With restricted type support you could do a global Key -> value map per type 'heap' like so:
internal struct Key
{
public readonly OptionalWithMany Owner; // the original omitted the field name; "Owner" is assumed here
public readonly string Id;
// define equality and hashcode as per usual
}
Then simply store the list of Ids currently in use within OptionalWithMany. Shrinking would be slightly more complex (but better from a type point of view), since you would scan each global 'heap' till you found the matching entry and use the type of the heap to construct the OptionalWithTwo. This would allow polymorphism in the property values.
Regardless of the internals, the primary benefit of this is that the public surface of the WayPoint class hides all of it entirely.
You can then set up the class however you want for serialization, through attributes or IXmlSerializable (which would remove the need for the annoying xxxSpecified properties).
I used strings as Ids for simplicity in my example.
If you really care about size and speed you should change the Ids to be enumerations. Given packing behaviour this won't save you much even if you can fit all the needed values into a byte, but it would give you compile-time sanity checking. The strings are all compile-time constants, so they occupy next to no space (but are slower to check for equality).
I urge you to only do something like this after you check that it is needed. The plus side is that it does not limit your XML serialization, so you can mould it into whatever format you desire. Also, the public face of the 'data packet' can be kept clean (except for the xxxSpecified junk).
If you want to avoid the xxxSpecified hassles and you know you have some 'out of band' values you can use the following trick:
[DefaultValue(double.MaxValue)]
public double Vertical
{
get { return optional.Get<double>("Vertical") ?? double.MaxValue; }
set { optional = optional.Set<double>("Vertical", value); }
}
public void ClearVertical()
{
optional = optional.Clear("Vertical");
}
However, the rest of your API must be capable of detecting these special values. In general I would say that the Specified route is better.
If a particular set of properties becomes 'always available' on certain devices, or in certain modes, you should switch to alternate classes where those properties are simple ones. Since the XML form will be identical, they can interoperate simply and easily, but memory usage in those cases will be much lower.
If the number of these groups becomes large you may even consider a code-gen scenario (at runtime even, though this increases your support burden considerably).
For some serious fun: apply the Flyweight pattern and store all instances in a bitmap? With a small-memory device you don't need 4-byte pointers.
[Edit] With Flyweight, you can have a separate storage strategy for each field. I do not suggest storing the string value directly in the bitmap, but you could store an index.
The type is not stored in the bitmap, but in the unique object factory.
It is probably good to know that the XmlSerializer doesn't care about your internal object layout; it only cares about your public fields and properties. You can hide the internal memory optimizations behind your property accessors, and the XmlSerializer won't even know.
For instance, if you know that you usually have only 2 references set, but on occasion more, you can store the two frequent ones as part of your main object, and hide the infrequent ones inside an object[] or ListDictionary or a specialized private class of your own making. However, be careful that each indirect container object also contains overhead, as it needs to be a reference type. Or when you have 8 nullable integers as part of your public contract, internally you could use 8 regular integers and a single byte containing the is-this-int-null status as its bits.
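A sketch of that last trick (the names are placeholders; only one of the eight values is shown, and the rest follow the same pattern using bits 2, 4, 8, and so on):
public class PackedNullables
{
    private int value1;      // backing store for the first logical nullable int
    private byte nullFlags;  // bit 0 set means "value1 is null", bit 1 for value2, ...
    public int? Value1
    {
        get { return (nullFlags & 1) != 0 ? (int?)null : value1; }
        set
        {
            if (value.HasValue) { value1 = value.Value; nullFlags = (byte)(nullFlags & ~1); }
            else { nullFlags |= 1; }
        }
    }
}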
If you want to specialize even further, perhaps create specialized subclasses depending on the available data, you would have to go the route of IXmlSerializable, but usually that's not really needed.
You can do a couple of things:
Make sure to use the smallest type possible for a particular value. For example, if you look at the schema, dgpsStationType has a min value of 0, and a max of 1023. This can be stored as a ushort. Reduce the size of these items when possible.
Make sure that your fields pack well. The resulting size of your structure will be some multiple of 4 bytes (assuming 32-bit). With sequential layout, poorly ordered fields waste space on padding needed to keep each field aligned. You can specify the layout explicitly using StructLayoutAttribute.
Bad Example: these fields in a class take up 12 bytes. The int must take up 4 contiguous bytes, and the other members must be 4-byte aligned.
public class Bad {
byte a;
byte b;
int c;
ushort u;
}
Better Example: these fields in a class take up 8 bytes. These fields are packed efficiently.
public class Better {
byte a;
byte b;
ushort u;
int c;
}
Reduce the size of your object graph. Each reference type takes up 8 bytes of overhead. If you've got a deep graph, that's a lot of overhead. Pull everything you can into functions that operate on data in your main class. Think more 'C'-like, and less OOD.
It's still a good idea to lazy-load some optional parameters, but you should draw your line clearly. Create one or maybe two sets of 'optional' values that can be loaded or null. Each set will mandate a reference type, and its overhead.
Use structs where you can. Be careful of value-type semantics though, they can be tricky.
Consider not implementing ISerializable; instead, implement XML serialization manually in an external class so the serialization machinery stays out of your data types. (Note that every object already carries a method-table pointer, so adding interface methods doesn't grow individual instances.)
Build your own serialization in order to minimize your structure, and serialize to binary rather than XML.
Something along the lines of:
internal void Save(BinaryWriter w)
{
w.Write(this.id);
w.Write(this.name);
byte[] bytes = Encoding.UTF8.GetBytes(this.MyString);
w.Write(bytes.Length);
w.Write(bytes);
w.Write(this.tags.Count); // nested struct/class
foreach (Tag tag in this.tags)
{
tag.Save(w);
}
}
and have a constructor which builds it back up
public MyClass(BinaryReader reader)
{
this.id = reader.ReadUInt32();
// etc. (read the remaining fields in the same order they were written)
}
Some sort of binary serialization will often do much better than XML serialization. You'll have to try it out for your specific data structures to see if you gain much.
Check out MSDN for an example using BinaryFormatter.
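For reference, the BinaryFormatter route is only a few lines. This is a sketch: myWaypoints and its type are placeholders, the type must be marked [Serializable], and note that BinaryFormatter is deprecated on current .NET, so the hand-rolled BinaryWriter approach above is the safer long-term choice:
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
var formatter = new BinaryFormatter();
using (FileStream stream = File.Create("waypoints.bin"))
    formatter.Serialize(stream, myWaypoints); // the type of myWaypoints must be [Serializable]
using (FileStream stream = File.OpenRead("waypoints.bin"))
    myWaypoints = (List<WayPoint>)formatter.Deserialize(stream);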