Find size of object instance in bytes in C#

For any arbitrary instance (collections of different objects, compositions, single objects, etc.),
how can I determine its size in bytes?
(I currently have a collection of various objects and I'm trying to determine its aggregated size.)
EDIT: Has someone written an extension method for Object that could do this? That'd be pretty neat, IMO.

First of all, a warning: what follows is strictly in the realm of ugly, undocumented hacks. Do not rely on this working - even if it works for you now, it may stop working tomorrow, with any minor or major .NET update.
You can use the information from this article on CLR internals: MSDN Magazine, May 2005 - Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects - last I checked, it was still applicable. Here's how it's done (it retrieves the internal "Basic Instance Size" field via the TypeHandle of the type).
object obj = new List<int>(); // whatever you want to get the size of
RuntimeTypeHandle th = obj.GetType().TypeHandle;
int size = *(*(int**)&th + 1); // pointer code - requires an unsafe context (compile with /unsafe)
Console.WriteLine(size);
This works on 3.5 SP1, 32-bit. I'm not sure whether the field sizes are the same on 64-bit - you might have to adjust the types and/or offsets if they are not.
This will work for all "normal" types, for which all instances have the same, well-defined size. That isn't true for arrays and strings, and I believe also StringBuilder; for those you'll have to add the size of all contained elements to the base instance size.
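Since the question asks for an extension method on Object, here is a minimal sketch of the same hack wrapped as one. The name SizeOfInstance is my own, it relies on the same undocumented method-table layout, requires compiling with /unsafe, and it still doesn't account for arrays, strings or StringBuilder:
public static class ObjectSizeExtensions
{
    // Reads the internal "Basic Instance Size" field from the method table,
    // exactly like the snippet above. Undocumented - may break with any CLR update.
    public static unsafe int SizeOfInstance(this object obj)
    {
        RuntimeTypeHandle th = obj.GetType().TypeHandle;
        return *(*(int**)&th + 1);
    }
}
// usage (illustrative): int bytes = new List<int>().SizeOfInstance();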

If you're working with serializable objects, you may be able to approximate the size by serializing them with a binary formatter while routing the output to a stream that merely counts the bytes written.
class Program
{
static void Main(string[] args)
{
A parent;
parent = new A(1, "Mike");
parent.AddChild("Greg");
parent.AddChild("Peter");
parent.AddChild("Bobby");
System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bf =
new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
SerializationSizer ss = new SerializationSizer();
bf.Serialize(ss, parent);
Console.WriteLine("Size of serialized object is {0}", ss.Length);
}
}
[Serializable()]
class A
{
int id;
string name;
List<B> children;
public A(int id, string name)
{
this.id = id;
this.name = name;
children = new List<B>();
}
public B AddChild(string name)
{
B newItem = new B(this, name);
children.Add(newItem);
return newItem;
}
}
[Serializable()]
class B
{
A parent;
string name;
public B(A parent, string name)
{
this.parent = parent;
this.name = name;
}
}
class SerializationSizer : System.IO.Stream
{
private int totalSize;
public override void Write(byte[] buffer, int offset, int count)
{
this.totalSize += count;
}
public override bool CanRead
{
get { return false; }
}
public override bool CanSeek
{
get { return false; }
}
public override bool CanWrite
{
get { return true; }
}
public override void Flush()
{
// Nothing to do
}
public override long Length
{
get { return totalSize; }
}
public override long Position
{
get
{
throw new NotImplementedException();
}
set
{
throw new NotImplementedException();
}
}
public override int Read(byte[] buffer, int offset, int count)
{
throw new NotImplementedException();
}
public override long Seek(long offset, System.IO.SeekOrigin origin)
{
throw new NotImplementedException();
}
public override void SetLength(long value)
{
throw new NotImplementedException();
}
}

This doesn't directly answer the question, but for those interested in investigating object sizes while debugging:
Start debugging in VS, make sure the Diagnostics Tools window is shown (Debug > Windows > Show Diagnostic Tools)
Set a breakpoint (optional)
Click Take Snapshot on the Memory Usage tab while paused
Explore the snapshot (optionally sort the object list alphabetically to find the type you're interested in)

For unmanaged types, a.k.a. value types and structs:
Marshal.SizeOf(object);
For managed objects, the closest I got is an approximation:
long start_mem = GC.GetTotalMemory(true);
aclass[] array = new aclass[1000000];
for (int n = 0; n < 1000000; n++)
array[n] = new aclass();
double used_mem_median = (GC.GetTotalMemory(false) - start_mem)/1000000D;
Do not use serialization. A binary formatter adds headers, so that you can change your class and later load an old serialized file into the modified class.
Also, it won't tell you the real size in memory, nor will it take memory alignment into account.
[Edit]
By using BitConverter.GetBytes(propertyValue) recursively on every property of your class, you would get the contents in bytes. That doesn't count the weight of the class itself or the references, but it is much closer to reality.
If size matters, I would recommend using a byte array for the data and an unmanaged proxy class that accesses the values via pointer casting. Note that this means non-aligned memory, so on old computers it is going to be slow, but for HUGE datasets on MODERN RAM it is going to be considerably faster, because minimizing the amount of data read from RAM has a bigger impact than alignment.
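A very rough sketch of that byte-array-plus-pointer idea (the Record and RecordBuffer names are mine, not from any library; this needs to be compiled with /unsafe):
struct Record
{
    public int Id;
    public double Value;
}

unsafe class RecordBuffer
{
    // All records live packed back to back in one managed byte array.
    private readonly byte[] _data;
    private readonly int _recordSize = sizeof(Record);

    public RecordBuffer(int count)
    {
        _data = new byte[count * sizeof(Record)];
    }

    public Record Read(int index)
    {
        fixed (byte* p = _data)             // pin the array while we touch it
            return *(Record*)(p + index * _recordSize);
    }

    public void Write(int index, Record value)
    {
        fixed (byte* p = _data)
            *(Record*)(p + index * _recordSize) = value;
    }
}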

A safe solution with some optimizations: the CyberSaving/MemoryUsage code.
Some example cases:
/* test nullable type */
TestSize<int?>.SizeOf(null); //-> 4 B
/* test StringBuilder */
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 100; i++) sb.Append("わたしわたしわたしわ");
TestSize<StringBuilder>.SizeOf(sb); //-> 3132 B
/* test Simple array */
TestSize<int[]>.SizeOf(new int[100]); //-> 400 B
/* test Empty List<int>*/
var list = new List<int>();
TestSize<List<int>>.SizeOf(list); //-> 205 B
/* test List<int> with 100 items*/
for (int i = 0; i < 100; i++) list.Add(i);
TestSize<List<int>>.SizeOf(list); //-> 717 B
It also works with classes:
class twostring
{
public string a { get; set; }
public string b { get; set; }
}
TestSize<twostring>.SizeOf(new twostring() { a = "0123456789", b = "0123456789" }); //-> 28 B

This doesn't apply to the current .NET implementation, but one thing to keep in mind with garbage collected/managed runtimes is the allocated size of an object can change throughout the lifetime of the program. For example, some generational garbage collectors (such as the Generational/Ulterior Reference Counting Hybrid collector) only need to store certain information after an object is moved from the nursery to the mature space.
This makes it impossible to create a reliable, generic API to expose the object size.

This is impossible to do at runtime.
There are various memory profilers that display object size, though.
EDIT: You could write a second program that profiles the first one using the CLR Profiling API and communicates with it through remoting or something.

For anyone looking for a solution that doesn't require [Serializable] classes, and where the result is an approximation rather than an exact science:
the best method I could find is JSON serialization into a MemoryStream using UTF32 encoding.
private static long? GetSizeOfObjectInBytes(object item)
{
if (item == null) return 0;
try
{
// hackish solution to get an approximation of the size
var jsonSerializerSettings = new JsonSerializerSettings
{
DateFormatHandling = DateFormatHandling.IsoDateFormat,
DateTimeZoneHandling = DateTimeZoneHandling.Utc,
MaxDepth = 10,
ReferenceLoopHandling = ReferenceLoopHandling.Ignore
};
var formatter = new JsonMediaTypeFormatter { SerializerSettings = jsonSerializerSettings };
using (var stream = new MemoryStream()) {
formatter.WriteToStream(item.GetType(), item, stream, Encoding.UTF32);
return stream.Length / 4; // 32 bits per character = 4 bytes per character
}
}
catch (Exception)
{
return null;
}
}
No, this won't give you the exact size that would be used in memory. As previously mentioned, that is not possible. But it'll give you a rough estimation.
Note that this is also pretty slow.

Use Son of Strike, which has an ObjSize command.
Note that the actual memory consumed is always larger than what ObjSize reports, due to the syncblk that resides directly before the object data.
Read more about both here: MSDN Magazine, May 2005 - Drill Into .NET Framework Internals to See How the CLR Creates Runtime Objects.

AFAIK, you cannot, without actually deep-counting the size of each member in bytes. But again, does the size of a member (like elements inside a collection) count towards the size of the object, or does only a pointer to that member count towards the size of the object? It depends on how you define it.
I have run into this situation before where I wanted to limit the objects in my cache based on the memory they consumed.
Well, if there is some trick to do that, I'd be delighted to know about it!

For value types, you can use Marshal.SizeOf. Of course, it returns the number of bytes required to marshal the structure in unmanaged memory, which is not necessarily what the CLR uses.
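A quick illustration (the SamplePoint struct here is made up; Marshal.SizeOf<T>() is the generic overload available on newer frameworks):
struct SamplePoint { public int X; public int Y; }

int size1 = Marshal.SizeOf(typeof(SamplePoint)); // 8 on typical platforms
int size2 = Marshal.SizeOf<SamplePoint>();       // same value, generic overload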

I have created a benchmark test for different collections in .NET: https://github.com/scholtz/TestDotNetCollectionsMemoryAllocation
For .NET Core 2.2, with 1,000,000 objects with 3 properties allocated, the results are as follows:
Testing with string: 1234567
Hashtable<TestObject>: 184 672 704 B
Hashtable<TestObjectRef>: 136 668 560 B
Dictionary<int, TestObject>: 171 448 160 B
Dictionary<int, TestObjectRef>: 123 445 472 B
ConcurrentDictionary<int, TestObject>: 200 020 440 B
ConcurrentDictionary<int, TestObjectRef>: 152 026 208 B
HashSet<TestObject>: 149 893 216 B
HashSet<TestObjectRef>: 101 894 384 B
ConcurrentBag<TestObject>: 112 783 256 B
ConcurrentBag<TestObjectRef>: 64 777 632 B
Queue<TestObject>: 112 777 736 B
Queue<TestObjectRef>: 64 780 680 B
ConcurrentQueue<TestObject>: 112 784 136 B
ConcurrentQueue<TestObjectRef>: 64 783 536 B
ConcurrentStack<TestObject>: 128 005 072 B
ConcurrentStack<TestObjectRef>: 80 004 632 B
For the memory test, I found the best thing to use is
GC.GetAllocatedBytesForCurrentThread()
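A minimal sketch of that measurement pattern (the collection being measured here is just an illustration): read the counter before and after the allocations you care about, on the same thread.
long before = GC.GetAllocatedBytesForCurrentThread();

var dict = new Dictionary<int, string>();
for (int i = 0; i < 1000000; i++)
    dict[i] = i.ToString();

long after = GC.GetAllocatedBytesForCurrentThread();
Console.WriteLine($"Allocated on this thread: {after - before} B");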

For arrays of structs/values, I have different results with:
first = Marshal.UnsafeAddrOfPinnedArrayElement(array, 0).ToInt64();
second = Marshal.UnsafeAddrOfPinnedArrayElement(array, 1).ToInt64();
arrayElementSize = second - first;
(oversimplified example)
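A slightly fuller sketch of the same idea, pinning the array first so the two addresses cannot move between the calls (the Sample struct is just an illustration; note the result is the aligned, padded element size):
struct Sample { public short A; public byte B; }   // 3 bytes of data, padded to 4

var array = new Sample[2];
GCHandle handle = GCHandle.Alloc(array, GCHandleType.Pinned);
try
{
    long first  = Marshal.UnsafeAddrOfPinnedArrayElement(array, 0).ToInt64();
    long second = Marshal.UnsafeAddrOfPinnedArrayElement(array, 1).ToInt64();
    Console.WriteLine($"Aligned element size: {second - first} B");
}
finally
{
    handle.Free();
}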
Whatever the approach, you really need to understand how .Net works to correctly interpret the results.
For instance, the returned element size is the "aligned" element size, with some padding.
The overhead and thus the size is different depending on the usage of a type: "boxed" on the GC heap, on the stack, as a field, as an array element.
(I wanted to know what the memory impact would be of using "dummy" empty structs (without any field) to mimic "optional" arguments of generics; running tests with different layouts involving empty structs, I can see that an empty struct uses (at least) 1 byte per element; I vaguely remember it is because .NET needs a different address for each field, which wouldn't work if a field really were empty/0-sized.)

You can use reflection to gather all the public member or property information (given the object's type). There is no way to determine the size without walking through each individual piece of data on the object, though.
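As a very rough sketch of that "walk every piece of data" idea (all names here are mine): count primitive fields via Marshal.SizeOf, recurse into reference fields, and accept that object headers, padding and reference cycles are not handled, so this is only an estimate.
// requires: using System.Reflection; using System.Runtime.InteropServices;
static long ApproximateSize(object obj)
{
    if (obj == null) return 0;
    Type type = obj.GetType();

    if (type.IsPrimitive) return Marshal.SizeOf(type); // marshaled size, approximate
    if (obj is string s) return sizeof(char) * s.Length;

    if (type.IsArray)
    {
        long total = 0;
        foreach (object element in (Array)obj) total += ApproximateSize(element);
        return total;
    }

    long size = 0;
    foreach (FieldInfo field in type.GetFields(
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
    {
        if (field.FieldType.IsPrimitive)
            size += Marshal.SizeOf(field.FieldType);
        else if (field.FieldType.IsValueType)
            size += ApproximateSize(field.GetValue(obj));                // nested struct, stored inline
        else
            size += IntPtr.Size + ApproximateSize(field.GetValue(obj));  // reference + referenced object
    }
    return size;
}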

From Pavel and jnm2:
private int DumpApproximateObjectSize(object toWeight)
{
return Marshal.ReadInt32(toWeight.GetType().TypeHandle.Value, 4);
}
On a side note, be careful, because it only works with contiguous-memory objects.

The simplest way is: int size = *((int*)type.TypeHandle.Value + 1);
I know this is an implementation detail, but the GC relies on it, and it needs to be as close to the start of the method table as possible for efficiency; plus, considering how complex the GC code is, nobody will dare to change it in the future. In fact it works for every minor/major version of .NET Framework and .NET Core. (I'm currently unable to test 1.0.)
If you want a more reliable way, emit a struct in a dynamic assembly with [StructLayout(LayoutKind.Auto)] with exactly the same fields in the same order, and take its size with the sizeof IL instruction. You may want to emit a static method within the struct which simply returns this value. Then add 2 * IntPtr.Size for the object header. This should give you the exact value.
But if your class derives from another class, you need to find the size of each base class separately and add them, plus 2 * IntPtr.Size again for the header. You can do this by getting fields with the BindingFlags.DeclaredOnly flag.
For arrays and strings, just add length * element size to the base size.
For the cumulative size of aggregate objects you need to implement a more sophisticated solution, which involves visiting every field and inspecting its contents.

For anyone looking for a rough approximation comparing the sizes of disparate object graphs/collections, just serialize to JSON - e.g.:
Console.WriteLine($"Size1:\t{(JsonConvert.SerializeObject(someBusyObject)).Length}")); Console.WriteLine($"Size2:\t{(JsonConvert.SerializeObject(someOtherObject)).Length}"));
In my case I have a bunch of IEnumerable's being pulled during a login I'm benchmarking, and I just wanted to roughly size them to see their relative weight.
They're expensive operations and won't give you direct heap allocation size or anything like that, but it was good enough for my use case and was readily available.

Related

Exclude extra private field in struct with LayoutKind.Explicit from being part of the structure layout

Let's say we have this structure:
[StructLayout(LayoutKind.Explicit, Size=8)] // using System.Runtime.InteropServices;
public struct AirportHeader {
[FieldOffset(0)]
[MarshalAs(UnmanagedType.I4)]
public int Ident; // a 4 bytes ASCII : "FIMP" { 0x46, 0x49, 0x4D, 0x50 }
[FieldOffset(4)]
[MarshalAs(UnmanagedType.I4)]
public int Offset;
}
What I want to have: direct access to the Ident field in this structure as both a string and an int, without breaking the 8-byte size of the structure, and without having to compute the string value from the int value each time.
The Ident field in this structure as an int is interesting because I can quickly compare it with other idents to see if they match; other idents may come from data that is unrelated to this structure, but is in the same int format.
Question: Is there a way to define a field that is not part of the structure layout? Like:
[StructLayout(LayoutKind.Explicit, Size=8)]
public struct AirportHeader {
[FieldOffset(0)]
[MarshalAs(UnmanagedType.I4)]
public int Ident; // a 4 bytes ASCII : "FIMP" { 0x46, 0x49, 0x4D, 0x50 }
[FieldOffset(4)]
[MarshalAs(UnmanagedType.I4)]
public int Offset;
[NoOffset()] // <- is there something I can do the like of this
string _identStr;
public string IdentStr {
get { // EDIT ! missed the getter on this property
if (string.IsNullOrEmpty(_identStr)) _identStr =
System.Text.Encoding.ASCII.GetString(Ident.GetBytes());
// do the above only once. May use an extra private bool field to go faster.
return _identStr;
}
}
}
PS: I use pointers ('*' and '&', unsafe) because I need to deal with endianness (local system, binary files/file format, network), fast type conversions and fast array filling. I also use many flavours of Marshal methods (fixing structures on byte arrays), and a little P/Invoke and COM interop. Too bad some assemblies I'm dealing with don't have their .NET counterparts yet.
TL;DR; For details only
The question is all this is about; I just don't know the answer. The following should answer most questions like "other approaches" or "why not do this instead", but it can be skipped, as the answer itself would be straightforward. Anyway, I preemptively put everything here so it's clear from the start what I'm trying to do. :)
Options/Workaround I'm currently using (or thinking of using) :
Create a getter (not a field) that computes the string value each time :
public string IdentStr {
get { return System.Text.Encoding.ASCII.GetString(Ident.GetBytes()); }
// where GetBytes() is an extension method that converts an int to byte[]
}
This approach, while doing the job, performs poorly: the GUI displays aircraft from a database of default flights, and injects other flights from the network with a refresh rate of one second (I should increase that to 5 seconds). I have around 1200 flights within an area, relating to 2400 airports (departure and arrival), meaning I have 2400 calls to the above code each second to display the ident in a DataGrid.
Create another struct (or class), whose only purpose is to manage data on the GUI side, when not reading/writing to a stream or file. That means: read the data with the explicit-layout struct, create another struct with the string version of the field, and work with that one in the GUI. That will perform better from an overall point of view, but, in the process of defining structures for the game binaries, I'm already at 143 structures of the kind (just with older versions of the game data; there are a bunch I didn't write yet, and I plan to add structures for the newest data types). At the moment, more than half of them require one or more extra fields to be of meaningful use. It would be okay if I were the only one to use the assembly, but other users will probably get lost with AirportHeader, AirportHeaderEx, AirportEntry, AirportEntryEx, AirportCoords, AirportCoordsEx... I would rather avoid doing that.
Optimize option 1 to make the computations perform faster (thanks to SO, there are a bunch of ideas to look into - I'm currently working on it). For the Ident field, I guess I could use pointers (and I will); I'm already doing that for fields I must display in little endian and read/write in big endian. There are other values, like 4x4 grid information packed in a single Int64 (ulong), that need bit shifting to expose the actual values. Same for GUIDs or object pitch/bank/yaw.
Try to take advantage of overlapping fields (under study). That would work for GUIDs. Perhaps it may work for the Ident example, if MarshalAs can constrain the value to an ASCII string. Then I would just need to specify the same FieldOffset, 0 in this case. But I'm unsure whether setting the field value (entry.FieldStr = "FMEP";) actually applies the Marshal constraint on the managed side. My understanding is that it will store the string as Unicode on the managed side (?).
Furthermore, that wouldn't work for packed bits (bytes that contain several values, or consecutive bytes hosting values that have to be bit shifted). I believe it is impossible to specify value position, length and format at the bit level.
Why bother? Context:
I'm defining a bunch of structures to parse binary data from arrays of bytes (IO.File.ReadAllBytes) or streams, and to write them back - data related to a game. The application logic should use the structures to quickly access and manipulate the data on demand. The assembly's expected capabilities are read, validate, edit, create and write, outside the scope of the game (addon building, control) and inside the scope of the game (API, live modding or monitoring). The other purpose is to understand the content of the binaries (hex) and use that understanding to build what's missing in the game.
The purpose of the assembly is to provide ready-to-use base components for a C# addon contributor (I don't plan to make the code portable): creating applications for the game, or processing an addon from source to compilation into game binaries. It's nice to have a class that loads the entire content of a file in memory, but some contexts require you not to do that and to retrieve from the file only what is necessary, hence the choice of the struct pattern.
I need to figure out the trust and legal issues (copyrighted data), but that's outside the scope of the main concern. If it matters, Microsoft has over the years provided public, freely accessible SDKs exposing binary structures of previous versions of the game, for the purpose of what I'm doing (I'm not the first and probably not the last to do so). Still, I wouldn't dare to expose undocumented binaries (for the latest game data, for instance), nor facilitate a breach of copyrighted materials/binaries.
I'm just asking for confirmation of whether there is a way to have private fields that are not part of the structure layout. My naive belief at the moment is "that's impossible, but there are workarounds". It's just that my C# experience is pretty sparse, so maybe I'm wrong, which is why I ask. Thanks!
As suggested, there are several ways to get the job done. Here are the getters/setters I came up with within the structure. I'll measure how each performs in various scenarios later. The dictionary approach is very appealing, since in many scenarios I would need a directly accessible global database of (59000) airports with runways and parking spots (not just the ident), but a fast check between struct fields is also interesting.
public string IdentStr_Marshal {
get {
var output = "";
GCHandle pinnedHandle = default(GCHandle); // was CS0165 for me when left unassigned (-> C# v5)
try { // Fast if no exception, (very) slow if exception thrown
pinnedHandle = GCHandle.Alloc(this, GCHandleType.Pinned);
IntPtr structPtr = pinnedHandle.AddrOfPinnedObject();
output = Marshal.PtrToStringAnsi(structPtr, 4);
// Cannot use UTF8 because the assembly should work in Framework v4.5
} finally { if (pinnedHandle.IsAllocated) pinnedHandle.Free(); }
return output;
}
set {
value = value.PadRight(4); // Must fill the blanks - initial while loop replaced (Charlieface's)
IntPtr intValuePtr = IntPtr.Zero;
// Cannot use UTF8 because some users are on Win7 with FlightSim 2004
try { // Put a try as a matter of habit, but not convinced it's gonna throw.
intValuePtr = Marshal.StringToHGlobalAnsi(value);
Ident = Marshal.ReadInt32(intValuePtr, 0).BinaryConvertToUInt32(); // Extension method to convert type.
} finally { Marshal.FreeHGlobal(intValuePtr); } // freeing the right pointer
}
}
public unsafe string IdentStr_Pointer {
get {
string output = "";
fixed (UInt32* ident = &Ident) { // Fixing the field
sbyte* bytes = (sbyte*)ident;
output = new string(bytes, 0, 4, System.Text.Encoding.ASCII); // Encoding added (#Charlieface)
}
return output;
}
set {
// value must not exceed a length of 4 and must be in Ansi [A-Z,0-9,whitespace 0x20].
// value validation at this point occurs outside the structure.
fixed (UInt32* ident = &Ident) { // Fixing the field
byte* bytes = (byte*)ident;
byte[] asciiArr = System.Text.Encoding.ASCII.GetBytes(value);
if (asciiArr.Length >= 4) // (asciiArr.Length == 4) would also work
for (Int32 i = 0; i < 4; i++) bytes[i] = asciiArr[i];
else {
for (Int32 i = 0; i < asciiArr.Length; i++) bytes[i] = asciiArr[i];
for (Int32 i = asciiArr.Length; i < 4; i++) bytes[i] = 0x20;
}
}
}
}
static Dictionary<UInt32, string> ps_dict = new Dictionary<UInt32, string>();
public string IdentStr_StaticDict {
get {
string output; // logic update with TryGetValue (#Charlieface)
if (ps_dict.TryGetValue(Ident, out output)) return output;
output = System.Text.Encoding.ASCII.GetString(Ident.ToBytes(EndiannessType.LittleEndian));
ps_dict.Add(Ident, output);
return output;
}
set { // input can be "FMEE", "DME" or "DK". length of 2 characters is the minimum.
var bytes = new byte[4]; // Need to convert value to a 4 byte array
byte[] asciiArr = System.Text.Encoding.ASCII.GetBytes(value); // should be 4 bytes or less
// Put the valid ASCII codes in the array.
if (asciiArr.Length >= 4) // (asciiArr.Length == 4) would also work
for (Int32 i = 0; i < 4; i++) bytes[i] = asciiArr[i];
else {
for (Int32 i = 0; i < asciiArr.Length; i++) bytes[i] = asciiArr[i];
for (Int32 i = asciiArr.Length; i < 4; i++) bytes[i] = 0x20;
}
Ident = BitConverter.ToUInt32(bytes, 0); // Set structure int value
if (!ps_dict.ContainsKey(Ident)) // Add if missing
ps_dict.Add(Ident, System.Text.Encoding.ASCII.GetString(bytes));
}
}
As mentioned by others, it is not possible to exclude a field from a struct for marshalling.
You also cannot use a pointer as a string in most places.
If the number of different possible strings is relatively small (and it probably will be, given it's only 4 characters), then you could use a static Dictionary<int, string> as a kind of string-interning mechanism.
Then you write a property to add/retrieve the real string.
Note that dictionary access is O(1), and hashing an int just returns itself, so it will be very, very fast, but will take up some memory.
[StructLayout(LayoutKind.Explicit, Size=8)]
public struct AirportHeader
{
[FieldOffset(0)]
[MarshalAs(UnmanagedType.I4)]
public int Ident; // a 4 bytes ASCII : "FIMP" { 0x46, 0x49, 0x4D, 0x50 }
[FieldOffset(4)]
[MarshalAs(UnmanagedType.I4)]
public int Offset;
static Dictionary<int, string> _identStrings = new Dictionary<int, string>();
public string IdentStr =>
_identStrings.TryGetValue(Ident, out var ret) ? ret :
(_identStrings[Ident] = Encoding.ASCII.GetString(Ident.GetBytes()));
}
This is not possible, because a structure must contain all of its values in a specific order. Usually this order is controlled by the CLR itself. If you want to change the order of the data, you can use StructLayout. However, you cannot exclude a field, or that data would simply not exist in memory.
Instead of a string (which is a reference type), you can use a pointer that points directly to that string, and use that in your structure in combination with StructLayout. To get the string value, you can use a get-only property that reads directly from unmanaged memory.
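A minimal sketch of that pointer idea, assuming the ANSI string bytes have been placed in unmanaged memory (for example with Marshal.StringToHGlobalAnsi); the struct and member names are illustrative, and note that the pointer field itself enlarges the struct beyond the original 8 bytes:
[StructLayout(LayoutKind.Explicit, Size = 16)]
public struct AirportHeaderWithPointer
{
    [FieldOffset(0)] public int Ident;
    [FieldOffset(4)] public int Offset;
    [FieldOffset(8)] public IntPtr IdentPtr; // points at a 4-char ANSI string in unmanaged memory

    // Reads the string straight from unmanaged memory; no managed string field in the layout.
    public string IdentStr =>
        IdentPtr == IntPtr.Zero ? null : Marshal.PtrToStringAnsi(IdentPtr, 4);
}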

How To Allocate Contiguous and Fixed Memory in .NET Core

I am working on a physics simulation project where performance is critical, and I think one bottleneck is my memory management. Currently, I have buffer objects that contain a fixed number of rigid bodies, particles and forces. All of the physics entities are initialized within their buffer before the simulation starts. When an entity is needed, the first inactive one is selected, otherwise the oldest; and when it is no longer needed, it is moved to the end, so the active entities are all packed at the front.
Here is the data structure that I am using.
public sealed class Buffer<TValue> : IEnumerable<BufferElement<TValue>> where TValue : new()
{
public Buffer(int capacity)
{
Count = 0;
Capacity = capacity;
Elements = new BufferElement<TValue>[capacity];
for (var index = 0; index < Elements.Length; index++)
{
Elements[index] = new BufferElement<TValue>();
}
}
public int Count { get; private set; }
public int Capacity { get; private set; }
private int ActiveCount { get; set; }
private BufferElement<TValue>[] Elements { get; }
public BufferElement<TValue> Activate()
{
if (Count == ActiveCount) Count = 0;
var bufferElement = Elements[Count++];
if (!bufferElement.Active)
{
bufferElement.Active = true;
ActiveCount++;
}
return bufferElement;
}
public void Deactivate(BufferElement<TValue> element)
{
if (!element.Active) return;
element.Active = false;
var lhs = element.Index;
var rhs = --ActiveCount;
Elements[lhs] = Elements[rhs];
Elements[rhs] = element;
Elements[lhs].Index = lhs;
Elements[rhs].Index = rhs;
}
}
After reading up on how .NET Core treats arrays, there are two things that might be an issue. The first is that each time you access an element in the array, it performs safety checks; the second is that the GC can copy the array to a new memory address.
I would like all of my buffers that contain physics entities to not perform any safety checks and to be fixed in contiguous memory, if possible. I believe this should be possible, since the size of each buffer is fixed, and the sizes of the elements (rigid body, particle, force) are fixed as well.
There seems to be many ways of managing memory in C#, and I am having a difficult time figuring out which would be right for me in this situation.
Pointer
Marshal
Memory
Now, the question boils down to three parts:
Can this be done?
If so, what is the best method to manage memory?
And, what would the proper implementation look like?
First, yes, it can be done. Second, the best method really depends on your situation, since many trade-offs exist and this quickly becomes an opinionated topic. Third, the lightest alternative would be to use the fixed keyword in an unsafe context (no safety checks on arrays there, and it allows you to use C-style pointers) and pin the array's address so that it doesn't change; using Span or Memory might be easier, but the low-level approach of fixed and unsafe can yield better performance when used correctly. Check out this; it's the official docs, and it's filled with neat examples. One last tip: try to switch to a struct if possible - structs have no per-object memory overhead and higher memory density, yielding much better cache access times, since many more of them fit in the cache.
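A bare-bones sketch of the fixed/unsafe route (the Particle struct, the Integrate method and the update inside the loop are all hypothetical stand-ins; compile with /unsafe):
struct Particle
{
    public float X, Y, Z;
}

static unsafe void Integrate(Particle[] particles, float dt)
{
    // Pin the array for the duration of the loop so the GC cannot move it,
    // then walk it with a raw pointer: no bounds checks inside the loop.
    fixed (Particle* p = particles)
    {
        for (int i = 0; i < particles.Length; i++)
        {
            p[i].X += dt; // stand-in for the real physics update
        }
    }
}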

Allocate array in the same object as class

I have an array inside a class:
class MatchNode
{
public short X;
public short Y;
public NodeVal[] ControlPoints;
private MatchNode()
{
ControlPoints = new NodeVal[4];
}
}
The NodeVal is:
struct NodeVal
{
public readonly short X;
public readonly short Y;
public NodeVal(short x, short y)
{
X = x;
Y = y;
}
}
Now, what if we wanted to take performance to the next level and avoid having a separate object for the array? Actually, it doesn't have to be an array. The only restriction is that client code should be able to access NodeVal by index, like:
matchNode.ControlPoints[i]
OR
matchNode[i]
and of course the solution should be faster or as fast as array access since it's supposed to be an optimization.
EDIT: As Ryan suggested it seems I should explain more about the motivation:
The MatchNode class is used heavily in the project. Millions of them are used, and each is accessed hundreds of times, so having them as compact and concise as possible can lead to fewer cache misses and better overall performance.
Let's consider a 64-bit machine. In the current implementation, the class spends 8 bytes on the ControlPoints reference, and the array object itself costs at least 16 bytes of object overhead (for the method table pointer and sync block) plus 16 bytes for the actual data. So we have at least 24 bytes of overhead beside 16 bytes of actual data.
These objects are used in bottlenecks of the project so it matters if we could optimize them more.
Of course we could just have a super big array of NodeVal and just save an index in MatchNode that would locate the actual data but again it will change every client codes that uses the MatchNodes, let alone be a dirty non-object oriented solution.
It is okay to have a messy MatchNode that uses every kind of nasty trick like unsafe or static cache code. It is not okay to leak these optimizations out to the client code.
You're looking for indexers:
class MatchNode
{
public short X;
public short Y;
private NodeVal[] myField;
public NodeVal this[int i] { get { return myField[i]; } set { myField[i] = value; } }
public MatchNode(int size) { this.myField = new NodeVal[size]; }
}
Now you can simply use this:
var m = new MatchNode(10);
m[0] = new NodeVal();
However, I doubt this will affect performance (at least in terms of speed) in any way, and you should investigate the actual problems using a profiling tool (dotTrace, for instance). Furthermore, this approach still creates a private backing field, which will produce the same memory footprint.
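If unsafe code inside MatchNode is acceptable (the question says it is), one way to actually remove the separate array instance is a fixed-size buffer; this is only a sketch of that alternative (MatchNodeFixed and ControlPointBuffer are my names), storing the four NodeVals as eight packed shorts embedded in the object itself:
unsafe struct ControlPointBuffer
{
    public fixed short Values[8]; // 4 NodeVals * 2 shorts, embedded directly in the containing object
}

unsafe class MatchNodeFixed
{
    public short X;
    public short Y;
    private ControlPointBuffer _controlPoints; // no separate array object on the heap

    public NodeVal this[int i]
    {
        get
        {
            fixed (short* p = _controlPoints.Values)
                return new NodeVal(p[2 * i], p[2 * i + 1]);
        }
        set
        {
            fixed (short* p = _controlPoints.Values)
            {
                p[2 * i] = value.X;
                p[2 * i + 1] = value.Y;
            }
        }
    }
}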

Can you use List<List<struct>> to get around the 2gb object limit?

I'm running up against the 2GB object limit in C# (this applies even in 64-bit, for some annoying reason) with a large collection of structs (estimated size of 4.2 GB in total).
Now, obviously using a List is going to give me a list of size 4.2 GB, give or take, but would using a list made up of smaller lists, which in turn contain a portion of the structs, allow me to jump this limit?
My reasoning here is that it's only a hard-coded limit in the CLR that stops me instantiating a 9 GB object on my 64-bit platform, and it's entirely unrelated to system resources. Also, Lists and Arrays are reference types, so a List containing lists would only actually contain the references to each list. No single object therefore exceeds the size limit.
Is there any reason why this wouldn't work? I'd try this myself right now but I don't have a memory profiler on hand to verify.
Now obviously using List is going to give me a list of size 4.2gb give or take, but would using a list made up of smaller lists, which in turn contain a portion of the structs, allow me to jump this limit?
Yes - though, if you're trying to work around this limit, I'd consider using arrays yourself instead of letting the List<T> class manage the array.
The 2gb single object limit in the CLR is exactly that, a single object instance. When you make an array of a struct (which List<T> uses internally), the entire array is "one object instance" in the CLR. However, by using a List<List<T>> or a jagged array, each internal list/array is a separate object, which allows you to effectively have any size object you wish.
The CLR team actually blogged about this, and provided a sample BigArray<T> implementation that acts like a single List<T>, but does the "block" management internally for you. This is another option for getting >2gb lists.
Note that .NET 4.5 will have the option to provide larger than 2gb objects on x64, but it will be something you have to explicitly opt in to having.
The List holds references, which are 4 or 8 bytes each, depending on whether you're running in 32-bit or 64-bit mode. Therefore, if you reference a 2GB object, that does not increase the actual List size by 2GB; it only increases it by the number of bytes needed to reference that object.
This will allow you to reference millions of objects and each object could be 2GB. If you have 4 objects in the List and each is 2 GB, then you would have 8 GB worth of objects referenced by the List, but the List object would have only used up an extra 4*8=32 bytes.
The number of references you can hold on a 32-bit machine before the List hits the 2GB limit is 536.87 million, on a 64-bit machine it's 268.43 million.
536 million references * 2 GB = A LOT OF DATA!
P.S. As Reed pointed out, the above is true for reference types but not for value types. So if you're holding value types, then your workaround is valid. Please see the comments below for more info.
There's an interesting post around this subject here:
http://blogs.msdn.com/b/joshwil/archive/2005/08/10/450202.aspx
Which talks about writing your own 'BigArray' object.
In versions of .NET prior to 4.5, the maximum object size is 2GB. From 4.5 onwards you can allocate larger objects if gcAllowVeryLargeObjects is enabled. Note that the limit for string is not affected, but "arrays" should cover "lists" too, since lists are backed by arrays.
class HugeList<T>
{
private const int PAGE_SIZE = 102400;
private const int ALLOC_STEP = 1024;
private T[][] _rowIndexes;
private int _currentPage = -1;
private int _nextItemIndex = PAGE_SIZE;
private int _pageCount = 0;
private int _itemCount = 0;
#region Internals
private void AddPage()
{
if (++_currentPage == _pageCount)
ExtendPages();
_rowIndexes[_currentPage] = new T[PAGE_SIZE];
_nextItemIndex = 0;
}
private void ExtendPages()
{
if (_rowIndexes == null)
{
_rowIndexes = new T[ALLOC_STEP][];
}
else
{
T[][] rowIndexes = new T[_rowIndexes.Length + ALLOC_STEP][];
Array.Copy(_rowIndexes, rowIndexes, _rowIndexes.Length);
_rowIndexes = rowIndexes;
}
_pageCount = _rowIndexes.Length;
}
#endregion Internals
#region Public
public int Count
{
get { return _itemCount; }
}
public void Add(T item)
{
if (_nextItemIndex == PAGE_SIZE)
AddPage();
_itemCount++;
_rowIndexes[_currentPage][_nextItemIndex++] = item;
}
public T this[int index]
{
get { return _rowIndexes[index / PAGE_SIZE][index % PAGE_SIZE]; }
set { _rowIndexes[index / PAGE_SIZE][index % PAGE_SIZE] = value; }
}
#endregion Public
}

Are public fields ever OK?

Before you react from the gut, as I did initially, read the whole question, please. I know they make you feel dirty, I know we've all been burned before, and I know it's not "good style", but are public fields ever OK?
I'm working on a fairly large-scale engineering application that creates and works with an in-memory model of a structure (anything from a high-rise building to a bridge to a shed, it doesn't matter). There is a TON of geometric analysis and calculation involved in this project. To support this, the model is composed of many tiny immutable read-only structs to represent things like points, line segments, etc. Some of the values of these structs (like the coordinates of the points) are accessed tens or hundreds of millions of times during a typical program execution. Because of the complexity of the models and the volume of calculation, performance is absolutely critical.
I feel that we're doing everything we can to optimize our algorithms, performance-test to find bottlenecks, use the right data structures, etc. I don't think this is a case of premature optimization. Performance tests show order-of-magnitude (at least) performance boosts when accessing fields directly rather than through a property on the object. Given this information, and the fact that we can also expose the same information as properties to support data binding and other situations... is this OK? Remember: read-only fields on immutable structs. Can anyone think of a reason I'm going to regret this?
Here's a sample test app:
struct Point {
public Point(double x, double y, double z) {
_x = x;
_y = y;
_z = z;
}
public readonly double _x;
public readonly double _y;
public readonly double _z;
public double X { get { return _x; } }
public double Y { get { return _y; } }
public double Z { get { return _z; } }
}
class Program {
static void Main(string[] args) {
const int loopCount = 10000000;
var point = new Point(12.0, 123.5, 0.123);
var sw = new Stopwatch();
double x, y, z;
double calculatedValue;
sw.Start();
for (int i = 0; i < loopCount; i++) {
x = point._x;
y = point._y;
z = point._z;
calculatedValue = point._x * point._y / point._z;
}
sw.Stop();
double fieldTime = sw.ElapsedMilliseconds;
Console.WriteLine("Direct field access: " + fieldTime);
sw.Reset();
sw.Start();
for (int i = 0; i < loopCount; i++) {
x = point.X;
y = point.Y;
z = point.Z;
calculatedValue = point.X * point.Y / point.Z;
}
sw.Stop();
double propertyTime = sw.ElapsedMilliseconds;
Console.WriteLine("Property access: " + propertyTime);
double totalDiff = propertyTime - fieldTime;
Console.WriteLine("Total difference: " + totalDiff);
double averageDiff = totalDiff / loopCount;
Console.WriteLine("Average difference: " + averageDiff);
Console.ReadLine();
}
}
result:
Direct field access: 3262
Property access: 24248
Total difference: 20986
Average difference: 0.00020986
It's only 21 seconds, but why not?
Your test isn't really being fair to the property-based versions. The JIT is smart enough to inline simple properties so that they have a runtime performance equivalent to that of direct field access, but it doesn't seem smart enough (today) to detect when the properties access constant values.
In your example, the entire loop body of the field access version is optimized away, becoming just:
for (int i = 0; i < loopCount; i++)
00000025 xor eax,eax
00000027 inc eax
00000028 cmp eax,989680h
0000002d jl 00000027
}
whereas the second version is actually performing the floating-point division on each iteration:
for (int i = 0; i < loopCount; i++)
00000094 xor eax,eax
00000096 fld dword ptr ds:[01300210h]
0000009c fdiv qword ptr ds:[01300218h]
000000a2 fstp st(0)
000000a4 inc eax
000000a5 cmp eax,989680h
000000aa jl 00000096
}
Making just two small changes to your application to make it more realistic makes the two operations practically identical in performance.
First, randomize the input values so that they aren't constants and the JIT isn't smart enough to remove the division entirely.
Change from:
Point point = new Point(12.0, 123.5, 0.123);
to:
Random r = new Random();
Point point = new Point(r.NextDouble(), r.NextDouble(), r.NextDouble());
Secondly, ensure that the results of each loop iteration are used somewhere:
Before each loop, set calculatedValue = 0 so they both start at the same point. After each loop call Console.WriteLine(calculatedValue.ToString()) to make sure that the result is "used" so the compiler doesn't optimize it away. Finally, change the body of the loop from "calculatedValue = ..." to "calculatedValue += ..." so that each iteration is used.
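For illustration, the field-access loop after those changes would look roughly like this (the property-access loop is modified the same way), with point now built from Random values as described above:
calculatedValue = 0;
sw.Start();
for (int i = 0; i < loopCount; i++) {
    x = point._x;
    y = point._y;
    z = point._z;
    calculatedValue += point._x * point._y / point._z;
}
sw.Stop();
Console.WriteLine(calculatedValue.ToString()); // keep the result "used" so it can't be optimized away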
On my machine, these changes (with a release build) yield the following results:
Direct field access: 133
Property access: 133
Total difference: 0
Average difference: 0
Just as we expect, the x86 for each of these modified loops is identical (except for the loop address)
000000dd xor eax,eax
000000df fld qword ptr [esp+20h]
000000e3 fmul qword ptr [esp+28h]
000000e7 fdiv qword ptr [esp+30h]
000000eb fstp st(0)
000000ed inc eax
000000ee cmp eax,989680h
000000f3 jl 000000DF (This loop address is the only difference)
Given that you are dealing with immutable objects with readonly fields, I would say that you have hit the one case where I don't find public fields to be a dirty habit.
IMO, the "no public fields" rule is one of those rules which are technically correct, but unless you are designing a library intended to be used by the public, it is unlikely to cause you any problems if you break it.
Before I get too massively downvoted, I should add that encapsulation is a good thing. Given the invariant "the Value property must be null if HasValue is false", this design is flawed:
class A {
public bool HasValue;
public object Value;
}
However, given that invariant, this design is equally flawed:
class A {
public bool HasValue { get; set; }
public object Value { get; set; }
}
The correct design is
class A {
public bool HasValue { get; private set; }
public object Value { get; private set; }
public void SetValue(bool hasValue, object value) {
if (!hasValue && value != null)
throw new ArgumentException();
this.HasValue = hasValue;
this.Value = value;
}
}
(and even better would be to provide an initializing constructor and make the class immutable).
I know you feel kind of dirty doing this, but it isn't uncommon for rules and guidelines to get shot to hell when performance becomes an issue. For example, quite a few high traffic websites using MySQL have data duplication and denormalized tables. Others go even crazier.
Moral of the story - it may go against everything you were taught or advised, but the benchmarks don't lie. If it works better, just do it.
If you really need that extra performance, then it's probably the right thing to do. If you don't need the extra performance then it's probably not.
Rico Mariani has a couple of related posts:
Ten Questions on Value-Based Programming
Ten Questions on Value-Based Programming : Solution
Personally, the only time I would consider using public fields is in a very implementation-specific private nested class.
Other times it just feels too "wrong" to do it.
The CLR will take care of performance by optimising out the method/property (in release builds) so that shouldn't be an issue.
Not that I disagree with the other answers, or with your conclusion... but I'd like to know where you get the order of magnitude performance difference stat from. As I understand the C# compiler, any simple property (with no additional code other than direct access to the field), should get inlined by the JIT compiler as a direct access anyway.
The advantage of using properties, even in these simple cases, is (in most situations) that by writing it as a property you allow for future changes that might modify the property. (Although in your case there would not be any such changes in the future, of course.)
Try compiling a release build and running directly from the exe instead of through the debugger. If the application was run through a debugger then the JIT compiler will not inline the property accessors. I was not able to replicate your results. In fact, each test I ran indicated that there was virtually no difference in execution time.
But, like the others, I am not completely opposed to direct field access, especially because it is easy to make the field private and add a public property accessor at a later time without needing to make any more code modifications to get the application to compile.
Edit: Okay, my initial tests used an int data type instead of double. I see a huge difference when using doubles. With ints the direct vs. property is virtually the same. With doubles property access is about 7x slower than direct access on my machine. This is somewhat puzzling to me.
Also, it is important to run the tests outside of the debugger. Even in release builds the debugger adds overhead which skews the results.
Here's some scenarios where it is OK (from the Framework Design Guidelines book):
DO use constant fields for constants that will never change.
DO use public static readonly fields for predefined object instances.
And where it is not:
DO NOT assign instances of mutable types to readonly fields.
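For instance (the class and member names here are made up, reusing the Point struct from the question's sample):
public static class GeometryConstants
{
    public const double Tolerance = 1e-9;                     // a constant that will never change
    public static readonly Point Origin = new Point(0, 0, 0); // a predefined (immutable) instance
}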
From what you have stated, I don't get why your trivial properties don't get inlined by the JIT.
If you modify your test to use the temp variables you assign rather than directly access the properties in your calculation you will see a large performance improvement:
sw.Start();
for (int i = 0; i < loopCount; i++)
{
x = point._x;
y = point._y;
z = point._z;
calculatedValue = x * y / z;
}
sw.Stop();
double fieldTime = sw.ElapsedMilliseconds;
Console.WriteLine("Direct field access: " + fieldTime);
sw.Reset();
sw.Start();
for (int i = 0; i < loopCount; i++)
{
x = point.X;
y = point.Y;
z = point.Z;
calculatedValue = x * y / z;
}
sw.Stop();
Perhaps I'll repeat someone else, but here's my point too if it may help.
Teachings are to give you the tools you need to achieve a certain level of ease when encountering such situations.
The Agile software development methodology says that you first have to deliver the product to your client, no matter what your code looks like. Second, you may optimize and make your code "beautiful", or bring it in line with the state of the art.
Here, either you or your client requires performance. Within your project, PERFORMANCE is CRUCIAL, if I understand correctly.
So I guess you'll agree with me that we don't care what the code looks like or whether it respects the "art". Do what you have to do to make it performant and powerful! Properties allow your code to "format" the data I/O if required. A property has its own address and then looks up its member's address when you return the member's value, so you get two address lookups. If performance is that critical, just do it, and make your immutable members public. :-)
This reflects some others' point of view too, if I read correctly. :)
Have a good day!
Types which encapsulate functionality should use properties. Types which only serve to hold data should use public fields, except in the case of immutable classes (where wrapping fields in read-only properties is the only way to reliably protect them against modification). Exposing members as public fields essentially proclaims "these members may be freely modified at any time without regard for anything else". If the type in question is a class type, it further proclaims "anyone who exposes a reference to this thing will be allowing the recipient to change these members at any time in any fashion they see fit." While one shouldn't expose public fields in cases where such a proclamation would be inappropriate, one should expose public fields in cases where such a proclamation would be appropriate and client code could benefit from the assumptions enabled thereby.
