Understanding of .NET internal StringBuilderCache class configuration - c#

While looking at decompiled .NET assemblies to see some internals, I noticed an interesting StringBuilderCache class used by multiple framework methods:
internal static class StringBuilderCache
{
[ThreadStatic]
private static StringBuilder CachedInstance;
private const int MAX_BUILDER_SIZE = 360;
public static StringBuilder Acquire(int capacity = 16)
{
if (capacity <= 360)
{
StringBuilder cachedInstance = StringBuilderCache.CachedInstance;
if (cachedInstance != null && capacity <= cachedInstance.Capacity)
{
StringBuilderCache.CachedInstance = null;
cachedInstance.Clear();
return cachedInstance;
}
}
return new StringBuilder(capacity);
}
public static void Release(StringBuilder sb)
{
if (sb.Capacity <= 360)
{
StringBuilderCache.CachedInstance = sb;
}
}
public static string GetStringAndRelease(StringBuilder sb)
{
string result = sb.ToString();
StringBuilderCache.Release(sb);
return result;
}
}
Example usage can be found, for instance, in the string.Format method:
public static string Format(IFormatProvider provider, string format, params object[] args)
{
...
StringBuilder stringBuilder = StringBuilderCache.Acquire(format.Length + args.Length * 8);
stringBuilder.AppendFormat(provider, format, args);
return StringBuilderCache.GetStringAndRelease(stringBuilder);
}
While it is quite clever and I will certainly remember this caching pattern, I wonder why MAX_BUILDER_SIZE is so small. Wouldn't setting it to, say, 2 kB be better? It would avoid creating larger StringBuilder instances at the cost of fairly little memory overhead.

It is a per-thread cache, so a low number is expected. It is best to use the Reference Source for questions like this; you'll see the comments as well, which look like this (edited to fit):
// The value 360 was chosen in discussion with performance experts as a
// compromise between using as little memory (per thread) as possible and
// still covering a large part of short-lived StringBuilder creations on
// the startup path of VS designers.
private const int MAX_BUILDER_SIZE = 360;
"VS designers" is a wee bit puzzling. Well, not really, surely this work was done to optimize Visual Studio. Neelie Kroes would have a field day and the EU would have another billion dollars if she would find out :)

Most strings built are probably small, so a relatively small buffer will cover most operations without using up too much memory. Consider that the thread pool may create many threads; if every one of them held a cached 2 kB buffer, it would add up to a significant amount of memory.
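The trade-off is easy to see in a stripped-down version of the pattern. This is a sketch, not the framework's code; the class name and constant are illustrative:

```csharp
using System;
using System.Text;

// A simplified sketch of the per-thread StringBuilder caching pattern.
// MaxBuilderSize and the class name are illustrative, not the framework's.
internal static class MyBuilderCache
{
    private const int MaxBuilderSize = 360;

    [ThreadStatic]
    private static StringBuilder cached;

    public static StringBuilder Acquire(int capacity = 16)
    {
        if (capacity <= MaxBuilderSize)
        {
            StringBuilder sb = cached;
            if (sb != null && capacity <= sb.Capacity)
            {
                cached = null; // take ownership so the instance cannot be handed out twice
                sb.Clear();
                return sb;
            }
        }
        return new StringBuilder(capacity);
    }

    public static string GetStringAndRelease(StringBuilder sb)
    {
        string result = sb.ToString();
        if (sb.Capacity <= MaxBuilderSize)
            cached = sb; // only small builders go back into the per-thread cache
        return result;
    }
}
```

Every thread pays at most MaxBuilderSize characters (720 bytes of char data, plus object overhead) for its cached builder, which is why the constant is kept small.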

Related

Efficiency of static constant list initialization in C# vs C++ static arrays

I apologize in advance. My domain is mostly C (and C++). I'm trying to write something similar in C#. Let me explain with code.
In C++, I can use large static arrays that are processed during compile-time and stored in a read-only section of the PE file. For instance:
typedef struct _MY_ASSOC{
const char* name;
unsigned int value;
}MY_ASSOC, *LPMY_ASSOC;
bool GetValueForName(const char* pName, unsigned int* pnOutValue = nullptr)
{
bool bResult = false;
unsigned int nValue = 0;
static const MY_ASSOC all_assoc[] = {
{"name1", 123},
{"name2", 213},
{"name3", 1433},
//... more to follow
{"nameN", 12837},
};
for(size_t i = 0; i < _countof(all_assoc); i++)
{
if(strcmp(all_assoc[i].name, pName) == 0)
{
nValue = all_assoc[i].value;
bResult = true;
break;
}
}
if(pnOutValue)
*pnOutValue = nValue;
return bResult;
}
In the example above, the initialization of static const MY_ASSOC all_assoc is never called at run-time. It is entirely processed during the compile-time.
Now if I write something similar in C#:
public struct NameValue
{
public string name;
public uint value;
}
private static readonly NameValue[] g_arrNV_Assoc = new NameValue[] {
new NameValue() { name = "name1", value = 123 },
new NameValue() { name = "name2", value = 213 },
new NameValue() { name = "name3", value = 1433 },
// ... more to follow
new NameValue() { name = "nameN", value = 12837 },
};
public static bool GetValueForName(string name, out uint nOutValue)
{
foreach (NameValue nv in g_arrNV_Assoc)
{
if (name == nv.name)
{
nOutValue = nv.value;
return true;
}
}
nOutValue = 0;
return false;
}
The initializer for private static readonly NameValue[] g_arrNV_Assoc has to run once during the host class's initialization, and it runs for every single element in that array!
So my question -- can I somehow optimize it so that the data stored in g_arrNV_Assoc array is stored in the PE section and not initialized at run-time?
PS. I hope I'm clear for the .NET folks with my terminology.
Indeed, the terminology is sufficient; "large static array" is fine.
There is nothing you can really do to make it more efficient out of the box.
It will be initialized once (at different times depending on the .NET version and whether you have a static constructor), but in any case before you call it.
Even if you created it empty with just the predetermined size, the CLR is still going to initialize each element to its default, and then you would have to buffer-copy your data over it somehow, which in turn would have to be loaded from a file.
The real questions, though, are:
How much overhead does loading the default static array of structs actually have, compared to what you are doing in C?
Does it matter at what point in the application's lifecycle it is loaded?
And if the overhead really is too much (which I assume you have already determined), what other options are available outside the box?
You could possibly pre-allocate a chunk of unmanaged memory, then read and copy the bytes in from somewhere, and in turn access it using pointers.
You could also create this in a standard DLL and P/Invoke it just like any other unmanaged DLL. However, I'm not really sure you will get much of a free lunch here anyway, as there is overhead in marshalling these sorts of calls.
If your question is only academic, these are really your only options. However, if this is actually a performance problem you have, you will need to benchmark it as a micro-optimization and figure out what is suitable for you.
Anyway, I don't profess to know everything; maybe someone else has a better idea or more information. Good luck.
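If the linear scan (rather than the one-time initialization) is what hurts, one pragmatic middle ground is a dictionary built once in the static initializer. This sketch uses illustrative data; it does not avoid the run-time initialization the question asks about, but it makes each subsequent lookup O(1) instead of O(n):

```csharp
using System;
using System.Collections.Generic;

static class NameLookup
{
    // Built once, on first touch of the class; lookups are then O(1) on
    // average instead of a linear scan. Names and values are illustrative.
    private static readonly Dictionary<string, uint> s_map =
        new Dictionary<string, uint>(StringComparer.Ordinal)
        {
            { "name1", 123 },
            { "name2", 213 },
            { "name3", 1433 },
            // ... more to follow
            { "nameN", 12837 },
        };

    public static bool GetValueForName(string name, out uint value)
    {
        return s_map.TryGetValue(name, out value);
    }
}
```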

How to instance a C# class in UNmanaged memory? (Possible?)

UPDATE: There is now an accepted answer that "works". You should never, ever, ever, ever use it. Ever.
First let me preface my question by stating that I'm a game developer. There's a legitimate - if highly unusual - performance-related reason for wanting to do this.
Say I have a C# class like this:
class Foo
{
public int a, b, c;
public void MyMethod(int d) { a = d; b = d; c = a + b; }
}
Nothing fancy. Note that it is a reference type that contains only value types.
In managed code I'd like to have something like this:
Foo foo;
foo = Voodoo.NewInUnmanagedMemory<Foo>(); // <- ???
foo.MyMethod(1);
What would the function NewInUnmanagedMemory look like? If it can't be done in C#, could it be done in IL? (Or maybe C++/CLI?)
Basically: Is there a way - no matter how hacky - to turn some totally arbitrary pointer into an object reference? And - short of making the CLR explode - damn the consequences.
(Another way to put my question is: "I want to implement a custom allocator for C#")
This leads to the follow-up question: What does the garbage collector do (implementation-specific, if need be) when faced with a reference that points outside of managed memory?
And, related to that, what would happen if Foo had a reference as a member field? What if it pointed at managed memory? What if it only ever pointed at other objects allocated in unmanaged memory?
Finally, if this is impossible: Why?
Update: Here are the "missing pieces" so far:
#1: How to convert an IntPtr to an object reference? It might be possible through unverifiable IL (see comments). So far I've had no luck with this. The framework seems to be extremely careful to prevent this from happening.
(It would also be nice to be able to get the size and layout information for non-blittable managed types at runtime. Again, the framework tries to make this impossible.)
#2: Assuming problem one can be solved - what does the GC do when it encounters an object reference that points outside of the GC heap? Does it crash? Anton Tykhyy, in his answer, guesses that it will. Given how careful the framework is to prevent #1, it does seem likely. Something that confirms this would be nice.
(Alternatively the object reference could point to pinned memory inside the GC heap. Would that make a difference?)
Based on this, I'm inclined to think that this idea for a hack is impossible - or at least not worth the effort. But I'd be interested to get an answer that goes into the technical details of #1 or #2 or both.
I have been experimenting with creating classes in unmanaged memory. It is possible, but it has a problem I was at first unable to solve: you can't assign objects to reference-type fields (see the edits at the bottom for workarounds), so you can have only structure fields in your custom class.
This is evil:
using System;
using System.Reflection.Emit;
using System.Runtime.InteropServices;
public class Voodoo<T> where T : class
{
static readonly IntPtr tptr;
static readonly int tsize;
static readonly byte[] zero;
public static T NewInUnmanagedMemory()
{
IntPtr handle = Marshal.AllocHGlobal(tsize);
Marshal.Copy(zero, 0, handle, tsize);//zero the memory
IntPtr ptr = handle+4;//skip the sync block; the 4-byte offsets here assume a 32-bit process
Marshal.WriteIntPtr(ptr, tptr);//write the type handle where the method table pointer goes
return GetO(ptr);
}
public static void FreeUnmanagedInstance(T obj)
{
IntPtr ptr = GetPtr(obj);
IntPtr handle = ptr-4;
Marshal.FreeHGlobal(handle);
}
delegate T GetO_d(IntPtr ptr);
static readonly GetO_d GetO;
delegate IntPtr GetPtr_d(T obj);
static readonly GetPtr_d GetPtr;
static Voodoo()
{
Type t = typeof(T);
tptr = t.TypeHandle.Value;
tsize = Marshal.ReadInt32(tptr, 4);
zero = new byte[tsize];
DynamicMethod m = new DynamicMethod("GetO", typeof(T), new[]{typeof(IntPtr)}, typeof(Voodoo<T>), true);
var il = m.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Ret);
GetO = m.CreateDelegate(typeof(GetO_d)) as GetO_d;
m = new DynamicMethod("GetPtr", typeof(IntPtr), new[]{typeof(T)}, typeof(Voodoo<T>), true);
il = m.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Ret);
GetPtr = m.CreateDelegate(typeof(GetPtr_d)) as GetPtr_d;
}
}
If you care about memory leaks, you should always call FreeUnmanagedInstance when you are done with your instance.
If you want a more complex solution, you can try this:
using System;
using System.Reflection.Emit;
using System.Runtime.InteropServices;
public class ObjectHandle<T> : IDisposable where T : class
{
bool freed;
readonly IntPtr handle;
readonly T value;
readonly IntPtr tptr;
public ObjectHandle() : this(typeof(T))
{
}
public ObjectHandle(Type t)
{
tptr = t.TypeHandle.Value;
int size = Marshal.ReadInt32(tptr, 4);//base instance size
handle = Marshal.AllocHGlobal(size);
byte[] zero = new byte[size];
Marshal.Copy(zero, 0, handle, size);//zero memory
IntPtr ptr = handle+4;
Marshal.WriteIntPtr(ptr, tptr);//write type ptr
value = GetO(ptr);//convert to reference
}
public T Value{
get{
return value;
}
}
public bool Valid{
get{
return Marshal.ReadIntPtr(handle, 4) == tptr;
}
}
public void Dispose()
{
if(!freed)
{
Marshal.FreeHGlobal(handle);
freed = true;
GC.SuppressFinalize(this);
}
}
~ObjectHandle()
{
Dispose();
}
delegate T GetO_d(IntPtr ptr);
static readonly GetO_d GetO;
static ObjectHandle()
{
DynamicMethod m = new DynamicMethod("GetO", typeof(T), new[]{typeof(IntPtr)}, typeof(ObjectHandle<T>), true);
var il = m.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Ret);
GetO = m.CreateDelegate(typeof(GetO_d)) as GetO_d;
}
}
/*Usage*/
using(var handle = new ObjectHandle<MyClass>())
{
//do some work
}
I hope it will help you on your path.
Edit: Found a solution to reference-type fields:
class MyClass
{
private IntPtr a_ptr;
public object a{
get{
return Voodoo<object>.GetO(a_ptr);
}
set{
a_ptr = Voodoo<object>.GetPtr(value);
}
}
public int b;
public int c;
}
Edit: Even better solution. Just use ObjectContainer<object> instead of object and so on.
public struct ObjectContainer<T> where T : class
{
private readonly T val;
public ObjectContainer(T obj)
{
val = obj;
}
public T Value{
get{
return val;
}
}
public static implicit operator T(ObjectContainer<T> @ref)
{
return @ref.val;
}
}
public static implicit operator ObjectContainer<T>(T obj)
{
return new ObjectContainer<T>(obj);
}
public override string ToString()
{
return val.ToString();
}
public override int GetHashCode()
{
return val.GetHashCode();
}
public override bool Equals(object obj)
{
return val.Equals(obj);
}
}
"I want to implement a custom allocator for C#"
GC is at the core of the CLR. Only Microsoft (or the Mono team in case of Mono) can replace it, at a great cost in development effort. GC being at the core of the CLR, messing around with the GC or the managed heap will crash the CLR — quickly if you're very-very lucky.
What does the garbage collector do (implementation-specific, if need be) when faced with a reference that points outside of managed memory?
It crashes in an implementation-specific way ;)
Purely C# Approach
So, there are a few options. The easiest is to use new/delete in an unsafe context for structs. The second is to use built-in Marshalling services to deal with unmanaged memory (code for this is visible below). However, both of these deal with structs (though I think the latter method is very close to what you want). My code has a limitation in that you must stick to structures throughout and use IntPtrs for references (using ChunkAllocator.ConvertPointerToStructure to get the data and ChunkAllocator.StoreStructure to store the changed data). This is obviously cumbersome, so you'd better really want the performance if you use my approach. However, if you are dealing with only value-types, this approach is sufficient.
Detour: Classes in the CLR
Classes have an 8-byte "prefix" in their allocated memory. Four bytes are for the sync index for multithreading, and four bytes identify their type (basically, the virtual method table and run-time reflection). This makes it hard to deal with unmanaged memory, since these are CLR-specific and since the sync index can change at run-time. See here for details on run-time object creation and here for an overview of memory layout for a reference type. Also check out CLR via C# for a more in-depth explanation.
A Caveat
As usual, things are rarely as simple as yes/no. The real complexity of reference types has to do with how the garbage collector compacts allocated memory during a collection. If you can somehow ensure that a garbage collection doesn't happen, or that it won't affect the data in question (see the fixed keyword), then you can turn an arbitrary pointer into an object reference (just offset the pointer by 8 bytes, then interpret that data as a struct with the same fields and memory layout; perhaps use StructLayoutAttribute to be sure). I would experiment with non-virtual methods to see if they work; they should (especially if you put them on the struct), but virtual methods are a no-go due to the virtual method table that you'd have to discard.
One Does Not Simply Walk Into Mordor
Simply put, this means that managed reference types (classes) cannot be allocated in unmanaged memory. You could use managed reference types in C++, but those would be subject to garbage collection... and the process and code is more painful than the struct-based approach. Where does that leave us? Back where we started, of course.
There is a Secret Way
We could brave Shelob's Lair and do memory allocation ourselves. Unfortunately, this is where our paths must part, because I am not that knowledgeable about it. I will provide you with a link or two - perhaps three or four in actuality. This is rather complicated and raises the question: are there other optimizations you could try? Cache coherency and superior algorithms are one approach, as is judicious application of P/Invoke for performance-critical code. You could also apply the aforementioned structures-only memory allocation to key methods/classes.
Good luck, and let us know if you find a superior alternative.
Appendix: Source Code
ChunkAllocator.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.InteropServices;
namespace MemAllocLib
{
public sealed class ChunkAllocator : IDisposable
{
IntPtr m_chunkStart;
int m_offset;//offset from already allocated memory
readonly int m_size;
public ChunkAllocator(int memorySize = 1024)
{
if (memorySize < 1)
throw new ArgumentOutOfRangeException("memorySize must be positive");
m_size = memorySize;
m_chunkStart = Marshal.AllocHGlobal(memorySize);
}
~ChunkAllocator()
{
Dispose();
}
public IntPtr Allocate<T>() where T : struct
{
int reqBytes = Marshal.SizeOf(typeof(T));//not highly performant
return Allocate<T>(reqBytes);
}
public IntPtr Allocate<T>(int reqBytes) where T : struct
{
if (m_chunkStart == IntPtr.Zero)
throw new ObjectDisposedException("ChunkAllocator");
if (m_offset + reqBytes > m_size)
throw new OutOfMemoryException("Too many bytes allocated: " + reqBytes + " needed, but only " + (m_size - m_offset) + " bytes available");
T created = default(T);
Marshal.StructureToPtr(created, m_chunkStart + m_offset, false);
m_offset += reqBytes;
return m_chunkStart + (m_offset - reqBytes);
}
public void Dispose()
{
if (m_chunkStart != IntPtr.Zero)
{
Marshal.FreeHGlobal(m_chunkStart);
m_offset = 0;
m_chunkStart = IntPtr.Zero;
}
}
public void ReleaseAllMemory()
{
m_offset = 0;
}
public int AllocatedMemory
{
get { return m_offset; }
}
public int AvailableMemory
{
get { return m_size - m_offset; }
}
public int TotalMemory
{
get { return m_size; }
}
public static T ConvertPointerToStruct<T>(IntPtr ptr) where T : struct
{
return (T)Marshal.PtrToStructure(ptr, typeof(T));
}
public static void StoreStructure<T>(IntPtr ptr, T data) where T : struct
{
Marshal.StructureToPtr(data, ptr, false);
}
}
}
Program.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace MemoryAllocation
{
class Program
{
static void Main(string[] args)
{
using (MemAllocLib.ChunkAllocator chunk = new MemAllocLib.ChunkAllocator())
{
Console.WriteLine(">> Simple data test");
SimpleDataTest(chunk);
Console.WriteLine();
Console.WriteLine(">> Complex data test");
ComplexDataTest(chunk);
}
Console.ReadLine();
}
private static void SimpleDataTest(MemAllocLib.ChunkAllocator chunk)
{
IntPtr ptr = chunk.Allocate<System.Int32>();
Console.WriteLine(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Int32>(ptr));
System.Diagnostics.Debug.Assert(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Int32>(ptr) == 0, "Data not initialized properly");
System.Diagnostics.Debug.Assert(chunk.AllocatedMemory == sizeof(Int32), "Data not allocated properly");
int data = MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Int32>(ptr);
data = 10;
MemAllocLib.ChunkAllocator.StoreStructure(ptr, data);
Console.WriteLine(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Int32>(ptr));
System.Diagnostics.Debug.Assert(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Int32>(ptr) == 10, "Data not set properly");
Console.WriteLine("All tests passed");
}
private static void ComplexDataTest(MemAllocLib.ChunkAllocator chunk)
{
IntPtr ptr = chunk.Allocate<Person>();
Console.WriteLine(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Person>(ptr));
System.Diagnostics.Debug.Assert(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Person>(ptr).Age == 0, "Data age not initialized properly");
System.Diagnostics.Debug.Assert(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Person>(ptr).Name == null, "Data name not initialized properly");
System.Diagnostics.Debug.Assert(chunk.AllocatedMemory == System.Runtime.InteropServices.Marshal.SizeOf(typeof(Person)) + sizeof(Int32), "Data not allocated properly");
Person data = MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Person>(ptr);
data.Name = "Bob";
data.Age = 20;
MemAllocLib.ChunkAllocator.StoreStructure(ptr, data);
Console.WriteLine(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Person>(ptr));
System.Diagnostics.Debug.Assert(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Person>(ptr).Age == 20, "Data age not set properly");
System.Diagnostics.Debug.Assert(MemAllocLib.ChunkAllocator.ConvertPointerToStruct<Person>(ptr).Name == "Bob", "Data name not set properly");
Console.WriteLine("All tests passed");
}
struct Person
{
public string Name;
public int Age;
public Person(string name, int age)
{
Name = name;
Age = age;
}
public override string ToString()
{
if (string.IsNullOrWhiteSpace(Name))
return "Age is " + Age;
return Name + " is " + Age + " years old";
}
}
}
}
You can write code in C++ and call it from .NET using P/Invoke, or you can write code in managed C++ that gives you full access to the native API from inside a .NET language. However, on the managed side you can only work with managed types, so you will have to encapsulate your unmanaged objects.
To give a simple example: Marshal.AllocHGlobal allows you to allocate memory on the Windows heap. The handle returned is not of much use in .NET but can be required when calling a native Windows API requiring a buffer.
This is not possible.
However you can use a managed struct and create a pointer of this struct type. This pointer can point anywhere (including to unmanaged memory).
The question is, why would you want to have a class in unmanaged memory? You wouldn't get GC features anyway. You can just use a pointer-to-struct.
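A minimal sketch of that pointer-to-struct alternative (compile with /unsafe; the struct mirrors the Foo class from the question, and all names are illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

struct FooData
{
    public int a, b, c;
}

static class Demo
{
    // Allocate a struct in unmanaged memory and work with it through a pointer.
    // This gives an "instance" outside the GC heap without involving classes.
    public static unsafe int Run()
    {
        IntPtr mem = Marshal.AllocHGlobal(sizeof(FooData));
        try
        {
            FooData* foo = (FooData*)mem.ToPointer();
            foo->a = 1;
            foo->b = 1;
            foo->c = foo->a + foo->b;
            return foo->c;
        }
        finally
        {
            // unmanaged memory is never collected; freeing it is our job
            Marshal.FreeHGlobal(mem);
        }
    }
}
```

The instance lives entirely outside the GC heap, so the garbage collector never sees it, and Marshal.FreeHGlobal is your responsibility.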
Nothing like that is possible. You can access managed memory in unsafe context, but said memory is still managed and subject to GC.
Why?
Simplicity and security.
But now that I think about it, I believe you can mix managed and unmanaged code with C++/CLI. I'm not sure about that, though, because I have never worked with C++/CLI.
I don't know a way to hold a C# class instance in the unmanaged heap, not even in C++/CLI.
It's possible to design a value-type allocator entirely within .NET, without using any unmanaged code, which can allocate and free an arbitrary number of value-type instances without significant GC pressure. The trick is to create a relatively small number of arrays (possibly one per type) to hold the instances, and then pass around "instance reference" structs which hold the array index of the instance in question.
Suppose, for example, that I want to have a "creature" class which holds XYZ positions (float), XYZ velocity (also float), roll/pitch/yaw (ditto), damage (float), and kind (enumeration). An interface "ICreatureReference" would define getters and setters for all those properties. A typical implementation would be a struct CreatureReference with a single private field int _index, and property accessors like:
float Position {
get {return Creatures[_index].Position;}
set {Creatures[_index].Position = value;}
};
The system would keep a list of which array slots are used and vacant (it could, if desired, use one of the fields within Creatures to form a linked list of vacant slots). The CreatureReference.Create method would allocate an item from the vacant-items list; the Dispose method of a CreatureReference instance would add its array slot to the vacant-items list.
This approach ends up requiring an annoying amount of boilerplate code, but it can be reasonably efficient and avoids GC pressure. The biggest problems with it are probably that (1) it makes structs behave more like reference types than structs, and (2) it requires absolute discipline in calling Dispose, since non-disposed array slots will never be reclaimed. Another irksome quirk is that one will be unable to use property setters on read-only values of type CreatureReference, even though the setters would not mutate any fields of the CreatureReference instance itself. Using an interface ICreatureReference may avoid this difficulty, but one must be careful to declare only storage locations of generic types constrained to ICreatureReference, rather than storage locations of ICreatureReference itself.
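To make the scheme concrete, here is a sketch with the field set trimmed to a single Position value and a simple free list; all names are illustrative and there is no exhaustion or double-free checking:

```csharp
using System;

// Index-based "references" into a pre-allocated struct array: allocation and
// free are just free-list operations, so there is no per-instance GC pressure.
static class CreaturePool
{
    public struct Creature { public float Position; public int NextFree; }

    public struct CreatureReference
    {
        internal readonly int Index;
        internal CreatureReference(int index) { Index = index; }

        // Accessors forward to the shared backing array, as described above.
        public float Position
        {
            get { return s_items[Index].Position; }
            set { s_items[Index].Position = value; }
        }
    }

    static readonly Creature[] s_items = new Creature[1024];
    static int s_freeHead;

    static CreaturePool()
    {
        // chain every slot into a free list: slot i's successor is i + 1
        for (int i = 0; i < s_items.Length; i++)
            s_items[i].NextFree = i + 1;
    }

    public static CreatureReference Create()
    {
        // pop a vacant slot (no exhaustion check, for brevity)
        int index = s_freeHead;
        s_freeHead = s_items[index].NextFree;
        s_items[index] = default(Creature);
        return new CreatureReference(index);
    }

    public static void Free(CreatureReference r)
    {
        // push the slot back onto the free list
        s_items[r.Index].NextFree = s_freeHead;
        s_freeHead = r.Index;
    }
}
```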

C# OutOfMemory, Mapped Memory File or Temp Database

Seeking some advice, best practice etc...
Technology: C# .NET4.0, Winforms, 32 bit
I am seeking some advice on how I can best tackle large data processing in my C# Winforms application which experiences high memory usage (working set) and the occasional OutOfMemory exception.
The problem is that we perform a large amount of data processing "in-memory" when a "shopping-basket" is opened. In simplistic terms when a "shopping-basket" is loaded we perform the following calculations;
For each item in the "shopping-basket", retrieve its historical price going all the way back to the date the item first appeared in stock (could be two months, two years or two decades of data). Historical price data is retrieved from text files or over the internet, in any format supported by a price plugin.
For each item, for each day since it first appeared in-stock calculate various metrics which builds a historical profile for each item in the shopping-basket.
The result is that we can potentially perform hundreds, thousands or even millions of calculations depending upon the number of items in the "shopping-basket". If the basket contains too many items we run the risk of hitting an "OutOfMemory" exception.
A couple of caveats;
This data needs to be calculated for each item in the "shopping-basket" and the data is kept until the "shopping-basket" is closed.
Even though we perform steps 1 and 2 in a background thread, speed is important, as the number of items in the "shopping-basket" can greatly affect overall calculation speed.
Memory is reclaimed by the .NET garbage collector when a "shopping-basket" is closed. We have profiled our application and ensured that all references are correctly disposed and closed when a basket is closed.
After all the calculations are completed, the resultant data is stored in an IDictionary. CalculatedData is a class whose properties are the individual metrics calculated by the above process.
Some ideas I've thought about;
Obviously my main concern is to reduce the amount of memory being used by the calculations however the volume of memory used can only be reduced if I
1) reduce the number of metrics being calculated for each day or
2) reduce the number of days used for the calculation.
Both of these options are not viable if we wish to fulfill our business requirements.
Memory Mapped Files
One idea has been to use memory mapped files which will store the data dictionary. Would this be possible/feasible and how can we put this into place?
Use a temporary database
The idea is to use a separate (not in-memory) database which can be created for the life-cycle of the application. As "shopping-baskets" are opened we can persist the calculated data to the database for repeated use, alleviating the requirement to recalculate for the same "shopping-basket".
Are there any other alternatives that we should consider? What is best practice when it comes to calculations on large data and performing them outside of RAM?
Any advice is appreciated....
The easiest solution is a database, perhaps SQLite. Memory-mapped files don't automatically become dictionaries; you would have to code all the memory management yourself, and thereby fight the .NET GC system itself for ownership of the data.
If you're interested in trying the memory mapped file approach, you can try it now. I wrote a small native .NET package called MemMapCache that in essence creates a key/val database backed by MemMappedFiles. It's a bit of a hacky concept, but the program MemMapCache.exe keeps all references to the memory mapped files so that if your application crashes, you don't have to worry about losing the state of your cache.
It's very simple to use and you should be able to drop it in your code without too many modifications. Here is an example using it: https://github.com/jprichardson/MemMapCache/blob/master/TestMemMapCache/MemMapCacheTest.cs
Maybe it'd be of some use to you to at least further figure out what you need to do for an actual solution.
Please let me know if you do end up using it. I'd be interested in your results.
However, long-term, I'd recommend Redis.
As an update for those stumbling upon this thread...
We ended up using SQLite as our caching solution. The SQLite database we employ exists separately from the main data store used by the application. We persist calculated data to the SQLite disk cache as required and have code controlling cache invalidation etc. This was a suitable solution for us, as we were able to achieve write speeds of around 100,000 records per second.
For those interested, this is the code that controls inserts into the diskCache. Full credit for this code goes to JP Richardson (shown answering a question here) for his excellent blog post.
internal class SQLiteBulkInsert
{
#region Class Declarations
private SQLiteCommand m_cmd;
private SQLiteTransaction m_trans;
private readonly SQLiteConnection m_dbCon;
private readonly Dictionary<string, SQLiteParameter> m_parameters = new Dictionary<string, SQLiteParameter>();
private uint m_counter;
private readonly string m_beginInsertText;
#endregion
#region Constructor
public SQLiteBulkInsert(SQLiteConnection dbConnection, string tableName)
{
m_dbCon = dbConnection;
m_tableName = tableName;
var query = new StringBuilder(255);
query.Append("INSERT INTO ["); query.Append(tableName); query.Append("] (");
m_beginInsertText = query.ToString();
}
#endregion
#region Allow Bulk Insert
private bool m_allowBulkInsert = true;
public bool AllowBulkInsert { get { return m_allowBulkInsert; } set { m_allowBulkInsert = value; } }
#endregion
#region CommandText
public string CommandText
{
get
{
if(m_parameters.Count < 1) throw new SQLiteException("You must add at least one parameter.");
var sb = new StringBuilder(255);
sb.Append(m_beginInsertText);
foreach(var param in m_parameters.Keys)
{
sb.Append('[');
sb.Append(param);
sb.Append(']');
sb.Append(", ");
}
sb.Remove(sb.Length - 2, 2);
sb.Append(") VALUES (");
foreach(var param in m_parameters.Keys)
{
sb.Append(m_paramDelim);
sb.Append(param);
sb.Append(", ");
}
sb.Remove(sb.Length - 2, 2);
sb.Append(")");
return sb.ToString();
}
}
#endregion
#region Commit Max
private uint m_commitMax = 25000;
public uint CommitMax { get { return m_commitMax; } set { m_commitMax = value; } }
#endregion
#region Table Name
private readonly string m_tableName;
public string TableName { get { return m_tableName; } }
#endregion
#region Parameter Delimiter
private const string m_paramDelim = ":";
public string ParamDelimiter { get { return m_paramDelim; } }
#endregion
#region AddParameter
public void AddParameter(string name, DbType dbType)
{
var param = new SQLiteParameter(m_paramDelim + name, dbType);
m_parameters.Add(name, param);
}
#endregion
#region Flush
public void Flush()
{
try
{
if (m_trans != null) m_trans.Commit();
}
catch (Exception ex)
{
throw new Exception("Could not commit transaction. See InnerException for more details", ex);
}
finally
{
if (m_trans != null) m_trans.Dispose();
m_trans = null;
m_counter = 0;
}
}
#endregion
#region Insert
public void Insert(object[] paramValues)
{
if (paramValues.Length != m_parameters.Count)
throw new Exception("The values array count must be equal to the count of the number of parameters.");
m_counter++;
if (m_counter == 1)
{
if (m_allowBulkInsert) m_trans = m_dbCon.BeginTransaction();
m_cmd = m_dbCon.CreateCommand();
foreach (var par in m_parameters.Values)
m_cmd.Parameters.Add(par);
m_cmd.CommandText = CommandText;
}
var i = 0;
foreach (var par in m_parameters.Values)
{
par.Value = paramValues[i];
i++;
}
m_cmd.ExecuteNonQuery();
if(m_counter == m_commitMax)
{
try
{
if(m_trans != null) m_trans.Commit();
}
catch(Exception)
{
// commit failures are swallowed; the transaction is disposed below either way
}
finally
{
if(m_trans != null)
{
m_trans.Dispose();
m_trans = null;
}
m_counter = 0;
}
}
}
#endregion
}

Large static arrays are slowing down class load, need a better/faster lookup method

I have a class with a couple static arrays:
an int[] with 17,720 elements
a string[] with 17,720 elements
I noticed when I first access this class it takes almost 2 seconds to initialize, which causes a pause in the GUI that's accessing it.
Specifically, it's a lookup for Unicode character names. The first array is an index into the second array.
static readonly int[] NAME_INDEX = {
0x0000, 0x0001, 0x0005, 0x002C, 0x003B, ...
static readonly string[] NAMES = {
"Exclamation Mark", "Digit Three", "Semicolon", "Question Mark", ...
The following code is how the arrays are used (given a character code). [Note: This code isn't a performance problem]
int nameIndex = Array.BinarySearch<int>(NAME_INDEX, code);
if (nameIndex >= 0) // BinarySearch returns a non-negative index when found; 0 is a valid hit
{
return NAMES[nameIndex];
}
I guess I'm looking at other options on how to structure the data so that 1) The class is quickly loaded, and 2) I can quickly get the "name" for a given character code.
Should I not be storing all these thousands of elements in static arrays?
Update
Thanks for all the suggestions. I've tested out a Dictionary approach and the performance of adding all the entries seems to be really poor.
Here is some code with the Unicode data to test out Arrays vs Dictionaries
http://drop.io/fontspace/asset/fontspace-unicodesupport-zip
Solution Update
I tested out my original dual arrays (which are faster than both dictionary options) with a background thread to initialize and that helped performance a bit.
However, the real surprise is how well the binary files in resource streams works. It is the fastest solution discussed in this thread. Thanks everyone for your answers!
So a couple of observations. Binary search is only going to work if your array is sorted; your snippet looks sorted already, but make sure it stays that way as you add entries.
Since your primary goal is to find a specific name, your code is begging for a hash table. I would suggest using a Dictionary, it will give you O(1) (on average) lookup, without much more overhead than just having the arrays.
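As a minimal sketch of that suggestion (the two entries shown are just illustrative samples from the question's data):

```csharp
using System.Collections.Generic;

static class CharNames
{
    // Built once from the same data that currently fills the two arrays.
    static readonly Dictionary<int, string> Names = new Dictionary<int, string>
    {
        { 0x0021, "Exclamation Mark" },
        { 0x003B, "Semicolon" },
        // ... remaining ~17,700 entries ...
    };

    public static string GetName(int code)
    {
        string name;
        // O(1) average-case lookup; returns null when the code has no entry.
        return Names.TryGetValue(code, out name) ? name : null;
    }
}
```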
As for the load time, I agree with Andrey that the best way is going to be by using a separate thread. You are going to have some initialization overhead with the amount of data you are using. Normal practice with GUIs is to use a separate thread for these activities so you don't lock up the UI.
First
A Dictionary<int, string> is going to perform far better than your duelling arrays will. Putting aside how this data gets into the arrays/Dictionary (hardcoded vs. read in from another location, like a resource file), this is still a better and more intuitive storage mechanism.
Second
As others have suggested, do your loading in another thread. I'd use a helper function to help you deal with this. You could use an approach like this:
public class YourClass
{
private static Dictionary<int, string> characterLookup;
private static ManualResetEvent lookupCreated;
static YourClass()
{
lookupCreated = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(LoadLookup);
}
static void LoadLookup(object garbage)
{
characterLookup = new Dictionary<int, string>();
// add your pairs by calling characterLookup.Add(...)
lookupCreated.Set();
}
public static string GetDescription(int code)
{
if (lookupCreated != null)
{
lookupCreated.WaitOne();
lookupCreated.Close();
lookupCreated = null;
}
string output;
if(!characterLookup.TryGetValue(code, out output)) output = null;
return output;
}
}
In your code, call GetDescription in order to translate your integer into the corresponding string. If the UI doesn't call this until later, then you should see a marked decrease in startup time. To be safe, though, I've included a ManualResetEvent that will cause any calls to GetDescription to block until the dictionary has been fully loaded.
"Should I not be storing all these thousands of elements in static arrays?"
A much better way would be to store your data as a binary stream in resources in the assembly and then load it from the resources. It adds some programming overhead, but avoids initializing thousands of objects up front.
Basic idea would be (no real code):
// Load data (two streams):
indices = ResourceManager.GetStream ("indexData");
strings = ResourceManager.GetStream ("stringData");
// Retrieving an entry:
stringIndex = indices.GetIndexAtPosition (char);
string = strings.GetStringFromPosition (stringIndex);
If you want a really good solution (for even more work), look into using memory-mapped data files.
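A rough sketch of the resource-stream idea (the resource name and record layout here are assumptions, and a real implementation would cache the pairs after the first read instead of rescanning the stream on every call):

```csharp
using System.IO;
using System.Reflection;

static class ResourceNames
{
    // Assumed record layout: repeated (Int32 code, length-prefixed string name)
    // pairs, as produced by BinaryWriter.
    public static string Find(int code)
    {
        using (Stream s = Assembly.GetExecutingAssembly()
            .GetManifestResourceStream("MyApp.UnicodeNames.bin")) // hypothetical resource name
        using (var reader = new BinaryReader(s))
        {
            while (s.Position < s.Length)
            {
                int key = reader.ReadInt32();
                string name = reader.ReadString();
                if (key == code)
                    return name;
            }
        }
        return null;
    }
}
```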
Initialize your arrays in a separate thread so they don't lock up the UI:
http://msdn.microsoft.com/en-us/library/hz49h034.aspx
if you store the arrays in a file you could do a lazy load
public class Class1
{
const int CountOfEntries = 17700; // or whatever the count is
IEnumerable<KeyValuePair<int, string>> load()
{
using (var reader = File.OpenText("somefile"))
{
while (!reader.EndOfStream)
{
var line = reader.ReadLine();
var pair = line.Split(',');
yield return new KeyValuePair<int, string>(int.Parse(pair[0]), pair[1]);
}
}
}
private static Dictionary<int, string> _lookup = new Dictionary<int, string>();
private static IEnumerator<KeyValuePair<int, string>> _loader = null;
private string LookUp(int index)
{
if (_lookup.Count < CountOfEntries && !_lookup.ContainsKey(index))
{
if(_loader == null)
{
_loader = load().GetEnumerator();
}
while(_loader.MoveNext())
{
var pair = _loader.Current;
_lookup.Add(pair.Key,pair.Value);
if (pair.Key == index)
{
return pair.Value;
}
}
}
string name;
if (_lookup.TryGetValue(index,out name))
{
return name;
}
throw new KeyNotFoundException("The given index was not found");
}
}
the code expects the file to have one pair on each line, like so:
index0,name0
index1,name1
If the first index sought is near the end of the file, this will probably perform slower (mainly due to IO). If access is random, the average first lookup reads half of the values; if access is not random, make sure to keep the most-used entries at the top of the file.
There are a few more issues to consider: the above code is not thread-safe for the load operation, and to keep the rest of the code responsive, the loading should be done in a background thread.
Hope this helps.
What about using a dictionary instead of two arrays? You could initialize the dictionary asynchronously using a thread or thread pool. The lookup would be O(1) instead of O(log(n)) as well.
public static class Lookup
{
private static readonly ManualResetEvent m_Initialized = new ManualResetEvent(false);
private static readonly Dictionary<int, string> m_Dictionary = new Dictionary<int, string>();
static Lookup() // static constructors cannot have an access modifier
{
// Start an asynchronous operation to intialize the dictionary.
// You could use ThreadPool.QueueUserWorkItem instead of creating a new thread.
Thread thread = new Thread(() => { Initialize(); });
thread.Start();
}
public static string GetName(int code) // a member cannot share its enclosing type's name
{
m_Initialized.WaitOne();
lock (m_Dictionary)
{
return m_Dictionary[code];
}
}
private static void Initialize()
{
lock (m_Dictionary)
{
m_Dictionary.Add(0x0000, "Exclamation Point");
// Keep adding items to the dictionary here.
}
m_Initialized.Set();
}
}

How can one simplify network byte-order conversion from a BinaryReader?

System.IO.BinaryReader reads values in a little-endian format.
I have a C# application connecting to a proprietary networking library on the server side. The server-side sends everything down in network byte order, as one would expect, but I find that dealing with this on the client side is awkward, particularly for unsigned values.
UInt32 length = (UInt32)IPAddress.NetworkToHostOrder(reader.ReadInt32());
is the only way I've come up with to get a correct unsigned value out of the stream, but this seems both awkward and ugly, and I have yet to test if that's just going to clip off high-order values so that I have to do fun BitConverter stuff.
Is there some way I'm missing short of writing a wrapper around the whole thing to avoid these ugly conversions on every read? It seems like there should be an endian-ness option on the reader to make things like this simpler, but I haven't come across anything.
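On the clipping worry: IPAddress.NetworkToHostOrder(int) only swaps bytes, and casting the resulting Int32 to UInt32 reinterprets the same 32 bits (C# casts are unchecked by default), so no high-order bits are lost. A quick sketch, assuming a little-endian host:

```csharp
using System;
using System.Net;

class CastCheck
{
    static void Main()
    {
        // 0xDEADBEEF sent in network (big-endian) order arrives as these bytes.
        byte[] wire = { 0xDE, 0xAD, 0xBE, 0xEF };
        // BinaryReader.ReadInt32 would perform this little-endian read:
        int raw = BitConverter.ToInt32(wire, 0); // 0xEFBEADDE on a little-endian machine
        uint value = (uint)IPAddress.NetworkToHostOrder(raw);
        Console.WriteLine(value.ToString("X8")); // DEADBEEF
    }
}
```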
There is no built-in converter. Here's my wrapper (as you can see, I only implemented the functionality I needed but the structure is pretty easy to change to your liking):
/// <summary>
/// Utilities for reading big-endian files
/// </summary>
public class BigEndianReader
{
public BigEndianReader(BinaryReader baseReader)
{
mBaseReader = baseReader;
}
public short ReadInt16()
{
return BitConverter.ToInt16(ReadBigEndianBytes(2), 0);
}
public ushort ReadUInt16()
{
return BitConverter.ToUInt16(ReadBigEndianBytes(2), 0);
}
public uint ReadUInt32()
{
return BitConverter.ToUInt32(ReadBigEndianBytes(4), 0);
}
public byte[] ReadBigEndianBytes(int count)
{
byte[] bytes = new byte[count];
for (int i = count - 1; i >= 0; i--)
bytes[i] = mBaseReader.ReadByte();
return bytes;
}
public byte[] ReadBytes(int count)
{
return mBaseReader.ReadBytes(count);
}
public void Close()
{
mBaseReader.Close();
}
public Stream BaseStream
{
get { return mBaseReader.BaseStream; }
}
private BinaryReader mBaseReader;
}
Basically, ReadBigEndianBytes does the grunt work, and this is passed to a BitConverter. There will be a definite problem if you read a large number of bytes since this will cause a large memory allocation.
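A short usage sketch of the wrapper (the bytes are illustrative; note that going through BitConverter means this wrapper itself assumes a little-endian host):

```csharp
using System;
using System.IO;

class Demo
{
    static void Main()
    {
        // The big-endian encoding of 0x12345678, as it would arrive off the wire.
        var data = new byte[] { 0x12, 0x34, 0x56, 0x78 };
        var reader = new BigEndianReader(new BinaryReader(new MemoryStream(data)));
        Console.WriteLine(reader.ReadUInt32().ToString("X8")); // 12345678
    }
}
```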
I built a custom BinaryReader to handle all of this. It's available as part of my Nextem library. It also has a very easy way of defining binary structs, which I think will help you here -- check out the Examples.
Note: It's only in SVN right now, but very stable. If you have any questions, email me at cody_dot_brocious_at_gmail_dot_com.
