Huge 2-dimensional arrays in C#

I need to declare square matrices in a C# WinForms app with more than 20000 elements per row.
I read about the 2GB .NET object size limit on 32-bit, and that the same limit applies on a 64-bit OS.
So as I understand it, the only answer is to use unsafe code or a separate library built with a C++ compiler.
The problem is worse for me because ushort[20000,20000] is smaller than 2GB, but in practice I cannot allocate even 700MB of memory. My limit is 650MB and I don't understand why - I have 32-bit WinXP with 3GB of memory.
I tried to use Marshal.AllocHGlobal(700 << 20), but it throws an OutOfMemoryException; GC.GetTotalMemory returns 4.5MB before I try to allocate.
I found only that many people say to use unsafe code, but I cannot find an example of how to declare a 2-dim array on the heap (no stack can hold such a huge amount of data) or how to work with it using pointers.
Is it pure C++ code inside unsafe{} blocks?
PS. Please don't ask WHY I need such huge arrays... but if you want to know - I need to analyze texts (for example books) and build a lot of indexes. So the answer is: matrices of relations between words.
Edit: Could somebody please provide a small example of working with matrices using pointers in unsafe code? I know that under 32-bit it is impossible to allocate more space, but I have spent a lot of time googling for such an example and found NOTHING.

Why demand a huge 2-D array? You can simulate this with, for example, a jagged array - ushort[][] - almost as fast, and you won't hit the same single-object limit. You'll still need buckets-o-RAM of course, so x64 is implied...
ushort[][] arr = new ushort[size][];
for (int i = 0; i < size; i++) {
    arr[i] = new ushort[size];
}
Besides which - you might want to look at sparse-arrays, eta-vectors, and all that jazz.
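For the word-relation use case, where most word pairs never co-occur, a sparse representation can sidestep the giant allocation entirely. A minimal sketch, assuming most cells are zero (`SparseMatrix` is a hypothetical name, not a framework class; missing keys simply read as zero):

```csharp
using System.Collections.Generic;

// Sparse square matrix: only non-zero cells are stored.
// Memory use is proportional to the number of non-zero entries,
// not to size * size.
class SparseMatrix
{
    private readonly Dictionary<long, ushort> cells = new Dictionary<long, ushort>();
    private readonly int size;

    public SparseMatrix(int size) { this.size = size; }

    public ushort this[int row, int col]
    {
        get
        {
            ushort value;
            // Flatten (row, col) into one long key; unset cells read as 0.
            return cells.TryGetValue((long)row * size + col, out value) ? value : (ushort)0;
        }
        set { cells[(long)row * size + col] = value; }
    }
}
```

With this, `new SparseMatrix(20000)` costs almost nothing up front; you only pay for the relations that actually occur in the text.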

The reason you can't get anywhere near even a 2GB allocation in 32-bit Windows is that arrays in the CLR are laid out in contiguous memory. In 32-bit Windows the address space is so restricted that you'll find nothing like a 2GB hole in the virtual address space of the process. Your experiments suggest that the largest region of available address space is 650MB. Moving to 64-bit Windows should at least allow you to use a full 2GB allocation.
Note that the virtual address space limitation on 32-bit Windows has nothing to do with the amount of physical memory in your computer, in your case 3GB. Instead the limitation comes from the number of bits the CPU uses to form memory addresses. 32-bit Windows uses, unsurprisingly, 32 bits for each memory address, which gives a total addressable memory space of 4GB. By default Windows keeps 2GB for itself and gives 2GB to the currently running process, so you can see why the CLR will find nothing like a 2GB allocation. With some trickery (the /3GB boot option) you can change the OS/user split so that Windows keeps only 1GB for itself and gives the running process 3GB, which might help. With 64-bit Windows, however, the addressable memory assigned to each process jumps to 8 terabytes, so there the CLR will almost certainly be able to use full 2GB allocations for arrays.
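Before reasoning about which limit applies, it is worth confirming from code whether the process actually runs as 32-bit or 64-bit. A small sketch; `IntPtr.Size` reflects the pointer width of the running process, regardless of the machine's physical RAM:

```csharp
using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit one.
        Console.WriteLine(IntPtr.Size == 8
            ? "64-bit process: a full 2GB single-object allocation is realistic"
            : "32-bit process: expect a fragmented ~2GB address space");
    }
}
```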

I'm so happy! :) Recently I played around with this problem - I tried to solve it using a database, only to find that this approach is far from perfect. The [20000,20000] matrix was implemented as a single table.
Even with properly set up indexes, just creating the 400+ million records took about an hour on my PC. That is not critical for me.
Then I ran my algorithm against that matrix (it requires joining the same table twice!), and after working for more than half an hour it had not completed a single step.
After that I understood that the only way is to work with such a matrix in memory only, and went back to C#.
I created a pilot application to test the memory allocation process and to determine exactly where allocation stops for different structures.
As I said in my first post, with 2-dim arrays it is possible to allocate only about 650MB under 32-bit WinXP.
The results under Win7 with 64-bit compilation were also sad - less than 700MB.
I used JAGGED ARRAYS [][] instead of a single 2-dim array [,], and the results are below:
Compiled in Release mode as a 32-bit app - WinXP 32-bit, 3GB phys. mem. - 1.45GB
Compiled in Release mode as a 64-bit app - Win7 64-bit, 2GB under a VM - 7.5GB
I couldn't find how to attach source files to this post, so I'll just describe the design and paste the code manually.
Create a WinForms application.
Put the following controls on the form with their default names: 1 button, 1 numericUpDown and 1 listBox.
In the .cs file add the following code and run.
private void button1_Click(object sender, EventArgs e)
{
    //Log(string.Format("Memory used before collection: {0}", GC.GetTotalMemory(false)));
    GC.Collect();
    //Log(string.Format("Memory used after collection: {0}", GC.GetTotalMemory(true)));
    listBox1.Items.Clear();
    if (string.IsNullOrEmpty(numericUpDown1.Text)) {
        Log("Enter integer value");
    } else {
        int val = (int)numericUpDown1.Value;
        Log(TryAllocate(val));
    }
}
/// <summary>
/// Memory test method
/// </summary>
/// <param name="rowLen">row length, i.e. elements per row</param>
private IEnumerable<string> TryAllocate(int rowLen) {
    var r = new List<string>();
    r.Add(string.Format("Allocating using jagged array with overall size (MB) = {0}",
        ((long)rowLen * rowLen * Marshal.SizeOf(typeof(int))) >> 20));
    try {
        var ar = new int[rowLen][];
        for (int i = 0; i < ar.Length; i++) {
            try {
                ar[i] = new int[rowLen];
            }
            catch (OutOfMemoryException) {
                r.Add(string.Format("Unable to allocate memory on step {0}. Allocated {1} MB", i,
                    ((long)rowLen * i * Marshal.SizeOf(typeof(int))) >> 20));
                return r; // stop here so the success message below is not added after a failure
            }
        }
        r.Add("Memory was successfully allocated");
    }
    catch (Exception e) {
        r.Add(e.Message + e.StackTrace);
    }
    return r;
}
#region Logging
private void Log(string s) {
    listBox1.Items.Add(s);
}
private void Log(IEnumerable<string> s) {
    if (s != null) {
        foreach (var ss in s) {
            listBox1.Items.Add(ss);
        }
    }
}
#endregion
The problem is solved for me. Thank you, guys!

If a sparse array does not apply, it's probably better to just do it in C/C++ with the platform APIs for memory-mapped files: http://en.wikipedia.org/wiki/Memory-mapped_file

For the OutOfMemoryException read this thread (especially nobugz and Brian Rasmussen's answer):
Microsoft Visual C# 2008 Reducing number of loaded dlls

If you explained what you are trying to do it would be easier to help. Maybe there are better ways than allocating such a huge amount of memory at once.
Re-design is also choice number one in this great blog post:
BigArray, getting around the 2GB array size limit
The options suggested in this article are:
Re-design
Native memory for array containing simple types, sample code available here:
Unsafe Code Tutorial
Unsafe Code and Pointers (C# Programming Guide)
How to: Use Pointers to Copy an Array of Bytes (C# Programming Guide)
Writing a BigArray class which segments the large data structure into smaller segments of manageable size, sample code in the above blog post
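On the "native memory" option, and to answer the small-example request from the question: it is not C++ inside unsafe{} - it is still C#, compiled with the /unsafe option, and you do the 2-D indexing yourself as row * cols + col. A minimal sketch over Marshal.AllocHGlobal (sizes kept deliberately small here; a 32-bit process will still hit the same fragmentation limits for very large blocks):

```csharp
using System;
using System.Runtime.InteropServices;

class NativeMatrix
{
    // A rows x cols ushort matrix in native (unmanaged) memory,
    // addressed manually as row * cols + col. Compile with /unsafe.
    static unsafe void Main()
    {
        int rows = 1000, cols = 1000; // small enough to allocate anywhere
        IntPtr block = Marshal.AllocHGlobal(rows * cols * sizeof(ushort));
        try
        {
            ushort* matrix = (ushort*)block;
            matrix[5 * cols + 7] = 42;               // write cell [5,7]
            Console.WriteLine(matrix[5 * cols + 7]); // prints 42
        }
        finally
        {
            // Native memory is not garbage collected: free it explicitly.
            Marshal.FreeHGlobal(block);
        }
    }
}
```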

Related

Memory limited to about 2.5 GB for single .net process

I am writing a .NET application running on Windows Server 2016 that does HTTP GETs on a bunch of pieces of a large file. This dramatically speeds up the download process since you can download them in parallel. Unfortunately, once they are downloaded, it takes a fairly long time to piece them all back together.
There are between 2-4k files that need to be combined. The server this will run on has PLENTY of memory, close to 800GB. I thought it would make sense to use MemoryStreams to store the downloaded pieces until they can be sequentially written to disk, BUT I am only able to consume about 2.5GB of memory before I get a System.OutOfMemoryException. The server has hundreds of GB available, and I can't figure out how to use them.
MemoryStreams are built around byte arrays. Arrays cannot be larger than 2GB currently.
The current implementation of System.Array uses Int32 for all its internal counters etc, so the theoretical maximum number of elements is Int32.MaxValue.
There's also a 2GB max-size-per-object limit imposed by the Microsoft CLR.
As you try to put the content in a single MemoryStream the underlying array gets too large, hence the exception.
Try to store the pieces separately, and write them directly to the FileStream (or whatever you use) when ready, without first trying to concatenate them all into 1 object.
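A sketch of that approach, assuming each piece's starting offset in the final file is known (for example from the Range header used to request it); `ChunkWriter.WritePiece` is a hypothetical helper, not part of any framework. Each downloaded piece goes straight to its final position on disk, so nothing is ever concatenated in memory:

```csharp
using System.IO;

class ChunkWriter
{
    // Write one downloaded piece directly at its final offset in the
    // destination file. FileShare.Write lets parallel workers write
    // their own (non-overlapping) regions of the same file.
    public static void WritePiece(string path, long offset, byte[] piece)
    {
        using (var fs = new FileStream(path, FileMode.OpenOrCreate,
                                       FileAccess.Write, FileShare.Write))
        {
            fs.Seek(offset, SeekOrigin.Begin);
            fs.Write(piece, 0, piece.Length);
        }
    }
}
```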
According to the source code of the MemoryStream class, you will not be able to store more than 2 GB of data in one instance of this class.
The reason for this is that the maximum length of the stream is set to Int32.MaxValue, and the maximum index of an array is set to 0x7FFFFFC7, which is 2,147,483,591 in decimal (≈ 2 GB).
Snippet MemoryStream
private const int MemStreamMaxLength = Int32.MaxValue;
Snippet array
// We impose limits on maximum array length in each dimension to allow efficient
// implementation of advanced range check elimination in future.
// Keep in sync with vm\gcscan.cpp and HashHelpers.MaxPrimeArrayLength.
// The constants are defined in this method: inline SIZE_T MaxArrayLength(SIZE_T componentSize) from gcscan
// We have different max sizes for arrays with elements of size 1 for backwards compatibility
internal const int MaxArrayLength = 0X7FEFFFFF;
internal const int MaxByteArrayLength = 0x7FFFFFC7;
The question More than 2GB of managed memory was already discussed a long time ago on the Microsoft forum, and it references a blog article about this: BigArray, getting around the 2GB array size limit.
Update
I suggest using the following code, which should be able to allocate more than 4 GB on an x64 build but will fail below 4 GB on an x86 build:
private static void Main(string[] args)
{
    List<byte[]> data = new List<byte[]>();
    Random random = new Random();
    while (true)
    {
        try
        {
            var tmpArray = new byte[1024 * 1024];
            random.NextBytes(tmpArray);
            data.Add(tmpArray);
            Console.WriteLine($"{data.Count} MB allocated");
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Further allocation failed.");
            break; // without this the loop would retry (and fail) forever
        }
    }
}
As has already been pointed out, the main problem here is the nature of MemoryStream being backed by a byte[], which has a fixed upper size.
The option of using an alternative Stream implementation has been noted. Another alternative is to look into "pipelines", the new IO API. A "pipeline" is based around discontiguous memory, which means it isn't required to use a single contiguous buffer; the pipelines library will allocate multiple slabs as needed, which your code can process. I have written extensively on this topic; part 1 is here. Part 3 probably has the most code focus.
Just to confirm that I understand your question: you're downloading a single very large file in multiple parallel chunks and you know how big the final file is? If you don't then this does get a bit more complicated but it can still be done.
The best option is probably to use a MemoryMappedFile (MMF). What you'll do is to create the destination file via MMF. Each thread will create a view accessor to that file and write to it in parallel. At the end, close the MMF. This essentially gives you the behavior that you wanted with MemoryStreams but Windows backs the file by disk. One of the benefits to this approach is that Windows manages storing the data to disk in the background (flushing) so you don't have to, and should result in excellent performance.
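A minimal sketch of the MMF approach (`MmfAssembler` and its method names are illustrative, not from the original answer): create the destination file once with its final size, then let each worker write its piece through a view accessor at the piece's offset.

```csharp
using System.IO.MemoryMappedFiles;

class MmfAssembler
{
    // Create the destination file with its final size, backed by disk.
    public static MemoryMappedFile CreateDestination(string path, long totalSize)
    {
        return MemoryMappedFile.CreateFromFile(path, System.IO.FileMode.Create,
                                               null, totalSize);
    }

    // Each worker calls this in parallel for its own piece; Windows
    // flushes the pages to disk in the background.
    public static void WritePiece(MemoryMappedFile mmf, long offset, byte[] piece)
    {
        using (var view = mmf.CreateViewAccessor(offset, piece.Length))
        {
            view.WriteArray(0, piece, 0, piece.Length);
        }
    }
}
```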

Declaring a jagged array succeeds, but out of memory when declaring a multi-dimensional array of the same size

I get an out of memory exception when running this line of code:
double[,] _DataMatrix = new double[_total_traces, _samples_per_trace];
But this code completes successfully:
double[][] _DataMatrix = new double[_total_traces][];
for (int i = 0; i < _total_traces; i++)
{
_DataMatrix[i] = new double[_samples_per_trace];
}
My first question is why is this happening?
As a followup question, my ultimate goal is to run Principal Component Analysis (PCA) on this data. It's a pretty large dataset. The number of "rows" in the matrix could be a couple million. The number of "columns" will be around 50. I found a PCA library in the Accord.net framework that seems popular. It takes a jagged array as input (which I can successfully create and populate with data), but I run out of memory when I pass it to PCA - I guess because it is passing by value and creating a copy of the data(?). My next thought was to just write my own method to do the PCA so I wouldn't have to copy the data, but I haven't got that far yet. I haven't really had to deal with memory management much before, so I'm open to tips.
Edit: This is not a duplicate of the topic linked below, because that link did not explain how the memory of the two was stored differently and why one would cause memory issues despite them both being the same size.
In 32-bit it is hard to find a contiguous range of addresses of more than a few hundred MB (see for example https://stackoverflow.com/a/30035977/613130). But it is easy to have scattered pieces of memory totalling several hundred MB (or even 1GB)...
A multidimensional array is a single slab of contiguous memory; a jagged array is a collection of small arrays (so many small pieces of memory).
Note that in 64-bit it is much easier to create an array of the maximum size permitted by .NET (around 2GB, or even more... see https://stackoverflow.com/a/2338797/613130)

Maximum capacity of Collection<T> different than expected for x86

The main question is about the maximum number of items that can be in a collection such as List<T>. I was looking for answers here, but I don't understand the reasoning.
Assume we are working with a List<int> with sizeof(int) = 4 bytes... Everyone seems sure that on x64 you can have a maximum of 268,435,456 ints, and on x86 a maximum of 134,217,728 ints. Links:
List size limitation in C#
Where is the maximum capacity of a C# Collection<T> defined?
What's the max items in a List<T>?
However, when I tested this myself I see that it's not the case for x86. Can anyone point me to where I may be wrong?
//// Test engine set to `x86` for `default processor architecture`
[TestMethod]
public void TestMemory()
{
    var x = new List<int>();
    try
    {
        for (long y = 0; y < long.MaxValue; y++)
            x.Add(0);
    }
    catch (Exception)
    {
        System.Diagnostics.Debug.WriteLine("Actual capacity (int): " + x.Count);
        System.Diagnostics.Debug.WriteLine("Size of objects: " + System.Runtime.InteropServices.Marshal.SizeOf(x.First().GetType())); //// This gives us "4"
    }
}
For x64: 268435456 (expected)
For x86: 67108864 (half of what was expected)
Why do people say that a List containing 134217728 ints is exactly 512MB of memory... when 134217728 * sizeof(int) * 8 = 4,294,967,296 = 4GB... which is way more than the 2GB-per-process limit,
whereas 67108864 * sizeof(int) * 8 = 2,147,483,648 = 2GB... which makes sense?
I am using .NET 4.5 on a 64-bit machine running Windows 7 with 8GB RAM, running my tests in x64 and x86.
EDIT: When I set the capacity directly with new List<int>(134217728) I get a System.OutOfMemoryException.
EDIT2: There was an error in my calculations: multiplying by 8 is wrong; MB =/= Mbit - I was computing Mbits. Still, 67108864 ints would only be 256MB... which is way smaller than expected.
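As a quick sanity check of EDIT2's corrected arithmetic (bytes, not bits):

```csharp
using System;

class CapacityMath
{
    static void Main()
    {
        // The "expected" x86 maximum and the observed one, in bytes.
        long expected = 134217728L * sizeof(int); // 536,870,912 bytes
        long observed = 67108864L * sizeof(int);  // 268,435,456 bytes
        Console.WriteLine(expected >> 20); // prints 512 (MB)
        Console.WriteLine(observed >> 20); // prints 256 (MB)
    }
}
```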
The underlying storage for a List<T> class is a T[] array. A hard requirement for an array is that the process must be able to allocate a contiguous chunk of memory to store the array.
That's a problem in a 32-bit process. Virtual memory is used for code and data, you allocate from the holes that are left between them. And while a 32-bit process will have 2 gigabytes of memory, you'll never get anywhere near a hole that's close to that size. The biggest hole in the address space you can get, right after you started the program, is around 500 or 600 megabytes. Give or take, it depends a lot on what DLLs get loaded into the process. Not just the CLR, the jitter and the native images of the framework assemblies but also the kind that have nothing to do with managed code. Like anti-malware and the raft of "helpful" utilities that worm themselves into every process like Dropbox and shell extensions. A poorly based one can cut a nice big hole in two small ones.
These holes will also get smaller as the program allocates and releases memory over time - a general problem called address space fragmentation. A long-running process can fail on a 90 MB allocation even though there is lots of unused memory lying around.
You can use SysInternals' VMMap utility to get more insight. A copy of Russinovich's book Windows Internals is typically necessary as well to make sense of what you see.
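One detail that helps explain the exact 67,108,864 figure: List&lt;T&gt; grows its backing array by doubling, so once the list holds 67,108,864 ints (a 256 MB array), the next Add must allocate a new 134,217,728-element array - 512 MB of contiguous space - while the old 256 MB array is still referenced. A small sketch that makes the doubling visible (the growth factor is an implementation detail of List&lt;T&gt;, not a documented guarantee):

```csharp
using System;
using System.Collections.Generic;

class CapacityGrowth
{
    static void Main()
    {
        var list = new List<int>();
        int lastCapacity = -1;
        for (int i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // Capacity jumps 4, 8, 16, 32, 64, 128: each grow allocates
                // a fresh array twice the size and copies the old one over.
                Console.WriteLine(list.Capacity);
                lastCapacity = list.Capacity;
            }
        }
    }
}
```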
This could maybe also help: I was able to replicate the 67108864 limit by creating a test project with the provided code.
In console, WinForms and WPF projects I got the 134217728 limit;
in ASP.NET I got a 33554432 limit.
So the [TestMethod] you mentioned in one of your comments seems to be the issue.
While you can have Int32.MaxValue items in theory, in practice you will run out of memory before then.
Running as x86, the most RAM you can use - even on an x64 box - is 4GB, and more likely 2GB or 3GB is the max on an x86 version of Windows.
The usable amount is most likely much smaller still, since you can only give the array the biggest contiguous block available.

Struct vs class memory overhead

I'm writing an app that will create thousands of small objects and store them recursively in arrays. By "recursively" I mean that each instance of K has an array of K instances, which each have an array of K instances, and so on; this array plus one int field are the only properties, plus some methods. I found that memory usage grows very fast even for a small amount of data (about 1MB), and when the data I'm processing is about 10MB I get an OutOfMemoryException, not to mention when it's bigger (I have 4GB of RAM) :). So what do you suggest I do? I figured that if I created a separate class V to process those objects, so that instances of K would have only the array of K's plus one integer field, and made K a struct rather than a class, it should optimize things a bit - no garbage collection and stuff... But it's a bit of a challenge, so I'd rather ask whether it's a good idea before I start a total rewrite :).
EDIT:
Ok, some abstract code
public void Add(string word) {
    int i;
    string shorterWord;
    if (word.Length > 0) {
        i = //something, it's really irrelevant
        if (t[i] == null) {
            t[i] = new MyClass();
        }
        shorterWord = word.Substring(1);
        //end of word
        if (shorterWord.Length == 0) {
            t[i].WordEnd = END;
        }
        //saving the word letter by letter
        t[i].Add(shorterWord);
    }
}
When researching this more deeply I worked from the following assumptions (they may be inexact; I'm getting old for a programmer). A class has extra memory consumption because a reference is required to address it: storing the reference needs an Int32-sized pointer in a 32-bit build, and the instance is always allocated on the heap (I can't remember if C++ has other possibilities; I would venture yes).
The short answer, found in this article, is that an object has a 12-byte basic footprint plus 4 possibly unused bytes depending on your class (which no doubt has something to do with padding):
http://www.codeproject.com/Articles/231120/Reducing-memory-footprint-and-object-instance-size
Another issue you'll run into is that arrays also have overhead. One possibility is to manage your own offsets into one larger array (or a few of them) - which, in turn, gets closer to something a more memory-efficient language would be better suited for.
I'm not sure if there are libraries that provide storage for many small objects in an efficient manner; probably there are.
My take on it: use structs, manage your own offsets in a large array, and use proper packing instructions if it serves you (although I suspect this comes at a runtime cost of a few extra instructions each time you address unevenly packed data):
[StructLayout(LayoutKind.Sequential, Pack = 1)]
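The structs-in-one-array idea can be sketched as follows. `Node` and `NodePool` are hypothetical names invented for this sketch: nodes live as structs inside one big array and link to each other by int index instead of by object reference, so a million nodes cost one allocation with no per-object headers.

```csharp
using System;

// A node is a plain struct: no object header, no reference to chase.
struct Node
{
    public int FirstChild;  // index into the pool, -1 = none
    public int NextSibling; // index into the pool, -1 = none
    public int WordEnd;
}

class NodePool
{
    // One allocation holds every node; "pointers" are just indexes.
    public readonly Node[] Nodes;
    private int count;

    public NodePool(int capacity)
    {
        Nodes = new Node[capacity];
    }

    // Hand out the next free slot and initialize its links.
    public int Allocate()
    {
        Nodes[count].FirstChild = -1;
        Nodes[count].NextSibling = -1;
        return count++;
    }
}
```

A pool of a million nodes here is a single ~12MB array, instead of a million tiny heap objects each paying the 12-byte object footprint mentioned above.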
Your stack is blowing up: do it iteratively instead of recursively.
You're not blowing up the system stack, you're blowing up the call stack - 10K nested function calls will blow it out of the water.
You need proper tail recursion, which is just an iterative hack.
Make sure you have enough memory in your system - over 100MB+ etc.; it really depends on your setup. Linked lists of recursive objects are what you are looking at. If you keep recursing, you will hit the memory limit and an OutOfMemoryException will be thrown. Keep track of the memory usage in any program; nothing is unlimited, especially memory. If memory is limited, save to disk.
It looks like there is infinite recursion in your code, and that is why out-of-memory is thrown. Check the code: recursive code needs a start and a termination condition, otherwise it will try to use over 10 terabytes of memory at some point.
You can use a better data structure,
i.e. each letter can be a byte (a-0, b-1, ...), and each word fragment - especially substrings - can be indexed too; you should get away with significantly less memory (though with a performance penalty).
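A sketch of the suggested letter encoding (`Encode`/`Decode` are hypothetical helper names, and lowercase a-z input is assumed): one byte per letter instead of a two-byte char per letter.

```csharp
using System;

class LetterCodec
{
    // 'a' -> 0, 'b' -> 1, ... 'z' -> 25: half the memory of a char[].
    public static byte[] Encode(string word)
    {
        var bytes = new byte[word.Length];
        for (int i = 0; i < word.Length; i++)
            bytes[i] = (byte)(word[i] - 'a');
        return bytes;
    }

    public static string Decode(byte[] bytes)
    {
        var chars = new char[bytes.Length];
        for (int i = 0; i < bytes.Length; i++)
            chars[i] = (char)('a' + bytes[i]);
        return new string(chars);
    }
}
```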
Just list your recursive algorithm and sanitize the variable names. If you are doing a BFS-type traversal and keep all objects in memory, you will run out of memory; in that case, replace it with DFS, for example.
Edit 1:
You can speed up the algorithm by estimating how many items you will generate and allocating that much memory at once, filling it as the algorithm progresses. This reduces fragmentation, reallocation and copy-on-full-array operations.
Nonetheless, after you are done operating on these generated words you should remove them from your data structure so they can be GC-ed and you don't run out of memory.

Microsoft Visual C# 2008 Reducing number of loaded dlls

How can I reduce the number of loaded dlls When debugging in Visual C# 2008 Express Edition?
When running a Visual C# project in the debugger I get an OutOfMemoryException due to fragmentation of the 2GB virtual address space, and we assume that the loaded DLLs might be the reason for the fragmentation.
Brian Rasmussen, you made my day! :)
His proposal of "disabling the visual studio hosting process" solved the problem.
(for more information see history of question-development below)
Hi,
I need two big int arrays with ~120 million elements (~470MB) each to be loaded in memory, both in one Visual C# project.
When I try to instantiate the 2nd array I get an OutOfMemoryException.
I do have enough total free memory, and after doing a web search I thought my problem was that there aren't big enough contiguous free memory blocks on my system.
BUT! - when I instantiate only one of the arrays in one Visual C# instance and then open another Visual C# instance, the 2nd instance can instantiate its own array of 470MB.
(Edit for clarification: in the paragraphs above I meant running it in the debugger of Visual C#)
And the task manager shows the corresponding memory usage increase just as you would expect.
So a lack of contiguous memory blocks on the whole system isn't the problem. I then tried running a compiled executable that instantiates both arrays, which also works (memory usage 1GB).
Summary:
OutOfMemoryException in Visual C# using two big int arrays, although running the compiled exe works (memory usage 1GB), and two separate Visual C# instances can each find a big enough contiguous memory block for one array - but I need one Visual C# instance to provide the memory for both.
Update:
First of all, special thanks to nobugz and Brian Rasmussen; I think they are spot on with their prediction that the fragmentation of the process's 2GB virtual address space is the problem.
Following their suggestions I used VMMap and listdlls for my short amateur analysis, and I get:
* 21 DLLs listed for the "standalone" exe (the one that works and uses 1GB of memory)
* 58 DLLs listed for the vshost.exe version (the one run when debugging, which throws the exception and only uses 500MB)
VMMap showed me the biggest free memory blocks for the debugger version to be 262, 175, 167, 155 and 108 MB.
So VMMap says there is no contiguous 500MB block, and following the info about the free blocks I added ~9 smaller int arrays, which added up to more than 1.2GB of memory usage and actually did work.
So from that I would say we can call "fragmentation of the 2GB virtual address space" guilty.
From the listdlls output I created a small spreadsheet (hex numbers converted to decimal) to check the free areas between DLLs, and I did find big free spaces for the standalone version between its (21) DLLs, but not for the vshost debugger version (58 DLLs). I'm not claiming there can't be anything else in between, and I'm not really sure what I'm doing there makes sense, but it seems consistent with VMMap's analysis, and it looks as if the DLLs alone already fragment the memory for the debugger version.
So perhaps a solution would be to reduce the number of DLLs loaded by the debugger.
1. Is that possible?
2. If yes, how would I do that?
You are battling virtual memory address space fragmentation. A process on the 32-bit version of Windows has 2 gigabytes of memory available. That memory is shared by code as well as data. Chunks of code are the CLR and the JIT compiler as well as the ngen-ed framework assemblies. Chunks of data are the various heaps used by .NET, including the loader heap (static variables) and the garbage collected heaps. These chunks are located at various addresses in the memory map. The free memory is available for you to allocate your arrays.
Problem is, a large array requires a contiguous chunk of memory. The "holes" in the address space, between chunks of code and data, are not large enough to allow you to allocate such large arrays. The first hole is typically between 450 and 550 Megabytes, that's why your first array allocation succeeded. The next available hole is a lot smaller. Too small to fit another big array, you'll get OOM even though you've got an easy gigabyte of free memory left.
You can look at the virtual memory layout of your process with the SysInternals' VMMap utility. Okay for diagnostics, but it isn't going to solve your problem. There's only one real fix, moving to a 64-bit version of Windows. Perhaps better: rethink your algorithm so it doesn't require such large arrays.
3rd update: You can reduce the number of loaded DLLs significantly by disabling the Visual Studio hosting process (project properties, debug). Doing so will still allow you to debug the application, but it will get rid of a lot of DLLs and a number of helper threads as well.
On a small test project the number of loaded DLLs went from 69 to 34 when I disabled the hosting process. I also got rid of 10+ threads. All in all a significant reduction in memory usage which should also help reduce heap fragmentation.
Additional info on the hosting process: http://msdn.microsoft.com/en-us/library/ms242202.aspx
The reason you can load the second array in a new application is that each process gets a full 2 GB virtual address space. I.e. the OS will swap pages to allow each process to address the total amount of memory. When you try to allocate both arrays in one process the runtime must be able to allocate two contiguous chunks of the desired size. What are you storing in the array? If you store objects, you need additional space for each of the objects.
Remember an application doesn't actually request physical memory. Instead each application is given an address space from which they can allocate virtual memory. The OS then maps the virtual memory to physical memory. It is a rather complex process (Russinovich spends 100+ pages on how Windows handle memory in his Windows Internal book). For more details on how Windows does this please see http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
Update: I've been pondering this question for a while and it does sound a bit odd. When you run the application through Visual Studio, you may see additional modules loaded depending on your configuration. On my setup I get a number of different DLLs loaded during debug due to profilers and TypeMock (which essentially does its magic via the profiler hooks).
Depending on the size and load address of these they may prevent the runtime from allocating contiguous memory. Having said that, I am still a bit surprised that you get an OOM after allocating just two of those big arrays as their combined size is less than 1 GB.
You can look at the loaded DLLs using the listdlls tools from SysInternals. It will show you load addresses and size. Alternatively, you can use WinDbg. The lm command shows loaded modules. If you want size as well, you need to specify the v option for verbose output. WinDbg will also allow you to examine the .NET heaps, which may help you to pinpoint why memory cannot be allocated.
2nd Update: If you're on Windows XP, you can try to rebase some of the loaded DLLs to free up more contiguous space. Vista and Windows 7 use ASLR, so I am not sure you'll benefit from rebasing on those platforms.
This isn't an answer per se, but perhaps an alternative might work.
If the problem is indeed that you have fragmented memory, then perhaps one workaround would be to just use those holes, instead of trying to find a hole big enough for everything consecutively.
Here's a very simple BigArray class that doesn't add too much overhead (some overhead is introduced, especially in the constructor, in order to initialize the buckets).
The statistics for the array is:
Main executes in 404ms
static Program-constructor doesn't show up
The statistics for the class is:
Main took 473ms
static Program-constructor takes 837ms (initializing the buckets)
The class allocates a bunch of 8192-element arrays (13-bit indexes), which for reference types on 64-bit will fall just below the LOH (large object heap) limit. If you're only going to use this for Int32, you can probably up this to 14, and probably even make it non-generic, although I doubt either will improve performance much.
In the other direction, if you're afraid you're going to have a lot of holes smaller than the 8192-element arrays (64KB on 64-bit or 32KB on 32-bit), you can just reduce the bit-size of the bucket indexes through its constant. This adds more overhead in the constructor and more memory overhead, since the outermost array will be bigger, but performance should not be affected.
Here's the code:
using System;
using NUnit.Framework;

namespace ConsoleApplication5
{
    class Program
    {
        // static int[] a = new int[100 * 1024 * 1024];
        static BigArray<int> a = new BigArray<int>(100 * 1024 * 1024);

        static void Main(string[] args)
        {
            int l = a.Length;
            for (int index = 0; index < l; index++)
                a[index] = index;
            for (int index = 0; index < l; index++)
                if (a[index] != index)
                    throw new InvalidOperationException();
        }
    }

    [TestFixture]
    public class BigArrayTests
    {
        [Test]
        public void Constructor_ZeroLength_ThrowsArgumentOutOfRangeException()
        {
            Assert.Throws<ArgumentOutOfRangeException>(() =>
            {
                new BigArray<int>(0);
            });
        }

        [Test]
        public void Constructor_NegativeLength_ThrowsArgumentOutOfRangeException()
        {
            Assert.Throws<ArgumentOutOfRangeException>(() =>
            {
                new BigArray<int>(-1);
            });
        }

        [Test]
        public void Indexer_SetsAndRetrievesCorrectValues()
        {
            BigArray<int> array = new BigArray<int>(10001);
            for (int index = 0; index < array.Length; index++)
                array[index] = index;
            for (int index = 0; index < array.Length; index++)
                Assert.That(array[index], Is.EqualTo(index));
        }

        private const int PRIME_ARRAY_SIZE = 10007;

        [Test]
        public void Indexer_SetElementJustPastEnd_ThrowsIndexOutOfRangeException()
        {
            BigArray<int> array = new BigArray<int>(PRIME_ARRAY_SIZE);
            Assert.Throws<IndexOutOfRangeException>(() =>
            {
                array[PRIME_ARRAY_SIZE] = 0;
            });
        }

        [Test]
        public void Indexer_SetElementJustBeforeStart_ThrowsIndexOutOfRangeException()
        {
            BigArray<int> array = new BigArray<int>(PRIME_ARRAY_SIZE);
            Assert.Throws<IndexOutOfRangeException>(() =>
            {
                array[-1] = 0;
            });
        }

        [Test]
        public void Constructor_BoundarySizes_ProducesCorrectlySizedArrays()
        {
            for (int index = 1; index < 16384; index++)
            {
                BigArray<int> arr = new BigArray<int>(index);
                Assert.That(arr.Length, Is.EqualTo(index));
                arr[index - 1] = 42;
                Assert.That(arr[index - 1], Is.EqualTo(42));
                Assert.Throws<IndexOutOfRangeException>(() =>
                {
                    arr[index] = 42;
                });
            }
        }
    }

    public class BigArray<T>
    {
        // 2^13 = 8192 elements per bucket; lower this constant for
        // smaller buckets at the cost of a larger outer array.
        const int BUCKET_INDEX_BITS = 13;
        const int BUCKET_SIZE = 1 << BUCKET_INDEX_BITS;
        const int BUCKET_INDEX_MASK = BUCKET_SIZE - 1;

        private readonly T[][] _Buckets;
        private readonly int _Length;

        public BigArray(int length)
        {
            if (length < 1)
                throw new ArgumentOutOfRangeException("length");

            _Length = length;
            int bucketCount = length >> BUCKET_INDEX_BITS;
            bool lastBucketIsFull = true;
            if ((length & BUCKET_INDEX_MASK) != 0)
            {
                bucketCount++;
                lastBucketIsFull = false;
            }

            _Buckets = new T[bucketCount][];
            for (int index = 0; index < bucketCount; index++)
            {
                if (index < bucketCount - 1 || lastBucketIsFull)
                    _Buckets[index] = new T[BUCKET_SIZE];
                else
                    _Buckets[index] = new T[length & BUCKET_INDEX_MASK];
            }
        }

        public int Length
        {
            get { return _Length; }
        }

        public T this[int index]
        {
            get { return _Buckets[index >> BUCKET_INDEX_BITS][index & BUCKET_INDEX_MASK]; }
            set { _Buckets[index >> BUCKET_INDEX_BITS][index & BUCKET_INDEX_MASK] = value; }
        }
    }
}
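Since the question is about square matrices rather than flat arrays, here is a minimal 2-D sketch along the same lines. Matrix2D below is my own illustrative wrapper, not part of the answer's code: it backs the matrix with a jagged array, so each row is a separate CLR object and no single allocation has to hold the whole matrix.

```csharp
using System;

// Hypothetical 2-D wrapper: one small array per row instead of one
// huge contiguous block, so no single object approaches the 2GB limit.
public class Matrix2D<T>
{
    private readonly T[][] _rows;

    public Matrix2D(int rows, int columns)
    {
        if (rows < 1 || columns < 1)
            throw new ArgumentOutOfRangeException(rows < 1 ? "rows" : "columns");
        _rows = new T[rows][];
        for (int i = 0; i < rows; i++)
            _rows[i] = new T[columns];   // e.g. one ~40KB object per row for ushort[20000]
    }

    public T this[int row, int col]
    {
        get { return _rows[row][col]; }
        set { _rows[row][col] = value; }
    }
}
```

Note that new Matrix2D<ushort>(20000, 20000) still needs roughly 800MB of memory in total, so a 64-bit process is effectively required; the point is only that no single 800MB allocation is ever requested.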
I had a similar issue once, and what I ended up doing was using a list instead of an array. When creating the lists I set the capacity to the required sizes, and I defined both lists BEFORE adding any values to them. I'm not sure whether you can use lists instead of arrays in your case, but it might be something to consider. In the end I still had to run the executable on a 64-bit OS, because adding the items to the list pushed overall memory usage above 2GB, but at least I was able to run and debug locally with a reduced set of data.
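For what "set the capacity before adding" looks like in practice, here is a small sketch (the sizes are purely illustrative): the capacity constructor allocates the list's backing array once, so subsequent Add calls never trigger a grow-and-copy of a large backing array.

```csharp
using System.Collections.Generic;

// Pre-size the list so its backing array is allocated exactly once.
int size = 1000;                              // illustrative; the question uses 20000
var matrix = new List<ushort[]>(size);        // capacity set up front
for (int i = 0; i < size; i++)
    matrix.Add(new ushort[size]);             // rows added without any reallocation
```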
A question: Are all elements of your array occupied? If many of them contain some default value then maybe you could reduce memory consumption using an implementation of a sparse array that only allocates memory for the non-default values. Just a thought.
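For the word-relation use case a sparse matrix may fit well, since most word pairs never co-occur. Below is one hedged sketch of that idea (SparseMatrix is an illustrative name, not an existing class): it stores only non-zero cells in a dictionary, packing the (row, col) pair into a single long key.

```csharp
using System.Collections.Generic;

// Illustrative sparse matrix: memory use is proportional to the number
// of non-zero cells, not to rows * columns.
public class SparseMatrix
{
    private readonly Dictionary<long, ushort> _cells = new Dictionary<long, ushort>();

    // Pack row and column into one 64-bit key.
    private static long Key(int row, int col)
    {
        return ((long)row << 32) | (uint)col;
    }

    public ushort this[int row, int col]
    {
        get
        {
            ushort value;
            return _cells.TryGetValue(Key(row, col), out value) ? value : (ushort)0;
        }
        set
        {
            if (value == 0)
                _cells.Remove(Key(row, col));   // don't store the default value
            else
                _cells[Key(row, col)] = value;
        }
    }
}
```

The dictionary costs noticeably more per stored cell than a flat array would, so this only pays off when the matrix really is mostly zeros.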
Each 32-bit process has a 2GB address space (unless you ask the user to add /3GB to the boot options), so if you can accept some performance loss, you can start a second process to gain almost another 2GB of address space. The new process would still be fragmented by the CLR DLLs plus all the Win32 DLLs they use, so you can avoid the fragmentation caused by the CLR entirely by writing the helper process in a native language such as C++. You can even move some of your calculation into that process, which both frees address space in your main app and reduces the traffic between the two processes.
You can communicate between your processes using any of the interprocess communication methods. You can find many IPC samples in the All-In-One Code Framework.
I have experience with two desktop applications and one mobile application hitting out-of-memory limits, so I understand the issues. I do not know your requirements, but I suggest moving your lookup arrays into SQL CE. Performance is good, you will be surprised, and SQL CE runs in-process. With the last desktop application, I was able to reduce my memory footprint from 2.1GB to 720MB, which also sped the application up by significantly reducing page faults. (Your problem is fragmentation of the AppDomain's memory, which you have no control over.)
Honestly, I do not think you will be satisfied with performance after squeezing these arrays into memory. Don't forget, excessive page faults have a significant impact on performance.
If you do go the SQL CE route, make sure to keep the connection open to improve performance. Also, single-row (scalar) lookups may be slower than returning a result set.
If you really want to know what is going on with memory, use CLR Profiler; VMMap is not going to help. The OS does not allocate memory directly to your application: the Framework grabs large chunks of OS memory for itself (caching it) and then hands pieces of that memory to the application as needed.
CLR Profiler for the .NET Framework 2.0 at
https://github.com/MicrosoftArchive/clrprofiler