I have written the following function:
public void TestSB()
{
    string str = "The quick brown fox jumps over the lazy dog.";
    StringBuilder sb = new StringBuilder();
    int j = 0;
    int len = 0;
    try
    {
        for (int i = 0; i < (10000000 * 2); i++)
        {
            j = i;
            len = sb.Length;
            sb.Append(str);
        }
        Console.WriteLine("Success ::" + sb.Length.ToString());
    }
    catch (Exception ex)
    {
        Console.WriteLine(
            ex.Message + " :: " + j.ToString() + " :: " + len.ToString());
    }
}
Now I suppose that StringBuilder has the capacity to hold over 2 billion characters (2,147,483,647 to be precise).
But when I ran the above function, it threw a System.OutOfMemoryException on reaching a length of only about 800 million.
Moreover, I am seeing widely different results on different PCs with the same amount of memory and a similar load.
Can anyone please explain the reason for this?
Each character requires 2 bytes (as a char in .NET is a UTF-16 code unit). So by the time you've reached 800 million characters, that's 1.6GB of contiguous memory required¹. Now when the StringBuilder needs to resize itself, it has to create another array of the new size (which I believe tries to double the capacity) - which means trying to allocate a 3.2GB array.
I believe that the CLR (even on 64-bit systems) can't allocate a single object of more than 2GB in size. (That certainly used to be the case.) My guess is that your StringBuilder is trying to double in size, and blowing that limit. You may be able to get a little higher by constructing the StringBuilder with a specific capacity - a capacity of around a billion may be feasible.
In the normal course of things this isn't a problem, of course - even strings requiring hundreds of megs are rare.
¹ I believe the implementation of StringBuilder actually changed in .NET 4 to use fragments in some situations - but I don't know the details. So it may not always need contiguous memory while still in builder form... but it would if you ever called ToString.
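To illustrate the pre-sizing suggestion: constructing the StringBuilder with an explicit capacity means no intermediate doubling happens while you stay within it. The 1,000,000-character capacity below is purely illustrative; the mechanism, not the number, is the point.

```csharp
using System;
using System.Text;

class PresizeDemo
{
    static void Main()
    {
        // Pre-sizing avoids repeated doubling; the capacity here is illustrative.
        var sb = new StringBuilder(capacity: 1_000_000);
        string str = "The quick brown fox jumps over the lazy dog.";

        // As long as we stay within the initial capacity,
        // no larger backing array ever needs to be allocated.
        while (sb.Length + str.Length <= sb.Capacity)
            sb.Append(str);

        Console.WriteLine($"Length={sb.Length}, Capacity={sb.Capacity}");
    }
}
```

The same idea scales up: a capacity of around a billion sidesteps the doubling step that would otherwise blow past the per-object limit.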
This code doesn't do anything practical; I was just seeing what would happen.
As far as I can tell, the only two variables that are preserved are the (eventually) massive string, and a negligibly sized int tracking the string's length.
On my machine, the string gets to be about 0.75GB, at which point the OutOfMemoryException occurs. At this stage Visual Studio is showing about 5GB of usage, so I'm wondering why there is a disparity.
var initialText = "Test content";
var text = initialText;
var length = text.Length;
while (true)
{
    try
    {
        var currentLength = text.Length;
        Console.WriteLine($"Current Length - {currentLength}");
        Console.WriteLine($"Current Size in GB - {System.Text.Encoding.UTF8.GetByteCount(text) / 1024.0 / 1024.0 / 1024.0}");
        text = Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(text));
        Console.WriteLine($"Change In Size - {currentLength / (length + 0.0)}");
        length = currentLength;
    }
    catch (OutOfMemoryException)
    {
        break;
    }
}
As a second question: when I begin to run the code, my machine has about 11GB free according to Task Manager, and by the time it hits the exception, usage has gone up by about 3GB, which doesn't tally with the numbers above. Any ideas?
First, a string in .NET is a sequence of UTF-16 code units, so each char takes 2 bytes. To get the size of a string in memory, in bytes, you need to multiply its length by 2 (ignoring the CLR instance header).
Console.WriteLine($"Current Size in GB - {text.Length * 2.0 / 1024 / 1024 / 1024}");
Another limitation is the array size in .NET; read the remarks here, as @TheGenral noted. There are two limits you can hit: the max size (2GB) and the max index.
Below is a modified version of your test:
var text = "Test content";
long length = text.Length;
try
{
    while (true)
    {
        var currentLength = text.Length;
        Console.WriteLine($"Current Length - {currentLength}");
        Console.WriteLine($"Current Size in GB - {text.Length * 2.0 / 1024 / 1024 / 1024}");
        text += new string('a', 500 * 1024 * 1024);
        length = currentLength;
        GC.Collect();
    }
}
catch (OutOfMemoryException e)
{
    Console.WriteLine(e);
}
The StringBuilder version differs only in these lines:
var text = new StringBuilder("Test content");
...
text.Append('a', 500 * 1024 * 1024);
If you don't enable gcAllowVeryLargeObjects, then you'll get an OOM at 1 billion elements.
I was not able to reach 2 billion elements using string concatenation, but if you rework this test using StringBuilder, then you can reach 2 billion chars. In this case you'll hit the second limitation: arrays cannot hold more than 2 billion elements. Here is a discussion about the upper limit.
In this thread the max string length is discussed.
If you run this code in Release mode, you'll see process memory consumption almost equal to the string size in the console output.
Another interesting thing that I noticed and cannot explain is that with StringBuilder, gcAllowVeryLargeObjects and Debug mode I'm able to reach 4GB, but in Release mode it only nearly hits 3GB. Comments are welcome on why that happens :)
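For reference, gcAllowVeryLargeObjects is a runtime setting in the application config file. This is the documented shape of the element; note that even with it enabled, a single array dimension is still limited to roughly 2^31 elements.

```xml
<configuration>
  <runtime>
    <!-- Allows objects (e.g. arrays) larger than 2GB on 64-bit platforms. -->
    <gcAllowVeryLargeObjects enabled="true" />
  </runtime>
</configuration>
```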
I have a byte array and I want to calculate what the file size would be if I wrote these bytes to a file. Is this possible without writing the file to disk?
What about array.Length? That looks like the size in bytes.
Um, yes:
int length = byteArray.Length;
A byte in memory would be a byte on disk... at least in higher level file system terms. You also need to potentially consider how many individual blocks/clusters would be used (and overhead for a directory entry), and any compression that the operating system may supply, but it's not clear from the question whether that's what you're after.
If you really do want to know the "size on disk" as opposed to the file size (in the same way that Windows can show the two numbers) I suspect you'd genuinely have to write it to disk - and then use a Win32 API to find out the actual size on disk.
Array.Length gives you the total size expressed in bytes.
The physical size on disk may be a little larger, depending on the cluster size.
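The cluster-size point can be sketched as a rounding calculation. The 4096-byte cluster size below is an assumption (a common NTFS default); the real value comes from the volume itself, e.g. via the GetDiskFreeSpace Win32 API.

```csharp
using System;

class SizeOnDisk
{
    // Rounds a logical file size up to a whole number of clusters.
    // clusterSize is an assumption here; query the volume for the real value.
    static long RoundUpToCluster(long fileSize, long clusterSize = 4096)
    {
        if (fileSize == 0) return 0;
        return ((fileSize + clusterSize - 1) / clusterSize) * clusterSize;
    }

    static void Main()
    {
        byte[] data = new byte[5000];
        Console.WriteLine(data.Length);                   // logical size: 5000
        Console.WriteLine(RoundUpToCluster(data.Length)); // on-disk estimate: 8192
    }
}
```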
Some time ago I found this snippet, and since then I like to do this:
public static string GetSizeInMemory(this long bytesize)
{
    string[] sizes = { "B", "KB", "MB", "GB", "TB" };
    double len = Convert.ToDouble(bytesize);
    int order = 0;
    while (len >= 1024D && order < sizes.Length - 1)
    {
        order++;
        len /= 1024;
    }
    return string.Format(CultureInfo.CurrentCulture, "{0:0.##} {1}", len, sizes[order]);
}
You can use this extension method on anything that provides a file or memory size, like a FileInfo or a Process. Below are two samples:
private void ValidateResources(object _)
{
    Process p = Process.GetCurrentProcess();
    double now = p.TotalProcessorTime.TotalMilliseconds;
    double cpuUsage = (now - processorTime) / MONITOR_INTERVAL;
    processorTime = now;

    ThreadPool.GetMaxThreads(out int maxThreads, out int _);
    ThreadPool.GetAvailableThreads(out int availThreads, out int _);
    int cpuQueue = maxThreads - availThreads;

    var displayString = string.Concat(
        p.WorkingSet64.GetSizeInMemory(), "/"
        , p.PeakWorkingSet64.GetSizeInMemory(), " RAM, "
        , p.Threads.Count, " threads, ", p.HandleCount.ToString("N0", CultureInfo.CurrentCulture)
        , " handles, ", Math.Min(cpuUsage, 1).ToString("P2", CultureInfo.CurrentCulture)
        , " CPU usage, ", cpuQueue, " CPU queue depth");
}
or with a file
FileInfo file = new FileInfo(pathtoFile);
string displaySize = file.Length.GetSizeInMemory();
(BTW: this refers to a 32-bit OS.)
SOME UPDATES:
This is definitely an alignment issue
Sometimes the alignment (for whatever reason?) is so bad that access to the double is more than 50x slower than its fastest access.
Running the code on a 64-bit machine reduces the issue, but I think it was still alternating between two timings (and I could get similar results by changing the double to a float on a 32-bit machine)
Running the code under Mono exhibits no issue -- Microsoft, any chance you can copy something from those Novell guys???
Is there a way to memory align the allocation of classes in c#?
The following demonstrates (I think!) the badness of not having doubles aligned correctly. It does some simple math on a double stored in a class, timing each run, running 5 timed runs on the variable before allocating a new one and doing it over again.
Basically, the results look like you either have a fast, medium or slow memory position (on my ancient processor, these end up at around 40, 80 or 120ms per run)
I have tried playing with StructLayoutAttribute, but have had no joy - maybe something else is going on?
class Sample
{
    class Variable { public double Value; }

    static void Main()
    {
        const int COUNT = 10000000;
        while (true)
        {
            var x = new Variable();
            for (int inner = 0; inner < 5; ++inner)
            {
                // move allocation here to allocate more often, making the 50x slowdown more likely
                var stopwatch = Stopwatch.StartNew();
                var total = 0.0;
                for (int i = 1; i <= COUNT; ++i)
                {
                    x.Value = i;
                    total += x.Value;
                }
                if (Math.Abs(total - 50000005000000.0) > 1)
                    throw new ApplicationException(total.ToString());
                Console.Write("{0}, ", stopwatch.ElapsedMilliseconds);
            }
            Console.WriteLine();
        }
    }
}
So I see lots of web pages about alignment of structs for interop, so what about alignment of classes?
(Or are my assumptions wrong, and there is another issue with the above?)
Thanks,
Paul.
An interesting look into the gears that run the machine. I have a bit of a problem explaining why there are multiple distinct values (I got 4) when a double can be aligned in only two ways. I think alignment to the CPU cache line plays a role as well, although that only adds up to 3 possible timings.
Well, there's nothing you can do about it; the CLR only promises alignment for 4-byte values, so that atomic updates on 32-bit machines are guaranteed. This is not just an issue with managed code; C/C++ has this problem too. It looks like the chip makers need to solve this one.
If it is critical, then you could allocate unmanaged memory with Marshal.AllocCoTaskMem() and use an unsafe pointer that you can align just right. It's the same kind of thing you'd have to do if you allocate memory for code that uses SIMD instructions; they require a 16-byte alignment. Consider it a desperation move though.
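A sketch of that approach: over-allocate with Marshal.AllocCoTaskMem, then round the raw pointer up to the desired boundary yourself. The 16-byte alignment below is the SIMD example from the paragraph above; compile with /unsafe.

```csharp
using System;
using System.Runtime.InteropServices;

class AlignedAlloc
{
    static unsafe void Main()
    {
        // Over-allocate, then round the pointer up to a 16-byte boundary.
        const int alignment = 16;
        IntPtr raw = Marshal.AllocCoTaskMem(sizeof(double) + alignment - 1);
        try
        {
            long aligned = (raw.ToInt64() + alignment - 1) & ~(long)(alignment - 1);
            double* value = (double*)aligned;
            *value = 42.0;
            Console.WriteLine($"aligned={aligned % alignment == 0}, value={*value}");
        }
        finally
        {
            // Free the original pointer, not the aligned one.
            Marshal.FreeCoTaskMem(raw);
        }
    }
}
```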
To prove the concept of misalignment of objects on the heap in .NET, you can run the following code and you'll see that it now always runs fast. Please don't shoot me, it's just a PoC, but if you are really concerned about performance you might consider using it ;)
public static class AlignedNew
{
    public static T New<T>() where T : new()
    {
        LinkedList<T> candidates = new LinkedList<T>();
        IntPtr pointer = IntPtr.Zero;
        bool continue_ = true;
        int size = Marshal.SizeOf(typeof(T)) % 8;
        while (continue_)
        {
            if (size == 0)
            {
                object gap = new object();
            }
            candidates.AddLast(new T());
            GCHandle handle = GCHandle.Alloc(candidates.Last.Value, GCHandleType.Pinned);
            pointer = handle.AddrOfPinnedObject();
            continue_ = (pointer.ToInt64() % 8) != 0 || (pointer.ToInt64() % 64) == 24;
            handle.Free();
            if (!continue_)
                return candidates.Last.Value;
        }
        return default(T);
    }
}
class Program
{
    [StructLayoutAttribute(LayoutKind.Sequential)]
    public class Variable
    {
        public double Value;
    }

    static void Main()
    {
        const int COUNT = 10000000;
        while (true)
        {
            var x = AlignedNew.New<Variable>();
            for (int inner = 0; inner < 5; ++inner)
            {
                var stopwatch = Stopwatch.StartNew();
                var total = 0.0;
                for (int i = 1; i <= COUNT; ++i)
                {
                    x.Value = i;
                    total += x.Value;
                }
                if (Math.Abs(total - 50000005000000.0) > 1)
                    throw new ApplicationException(total.ToString());
                Console.Write("{0}, ", stopwatch.ElapsedMilliseconds);
            }
            Console.WriteLine();
        }
    }
}
Maybe the StructLayoutAttribute is what you are looking for?
Using a struct instead of a class makes the time constant. Also consider using StructLayoutAttribute; it helps to specify the exact memory layout of a structure. For classes, I do not think you have any guarantees about how they are laid out in memory.
It will be correctly aligned; otherwise you'd get alignment exceptions on x64. I don't know what your snippet shows, but I wouldn't conclude anything about alignment from it.
You don't have any control over how .NET lays out your class in memory.
As others have said the StructLayoutAttribute can be used to force a specific memory layout for a struct BUT note that the purpose of this is for C/C++ interop, not for trying to fine-tune the performance of your .NET app.
If you're worried about memory alignment issues then C# is probably the wrong choice of language.
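To make the interop point concrete, here is a sketch of what StructLayoutAttribute is actually for: pinning down field offsets so a managed struct matches a native definition. The struct itself is hypothetical, invented for illustration.

```csharp
using System;
using System.Runtime.InteropServices;

// Explicit layout exists to mirror a native struct definition,
// e.g. when P/Invoking into C code; field offsets are fixed by the attribute.
[StructLayout(LayoutKind.Explicit)]
struct NativePoint
{
    [FieldOffset(0)] public int X;
    [FieldOffset(4)] public int Y;
    [FieldOffset(8)] public double Weight; // 8-byte offset by construction
}

class LayoutDemo
{
    static void Main()
    {
        // Marshalled size: 4 + 4 + 8 = 16 bytes.
        Console.WriteLine(Marshal.SizeOf(typeof(NativePoint)));
    }
}
```

Note that this controls the *marshalled* layout; it is not a tool for tuning how the CLR places objects on the managed heap.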
EDIT - Broke out WinDbg and looked at the heap running the code above on 32-bit Vista and .NET 2.0.
Note: I don't get the variation in timings shown above.
0:003> !dumpheap -type Sample+Variable
Address MT Size
01dc2fec 003f3c48 16
01dc54a4 003f3c48 16
01dc58b0 003f3c48 16
01dc5cbc 003f3c48 16
01dc60c8 003f3c48 16
01dc64d4 003f3c48 16
01dc68e0 003f3c48 16
01dc6cd8 003f3c48 16
01dc70e4 003f3c48 16
01dc74f0 003f3c48 16
01dc78e4 003f3c48 16
01dc7cf0 003f3c48 16
01dc80fc 003f3c48 16
01dc8508 003f3c48 16
01dc8914 003f3c48 16
01dc8d20 003f3c48 16
01dc912c 003f3c48 16
01dc9538 003f3c48 16
total 18 objects
Statistics:
MT Count TotalSize Class Name
003f3c48 18 288 TestConsoleApplication.Sample+Variable
Total 18 objects
0:003> !do 01dc9538
Name: TestConsoleApplication.Sample+Variable
MethodTable: 003f3c48
EEClass: 003f15d0
Size: 16(0x10) bytes
(D:\testcode\TestConsoleApplication\bin\Debug\TestConsoleApplication.exe)
Fields:
MT Field Offset Type VT Attr Value Name
6f5746e4 4000001 4 System.Double 1 instance 1655149.000000 Value
This seems to me to show that the classes' allocation addresses are aligned, unless I'm reading this wrong?
So I've got an algorithm that reads from a (very large, ~155+ MB) binary file, parses it according to a spec and writes out the necessary info (to a CSV, flat text). It works flawlessly for the first 15.5 million lines of output, which produces a CSV file of ~0.99-1.03 GB. This gets through barely over 20% of the binary file. After that it breaks, in that suddenly the printed data is not at all what is shown in the binary file. I checked the binary file, and the same pattern continues (data split up into "packets" - see code below). Due to how it's handled, memory usage never really increases (steady ~15K).

The functional code is listed below. Is it my algorithm (if so, why would it break after 15.5 million lines?!), or are there other implications I'm not considering due to the large file sizes? Any ideas?
(fyi: each "packet" is 77 bytes in length, beginning with a 3byte "startcode" and ending with a 5byte "endcode" - you'll see the pattern below)
edit code has been updated based on the suggestions below... thanks!
private void readBin(string theFile)
{
    List<int> il = new List<int>();
    bool readyForProcessing = false;
    byte[] packet = new byte[77];
    try
    {
        FileStream fs_bin = new FileStream(theFile, FileMode.Open);
        BinaryReader br = new BinaryReader(fs_bin);
        while (br.BaseStream.Position < br.BaseStream.Length && working)
        {
            // Find the first startcode
            while (!readyForProcessing)
            {
                // If the last byte of the endcode is adjacent to the first byte of the startcode...
                // This never occurs outside of ending/starting, so it's safe
                if (br.ReadByte() == 0x0a && br.PeekChar() == (char)0x16)
                    readyForProcessing = true;
            }
            // Read a full packet of 77 bytes
            br.Read(packet, 0, packet.Length);
            // Unnecessary I guess now, but ensures the packet begins
            // with the startcode and ends with the endcode
            if (packet.Take(3).SequenceEqual(STARTCODE) &&
                packet.Skip(packet.Length - ENDCODE.Length).SequenceEqual(ENDCODE))
            {
                il.Add(BitConverter.ToUInt16(packet, 3)); // il.ElementAt(0) == 2-byte id
                il.Add(BitConverter.ToUInt16(packet, 5)); // il.ElementAt(1) == 2-byte semistable
                il.Add(packet[7]);                        // il.ElementAt(2) == 1-byte constant
                for (int i = 8; i < 72; i += 2)           // start at 8th byte, get 64 bytes
                    il.Add(BitConverter.ToUInt16(packet, i));
                for (int i = 3; i < 35; i++)
                {
                    sw.WriteLine(il.ElementAt(0) + "," + il.ElementAt(1) +
                        "," + il.ElementAt(2) + "," + il.ElementAt(i));
                }
                il.Clear();
            }
            else
            {
                // Handle "bad" packets
            }
        } // while
        fs_bin.Flush();
        br.Close();
        fs_bin.Close();
    }
    catch (Exception e)
    {
        MessageBox.Show(e.ToString());
    }
}
Your code is silently catching any exception that happens in the while loop and swallowing it.
This is a bad practice because it masks issues like the one you are running into.
Most likely, one of the methods you call inside the loop (int.Parse() for example) is throwing an exception because it encounters some problem in the format of the data (or your assumptions about that format).
Once an exception occurs, the loop that reads data is thrown off kilter because it is no longer positioned at a record boundary.
You should do several things to make this code more resilient:
Don't silently swallow exception in the run loop. Deal with them.
Don't read data byte by byte or field by field in the loop. Since your records are fixed size (77 bytes) - read an entire record into a byte[] and then process it from there. This will help ensure you are always reading at a record boundary.
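To illustrate the record-at-a-time suggestion: `Stream.Read` may legitimately return fewer bytes than requested, so a small helper that loops until the whole 77-byte record has arrived keeps the reads on record boundaries. This is a sketch; the MemoryStream stands in for the real binary file.

```csharp
using System;
using System.IO;

class RecordReader
{
    const int RecordSize = 77; // fixed packet length from the question

    // Reads exactly count bytes, looping because Stream.Read may return fewer.
    static bool ReadFully(Stream s, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = s.Read(buffer, offset, count - offset);
            if (read == 0) return false; // end of stream mid-record
            offset += read;
        }
        return true;
    }

    static void Main()
    {
        // Stand-in for the binary file: three empty records.
        var data = new byte[RecordSize * 3];
        using (var ms = new MemoryStream(data))
        {
            var packet = new byte[RecordSize];
            int records = 0;
            while (ReadFully(ms, packet, RecordSize))
                records++;
            Console.WriteLine(records); // 3
        }
    }
}
```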
Don't put an empty generic catch block here and just silently catch and continue. You should check and see if you're getting an actual exception in there and go from there.
There is no need for the byteToHexString function. Just write the values as hexadecimal literals with the 0x prefix and compare them directly.
i.e.
if (al[0] == 0x16 && al[1] == 0x3C && al[2] == 0x02)
{
    ...
}
I don't know what your doConvert function does (you didn't provide that source), but the BinaryReader class provides many different functions, one of which is ReadInt16. Unless your shorts are stored in an encoded format, that should be easier to use than doing your fairly obfuscated and confusing conversion. Even if they're encoded, it would still be far simpler to read the bytes in and manipulate them, rather than doing several roundtrips with converting to strings.
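For example, assuming the fields are stored little-endian, BinaryReader can pull them straight out of the stream without any string round-trips. The sample bytes here are made up for illustration.

```csharp
using System;
using System.IO;

class ReaderDemo
{
    static void Main()
    {
        // Two little-endian ushorts followed by one byte,
        // shaped like the packet header fields in the question.
        byte[] sample = { 0x34, 0x12, 0x78, 0x56, 0x07 };
        using (var br = new BinaryReader(new MemoryStream(sample)))
        {
            ushort id = br.ReadUInt16();         // 0x1234
            ushort semistable = br.ReadUInt16(); // 0x5678
            byte constant = br.ReadByte();       // 0x07
            Console.WriteLine($"{id:X4} {semistable:X4} {constant:X2}");
        }
    }
}
```

Note that BinaryReader is always little-endian, while BitConverter follows the machine's endianness; on x86/x64 they agree.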
Edit
You appear to be making very liberal use of the LINQ extension methods (particularly ElementAt). Every time you call that function, it enumerates your list until it reaches that number. You'll have much better performing code (as well as less verbose) if you just use the built-in indexer on the list.
i.e. al[3] rather than al.ElementAt(3).
Also, you don't need to call Flush on an input Stream. Flush is used to tell the stream to write anything that it has in its write buffer to the underlying OS file handle. For an input stream it won't do anything.
I would suggest replacing your current sw.WriteLine call with this:
sw.WriteLine(BitConverter.ToString(packet));
and see if the data you're expecting on the row where it starts to mess up is actually what you're getting.
I would actually do this:
if (packet.Take(3).SequenceEqual(STARTCODE) &&
    packet.Skip(packet.Length - ENDCODE.Length).SequenceEqual(ENDCODE))
{
    ushort id = BitConverter.ToUInt16(packet, 3);
    ushort semistable = BitConverter.ToUInt16(packet, 5);
    byte constant = packet[7];
    for (int i = 8; i < 72; i += 2)
    {
        il.Add(BitConverter.ToUInt16(packet, i));
    }
    foreach (int element in il)
    {
        sw.WriteLine(string.Format("{0},{1},{2},{3}", id, semistable, constant, element));
    }
    il.Clear();
}
else
{
    // handle "bad" packets
}
We need to represent huge numbers in our application. We're doing this using integer arrays. The final product should be tuned for maximum performance. We were thinking about encapsulating our array in a class so we could add properties related to the array, such as isNegative, numberBase and the like.
We're afraid that using classes, however, will kill us performance-wise. We did a test where we created a fixed number of arrays and set their values through pure array usage, and another where a class was created and the array accessed through the class:
for (int i = 0; i < 10000; i++)
{
    if (createClass)
    {
        BigNumber b = new BigNumber(new int[5000], 10);
        for (int j = 0; j < b.Number.Length; j++)
        {
            b[j] = 5;
        }
    }
    else
    {
        int[] test = new int[5000];
        for (int j = 0; j < test.Length; j++)
        {
            test[j] = 5;
        }
    }
}
And it appears that using classes slows down the running time of the above code by a factor of almost 6. We tried the above, just encapsulating the array in a struct instead, which caused the running time to be almost equal to pure array usage.
What is causing this huge overhead when using classes compared to structs? Is it really just the performance gain you get when you use the stack instead of the heap?
BigNumber just stores the array in a private variable exposed by a property. Simplified:
public class BigNumber
{
    private int[] number;
    public BigNumber(int[] number) { this.number = number; }
    public int[] Number { get { return number; } }
}
It's not surprising that the second loop is much faster than the first one. What's happening is not that the class is extraordinarily slow, it's that the loop is really easy for the compiler to optimize.
As the loop ranges from 0 to test.Length-1, the compiler can tell that the index variable can never be outside of the array, so it can remove the range check when accessing the array by index.
In the first loop the compiler can't do the connection between the loop and the array, so it has to check the index against the boundaries for each item that is accessed.
There will always be a bit of overhead when you encapsulate an array inside a class, but it's not as much as the difference that you get in your test. You have chosen a situation where the compiler is able to optimize the plain array access very well, so what you are testing is more the compiler's capability to optimize the code than what you set out to test.
You should profile the code when you run it and see where the time is being spent.
Also consider another language that makes it easy to use big ints.
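The bounds-check point can be sketched like this. `Holder` is a stand-in for the BigNumber wrapper from the question; whether the check on the property-backed access is actually elided depends on the JIT version, so treat this as illustrative.

```csharp
using System;

// Minimal stand-in for the BigNumber wrapper in the question.
class Holder
{
    private readonly int[] number;
    public Holder(int[] number) { this.number = number; }
    public int[] Number { get { return number; } }
}

class BoundsCheckDemo
{
    // Pattern the JIT recognizes: the loop bound is the array's own Length,
    // so the range check on a[i] can be removed.
    static long SumDirect(int[] a)
    {
        long total = 0;
        for (int i = 0; i < a.Length; i++)
            total += a[i];
        return total;
    }

    // Here the array is reached through a property; the JIT may not connect
    // the loop bound to the array, so each access can keep its range check.
    static long SumViaProperty(Holder h)
    {
        long total = 0;
        for (int i = 0; i < h.Number.Length; i++)
            total += h.Number[i];
        return total;
    }

    static void Main()
    {
        var data = new int[] { 1, 2, 3, 4 };
        Console.WriteLine(SumDirect(data));
        Console.WriteLine(SumViaProperty(new Holder(data)));
    }
}
```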
You're using an integer data type to store a single digit, which is part of a really large number. This is wrong.
The numerals 0-9 can be represented in 4 bits. A byte contains 8 bits. So, you can stuff 2 digits into a single byte (there's your first speed up hint).
Now, go look up how many bytes an integer is taking up (hint: it will be way more than you need to store a single digit).
What's killing performance is the use of integers, which is consuming about 4 times as much memory as you should be. Use bytes or, worst case, a character array (2 digits per byte or character) to store the numerals. It doesn't take a whole lot of logic to "pack" and "unpack" the numerals into a byte.
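One possible sketch of the two-digits-per-byte idea, using bit shifts for the "pack" and "unpack" logic mentioned above (the helper names are invented for illustration):

```csharp
using System;

class DigitPacking
{
    // Packs two decimal digits (0-9) into one byte: high nibble, low nibble.
    static byte Pack(int hi, int lo) => (byte)((hi << 4) | lo);

    // Splits a packed byte back into its two digits.
    static (int hi, int lo) Unpack(byte b) => (b >> 4, b & 0x0F);

    static void Main()
    {
        byte packed = Pack(3, 7);
        Console.WriteLine(packed);      // 55 (binary 0011 0111)
        var (hi, lo) = Unpack(packed);
        Console.WriteLine($"{hi}{lo}"); // 37
    }
}
```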
On the face of it, I would not expect a big difference, certainly not a factor of 6. BigNumber is just a class around an int[], isn't it? It would help if you showed us a little of BigNumber. And check your benchmarking.
It would be ideal if you posted something small we could copy/paste and run.
Without seeing your BigInteger implementation, it is very difficult to tell. However, I have two guesses.
1) Your loop with the plain array test can get special handling by the JIT, which removes the array bounds checking. This can give you a significant boost, especially since you're not doing any "real work" in the loop.
for (int j = 0; j < test.Length; j++) // This removes bounds checking by the JIT
2) Are you timing this in Release mode, outside of Visual Studio? If not, that alone would explain your 6x speed drop, since the Visual Studio hosting process slows down class access artificially. Make sure you're in Release mode and use Ctrl+F5 to test your timings.
Rather than reinventing (and debugging and perfecting) the wheel, you might be better served using an existing big integer implementation, so you can get on with the rest of your project.
This SO topic is a good start.
You might also check out this CodeProject article.
As pointed out by Guffa, the difference is mostly bounds checking.
To guarantee that bounds checking will not ruin performance, you can put your tight loops in an unsafe block; this eliminates the checks entirely. To do this you'll need to compile with the /unsafe option.
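A sketch of that technique: pin the array with `fixed` and walk it through a raw pointer, which carries no per-element range check. Compile with /unsafe; the method and data here are invented for illustration.

```csharp
using System;

class UnsafeSum
{
    // Walking the array through a pinned pointer bypasses
    // the per-element bounds check entirely.
    static unsafe long Sum(int[] data)
    {
        long total = 0;
        fixed (int* p = data)
        {
            for (int i = 0; i < data.Length; i++)
                total += p[i];
        }
        return total;
    }

    static void Main()
    {
        var a = new int[] { 1, 2, 3, 4, 5 };
        Console.WriteLine(Sum(a)); // 15
    }
}
```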
//pre-load the bits -- do this only ONCE
byte[] baHi = new byte[16];
baHi[0]=0;
baHi[1] = 000 + 00 + 00 + 16; //0001
baHi[2] = 000 + 00 + 32 + 00; //0010
baHi[3] = 000 + 00 + 32 + 16; //0011
baHi[4] = 000 + 64 + 00 + 00; //0100
baHi[5] = 000 + 64 + 00 + 16; //0101
baHi[6] = 000 + 64 + 32 + 00; //0110
baHi[7] = 000 + 64 + 32 + 16; //0111
baHi[8] = 128 + 00 + 00 + 00; //1000
baHi[9] = 128 + 00 + 00 + 16; //1001
//not needed for 0-9
//baHi[10] = 128 + 00 + 32 + 00; //1010
//baHi[11] = 128 + 00 + 32 + 16; //1011
//baHi[12] = 128 + 64 + 00 + 00; //1100
//baHi[13] = 128 + 64 + 00 + 16; //1101
//baHi[14] = 128 + 64 + 32 + 00; //1110
//baHi[15] = 128 + 64 + 32 + 16; //1111
//-------------------------------------------------------------------------
//START PACKING
//load TWO digits (0-9) at a time
//this means if you're loading a big number from
//a file, you read two digits at a time
//and put them into bLoVal and bHiVal
//230942034371231235 see that '37' in the middle?
// ^^
//
byte bHiVal = 3; //0000 0011
byte bLoVal = 7; //0000 0111
byte bShiftedLeftHiVal = (byte)baHi[bHiVal]; //0011 0000 = 48 (3, shifted left four bits)
//fuse the two together into a single byte
byte bNewVal = (byte)(bShiftedLeftHiVal + bLoVal); //0011 0111 = 55 decimal
//now store bNewVal wherever you want to store it
//for later retrieval, like a byte array
//END PACKING
//-------------------------------------------------------------------------
Response.Write("PACKING: hi: " + bHiVal + " lo: " + bLoVal + " packed: " + bNewVal);
Response.Write("<br>");
//-------------------------------------------------------------------------
//START UNPACKING
byte bUnpackedLoByte = (byte)(bNewVal & 15);  //will yield 7
byte bUnpackedHiByte = (byte)(bNewVal & 240); //will yield 48
//now we need to change '48' back into '3'
//(a simple right shift, bUnpackedHiByte >>= 4, would do this too)
string sHiBits = "00000000" + Convert.ToString(bUnpackedHiByte, 2); //Convert drops leading 0s, so we pad...
sHiBits = sHiBits.Substring(sHiBits.Length - 8, 8); //...and take the last 8 characters
sHiBits = ("0000" + sHiBits).Substring(0, 8); //shift right by four bits
bUnpackedHiByte = (byte)Convert.ToSByte(sHiBits, 2); //and, finally, get back the original digit
//the above method, reworked, could also be used to PACK the data,
//though it might be slower than hitting an array.
//You can also loop through baHi to unpack, comparing the original
//bUnpackedHyByte to the contents of the array and return
//the index of where you found it (the index would be the
//unpacked digit)
Response.Write("UNPACKING: input: " + bNewVal + " hi: " + bUnpackedHiByte + " lo: " + bUnpackedLoByte);
//now create your output with bUnpackedHiByte and bUnpackedLoByte,
//then move on to the next two bytes in where ever you stored the
//really big number
//END UNPACKING
//-------------------------------------------------------------------------
Even if you just changed your int to short in your original solution, you'd chop your memory requirements in half. The above takes memory down to almost the bare minimum (I'm sure someone will come along screaming about a few wasted bytes).