C# Reading, Storing, and Combining Arrays

I am working on an RS232 communication effort but have been running into issues with some of the arrays I am handling.
In the example below I am sending out a "Command" with the intent to read the response and store its first 4 bytes in a new array called "FirstFour". For every loop execution I also want to convert the integer "i" to a hex value. I then intend to combine the "FirstFour" and "iHex" arrays into a new array noted as "ComboByte". Below is my code so far, but it doesn't seem to be working.
private void ReadStoreCreateByteArray()
{
byte[] Command = { 0x01, 0x02, 0x05, 0x04, 0x05, 0x06, 0x07, 0x08};
for (int i = 0; i < 10; i++)
{
//Send Command
comport.Write(Command, 0, Command.Length);
//Read response and store in buffer
int bytes = comport.BytesToRead;
byte[] Buffer = new byte[bytes];
comport.Read(Buffer, 0, bytes);
//Create 4 byte array to hold first 4 bytes out of Command
var FirstFour = Buffer.Take(4).ToArray();
//Convert i to a Hex value
byte iHex = Convert.ToByte(i.ToString());
//Combine "FirstFour" and "iHex" into a new array
byte [] ComboByte = {iHex, FirstFour[1], FirstFour[2], FirstFour[3], FirstFour[4]};
comport.Write(ComboByte, 0, ComboByte.Length);
}
}
Any help would be appreciated. Thanks!

Arrays are zero based, so...
byte [] ComboByte = {iHex, FirstFour[0], FirstFour[1], FirstFour[2], FirstFour[3]};
...should give you the first 4 elements of FirstFour.
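As a minimal sketch (assuming the response buffer really holds at least four bytes), the end of the loop body could then become:
var FirstFour = Buffer.Take(4).ToArray();
byte[] ComboByte = new byte[5];
ComboByte[0] = iHex;
// Copies FirstFour[0]..FirstFour[3] into ComboByte[1]..ComboByte[4]
Array.Copy(FirstFour, 0, ComboByte, 1, 4);
comport.Write(ComboByte, 0, ComboByte.Length);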

First, you need to handle the return value from Read operations. The following pattern should work fine for a range of APIs, including SerialPort, Stream, etc.:
static void ReadExact(SerialPort port, byte[] buffer, int offset, int count)
{
int read;
while(count > 0 && (read = port.Read(buffer, offset, count)) > 0)
{
count -= read;
offset += read;
}
if (count != 0) throw new EndOfStreamException();
}
So: you have a method that can read a reliable number of bytes - you should then be able to re-use a single buffer and populate it sequentially:
byte[] buffer = new byte[5];
for (int i = 0; i < 10; i++)
{
//...
buffer[0] = (byte)i;
ReadExact(port, buffer, 1, 4);
}
The BytesToRead property is largely useless except for deciding whether to read synchronously or asynchronously, as it doesn't tell you whether more data is imminent. With your existing code there is no guarantee you will have at least 4 bytes.
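For illustration, a sketch of that race using the question's own names:
comport.Write(Command, 0, Command.Length);
// The device may not have replied yet, so this is only a snapshot;
// it can be 0 or anything smaller than the full response size.
int available = comport.BytesToRead;
// Use ReadExact above to block until exactly four bytes have arrived.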

Related

C# How to write bytes into the middle of a byte[] array

The function ReadPipe() below reads chunks of bytes, and I need each chunk to go to the next location in byte[] packet_buffer. But I can't figure out how to tell ReadPipe to write bytes into the middle of packet_buffer.
If it were C, I could just pass: &packet_buffer[ byte index of next chunk ]
How do I do this in C#?
public static int receive_SetStreamPipe_2( byte[] packet_buffer, int bytes_to_read )
{
uint received_chunk_bytes = 0;
int remaining_bytes = bytes_to_read;
int total_transferred_bytes = 0;
// Use DataPipeInformation to get the actual PipeID
ftStatus = USB_device_selection0.SetStreamPipe( FT_pipe_information.PipeId, (UInt32)bytes_to_read );
if (ftStatus != FTDI.FT_STATUS.FT_OK)
return -(int)ftStatus; // lookup: FTDI.FT_STATUS
// For each chunk 'o bytes:
for(;;)
{
// Read chunk of bytes from FPGA:
ftStatus = USB_device_selection0.ReadPipe( FT_pipe_information.PipeId,
packet_buffer( remaining_bytes ) , <<<<<<<<<<<<<< THIS WON'T WORK
(uint)remaining_bytes,
ref received_chunk_bytes );
if (ftStatus != FTDI.FT_STATUS.FT_OK)
return -(int)ftStatus; // lookup: FTDI.FT_STATUS
total_transferred_bytes += (int)received_chunk_bytes;
remaining_bytes -= (int)received_chunk_bytes;
// Get more if not done:
if( total_transferred_bytes < bytes_to_read )
{
continue; // go get more
}
return 0;
}
}
Based on CodeCaster's response, the best answer so far is that I have asked FTDI, the company that makes the USB host driver, to provide an overload with an offset.
Making the following assumptions about your code, which I really shouldn't have to (please read [ask] and provide all relevant details):
receive_SetStreamPipe_2(byte[] packet_buffer, int bytes_to_read):
- Is implemented by you
- Receives in packet_buffer an array that is at least bytes_to_read long and needs to be filled with exactly bytes_to_read bytes
USB_device_selection0.ReadPipe(FT_pipe_information.PipeId, packet_buffer, (uint)remaining_bytes, ref received_chunk_bytes):
- Fills packet_buffer from index 0 and doesn't have an overload with an offset (such as Stream.Write(buffer, offset, count))
- Fills it with at most remaining_bytes, but possibly fewer
- Assigns to received_chunk_bytes the number of bytes that have been read
Then you need to introduce a temporary buffer that you copy into the final buffer. The optimal size should be obtainable from the API documentation, but let's take 1024 bytes:
uint received_chunk_bytes = 0;
int remaining_bytes = bytes_to_read;
int total_transferred_bytes = 0;
// Create a smaller buffer to hold each chunk
int chunkSize = 1024;
byte[] chunkBuffer = new byte[chunkSize];
// ...
for (;;)
{
// Read a chunk of bytes from the FPGA into chunkBuffer, the chunk size being
// the buffer size or the remaining number of bytes, whichever is less
ftStatus = USB_device_selection0.ReadPipe(FT_pipe_information.PipeId,
chunkBuffer,
(uint)Math.Min(chunkSize, remaining_bytes),
ref received_chunk_bytes);
if (ftStatus != FTDI.FT_STATUS.FT_OK)
return -(int)ftStatus; // lookup: FTDI.FT_STATUS
// Copy the chunk into the output array
Array.Copy(chunkBuffer, 0, packet_buffer, total_transferred_bytes, received_chunk_bytes);
total_transferred_bytes += (int)received_chunk_bytes;
remaining_bytes -= (int)received_chunk_bytes;
// ...

Reading bytes from the serial port

I'm building an application where I need to read 15 bytes from a serial device (ScaleXtric c7042 powerbase). The bytes need to come in the right order, and the last one is a CRC.
Using this code in a BackgroundWorker, I get the bytes:
byte[] data = new byte[_APB.ReadBufferSize];
_APB.Read(data, 0, data.Length);
The problem is that I don't get the first bytes first. It's like it stores some of the bytes in the buffer, so the next time the DataReceived event fires, I get the last x bytes from the previous message and only the first 15-x bytes of the new one. I write the bytes to a text box, and it's all over the place, so some bytes are missing somewhere.
I have tried to clear the buffer after each read, but no luck.
_APB = new SerialPort(comboBoxCommAPB.SelectedItem.ToString());
_APB.BaudRate = 19200;
_APB.DataReceived += new SerialDataReceivedEventHandler(DataReceivedHandlerDataFromAPB);
_APB.Open();
_APB.DiscardInBuffer();
Hope anyone can help me here.
Use this method to read a fixed number of bytes from the serial port; in your case, toRead = 15:
public byte[] ReadFromSerialPort(SerialPort serialPort, int toRead)
{
byte[] buffer = new byte[toRead];
int offset = 0;
int read;
while (toRead > 0 && (read = serialPort.Read(buffer, offset, toRead)) > 0)
{
offset += read;
toRead -= read;
}
if (toRead > 0) throw new EndOfStreamException();
return buffer;
}
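A hypothetical use from the question's event handler (_APB and the 15-byte frame come from the question; the CRC handling is only sketched):
void DataReceivedHandlerDataFromAPB(object sender, SerialDataReceivedEventArgs e)
{
byte[] frame = ReadFromSerialPort(_APB, 15);
// frame[14] is the CRC byte; validate it before trusting the frame.
}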

Copy all but the last 16 bytes of a stream? Early detection of end-of-stream?

This is C# related. We have a case where we need to copy the entire source stream into a destination stream except for the last 16 bytes.
EDIT: The streams can range up to 40 GB, so we can't do any static byte[] allocation (e.g. .ToArray()).
Looking at the MSDN documentation, it seems that we can reliably determine the end of stream only when the return value is 0. Return values between 0 and the requested size can mean bytes are "not currently available" (what does that really mean?).
Currently it copies every single byte as follows. inStream and outStream are generic - can be memory, disk or network streams (actually some more too).
public static void StreamCopy(Stream inStream, Stream outStream)
{
var buffer = new byte[8*1024];
var last16Bytes = new byte[16];
int bytesRead;
while ((bytesRead = inStream.Read(buffer, 0, buffer.Length)) > 0)
{
outStream.Write(buffer, 0, bytesRead);
}
// Issues:
// 1. We already wrote the last 16 bytes into
// outStream (possibly over the n/w)
// 2. last16Bytes = ? (inStream may not necessarily support rewinding)
}
What is a reliable way to ensure all but the last 16 bytes are copied? I can think of using Position and Length on the inStream, but there is a gotcha on MSDN that says:
If a class derived from Stream does not support seeking, calls to Length, SetLength, Position, and Seek throw a NotSupportedException.
1. Read between 1 and n bytes from the input stream. [1]
2. Append the bytes to a circular buffer. [2]
3. Write the first max(0, b - 16) bytes from the circular buffer to the output stream, where b is the number of bytes in the circular buffer.
4. Remove the bytes that you just have written from the circular buffer.
5. Go to step 1.
[1] This is what the Read method does: if you call int n = Read(buffer, 0, 500); it will read between 1 and 500 bytes into buffer and return the number of bytes read. If Read returns 0, you have reached the end of the stream.
[2] For maximum performance, you can read the bytes directly from the input stream into the circular buffer. This is a bit tricky, because you have to deal with the wraparound within the array underlying the buffer.
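A minimal sketch of these steps, assuming we copy into the ring rather than reading directly into it (sidestepping the wraparound subtlety from footnote [2]); the buffer sizes are illustrative:
public static void CopyAllButLast16(Stream inStream, Stream outStream)
{
const int tail = 16;
byte[] readBuf = new byte[4096];
byte[] ring = new byte[readBuf.Length + tail]; // the circular buffer
int head = 0, count = 0; // next write slot, bytes currently held
int n;
while ((n = inStream.Read(readBuf, 0, readBuf.Length)) > 0) // step 1
{
for (int i = 0; i < n; i++) // step 2: append to the circular buffer
{
ring[head] = readBuf[i];
head = (head + 1) % ring.Length;
count++;
}
int flush = Math.Max(0, count - tail); // step 3: write all but the last 16
int readPos = (head - count + ring.Length) % ring.Length;
for (int i = 0; i < flush; i++) // step 4: drop what was written
{
outStream.WriteByte(ring[readPos]);
readPos = (readPos + 1) % ring.Length;
}
count -= flush;
}
}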
The following solution is fast and tested. Hope it's useful. It uses the double-buffering idea you already had in mind. EDIT: simplified the loop by removing the conditional that separated the first iteration from the rest.
public static void StreamCopy(Stream inStream, Stream outStream) {
// Define the size of the chunk to copy during each iteration (1 KiB)
const int blockSize = 1024;
const int bytesToOmit = 16;
const int buffSize = blockSize + bytesToOmit;
// Generate working buffers
byte[] buffer1 = new byte[buffSize];
byte[] buffer2 = new byte[buffSize];
// Initialize first iteration
byte[] curBuffer = buffer1;
byte[] prevBuffer = null;
int bytesRead;
// Attempt to fully fill the buffer
bytesRead = inStream.Read(curBuffer, 0, buffSize);
if( bytesRead == buffSize ) {
// We successfully retrieved a whole buffer, we will output
// only [blockSize] bytes, to avoid writing to the last
// bytes in the buffer in case the remaining 16 bytes happen to
// be the last ones
outStream.Write(curBuffer, 0, blockSize);
} else {
// We couldn't retrieve the whole buffer
int bytesToWrite = bytesRead - bytesToOmit;
if( bytesToWrite > 0 ) {
outStream.Write(curBuffer, 0, bytesToWrite);
}
// There's no more data to process
return;
}
curBuffer = buffer2;
prevBuffer = buffer1;
while( true ) {
// Attempt again to fully fill the buffer
bytesRead = inStream.Read(curBuffer, 0, buffSize);
if( bytesRead == buffSize ) {
// We retrieved the whole buffer, output first the last 16
// bytes of the previous buffer, and output just [blockSize]
// bytes from the current buffer
outStream.Write(prevBuffer, blockSize, bytesToOmit);
outStream.Write(curBuffer, 0, blockSize);
} else {
// We could not retrieve a complete buffer
if( bytesRead <= bytesToOmit ) {
// The bytes to output come solely from the previous buffer
outStream.Write(prevBuffer, blockSize, bytesRead);
} else {
// The bytes to output come from the previous buffer and
// the current buffer
outStream.Write(prevBuffer, blockSize, bytesToOmit);
outStream.Write(curBuffer, 0, bytesRead - bytesToOmit);
}
break;
}
// swap buffers for next iteration
byte[] swap = prevBuffer;
prevBuffer = curBuffer;
curBuffer = swap;
}
}
static void Assert(Stream inStream, Stream outStream) {
// Routine that tests the copy worked as expected
const int bytesToOmit = 16; // must match the constant in StreamCopy
inStream.Seek(0, SeekOrigin.Begin);
outStream.Seek(0, SeekOrigin.Begin);
Debug.Assert(outStream.Length == Math.Max(inStream.Length - bytesToOmit, 0));
for( int i = 0; i < outStream.Length; i++ ) {
int byte1 = inStream.ReadByte();
int byte2 = outStream.ReadByte();
Debug.Assert(byte1 == byte2);
}
}
A much easier solution to code, yet slower since it works at the byte level, would be to use an intermediate queue between the input stream and the output stream. The process would first read and enqueue 16 bytes from the input stream. Then it would iterate over the remaining input bytes: read a single byte from the input stream, enqueue it, and then dequeue a byte, writing the dequeued byte to the output stream, until all bytes from the input stream are processed. The unwanted 16 bytes are left lingering in the intermediate queue.
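A minimal sketch of that queue-based idea (byte-at-a-time, so correct but slow):
public static void StreamCopyQueue(Stream inStream, Stream outStream)
{
const int tailSize = 16;
var queue = new Queue<byte>(tailSize + 1);
int b;
while ((b = inStream.ReadByte()) != -1)
{
queue.Enqueue((byte)b);
if (queue.Count > tailSize)
outStream.WriteByte(queue.Dequeue()); // the oldest byte is now safe to emit
}
// The unwanted last 16 bytes remain in the queue.
}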
Hope this helps!
=)
Using a circular buffer sounds great, but there is no circular buffer class in .NET, which means additional code anyway. I ended up with the following algorithm, a sort of map-and-copy; I think it's simple. The variable names are longer than usual for the sake of being self-descriptive here.
This flows through the buffers as
[outStream] <== [tailBuf] <== [mainBuf] <== [inStream]
public byte[] CopyStreamExtractLastBytes(Stream inStream, Stream outStream,
int extractByteCount)
{
//var mainBuf = new byte[1024*4]; // 4K buffer ok for network too
var mainBuf = new byte[4651]; // nearby prime for testing
int mainBufValidCount;
var tailBuf = new byte[extractByteCount];
int tailBufValidCount = 0;
while ((mainBufValidCount = inStream.Read(mainBuf, 0, mainBuf.Length)) > 0)
{
// Map: how much of what (passthru/tail) lives where (MainBuf/tailBuf)
// more than tail is passthru
int totalPassthruCount = Math.Max(0, tailBufValidCount +
mainBufValidCount - extractByteCount);
int tailBufPassthruCount = Math.Min(tailBufValidCount, totalPassthruCount);
int tailBufTailCount = tailBufValidCount - tailBufPassthruCount;
int mainBufPassthruCount = totalPassthruCount - tailBufPassthruCount;
int mainBufResidualCount = mainBufValidCount - mainBufPassthruCount;
// Copy: Passthru must be flushed per FIFO order (tailBuf then mainBuf)
outStream.Write(tailBuf, 0, tailBufPassthruCount);
outStream.Write(mainBuf, 0, mainBufPassthruCount);
// Copy: Now reassemble/compact tail into tailBuf
var tempResidualBuf = new byte[extractByteCount];
Array.Copy(tailBuf, tailBufPassthruCount, tempResidualBuf, 0,
tailBufTailCount);
Array.Copy(mainBuf, mainBufPassthruCount, tempResidualBuf,
tailBufTailCount, mainBufResidualCount);
tailBufValidCount = tailBufTailCount + mainBufResidualCount;
tailBuf = tempResidualBuf;
}
return tailBuf;
}

C# split byte array from file

Hello, I'm writing an encryption algorithm which reads bytes from a file (any type) and outputs them into a file. The problem is that my encryption program takes only blocks of 16 bytes, so if the file is bigger it has to be split into blocks of 16; alternatively, if there's a way to read 16 bytes from the file each time, that's fine too.
The algorithm is working fine with hard-coded input of 16 bytes. The ciphered result has to be saved in a list or array, because it has to be deciphered the same way later. I can't post my whole program, but here's what I do in Main so far, and I cannot get results.
static void Main(String[] args)
{
byte[] bytes = File.ReadAllBytes("path to file");
var stream = new StreamReader(new MemoryStream(bytes));
byte[] cipherText = new byte[16];
byte[] decipheredText = new byte[16];
Console.WriteLine("\nThe message is: ");
Console.WriteLine(stream.ReadToEnd());
AES a = new AES(keyInput);
var list1 = new List<byte[]>();
for (int i = 0; i < bytes.Length; i+=16)
{
a.Cipher(bytes, cipherText);
list1.Add(cipherText);
}
Console.WriteLine("\nThe resulting ciphertext is: ");
foreach (byte[] b in list1)
{
ToBytes(b);
}
}
I know that my loop always adds the first 16 bytes from the byte array, but I've tried many ways and nothing works. It won't let me index the bytes array or copy an item to a temp variable like temp = bytes[i]. The ToBytes method is irrelevant; it just prints the elements as bytes.
I would recommend changing the interface of your Cipher() method: instead of passing the entire array, pass the source and destination arrays along with offsets, for block-by-block encryption.
Pseudo-code is below.
void Cipher(byte[] source, int srcOffset, byte[] dest, int destOffset)
{
// Cipher these bytes from (source + offset) to (source + offset + 16),
// write the cipher to (dest + offset) to (dest + offset + 16)
// Also I'd recommend checking that source.Length and dest.Length are at least (offset + 16)!
}
Usage:
For small files (one memory allocation for destination buffer, block by block encryption):
// You can allocate the entire destination buffer before encryption!
byte[] sourceBuffer = File.ReadAllBytes("path to file");
byte[] destBuffer = new byte[sourceBuffer.Length];
// Encrypt each block.
for (int offset = 0; offset < sourceBuffer.Length; offset += 16)
{
Cipher(sourceBuffer, offset, destBuffer, offset);
}
So, the main advantage of this approach is that it eliminates additional memory allocations: the destination array is allocated once, up front. There are also no memory-copy operations.
For files of any size (streams, block by block encryption):
byte[] inputBlock = new byte[16];
byte[] outputBlock = new byte[16];
using (var inputStream = File.OpenRead("input path"))
using (var outputStream = File.Create("output path"))
{
int bytesRead;
while ((bytesRead = inputStream.Read(inputBlock, 0, inputBlock.Length)) > 0)
{
if (bytesRead < 16)
{
// Throw or use padding technique.
throw new InvalidOperationException("Read block size is not equal to 16 bytes");
// Fill the remaining bytes of input block with some bytes.
// This operation for last block is called "padding".
// See http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding
}
Cipher(inputBlock, 0, outputBlock, 0);
outputStream.Write(outputBlock, 0, outputBlock.Length);
}
}
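As a hypothetical alternative to throwing (not part of the answer above), the final short block could be zero-padded in place before ciphering; real systems usually prefer a reversible scheme such as PKCS#7:
if (bytesRead < 16)
{
// Zero-fill the rest of the final block (simplest, but not reversible).
Array.Clear(inputBlock, bytesRead, 16 - bytesRead);
}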
No need to read the whole mess into memory if you can only process it a bit at a time...
var filename = @"c:\temp\foo.bin";
using(var fileStream = new FileStream(filename, FileMode.Open))
{
var buffer = new byte[16];
var bytesRead = 0;
while((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
// do whatever you need to with the next 16-byte block
Console.WriteLine("Read {0} bytes: {1}",
bytesRead,
string.Join(",", buffer));
}
}
You can use Array.Copy:
byte[] temp = new byte[16];
Array.Copy(bytes, i, temp, 0, 16);
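For example, a sketch of the question's loop built on Array.Copy (Cipher, cipherText, and list1 come from the question; the Clone call is my addition so each stored block isn't overwritten on the next iteration):
for (int i = 0; i < bytes.Length; i += 16)
{
byte[] temp = new byte[16];
Array.Copy(bytes, i, temp, 0, Math.Min(16, bytes.Length - i));
a.Cipher(temp, cipherText);
list1.Add((byte[])cipherText.Clone());
}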

Removing trailing nulls from byte array in C#

OK, I am reading .dat files into a byte array. For some reason, the people who generate these files put about half a meg's worth of useless null bytes at the end of the file. Does anybody know a quick way to trim these off the end?
My first thought was to start at the end of the array and iterate backwards until I found something other than a null, then copy everything up to that point, but I wonder if there isn't a better way.
To answer some questions:
Are you sure the 0 bytes are definitely in the file, rather than there being a bug in the file reading code? Yes, I am certain of that.
Can you definitely trim all trailing 0s? Yes.
Can there be any 0s in the rest of the file? Yes, there can be 0s in other places, so, no, I can't start at the beginning and stop at the first 0.
I agree with Jon. The critical bit is that you must "touch" every byte from the last one until the first non-zero byte. Something like this:
byte[] foo;
// populate foo
int i = foo.Length - 1;
// Guard against an all-zero (or empty) array to avoid running off the front
while (i >= 0 && foo[i] == 0)
--i;
// now foo[i] is the last non-zero byte (i is -1 if the array is all zeros)
byte[] bar = new byte[i + 1];
Array.Copy(foo, bar, i + 1);
I'm pretty sure that's about as efficient as you're going to be able to make it.
Given the extra questions now answered, it sounds like you're fundamentally doing the right thing. In particular, you have to touch every byte of the file from the last 0 onwards, to check that it only has 0s.
Now, whether you have to copy everything or not depends on what you're then doing with the data.
You could perhaps remember the index and keep it with the data or filename.
You could copy the data into a new byte array
If you want to "fix" the file, you could call FileStream.SetLength to truncate the file
The "you have to read every byte between the truncation point and the end of the file" is the critical part though.
@Factor Mystic,
I think there is a shorter way:
var data = new byte[] { 0x01, 0x02, 0x00, 0x03, 0x04, 0x00, 0x00, 0x00, 0x00 };
var new_data = data.TakeWhile((v, index) => data.Skip(index).Any(w => w != 0x00)).ToArray();
How about this:
[Test]
public void Test()
{
var chars = new [] {'a', 'b', '\0', 'c', '\0', '\0'};
File.WriteAllBytes("test.dat", Encoding.ASCII.GetBytes(chars));
var content = File.ReadAllText("test.dat");
Assert.AreEqual(6, content.Length); // includes the null bytes at the end
content = content.Trim('\0');
Assert.AreEqual(4, content.Length); // no more null bytes at the end
// but still has the one in the middle
}
Assuming 0 = null, that is probably your best bet... as a minor tweak, you might want to use Buffer.BlockCopy when you finally copy the useful data.
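A sketch of that tweak, reusing foo and i from the answer above:
byte[] bar = new byte[i + 1];
// Buffer.BlockCopy counts in raw bytes and can be faster for primitive arrays.
Buffer.BlockCopy(foo, 0, bar, 0, i + 1);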
Test this:
private byte[] trimByte(byte[] input)
{
int byteCounter = input.Length - 1;
// Stop at the first non-zero byte; the bounds check handles all-zero input
while (byteCounter >= 0 && input[byteCounter] == 0x00)
{
byteCounter--;
}
byte[] rv = new byte[byteCounter + 1];
for (int byteCounter1 = 0; byteCounter1 < byteCounter + 1; byteCounter1++)
{
rv[byteCounter1] = input[byteCounter1];
}
return rv;
}
There is always a LINQ answer
byte[] data = new byte[] { 0x01, 0x02, 0x00, 0x03, 0x04, 0x00, 0x00, 0x00, 0x00 };
bool data_found = false;
byte[] new_data = data.Reverse().SkipWhile(point =>
{
if (data_found) return false;
if (point == 0x00) return true; else { data_found = true; return false; }
}).Reverse().ToArray();
You could just count the number of zeros at the end of the array and use that instead of .Length when iterating the array later on. You could encapsulate this however you like. The main point is that you don't really need to copy it into a new structure. If the arrays are big, that may be worth it; see the sketch below.
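A minimal sketch of that idea (foo as in the earlier answers):
int effectiveLength = foo.Length;
while (effectiveLength > 0 && foo[effectiveLength - 1] == 0)
effectiveLength--;
// From here on, iterate foo[0 .. effectiveLength) instead of copying.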
If null bytes can be valid values in the file, do you know that the last byte cannot be null? If so, iterating backwards and looking for the first non-null entry is probably best; if not, there is no way to tell where the actual end of the file is.
If you know more about the data format, such as that there can be no sequence of null bytes longer than two bytes (or some similar constraint), then you may be able to do a binary search for the 'transition point'. This should be much faster than the linear search (assuming that you can read in the whole file).
The basic idea (using my earlier assumption about no consecutive null bytes) would be:
var data = /* byte array of file data */;
var index = data.Length / 2;
var jmpsize = data.Length / 2;
while (true)
{
jmpsize /= 2; // integer division
if (jmpsize == 0) break;
byte b1 = data[index];
byte b2 = data[index + 1];
if (b1 == 0 && b2 == 0) // too close to the end, go left
index -= jmpsize;
else
index += jmpsize;
}
if (index == data.Length - 1) return data.Length;
byte c1 = data[index]; // renamed to avoid redeclaring b1/b2 from the loop
byte c2 = data[index + 1];
if (c2 == 0)
{
if (c1 == 0) return index;
else return index + 1;
}
else return index + 2;
When the file is large (much larger than my RAM), I use this to remove trailing nulls:
static void RemoveTrailingNulls(string inputFilename, string outputFilename)
{
int bufferSize = 100 * 1024 * 1024;
long totalTrailingNulls = 0;
byte[] emptyArray = new byte[bufferSize];
using (var inputFile = File.OpenRead(inputFilename))
using (var inputFileReversed = new ReverseStream(inputFile))
{
var buffer = new byte[bufferSize];
while (true)
{
var start = DateTime.Now;
var bytesRead = inputFileReversed.Read(buffer, 0, buffer.Length);
if (bytesRead == 0) break; // start of file reached; everything so far was null
if (bytesRead == emptyArray.Length && Enumerable.SequenceEqual(emptyArray, buffer))
{
totalTrailingNulls += buffer.Length;
}
else
{
var nulls = buffer.Take(bytesRead).TakeWhile(b => b == 0).Count();
totalTrailingNulls += nulls;
if (nulls < bytesRead)
{
//found the last non-null byte
break;
}
}
var duration = DateTime.Now - start;
var mbPerSec = (bytesRead / (1024 * 1024D)) / duration.TotalSeconds;
Console.WriteLine($"{mbPerSec:N2} MB/seconds");
}
var lastNonNull = inputFile.Length - totalTrailingNulls;
using (var outputFile = File.Open(outputFilename, FileMode.Create, FileAccess.Write))
{
inputFile.Seek(0, SeekOrigin.Begin);
inputFile.CopyTo(outputFile, lastNonNull, bufferSize);
}
}
}
It uses the ReverseStream class, which can be found here.
And this extension method:
public static class Extensions
{
public static long CopyTo(this Stream input, Stream output, long count, int bufferSize)
{
byte[] buffer = new byte[bufferSize];
long totalRead = 0;
while (true)
{
if (count == 0) break;
int read = input.Read(buffer, 0, (int)Math.Min(bufferSize, count));
if (read == 0) break;
totalRead += read;
output.Write(buffer, 0, read);
count -= read;
}
return totalRead;
}
}
In my case the LINQ approach never finished ^))) It's too slow to work with byte arrays!
Guys, why don't you use the Array.Copy() method?
/// <summary>
/// Gets array of bytes from memory stream.
/// </summary>
/// <param name="stream">Memory stream.</param>
public static byte[] GetAllBytes(this MemoryStream stream)
{
byte[] result = new byte[stream.Length];
Array.Copy(stream.GetBuffer(), result, stream.Length);
return result;
}
