C# How to write bytes into the middle of a byte[] array

The function ReadPipe() below reads chunks of bytes, and I need each chunk to go to the next location in byte[] packet_buffer. But I can't figure out how to tell .ReadPipe to write bytes into the middle of packet_buffer.
If it were C, I could just pass &packet_buffer[byte_index_of_next_chunk].
How do I do this in C#?
public static int receive_SetStreamPipe_2( byte[] packet_buffer, int bytes_to_read )
{
    uint received_chunk_bytes = 0;
    int remaining_bytes = bytes_to_read;
    int total_transferred_bytes = 0;

    // Use DataPipeInformation to get the actual PipeID
    ftStatus = USB_device_selection0.SetStreamPipe( FT_pipe_information.PipeId, (UInt32)bytes_to_read );
    if (ftStatus != FTDI.FT_STATUS.FT_OK)
        return -(int)ftStatus; // lookup: FTDI.FT_STATUS

    // For each chunk of bytes:
    for(;;)
    {
        // Read chunk of bytes from FPGA:
        ftStatus = USB_device_selection0.ReadPipe( FT_pipe_information.PipeId,
            packet_buffer( remaining_bytes ) , // <<<<<<<<<<<<<< THIS WON'T WORK
            (uint)remaining_bytes,
            ref received_chunk_bytes );
        if (ftStatus != FTDI.FT_STATUS.FT_OK)
            return -(int)ftStatus; // lookup: FTDI.FT_STATUS

        total_transferred_bytes += (int)received_chunk_bytes;
        remaining_bytes -= (int)received_chunk_bytes;

        // Get more if not done:
        if( total_transferred_bytes < bytes_to_read )
        {
            continue; // go get more
        }
        return 0;
    }
}

Based on CodeCaster's response, the best answer so far is that I have asked FTDI, the company that makes the USB host driver, to provide an overload with an offset.

Making the following assumptions about your code, which I really shouldn't have to (please read [ask] and provide all relevant details):
receive_SetStreamPipe_2(byte[] packet_buffer, int bytes_to_read):
- Is implemented by you
- Receives in packet_buffer an array that is at least bytes_to_read long and needs to be filled with exactly bytes_to_read bytes
USB_device_selection0.ReadPipe(FT_pipe_information.PipeId, packet_buffer, (uint)remaining_bytes, ref received_chunk_bytes):
- Fills packet_buffer from index 0 and doesn't have an overload with an offset (such as Stream.Write(buffer, offset, count))
- Fills it with at most remaining_bytes, but probably fewer
- Assigns received_chunk_bytes the number of bytes that have been read
Then you need to introduce a temporary buffer whose contents you copy into the final buffer. The optimal size for that buffer should be obtainable from the API documentation, but let's take 1024 bytes:
uint received_chunk_bytes = 0;
int remaining_bytes = bytes_to_read;
int total_transferred_bytes = 0;

// Create a smaller buffer to hold each chunk
int chunkSize = 1024;
byte[] chunkBuffer = new byte[chunkSize];

// ...

for (;;)
{
    // Read a chunk of bytes from the FPGA into chunkBuffer; the chunk size is
    // the buffer size or the remaining number of bytes, whichever is less
    ftStatus = USB_device_selection0.ReadPipe(FT_pipe_information.PipeId,
        chunkBuffer,
        (uint)Math.Min(chunkSize, remaining_bytes),
        ref received_chunk_bytes);
    if (ftStatus != FTDI.FT_STATUS.FT_OK)
        return -(int)ftStatus; // lookup: FTDI.FT_STATUS

    // Copy the chunk into the output array
    Array.Copy(chunkBuffer, 0, packet_buffer, total_transferred_bytes, received_chunk_bytes);
    total_transferred_bytes += (int)received_chunk_bytes;
    remaining_bytes -= (int)received_chunk_bytes;
    // ...
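For reference, here is how the pieces above might fit together into a complete method. This is a sketch only: it assumes the same FTDI fields from the question (ftStatus, USB_device_selection0, FT_pipe_information) are in scope, and it keeps the question's error-return convention.

// Sketch: assumes the FTDI fields (ftStatus, USB_device_selection0,
// FT_pipe_information) from the question are available in the enclosing class.
public static int receive_SetStreamPipe_2(byte[] packet_buffer, int bytes_to_read)
{
    uint received_chunk_bytes = 0;
    int remaining_bytes = bytes_to_read;
    int total_transferred_bytes = 0;

    int chunkSize = 1024;
    byte[] chunkBuffer = new byte[chunkSize];

    ftStatus = USB_device_selection0.SetStreamPipe(FT_pipe_information.PipeId, (UInt32)bytes_to_read);
    if (ftStatus != FTDI.FT_STATUS.FT_OK)
        return -(int)ftStatus;

    while (total_transferred_bytes < bytes_to_read)
    {
        ftStatus = USB_device_selection0.ReadPipe(FT_pipe_information.PipeId,
            chunkBuffer,
            (uint)Math.Min(chunkSize, remaining_bytes),
            ref received_chunk_bytes);
        if (ftStatus != FTDI.FT_STATUS.FT_OK)
            return -(int)ftStatus;

        // Copy this chunk to its place in the output array.
        Array.Copy(chunkBuffer, 0, packet_buffer, total_transferred_bytes, received_chunk_bytes);
        total_transferred_bytes += (int)received_chunk_bytes;
        remaining_bytes -= (int)received_chunk_bytes;
        // NOTE: a production version should also guard against a zero-byte
        // read here, or this loop would never terminate.
    }
    return 0;
}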

Related

Fastest way to split a Stream according to a pattern

What would be the most optimal/fastest way to split a Stream into chunks delimited by a byte pattern (e.g. new byte[] { 0, 0 })?
My current, naive and slow, implementation reads the stream byte by byte and decrements a counter each time it encounters the delimiter. If the counter hits zero, it yields a memory chunk.
const int NUMBER_CONSECUTIVE_DELIMITER = 2;
const int DELIMITER = 0;

public IEnumerable<ReadOnlyMemory<byte>> Chunk(Stream stream)
{
    var chunk = new MemoryStream();
    try
    {
        int b; // the byte being read
        int c = NUMBER_CONSECUTIVE_DELIMITER;
        while ((b = stream.ReadByte()) != -1) // read the stream byte by byte; -1 = end of the stream
        {
            chunk.WriteByte((byte)b); // write this byte to the next chunk
            if (b == DELIMITER)
                c--; // if we hit the delimiter (i.e. '0'), decrement the counter
            else
                c = NUMBER_CONSECUTIVE_DELIMITER; // else, reset the counter
            if (c <= 0 || stream.Position == stream.Length) // we hit two subsequent '0's
            {
                var r = chunk.ToArray().AsMemory(); // copy it into a Memory<T>
                chunk.Dispose();
                chunk = new();
                c = NUMBER_CONSECUTIVE_DELIMITER; // reset the counter for the next chunk
                yield return r;
            }
        }
    }
    finally
    {
        chunk.Dispose();
    }
}
Such chunking is difficult to implement efficiently because a stream has to be read in buffers of a fixed size, and a buffer can be too big or too small for the content being interpreted. To solve this problem, the ReadOnlySequence<T> struct was added. More information about this topic can be seen here.
By using System.IO.Pipelines (a NuGet package that must be obtained separately) this problem can be solved as follows:
public static async Task FillPipeAsync(Stream stream, PipeWriter writer, CancellationToken cancellationToken = default)
{
    // The minimum buffer size that is used for the current buffer segment.
    const int bufferSize = 65536;
    while (true)
    {
        // Request 65536 bytes from the PipeWriter.
        Memory<byte> memory = writer.GetMemory(bufferSize);
        // Read the content from the stream.
        int bytesRead = await stream.ReadAsync(memory, cancellationToken).ConfigureAwait(false);
        if (bytesRead == 0) break;
        // Tell the writer how many bytes were read.
        writer.Advance(bytesRead);
        // Flush the data to the PipeWriter.
        FlushResult result = await writer.FlushAsync(cancellationToken).ConfigureAwait(false);
        if (result.IsCompleted) break;
    }
    // This enables our reading process to be notified that no more data is coming.
    await writer.CompleteAsync().ConfigureAwait(false);
}
This will read your stream asynchronously and write buffer segments to the pipe. Next you have to implement the read logic that slices/merges the concatenated buffer segments into chunks:
public static async IAsyncEnumerable<ReadOnlySequence<byte>> ReadPipeAsync(PipeReader reader, ReadOnlyMemory<byte> delimiter,
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    while (true)
    {
        // Read from the PipeReader.
        ReadResult result = await reader.ReadAsync(cancellationToken).ConfigureAwait(false);
        ReadOnlySequence<byte> buffer = result.Buffer;
        while (TryReadChunk(ref buffer, delimiter.Span, out ReadOnlySequence<byte> chunk))
            yield return chunk;
        // Tell the PipeReader how many bytes were consumed.
        // This is essential because the Pipe releases buffer segments that are no longer in use.
        reader.AdvanceTo(buffer.Start, buffer.End);
        // Take care of the completion notification and return the last buffer. UPDATE: Corrected issue 2/.
        if (result.IsCompleted)
        {
            yield return buffer;
            break;
        }
    }
    await reader.CompleteAsync().ConfigureAwait(false);
}
private static bool TryReadChunk(ref ReadOnlySequence<byte> buffer, ReadOnlySpan<byte> delimiter,
    out ReadOnlySequence<byte> chunk)
{
    // Search the buffer for the first byte of the delimiter.
    SequencePosition? position = buffer.PositionOf(delimiter[0]);
    // If no occurrence was found, or the bytes following it in the buffer do not match the delimiter, return false.
    // UPDATE: Corrected issue 3/.
    if (position is null || !buffer.Slice(position.Value, delimiter.Length).FirstSpan.StartsWith(delimiter))
    {
        chunk = default;
        return false;
    }
    // Return the calculated chunk and advance the buffer past the delimiter.
    chunk = buffer.Slice(0, position.Value);
    buffer = buffer.Slice(buffer.GetPosition(delimiter.Length, position.Value));
    return true;
}
For this to work in that form you have to use an IAsyncEnumerable so that the chunks can be streamed into a foreach loop. Merging and slicing is largely handled by the pipe, so a reliable algorithm can be built here with relatively little code, and it performs well.
Usage:
// Create a Pipe that manages the buffer.
Pipe pipe = new Pipe();
ConfiguredTaskAwaitable writing = FillPipeAsync(stream, pipe.Writer).ConfigureAwait(false);

// The delimiter that should be used. This can be any data with length > 0.
ReadOnlyMemory<byte> delimiter = new ReadOnlyMemory<byte>(new byte[] { 0, 0 });

// 'await foreach' and 'await writing' execute asynchronously (in parallel).
await foreach (ReadOnlySequence<byte> chunk in ReadPipeAsync(pipe.Reader, delimiter))
{
    // Use "chunk" to retrieve your chunked content.
}
await writing;
Note that reading and chunking is done asynchronously and independently.
I eventually ended up with the below code, strongly inspired by Philipp's answer above and https://keestalkstech.com/2010/11/seek-position-of-a-string-in-a-file-or-filestream/.
public override IEnumerable<byte[]> Chunk(Stream stream)
{
    var buffer = new byte[bufferSize];
    var size = bufferSize;
    var offset = 0;
    var position = stream.Position;
    var nextChunk = Array.Empty<byte>();

    while (true)
    {
        var bytesRead = stream.Read(buffer, offset, size);
        // when no bytes are read -- the pattern could not be found
        if (bytesRead <= 0)
            break;

        // when fewer than size bytes are read, slice the buffer to prevent reading "previous" bytes
        ReadOnlySpan<byte> ro = buffer;
        if (bytesRead < size)
            ro = ro.Slice(0, offset + bytesRead);

        // check if we can find our search bytes in the buffer
        var i = ro.IndexOf(Delimiter);
        if (i > -1 && // we found something
            i <= bytesRead && // we found it in the area that was actually read (at the end of the buffer the last values are not overwritten; i == bytesRead if the delimiter sits at the very end)
            nextChunk.Length + (i + Delimiter.Length - offset) >= MinChunkSize) // the chunk that will be produced is large enough
        {
            var chunk = buffer[offset..(i + Delimiter.Length)];
            yield return Concat(nextChunk, chunk);

            nextChunk = Array.Empty<byte>();
            offset = 0;
            size = bufferSize;
            position += i + Delimiter.Length;
            stream.Position = position;
            continue;
        }
        else if (stream.Position == stream.Length)
        {
            // we're at the end of the stream
            var chunk = buffer[offset..(bytesRead + offset)]; // return the bytes read
            yield return Concat(nextChunk, chunk);
            break;
        }

        // the stream is not finished. Copy the last Delimiter.Length bytes to the
        // beginning of the buffer and set the offset so the next read fills the buffer after them
        nextChunk = Concat(nextChunk, buffer[offset..buffer.Length]);
        offset = Delimiter.Length;
        size = bufferSize - offset;
        Array.Copy(buffer, buffer.Length - offset, buffer, 0, offset);
        position += bufferSize - offset;
    }
}
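The method above relies on class members that aren't shown (bufferSize, Delimiter, MinChunkSize) and on a Concat helper. A minimal sketch of what Concat presumably looks like (hypothetical, not part of the original post):

// Hypothetical helper: joins two byte arrays into a new one.
private static byte[] Concat(byte[] first, byte[] second)
{
    var result = new byte[first.Length + second.Length];
    Array.Copy(first, 0, result, 0, first.Length);
    Array.Copy(second, 0, result, first.Length, second.Length);
    return result;
}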

How to rewrite file as byte array fast C#

Hello, I am trying to rewrite a file by replacing bytes, but it takes too much time for large files. For example, on a 700 MB file this code ran for about 6 minutes. Please help me make it run in under 1 minute.
static private void _12_56(string fileName)
{
    byte[] byteArray = File.ReadAllBytes(fileName);
    for (int i = 0; i < byteArray.Count() - 6; i += 6)
    {
        Swap(ref byteArray[i], ref byteArray[i + 4]);
        Swap(ref byteArray[i + 1], ref byteArray[i + 5]);
    }
    File.WriteAllBytes(fileName, byteArray);
}
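The Swap helper isn't shown in the question; presumably it is something like this minimal sketch:

// Presumed helper: exchanges the values of two bytes in place.
static void Swap(ref byte a, ref byte b)
{
    byte tmp = a;
    a = b;
    b = tmp;
}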
Read the file in chunks of bytes whose size is divisible by 6.
Replace the necessary bytes in each chunk and write each chunk to another file before reading the next chunk.
You can also try to perform the read of the next chunk in parallel with writing the previous chunk:
using (var source = new FileStream(@"c:\temp\test.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    using (var target = new FileStream(@"c:\temp\test.txt", FileMode.Open, FileAccess.Write, FileShare.ReadWrite))
    {
        await RewriteFile(source, target);
    }
}
private async Task RewriteFile(FileStream source, FileStream target)
{
    // We're reading bufferSize bytes from the source stream into one half of the
    // buffer while the writeTask is writing the other half to the target stream.

    // define how many chunks of 6 bytes you want to read per read operation
    int chunksPerBuffer = 1;
    int bufferSize = 6 * chunksPerBuffer;

    // declare a byte array that contains both the bytes that are being read
    // and the bytes that are being written in parallel.
    byte[] buffer = new byte[bufferSize * 2];
    // curoff is the start position of the bytes we're working with in the buffer
    int curoff = 0;

    Task writeTask = Task.CompletedTask;
    int len;

    // Read the desired number of bytes from the file into the buffer.
    // The first read operation places the bytes in the first half of the
    // buffer; the next read operation uses the second half, and so on.
    while ((len = await source.ReadAsync(buffer, curoff, bufferSize).ConfigureAwait(false)) != 0)
    {
        // Swap the bytes in the current half of the buffer: every 1st byte is
        // swapped with the 5th, and every 2nd byte with the 6th.
        // Only process the bytes actually read; the last read may return fewer than bufferSize.
        for (int i = curoff; i + 5 < curoff + len; i += 6)
        {
            Swap(ref buffer[i], ref buffer[i + 4]);
            Swap(ref buffer[i + 1], ref buffer[i + 5]);
        }
        // wait until the previous write task has completed.
        await writeTask.ConfigureAwait(false);
        // Start writing the bytes that have just been processed.
        // Do not await the task here, so that the next bytes
        // can be read in parallel.
        writeTask = target.WriteAsync(buffer, curoff, len);
        // Move the pointer to the beginning of the other half of the buffer.
        curoff ^= bufferSize;
    }
    // Make sure that the last write also finishes before closing
    // the target stream.
    await writeTask.ConfigureAwait(false);
}
The code above should read a file, swap bytes and rewrite to the same file in parallel.
As the other answer says, you have to read the file in chunks.
Since you are rewriting the same file, it's easiest to use the same stream for reading and writing.
using (var file = File.Open(path, FileMode.Open, FileAccess.ReadWrite))
{
    // Read buffer. Size must be divisible by 6
    var buffer = new byte[6 * 1000];
    // Keep track of how much we've read in each iteration
    var bytesRead = 0;

    // Fill the buffer and put the number of bytes read into 'bytesRead'.
    // Stop looping if we read fewer than 6 bytes.
    // EOF is signalled by Read returning 0.
    while ((bytesRead = file.Read(buffer, 0, buffer.Length)) >= 6)
    {
        // Swap the bytes in the current buffer (complete 6-byte groups only)
        for (int i = 0; i + 5 < bytesRead; i += 6)
        {
            Swap(ref buffer[i], ref buffer[i + 4]);
            Swap(ref buffer[i + 1], ref buffer[i + 5]);
        }
        // Step back in the file, to where we filled the buffer from
        file.Position -= bytesRead;
        // Overwrite with the swapped bytes
        file.Write(buffer, 0, bytesRead);
    }
}

Reading bytes from the serial port

I'm building an application where I need to read 15 bytes from a serial device (a Scalextric C7042 powerbase). The bytes need to come in the right order, and the last one is a CRC.
Using this code in a BackgroundWorker, I get the bytes:
byte[] data = new byte[_APB.ReadBufferSize];
_APB.Read(data, 0, data.Length);
The problem is that I don't get the first bytes first. It's like some of the bytes are kept in the buffer, so the next time the DataReceived event fires, I get the last x bytes from the previous message and only the first 15-x bytes of the new one. I write the bytes to a text box, and it's all over the place; some bytes are missing somewhere.
I have tried to clear the buffer after each read, but no luck.
_APB = new SerialPort(comboBoxCommAPB.SelectedItem.ToString());
_APB.BaudRate = 19200;
_APB.DataReceived += new SerialDataReceivedEventHandler(DataReceivedHandlerDataFromAPB);
_APB.Open();
_APB.DiscardInBuffer();
Hope anyone can help me here.
Use this method to read a fixed amount of bytes from the serial port; for your case, toRead = 15:
public byte[] ReadFromSerialPort(SerialPort serialPort, int toRead)
{
    byte[] buffer = new byte[toRead];
    int offset = 0;
    int read;
    while (toRead > 0 && (read = serialPort.Read(buffer, offset, toRead)) > 0)
    {
        offset += read;
        toRead -= read;
    }
    if (toRead > 0) throw new EndOfStreamException();
    return buffer;
}
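For example, it could be called from the DataReceived handler; a sketch assuming the _APB field and handler name from the question:

private void DataReceivedHandlerDataFromAPB(object sender, SerialDataReceivedEventArgs e)
{
    // Read exactly one 15-byte packet (the last byte is the CRC).
    byte[] packet = ReadFromSerialPort(_APB, 15);
    // TODO: verify the CRC and process the packet here.
}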

C# reading bytes over SerialPort too slow

I am using the standard .NET SerialPort class for reading bytes (without events).
My code looks like this:
receivedDataList = new List<byte>();
_serialPort.ReadTimeout = timeout;

// First byte contains the length.
int bytesExpected = _serialPort.ReadByte();
receivedDataList.Add((byte)bytesExpected);

// Initialize the buffer with the expected length.
byte[] buffer = new byte[bytesExpected];
int offset = 0;
int bytesRead;

// Read until the expected number of bytes has arrived.
while (bytesExpected > 0 && (bytesRead = _serialPort.Read(buffer, offset, bytesExpected)) > 0)
{
    offset += bytesRead;
    bytesExpected -= bytesRead;
}
receivedDataList.AddRange(buffer);
I am reading at 9600 8N1. The read procedure for 10 bytes takes 34 ms. If I read on Linux, the read procedure takes at most 20 ms.
Is there a way to read faster? I don't want to read via events...
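For context on those timings: at 9600 baud with 8N1 framing, each byte costs 10 bits on the wire (1 start + 8 data + 1 stop), so 10 bytes take 10 × 10 / 9600 ≈ 10.4 ms of pure transmission time; anything above that is driver and scheduling overhead.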

Copy all but the last 16 bytes of a stream? Early detection of end-of-stream?

This is C# related. We have a case where we need to copy the entire source stream into a destination stream except for the last 16 bytes.
EDIT: The streams can range up to 40 GB, so we can't do a static byte[] allocation (e.g. .ToArray()).
Looking at the MSDN documentation, it seems that we can reliably determine the end of stream only when the return value is 0. Return values between 0 and the requested size can mean bytes are "not currently available" (what does that really mean?).
Currently it copies every single byte as follows. inStream and outStream are generic: they can be memory, disk or network streams (and a few more besides).
public static void StreamCopy(Stream inStream, Stream outStream)
{
    var buffer = new byte[8 * 1024];
    var last16Bytes = new byte[16];
    int bytesRead;
    while ((bytesRead = inStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outStream.Write(buffer, 0, bytesRead);
    }
    // Issues:
    // 1. We already wrote the last 16 bytes into
    //    outStream (possibly over the n/w)
    // 2. last16Bytes = ? (inStream may not necessarily support rewinding)
}
What is a reliable way to ensure all but the last 16 bytes are copied? I can think of using Position and Length on the inStream, but there is a gotcha on MSDN that says:
If a class derived from Stream does not support seeking, calls to Length, SetLength, Position, and Seek throw a NotSupportedException.
1. Read between 1 and n bytes from the input stream. [1]
2. Append the bytes to a circular buffer. [2]
3. Write the first max(0, b - 16) bytes from the circular buffer to the output stream, where b is the number of bytes in the circular buffer.
4. Remove the bytes that you have just written from the circular buffer.
5. Go to step 1.
[1] This is what the Read method does: if you call int n = Read(buffer, 0, 500); it will read between 1 and 500 bytes into buffer and return the number of bytes read. If Read returns 0, you have reached the end of the stream.
[2] For maximum performance, you can read the bytes directly from the input stream into the circular buffer. This is a bit tricky, because you have to deal with the wraparound within the array underlying the buffer.
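In code, the hold-back idea might look like the following; a minimal sketch that uses a plain buffer and slides the retained 16 bytes back to the front instead of a true circular buffer (all names are illustrative):

public static void CopyAllButLast16(Stream inStream, Stream outStream)
{
    const int holdBack = 16;
    var buffer = new byte[8 * 1024 + holdBack];
    int buffered = 0; // number of valid bytes currently held in buffer
    int bytesRead;
    while ((bytesRead = inStream.Read(buffer, buffered, buffer.Length - buffered)) > 0)
    {
        buffered += bytesRead;
        int writable = buffered - holdBack; // always keep 16 bytes back
        if (writable > 0)
        {
            outStream.Write(buffer, 0, writable);
            // Slide the retained tail to the front of the buffer.
            Array.Copy(buffer, writable, buffer, 0, holdBack);
            buffered = holdBack;
        }
    }
    // The last 16 bytes (or fewer, for very short streams) remain in buffer[0..buffered].
}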
The following solution is fast and tested. Hope it's useful. It uses the double-buffering idea you already had in mind. EDIT: simplified the loop by removing the conditional that separated the first iteration from the rest.
public static void StreamCopy(Stream inStream, Stream outStream) {
    // Define the size of the chunk to copy during each iteration (1 KiB)
    const int blockSize = 1024;
    const int bytesToOmit = 16;
    const int buffSize = blockSize + bytesToOmit;

    // Generate working buffers
    byte[] buffer1 = new byte[buffSize];
    byte[] buffer2 = new byte[buffSize];

    // Initialize first iteration
    byte[] curBuffer = buffer1;
    byte[] prevBuffer = null;
    int bytesRead;

    // Attempt to fully fill the buffer
    // (assumes Read returns buffSize bytes except at the end of the stream)
    bytesRead = inStream.Read(curBuffer, 0, buffSize);
    if( bytesRead == buffSize ) {
        // We successfully retrieved a whole buffer; we will output
        // only [blockSize] bytes, to avoid writing the last
        // bytes in the buffer in case they happen to be the final 16
        outStream.Write(curBuffer, 0, blockSize);
    } else {
        // We couldn't retrieve the whole buffer
        int bytesToWrite = bytesRead - bytesToOmit;
        if( bytesToWrite > 0 ) {
            outStream.Write(curBuffer, 0, bytesToWrite);
        }
        // There's no more data to process
        return;
    }

    curBuffer = buffer2;
    prevBuffer = buffer1;

    while( true ) {
        // Attempt again to fully fill the buffer
        bytesRead = inStream.Read(curBuffer, 0, buffSize);
        if( bytesRead == buffSize ) {
            // We retrieved a whole buffer: first output the last 16
            // bytes of the previous buffer, then just [blockSize]
            // bytes from the current buffer
            outStream.Write(prevBuffer, blockSize, bytesToOmit);
            outStream.Write(curBuffer, 0, blockSize);
        } else {
            // We could not retrieve a complete buffer
            if( bytesRead <= bytesToOmit ) {
                // The bytes to output come solely from the previous buffer
                outStream.Write(prevBuffer, blockSize, bytesRead);
            } else {
                // The bytes to output come from the previous buffer and
                // the current buffer
                outStream.Write(prevBuffer, blockSize, bytesToOmit);
                outStream.Write(curBuffer, 0, bytesRead - bytesToOmit);
            }
            break;
        }
        // swap buffers for next iteration
        byte[] swap = prevBuffer;
        prevBuffer = curBuffer;
        curBuffer = swap;
    }
}
static void Assert(Stream inStream, Stream outStream) {
    // Routine that tests that the copy worked as expected
    const int bytesToOmit = 16; // must match the constant used in StreamCopy
    inStream.Seek(0, SeekOrigin.Begin);
    outStream.Seek(0, SeekOrigin.Begin);
    Debug.Assert(outStream.Length == Math.Max(inStream.Length - bytesToOmit, 0));
    for( int i = 0; i < outStream.Length; i++ ) {
        int byte1 = inStream.ReadByte();
        int byte2 = outStream.ReadByte();
        Debug.Assert(byte1 == byte2);
    }
}
A much easier solution to code, yet slower since it works at the byte level, would be to use an intermediate queue between the input stream and the output stream. The process first reads and enqueues 16 bytes from the input stream. Then it iterates over the remaining input bytes: read a single byte from the input stream, enqueue it, and dequeue a byte; the dequeued byte is written to the output stream, until all bytes from the input stream are processed. The unwanted 16 bytes are left lingering in the intermediate queue.
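A minimal sketch of that queue-based variant (illustrative only):

public static void CopyAllButLast16Slow(Stream inStream, Stream outStream)
{
    var queue = new Queue<byte>(17);
    int b;
    while ((b = inStream.ReadByte()) != -1)
    {
        queue.Enqueue((byte)b);
        // Once more than 16 bytes are queued, the oldest one is safe to emit.
        if (queue.Count > 16)
            outStream.WriteByte(queue.Dequeue());
    }
    // The final 16 bytes (or fewer) remain in the queue.
}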
Hope this helps!
=)
Using a circular buffer sounds great, but there is no circular buffer class in .NET, which means additional code anyway. I ended up with the following algorithm, a sort of map and copy; I think it's simple. The variable names are longer than usual for the sake of being self-descriptive here.
The data flows through the buffers as
[outStream] <== [tailBuf] <== [mainBuf] <== [inStream]
public byte[] CopyStreamExtractLastBytes(Stream inStream, Stream outStream,
    int extractByteCount)
{
    //var mainBuf = new byte[1024*4]; // 4K buffer ok for network too
    var mainBuf = new byte[4651]; // nearby prime for testing
    int mainBufValidCount;
    var tailBuf = new byte[extractByteCount];
    int tailBufValidCount = 0;

    while ((mainBufValidCount = inStream.Read(mainBuf, 0, mainBuf.Length)) > 0)
    {
        // Map: how much of what (passthru/tail) lives where (mainBuf/tailBuf);
        // everything beyond the tail is passthru
        int totalPassthruCount = Math.Max(0, tailBufValidCount +
            mainBufValidCount - extractByteCount);
        int tailBufPassthruCount = Math.Min(tailBufValidCount, totalPassthruCount);
        int tailBufTailCount = tailBufValidCount - tailBufPassthruCount;
        int mainBufPassthruCount = totalPassthruCount - tailBufPassthruCount;
        int mainBufResidualCount = mainBufValidCount - mainBufPassthruCount;

        // Copy: passthru must be flushed in FIFO order (tailBuf then mainBuf)
        outStream.Write(tailBuf, 0, tailBufPassthruCount);
        outStream.Write(mainBuf, 0, mainBufPassthruCount);

        // Copy: now reassemble/compact the tail into tailBuf
        var tempResidualBuf = new byte[extractByteCount];
        Array.Copy(tailBuf, tailBufPassthruCount, tempResidualBuf, 0,
            tailBufTailCount);
        Array.Copy(mainBuf, mainBufPassthruCount, tempResidualBuf,
            tailBufTailCount, mainBufResidualCount);
        tailBufValidCount = tailBufTailCount + mainBufResidualCount;
        tailBuf = tempResidualBuf;
    }
    return tailBuf;
}
