Merge split bytes from a socket for further use? - c#

I have a TCP socket application where I have to read several types of replies. The maximum buffer size is 8192 bytes, so some replies are split across multiple packets.
Currently I receive a list of members as reply 44. My first idea for dealing with split packets was to define a stream outside the receive loop to store the incoming data until it is complete, together with a bool flag and a variable tracking the current size.
When reply 44 arrives, the code checks whether extraList is true or false; if false, it is the initial packet of an incoming member list.
If the packet size read from the first 4 bytes is bigger than bytes.Length (which is 8192), the code sets extraList to true and writes the initial data into a buffer allocated with the total packet size.
Once extraList has been triggered, subsequent reads keep appending to that buffer until the data is complete, at which point extraList is set back to false and the MemberList function is called with the complete list.
I would like some advice and suggestions on how to improve this code.
int storedCurSize = 0;
MemoryStream stored = null;
bool extraList = false;
while (roomSocket.Connected)
{
    byte[] bytes = new byte[roomSocket.ReceiveBufferSize];
    roomSocket.Receive(bytes);
    MemoryStream bufferReceived = new MemoryStream(bytes, 0, bytes.Length);
    using (var reader = new BinaryReader(bufferReceived))
    {
        int packetSize = (int)reader.ReadInt32() + 9;
        int reply = (int)reader.ReadByte();
        if (reply == 44 || extraList)
        {
            if (!extraList && packetSize <= bytes.Length)
            {
                MemberList(bytes);
            }
            else
            {
                if (!extraList)
                {
                    stored = new MemoryStream(new byte[packetSize], 0, packetSize);
                    stored.Write(bytes, 0, bytes.Length);
                    storedCurSize = bytes.Length;
                    extraList = true;
                }
                else
                {
                    if (storedCurSize < stored.Length)
                    {
                        int storedLeftSize = (int)stored.Length - storedCurSize;
                        stored.Write(bytes, 0, (storedLeftSize < bytes.Length) ? storedLeftSize : bytes.Length);
                        storedCurSize += (storedLeftSize < bytes.Length) ? storedLeftSize : bytes.Length;
                        if (storedCurSize >= stored.Length)
                        {
                            extraList = false;
                            MemberList(stored.ToArray());
                            stored.Close();
                        }
                    }
                }
            }
        }
    }
}

What stands out when briefly reading the code is the magic numbers (9, 44) and the very deep nesting.
Replace the numbers with well-named constants and extract parts of the code into methods.
If those parts are tightly coupled through local variables, the whole method is probably worth moving into a worker class with a single responsibility: reading the message. The local variables then become class fields, and the methods won't be so inflexible to refactor.
Also, MemberList(...) is a poor name for a method in my opinion. Make it a verb that describes what the method is doing.
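A rough sketch of what that shape could look like; the class, constant, and method names here are invented for illustration, and the 4-byte-size/1-byte-reply header layout is assumed from the question's code:
using System;
using System.IO;

// Sketch only: names are invented; adjust to the real protocol.
class MemberListReader
{
    private const int MemberListReply = 44; // replaces the magic number 44
    private const int HeaderOverhead = 9;   // replaces the magic number 9

    private readonly MemoryStream _pending = new MemoryStream();
    private int _expectedSize;

    // Feed every buffer received from the socket into this method.
    public void Append(byte[] bytes, int count)
    {
        if (_expectedSize == 0)
        {
            int reply = bytes[4];
            if (reply != MemberListReply)
                return; // not a member-list packet; handle elsewhere

            // First fragment: the declared packet size sits in the first 4 bytes.
            _expectedSize = BitConverter.ToInt32(bytes, 0) + HeaderOverhead;
        }

        _pending.Write(bytes, 0, count);

        if (_pending.Length >= _expectedSize)
        {
            PublishMemberList(_pending.ToArray());
            _pending.SetLength(0);
            _expectedSize = 0;
        }
    }

    private void PublishMemberList(byte[] packet)
    {
        // Call whatever MemberList(...) does today, renamed as a verb.
    }
}
The receive loop then shrinks to something like: int received = roomSocket.Receive(bytes); reader.Append(bytes, received);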

To merge bytes that ended up in separate buffers, you can use Buffer.BlockCopy().
byte[] buf1; // first received chunk
byte[] buf2; // second received chunk
byte[] concatenated = new byte[buf1.Length + buf2.Length];
Buffer.BlockCopy(buf1, 0, concatenated, 0, buf1.Length);
Buffer.BlockCopy(buf2, 0, concatenated, buf1.Length, buf2.Length);

Related

Socket Programming: How can I read a specified number of bytes from buffer?

TCP is a stream-based protocol. To convert that stream into my messages, I send the size of each message along with the message itself. At the server side, I first read the first two bytes of the message, which contain the size. Then I create a byte array whose size equals the size just read, and read the bytes into that array. But for some reason, more bytes are being read than specified. How can I read exactly the number of bytes I specify?
Here is my code:
while (true)
{
    data = null;
    length = null;
    size = new byte[2];
    handler.Receive(size);
    length += Encoding.ASCII.GetString(size, 0, 2);
    System.Console.WriteLine("Size: " + Int32.Parse(length));
    bufferSize = Int32.Parse(length) + 2;
    bytes = new byte[bufferSize];
    handler.Receive(bytes);
    data += Encoding.ASCII.GetString(bytes, 0, bufferSize);
    System.Console.WriteLine("Data: " + data);
}
This is my server, running on a Windows PC and written in C#. My client runs on an Android phone and is written in Java.
It's unclear why you're adding two to the size that's been transmitted - you've already accounted for the two additional bytes for storing the length during your previous receive. So I'd get rid of the +2.
You also need to respect the fact already stated in your question: TCP is a sequence of bytes, not messages. As such, you're never guaranteed whether a call to Receive is going to retrieve an entire "message" or just part of one (or, possibly, parts of multiple messages). That means you need to make sure that you respect the return value from Receive.
We can probably re-write your code as:
while (true)
{
    data = null;
    length = null;
    size = ReceiveExactly(handler, 2);
    length = Encoding.ASCII.GetString(size, 0, 2); // Why +=?
    bufferSize = Int32.Parse(length); // Why + 2?
    System.Console.WriteLine("Size: " + bufferSize);
    bytes = ReceiveExactly(handler, bufferSize);
    data += Encoding.ASCII.GetString(bytes, 0, bufferSize);
    System.Console.WriteLine("Data: " + data);
}
Where ReceiveExactly is defined something like this:
private byte[] ReceiveExactly(Socket handler, int length)
{
    var buffer = new byte[length];
    var receivedLength = 0;
    while (receivedLength < length)
    {
        var nextLength = handler.Receive(buffer, receivedLength, length - receivedLength, SocketFlags.None);
        if (nextLength == 0)
        {
            // Throw an exception? Something else?
            // The socket's never going to receive more data
        }
        receivedLength += nextLength;
    }
    return buffer;
}
To receive a specific number of bytes, use the method
Socket.Receive(Byte[], Int32, Int32, SocketFlags)
rather than Socket.Receive(Byte[]); see the spec for details.
I suspect you want something like
int len = Socket.Receive(bytes, 0, bufferSize, SocketFlags.None);
data += Encoding.ASCII.GetString(bytes, 0, len);
System.Console.WriteLine("Data: " + data);

C# NetworkStream data loss

I am currently working on a networking project where I worked out a binary protocol. My packets look like this:
[1 byte TYPE][2 bytes INDEX][2 bytes LENGTH][LENGTH bytes DATA]
And here's the code where I am receiving the packets:
NetworkStream clientStream = Client.GetStream();
while (Client.Connected)
{
    Thread.Sleep(10);
    try
    {
        if (clientStream.DataAvailable)
        {
            byte[] infobuffer = new byte[5];
            int inforead = clientStream.Read(infobuffer, 0, 5);
            if (inforead < 5) { continue; }
            byte[] rawclient = new byte[2];
            Array.Copy(infobuffer, 1, rawclient, 0, 2);
            PacketType type = (PacketType)Convert.ToSByte(infobuffer[0]);
            int clientIndex = BitConverter.ToInt16(rawclient, 0);
            int readLength = BitConverter.ToInt16(infobuffer, 3);
            byte[] readbuffer = new byte[readLength];
            int count_read = clientStream.Read(readbuffer, 0, readLength);
            byte[] read_data = new byte[count_read];
            Array.Copy(readbuffer, read_data, count_read);
            HandleData(read_data, type, clientIndex);
        }
    }
    catch (Exception ex)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("[E] " + ex.GetType().ToString());
        Console.ResetColor();
        break;
    }
}
Well, and everything works fine... as long as I run it on 127.0.0.1. As soon as I try testing it over a long distance, packets somehow get lost, and I get an OverflowException on the line where I convert the first byte to PacketType. Also, if I try to convert the other values to Int16, I get very strange values.
I assume the stream somehow loses some bytes on its way to the server, but can that be? Or is it just a little mistake of mine somewhere in the code?
Edit:
I have now edited the code so that it keeps reading until it has its 5 header bytes. But I still get the same exception over a long distance...
NetworkStream clientStream = Client.GetStream();
while (Client.Connected)
{
    Thread.Sleep(10);
    try
    {
        if (clientStream.DataAvailable)
        {
            int totalread = 0;
            byte[] infobuffer = new byte[5];
            while (totalread < 5)
            {
                int inforead = clientStream.Read(infobuffer, totalread, 5 - totalread);
                if (inforead == 0)
                { break; }
                totalread += inforead;
            }
            byte[] rawclient = new byte[2];
            Array.Copy(infobuffer, 1, rawclient, 0, 2);
            PacketType type = (PacketType)Convert.ToSByte(infobuffer[0]);
            int clientIndex = BitConverter.ToInt16(rawclient, 0);
            int readLength = BitConverter.ToInt16(infobuffer, 3);
            byte[] readbuffer = new byte[readLength];
            int count_read = clientStream.Read(readbuffer, 0, readLength);
            byte[] read_data = new byte[count_read];
            Array.Copy(readbuffer, read_data, count_read);
            HandleData(read_data, type, clientIndex);
        }
    }
    catch (Exception ex)
    {
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("[E] " + ex.GetType().ToString());
        Console.ResetColor();
        break;
    }
}
PacketType is an enum:
public enum PacketType
{
    AddressSocks5 = 0,
    Status = 1,
    Data = 2,
    Disconnect = 3,
    AddressSocks4 = 4
}
So many things you're doing wrong here... so many bugs... where to even start...
First: network polling? Really? That's a naïve way of doing network I/O in this day and age... but I won't harp on that.
Second, with this type of protocol it's pretty easy to get "out of sync", and once you do, you have no way to get back in sync. Regaining sync is typically accomplished with some kind of "framing protocol" that provides a unique sequence of bytes you can use to mark the start and end of a frame, so that if you ever find yourself out of sync you can read data until you get back in sync. Yes, you will lose data, but you've already lost it if you're out of sync.
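For illustration only, assuming a hypothetical two-byte frame marker 0xAA 0x55 written by the sender before every packet (your protocol has no such marker today), a resynchronization loop might look roughly like this:
using System.IO;

// Sketch: scan forward until the (assumed) frame marker is found.
static void SkipToNextFrame(Stream stream)
{
    int previous = -1;
    while (true)
    {
        int current = stream.ReadByte();
        if (current == -1)
            throw new EndOfStreamException("Stream ended while resynchronizing.");

        // Once the marker pair has been seen, the next byte is the frame start.
        if (previous == 0xAA && current == 0x55)
            return;

        previous = current;
    }
}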
Third, you're not really doing anything huge here, so I shamelessly stole the "ReadWholeArray" code from here; it's not the most efficient, but it works, and there is other code there that might help:
http://www.yoda.arachsys.com/csharp/readbinary.html
Note: you don't mention how you are serializing the length, type and index values on the other side. So using the BitConverter may be the wrong thing depending on how that was done.
if (clientStream.DataAvailable)
{
    byte[] data = new byte[5];
    // if it can't read all 5 bytes, it throws an exception
    ReadWholeArray(clientStream, data);
    PacketType type = (PacketType)Convert.ToSByte(data[0]);
    int clientIndex = BitConverter.ToInt16(data, 1);
    int readLength = BitConverter.ToInt16(data, 3);
    byte[] rawdata = new byte[readLength];
    ReadWholeArray(clientStream, rawdata);
    HandleData(rawdata, type, clientIndex);
}
/// <summary>
/// Reads data into a complete array, throwing an EndOfStreamException
/// if the stream runs out of data first, or if an IOException
/// naturally occurs.
/// </summary>
/// <param name="stream">The stream to read data from</param>
/// <param name="data">The array to read bytes into. The array
/// will be completely filled from the stream, so an appropriate
/// size must be given.</param>
public static void ReadWholeArray(Stream stream, byte[] data)
{
    int offset = 0;
    int remaining = data.Length;
    while (remaining > 0)
    {
        int read = stream.Read(data, offset, remaining);
        if (read <= 0)
            throw new EndOfStreamException(
                String.Format("End of stream reached with {0} bytes left to read", remaining));
        remaining -= read;
        offset += read;
    }
}
I think the problem is in these lines:
int inforead = clientStream.Read(infobuffer, 0, 5);
if (inforead < 5) { continue; }
What happens to the data you have already read if fewer than 5 bytes arrive? You should save the bytes you have read so far and append the next ones, so that you eventually have the complete header.
You read 5 - totalRead bytes.
Suppose totalRead equals 5 or more: when that happens you read nothing, and in the cases of 1-4 you read that many arbitrary bytes, not 5. You also then discard any result of fewer than 5.
You also copy at offset 1, or at another offset, without really knowing the offset.
BitConverter.ToInt16(infobuffer, 3);
is an example of this: what is at offset 2?
So if it's not that (a decoding error), and not the structure of your data, then unless you change the structure of your loop it's you who is losing the bytes, not the NetworkStream.
Calculate totalRead in increments of justRead when you receive, so you can handle any size of data as well as receive it at the correct offset.
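To make that concrete, here is a sketch only, reusing the question's variable names: the edited code still reads the body with a single clientStream.Read call, so the same increment-by-justRead loop belongs there too, roughly like this:
// Sketch: read the [readLength]-byte body the same way as the header,
// accumulating partial reads at the offset of whatever has already arrived.
byte[] readbuffer = new byte[readLength];
int totalRead = 0;
while (totalRead < readLength)
{
    int justRead = clientStream.Read(readbuffer, totalRead, readLength - totalRead);
    if (justRead == 0)
        break; // connection closed before the full body arrived
    totalRead += justRead;
}
HandleData(readbuffer, type, clientIndex);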

Copy all but the last 16 bytes of a stream? Early detection of end-of-stream?

This is C# related. We have a case where we need to copy the entire source stream into a destination stream except for the last 16 bytes.
EDIT: The streams can range up to 40 GB, so I can't do any static byte[] allocation (e.g. .ToArray()).
Looking at the MSDN documentation, it seems that we can reliably determine the end of stream only when the return value is 0. Return values between 0 and the requested size can imply bytes are "not currently available" (what does that really mean?)
Currently it copies every single byte as follows. inStream and outStream are generic - can be memory, disk or network streams (actually some more too).
public static void StreamCopy(Stream inStream, Stream outStream)
{
    var buffer = new byte[8 * 1024];
    var last16Bytes = new byte[16];
    int bytesRead;
    while ((bytesRead = inStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outStream.Write(buffer, 0, bytesRead);
    }
    // Issues:
    // 1. We already wrote the last 16 bytes into
    //    outStream (possibly over the n/w)
    // 2. last16Bytes = ? (inStream may not necessarily support rewinding)
}
What is a reliable way to ensure all but the last 16 are copied? I can think of using Position and Length on the inStream but there is a gotcha on MSDN that says
If a class derived from Stream does not support seeking, calls to Length, SetLength, Position, and Seek throw a NotSupportedException.
1. Read between 1 and n bytes from the input stream. [1]
2. Append the bytes to a circular buffer. [2]
3. Write the first max(0, b - 16) bytes from the circular buffer to the output stream, where b is the number of bytes in the circular buffer.
4. Remove the bytes that you have just written from the circular buffer.
5. Go to step 1.
[1] This is what the Read method does: if you call int n = Read(buffer, 0, 500); it will read between 1 and 500 bytes into buffer and return the number of bytes read. If Read returns 0, you have reached the end of the stream.
[2] For maximum performance, you can read the bytes directly from the input stream into the circular buffer. This is a bit tricky, because you have to deal with the wraparound within the array underlying the buffer.
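For illustration, a minimal sketch of those steps: it codes the 16-byte circular buffer inline and flushes one byte at a time for clarity rather than performance, so the optimization in [2] is deliberately not applied.
using System.IO;

// Sketch: copy everything except the last 16 bytes using a small circular buffer.
static void CopyAllButLast16(Stream inStream, Stream outStream)
{
    const int holdBack = 16;
    byte[] ring = new byte[holdBack]; // circular buffer holding the delayed bytes
    int ringCount = 0;                // how many delayed bytes are currently held
    int ringStart = 0;                // index of the oldest delayed byte
    byte[] chunk = new byte[8 * 1024];

    int bytesRead;
    while ((bytesRead = inStream.Read(chunk, 0, chunk.Length)) > 0)
    {
        for (int i = 0; i < bytesRead; i++)
        {
            if (ringCount == holdBack)
            {
                // The ring is full, so its oldest byte has 16 newer bytes
                // behind it and can safely be flushed to the output.
                outStream.WriteByte(ring[ringStart]);
                ring[ringStart] = chunk[i];       // the freed slot becomes the newest byte
                ringStart = (ringStart + 1) % holdBack;
            }
            else
            {
                ring[(ringStart + ringCount) % holdBack] = chunk[i];
                ringCount++;
            }
        }
    }
    // On exit the ring still holds the final 16 (or fewer) bytes, unwritten.
}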
The following solution is fast and tested. Hope it's useful. It uses the double buffering idea you already had in mind. EDIT: simplified loop removing the conditional that separated the first iteration from the rest.
public static void StreamCopy(Stream inStream, Stream outStream) {
    // Define the size of the chunk to copy during each iteration (1 KiB)
    const int blockSize = 1024;
    const int bytesToOmit = 16;
    const int buffSize = blockSize + bytesToOmit;
    // Generate working buffers
    byte[] buffer1 = new byte[buffSize];
    byte[] buffer2 = new byte[buffSize];
    // Initialize first iteration
    byte[] curBuffer = buffer1;
    byte[] prevBuffer = null;
    int bytesRead;
    // Attempt to fully fill the buffer
    bytesRead = inStream.Read(curBuffer, 0, buffSize);
    if( bytesRead == buffSize ) {
        // We successfully retrieved a whole buffer, we will output
        // only [blockSize] bytes, to avoid writing to the last
        // bytes in the buffer in case the remaining 16 bytes happen to
        // be the last ones
        outStream.Write(curBuffer, 0, blockSize);
    } else {
        // We couldn't retrieve the whole buffer
        int bytesToWrite = bytesRead - bytesToOmit;
        if( bytesToWrite > 0 ) {
            outStream.Write(curBuffer, 0, bytesToWrite);
        }
        // There's no more data to process
        return;
    }
    curBuffer = buffer2;
    prevBuffer = buffer1;
    while( true ) {
        // Attempt again to fully fill the buffer
        bytesRead = inStream.Read(curBuffer, 0, buffSize);
        if( bytesRead == buffSize ) {
            // We retrieved the whole buffer, output first the last 16
            // bytes of the previous buffer, and output just [blockSize]
            // bytes from the current buffer
            outStream.Write(prevBuffer, blockSize, bytesToOmit);
            outStream.Write(curBuffer, 0, blockSize);
        } else {
            // We could not retrieve a complete buffer
            if( bytesRead <= bytesToOmit ) {
                // The bytes to output come solely from the previous buffer
                outStream.Write(prevBuffer, blockSize, bytesRead);
            } else {
                // The bytes to output come from the previous buffer and
                // the current buffer
                outStream.Write(prevBuffer, blockSize, bytesToOmit);
                outStream.Write(curBuffer, 0, bytesRead - bytesToOmit);
            }
            break;
        }
        // swap buffers for next iteration
        byte[] swap = prevBuffer;
        prevBuffer = curBuffer;
        curBuffer = swap;
    }
}
static void Assert(Stream inStream, Stream outStream) {
    // Routine that tests the copy worked as expected
    const int bytesToOmit = 16; // same constant as in StreamCopy
    inStream.Seek(0, SeekOrigin.Begin);
    outStream.Seek(0, SeekOrigin.Begin);
    Debug.Assert(outStream.Length == Math.Max(inStream.Length - bytesToOmit, 0));
    for( int i = 0; i < outStream.Length; i++ ) {
        int byte1 = inStream.ReadByte();
        int byte2 = outStream.ReadByte();
        Debug.Assert(byte1 == byte2);
    }
}
A much easier solution to code, yet slower since it works at the byte level, would be to use an intermediate queue between the input stream and the output stream. The process first reads and enqueues 16 bytes from the input stream. It then iterates over the remaining input bytes: read a single byte from the input stream, enqueue it, and then dequeue a byte, writing the dequeued byte to the output stream, until all bytes from the input stream are processed. The unwanted 16 bytes are left behind in the intermediate queue.
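A sketch of that queue-based variant (byte-at-a-time, so it is slow, but easy to verify):
using System.Collections.Generic;
using System.IO;

// Sketch: delay every byte by 16 positions using a Queue<byte>; the last 16
// bytes never reach the output stream.
static void CopyAllButLast16Queued(Stream inStream, Stream outStream)
{
    const int holdBack = 16;
    var queue = new Queue<byte>();

    int next;
    while ((next = inStream.ReadByte()) != -1)
    {
        queue.Enqueue((byte)next);
        // Only bytes that are at least 16 positions behind the newest one
        // can safely be written out.
        if (queue.Count > holdBack)
            outStream.WriteByte(queue.Dequeue());
    }
    // The unwanted last 16 (or fewer) bytes remain in the queue.
}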
Hope this helps!
=)
Using a circular buffer sounds great, but there is no circular buffer class in .NET, which means additional code anyway. I ended up with the following algorithm, a sort of map and copy; I think it's simple. The variable names are longer than usual for the sake of being self-descriptive here.
The data flows through the buffers like this:
[outStream] <== [tailBuf] <== [mainBuf] <== [inStream]
public byte[] CopyStreamExtractLastBytes(Stream inStream, Stream outStream,
                                         int extractByteCount)
{
    //var mainBuf = new byte[1024*4]; // 4K buffer ok for network too
    var mainBuf = new byte[4651]; // nearby prime for testing
    int mainBufValidCount;
    var tailBuf = new byte[extractByteCount];
    int tailBufValidCount = 0;
    while ((mainBufValidCount = inStream.Read(mainBuf, 0, mainBuf.Length)) > 0)
    {
        // Map: how much of what (passthru/tail) lives where (MainBuf/tailBuf)
        // more than tail is passthru
        int totalPassthruCount = Math.Max(0, tailBufValidCount +
            mainBufValidCount - extractByteCount);
        int tailBufPassthruCount = Math.Min(tailBufValidCount, totalPassthruCount);
        int tailBufTailCount = tailBufValidCount - tailBufPassthruCount;
        int mainBufPassthruCount = totalPassthruCount - tailBufPassthruCount;
        int mainBufResidualCount = mainBufValidCount - mainBufPassthruCount;
        // Copy: Passthru must be flushed per FIFO order (tailBuf then mainBuf)
        outStream.Write(tailBuf, 0, tailBufPassthruCount);
        outStream.Write(mainBuf, 0, mainBufPassthruCount);
        // Copy: Now reassemble/compact tail into tailBuf
        var tempResidualBuf = new byte[extractByteCount];
        Array.Copy(tailBuf, tailBufPassthruCount, tempResidualBuf, 0,
            tailBufTailCount);
        Array.Copy(mainBuf, mainBufPassthruCount, tempResidualBuf,
            tailBufTailCount, mainBufResidualCount);
        tailBufValidCount = tailBufTailCount + mainBufResidualCount;
        tailBuf = tempResidualBuf;
    }
    return tailBuf;
}

Read from basic stream (httpRequestStream)

I have a basic stream, which is the stream of an HTTP request:
var s = new HttpListener().GetContext().Request.InputStream;
I want to read the stream, which contains non-character content (I sent the packet myself).
When we wrap this stream in a StreamReader and then use the StreamReader's ReadToEnd() function, it can read the whole stream and return a string:
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://127.0.0.1/");
listener.Start();
var context = listener.GetContext();
var sr = new StreamReader(context.Request.InputStream);
string x = sr.ReadToEnd(); // This works
But since it has non-character content we can't use StreamReader (I tried all the encoding mechanisms; using a string is just wrong). And I can't use the function
context.Request.InputStream.Read(buffer, position, len)
because I can't get the length of the stream: InputStream.Length always throws an exception and can't be used. And I don't want to create a small protocol like [size][file] and read the size first and then the file... Somehow the StreamReader can get the length, and I just want to know how.
I also tried this and it didn't work
List<byte> bb = new List<byte>();
var ss = context.Request.InputStream;
byte b = (byte)ss.ReadByte();
while (b >= 0)
{
    bb.Add(b);
    b = (byte)ss.ReadByte();
}
I've solved it with the following:
FileStream fs = new FileStream("C:\\cygwin\\home\\Dff.rar", FileMode.Create);
byte[] file = new byte[1024 * 1024];
int finishedBytes = ss.Read(file, 0, file.Length);
while (finishedBytes > 0)
{
    fs.Write(file, 0, finishedBytes);
    finishedBytes = ss.Read(file, 0, file.Length);
}
fs.Close();
Thanks Jon, Douglas.
Your bug lies in the following line:
byte b = (byte)ss.ReadByte();
The byte type is unsigned; when Stream.ReadByte returns -1 at the end of the stream, you’re indiscriminately casting it to byte, which converts it to 255 and, therefore, satisfies the b >= 0 condition. It is helpful to note that the return type is int, not byte, for this very reason.
A quick-and-dirty fix for your code:
List<byte> bb = new List<byte>();
var ss = context.Request.InputStream;
int next = ss.ReadByte();
while (next != -1)
{
    bb.Add((byte)next);
    next = ss.ReadByte();
}
The following solution is more efficient, since it avoids the byte-by-byte reads incurred by the ReadByte calls, and uses a dynamically-expanding byte array for Read calls instead (similar to the way that List<T> is internally implemented):
var ss = context.Request.InputStream;
byte[] buffer = new byte[1024];
int totalCount = 0;
while (true)
{
    int currentCount = ss.Read(buffer, totalCount, buffer.Length - totalCount);
    if (currentCount == 0)
        break;
    totalCount += currentCount;
    if (totalCount == buffer.Length)
        Array.Resize(ref buffer, buffer.Length * 2);
}
Array.Resize(ref buffer, totalCount);
StreamReader cannot get the length either -- it seems there's some confusion regarding the third parameter of Stream.Read. That parameter specifies the maximum number of bytes that will be read, which does not need to (and really cannot) be equal to the number of bytes actually available in the stream. You just call Read in a loop until it returns 0, in which case you know you have reached the end of the stream. This is all documented on MSDN, and it's also exactly how StreamReader does it.
There's also no problem in reading the request with StreamReader and getting it into a string; strings are binary safe in .NET, so you're covered. The problem will be making sense of the contents of the string, but we can't really talk about that since you don't provide any relevant information.
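As an aside, if you are on .NET 4 or later and just want all the bytes, Stream.CopyTo performs that read-until-zero loop for you. A minimal sketch:
using System.IO;

// Sketch: CopyTo loops over Read internally until it returns 0.
static byte[] ReadAllBytes(Stream input)
{
    using (var memory = new MemoryStream())
    {
        input.CopyTo(memory);
        return memory.ToArray(); // the complete request body
    }
}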
HttpRequestStream won't give you the length, but you can get it from the HttpListenerRequest.ContentLength64 property. Like Jon said, make sure you observe the return value from the Read method. In my case, we get buffered reads and cannot read our entire 226KB payload in one go.
Try
byte[] getPayload(HttpListenerContext context)
{
    int length = (int)context.Request.ContentLength64;
    byte[] payload = new byte[length];
    int numRead = 0;
    while (numRead < length)
        numRead += context.Request.InputStream.Read(payload, numRead, length - numRead);
    return payload;
}

Setting the offset in a stream

It says here msdn.microsoft.com/en-us/library/system.io.stream.read.aspx that the Stream.Read and Stream.Write methods both advance the position/offset in the stream automatically, so why are the examples at http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx and http://msdn.microsoft.com/en-us/library/system.io.filestream.read.aspx manually changing the offset?
Do you only set the offset in a loop if you know the size of the stream, and set it to 0 if you don't know the size and are using a buffer?
// Now read s into a byte buffer.
byte[] bytes = new byte[s.Length];
int numBytesToRead = (int)s.Length;
int numBytesRead = 0;
while (numBytesToRead > 0)
{
    // Read may return anything from 0 to 10.
    int n = s.Read(bytes, numBytesRead, 10);
    // The end of the file is reached.
    if (n == 0)
    {
        break;
    }
    numBytesRead += n;
    numBytesToRead -= n;
}
and
using (GZipStream stream = new GZipStream(new MemoryStream(gzip), CompressionMode.Decompress))
{
    const int size = 4096;
    byte[] buffer = new byte[size];
    using (MemoryStream memory = new MemoryStream())
    {
        int count = 0;
        do
        {
            count = stream.Read(buffer, 0, size);
            if (count > 0)
            {
                memory.Write(buffer, 0, count);
            }
        }
        while (count > 0);
        return memory.ToArray();
    }
}
The offset is actually the offset of the buffer, not the stream. Streams are advanced automatically as they are read.
Edit (to the edited question):
In none of the code snippets you pasted into the question do I see any stream offset being set.
I think you are confusing the calculation of bytes to read with bytes received. This protocol may seem funny (why would you receive fewer bytes than requested?) but it makes sense when you consider that you might be reading from a high-latency, packet-oriented source (think: network sockets).
You might receive 6 characters in one burst (from a TCP packet) and only receive the remaining 4 characters in your next read (when the next packet has arrived).
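A tiny sketch of that distinction with a MemoryStream: the offset argument indexes into the byte[] you pass in, while the stream's own Position advances by itself on every Read.
using System;
using System.IO;

var stream = new MemoryStream(new byte[] { 10, 20, 30, 40, 50, 60 });
var buffer = new byte[6];

int first = stream.Read(buffer, 0, 3);      // returns 3, fills buffer[0..2]
int second = stream.Read(buffer, first, 3); // returns 3, fills buffer[3..5]

// stream.Position is now 6, buffer holds 10..60 in order, and no Seek was needed.
Console.WriteLine(first + second); // 6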
Edit: In response to your linked example from the comment:
using (GZipStream stream = new GZipStream(new MemoryStream(gzip), CompressionMode.Decompress))
{
    // ... snip
    count = stream.Read(buffer, 0, size);
    if (count > 0)
    {
        memory.Write(buffer, 0, count);
    }
It appears that the coders rely on prior knowledge about the underlying stream implementation, namely that stream.Read will always return either 0 or the size requested. That seems like a risky bet to me. But if the docs for GZipStream do state that, it could be alright. However, since the MSDN samples use a generic Stream variable, it is (way) more correct to check the exact number of bytes read.
The first linked example uses a MemoryStream in both Write and Read fashion. The position is reset in between, so the data that was written first will be read:
Stream s = new MemoryStream();
for (int i = 0; i < 100; i++)
{
    s.WriteByte((byte)i);
}
s.Position = 0;
The second example linked does not set the stream position. You'd typically have seen a call to Seek if it did. You may be confusing the offsets into the data buffer with the stream position?
