I am trying to get the body of a request in an ASP.NET Core controller as a byte[]. Here is what I initially wrote:
var declaredLength = (int)request.ContentLength;
byte[] fileBuffer = new byte[declaredLength];
request.Body.Read(fileBuffer, 0, declaredLength);
This code works, but only for small requests (around 20 KB). For larger requests it fills the first 20,000 or so bytes of the array, and the rest of the array stays empty.
I used some code in the top answer here, and was able to read the entire request body successfully after rewriting my code:
var declaredLength = (int)request.ContentLength;
byte[] fileBuffer = new byte[declaredLength];
// need to enable, otherwise Seek() fails
request.EnableRewind();
// using StreamReader apparently resolves the issue
using (var reader = new StreamReader(request.Body, Encoding.UTF8, true, 1024, true))
{
    reader.ReadToEnd();
}
request.Body.Seek(0, SeekOrigin.Begin);
request.Body.Read(fileBuffer, 0, declaredLength);
Why is StreamReader.ReadToEnd() able to read the entire request body successfully, while Stream.Read() can't? Reading the request stream twice feels like a hack. Is there a better way to go about this? (I only need to read the stream into a byte array once)
Remember that you're trying to read request.Body before the whole request has been received.
Stream.Read behaves like this:
If the end of the stream has been reached, return 0
If there are no bytes available which haven't already been read, block until at least 1 byte is available
If 1 or more new bytes are available, return them straight away. Don't block.
As you can see, if the whole body hasn't been received yet, request.Body.Read(...) will just return the part of the body that has been received.
StreamReader.ReadToEnd() calls Stream.Read in a loop, until it finds the end of the stream.
You should probably call Stream.Read in a loop as well, until you've read all of the bytes:
byte[] fileBuffer = new byte[declaredLength];
int numBytesRead = 0;
while (numBytesRead < declaredLength)
{
    int readBytes = request.Body.Read(fileBuffer, numBytesRead, declaredLength - numBytesRead);
    if (readBytes == 0)
    {
        // We reached the end of the stream before we were expecting it;
        // throw rather than spin forever in this loop
        throw new EndOfStreamException("Request body ended before ContentLength bytes were read.");
    }
    numBytesRead += readBytes;
}
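If you'd rather not manage the loop yourself, copying the body into a MemoryStream does the looping for you. A minimal sketch, assuming an async action on a recent ASP.NET Core version (where EnableBuffering() has replaced EnableRewind(); neither is needed if you only read the body once):

using (var ms = new MemoryStream())
{
    // CopyToAsync performs the Read loop internally until the end of the stream
    await request.Body.CopyToAsync(ms);
    byte[] fileBuffer = ms.ToArray();
}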
Related
I am making a server using C#'s HttpListener, and the server handles binary data from incoming POST requests. I am writing the POST request handler, and because I am handling binary data I am reading into a byte[] buffer. The issue is that I have to supply the length of the buffer before reading anything into it. I tried HttpListenerRequest.InputStream.Length, but it throws this:
System.NotSupportedException: This stream does not support seek operations.
Is there another way to get the length of the stream? Other answers to similar questions just use StreamReader, but StreamReader doesn't handle binary data.
Here is my code that throws the error.
// If the request is a post request and the request has a body
Stream input = request.InputStream; // "request" in this case is the HttpListenerRequest
byte[] buffer = new byte[input.Length]; // Throws System.NotSupportedException.
input.Read(buffer, 0, input.Length);
You can use HttpListenerRequest.ContentLength64, which gives the length of the request body (the input stream, in this case). Example:
// If the request is a post request and the request has a body
long longLength = request.ContentLength64;
int length = (int) longLength;
Stream input = request.InputStream; // "request" in this case is the HttpListenerRequest
byte[] buffer = new byte[length];
input.Read(buffer, 0, length);
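One caveat worth adding, same as in the first answer above: a single Read() call may return fewer bytes than requested, and ContentLength64 is -1 when the client sent no Content-Length header. A minimal defensive sketch of the read itself:

int totalRead = 0;
while (totalRead < length)
{
    int read = input.Read(buffer, totalRead, length - totalRead);
    if (read == 0)
        break; // the stream ended before ContentLength64 bytes arrived
    totalRead += read;
}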
I am trying to simply download an object from my bucket using C#, just like in the S3 examples, and I can't figure out why the stream won't be entirely copied to my byte array. Only the first 8192 bytes are copied instead of the whole stream.
I have tried with an Amazon.S3.AmazonS3Client and with an Amazon.S3.Transfer.TransferUtility, but in both cases only the first bytes are actually copied into the buffer.
var stream = await _transferUtility.OpenStreamAsync(BucketName, key);
using (stream)
{
    byte[] content = new byte[stream.Length];
    stream.Read(content, 0, content.Length);
    // Here content should contain all the data from the stream, but only the first 8192 bytes are actually populated.
}
When debugging, I see the stream type is Amazon.Runtime.Internal.Util.Md5Stream, and inside the stream, before calling Read() the property CurrentPosition = 0. After the call, CurrentPosition becomes 8192, which seems to indeed indicate only the first 8K of data was read. The total Length of the stream is 104042.
If I make more calls to stream.Read(), I see more data gets read and CurrentPosition increases in value. But CurrentPosition is not a public property, and I cannot access it in my code to make a while() loop (and having to code such loops to read all the data seems a bit clunky).
Why are only the first 8K read in my code? How should I proceed to read the entire stream?
I tried calling stream.Flush(), but it did not fix the problem.
EDIT 1
I have modified my code so it does the following:
var stream = await _transferUtility.OpenStreamAsync(BucketName, key);
using (stream)
{
    byte[] content = new byte[stream.Length];
    var bytesRead = 0;
    while (bytesRead < stream.Length)
        bytesRead += stream.Read(content, bytesRead, content.Length - bytesRead);
}
And it works. But it still looks clunky. Is it normal that I have to do this?
EDIT 2
The final solution is to create a MemoryStream of the correct size and then call CopyTo(). No more clunky loop, and no risk of an infinite loop if Read() starts returning 0 before the whole stream has been read:
var stream = await _transferUtility.OpenStreamAsync(BucketName, key);
using (stream)
{
    using (var memoryStream = new MemoryStream((int)stream.Length))
    {
        stream.CopyTo(memoryStream);
        // GetBuffer() is safe here because the MemoryStream was pre-sized;
        // ToArray() is the safer general-purpose choice
        var myBuffer = memoryStream.GetBuffer();
    }
}
stream.Read() returns the number of bytes read. You can then keep track of the total number of bytes read until you have reached the end of the file (content.Length).
You could also just loop until the returned value is 0, which means the end of the stream has been reached.
You will need to keep track of the current offset into your content buffer so that you don't overwrite data on each call.
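Putting those points together, a defensive version of the loop from EDIT 1 might look like this (a generic sketch, nothing AWS-specific):

byte[] content = new byte[stream.Length];
int offset = 0;
while (offset < content.Length)
{
    int read = stream.Read(content, offset, content.Length - offset);
    if (read == 0)
        throw new EndOfStreamException("Stream ended before the expected length was read.");
    offset += read;
}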
We have a code snippet that converts a Stream to a byte[], which is later displayed as an image in an aspx page.
The problem is that the image is displayed the first time the page loads, but not on later requests (reload etc.).
The only difference I observed is that the Stream position of 'input' (in ConvertStreamtoByteArray) is 0 the first time and > 0 on subsequent calls. How do I fix this?
context.Response.Clear();
context.Response.ContentType = "image/pjpeg";
context.Response.BinaryWrite(ConvertStreamtoByteArray(imgStream));
context.Response.End();
private static byte[] ConvertStreamtoByteArray(Stream input)
{
    var buffer = new byte[16 * 1024];
    using (var ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}
I think the source is: Creating a byte array from a stream.
The code snippet appears to be from the above link; everything matches except the method name.
You're (most likely) holding a reference to imgStream, so the same stream is being used every time ConvertStreamtoByteArray is called.
The problem is that streams track their Position. This starts at 0 when the stream is new, and ends up at the end when you read the entire stream.
Usually the solution in this case is to set the Position back to 0 prior to copying the content of the stream.
In your case, you should probably 1) convert imgStream to a byte array the first time it's needed, 2) cache this byte array and not the stream, 3) dispose of and throw away imgStream, and 4) pass the byte array to the Response from this point onwards.
See, this is what happens when you copypasta code from the internets. Weird stuff like this, repeatedly converting the same stream to a byte array (waste of time!), and you end up not using the framework to do your work for you. Manually copying streams is so 2000s.
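A minimal sketch of that caching approach (the _cachedImage field and GetImageBytes name are made up for illustration):

private static byte[] _cachedImage;

private static byte[] GetImageBytes(Stream imgStream)
{
    if (_cachedImage == null)
    {
        using (var ms = new MemoryStream())
        {
            imgStream.CopyTo(ms); // CopyTo runs the read loop for you
            _cachedImage = ms.ToArray();
        }
        imgStream.Dispose(); // the stream is no longer needed after this point
    }
    return _cachedImage;
}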
I'm having two problems, and after trying a few techniques I've read on Stack Overflow, they persist. I'm trying to send a file from the server to the client with the code below, but the file always arrives a few bytes short, causing file corruption. The second problem is that the stream doesn't close, despite my sending a zero-length packet at the end to indicate the transfer is finished without closing the connection.
Server code snippet:
/*
* Received request from client for file, sending file to client.
*/
//open file to send to client
FileStream fs = new FileStream(fileLocation, FileMode.Open, FileAccess.Read);
byte[] data = new byte[1024];
long fileSize = fs.Length;
long sent = 0;
int count = 0;
while (sent < fileSize)
{
    count = fs.Read(data, 0, data.Length);
    netStream.Write(data, 0, count);
    sent += count;
}
netStream.Write(new byte[1024], 0, 0); //send zero length byte stream to indicate end of file.
fs.Flush();
netStream.Flush();
Client code snippet:
TcpClient client;
NetworkStream serverStream;
/*
* [...] client connect
*/
//send request to server for file
byte[] dataToSend = SerializeObject(obj);
serverStream.Write(dataToSend, 0, dataToSend.Length);
//create filestream to save file
FileStream fs = new FileStream(fileName, FileMode.Create, FileAccess.Write);
//handle response from server
byte[] response = new byte[client.ReceiveBufferSize];
byte[] bufferSize = new byte[1024];
int bytesRead;
while ((bytesRead = serverStream.Read(bufferSize, 0, bufferSize.Length)) > 0 && client.ReceiveBufferSize > 0)
{
    Debug.WriteLine("Bytes read: " + bytesRead);
    fs.Write(response, 0, bytesRead);
}
fs.Close();
With UDP you can transmit an effectively empty packet, but TCP won't allow you to do that. At the application layer the TCP protocol is a stream of bytes, with all of the packet-level stuff abstracted away. Sending zero bytes will not result in anything happening at the stream level on the client side.
Signalling the end of a file transfer can be as simple as having the server close the connection after sending the last block of data. The client will receive the final data packet, then note that the socket has been closed, which indicates that the data has been completely delivered. The flaw in this method is that a TCP connection can be closed for other reasons, leaving the client believing it has all the data when the connection was actually dropped mid-transfer.
So even if you are going to use the 'close on complete' method to signal end of transfer, you need to have a mechanism that allows the client to identify that the file is actually complete.
The most common form of this is to send a header block at the start of the transfer that tells you something about the data being transferred. This might be as simple as a 4-byte length value, or it could be a variable-length descriptor structure that includes various metadata about the file such as its length, name, create/modify times and a checksum or hash that you can use to verify the received content. The client reads the header first, then processes the rest of the data in the stream as content.
Let's take the simplest case, sending a 4-byte length indicator at the start of the stream.
Server Code:
public void SendStream(Socket client, Stream data)
{
    // Send length of stream as first 4 bytes
    byte[] lenBytes = BitConverter.GetBytes((int)data.Length);
    client.Send(lenBytes);

    // Send stream data
    byte[] buffer = new byte[1024];
    int rc;
    data.Position = 0;
    while ((rc = data.Read(buffer, 0, 1024)) > 0)
        client.Send(buffer, rc, SocketFlags.None);
}
Client Code:
public bool ReceiveStream(Socket server, Stream outdata)
{
    // Get length of data in stream from first 4 bytes
    byte[] lenBytes = new byte[4];
    if (server.Receive(lenBytes) < 4)
        return false;
    long len = (long)BitConverter.ToInt32(lenBytes, 0);

    // Receive remainder of stream data
    byte[] buffer = new byte[1024];
    int rc;
    while ((rc = server.Receive(buffer)) > 0)
        outdata.Write(buffer, 0, rc);

    // Check that we received the expected amount of data
    return len == outdata.Position;
}
Not much in the way of error checking and so on, and blocking code in all directions, but you get the idea.
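A hypothetical usage sketch (socket setup omitted; acceptedSocket and connectedSocket are assumed names):

// Server side: send a file over an accepted socket
using (FileStream fs = File.OpenRead("input.bin"))
    SendStream(acceptedSocket, fs);

// Client side: receive into a file and check the result
using (FileStream outFs = File.Create("output.bin"))
{
    if (!ReceiveStream(connectedSocket, outFs))
        Console.WriteLine("Transfer incomplete or length mismatch");
}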
There is no such thing as sending "zero bytes" in a stream. As soon as the stream sees you're trying to send zero bytes it can just return immediately and will have done exactly what you asked.
Since you're using TCP, it is up to you to use an agreed-upon protocol between the client and server. For example:
The server could close the connection after sending all its data. The client would see this as a "Read" that completes with zero bytes returned.
The server could send a header of a fixed size (maybe 4 bytes) that includes the length of the upcoming data. The client could then read those 4 bytes and would then know how many more bytes to wait for.
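For the second option, once the client has read the 4-byte header it can read exactly that many bytes. A minimal sketch, complementing the ReceiveStream method above (which instead reads until the connection closes):

// Assumes lenBytes already holds the 4-byte length header
int expected = BitConverter.ToInt32(lenBytes, 0);
byte[] payload = new byte[expected];
int received = 0;
while (received < expected)
{
    int rc = server.Receive(payload, received, expected - received, SocketFlags.None);
    if (rc == 0)
        break; // connection closed before all bytes arrived
    received += rc;
}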
Finally, you might need a "netStream.Flush()" in your server code above (if you intended to keep the connection open).
Edit: Solution is at bottom of post
I am trying my luck with reading binary files. Since the binary file might be rather big, I don't want to rely on byte[] AllBytes = File.ReadAllBytes(myPath); instead, I want to read small portions of the same size (which fits nicely with the file format being read) in a loop, using what I would call a "buffer".
public void ReadStream(MemoryStream ContentStream)
{
    byte[] buffer = new byte[sizePerHour];
    for (int hours = 0; hours < NumberHours; hours++)
    {
        int t = ContentStream.Read(buffer, 0, sizePerHour);
        SecondsToAdd = BitConverter.ToUInt32(buffer, 0);
        // further processing of my byte[] buffer
    }
}
My stream contains all the bytes I want, which is a good thing. But when I enter the loop, several things cease to work.
My int t is 0, although I would presume that ContentStream.Read() would copy data from the stream into my byte array, but that isn't the case.
I tried buffer = ContentStream.GetBuffer(), but that results in my buffer containing all of my stream, a behaviour I wanted to avoid by reading into a buffer.
Also, resetting the stream to position 0 before reading did not help, nor did specifying an offset for my Stream.Read(), which means I am lost.
Can anyone point me in the right direction for reading small portions of a stream into a byte[]? Maybe with some code?
Thanks in advance
Edit:
What pointed me in the right direction was the answer noting that .Read() returns 0 if the end of the stream is reached. I modified my code to the following:
public void ReadStream(MemoryStream ContentStream)
{
    byte[] buffer = new byte[sizePerHour];
    ContentStream.Seek(0, SeekOrigin.Begin); // Added this line
    for (int hours = 0; hours < NumberHours; hours++)
    {
        int t = ContentStream.Read(buffer, 0, sizePerHour);
        SecondsToAdd = BitConverter.ToUInt32(buffer, 0);
        // further processing of my byte[] buffer
    }
}
And everything works like a charm. I had initially reset the stream to its origin on every iteration over hours and passed an offset to Read(). Moving the "set to beginning" part outside my loop and leaving the offset at 0 did the trick.
Read returns zero if the end of the stream has been reached. Are you sure that your memory stream has the content you expect? I've tried the following and it works as expected:
// Create the source of the memory stream.
UInt32[] source = { 42, 4711 };
List<byte> sourceBuffer = new List<byte>();
Array.ForEach(source, v => sourceBuffer.AddRange(BitConverter.GetBytes(v)));

// Read the stream.
using (MemoryStream contentStream = new MemoryStream(sourceBuffer.ToArray()))
{
    byte[] buffer = new byte[sizeof(UInt32)];
    int t;
    do
    {
        t = contentStream.Read(buffer, 0, buffer.Length);
        if (t > 0)
        {
            UInt32 value = BitConverter.ToUInt32(buffer, 0);
        }
    } while (t > 0);
}