I am using the stream from HttpWebRequest.GetResponse().GetResponseStream() to read data from a streaming web API. I use Begin/EndRead on the stream with a buffer of 65K bytes. I can see that data is being returned in the following pattern:
16383 bytes read.
1 bytes read.
16383 bytes read.
1 bytes read.
16383 bytes read.
1 bytes read.
etc...
Obviously the 1-byte reads introduce a lot of inefficiency into the process, and the buffer I provide is large enough to hold 16384 bytes or more. Is there anything I can do as a client to improve this, or is it simply up to the server how it streams data to me?
The reader code is basically:
var buffer = new byte[65536];
using (var stream = response.GetResponseStream()) {
while (true) {
var bytesRead = await AsyncRead(stream.BeginRead, stream.EndRead, buffer);
Console.WriteLine($"{bytesRead} bytes read.");
// do something with the bytes
}
}
where AsyncRead just calls BeginRead(buffer, 0, buffer.Length, callback, null), then EndRead in the callback and returns the return value of EndRead.
BTW this is on .NET 4.0, no HttpClient.
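For reference, AsyncRead is roughly equivalent to the following sketch (my approximation of the helper described above, using Task<int>.Factory.FromAsync, which is available on .NET 4.0):
// Approximation of the AsyncRead helper (not the original code):
// wraps the Begin/End pair in a Task<int> so it can be awaited.
static Task<int> AsyncRead(
    Func<byte[], int, int, AsyncCallback, object, IAsyncResult> beginRead,
    Func<IAsyncResult, int> endRead,
    byte[] buffer)
{
    return Task<int>.Factory.FromAsync(
        (callback, state) => beginRead(buffer, 0, buffer.Length, callback, state),
        endRead,
        null);
}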
What exactly are you trying to achieve by sending the HttpWebRequest to the target server?
Are you trying to read a live response from the server after asking it for data, or just initializing a request between your client application and the target server? If you are sending an HttpWebRequest and getting an HttpWebResponse back from the target server, convert the response into a stream and use System.IO.StreamReader to read that incoming stream.
Then, just to be on the safe side, decode the stream read by the System.IO.StreamReader as UTF-8 if that is your overall goal.
After decoding as UTF-8 you can turn the output into a string value, which you can write to the console or wherever else you want to send it.
I hope this is what you wanted; if not, then I have been essentially useless! :P
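For what it's worth, a minimal sketch of what this answer describes (reading the whole response through a StreamReader as UTF-8; the URL is a placeholder, and whether this changes the 16383/1 read pattern is a separate question):
// Sketch of the StreamReader approach suggested above (not the asker's code).
var request = (HttpWebRequest)WebRequest.Create("http://example.com/stream"); // hypothetical endpoint
using (var response = (HttpWebResponse)request.GetResponse())
using (var stream = response.GetResponseStream())
using (var reader = new StreamReader(stream, Encoding.UTF8))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line); // or hand the line off for processing
    }
}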
I'm exploring how to implement an HTTP server in C#. (And before you ask, I know there is Kestrel (and nothing else that isn't obsolete), and I want a much, much smaller application.) So, the response could be a Stream that is not seekable and has an unknown length. For this situation, chunked encoding can be used instead of sending a Content-Length header.
The response can also be compressed with gzip or br as indicated by the client. This can be accomplished with e.g. the GZipStream class. I had almost said "easily", because that's not really the case. I always find the GZipStream API confusing each time I use it. I usually bump into every exception there is until I finally get it right.
It seems like I can only write (push) to a GZipStream and the compressed data will trickle out the other end into the specified "base" stream. But that's not desirable because I can't just let the compressed data flow to the client. It needs to be chunked. That is, each bit of compressed data needs to be prefixed with its chunk size. Of course the GZipStream cannot produce that format.
Instead, I'd like to read (pull) from the compressing GZipStream, but that doesn't seem to be possible. The documentation says it will throw an exception if I try that. But there has to be some instance that brings the compressed bytes into the chunked format.
So how would I get the expected result? Can it even be achieved with this API? Why can't I pull from the compressing stream, only push?
I'm not trying to make up (non-functional) sample code because that would only be confusing.
PS: Okay, maybe this:
Stream responseBody = ...;
if (canCompress)
{
responseBody = new GZipStream(responseBody, CompressionMode.Compress); // <-- probably wrong
}
// not shown: add appropriate headers
while (true)
{
int chunkLength = responseBody.Read(buffer); // <-- not possible
if (chunkLength == 0)
break;
response.Write($"{chunkLength:X}\r\n");
response.Write(buffer.AsMemory()[..chunkLength]);
response.Write("\r\n");
}
response.Write("0\r\n\r\n");
Your usage of GZipStream is incomplete. While your responseBody is the correct target stream, you have to actually write the bytes TO the GZipStream itself.
In addition, once you are done writing, you must close the GZipStream instance so that all of the compressed bytes are written to your target stream. This is the critical step: there is no such thing as "partial compression" of an input stream in GZip; the whole input has to be processed before the output is complete. Closing the stream is the missing link that MUST happen before you can continue to write the response.
Finally, you need to reset the position of your output stream so that you can read it into an intermediary response buffer.
using MemoryStream responseBody = new MemoryStream();
GZipStream gzipStream = null; // make sure to dispose after use
if (canCompress)
{
using MemoryStream gzipStreamBuffer = new MemoryStream(bytes); // bytes holds the uncompressed payload
gzipStream = new GZipStream(responseBody, CompressionMode.Compress, true);
gzipStreamBuffer.CopyTo(gzipStream);
gzipStream.Close(); // close the stream so that all compressed bytes are written
responseBody.Seek(0, SeekOrigin.Begin); // reset the response so that we can read it to the buffer
}
var buffer = new byte[20];
while (true)
{
int chunkLength = responseBody.Read(buffer);
if (chunkLength == 0)
break;
// write response
}
In my test example, my bytes input was 241 bytes, whereas the compressed bytes written to the buffer totaled 82 bytes.
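As an aside, the intermediate buffering can be avoided entirely: since GZipStream only pushes, one way to get the compressed bytes into the chunked format is to hand it a small wrapper stream whose Write method emits each block as an HTTP chunk. A rough sketch (my own, not from the question or the answer; ChunkedWriteStream is a hypothetical name, and the usual System.IO/System.Text usings are assumed):
// Hypothetical wrapper: every Write call becomes one HTTP chunk on the underlying stream.
class ChunkedWriteStream : Stream
{
    private readonly Stream _inner;
    public ChunkedWriteStream(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        if (count == 0) return;
        byte[] header = Encoding.ASCII.GetBytes(count.ToString("X") + "\r\n");
        _inner.Write(header, 0, header.Length);
        _inner.Write(buffer, offset, count);
        _inner.Write(new byte[] { 0x0D, 0x0A }, 0, 2); // trailing CRLF after the chunk data
    }

    public void WriteTerminatingChunk()
    {
        byte[] last = Encoding.ASCII.GetBytes("0\r\n\r\n");
        _inner.Write(last, 0, last.Length);
    }

    // Minimal plumbing for a write-only stream.
    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }
    public override void Flush() => _inner.Flush();
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
}
With this, new GZipStream(new ChunkedWriteStream(networkStream), CompressionMode.Compress, leaveOpen: true) can be written to directly; disposing the GZipStream first (so it emits its final block) and then calling WriteTerminatingChunk ends the response. The chunk sizes are simply whatever GZipStream happens to write, which is still valid chunked encoding.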
I have a TcpClient class on a client and server setup on my local machine. I have been using the NetworkStream to facilitate communications back and forth between the two successfully.
Moving forward I am trying to implement compression in the communications. I've tried GZipStream and DeflateStream, and I have decided to focus on DeflateStream. However, the connection now hangs without reading data.
I have tried 4 different implementations that have all failed because the server side does not read the incoming data and the connection times out. I will focus on the two implementations I have tried most recently, which to my knowledge should work.
The client boils down to this request. There are two separate implementations, one with StreamWriter and one without.
textToSend = ENQUIRY + START_OF_TEXT + textToSend + END_OF_TEXT;
// Send XML Request
byte[] request = Encoding.UTF8.GetBytes(textToSend);
using (DeflateStream streamOut = new DeflateStream(netStream, CompressionMode.Compress, true))
{
//using (StreamWriter sw = new StreamWriter(streamOut))
//{
// sw.Write(textToSend);
// sw.Flush();
streamOut.Write(request, 0, request.Length);
streamOut.Flush();
//}
}
The server receives the request and I do
1.) a quick read of the first character, then, if it matches what I expect,
2.) I continue reading the rest.
The first read works correctly, and if I want to read the whole stream it is all there. However, I only want to read the first character, evaluate it, and then continue in my LongReadStream method.
When I try to continue reading the stream there is no data to be read. I am guessing that the data is being lost during the first read, but I'm not sure how to determine that. All of this code works correctly when I use the plain NetworkStream.
Here is the server side code.
private void ProcessRequests()
{
// This method reads the first byte of data correctly and if I want to
// I can read the entire request here. However, I want to leave
// all that data until I want it below in my LongReadStream method.
if (QuickReadStream(_netStream, receiveBuffer, 1) != ENQUIRY)
{
// Invalid Request, close connection
clientIsFinished = true;
_client.Client.Disconnect(true);
_client.Close();
return;
}
while (!clientIsFinished) // Keep reading text until client sends END_TRANSMISSION
{
// Inside this method there is no data and the connection times out waiting for data
receiveText = LongReadStream(_netStream, _client);
// Continue talking with Client...
}
_client.Client.Shutdown(SocketShutdown.Both);
_client.Client.Disconnect(true);
_client.Close();
}
private string LongReadStream(NetworkStream stream, TcpClient c)
{
bool foundEOT = false;
StringBuilder sbFullText = new StringBuilder();
int readLength, totalBytesRead = 0;
string currentReadText;
c.ReceiveBufferSize = DEFAULT_BUFFERSIZE * 100;
byte[] bigReadBuffer = new byte[c.ReceiveBufferSize];
while (!foundEOT)
{
using (var decompressStream = new DeflateStream(stream, CompressionMode.Decompress, true))
{
//using (StreamReader sr = new StreamReader(decompressStream))
//{
//currentReadText = sr.ReadToEnd();
//}
readLength = decompressStream.Read(bigReadBuffer, 0, c.ReceiveBufferSize);
currentReadText = Encoding.UTF8.GetString(bigReadBuffer, 0, readLength);
totalBytesRead += readLength;
}
sbFullText.Append(currentReadText);
if (currentReadText.EndsWith(END_OF_TEXT))
{
foundEOT = true;
sbFullText.Length = sbFullText.Length - 1;
}
else
{
sbFullText.Append(currentReadText);
}
// Validate data code removed for simplicity
}
c.ReceiveBufferSize = DEFAULT_BUFFERSIZE;
c.ReceiveTimeout = timeOutMilliseconds;
return sbFullText.ToString();
}
private string QuickReadStream(NetworkStream stream, byte[] receiveBuffer, int receiveBufferSize)
{
using (DeflateStream zippy = new DeflateStream(stream, CompressionMode.Decompress, true))
{
int bytesIn = zippy.Read(receiveBuffer, 0, receiveBufferSize);
var returnValue = Encoding.UTF8.GetString(receiveBuffer, 0, bytesIn);
return returnValue;
}
}
EDIT
NetworkStream has an underlying Socket property, which has an Available property. MSDN says this about the Available property:
Gets the amount of data that has been received from the network and is
available to be read.
Before the call below Available is 77. After reading 1 byte the value is 0.
//receiveBufferSize = 1
int bytesIn = zippy.Read(receiveBuffer, 0, receiveBufferSize);
There doesn't seem to be any documentation about DeflateStream consuming the whole underlying stream, and I don't know why it would do such a thing when there are explicit calls to read a specific number of bytes.
Does anyone know why this happens, or whether there is a way to preserve the underlying data for a future read? Based on this 'feature', and a previous article I read stating that a DeflateStream must be closed to finish sending (flushing won't work), it seems DeflateStreams may be of limited use for networking, especially if one wishes to counter DoS attacks by testing incoming data before accepting a full stream.
The basic flaw I can see in your code is a possible misunderstanding of how network streams and compression work together.
I think your code might work if you kept working with one DeflateStream. However, you use one in your quick read and then you create another one.
I will try to explain my reasoning with an example. Assume you have 8 bytes of original data to be sent over the network in compressed form. Now assume, for the sake of argument, that each byte (8 bits) of original data is compressed to 6 bits in the compressed form. Now let's see what your code does with this.
From the network stream, you can't read less than 1 byte. You can't take 1 bit only. You take 1 byte, 2 bytes, or any number of bytes, but not bits.
But if you want to receive just 1 byte of the original data, you need to read the first whole byte of compressed data. However, only 6 bits of that compressed byte represent the first byte of uncompressed data. The last 2 bits of the first byte belong to the second byte of original data.
Now if you cut the stream there, what is left is 5 bytes in the network stream that do not make any sense on their own and can't be decompressed.
The deflate algorithm is more complex than that, and thus it makes perfect sense that it does not allow you to stop reading from the NetworkStream at one point and continue with a new DeflateStream from the middle. There is a decompression context that must be present in order to decompress the data to its original form. Once you dispose of the first DeflateStream in your quick read, this context is gone and you can't continue.
So, to resolve your issue, try to create only one DeflateStream, pass it to your functions, and dispose of it only at the end.
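A rough sketch of that suggestion, reusing the method names from the question (QuickReadStream and LongReadStream would need to accept a plain Stream and stop creating their own DeflateStream; and, as the next answer points out, flushing may still be an issue):
// One DeflateStream for the whole conversation, so the decompression
// context survives across the quick read and the long reads.
using (var inflate = new DeflateStream(_netStream, CompressionMode.Decompress, true))
{
    if (QuickReadStream(inflate, receiveBuffer, 1) != ENQUIRY)
    {
        clientIsFinished = true;
        _client.Client.Disconnect(true);
        _client.Close();
        return;
    }

    while (!clientIsFinished)
    {
        receiveText = LongReadStream(inflate, _client);
        // Continue talking with Client...
    }
} // dispose only after the conversation is finished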
This is broken in many ways.
You are assuming that a read call will return exactly the number of bytes you want. It might read everything in one-byte chunks, though.
DeflateStream has an internal buffer. It can't be any other way: input bytes do not correspond 1:1 to output bytes, so there must be some internal buffering. You must use one such stream, not a new one per read.
The same issue applies to UTF-8: UTF-8 encoded strings cannot be split at arbitrary byte boundaries, so sometimes your Unicode data will be garbled.
Don't touch ReceiveBufferSize; it does not help in any way.
You cannot reliably flush a deflate stream, I think, because the output might end at a partial byte position. You should probably devise a message framing format in which you prepend the compressed length as an uncompressed integer, and then send the compressed deflate data after the length. That can be decoded reliably.
Fixing these issues is not easy.
Since you seem to control client and server you should discard all of this and not devise your own network protocol. Use a higher-level mechanism such as web services, HTTP, protobuf. Anything is better than what you have there.
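To make the framing idea concrete, here is a sketch of the sender side only (the helper name and the 4-byte little-endian length prefix are my choices, not something prescribed by the answers):
// Sender-side sketch: compress the whole message first, then send its length followed by the bytes.
static void SendCompressed(NetworkStream netStream, string text)
{
    byte[] raw = Encoding.UTF8.GetBytes(text);

    byte[] compressed;
    using (var buffer = new MemoryStream())
    {
        using (var deflate = new DeflateStream(buffer, CompressionMode.Compress, true))
        {
            deflate.Write(raw, 0, raw.Length);
        } // closing the DeflateStream flushes the remaining compressed bytes into the buffer
        compressed = buffer.ToArray();
    }

    byte[] lengthPrefix = BitConverter.GetBytes(compressed.Length); // 4 bytes
    netStream.Write(lengthPrefix, 0, lengthPrefix.Length);
    netStream.Write(compressed, 0, compressed.Length);
}
The receiver then reads exactly 4 bytes, decodes the length, reads exactly that many bytes, and only then decompresses, which is essentially what the asker's own answer below ends up doing.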
Basically there are a few things wrong with the code I posted above. The first is that when I read data I'm not doing anything to make sure ALL of the data is being read in. As per the Microsoft documentation:
The Read operation reads as much data as is available, up to the
number of bytes specified by the size parameter.
In my case I was not making sure my reads would get all the data I expected.
This can be accomplished simply with this code.
byte[] data= new byte[packageSize];
bytesRead = _netStream.Read(data, 0, packageSize);
while (bytesRead < packageSize)
bytesRead += _netStream.Read(data, bytesRead, packageSize - bytesRead);
On top of this problem, I had a fundamental issue with using DeflateStream: namely, I should not use DeflateStream to write to the underlying NetworkStream. The correct approach is to first use the DeflateStream to compress the data into a byte array, then send that byte array over the NetworkStream directly.
Using this approach compressed the data correctly over the network and let me properly read the data on the other end.
You may point out that I must know the size of the data, and that is true. Every call has an 8-byte 'header' that includes the size of the compressed data and the size of the data when it is uncompressed, although I think the second was ultimately not needed.
The code for this is here. Note that the variable packageSize serves two purposes (first the number of header bytes read, then the decoded package size).
byte[] sizeOfDataInBytes = new byte[4];
int packageSize = streamIn.Read(sizeOfDataInBytes, 0, 4);
while (packageSize != 4)
{
    packageSize += streamIn.Read(sizeOfDataInBytes, packageSize, 4 - packageSize);
}
packageSize = BitConverter.ToInt32(sizeOfDataInBytes, 0);
With this information I can correctly use the code I showed you first to get the contents fully.
Once I have the full compressed byte array I can get the incoming data like so:
var output = new MemoryStream();
using (var stream = new MemoryStream(bufferIn))
{
using (var decompress = new DeflateStream(stream, CompressionMode.Decompress))
{
decompress.CopyTo(output);
}
}
output.Position = 0;
var unCompressedArray = output.ToArray();
output.Close();
output.Dispose();
return Encoding.UTF8.GetString(unCompressedArray);
From looking at the example application on the Microsoft website, How to connect with a stream socket (XAML), I have learned how to connect to a server and send a string to the server. However, the example doesn't quite cover reading data from the socket.
The server is a C# Windows console application, and what it does is send data to the mobile client using a NetworkStream.
//send user response
//message is a frame packet containing information
message = new Message();
//type 1 just means it's successful
message.type = 1;
//using Newtonsoft Json.NET I convert the Message object into a string
string sendData = JsonConvert.SerializeObject(message);
//convert the string into a byte array and store it in the variable data (type byte[])
byte[] data = GetBytes(sendData);
//netStream is my NetworkStream; I want to write the byte array called data, starting from 0 and ending at the last point in the array
netStream.Write(data, 0, data.Length);
//flushing the stream, not sure why; flushing means to push data
netStream.Flush();
//debugging
Console.WriteLine("sent");
In order to read data from a stream, the DataReader class is used. I am quite new to C# mobile development, and the documentation on the DataReader class doesn't provide any example implementations, so how can I read data from the StreamSocket?
Using the example code from Microsoft:
DataReader reader = new DataReader(clientSocket.InputStream);
// Set inputstream options so that we don't have to know the data size
reader.InputStreamOptions = InputStreamOptions.Partial;
await reader.LoadAsync(reader.UnconsumedBufferLength);
But now I am not sure how to read the byte array sent from the server.
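For reference, a hedged sketch of one way this might look (the 1024-byte maximum and the decode step are my own choices, not from the Microsoft sample; with InputStreamOptions.Partial, LoadAsync returns as soon as some data is available rather than waiting for the full count):
// Sketch: load whatever bytes are currently available and decode them as UTF-8 JSON.
DataReader reader = new DataReader(clientSocket.InputStream);
reader.InputStreamOptions = InputStreamOptions.Partial;

uint bytesRead = await reader.LoadAsync(1024); // maximum number of bytes to wait for
byte[] data = new byte[bytesRead];
reader.ReadBytes(data);

string json = Encoding.UTF8.GetString(data, 0, data.Length);
// json can now be deserialized, e.g. JsonConvert.DeserializeObject<Message>(json)
For larger or multi-part messages, a length prefix (or repeated LoadAsync calls until the full message has arrived) would be needed, since a single Partial load may return only part of what the server sent.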
I would like to send a stream of data (a huge file, > 2 GB) to a WCF service, process it, and then return the processed data as a stream (transferMode = "Streamed"), without buffering the entire stream in memory before sending it back.
I know the traditional approach to streaming data in and out (outside of a WCF operation) involves:
At the consuming end, sending a stream to a (WCF service) void method that takes an input Stream and an output Stream as parameters.
At the consuming end, also having a receiving stream to get the incoming, processed output Stream.
In the method, processing that input Stream and writing the processed bytes to the output Stream; those processed bytes are what is received through the output Stream.
This way, the stream flow is not broken.
E.g. from a Microsoft sample:
void CopyStream(System.IO.Stream instream, System.IO.Stream outstream)
{
//read from the input stream in 4K chunks
//and save to output stream
const int bufferLen = 4096;
byte[] buffer = new byte[bufferLen];
int count = 0;
while ((count = instream.Read(buffer, 0, bufferLen)) > 0)
{
outstream.Write(buffer, 0, count);
}
}
I would like to do the same, only the output stream would be a WCF return type. Is that possible? How do I do it with transferMode = "Streamed"?
Using WCF, you cannot have more than one parameter (or message contract object) of Stream type when you want to use transferMode = "Streamed".
Hypothetically, with pseudocode like this:
Stream StreamAndReturn(System.IO.Stream instream)
{
Stream outstream = new MemoryStream();//instantiate outstream - probably should be buffered?
while ((count = instream.Read(buffer, 0, bufferLen)) > 0)
{
//some operation on instream that will
SomeOperation(instream,outstream);
}
return outstream; // obviously this will break the streaming
}
I also tried NetTcpBinding with SessionMode set to SessionMode.Allowed, hoping to have a session that I can start, send the stream data to the service, get the results in a separate stream, and then use the OperationContext to retrieve whatever property is associated with that service instance. But it did not retain the session information; see below.
According to the MSDN documentation, I should also set InstanceContextMode = InstanceContextMode.PerSession and ConcurrencyMode = ConcurrencyMode.Multiple (see the last paragraph).
For that, I asked a question on SO, but I am still waiting for an answer. I am thinking maybe there is a better way to do it.
My recommendation would be for you to create two methods:
One that takes your input as a stream and returns an ID number generated on the server.
Another that takes the ID number, and returns the response stream.
You'll need some way of coordinating the input and output on the server, and it will make it a little more complicated if you put this on multiple servers behind a load balancer (shared state and all that), but it will allow you to use streaming for both the request side and the response side.
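A sketch of what that two-method contract might look like (the interface, operation, and parameter names are hypothetical; the binding would use TransferMode.Streamed, or StreamedRequest/StreamedResponse per direction):
// Hypothetical service contract for the two-step approach described above.
[ServiceContract]
public interface IProcessingService
{
    // Client streams the input up; the server processes (or queues) it and returns a ticket.
    [OperationContract]
    string BeginProcessing(Stream input);

    // Client asks for the processed output as a separate streamed response.
    [OperationContract]
    Stream GetResult(string ticketId);
}
The ticket (the "ID number" above) is what ties the two calls together; the server keeps the processed output, or a pointer to it, until GetResult is called.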
I have a WCF service which uploads a document using the Stream class.
Now, after this, I want to get the size of the document (the length of the Stream) to update the file attribute for FileSize.
But when doing this, WCF throws an exception saying:
Document Upload Exception: System.NotSupportedException: Specified method is not supported.
at System.ServiceModel.Dispatcher.StreamFormatter.MessageBodyStream.get_Length()
at eDMRMService.DocumentHandling.UploadDocument(UploadDocumentRequest request)
Can anyone help me solve this?
Now, after this, I want to get the size of the document (the length of the Stream) to update the file attribute for FileSize.
No, don't do that. If you are writing a file, then just write the file. At the simplest:
using(var file = File.Create(path)) {
source.CopyTo(file);
}
or before 4.0:
using(var file = File.Create(path)) {
byte[] buffer = new byte[8192];
int read;
while((read = source.Read(buffer, 0, buffer.Length)) > 0) {
file.Write(buffer, 0, read);
}
}
(which does not need to know the length in advance)
Note that some WCF options (full message security, etc.) require the entire message to be validated before processing, so they can never truly stream. If the size is huge, I suggest you instead use an API where the client splits the data and sends it in pieces (which you then reassemble at the server).
If the stream doesn't support seeking, you cannot find its length using Stream.Length.
The alternative is to copy the stream to a byte array and find its cumulative length. This involves processing the whole stream first; if you don't want that, you should add a stream length parameter to your WCF service interface.
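If you take the explicit-length route, one common shape is a message contract that carries the size as a header alongside the single streamed body member (a sketch only; the member names are illustrative, not taken from the question's actual UploadDocumentRequest):
// Hypothetical message contract: the length travels as a header,
// the file itself is the single streamed body member.
[MessageContract]
public class UploadDocumentRequest
{
    [MessageHeader]
    public long FileSize { get; set; }

    [MessageHeader]
    public string FileName { get; set; }

    [MessageBodyMember]
    public Stream FileContents { get; set; }
}
The client sets FileSize before sending, so the service can read it from the header without ever calling Length on the incoming stream.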