I have an HttpListener-based .NET webserver, which generates data on request and writes it out. I am using a StreamWriter to write to the output stream. When I write the data, not all of it is received in Chrome, even though the code writes all of it. If I go to view source, it only shows part of the data that was sent.
It almost seems like some kind of timeout, or the response getting too big, or something. Is there something that needs to be set on the HttpListener or the request context?
As per the comments:
Call Flush on the StreamWriter before closing the response, so that any data still buffered in the writer actually reaches the output stream.
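For example, a minimal sketch of that pattern (listener and payload are stand-ins for your own setup):

// Minimal sketch, assuming 'listener' is an HttpListener that is already started
// and 'payload' stands in for however the data is generated.
HttpListenerContext context = listener.GetContext();
HttpListenerResponse response = context.Response;
response.ContentType = "text/html; charset=utf-8";

var writer = new StreamWriter(response.OutputStream, Encoding.UTF8);
writer.Write(payload);
writer.Flush();                   // push anything still buffered in the StreamWriter
response.OutputStream.Close();    // completes the response so the client receives it all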
Related
I'm using raw TCP sockets. I can send 200/404/302 without a problem.
If I serve a 413 like a normal request, it works just fine:
"HTTP/1.1 413\r\nContent-Type: text/html; charset=utf-8;\r\nContent-Length: 7\r\n\r\nToo big"
However, the HTTP/1.1 RFC says I may close the connection before processing the request: https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.4.14
Request Entity Too Large
The server is refusing to process a request because the request entity is larger than the server is willing or able to process. The server MAY close the connection to prevent the client from continuing the request.
However, when I send the response without processing the entire request, both Firefox and Chrome show a connection error. When I send the response and THEN read the request, it works. The browsers seem not to process what I send them until I have read the entire stream, which defeats the point.
Or maybe I'm sending it wrong. How am I supposed to send the 413? I send the string above, flush the socket, sleep for 100 milliseconds, close the connection, sleep for another 100 milliseconds, then restart.
A request/response exchange isn't considered complete until the request data is fully received and the response fully sent. So the only way to send a valid 413 response is to read (and presumably discard) the full request body, while also sending the 413 as soon as possible, as it appears you are doing.
If you send the 413 immediately, it's up to the client whether to detect it early or not. I think most clients will just continue uploading the request, though.
Alternatively, you may close the connection, which is how you can stop the client from sending the whole message. But then the HTTP exchange is not complete, and clients may show a connection error rather than the 413.
So: if you want to stop clients uploading data, close the connection. If you want clients to get a valid 413 response, consume the whole request.
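For example, a rough sketch of the "consume the whole request" option over a raw socket (stream, contentLength, and client are assumptions standing in for your own socket-handling code):

// Drain and discard the remaining request body so the exchange is complete,
// then send the 413. 'stream', 'contentLength', and 'client' are assumptions
// standing in for your own header-parsing and connection-handling code.
byte[] discard = new byte[8192];
long remaining = contentLength;
while (remaining > 0)
{
    int read = stream.Read(discard, 0, (int)Math.Min(discard.Length, remaining));
    if (read <= 0) break;          // client gave up or closed the connection
    remaining -= read;
}

byte[] response = Encoding.ASCII.GetBytes(
    "HTTP/1.1 413 Request Entity Too Large\r\n" +
    "Content-Type: text/html; charset=utf-8\r\n" +
    "Content-Length: 7\r\n" +
    "Connection: close\r\n\r\n" +
    "Too big");
stream.Write(response, 0, response.Length);
stream.Flush();
client.Close();                    // 'client' is the accepted TcpClient/Socket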
I have a pretty big video file I upload to a web service via multipart/form-data.
It takes ~30 seconds to arrive, and I would prefer not to wait that long simply to access the parameters I send along with the file.
My question is simple, can I access parameters sent with the form without waiting for the video payload to be uploaded?
Can this be done using headers or any other methods?
Streaming vs. Buffering
It's about how the webserver is set up. For IIS you can enable Streaming.
Otherwise, by default, IIS will use 'buffering' - the whole request is loaded into memory first (IIS's memory that you can't get to) before your app running in IIS can get it.
Not using IIS? You have to figure out how to get the webserver to do the same thing.
How to stream using IIS:
Streaming large file uploads to ASP.NET MVC
Note the way the file is read in the inner loop:
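// rgbBody is a reusable byte buffer; cbRead holds the number of bytes returned by the last Read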
while ((cbRead = clientRequest.InputStream.Read(rgbBody, 0, rgbBody.Length)) > 0)
{
fileStream.Write(rgbBody, 0, cbRead);
}
Here, instead of just saving the data like that question does, you will have to parse whatever XML/JSON/etc. contains the file parameters you speak of, and expect the video to be sent afterwards. You can process the parameters right away if it's a quick process, then get the rest of the video, or you can hand them off to a background thread.
You probably won't be able to parse it by just dumping what you have into a JSON or XML parser; there will be an unclosed tag or } at the top that isn't closed until after the video data is uploaded (however that is done). Or, if it's multipart data from a form submission, as you imply, you will have to parse that partial upload yourself instead of just asking IIS for the post data.
So this will be tricky. You can start by writing 1 KB at a time to a log file with a timestamp, to prove that you're getting the data as it arrives (see the sketch below). After that it's just a coding headache.
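For instance, a quick diagnostic along those lines might look like this (clientRequest is the request object from the linked answer; the log path is just an example):

// Quick diagnostic: prove the data arrives incrementally by logging chunk sizes with timestamps.
// 'clientRequest' comes from the linked answer; the log path is only an example.
byte[] buffer = new byte[1024];
int bytesRead;
using (var log = new StreamWriter(@"C:\temp\upload-trace.log", append: true))
{
    while ((bytesRead = clientRequest.InputStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        log.WriteLine("{0:O}  received {1} bytes", DateTime.UtcNow, bytesRead);
        log.Flush();
        // feed 'buffer' into your incremental multipart/JSON parsing here
    }
}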
Getting this to work also means you'll have to have some control over the client and how it sends the data.
That's because you'll at least have to ensure it sends the file parameters FIRST!
Which concerns me, because if you have control of the client, why can't you take the simple route (as Nobody and Nkosi imply) and use 2 requests? You mention you need one. Why not write JS client code to send the parameters first in an XHR and then the file in a second request, using a correlation ID in both to tie them together? (The server could return this ID from the first request, and you could send it with the second.)
Obviously, if you just have a form with some inputs and a file upload and do a plain submit, then you need one request ;-) But if you have control over the client side, you're not stuck with that; a rough sketch of the two-request idea is below.
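For what it's worth, here is the same two-request idea sketched from a .NET client with HttpClient rather than XHR (URLs, field names, and the correlation-id shape are all assumptions):

// Same two-request idea the paragraph describes, sketched with HttpClient instead of XHR.
// URLs, field names, and the correlation-id response are all assumptions.
static async Task UploadAsync(string videoPath)
{
    using (var http = new HttpClient())
    {
        // 1) Send the parameters first; assume the server replies with a correlation id.
        var parameters = new StringContent("{\"title\":\"My video\"}", Encoding.UTF8, "application/json");
        var first = await http.PostAsync("https://example.com/api/uploads", parameters);
        string correlationId = await first.Content.ReadAsStringAsync();

        // 2) Send the file in a second request, tagged with the same id.
        using (var file = File.OpenRead(videoPath))
        {
            await http.PostAsync("https://example.com/api/uploads/" + correlationId + "/video",
                                 new StreamContent(file));
        }
    }
}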
Good luck, there is some advanced programming here, but nothing super high-tech. You will make it work!!!
If you don't have control over the server code, you are probably stuck: if the server app's webserver is buffering, the server app won't see anything until the upload completes. Of course, wanting to do something with the file parameters first really implies you have control of the server side ;-)
I couldn't find an exact example of what I'm looking for.
I have a REST/WCF endpoint that takes in a Stream and returns a Stream. I am able to do "live" streaming of the request; by that I mean the incoming stream gets read by the service as it is written to by the client (using a shared buffer stream in the service).
I would like to achieve the same thing on the outgoing stream but all the examples I found write all the data to a local stream then return it back to the client.
How can I access the underlying stream of the response and start writing to it, allowing the client to start reading from it without exiting my service method?
Is it even possible to achieve what I need with WCF and HTTP streaming?
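For context, the kind of streamed contract being described looks roughly like this (the names and binding values here are assumptions, not the actual service):

// Rough sketch of a Stream-in/Stream-out WCF contract like the one described;
// names and binding values are assumptions.
[ServiceContract]
public interface IStreamingService
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "process")]
    Stream Process(Stream input);
}

// Streaming only works if the binding is configured for it, e.g.:
var binding = new WebHttpBinding
{
    TransferMode = TransferMode.Streamed,
    MaxReceivedMessageSize = long.MaxValue
};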
I'm using the System.Net.HttpWebRequest class to implement a simple HTTP downloader that can be paused, canceled and even resumed after it was canceled (with the HTTP Range request header).
It's clear that HttpWebRequest.GetResponse() is when the HTTP request is actually sent to the server, and the method returns when an HTTP response is received (or a timeout occurs). However, the response body is represented as a Stream, which leaves me wondering whether the response body is actually transmitted along with the response header (i.e. it's already downloaded when GetResponse() returns), or whether it is only downloaded on demand, when I try to read from the response stream. Or maybe when I call the HttpWebResponse.GetResponseStream() method?
Unfortunately the MSDN documentation doesn't say, and I don't know enough about the HTTP protocol to tell.
How do chunked transfers and the like behave in this case (that is, how should I handle them in my C# application)? When is the response data actually downloaded from the server?
This all depends on TCP, the protocol underlying HTTP. The way TCP works is that data is sent in segments. Whenever a client sends a segment to the server, it includes information about how much additional data it is ready to receive. This usually corresponds to some kind of receive buffer on the client's side. When the client receives some data, it also sends a segment to the server acknowledging the received data.
So, assuming the client is very slow in processing the received data, the sequence of events could go like this:
Connection is established; the client says how much data it is ready to receive.
Server sends one or more segments to the client, containing at most the amount of data the client said it is ready to receive.
Client says to the server: I received the data you sent me, but don't send me any more for now.
Client processes some of the data.
Client says to the server: you can send me x more bytes of data.
What does this mean with regard to GetResponse()? When you call GetResponse(), the client sends the request, reads the HTTP header of the response (which usually fits into one segment, but may be more), and returns. At this point, if you don't start reading the response stream (which you get by calling GetResponseStream()), some data from the server is still received, but only enough to fill the receive buffer. Once that is full, no more data is transmitted until you start reading the response stream.
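In code, a rough sketch of that behaviour (the URL and buffer size are arbitrary):

// Headers are read when GetResponse() returns; the body is pulled from the
// network as you read the stream. URL and buffer size are arbitrary.
var request = (HttpWebRequest)WebRequest.Create("http://example.com/bigfile.bin");
request.AddRange(1024);                    // optional: resume from byte offset 1024

using (var response = (HttpWebResponse)request.GetResponse())   // headers received here
using (var body = response.GetResponseStream())
{
    byte[] buffer = new byte[8192];
    int read;
    while ((read = body.Read(buffer, 0, buffer.Length)) > 0)     // each Read may hit the network
    {
        // write 'buffer' to disk, update progress, check for pause/cancel, etc.
    }
}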
I'm trying to get the raw data sent to IIS using an HttpHandler. However, because the request is a "GET" request without the "Content-Length" header set, it reports that there is no data to read (TotalBytes), and the input stream is empty. Is there any way I can plug into the IIS pipeline (maybe even before the request is parsed) and just take control of the request and read its raw data? I don't care if I need to parse headers and the like myself; I just want to get my hands on the actual request and tell IIS to ignore it. Is that at all possible? Because right now it looks like I need to do the alternative, which is developing a custom standalone server, and I really don't want to do that.
Most web servers will ignore (and rarely give you access to) the body of a GET request, because the HTTP semantics imply that it is to be ignored anyway. You should consider another method (for example POST or PUT).
See this question and the link in this answer:
HTTP GET with request body
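If switching to POST (or PUT) is an option, a plain handler can then read the raw body directly; a minimal sketch (the class name is illustrative):

// Minimal sketch: with POST (or PUT), an IHttpHandler can read the raw body directly.
public class RawBodyHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        using (var reader = new StreamReader(context.Request.InputStream))
        {
            string rawBody = reader.ReadToEnd();   // empty for GET, populated for POST/PUT
            context.Response.ContentType = "text/plain";
            context.Response.Write("Received " + rawBody.Length + " characters");
        }
    }
}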