I am sending a page to the client in chunks by calling Response.Flush() in the middle of my page, so the browser gets the first part of the HTML and can start downloading resources while my server continues to process the rest of the request.
Because of certain 3rd-party services between my IIS server and my client (CDN, firewall, load balancing, etc.) I need to set the header Transfer-Encoding: chunked so they will know that the response will arrive in chunks.
I tried setting the header by calling: Response.Headers.Add("Transfer-Encoding", "chunked");
For some reason when I do this, I get a blank page back after waiting quite a long time, even when contacting my IIS server directly, without going through any of the 3rd parties. When attaching a debugger to the process, I don't see any errors.
Removing the 'Transfer-Encoding' header works, but I need this header for some of the 3rd parties I'm using.
Does anyone know how I can set this header in my web application?
Btw - I also tried setting this header in the 'Response Headers' section in IIS directly, and the response is still empty when doing this.
According to the description on Wikipedia, chunked transfer encoding requires the response body to be encoded in a specific way, one of the main points of the described format being:
Each chunk starts with the number of octets of the data it embeds, expressed as a hexadecimal number in ASCII, followed by optional parameters (chunk extension) and a terminating CRLF sequence, followed by the chunk data. The chunk is terminated by CRLF. If chunk extensions are provided, the chunk size is terminated by a semicolon followed by the extension name and an optional equal sign and value.
As far as I know, calling Response.Flush() does not generate this specific markup. It just empties any buffered response content to the client.
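For illustration, here's a minimal sketch (a hypothetical helper, not anything IIS actually runs) of what that framing looks like for a single chunk followed by the terminating zero-length chunk:

```csharp
using System.Text;

static class ChunkedDemo
{
    // Frame a payload as one HTTP/1.1 chunk plus the terminating zero-length chunk.
    public static string Frame(string payload)
    {
        int size = Encoding.UTF8.GetByteCount(payload);
        // "<size-in-hex>\r\n<data>\r\n" ... and "0\r\n\r\n" ends the body
        return size.ToString("x") + "\r\n" + payload + "\r\n" + "0\r\n\r\n";
    }
}
```

So Frame("Hello") yields "5\r\nHello\r\n0\r\n\r\n" - which is why a server can't just prepend the Transfer-Encoding header to an unframed body.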
You may have a look at "When does the server use chunked transfer encoding?" in this answer:
https://stackoverflow.com/a/2711405/1236044
It seems to imply that, with the correct settings, IIS should automatically switch to chunked transfer encoding when needed:
The server will be using chunked transfer encoding if you disable buffering:
Set context.Response.BufferOutput to false
According to this question
You might further need to set Server.ScriptTimeout (in seconds) to avoid your script being interrupted.
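Putting those two settings together, a hedged sketch of a handler (the handler name and timeout value are illustrative; BufferOutput and ScriptTimeout are the standard ASP.NET properties) might look like:

```csharp
using System.Web;

// Hypothetical handler: with buffering disabled, IIS switches to
// Transfer-Encoding: chunked on its own -- don't set the header manually.
public class ChunkedPageHandler : IHttpHandler
{
    public bool IsReusable => false;

    public void ProcessRequest(HttpContext context)
    {
        context.Response.BufferOutput = false;   // let IIS use chunked encoding
        context.Server.ScriptTimeout = 300;      // seconds; avoid mid-stream interruption
        context.Response.Write("<html><head>...</head>");
        context.Response.Flush();                // first chunk goes to the client now
        // ... continue the long-running work, then write the rest of the page ...
        context.Response.Write("<body>...</body></html>");
    }
}
```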
Related
I have a pretty big video file I upload to a web service via multipart/form-data.
It takes ~30 seconds to arrive, and I would prefer not to wait that long simply to access the parameters I send along with the file.
My question is simple, can I access parameters sent with the form without waiting for the video payload to be uploaded?
Can this be done using headers or any other methods?
Streaming vs. Buffering
It's about how the webserver is set up. For IIS you can enable Streaming.
Otherwise, by default, IIS uses buffering: the whole request is loaded into memory first (IIS memory that you can't get to) before your app running in IIS can see any of it.
Not using IIS? You'll have to figure out how to get your web server to do the same thing.
How to stream using IIS:
Streaming large file uploads to ASP.NET MVC
Note the way the file is read in the inner loop:
byte[] rgbBody = new byte[32768];  // read buffer
int cbRead;
while ((cbRead = clientRequest.InputStream.Read(rgbBody, 0, rgbBody.Length)) > 0)
{
    fileStream.Write(rgbBody, 0, cbRead);
}
Here, instead of just saving the data as that question does, you will have to parse whatever XML/JSON/etc. contains the file parameters you speak of, and expect the video to be sent afterwards. You can process the parameters right away if it's a quick operation, then read the rest of the video, or you can hand them off to a background thread.
You probably won't be able to parse it by just dumping what you have into a JSON or XML parser; there will be an unclosed tag or } near the top that isn't closed until after the video data has been uploaded (however that is done). Or, if it's multipart data from a form submission, as you imply, you will have to parse that partial upload yourself, instead of just asking IIS for the post data.
So this will be tricky. You can start by writing 1 KB at a time to a log file with a timestamp, to prove that you're getting the data as it arrives. After that it's just a coding headache.
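To make the "parse the partial upload yourself" idea concrete, here is a hedged sketch (a hypothetical helper; real code would accumulate bytes incrementally from Request.InputStream) that pulls the first form field out of the beginning of a multipart body, before the file part has finished arriving:

```csharp
using System;

static class MultipartPeek
{
    // Sketch: extract the first text field's value from the *beginning* of a
    // multipart body, without waiting for the (possibly huge) file part.
    // Returns null if that much of the stream hasn't been received yet.
    public static string FirstFieldValue(string bodySoFar, string boundary)
    {
        string marker = "--" + boundary;
        int partStart = bodySoFar.IndexOf(marker, StringComparison.Ordinal);
        if (partStart < 0) return null;
        int headerEnd = bodySoFar.IndexOf("\r\n\r\n", partStart, StringComparison.Ordinal);
        if (headerEnd < 0) return null;            // part headers not fully received yet
        int valueStart = headerEnd + 4;
        int valueEnd = bodySoFar.IndexOf("\r\n" + marker, valueStart, StringComparison.Ordinal);
        if (valueEnd < 0) return null;             // field value not fully received yet
        return bodySoFar.Substring(valueStart, valueEnd - valueStart);
    }
}
```

This only works if the client really does send the small fields before the file, which is the ordering concern raised below.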
Getting this to work also means you'll have to have some control over the client and how it sends the data.
That's because you'll at least have to ensure it sends the file parameters FIRST!
Which concerns me, because if you have control of the client, why can't you take the simple route (as Nobody and Nkosi imply) and use two requests? You mention you need one. Why not write client-side JS to send the parameters first in an XHR, and then the file in a second request, using a correlation ID in both to tie them together? (The server could return this ID from the first request and you could send it in the second.)
Obviously, if you just have a form with some inputs and a file upload and do a submit, then you need one request ;-) But if you have control over the client side, you're not stuck with that.
Good luck, there is some advanced programming here, but nothing super high-tech. You will make it work!!!
If you don't have control over the server code, you are probably stuck: if the server app's web server is buffering, the server app won't get anything early. Of course, if you want to do something with the file parameters first, that really implies you have control of the server side ;-)
I have code performing an HTTP POST to a vendor's site using WebClient.UploadValues. When the payload is somewhere under 1.6 MB in size, the response is some XML data as expected. When larger, the response from the vendor's site is null.
var client = new WebClient();
client.Encoding = Encoding.UTF8;
byte[] response = client.UploadValues(strTargetUri, paramsNameValueCollection);
The vendor indicates they routinely receive larger payloads. I can't find any IIS or WCF settings that would be limiting outgoing payload by size or time. If I were exceeding a limit I set, .NET would throw an exception, not just return null.
Any suggestions of what I might be missing on my side? Or something I should be sharing with the vendor?
UPDATE
I've received back samples from the vendor's end. When the payload is under ~2 MB, they show straight-up XML such as:
<STAT>
<REQUEST _SEQUENCE_ID="1">
<CUSTOMER>...
But when larger, the received data 1) is URL-encoded, 2) is still preceded by other query-string components, and 3) contains some of the embedded "add-on" XML, such as XML namespace references:
integrator=MyVal&userId=MyUser&password=12345&payload=%3cSTAT+xmlns%3axsd%3d%22http%3a%2f%2fwww.w3.org%2f2001%2fXMLSchema%22+xmlns%3axsi%3d%22http%3a%2f%2fwww.w3.org%2f2001%2fXMLSchema-instance%22%3e%3cREQUEST+_SEQUENCE_ID%3d%221%22...
My simplistic understanding of POSTs, and the fact that I set nothing differently between scenarios, makes me think the difference is that the vendor's processing software has "choked" and is showing them different results. I'm getting my network engineering team to help me trace our outgoing packets so we can verify what we're sending at the last moment.
John,
The limit is most likely on the server side, so you may have to contact the vendor. For example, the default maximum non-multipart POST size for a Tomcat server is only 2 MB. If you can detect that the server is Tomcat, you could suggest that they increase the maxPostSize attribute of the Connector.
Joe
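For reference, maxPostSize sits on the Connector element in Tomcat's conf/server.xml; a sketch raising the limit to 100 MB (the port and protocol values are just examples, and the vendor would have to apply this on their side):

```xml
<!-- raise the non-multipart POST limit from the 2 MB default -->
<Connector port="8080" protocol="HTTP/1.1"
           maxPostSize="104857600" />
```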
I have a problem whereby I have lots of very small web service calls to a Java endpoint (hosted on Oracle GlassFish 3.1.X). I added this service as a Service Reference (using the remote wsdl file) and use a BasicHttpBinding to reach it.
Since the service is located half the world away plus going across the internet, we frequently experience some packet loss when reaching the destination. We are looking at any way possible to reduce the impact of these occurrences. We have been using Wireshark to give us detailed knowledge of what is going across the wire to our destination, and back again. I was curious to see that for every single request we generate, we are sending 2 packets. The packet boundary is always between the HTTP header and the <s:Envelope> tag. To me this is a big overhead, particularly in my environment where I want to minimise the amount of packets sent (to reduce overall packetloss).
In most cases (99% of our calls), the HTTP header packet is 210 bytes followed by a SOAP envelope packet of 291 bytes (excluding the 54 bytes of TCP/IP overhead for each packet). Totalling these gives 501 bytes - just over a third of our Max Segment Size of 1460 bytes. Why isn't WCF sending this HTTP POST request as a single packet of 501 bytes (555 bytes if you include the 54 bytes of TCP/IP overhead)?
Does anyone know why it does this? It almost seems as if the HttpWebRequest object is calling .Flush() on the stream after writing its headers, but I'm not sure why it would do this.
I've tried different combinations on these:
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
With no effect.
EDIT
Wrong: I've investigated a bit further, and when HttpWebRequest.GetRequestStream() is called, it writes the headers to the stream immediately. At some stage before you write to the stream that is given back to you, the network layer flushes these (I guess? unless a deliberate flush happens somewhere). When you finally start writing to the stream, the header packet has already been sent. I'm not sure how to prevent this; it seems to be a hard assertion inside HttpWebRequest that calling GetRequestStream() will write the headers. For my small requests, I want nothing to be written until I have closed the stream, but that goes against its streaming nature.
And the answer is - it can't be done with HttpWebRequest (and hence BasicHttpBinding):
http://us.generation-nt.com/answer/too-packets-httpwebrequest-help-23298102.html
My ASP.NET app returns a JSON object to the user; it contains binary-encoded data. Because of this I decided to enable HTTP compression, and that's when the problem with Content-Length began.
If I enable compression, the Content-Length header is ignored while the response is sent, and the connection is not closed immediately. The connection stays open for about 15 seconds after all the data has been sent.
I would like to have HTTP compression enabled but don't know how to solve the problem with Content-Length header.
context.Response.AddHeader("Content-Length", jsonObject.ToString().Length.ToString());
context.Response.Write(jsonObject);
context.Response.Flush();
context.Response.Close();
Content-Length represents the length of the data actually being transferred; in your case that is the compressed bytes. See this SO question, whose answer links to the relevant RFC: content-length when using http compression
In case HTTP compression is happening via the web server (and not in your code), I would suggest not adding the Content-Length header yourself; hopefully the web server will add it correctly.
This can be verified using Chrome on http://httpd.apache.org/ - if you look at the developer console you will see the Content-Length is much smaller than the actual uncompressed page in bytes.
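To see concretely why the header must describe the compressed bytes, here is a small standalone sketch (plain .NET, no IIS involved; the helper name is hypothetical) that gzips a payload and compares the two sizes - Content-Length should carry the compressed figure:

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;

static class CompressedLength
{
    // Returns (uncompressed byte count, compressed byte count) for a UTF-8 payload.
    public static (int Plain, int Gzipped) Measure(string payload)
    {
        byte[] plain = Encoding.UTF8.GetBytes(payload);
        var buffer = new MemoryStream();
        using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
        {
            gzip.Write(plain, 0, plain.Length);
        }   // disposing flushes the gzip trailer into buffer
        // Content-Length must be buffer.Length (compressed), not plain.Length.
        return (plain.Length, (int)buffer.Length);
    }
}
```

Setting the header from jsonObject.ToString().Length, as in the question's code, describes neither of these (it counts characters, not bytes), which is why the connection lingers until a timeout closes it.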
Can anyone guide me in acquiring the POST Content-Length of a website by just using sockets? Thanks and kudos!
(I'm avoiding using httpwebrequest for some reason)
If it's a proxy application you don't need to parse headers at all. You just need to mirror the data from one side to the other, as bytes. The only thing you need to parse is, for example, the initial HTTP CONNECT request, or whatever initial handshake with the client causes you to set up the upstream connection. The rest of it is just byte copying and EOS and error propagation.
In the HTTP protocol the headers are separated from the content by a double CRLF.
So you can either parse the headers and read the Content-Length header, or you can figure out the length of the content yourself (since you know where the headers end and the content starts).
HTTP/1.1 message length rules are described in section 4.4 of RFC 2616.
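As a hedged sketch of how that plays out on a raw socket read (a hypothetical helper operating on the bytes accumulated so far, decoded as ASCII):

```csharp
using System;

static class RawHttp
{
    // Sketch: extract Content-Length from a raw HTTP message head.
    // Returns -1 if the header block is incomplete or the header is absent
    // (in which case the RFC 2616 section 4.4 fallback rules apply).
    public static int ContentLength(string raw)
    {
        int headerEnd = raw.IndexOf("\r\n\r\n", StringComparison.Ordinal);
        if (headerEnd < 0) return -1;   // haven't received the full head yet
        string[] lines = raw.Substring(0, headerEnd)
                            .Split(new[] { "\r\n" }, StringSplitOptions.None);
        foreach (string line in lines)
        {
            int colon = line.IndexOf(':');
            if (colon > 0 && line.Substring(0, colon).Trim()
                .Equals("Content-Length", StringComparison.OrdinalIgnoreCase))
            {
                return int.Parse(line.Substring(colon + 1).Trim());
            }
        }
        return -1;                      // no Content-Length header present
    }
}
```

Note that header names are case-insensitive, and a message using Transfer-Encoding: chunked will legitimately have no Content-Length at all.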