HttpWebRequest is slow with chunked data - c#

I'm using HttpWebRequest to connect to my in-house built HTTP server. My problem is that it is a lot slower than connecting to the server via, for instance, PostMan (https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en), which presumably uses Chrome's built-in functions to request data.
The server is built using this example on MSDN (http://msdn.microsoft.com/en-us/library/dxkwh6zw.aspx) and uses a buffer size of 64. The request is an HTTP request with some data in the body.
When connecting via PostMan, the request is split into a bunch of chunks and BeginReceive() is called multiple times, each time receiving 64B and taking about 2 milliseconds, except the last one, which receives less than 64B.
But when connecting with my client using HttpWebRequest, the first BeginReceive() callback receives 64B and takes about 1 ms, the following receives only 47B and takes almost 200 ms, and finally the third receives about 58B and takes 2 ms.
What is up with the second BeginReceive? I note that the connection is established as soon as I start to write data to the HttpWebRequest input stream, but the data reception does not start until I call GetResponse().
Here is my HttpWebRequest code:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = verb;
request.Timeout = timeout;
request.Proxy = null;
request.KeepAlive = false;
request.Headers.Add("Content-Encoding", "UTF-8");
System.Net.ServicePointManager.Expect100Continue = false;
request.ServicePoint.Expect100Continue = false;
if ((verb == "POST" || verb == "PUT") && !String.IsNullOrEmpty(data))
{
var dataBytes = Encoding.UTF8.GetBytes(data);
try
{
var dataStream = request.GetRequestStream();
dataStream.Write(dataBytes, 0, dataBytes.Length);
dataStream.Close();
}
catch (Exception ex)
{
throw;
}
}
WebResponse response = null;
try
{
response = request.GetResponse();
}
catch (Exception ex)
{
throw;
}
var responseReader = new StreamReader(response.GetResponseStream(), Encoding.UTF8);
var responseStr = responseReader.ReadToEnd();
responseReader.Close();
response.Close();
What am I doing wrong? Why is it behaving so differently from an HTTP request made by a web browser? This is effectively adding 200 ms of lag to my application.

This looks like a typical case of the Nagle algorithm clashing with TCP delayed acknowledgement. In your case you are sending a small HTTP request (~170 bytes according to your numbers). This is likely less than the MSS (Maximum Segment Size), meaning that the Nagle algorithm will kick in. The server is probably delaying the ACK, resulting in a delay of up to 500 ms. See the links for details.
You can disable Nagle via ServicePointManager.UseNagleAlgorithm = false (before issuing the first request); see MSDN.
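For example, a minimal sketch of both options (assuming it runs before any request to that endpoint has been issued; the per-endpoint variant is my addition, not something this answer prescribes):
// Option 1: disable Nagle for the whole AppDomain, before the first request is created.
System.Net.ServicePointManager.UseNagleAlgorithm = false;
// Option 2: disable Nagle only for this endpoint via its ServicePoint.
var request = (HttpWebRequest)WebRequest.Create(url);
request.ServicePoint.UseNagleAlgorithm = false;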
Also see Nagle’s Algorithm is Not Friendly towards Small Requests for a detailed discussion including a Wireshark analysis.
Note: In your answer you are running into the same situation when you do write-write-read. When you switch to write-read you overcome this problem. However, I do not believe you can instruct HttpWebRequest (or HttpClient, for that matter) to send small requests as a single TCP write operation. That would probably be a good optimization in some cases, although it may lead to some additional array copying, affecting performance negatively.

200 ms is the typical latency of the Nagle algorithm. This gives rise to the suspicion that either the server or the client is using Nagling. You say you are using a sample from MSDN as the server... well, there you go. Use a proper server or disable Nagling.
It is very unlikely that the built-in HttpWebRequest class adds an unnecessary 200 ms of latency. Look elsewhere; look at your code to find the problem.
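If the server really is the MSDN asynchronous socket sample, disabling Nagle there is a one-line change; a sketch, assuming the sample's AcceptCallback structure:
// Inside the server's AcceptCallback, right after the connection is accepted:
Socket handler = listener.EndAccept(ar);
handler.NoDelay = true; // Socket.NoDelay = true turns off the Nagle algorithm for this connection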

It seems like HttpWebRequest is just really slow.
Funny thing: I implemented my own HTTP client using Sockets, and I found a clue to why HttpWebRequest is so slow. If I encoded my ASCII headers into their own byte array and sent them on the stream, followed by the byte array encoded from my data, my Sockets-based HTTP client behaved exactly like HttpWebRequest: first it filled one buffer with data (part of the header), then it partially used another buffer (the rest of the header), waited 200 ms, and then sent the rest of the data.
The code:
TcpClient client = new TcpClient(server, port);
NetworkStream stream = client.GetStream();
// Send this out
stream.Write(headerData, 0, headerData.Length);
stream.Write(bodyData, 0, bodyData.Length);
stream.Flush();
The solution was of course to append the two byte arrays before sending them out on the stream. My application is now behaving as expected.
The code with a single stream write:
TcpClient client = new TcpClient(server, port);
NetworkStream stream = client.GetStream();
var totalData = new byte[headerData.Length + bodyData.Length];
Array.Copy(headerData, totalData, headerData.Length);
Array.Copy(bodyData, 0, totalData, headerData.Length, bodyData.Length);
// Send this out
stream.Write(totalData, 0, totalData.Length);
stream.Flush();
And HttpWebRequest seems to send the header before I write to the request stream, so it might be implemented somewhat like my first code sample. Does this make sense at all?
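An alternative sketch that keeps the two separate writes but avoids the stall is to disable Nagle on the TcpClient itself (this follows from the Nagle explanation in the other answer; I am assuming it applies here as well):
TcpClient client = new TcpClient(server, port);
client.NoDelay = true; // disable Nagle so the second, small write is not held back waiting for an ACK
NetworkStream stream = client.GetStream();
stream.Write(headerData, 0, headerData.Length);
stream.Write(bodyData, 0, bodyData.Length);
stream.Flush();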
Hope this is helpful for anyone with the same problem!

Try this: you need to dispose of your IDisposables:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = verb;
request.Timeout = timeout;
request.Proxy = null;
request.KeepAlive = false;
request.Headers.Add("Content-Encoding", "UTF-8");
System.Net.ServicePointManager.Expect100Continue = false;
request.ServicePoint.Expect100Continue = false;
if ((verb == "POST" || verb == "PUT") && !String.IsNullOrEmpty(data))
{
var dataBytes = Encoding.UTF8.GetBytes(data);
using (var dataStream = request.GetRequestStream())
{
dataStream.Write(dataBytes, 0, dataBytes.Length);
}
}
string responseStr;
using (var response = request.GetResponse())
{
using (var responseReader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
{
responseStr = responseReader.ReadToEnd();
}
}

Related

Why does the (HttpWebResponse) ResponseStream stop reading data from internet radio?

I am using this code to get the data from an Icecast radio stream, but the ResponseStream stops reading data at 64K received. Can you help me with this?
HttpWebRequest request = (HttpWebRequest) WebRequest.Create("http://icecast6.play.cz/radio1-128.mp3");
request.AllowReadStreamBuffering = false;
request.Method = "GET";
request.BeginGetResponse(new AsyncCallback(GetShoutAsync), request);
void GetShoutAsync(IAsyncResult res)
{
HttpWebRequest request = (HttpWebRequest) res.AsyncState;
HttpWebResponse response = (HttpWebResponse) request.EndGetResponse(res);
Stream r = response.GetResponseStream();
byte[] data = new byte[4096];
int read;
while ((read = r.Read(data, 0, data.Length)) > 0)
{
Debug.WriteLine(data[0]);
}
}
I don't see any obvious problems in your code, apart from not using async-await, which greatly simplifies the kind of asynchronous code you're developing :-)
What do you mean “the ResponseStream stops reading”?
If the connection is dropped, then my #1 idea is that the server does that. Use Wireshark to confirm, and then use Wireshark to compare the request's HTTP header with e.g. Winamp's when it starts playing that stream. I'm sure you'll find some important differences.
If, however, it merely pauses, that's normal.
Upon connecting, streaming servers typically send you some initial amount of data, and then they only send their data in real time. So, after you've received that initial buffer, you'll only get the data at the rate of your stream, i.e. 16 kbytes/sec for your 128 kbit/sec radio.
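A quick way to confirm this is to log the observed throughput in the read loop; a sketch based on the code in the question (the Stopwatch is my addition):
var sw = System.Diagnostics.Stopwatch.StartNew();
long totalBytes = 0;
byte[] data = new byte[4096];
int read;
while ((read = r.Read(data, 0, data.Length)) > 0)
{
totalBytes += read;
// After the initial burst this should settle around ~16 kB/s for a 128 kbit/s stream.
Debug.WriteLine("avg bytes/sec: " + (long)(totalBytes / sw.Elapsed.TotalSeconds));
}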
BTW, some clients send an “Initial-Burst” HTTP header with the request, but I was unable to find documentation for that header. When I worked on my radio app for WP7, I basically replicated the behavior of another iOS app.
Finally, I wrote this code to solve the issue. It requires the Windows.Web.Http namespace, and it looks like this:
// Requires the Windows.Web.Http and Windows.Storage.Streams namespaces.
var httpClient = new Windows.Web.Http.HttpClient();
Uri url = new Uri("http://icecast6.play.cz/radio1-128.mp3");
HttpResponseMessage response = await httpClient.GetAsync(
    url,
    HttpCompletionOption.ResponseHeadersRead);
IInputStream inputStream = await response.Content.ReadAsInputStreamAsync();
try
{
    ulong totalBytesRead = 0;
    IBuffer buffer = new Windows.Storage.Streams.Buffer(100000);
    do
    {
        buffer = await inputStream.ReadAsync(
            buffer,
            buffer.Capacity,
            InputStreamOptions.Partial);
        // Some stuff here...
        totalBytesRead += buffer.Length;
        Debug.WriteLine(buffer.Length + " " + totalBytesRead);
    } while (buffer.Length > 0);
    Debug.WriteLine(totalBytesRead);
}
finally
{
    inputStream.Dispose();
}
I hope you guys enjoy it.

Retrying POST request loses body c# winforms

I'm facing an issue where, after retrying the request, my POST data somehow gets lost.
Code sample below. (Please note that request.Timeout = 1 is set for testing purposes, to reproduce the behavior shown in the code below.)
// post_data_final is obtained in a separate function
private void request_3()
{
for(int i=1; i<=5; i++)
{
byte[] byteArray = Encoding.ASCII.GetBytes(post_data_final);
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(site_URI);
request.Method = "POST";
//some headers info
request.Timeout = 1;
request.ContentLength = byteArray.Length;
using (Stream os = request.GetRequestStream())
{
os.Write(byteArray, 0, byteArray.Length);
}
try
{
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
//some code about response
}
catch (WebException wex)
{
if (wex.Status == WebExceptionStatus.Timeout)
{
continue;
}
//some additional checks
}
}
}
The odd thing is that the first request (up until the request timeout error) goes well. Subsequent requests go out without the POST data, although the content length is still set correctly (i.e. it stays the same as in the previous request).
Updated:
post_data_final is obtained in a separate function. It is not used (except via byteArray) or changed in the request_3() function.
The request works fine if it gets into the for loop and no Timeout exception has occurred. So if I just put my request into a for loop, it will make a particular number of valid requests. As soon as I get a Timeout exception, the next request goes out without POST data.
The source code has been edited for those who think that recursion is a bad idea. The edited code still doesn't work.
Any suggestions are appreciated
I can't find anything wrong in your code, so please provide more details, as the comments mentioned.
private void request_3()
{
bool sendData = true;
int numberOfTimeOuts = 0;
// The following only needs to be done once, unless you alter post_data_final after each timeout.
byte[] dataToSend = Encoding.ASCII.GetBytes(post_data_final);
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(site_URI);
using (Stream outputStream = request.GetRequestStream())
outputStream.Write(dataToSend, 0, dataToSend.Length);
// request.Timeout = 1000 * 15; would mean 15 seconds.
while(sendData && numberOfTimeOuts < MAX_NUMBER_OF_TIMEOUTS)
{
try
{
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if(response != null)
processResponse(response);
else
{
//You should handle this case aswell.
}
sendData = false;
}
catch(WebException wex)
{
if (wex.Status == WebExceptionStatus.Timeout)
numberOfTimeOuts++;
else
throw;
}
}
}
The issue was caused by using Fiddler2, a traffic-intercepting tool similar to Wireshark.
The requested site uses the HTTPS protocol. For debugging purposes I installed Fiddler2 and the Fiddler2 certificate to be able to see all incoming and outgoing responses. For some magic reason, when I turned off Fiddler2 and added some additional logging to the console, I figured out that the requests actually were valid (i.e. the POST body data still exists after the first request).
So with Fiddler2 the code above doesn't work when we hit a Timeout exception. Without Fiddler2 everything works fine under the same circumstances and using the same code.
I didn't dig deep into Fiddler2, but it seems to me the issue could only be a compatibility problem between VS2010 and the internal proxy for error codes (taking into account that point 2 under the "Updates" area, where Fiddler2 was also used, worked fine for success codes, i.e. 2xx-3xx).
Thanks everyone for getting attention into this.

HttpWebRequest stops working suddenly, no response received after a few requests

I'm working on a WPF .NET 4.0 application. I have a search bar. For each search token I need to make 8 HTTP requests to 8 separate URLs to get search results. I send the 8 requests to the server 400 milliseconds after the user stops typing in the search bar. For the first 6 or 7 search tokens the results come back nicely, but after that HttpWebRequest suddenly stops working silently: no exception is thrown and no response is received. I'm working on Windows 7, and I disabled the firewall too. I don't know where the subsequent HTTP requests are lost.
Can anyone shed some light on how to fix this issue?
Below is my code for HttpWebRequest call.
public static void SendReq(string url)
{
// Create a new HttpWebRequest object.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.ContentType = "application/x-www-form-urlencoded";
request.Proxy = new WebProxy("192.168.1.1", 8000);
// Set the Method property to 'POST' to post data to the URI.
request.Method = "POST";
// start the asynchronous operation
request.BeginGetRequestStream(new AsyncCallback(GetRequestStreamCallback), request);
}
private static void GetRequestStreamCallback(IAsyncResult asynchronousResult)
{
HttpWebRequest request = (HttpWebRequest)asynchronousResult.AsyncState;
// End the operation
Stream postStream = request.EndGetRequestStream(asynchronousResult);
string postData = this.PostData;
// Convert the string into a byte array.
byte[] byteArray = Encoding.UTF8.GetBytes(postData);
// Write to the request stream.
postStream.Write(byteArray, 0, byteArray.Length);
postStream.Close();
// Start the asynchronous operation to get the response
request.BeginGetResponse(new AsyncCallback(GetResponseCallback), request);
}
private static void GetResponseCallback(IAsyncResult asynchronousResult)
{
HttpWebRequest request = (HttpWebRequest)asynchronousResult.AsyncState;
// End the operation
using(HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(asynchronousResult))
{
using(Stream streamResponse = response.GetResponseStream())
{
using(StreamReader streamRead = new StreamReader(streamResponse))
{
string responseString = streamRead.ReadToEnd();
Debug.WriteLine(responseString);
}
}
}
}
I think I am very late, but I still want to answer your question; maybe it can be helpful to others. By default, the HTTP requests you make are HTTP 1.1 requests, and HTTP 1.1 requests use keep-alive connections by default. So when you make too many requests to the same server, the .NET Framework will only make x number of simultaneous connections.
You should close all your responses by calling response.Close().
You can also specify how many simultaneous requests you can make:
ServicePointManager.DefaultConnectionLimit = 20;
Note that you have to set DefaultConnectionLimit before the first request you make. You can find more information
here on MSDN.
All I can see is that in GetRequestStreamCallback you should replace
postStream.Write(byteArray, 0, postData.Length);
by
postStream.Write(byteArray, 0, byteArray.Length);
since these length aren't necessarily equal.
@Somnath
I am not sure if you ever found an answer, but this may help anyone else who stumbles across this post with the same issue that Somnath and I were having.
We all try to do our due diligence in keeping memory clean and clear, but with streams we can avoid unexplained issues if we make sure to flush the stream before we close it.
Replace This :
postStream.Write(byteArray, 0, byteArray.Length);
postStream.Close();
With This :
postStream.Write(byteArray, 0, byteArray.Length);
postStream.Flush();
postStream.Close();
I followed all the suggestions provided by all of you but couldn't stop the silent failure of the HTTP requests. However, I found a workaround, even though I have not reached a final conclusion myself yet.
My workaround is working well as of now, without any failure.
In the SendReq(string url) function I added the following lines of code:
System.Net.ServicePointManager.DefaultConnectionLimit = 100; // Just selected a random number for testing greater than 2
System.Net.ServicePointManager.SetTcpKeepAlive(true, 30, 30); // 30 is based on my server i'm hitting
System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
request.KeepAlive = true;

GetRequestStream() is throwing time out exception when posting data to HTTPS url

I'm calling an API hosted on Apache server to post data. I'm using HttpWebRequest to perform POST in C#.
The API has both a normal HTTP and a secure (HTTPS) port on the server. When I call the HTTP URL it works perfectly fine. However, when I call HTTPS it gives me a time-out exception (at the GetRequestStream() call). Any insights? I'm using VS 2010, .NET Framework 3.5 and C#. Here is the code block:
string json_value = jsonSerializer.Serialize(data);
HttpWebRequest request = (HttpWebRequest)System.Net.WebRequest.Create("https://server-url-xxxx.com");
request.Method = "POST";
request.ProtocolVersion = System.Net.HttpVersion.Version10;
request.ContentType = "application/x-www-form-urlencoded";
byte[] buffer = Encoding.ASCII.GetBytes(json_value);
request.ContentLength = buffer.Length;
System.IO.Stream reqStream = request.GetRequestStream();
reqStream.Write(buffer, 0, buffer.Length);
reqStream.Close();
EDIT:
The console program suggested by Peter works fine. But when I add the data (in JSON format) that needs to be posted to the API, it throws an 'operation timed out' exception. Here is the code that I added to the console-based application and that throws the error:
byte[] buffer = Encoding.ASCII.GetBytes(json_value);
request.ContentLength = buffer.Length;
I ran into the same issue. It seems like it is solved for me. I went through all my code making sure to invoke webResponse.Close() and/or responseStream.Close() for all my HttpWebResponse objects. The documentation indicates that you can close the stream or the HttpWebResponse object. Calling both is not harmful, so I did. Not closing the responses may cause the application to run out of connections for reuse, and this seems to affect the HttpWebRequest.GetRequestStream as far as I can observe in my code.
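For example, a minimal sketch of the pattern described above (assuming request is an already configured HttpWebRequest):
using (var webResponse = (HttpWebResponse)request.GetResponse())
using (var responseStream = webResponse.GetResponseStream())
using (var reader = new StreamReader(responseStream))
{
    string body = reader.ReadToEnd();
} // both the response and its stream are closed here, freeing the connection for reuse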
I don't know if this will help you with your specific problem, but you should consider disposing some of those objects when you are finished with them. I was doing something like this recently, and wrapping things up in using statements seemed to clean up a bunch of timeout exceptions for me.
using (var reqStream = request.GetRequestStream())
{
if (reqStream == null)
{
return;
}
//do whatever
}
Also check these things:
Is the server serving https in your local dev environment?
Have you set up your bindings *.443 (https) properly?
Do you need to set credentials on the request?
Is it your application pool account accessing the https resources or is it your account being passed through?
Have you thought about using WebClient instead?
using (WebClient client = new WebClient())
{
using (Stream stream = client.OpenRead("https://server-url-xxxx.com"))
using (StreamReader reader = new StreamReader(stream))
{
MessageBox.Show(reader.ReadToEnd());
}
}
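Since the original code does a POST, a hedged sketch with WebClient might look like this instead (assuming json_value holds the body to send):
using (var client = new WebClient())
{
    client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
    string responseBody = client.UploadString("https://server-url-xxxx.com", json_value);
    MessageBox.Show(responseBody);
}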
EDIT:
Make a request from a console app:
internal class Program
{
private static void Main(string[] args)
{
new Program().Run();
Console.ReadLine();
}
public void Run()
{
var request = (HttpWebRequest)System.Net.WebRequest.Create("https://server-url-xxxx.com");
request.Method = "POST";
request.ProtocolVersion = System.Net.HttpVersion.Version10;
request.ContentType = "application/x-www-form-urlencoded";
using (var reqStream = request.GetRequestStream())
{
// write the request body here if needed
}
using (var response = request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
Console.WriteLine(reader.ReadToEnd());
}
}
}
Try this:
WebRequest req = WebRequest.Create("https://server-url-xxxx.com");
req.Method = "POST";
string json_value = jsonSerializer.Serialize(data); //Body data
ServicePointManager.Expect100Continue = false;
using (var streamWriter = new StreamWriter(req.GetRequestStream()))
{
streamWriter.Write(json_value);
streamWriter.Flush();
streamWriter.Close();
}
HttpWebResponse resp = req.GetResponse() as HttpWebResponse;
Stream GETResponseStream = resp.GetResponseStream();
StreamReader sr = new StreamReader(GETResponseStream);
var response = sr.ReadToEnd(); //Response
resp.Close(); //Close response
sr.Close(); //Close StreamReader
And review the URI:
Reserved characters: sending reserved characters in the URI can cause problems: ! * ' ( ) ; : @ & = + $ , / ? # [ ]
URI length: you should not exceed 2000 characters.
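If reserved characters have to appear in a query value, a small sketch of escaping them first (the parameter name and value are illustrative):
string rawValue = "a+b&c=d?"; // illustrative value containing reserved characters
string safeUrl = "https://server-url-xxxx.com/api?data=" + Uri.EscapeDataString(rawValue); // percent-encodes the reserved characters
var escapedRequest = (HttpWebRequest)WebRequest.Create(safeUrl);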
I ran into this, too. I wanted to simulate hundreds of users with a console app. When simulating only one user, everything was fine, but with more users the timeout exception came up all the time.
The timeout occurs because, by default, the ConnectionLimit is 2 per ServicePoint (i.e. per website).
Very good article to read: https://venkateshnarayanan.wordpress.com/2013/04/17/httpwebrequest-reuse-of-tcp-connections/
What you can do is:
1) make more ConnectionGroups within a ServicePoint, because the ConnectionLimit is per ConnectionGroup;
2) or simply increase the connection limit.
See my solution:
private HttpWebRequest CreateHttpWebRequest<U>(string userSessionID, string method, string fullUrl, U uploadData)
{
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(fullUrl);
req.Method = method; // GET PUT POST DELETE
req.ConnectionGroupName = userSessionID; // We make separate connection-groups for each user session. Within a group connections can be reused.
req.ServicePoint.ConnectionLimit = 10; // The default value of 2 within a ConnectionGroup caused me always a "Timeout exception" because a user's 1-3 concurrent WebRequests within a second.
req.ServicePoint.MaxIdleTime = 5 * 1000; // (5 sec) default was 100000 (100 sec). Max idle time for a connection within a ConnectionGroup for reuse before closing
Log("Statistics: The sum of connections of all connectiongroups within the ServicePoint: " + req.ServicePoint.CurrentConnections; // just for statistics
if (uploadData != null)
{
req.ContentType = "application/json";
SerializeToJson(uploadData, req.GetRequestStream());
}
return req;
}
/// <summary>Serializes and writes obj to the requestStream and closes the stream. Uses JSON serialization from System.Runtime.Serialization.</summary>
public void SerializeToJson(object obj, Stream requestStream)
{
DataContractJsonSerializer json = new DataContractJsonSerializer(obj.GetType());
json.WriteObject(requestStream, obj);
requestStream.Close();
}
You may want to set the timeout property; see http://www.codeproject.com/Tips/69637/Setting-timeout-property-for-System-Net-WebClient
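For HttpWebRequest specifically, a minimal sketch of the two relevant timeouts (the values are illustrative):
var timedRequest = (HttpWebRequest)WebRequest.Create("https://server-url-xxxx.com");
timedRequest.Timeout = 30000;          // ms allowed for GetRequestStream()/GetResponse() to complete
timedRequest.ReadWriteTimeout = 30000; // ms allowed for reads and writes on the request/response streams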

HttpWebOperation timing out. Why and what to check?

I'm calling web service (my web service) like this:
var request = WebRequest.Create(Options.ServerUri + Options.AccountId + "/integration/trip") as HttpWebRequest;
request.Timeout = 20000; // 20 seconds should be plenty, no need for 100 seconds
request.ContentType = "application/json";
request.Headers.Add(HttpRequestHeader.Authorization, "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes(Options.LoginName + ":" + Options.Password)));
request.Method = "POST";
var serializedData = (new JavaScriptSerializer()).Serialize(trip);
var bytes = Encoding.UTF8.GetBytes(serializedData);
request.ContentLength = bytes.Length;
var os = request.GetRequestStream();
os.Write(bytes, 0, bytes.Length);
os.Close();
request.GetResponse();
LoggingAndNotifications.LogAndNotify(string.Format("Success uploading trip: {0}", trip.TripId), false);
return true;
This code is called repeatedly to post new objects. After about 3 calls I start getting timeouts on request.GetResponse().
There are no errors on the server side, and nothing in the Event Log. It feels like "something" stops me from repeatedly hitting the service. What should I look for? Could it be the company firewall? Or is something wrong with my code?
I think the issue is that you are not closing the response. Try editing your code as follows:
var response = request.GetResponse() as HttpWebResponse;
response.Close();
You should close the response as per the example in the doco.
WebRequest myRequest = WebRequest.Create("http://www.contoso.com");
// Return the response.
WebResponse myResponse = myRequest.GetResponse();
// Code to use the WebResponse goes here.
// Close the response to free resources.
myResponse.Close();
Hmm. The doco also says
Any public static (Shared in Visual Basic) members of this type are
thread safe. Any instance members are not guaranteed to be thread
safe.
You should probably ask for a lock of some kind.
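If the upload method can be called from several threads at once, a minimal sketch of the suggested locking (purely illustrative; each call already creates its own request, so this only serializes the uploads):
private static readonly object uploadLock = new object();
// ...
lock (uploadLock)
{
    // create the request, write the body, then GetResponse() and Close() it here
}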
Are you sure this is not caused by server-side bugs?
It seems strange; as far as I know, WebRequest on .NET 4 is based on IOCP at a lower layer. Maybe you can try releasing the web request/response resources after each loop.
Since GetResponse() returns a stream, if you don't read from it, the real data may not be transferred from the server to the client side. (I found this when I was trying to parse a response: I used Peek(), and it always returned an invalid value until Read() was called.)
So, try to read it or just close it.
