Retrying POST request loses body (c# winforms)

I'm facing an issue where my POST data somehow gets lost after retrying the request.
Code sample is below. (Please note that request.Timeout = 1 is set for testing purposes, to reproduce the behavior shown in the code below.)
// post_data_final is obtained elsewhere
private void request_3()
{
    for (int i = 1; i <= 5; i++)
    {
        byte[] byteArray = Encoding.ASCII.GetBytes(post_data_final);
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(site_URI);
        request.Method = "POST";
        // some headers info
        request.Timeout = 1;
        request.ContentLength = byteArray.Length;
        using (Stream os = request.GetRequestStream())
        {
            os.Write(byteArray, 0, byteArray.Length);
        }
        try
        {
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            // some code about the response
        }
        catch (WebException wex)
        {
            if (wex.Status == WebExceptionStatus.Timeout)
            {
                continue;
            }
            // some additional checks
        }
    }
}
The odd thing is that the first request (up to the request-timeout error) goes out fine. Subsequent requests go out without the POST data, yet the content length is still computed correctly (i.e. it stays the same as in the previous request).
Updated:
Building post_data_final happens in a separate function. It is not used (except to produce byteArray) or changed inside request_3().
The request works fine when it enters the for loop and no Timeout exception occurs; if I simply put the request into a for loop, it performs the given number of valid requests. As soon as I get a Timeout exception, the next request goes out without POST data.
The source code has been edited for those who think recursion is a bad idea; the edited code still doesn't work.
Any suggestions are appreciated.

I can't find anything wrong in your code, so please provide more details, as the comments mentioned.
private void request_3()
{
    bool sendData = true;
    int numberOfTimeOuts = 0;
    // The following only needs to be done once, unless you alter post_data_final after each timeout.
    byte[] dataToSend = Encoding.ASCII.GetBytes(post_data_final);
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(site_URI);
    request.Method = "POST";
    request.ContentLength = dataToSend.Length;
    using (Stream outputStream = request.GetRequestStream())
        outputStream.Write(dataToSend, 0, dataToSend.Length);
    // request.Timeout = 1000 * 15; would mean 15 seconds.
    while (sendData && numberOfTimeOuts < MAX_NUMBER_OF_TIMEOUTS)
    {
        try
        {
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            if (response != null)
                processResponse(response);
            else
            {
                // You should handle this case as well.
            }
            sendData = false;
        }
        catch (WebException wex)
        {
            if (wex.Status == WebExceptionStatus.Timeout)
                numberOfTimeOuts++;
            else
                throw;
        }
    }
}
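One caveat: an HttpWebRequest instance generally cannot be reused once GetResponse() has been called or has thrown, so a retry loop that rebuilds the request (and rewrites the body) on every attempt is arguably the safer shape. A minimal sketch, assuming post_data_final, site_URI, MAX_NUMBER_OF_TIMEOUTS and processResponse as above:
byte[] dataToSend = Encoding.ASCII.GetBytes(post_data_final);
for (int attempt = 0; attempt < MAX_NUMBER_OF_TIMEOUTS; attempt++)
{
    // A fresh request per attempt, so headers and body are re-sent in full.
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(site_URI);
    request.Method = "POST";
    request.ContentLength = dataToSend.Length;
    using (Stream os = request.GetRequestStream())
        os.Write(dataToSend, 0, dataToSend.Length);
    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            processResponse(response);
            return; // success, stop retrying
        }
    }
    catch (WebException wex)
    {
        if (wex.Status != WebExceptionStatus.Timeout)
            throw; // only timeouts are retried
    }
}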

The issue was caused by using Fiddler2, a traffic-intercepting tool (an analogue of Wireshark).
The requested site uses the HTTPS protocol. For debugging purposes I had installed Fiddler2, together with the Fiddler2 certificate, to be able to see all incoming and outgoing traffic. For some magic reason, when I turned Fiddler2 off and added some additional logging to the console, I found that the requests were actually valid (i.e. the POST body data was still present after the first request).
So with Fiddler2 running, the code above doesn't work when a Timeout exception occurs. Without Fiddler2, everything works fine under the same circumstances and with the same code.
I didn't dig deep into Fiddler2, but the issue seems to be a compatibility problem between VS2010 and Fiddler2's internal proxy for error status codes (taking into account that the approach in point 2 under the "Updated" area, where Fiddler2 was also used, worked fine for success codes, i.e. 2xx-3xx).
Thanks everyone for giving this some attention.

Related

Ways to structure our HTTP Post method so it doesn't get blocked by firewalls

An issue we've faced since the beta launch of our software is that not all of our users can be authenticated (we authenticate using web requests). We're not sure exactly what is causing it, but we're somewhat confident it's caused by our POST requests. Below is the C# method we use to make the request.
public static string getResponse(string url, string postdata)
{
    try
    {
        ASCIIEncoding encoding = new ASCIIEncoding();
        byte[] byte1 = encoding.GetBytes(postdata);
        HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create(url);
        myHttpWebRequest.Method = "POST";
        myHttpWebRequest.ContentType = "application/x-www-form-urlencoded";
        myHttpWebRequest.ContentLength = byte1.Length;
        Stream newstream = myHttpWebRequest.GetRequestStream();
        newstream.Write(byte1, 0, byte1.Length);
        WebResponse response = myHttpWebRequest.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader(stream);
        return reader.ReadToEnd();
    }
    catch
    {
        return "";
    }
}
Perhaps we haven't structured it in a way that all firewalls will accept, for example.
That said, we don't know whether it's an issue with the port, the request itself, or the user's setup. Has anyone faced this issue before? What did you do to fix it?
Are there "standards" for structuring a POST so that most devices/firewalls will accept it? Are we using the right port?
I have updated my catch statement to catch WebExceptions:
catch (WebException ex)
{
    // Write the exception somewhere you can inspect it: the debugger output, a txt file, etc.
    Debug.WriteLine(ex.ToString());
}
Since at the moment we are not entirely sure what is causing the issue (that is, we don't know whether it's the user, the firewall, or the ports that prevent the user from authenticating), we first have to isolate it. If the status code returns something like HTTP 500, 401, or 403, that indicates the request is failing on the server.
EDIT: Upon sleeping over this, I saw a bit of an issue with this answer.
Let me explain.
Generally, any request that reaches the server produces an HTTP response, even if the status code indicates an error. I think the right approach is to not rely solely on the WebExceptions thrown (since most "failed" requests still come back as complete responses), but to also look purely at the status codes.
A failed response still succeeds in bringing response headers back, so the right way to approach this, I think, is to look at the status codes themselves.
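In C# terms, a minimal sketch of that idea (url is assumed; the structure is not tied to the getResponse method above): HttpWebRequest throws a WebException for non-success status codes, but the server's response, including its status code, is still attached to the exception, so both paths can be inspected.
try
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine((int)response.StatusCode); // 2xx: request succeeded
    }
}
catch (WebException ex)
{
    var errorResponse = ex.Response as HttpWebResponse;
    if (errorResponse != null)
    {
        // The server did respond; 401/403/500 etc. end up here, not in the 2xx path.
        Console.WriteLine((int)errorResponse.StatusCode);
    }
    else
    {
        // No response at all: DNS failure, connection refused, timeout, firewall drop...
        Console.WriteLine(ex.Status);
    }
}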
Here's a node.js script I whipped up to check HTTP response headers:
var http = require("http");
var fs = require("fs");
var i = 0;
var hostNames = ['www.google.com'];
for (i; i < hostNames.length; i++) {
    var options = {
        host: hostNames[i],
        path: '/'
    };
    (function (i) {
        http.get(options, function (res) {
            var obj = {};
            obj.url = hostNames[i];
            obj.statusCode = res.statusCode;
            obj.headers = res.headers;
            for (var item in res.headers) {
                obj.headers[item.replace(/\./, '\\')] = res.headers[item];
            }
            console.log(JSON.stringify(obj, null, 4));
        }).on('error', function (e) {
            console.log("Error: " + hostNames[i] + "\n" + e.stack + "\n");
        });
    })(i);
};

HttpWebRequest is slow with chunked data

I'm using HttpWebRequest to connect to my in-house HTTP server. The problem is that it is much slower than connecting to the server via, for instance, PostMan (https://chrome.google.com/webstore/detail/postman-rest-client/fdmmgilgnpjigdojojpjoooidkmcomcm?hl=en), which presumably uses Chrome's built-in functions to request data.
The server is built from this example on MSDN (http://msdn.microsoft.com/en-us/library/dxkwh6zw.aspx) and uses a buffer size of 64. The request is an HTTP request with some data in the body.
When connecting via PostMan, the request is split into a bunch of chunks and BeginReceive() is called multiple times, each call receiving 64 B and taking about 2 ms, except the last one, which receives less than 64 B.
But when connecting with my client using HttpWebRequest, the first BeginReceive() callback receives 64 B and takes about 1 ms, the second receives only 47 B and takes almost 200 ms, and the third receives about 58 B and takes 2 ms.
What is up with the second BeginReceive()? I note that the connection is established as soon as I start writing data to the HttpWebRequest input stream, but data reception does not start until I call GetResponse().
Here is my HttpWebRequest code:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = verb;
request.Timeout = timeout;
request.Proxy = null;
request.KeepAlive = false;
request.Headers.Add("Content-Encoding", "UTF-8");
System.Net.ServicePointManager.Expect100Continue = false;
request.ServicePoint.Expect100Continue = false;
if ((verb == "POST" || verb == "PUT") && !String.IsNullOrEmpty(data))
{
    var dataBytes = Encoding.UTF8.GetBytes(data);
    try
    {
        var dataStream = request.GetRequestStream();
        dataStream.Write(dataBytes, 0, dataBytes.Length);
        dataStream.Close();
    }
    catch (Exception ex)
    {
        throw;
    }
}
WebResponse response = null;
try
{
    response = request.GetResponse();
}
catch (Exception ex)
{
    throw;
}
var responseReader = new StreamReader(response.GetResponseStream(), Encoding.UTF8);
var responseStr = responseReader.ReadToEnd();
responseReader.Close();
response.Close();
What am I doing wrong? Why does this behave so differently from an HTTP request made by a web browser? It is effectively adding 200 ms of lag to my application.
This looks like a typical case of the Nagle algorithm clashing with TCP delayed acknowledgement. In your case you are sending a small HTTP request (~170 bytes according to your numbers). This is likely less than the MSS (Maximum Segment Size), meaning that the Nagle algorithm will kick in. The server is probably delaying its ACK, resulting in a delay of up to 500 ms. See the links for details.
You can disable Nagle via ServicePointManager.UseNagleAlgorithm = false (before issuing the first request), see MSDN.
Also see Nagle’s Algorithm is Not Friendly towards Small Requests for a detailed discussion including a Wireshark analysis.
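For reference, a minimal sketch of that setting; it must run before the first request is created, since it only affects ServicePoints allocated afterwards:
// Disable Nagle for all subsequent HttpWebRequests in this process.
System.Net.ServicePointManager.UseNagleAlgorithm = false;
// Often paired with this, to avoid the extra 100-Continue round trip on POSTs.
System.Net.ServicePointManager.Expect100Continue = false;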
Note: in your answer you are running into the same situation when you do write-write-read; when you switch to write-read you overcome this problem. However, I do not believe you can instruct HttpWebRequest (or HttpClient, for that matter) to send small requests as a single TCP write operation. That would probably be a good optimization in some cases, although it might lead to some additional array copying, affecting performance negatively.
200 ms is the typical latency of the Nagle algorithm, which raises the suspicion that the server or the client is using Nagling. You say you are using a sample from MSDN as the server... well, there you go. Use a proper server or disable Nagling.
It is very unlikely that the built-in HttpWebRequest class adds an unnecessary 200 ms of latency. Look elsewhere; look at your own code to find the problem.
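If you control the server, disabling Nagle there is one line per socket. A sketch, assuming a raw-socket server like the MSDN sample, where handler is the socket returned by Accept/EndAccept:
// Socket.NoDelay sets TCP_NODELAY, so small writes are sent immediately
// instead of being held back while waiting for an ACK.
handler.NoDelay = true;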
It seems like HttpWebRequest is just really slow.
Funny thing: I implemented my own HTTP client using Sockets, and found a clue to why HttpWebRequest is so slow. If I encoded my ASCII headers into their own byte array and sent that on the stream, followed by the byte array encoded from my data, my Sockets-based HTTP client behaved exactly like HttpWebRequest: first one buffer is filled with data (part of the header), then a second buffer is partially used (the rest of the header), then there is a 200 ms wait before the rest of the data is sent.
The code:
TcpClient client = new TcpClient(server, port);
NetworkStream stream = client.GetStream();
// Send this out
stream.Write(headerData, 0, headerData.Length);
stream.Write(bodyData, 0, bodyData.Length);
stream.Flush();
The solution, of course, was to concatenate the two byte arrays before sending them out on the stream. My application now behaves as expected.
The code with a single stream write:
TcpClient client = new TcpClient(server, port);
NetworkStream stream = client.GetStream();
var totalData = new byte[headerBytes.Length + bodyData.Length];
Array.Copy(headerBytes, totalData, headerBytes.Length);
Array.Copy(bodyData, 0, totalData, headerBytes.Length, bodyData.Length);
// Send this out
stream.Write(totalData, 0, totalData.Length);
stream.Flush();
And HttpWebRequest seems to send the header before I write to the request stream, so it might be implemented somewhat like my first code sample. Does this make sense at all?
Hope this is helpful for anyone with the same problem!
Try this: you need to dispose of your IDisposables:
var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = verb;
request.Timeout = timeout;
request.Proxy = null;
request.KeepAlive = false;
request.Headers.Add("Content-Encoding", "UTF-8");
System.Net.ServicePointManager.Expect100Continue = false;
request.ServicePoint.Expect100Continue = false;
if ((verb == "POST" || verb == "PUT") && !String.IsNullOrEmpty(data))
{
    var dataBytes = Encoding.UTF8.GetBytes(data);
    using (var dataStream = request.GetRequestStream())
    {
        dataStream.Write(dataBytes, 0, dataBytes.Length);
    }
}
string responseStr;
using (var response = request.GetResponse())
{
    using (var responseReader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
    {
        responseStr = responseReader.ReadToEnd();
    }
}

.NET service responds 500 internal error and "missing parameter" to HttpWebRequest POSTS but test form works fine

I am using a simple .NET (asmx) service that works fine when invoked via the test form (POST). When invoking it via an HttpWebRequest object, I get a WebException: "System.Net.WebException: The remote server returned an error: (500) Internal Server Error." Digging deeper and reading WebException.Response.GetResponseStream(), I get the message "Missing parameter: serviceType.", even though I've clearly included this parameter.
I'm at a loss here, and it's worse that I don't have access to debug the service itself.
Here is the code being used to make the request:
string postData = String.Format("serviceType={0}&SaleID={1}&Zip={2}", request.service, request.saleId, request.postalCode);
byte[] data = (new ASCIIEncoding()).GetBytes(postData);
HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create(url);
httpWebRequest.Timeout = 60000;
httpWebRequest.Method = "POST";
httpWebRequest.ContentType = "application/x-www-form-urlencoded";
httpWebRequest.ContentLength = data.Length;
using (Stream newStream = httpWebRequest.GetRequestStream())
{
    newStream.Write(data, 0, data.Length);
}
try
{
    using (response = (HttpWebResponse)httpWebRequest.GetResponse())
    {
        if (response.StatusCode != HttpStatusCode.OK)
            throw new Exception("There was an error with the shipping freight service.");
        string responseData;
        // Read from the response we already have instead of calling GetResponse() a second time.
        using (StreamReader responseStream = new StreamReader(response.GetResponseStream(), System.Text.Encoding.GetEncoding("iso-8859-1")))
        {
            responseData = responseStream.ReadToEnd();
        }
        if (string.IsNullOrEmpty(responseData))
            throw new Exception("There was an error with the shipping freight service. Request went through but response is empty.");
        XmlDocument providerResponse = new XmlDocument();
        providerResponse.LoadXml(responseData);
        return providerResponse;
    }
}
catch (WebException webExp)
{
    string exMessage = webExp.Message;
    if (webExp.Response != null)
    {
        using (StreamReader responseReader = new StreamReader(webExp.Response.GetResponseStream()))
        {
            exMessage = responseReader.ReadToEnd();
        }
    }
    throw new Exception(exMessage);
}
Anyone have an idea what could be happening?
Thanks.
UPDATE
Stepping through the debugger, I can see the parameters are correct. I also see the parameters are correct in Fiddler.
Examining Fiddler, I see 2 requests each time this code executes. The first request is a POST that sends the parameters; it gets a 301 response code with a "Document Moved / Object Moved / This document may be found here" message. The second request is a GET to the same URL with no body; it gets a 500 server error with the "Missing parameter: serviceType." message.
It seems like you found your problem when you looked at the requests in Fiddler. Taking an excerpt from http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html:
10.3.2 301 Moved Permanently
The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible.
.....
Note: When automatically redirecting a POST request after
receiving a 301 status code, some existing HTTP/1.0 user agents
will erroneously change it into a GET request.
Here are a couple of options you can take:
Hard-code your program to use the new Url that you see in the 301 response in Fiddler.
Adjust your code to retrieve the 301 response, parse the new Url out of it, and build a new request to that new Url.
The latter option is ideal if you're dealing with user-based input for the Url (like a web browser does), since you don't know in advance where the user is going to want your program to go. A sketch of it follows.
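A minimal sketch of the second option, reusing the names from the question (url, postData); everything else here is illustrative:
HttpWebRequest probe = (HttpWebRequest)WebRequest.Create(url);
probe.Method = "POST";
probe.ContentType = "application/x-www-form-urlencoded";
probe.AllowAutoRedirect = false; // keep the 301 instead of following it as a GET
byte[] body = Encoding.ASCII.GetBytes(postData);
probe.ContentLength = body.Length;
using (Stream s = probe.GetRequestStream())
{
    s.Write(body, 0, body.Length);
}
string targetUrl = url;
using (var resp = (HttpWebResponse)probe.GetResponse())
{
    if (resp.StatusCode == HttpStatusCode.MovedPermanently)
        targetUrl = resp.Headers["Location"]; // now re-POST the same body to targetUrl
}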

series of httpWebRequests

I'm currently writing a simple app that performs a series of requests to the web server, and I've encountered a strange... feature?
I don't need the response stream of the request, only the status code. So, for each piece of my data, I call my own "Send" method:
public static int Send(string uri)
{
    HttpWebRequest request = null;
    HttpWebResponse response = null;
    try
    {
        request = (HttpWebRequest)WebRequest.Create(uri);
        response = (HttpWebResponse)request.GetResponse();
        if (response.StatusCode == HttpStatusCode.OK) return 0;
    }
    catch (Exception e)
    {
        if (request != null) request.Abort();
    }
    return -1;
}
Works fine? Yes, unless I call this function at least twice. The second call of this function in a row (with the same uri) will ALWAYS result in a timeout.
Now, that's odd: if I add request.Abort() where I return zero (i.e. when the status code is 200), everything ALWAYS works fine.
So my question is: why? Is it some kind of framework restriction, or maybe some kind of anti-DoS protection on the particular server (unfortunately, the server is a black box to me)? Or maybe I just don't understand something about how this all works?
Try disposing of the web response; you may be leaking resources:
public static int Send(string uri)
{
    HttpWebRequest request = null;
    try
    {
        request = (HttpWebRequest)WebRequest.Create(uri);
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusCode == HttpStatusCode.OK) return 0;
        }
    }
    catch (Exception e)
    {
        if (request != null) request.Abort();
    }
    return -1;
}
There is also a default limit on the number of simultaneous connections you can make to a domain (2, I believe, but you can configure this); please see this SO question. You're probably hitting that limit with your unclosed responses. A one-line sketch of raising it is below.
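If raising the limit is appropriate, it is a single line, run before the first request (10 is an arbitrary illustrative value):
// Allow up to 10 simultaneous connections per host instead of the default 2.
System.Net.ServicePointManager.DefaultConnectionLimit = 10;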
First of all, I'd make a couple of changes in order to get to the root of this:
take out that try..catch {} (you're likely swallowing an exception)
return a boolean instead of a number
You should then get the exception information you need.
Also you should be using "HEAD" as your method as you only want the status code:
request.Method = "HEAD";
Read about the difference here. A sketch combining these suggestions follows.
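A minimal sketch (Ping is a hypothetical name; WebException handling is left to the caller so no exception is swallowed):
public static bool Ping(string uri)
{
    var request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = "HEAD"; // only the status line and headers come back, no body
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        return response.StatusCode == HttpStatusCode.OK;
    }
}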

Changing DefaultWebProxy causing WebRequests to time out

For the project I'm working on, we have a desktop program that contacts an online store server. Because it's used in schools, getting the proxy setup right is tricky. What we've gone for is to let users specify proxy details if they want; otherwise we use the ones from IE. We also try to cope with incorrect details being entered: the code tries the user-specified proxy, then the default one if that fails, then the default with credentials, and finally null.
The problem I'm having is that in places where the proxy settings are changed in quick succession (for example, if registration fails because the proxy is wrong, the user changes one tiny thing and tries again within seconds), calls to HttpWebRequest.GetResponse() time out, freezing the program for a good while. Sometimes, if I leave a minute or two between the changes, it doesn't freeze, but not every time (I just tried again after 10 minutes and it's timing out again).
I can't spot anything in the code that could cause this, though it looks a bit messy. I don't think the server is refusing the request, unless it's generic server behaviour, as I've tried this with requests to our server and to others such as google.co.uk.
I'm posting the code in the hope that someone can spot something wrong with it, or knows a much simpler way of doing what we're trying to do.
The tests we run are without any proxy, so the first part is usually skipped. The first time ApplyProxy runs, it works fine and finishes everything in the first try block; the second time, it either times out on the GetResponse in the first try block and then goes through the rest of the code, or works there and times out on the actual requests made for the registration.
Code:
void ApplyProxy()
{
    Boolean ProxySuccess = true;
    String WebRequestURI = @"http://www.google.co.uk";
    if (UseProxy)
    {
        try
        {
            String ProxyUrl = (ProxyUri.ToLower().Contains("http://")) ?
                ProxyUri :
                "http://" + ProxyUri;
            WebRequest.DefaultWebProxy = new WebProxy(ProxyUrl);
            if (!string.IsNullOrEmpty(ProxyUsername) && !string.IsNullOrEmpty(ProxyPassword))
                WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword);
            HttpWebRequest request = HttpWebRequest.Create(WebRequestURI) as HttpWebRequest;
            request.Method = "GET";
            HttpWebResponse response = request.GetResponse() as HttpWebResponse;
        }
        catch
        {
            ProxySuccess = false;
        }
    }
    if (!ProxySuccess || !UseProxy)
    {
        try
        {
            WebRequest.DefaultWebProxy = WebRequest.GetSystemWebProxy();
            HttpWebRequest request = HttpWebRequest.Create(WebRequestURI) as HttpWebRequest;
            request.Method = "GET";
            HttpWebResponse response = request.GetResponse() as HttpWebResponse;
        }
        catch (Exception e)
        {
            // try with credentials: make a new proxy from defaults
            WebRequest.DefaultWebProxy = WebRequest.GetSystemWebProxy();
            String newProxyURI = WebRequest.DefaultWebProxy.GetProxy(new Uri(WebRequestURI)).ToString();
            if (newProxyURI == String.Empty)
            {
                // check we actually get a result
                WebRequest.DefaultWebProxy = null;
                return;
            }
            // continue
            WebProxy NewProxy = new WebProxy(newProxyURI);
            NewProxy.UseDefaultCredentials = true;
            NewProxy.Credentials = CredentialCache.DefaultCredentials;
            WebRequest.DefaultWebProxy = NewProxy;
            try
            {
                HttpWebRequest request = HttpWebRequest.Create(WebRequestURI) as HttpWebRequest;
                request.Method = "GET";
                HttpWebResponse response = request.GetResponse() as HttpWebResponse;
            }
            catch
            {
                WebRequest.DefaultWebProxy = null;
            }
        }
    }
}
Is it not just a case of needing to set the Timeout property of the HttpWebRequest? It could be that the connection is being made but not serviced (a wrong type of proxy server, or a stalled server, for example), in which case the request may be waiting out the full Timeout period before giving up; a shorter timeout may be preferable here.
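A sketch of that, reusing WebRequestURI from the code above (5000 ms is an arbitrary illustrative value; the default Timeout is 100 seconds):
HttpWebRequest request = HttpWebRequest.Create(WebRequestURI) as HttpWebRequest;
request.Method = "GET";
request.Timeout = 5000; // fail fast on a dead or wrong-type proxy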
It turned out to be a programming error on my part. The responses were left open, and evidently either the program or the server doesn't like that. Simply closing each response once done seems to remove the issue.
