C# - HttpWebResponse redirect checker

I'm trying to code a redirect checker. The solution I have was banged together this morning, so it's not the most efficient, but it does everything I need it to do apart from one thing:
It only ever checks two sites before stopping; no errors occur, it just stops on the "request.GetResponse() as HttpWebResponse;" line for the third page.
I've tried using different sites and changing the combination of pages to check but it only ever checks two.
Any ideas?
string URLs = "/htmldom/default.asp/htmldom/dom_intro.asp/htmldom/dom_examples2.asp/xpath/default.asp";
string sURL = "http://www.w3schools.com/";
string[] u = Regex.Split(URLs, ".asp");
foreach (String site in u)
{
    String superURL = sURL + site + ".asp";
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(superURL);
    request.Method = "HEAD";
    request.AllowAutoRedirect = false;
    var response = request.GetResponse() as HttpWebResponse;
    String a = response.GetResponseHeader("Location");
    Console.WriteLine("Site: " + site + "\nResponse Type: " + response.StatusCode + "\nRedirect page" + a + "\n\n");
}

Aside from the fact that it will break if a WebException is ever thrown, I believe the reason it's just stopping is that you never dispose of your response. If you have multiple URLs actually served by the same site, they'll use a connection pool - and by not disposing of the response, you're not releasing the connection. You should use:
using (var response = request.GetResponse())
{
    var httpResponse = (HttpWebResponse) response;
    // Use httpResponse here
}
Note that I'm casting here instead of using as - if the response isn't an HttpWebResponse, an InvalidCastException on that line is more informative than a NullReferenceException on the next line...
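Putting that together, a rough sketch of the whole loop with each response disposed and WebExceptions caught rather than left to break the run might look like this (the escaped split pattern and the empty-entry check are my own additions, not part of the original question):
// Rough sketch only: same check, but with each response disposed and WebExceptions handled.
// Requires System, System.Net and System.Text.RegularExpressions.
string URLs = "/htmldom/default.asp/htmldom/dom_intro.asp/htmldom/dom_examples2.asp/xpath/default.asp";
string sURL = "http://www.w3schools.com/";

foreach (String site in Regex.Split(URLs, @"\.asp"))   // escape the dot; "." alone matches any character
{
    if (site.Length == 0) continue;                    // Regex.Split leaves a trailing empty entry
    String superURL = sURL + site + ".asp";
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(superURL);
    request.Method = "HEAD";
    request.AllowAutoRedirect = false;

    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            String location = response.GetResponseHeader("Location");
            Console.WriteLine("Site: " + site + "\nResponse Type: " + response.StatusCode +
                              "\nRedirect page: " + location + "\n");
        }
    }
    catch (WebException ex)
    {
        // 4xx/5xx answers throw, but the response (when present) still carries the status code.
        using (var error = (HttpWebResponse)ex.Response)
        {
            Console.WriteLine("Site: " + site + " failed: " +
                              (error != null ? error.StatusCode.ToString() : ex.Status.ToString()));
        }
    }
}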

Related

Console application, first webrequest doesn't get response, later does, why?

I am developing a console application for an API integration. The flow is: fetch a token in the first request, then get the report in a second request that passes the token.
string token = GetToken(app_id); // API call, which is working fine and getting a token
string reportquerystring = "path?token=" + token;
WebRequest req = WebRequest.Create(reportquerystring);
req.Method = "GET";
req.Headers["Authorization"] = "Basic " +
    Convert.ToBase64String(Encoding.Default.GetBytes("username:password"));
var resp = req.GetResponse() as HttpWebResponse;
using (Stream downloadstream = resp.GetResponseStream())
{
    XmlDocument reportxml = new XmlDocument();
    string filename = "location\\";
    string reportxmlString = (new StreamReader(downloadstream)).ReadToEnd();
    reportxml.LoadXml(reportxmlString);
    string json = JsonConvert.SerializeXmlNode(reportxml);
    System.IO.File.WriteAllText(filename + "data_" + app_id + ".txt", json);
}
When I run this code under the debugger, the response XML is empty on the first call to download the report; if I drag the execution point back before the call, in the same run, it gets the response properly. Until I figure out why the first call to the report download API isn't working, or how to make it work, I can't proceed.
Any suggestions?
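For reference, a minimal sketch of the same download step with the response, stream, and reader all disposed via using; token, app_id, the "path?token=" URL, the credentials, and the output folder are the question's own placeholders, and this alone may not explain the empty first response:
// Sketch only: same flow as above, with everything disposed.
string reportquerystring = "path?token=" + token;            // placeholder URL from the question
var req = (HttpWebRequest)WebRequest.Create(reportquerystring);
req.Method = "GET";
req.Headers["Authorization"] = "Basic " +
    Convert.ToBase64String(Encoding.Default.GetBytes("username:password"));

using (var resp = (HttpWebResponse)req.GetResponse())
using (var downloadstream = resp.GetResponseStream())
using (var reader = new StreamReader(downloadstream))
{
    var reportxml = new XmlDocument();
    reportxml.LoadXml(reader.ReadToEnd());
    string json = JsonConvert.SerializeXmlNode(reportxml);   // Newtonsoft.Json
    File.WriteAllText("location\\" + "data_" + app_id + ".txt", json);
}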

Ways to structure our HTTP Post method so it doesn't get blocked by firewalls

An issue we've faced since the beta launch of our software is that not all of our users are able to authenticate (we authenticate using web requests). We're not sure exactly what is causing it, but we're somewhat confident it's related to our POST requests. Below is the C# method we use to make the request.
public static string getResponse(string url, string postdata)
{
    try
    {
        ASCIIEncoding encoding = new ASCIIEncoding();
        byte[] byte1 = encoding.GetBytes(postdata);
        HttpWebRequest myHttpWebRequest = (HttpWebRequest)WebRequest.Create(url);
        myHttpWebRequest.Method = "POST";
        myHttpWebRequest.ContentType = "application/x-www-form-urlencoded";
        myHttpWebRequest.ContentLength = byte1.Length;
        Stream newstream = myHttpWebRequest.GetRequestStream();
        newstream.Write(byte1, 0, byte1.Length);
        WebResponse response = myHttpWebRequest.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader(stream);
        return reader.ReadToEnd();
    }
    catch
    {
        return "";
    }
}
Perhaps we haven't structured it in a way that all firewalls will accept, for example.
With that said, we do not know if it's the issue of the port, the request itself, or the user's issue. Has anyone ever faced this issue before? What did you do to fix it?
Are there "standards" to how you structure your POST so most devices/firewalls will accept it? Are we using the right port?
I have updated my catch statement to catch Web Exceptions:
catch (WebException ex)
{
    // Write the exception to the debugger output or a text file, e.g.:
    Debug.WriteLine(ex.ToString());   // System.Diagnostics
}
Since at the moment we are not entirely sure what is causing the issue (that is, whether it's the user, the firewall, or the ports preventing authentication), we first have to isolate the problem. If the status code comes back as something like HTTP 500, 401, or 403, that indicates the request is reaching the server and failing there.
EDIT: After sleeping on this, I saw a bit of an issue with this answer.
Let me explain.
Generally, an HTTP exchange can complete even when the status code indicates an error, so the right approach is not to look only at the WebExceptions thrown, but also at the status codes themselves. A "failed" response still brings response headers back; the status code is what tells you what actually went wrong.
Here's a node.js script I whipped up to check HTTP response headers:
var http = require("http");
var fs = require("fs");
var i = 0;
var hostNames = ['www.google.com'];

for (i; i < hostNames.length; i++) {
    var options = {
        host: hostNames[i],
        path: '/'
    };
    (function (i) {
        http.get(options, function (res) {
            var obj = {};
            obj.url = hostNames[i];
            obj.statusCode = res.statusCode;
            obj.headers = res.headers;
            for (var item in res.headers) {
                obj.headers[item.replace(/\./, '\\')] = res.headers[item];
            }
            console.log(JSON.stringify(obj, null, 4));
        }).on('error', function (e) {
            console.log("Error: " + hostNames[i] + "\n" + e.stack + "\n");
        });
    })(i);
}
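The same idea in C#, checking the status code whether or not GetResponse() throws, might be sketched roughly like this; CheckStatus, the HEAD method, and the console output are just illustrative choices:
// Rough C# counterpart: report the status code whether or not GetResponse() throws.
// Requires System and System.Net; the URL is only an example.
static void CheckStatus(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "HEAD";
    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(url + " -> " + (int)response.StatusCode + " " + response.StatusCode);
        }
    }
    catch (WebException ex)
    {
        var errorResponse = ex.Response as HttpWebResponse;
        if (errorResponse != null)
        {
            // 4xx/5xx land here, but the headers and status code are still available.
            using (errorResponse)
            {
                Console.WriteLine(url + " -> " + (int)errorResponse.StatusCode + " " + errorResponse.StatusCode);
            }
        }
        else
        {
            // No response at all: DNS failure, connection refused, firewall drop, ...
            Console.WriteLine(url + " -> " + ex.Status);
        }
    }
}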

HttpWebRequest.GetResponse() keeps getting timed out

I wrote a simple C# function to retrieve the trade history from MtGox with the following API call:
https://data.mtgox.com/api/1/BTCUSD/trades?since=<trade_id>
documented here: https://en.bitcoin.it/wiki/MtGox/API/HTTP/v1#Multi_currency_trades
here's the function:
string GetTradesOnline(Int64 tid)
{
    Thread.Sleep(30000);
    // communicate
    string url = "https://data.mtgox.com/api/1/BTCUSD/trades?since=" + tid.ToString();
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    StreamReader reader = new StreamReader(response.GetResponseStream());
    string json = reader.ReadToEnd();
    reader.Close();
    reader.Dispose();
    response.Close();
    return json;
}
I'm starting at tid=0 (trade id) to get the data from the very beginning. For each request, I receive a response containing 1000 trade details, and I always send the trade id from the previous response with the next request. It works fine for exactly 4 requests and responses, but after that, the following line throws a System.Net.WebException saying "The operation has timed out":
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Here are the facts:
Catching the exception and retrying keeps causing the same exception
The default HttpWebRequest .Timeout and .ReadWriteTimeout are already high enough (over a minute)
Changing HttpWebRequest.KeepAlive to false didn't solve anything either
It seems to always work in the browser, even while the function is failing
It has no problems retrieving the response from https://www.google.com
The number of successful responses before the exceptions varies from day to day (but the browser always works)
Starting at the trade id that failed last time causes the exception immediately
Calling this function from the main thread instead still caused the exception
Running it on a different machine didn't work
Running it from a different IP didn't work
Increasing Thread.Sleep in between requests does not help
Any ideas what could be wrong?
I had the very same issue.
For me the fix was as simple as wrapping the HttpWebResponse code in a using block.
using (HttpWebResponse response = (HttpWebResponse) request.GetResponse())
{
    // Do your processing here....
}
Details: This issue usually happens when several requests are made to the same host and the WebResponse is not disposed properly. The using block disposes the WebResponse correctly and thus resolves the issue.
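Applied to the GetTradesOnline function from the question, the change is roughly this (same URL and delay; only the disposal differs):
string GetTradesOnline(Int64 tid)
{
    Thread.Sleep(30000);
    string url = "https://data.mtgox.com/api/1/BTCUSD/trades?since=" + tid.ToString();
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

    // Disposing the response (and its reader) releases the pooled connection to the host.
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}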
There are two kinds of timeouts: client timeout and server timeout. Have you tried doing something like this:
request.Timeout = Timeout.Infinite;
request.KeepAlive = true;
Try something like this...
I just had similar trouble calling a REST service on a Linux server through SSL. After trying many different configuration scenarios, I found out that I had to send a User-Agent in the HTTP headers.
Here is my final method for calling the REST API.
private static string RunWebRequest(string url, string json)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

    // Header
    request.ContentType = "application/json";
    request.Method = "POST";
    request.AllowAutoRedirect = false;
    request.KeepAlive = false;
    request.Timeout = 30000;
    request.ReadWriteTimeout = 30000;
    request.UserAgent = "test.net";
    request.Accept = "application/json";
    request.ProtocolVersion = HttpVersion.Version11;
    request.Headers.Add("Accept-Language", "de_DE");
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
    ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

    byte[] bytes = Encoding.UTF8.GetBytes(json);
    request.ContentLength = bytes.Length;
    using (var writer = request.GetRequestStream())
    {
        writer.Write(bytes, 0, bytes.Length);
        writer.Flush();
        writer.Close();
    }

    var httpResponse = (HttpWebResponse)request.GetResponse();
    using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
    {
        var jsonReturn = streamReader.ReadToEnd();
        return jsonReturn;
    }
}
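A short usage example; the endpoint and JSON payload are placeholders, not a real service:
// Hypothetical call; substitute your own service URL and payload.
string result = RunWebRequest("https://example.com/api/endpoint", "{\"name\":\"value\"}");
Console.WriteLine(result);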
This is not a solution, but just an alternative:
These days I almost always use WebClient instead of HttpWebRequest, especially WebClient.UploadString for POST and PUT and WebClient.DownloadString. These simply take and return strings, so I don't have to deal with stream objects except when I get a WebException. I can also set the content type with WebClient.Headers["Content-type"] if necessary. The using statement also makes life easier by calling Dispose for me.
On the rare occasions performance matters, I set System.Net.ServicePointManager.DefaultConnectionLimit high and use HttpClient with its async methods for simultaneous calls.
This is how I would do it now:
string GetTradesOnline(Int64 tid)
{
    using (var wc = new WebClient())
    {
        return wc.DownloadString("https://data.mtgox.com/api/1/BTCUSD/trades?since=" + tid.ToString());
    }
}
2 more POST examples
// POST
string SubmitData(string data)
{
    string response;
    using (var wc = new WebClient())
    {
        wc.Headers["Content-type"] = "text/plain";
        response = wc.UploadString("https://data.mtgox.com/api/1/BTCUSD/trades", "POST", data);
    }
    return response;
}

// POST: easily url encode multiple parameters
string SubmitForm(string project, string subject, string sender, string message)
{
    // url encoded query
    NameValueCollection query = HttpUtility.ParseQueryString(string.Empty);
    query.Add("project", project);
    query.Add("subject", subject);

    // url encoded data
    NameValueCollection data = HttpUtility.ParseQueryString(string.Empty);
    data.Add("sender", sender);
    data.Add("message", message);

    string response;
    using (var wc = new WebClient())
    {
        wc.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
        response = wc.UploadString("https://data.mtgox.com/api/1/BTCUSD/trades?" + query.ToString(),
                                   WebRequestMethods.Http.Post,
                                   data.ToString());
    }
    return response;
}
Error handling
try
{
    Console.WriteLine(GetTradesOnline(0));
    string data = File.ReadAllText(@"C:\mydata.txt");
    Console.WriteLine(SubmitData(data));
    Console.WriteLine(SubmitForm("The Big Project", "Progress", "John Smith", "almost done"));
}
catch (WebException ex)
{
    string msg;
    if (ex.Response != null)
    {
        // read response HTTP body
        using (var sr = new StreamReader(ex.Response.GetResponseStream())) msg = sr.ReadToEnd();
    }
    else
    {
        msg = ex.Message;
    }
    Log(msg);
}
For what it's worth, I was experiencing the same issues with timeouts every time I used it, even though calls went through to the server I was calling. The problem in my case was that I had Expect set to application/json, when that wasn't what the server was returning.
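As an illustration only, and assuming the header meant here is the Accept/Content-Type negotiation rather than the literal Expect header, the fix amounts to declaring only what the server actually returns:
// Illustrative sketch with a placeholder URL: declare only what the server really serves.
var request = (HttpWebRequest)WebRequest.Create("https://example.com/api");
request.ContentType = "application/json"; // body we send
request.Accept = "application/json";      // change this if the server actually returns e.g. text/xml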

Cross domain HttpWebRequest returns 403 Forbidden

I have two sites that sit on two domains. I need to send information from one site to the other.
I checked that the asmx is reachable by running a unit test, but when I run my code from the second domain I get a 403 Forbidden response.
The code is extremely simple:
try
{
    var webRequest = (HttpWebRequest)WebRequest.Create("http://www.blahblahblah.com/Services/ChatNotification.asmx?fromId=" + 525808 + "&toId=" + 525808);
    var webResponse = webRequest.GetResponse();
    var stream = webResponse.GetResponseStream();
}
catch(WebException ex)
{
    var response = (HttpWebResponse)ex.Response;
}
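This is only a diagnostic sketch, not a fix: it reads the status line and body off the 403 so you can see what the server objects to, and the User-Agent line is just a guess at something the remote host might be filtering on:
try
{
    var webRequest = (HttpWebRequest)WebRequest.Create(
        "http://www.blahblahblah.com/Services/ChatNotification.asmx?fromId=" + 525808 + "&toId=" + 525808);
    webRequest.UserAgent = "Mozilla/5.0";   // assumption: some hosts reject requests with no User-Agent
    using (var webResponse = (HttpWebResponse)webRequest.GetResponse())
    using (var reader = new StreamReader(webResponse.GetResponseStream()))
    {
        Console.WriteLine(reader.ReadToEnd());
    }
}
catch (WebException ex)
{
    var response = (HttpWebResponse)ex.Response;
    if (response != null)
    {
        using (response)
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The status line and body usually hint at why the 403 was issued
            // (missing credentials, referrer checks, user-agent filtering, IP rules, ...).
            Console.WriteLine((int)response.StatusCode + " " + response.StatusDescription);
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}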

Reading remote file [C#]

I am trying to read a remote file using HttpWebRequest in a C# Console Application. But for some reason the request is empty - it never finds the URL.
This is my code:
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("http://uo.neverlandsreborn.org:8000/botticus/status.ecl");
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
How come this is not possible?
The file only contains a string. Nothing more!
How are you reading the response data? Does it come back as successful but empty, or is there an error status?
If that doesn't help, try Wireshark, which will let you see what's happening at the network level.
Also, consider using WebClient instead of WebRequest - it does make it incredibly easy when you don't need to do anything sophisticated:
string url = "http://uo.neverlandsreborn.org:8000/botticus/status.ecl";
WebClient wc = new WebClient();
string data = wc.DownloadString(url);
You have to get the response stream and read the data out of that. Here's a function I wrote for one project that does just that:
private static string GetUrl(string url)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.CreateDefault(new Uri(url));
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        if (response.StatusCode != HttpStatusCode.OK)
            throw new ServerException("Server returned an error code (" + ((int)response.StatusCode).ToString() +
                                      ") while trying to retrieve a new key: " + response.StatusDescription);
        using (var sr = new StreamReader(response.GetResponseStream()))
        {
            return sr.ReadToEnd();
        }
    }
}
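Usage is then a one-liner (ServerException above is whatever custom exception type that project defines):
// Fetch the status file from the question and print it.
string data = GetUrl("http://uo.neverlandsreborn.org:8000/botticus/status.ecl");
Console.WriteLine(data);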
