I'm trying to detect the final URL after a redirect. The URL is shown below. In Chrome I see the 302 response, but in my code I simply get a 200 with no URL change. The 302 status is visible in the Chrome dev tools.
http://imgur.com/ZmQOSHo (Chrome console image at this link)
I've tested with another URL, which works. I've noticed the latency for the TinyURL is 1.21 s, while the other URL takes 85 ms. Maybe the redirect happens so fast it doesn't get noticed?
I've included my code at the bottom.
References used trying to detect the redirect:
Getting the Redirected URL from the Original URL
Get a collection of redirected URLs from HttpWebResponse
Works: http://tinyurl.com/qcrr3kn
Not working: http://api.citygridmedia.com/content/places/v2/click?q=urVBa6moXC5c7fD6SHv-hbf_7HRTJ-epI2pHQq9_3jGRS3oxUEA3zsS1yLhsh9JEMiM8qdYEtS_lStDSZc-z0HmZe5l2XwN1a5djnuXu5V1qx7Z8EcA6yxLKrMog1OH9vFD9461355g648hyYiUbBiYsgkiDMf0VK0OyE5NeCh7xaY-OWE-pI6J9TXcyr1kJ5B5hTzbOJpxvXVdX05ZCdwHpYplILuELic5q1ojeI1nhJzk8Tl7wsl-lPrROPRwrv6Vvvc3wVE2idwR784UoxXgZt00XiBh6rVYpGf0-q80smzQZRNmEcO7wa3342DmMbLPlCUd6QMCCSyvMlmgc4Q7yPuDG5vGRdaSPzm47_mTd74r6ovP00DvY5ODHN3LETLY7LMYirpiUus0hR1WWmUC26M-JCspa2l7J14e2WcsVUHKek47lJyLmPoAqUFoLFNwKZyOqQKk
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(strUrl);
request.UserAgent = "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36"; // set the UA on the outgoing request, not on HttpContext.Current.Request
request.Method = "GET";
request.AllowAutoRedirect = false;
string myTestUri = request.RequestUri.ToString();
string testResponse = null;
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
int statusCode = (int)response.StatusCode;
Stream receiveStream = response.GetResponseStream();
StreamReader readStream = null;
if (String.IsNullOrEmpty(response.CharacterSet))
readStream = new StreamReader(receiveStream);
else
readStream = new StreamReader(receiveStream, System.Text.Encoding.GetEncoding(response.CharacterSet));
testResponse = readStream.ReadToEnd();
string newUrl = response.ResponseUri.AbsoluteUri;// api.citygridmedia.com domain is returned
string uriString = response.GetResponseHeader("Location"); // empty for the api.citygridmedia.com domain
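One way to make each hop visible is to keep `AllowAutoRedirect = false` and follow the `Location` headers in a loop, resolving each one against the current URI (the header value may be relative). This is a minimal sketch, assuming the server actually sends a 3xx to non-browser clients; if the citygridmedia endpoint only redirects browser-like requests (matching user agent, cookies, etc.), it will return 200 on the first hop, which matches the behavior described above.

```csharp
using System;
using System.Net;

class RedirectTrace
{
    // Follow redirects manually so every intermediate URL is observable.
    static string FollowRedirects(string startUrl, int maxHops = 10)
    {
        Uri current = new Uri(startUrl);
        for (int hop = 0; hop < maxHops; hop++)
        {
            var req = (HttpWebRequest)WebRequest.Create(current);
            req.Method = "GET";
            req.AllowAutoRedirect = false; // a 3xx is returned normally, not thrown
            using (var resp = (HttpWebResponse)req.GetResponse())
            {
                int code = (int)resp.StatusCode;
                if (code < 300 || code >= 400)
                    return current.AbsoluteUri; // final destination reached
                string location = resp.GetResponseHeader("Location");
                Console.WriteLine("{0} -> {1}", code, location);
                // Location may be relative; resolve it against the current URI.
                current = new Uri(current, location);
            }
        }
        return current.AbsoluteUri;
    }

    static void Main()
    {
        Console.WriteLine(FollowRedirects("http://tinyurl.com/qcrr3kn"));
    }
}
```

The `new Uri(current, location)` overload handles both absolute and relative `Location` values, so the loop keeps working even when a server sends a path-only redirect.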
Related
I tried to get the source of a particular site's page using the code below, but it failed.
I can get the page source in 1-2 seconds using a WebBrowser control or WebDriver, but HttpWebRequest fails.
I also tried putting the actual browser cookies into the HttpWebRequest, but that failed too.
(Exception - The operation has timed out)
I wonder why it fails and want to learn from the failure.
Thanks in advance!
string Html = String.Empty;
CookieContainer cc = new CookieContainer();
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("https://www.coupang.com/");
req.Method = "GET";
req.Host = "www.coupang.com";
req.UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36";
req.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3";
req.Headers.Add("Accept-Language", "ko-KR,ko;q=0.9,en-US;q=0.8,en;q=0.7");
req.CookieContainer = cc;
using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
using (StreamReader str = new StreamReader(res.GetResponseStream(), Encoding.UTF8))
{
Html = str.ReadToEnd();
}
Removing req.Host from your code should do the trick.
According to the documentation:
If the Host property is not set, then the Host header value to use in an HTTP request is based on the request URI.
You already set the URI in (HttpWebRequest)WebRequest.Create("https://www.coupang.com/"), so I don't think setting it again is necessary.
Please let me know if it helps.
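As the documentation quoted above says, the Host header is derived from the request URI when it isn't set explicitly, so setting it by hand is redundant. A quick check (a minimal sketch, no request is actually sent):

```csharp
using System;
using System.Net;

class HostDemo
{
    static void Main()
    {
        // Host is derived from the URI passed to Create(); no need to set it again.
        var req = (HttpWebRequest)WebRequest.Create("https://www.coupang.com/");
        Console.WriteLine(req.Host); // prints "www.coupang.com"
    }
}
```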
I am performing the following HttpWebRequest:
private static void InvokeHealthCheckApi()
{
var webRequest = (HttpWebRequest)WebRequest.Create(Url);
string sb = JsonConvert.SerializeObject(webRequest);
webRequest.Method = "GET";
webRequest.KeepAlive = true;
webRequest.AllowAutoRedirect = true;
webRequest.ContentType = "application/json";
webRequest.UserAgent =
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36";
webRequest.CookieContainer = cookieJar;
using (HttpWebResponse response = webRequest.GetResponse() as HttpWebResponse)
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
File.AppendAllText("C:\\httpResponse1.txt", response.Headers.ToString());
File.AppendAllText("C:\\httpResponse2.html", reader.ReadToEnd());
}
}
The response from the request comes back as a web page that reads:
"Script is disabled. Click Submit to continue."
(Submit button)
After clicking the Submit button I get a prompt that reads:
"Do you want to open or save healthcheck.json (297 bytes) from fr74a87d9.work.corp.net?"
After clicking the Open button I receive the JSON data that I am expecting.
My question is: how do I parse the response to get to the JSON data I need? Is it normal to get the web page as the initial response and have to drill down to get the JSON? StreamReader can't parse the response because it's a web page, not JSON data.
I found a resolution to my problem here: http://blogs.microsoft.co.il/applisec/2013/06/03/passive-federation-client/
My computer's proxy settings use an automatic configuration script whose address is something like http://gazproxy.xxxxxx.com:81/proxy.pac. I can visit websites normally, but when I call the GetResponse() method of an HttpWebRequest instance as follows:
public static string GetContent(string url)
{
string data = string.Empty;
Uri uri=new Uri(url);
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Referer = uri.Host;
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.29 Safari/537.36";
request.Method = "GET";
WebProxy proxy = new WebProxy();
proxy.Address = new Uri("http://gazproxy.xxxxxx.com:81/proxy.pac");
request.UseDefaultCredentials = true;
request.Proxy = proxy;
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.StatusCode == HttpStatusCode.OK)
{
Stream receiveStream = response.GetResponseStream();
StreamReader readStream = null;
if (response.CharacterSet == null)
readStream = new StreamReader(receiveStream);
else
readStream = new StreamReader(receiveStream, Encoding.GetEncoding(response.CharacterSet));
data = readStream.ReadToEnd();
response.Close();
readStream.Close();
}
return data;
}
it throws an exception: "The remote server returned an error: (404) Not Found." What should I do?
The problem is with this line:
proxy.Address = new Uri("http://gazproxy.xxxxxx.com:81/proxy.pac");
The URI you wrote actually refers to an automatic configuration script, not to a proxy server. You need to open and inspect that script to find the actual proxy server address. To do that, paste the URL into a browser address bar and view, download, or save a copy of it.
The WebProxy object does not have the intelligence to go read some configuration script and identify the real proxy server.
From MSDN ( https://msdn.microsoft.com/en-us/library/system.net.webproxy.address%28v=vs.90%29.aspx )
WebProxy.Address Property
The Address property contains the address of the proxy server. When automatic proxy detection is not enabled, and no automatic configuration script is specified, the Address property and BypassList determine the proxy used for a request.
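Rather than parsing the .pac file by hand, one option is to let the framework pick up the system's proxy settings, which do understand the configuration script. A minimal sketch (the URL is a placeholder; no request is sent here):

```csharp
using System;
using System.Net;

class ProxyDemo
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
        // Use the proxy the OS resolves from the automatic configuration script,
        // instead of pointing WebProxy at the .pac URL itself.
        request.Proxy = WebRequest.GetSystemWebProxy();
        request.Proxy.Credentials = CredentialCache.DefaultCredentials;
        // request.GetResponse() would now go through the real proxy server.
        Console.WriteLine(request.Proxy != null);
    }
}
```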
I think the main problem is that I am trying to get the output of a PHP script on an SSL-protected website. Why doesn't the following code work?
string URL = "https://mtgox.com/api/0/data/ticker.php";
HttpWebRequest myRequest =
(HttpWebRequest)WebRequest.Create(URL);
myRequest.Method = "GET";
WebResponse myResponse = myRequest.GetResponse();
StreamReader _sr = new StreamReader(myResponse.GetResponseStream(),
System.Text.Encoding.UTF8);
string result = _sr.ReadToEnd();
//Console.WriteLine(result);
result = result.Replace('\n', ' ');
_sr.Close();
myResponse.Close();
Console.WriteLine(result);
It hangs, then fails with an unhandled WebException: "The operation has timed out".
You're hitting the wrong URL. SSL is https://, but you're hitting http:// (note the missing s). The site does redirect to the SSL version of the page, but your code apparently isn't following that redirect.
After adding myRequest.UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11"; everything started working.
My friend is using C# to write a simple program that requests a web page.
However, he ran into a problem when requesting one specific page.
He has already tried setting all the headers and cookies on the request, but he still gets a timeout exception.
The example webpage is https://my.ooma.com
Here is the code:
string url = "https://my.ooma.com";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Timeout = 30000;
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5";
request.Method = "GET";
request.CookieContainer = new CookieContainer();
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
request.Headers.Add("Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3");
request.Headers.Add("Accept-Encoding:gzip,deflate,sdch");
request.Headers.Add("Accept-Language:en-US,en;q=0.8");
request.KeepAlive = true;
WebResponse myResponse = request.GetResponse();
StreamReader sr = new StreamReader(myResponse.GetResponseStream());
string result = sr.ReadToEnd();
sr.Close();
myResponse.Close();
All the headers are the same as when browsing the page in Chrome.
He also didn't see any cookies being set when checking with the Chrome developer tools.
Can anyone successfully request the page using C#?
Thanks a lot.
Sorry for being late.
The following code snippet should work just fine. I also tried it with your old URL that had "getodds.xgi" in it, and it also worked fine.
The server uses a secure sockets layer (SSL) protocol for connections that use the Secure Hypertext Transfer Protocol (HTTPS) scheme only.
You don't need to set any UserAgent or headers if all you want is the response.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
WebRequest request = WebRequest.Create("http://my.ooma.com/");
string htmlResponse = string.Empty;
using (WebResponse response = request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
htmlResponse = reader.ReadToEnd();
}
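One caveat about the snippet above: SSL 3 is long deprecated and most servers today refuse it. On .NET Framework 4.5 and later, the equivalent line would select TLS 1.2 instead. This is a suggested adjustment, not part of the original answer:

```csharp
using System;
using System.Net;

class TlsSetup
{
    static void Main()
    {
        // Prefer TLS 1.2 over the obsolete SSL 3; servers negotiate the rest.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
        Console.WriteLine(ServicePointManager.SecurityProtocol);
    }
}
```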