HttpWebRequest to pretend to be a browser request? - c#

I have some code (in a Winform app) that reads this URL using HttpWebRequest.GetResponse().
For some reason, it recently started returning 500 Internal Server Error when requested from my app.
(The response contains some HTML for the navigation, but doesn't have the main content I need.)
On Firefox/Chrome/IE, it still returns 200 OK.
The problem is I don't have control over their code, so I don't know what the backend does that causes it to break when requested from my app.
Is there a way I can "pretend" to make the request from, say, Google Chrome? (Just to avoid the error.)

Set the HttpWebRequest.UserAgent property to the value of a real browser's user agent.
HttpWebRequest webRequest = (HttpWebRequest) WebRequest.Create("http://example.com");
webRequest.UserAgent = "Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1667.0 Safari/537.36";
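A fuller sketch of the same idea, sending the request and reading the response body as a string (the URL is a placeholder; substitute the page you actually need):
using System;
using System.IO;
using System.Net;

// Minimal sketch: send a GET with a browser-like User-Agent and read the HTML body.
HttpWebRequest webRequest = (HttpWebRequest) WebRequest.Create("http://example.com");
webRequest.UserAgent = "Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1667.0 Safari/537.36";

using (HttpWebResponse response = (HttpWebResponse) webRequest.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();
    Console.WriteLine(html);
}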

Related

C# Console App gets error "The remote server returned an error: (403) Forbidden" using WebClient and also HttpWebRequest

I'm new to this programming world. I got a task to download a zip file from a URL via a C# console application; I've tried many things but still get the same error. I've already added some headers to my code based on the Fiddler result.
This is my code:
WebClient webclient = new WebClient();
webclient.Headers.Add("sec-ch-ua-mobile", "?0");
webclient.Headers.Add("sec-ch-ua-platform", "windows");
webclient.Headers.Add("upgrade-insecure-requests", "1");
webclient.Headers.Add("user-agent", "mozilla/5.0 (windows nt 10.0; win64; x64) applewebkit/537.36 (khtml, like gecko) chrome/98.0.4758.102 safari/537.36");
webclient.Headers.Add("accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,/;q=0.8,application/signed-exchange;v=b3;q=0.9");
webclient.Headers.Add("sec-fetch-site", "cross-site");
webclient.Headers.Add("sec-fetch-mode", "navigate");
webclient.Headers.Add("sec-fetch-user", "?1");
webclient.Headers.Add("sec-fetch-dest", "document");
webclient.Headers.Add("accept-encoding", "gzip, deflate, br");
webclient.Headers.Add("accept-language", "id-id,id;q=0.9,en-us;q=0.8,en;q=0.7,ms;q=0.6,th;q=0.5");
webclient.DownloadFile(url, @"d:\data.zip");
Is there anything I missed? Or should I take another approach?
Thanks
Regards,
Alvin
403 means HTTP Forbidden. It usually indicates the request requires authentication (an API key or similar) and the credentials were refused or are missing.
Make sure your API_KEY is right.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
https://en.wikipedia.org/wiki/HTTP_403
Can you please add the URL from where you want to download that zip?
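If the endpoint does accept anonymous browser-like requests and the 403 is purely about request headers, here is a minimal sketch using HttpClient instead of WebClient (the URL and output path are placeholders). It lets the handler decompress gzip/deflate rather than sending Accept-Encoding by hand, which with WebClient.DownloadFile would otherwise save a compressed body to disk:
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Let the handler decompress gzip/deflate responses automatically.
        var handler = new HttpClientHandler
        {
            AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
        };

        using (var client = new HttpClient(handler))
        {
            // Browser-like headers; adjust to match what Fiddler shows for a working request.
            client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36");
            client.DefaultRequestHeaders.TryAddWithoutValidation("Accept",
                "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");

            // Placeholder URL and path; substitute the real ones.
            byte[] data = await client.GetByteArrayAsync("https://example.com/file.zip");
            File.WriteAllBytes(@"d:\data.zip", data);
        }
    }
}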

Windows Store Application Http Requests Only Work After Manifest Change

I am currently building a Windows Store application and I am running into what I now think is a bug.
All my HTTP requests fail, except when I make a change to the application manifest: then they work on the first run, but straight after that the next web request fails.
The strange thing is that to get it to work again I have to remove a capability from the manifest; even if it is an important one such as the Internet capability, the application will then work!
Here are the headers I am passing in my HttpClient request:
requestMessage.Headers.TryAddWithoutValidation("Connection", "keep-alive");
requestMessage.Headers.TryAddWithoutValidation("Accept-Encoding", "gzip, deflate, sdch");
requestMessage.Headers.TryAddWithoutValidation("Accept-Language", "en-US,en;q=0.8");
requestMessage.Headers.TryAddWithoutValidation("Host", "xx.xx.x.xxx");
requestMessage.Headers.TryAddWithoutValidation("Cache-Control", "no-cache");
requestMessage.Headers.TryAddWithoutValidation("User-Agent", "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.107 Safari/537.36");
requestMessage.Headers.TryAddWithoutValidation("Upgrade-Insecure-Requests", "1");
Even when I get a web request to hit the server it will always fail on the next call.
Here is the exception I get when it fails:
InnerException = {"An attempt was made to access a socket in a way forbidden by its access permissions 10.98.0.181:80"}
We have deployed the Web API services on IIS and the application is running on a tablet that connects to the server using a VPN.
The tablet is running Symantec Endpoint Protection, which is managed externally.
Is there a caching option I can turn off on the device that could be causing this, or a setting I have overlooked?

How to get the HTML source of a YouTube video with HTTP and not HTTPS

I am trying to get the HTML source of a YouTube video using the cURL command line, but I need it to be without HTTPS/SSL.
My problem is that I must use the compiled version of cURL with SSL/SSH.
I am using the following command:
curl --user-agent "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36" -L -x http://my.foo.proxy:8080 http://youtube.com/watch?v=youtubevideo > html.html
This works, but a specific part of the HTML source is in HTTPS (look for a really long script string inside that file; some of the links there start with httpS).
curl --proto =http --proto-redir =http --user-agent "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36" -L -x http://my.foo.proxy:8080 http://youtube.com/watch?v=youtubevideo > html.html
this command causes an error:
protocol https not supported or disabled in libcurl.
which is really weird because the curl version I am using does have SSL and I don't even want HTTPS (see the --proto and --proto-redir args).
As a test I also tried using the .NET WebClient class, like:
public static void DownloadString (string address)
{
WebClient client = new WebClient ();
string reply = client.DownloadString (address);
Console.WriteLine (reply);
}
and in this case I get an HTML source file without HTTPS.
My question is: how do I get the HTML source of a YouTube video using cURL without HTTPS inside the HTML source file, like when I use .NET/WebClient?
Using a user agent string that does not include the Firefox token fixes this issue when used from a console:
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101
When used through a binding, set SSL_VERIFYPEER to false and SSL_VERIFYHOST to 0. That allows man-in-the-middle attacks, but if that is the only option...
In addition, HTTPGET and FOLLOWLOCATION should both be set to true.

.NET inconsistency in Request.Browser.Version

I created an empty C# web site with just one page that outputs Request.Browser.Version and Request.UserAgent, then hit it with different Chrome versions using the "User-Agent Switcher" Chrome extension.
From time to time, even though Request.UserAgent is correct, Request.Browser.Version returns the wrong value:
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.16 Safari/537.36" Returned Request.Browser.Version:39
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2272.16 Safari/537.36" Returned Request.Browser.Version:41
So yes, .NET 4.5 caches the browser capabilities keyed on the first 64 characters of the user agent string, and for Chrome that cuts off just before the version number. So the next user with the same browser but a different version gets the previously cached (wrong) browser version, and so on.
To solve it, just increase the browserCaps userAgentCacheKeyLength="..." setting, as can be seen here:
.Net 4.0 website cannot identify some AppleWebKit based browsers
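A minimal sketch of that web.config change (256 is only an example value; pick a length long enough to cover the version number in the user agent string):
<!-- web.config: lengthen the user-agent cache key so different Chrome versions
     are not collapsed into the same browserCaps cache entry. -->
<system.web>
  <browserCaps userAgentCacheKeyLength="256" />
</system.web>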
How is this stupid Microsoft bug not in the headlines?

How to call a web page/service from a different domain?

I need to call a web page on a different domain. When I call this page from a browser, it responds normally. But when I call it from server-side code or from a jQuery AJAX script, it responds with empty XML.
I am trying to call a page or service like this:
http://www.otherdomain.com/oddsData.jsp?odds_flash_id=11&odds_s_type=1&odds_league=all&odds_period=all&me_select_string=&q=93801
This responds normally from a browser. But when I write C# code like this:
WebClient wc = new WebClient();
wc.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5";
wc.Headers[HttpRequestHeader.Accept] = "*/*";
wc.Headers[HttpRequestHeader.AcceptCharset] = "ISO-8859-1,utf-8;q=0.7,*;q=0.3";
wc.Headers[HttpRequestHeader.AcceptEncoding] = "gzip,deflate,sdch";
wc.Headers[HttpRequestHeader.AcceptLanguage] = "en-US,en;q=0.8";
wc.Headers[HttpRequestHeader.Host] = "otherdomain.com";
var response = wc.DownloadString("http://www.otherdomain.com/oddsData.jsp?odds_flash_id=11&odds_s_type=1&odds_league=all&odds_period=all&me_select_string=&q=93801");
Response.Write(response);
I get empty XML as the response:
<xml></xml>
How can I get the same response from server-side code or client-side code that I get from the browser?
I tried the solution here: Calling Cross Domain WCF service using Jquery
but I didn't understand what to do, so I couldn't apply the solution described.
How can I get the same response from server-side code or client-side code that I get from the browser?
Due to the same origin policy restriction you cannot send cross domain AJAX requests from browsers.
From .NET, on the other hand, you can send this request perfectly fine. But the web server you are sending the request to probably expects some HTTP headers, such as the User-Agent header. So make sure you have provided all the headers in your request that the server needs. For example, to add the User-Agent header:
using (WebClient wc = new WebClient())
{
wc.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5";
var response = wc.DownloadString("http://www.otherdomain.com/oddsData.jsp?odds_flash_id=11&odds_s_type=1&odds_league=all&odds_period=all&me_select_string=&q=93801");
Response.Write(response);
}
You could use Firebug or the Chrome developer tools to inspect all the HTTP request headers your browser sends with the request that works, and simply add those headers.
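A sketch extending that idea with a few more headers copied from the browser. The exact set depends on what the server checks (the Referer value below is only an illustrative guess), and Accept-Encoding is deliberately omitted because WebClient.DownloadString will not decompress a gzip response for you:
using (WebClient wc = new WebClient())
{
    // Headers copied from the browser's working request; adjust to what the
    // developer tools actually show for this server.
    wc.Headers[HttpRequestHeader.UserAgent] = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5";
    wc.Headers[HttpRequestHeader.Accept] = "*/*";
    wc.Headers[HttpRequestHeader.AcceptLanguage] = "en-US,en;q=0.8";
    wc.Headers[HttpRequestHeader.Referer] = "http://www.otherdomain.com/";

    var response = wc.DownloadString("http://www.otherdomain.com/oddsData.jsp?odds_flash_id=11&odds_s_type=1&odds_league=all&odds_period=all&me_select_string=&q=93801");
    Response.Write(response);
}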
