Issue:
Consider the following working code.
System.Net.WebProxy proxy = new System.Net.WebProxy(proxyServer[i]);
System.Net.HttpWebRequest objRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(https_url);
objRequest.Method = "GET";
objRequest.Proxy = proxy;
First, notice that proxyServer is an array, so each request may use a different proxy.
If I comment out the last line, thereby removing the use of any proxies, I can monitor requests in Fiddler just fine, but once I reinstate the line and start using them, Fiddler stops logging outbound requests from my app.
Question:
Do I need to configure something in Fiddler to see the requests or is there a change in .Net I can make?
Notes:
.Net 4.0
requests are sometimes HTTPS, but I don't think this is directly relevant to the issue
all requests are outbound (not localhost/127.0.0.1)
Fiddler is a proxy itself. By assigning a different proxy to your request, you're essentially taking Fiddler out of the equation.
If you're looking to capture traffic while also using your own proxy, you can't do it with another proxy (by definition that makes no sense); you want a network analyzer, such as Wireshark. A network analyzer captures the traffic instead of having the traffic routed through it (as a proxy does), so it can monitor the traffic while your requests still go through your custom proxy.
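If you still want Fiddler to see this traffic while each request uses its own upstream proxy, one commonly suggested approach is to point the request at Fiddler and let Fiddler forward it. A minimal sketch, assuming Fiddler is listening on its default 127.0.0.1:8888 and that your Fiddler version honors the X-OverrideGateway request header (treat both as assumptions to verify):
// Sketch: send the request through Fiddler so it gets logged...
System.Net.HttpWebRequest objRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(https_url);
objRequest.Method = "GET";
objRequest.Proxy = new System.Net.WebProxy("127.0.0.1", 8888);
// ...and ask Fiddler to forward it to the per-request upstream proxy
// (assumes X-OverrideGateway support in your Fiddler version).
objRequest.Headers.Add("X-OverrideGateway", proxyServer[i]);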
Related
I need to consume a third-party WebSocket API in .NET Core and C#. The WebSocket server is implemented with socket.io (protocol version 0.9), and I am having a hard time understanding how socket.io works; on top of that, the API requires SSL.
I found out that the HTTP handshake must be initiated via a certain path, which is...
socket.io/1/?t=...
...whereby the value of the parameter t is a Unix timestamp (in seconds). The service replies with a session key, timeout information, and a list of supported transport protocols. For simplicity, this first request is made via HttpClient and does not involve any additional headers.
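For reference, a minimal sketch of that handshake step, assuming an existing HttpClient named httpClient, a placeholder host (https://example.com), and the usual socket.io 0.9 colon-separated handshake response with the session id first (all assumptions to verify against the real API):
// Sketch of the handshake request described above.
// Assumption: the response has the socket.io 0.9 form
// "<sessionId>:<heartbeatTimeout>:<closeTimeout>:<transports>".
long t = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
string handshake = await httpClient.GetStringAsync("https://example.com/socket.io/1/?t=" + t);
string sessionId = handshake.Split(':')[0];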
Next, another HTTP request is required, which should result in an HTTP 101 Switching Protocols response. I specified the following headers in accordance with the previous request...
Connection: Upgrade
Upgrade: websocket
Sec-WebSocket-Key: ...
Sec-WebSocket-Version: 13
...whereby the value of the Key header is a Base64-encoded GUID value that the server will use to calculate the Sec-WebSocket-Accept header value. I also precalculate the expected Sec-WebSocket-Accept header value for validation...
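The expected accept value mentioned above can be computed like this (a sketch; secWebSocketKey stands in for the Base64 key being sent, and the GUID is the fixed value from RFC 6455):
// Sketch: compute the Sec-WebSocket-Accept value the server should return
// for a given Sec-WebSocket-Key, per RFC 6455.
using (var sha1 = System.Security.Cryptography.SHA1.Create())
{
    byte[] hash = sha1.ComputeHash(
        System.Text.Encoding.ASCII.GetBytes(secWebSocketKey + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"));
    string expectedAccept = Convert.ToBase64String(hash);
}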
I tried to make that request using HttpClient as well, but that does not seem to work... I actually don't understand why, because I expect an HTTP response. I also tried to make the request using TcpClient by sending a manually prepared GET request over an SslStream, which accepts the remote certificate as expected. Sending data seems to work, but there's no response data... the Read method returns zero.
What am I missing here? Do I need to set up a listener for the WebSocket connection as well, and if yes, how? I don't want to implement a feature-complete socket.io client; I'd just like to keep it as simple as possible to catch some events...
The best way of debugging these issues is to use a sniffer like Wireshark or Fiddler. I often connect using IE, compare the IE results with my application, and modify my app so it works like IE. Using WebClient instead of HttpClient will also work better, because WebClient does more automatically than HttpClient.
A web connection uses the headers of the client and the headers in the server webpage to negotiate a connection mode. Adding additional headers to your client will change the connection mode. Cookies are also used to select the connection mode. Cookies are the result of a previous connection to the same server; they shorten the negotiation and store info from the previous connection so less data has to be downloaded from the server. The server remembers the cookies. Cookies have a timeout and are kept until the timeout expires. The IE history on your client has a list of IP addresses, and .Net automatically sends the cookies associated with the server IP.
If a bad connection is made to the server, the cookies are also bad, so the only way of connecting is to remove the cookie. Usually I go into IE and delete the cookies manually in the IE history.
To check whether a response is good, look at the status the server returns. A completed response contains a status of 200 OK. You can get statuses that are errors. You can also get a 100 Continue, which means you need to send another request to get the rest of the webpage.
HTTP has 1.0 (stream mode) and 1.1 (chunked mode). The .Net library doesn't work with chunked responses: chunked transfer requires the client to send a message to get the next chunk, and I have not found a way in .Net to send that next-chunk message. So if a server responds with 1.1, you have to add headers to your client to use 1.0 only.
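If you do want to pin a request to HTTP/1.0 as suggested, a minimal sketch with HttpWebRequest (the URL is a placeholder):
// Sketch: force HTTP/1.0 so the server should not reply with a chunked HTTP/1.1 response.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/");
request.ProtocolVersion = HttpVersion.Version10;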
HTTP uses TCP as the transport layer, so in a sniffer you will see both TCP and HTTP. Usually you can filter the sniffer to show only HTTP and look at the headers for debugging. Occasionally TCP disconnects, and then you have to look at TCP to find out why the disconnect occurred.
Edit: after talking it over with a couple of IT guys, I've realized it's only the POLL requests that are having issues. I'm fetching the images via GET requests, which go through quickly and as expected, whether or not the POLL messages are having issues.
I'm working on a client to interface with an IP camera in C#.
It's all working dandy except that I get really poor HTTP request performance when I'm not using Fiddler (a web traffic inspection proxy).
I'm using an HttpClient to send my requests; this is the code that actually initiates the poll request:
public async Task<bool> SetPoll(int whichpreset)
{
    string action = "set";
    string resource = presetnames[whichpreset];
    string value = presetvalues[whichpreset];
    int requestlen = 24 + action.Length + resource.Length + value.Length;
    var request = new HttpRequestMessage
    {
        RequestUri = new Uri("http://" + ipadd + "/res.php"),
        Method = HttpMethod.Post,
        Content = new FormUrlEncodedContent(new[]
        {
            new KeyValuePair<string, string>("action", action),
            new KeyValuePair<string, string>("resource", resource),
            new KeyValuePair<string, string>("value", value)
        }),
        Version = new System.Version("1.1"),
    };
    HttpResponseMessage mess = await client.SendAsync(request);
    if (mess.IsSuccessStatusCode)
    {
        return true;
    }
    else
    {
        return false;
    }
}
When Fiddler is up, all my HTTP requests go through quickly and without a hitch (I'm making about 20 POST requests upon connecting). Without it, they only go through as expected about 1/5 of the time; the rest of the time they never complete, which is a big issue. Additionally, the initial connection request often takes over a minute when not using Fiddler, but consistently takes only a few seconds when I am, so it doesn't seem to be a timing issue of sending requests too soon after connecting.
This leads me to think that the request, as written, is fairly poorly behaved, and that perhaps Fiddler's requests behave better. I'm a newbie to HTTP, so I'm not sure exactly why this would be. My questions:
does Fiddler modify HTTP requests (e.g. different headers, etc.) as they are sent to the server?
even if it doesn't modify the requests, are Fiddler's requests in some way better behaved than I'd be getting out of .NET 4.0 in C# in VS2013?
is there a way to improve the behavior of my requests to emulate whatever Fiddler is doing? Ideally while still working within the stock HTTP namespace, but I'm open to using others if necessary.
I'll happily furnish more code if helpful (though tomorrow).
Inserting
await Task.Delay(50);
between all requests fixed the problem (I haven't yet tested different delays). Because Fiddler smoothed the problem out, I suspect it's an issue the camera has with requests sent in too quick a succession, and Fiddler sent them at a more tolerable rate. Because it's an async await, there is no noticeable performance impact, other than it taking a little while to get through all ~20 (30 now) requests on startup, which is not an issue for my app.
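In context, the workaround looks roughly like this (a sketch; the surrounding loop and the presetsToSend list are assumptions, since the startup code isn't shown):
// Sketch of the workaround: pace the startup requests so the camera
// isn't hit with ~20-30 POSTs back to back.
foreach (int preset in presetsToSend)   // hypothetical list of preset indices
{
    await SetPoll(preset);              // SetPoll is the method shown above
    await Task.Delay(50);               // 50 ms worked here; other delays untested
}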
Fiddler installs itself as a system proxy. It is possible that the Fiddler process has better access to the network than your application's process.
Fiddler might be configured to bypass your normal system proxy (check the gateway tab under options) and perhaps the normal system proxy has issues.
Fiddler might be running as a different user with a different network profile, e.g. could be using a different user cert store or different proxy settings such as exclusion list.
Fiddler might be configured to override your hosts file and your hosts file may contain errors.
Your machine might be timing out trying to reach the servers necessary to check for certificate revocation. Fiddler has CRL checking disabled by default (check the HTTPS tab).
Fiddler has a ton of options and the above are just some guesses.
My recommendation would be to check and/or toggle the above options to see if any of them apply. If you can't get anywhere, you may have to forget Fiddler exists and troubleshoot your network problems independently, e.g. by using NSLOOKUP, PING, TRACERT, and possibly TELNET to isolate the problem.
There is nothing in your code sample that suggests a code flaw that could cause intermittent network failures of the kind you are describing. In fact it is hard to imagine any code flaw that would cause that sort of behavior.
I'm executing requests through some free proxy servers, and I would like to know what headers each proxy server sets. Right now I'm visiting a page that prints out the result in the HTML body.
using (WebClient client = new WebClient())
{
    WebProxy wp = new WebProxy("proxy url");
    client.Proxy = wp;
    string str = client
        .DownloadString("http://www.pagethatprintsrequestheaders.com");
}
The WebClient doesn't show the modified headers, but the page prints the correct ones. Is there any way to find out what headers are being set by the proxy without visiting a page that prints them, like in my example? Do I have to create my own HTTP listener?
When the proxy server sets its own headers, it is essentially performing its own web request. It can even hide or override some of the headers that you set using your WebProxy.
Consequently, only the target page (pagethatprintsrequestheaders.com) can reliably see the headers being set by the proxy. There is no guarantee that the proxy server will send back the headers that it had sent to the target, back to you.
To put it another way, it really depends on the proxy server implementation. If the proxy server you are using is based on Apache's ProxyPass, you'd probably see the headers being set! If it's a custom implementation, then you may not.
You can first try inspecting the client.ResponseHeaders property of the WebClient after your response comes back. If this does not contain headers matching what (pagethatprintsrequestheaders.com) reports, then it's indeed a custom or modified implementation.
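A quick sketch of that check, reusing the WebClient code from the question:
// Sketch: dump the response headers the proxy hands back to the client.
// These are not necessarily the headers the proxy added to the outbound request.
using (WebClient client = new WebClient())
{
    client.Proxy = new WebProxy("proxy url");
    client.DownloadString("http://www.pagethatprintsrequestheaders.com");
    foreach (string key in client.ResponseHeaders.AllKeys)
    {
        Console.WriteLine("{0}: {1}", key, client.ResponseHeaders[key]);
    }
}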
You could then create your own proxy servers, but this is more involved. You would probably spin up an EC2 instance, install Squid/TinyProxy/YourCustomProxy on it and use that in your WebProxy call.
You may also want to modify your question and explain why you want to read the headers. There may be solutions to your overall goal that don't require reading headers at all but could be done in some other way.
It looks like you're sending a request from your WebClient, through the proxy, and it's received by the host at www.pagethatprintsrequestheaders.com.
If the proxy is adding headers to the request, your WebClient will never see them on its own request.
        webclient's request        proxy's request
                                   (with headers added)
client -----------> proxy ----------------------> destination host
The WebClient can only see the state of the request between it and the proxy. The proxy will create a new request to send to the destination host, and it's that request to which the headers are added. It is also that request that is received by the destination host (which is why, when it echoes back the headers, it can see those added by the proxy).
When the response comes back, the headers are set by the host. It's possible that the proxy will add some headers to the response, but even if it did, they are not likely to be the same headers it adds to a request.
        response                      response
        (forwarded by proxy)          (headers set by host)
client <------------------- proxy <------------------------- destination host
Using a host that echoes the headers back as part of the response payload is one option.
Another would be to use something between the proxy and the destination host to inspect the request there (e.g. a packet sniffer, or another proxy like Fiddler that lets you see the request headers).
If the proxy is outside of your network, getting between the proxy and the destination host will be difficult (unless the host is under your control).
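If you go the echo-host route, here is a sketch using a public echo endpoint (httpbin.org/headers is one example; whether it is acceptable for your traffic is an assumption to check):
// Sketch: ask an echo endpoint which headers actually arrived after the proxy.
using (WebClient client = new WebClient())
{
    client.Proxy = new WebProxy("proxy url");
    // The body lists the headers as seen by the destination host,
    // including anything the proxy added or rewrote along the way.
    string headersSeenByServer = client.DownloadString("http://httpbin.org/headers");
    Console.WriteLine(headersSeenByServer);
}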
We use Request.Url.GetLeftPart(UriPartial.Authority) to get the domain part of the site. This served our requirement on http.
We recently changed the site to https (about 3 days ago), but this still returns http://..
The URLs were all changed to https and show in the browser address bar.
Any idea why this happens?
The following example works fine and returns a string with "https":
var uri = new Uri("https://www.google.com/?q=102njgn24gk24ng2k");
var authority = uri.GetLeftPart(UriPartial.Authority);
// authority => "https://www.google.com"
You either have an issue with the HttpContext class right here, or all your requests are still using http.
You can check the request's HttpContext.Current.Request.IsSecureConnection property. If it is true and the GetLeftPart method still returns http for you, I think you won't get around doing a manual replacement here.
If all your requests really are coming in over http, you might enforce a secure connection in IIS.
You should also inspect the incoming URL and log it somewhere for debugging purposes.
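A sketch of that check plus a manual fix-up, in case you do end up having to replace the scheme yourself:
// Sketch: if the request really is secure but GetLeftPart still says "http",
// patch the scheme manually as a last resort.
string authority = Request.Url.GetLeftPart(UriPartial.Authority);
if (HttpContext.Current.Request.IsSecureConnection &&
    authority.StartsWith("http://", StringComparison.OrdinalIgnoreCase))
{
    authority = "https://" + authority.Substring("http://".Length);
}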
This can also happen when dealing with a load balancer. In one situation I worked on, https requests were converted into http by the load balancer. It still says https in the browser address bar, but internally it's an http request, so the server-side call you are making to GetLeftPart() returns http.
If your request is coming through ARR with SSL offloading,
Request.Url.GetLeftPart(UriPartial.Authority) will only return http.
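In the offloading case, many load balancers (and ARR, depending on configuration) forward the original scheme in a request header; X-Forwarded-Proto is a common choice, but the exact header name is an assumption to verify against your setup:
// Sketch: behind an SSL-offloading balancer, trust the forwarded-scheme header instead.
string authority = Request.Url.GetLeftPart(UriPartial.Authority);
string forwardedProto = Request.Headers["X-Forwarded-Proto"];   // header name depends on the balancer
if (string.Equals(forwardedProto, "https", StringComparison.OrdinalIgnoreCase) &&
    authority.StartsWith("http://", StringComparison.OrdinalIgnoreCase))
{
    authority = "https://" + authority.Substring("http://".Length);
}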
This is probably the weirdest problem I have run into. I have a piece of code that submits a POST to a URL. The code doesn't work, nor does it throw any exceptions, when Fiddler isn't running. However, when Fiddler is running, the code posts the data successfully. I have access to the POST page, so I know whether the data has been POSTed or not. This probably makes no sense, but it's the situation I am running into, and I am very confused.
byte[] postBytes = new ASCIIEncoding().GetBytes(postData);
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://myURL");
req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.224 Safari/534.10";
req.Accept = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
req.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.3");
req.Headers.Add("Accept-Language", "en-US,en;q=0.8");
req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";
req.ContentLength = postBytes.Length;
req.CookieContainer = cc;
Stream s = req.GetRequestStream();
s.Write(postBytes, 0, postBytes.Length);
s.Close();
If you don't call GetResponseStream() then you can't close the response. If you don't close the response, then you end up with a socket in a bad state in .NET. You MUST close the response to prevent interference with your later request.
Close the HttpWebResponse after getting it.
I had the same problem, then I started closing the responses after each request, and Boom, no need to have fiddler running.
This is pseudocode for the synchronous version:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
// ... configure the request and write the body ...
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
// ... read the response, e.g. via response.GetResponseStream() ...
response.Close();
I had a similar problem recently. Wireshark would show the HttpWebRequest never leaving the client machine unless Fiddler was running. I tried removing proxy settings, but that didn't fix the problem for me. I tried everything from setting the request to HttpVersion.Version10 to enabling/disabling SendChunked, KeepAlive, and a host of other settings. None of them worked.
Ultimately, I just checked whether .NET detected a proxy and had the request ignore it. That fixed my issue with request.GetResponse() throwing an immediate exception.
IWebProxy proxy = request.Proxy;
if (request.Proxy != null)
{
    Console.WriteLine("Removing proxy: {0}", proxy.GetProxy(request.RequestUri));
    request.Proxy = null;
}
In my case, when I had the same situation (the POST only worked when Fiddler was running), the code was sending the POST from an application running on IIS Express in a development environment, behind a proxy, to an external server. Apparently, even if you have proxy settings configured in Internet Options, the environment IIS is running in may not have access to them. In my work environment I simply had to update web.config with the path to our proxy's configuration script. You may need to tweak other proxy settings; in that case your friend is this MSDN page that explains what they are: http://msdn.microsoft.com/en-us/library/sa91de1e.aspx.
Ultimately I included the following in the application's web.config, and then the POST went through.
<configuration>
  <system.net>
    <defaultProxy>
      <proxy scriptLocation="http://example.com:81/proxy.script" />
    </defaultProxy>
  </system.net>
</configuration>
Well, I faced a similar problem a few weeks back, and the reason was that when Fiddler is running it changes the proxy settings to pass requests through Fiddler, but when it's closed the proxy setting somehow still remains and thus doesn't allow your request to go out to the internet.
I tried setting IE's and Firefox's network settings not to use any proxy, and it worked.
Try this; it may be the same problem...
I ran into the same problem with Python: requests to a local server were failing with a 404, but when I ran them with Fiddler running they worked correctly.
The real clue to the problem here is that Fiddler works by acting as a proxy for HTTP traffic so that all requests from the local machine go through Fiddler rather than straight out into the network.
In the exact situation I was in, I was making requests to a local server. Regular traffic passes through a proxy, and in the Local Area Network (LAN) Settings for the network connection, in the Proxy server pane, the "Bypass proxy server for local addresses" option was checked.
My suspicion is that "Bypass proxy server for local addresses" is not necessarily picked up by the programming language, but the proxy server details are. Fiddler is aware of that policy, so requests through Fiddler work but requests straight from the programming language don't.
By setting the proxy for the request to the local server to nothing, it worked correctly from code. Obviously, that could be a gotcha if you find yourself moving from an internal to an external server during deployment.
I faced the same scenario: I was POSTing to an endpoint behind Windows Authentication.
Fiddler keeps a pool of open connections, but your C# test or PowerShell script does not when it runs without Fiddler.
So you can make the test/script also maintain a pool of open authenticated connections by setting the property UnsafeAuthenticatedConnectionSharing to true on your HttpWebRequest. Read more about it in the relevant Microsoft KB article. In both cases in that article, you can see that they are making two requests. The first one is a simple GET or HEAD to get the authentication header (to complete the handshake), and the second one is the POST, which will use the header obtained before.
Apparently you cannot (sadly) do the handshake directly with POST HTTP requests.
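A sketch of that two-request pattern (the endpoint URL, credentials, and connection group name are placeholders, not values from the original post):
// Sketch: a cheap GET completes the Windows Authentication handshake,
// then the POST reuses the authenticated connection from the same group.
HttpWebRequest handshake = (HttpWebRequest)WebRequest.Create("http://example.com/endpoint");
handshake.Method = "GET";
handshake.Credentials = CredentialCache.DefaultCredentials;
handshake.UnsafeAuthenticatedConnectionSharing = true;
handshake.ConnectionGroupName = "authgroup";
using (handshake.GetResponse()) { }

HttpWebRequest post = (HttpWebRequest)WebRequest.Create("http://example.com/endpoint");
post.Method = "POST";
post.Credentials = CredentialCache.DefaultCredentials;
post.UnsafeAuthenticatedConnectionSharing = true;
post.ConnectionGroupName = "authgroup";   // same group so the connection is reused
// ...write the POST body and call post.GetResponse() as usual...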
Always use the using construct: it makes sure all resources are released after the call.
using (HttpWebResponse responseClaimLines = (HttpWebResponse)requestClaimLines.GetResponse())
{
    using (StreamReader reader = new StreamReader(responseClaimLines.GetResponseStream()))
    {
        responseEnvelop = reader.ReadToEnd();
    }
}
Add the following entries to the web.config file:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="30"/>
  </connectionManagement>
</system.net>
I found the solution in increasing the default number of connections:
ServicePointManager.DefaultConnectionLimit = 10000;