Hide Http(s) traffic made by my application from Fiddler - c#

I have application which uses http to obtain data from my server like this:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(requestString);
req.Timeout = 200 * 1000;
req.Headers.Add(String.Format("deleteme: {0}", content));
HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
Stream resStream = resp.GetResponseStream();
StreamReader read = new StreamReader(resStream);
html = read.ReadToEnd();
Everything works fine, but how can I hide my requests from Fiddler (and similar tools like Wireshark)? I want to prevent users from seeing them.

Fiddler works by registering itself as the http proxy in Windows.
You can prevent your application from using the default proxy by setting a specific proxy (an empty "no proxy" WebProxy, as in the code below) anywhere in your application before making web requests:
HttpWebRequest.DefaultWebProxy = new WebProxy();
Note that this will also prevent your application from using a configured proxy when one is set-up for legitimate reasons.
This will hide the requests from Fiddler or any other tool that traces web requests by registering itself as an HTTP proxy. It will not prevent tracing the requests with tools that operate at a different level in the stack (like Wireshark).
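If you prefer not to change the process-wide default, the same "no proxy" trick can be applied to a single request. A minimal sketch (the URL is a placeholder):
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("https://example.com/api/data"); // placeholder URL
req.Proxy = new WebProxy(); // an empty WebProxy means direct access, bypassing the system (Fiddler) proxy for this request only
using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
{
string html = reader.ReadToEnd();
}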
Security by obscurity does not really work. If you want to make it impossible to read the data transferred, use actual encryption.
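If the underlying worry is that Fiddler can also decrypt HTTPS by acting as a man-in-the-middle with its own root certificate, one further option, offered only as a sketch and not taken from the answer above, is to pin the server certificate so the client rejects any substituted certificate. The thumbprint below is a hypothetical placeholder; this needs System.Net, System.Net.Security and System.Security.Cryptography.X509Certificates:
const string expectedThumbprint = "0123456789ABCDEF0123456789ABCDEF01234567"; // hypothetical thumbprint of the real server certificate
ServicePointManager.ServerCertificateValidationCallback =
(sender, certificate, chain, sslPolicyErrors) =>
{
// Reject anything that is not the expected certificate (e.g. a Fiddler-generated one).
var cert = new X509Certificate2(certificate);
return sslPolicyErrors == SslPolicyErrors.None && cert.Thumbprint == expectedThumbprint;
};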

Related

Sending post to creation factory on RTC but getting a 'GET' response

I've hit a strange bit of behaviour, and I'm pretty sure it's related to my code rather than the RTC instance I'm working with.
I've got a web request setup and configured:
var cookies = new CookieContainer();
var request = (HttpWebRequest)WebRequest.Create(getCreationFactoryUri);
var xmlString = getRDF.ToString();
request.CookieContainer = cookies;
request.Accept = "application/rdf+xml";
request.Method = "POST";
request.ContentType = "application/rdf+xml";
request.Headers.Add("OSLC-Core-Version", "2.0");
request.Timeout = 40000;
request.KeepAlive = true;
byte[] bytes = Encoding.ASCII.GetBytes(xmlString);
request.ContentLength = bytes.Length;
Stream dataStream = request.GetRequestStream();
dataStream.Write(bytes, 0, bytes.Length);
dataStream.Close();
This is passed to another method, written based on an RTC example, that uses forms authentication for RTC.
Under the OSLC v2 spec, I'm using a creation factory URL to post to. I know the URL is fine because I've set up a call using RESTClient in Firefox, added the headers that are needed (Content-Type: application/rdf+xml, Accept: application/rdf+xml, OSLC-Core-Version: 2.0), and used the generated XML that my code is trying to pass. My manual call works perfectly and the ticket is created.
In my logs I captured the response from RTC, which is a list of tickets rather than a response showing my ticket as being created. I can re-create this behavior by doing a GET on the creation factory URL I'm using to create an event ticket.
So although I know I'm sending a POST to the creation factory (I debugged to check that my web request method was 100% set to 'POST'), RTC instead returns a list of tickets, and I can only conclude that somewhere my request is being treated as a 'GET'.
As a test I changed my request to use PUT instead of POST. This isn't permitted for use on the creation factory URL and in testing it indeed throws an error. So I'm totally miffed as to why RTC isn't creating my ticket, but instead treating my request as a GET and returning a list of tickets.
Anyone have any ideas?
Thanks.
If the server is using form authentication, as you state, then I expect what is happening is that the POST results in an HTTP redirection to the authentication form. Even if your other code is handling that authentication (which it sounds like it is), the result of that authentication will be an HTTP redirection to the URL of the original request. However, that redirection is likely to result in a GET to that URL, not the original POST. (Also, I don't believe the redirection after authentication is 100% reliable if your requests are multi-threaded.)
The jazz.net information on form authentication says "After authentication succeeded, you always have to replay the original request at least once to get to the protected resource. More replays may be required if the first replay led to another set of redirections and the original request had a non-GET method."
So if your code received an authentication challenge, you will need to re-send the original POST.
I believe the reason why the RESTClient plug-in in your browser is working first time is that it is sending the cookies from your previous log-in to the RTC web UI in the browser. (I had this experience recently, and also found it very confusing).
Also, if you are not preserving cookies between requests to RTC in your client app, then you will meet an auth challenge for every request. If you preserve cookies between calls from your client app (how you do that will depend on your client library - I'm not familiar with the code in your examples) then my experience is that you won't receive an auth challenge for every request. (However, you still need to be able to handle an auth challenge on every request - including POSTs - otherwise it may fail intermittently if the session times out just before you send a POST).
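To make the replay concrete, here is a minimal sketch of that flow. It assumes the standard jazz.net form-authentication endpoint (j_security_check with j_username/j_password fields) and the "authrequired" response header, both taken from the jazz.net documentation rather than from the question, and SendPost is a hypothetical helper wrapping the question's request-building code that returns the HttpWebResponse:
CookieContainer cookies = new CookieContainer(); // must be shared across all requests
HttpWebResponse response = SendPost(getCreationFactoryUri, xmlString, cookies);
if (response.Headers["X-com-ibm-team-repository-web-auth-msg"] == "authrequired") // assumed header name
{
response.Close();
// POST the credentials to the form-auth endpoint (assumed URL layout).
var authRequest = (HttpWebRequest)WebRequest.Create(serverRootUri + "/j_security_check");
authRequest.CookieContainer = cookies;
authRequest.Method = "POST";
authRequest.ContentType = "application/x-www-form-urlencoded";
byte[] creds = Encoding.UTF8.GetBytes("j_username=" + Uri.EscapeDataString(user) + "&j_password=" + Uri.EscapeDataString(password));
authRequest.ContentLength = creds.Length;
using (Stream s = authRequest.GetRequestStream())
{
s.Write(creds, 0, creds.Length);
}
authRequest.GetResponse().Close();
// Replay the original POST now that the session cookie is authenticated.
response = SendPost(getCreationFactoryUri, xmlString, cookies);
}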

Why does HttpWebRequest.GetResponse() fail in an ASP.NET page under IIS?

I built a set of APIs for one of our developers to consume in our web application. I did this in a .NET 4.0 class library project and wrote integration tests to ensure the API integrated with the backend service correctly. In the integration tests (a unit test project), as well as in a console application, the APIs work correctly and return all the expected results. However, when we execute the same APIs from an ASP.NET web page running under IIS, the API fails at the following line of code:
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
The failure is a WebException with a status of SendFailure and a socket error of ConnectionReset (10054) in the inner exception. The error is "The underlying connection was closed: An unexpected error occurred on a send." This is using HTTPS as well (hence the X509).
I already know that this is actually when the request is made, but I'm trying to pin-point what is different about an IIS environment that would prevent the stream from being able to write bytes over the network. I know that this is actually the web service server closing the connection before we get a chance to send our data, but I want to stress, again, that this same API works fine in an integration or unit test, or in a console application, all day long.
I have already exhausted as many related articles and posts on the internet as I could find, including extensive MSDN documentation, checking things related to the certificate, and modifying HTTP headers and service point properties. I'm truly at a loss because the code is not complicated; I've written web request code too many times to count, but here it is:
private string ExecuteServiceMessages(Uri serviceUrl, X509Certificate clientCertificate, string requestBody)
{
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(serviceUrl);
request.ClientCertificates.Add(clientCertificate);
request.Date = DateTime.Now;
request.Method = WebRequestMethods.Http.Post;
request.ContentType = MediaTypeNames.Text.Xml;
request.UserAgent = "******";
request.KeepAlive = false;
using (Stream requestStream = request.GetRequestStream())
using (StreamWriter writer = new StreamWriter(requestStream))
{
writer.Write(requestBody);
writer.Close();
}
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(responseStream))
{
string data = reader.ReadToEnd();
reader.Close();
return data;
}
}
The certificate is being loaded in by X509Certificate.CreateFromCertFile, where our certificate for testing is just in a directory of the website. We're not pulling it directly from the certificate store (unless we don't fully understand how certificates work when loaded from a file).
Is there a delay of any kind before you get the error (e.g. something that might indicate a timeout is occurring)?
A lot happens when the framework tries to validate that certificate. One of the things it may do (depending on the X509 policy you're using) is check the Certificate Revocation List, which requires a connection to the internet. This has bitten us a couple times when we tried to run the code on a server that is in an environment with limited internet access-- it spins for exactly 60 seconds then returns the error you are seeing (not very helpful). If this is it, you can change your CRL policy to offline, or edit your hosts file and override DNS so that the CRL check is performed using the loopback address-- it'll fail but at least you won't get a timeout.
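If the CRL check does turn out to be the cause, the relevant switch in .NET is ServicePointManager.CheckCertificateRevocationList (and its configuration equivalent). A minimal sketch, with the usual caveat that turning off revocation checking is a security trade-off:
// In code, before the first request is made:
ServicePointManager.CheckCertificateRevocationList = false;
The same setting can be made in web.config / app.config:
<configuration>
<system.net>
<settings>
<servicePointManager checkCertificateRevocationList="false" />
</settings>
</system.net>
</configuration>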
You may be connecting to the internet through a proxy; check your IE LAN settings.
From C#, you need to add the proxy settings explicitly:
var request = (HttpWebRequest)WebRequest.CreateHttp(url);
WebProxy proxy = new WebProxy("http://127.0.0.1:8888", true);
proxy.Credentials = new NetworkCredential("ID", "pwd", "Domain");
request.Proxy = proxy;
request.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
request.Timeout = 1000 * 60 * 5;
request.Method = method;
return request.GetResponse();

Windows Authentication for OData service returns "401 - Unauthorized" in C# app, but works in browser

I am trying to add functionality to my C# app, to test a connection to an OData service which is secured with only Windows Authentication. The following block of code is what I am using to perform this test:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(SERVICE_NAME));
CredentialCache myCache = new CredentialCache();
myCache.Add(new Uri(SERVICE_NAME), "Negotiate", new NetworkCredential(user, password));
resolver.Credentials = myCache;
// Do a simple request
request.Credentials = myCache;
request.PreAuthenticate = true;
request.KeepAlive = true;
request.AllowAutoRedirect = true;
request.CookieContainer = new CookieContainer();
object response = request.GetResponse(); // This is where the exception is thrown
When I run the above code, I receive the 401 - Unauthorized error as previously stated. However, when I have Fiddler2 running, the code works fine. So I am using Wireshark instead. In addition, the service works perfect within my browser (Chrome), and if I use Wireshark to compare the HTTP requests/responses for the Authentication, I see that they are nearly identical, except that in Chrome I have: Accept, User-Agent, Accept-Encoding, and Accept-Language headers, while my C# app does not have these. The only other difference is that my C# app sets the "Negotiate Seal" flag in the NTLM header, while Chrome does not set this flag.
Despite these differences, the authentication phase seems to work fine in the C# app, up until the service returns a 302 - Redirection, at which point the app tries a GET on the newly redirected URI, which returns a 401 again (when Chrome does the analogous GET, it receives HTTP 200 - OK, and proceeds on its merry way).
So, any ideas what could cause this? Problem with the service? or my code?
Thanks a lot!
-Erik
Ok, two whole days of research and I found the answer. Line 3 in the code above was using the full URI of the service (".../Northwind/Northwind.svc"); when the request was redirected, the credentials no longer applied to the new URI. The solution was to pass in only the beginning part ("...") of the URI. Stupid mistake on my part.
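In other words, register the credentials against a URI prefix that both the original and the redirected URLs share, rather than against the full service path. A minimal sketch with a placeholder host:
CredentialCache myCache = new CredentialCache();
// Credentials added for a prefix apply to every URI under it, so they still match after the 302 redirect.
myCache.Add(new Uri("https://server.example.com/"), "Negotiate", new NetworkCredential(user, password));
request.Credentials = myCache;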

HttpWebRequest doesn't work except when fiddler is running

This is probably the weirdest problem I have run into. I have a piece of code to submit a POST to a URL. The code doesn't work, nor does it throw any exceptions, when Fiddler isn't running. However, when Fiddler is running, the code posts the data successfully. I have access to the post page, so I know whether the data has been POSTed or not. This probably sounds like nonsense, but it's the situation I am running into and I am very confused.
byte[] postBytes = new ASCIIEncoding().GetBytes(postData);
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://myURL");
req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.224 Safari/534.10";
req.Accept = "application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
req.Headers.Add("Accept-Charset", "ISO-8859-1,utf-8;q=0.7,*;q=0.3");
req.Headers.Add("Accept-Language", "en-US,en;q=0.8");
req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";
req.ContentLength = postBytes.Length;
req.CookieContainer = cc;
Stream s = req.GetRequestStream();
s.Write(postBytes, 0, postBytes.Length);
s.Close();
If you don't call GetResponseStream() then you can't close the response. If you don't close the response, then you end up with a socket in a bad state in .NET. You MUST close the response to prevent interference with your later request.
Close the HttpWebResponse after getting it.
I had the same problem, then I started closing the responses after each request, and Boom, no need to have fiddler running.
This is a rough sketch of the synchronous flow:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
// ... configure the request and write the POST body ...
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
// ... read the response into a stream, etc. ...
response.Close();
I had a similar problem recently. Wireshark showed the HttpWebRequest never leaving the client machine unless Fiddler was running. I tried removing proxy settings, but that didn't fix the problem for me. I tried everything from setting the request to HttpVersion.Version10, to enabling/disabling SendChunked, KeepAlive, and a host of other settings. None of them worked.
Ultimately, I just checked if .Net detected a proxy and had the request attempt to ignore it. That fixed my issue with request.GetResponse() throwing an immediate exception.
IWebProxy proxy = request.Proxy;
if (request.Proxy != null)
{
Console.WriteLine("Removing proxy: {0}", proxy.GetProxy(request.RequestUri));
request.Proxy = null;
}
In my case, when I had the same situation (the POST only works when Fiddler is running), the code was sending the POST from an application running on IIS Express in a development environment behind a proxy to an external server. Apparently, even if you have proxy settings configured in Internet Options, the environment IIS is running in may not have access to them. In my work environment I simply had to update web.config with the path to our proxy's configuration script. You may need to tweak other proxy settings; in that case your friend is this MSDN page that explains what they are: http://msdn.microsoft.com/en-us/library/sa91de1e.aspx.
Ultimately I included the following in the application's web.config and then the POST went through.
<configuration>
<system.net>
<defaultProxy>
<proxy scriptLocation="http://example.com:81/proxy.script" />
</defaultProxy>
</system.net>
</configuration>
Well, I faced a similar problem a few weeks back. The reason was that when Fiddler is running it changes the proxy settings so that requests pass through Fiddler, but when it's closed the proxy setting somehow remains and stops your request from going out to the internet.
I fixed it by setting IE's and Firefox's network settings not to use any proxy, and it worked.
Try this; it may be the same problem...
I ran into the same problem with Python - requests to a local server were failing with a 404, but then when I ran them with Fiddler running they were working correctly.
The real clue to the problem here is that Fiddler works by acting as a proxy for HTTP traffic so that all requests from the local machine go through Fiddler rather than straight out into the network.
In the exact situation I was in, I was making requests to a local server, regular traffic passed through a proxy, and in the Local Area Network (LAN) Settings for the network connection, in the Proxy server pane, the "Bypass proxy server for local addresses" option was checked.
My suspicion is that the "Bypass proxy server for local addresses" is not necessarily picked up by the programming language, but the proxy server details are. Fiddler is aware of that policy, so requests through Fiddler work but requests direct from the programming language don't.
By setting the proxy for the request for the local server to nothing, it worked correctly from code. Obviously, that could be a gotcha if you find yourself moving from an internal to external server during deployment.
I faced the same scenario: I was POSTing to an endpoint behind Windows Authentication.
Fiddler keeps a pool of open connections, but your C# test or PowerShell script does not when it runs without Fiddler.
So you can make the test/script also maintain a pool of open authenticated connections, by setting the property UnsafeAuthenticatedConnectionSharing to true on your HttpWebRequest. Read more about it in the Microsoft KB article on the topic. In both cases in that article, you can see that they make two requests: the first is a simple GET or HEAD to complete the authentication handshake, and the second is the POST, which uses the authentication obtained before.
Apparently you cannot (sadly) do the handshake directly with POST HTTP requests.
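A rough sketch of that two-step pattern, with a placeholder URL and payload (ConnectionGroupName is set so the POST actually reuses the authenticated connection):
// Step 1: a cheap HEAD that completes the NTLM/Negotiate handshake.
var probe = (HttpWebRequest)WebRequest.Create("https://server.example.com/endpoint"); // placeholder URL
probe.Method = "HEAD";
probe.Credentials = CredentialCache.DefaultNetworkCredentials;
probe.UnsafeAuthenticatedConnectionSharing = true;
probe.ConnectionGroupName = "authgroup";
probe.GetResponse().Close();
// Step 2: the real POST rides on the already-authenticated connection.
var post = (HttpWebRequest)WebRequest.Create("https://server.example.com/endpoint");
post.Method = "POST";
post.ContentType = "application/x-www-form-urlencoded";
post.Credentials = CredentialCache.DefaultNetworkCredentials;
post.UnsafeAuthenticatedConnectionSharing = true;
post.ConnectionGroupName = "authgroup";
byte[] body = Encoding.UTF8.GetBytes("key=value"); // placeholder payload
post.ContentLength = body.Length;
using (Stream s = post.GetRequestStream())
{
s.Write(body, 0, body.Length);
}
post.GetResponse().Close();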
Always use the using construct; it makes sure all resources are released after the call:
using (HttpWebResponse responseClaimLines = (HttpWebResponse)requestClaimLines.GetResponse())
{
using (StreamReader reader = new StreamReader(responseClaimLines.GetResponseStream()))
{
responseEnvelop = reader.ReadToEnd();
}
}
Add the following entries to the web.config file:
<system.net>
<connectionManagement>
<add address="*" maxconnection="30"/>
</connectionManagement>
</system.net>
I found the solution was to increase the default number of connections:
ServicePointManager.DefaultConnectionLimit = 10000;

How does HttpWebRequest differ (functionally) from pasting a URL into an address bar?

I'm narrowing in on an underlying problem related to two prior questions.
Basically, I've got a URL that works just fine when I fetch it manually (paste it into a browser), but gives a different result when I run it through some code (using HttpWebRequest).
The URL (example):
http://208.106.250.207:8192/announce?info_hash=-%CA8%C1%C9rDb%ADL%ED%B4%2A%15i%80Z%B8%F%C&peer_id=01234567890123456789&port=6881&uploaded=0&downloaded=0&left=0&compact=0&no_peer_id=0&event=started
The code:
String uri = BuildURI(); //Returns the above URL
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(uri);
req.Proxy = new WebProxy();
WebResponse resp = req.GetResponse();
Stream stream = resp.GetResponseStream();
... Parse the result (which is an error message from the server claiming the url is incorrect) ...
So, how can I GET from a server given a URL? I'm obviously doing something wrong here, but can't tell what.
Either a fix for my code or an alternative approach that actually works would be fine. I'm not wedded to the HttpWebRequest method at all.
I recommend you use Fiddler to trace both the "paste in web browser" call and the HttpWebRequest call.
Once traced you will be able to see any differences between them, whether they are differences in the request url, in the form headers, etc, etc.
It may actually be worth pasting the raw requests from both (obtained from Fiddler) here, if you can't see anything obvious.
Well, the only way they might differ is in the HTTP headers that get transmitted, in particular the User-Agent.
Also, why are you using a WebProxy? That is not really necessary, and it most likely is not used by your browser.
The rest of your code is fine; just make sure you set up the HTTP headers correctly.
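If you want to rule the headers out, you can make the HttpWebRequest look more like the browser's request. A small sketch; the header values are just examples copied from a browser, not anything the server is known to require:
req.UserAgent = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0 Safari/537.36"; // example value
req.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
req.Headers.Add("Accept-Language", "en-US,en;q=0.8");
req.KeepAlive = true;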
I would suggest that you get yourself a copy of WireShark and examine the communication that happens between your browser and the server that you are trying to access. Doing so will be rather trivial using WireShark and it will show you the exact HTTP message that is being sent from the browser.
Then take a look at the communication that goes on between your C# application and the server (again using WireShark) and then compare the two to find out what exactly is different.
If the communication is a pure HTTP GET (i.e. there is no HTTP message body involved) and the URL is correct, then the only two things I can think of are:
make sure that you are sending the right protocol version (i.e. HTTP/1.0 or HTTP/1.1 or whatever it is that you should be sending)
make sure that you are sending all required HTTP headers correctly, and obviously that you are not sending any HTTP headers that you shouldn't be sending.
There could be something wrong with the URL. Instead of using a string, it's usually better to use an instance of System.Uri:
String url = BuildURI(); //Returns the above URL
Uri uri = new Uri(url);
HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(uri);
req.Proxy = new WebProxy();
using (WebResponse resp = req.GetResponse()) {
using (Stream stream = resp.GetResponseStream()) {
// whatever
}
}
I think you need to see exactly what's flowing to your server in the HTTP request. It does sound likely that the headers are interestingly different.
You can introduce some kind of debugging proxy between your request and the server (for example, RAD has such a capability in the box).
