I have a WebClient object with the Proxy property set.
But when I use the WebClient object, the communication goes through as if no proxy were in effect.
How do I check programmatically during runtime if (for example before downloading a file with that WebClient object) the proxy connection works?
If I understand your question:
You could set up a proxy server on your own computer, such as CCProxy (or something similar), point your WebClient application at that proxy server, and then enable logging in CCProxy to see whether the traffic you expected is passing through.
EDIT
Are you in a network that restricts internet access unless you are using a proxy server?
If your network supports it, you could look into Automatic Proxy Detection https://msdn.microsoft.com/en-us/library/fze2ytx2(v=vs.110).aspx
When automatic proxy detection is enabled, the system attempts to locate a proxy configuration script that is responsible for returning the set of proxies that can be used for the request. If the proxy configuration script is found, the script is downloaded, compiled, and run on the local computer when proxy information, the request stream, or the response is obtained for a request that uses a WebProxy instance.
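For what it's worth, here is a minimal sketch (standard .NET Framework APIs; the destination URL is a placeholder) of how to ask the system which proxy, if any, would be used for a given request:

    using System;
    using System.Net;

    class ProxyDetectionDemo
    {
        static void Main()
        {
            // Placeholder destination; use the URL you actually plan to hit.
            Uri target = new Uri("http://example.com/");

            // Returns the proxy configured via system settings / auto-detection.
            IWebProxy systemProxy = WebRequest.GetSystemWebProxy();

            if (systemProxy.IsBypassed(target))
                Console.WriteLine("No proxy in effect for " + target);
            else
                Console.WriteLine("Requests would be routed via " + systemProxy.GetProxy(target));
        }
    }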
The reason it is hard to know whether the problem is the proxy settings is that when your app tries to connect to the internet, it cannot know or guess that the URL is unreachable because a proxy is required, so it throws a general exception. There are many reasons a URL might be inaccessible: the internet service is down, the network is misconfigured, or, as in your case, a proxy setting is required. Your app would only be guessing which of those reasons applies.
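That said, a pragmatic runtime check is to issue a small request through the same proxy before the real download and treat failure as "proxy (or network) not working". A hedged sketch, assuming a test URL you know is normally reachable:

    using System;
    using System.Net;

    static class ProxyProbe
    {
        // Returns true if a small request through the given proxy succeeds.
        public static bool ProxyWorks(IWebProxy proxy, string testUrl)
        {
            try
            {
                using (var client = new WebClient { Proxy = proxy })
                {
                    client.DownloadData(testUrl); // throws WebException on failure
                    return true;
                }
            }
            catch (WebException)
            {
                // As explained above, this cannot distinguish a broken proxy
                // from a down network; only that the request did not succeed.
                return false;
            }
        }
    }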
Related
I'm creating a crawler which uses several IP proxies. Whenever I try to crawl the website without a proxy, I'm able to get the HTML source, but when I enable the IP proxy, it always fails and throws an exception (The remote server returned an error: (403) Forbidden.)
Looking at Fiddler, it seems the website sets cookies on your first visit. But when the proxy is enabled, it fails at the get-response part.
I don't understand why the cookies are not set when using a proxy. Is it the proxy server's cookie settings that cause this, or can I do something about it while still using the proxy?
I'm using C#, by the way, but the question doesn't seem language-dependent.
Another thing to consider: the cookie was set from the IP address of the non-proxied machine (which worked), and then you sent another request with the same cookie from a different IP address, which might have gotten you blocked.
Some network-level software looks for exactly this kind of pattern and might have flagged you as a malicious crawler or an anonymous Tor browser.
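If that is what's happening, one thing to try is keeping every request (including the very first one that receives the cookie) on the same proxy, sharing one CookieContainer, so the site never sees the cookie arrive from a different IP. A sketch, where the proxy address and URLs are placeholders:

    using System;
    using System.IO;
    using System.Net;

    class ProxiedCrawlerDemo
    {
        static void Main()
        {
            var proxy = new WebProxy("http://1.2.3.4:8080/"); // placeholder proxy
            var cookies = new CookieContainer();              // shared across requests

            // First request: the site sets its cookie, seen from the proxy's IP.
            Fetch("http://example.com/", proxy, cookies);

            // Later requests replay the same cookie from the same proxy IP.
            string html = Fetch("http://example.com/some-page", proxy, cookies);
            Console.WriteLine(html.Length);
        }

        static string Fetch(string url, IWebProxy proxy, CookieContainer cookies)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Proxy = proxy;
            request.CookieContainer = cookies; // cookies persist between requests
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }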
I've been looking all over the site and on Stack Overflow, and I just can't solve my issue.
Network Setup
In my staging environment, clients reach my web app on port 443 (HTTPS), but the underlying infrastructure listens on port 80 (HTTP). So when my apps talk to each other it's over port 80, but when clients visit the site it's over port 443. For example, the svc called from Silverlight would be on port 80.
I should also point out that on my staging and test domains I have a web server acting as a portal to my app server; but this shouldn't really matter, since I was able to get this working on test. It's just that staging has HTTP forwarding to HTTPS.
Application
I have a silverlight xap file that is on the same domain as my hosted web application using IIS 6.
Now since my silverlight xap file and my web application are on the same domain, I have no problems running this on dev and test, but when I try to deploy to staging I'm getting a weird cross domain reference problem:
"System.ServiceModel.CommunicationException: An error occurred while trying to make a request to URI . This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for Soap services."
Digging around, I realized that my app thinks my XAP (or the service I'm calling) and my web app are on different domains, and it automatically looks for the crossdomain.xml and clientaccesspolicy.xml files; I can't really stop it. In my application, however, this is not the case: they both reside on the same domain. I used Fiddler and didn't see anything about another domain, or even a subdomain for that matter.
Browser Issues
Another weird thing I found is a difference between Chrome and IE:
In Chrome, it finds crossdomain.xml and clientaccesspolicy.xml, tells me they're insecure, then does another fetch on the HTTPS side, resulting in a 404 error. On IE, however, I get a 302 redirect. According to Microsoft's documentation on clientaccesspolicy.xml, you aren't supposed to do any redirects from the XML file; this is mentioned here: http://msdn.microsoft.com/en-us/library/cc838250(v=vs.95).aspx
So my question is: if my app and XAP are on the same domain, why are those XML files being fetched? Is it because I'm using a DNS name instead of an IP address? I also stumbled upon this page: http://msdn.microsoft.com/en-us/library/ff921170(v=pandp.20).aspx
It states: To avoid cross-domain call issues, the remote modules' XAP files should be located on the same domain as the main application; when deployed like this, the Ref property on the ModuleCatalog should be a Uniform Resource Identifier (URI) relative to the main XAP file location on the Web server.
What does that even mean??
EDIT
Okay, so I changed the services to point to https instead of http. However, a new error comes up: The provided URI scheme 'https' is invalid; expected 'http'.
The good thing is that it no longer checks crossdomain.xml or clientaccesspolicy.xml, so it now realizes it's on the same domain. But it expects the service on port 80, while the address has to start with https:// in order for it to work.
I think the only solution I have now is to break it out of being a virtual directory, make it the root node of its own website, and run the whole thing on 443. Save myself the headache.
It sounds like you're working in an environment where a load balancer is offloading the SSL traffic. In this situation, your client (Silverlight) needs to be configured for HTTPS and your server must be configured for HTTP, because a device between the two parties is decrypting the SSL data.
In situations like this, aside from the normal client and server side configurations, your server side code needs to be a bit more forgiving about the address of the request.
You likely also need to add an attribute to your service implementation to allow your client to call over HTTPS, but have your service listening on HTTP.
Add this to your service:
    [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
This allows your client to call https://my.domain.com/service.svc and have your server live at http://my.domain.com/service.svc.
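For illustration, here is what that looks like on a hypothetical service implementation (IMyService and GetData are made-up names; the attribute and AddressFilterMode are standard WCF):

    using System.ServiceModel;

    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string GetData(int value);
    }

    // AddressFilterMode.Any tells WCF to accept messages addressed to
    // https://... even though the endpoint itself listens on http://...
    [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
    public class MyService : IMyService
    {
        public string GetData(int value)
        {
            return string.Format("You entered: {0}", value);
        }
    }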
Here are some links that might help as well:
http://social.msdn.microsoft.com/Forums/vstudio/en-US/b5ae495b-f5fb-4eed-ae21-2b2280d4fec3/address-filter-mismatch-wcf-addressing
http://www.i-m-code.com/blog/blog/2011/11/30/hosting-silverlight-over-http-under-f5-big-ip/
http://www.i-m-code.com/blog/blog/2011/08/18/hosting-silverlight-under-https/
I'd like to determine whether the proxy at a given IP address is transparent or anonymous. Transparent proxies connect to websites with your real IP in headers like HTTP_X_FORWARDED_FOR or HTTP_VIA. I would like to check these proxies, but all solutions I found are developed to work on server side, to test incoming connections for proxyness. My plan is to make a web request to an example page via the proxy. How do I check the headers sent by the proxy, preferably using the WebRequest class?
EDIT: So is there some free web API that will allow me to do this? I'm not keen on setting up a script on my own small server that will be bombarded with requests.
Simply put, you don't need those headers. You can check the transparency of a proxy by sending a request to any get-my-IP site: if it returns your IP, the proxy is transparent; if not, the proxy is anonymous. So the steps are (see the sketch after this list):
send a request to any get-my-IP site without a proxy
extract the IP from the response as your local IP address
send a new request to the same get-my-IP site through the proxy
extract the IP from the response and compare it with your local IP (step 2)
if (LocalIp == ProxyIp) the proxy is transparent; otherwise it is anonymous
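A sketch of those steps, assuming some public get-my-IP endpoint that returns the caller's address as plain text (api.ipify.org is used here purely as an example, and the proxy address is a placeholder):

    using System;
    using System.Net;

    class ProxyTransparencyCheck
    {
        static void Main()
        {
            const string ipEchoUrl = "http://api.ipify.org/";

            // Steps 1-2: learn your real public IP without any proxy
            // (default WebClient settings, no explicit proxy).
            string localIp;
            using (var direct = new WebClient())
            {
                localIp = direct.DownloadString(ipEchoUrl).Trim();
            }

            // Steps 3-4: repeat the request through the proxy under test.
            string proxyIp;
            using (var proxied = new WebClient { Proxy = new WebProxy("http://1.2.3.4:8080/") })
            {
                proxyIp = proxied.DownloadString(ipEchoUrl).Trim();
            }

            // Step 5: if the site still sees your real IP, the proxy is transparent.
            Console.WriteLine(localIp == proxyIp ? "transparent" : "anonymous");
        }
    }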
That is technically impossible, since the client only sees what the proxy returns to it. The proxy can do whatever it wants when communicating with the target server, and can transform your request, and the server's answer, any way it wants.
To really know what the proxy does, you NEED to see what the server receives and sends back, without any interference from the proxy.
The reason all solutions are server side is that the headers you're talking about are only passed from the proxy to the server and never back to the client again in the response.
In other words, if you plan to check for HTTP headers in the request from the proxy to the server, you either need to check them server side (as the solutions you found do) or actively pass them right back in the response to the client to check.
Either way, you can't just make a request to a random page and check the headers the server gets, the server needs to be involved in some way.
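If you control even a small server, the "echo them back" approach can be as simple as an endpoint that writes the request headers it received into the response body. A sketch using HttpListener (the URL prefix is a placeholder; on Windows, listening may require a URL ACL registration or admin rights):

    using System;
    using System.Net;
    using System.Text;

    class HeaderEchoServer
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:8080/echo/"); // placeholder prefix
            listener.Start();

            while (true)
            {
                HttpListenerContext context = listener.GetContext();

                // Reflect every request header back in the response body, so a
                // client connecting through the proxy can see what was sent.
                var body = new StringBuilder();
                foreach (string name in context.Request.Headers.AllKeys)
                {
                    body.AppendLine(name + ": " + context.Request.Headers[name]);
                }

                byte[] bytes = Encoding.UTF8.GetBytes(body.ToString());
                context.Response.OutputStream.Write(bytes, 0, bytes.Length);
                context.Response.Close();
            }
        }
    }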
I'm writing an application in C# that uses proxies. The proxies are sending the HTTP_X_FORWARDED_FOR during HTTP requests, and this is unwanted behavior.
I am extending the Interop.SHDocVw axWebBrowser (aka Internet Explorer) control right now, but can take another approach if needed for this problem.
Is there some way to suppress this header... can this be done in code, on the proxy server, or not at all?
The proxy server sitting between your C# client and the web site is adding that HTTP_X_FORWARDED_FOR header, so you cannot suppress it on the C# client.
But if you have control over the proxy server, there should be a setting to turn it off.
For example, in Squid the following could work (note the actual header name is X-Forwarded-For; in Squid 3.x the directive is request_header_access rather than header_access):

    header_access X-Forwarded-For deny all
Or
You may try to find a different proxy service, one that does not send your IP address.
Long story short: the different environments (dev/staging/uat/live) of an API I'm calling are set up by putting a hosts-file record on the server, so that the live domain resolves to their other server for the HTTP request.
The problem is that they've done this with so many different environments that we don't have enough servers to keep using the server-wide hosts files for it (we've got some environments running off the same servers; luckily not dev and live, though :P).
I'm wondering if there's a way to make WebRequest request a domain but explicitly specify the IP of the server it should connect to. Or is there any way of doing this short of going all the way down to socket connections? (I'd really prefer not to waste time and create bugs by trying to re-implement the HTTP protocol.)
PS: I've already tried; we can't just get a new sub-domain for each environment.
One way to spoof an HTTP Host header is to set the proxy to the actual server you'd like the request sent to. Something like
    request.Proxy = new WebProxy(string.Format("http://{0}/", hostAddress));
may well work.
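Put together, a sketch might look like this (the IP and domain are placeholders). Note this relies on the target server accepting proxy-style requests, which depends on the server's configuration:

    using System;
    using System.Net;

    class HostOverrideDemo
    {
        static void Main()
        {
            string hostAddress = "10.0.0.5";          // the environment's server IP
            string url = "http://live.example.com/";  // the domain the API expects

            // The request URI carries the real domain (so the Host header is
            // correct), but the TCP connection goes to hostAddress.
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Proxy = new WebProxy(string.Format("http://{0}/", hostAddress));

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine(response.StatusCode);
            }
        }
    }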
There are ways to control configuration values.
.NET has conditional compilation, with which you can create configuration sets and compile-time directives that select a specific domain, instead of changing the domain's resolution strategy. For example, in debug mode you can use x.com and in release mode y.com, wherever you need to reference your URL.
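A minimal sketch of that idea, reusing the x.com/y.com examples from above:

    static class ApiConfig
    {
    #if DEBUG
        public const string BaseUrl = "http://x.com/";   // debug builds
    #else
        public const string BaseUrl = "http://y.com/";   // release builds
    #endif
    }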
Web.config and app.config now support per-configuration transformations: you can have Web.Debug.config and Web.Release.config and specify different URL references there.