Specify custom DNS Server when downloading using System.Net.WebClient - c#

I'm trying to filter submitted web sites by attempting to browse them while bouncing the request off of OpenDNS. If the page received is the OpenDNS page, I'll assume the site should be blocked.
How might I accomplish this task using the System.Net.WebClient class?

The WebClient class relies on the Dns class, which calls into the system and uses the currently configured DNS settings. If you go down this route, the only way to do what you want is to change the DNS settings first, which is probably not an option for you.
Instead, either take a look at a third-party library like DnDns (free, open source) or just call nslookup and parse the results, as detailed here.
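A minimal sketch of the nslookup route (208.67.222.222 is one of the public OpenDNS resolvers; the block-page address you compare against is a placeholder, since it depends on your OpenDNS configuration):

    using System;
    using System.Diagnostics;

    // Resolves a host against an OpenDNS resolver by shelling out to
    // nslookup and returns the raw output for inspection.
    static string ResolveViaOpenDns(string host)
    {
        ProcessStartInfo psi = new ProcessStartInfo("nslookup", host + " 208.67.222.222");
        psi.RedirectStandardOutput = true;
        psi.UseShellExecute = false;
        psi.CreateNoWindow = true;

        using (Process p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            return output;
        }
    }

    // Usage: if the resolved address matches the OpenDNS block page,
    // treat the site as blocked.
    // string result = ResolveViaOpenDns("example.com");
    // bool blocked = result.Contains("146.112.61.106"); // placeholder block-page IP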

Related

ASP.NET reCAPTCHA in Internet Explorer SSL Warning Message

I have implemented the reCAPTCHA solution (latest version) in my ASP.NET web project. It works fine and dandy in my local environment, but on our SSL-encrypted server I receive the warning message "Do you want to view only the webpage content that was delivered securely? This webpage contains content that will not be delivered using a secure HTTPS connection, which could compromise the security of the entire webpage." This only occurs in Internet Explorer.
I've found, on these forums and others, advice to change the deprecated server URL to Google's new one (old: https://api-secure.recaptcha.net, new: https://www.google.com/recaptcha/api), but I am not directly referencing the JavaScript files, just using the .NET library.
Any help would be greatly appreciated!
If the MVC Helper is using Context.Request.IsSecureConnection, as Dan has pointed out above, and your application server sits behind a load balancer that terminates HTTPS and forwards the request as HTTP, then IsSecureConnection will likely be false and rendering would take place insecurely.
If behind a load balancer, one way to find the originating protocol is to do something like this (provided the load balancer sets the X-Forwarded-Proto header field):
    bool isSecureConnection = String.Equals(
        filterContext.HttpContext.Request.Headers["X-Forwarded-Proto"],
        "https",
        StringComparison.OrdinalIgnoreCase);
From browsing the control's source, the control has a property called OverrideSecureMode that, when set to true, always causes the control to render via HTTPS.
The MVC Helper, on the other hand, doesn't seem to allow setting that property. It seems to be using Context.Request.IsSecureConnection to determine which hostname to use; discovering why that value is wrong for you is another way to attack the problem.
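With the web-forms control, by contrast, setting that property from code-behind might look like this (a sketch; "recaptcha" is an assumed control ID):

    // Force the reCAPTCHA control to render its references over HTTPS,
    // regardless of what IsSecureConnection reports behind the load balancer.
    protected void Page_Load(object sender, EventArgs e)
    {
        recaptcha.OverrideSecureMode = true;
    }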

Best way to transfer files through a web service

My C# program communicates with a server using a web service. I need the client to download big files from the server and have the option to pause and resume the download; the downloader must also be authorized to download the file.
I had two thoughts on how to do that.
One is to use some third-party tool like wget to download the files. The problems with that are that I would need to learn the tool's commands, I'm not certain I can show the download progress in my program, and I would have to use bare URLs to get the files from the server, which seems ugly and could let people simply download them straight off the server (I want them to be authorized, although this isn't a real issue since this is just a school project).
My other thought was to create a method on the web service that takes a position in the file and an amount of bytes, returns those bytes, and lets the client piece them together. It seems more complicated but more compelling, since the user must be authorized to download the file and I can use it to show the tester some more advanced programming skills ;). The issue is that it might be taxing on performance.
What's your opinion? What's the best way to download big files off a server?
Absent the need for authorization and partial downloads, WebClient.DownloadData or WebClient.DownloadDataAsync would be the preferred method of downloading a file from a server.
You could still use WebClient for the authorization by setting the Credentials property on your WebClient instance. If the user isn't authorized to download the file, based on those credentials, the server can return a 404 (Not Found) or 403 (Forbidden).
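For example, a minimal sketch (the credential values and URL are placeholders):

    using System.Net;

    WebClient client = new WebClient();
    // Placeholder credentials; the server validates these before serving the file.
    client.Credentials = new NetworkCredential("username", "password");
    byte[] data = client.DownloadData("http://example.com/files/big.bin");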
If your server supports HTTP 1.1, the client can start in the middle of the file. To do so, you'll have to create a class that inherits from WebClient and override the GetWebRequest method so that it sets the headers to do a ranged GET.
using System;
using System.Net;

class MyWebClient : WebClient
{
    // Byte offset at which the download should start.
    public int StartDownloadAt { get; set; }

    protected override WebRequest GetWebRequest(Uri address)
    {
        HttpWebRequest req = (HttpWebRequest)base.GetWebRequest(address);
        req.AddRange(StartDownloadAt); // ask for bytes from this offset to the end
        return req;
    }
}
And in the code that uses it:
MyWebClient client = new MyWebClient();
client.StartDownloadAt = 1024 * 1024; // start the download 1 megabyte into the file
client.DownloadData(...);
The above is just an example. You'd probably want to make it more robust by resetting the StartDownloadAt property to 0 when a download completes (or is aborted), and by skipping the AddRange call when StartDownloadAt is 0. To fully support ranges, you'd probably want properties for both the start and end of the range, etc.
And, of course, the client will have to handle stitching the disparate downloaded pieces together after download is complete.
The point is that it should be possible, with a little work, by using the WebClient class.
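As a sketch of how the pieces might be fetched and stitched together, using the start-and-end range properties suggested above (the class and method names and the piece size are my own, and this assumes the server honors Range requests):

    using System;
    using System.IO;
    using System.Net;

    // A variant of MyWebClient with both ends of the range, as suggested above.
    class RangeWebClient : WebClient
    {
        public int From { get; set; }
        public int To { get; set; }

        protected override WebRequest GetWebRequest(Uri address)
        {
            HttpWebRequest req = (HttpWebRequest)base.GetWebRequest(address);
            req.AddRange(From, To); // request only the bytes From..To (inclusive)
            return req;
        }
    }

    // Downloads url in 1 MB pieces, appending each piece to localPath, so the
    // transfer can be paused and later resumed from the current file length.
    static void DownloadInPieces(string url, string localPath)
    {
        const int pieceSize = 1024 * 1024;
        RangeWebClient client = new RangeWebClient();

        using (FileStream fs = new FileStream(localPath, FileMode.Append))
        {
            while (true)
            {
                client.From = (int)fs.Length;               // resume point
                client.To = (int)fs.Length + pieceSize - 1; // inclusive end of range
                byte[] piece = client.DownloadData(url);
                fs.Write(piece, 0, piece.Length);
                if (piece.Length < pieceSize)
                    break; // short read: end of file reached
            }
        }
    }

A real implementation would also need to handle the 416 response the server returns once the requested range starts past the end of the file.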

POST data to a Flex/Flash (mxml) application

I have a Flex application that needs to filter users depending on their database groups. Depending on which group they are in, a config.xml file is used to populate the swf.
Here is how I figured I would do this:
1. The client comes to an .aspx page with a form requiring a username and a password.
2. On the server side I confirm the user's credentials.
3. Once the username/password is valid, I redirect to the mxml file with the config.xml file in the HTML headers (POST).
My problem comes when I need to get the POST data from the HTTP request. Let's say I have this code:
<mx:Application initialize="init()">
    <mx:Script>
        <![CDATA[
            private function init():void
            {
                // get the post data here
            }
            /* More code here */
        ]]>
    </mx:Script>
</mx:Application>
How do I get the POST data in the init() function?
Thank you.
For those who would be interested, I've found some resources in the Adobe Flex 3 Resource Center.
Basically, there is no current way to pass data with the POST method. You can either add the parameters at the end of your swf URL (GET method), as shown here: http://livedocs.adobe.com/flex/3/html/help.html?content=deep_linking_5.html#245869
The other way is to embed them in the page with the flashVars method, shown here: http://livedocs.adobe.com/flex/3/html/help.html?content=passingarguments_3.html#229997
If you still wonder how I managed to do this, in case you run into the same situation, here is my idea (feel free to share if you have a different vision):
1. User logs in on login.aspx.
2. Depending on the user's credentials, the server-side code modifies the index.html file to embed the correct xml file in the flash object.
3. With the FlashVars method, I get back the xml file path, and job done!
If you ever run into a similar situation and need help, contact me.
I don't think it's possible to get the POST data, but others might have a way. An alternative solution would be:
User logs in: login.aspx
User directed to Flash content: content.html embedding content.swf
Flash requests config.xml from server: content.swf makes HTTP request for config.xml.aspx
Server provides user's configuration in config.xml.aspx
In your init() function, you'd make the URLLoader request to get the configuration, and you'd do the configuration in the Event.COMPLETE handler.
Another possibility is to use HTTP cookies; they're not handled natively by Flash, but you can get to them via JavaScript. See this CookieUtil class.

How to POST Data to another web application (cross domain)

Please consider the following scenario:
There are two web applications, App1 and App2. A user submits his information on App1 through a form. On clicking a specific button/link on App1, the same data should be posted to a page on App2, and the user should also be redirected to that same page on App2.
I would like some help in finding out the best way to implement this functionality.
One approach that I have already tried is creating a temporary HTML form at runtime, setting the action attribute of the form to the App2 page, and getting the form posted using a JavaScript submit. The data can then be fetched on the App2 page using the Request.Form object.
This approach works well, but I was still wondering if there is any other way to implement the required functionality.
I would really appreciate some insights on using RESTful web services to implement this, or on using an HttpModule to intercept requests at App1 and modify the redirect response to App2, or any other approach that you might find fit for the purpose.
Edit:
Using the querystring isn't an option for me.
I've had a need to do similar things with feed aggregation and building RSS feeds from web page content on different domains.
The user gets the app1 page, fills in details, and submits; then on the server for app1 I have a method that looks like this ...
// Requires a reference to the Microsoft.mshtml interop assembly.
using System.Net;
using mshtml;

HTMLDocument FetchURL(string url)
{
    WebClient wc = new WebClient();
    string remoteContent = wc.DownloadString(url);

    // The mshtml API is very weird, but let's just say you have to do things this way ...
    HTMLDocument doc = new HTMLDocument();
    IHTMLDocument2 doc2 = (IHTMLDocument2)doc;
    doc2.write(new object[] { remoteContent });
    return (HTMLDocument)doc2;
}
This function does two useful things:
It gets the page of content at "url".
It parses that content into an HTMLDocument object.
Once you have this function, you can call it with the URL of the remote page and get back an HTML document.
The functions on the HTMLDocument object allow you to do JavaScript-like DOM queries, such as:
    docObject.getElementById("id");
I then have different functions that do different things with this object, based on the page / site I'm returning data from.
There is, however, one fatal flaw here ...
This is likely to work really well with sites that don't change much in structure and are generated by code, but not so well on less dynamic sites.
With Stack Overflow, for example, it's easy to pull out a question and the accepted answer for that question, so I could use this code to pull and publish content from here on my own web site.
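A usage sketch along those lines (the URL and element id here are purely hypothetical):

    // Pull a page and grab one element out of it.
    HTMLDocument doc = FetchURL("https://stackoverflow.com/questions/123456");
    IHTMLElement question = ((IHTMLDocument3)doc).getElementById("question");
    Console.WriteLine(question.innerText);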
However ...
This is not going to help you with user / login related details, though, as that sort of information is generally not shared with everyone.
It's a bit like me trying to use this to link Facebook profiles to my own website: I would have to go through some form of API that asks the user to authenticate their details before making the request.
Simply pulling a web page based on a URL alone will give the other site no authentication information, unless that site accepts the user's login details in the querystring and you already have them.
You may, however, be able to chain requests by ripping apart my sample method: requesting the login page, parsing the results, filling in the form, posting back using the same web client instance to log in, and then requesting the URL.
The idea being that you would have a form on your site that asks the user to put in their login details for the remote site, and then you go and find their profile page based on that.
This would be best farmed out to a class rather than just a simple method like I have here.
In my case, though, I was only after something simple (the BBC Top 40 UK charts), for which I pulled information not only from the BBC but from places like Amazon, Google, and YouTube, and then built a page :)
It's neat, but it serves no functional purpose other than pulling all your favourite sources of info onto one page.
If you are already committed to using JavaScript, then why not do an AJAX POST and change the window.location based on the response?
You can use HttpServerUtility.Transfer; this will preserve your form contents and transfer the user to the new page (note, though, that Server.Transfer only works for pages within the same application).
http://msdn.microsoft.com/en-us/library/system.web.httpserverutility.transfer.aspx
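For example (the page path is a placeholder):

    // Transfers execution to another page in the same application,
    // preserving the Form and QueryString collections.
    Server.Transfer("page2.aspx", true);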
I have built something like what you are describing, and I found that using a <form> tag to POST to app2 is the most reliable way... basically, the way you found that worked well.
If App2 is residing on a different domain, it's usually best to create your own interface for the submission, and have that interface handle the posting from App1 to App2.
(Browser) -> submits form to App1
(App1)    -> validates input
          -> stores local info
          -> creates an HttpRequest/POST object
          -> posts to App2
(App2)    -> handles the post
          <- returns the response
(App1)    -> confirms the results from App2
          <- returns the results to the browser
In essence, you want to control and proxy requests from your application's domain to any outside interfaces as much as possible.
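A minimal sketch of the server-side post step, assuming App2 accepts a standard form-encoded POST (the URL and field names are hypothetical):

    using System.Collections.Specialized;
    using System.Net;
    using System.Text;

    // Posts the validated form data from App1 to App2 server-side and
    // returns App2's response body for inspection.
    static string PostToApp2(string name, string email)
    {
        using (WebClient wc = new WebClient())
        {
            NameValueCollection data = new NameValueCollection();
            data["name"] = name;
            data["email"] = email;

            byte[] response = wc.UploadValues("http://app2.example.com/receive.aspx", "POST", data);
            return Encoding.UTF8.GetString(response);
        }
    }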
Note: I'm answering my own question just to have a correct answer marked against it. All the suggestions provided by various members here are correct in their own way, but they were not apt for my requirements. Hence, I can't accept any of them as correct.
The way I have implemented it is by creating a custom control with a configurable property containing the URL to post data to, and another property accepting a dictionary object as the data to be posted.
This control internally creates an HTML form with the action attribute set to the URL specified by the user and the data fields created from the dictionary object. This form is then posted on the button click event on the page hosting this control.
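A minimal sketch of the rendering step such a control might perform (all names here are hypothetical):

    using System.Collections.Generic;
    using System.Text;
    using System.Web;

    // Builds an auto-submitting HTML form that posts the given fields to targetUrl.
    static string BuildPostForm(string targetUrl, IDictionary<string, string> fields)
    {
        StringBuilder sb = new StringBuilder();
        sb.AppendFormat("<form id=\"crossPostForm\" method=\"post\" action=\"{0}\">",
            HttpUtility.HtmlAttributeEncode(targetUrl));
        foreach (KeyValuePair<string, string> field in fields)
        {
            sb.AppendFormat("<input type=\"hidden\" name=\"{0}\" value=\"{1}\" />",
                HttpUtility.HtmlAttributeEncode(field.Key),
                HttpUtility.HtmlAttributeEncode(field.Value));
        }
        sb.Append("</form>");
        // Submit the form as soon as it renders, redirecting the user to App2.
        sb.Append("<script type=\"text/javascript\">");
        sb.Append("document.getElementById('crossPostForm').submit();");
        sb.Append("</script>");
        return sb.ToString();
    }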

How to detect if page load in newly-started browser process fails?

I use Process.Start("firefox.exe", "http://localhost/page.aspx");
And how can I know whether the page failed or not?
OR
How can I tell, via HttpWebRequest and HttpWebResponse, whether the page failed?
When I use
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create("somepage.aspx");
HttpWebResponse loWebResponse = (HttpWebResponse)myReq.GetResponse();
Console.Write("{0},{1}",loWebResponse.StatusCode, loWebResponse.StatusDescription);
how can I get the error details?
I don't want additional plugins or frameworks; I want to solve this problem with .NET only.
Any ideas, please?
Use WatiN to automate Firefox instead of Process.Start. It's a browser automation framework that will let you monitor what is happening properly.
http://watin.sourceforge.net/
Edit: see also Google WebDriver http://google-opensource.blogspot.com/2009/05/introducing-webdriver.html
If you are spawning a child-process, it is quite hard and you'd probably need to use each browser's specific API (it won't be the same between FF and IE, for example).
It doesn't help that in many cases the exe detects an existing instance and forwards the request there (so you can't trust the exit-code, since the page hasn't even been requested in the right exe yet).
Personally, I try to avoid assuming any particular browser for this scenario; just launch the url:
Process.Start("http://somesite.com");
This will use the user's default browser. You have to hope it appears though - you can't (reliably and robustly) check that externally without lots of work.
One other option is to read the data yourself (WebClient.Download*) - but this may have issues with complex cookies, login, user-agent awareness, etc.
Use the HttpWebRequest or WebClient class to check this. I don't think Process.Start will return anything if the URL doesn't exist.
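As a sketch of the HttpWebRequest route: a non-success status code surfaces as a WebException, and the details can be pulled from its Response (the URL is a placeholder):

    using System;
    using System.Net;

    // Returns true if the page loads with a success status; writes the
    // error details otherwise.
    static bool PageLoads(string url)
    {
        try
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
            using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
            {
                Console.WriteLine("{0}, {1}", resp.StatusCode, resp.StatusDescription);
                return true;
            }
        }
        catch (WebException ex)
        {
            HttpWebResponse errorResp = ex.Response as HttpWebResponse;
            if (errorResp != null)
                Console.WriteLine("Failed: {0} {1}", (int)errorResp.StatusCode, errorResp.StatusDescription);
            else
                Console.WriteLine("Failed: {0}", ex.Status); // e.g. DNS or connection failure
            return false;
        }
    }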
Don't start the page in this form. Instead, create a local http://localhost:<port>/wrapper.html which loads http://localhost/page.aspx and then requests either http://localhost:<port>/pass.html or http://localhost:<port>/fail.html. localhost:<port> is a trivial HTTP server interface implemented by your app.
The idea is that JavaScript gives you an API inside the browser, which is far more standard than the APIs on the outside of browsers. Since the JavaScript on wrapper.html comes from the same server and even port as the subsequent resources, this should satisfy the same-origin policies in current browsers.
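A sketch of the trivial server side of this, using HttpListener (the port and paths are assumptions; serving wrapper.html itself, with its onload/onerror JavaScript, is omitted for brevity):

    using System;
    using System.Net;
    using System.Text;

    // Minimal HTTP listener that records whether the page under test
    // reported pass or fail by requesting the corresponding URL.
    static void RunWrapperServer()
    {
        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/");
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();
            string path = ctx.Request.Url.AbsolutePath;

            if (path == "/pass.html")
                Console.WriteLine("page loaded OK");
            else if (path == "/fail.html")
                Console.WriteLine("page failed to load");

            byte[] body = Encoding.UTF8.GetBytes("<html></html>");
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }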
