How to handle HTTP requests from a browser using C#

I have a Windows application developed in C#/.NET which is used as a website blocker for a network. I have done this by modifying the hosts file, and it works fine when URLs like "www.yahoo.com" are blocked. Now my requirement is to block URLs based on keywords: when the user just types "yahoo" in the browser, I should detect the keyword and block the corresponding website. How can I track the website typed by the user in the browser and then allow or block access to that particular site based on the URL? I should not allow the user to view the page if the keyword is present. How can I do this? Can someone help me?

There are plenty of code samples out there that will act as proxies (e.g. http://code.cheesydesign.com/?p=393); however, I would strongly suggest following the advice in the comments you've been given and going with an existing application.
Building a proxy that will not interfere with the complicated web apps of today is not trivial. You also need to be careful about blocking based on keywords: web apps I've worked on have failed in spectacular ways because a proxy did exactly this and rejected requests for important JavaScript files (often because of minification or compression), rendering our app useless.
Also consider that your proxy won't be able to inspect SSL traffic (which is increasing all the time) without serving up your own certificates and acting as a man-in-the-middle.
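For illustration only, here is a minimal sketch of the kind of keyword-filtering proxy those samples implement, assuming a hypothetical blocked-keyword list and the browser pointed at 127.0.0.1:8888 as its proxy. It handles plain HTTP GET requests only; as noted above, HTTPS traffic cannot be inspected this way without acting as a man-in-the-middle.

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class KeywordBlockingProxy
{
    // Hypothetical keyword list; in a real blocker this would come from configuration.
    static readonly string[] BlockedKeywords = { "yahoo", "facebook" };

    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 8888);
        listener.Start();
        Console.WriteLine("Proxy listening on 127.0.0.1:8888");

        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = Task.Run(() => HandleClientAsync(client));
        }
    }

    static async Task HandleClientAsync(TcpClient client)
    {
        using (client)
        using (NetworkStream clientStream = client.GetStream())
        {
            var reader = new StreamReader(clientStream, Encoding.ASCII);

            // A proxied request line looks like: GET http://www.example.com/page HTTP/1.1
            string requestLine = await reader.ReadLineAsync();
            if (string.IsNullOrEmpty(requestLine)) return;

            string[] parts = requestLine.Split(' ');
            if (parts.Length < 3 ||
                !Uri.TryCreate(parts[1], UriKind.Absolute, out Uri target) ||
                target.Scheme != Uri.UriSchemeHttp)
                return; // only plain HTTP is handled; CONNECT/HTTPS tunnelling is not

            // Keyword check against the full URL, e.g. "yahoo" matches http://www.yahoo.com/.
            foreach (string keyword in BlockedKeywords)
            {
                if (target.AbsoluteUri.IndexOf(keyword, StringComparison.OrdinalIgnoreCase) >= 0)
                {
                    byte[] denied = Encoding.ASCII.GetBytes(
                        "HTTP/1.1 403 Forbidden\r\nContent-Length: 0\r\nConnection: close\r\n\r\n");
                    await clientStream.WriteAsync(denied, 0, denied.Length);
                    return;
                }
            }

            // Not blocked: collect the remaining headers (GET requests carry no body),
            // force a non-persistent connection, and relay the origin server's response.
            var request = new StringBuilder(requestLine + "\r\n");
            string line;
            while (!string.IsNullOrEmpty(line = await reader.ReadLineAsync()))
            {
                if (line.StartsWith("Connection:", StringComparison.OrdinalIgnoreCase) ||
                    line.StartsWith("Proxy-Connection:", StringComparison.OrdinalIgnoreCase))
                    continue;
                request.Append(line + "\r\n");
            }
            request.Append("Connection: close\r\n\r\n");

            using (var server = new TcpClient())
            {
                await server.ConnectAsync(target.Host, target.Port);
                NetworkStream serverStream = server.GetStream();
                byte[] head = Encoding.ASCII.GetBytes(request.ToString());
                await serverStream.WriteAsync(head, 0, head.Length);
                await serverStream.CopyToAsync(clientStream);
            }
        }
    }
}

Even this toy version shows how much real-world behaviour (keep-alive, request bodies, chunked responses, HTTPS) a production proxy has to handle, which is why an off-the-shelf filtering product is the better route.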

Related

Mitigating the risk of Server-Side Request Forgery when downloading files with the .NET Framework

Question: If I have an untrusted, user-supplied URL to a file, how do I protect myself against server-side request forgery when I download that file? Are there tools in the .NET Framework (4.8) base class library that help me, or is there some canonical reference implementation for this use case?
Details: Our web application (an online product database) allows users to upload product images. We have the requirement that users should be allowed to supply the URL to a (self-hosted) image instead of uploading an image.
So far, so good. However, sometimes our web application will have to fetch the image from the (external, user-supplied) URL to do something with it (for example, to include the image in a PDF product data sheet).
This exposes my web application to the risk of Server-Side Request Forgery. The OWASP Cheat Sheet documents this use case as "Case 2" and suggests mitigations such as validating URLs and blacklisting known internal IP addresses.
This means that I cannot use the built-in methods for downloading files such as WebClient or HttpWebRequest, since those classes take care of DNS resolution, and I need to validate IP addresses after DNS resolution but before performing the HTTP request. I could perform DNS resolution myself and then create a web request with the (validated) IP address and a custom Host header, but that might mess up TLS certificate checking.
To make a long story short, I feel like I am reinventing the wheel here, for something that sounds like a common-enough use case. (I am surely not the first web developer who has to fetch files from user-supplied URLs.) The .NET Framework has tools for protection against CSRF built-in, so I'm wondering if there are similar tools available for SSRF that I just haven't found.
Note: There are similar questions (such as this one) in the ssrf tag, but, contrary to them, my goal is not to "get rid of a warning" but to actually protect my system against SSRF.
Confirm the requirement with the business stakeholders. It's very possible they don't care how the file is obtained; they just want the user to be able to specify a URL rather than a local file. If that is the case, your application can use JavaScript to download the file in the browser and then upload it from there. This avoids the server-side problem completely.
If you have to do it server-side, ask for budget for a dedicated server. Locate it in your DMZ (between the perimeter firewall and the firewall that isolates your web servers from the rest of your network). Use this server to run a program that downloads the URLs and puts the data where your main application can get it, e.g. a database.
If you have to host it on your existing hardware, use a dedicated process, running in a dedicated application pool with a dedicated user identity. The proper location for this service is on your web server (not application or database servers).
Audit and monitor the security logs for the dedicated user.
Revoke any permission to private keys or local resources such as the filesystem.
Validate the protocol (http or https only).
To the extent possible, validate the resolved IP addresses and maintain a blacklist (a rough sketch follows this list).
Validate the domain name to ensure it is a public URL and not something within your network. If possible, use a proxy server with public DNS.
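As one illustration of the resolve-then-validate idea, here is a sketch that works on .NET Framework 4.8. The class and method names are made up for the example, the address ranges would need extending for your own network, and the rebinding caveat from the question still applies (HttpClient re-resolves the name), so treat this as one layer of defence alongside the measures above, not a complete solution.

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Sockets;
using System.Threading.Tasks;

static class SsrfGuard
{
    // Resolve the host ourselves, reject non-public addresses, then download.
    public static async Task<byte[]> DownloadValidatedAsync(string userSuppliedUrl)
    {
        var uri = new Uri(userSuppliedUrl);
        if (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps)
            throw new InvalidOperationException("Only http/https URLs are allowed.");

        IPAddress[] addresses = await Dns.GetHostAddressesAsync(uri.DnsSafeHost);
        if (addresses.Length == 0 || addresses.Any(IsPrivateOrReserved))
            throw new InvalidOperationException("URL resolves to a non-public address.");

        using (var client = new HttpClient())
        {
            return await client.GetByteArrayAsync(uri);
        }
    }

    static bool IsPrivateOrReserved(IPAddress ip)
    {
        if (IPAddress.IsLoopback(ip) || ip.IsIPv6LinkLocal || ip.IsIPv6SiteLocal)
            return true;

        if (ip.IsIPv4MappedToIPv6)
            ip = ip.MapToIPv4();

        if (ip.AddressFamily == AddressFamily.InterNetwork)
        {
            byte[] b = ip.GetAddressBytes();
            return b[0] == 10                                 // 10.0.0.0/8
                || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)  // 172.16.0.0/12
                || (b[0] == 192 && b[1] == 168)               // 192.168.0.0/16
                || (b[0] == 169 && b[1] == 254);              // 169.254.0.0/16 (link-local, cloud metadata)
        }

        // IPv6 unique local addresses (fc00::/7); extend for anything else internal to your network.
        byte[] v6 = ip.GetAddressBytes();
        return (v6[0] & 0xFE) == 0xFC;
    }
}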

ASP.NET Unique Browser ID

I have a single solution with multiple C# ASP.NET Web Forms projects. I want a way to identify a given browser so that each website can identify that same browser. I need to do this from the C# code-behind (not with client code like JavaScript). I also cannot use the Session because it isn't shared across websites; I don't think cookies are either.
For example, if a user logs onto Website1 and then logs onto Website2 with the same browser on the same computer, I want to be able to identify that. But if a user logs onto Website1 with Chrome and then onto Website1 with Firefox (regardless of whether it's on the same computer or not), I want to detect that as well.
If it makes any difference, I am using Azure to publish my web projects, so all websites will have similar domains (e.g. website1.azurewebsites.net and website2.azurewebsites.net).
If you want to track someone using the same browser on the same computer then use a cookie. If the websites have different domains you'll need to be clever because modern browsers have a lot of protection against what they see as tracking cookies. One option is using a hidden interstitial page as described here.
For your second scenario, a user accessing the same site with different browsers, I suggest storing the user agent string (one of the request headers) and adding it to a login audit so you can build up a collection of the different user agents used by a given user. There are libraries available for parsing user agent strings and extracting the name, version, engine, etc.
Between these two techniques and a bit of business logic you should get what you need. If you would like me to clarify any of this, let me know and I'll provide more detail.
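As a rough sketch of both suggestions in Web Forms code-behind (the cookie name, the audit helper, and the page class are hypothetical, and a cross-domain identifier will still run into the tracking protections mentioned above):

using System;
using System.Web;
using System.Web.UI;

public partial class Login : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // 1. Same browser, same machine: issue a persistent identifier cookie.
        HttpCookie idCookie = Request.Cookies["BrowserId"];
        if (idCookie == null)
        {
            idCookie = new HttpCookie("BrowserId", Guid.NewGuid().ToString())
            {
                Expires = DateTime.UtcNow.AddYears(1),
                HttpOnly = true,
                Secure = true
            };
            Response.Cookies.Add(idCookie);
        }

        // 2. Same user, different browsers: record the user agent at login so an
        //    audit table accumulates every browser a given user has signed in from.
        string userAgent = Request.UserAgent ?? "unknown";
        AuditLogin(User.Identity.Name, idCookie.Value, userAgent);
    }

    // Hypothetical audit helper: persist to whatever login-audit store you use.
    private void AuditLogin(string userName, string browserId, string userAgent)
    {
    }
}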

Solution for mixed content blocking

Our product has a collection of sites, and the main page contains 3 iframes which load these different websites. We are going to enable SSL on all of the sites. We also allow HTML user data to be displayed in our systems. Currently we have put this on hold because we experience mixed content issues for the following reasons:
Some of the elements in the user's data refer to http content (e.g. img, js, etc.).
Some of the third-party content which loads in our iframes comes from different content providers.
We thought of developing our own web proxy, but we are concerned about the performance as well as the expense of that solution. Can anybody tell us what solutions are available for these mixed content issues, and whether there is a third-party web proxy we can buy?
The best solution would probably be to purchase remote servers from some service (Google will give you millions of hits) and then set up a CGI script to load the insecure content onto the remote server, cache it, and then serve that content. That way your users are protected from third parties knowing what they look at, and if you set up your SSL certificate on those servers, you can easily get around the mixed content warnings.
That being said, there will be a big hiccup when you start loading your user's content off the remote server as it will have to start caching everything.
Using a web proxy is not a good solution, for the following reasons:
It has the performance and cost problems you mentioned.
The most problematic part of this solution is that we would still have a security vulnerability. The point of using https on a site is to protect it from sniffers and man-in-the-middle attacks. If you use a web proxy, the connection between the browser and the proxy is still vulnerable.
I'm not sure whether a web proxy would help in any way, because the browser always interprets these links as http even if your server is SSL-enabled.
For more information about mixed content: https://developer.mozilla.org/en-US/docs/Security/MixedContent
The correct way to deal with this situation is to modify all of your links to load content over https. A better way is to use protocol-relative URLs, for example:
<script src="//example.com/scripts/main.js"></script>
Mixed content warnings are built into browsers by design to indicate exactly what they mean. Users can turn them off in settings or just click OK, so by serving the mixed content you're degrading the UI, but not the functionality.
A few things come to mind, since the providers can't change their content:
Write a back-end scraper for your app that fetches the web page and serves the content locally over https (see the sketch after this list).
Don't render the content immediately; make the user click on it to open the iframe, so that at least your page loads and you can warn the user (optional).
Enhance either solution by checking for https first; a lot of websites have both 80 and 443 open, but as you pointed out, not all of them do.
I'm not too familiar with this one, but you could maybe even have a server-side instance of Internet Explorer open the pages and cache them for you, simplifying the scrape.
If I were writing this, I would check for https when possible and allow the mixed content warnings, since that behaviour is by design.
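As a rough sketch of the back-end fetch-and-re-serve idea from the first option, assuming an ASP.NET MVC app: the controller, route, and parameter names here are made up, and a real version would need caching plus a strict allow-list of hosts so the endpoint doesn't become an open proxy.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class MixedContentProxyController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    // e.g. <img src="/MixedContentProxy/Fetch?url=http%3A%2F%2Fthirdparty.example%2Flogo.png" />
    public async Task<ActionResult> Fetch(string url)
    {
        if (!Uri.TryCreate(url, UriKind.Absolute, out Uri target) ||
            (target.Scheme != Uri.UriSchemeHttp && target.Scheme != Uri.UriSchemeHttps))
            return new HttpStatusCodeResult(400, "Unsupported URL");

        // A production version should check 'target' against an allow-list of known
        // content providers and cache responses to avoid the performance hit noted above.
        HttpResponseMessage response = await Client.GetAsync(target);
        byte[] body = await response.Content.ReadAsByteArrayAsync();
        string contentType = response.Content.Headers.ContentType?.MediaType
                             ?? "application/octet-stream";

        // Re-served from our own https origin, so the browser no longer sees mixed content.
        return File(body, contentType);
    }
}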

How to: Encrypt URL in WebBrowser Controls

I have a program that opens a web browser control and just displays a web page from our server. Users can't navigate around or anything.
The users are not allowed to know the credentials required to login, so after some googling on how to log into a server I found this:
http://user_name:password@URL
This is hard-coded into the web browser control's code, and it works fine.
HOWEVER: Some smart ass managed to grab the credentials using Wireshark, which captures all the packets sent from the machine.
Is there a way I can encrypt this so the users cannot find out?
I've tried other things like using POST, but with the way the page is set up, it was proving extremely difficult to get working. (It's an SSRS Report Manager web page.)
I forgot to include a link to this question: How to encrypt/decrypt the url in C#
^I cannot use this answer as I myself am not allowed to change any of the server setup!
Sorry if this is an awful question, I've tried searching around for the past few days but can't find anything that works.
Perhaps you could work around your issue with a layer of indirection - for example, you could create a simple MVC website that doesn't require any authentication (or indeed, requires some authentication that you fully control) and it is this site that actually makes the request to the SSRS page.
That way you can have full control over how you send authentication, and you need never worry about someone ever getting access to the actual SSRS system. Now if your solution requires the webpage to be interactive then I'm not sure this will work for you, but if it's just a static report, it might be the way to go.
i.e. your flow from the app would be
User logs into your app (or use Windows credentials, etc)
User clicks to request the SSRS page
Your app makes an HTTP request to your MVC application
Your MVC application makes the "real" HTTP request to SSRS (e.g. via HttpClient) and dumps the result back to the caller (for example, it could write the SSRS response via @Html.Raw in an MVC view). The credentials for SSRS will therefore never be sent by your app, so you don't need to worry about that problem any more (a rough sketch follows below).
Just a thought.
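Here is a rough sketch of that last step, assuming an ASP.NET MVC relay in front of a Windows-authenticated SSRS endpoint. The report URL and credentials below are placeholders that would live in server-side configuration; they never travel between the WinForms client and the MVC app, so Wireshark on the user's machine sees nothing useful.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ReportController : Controller
{
    public async Task<ActionResult> Show()
    {
        var handler = new HttpClientHandler
        {
            // Placeholder credentials: keep these in server-side configuration.
            Credentials = new NetworkCredential("reportUser", "reportPassword", "DOMAIN")
        };

        using (var client = new HttpClient(handler))
        {
            // Placeholder SSRS URL; the MVC app makes the "real" request to SSRS.
            string html = await client.GetStringAsync(
                "https://reports.example.com/ReportServer?%2fSales%2fSummary&rs:Format=HTML4.0");

            // Hand the rendered report back to the WebBrowser control, either via
            // Content(...) as here or via @Html.Raw(...) inside a view.
            return Content(html, "text/html");
        }
    }
}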
Incidentally, you could take a look here for the various options that SSRS allows for authentication; you may find some method that suits (e.g. custom authentication). I know you mentioned you can't change anything on the server, so I'm just including it for posterity.

Single Sign-On on Winforms Web Browser control

I have a Winforms app which hosts a web browser control. Within this control you can also navigate to pre-determined external websites.
I need to implement Single Sign-On so that the user doesn't need to authenticate in each of the known external websites. I have already some ideas but it would be nice to hear all your opinions.
What would be the best way to do this?
In fact, is there already something for this? (Edit: how do browsers remember logins/passwords?)
Cheers
Generally it is the responsibility of the site to implement SSO, and the client will then automatically respond to the site. Since the web browser control is using IE it inherits the same capabilities as the web browser. For example if the site uses Windows Authentication then the control will authenticate following a challenge from the site without user intervention. Similarly the control will perform the necessary redirects if the site is using SAML 2.0.
Since these are external websites I have to assume that Windows Authentication is not going to work well because the server and the client are on different domains. Therefore something along the lines of SAML sounds like the most secure option.
It seems like implementing SAML is going to be a problem for you, and you need to manually complete and submit web forms which load inside the control. This is possible by accessing the DOM, but it quickly becomes a difficult-to-maintain solution.
The web browser control exposes a Document property that gives you an HtmlDocument object, which allows you to find elements and execute JavaScript in pages. You can use these mechanisms to perform the authentication automatically. The steps might look like this (a rough sketch follows the list):
Capture the URL, or some cookie, that will let you know whether authentication is required by inspecting the web browser control's properties. You might want to look into the Navigating and DocumentCompleted events.
Access the document and complete the form values.
Call a JavaScript submit function to submit the form, or inject some JavaScript to do this. I find it easier to insert JavaScript into pages than to write more complicated C# code in a lot of cases. It is easier to prototype in a regular browser.
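A rough sketch of those steps against the WinForms WebBrowser control; the element IDs, form name, and login-URL check are hypothetical and would need to match the actual pages being automated.

using System;
using System.Windows.Forms;

public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent(); // webBrowser1 is assumed to be a designer-created WebBrowser control
        webBrowser1.DocumentCompleted += OnDocumentCompleted;
    }

    private void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        // Step 1: decide whether this page is one that needs authentication.
        if (!e.Url.AbsolutePath.Contains("login"))
            return;

        HtmlDocument doc = webBrowser1.Document;
        if (doc == null)
            return;

        // Step 2: complete the form values via the DOM.
        doc.GetElementById("username")?.SetAttribute("value", "serviceAccount");
        doc.GetElementById("password")?.SetAttribute("value", "secret");

        // Step 3: submit the form (alternatively, inject and run a small script).
        doc.Forms["loginForm"]?.InvokeMember("submit");
    }
}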
Unless the websites all share a common trusted authentication mechanism (like OpenID) you're stuck doing custom coding for each site.
Browsers remember passwords for single sites. I wouldn't call that "single sign-on", which is a method of using a trusted authority to authenticate across multiple disparate web sites which all rely on that authority to verify a user's identity.
As you asked for SSO packages that already do this, some examples are:
RSA ClearTrust
CA SiteMinder
