How can you detect if the client browser has SSL support? I am not referring
to the server variables HTTPS_*. I want to be able to determine
if the browser has no SSL support.
P.S. I know this is possible because this company (http://www.cyscape.com)
has a product that can even detect when you unselect SSL support from your
browser options.
All browsers have SSL support (period). No one is going to release a browser that cannot be used. HTTPS is a security requirement and part of OWASP A3: Broken Authentication and Session Management.
While it is relatively easy to check if a server supports SSL connections, detecting browser support for the same is extremely difficult. The solution likely requires a client-side browser extension that implements the logic necessary to search through browser configuration or version information for SSL support. This problem becomes even more difficult, because the extension would need to work with multiple browsers.
If you do not want visitors that cannot connect to a particular page over SSL, there are usually server-side methods you can employ, such as redirecting them to a landing page where they are notified of the SSL requirement, or you can simply deny their web request.
As mentioned, there's no reason a modern web browser will have SSL disabled by default.
At the SSL level, does your server receive a connection when you give the browser an https link?
At the HTTP level, you could try various scenarios that assign a session cookie via HTTP, then update some session variables via links only accessible via HTTPS. Or you could set the "secure" attribute on a cookie and see how the browser handles it.
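For example, a minimal ASP.NET sketch of that probe, assuming a hypothetical page /SslProbe.aspx that the site sends the browser to over https:

    // Code-behind for a hypothetical /SslProbe.aspx, requested via an https link.
    // Reaching this code at all means the browser completed an SSL handshake.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Request.IsSecureConnection)
        {
            Session["SslVerified"] = true;  // later http pages can check this flag

            // Alternatively: a cookie marked "secure" is only ever returned
            // by the browser over https connections.
            var probe = new HttpCookie("sslProbe", "1") { Secure = true };
            Response.Cookies.Add(probe);
        }
        Response.Redirect("~/Default.aspx");  // hand the visitor back to the http site
    }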
You could try a JavaScript methodology and inspect the window.location property or just try setting it to an https link. (Or try some Java functions using LiveConnect or do something similar with Flash.)
Is there a particular motivation for the question? If you're trying to determine SSL support for browsers that for some bizarre reason don't have SSL enabled, then a cookie or JavaScript approach should be fine. If you're trying to determine SSL support for an adversarial browser (e.g. a bot that doesn't follow robots.txt) or you have more reason to not trust client-side checks like JavaScript, then checking SSL either might not be a useful solution or you might have to go deeper into seeing if the SSL handshake differs from common browsers.
Checks for whether or not a client supports SSL will be subject to Man-in-the-Middle attacks where an active network attacker gains control of the user's connection and makes it appear as if the client doesn't support SSL.
This question is most often asked when developing mobile sites targeted at older phones that may not support real SSL (they support WAP-TLS). The number of phones in this category continues to shrink and my suggestion is to ignore them, maybe even going so far as to blacklist their user agents.
Related
Our product has a collection of sites, and the main page contains 3 iframes which load these different web sites. We are going to enable SSL on all the sites. We allow html user data to be displayed in our systems. Currently we have put this on hold since we experience Mixed Content Issues, for the following reasons.
Some of the elements in the user's data refer to http content, e.g. img, js, etc.
Some of the third-party sites which load in our iframes (different content providers).
We thought of developing our own web proxy, but we have concerns about the performance as well as the expense of this solution. Can anybody tell us what solutions are available for the Mixed Content Issues, and whether there are third-party web proxies we can buy?
The best solution would probably be to purchase remote servers from some service (Google will give you millions of hits) and then set up a CGI script to load the insecure content onto the remote server, cache it, and then serve that content. That way your users are protected from third parties knowing what they look at, and if you set up your SSL certificate on those servers then you can easily get around the mixed content warnings.
That being said, there will be a big hiccup when you start loading your user's content off the remote server as it will have to start caching everything.
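A rough sketch of what that pass-through could look like as an ASP.NET handler; the handler name, query parameter, and cache window are all illustrative, and a real version would also need to whitelist target URLs and forward content types:

    using System;
    using System.Net;
    using System.Web;
    using System.Web.Caching;

    // Hypothetical /Proxy.ashx: fetches an insecure http resource server-side,
    // caches it, and re-serves it over the site's own https connection.
    public class InsecureContentProxy : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string url = context.Request.QueryString["url"];
            if (string.IsNullOrEmpty(url) || !url.StartsWith("http://"))
            {
                context.Response.StatusCode = 400;  // only plain-http targets make sense
                return;
            }

            // Serve from cache when possible to soften the performance cost.
            byte[] body = context.Cache[url] as byte[];
            if (body == null)
            {
                using (var client = new WebClient())
                {
                    body = client.DownloadData(url);
                }
                context.Cache.Insert(url, body, null,
                    DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
            }

            context.Response.BinaryWrite(body);
        }

        public bool IsReusable { get { return true; } }
    }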
Using a web proxy is not a good solution, for the following reasons:
There are performance problems, and the solution is expensive, as you said.
The most problematic part of this solution is that we still have a security vulnerability. The point of using https on a site is to protect it from sniffers and man-in-the-middle attacks. If you use a web proxy, the connection between your browser and your proxy is still vulnerable.
I'm not sure whether a web proxy would help in any way, because the browser always interprets these links as http even if your server is SSL enabled.
For more information about mixed content: https://developer.mozilla.org/en-US/docs/Security/MixedContent
The correct way to deal with this situation is to modify all your links to load content over https. An even better way is to use a protocol-relative URL. Note that the part after // is the host, so a full host name is required; example.com below is a placeholder:

    <script src="//example.com/scripts/main.js"></script>
Mixed content warnings are built into browsers by design to indicate exactly what they mean. You can turn them off in settings or just click OK, so by serving the mixed content you're degrading the UI, but not functionality.
A few things come to mind, since the providers can't change their content:
Write a back-end scraper for your app that scrapes the web page and serves the content locally over https.
Don't render the content immediately, make the user click on it to open the iframe so that at least your page loads and you can warn the user (optional).
Enhance either solution by checking for https first; a lot of websites have 80 and 443 both open, but as you pointed out, not all of them do (see the probe sketch after this list).
Not too familiar with this one, but you could maybe even have a server instance of Internet Explorer open the pages and cache them for you, simplifying the scrape.
If I were writing this, I would check for https when possible and allow the mixed content warnings, since all of that is by design.
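As a sketch of the "check for https first" idea in C# - the helper name and timeout are arbitrary:

    using System;
    using System.Net;

    static class HttpsProbe
    {
        // Returns true if the host answers on https at all.
        public static bool SupportsHttps(string host)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create("https://" + host + "/");
                request.Method = "HEAD";
                request.Timeout = 5000;  // don't stall page rendering on a dead port
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    return (int)response.StatusCode < 400;
                }
            }
            catch (WebException)
            {
                // Nothing listening on 443, a TLS failure, or an HTTP error status.
                return false;
            }
        }
    }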
I asked a question here a while back on how to hide my http request calls and make them more secure in my application. I did not want people to use Fiddler 2 to see the call and set up an auto responder. Everyone told me to go SSL and the calls would be hidden and the information kept safe.
I bought and installed an SSL certificate and got everything set up. I booted up Fiddler 2 and ran a test application that connected to an https web service as well as to an https php script.
Fiddler 2 was able to not only detect both requests, but decrypt them as well! I was able to see all information going back and forth, which brings me to my question.
What is the point of having SSL if it made zero difference to security? With or without SSL I can see all information going back and forth and STILL set up an auto responder.
Is there something in .NET I am missing to better hide my calls going over SSL?
EDIT
I am adding a new part to this question due to some of the responses I have received. Say an app connects to a web service to log in. The app sends the web service a username and a password, and the web service sends data back saying the login data is good or bad. Even over SSL, a person using Fiddler 2 could just set up an auto responder and the application is then "cracked". I understand how it can be useful to see the data when debugging, but my question is: what exactly should one do to make sure the SSL connection is to the server that was requested, i.e. that there cannot be a man in the middle?
This is covered here: http://www.fiddlerbook.com/fiddler/help/httpsdecryption.asp
Fiddler2 relies on a "man-in-the-middle" approach to HTTPS interception. To your web browser, Fiddler2 claims to be the secure web server, and to the web server, Fiddler2 mimics the web browser. In order to pretend to be the web server, Fiddler2 dynamically generates a HTTPS certificate.
Essentially, you manually trusted whatever certificate Fiddler provided; the same would be true if you manually accepted a certificate from a random person that does not match the domain name.
EDIT:
There are ways to prevent a Fiddler/man-in-the-middle attack - e.g. in a custom application using SSL, one can require that particular certificates be used for communication. In the case of browsers, they have UI to notify the user of a certificate mismatch, but will eventually allow such communication.
As a publicly available sample of explicit certificates, you can try to use Azure services (e.g. with the PowerShell tools for Azure) and sniff the traffic with Fiddler. It fails due to the explicit certificate requirement.
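For illustration, requiring a particular certificate in a .NET client can look roughly like this; the thumbprint below is a made-up placeholder for whatever certificate you actually deploy:

    using System;
    using System.Net;
    using System.Net.Security;
    using System.Security.Cryptography.X509Certificates;

    class PinnedClient
    {
        // Placeholder thumbprint of the one certificate we accept.
        const string ExpectedThumbprint = "D4DF4FE0C0F8F279B02C15E329E4E0C2A2E7C4B1";

        static void Main()
        {
            // Reject any certificate whose thumbprint differs from the pinned one,
            // even if it chains to a trusted root (as Fiddler's generated certs do).
            ServicePointManager.ServerCertificateValidationCallback =
                (sender, certificate, chain, sslPolicyErrors) =>
                    sslPolicyErrors == SslPolicyErrors.None
                    && string.Equals(new X509Certificate2(certificate).Thumbprint,
                                     ExpectedThumbprint,
                                     StringComparison.OrdinalIgnoreCase);

            using (var client = new WebClient())
            {
                // With Fiddler in the middle, this now throws instead of leaking data.
                Console.WriteLine(client.DownloadString("https://example.com/service"));
            }
        }
    }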
You could set up your web-service to require a Client-side certification for SSL authentication, as well as the server side. This way Fiddler wouldn't be able to connect to your service. Only your application, which has the required certificate would be able to connect.
Of course, then you have the problem of how to protect the certificate within the app, but you've got that problem now with your username & password, anyway. Someone who really wants to crack your app could have a go with Reflector, or even do a memory search for the private key associated with the client-side cert.
There's no real way to make this 100% bullet proof. It's the same problem the movie industry has with securing DVD content. If you've got software capable of decrypting the DVD and playing back the content, then someone can do a memory dump while that software is in action and find the decryption key.
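For what it's worth, attaching the client-side certificate to a request in .NET looks roughly like this; the URL, file name, and password are placeholders:

    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    static class TwoWaySslClient
    {
        public static string Call()
        {
            var request = (HttpWebRequest)WebRequest.Create("https://example.com/service");

            // Present the certificate bundled with the app; the server is configured
            // to reject connections that don't offer it, which shuts out Fiddler.
            request.ClientCertificates.Add(
                new X509Certificate2("client.pfx", "pfx-password"));

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }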
The point of SSL/TLS in general is that the occasional eavesdropper with Wireshark isn't able to see your payloads. Fiddler/Burp means that you interacted with the system. Yes, it is a very simple interaction, but it does require one of the systems to be compromised.
If you want to enhance the security by rendering these MITM programs useless at such a basic level, you would require client certificate authentication (2-way SSL) and pin both the server and client certificates (e.g. require that only the particular certificate is valid for the comms). You would also encrypt the payloads transferred on the wire with the public keys of each party, and ensure that the private keys only reside on the systems they belong to. This way even if one party (Bob) is compromised the attacker can only see what is sent to Bob, and not what Bob sent to Alice.
You would then take the encrypted payloads and sign the data with a verifiable certificate to ensure the data has not been tampered with (there is a lot of debate on whether to encrypt first or sign first, btw).
On top of that, you can hash the signature using several passes of something like SHA-2 to ensure the signature is 'as-sent' (although this is largely an obscure step).
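A rough .NET sketch of the sign/verify step, assuming placeholder certificate files and leaving the sign-vs-encrypt ordering debate aside:

    using System;
    using System.Security.Cryptography;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;

    static class PayloadSigner
    {
        public static void Demo()
        {
            // Sender: sign the (already encrypted) payload with the private key.
            var signerCert = new X509Certificate2("sender.pfx", "pfx-password");
            var signer = (RSACryptoServiceProvider)signerCert.PrivateKey;
            byte[] payload = Encoding.UTF8.GetBytes("encrypted payload bytes");
            byte[] signature = signer.SignData(payload, "SHA256");

            // Receiver: verify with the sender's *public* certificate only.
            var verifier = (RSACryptoServiceProvider)
                new X509Certificate2("sender.cer").PublicKey.Key;
            Console.WriteLine(verifier.VerifyData(payload, "SHA256", signature));
        }
    }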
This would get you about as far, security-wise, as is reasonably achievable when you do not control one of the communicating systems.
As others mentioned, if an attacker controls the system, they control the RAM and can modify all method calls in memory.
I am wondering how Google is able to show messages like "Cannot connect to the real mail.google.com" or similar. Are the IP addresses of Google servers simply hard-coded within Chrome, or is it possible to do a similar thing? This could help make sure clients are not visiting phishing or scam websites.
This error only shows when trying to access Google-related websites, nothing else.
Here is a sample of what Google Chrome shows when trying to connect to Gmail without providing the proxy credentials.
PS: I usually use C# & ASP.NET. I am open to suggestions.
EDIT:
Following the answer from SilverlightFox, is there any way to "request" the pinning of my website certificate? And/Or how to add it to the "STS preloaded list"?
As @Ted Bigham mentioned in comments, this will be achieved via Certificate pinning:-
One way to detect and block many kinds of MITM attacks is "certificate pinning", sometimes called "SSL pinning". A client that does certificate pinning adds an extra step to the normal TLS protocol or SSL protocol: After obtaining the server's certificate in the standard way, the client checks the server's certificate against trusted validation data. Typically the trusted validation data is bundled with the app, in the form of a trusted copy of that certificate, or a trusted hash or fingerprint of that certificate or the certificate's public key. For example, Chromium and Google Chrome include validation data for the *.google.com certificate that detected fraudulent certificates in 2011. In other systems the client hopes that the first time it obtains a server's certificate it is trustworthy and stores it; during later sessions with that server, the client checks the server's certificate against the stored certificate to guard against later MITM attacks.
From What is certificate pinning?:-
some newer browsers (Chrome, for example) will do a variation of certificate pinning using the HSTS mechanism. They preload a specific set of public key hashes into the HSTS configuration, which limits the valid certificates to only those which indicate the specified public key.
HTTP Strict Transport Security (HSTS) is a technology that is implemented via a HTTP response header (sent via HTTPS only) that tells a browser to "remember" that a website is to only be accessed via HTTPS for a period of time. If HSTS is set on www.example.com and the user visits http://www.example.com before max-age has expired, the browser will request https://www.example.com instead and no request will be sent via plain HTTP. HSTS requires that the user has already visited the site in order to have received the header, however a workaround has been implemented by Google in their Chrome browser code:
Google Chrome and Mozilla Firefox address this limitation by implementing a "STS preloaded list", which is a list that contains known sites supporting HSTS. This list is distributed with the browser so that it uses HTTPS for the initial request to the listed sites as well.
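If you control a site, emitting the HSTS header itself is straightforward; a sketch for an ASP.NET Global.asax, where the one-year max-age is just a common choice:

    // Global.asax: send the HSTS header on secure responses only, per the spec.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        if (Request.IsSecureConnection)
        {
            Response.AddHeader("Strict-Transport-Security",
                               "max-age=31536000; includeSubDomains");
        }
    }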
Update following question edit
Following the answer from SilverlightFox, is there any way to "request" the pinning of my website certificate? And/Or how to add it to the "STS preloaded list"?
According to this blog post you should contact the browser developers to be included in the HSTS list and have your public key (or CA's) pinned in the browser:
is this domain HSTS-preloaded in Chrome? For now it is hardcoded in the binary and will hopefully grow. You can contact Chromium to have your site included in that list.
and
So right now, the only solution to pin public keys of CAs signing your website certificates would be to contact Chromium team to be included in the code.
The only way to fight the man-in-the-middle is to have some pre-shared knowledge. In this case those are hardcoded certificates of a couple of root authorities that your browser trusts. These root certificates are used by their authority to sign certificates of other authorities which in turn become trustworthy too. A chain of trust is built until you hit the certificate of mail.google.com.
When you go to mail.google.com you are automatically redirected to the HTTPS (note the S!) version of the site. HTTPS means certificates. Your browser downloads the certificate of that site and inspects if the signing chain is rooted by some of the authorities your browser trusts. If not: Big fat warning! Possibly man-in-the-middle spoof going on!
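The same chain walk is exposed in .NET if you want to reproduce it outside a browser; a minimal sketch:

    using System;
    using System.Security.Cryptography.X509Certificates;

    static class ChainCheck
    {
        // True only if the certificate chains up to a root the local machine
        // already trusts - the "pre-shared knowledge" described above.
        public static bool ChainsToTrustedRoot(X509Certificate2 serverCert)
        {
            var chain = new X509Chain();
            chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
            bool ok = chain.Build(serverCert);
            foreach (X509ChainStatus status in chain.ChainStatus)
            {
                Console.WriteLine(status.StatusInformation);  // reasons, if it failed
            }
            return ok;
        }
    }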
Another thing that might happen is that the redirect from HTTP to HTTPS fails because some firewall between you and Google blocks HTTPS. That might be the warning you are getting.
My company wants to change domain names.
Requests to http://ServerA/folder/page.aspx need to go to http://ServerB/folder/page.aspx.
I can do most of the redirection in IIS and it works fine.
I have a concern that I don't seem to have the ability to test. Are there any problems/concerns from using the same technique for SSL pages? That is: redirecting https://ServerA/folder/page.aspx to https://ServerB/folder/page.aspx.
Using C#/ASP.NET/IIS 6 (I think)/Windows Server 2003
Thanks in advance.
No problems, no real security issues, but the browser will throw up a notification that the user is being redirected, and may require them to accept another certificate, or, if you're using the same certificate, may make snide little comments about the certificate being valid for servera.domain.com not serverb.domain.com.
If I were you, I'd try to remove the SSL from the original domain name, just to remove the possibility of having multiple SSL popups around to alarm your users.
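If part of the redirection ends up in code rather than IIS, a minimal Global.asax sketch (host names taken from the question) could look like:

    // Global.asax: permanent redirect that preserves scheme, path, and query.
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        if (Request.Url.Host.Equals("servera.domain.com",
                                    StringComparison.OrdinalIgnoreCase))
        {
            var target = new UriBuilder(Request.Url) { Host = "serverb.domain.com" };
            Response.StatusCode = 301;  // permanent, so search engines update
            Response.AddHeader("Location", target.Uri.ToString());
            Response.End();
        }
    }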
If you have "bought" SSL certificates for both domains, this will work without any warning.
My site has https sections (ssl), and others are regular http (not using ssl).
Are there any issues going from ssl to non-ssl pages?
Sometimes the user will click on a link, which will be ssl, then click on another link that goes from an https to an http based url.
I understand that when on an ssl page, all images also have to be served using https.
What other issues do I have to handle?
I recall that a popup sometimes displays telling the user about a security issue, like some content isn't secure. I am guessing that happens when you are under https and the page is loading images that are not under https.
Mixing is generally a bad idea just because it tends to detract from the user experience and coding around the differences makes the application that much harder to maintain. If you need SSL for even a little of the site, I'd recommend putting it all behind SSL. Some companies use a hybrid for the public "low end" site and SSL for the actual customer experience.
As Miyagi mentioned, session sometimes gets goofy, but it's not impossible if you keep the session stored in an external location. This means all session objects must be serializable, compact, etc., and it also means you'll need to manage the session id in a common browser element (a cookie is usually the safest).
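As a sketch, the out-of-process option looks like this in web.config; the state server host and port are placeholders:

    <!-- Keep session in an external state service so it survives moves
         between the http and https sides of the site. -->
    <system.web>
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=statehost:42424"
                    cookieless="false" />
    </system.web>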
There is a good article on CodeProject about this topic. The author encapsulates the switching in code and configuration. Not so long ago I tried to go this way - and stopped. There were some handling problems. But the main reason for stopping was the bad user experience mentioned by Joel before.
If you are using sessions on your site you will lose any session information when switching between ssl pages and non-ssl pages.