Password is in clear text after implementing SSL [duplicate] - c#

I asked a question here a while back on how to hide my HTTP request calls and make them more secure in my application. I did not want people to use Fiddler 2 to see the calls and set up an AutoResponder. Everyone told me to use SSL and the calls would be hidden and the information kept safe.
I bought and installed an SSL certificate and got everything set up. I booted up Fiddler 2 and ran a test application that connected to an HTTPS web service as well as to an HTTPS PHP script.
Fiddler 2 was able to not only detect both requests, but decrypt them as well! I was able to see all information going back and forth, which brings me to my question.
What is the point of having SSL if it makes zero difference to security? With or without SSL I can see all information going back and forth and STILL set up an AutoResponder.
Is there something in .NET I am missing to better hide my calls going over SSL?
EDIT
I am adding a new part to this question due to some of the responses I have received. What if an app connects to a web service to log in? The app sends the web service a username and a password, and the web service sends data back to the app saying whether the login data was good or bad. Even over SSL, a person using Fiddler 2 could just set up an AutoResponder and the application is then "cracked". I understand how it can be useful to see the data when debugging, but my question is: what exactly should one do to make sure the SSL connection really is to the server that was requested? Basically, to ensure there cannot be a man in the middle.

This is covered here: http://www.fiddlerbook.com/fiddler/help/httpsdecryption.asp
Fiddler2 relies on a "man-in-the-middle" approach to HTTPS interception. To your web browser, Fiddler2 claims to be the secure web server, and to the web server, Fiddler2 mimics the web browser. In order to pretend to be the web server, Fiddler2 dynamically generates a HTTPS certificate.
Essentially, you manually trust whatever certificate Fiddler provides; the same would be true if you manually accepted a certificate from a random person that does not match the domain name.
EDIT:
There are ways to prevent a Fiddler/man-in-the-middle attack - e.g. in a custom application using SSL, one can require that particular certificates be used for the communication. Browsers, in contrast, have UI to notify the user of a certificate mismatch, but ultimately allow such communication.
As a publicly available sample of explicit certificate requirements, you can try using Azure services (e.g. with the PowerShell tools for Azure) and sniffing the traffic with Fiddler: it fails because of the explicit certificate requirement.
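As a rough illustration of requiring a particular certificate in a custom .NET application (a sketch only, not production code; the thumbprint below is a placeholder you would replace with your server certificate's actual thumbprint):

using System;
using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

static class PinnedClient
{
    // Placeholder: thumbprint of the one server certificate this app trusts.
    private const string PinnedThumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

    public static string Download(string url)
    {
        // Global hook: replaces the default "trust anything Windows trusts" behaviour.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
            {
                if (sslPolicyErrors != SslPolicyErrors.None)
                    return false; // fail on any ordinary validation error

                var cert = new X509Certificate2(certificate);
                // Accept only the exact certificate we expect (certificate pinning).
                return string.Equals(cert.Thumbprint, PinnedThumbprint,
                                     StringComparison.OrdinalIgnoreCase);
            };

        using (var client = new WebClient())
        {
            return client.DownloadString(url);
        }
    }
}

With Fiddler in the middle, its dynamically generated certificate will not match the pinned thumbprint, so the connection fails before any payload is sent.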

You could set up your web service to require a client-side certificate for SSL authentication, as well as the server-side one. That way Fiddler wouldn't be able to connect to your service; only your application, which has the required certificate, would be able to connect.
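As a sketch of what presenting a client certificate from a .NET client might look like (the PFX file name and password are placeholders, not something from this answer):

using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

static class ClientCertExample
{
    public static string CallService(string url)
    {
        // Placeholder: a client certificate (with private key) shipped with the application.
        var clientCert = new X509Certificate2("client.pfx", "pfx-password");

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.ClientCertificates.Add(clientCert); // presented during the TLS handshake

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}

The server (e.g. IIS) would be configured to require client certificates, so a connection without the expected certificate is rejected during the handshake.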
Of course, then you have the problem of how to protect the certificate within the app, but you've got that problem now with your username & password, anyway. Someone who really wants to crack your app could have a go with Reflector, or even do a memory search for the private key associated with the client-side cert.
There's no real way to make this 100% bullet proof. It's the same problem the movie industry has with securing DVD content. If you've got software capable of decrypting the DVD and playing back the content, then someone can do a memory dump while that software is in action and find the decryption key.

The point of SSL/TLS in general is that the occasional eavesdropper with Wireshark isn't able to see your payloads. Fiddler/Burp means that someone has interacted with the system; yes, it is a very simple interaction, but it does require one of the systems to be compromised.
If you want to enhance security by rendering these MITM programs useless at such a basic level, you would require client certificate authentication (2-way SSL) and pin both the server and client certificates (i.e. require that only a particular certificate is valid for the communication). You would also encrypt the payloads transferred on the wire with the public keys of each party, and ensure that the private keys reside only on the systems they belong to. This way, even if one party (Bob) is compromised, the attacker can only see what is sent to Bob, and not what Bob sent to Alice.
You would then take the encrypted payloads and sign the data with a verifiable certificate to ensure the data has not been tampered with (there is a lot of debate on whether to encrypt first or sign first, btw).
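A hedged sketch of that encrypt-and-sign idea (an encrypt-then-sign variant with placeholder certificates; raw RSA is used here only for brevity):

using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;

static class PayloadProtection
{
    // Encrypt for the recipient, then sign the ciphertext so tampering is detectable.
    public static (byte[] CipherText, byte[] Signature) Protect(
        string payload, X509Certificate2 recipientCert, X509Certificate2 senderCert)
    {
        byte[] plain = Encoding.UTF8.GetBytes(payload);

        using (RSA recipientPublic = recipientCert.GetRSAPublicKey())
        using (RSA senderPrivate = senderCert.GetRSAPrivateKey())
        {
            // Only the recipient's private key can decrypt this.
            byte[] cipher = recipientPublic.Encrypt(plain, RSAEncryptionPadding.OaepSHA256);

            // Signature made with the sender's private key; anyone with the
            // sender's public key can verify it.
            byte[] signature = senderPrivate.SignData(
                cipher, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            return (cipher, signature);
        }
    }
}

In practice the payload would be encrypted with a symmetric key and only that key wrapped with RSA, since raw RSA can only encrypt data smaller than the key size.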
On top of that, you can hash the signature using several passes of something like SHA-2 to ensure the signature is 'as-sent' (although this is largely an obscurity step).
This would get you about as far, security-wise, as is reasonably achievable when you do not control one of the communicating systems.
As others mentioned, if an attacker controls the system, they control the RAM and can modify all method calls in memory.

Related

C# WebClient Encryption - Charles

Is it possible to connect over SSL via a WebClient so that the traffic cannot be decrypted later by programs like Charles or Fiddler?
My problem is that I have an application with a login: if the user's username and password are correct, the server returns success. But if the user reads the response, he can easily fake it with Charles and "bypass" my login.
That depends on what you are trying to do, and why. Fiddler decrypts traffic by installing a root certificate on your computer, and then uses that to basically mount a man-in-the-middle attack. In other words, with the user's permission it subverts the security model of Windows. So if you rely on Windows to validate the SSL certificate used, there is nothing you can do about it.
If there is only one server that you really want to connect to, you can validate that the certificate you are getting is the one and only one that you do indeed trust. This is known as certificate pinning.
If you are worried about someone storing the traffic and later using Fiddler to decrypt it, you can stop worrying.

Open X509 Certificates Selection Using USB Token in C# Hosted on IIS

I am working on a requirement that requires digitally signing PDF files in ASP.NET C#, and I have developed an application that gets the client certificate from a USB token on my local machine. But when hosting this application on an IIS server, I get the error 'Current session is not interactive'.
Does anyone have an idea how we can get X509 certificates from the client machine in ASP.NET C# when the application is hosted on an IIS server rather than running as a console application?
My code for reference:
private void getSign()
{
    // Open the current user's personal ("My") certificate store read-only.
    X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.OpenExistingOnly | OpenFlags.ReadOnly);
    try
    {
        // Show the certificate selection dialog so the user can manually choose a certificate.
        // Note: this requires an interactive session, which an IIS application pool does not have.
        X509Certificate2Collection selection = X509Certificate2UI.SelectFromCollection(
            store.Certificates, null, null, X509SelectionFlag.SingleSelection);

        if (selection.Count == 0)
        {
            //MessageBox.Show("Certificate not found");
            return;
        }

        X509Certificate2 cert = selection[0];
        SignWithThisCert(cert);
    }
    finally
    {
        store.Close();
    }
}
Disclaimer: although this is not a straight answer to your question, it may contain directions that get you onto the right way, depending on the business requirements. There are two major issues in your question. One of them I tried to discuss in the comments, but it may need further explanation.
Let's try to analyze your initial post:
Task: let users upload a PDF to the web application and have it signed.
Requirement: the PDF must be signed using a certificate stored on the client's USB token.
Your proposed solution: get the client certificate (from the client) and perform the signing on the server side.
Let's formalize terms and operations used in your scenario.
Signing: document signing is used to guarantee document integrity and to ensure that the document was not tampered with in any way after it was signed. A digital signature provides information about the entity that performed the signing, which adds a non-repudiation feature: once the document is signed, the signer cannot deny the fact of signing, and the signature attests that the information in the document was correct at signing time. Asymmetric signing requires a private key; the associated public key can be used to verify and validate the signature.
Private key: part of a key pair that belongs to a particular client. No one else should know the private key.
Security boundaries: the client (web browser or other web client) and the web server run in different security boundaries and have different trust levels. They are under different administrative control. As a result, the web server has very limited access to the client machine/data and vice versa: the web client has very limited access to the server machine/data.
Your proposed design assumes that the client picks the document and uploads it to the server. Then the client picks a signing certificate (specifically, the private key) and uploads it to the server for the signing operation.
Issue #1: once the private key leaves the client and is copied to your web application, you are breaking the security boundary. As a result, the key is no longer private, because the web application possesses knowledge of the private key and it is stored (even if only temporarily) in server memory. In simple words, the key is leaked and compromised.
The client is no longer responsible for his key and for operations made with the key. The client may deny anything that was done using his private key, and deny signatures made by your web application.
Issue #2: your proposed design assumes that the PDF is copied to the server as is, and only then is it signed. However, once the document (or its exact binary copy, to be more precise) touches the network, the client is no longer responsible for the document's accuracy, because the document may be transformed in transit or by some code that touches the document between the client and the document-signing code.
Once the document leaves the client machine, the client is no longer responsible for document integrity, because the document passes through various pieces of code that transform it into a format suitable for transmission (encapsulation, for example). As a result, the document sent by the client and the document received by the signing code on the server side may not be the same; document integrity is not guaranteed. Although you can apply TLS to protect the document in transit, there are still places where the document can be anonymously tampered with and no one will notice.
Again, because the client cannot guarantee that the web application received the same document he sent, the client can deny the document you are trying to sign and deny the signature, thus making the signature useless, because it proves nothing.
Issue #3 (not really an issue, but worth explaining): the provided piece of code doesn't perform the intended task (even though it appears to work in a dev environment). Your goal is to invoke the certificate selection dialog on the client so that the proper certificate can be selected.
During testing, you are running all the code locally. In the debugger, the web application runs under the currently logged-on user (an interactive session) and is able to show the certificate selection dialog. However, you can't easily tell in which context (client or server) it is executed, because both client and server run on the same machine and under the same security context. In fact, it is invoked in the server context.
When you deploy the application to a web server, you see the difference. The web application runs under some application pool context (user account), and that session is not interactive. As a result, the X509Certificate2UI class cannot show the dialog, because no one would see it and no one could press its buttons. This behavior is consistent regardless of whether the client and server run on the same machine or on different ones, because IIS (or another web server) immediately separates concerns and security boundaries, while the debugger does not, and client and server will definitely run under different security contexts. Even if you force the same context to be used, IIS will create a secondary non-interactive user session to run the web application.
In order to show the certificate selection dialog on the client, you need deep interaction with the client, for example via Silverlight (not sure whether X509Certificate2UI is available in Silverlight) or some ActiveX control. You have to run some code on the client side to accomplish that.
Everything stated above shows the potential issues in your initial design: it simply breaks basic security rules. Technologies and tools are designed to follow these rules, not to break them. By pursuing your initial design you will be forced to constantly fight the technologies to break them, making your application very insecure and vulnerable.
Preferred solution: we identified the common risks in your design: key leakage and document integrity between the client and the signing code on the server. To mitigate all of this, you should do one thing: perform the document signing on the client side. By doing this, the signing private key will never leak from the client, and document integrity will be guaranteed over the course of signing and receipt by the web application.
Now we can talk about certificate properties. Your requirement is to use a certificate stored on a USB token. I don't know what kind of token you mean here: is it a standard USB mass-storage device with a PFX on it, or is it a cryptographic device (a smart card with a USB interface, which is what is usually meant by a USB token; for example, Aladdin (SafeNet) eToken devices)?
If it is a USB mass-storage device, then it can't be part of the requirement, because a generic USB drive offers nothing helpful to identify the source of the certificate: any certificate can easily be copied to a USB drive, and any certificate can be copied from one.
If it is a USB smart card, then there is a way to identify whether the certificate came from that device or from some other source. Smart cards have one unique property: the private key never leaves the card. All operations are performed on the card (which is why they are slow compared to certificates stored on a PC).
This is usually accomplished by adding extra information to the signing certificate during certificate issuance, for example by adding a Certificate Policies certificate extension. This requires that the CA operator/manager ensure that certificates with the specified certificate policies are deployed to smart cards only.
If these processes are established, you can have code on the server side that accepts the signed PDF document and examines the signing certificate's contents. For example, you read the certificate policies and expect to see a particular entry there. If it is present, you can safely assume that the document was signed using a certificate stored on a smart card, because the key cannot be copied anywhere from the card device. If the certificate does not contain the specific entry in its certificate policies, you can reject the document and ask the client to use the proper certificate.
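A minimal sketch of that server-side check (the smart-card policy OID below is a hypothetical value that your CA would actually define):

using System.Security.Cryptography.X509Certificates;

static class SigningCertPolicyCheck
{
    // Hypothetical policy OID that the CA assigns only to smart-card certificates.
    private const string SmartCardPolicyOid = "1.3.6.1.4.1.99999.1.1";

    public static bool WasSignedWithSmartCardCert(X509Certificate2 signingCert)
    {
        foreach (X509Extension extension in signingCert.Extensions)
        {
            // 2.5.29.32 is the standard OID of the Certificate Policies extension.
            if (extension.Oid == null || extension.Oid.Value != "2.5.29.32")
                continue;

            // Crude check: look for the expected policy OID in the decoded extension text.
            string decoded = extension.Format(true);
            if (decoded.Contains(SmartCardPolicyOid))
                return true;
        }

        // No matching policy: reject the document and ask for the proper certificate.
        return false;
    }
}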

How do I prevent an app from using my api key?

My organization has a Win32 application that is written in the "fat client" style. I am writing a C# client/server solution that will replace this Win32 application. I am using ASP.NET MVC for the server, and the client is a WPF application. I did my own custom implementation of the OAuth 2 spec. I am planning on creating a RESTful API, and I want not only my client to use it, but also to allow 3rd parties to use it as well.
Every app will have an api key issued to it including the official client, but the official client's api key should be allowed additional api scopes (permissions) that 3rd party users aren't allowed to use. It is pretty obvious how to solve this but if you consider not everyone plays nicely, you have to ask "What would stop someone from just pretending like they are the official client and using it's api key?" Communication will be encrypted, but the server is not in the cloud or anything like that where we could control it. Our customers install the servers on their own machines and they will more than likely have access to the server application's SSL cert. Once you have that you can easily write an app that would run on our customer's machine that could glean the API key and secret from the official client app and use that info to request tokens from the server as if you were the official client.
I am planning on self-signing the default key that the server uses, and I could try to hide it in the application, but that really is just obfuscation. Besides, I wanted to allow users to provide their own SSL certs so browser-based 3rd-party applications wouldn't have issues with the browsers complaining that they are trying to communicate over a self-signed SSL channel.
Is there anything I can do? Here are my choices as I see it:
1) I can set it up so that only SSL certs provided by us can be used, and we hide them on disk encrypted using a secret that is obfuscated in the application code. We then just hope no one bothers to take the time to dig through our .NET assemblies to find the secret used to encrypt/decrypt the certs on disk.
2) We allow them to provide certs so that we don't need to be involved with that process at all when they want to use a signed cert (we don't want to be in the cert business). Now we can't even hide behind obfuscation so if someone wants it, then the official client's API key and secret is easily obtainable.
Neither seems very desirable to me. Option 1 makes us request additional funds from them and manage SSL certs when self-signed ones don't work for them, and in the end, if someone really wants the certs, they can still take the time to get them. Option 2 just makes it super easy to steal the official client's secret.
Reasons to want to limit unofficial Apps:
1. Discourage clones
A. Tell people not to do it. Have a lawyer send cease-and-desist letters to authors of popular apps (and to anyone helping distribute them). Intermittently download them and alter the client/server code so that the popular apps will break. For added discouragement, temporarily ban any users who used the popular app. Authors will mostly give up on cloning your app; temporarily banning users will kill their install base. This is not great for your reputation.
2. Prevent unauthorized behavior.
A. Any behavior allowed by the official app should be allowed by the custom app. Whatever scenario you are worried about, block it server-side so that neither app can do it.
You can try to hide credentials (code obfuscation, hidden credentials, etc.), but this only raises the cost/difficulty. That is often enough to discourage code theft (no need to make code theft impossible; it is sufficient to make it more difficult than copying it by hand). However, users who want to use your API in unsupported ways can work around this.
The answer is simple: each instance of your app should have its own unique key, effectively a user sign-up. You then ban users who infringe your rules, in this case by signing in with a non-authorised client. That should be pretty easy to detect if you push updates more frequently than it would be cost-effective to reverse engineer them, much like PunkBuster or other anti-cheating tech.

C# : Authenticate server certificate before sending HTTP GET / POST

I'm building a .NET app, and I'd like to make the web calls secure enough that it's not easy to monitor the traffic through something like Fiddler. I'd like to be able to know when the certificate from the server isn't as expected and then never send out a full request with the request data.
Twitter's iOS app does this. I think someone would have to somehow make a copy of Twitter's HTTPS certificate and make that Fiddler's certificate. I haven't done it myself, but I think that's how I understand it. What you see in Fiddler is that the tunnel has been created, but no request was actually sent out. It's the same scenario as when you don't have Fiddler's HTTPS certificate installed on the device and you open a browser to google.com: a tunnel is created and then the browser knows 'uh oh, untrusted server' and displays a message to the user. I'd like to just make it more secure and only allow one certificate, my server's certificate.
Make sense? I think I figured out how to do it by making a separate full dummy request, but that's not ideal.
What you're asking is "How do I implement certificate pinning in my client application?"
The way to do that would be to attach a validation callback on the Service Point responsible for making your HTTPS requests. Your validation callback would override the default behavior ("Accept any certificate trusted by Windows") and would instead validate that the received certificate is EXACTLY the one you expect.
Now, before you go that route, keep in mind a few things:
An attacker with Admin or Debug privileges can easily change your code in memory to remove your validation. This is called the "Trusted client" problem.
Your validation will break if the code is ever run in a corporate environment where a security appliance is doing HTTPS inspection (e.g. BlueCoat, ISA/TMG, etc.)
Your validation will prevent "certificate agility" -- if the server cert needs to change, you will need to update the application. If you ever want to use a load-balanced configuration with multiple certificates, or a public HTTPS CDN, you would need to update your validation logic.
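For reference, a rough sketch of that validation-callback idea, here using the per-handler callback available on HttpClientHandler in .NET Framework 4.7.1+ / .NET Core rather than the ServicePoint hook (the thumbprint is a placeholder, not the asker's or answerer's value):

using System;
using System.Net.Http;

static class PinnedHttpClientFactory
{
    // Placeholder: thumbprint of the one server certificate the app will accept.
    private const string ExpectedThumbprint = "0123456789ABCDEF0123456789ABCDEF01234567";

    public static HttpClient Create()
    {
        var handler = new HttpClientHandler
        {
            // Reject everything except the exact expected certificate, so the
            // request body is never sent through an unexpected intermediary.
            ServerCertificateCustomValidationCallback =
                (request, certificate, chain, sslPolicyErrors) =>
                    certificate != null &&
                    string.Equals(certificate.Thumbprint, ExpectedThumbprint,
                                  StringComparison.OrdinalIgnoreCase)
        };

        return new HttpClient(handler);
    }
}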

Can I avoid storing MS Exchange credentials while still being able to authenticate (against EWS)?

I'm building an application that syncs data between users' Exchange Server accounts (version 2007-2013 supported) and the application.
The application can't use impersonation (at least not in the typical case) as users could be on any number of domains and exchange servers.
I know I'm going to have to ask for their username/email-address and password initially. However, I really don't want to be responsible for storing these credentials if I don't have to (even if they are encrypted, I'd rather not).
I'm not sure what questions to ask, so I'm going with these:
How does Exchange Server authenticate? Do the user's credentials get sent directly to the server as they are, or are they hashed before being sent across the wire? If they are hashed, how can I get/generate this hash for re-use on successive authentications?
Does Exchange Server send some sort of authentication token that can be re-used later (and forever, until password change or invalidation)?
If you know of a solution to the problem, that the answers to these questions won't address, please do provide it instead.
Active Directory Federation Services is designed for exactly such tasks. You can read about it there.
As mentioned by Kirill, ADFS 2.0 is one of the best solutions for your task. You can also look into other SSO implementations as well. Though the main goal of an SSO implementation is to maintain a single login state across multiple applications (thereby avoiding a separate login prompt for each application), some of your application goals seem relevant. Please do thorough research on all the trade-offs before heading into an SSO implementation, since there is some degree of complexity involved. SSO suits best if you are considering integrating multiple applications with the Exchange server in the future.
To answer some of your questions (in the same order - considering an SSO scenario with ADFS 2.0):
Authentication to the Exchange server will be done via ADFS 2.0 (which provides security tokens (STS service) to your application after authenticating against AD / the main directory service). All communication is encrypted, and token-signing certificates are used for integrity and confidentiality.
The lifetime of the security tokens sent by ADFS 2.0 can be configured, and the tokens can be reused as required. Please see this blog post for more details.
Also, you can configure ADFS 2.0 (the Federation Service) to send only the relevant claim values (like username and email address) to the application, thereby improving data security.
System.Net.CredentialCache should work to suit your needs. WebCredentials is a wrapper for System.Net.NetworkCredential. Depending on the connection type/domain etc., you should be able to utilize System.Net.CredentialCache.DefaultNetworkCredentials or System.Net.CredentialCache.DefaultCredentials.
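A hedged sketch of what that might look like with the EWS Managed API, assuming the user is logged on to a domain the Exchange server trusts (the mailbox address is a placeholder):

using Microsoft.Exchange.WebServices.Data;

static class EwsConnection
{
    public static ExchangeService Connect()
    {
        var service = new ExchangeService(ExchangeVersion.Exchange2010_SP2);

        // Use the credentials of the currently logged-on Windows user,
        // so the application never has to store a password itself.
        service.UseDefaultCredentials = true;

        // Alternatively, explicit credentials can be supplied:
        // service.Credentials = new WebCredentials("user@example.com", "password");

        // Placeholder mailbox address, used by Autodiscover to locate the EWS endpoint.
        service.AutodiscoverUrl("user@example.com",
            redirectionUrl => redirectionUrl.StartsWith("https://"));

        return service;
    }
}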
Perhaps you should take a look at these links: Connecting to EWS by using the EWS Managed API, and Connect to Exchange - Getting Started Tutorial. Hopefully they will give you a new idea of how to solve your problem :)
Because if I understand the information correctly, you may be overthinking the problem, but I don't have any experience with this, so I could also be absolutely wrong.
Bottom Line
If you can't configure anything on the server, there's no automatically generated token to use. It's unfortunate, but you're facing the same general problem that web browsers have--saving the password.
It's worth noting that any authentication needs to be over SSL (an https connection) to prevent a third party listening in on the authentication.
Password storage thoughts:
My suggestion is then to be somewhat creative when storing the password. You can use a keyed encryption algorithm, and then use a hash to generate the key, letting you arbitrarily choose what goes into the key. You would want at least 3 pieces of information going into this: something unique to the device, something unique to the app, and something unique to the exchange server.
For example:
a unique ID given by the device (it doesn't matter whether this value is app-specific or not, merely that it is consistent)
a (long) string of information compiled into the app, possibly keyed to installation specific values, say the time when the app was first used
something unique to the destination, like the DNS name and perhaps some more specific server info
If you're willing to provide the option to the user, you could have an authorization PIN of some kind that would also be added to the data.
All this data gets put together in one byte array and hashed. The hash (or part of it, or it twice, depending on the hash size vs. the key length) is then used as the key for the encryption of the password.
You could also include some check information along with the password to be able to check client side whether or not the password was decrypted correctly. (If the wrong data is hashed, the wrong key is generated, and the wrong result comes from the decryption).
It's worth noting that all of the information to be put into the hash needs to be stored on the device, which is why I would suggest a PIN to authorize use of the account.
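A minimal sketch of the scheme described above, assuming hypothetical device/app/server identifiers and an optional PIN (in practice a proper key-derivation function such as PBKDF2 would be preferable to a single SHA-256 pass):

using System.Security.Cryptography;
using System.Text;

static class PasswordVault
{
    // Derive an AES-256 key by hashing device-, app- and server-specific values plus a PIN.
    private static byte[] DeriveKey(string deviceId, string appSecret, string serverName, string pin)
    {
        byte[] material = Encoding.UTF8.GetBytes(deviceId + "|" + appSecret + "|" + serverName + "|" + pin);
        using (var sha = SHA256.Create())
        {
            return sha.ComputeHash(material); // 32 bytes -> AES-256 key
        }
    }

    public static byte[] EncryptPassword(string password, string deviceId, string appSecret,
                                         string serverName, string pin, out byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = DeriveKey(deviceId, appSecret, serverName, pin);
            aes.GenerateIV();
            iv = aes.IV; // stored alongside the ciphertext

            // Prepend a known marker so decryption with the wrong key is detectable client-side.
            byte[] plain = Encoding.UTF8.GetBytes("OK|" + password);
            using (ICryptoTransform encryptor = aes.CreateEncryptor())
            {
                return encryptor.TransformFinalBlock(plain, 0, plain.Length);
            }
        }
    }
}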
