Open X509 Certificates Selection Using USB Token in C# Hosted on IIS - c#

I am working on a requirement that requires a digital signature on PDF files in ASP.NET C#, and I developed an application that gets the client certificate from a USB token on my local machine. But when I host this application on an IIS server, I get the error 'Current session is not interactive'.
Does anyone have any idea how we can get X509 certificates from the client machine in ASP.NET C# when the application is hosted on an IIS server (not a console application)?
My code for reference:
private void getSign()
{
    X509Store store = new X509Store(StoreLocation.CurrentUser);
    store.Open(OpenFlags.OpenExistingOnly | OpenFlags.ReadOnly);
    try
    {
        // Manually choose the certificate in the store
        X509Certificate2Collection sel = X509Certificate2UI.SelectFromCollection(
            store.Certificates, null, null, X509SelectionFlag.SingleSelection);
        if (sel.Count == 0)
        {
            //MessageBox.Show("Certificate not found");
            return;
        }
        SignWithThisCert(sel[0]);
    }
    finally
    {
        store.Close();
    }
}

Disclaimer: although this is not a direct answer to your question, it may give you directions toward the right approach, depending on the business requirements. There are two major issues in your question. One of them I tried to discuss in the comments, but it may need a fuller explanation.
Let's try to analyze your initial post:
Task: let users upload a PDF to a web application and have it signed.
Requirements: the PDF must be signed using a certificate stored on a client's USB token.
Your proposed solution: get the client certificate (from the client) and perform the signing on the server side.
Let's formalize the terms and operations used in your scenario.
Signing: document signing is used to guarantee document integrity and ensure that the document was not tampered with in any way after it was signed. A digital signature provides information about the entity that performed the signing. This adds a non-repudiation feature: once the document is signed, the signer cannot deny having signed it, and the signature proves that the information in the document was correct at signing time. Asymmetric signing requires a private key; the associated public key can be used to verify and validate the signature.
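As a minimal sketch of what asymmetric signing looks like in .NET (using an in-memory RSA key here for illustration; a real signer would use the key pair behind the client's certificate):

```csharp
using System;
using System.Security.Cryptography;

// Sketch: sign with the private key, verify with the public key.
public static class SignatureDemo
{
    public static byte[] Sign(RSA privateKey, byte[] document)
    {
        // Only the holder of the private key can produce this value.
        return privateKey.SignData(document, HashAlgorithmName.SHA256,
                                   RSASignaturePadding.Pkcs1);
    }

    public static bool Verify(RSA publicKey, byte[] document, byte[] signature)
    {
        // Anyone with the public key can check integrity and origin.
        return publicKey.VerifyData(document, signature,
                                    HashAlgorithmName.SHA256,
                                    RSASignaturePadding.Pkcs1);
    }
}
```

Flipping even a single bit of the document after signing makes Verify return false, which is exactly the integrity guarantee described above.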
Private key: part of a key pair which belongs to a particular client. No one else should know the private key.
Security boundaries: the client (web browser or other web client) and the web server run in different security boundaries and have different trust levels. They are under different administrative control. As a result, the web server has very limited access to the client machine/data, and vice versa: the web client has very limited access to the server machine/data.
Your proposed design assumes that the client picks the document and uploads it to the server. Then the client picks a signing certificate (specifically, the private key) and uploads it to the server for the signing operation.
Issue #1: once the private key leaves the client and is copied to your web application, you are breaking the security boundary. As a result, the key is no longer private, because the web application possesses knowledge of the private key, and it is stored (even if temporarily) in server memory. In simple words, the key is leaked and compromised.
The client is no longer responsible for his key and for operations performed with it. The client may deny anything that was done using his private key, including signatures made by your web application.
Issue #2: your proposed design assumes that the PDF is copied to the server as-is and only then signed. However, once the document (or its exact binary copy, to be more precise) touches the network, the client is no longer responsible for the document's accuracy, because the document may be transformed in transit or by some code that touches it between the client and the document-signing code.
Once the document leaves the client machine, the client is no longer responsible for document integrity, because the document passes through various pieces of code that convert it into a format suitable for transmission (encapsulation, for example). As a result, the document sent by the client and the document received by the signing code on the server side may not be the same: document integrity is not guaranteed. Although you can apply TLS to protect the document in transit, there are still places where the document can be anonymously tampered with and no one will notice.
Again, because the client cannot guarantee that the web application received the same document he sent, the client can deny the document you are trying to sign and deny the signature, making the signature useless, because it proves nothing.
Issue #3: (not really an issue, but worth explaining) the provided piece of code doesn't perform the intended task (even though it appears to work in the dev environment). Your goal is to invoke the certificate selection dialog on the client to select the proper certificate.
During testing, you run all the code locally. In the debugger, the web application runs under the currently logged-on user (an interactive session) and is able to show the certificate selection dialog. However, you can't easily tell in which context (client or server) it executes, because both client and server run on the same machine and under the same security context. In fact, it is called under the server context.
When you deploy the application to a web server, you see the difference. The web application runs under some application pool context (user account), and that session is not interactive. As a result, the X509Certificate2UI class cannot show the dialog, because no one would see it and no one could press its buttons. This behavior is consistent regardless of whether the client and server run on the same machine or on different ones, because IIS (or any other web server) immediately separates concerns and security boundaries, while the debugger does not. The client and server will definitely run under different security contexts; even if you force them to use the same account, IIS will create a secondary, non-interactive user session to run the web application.
In order to show a certificate selection dialog on the client, you have to have deep interaction with the client, for example via Silverlight (I'm not sure whether X509Certificate2UI is available in Silverlight) or some ActiveX control. You have to run some code on the client side to accomplish that.
Everything stated above shows the potential issues in your initial design: it simply breaks basic security rules. Technologies and tools are designed to follow these rules, not break them. By pursuing your initial design, you will be forced to constantly fight the technology to break it, making your application very insecure and vulnerable.
Preferred solution: we identified the common risks in your design: key leakage, and document integrity between the client and the signing code on the server. To mitigate all of this, you should do only one thing: perform document signing on the client side. By doing this, the signing private key will never leave the client, and document integrity will be guaranteed between signing and receipt by the web application.
Now we can talk about certificate properties. Your requirement is to use a certificate stored on a USB token. I don't know what kind of token you mean here. Is it standard USB mass storage with a PFX on it, or is it a cryptographic device (a smart card with a USB interface, which is what is usually referred to as a USB token; for example, Aladdin (SafeNet) eToken devices)?
If it is a USB mass storage device, then it can't be part of the requirement, because a generic USB drive does not offer anything helpful to identify the source of the certificate. Any certificate can easily be copied to a USB drive, and any certificate can be copied from one.
If it is a USB smart card, then there is a way to identify whether the certificate came from this device or from another source. Smart cards have one unique property: the private key never leaves the card. All operations are performed on the card (which is why they are slow compared to certificates stored on a PC).
This is usually accomplished by adding extra information to the signing certificate during issuance, for example by adding the Certificate Policies certificate extension. This requires that the CA operator/manager ensure that certificates with the specified certificate policies are deployed to smart cards only.
If these processes are established, you can use code on the server side that accepts the signed PDF document and examines the signing certificate's contents. For example, you read the certificate policies and expect to see a particular entry there. If it is present, you can safely assume that the document was signed using a certificate stored on a smart card, because the key cannot be copied anywhere from the card device. If the certificate does not contain the specific entry in its certificate policies, you can reject the document and ask the client to use a proper certificate.
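A server-side check along those lines could be sketched like this. The policy OID below is purely hypothetical (it would be whatever OID your CA assigns to smart-card-only certificates), and the string match is a simplification of proper ASN.1 parsing:

```csharp
using System.Security.Cryptography.X509Certificates;

// Sketch: did the signing certificate come from a smart-card-only policy?
public static class PolicyCheck
{
    // Hypothetical policy OID issued by your CA only to smart-card certificates
    public const string SmartCardPolicyOid = "1.3.6.1.4.1.99999.1.1";

    public static bool IssuedToSmartCard(X509Certificate2 signingCert)
    {
        // 2.5.29.32 is the OID of the Certificate Policies extension
        X509Extension policies = signingCert.Extensions["2.5.29.32"];
        if (policies == null)
            return false;

        // Simplified check: look for the policy OID in the decoded text.
        // A production implementation should parse the ASN.1 structure.
        return policies.Format(false).Contains(SmartCardPolicyOid);
    }
}
```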

Related

C# WebClient Encryption - Charles

Is it possible to connect via SSL with a web client so that the traffic cannot be decrypted later by programs like Charles or Fiddler?
My problem is, I have an application with a login. If the username and password from a user are correct, the server returns Success. But if the user reads the response, he can easily fake it with Charles and "bypass" my login.
That depends on what you are trying to do, and why. Fiddler decrypts traffic by installing a root certificate on your computer, and then uses that to basically mount a man-in-the-middle attack. In other words, with the user's permission, it subverts the security model of Windows. So if you rely on Windows to validate the SSL certificate used, there is nothing you can do about it.
If there is only one server that you really want to connect to, you can validate that the certificate you are getting is the one and only one that you actually trust. This is known as certificate pinning.
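A minimal pinning sketch with HttpClient might look like this. The thumbprint is a placeholder: you would hard-code the thumbprint of the one certificate your server actually presents.

```csharp
using System;
using System.Net.Http;

// Sketch: certificate pinning. Fiddler's dynamically generated certificate has
// a different thumbprint, so the pinned client refuses the connection.
public static class PinnedClient
{
    // Placeholder thumbprint of the one server certificate you trust
    public const string PinnedThumbprint =
        "0123456789ABCDEF0123456789ABCDEF01234567";

    public static HttpClient Create()
    {
        var handler = new HttpClientHandler
        {
            // Accept only the exact certificate we expect, regardless of
            // which root CAs the operating system trusts.
            ServerCertificateCustomValidationCallback =
                (request, cert, chain, errors) =>
                    cert != null &&
                    string.Equals(cert.GetCertHashString(), PinnedThumbprint,
                                  StringComparison.OrdinalIgnoreCase)
        };
        return new HttpClient(handler);
    }
}
```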
If you are worried about someone storing the traffic and later using Fiddler to decrypt it, you can stop worrying.

How do I prevent an app from using my api key?

My organization has a Win32 application written in the "fat client" style. I am writing a C# client/server solution that will replace this Win32 application. I am using ASP.NET MVC for the server, and the client is a WPF application. I did my own custom implementation of the OAuth 2 spec. I am planning on creating a RESTful API, and I want not only my client to use it, but also to allow 3rd parties to use it.
Every app will have an API key issued to it, including the official client, but the official client's API key should be allowed additional API scopes (permissions) that 3rd-party users aren't allowed to use. It is pretty obvious how to solve this, but if you consider that not everyone plays nicely, you have to ask: "What would stop someone from just pretending they are the official client and using its API key?" Communication will be encrypted, but the server is not in the cloud or anything like that where we could control it. Our customers install the servers on their own machines, and they will more than likely have access to the server application's SSL cert. Once you have that, you can easily write an app that runs on our customer's machine, gleans the API key and secret from the official client app, and uses that info to request tokens from the server as if it were the official client.
I am planning on self-signing the default key the server uses, and I could try to hide it in the application, but that really is just obfuscation. Besides, I want to allow users to provide their own SSL certs so browser-based 3rd-party applications won't have issues with browsers complaining that they are trying to communicate over a self-signed SSL channel.
Is there anything I can do? Here are my choices as I see it:
1) I can set it up so that only SSL certs provided by us can be used, and we hide them on disk, encrypted with a secret that is obfuscated in the application code. We then just hope no one bothers to take the time to dig through our .NET assemblies to find the secret used to encrypt/decrypt the certs on disk.
2) We allow them to provide certs so that we don't need to be involved in that process at all when they want to use a signed cert (we don't want to be in the cert business). Now we can't even hide behind obfuscation, so if someone wants it, the official client's API key and secret are easily obtainable.
Neither seems very desirable to me. Option 1 forces us to request additional funds from customers and manage SSL certs when self-signed doesn't work for them, and in the end, if someone really wants the certs, they can still take the time to get them. Option 2 just makes it super easy to steal the official client's secret.
Reasons to want to limit unofficial Apps:
1. Discourage clones
A. Tell people not to do it. Have a lawyer send cease-and-desist letters to authors of popular clone apps (and to anyone helping distribute them). Intermittently download them and alter your client/server code so that the popular clones break. For added discouragement, temporarily ban any users who used a clone. Authors will mostly give up on cloning your app; temporarily banning users will kill their install base. This is not great for your reputation, though.
2. Prevent unauthorized behavior.
A. Any behavior allowed for the official app should be allowed for a custom app. Whatever scenario you are worried about, block it server-side so that neither app can do it.
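Enforcing scopes on the server, rather than trusting the client, could be sketched like this (the scope names and structure are hypothetical; in a real OAuth 2 implementation the granted scopes would come from the issued token):

```csharp
using System.Collections.Generic;

// Sketch: the server, not the client, decides what each API key may do.
public class ScopeEnforcer
{
    // Maps each issued API key to the scopes granted to it.
    private readonly Dictionary<string, HashSet<string>> _grants =
        new Dictionary<string, HashSet<string>>();

    public void Grant(string apiKey, params string[] scopes)
    {
        _grants[apiKey] = new HashSet<string>(scopes);
    }

    // Called before every protected operation. Even a client that stole the
    // official API key gains nothing beyond what the server granted that key.
    public bool IsAllowed(string apiKey, string requiredScope)
    {
        HashSet<string> scopes;
        return _grants.TryGetValue(apiKey, out scopes) &&
               scopes.Contains(requiredScope);
    }
}
```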
You can try to hide credentials (code obfuscation, hidden credentials, etc.), but this only raises the cost/difficulty. That is often enough to discourage code theft (there is no need to make code theft impossible; it is sufficient to make it more difficult than writing the code by hand). However, users who want to use your API in unsupported ways can work around it.
The answer is simple: each instance of your app should have its own unique key, effectively a per-user sign-up. You then ban users who infringe your rules, in this case by signing in with a non-authorized client. This should be pretty easy to detect if you push updates more frequently than it would be cost-effective to reverse engineer them, much like PunkBuster or other anti-cheating tech.

Password is in clear text after implementing SSL [duplicate]

I asked a question here a while back on how to hide my HTTP request calls and make them more secure in my application. I did not want people to use Fiddler 2 to see the call and set up an AutoResponder. Everyone told me to go SSL and the calls would be hidden and the information kept safe.
I bought and installed an SSL certificate and got everything set up. I booted up Fiddler 2 and ran a test application that connects to an HTTPS web service as well as to an HTTPS PHP script.
Fiddler 2 was able to not only detect both requests, but decrypt them as well! I was able to see all information going back and forth, which brings me to my question.
What is the point of having SSL if it makes zero difference to security? With or without SSL I can see all information going back and forth and STILL set up an AutoResponder.
Is there something in .NET I am missing to better hide my calls going over SSL?
EDIT
I am adding a new part to this question due to some of the responses I have received. What if an app connects to a web service to log in? The app sends the web service a username and a password; the web service then sends data back to the app saying whether the login data is good or bad. Even over SSL, a person using Fiddler 2 could just set up an AutoResponder, and the application is then "cracked". I understand how it can be useful to see the data when debugging, but my question is what exactly one should do to make sure the app's SSL connection really goes to the server it requested. Basically, there must be no man in the middle.
This is covered here: http://www.fiddlerbook.com/fiddler/help/httpsdecryption.asp
Fiddler2 relies on a "man-in-the-middle" approach to HTTPS interception. To your web browser, Fiddler2 claims to be the secure web server, and to the web server, Fiddler2 mimics the web browser. In order to pretend to be the web server, Fiddler2 dynamically generates a HTTPS certificate.
Essentially, you manually trust whatever certificate Fiddler provides; the same would be true if you manually accepted a certificate from a random person that does not match the domain name.
EDIT:
There are ways to prevent a Fiddler/man-in-the-middle attack: in a custom application using SSL, one can require particular certificates to be used for communication. Browsers, by contrast, have UI to notify the user of a certificate mismatch, but they eventually allow such communication.
As a publicly available sample of explicit certificate requirements, you can try to use Azure services (e.g., with the PowerShell tools for Azure) and sniff the traffic with Fiddler. It fails due to the explicit certificate requirement.
You could set up your web service to require a client-side certificate for SSL authentication, in addition to the server-side one. This way Fiddler wouldn't be able to connect to your service; only your application, which has the required certificate, would be able to connect.
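Attaching a client-side certificate for 2-way SSL could look like the sketch below. How the certificate is obtained (a PFX file, the certificate store, a smart card) is up to you; it is passed in here to keep the example self-contained:

```csharp
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;

// Sketch: present a client certificate so the server can require 2-way SSL.
public static class TwoWaySslClient
{
    public static HttpClient Create(X509Certificate2 clientCert)
    {
        var handler = new HttpClientHandler();
        // A proxy like Fiddler cannot present this certificate, so a
        // server-side requirement shuts it out of the TLS handshake.
        handler.ClientCertificates.Add(clientCert);
        return new HttpClient(handler);
    }
}
```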
Of course, then you have the problem of how to protect the certificate within the app, but you've got that problem now with your username & password anyway. Someone who really wants to crack your app could have a go with Reflector, or even do a memory search for the private key associated with the client-side cert.
There's no real way to make this 100% bullet proof. It's the same problem the movie industry has with securing DVD content. If you've got software capable of decrypting the DVD and playing back the content, then someone can do a memory dump while that software is in action and find the decryption key.
The point of SSL/TLS in general is that the occasional eavesdropper with Wireshark isn't able to see your payloads. Fiddler/Burp means that someone interacted with the system. Yes, it is a very simple interaction, but it does require one of the systems to be compromised.
If you want to enhance security by rendering these MITM programs useless at such a basic level, you would require client certificate authentication (2-way SSL) and pin both the server and client certificates (i.e., require that only a particular certificate is valid for the comms). You would also encrypt the payloads transferred on the wire with the public keys of each party, and ensure that the private keys reside only on the systems they belong to. This way, even if one party (Bob) is compromised, the attacker can only see what is sent to Bob, not what Bob sent to Alice.
You would then take the encrypted payloads and sign the data with a verifiable certificate to ensure the data has not been tampered with (there is a lot of debate on whether to encrypt first or sign first, btw).
On top of that, you can hash the signature using several passes of something like SHA-2 to ensure the signature is 'as-sent' (although this is largely an obscurity step).
This would get you about as far, security-wise, as is reasonably achievable when you do not control one of the communicating systems.
As others mentioned, if an attacker controls the system, they control the RAM and can modify all method calls in memory.

Read certificates from a PKI card

How can I read certificates from a PKI card?
I tried finding an answer on the Internet, but I didn't get any good results.
Any ideas how to get the certs from a PKI card?
I need to sign some forms with a certificate key. All this will happen in a web app.
Later...
I haven't tried much because I don't have a starting point. I've just learned that all of the certs are read by Windows when you insert the card, so I think I can get them using X509Store. I'll try it and report back, but I'm still in need of some help.
As soon as you plug in your smart card, the certificates are copied to your local personal certificate store. You can use "certmgr.msc" (Run → Enter) to have a look at these certs.
You can access the certificates, as well as the associated private keys, with X509Store. But of course you can only do this locally on your machine, for security reasons. Imagine if every website had access to your private keys...
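Enumerating that personal store with X509Store could be sketched like this (the filtering is up to you; smart-card certificates report HasPrivateKey = true even though the key itself stays on the card):

```csharp
using System.Collections.Generic;
using System.Security.Cryptography.X509Certificates;

// Sketch: read the CurrentUser\My store, where Windows propagates
// smart-card certificates when the card is inserted.
public static class CardCertReader
{
    public static List<X509Certificate2> ReadPersonalCerts()
    {
        var result = new List<X509Certificate2>();
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            foreach (X509Certificate2 cert in store.Certificates)
            {
                // For card certs, private-key operations are forwarded to
                // the card; the key never leaves it.
                result.Add(cert);
            }
        }
        finally
        {
            store.Close();
        }
        return result;
    }
}
```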
How to Sign and Verify the signature with .NET and a certificate (C#)
If you are using CAPICOM, you will still need to execute code on the local machine (JavaScript).
You will find the following statement here:
[CAPICOM is a 32-bit only component that is available for use in the following operating systems: Windows Server 2008, Windows Vista, Windows XP. Instead, use the .NET Framework to implement security features. For more information, see the alternatives listed below.]
Important None of the alternatives to CAPICOM offer a solution for scripts; therefore, you must write your own ActiveX control. For more information, see ActiveX Controls.
This indicates that the .NET classes are not a "full" replacement for CAPICOM, so you can't use the "X509" classes from JavaScript.
If you want to use a client side private certificate to sign some data (assume a hash), you need to run code on the client. Here are some ideas what you could do:
Write an ActiveX control
Write browser Plugin(s)
Write an application that can be invoked via a custom URI scheme (I can't post another link; google it and you will find it).
Of course, you need to retrieve the data on the server side, and for the last solution you may need some kind of web service.
Conclusion
Don't be confused about the private and public keys of a certificate.
There are scenarios where you send a certificate to the server, e.g., for authentication.
But then it's your public key you send. You should never send your private key around (although technically it is possible).

How to detect fake trusted personal/root SSL certificates of target domain

Someone can add a fake SSL cert to the trusted certificates collection. How can I detect these fakes? How can I verify that a cert is official? Is there any list to compare against?
I've added a screenshot of a legal one & a fake one (created by Fiddler):
ADDITION:
To ensure my sensitive SSL communication is secure, I have to use certificates from commonly trusted authorities. If someone installed Fiddler, or malicious software installed its own cert, then I need to cancel any communication attempt and raise an alert in my app on the client's PC.
ADDITION 2
I only care about communication between the end user's PC and Google Docs. We know the Google Docs web site's public certificate is issued by "Google Internet Authority". I think I have to compare it with the certificate installed for Google Docs on the user's PC.
Last word:
I simply need to compare the certificate in use for the target site against the target site's original SSL certificate just before any SSL communication.
More info: This link
There is no single "official list". You must compare your list to someone else's list.
Windows has its own list which is used by Internet Explorer.
Firefox maintains a separate list of its own.
I don't know about Chrome, Safari or Opera.
But the long and short of it is that you need to compare your list to other lists which you know are correct, for example from a colleague's computer.
You can't. If the user added it, that means she trusts it. Also, a certificate can be valid without being part of any "official" list.
Each application maintains its own list of trusted root certificate authorities (or relies on another application's list). Windows has its own list, OpenSSL has its own list, and all major browsers have their own lists (Chrome uses, or can use, the Windows one, if memory serves).
If you create a Windows application, your best bet is to rely on the system list, as it is updated on a regular basis (if you carry your own list, you have to maintain it as well).
One thing to pay attention to is that a certificate issued by a trusted authority does not automatically mean a trusted certificate. Some certificates are issued via hacking (this happened with at least two intermediate CAs in recent years), and private keys for others get leaked, which creates the need to revoke such certificates. Revocation status can be checked by inspecting CRLs (revocation lists published by CAs) or by using OCSP (Online Certificate Status Protocol).
You need to use them no matter where you get your list of trusted CAs.
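In .NET, chain building against the system's trusted list, with revocation checking turned on, could be sketched like this:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

// Sketch: validate a certificate against the system trust list,
// with CRL/OCSP revocation checking enabled.
public static class ChainValidator
{
    public static bool Validate(X509Certificate2 cert)
    {
        var chain = new X509Chain();
        chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;  // CRL/OCSP
        chain.ChainPolicy.RevocationFlag = X509RevocationFlag.EntireChain;

        bool ok = chain.Build(cert);
        if (!ok)
        {
            // Reasons include UntrustedRoot, Revoked, RevocationStatusUnknown...
            foreach (X509ChainStatus status in chain.ChainStatus)
                Console.WriteLine("{0}: {1}", status.Status,
                                  status.StatusInformation);
        }
        return ok;
    }
}
```

A Fiddler-generated certificate chains up to Fiddler's own root; unless the user has installed that root into the trusted store, Build fails with UntrustedRoot, which is exactly the mismatch you want to detect.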