Kind of a quick question here; I couldn't find the answer online, and that could well be because it's a strange question.
I want to send data (basically strings/JSON) through a port that I have listening on a Linux server from a C# application. When the application talks to the server for the first time it must pass the correct credentials which are obviously meant to be secret. So when I compile this application are these credentials (username/password combo) safe? I guess not because I have heard of decompilers and the like. So in what way would you make this safe? Surely any application that has been compiled and connects to a public database or something similar must have the details stored in the executable somewhere.
Am I missing something blatantly obvious here, sorry for my ignorance!
No, they're not.
Broadly put, you can't ever assume that any data you ship to an end user in any form will be safe from prying eyes of some sort.
What are you trying to do in the first place? What are the security requirements?
For the application you have described, you should probably put the credentials in a file and set the permissions such that only the immediate owner can read the file. I would never recommend putting sensitive information into a file that anyone else can read, such as your application.
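As a minimal sketch of that approach (the file path is a placeholder; on the Linux side you would `chmod 600` the file so only the service account can read it):

```csharp
using System.IO;

// Credentials live outside the binary, in a file whose permissions
// restrict reading to the owning account (e.g. chmod 600 on Linux).
string[] lines = File.ReadAllLines("/etc/myapp/credentials");
string username = lines[0];
string password = lines[1];
// ... use the credentials when connecting to the server ...
```

The secret is still readable to anyone who gains that account's privileges, but it no longer ships inside the executable, and rotating it doesn't require a rebuild.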
No. There are reflection tools that can show the code in your application. You can use tools that "scramble" your code (through obfuscation), and that can be somewhat effective, although not 100% secure.
ASP.NET 2.0 introduced a new feature, called protected configuration, that enables you to encrypt sensitive information in a configuration file. Although primarily designed for ASP.NET, protected configuration can also be used to encrypt configuration file sections in Windows applications. For a detailed description of the protected configuration capabilities, see Encrypting Configuration Information Using Protected Configuration.
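As a sketch of protected configuration in a Windows application, assuming a .NET Framework app with a `connectionStrings` section in its config file (the section and provider names below are the standard ones, but verify against your framework version):

```csharp
using System;
using System.Configuration; // add a reference to System.Configuration.dll

class ConfigProtector
{
    // Encrypt the connectionStrings section of the app's own config file
    // using the DPAPI provider, so the secrets are no longer stored in
    // plain text on disk.
    static void Main()
    {
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

        ConfigurationSection section = config.GetSection("connectionStrings");

        if (section != null && !section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection(
                "DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }
    }
}
```

DPAPI ties decryption to the machine (or user) that encrypted the section; runtime reads still work transparently, but a copied config file is useless elsewhere. Note this does not defend against someone who can run code on that same machine as that same user.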
Related
I have an assembly that does some "magic" calculation, conversion and cryptography stuff that should be used only from other assemblies of mine. Note that these assemblies have to be installed on the customer machine, and so a malicious attacker has local access to them.
Is there a secure way to achieve this? I think I've looked at almost everything: CAS, strong name signing, Windows SRP, obfuscation, the InternalsVisibleTo attribute, internalizing with ILMerge, and so on. The problem with these approaches is that they aren't completely secure if an attacker has local access to my "magic assembly".
Do you have any advice for me? Is there anything else to do from Microsoft side?
Thanks in advance.
Essentially, no. If code on the local machine can be run by one process, it can be run by any process with access to the assemblies' data. There are various ways to make it difficult, but not impossible. If an attacker has access to the machine and time, you can't prevent access.
There is one way to protect yourself: don't have the code on the local machine.
- provide a remote service and call it
- provide the software on a hardware dongle
- use hardware encryption (like a Trusted Platform Module)
Specifically for SQL connections, you could use an externally trusted connection, like Active Directory or Kerberos. Most enterprise SQL Server installations will support this. It will require your users to 'log in' to your app, but .NET supports protecting credentials in RAM.
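A trusted connection means no password ever appears in the connection string or the compiled binary; the server and database names below are placeholders:

```csharp
using System.Data.SqlClient; // .NET Framework, or the System.Data.SqlClient package

// "Integrated Security=SSPI" makes the driver authenticate with the
// caller's Windows (Active Directory/Kerberos) identity -- there is
// no stored secret for a decompiler to find.
var connectionString =
    "Server=sql.example.local;Database=AppDb;Integrated Security=SSPI;";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    // ... issue commands as the authenticated domain user ...
}
```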
You can try to inspect the call stack (see How can I find the method that called the current method?) and limit calls to the assemblies you want.
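A sketch of that caller check, using the calling assembly's strong-name token (the token bytes below are placeholders for your own key's token). Bear in mind this is only a speed bump: an attacker with local access can patch the check out.

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Runtime.CompilerServices;

public static class MagicCalculator
{
    // Public key token of our own strong-name key (placeholder bytes).
    private static readonly byte[] TrustedToken =
        { 0xb0, 0x3f, 0x5f, 0x7f, 0x11, 0xd5, 0x0a, 0x3a };

    // NoInlining matters: if the JIT inlines this method into the caller,
    // GetCallingAssembly would report the wrong assembly.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static int MagicCalculation(int input)
    {
        byte[] token =
            Assembly.GetCallingAssembly().GetName().GetPublicKeyToken();
        if (token == null || !token.SequenceEqual(TrustedToken))
            throw new UnauthorizedAccessException("Untrusted calling assembly.");

        return input * 2; // the "magic" goes here
    }
}
```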
I'm not a security guy, but I think this scenario is a bit strange, and the solution to your problem may not be the thing you're currently asking for. Did you define what vulnerable data your client will have?
I am really sad because a few days ago we launched our software developed in .Net 4.0 (Desktop application). After 3 days, its crack was available on the internet. We tried to protect the software from this but somehow people got away cracking it.
Here is the scenario:
When the application launches the first time it communicates with the web server and checks the credentials passed by the user. If the credentials are correct, the software saves the values in the Registry, sends the MachineID back to the server and stores it in the database.
Now, the hacker has replaced the server communication with a "return true;" statement (I checked that with Telerik JustDecompile), and he has uploaded the cracked software to the internet.
Now, following are my questions:
1- How to make sure that a .NET application will not get cracked?
2- The hacker now knows my code since he has done the modification. What steps should I take?
3- I read on the internet about obfuscators. But the hacker knows my code; what should I do?
4- Any other pro tips that I can use to avoid getting the software cracked?
5- I am not sure, but can these reflector tools also decompile the App.config with sensitive data?
1- How to make sure that a .NET application will not get cracked?
If a computer can run your code, and the hacker can run his own code at a higher privilege level than you, there is nothing that can 100% prevent your app from being cracked. Even if they just have access to the executable but not the target platform, they can still step through it, mimic what the target platform would do, and figure out how the protection works.
2- The hacker now knows my code since he has done the modification. What steps should I take?
Totally rewrite the authentication portion so they have to start from scratch, but they will get it again; it is just a matter of time.
3- I read on the internet about obfuscators. But the hacker knows my code; what should I do?
The genie is out of the bottle now that they have the non-obfuscated code. There is not much you can do unless you drastically rewrite the software so they have to start from scratch. An obfuscator will not stop a determined attacker; the only thing that can is keeping the binary out of their hands.
4- Any other pro tips that I can use to avoid getting the software cracked?
The only copy protection I have seen delay crackers for any real length of time is what Ubisoft did with Assassin's Creed: Brotherhood. They encrypted the levels on the game disk, and the game had to download the decryption key from the internet as it was needed (this is the "keep the binary out of their hands" approach). But even that did not work forever; eventually the hackers got those levels decrypted and the game was fully cracked. This approach is just the one I saw take the longest to get around without legal involvement (see point 2 at the bottom).
5- I am not sure, but can these reflector tools also decompile the App.config with sensitive data?
All the reflector software needs to do is look for the section that loads App.config and read what the defaults are. There is no secure place to store information on a computer you do not have full control over. If it is on the computer, it can be read. If it can be read, it can be reverse engineered.
The only real solution I can see to prevent piracy is one of two options.
The person never gets your app; it is streamed from a server under your control, and they never get to see the binary. The only thing you send them is the information they need to drive the UI. This is the approach all MMOs work on. People can reverse engineer what you are sending to the UI and mimic the logic that is going on on your servers, but they will never be able to outright see what it is doing, and if your software is complex enough it may not be feasible for the attacker to recreate the server-side code. The downside to this approach is that you will need to host servers for your users to connect to; this is a recurring cost you will need a way to recoup. This method is often called a "Rich Client" or "Thin Client" depending on how much processing is done client side versus server side. See Chapter 22 of "Microsoft Application Architecture Guide, 2nd Edition"; specifically, I am describing what is shown in figures 4 and 5.
The second option is to have whoever you sell your software to sign a legal contract not to distribute it (not an EULA, an actual contract that must be physically signed by the client). In that contract, have large fines applied to the person who leaks the software, then riddle your program with fingerprints that are unique to the person who buys it, so that when the program is leaked you can see who did it. (This is the method the vendor Hex-Rays uses for its disassembler IDA; a quick Google search could not turn up any cracked versions newer than 6.1, and they are on 6.3.) This method will not stop piracy, but it may discourage the copy from being leaked in the first place, and it lets you recover some of the costs associated with the leak. One issue is that you will need to put in a lot of fingerprints, and they will need to be subtle: if an attacker can get two copies of the program and compare the files, he will be able to tell what the identifying information is and put whatever he wants there, so you can't tell whose copy leaked. The only way around this is to put in a lot of red herrings that can't just be stripped out or randomized, and to make the identifying code non-critical to running the software; if they don't have to work to crack it, they are more likely to leave it in.
Update: After revisiting this answer to link to it for another question, I thought of an easy way of implementing solution #2.
All you need to do is run your code through an obfuscator and let it rename your classes differently for every person you sell your software to (I would still make them sign a license agreement, not just click through an EULA, so you can enforce the next part). You then keep a database of the obfuscation mappings; when you see a leaked copy on the internet, you just need to find one class anywhere in the project, look it up in your database, and you will know who leaked it and who to go after for legal damages.
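The lookup side of that idea can be sketched in a few lines; the obfuscated names and customer IDs below are hypothetical, standing in for whatever your obfuscator's renaming map records at build time:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical build-time record: for each customer build, the obfuscator's
// renaming map is saved, keyed by the obfuscated class name.
var obfuscationMap = new Dictionary<string, string>
{
    ["Xq7pA"] = "customer-0042",  // same logical class, customer 0042's build
    ["Jm2kZ"] = "customer-0117",  // same logical class, customer 0117's build
};

// A leaked binary surfaces: decompile it, grab any class name, look it up.
string leakedClassName = "Jm2kZ";
string leaker = obfuscationMap.TryGetValue(leakedClassName, out var customer)
    ? customer
    : "unknown build";
Console.WriteLine($"Leaked build belongs to: {leaker}"); // → customer-0117
```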
1: You can't. You can obfuscate, but the only way of preventing this is: don't give anyone the exe. Look how much game companies spend on this; they haven't solved it either.
2: obfuscation would help a little, although it is an arms race between obfuscators and de-obfuscators
3: too late to go back and undo that, but the obfuscation will slow them down a bit in future
5: app.config is usually very readable; there's not much you can do here. Encrypting will only slow them down a bit, since the keys are in your app and therefore obtainable.
As others have said there really isn't anything you can do against a determined cracker if they have access to your code. Obfuscation will provide some protection against a lazy cracker. Dotfuscator is built into VS you can give it a try. Keep in mind that there is a real cost to obfuscation. It will make it very difficult to debug issues from stack traces that your (paying) customers send you.
The best answer is one you will have to accept. You can't. Just focus on giving your users a great user experience, and make licensing very easy. The possibility that your application can be cracked does not mean that choosing to build a desktop application was a bad idea. Pirates will be pirates and honest customers will be honest customers.
Apparently there is enough commercial or intellectual value cracking your app that someone with reasonable skills tried it almost right away.
The only way you will win that war is to use commercial software protection packages.
If you try to implement copy protection yourself, you will be an easy target to hack again.
If you write a business application you would not also write the database engine that stores the data. You should also not write the crack prevention code for your application. That is not what solves your customer's problem, and it takes a tremendous skill set to do it right.
What you can do, in addition to code obfuscation, is add a mechanism of code decryption based on a hardware ID. Consider the following scenario: the client sends their HwID to your server, you identify the copy/owner/installation number/etc. from that HwID, and you reply with a decryption key based on that HwID for that specific binary (with the fingerprints mentioned before). Cracking then becomes harder, since full functionality requires valid access to your server; otherwise they can't use the software.
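The server-side key derivation could look roughly like this; how the hardware ID is gathered on the client (MAC address, volume serial, CPU id) is left as an assumption, and the secret never leaves the server:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class KeyServer
{
    // Derive a per-machine decryption key by keying an HMAC with a
    // server-side secret. The same HwID always yields the same key,
    // so the server can regenerate it on demand without storing it.
    public static byte[] DeriveMachineKey(string hardwareId, byte[] serverSecret)
    {
        using (var hmac = new HMACSHA256(serverSecret))
        {
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(hardwareId));
        }
    }
}
```

A copied install on a different machine produces a different HwID, gets a different key, and therefore cannot decrypt the binary it was given.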
Cheers,
I'm new to web programming and have a question about code behind in ASP.NET C#. How safe is it from someone seeing what's in it? The reason I ask is the program I'm linking this website to requires me to create an object that takes in my admin credentials (It does this in the background thousands of times or I would just prompt for creds). It uses the credentials to create things dynamically. I'm 99.99% sure this is highly unsafe to hard code my credentials into the page but I figured I would ask.
The code behind files and raw aspx files are protected from being retrieved by the web server, so as long as you control console and file share access to the server you are relatively safe.
Still, it is not considered really safe. You should set up the application pool of the site to run under a specific account and then give that account the necessary rights. Having services using ordinary user accounts is considered bad practice. Each service should have its own account, with least possible rights.
ASP.NET pages are compiled before sending the page over HTTP. This is secure. But if the user can access the file system, you have another problem on your hands.
You should put your credentials in your web.config (or move them into separate files like AppSettings.config or ConnectionStrings.config). The server should never serve these files.
This might be helpful:
http://msdn.microsoft.com/en-us/library/4c2kcht0(v=VS.100).aspx
This tells you how you can go one step further and encrypt these files so they do not store plain-text passwords:
http://weblogs.asp.net/scottgu/archive/2006/01/09/434893.aspx
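Either way, reading the values from code looks the same, since decryption of a protected section is transparent to the reader. A minimal sketch (the key and connection-string names are placeholders):

```csharp
using System.Configuration;

// Works whether or not the section has been encrypted with
// aspnet_regiis -pef; the runtime decrypts it transparently.
string connStr =
    ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
string apiKey = ConfigurationManager.AppSettings["ServiceApiKey"];
```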
It is "safe". IIS (by default) does not serve up .cs files.
Another option is to precompile the site and then just drop the .aspx files on the web server.
Putting sensitive information into .cs files in ASP.NET is not risky by default, as ASP.NET does not give clients access to .cs files (unless you change that explicitly). However, be sure that when a server error occurs, custom error reporting does not send lines of code to the client. That output is extremely useful when debugging and extremely risky when you release to the public: anyone may be able to read your sensitive information if an exception is thrown near those lines.
There are various levels of "safe" here.
Yes, IIS is configured not to serve up .cs files or .config files. That said, there are attack vectors which have proven successful in getting IIS to deliver those files into evildoers' hands.
First off, I wouldn't deploy .cs files to the server. If possible, convert the web site to a web application and deploy it compiled. Of course, .NET code can be decompiled (and here); so you should also look into obfuscation. However, even obfuscated code can be decompiled; it's just generally harder to read. ;)
Note that each level isn't really "secure". It just makes it more difficult.
The real answer is to not store the credentials on the server at all and require them to be provided by the client over an encrypted transport. Certainly you could cache them in memory, but even that has proven insecure to those with physical access.
At the end of the day, ask yourself how valuable the keys are and how much money/time you can invest in securing the system. There's usually a balance somewhere.
I'm currently working on a Winforms app created by someone else. I've noticed that all the configurations are stored in the registy. This includes connection strings and so on.
Is this good or bad practice? If bad, then what is the better alternative?
A better option for you and the user is to use configuration files stored in the per-user application data directories. Look at the documentation for the System.Configuration namespace. Version 2.0 of the framework added a lot of functionality beyond the per-application config files.
I think a better option would be to store them in an app.config. This gives better visibility and frankly is easier to change.
If you want to hide your app settings, most users aren't savvy enough to go hunting through the registry for keys relevant to an application. Also, as other answers have pointed out, back in the days before XML configuration file standards, the registry was the recommended place.
The recommended option nowadays is an XML config file. Unlike the registry, it doesn't add data to a hive that's loaded at startup, so you're not contributing to the inflated-registry problem on a machine with a lot installed on it. It's also more easily changed by hand, though depending on where the file lives, your program may need elevated permissions to modify it programmatically.
If you kind of want to keep the data away from the casual user, a SQLite database is a relatively lightweight way to store small amounts of data, like user settings, in a manner that isn't easily changeable without access to SQLite. Just remember that if you can get in, so can others, no matter how hard that may be.
It's mostly an old practice from pre-.NET days (VB6 comes to mind), when there were no standard configuration files and Microsoft recommended storing configuration in the registry.
These days, the registry is a good place to store information that is used by several applications, but if this is not the case, you should prefer the application configuration file.
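A sketch of the per-user, per-application alternative to the registry, as suggested above; the company and app names are placeholders:

```csharp
using System;
using System.IO;

// Per-user settings file under the user's application-data directory.
string settingsDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "MyCompany", "MyApp");
Directory.CreateDirectory(settingsDir); // no-op if it already exists

string settingsFile = Path.Combine(settingsDir, "settings.xml");
File.WriteAllText(settingsFile, "<settings><theme>dark</theme></settings>");

string theme = File.ReadAllText(settingsFile); // read it back at startup
```

Each user gets an independent copy, no admin rights are needed to write it, and uninstalling leaves the registry untouched.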
I want to secure my assembly (dll) by binding it to a specific environment. Say I have a dll (BizLogic.dll), I want to make it available to my co-developers to use it within the organization. But I don't want others to use it outside my organization.
Is there any way to address this issue?
Thanks in Advance.
--
Mohammed.
What you're describing is not the problem that CAS was designed to solve. The .NET Code Access Security system was designed to protect benign users from hostile third party code. You are trying to do the opposite -- protect benign code from hostile users. If you give someone a hunk of code, they can do whatever they want to that code -- disassemble it, rewrite it, recompile it, whatever, and there's not much you can do technically to stop them.
Probably your best bet is to use other enforcement mechanisms, like make them sign a contract that says that they will not reverse-engineer or redistribute your code, and then sue them if they do. Or, simply don't give them the code in the first place. Make a web service and keep the code on your server, away from the people you don't trust.
What do you mean by outside your organization?
Nevertheless, did you consider signing your assembly?
Very little that will be actually effective. You can try the various license/key frameworks that are out there, but there are exactly zero that are 100% uncrackable.
- Create a domain user and give that user the only read permissions to a DB object.
- Set the password to never expire.
- Physically secure the password.
- The DLL must be able to access the DB object (via your new user) to proceed.
- Obfuscate your code.