Prevent method calls from unwanted assemblies - C#

I have an assembly that does some "magic" calculation, conversion and cryptography stuff that should be usable only from my other assemblies. Note that these assemblies have to be installed on the customer's machine, so a malicious attacker has local access to them.
Is there a secure way to achieve this? I think I've looked at almost everything: CAS, strong-name signing, Windows SRP, obfuscation, the InternalsVisibleTo attribute, internalizing with ILMerge, and so on. The problem with these approaches is that none of them is completely secure if an attacker has local access to my "magic assembly".
Do you have any advice for me? Is there anything else to do on Microsoft's side?
Thanks in advance.

Essentially, no. If code on the local machine can be run by one process, it can be run by any process with access to the assemblies' data. There are various ways to make it difficult, but not impossible. If an attacker has access to the machine and time, you can't prevent access.
There is one way to protect yourself: don't have the code on the local machine:
provide a remote service and call it
provide the software on a hardware dongle
use hardware encryption (like Trusted Platform Module)
Specifically for SQL connections, you could use an externally trusted connection -- like Active Directory or Kerberos. Most enterprise SQL Server deployments will support this. It will require your users to 'log in' to your app, but .NET supports protecting credentials in RAM.
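For instance (a minimal sketch; the SqlCredential usage and connection details are placeholders), SecureString keeps the password encrypted in process memory and lets you zero it deterministically when done:

```csharp
using System;
using System.Data.SqlClient;
using System.Security;

class Example
{
    static void Main()
    {
        // Build the password without ever holding it in a plain managed string.
        var password = new SecureString();
        foreach (char c in "s3cret") password.AppendChar(c); // in practice, read keystrokes one at a time
        password.MakeReadOnly(); // required by SqlCredential

        // The plain text never sits in an immutable managed string on the heap.
        var credential = new SqlCredential("appUser", password);
        using (var conn = new SqlConnection("Server=DBSERVER;Database=AppDb", credential))
        {
            conn.Open();
            // ... do work ...
        }

        password.Dispose(); // zeroes the underlying buffer
    }
}
```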

You can try to inspect the call stack (see How can I find the method that called the current method?) and limit calls to the assemblies you want.
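A minimal sketch of that idea (the token allow-list is hypothetical, and as the answer above notes, an attacker with local access can simply patch this check out, so it only raises the bar):

```csharp
using System;
using System.Diagnostics;

static class CallerGuard
{
    // Hypothetical allow-list of public key tokens of your own signed assemblies.
    static readonly string[] TrustedTokens = { "0123456789abcdef" };

    public static void DemandTrustedCaller()
    {
        // Frame 0 = this method, frame 1 = the guarded method, frame 2 = its caller.
        var caller = new StackTrace().GetFrame(2)?.GetMethod();
        var assembly = caller?.DeclaringType?.Assembly;
        byte[] token = assembly?.GetName().GetPublicKeyToken();
        string hex = token == null || token.Length == 0
            ? ""
            : BitConverter.ToString(token).Replace("-", "").ToLowerInvariant();

        if (Array.IndexOf(TrustedTokens, hex) < 0)
            throw new UnauthorizedAccessException("Caller assembly is not on the trusted list.");
    }
}
```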
I'm not a security guy, but I think this scenario is a bit strange, and the solution to your problem may not be the thing you're currently asking for. Did you define what vulnerable data your client will have?

Related

How do I protect OAuth keys from a user decompiling my project?

I am writing my first application to use OAuth. This is for a desktop application, not a website or a mobile device where it would be more difficult to access the binary, so I am concerned about how to protect my application key and secret. I feel it would be trivial to look at the compiled file and find the string that stores the key.
Am I overreacting, or is this a genuine problem (with a known solution) for desktop apps?
This project is being coded in Java but I am also a C# developer so any solutions for .NET would be appreciated too.
EDIT:
I know there is no perfect solution, I am just looking for mitigating solutions.
EDIT2: I know pretty much the only solution is to use some form of obfuscation. Are there any free providers for .NET and Java that will do string obfuscation?
There is no good or even half good way to protect keys embedded in a binary that untrusted users can access.
There are reasons to at least put in a minimum amount of effort to protect yourself.
The minimum amount of effort won't be effective. Even the maximum amount of effort won't be effective against a skilled reverse engineer / hacker with just a few hours of spare time.
If you don't want your OAuth keys to be hacked, don't put them in code that you distribute to untrusted users. Period.
Am I overreacting, or is this a genuine problem (with a known solution) for desktop apps?
It is a genuine problem with no known (effective) solution. Not in Java, not in C#, not in Perl, not in C, not in anything. Think of it as if it was a Law of Physics.
Your alternatives are:
Force your users to use a trusted platform that will only execute crypto-signed code. (Hint: this is most likely not practical for your application because current-generation PCs don't work this way. And even TPMs can be hacked given the right equipment.)
Turn your application into a service and run it on a machine / machines that you control access to. (Hint: it sounds like OAuth 2.0 might remove this requirement.)
Use some authentication mechanism that doesn't require permanent secret keys to be distributed.
Get your users to sign a legally binding contract to not reverse engineer your code, and sue them if they violate the contract. Figuring out which of your users has hacked your keys is left to your imagination ... (Hint: this won't stop hacking, but may allow you to recover damages, if the hacker has assets.)
By the way, argument by analogy is a clever rhetorical trick, but it is not logically sound. The observation that physical locks on front doors stop people stealing your stuff (to some degree) says nothing whatsoever about the technical feasibility of safely embedding private information in executables.
And ignoring the fact that argument by analogy is unsound, this particular analogy breaks down for the following reason. Physical locks are not impenetrable. The lock on your front door "works" because someone has to stand in front of your house visible from the road fiddling with your lock for a minute or so ... or banging it with a big hammer. Someone doing that is taking the risk that he / she will be observed, and the police will be called. Bank vaults "work" because the time required to penetrate them is a number of hours, and there are other alarms, security guards, etc. And so on. By contrast, a hacker can spend minutes, hours, even days trying to break your technical protection measures with effectively zero risk of being observed / detected doing it.
OAuth is not designed to be used in the situation you described, i.e. its purpose is not to authenticate a client device to a server or other device. It is designed to allow one server to delegate access to its resources to a user who has been authenticated by another server, which the first server trusts. The secrets involved are intended to be kept secure at the two servers.
I think you're trying to solve a different problem. If you're trying to find a way for the server to verify that it is only your client code that is accessing your server, you're up against a very big task.
Edit: Let me be clear; this is not a solution for safely storing your keys in a binary. As many others have mentioned, there is no way of doing this.
What I am describing is a method of mitigating some of the danger of doing so.
/Edit
This is only a partial solution, but it can work depending on your setup; it worked well for us in our university internal network.
The idea is that you make a service that can realistically be driven only by a program, not by a person.
For example, an authenticated WCF service that not only requires you to log in (using the credentials stored in your executable) but also requires you to pass a time-dependent value (like the codes generated by an online-banking token), or the value of a specific database row, or any number of other options.
The idea is simple, really: you cannot totally secure the credentials, but you can make them only part of the problem.
We did this for a Windows app that uses a student data store, which, as you can imagine, had to be pretty secure.
The idea was that we had a connection provider running as a service somewhere and we had a heartbeat system that generated a new key every 30 seconds or so.
The only way you could get the correct connection information was to authenticate with the connection provider and provide the current time-dependent heartbeat. It was complex enough that a human couldn't sit there, open a connection by hand and provide the correct results, but it was performant enough to work in our internal network.
Of course, someone could still disassemble your code, find your credentials, decipher your heartbeat and so on; but if someone is capable and prepared to go to those lengths, then the only way of securing your machine is unplugging it from the network!
Hope this inspires you to some sort of solution!
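For what it's worth, here is a minimal sketch of that heartbeat idea (all names are hypothetical, and this is just one way to derive a time-windowed token; both the client and the connection provider would run the same derivation against a shared secret):

```csharp
using System;
using System.Security.Cryptography;

static class Heartbeat
{
    // Both sides derive a short-lived token from a shared secret and the
    // current 30-second window, TOTP-style. The service hands out the real
    // connection details only when the presented token matches its own
    // computation for the current (or an adjacent) window.
    public static string CurrentToken(byte[] sharedSecret)
    {
        long window = DateTimeOffset.UtcNow.ToUnixTimeSeconds() / 30;
        using (var hmac = new HMACSHA256(sharedSecret))
        {
            byte[] hash = hmac.ComputeHash(BitConverter.GetBytes(window));
            return Convert.ToBase64String(hash, 0, 12); // truncated for transport
        }
    }
}
```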
Eazfuscator.NET and other .NET obfuscators do string encryption, which makes it slightly less trivial for someone to see your string literals in a decompiler like Reflector. I say slightly less trivial because the key used to encrypt the strings is still stored in your binary, so an attacker can still decrypt the strings quite easily (they just need to find the key, determine which crypto algorithm is used to encrypt the strings, and they have your string literals decrypted).
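To see why, here is roughly what obfuscator string encryption reduces to (a deliberately simplified sketch): the ciphertext and the key ship in the same binary, so recovery is mechanical:

```csharp
using System;

static class ObfuscatedStrings
{
    // "Encrypted" form of the literal "secret" ...
    static readonly byte[] Encrypted = { 0x08, 0x1E, 0x18, 0x09, 0x1E, 0x0F };
    // ... and the key, sitting right next to it in the same binary.
    const byte Key = 0x7B;

    static string Decrypt()
    {
        var chars = new char[Encrypted.Length];
        for (int i = 0; i < Encrypted.Length; i++)
            chars[i] = (char)(Encrypted[i] ^ Key);
        return new string(chars); // an attacker just calls or reimplements this
    }
}
```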
It doesn't matter what the platform is; what you are asking will always be impossible. Whatever you have done to need this feature is what's wrong with your application. You can never trust a client like this. Perhaps you are looking for (in)Security Through Obscurity.

C# program to remote login and change password of services

I am required to write a C# program, that, given a list of servers and services running on them, remotely logs into the servers, stops the service, changes the password associated with that service and restarts the service.
I am not entirely sure if it is even possible, but I would like to believe it is. Any pointers as to where / what I should be looking at to get started?
PS - I am not limited to C#. If there is another language that would make this task easier, I am open to suggestions.
You may use WMI to accomplish all of the operations you mentioned. WMI is exposed through the System.Management namespace, and there are plenty of examples out there; just google C# + WMI.
Another option is the ServiceController class, which you can also use to connect to services remotely, but I am not sure whether you can change the credentials of the service (its identity) with it.
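For the stop/start part, a minimal ServiceController sketch (server and service names are placeholders):

```csharp
using System;
using System.ServiceProcess; // reference System.ServiceProcess.dll

class StopStart
{
    static void Main()
    {
        // ServiceController can stop/start a service on a remote machine,
        // but it exposes no API for changing the service's logon password.
        using (var sc = new ServiceController("MyService", "SERVER01"))
        {
            sc.Stop();
            sc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
            // ...change the password via WMI or ChangeServiceConfig (see below)...
            sc.Start();
        }
    }
}
```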
A sequence of OpenSCManager, OpenService, ChangeServiceConfig can be used to modify the password or anything else you want to modify, given sufficient access permissions on the target machines.
I imagine native code is just as easy as wrapping this in C# and using P/Invoke, but it might be worth your while to do it that way depending on how you have to handle the target server list.
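If you do go the P/Invoke route, the core of it looks something like this (a sketch with error handling trimmed; SERVICE_NO_CHANGE and null leave a setting untouched):

```csharp
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class ServicePassword
{
    const uint SC_MANAGER_CONNECT = 0x0001;
    const uint SERVICE_CHANGE_CONFIG = 0x0002;
    const uint SERVICE_NO_CHANGE = 0xFFFFFFFF;

    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr OpenSCManager(string machineName, string database, uint access);

    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr OpenService(IntPtr scm, string serviceName, uint access);

    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool ChangeServiceConfig(
        IntPtr service, uint serviceType, uint startType, uint errorControl,
        string binaryPath, string loadOrderGroup, IntPtr tagId, string dependencies,
        string startName, string password, string displayName);

    [DllImport("advapi32.dll")]
    static extern bool CloseServiceHandle(IntPtr handle);

    public static void SetPassword(string machine, string service, string account, string newPassword)
    {
        IntPtr scm = OpenSCManager(machine, null, SC_MANAGER_CONNECT);
        if (scm == IntPtr.Zero) throw new Win32Exception();
        try
        {
            IntPtr svc = OpenService(scm, service, SERVICE_CHANGE_CONFIG);
            if (svc == IntPtr.Zero) throw new Win32Exception();
            try
            {
                // Change only the logon account and password; leave everything else as-is.
                if (!ChangeServiceConfig(svc, SERVICE_NO_CHANGE, SERVICE_NO_CHANGE,
                        SERVICE_NO_CHANGE, null, null, IntPtr.Zero, null,
                        account, newPassword, null))
                    throw new Win32Exception();
            }
            finally { CloseServiceHandle(svc); }
        }
        finally { CloseServiceHandle(scm); }
    }
}
```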
EDIT:
If you are using WMI per the other answer, you will need to use the Win32_Service class's Change method.
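A sketch of the WMI version (server, service, and account names are placeholders; add a reference to System.Management.dll):

```csharp
using System;
using System.Management;

class ChangeServicePassword
{
    static void Main()
    {
        var scope = new ManagementScope(@"\\SERVER01\root\cimv2");
        scope.Connect(); // uses your current credentials; see ConnectionOptions to pass others

        using (var svc = new ManagementObject(
            scope, new ManagementPath("Win32_Service.Name='MyService'"), null))
        {
            svc.InvokeMethod("StopService", null);

            // Win32_Service.Change takes 11 positional arguments; null means
            // "leave unchanged". StartName is index 6, StartPassword index 7.
            var args = new object[11];
            args[6] = @"DOMAIN\svcAccount";
            args[7] = "N3wP@ssw0rd";
            uint result = (uint)svc.InvokeMethod("Change", args);
            if (result != 0)
                throw new InvalidOperationException("Change failed with code " + result);

            svc.InvokeMethod("StartService", null);
        }
    }
}
```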

Is it necessary to learn about Code Access Security (CAS)?

Is it necessary to learn about Code Access Security (CAS) for desktop application development in C#?
That's a pretty broad question to ask, and the answer depends on a number of things. The two most important factors, however, are your target environment and your method of deployment.
Most consumer software is installed with an installer (MSI) and gets full trust on the target machine. If this is your environment and method of delivery, you'll likely never need to learn or master Code Access Security. On the other hand, enterprise customers generally want more control over what software can and can't do. Code Access Security provides IT with the ability to lock down applications and control what they can do on the machines they're installed on. So if you're building for the enterprise, understanding CAS may be a requirement.
Regardless of your target market, how you deploy your application may require you to learn about CAS. XBAP applications are by default NOT full trust and require significant steps to elevate to full trust. ClickOnce-deployed applications are also not full trust but can be elevated to full trust more easily. So if you plan to deploy software using either of these methods, you'll likely want to understand CAS.
And finally, Silverlight as a platform by definition is not full-trust. In fact it can never be full-trust. This is not a CAS issue because no depth of understanding CAS will help you overcome the fact that Silverlight does not include code required to perform full-trust activities. The reason I include Silverlight here, however, is that a good understanding of CAS might come in handy when understanding some of the security limitations that exist in the platform by design.
Hope that helps.
Yes, if you want to get an MCPD. In the real world I have never needed it. I write applications for the government, and while they are pretty tight on security, they have never requested it.
It is not essential, but it certainly helps to make your application more secure. Declarative security attributes on methods make your intentions clear.
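For example, a declarative demand on a method looks like this (a sketch; note this is the pre-.NET 4 CAS model, and the path is a placeholder):

```csharp
using System.Security.Permissions;

public class ReportReader
{
    // The runtime demands read access to this path from every caller on the
    // stack before the method body runs; otherwise a SecurityException is thrown.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\ReportData\")]
    public void LoadReports()
    {
        // ... read report files ...
    }
}
```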
Ugh, it was a nice idea (I guess), but in real life CAS only rears its ugly head when you try to deploy or read a file off a network drive. It's not difficult to 'learn', but I wouldn't dwell on it.
Desktop applications are considered "full trust"; you will never need CAS for full-trust applications.
I bought a book on it shortly after .NET 1.0 came out, I'll never get the time I spent reading it back.
I have never actually run across a situation that required code access security. It is a definite requirement for getting an MCPD or MCSD (or whatever the new cert is), but I think a better idea would be to understand secure coding practices (configuration encryption, dealing with user input, etc.) before going down the route of code access security.

Securing .net Assemblies

I want to secure my assembly (dll) by binding it to a specific environment. Say I have a dll (BizLogic.dll) that I want to make available to my co-developers within the organization, but I don't want others to use it outside my organization.
Is there any way to address this issue?
Thanks in Advance.
--
Mohammed.
What you're describing is not the problem that CAS was designed to solve. The .NET Code Access Security system was designed to protect benign users from hostile third party code. You are trying to do the opposite -- protect benign code from hostile users. If you give someone a hunk of code, they can do whatever they want to that code -- disassemble it, rewrite it, recompile it, whatever, and there's not much you can do technically to stop them.
Probably your best bet is to use other enforcement mechanisms, like make them sign a contract that says that they will not reverse-engineer or redistribute your code, and then sue them if they do. Or, simply don't give them the code in the first place. Make a web service and keep the code on your server, away from the people you don't trust.
What do you mean by outside your organization?
Nevertheless, did you consider signing your assembly?
Very little that will be actually effective. You can try the various license/key frameworks that are out there, but there are exactly zero that are 100% uncrackable.
Create a domain user and make it the only account with read permission on a particular db object.
Set password never expires.
Physically secure the password.
The dll must be able to access the db object (via your new user) to proceed; see the sketch after this list.
Obfuscate your code.
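A minimal sketch of that database gate (server, database, and object names are placeholders):

```csharp
using System.Data.SqlClient;

static class LicenseGate
{
    // The dll refuses to proceed unless it can read the guard object as the
    // locked-down domain account (Integrated Security), which only exists
    // inside the organization's domain.
    public static bool CanProceed()
    {
        try
        {
            using (var conn = new SqlConnection(
                "Server=DBSERVER;Database=AppDb;Integrated Security=true"))
            {
                conn.Open();
                using (var cmd = new SqlCommand("SELECT TOP 1 1 FROM dbo.LicenseGuard", conn))
                    return cmd.ExecuteScalar() != null;
            }
        }
        catch (SqlException)
        {
            return false; // no access to the guard object: refuse to run
        }
    }
}
```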

.Net Dynamic Plugin Loading with Authority

What recommendations can you give for a system which must do the following:
Load Plugins (and eventually execute them) but have 2 methods of loading these plugins:
Load only authorized plugins (developed by the owner of the software)
Load all plugins
And we need to be reasonably sure that the authorized plugins are the real deal (unmodified). However, all plugins must be in separate assemblies. I've been looking at using strong-named assemblies for the plugins, with the public key stored in the loader application, but to me it seems too easy to modify the public key within the loader application (if the user were so inclined), regardless of any obfuscation of the loader application. Any more secure ideas?
Basically, if you're putting your code on someone else's machine, there's no absolute guarantee of security.
You can look at all kinds of security tricks, but in the end, the code is on their machine so it's out of your control.
How much do you stand to lose if the end user loads an unauthorised plugin?
How much do you stand to lose if the end user loads an unauthorised plugin?
Admittedly this won't happen often, but when/if it does happen we lose a lot, and although I understand we will produce nothing 100% secure, I want to make it enough of a hindrance to put people off doing it.
The annoying thing about going with simple dynamic loading with a full strong name is that all it takes is a simple string-literal change within the loader app to load any other assembly, even though the plugins are signed.
You can broaden your question: "How can I protect my .NET assemblies from reverse engineering?"
The answer is: you cannot. For those who haven't seen it yet, just look up "Reflector" and run it on some naive exe.
(By the way, this is always the answer for code that is out of your hands, as long as you do not have en/decryption hardware shipped with it.)
Obfuscation tries to make reverse engineering harder (i.e. cost more money) than development, and for some types of algorithms it succeeds.
Sign the assemblies.
Strong-name signing, or strong-naming, gives a software component a globally unique identity that cannot be spoofed by someone else. Strong names are used to guarantee that component dependencies and configuration statements map to exactly the right component and component version.
http://msdn.microsoft.com/en-us/library/h4fa028b(VS.80).aspx
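For what it's worth, here is roughly what the loader-side check looks like (a sketch; as noted above, an attacker can patch the expected token out of the loader, so this only raises the bar):

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Security;

static class PluginLoader
{
    public static Assembly LoadAuthorizedPlugin(string path, byte[] expectedToken)
    {
        // Reads identity metadata without executing any plugin code.
        AssemblyName name = AssemblyName.GetAssemblyName(path);
        byte[] token = name.GetPublicKeyToken();

        if (token == null || !token.SequenceEqual(expectedToken))
            throw new SecurityException(
                "Plugin '" + path + "' is not signed with the expected key.");

        return Assembly.Load(name);
    }
}
```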
