.Net Dynamic Plugin Loading with Authority - c#

What recommendations can you give for a system which must do the following:
Load plugins (and eventually execute them), but have two methods of loading these plugins:
Load only authorized plugins (developed by the owner of the software)
Load all plugins
And we need to be reasonably sure that the authorized plugins are the real deal (unmodified). However, all plugins must be in separate assemblies. I've been looking at using strong-named assemblies for the plugins, with the public key stored in the loader application, but to me it seems too easy to modify the public key within the loader application (if the user were so inclined), regardless of any obfuscation of the loader application. Any more secure ideas?

Basically, if you're putting your code on someone else's machine, there's no absolute guarantee of security.
You can look at all kinds of security tricks, but in the end, the code is on their machine so it's out of your control.
How much do you stand to lose if the end user loads an unauthorised plugin?

How much do you stand to lose if the end user loads an unauthorised plugin?
Admittedly this won't happen often, but when/if it does happen we lose a lot, and although I understand we will produce nothing 100% secure, I want to make it enough of a hindrance to put people off doing it.
The annoying thing about going with simple dynamic loading plus full strong naming is that all it takes is a simple string-literal change within the loader app to load any other assembly, even though the plugins are signed.

You can broaden your question: "How can I protect my .NET assemblies from reverse engineering?"
The answer is: you cannot. For those who haven't seen it yet, just look up "Reflector" and run it on some naive exe.
(By the way, this is always the answer for code that is out of your hands, as long as you do not ship en/decryption hardware with it.)
Obfuscation tries to make reverse engineering harder (cost more money) than development, and for some types of algorithms it succeeds.

Sign the assemblies.
Strong-name signing, or strong-naming, gives a software component a globally unique identity that cannot be spoofed by someone else. Strong names are used to guarantee that component dependencies and configuration statements map to exactly the right component and component version.
http://msdn.microsoft.com/en-us/library/h4fa028b(VS.80).aspx
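To make that concrete, here is a minimal sketch (not from the original answer) of how a loader might check a plugin's public key token before loading it; the token bytes, class name, and path handling are placeholders you would replace with your own signing key's token.

using System;
using System.Linq;
using System.Reflection;

static class PluginLoader
{
    // Placeholder token: replace with the public key token of your own signing key.
    private static readonly byte[] TrustedToken =
        { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08 };

    public static Assembly LoadAuthorized(string path)
    {
        AssemblyName name = AssemblyName.GetAssemblyName(path);
        byte[] token = name.GetPublicKeyToken() ?? Array.Empty<byte>();

        if (!token.SequenceEqual(TrustedToken))
            throw new InvalidOperationException("Plugin is not signed with the trusted key.");

        // Strong-name verification has already been done by the runtime for signed assemblies.
        return Assembly.LoadFrom(path);
    }
}

As the question itself points out, this only raises the bar: anyone who can patch the loader can swap the expected token, so it deters casual tampering rather than a determined attacker.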


Project can only be used by specified solution [duplicate]

How do I protect the dlls of my project in such a way that they cannot be referenced and used by other people?
Thanks
The short answer is that beyond the obvious things, there is not much you can do.
The obvious things that you might want to consider (roughly in order of increasing difficulty and decreasing plausibility) include:
Static link so there is no DLL to attack.
Strip all symbols.
Use a .DEF file and an import library to have only anonymous exports known only by their export ids.
Keep the DLL in a resource and expose it in the file system (under a suitably obscure name, perhaps even generated at run time) only when running.
Hide all real functions behind a factory method that exchanges a secret (better, proof of knowledge of a secret) for a table of function pointers to the real methods.
Use anti-debugging techniques borrowed from the malware world to prevent reverse engineering. (Note that this will likely get you false positives from AV tools.)
Regardless, a sufficiently determined user can still figure out ways to use it. A decent disassembler will quickly provide all the information needed.
Note that if your DLL is really a COM object, or worse yet a CLR Assembly, then there is a huge amount of runtime type information that you can't strip off without breaking its intended use.
EDIT: Since you've retagged to imply that C# and .NET are the environment rather than a pure Win32 DLL written in C, then I really should revise the above to "You Can't, But..."
There has been a market for obfuscation tools for a long time to deal with environments where delivery of compilable source is mandatory, but you don't want to deliver useful source. There are C# products that play in that market, and it looks like at least one has chimed in.
Because loading an Assembly requires so much effort from the framework, it is likely that there are permission bits that exert some control for honest providers and consumers of Assemblies. I have not seen any discussion of the real security provided by these methods and simply don't know how effective they are against a determined attack.
A lot is going to depend on your use case. If you merely want to prevent casual use, you can probably find a solution that works for you. If you want to protect valuable trade secrets from reverse engineering and reuse, you may not be so happy.
You're facing the same issue as proponents of DRM.
If your program (which you wish to be able to run the DLL) is runnable by some user account, then there is nothing that can stop a sufficiently determined programmer who can log on as that user from isolating the code that performs the decryption and using that to decrypt your DLL and run it.
You can of course make it inconvenient to perform this reverse engineering, and that may well be enough.
Take a look at the StrongNameIdentityPermissionAttribute. It will allow you to declare access to your assembly. Combined with a good code protection tool (like CodeVeil (disclaimer I sell CodeVeil)) you'll be quite happy.
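For illustration, a declarative use of that attribute might look like the sketch below. The public key string is a shortened placeholder for the full hex blob you can get from "sn -Tp YourAssembly.dll", and note that from .NET 2.0 onwards identity demands are not enforced against fully trusted callers, so treat this as a speed bump rather than a hard barrier.

using System.Security.Permissions;

// Shortened placeholder; use the full hex public key blob of your signing key.
[StrongNameIdentityPermission(SecurityAction.LinkDemand,
    PublicKey = "00240000048000009400000006020000002400005253413100040000...")]
public class AuthorizedOnlyApi
{
    public void DoWork()
    {
        // Only callers signed with the matching key satisfy the link demand
        // (and even then it is not enforced for fully trusted callers on .NET 2.0+).
    }
}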
You could embed it into your executable, then extract it and load it at runtime and call into it. Or you could use some kind of shared key to encrypt/decrypt the accompanying file and do the same as above.
I'm assuming you've already considered solutions like compiling it in if you really don't want it shared. If someone really wants to get to it though, there are many ways to do it.
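As a rough sketch of the embed-and-load idea (the resource name here is hypothetical, and the optional decryption step mentioned above is omitted), loading an assembly from an embedded resource might look like this:

using System;
using System.IO;
using System.Reflection;

static class EmbeddedLoader
{
    public static Assembly LoadEmbedded()
    {
        // "MyApp.Hidden.dll" is a hypothetical embedded-resource name.
        Assembly host = Assembly.GetExecutingAssembly();
        using Stream stream = host.GetManifestResourceStream("MyApp.Hidden.dll")
            ?? throw new InvalidOperationException("Embedded resource not found.");

        using var buffer = new MemoryStream();
        stream.CopyTo(buffer);

        // Load straight from memory so the DLL never sits on disk as a separate file.
        return Assembly.Load(buffer.ToArray());
    }
}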
Have you tried .NET Reactor? I recently came across it. Some people say it's great, but I am still testing it out.
Well, you could mark all of your "public" classes as "internal" or "protected internal", then mark your assemblies with the [assembly:InternalsVisibleTo("")] attribute, and no one but the named assemblies can see the contents.
You may be interested in the following information about Friend assemblies:
http://msdn.microsoft.com/en-us/library/0tke9fxk(VS.80).aspx
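A minimal sketch of the friend-assembly approach (the assembly names and the public key fragment are placeholders):

using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyCompany.AuthorizedPlugin")]
// For a strong-named friend, the full public key (not just the token) must be included:
[assembly: InternalsVisibleTo("MyCompany.Loader.Tests, PublicKey=0024000004800000...")]

namespace MyCompany.Core
{
    // Visible to the friend assemblies named above, invisible to everyone else.
    internal class HiddenService
    {
        internal void Run() { }
    }
}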

C# Control what an external DLL can access?

I'm building a project that will support loading external, managed DLLs, essentially as a modding system. However, for security reasons I'd like to restrict (as far as possible) what those external DLLs can access and do, because they won't be made by me.
My current plan was to simply blanket-ban every assembly besides a select whitelist which I can add to upon request, but my main issue is System.dll. It's probably the most important one to restrict access to, for the obvious reason that it can access System; however, it also has vital namespaces like System.Collections, so it needs to be usable.
Is there a way to check specifically what assemblies and namespaces a loaded DLL is utilising or am I going about this the wrong way?
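One rough way to do such a check is to inspect a plugin's assembly references against a whitelist before deciding to load it, as sketched below (the names are illustrative). Be aware that this only sees assembly references, not individual namespaces, and it is a convention check rather than a security boundary, since reflection or dynamic loading can get around it. Reflection-only loading works on .NET Framework; newer runtimes offer MetadataLoadContext for the same purpose.

using System;
using System.Linq;
using System.Reflection;

static class PluginScreening
{
    // Illustrative whitelist; extend it on request as described above.
    private static readonly string[] Whitelist =
        { "mscorlib", "System", "System.Core", "MyGame.ModdingApi" };

    public static bool ReferencesAreAllowed(string pluginPath)
    {
        // Reflection-only load inspects metadata without running plugin code
        // (.NET Framework; on .NET Core/5+ use MetadataLoadContext instead).
        Assembly plugin = Assembly.ReflectionOnlyLoadFrom(pluginPath);
        return plugin.GetReferencedAssemblies()
                     .All(r => Whitelist.Contains(r.Name, StringComparer.OrdinalIgnoreCase));
    }
}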

Strong named assembly needed for COM interoperability?

I have several C# assembly libraries, which are not strongly named (signed). I would like to make a SxS COM-wrapper over those components using the tlbexp.exe to consume in native programs. Is it necessary to sign them or is there another way to do it?
Thanks
There are strong misconceptions in this question; it confuses the roles of two programmers. You are the author of the library; somebody else uses your library, probably works for another company, and has no idea who you are: the client programmer. You in turn have no idea how the client programmer uses your library, how many programs he wrote, and what he does to deploy your library on his users' machines. You run Tlbexp.exe only to help him write his code.
This is a recipe for trouble, like it is no matter what language or tooling you use when you create libraries. That trouble starts when you make a change in the library and the client programmer has to rebuild and redeploy his programs that use your library.
There is extra trouble in a COM library, because by default registration is machine-wide. Which is pretty nice if the change you made is a bug fix: all of the client programs that use your library automatically get the fix. But it is not nice if the change is breaking and causes the old client program to fail. The standard disaster is that the client programmer rebuilds some of his programs but forgets or ignores some old ones he no longer maintains. The end user is often the real victim: he's got a program that crashes and two programmers who each don't think it is their problem to fix.
What is necessary is that programs the client programmer does not update keep using the old version of your library, so they are unaffected by the change. In other words, there need to be multiple copies of your DLL on the user's machine, and a program automagically needs to pick the right one.
Thankfully that is easy to do for a [ComVisible] .NET assembly. Either the client programmer, his user, or an installer you provide for him can put the assembly in the GAC, which allows multiple copies of an assembly to exist side by side so the CLR can automatically find the correct one. That has two requirements. You need to bump the [AssemblyVersion] of your library; that's standard. And the assembly needs to have a strong name so it can be put in the GAC. That is trivial for you to do, using Project > Properties > Signing and ticking the "Sign the assembly" checkbox. This has no security implications, so the key doesn't matter and a password is entirely unnecessary. It is not easy for the client programmer to do, so this is something that you must do. Always.
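To make that concrete, the assembly-level attributes involved might look like this (the values are examples, the .snk file name is a placeholder, and ticking "Sign the assembly" in the project properties achieves the same thing as the key file attribute):

using System.Reflection;
using System.Runtime.InteropServices;

[assembly: ComVisible(true)]
[assembly: AssemblyVersion("1.1.0.0")]        // bump this on every released change
[assembly: AssemblyKeyFile("MyCompany.snk")]  // or tick "Sign the assembly" under Project > Properties > Signing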
The client programmer also has the option to use isolated COM with a manifest (aka "regfree COM"), probably what you meant with "SxS COM-wrapper". With the benefit that each program he writes has its own copy of the DLL, the way it works by default in .NET. Bug-fixes need to be deployed manually but a change in your library can't break an unmaintained client program. But this is entirely his choice, there is nothing that you can do to ensure that this is done. You must assume that he doesn't use it, and he almost certainly won't at first, so you can't bypass the need to strong-name.
A strongly named assembly can only reference other strongly named assemblies. Since your assemblies are not strongly named, signing your COM wrapper isn't really an option, and there's no need to anyway.
Signing an assembly makes it possible to place it in the Global Assembly Cache (GAC). This has the benefit of keeping multiple versions side by side, without breaking existing clients.
The alternative is to use the Windows registry via regasm's /codebase switch. Much in the same way as classic COM components are set up, this option registers your COM-visible assembly on a system wide basis.
Since you wish to deploy your COM wrapper via SxS / Registration-Free activation, thereby bypassing the registry and GAC altogether, there's really no need to sign it.

Alternative for Obfuscation in the .NET world

Are there any alternatives for obfuscation to protect your code from being stolen?
An ultimate protection is the SaaS model. Anything else will expose your precious secrets one way or another.
See: http://en.wikipedia.org/wiki/Software_as_a_service
A short answer is:
Obfuscation has nothing to do with theft protection.
Obfuscation's only purpose is to make your code harder to read and understand, so that in the best case reverse engineering becomes economically unattractive.
It is still possible that someone steals your source code. Even if you use the best available obfuscation technology or if you think about SaaS scenarios.
You normally have your source code in at least two places, together with all the meta files necessary to build the project:
Your development computer
Your code repository
If you want to protect your code against theft, these are the first places where you must be active. Even the biggest players on the market, like Adobe, Microsoft, and Symantec, have lost source code as a result of theft, not as a result of reverse engineering. And in bigger companies it doesn't take an external attacker: a departing employee is sometimes enough.
So you might be interested in:
Strong machine encryption
Anti virus, Anti rootkit, Anti malware
Firewall and Intrusion Detection
Digital Property Protection
Limited internet access on development computers
Managed remote development environments so that source never leaves secured servers and infrastructure
Clear processes and consistent rights management
Etc.
Today, in many cases, the bigger risk is that some bad guy gains access to your repository or development system, or that a departing employee keeps a "backup copy" of your code, rather than that some company invests time in reverse engineering an existing application to create a 1:1 copy or to make modifications (both are illegal in most countries, can lead to serious damage to reputation and expensive judgments, and leave no possibility of getting professional support for such hacked and modified software).
Obfuscation also does not mean that your intellectual property is safe from being stolen or copied. Depending on the obfuscator you use, it may still be possible to analyze the logic.
If you want to make analyzing the logic harder, you need some kind of control flow obfuscation. But control flow obfuscation can produce a lot of strange and hard-to-debug problems; I'm sure that in most cases it is more an additional problem than a solution.
The harsh reality is that obfuscation does not solve the problem of reverse engineering; it solves the problem of 1:1 (or close to 1:1) code copies. Most software has a recognizable user interface or behavior, and in nearly all cases it is possible to reproduce user interfaces and behaviors (or, to be more exact, their results), and no tool exists to protect software against that.
If you only want to keep casual coders from understanding your code, open-source tools like Obfuscar may be good enough. But I bet you will run into problems if you use technologies like reflection, remoting, plugins, dynamic assembly loading and building, etc.
From my point of view, and that's also my experience, obfuscation is dispensable in most cases.
If you really want to make it hard for others to access your code (while "really hard" is relative) you have in general two choices:
Some kind of cryptographic container with a virtual execution environment and a virtual file system, which protects not only your code but the complete application and its structure. The attack vector is then, e.g., the memory during runtime or the container itself.
Think about SaaS, which means that you deliver access to your software but not the software itself. But keep in mind that SaaS solutions can be hard to develop and expensive, depending on the service level, security, and confidence you want or must provide. The attack vector is then, e.g., the server infrastructure.
The ultimate 100% bulletproof solution does, in fact, not exist on this planet.
Last but not least, it might be necessary to provide complete source code to customers in some situations, e.g. if you develop individual software and delivering code is part of your contract, or if you want to do business in critical segments like aerospace, the military industry, governmental systems, etc.
You could also code the sensitive functions/components into native C++, wrap it in C++/CLI and use with .NET.
Obviously, it can still be reverse engineered, but is an alternative nevertheless.
There is no obfuscator that will ever be secure enough to protect an application written in .NET. Forget it! Obfuscating is not a real protection.
If you have a .NET Exe file there is a FAR better solution.
I use Themida and can tell that it works very well.
Themida is by far cheaper than most obfuscators and is the best anti-piracy protection on the market. It creates a virtual machine where critical parts of your code are run, and it runs several threads that detect manipulation or breakpoints set by a cracker. It converts the .NET exe into something that Reflector does not even recognize as a .NET assembly anymore.
Please read the detailed description on their website: http://www.oreans.com/themida_features.php
The only drawback of Themida is that it cannot protect .NET DLLs. (Its strength is protecting C++ code in exes and DLLs.)

How do i prevent my code from being stolen?

What happens exactly when I launch a .NET exe? I know that C# is compiled to IL code, and I think the generated exe file is just a launcher that starts the runtime and passes the IL code to it. But how? And how complex a process is it?
The IL code is embedded in the exe. I think it can be executed from memory without writing it to disk, while ordinary exes cannot (ok, they can, but it is very complicated).
My final aim is to extract the IL code and write my own encrypted launcher, to prevent script kiddies from opening my code in Reflector and just stealing all my classes easily. Well, I can't prevent reverse engineering completely. If they are able to inspect the memory and catch the moment when I'm passing the pure IL to the runtime, then it won't matter whether it is a .NET exe or not, will it? I know there are several obfuscator tools, but I don't want to mess up the IL code itself.
EDIT: So it seems it isn't worth trying what I wanted. They will crack it anyway... So I will look for an obfuscation tool. And yes, my friends said too that it is enough to rename all symbols to meaningless names, and reverse engineering won't be so easy after all.
If you absolutely insist on encrypting your assembly, probably the best way to do it is to put your program code into class library assemblies and encrypt them. You would then write a small stub executable which decrypts the assemblies into memory and executes them.
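A bare-bones sketch of such a stub might look like the following (the file names, the all-zero placeholder key, and the entry-point call are illustrative assumptions, not a recommendation); the points below explain why it buys you very little:

using System;
using System.IO;
using System.Reflection;
using System.Security.Cryptography;

static class Stub
{
    static void Main(string[] args)
    {
        byte[] payload = File.ReadAllBytes("Payload.dll.enc");               // hypothetical encrypted class library
        byte[] key = Convert.FromBase64String("AAAAAAAAAAAAAAAAAAAAAA==");   // all-zero placeholder; the real key still ships with the stub
        byte[] iv  = Convert.FromBase64String("AAAAAAAAAAAAAAAAAAAAAA==");

        using var aes = Aes.Create();
        using ICryptoTransform decryptor = aes.CreateDecryptor(key, iv);
        byte[] il = decryptor.TransformFinalBlock(payload, 0, payload.Length);

        Assembly asm = Assembly.Load(il);                       // decrypted IL, loaded from memory
        asm.EntryPoint?.Invoke(null, new object[] { args });    // assumes a Main(string[] args) entry point
    }
}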
This is an extremely bad idea for two reasons:
You're going to have to include the encryption key in your stub. If a 1337 hacker can meaningfully use your reflected assemblies, he can just as easily steal your encryption key and decrypt them himself. (This is basically the Analog Hole)
Nobody cares about your 1337 code. I'm sorry, but that's tough love. Nobody else ever thinks anyone's code is nearly as interesting as the author does.
A "secret" that you share with thousands of people is not a secret. Remember, your attackers only have to break your trivial-to-break-because-the-key-is-right-there "encryption" scheme exactly once.
If your code is so valuable that it must be kept secret then keep it secret. Leave the code only on your own servers; write your software as a web service. Then secure the server.
the generated exe file is just a launcher that starts the runtime and passes the IL code to it.
Not exactly. There are different ways you can set up your program, but normally the IL code is compiled to native machine code that runs in process with the runtime.
As for the kiddies — you're deluding yourself if you think you can sell to them or anyone who uses what they redistribute. If they can't unlock your app they'll move on and find one they can or do without. They represent exactly $0 in potential sales; it makes little sense to spend too much effort attempting to thwart them because there'd be no return on your investment. A basic obfuscator might be fine, but don't go much beyond that.
Realistically, most developers face a much bigger challenge from obscurity than from piracy. Anything you do that prevents you from getting the word out about your product hurts you more than the pirates do. This includes making people pay money to get it. Most of the time a better approach is to have a free version of your app that the kiddies don't even need to unlock; something that already works for them well enough that cracking your app would just be a waste of their time, and not just a time or feature-limited trial. Let them and as many others as possible spread it far and wide.
Now I know that you do eventually need some paying customers. The key is to now use all the attention you get from the free product to upsell or promote something else that's more profitable. One option here is to also have a premium version with additional features targeted largely at a business audience; things like making it easy to deploy to an entire network and manage that way. Businesses have deeper pockets and are more likely to pay your license fees. Your free version then serves to promote your product and give it legitimacy for your business customers.
Of course, there are other models as well, but no matter what you do it's worth remembering that obscurity is the bigger challenge and that pirated copies of your software will never translate into sales. Ultimately (and of course this depends on your execution) you'll be able to make more money with a business model that takes advantage of those points than you will trying to fight them.
"...prevent scriptkiddies to open my
code in Reflector and just steal all
my classes easily."
Unfortunately, regardless of how you obscure launching, it's a matter of half a dozen commands in a debugger to dump a currently-running assembly to a file of the user's choice. So, even if you can launch your application as Brian suggested, it's not hard to get that application's components into Reflector once it's running (I can post a sample from WinDbg if someone would find it interesting).
Obfuscation tools are created from huge amounts of technical experience, and are often designed to make it difficult for debuggers to reliably attach to a process, or to extract information from it. As Brian said: I'm not sure why you're determined to preserve the IL and, if you want any meaningful protection from script kiddies, that's something you may have to change your mind on.
"They copied all they could follow, but they couldn't copy my mind, so I left them sweating and stealing a year and a half behind." -- R. Kipling
Personally I think that obfuscation is the way to go. It is simple and can be effective, especially if all your code is within an exe (I'm not sure what the concern is with "messing up the IL").
However, if you feel that won't work for you, perhaps you can encrypt your exe and embed it as a resource within your launcher. The simplest way to handle it would be to decrypt the exe resource, write it out to a file, and execute it. Once the exe has finished executing, delete the file. You might also be able to run it through the Emit functions. I have no idea how this would work, but here is an article to get you started: Using Reflection Emit to Cache .NET Assemblies.
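A rough sketch of that extract, run, and delete approach (the resource name is hypothetical and the decryption of the resource bytes is left out) might look like this:

using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;

static class TempFileLauncher
{
    public static void Run()
    {
        // "Launcher.Payload.exe" is a hypothetical resource name; decrypt the bytes
        // here with whatever scheme you choose before writing them out.
        using Stream source = Assembly.GetExecutingAssembly()
            .GetManifestResourceStream("Launcher.Payload.exe")
            ?? throw new InvalidOperationException("Embedded resource not found.");

        string path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".exe");
        using (FileStream file = File.Create(path))
            source.CopyTo(file);

        try
        {
            using Process child = Process.Start(path);   // run the extracted exe
            child.WaitForExit();
        }
        finally
        {
            File.Delete(path);                           // remove the temporary copy afterwards
        }
    }
}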
Of course your decryption key would probably have to be embedded in the exe as well so somebody really determined will be able to decrypt your assembly anyway. This is why obfuscation is probably the best approach.
Copying my answer from this question (which is not exactly duplicate but can be answered with the same answer, hence CW):
A Windows EXE contains multiple "parts". Simplified, the .NET code (MSIL) is only a part of the EXE; there is also a "real" native Windows part inside the EXE that serves as a sort of launcher for the .NET Framework, which then executes the MSIL.
Mono will just take the MSIL and execute it, ignoring the native Windows launcher stuff.
Again, this is a simplified overview.
Edit: I fear my understanding of the deeper details is not good enough for really much detail (I know roughly what a PE header is, but not really the details), but I found these links helpful:
NET Assembly Structure – Part II
.NET Foundations - .NET assembly structure
Appendix: If you really want to go deeper, pick up a copy of Advanced .NET Debugging. The very first chapter explains exactly how a .NET assembly is loaded prior to and after Windows XP (since XP, the Windows loader is .NET-aware, which radically changes how .NET applications are started).
