I have some code which needs to run under medium trust but doesn't. I've used permcalc in the past, but it is rather painful to get the output and compare it to the medium trust definition. What I would really like is a tool which does the analysis for me and just outputs a list of things I need to address. Does such a tool exist? I have seen references to a Calculate Permissions tool in Visual Studio, but I can't find it anywhere in VS2010.
Well, the reality is that PermCalc is not very good at calculating what an application really needs, and it tends to assign too many privileges.
That said, PermCalc should be able to give you some indication of what in your code needs security permissions. Can you share more details about what those are? (By the way, how are you using PermCalc: in VS2008 or on the command line?)
Also, do you have a strategy for handling the cases where your app has functionality that requires more privileges than the ones granted by CAS Medium Trust?
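If it helps, one workaround on .NET Framework is to PermitOnly a rough approximation of the Medium Trust grant set and see what throws at runtime, rather than reading PermCalc reports. A minimal sketch; the permission set below is deliberately incomplete (the real definition lives in web_mediumtrust.config):

```csharp
using System;
using System.Security;
using System.Security.Permissions;
using Microsoft.Win32;

class MediumTrustProbe
{
    static void Main()
    {
        // Rough, incomplete approximation of the Medium Trust grant set;
        // the real set in web_mediumtrust.config contains more entries.
        var grant = new PermissionSet(PermissionState.None);
        grant.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
        grant.AddPermission(new FileIOPermission(
            FileIOPermissionAccess.Read | FileIOPermissionAccess.Write,
            AppDomain.CurrentDomain.BaseDirectory));
        grant.PermitOnly(); // anything beyond this set now throws SecurityException

        try
        {
            // Registry access is not part of Medium Trust, so this should fail.
            Registry.CurrentUser.OpenSubKey("Software");
        }
        catch (SecurityException ex)
        {
            Console.WriteLine("Needs attention: " + ex.Message);
        }
    }
}
```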
I have recently been tasked with figuring out a way to remove the elevated privileges requirement for a c# application.
I'm not super familiar with C# but I do have access to the source code (multiple projects in one solution) and I'm using VS 2013 Professional.
So far I've been going through the code by hand and referencing documentation online to try to determine where the elevated privilege requirements are coming from.
Is there a way to use Visual Studio (or another piece of software) to determine which function calls are forcing the administrator privileges requirement?
Not automatically, but there may be some ways to narrow things down quickly.
First, if you have good test suites, you could run them as a user without admin access and see which ones, if any, fail or prompt for UAC (see the elevation-check sketch below). This should let you narrow down sections of code quickly (I am a big fan of repurposing test suites, by the way). Those that fail can then be corrected quickly.
Another option would be manual testing, again as a user without admin access. The code can then be reviewed where problems occur and the issues removed.
Without an automatic way of finding the problems, however, you should be prepared for some post-sign-off bug fixes, so an experimental or pilot phase may be a good idea.
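One cheap guard when repurposing test suites this way: have the suite log or assert at startup that it is not running elevated, so that a green run actually proves the app works without admin rights. A minimal sketch:

```csharp
using System;
using System.Security.Principal;

static class ElevationCheck
{
    // Returns true when the current process runs with administrator rights.
    public static bool IsElevated()
    {
        var identity = WindowsIdentity.GetCurrent();
        var principal = new WindowsPrincipal(identity);
        return principal.IsInRole(WindowsBuiltInRole.Administrator);
    }

    static void Main()
    {
        // In a test suite you would assert this is false instead of printing it.
        Console.WriteLine("Running elevated: " + IsElevated());
    }
}
```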
Are there any alternatives for obfuscation to protect your code from being stolen?
The ultimate protection is the SaaS model. Anything else will expose your precious secrets one way or another.
See: http://en.wikipedia.org/wiki/Software_as_a_service
A short answer is:
Obfuscation has nothing to do with theft protection.
Obfuscation's only purpose is to make your code harder to read and understand, so that in the best case reverse engineering becomes economically unattractive.
It is still possible for someone to steal your source code, even if you use the best available obfuscation technology or move to a SaaS scenario.
You normally have your source code in at least two places, together with all the meta files necessary to build the project:
Your development computer
Your code repository
If you want to protect your code against theft, these are the first places you must secure. Even the biggest players on the market, like Adobe, Microsoft, and Symantec, have lost source code as a result of theft, not as a result of reverse engineering. And in bigger companies it doesn't take an external attacker; a departing employee is sometimes enough.
So you might be interested in:
Strong machine encryption
Antivirus, anti-rootkit, and anti-malware tools
Firewalls and intrusion detection
Digital property protection
Limited internet access on development computers
Managed remote development environments, so that source code never leaves secured servers and infrastructure
Clear processes and consistent rights management
And so on
Today, in many cases, the bigger risk is that some bad guy gains access to your repository or development system, or that a departing employee keeps a "backup copy" of your code, rather than that some company invests time in reverse engineering an existing application to create a 1:1 copy or to make modifications. Both are illegal in most countries and can lead to serious reputational damage and expensive judgments, and the perpetrators also have no way to get professional support for such hacked and modified software.
Obfuscation also does not mean that your intellectual property is safe from being stolen or copied. Depending on the obfuscator you use, it is still possible to analyze the logic.
If you want to make analyzing the logic harder, you need some kind of control flow obfuscation. But control flow obfuscation can produce a lot of strange and hard-to-debug problems, and I'm sure that in most cases it is more an additional problem than a solution.
The unpleasant reality is that obfuscation does not solve the problem of reverse engineering. It solves the problem of 1:1 (or close to 1:1) code copies. That's because most software has a recognizable user interface or behavior, and in nearly all cases it is possible to reproduce user interfaces and behaviors (or, more exactly, their results); no tool exists to protect software against this.
If you just want to keep casual coders from understanding your code, open source tools like Obfuscar may be good enough. But I'd bet you'll run into problems if you use technologies like reflection, remoting, plugins, dynamic assembly loading and building, and so on; the sketch below shows the classic failure mode.
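A small illustration of that reflection pitfall (the type and namespace names are invented for this example): string-based type lookup breaks once an obfuscator renames symbols, while a compile-time reference survives.

```csharp
using System;

namespace MyApp.Plugins
{
    public class ReportPlugin { }
}

class ObfuscationPitfall
{
    static void Main()
    {
        // String-based lookup: if an obfuscator renames ReportPlugin to,
        // say, "a", this returns null and the plugin silently fails to load.
        Type byName = Type.GetType("MyApp.Plugins.ReportPlugin");
        Console.WriteLine(byName == null ? "lookup broke" : "found " + byName.Name);

        // Compile-time reference: survives renaming, because the obfuscator
        // rewrites this metadata token together with the type itself.
        Type byToken = typeof(MyApp.Plugins.ReportPlugin);
        Console.WriteLine("found " + byToken.Name);
    }
}
```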
From my point of view (and this is also my experience), obfuscation is dispensable in most cases.
If you really want to make it hard for others to access your code (where "really hard" is relative), you have in general two choices:
Some kind of cryptographic container with a virtual execution environment and a virtual file system, which protects not only your code but the complete application and its structure. The attack vector here is, for example, the memory at runtime or the container itself.
Think about SaaS, which means that you deliver access to your software but not the software itself. But keep in mind that SaaS solutions can be hard to develop and expensive, depending on the service level, security, and confidence you want or must provide. The attack vector here is, for example, the server infrastructure.
An ultimate, 100% bulletproof solution does not, in fact, exist on this planet.
Last but not least, it may be necessary to provide the complete source code to customers in some situations, e.g. if you develop bespoke software and delivering the code is part of your contract, or if you want to do business in critical segments like aerospace, the military industry, governmental systems, and so on.
You could also move the sensitive functions/components into native C++, wrap them in C++/CLI, and use them from .NET.
Obviously, it can still be reverse engineered, but it is an alternative nevertheless.
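The simplest flavor of the same idea, shown here instead of a full C++/CLI wrapper, is a P/Invoke call into a plain native DLL. A sketch, where SecretCore.dll and its export are hypothetical:

```csharp
using System;
using System.Runtime.InteropServices;

static class SecretCore
{
    // Hypothetical native export; the DLL name, entry point, and calling
    // convention depend entirely on how you build the C++ side.
    [DllImport("SecretCore.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int ComputeLicenseChecksum(int seed);
}

class Program
{
    static void Main()
    {
        // The sensitive algorithm lives in native code; only this thin,
        // uninteresting call site is visible in the managed assembly.
        Console.WriteLine(SecretCore.ComputeLicenseChecksum(42));
    }
}
```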
There is no obfuscator that will ever be secure enough to protect an application written in .NET. Forget it! Obfuscation is not real protection.
If you have a .NET EXE file, there is a FAR better solution.
I use Themida and can tell you that it works very well.
Themida is far cheaper than most obfuscators and is the best anti-piracy protection on the market. It creates a virtual machine where critical parts of your code run, and it starts several threads that detect manipulation or breakpoints set by a cracker. It converts the .NET EXE into something that Reflector no longer even recognizes as a .NET assembly.
Please read the detailed description on their website: http://www.oreans.com/themida_features.php
The only drawback of Themida is that it cannot protect .NET DLLs. (Its strength is protecting C++ code in EXEs and DLLs.)
I will be taking on the role of support for a complex application that is transitioning from the development team. This application is a SharePoint solution that connects to several (7) web services. The development team is rolling off almost immediately and will be available only for small questions.
I'm new to this role, so I'm wondering what suggestions you have for me as I take on this large project. What considerations should be made so that the transition to support is smooth and uninterrupted?
I've been reading the documentation, but I can already see some gaps that need to be filled. The application is very (perhaps overly) configurable and there is lots of injected code. Stepping through the code is about the only way I can gain an understanding of what is actually happening.
It sounds like you've already got your environment set up if you're able to debug the application, so that's the first thing I was going to suggest in a knowledge-transfer situation. Some general things that I would get from the developers before they depart:
A list of third-party components that the application uses, along with license information and website logins if applicable.
Access to every part of the environment that this thing runs on, both production and development. That means the source code management system, database server(s), etc. It sounds like you have some of these already but make sure you get access to absolutely everything.
If your development environment was given to you "as is" (i.e. you took it over from one of the departing developers), make sure you know how to rebuild it from scratch. They might have a document that describes the process of building a development box, but if not, maybe you can get them to show you how to set up a fresh machine.
Item 3 will go a long way towards this, but if setting up a server to run the application differs in any way from setting up a development environment, you'll want to know how, so you can diagnose server configuration issues if they crop up, or even rebuild a server. Although this sort of thing may be someone else's responsibility, depending on your organization.
Once you have those, you probably want to get some understanding of why the application does the things that it does. That will give you the context you need to understand support and enhancement requests when they come in.
Are the original developers the only source of this information, or are there business people who you will be working with after the developers leave? One of the first things I try to do when starting on an existing application that's new to me is to find someone who knows the business well and have them give me a high-level run-down of the application's purpose in life. From there you can go into more detail on individual components/features/whatever as needed. The business people may be a better source for this information than the developers are, so you may want to try them first.
Hopefully some of that helps.
If you're not the systems admin (as opposed to the SharePoint admin), develop an understanding with them of what tasks you are able to do and what you need of them.
This may include things like stopping and starting services (IIS, Timer Service, etc.) and filesystem and DB monitoring and maintenance. Getting this sorted out up front saves a lot of pain later.
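If restarting the SharePoint Timer Service ends up on your side of that split, a small sketch using ServiceController may help ("SPTimerV4" is the SharePoint 2010 service name; verify it for your version, and note the caller needs rights to control services):

```csharp
using System;
using System.ServiceProcess; // reference System.ServiceProcess.dll

class BounceTimerService
{
    static void Main()
    {
        // "SPTimerV4" is the SharePoint 2010 Timer Service; adjust as needed.
        using (var svc = new ServiceController("SPTimerV4"))
        {
            svc.Stop();
            svc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
            svc.Start();
            svc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
            Console.WriteLine("Timer service restarted.");
        }
    }
}
```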
If the sys admins don't have some understanding of SharePoint, educate them. They will need to know what the deal is with things like code deployments.
It's best not to feel my pain.
What happens exactly when I launch a .NET EXE? I know that C# is compiled to IL code, and I think the generated EXE file is just a launcher that starts the runtime and passes the IL code to it. But how? And how complex a process is it?
The IL code is embedded in the EXE. I think it can be executed from memory without writing it to disk, whereas ordinary EXEs can't (OK, they can, but it is very complicated).
My final aim is to extract the IL code and write my own encrypted launcher, to prevent script kiddies from opening my code in Reflector and just stealing all my classes easily. Well, I can't prevent reverse engineering completely. If they are able to inspect the memory and catch the moment when I'm passing the pure IL to the runtime, then it won't matter whether it is a .NET EXE or not, will it? I know there are several obfuscator tools, but I don't want to mess up the IL code itself.
EDIT: It seems it isn't worth trying what I wanted; they will crack it anyway. So I will look for an obfuscation tool. My friends also said that renaming all symbols to meaningless names is enough, and reverse engineering won't be so easy after all.
If you absolutely insist on encrypting your assembly, probably the best way to do it is to put your program code into class library assemblies and encrypt them. You would then write a small stub executable which decrypts the assemblies into memory and executes them.
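A minimal sketch of such a stub, assuming the encrypted class library is embedded as a resource named "Stub.Payload.bin" (the resource name, key handling, and choice of AES are all illustrative):

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Security.Cryptography;

class Stub
{
    static void Main(string[] args)
    {
        // Read the encrypted assembly out of an embedded resource.
        byte[] payload;
        using (var res = Assembly.GetExecutingAssembly()
                                 .GetManifestResourceStream("Stub.Payload.bin"))
        using (var ms = new MemoryStream())
        {
            res.CopyTo(ms);
            payload = ms.ToArray();
        }

        // Decrypt it. Note the key has to live inside this stub,
        // which is exactly the fatal flaw discussed below.
        using (var aes = Aes.Create())
        {
            aes.Key = new byte[32]; // placeholder key
            aes.IV  = new byte[16]; // placeholder IV
            using (var dec = aes.CreateDecryptor())
                payload = dec.TransformFinalBlock(payload, 0, payload.Length);
        }

        // Load the decrypted assembly from memory and run its entry point.
        var asm = Assembly.Load(payload);
        var entry = asm.EntryPoint;
        entry.Invoke(null, entry.GetParameters().Length == 0
                               ? null
                               : new object[] { args });
    }
}
```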
This is an extremely bad idea for two reasons:
You're going to have to include the encryption key in your stub. If a 1337 hacker can meaningfully use your reflected assemblies, he can just as easily steal your encryption key and decrypt them himself. (This is basically the Analog Hole)
Nobody cares about your 1337 code. I'm sorry, but that's tough love. Nobody else ever thinks anyone's code is nearly as interesting as the author does.
A "secret" that you share with thousands of people is not a secret. Remember, your attackers only have to break your trivial-to-break-because-the-key-is-right-there "encryption" scheme exactly once.
If your code is so valuable that it must be kept secret then keep it secret. Leave the code only on your own servers; write your software as a web service. Then secure the server.
"the generated EXE file is just a launcher that starts the runtime and passes the IL code to it."
Not exactly. There are different ways you can set up your program, but normally the IL code is just-in-time (JIT) compiled to native machine code that runs in-process with the runtime.
As for the kiddies: you're deluding yourself if you think you can sell to them or to anyone who uses what they redistribute. If they can't unlock your app, they'll move on and find one they can, or do without. They represent exactly $0 in potential sales; it makes little sense to spend much effort thwarting them because there'd be no return on your investment. A basic obfuscator might be fine, but don't go much beyond that.
Realistically, most developers face a much bigger challenge from obscurity than from piracy. Anything you do that prevents you from getting the word out about your product hurts you more than the pirates do. This includes making people pay money to get it. Most of the time a better approach is to have a free version of your app that the kiddies don't even need to unlock; something that already works for them well enough that cracking your app would just be a waste of their time, and not just a time or feature-limited trial. Let them and as many others as possible spread it far and wide.
Now, I know that you do eventually need some paying customers. The key is then to use all the attention you get from the free product to upsell or promote something else that's more profitable. One option here is to also have a premium version with additional features targeted largely at a business audience, things like making it easy to deploy to an entire network and manage it that way. Businesses have deeper pockets and are more likely to pay your license fees. Your free version then serves to promote your product and give it legitimacy for your business customers.
Of course, there are other models as well, but no matter what you do it's worth remembering that obscurity is the bigger challenge and that pirated copies of your software will never translate into sales. Ultimately (and of course this depends on your execution) you'll be able to make more money with a business model that takes advantage of those points than you will trying to fight them.
"...prevent scriptkiddies to open my
code in Reflector and just steal all
my classes easily."
Unfortunately, regardless of how you obscure launching, it's a matter of half a dozen commands in a debugger to dump a currently-running assembly to a file of the user's choice. So, even if you can launch your application as Brian suggested, it's not hard to get that application's components into Reflector once it's running (I can post a sample from WinDbg if someone would find it interesting).
Obfuscation tools are built from huge amounts of technical experience, and are often designed to make it difficult for debuggers to attach reliably to a process or to extract information from it. As Brian said, I'm not sure why you're determined to preserve the IL; if you want any meaningful protection from script kiddies, that's something you may have to change your mind about.
"They copied all they could follow, but they couldn't copy my mind, so I left them sweating and stealing a year and a half behind." -- R. Kipling
Personally, I think obfuscation is the way to go. It is simple and can be effective, especially if all your code is within an EXE (I'm not sure what the concern is with "messing up the IL").
However, if you feel that won't work for you, perhaps you can encrypt your EXE and embed it as a resource within your launcher. The simplest way to handle it would be to decrypt the EXE resource, write it out to a file, and execute it. Once the EXE has finished executing, delete the file. You might also be able to run it through the Emit functions. I have no idea how that would work, but here is an article to get you started: Using Reflection Emit to Cache .NET Assemblies.
Of course, your decryption key would probably have to be embedded in the EXE as well, so somebody really determined will be able to decrypt your assembly anyway. This is why obfuscation is probably the best approach.
Copying my answer from this question (which is not exactly duplicate but can be answered with the same answer, hence CW):
A Windows EXE contains multiple "parts". Simplified, the .NET code (MSIL) is only one part of the EXE; there is also a "real" native Windows part inside the EXE that serves as a sort of launcher for the .NET Framework, which then executes the MSIL.
Mono just takes the MSIL and executes it, ignoring the native Windows launcher stuff.
Again, this is a simplified overview.
Edit: I fear my understanding of the deep details is not good enough to go much further (I know roughly what a PE header is, but not really the details), but I found these links helpful:
.NET Assembly Structure – Part II
.NET Foundations - .NET assembly structure
Appendix: If you really want to go deeper, pick up a copy of Advanced .NET Debugging. The very first chapter explains exactly how a .NET assembly is loaded before and after Windows XP (since XP, the Windows loader is .NET-aware, which radically changes how .NET applications are started).
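To see that split concretely, here is a short check for the CLI header, assuming the System.Reflection.Metadata NuGet package is available:

```csharp
using System;
using System.IO;
using System.Reflection.PortableExecutable; // from the System.Reflection.Metadata package

class PeInspect
{
    static void Main(string[] args)
    {
        using (var stream = File.OpenRead(args[0]))
        using (var pe = new PEReader(stream))
        {
            // Every Windows EXE carries a native PE header; managed EXEs
            // additionally carry a CLI (COR) header pointing at the MSIL
            // and metadata, which is what Mono and the CLR actually execute.
            bool isManaged = pe.PEHeaders.CorHeader != null;
            Console.WriteLine(isManaged ? "Managed (.NET) EXE" : "Native EXE");
        }
    }
}
```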
Is it necessary to learn about Code Access Security (CAS) for desktop application development in C#?
That's a pretty broad question, and the answer depends on a number of things. The two most important factors, however, are your target environment and your method of deployment.
Most consumer software is installed with an installer (MSI) and gets full trust on the target machine. If this is your environment and method of delivery, you'll likely never need to learn or master Code Access Security. On the other hand, enterprise customers generally want more control over what software can and can't do. Code Access Security gives IT the ability to lock down applications and control what they can do on the machines they're installed on. So if you're building for the enterprise, understanding CAS may be a requirement.
Regardless of your target market, how you deploy your application may require you to learn about CAS. XBAP applications are by default NOT full trust and require significant steps to elevate to full trust. ClickOnce-deployed applications are also not full trust but can be elevated to full trust more easily. So if you plan to deploy software using either of these methods, you'll likely want to understand CAS.
And finally, Silverlight as a platform by definition is not full-trust. In fact it can never be full-trust. This is not a CAS issue because no depth of understanding CAS will help you overcome the fact that Silverlight does not include code required to perform full-trust activities. The reason I include Silverlight here, however, is that a good understanding of CAS might come in handy when understanding some of the security limitations that exist in the platform by design.
Hope that helps.
Yes, if you want to get an MCPD. In the real world I have never needed it. I write applications for the government, and while they are pretty tight on security, they have never requested it.
It is not essential, but it certainly helps make your application more secure. Explicit permission declarations on methods make your intentions clear.
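For reference, a declarative CAS demand on .NET Framework looks like the sketch below (the path is illustrative; note that CAS policy is deprecated from .NET 4 onward and absent from .NET Core/5+):

```csharp
using System.Security.Permissions;

public class ReportWriter
{
    // Declarative CAS demand: callers up the stack must have write access
    // to this directory, or a SecurityException is thrown before the body runs.
    [FileIOPermission(SecurityAction.Demand, Write = @"C:\Reports")]
    public void Save(string text)
    {
        System.IO.File.WriteAllText(@"C:\Reports\out.txt", text);
    }
}
```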
Ugh, it was a nice idea (I guess), but in real life CAS only rears its ugly head when you try to deploy or read a file off a network drive. It's not difficult to 'learn', but I wouldn't dwell on it.
Desktop applications are considered "full trust"; you will never need CAS for full-trust applications.
I bought a book on it shortly after .NET 1.0 came out; I'll never get back the time I spent reading it.
I have never actually run across a situation that required code access security. It is a definite requirement for getting an MCPD or MCSD (or whatever the new cert is), but I think a better idea would be to understand secure coding practices (configuration encryption, dealing with user input, etc.) before going down the route of code access security.