Strong-named assembly needed for COM interoperability? - C#

I have several C# assembly libraries that are not strongly named (signed). I would like to make a side-by-side (SxS) COM wrapper over those components using tlbexp.exe, to consume them in native programs. Is it necessary to sign them, or is there another way to do it?
Thanks

There are strong misconceptions in this question; it confuses the roles of two programmers. You are the author of the library. Somebody else uses your library, probably works for another company, and has no idea who you are: the client programmer. You in turn have no idea how the client programmer uses your library, how many programs he wrote, or what he does to deploy your library on his users' machines. You run Tlbexp.exe only to help him write his code.
This is a recipe for trouble, as it is no matter what language or tooling you use to create libraries. That trouble starts when you make a change in the library and the client programmer has to rebuild and redeploy the programs that use it.
There is extra trouble in a COM library because by default registration is machine-wide. Which is pretty nice if the change you made is a bug fix; all of the client programs that use your library automatically get the fix. But it is not nice if the change is breaking and causes an old client program to fail. The standard disaster is that the client programmer rebuilds some of his programs but forgets or ignores some old ones he no longer maintains. The end user is often the real victim: he's got a program that crashes, but two programmers who don't think it is their problem to fix.
What is necessary is that programs the client programmer does not update keep using the old version of your library, so they are unaffected by the change. In other words, there need to be multiple copies of your DLL on the user's machine, and a program automagically needs to pick the right one.
Thankfully that is easy to do for a [ComVisible] .NET assembly. Either the client programmer, his user, or an installer you provide for him can put the assembly in the GAC, which allows multiple copies of an assembly to exist side by side so the CLR can automatically find the correct one. That has two requirements. You need to bump the [AssemblyVersion] of your library; that's standard. And the assembly needs to have a strong name so it can be put in the GAC. That is trivial for you to do, using Project > Properties > Signing and ticking the "Sign the assembly" checkbox. This has no security implications, so the key doesn't matter and a password is entirely unnecessary. It is not easy for the client programmer to do, so this is something that you must do. Always.
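Concretely, the two requirements above come down to a couple of assembly-level attributes; this is a minimal sketch (the version numbers are examples, and the strong-name key itself is supplied via the Signing tab):

```csharp
// AssemblyInfo.cs (illustrative)
using System.Reflection;
using System.Runtime.InteropServices;

[assembly: ComVisible(true)]
// Bump this for every release with visible changes, so the GAC can keep
// old and new copies side by side and the CLR can pick the right one.
[assembly: AssemblyVersion("1.1.0.0")]
[assembly: AssemblyFileVersion("1.1.0.0")]
```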
The client programmer also has the option to use isolated COM with a manifest (aka "regfree COM"), probably what you meant by "SxS COM-wrapper". With the benefit that each program he writes gets its own copy of the DLL, the way it works by default in .NET. Bug fixes need to be deployed manually, but a change in your library can't break an unmaintained client program. This is entirely his choice, however; there is nothing you can do to ensure it is done. You must assume that he doesn't use it, and he almost certainly won't at first, so you can't bypass the need for a strong name.
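For reference, isolated COM activation of a [ComVisible] class is driven by a manifest along these lines; the assembly name, class name, and GUID below are placeholders, not values from the question:

```xml
<!-- MyLib.manifest (hypothetical), deployed next to the client's exe and
     referenced as a dependent assembly from its application manifest. -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="MyLib" version="1.0.0.0" />
  <clrClass
      clsid="{11111111-2222-3333-4444-555555555555}"
      progid="MyLib.MyComClass"
      threadingModel="Both"
      name="MyLib.MyComClass" />
</assembly>
```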

A strongly named assembly can only reference other strongly named assemblies. Since your assemblies are not strongly named, you couldn't sign the COM wrapper anyway without signing them too; and there's no need to.
Signing an assembly makes it possible to place it in the Global Assembly Cache (GAC). This has the benefit of keeping multiple versions side by side, without breaking existing clients.
The alternative is to use the Windows registry via regasm's /codebase switch. Much in the same way as classic COM components are set up, this option registers your COM-visible assembly on a system-wide basis.
Since you wish to deploy your COM wrapper via SxS / registration-free activation, thereby bypassing the registry and GAC altogether, there's really no need to sign it.

Related

C# Control what an external DLL can access?

I'm building a project that will support loading external, managed DLLs, essentially as a modding system. However, for security reasons I'd like to restrict (as far as possible) what those external DLLs can access and do, because they won't be made by me.
My current plan was to simply blanket-ban every assembly besides a select whitelist which I can add to upon request. My main issue, however, is System.dll. It's probably the most important one to restrict access to, for the obvious reason that it can access System, yet it also holds vital namespaces like System.Collections, so it needs to be usable.
Is there a way to check specifically which assemblies and namespaces a loaded DLL is utilising, or am I going about this the wrong way?
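For the assembly half of this, one possible check is to read the plugin's manifest without executing it; a sketch, assuming .NET Framework (the path and whitelist are examples, and on .NET Core you would use MetadataLoadContext instead of reflection-only loading):

```csharp
using System;
using System.Linq;
using System.Reflection;

static class RefChecker
{
    // Hypothetical whitelist; extend on request as described above.
    static readonly string[] Whitelist = { "mscorlib", "System", "System.Core" };

    public static bool OnlyUsesWhitelisted(string path)
    {
        // Reflection-only loading reads metadata; it never runs plugin code.
        AssemblyName[] refs = Assembly.ReflectionOnlyLoadFrom(path)
                                      .GetReferencedAssemblies();
        return refs.All(r => Whitelist.Contains(r.Name));
    }
}
```

Note this inspects assembly references only; restricting individual namespaces would mean walking the IL with a metadata reader, which is considerably more work.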

How do I tell VB6 not to create new versions of interfaces/COM objects every time I make the DLL?

I have a VB6 COM server (an ActiveX DLL project) that is used by .NET code.
Every time I change the VB6 code and make the DLL, I have to recompile my .NET client code as well, because it looks like VB6 generates new GUIDs or versions for the interfaces and COM objects.
I admit it's good practice, because changes were made, but I'd like to disable this behavior so my .NET client code can stay the same every time I update my VB6 DLL.
How can I tell VB6 to keep all GUIDs and versions for the ActiveX DLL the same, no matter what changes are made to the COM objects or interfaces?
The selection in Project + Properties, Component tab matters. You have to select "Binary Compatibility" there to force it to reuse the old GUIDs. And keep a copy of the DLL around to act as the "master" that provides the GUIDs; check it into source control.
When you add new classes you also have to update that copy, so future versions will know to reuse the same GUIDs for those added classes. Easy to forget, pretty hard to diagnose when you do.
It is very dangerous; reusing GUIDs is a very strong DLL Hell inducer. You can get old client programs to continue using the new DLL as long as you carefully avoid changing existing methods. Not just their method signatures but also their implementations. An updated client that encounters an old version of the DLL will fail in a very nasty way; the access violation crash is near impossible to diagnose.
Using Binary Compatibility only buys you so much. Maintaining interface compatibility over the long term only works well for very simple libraries, or when your interfaces were very well planned and future-proofed from the beginning. It can be scary to look at some people's VB6 libraries and see interface version numbers in the high hundreds (both GUIDs and version numbers are used to identify an interface) even when they think they've carefully managed Binary Compatibility.
It can get even worse when you have a system of programs that share libraries. A new requirement or even a bug fix might require a breaking change to a library's interface for one program but not the other 12 or 20 of them.
You have to accommodate this by explicit manual versioning, where you actually change the name of the library to reflect a new version with an entirely new set of GUIDs. Often this is done with numbering, so that a ProgId like "FuddInvLib1.DalRoot" can coexist side by side with a new "FuddInvLib2.DalRoot" in two libraries with entirely different sets of GUIDs: FuddInvLib1.dll and FuddInvLib2.dll.
As each is changed for support purposes you can maintain Binary Compatible versioning within each of them over time, eventually phasing out the FuddInvLib1.dll entirely. This of course means migrating client code to use the newer library or libraries over time, but you have the luxury of doing this incrementally at a planned pace.
The COM contract stipulates that interface definitions are immutable (no change to method names, argument lists, order of methods, number of methods), but that implementations can be freely altered. Naive, but true. (VB Binary compatibility will not permit you to change the method signatures or method order in an interface, although it will allow you to append new methods to it--go figure). Nevertheless, making any change to an interface or its methods for a DLL that is in production is "worst practice", as years of DLL Hell have borne out and as Hans has explained.
One approach for preserving binary compatibility through version changes is to add new interfaces to the component, and never (never ever) touch the old interfaces once any version of the DLL is in production. The old client will happily use the old interface, and newer clients can use the new interface.
The "new client using an old interface" error is trappable using the IUnknown.QueryInterface method. In VB, you can do this:

    If Not TypeOf myObjectReference Is myNewInterface Then
        'gracefully handle the error
    End If

This performs the QueryInterface call under the hood, and the TypeOf test returns False if you are referencing an older version of the DLL. You can wrap this check in a function and call it when you initialize the object in your new client. If the new interface isn't supported, you have an old version of the DLL; you can message the user to install the newer version and quit.

Native DLL Resolve for P/Invoke and User Expectations

I've written a .NET assembly which uses P/Invoke to expose functionality of a native 3rd party library. I am, however, not distributing this library with my assembly. This means that the responsibility is on the user to provide the library through whatever means required. Which leads me to my question:
As either a library author with experience in this situation or a potential user of this assembly, what are some common user expectations one would have for resolving the DLL location in this use case?
Is the default Windows DLL search order enough? "If it blows up, it blows up. They should have read the documentation."
Should I automatically expand %PATH% at run-time to common library install locations to try and find it or at least increase the chances? I'm not really a fan of this as we're changing state behind the scenes.
Should I provide some form of configuration setting to allow the user to specify the location and then manually call LoadLibrary?
As per comments the users are developers:
I would go with convention over configuration: basically the default Windows DLL search order, plus a configuration setting for situations where configuration is needed. If that setting is configured, it takes precedence.
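That precedence rule can be sketched like this; a minimal example assuming Windows and a hypothetical native library name, relying on the fact that once a module is loaded by LoadLibrary, later [DllImport]s with the same module name bind to it:

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeLoader
{
    [DllImport("kernel32", SetLastError = true, CharSet = CharSet.Unicode)]
    static extern IntPtr LoadLibrary(string path);

    // Call once at startup, before the first P/Invoke into the library.
    public static void Preload(string configuredPath)
    {
        if (!string.IsNullOrEmpty(configuredPath))
        {
            // Explicit configuration takes precedence over the search order.
            if (LoadLibrary(configuredPath) == IntPtr.Zero)
                throw new DllNotFoundException(
                    "Could not load native library from " + configuredPath);
        }
        // Otherwise do nothing: subsequent [DllImport]s resolve through the
        // standard Windows DLL search order.
    }
}
```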

Proper API Design for Version Independence?

I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API.
The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without destroying any previous third party apps.
I have a way to solve this problem, but I'm not sure it's the ideal way; I was looking for other ideas.
My plan would be essentially to have three DLLs.
APIServer_1_0.dll - this would be the DLL with all of the dependencies.
APIClient_1_0.dll - this would be the DLL our developers actually reference. No references to any of the mess in our solution.
APISupport_1_0.dll - this would contain the interfaces that allow the client piece to dynamically load the "server" component and perform whatever functions are required. Both of the above DLLs would depend upon it; it would be the only DLL the "client" piece references.
I initially arrived at this design, because the way in which we do inter process communication between windows services is sort of similar (except that the client talks to the server via named pipes, rather than dynamically loading dlls).
While I'm fairly certain I can make this work, I'm curious to know whether there are better ways to accomplish the same task.
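In outline, the plan above might look like this; every name here is illustrative, not taken from the actual solution:

```csharp
using System;
using System.Reflection;

// Lives in APISupport_1_0.dll: interfaces only, referenced by both sides.
public interface IApiServer
{
    string GetCustomerName(int id);
}

// Lives in APIClient_1_0.dll: the thin assembly third parties reference.
// It loads the heavyweight server dll at run time, so client code never
// links against the solution's internal dependency web.
public static class ApiClientFactory
{
    public static IApiServer Create(string serverAssemblyPath)
    {
        Assembly server = Assembly.LoadFrom(serverAssemblyPath);
        // Hypothetical implementation type living in APIServer_1_0.dll.
        Type impl = server.GetType("ApiServer.ServerImpl", throwOnError: true);
        return (IApiServer)Activator.CreateInstance(impl);
    }
}
```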
You may wish to take a look at the Microsoft Managed Add-in Framework [MAF] and Managed Extensibility Framework [MEF] (links courtesy of Kent Boogaart). As Kent states, the former is concerned with isolation of components, and the latter is primarily concerned with extensibility.
In the end, even if you leverage neither, some of the concepts regarding API versioning are very useful - i.e. versioning interfaces, and then providing inter-version support through adapters.
Perhaps a little overkill, but definitely worth a look!
Hope this helps! :)
Why not just use the Assembly versioning built into .NET?
When you add a reference to an assembly, just be sure to check the 'Require specific version' checkbox on the reference. That way you know exactly which version of the Assembly you are using at any given time.
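In project-file terms, that checkbox corresponds to something like the following fragment (the assembly name and public key token are placeholders):

```xml
<!-- .csproj fragment (hypothetical): pin the reference to one exact version. -->
<Reference Include="APIClient_1_0, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef">
  <SpecificVersion>True</SpecificVersion>
</Reference>
```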

.Net Dynamic Plugin Loading with Authority

What recommendations can you give for a system which must do the following:
Load plugins (and eventually execute them), with two methods of loading:
1. Load only authorized plugins (developed by the owner of the software)
2. Load all plugins
And we need to be reasonably sure that the authorized plugins are the real deal (unmodified). However, all plugins must be in separate assemblies. I've been looking at using strong-named assemblies for the plugins, with the public key stored in the loader application, but it seems too easy to modify the public key within the loader application (if the user were so inclined), regardless of any obfuscation of the loader. Any more secure ideas?
Basically, if you're putting your code on someone else's machine, there's no absolute guarantee of security.
You can look at all kinds of security tricks, but in the end, the code is on their machine so it's out of your control.
How much do you stand to lose if the end user loads an unauthorised plugin?
Admittedly this won't happen often, but when it does happen we lose a lot, and although I understand we will produce nothing 100% secure, I want to make it enough of a hindrance to put people off doing it.
The annoying thing about going with simple dynamic loading plus a full strong name is that all it takes is a simple string-literal change within the loader app to load any other assembly, even though the plugins are signed.
You can broaden your question: "how can I protect my .NET assemblies from reverse engineering?"
The answer is: you cannot. For those who haven't seen it yet, just look up "Reflector" and run it on some naive exe.
(By the way, this is always the answer for code that is out of your hands, as long as you do not ship en/decryption hardware with it.)
Obfuscation tries to make reverse engineering harder (cost more money) than development, and for some types of algorithms it succeeds.
Sign the assemblies.
Strong-name signing, or strong-naming, gives a software component a globally unique identity that cannot be spoofed by someone else. Strong names are used to guarantee that component dependencies and configuration statements map to exactly the right component and component version.
http://msdn.microsoft.com/en-us/library/h4fa028b(VS.80).aspx
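A loader can then compare the plugin's public key token against the one it expects before executing anything; a sketch (the token bytes are placeholders, and as discussed above this only raises the bar, since a determined user can still patch the loader itself):

```csharp
using System;
using System.Linq;
using System.Reflection;

static class PluginGate
{
    // Hypothetical token of the vendor's signing key.
    static readonly byte[] ExpectedToken =
        { 0xB7, 0x7A, 0x5C, 0x56, 0x19, 0x34, 0xE0, 0x89 };

    public static bool IsAuthorized(string path)
    {
        // Reads identity from the file's metadata without executing it.
        byte[] token = AssemblyName.GetAssemblyName(path).GetPublicKeyToken();
        // An unsigned assembly yields an empty token and is rejected.
        return token != null && token.SequenceEqual(ExpectedToken);
    }
}
```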
