COM Object Registration: Multiple allowed? (C#)

I have two COM objects with different GUID values, but the same name. One is a newer version of the other. What happens when I register both using Name.exe /regserver? I've looked in the registry, and they both show up with the same ProgID, but their respective GUID values are different. They point to their separate locations on the hard drive. Is this a problem?
I'm trying to get the old version of the project to work alongside the new version (they won't run at the same time), and I think the two registrations are fighting.
The COM objects were created in VB6. The code that uses them is C#. They are added to their respective C# projects as references. When one version is registered, I can't compile the other (nor run it successfully).
What other information would be helpful while investigating this issue?

Converting my comment into an answer:
You have created a new version of a component which is not backward compatible with the old version.
You really should change the ProgID to indicate that this is effectively a new component. Client apps will have to explicitly target either the new component or the old one. People often just append a version number (e.g. 2) to the ProgId.

You are violating hard COM rules. Either your replacement must be an exact match for the component it replaces, or you must generate a new version that:
Uses a different [Guid] for the coclass; you did that correctly.
Uses a different ProgId; you didn't do that. Boilerplate is to include a version number in the ProgId itself, so Foo.Bar becomes Foo.Bar.2.
Uses different [Guid]s for the interfaces implemented by the class. This is easy to overlook since they are hidden so well in a VB6 component, but it is crucial whenever the class is used from another apartment: COM needs to find the type library for the component so it knows how to marshal the interface method calls. Be sure to declare your interfaces explicitly in your C# code (a sketch follows below).
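For illustration, here is a minimal C# sketch of what those three rules look like when authoring a replacement component in managed code (the question's components are VB6, so this is only an analogy; every GUID, name, and method below is invented):

using System.Runtime.InteropServices;

[ComVisible(true)]
[Guid("11111111-2222-3333-4444-555555555555")]   // new IID for the changed interface
[InterfaceType(ComInterfaceType.InterfaceIsDual)]
public interface IFooBar2
{
    void DoWork();
}

[ComVisible(true)]
[Guid("66666666-7777-8888-9999-AAAAAAAAAAAA")]   // new CLSID for the coclass
[ProgId("Foo.Bar.2")]                            // versioned ProgId
[ClassInterface(ClassInterfaceType.None)]        // expose only the explicit interface
public class FooBar2 : IFooBar2
{
    public void DoWork() { /* ... */ }
}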
The best way to double-check all this is by running OleView.exe, File + View Typelib command. That decompiles the type library content back to IDL, you will see the guids and interfaces. If you want to create an exact substitute for the old component then everything must match exactly. Exact same guids, exact same interfaces with the exact same order of methods and exact same arguments.

I haven't ever accessed VB6 ActiveX exes from .NET (just dlls), so this is a shot in the dark (and is weak enough to be just a comment except it is too long).
Perhaps you can create / export a .tlb each for the two VB6 components to compile your C# against. You shouldn't need the exes to compile.
Next manually add the registry entries as if they had separate Programmatic IDs (say MyComponent.ServerClass.1 and MyComponent.ServerClass.2) and then load them by name in your C#.
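A rough sketch of that late-bound loading (the ProgId is the placeholder name from above, and DoSomething is a hypothetical method on the VB6 class):

using System;
using System.Reflection;

class Demo
{
    static void Main()
    {
        Type t = Type.GetTypeFromProgID("MyComponent.ServerClass.2");
        if (t == null)
            throw new InvalidOperationException("ProgId is not registered.");

        object server = Activator.CreateInstance(t);   // starts/attaches to the COM server
        t.InvokeMember("DoSomething", BindingFlags.InvokeMethod,
                       null, server, new object[0]);
    }
}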

Related

Integrating MATLAB functions into C# project

I have a nice .NET assembly of core MATLAB functions, created of course with the MATLAB Compiler. For functions that accept numbers or arrays of numbers, this is fine; I can write code in C# without having to revert to MATLAB (well, the MCR has to be installed; that's fine).
For functions that must reference other functions, however, the only way I have found so far to get a C# program going is to compile both functions into the assembly. To explain better, let's say I have a library in which I've stored the ode45 routine. If I want to solve a specific equation, say something simple like dy/dx = -y, then I have to create a MATLAB script file which may be written as follows:
function dydx = diffeq(x, y)
dydx = -y;
[obviously the analytical solution exists, but for the sake of this example let’s say I want to solve it this way]
Now in order to solve this equation, I would have to add this function as a method in my class to be compiled into the .NET assembly. This of course ruins the generality of my library; I want application-specific equations in a different library from my core math function library. That is, the ode45 method should reside in a "more core" library than the library in which the "diffeq" method would reside.
More than that, I would much prefer to create the “diffeq” method in a c# class that I can edit directly in e.g. VS2012. I would like to edit the equation directly rather than having to enter matlab each time and recompile an assembly.
To solve this problem, I have gone to the extent of decompiling the assembly which contains both the ode45 code and my differential equation method; it turns out the assembly is nothing but an interface to the MCR; the diffeq methods in the assembly return something like the following:
return mcr.EvaluateFunction(numArgsOut, "diffeq", new object[0]);
We note that the function/method “diffeq” is not part of the MCR; MCR does not change. However, I can’t find the equation anywhere in the assembly.
Which begs the question “Dude, where’s my function?”
There is a 'resources' component of the assembly in which we find [classname].ctf, and in that we'll find some machine code. This looks encrypted, but the equation might be hidden in there. If so, that would be a deliberate attempt to prevent what I am attempting, and kudos to MathWorks for making it impossible for me to avoid having to enter the MATLAB application!
However, there doesn't seem to be anything in the licensing to prevent what I want to do; I think it would be great if MathWorks would allow as open an approach as that, but in the interim, does anyone know how to do this?
The "MATLAB Compiler" has a somewhat misleading name. It is more of a deployment solution than a compiler in the actual sense (see note below). It is mainly intended to distribute MATLAB applications to end-users without requiring a full MATLAB installation on their part (only the royalty-free MCR runtime needs to be installed).
The MCR is in fact a stripped-down version of the MATLAB engine along with accompanying libraries.
When you use MATLAB Compiler to generate a binary package, the result is a target-specific wrapper (be it a standalone application, C/C++ shared library, Java package, or a .NET assembly) that calls the MCR runtime. The binary generated includes an embedded CTF archive containing all the original MATLAB content (your M-files and other dependencies) but in an encrypted form. When first executed, the CTF archive is extracted to a temp folder, and the M-files (still encrypted) are then interpreted by the MCR at runtime like typical MATLAB code.
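As a hedged sketch of what calling such a generated wrapper from C# looks like (DiffEqLib.Solver and its diffeq method stand in for whatever you compiled; the exact generated signatures depend on your compilation, and MWArray/MWNumericArray come from MWArray.dll, which ships with the MCR):

using System;
using MathWorks.MATLAB.NET.Arrays;

class Demo
{
    static void Main()
    {
        // Constructing the wrapper class loads and initializes the MCR.
        var solver = new DiffEqLib.Solver();

        // Arguments are marshaled to MATLAB types; the M-code runs inside the MCR.
        MWArray y = solver.diffeq(new MWNumericArray(0.0), new MWNumericArray(1.0));
        Console.WriteLine(y);
    }
}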
There is an option in deploytool (mcc -C) to tell the compiler not to embed the CTF archive inside the binary as a resource, but instead to place it as a separate file next to the generated binary (this CTF archive can be inspected as a regular ZIP file, but the source files inside are still encrypted, of course).
See the following documentation page for more information:
Application Deployment Products and the Compiler Apps
PS: The truth is, MATLAB Compiler started out as a product to convert MATLAB code into full C/C++ code, which used the now-discontinued "MATLAB C/C++ Math Library" (no runtime requirement; you just compile the generated C++ code and link against certain shared libraries; the result is a true compiled executable, not a wrapper). This functionality changed completely around the time MATLAB 7 was released (the reason being that the old way only supported a subset of the MATLAB language, while the current MCR mechanism enables deploying almost any code). Years later, MathWorks added a new product to replace the once-removed code-translation functionality, namely MATLAB Coder.

Is there a good reason for preferring reflection over reference?

Going over some legacy code, I ran into a piece of code that used reflection to load some DLLs whose source code was available (they were another project in the solution).
I was cracking my skull trying to figure out why it was done this way (naturally, the code was not documented...).
My question is: can you think of any good reason to prefer loading an assembly via reflection rather than referencing it?
Yes, if you have a dynamic module system, where different DLLs should be loaded depending on conditions at runtime. We do this where I work; we do a license check for different optional modules that may be loaded into our system, and then only load the DLLs associated with each module if the license checks out. This prevents code that should never be executed from being loaded, which can both improve performance slightly and prevent bugs.
Dynamically loading DLLs may also allow you to drastically change functionality without changing any source code. The main assembly may for instance set in motion a discovery process where it finds all classes that implement some interface, and chooses which one to use depending on some runtime criterion.
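A minimal sketch of such a discovery process (IPlugin and the folder layout are assumptions, not from the original post):

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

public interface IPlugin { void Run(); }

public static class PluginLoader
{
    // Load every DLL in the folder and instantiate each concrete IPlugin type.
    public static IEnumerable<IPlugin> Load(string folder)
    {
        foreach (string file in Directory.GetFiles(folder, "*.dll"))
        {
            Assembly asm = Assembly.LoadFrom(file);
            foreach (Type t in asm.GetTypes())
            {
                if (typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract)
                    yield return (IPlugin)Activator.CreateInstance(t);
            }
        }
    }
}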
These days you'll typically want to use MEF for this kind of task, but that's only been around since .NET 4.0, so there are probably many codebases out there that do it manually. (I don't know much about MEF. Maybe you have to do this part manually there as well.)
But anyway, the answer to your question is that there certainly are good reasons to dynamically load DLLs using reflection. Whether it applies in your case is impossible to say without more details.
Without knowing your specific project, no one here can tell you why it was done that way in your case.
But the general reasons are:
updatability: You can simply recompile and replace the updated library instead of having to recompile and replace the whole application.
cooperation: If the interface is clear, multiple teams can work together this way, one on the main application and others on the DLLs.
reusability: Sometimes you need the same functionality in multiple projects, so the same DLL can be used again and again.
extensibility: In some cases you want to be able to extend your program later with plugins that were not present at shipment time. This can be realized using DLLs.
I hope this helps you understand some of your setup.
Reason for loading an assembly via reflection rather than referencing it?
Let us consider a scenario where there are three classes, each with a DoWork() method that returns a string, and you access them by checking a condition (strongly typed).
Now you have two more classes in two different DLLs; how would you cope with the change?
1) You can add references to the new DLLs, change the conditional check, and make it work.
2) You can use reflection and pass the condition and assembly name at run time; this allows you to add any amount of functionality at runtime without any code change in the primary application.
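A rough sketch of option 2 (the assembly path and type name would arrive at run time, e.g. from a config file; DoWork matches the method named in the scenario):

using System;
using System.Reflection;

public static class Worker
{
    public static string DoWorkFrom(string assemblyPath, string typeName)
    {
        Assembly asm = Assembly.LoadFrom(assemblyPath);
        Type t = asm.GetType(typeName, true);          // throws if the type is missing
        object instance = Activator.CreateInstance(t);
        return (string)t.GetMethod("DoWork").Invoke(instance, null);
    }
}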

How does .NET's Primary Interop Assembly Embedding work?

I am researching the .NET Common Language Infrastructure, and before I get into the nitty-gritty of the compiler I'm going to write, I want to be sure that certain features are available. In order to do that, I must understand how they work.
One feature I'm unsure of is .NET Primary Interop Assembly embedding. I'm not quite sure how .NET goes about embedding only the types you use versus the types that are exposed by the types you use. From the bit of research I've done, I've noticed that it emits a bare-bones interface that utilizes vtable gap methods, where the method name format is VtblGap{0}_{1}, where {0} is the index of the gap and {1} is the member size of the gap. These methods are marked rtspecialname and specialname. Whether this is accurate or not is the question.
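For reference, here is an illustrative sketch (not decompiled from a real assembly; the IID and member names are placeholders) of what such a trimmed, embedded interface might look like in C# terms:

using System.Runtime.InteropServices;

[ComImport]
[Guid("00000000-0000-0000-0000-000000000000")]   // placeholder IID
[TypeIdentifier]                                 // marks the local type for type equivalence
interface _Application
{
    void _VtblGap1_12();   // gap placeholder that skips 12 unused vtable slots
    void Quit();           // the one member the client code actually uses
}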
Assuming the above is true, how would I go about obtaining the necessary information to embed similar metadata into the resulting application?
From what I can tell, you can order the MemberInfo objects via their metadata tokens, and the dispid information is obtained via the attributes from the interop assembly. The area I'm most confused about is the interfaces that are imported yet seem to have no direct correlation with the other embedded types: sequentially indexed interfaces that seem to be there for versioning reasons. Is their inclusion based on their indexing, or is there some other logic used? An example is Microsoft.Office.Interop.Word: when you add a document to an Application and do something with it, it imports the document, its events, and so on.
Here's hoping someone in-the-know can clue me in on what else might be involved in embedding these types.

Product Name define for Visual C++ and C#

Our product contains a bunch of modules spread out over several visual studio solutions and uses C++ and C#. I'd like to define a product name and use it as part of default folder locations, registry keys, etc.
What is the simplest way to define this product name in one place? And if I have to use a different approach for C++ and C#, what would you advise for each of them?
According to Microsoft, it looks like you should be able to put everything into one solution, then have sub-solutions within that:
MSDN: Structuring Solutions and Projects
EDIT: The article is for Team Foundation Server, so I guess you can't necessarily do this.
I can't necessarily say what would be the simplest, but I do know what we've done here that's worked out reasonably well.
For C++ projects we have a common header file that is included everywhere - it has #defines for all the common non-localizable strings used by the applications (product names, company name, version, registry keys, file prefixes/extensions, etc.), and the individual projects just include and reference those defines. I used defines specifically rather than constants because that way I could also change all the version resources to reference those same defines without any issues (in fact, all the projects' .rc files include the same version.rc to guarantee uniformity).
For our C# projects, I use a simple class to contain the constants that the C# projects reference.
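A minimal sketch of such a constants class (all names are invented placeholders):

public static class ProductInfo
{
    public const string ProductName  = "MyProduct";
    public const string CompanyName  = "MyCompany";
    public const string RegistryRoot = @"Software\MyCompany\MyProduct";
}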
Unfortunately this leaves two places to maintain, but at this point it works well enough, and we've had so little need to update those defines/constants that we haven't needed to come up with a more integrated approach yet.
I'd be interested in hearing other approaches...
This is the solution I will try to implement:
C++ and C# will each have their own function to get the product name, and those functions will have a default name.
The default name can be overridden by the environment variable "PRODUCTNAME"; this way we can easily build our software under different names by modifying only that environment variable.
[Edit] My C++ solution compiles a DLL which contains (among others) the function:
GetProductName(char* pName, int iSize);
so product name is now only defined in one place.
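On the C# side, consuming that export might look like the following sketch (the DLL name, calling convention, and character set are assumptions):

using System.Runtime.InteropServices;
using System.Text;

public static class ProductName
{
    [DllImport("ProductInfo.dll", CharSet = CharSet.Ansi)]
    private static extern void GetProductName(StringBuilder pName, int iSize);

    public static string Get()
    {
        var sb = new StringBuilder(256);
        GetProductName(sb, sb.Capacity);
        return sb.ToString();
    }
}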

HRESULT:0x80040154 COM object with CLSID {} is either not valid or not registered

I am using a COM object in my C# .NET 1.1 application. When the COM object is not registered on the client machine, I get the exception:
"COM object with CLSID {} is either
not valid or not registered."
(HRESULT:0x80040154)
I know how to register the COM object using regsvr32. Our requirement is to identify the COM object which is not registered. We don't want to hardcode the name of the COM object, but rather identify it dynamically using the COMException/HRESULT. Any ideas?
Given the situation mentioned in the comment to Benjamin Podszun's answer, your approach could be:
Look up the CLSID in the registry (HKEY_CLASSES_ROOT\CLSID\{...})
If not found, just throw so that the generic error message is displayed
Look at the appropriate subkey depending on the type of object (e.g. InProcServer32 for an in-proc native COM DLL/OCX). The default value will be the path to the DLL/OCX.
Check for the existence of this path and fabricate a suitable message with the name of the file.
This enables you to display a different message if the OCX/DLL has been moved, but of course won't help if it is not registered.
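A sketch of those steps in C# (written against a modern runtime; the message strings are invented, and the CLSID would come from the COMException):

using System;
using System.IO;
using Microsoft.Win32;

public static class ClsidChecker
{
    public static string Describe(Guid clsid)
    {
        string subkey = @"CLSID\" + clsid.ToString("B") + @"\InProcServer32";
        using (RegistryKey key = Registry.ClassesRoot.OpenSubKey(subkey))
        {
            if (key == null)
                return "Component is not registered.";      // fall back to the generic message

            string path = key.GetValue(null) as string;     // default value = path to the DLL/OCX
            if (path == null || !File.Exists(path))
                return "Registered server is missing from disk: " + path;

            return "Server found at: " + path;
        }
    }
}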
You also might want to spend some time trying to understand why (or even five whys!) the components are being moved or unregistered on some client machines, which may suggest a way to prevent it. For example, one possible scenario is that the user installs version 1, then installs version 2, then uninstalls version 1. Depending on how your installer is implemented, uninstalling version 1 may deregister or remove the components needed for version 2.
Another approach might be to see how the installer behaves as a normal user vs an administrator.
Identifying the exact sequence of events on failing client machines may be a bit of a challenge, but it's worth trying.
The only way to solve the problem you describe is to have a global list of CLSIDs for all objects you want to check.
If you only care about a few libraries, you can install them, check their GUIDs, and build this database yourself.
If you want to identify any possible library worldwide that might ever exist, I would give up now and have some coffee.
If I understand you correctly, you get the message "I don't know how to find the COM object for this GUID" and want to handle it better?
In other words: usually you'd register the COM object, and it is then referenced by its class id or ProgId. For now, Windows doesn't know about your target object.
I'm not sure how to do that. If your COM objects are .NET assemblies, you might be able to use reflection to iterate over a list of files in your program directory and look up the relevant attribute, comparing it with the error message.
If they are native libraries there are probably P/Invoke ways to do the same, but I wouldn't know how and Google doesn't help so far.
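For the managed case, a speculative sketch (with the caveat that a type only carries a [Guid] if its author declared one explicitly):

using System;
using System.IO;
using System.Reflection;
using System.Runtime.InteropServices;

public static class AssemblyFinder
{
    // Returns the file containing a type whose [Guid] matches the failing CLSID, or null.
    public static string FindByClsid(string folder, Guid clsid)
    {
        foreach (string file in Directory.GetFiles(folder, "*.dll"))
        {
            try
            {
                Assembly asm = Assembly.LoadFrom(file);
                foreach (Type t in asm.GetTypes())
                {
                    object[] attrs = t.GetCustomAttributes(typeof(GuidAttribute), false);
                    if (attrs.Length > 0 && new Guid(((GuidAttribute)attrs[0]).Value) == clsid)
                        return file;
                }
            }
            catch (BadImageFormatException)
            {
                // Not a managed assembly; skip it.
            }
        }
        return null;
    }
}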
I would love to understand the requirement though: Since you're distributing that app and know the dependencies, why do you want to add this extra layer of complexity?
