I'm getting System.IO.FileNotFoundException: The specified module could not be found when running C# code that calls a C++/CLI assembly which in turn calls a pure C DLL. It happens as soon as an object is instantiated that calls the pure C DLL functions.
BackingStore is pure C.
CPPDemoViewModel is C++/CLI; it calls BackingStore and has a reference to it.
I tried the simplest possible case: add a new C# unit test project that just tries to create an object defined in CPPDemoViewModel. I added a reference from the C# project to CPPDemoViewModel.
A C++/CLI test project works fine with just the added reference to CPPDemoViewModel, so it's something about going between the languages.
I'm using Visual Studio 2008 SP1 with .Net 3.5 SP1. I'm building on Vista x64 but have been careful to make sure my Platform target is set to x86.
This feels like something stupid and obvious I'm missing but it would be even more stupid of me to waste time trying to solve it in private so I'm out here embarrassing myself!
This is a test for a project porting a huge amount of legacy C code which I'm keeping in a DLL with a ViewModel implemented in C++/CLI.
Edit:
After checking directories, I can confirm that the BackingStore.dll has not been copied.
I have the standard unique project folders created with a typical multi-project solution.
WPFViewModelInCPP
    BackingStore
    CPPViewModel
    CPPViewModelTestInCS
        bin
            Debug
    Debug
The higher-level Debug appears to be a common folder used by the C and C++/CLI projects, to my surprise.
WPFViewModelInCPP\Debug contains BackingStore.dll, CPPDemoViewModel.dll, CPPViewModelTest.dll and their associated .ilk and .pdb files
WPFViewModelInCPP\CPPViewModelTestInCS\bin\Debug contains the CPPDemoViewModel and CPPViewModelTestInCS .dll and .pdb files, but not BackingStore. However, manually copying BackingStore into that directory did not fix the error.
CPPDemoViewModel has the property Copy Local set, which I assume is responsible for copying its DLL when it is referenced. I can't add a reference from a C# project to a pure C DLL - it just says "A Reference to Backing Store could not be added."
I'm not sure if I have just one problem or two.
I can use an old-fashioned copying build step to copy the BackingStore.dll into any given C# project's directories, although I'd hoped the new .net model didn't require that.
Dependency Walker is telling me that the missing file is GPSVC.dll, which it has been suggested indicates security-settings issues. I suspect this is a red herring.
Edit 2:
With BackingStore.dll manually copied next to the executable, the GUI now works fine. The C# test project still has problems, which I suspect are due to the runtime environment of a test project, but I can live without that for now.
Are the C and C++ DLLs in the same directory as the C# assembly that's executing?
You may have to change your project output settings so that the C# assembly and the other DLLs all end up in the same folder.
I've often used the Dependency Walker in cases like this; it's a sanity check that shows that all the dependencies can actually be found.
Once your app is running, you may also want to try out Process Monitor on the code you are running, to see which DLLs are being referenced, and where they are located.
The answer for the GUI, other than changing output settings, was the addition of a pre-build step:
copy $(ProjectDir)..\Debug\BackingStore.* $(TargetDir)
The answer for the test projects was to add the missing DLL to the Deployment tab of the test run configuration. You can either edit the default LocalTestRun.testrunconfig directly (it appears in the Solution under Solution Items) or right-click the solution and add a new test run config, which will then appear under the main Test menu.
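If you prefer to keep the deployment requirement next to the tests themselves, MSTest also honors a [DeploymentItem] attribute; here is a minimal sketch, where the relative path and the view-model type name are assumptions you would adjust to your own solution layout:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CPPViewModelDeploymentTests
{
    [TestMethod]
    [DeploymentItem(@"Debug\BackingStore.dll")] // hypothetical relative path to the native DLL
    public void CanCreateViewModel()
    {
        // Creating the C++/CLI type forces the native BackingStore.dll to load,
        // so this fails with the FileNotFoundException if deployment is wrong.
        var vm = new CPPDemoViewModel.DemoViewModel(); // hypothetical type name
        Assert.IsNotNull(vm);
    }
}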
Thanks for the answers on this SO question on test configurations for leading me to the answer.
The reason this happens is one of two things. Either you are running managed code from DllMain before the CRT has had an opportunity to initialize; no managed code may be executed, directly or indirectly, as a result of DllMain notifications. (See Expert C++/CLI: .NET for Visual C++ Programmers, chapter 11 and onward.)
Or you have no native entry point defined whatsoever, yet you have linked against MSVCRT. The CLR is automatically initialized for you with /clr; this detail causes a lot of confusion and must be taken into account. A mixed-mode DLL actually delay-loads the CLR by hot-patching all of the managed entry-point vtables in your classes.
A number of class-initialization issues surround this topic; the loader lock and delay-loading of the CLR are a bit tricky at times. Try to declare globals static, do not use #pragma managed/unmanaged, and isolate your code with /clr per file.
If you cannot isolate your native code from the managed code and are still having trouble after taking some of these steps, you can also look at hosting the CLR yourself and perhaps going to the effort of creating an AppDomain manager; that would ensure you're fully "in the loop" on runtime events and bootstrapping.
This is exactly why; it has nothing to do with your search path or initialization. Unfortunately the Fusion log viewer does not help much here (it, not Dependency Walker, is the usual place to look for .NET CLR assembly-binding issues).
Linking statically has nothing to do with this either. You can NOT statically link a mixed-mode C++/CLI application.
Place your DllMain function in a file by itself.
Ensure that this file does NOT have /clr set in its build options (per-file build options).
Make sure you are linking with /MD or /MDd, and that all of the dependencies you link against use the exact same CRT.
Evaluate your linker settings for /DEFAULTLIB and /INCLUDE to identify any possible reference issues; you can declare a prototype in your code and use /INCLUDE to override default-library link resolution.
Good luck, and do check out that book; it's very good.
Make sure the target system has the correct Microsoft Visual C runtime, and that you are not accidentally building the C DLL with a debug runtime.
This is an interesting dilemma. I've never heard of a problem loading native DLLs from C++/CLI after a call into it from C# before. I can only assume the problem is as @Daniel L suggested, and that your DLL simply isn't in a path the assembly loader can find.
If Daniel's suggestion doesn't work out, I suggest you try statically linking the native C code into the C++/CLI program, if you can. That would certainly solve the problem, as the DLL would then be entirely absorbed into the C++/CLI DLL.
Had the same problem switching to 64-bit Vista. Our application was calling Win32 DLLs, which was confusing the target build for the application. To resolve it we did the following:
Go to project properties;
Select Build tab;
Change 'Platform target:' option to x86;
Rebuild the application.
When I re-ran the application it worked.
Related
I wrote a Win32 DLL (C++ with /clr support, in VS 2010/13) as an extension for an old VB6 app, and it uses the open-source DLL PDFSharp.
It works fine, but if PDFSharp.dll is removed from the directory, the application crashes when the program tries to load my DLL.
I want to include the PDFSharp DLL in mine, so that only one DLL is needed.
I tried adding it to the resources and loading it / catching the error at run time with
AppDomain^ root = AppDomain::CurrentDomain;
root->AssemblyResolve += gcnew ResolveEventHandler(MyResolveEventHandler);
in the first function that the app calls, but my problem is that the app/DLL crashes before I can handle anything.
ILMerge can't help, because mine is a mixed Win32/.NET (/clr) DLL, not a 100% .NET DLL.
C++/CLI mixed-mode DLLs have two sets of references: the native imports in the PE header, and the .NET assembly references. Problems finding the native imports will cause the symptom you observed, that loading the assembly fails early during load and cannot be intercepted and recovered.
It's not clear to me why the native dependency rules are applicable here. For a true native dependency that needs to be located using an alternate search order under your control, delay-loading could be applied. But that can't be used with a referenced .NET assembly.
In any case, the simplest fix is to not need a separate assembly at all. Your goal is single file deployment, and the ideal single file deployment scenario is when all the code is contained in a single DLL and you don't need to unpack a second file at runtime.
For pure .NET assemblies, there is an ILMerge tool that combines multiple DLLs into a single file. But your case has a C++/CLI mixed mode DLL, not pure MSIL.
Using multiple languages in a native program generally works a little bit differently. Instead of producing a complete executable from each toolset, native code standardizes an object file format (Windows .obj, Linux .o) which all the various toolsets know how to produce, and then the link step can link together object files from a variety of languages. The object files are often bundled into static libraries. (A static library is just an archive of object files, with a symbol index) Because the C++/CLI toolset is patterned on native C++, it uses this model as well.
The .NET version of this language-independent "object file" which can be further linked is a .netmodule file. Internally, it is a .NET assembly without a manifest. Functionally, it acts like a static library. And the C++/CLI link.exe can link C# (and VB, F#, etc.) .netmodule static libraries together with C++/CLI object files and static libraries, and native object files and libraries, when it creates the mixed-mode assembly.
This isn't the most straightforward process, because while it is supported by the underlying toolchains, the Visual Studio project options dialog boxes don't have a UI for either creating or consuming .netmodule static libraries.
For the C# side to produce a .netmodule, you should open your .csproj file and change the <OutputType> setting to module. Then reopen the project in Visual Studio and build as usual.
On the C++/CLI side, the project options dialog allows you to customize the compile and link command-lines. Change the linker command to include /link and the name of the .netmodule file.
If you've done it right, the C++/CLI linker will create a single mixed-mode DLL with all the types and code from both the C# and C++/CLI source files. And all the internal usage between C# and C++/CLI will be already resolved, so you won't have to worry about missing dependencies at run time. Well, at least not these dependencies; any you didn't choose to link in will still be handled normally.
I have a C++/CLI library that is in turn calling a C# library. That is fine; it links implicitly and all is good with the world. But for various reasons the libraries are not getting quite the perfect treatment by our automated build process, and they are not finding each other unless we move them to locations that we would rather not have them in and would rather not fold into our build process.
It has been suggested that we could write a post-build event that uses XCOPY, but let's say we don't want to do that.
Another suggestion is to explicitly load the DLL. The Windows documentation says that to link explicitly, "Applications must make a function call to explicitly load the DLL at run time." The problem is that Microsoft's example is not enough for my small mind to understand how to proceed with this idea. Worse, the only example I could find is out of date. Perhaps I am not using the right search terms, but I am having difficulty finding more about it with Google.
How do we explicitly link a C++/CLI library to a C# .dll?
Edit:
OK: how do we explicitly link C++/CLI code, which exports a library using __declspec(), to a C# .dll?
There is no such thing as a "C++/CLI library", only assemblies are supported. There is no explicit or implicit linking, binding always happens at runtime. Assemblies are found at runtime by the CLR, the rules it uses to locate them are described in detail in the MSDN library.
Copying all dependencies into the same directory as the EXE is the sane way to go about it while you are developing the code. This is well supported by the build system; the C# and C++ rules are however different. C++ projects build to the solution's Debug directory, while C# projects build to the EXE project's bin\Debug directory. So yes, altering a C++ project's Output Directory setting or copying files with a post-build event is usually required to get everything together.
OK, this question is more about understanding what the issues are, as I don't think anyone will be able to tell me how to fix the problem.
I am writing a .NET 4 application and I have a third-party DLL (HASP dongle protection) that I want to reference.
Visual Studio allows me to create the reference fine and to use classes contained within the DLL in my code.
The first issue occurs when the program is run and the DLL is actually loaded. I then get the following error:
System.BadImageFormatException: Could not load file or assembly
'hasp_net_windows.dll' or one of its dependencies. is not a valid
Win32 application
This weblink states how to fix this error. Could someone explain what the issue is and why I'm getting it?
After following this advice I set the main project build to x86, and I then get another error replacing the first. The new error is:
System.IO.FileLoadException: Mixed mode assembly is built against
version 'v1.1.4322' of the runtime and cannot be loaded in the 4.0
runtime without additional configuration information
This weblink states how to fix the error, but I don't have an app.config in my project and want to avoid having one if at all possible. Again, if someone could explain what the issue is, that would be helpful.
Please let me know if you require any more information.
The issue is the "bitness" of your application. Once chosen (32-bit or 64-bit), all DLLs within that process need to be the same. This exception tells me that one of your DLLs is the wrong "bitness".
You simply cannot have DLLs with different compilation targets within a given process; a process has "bitness" affinity.
If this is a third-party unmanaged DLL then it is very likely compiled as 32-bit.
Setting the build output as x86 for the root project (the one that creates the exe) should suffice, as this will dictate the process that is created. Any other .NET projects can then simply be Any CPU and will run in either the 32-bit or 64-bit runtime.
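As a quick sanity check while sorting this out, you can have the process report its own bitness at run time; a minimal sketch you could drop into a throwaway console project:

using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit process.
        Console.WriteLine(IntPtr.Size == 8 ? "Running as 64-bit" : "Running as 32-bit");
    }
}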
Unfortunately for your second issue, the provided link is the way to solve it. There is nothing wrong with having an app.config in a project and you haven't stated why you don't want one.
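For reference, the fix that link describes boils down to a small app.config next to the executable; a minimal sketch, assuming the application targets .NET 4.0:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <!-- Allows the older mixed-mode hasp_net_windows.dll to load in the .NET 4 runtime. -->
    <supportedRuntime version="v4.0" />
  </startup>
</configuration>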
The answer by Adam Houldsworth notwithstanding, I'd like to add that it is possible to do it without an app.config. However, this requires a tiny bit more work and potentially a proper understanding of COM interop. Whether it's worth the trouble is up to you of course ;).
You can set useLegacyV2RuntimeActivationPolicy programmatically by using the ICLRRuntimeInfo::BindAsLegacyV2Runtime method.
A quick rundown on how to do this is posted in this blog post. Take note of his warning though, which might make you think twice about using this approach:
This approach works, but I would be very hesitant to use it in public
facing production code, especially for anything other than
initializing your own application. While this should work in a
library, using it has a very nasty side effect: you change the runtime
policy of the executing application in a way that is very hidden and
non-obvious.
I cannot use an app.config file because the assembly is loaded via COM from a native program.
I found the version of the library that supports .NET Framework 4.0 here. In this scenario, no other solution had worked for me.
Situation
I run a build system that executes many builds for many projects. To avoid one build impacting another, we lock down the build user to only its workspace. Builds run as non-privileged users who only have write access to their workspace.
Challenge
During our new build we need to use a legacy third-party DLL that exposes its interface through COM. The dev team wants to register the DLL (regsvr32.exe), but our build security regime blocks this activity. If we relax the regime then the third-party DLL will impact other builds, and if I have two builds which need two different versions I may have the wrong build compiling against the wrong version (a very real possibility).
Question
Are there any other options besides registration to handle legacy DLLs which expose their interface via COM?
Thanks for the help
Peter
For my original answer to a similar question see: TFS Build server and COM references - does this work?
A good way to compile .NET code that references COM components without the COM components being registered on the build server is to use the COMFileReference reference item in your project/build files instead of COMReference. A COMFileReference item looks like this:
<ItemGroup>
<COMFileReference Include="MyComLibrary.dll">
<EmbedInteropTypes>True</EmbedInteropTypes>
</COMFileReference>
</ItemGroup>
Since Visual Studio provides no designer support for COMFileReference, you must edit the project/build file by hand.
During a build, MSBuild extracts the type library information from the COM DLL and creates an interop assembly that can be either standalone or embedded in the calling .NET assembly.
Each COMFileReference item can also have a WrapperTool attribute but the default seemed to work for me just fine. The EmbedInteropTypes attribute is not documented as being applicable to COMFileReference, but it seems to work as intended.
See https://learn.microsoft.com/en-ca/visualstudio/msbuild/common-msbuild-project-items#comfilereference for a little more detail. This MSBuild item has been available since .NET 3.5.
It's a shame that no one seems to know anything about this technique, which to me seems simpler than the alternatives. Then again, it's not that surprising, since I could find only the one reference to it above online. I myself discovered this technique by digging into MSBuild's Microsoft.Common.targets file.
There's a walkthrough on registration-free COM here:
http://msdn.microsoft.com/en-us/library/ms973913.aspx
And excruciating detail here:
http://msdn.microsoft.com/en-us/library/aa376414
(the root of that document is actually here: http://msdn.microsoft.com/en-us/library/dd408052 )
Also, for building in general, you should be able to use Tlbimp to generate an interop assembly from the component's type library (or Tlbexp to go the other direction) that you can use for building, assuming the point of registering is just to be able to compile successfully, and not to run specific tests.
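For example, generating an interop assembly straight from the COM DLL's embedded type library might look like this (file names are illustrative):

tlbimp MyComLibrary.dll /out:Interop.MyComLibrary.dll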
Installation tools such as InstallShield can extract the COM interfaces from the DLLs and add them to the registry. They can also use the self-registration process of the DLL (which I believe is what regsvr32 does), but this is not a Microsoft installer best practice.
In .NET, COM is normally done through interop. To register a DLL in .NET (there they are called assemblies), there are several options: adding references via the VS IDE at the project level, writing code that loads and unloads the assembly, using a .config file that holds the reference to the assembly along with the using of that reference within the project, or the GAC.
If you have access to the third-party DLLs you can put them in the GAC and reference them from your project.
You can add a using directive to your .cs file header, as well as add the reference to the project by right-clicking References --> Add Reference...
You can also do the above step and set Copy Local = true in the properties for that DLL. I hope this gives you some ideas. Keep in mind that .NET assemblies are managed code, so there are several ways to consume those third-party DLLs from C# using other methods, such as Assembly.LoadFrom, etc.
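If the third-party DLL is itself a managed assembly, loading it explicitly at run time is only a few lines; a minimal sketch with illustrative names and paths:

using System;
using System.Reflection;

class ThirdPartyLoader
{
    static void Main()
    {
        // Load the assembly from an explicit path instead of relying on probing.
        Assembly asm = Assembly.LoadFrom(@"C:\ThirdParty\SomeLibrary.dll"); // hypothetical path
        Type type = asm.GetType("SomeLibrary.SomeClass");                   // hypothetical type name
        object instance = Activator.CreateInstance(type);
        Console.WriteLine(instance.GetType().FullName);
    }
}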
Thanks for all the help.
We changed from early-binding to late-binding because we never really needed the DLL at compile time. This pushed the registration requirement from the build server to the integration test server (where we execute the installer which handles the registration). We try to keep the build system pristine and have easy-to-reset integration systems.
Thanks again
Peter
I'm working with an external DLL to consume an OCR device, using a wrapper written by me. I have written tests for the wrapper and it works perfectly. But when I use a WinForms project to consume the client class of the wrapper (located in another project), an error arises when calling the C# methods imported from the DLL (using [DllImport(...)]), saying that the DLL is not registered.
The error says:
"DLL Library function no found. Check registry install path."
All executions have been made in debug mode.
I've compared both projects' configurations. The most relevant difference is that the test project targets Any CPU while the WinForms app targets only x86.
What could it be?
Updates
I've tried to register the DLL using regsvr32.exe, but it didn't work. I thought about using gacutil.exe, but it required uninstalling all frameworks beyond .NET Framework 1.1...
I was wondering... in the testing environment, perhaps everything works well because the testing framework has its DLLs or executable files (or something like that) fully registered in Windows, so those are trusted DLLs. Is it possible that debug-generated DLLs are not trusted by Windows, and that this is why the problem arises?
I've created a form in the same troublesome project and then called the OCR wrapper from a button I added to it. The OCR worked! Unfortunately, it is difficult to rewrite the first form because we have invested a lot of hours in it, so I'm still wondering what I need to change in the troublesome form...
I started the form's development again from scratch and added all the components related to it; everything worked well, and the OCR successfully read all the data. Then I loaded a combo box using a call to an ObjectContext and the error appeared again... I'm using Entity Framework connected to Oracle.
I have a theory.
Let's imagine the following situation:
The ocr.dll depends on some other native DLL; let's call it other.dll [A].
If this is a static dependency, you'll see it in Dependency Walker.
If it's dynamic, you can use Sysinternals Process Explorer to monitor DLL loading in your working test project at run time.
Your ADO.NET provider uses native DLLs under the hood (this is certainly true for ODP.NET) which depend on other.dll [B], which happens to have the same name but is actually a different DLL (or at least a different version) compared to other.dll [A].
Then, in run-time, this might happen:
When you connect to the database, the ADO.NET provider dynamically loads its native DLLs, including other.dll [B].
Then you try to call a function from the OCR DLL. The P/Invoke machinery loads the OCR DLL dynamically and succeeds, but other.dll [B] is already loaded, and ocr.dll tries to use some function from it instead of from other.dll [A], where the function actually exists.
Welcome to DLL hell. So what can you do?
Try varying the order of the calls to ocr.dll and the ADO.NET provider to see whether anything changes. If you are (very) lucky, other.dll [A] might actually be a newer version that is still backward-compatible with other.dll [B], and things might magically start to work.
Try another version of ADO.NET provider.
Try another ADO.NET provider.
Try getting a statically-linked ocr.dll from your vendor (i.e. no run-time dependency on other.dll [A]).
So, the call to the DLL works from a single button, but it does not work from a complex form. I'd say there is undefined behavior going on. The question remains whether it is you who wrote the marshalling incorrectly, or the DLL that is badly written.
Since we do not have access to the source code of the DLL, maybe you can post the prototype of the function, and all relevant struct definitions, and the DllImport line that you wrote for it?
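For illustration only (the real names, calling convention, and signatures have to come from the OCR vendor's header), the pieces being asked for would look something like this:

// Hypothetical C prototype from the vendor's header:
//   int __stdcall OcrReadDocument(const char* imagePath, char* buffer, int bufferLength);

using System.Runtime.InteropServices;
using System.Text;

static class OcrNative
{
    // The calling convention, character set, and parameter types must match
    // the native prototype exactly, or the call is undefined behavior.
    [DllImport("ocr.dll", CallingConvention = CallingConvention.StdCall, CharSet = CharSet.Ansi)]
    public static extern int OcrReadDocument(string imagePath, StringBuilder buffer, int bufferLength);
}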
Google can't find that error message, which suggests (though not definitively :)) that it is not a system message but a custom one coming from the code in the DLL. So the DLL does something dodgy; I guess it tries to double-dispatch your call to another function internally.
A few things I suggest you try:
Run an x86 configuration. In the project properties -> Build tab, set the platform target to x86. This is assuming the DLL is an x86 DLL.
dumpbin /headers ocr.dll
File Type: DLL
FILE HEADER VALUES
14C machine (x86)
4 number of sections
4CE7B6FC time date stamp Sat Nov 20 11:54:36 2010
0 file pointer to symbol table
0 number of symbols
E0 size of optional header
2102 characteristics
Executable
32 bit word machine
DLL
This command line should tell you the bitness. If it is 64-bit, run a 64-bit config instead, but I bet it is 32-bit.
Do not include the DLL in the project; I guess you do that already. Make sure the DLL is in a folder that is in the %PATH% environment variable. When you run this at a command prompt:
where ocr.dll
it should tell you where the DLL is. If it doesn't, add the folder where the DLL is installed to the %PATH%.
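If changing the machine-wide %PATH% is not an option, another approach worth trying is to prepend the DLL's folder to the process-local PATH before the first P/Invoke call; a minimal sketch with an assumed install folder:

using System;

static class OcrPathSetup
{
    public static void EnsureOcrFolderOnPath()
    {
        const string ocrDir = @"C:\Program Files (x86)\OcrVendor"; // hypothetical install folder
        string current = Environment.GetEnvironmentVariable("PATH") ?? string.Empty;
        // Affects only the current process and must run before the first call into ocr.dll.
        Environment.SetEnvironmentVariable("PATH", ocrDir + ";" + current);
    }
}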