I am attempting to write a C++/CLI interop layer between an existing C++ codebase and a C# WPF app. My C++ libraries already overload global new and delete in order to implement my own memory tracking and other niceties. So the dependencies look something like this:
(Native library compiled to static lib)->(CLI layer)->(C# WPF application)
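For context, the kind of global overloads involved look roughly like this (a minimal sketch; TrackAlloc/TrackFree are hypothetical stand-ins for the actual tracking hooks, and error handling is omitted):

#include <cstddef>
#include <cstdlib>

// hypothetical tracking hooks provided elsewhere in the native library
void TrackAlloc(void* p, std::size_t size);
void TrackFree(void* p);

void* operator new(std::size_t size)
{
    void* p = std::malloc(size);
    TrackAlloc(p, size);   // record the allocation
    return p;
}

void operator delete(void* p) noexcept
{
    TrackFree(p);          // record the free
    std::free(p);
}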
However, whenever I include my native libraries and try to build my C++/CLI project, I run into a conflicting symbol for global operator delete, which is already defined in msvcrtd:
Error LNK2005 "void __cdecl operator delete(void *)" (??3@YAXPAX@Z) already defined in msvcrtd.lib(delete_scalar.obj)
I'm not sure how to get my build to use my global delete instead of the one in the default library. I have tried making another pure native project that compiles to a DLL, compiling all of my static libs into that, and then having the interop layer load that DLL. This works, but I'd rather not have two layers of glue instead of one.
I'm using Visual Studio 2015.
Does the native component use the CRT as a DLL? This is a requirement when using mixed assemblies. If your statically linked library uses the static CRT, you will run into trouble.
Link with the /VERBOSE flag to see where this other operator delete comes from, then eliminate the library that pulls it in.
Related
I wrote a Win32 DLL (with CLR support, in VS 2010/13, C++) as an extension for another, older VB6 app, and it uses the open-source DLL PDFsharp.
It works fine, but if PDFsharp.dll is removed from the directory, the application crashes as soon as the program tries to load my DLL.
I want to include the PDFsharp DLL in mine, so that only one DLL is needed.
I tried adding it to the resources and loading/catching the error at run time with
// hook assembly resolution before any PDFsharp type is touched
AppDomain^ root = AppDomain::CurrentDomain;
root->AssemblyResolve += gcnew ResolveEventHandler(MyResolveEventHandler);
in the first function that the app calls, but my problem is that the app/DLL crashes before I can handle anything.
ILMerge can't help, because mine is a mixed Win32/.NET (CLR) DLL, not a 100% .NET DLL.
C++/CLI mixed-mode DLLs have two sets of references: the native imports in the PE header, and the .NET assembly references. Problems finding the native imports cause exactly the symptom you observed: loading the assembly fails early during load, before the failure can be intercepted and recovered from.
It's not clear to me why the native dependency rules are applicable here. For a true native dependency that needs to be located using an alternate search order under your control, delay-loading could be applied. But that can't be used with a referenced .NET assembly.
In any case, the simplest fix is to not need a separate assembly at all. Your goal is single file deployment, and the ideal single file deployment scenario is when all the code is contained in a single DLL and you don't need to unpack a second file at runtime.
For pure .NET assemblies, there is an ILMerge tool that combines multiple DLLs into a single file. But your case has a C++/CLI mixed mode DLL, not pure MSIL.
Using multiple languages in a native program generally works a little bit differently. Instead of producing a complete executable from each toolset, native code standardizes an object file format (Windows .obj, Linux .o) which all the various toolsets know how to produce, and then the link step can link together object files from a variety of languages. The object files are often bundled into static libraries. (A static library is just an archive of object files, with a symbol index) Because the C++/CLI toolset is patterned on native C++, it uses this model as well.
The .NET version of this language-independent "object file" which can be further linked is a .netmodule file. Internally, it is a .NET assembly without a manifest. Functionally, it acts like a static library. And the C++/CLI link.exe can link C# (and VB, F#, etc.) .netmodule static libraries together with the C++/CLI object files and static libraries, and native object files and libraries, when it creates the mixed-mode assembly.
This isn't the most straightforward process, because while it is supported by the underlying toolchains, the Visual Studio project options dialog boxes don't have a UI for either creating or consuming .netmodule static libraries.
For the C# side to produce a .netmodule, you should open your .csproj file and change the <OutputType> setting to module. Then reopen the project in Visual Studio and build as usual.
On the C++/CLI side, the project options dialog allows you to customize the compile and link command lines. Add the name of the .netmodule file to the linker command line (if you are editing the compiler command line instead, put it after /link so that it is passed through to the linker).
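For example, assuming the C# project produces CSharpLib.netmodule (the name and path are assumptions for illustration), the C++/CLI source references the module's metadata with #using, while the same file is handed to the linker as an additional input:

#using "CSharpLib.netmodule"   // makes the C# types visible to the C++/CLI compiler

void Demo()
{
    // Types defined in the C# module can now be used directly, for example:
    // CSharpLib::Helper^ h = gcnew CSharpLib::Helper();   // hypothetical type
}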
If you've done it right, the C++/CLI linker will create a single mixed-mode DLL with all the types and code from both the C# and C++/CLI source files. And all the internal usage between C# and C++/CLI will be already resolved, so you won't have to worry about missing dependencies at run time. Well, at least not these dependencies; any you didn't choose to link in will still be handled normally.
I have a C++/CLI library that is in turn calling a C# library. That is fine; it is linking implicitly and all is good with the world. But for various reasons the libraries are not getting quite the perfect treatment by our automated build process, and the libraries are not finding each other unless we move them to locations that we would rather not have them in, and would rather not fold into our build process.
It has been suggested that we could write a post-build event that uses XCOPY, but let's say we don't want to do that.
Another suggestion is to explicitly load the DLL. Windows says that to link explicitly, "Applications must make a function call to explicitly load the DLL at run time." The problem is that Microsoft's example is not enough for my small mind to understand how to proceed with this idea. Worse, the only example I could find is out of date. Perhaps I am not using the right search terms, but I am having difficulty finding more about it with Google.
How do we explicitly link a C++/CLI library to a C# .dll?
----edit
OK: how do we explicitly link C++/CLI code, which exports a library using __declspec(), to a C# .dll?
There is no such thing as a "C++/CLI library", only assemblies are supported. There is no explicit or implicit linking, binding always happens at runtime. Assemblies are found at runtime by the CLR, the rules it uses to locate them are described in detail in the MSDN library.
Copying all dependencies into the same directory as the EXE is the sane way to go about it while you are developing the code. This is well supported by the build system, although the C# and C++ rules differ: C++ projects build to the solution's Debug directory, while C# projects build to the EXE project's bin\Debug directory. So yes, altering a C++ project's Output Directory setting or copying files with a post-build event is usually required to get everything together.
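If you really do need to pull an assembly from a non-standard location at runtime, the usual hook is AppDomain::AssemblyResolve. A minimal sketch in C++/CLI (the folder path is an assumption for illustration, and the static ref class is just one way to wire it up):

using namespace System;
using namespace System::IO;
using namespace System::Reflection;

ref class AssemblyLoader abstract sealed
{
public:
    static Assembly^ OnResolve(Object^ sender, ResolveEventArgs^ args)
    {
        // args->Name is the full assembly name, e.g. "MyCSharpLib, Version=..."
        String^ shortName = (gcnew AssemblyName(args->Name))->Name;
        String^ candidate = Path::Combine("C:\\MyLibs", shortName + ".dll");   // assumed folder
        if (File::Exists(candidate))
            return Assembly::LoadFrom(candidate);
        return nullptr;
    }
};

// Hook it up before any type from the missing assembly is touched:
// AppDomain::CurrentDomain->AssemblyResolve += gcnew ResolveEventHandler(&AssemblyLoader::OnResolve);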
I'm in the process of wrapping a pure unmanaged VC++ 9 project in C++/CLI in order to use it plainly from a .NET app. I know how to write the wrappers, and that unmanaged code can be executed from .NET, but what I can't quite wrap my head around:
The unmanaged lib is a very complex C++ library and uses a lot of inlining and other features, so I cannot compile it into the /clr-marked managed DLL. I need to compile it into a separate DLL using the normal VC++ compiler.
How do I export symbols from this unmanaged code so that it can be used from the C++/CLI project? Do I mark every class I need visible as extern? Is it that simple or are there some more complexities?
How do I access the exported symbols from the C++/CLI project? Do I simply include the header files of the unmanaged source code, and will the C++ linker take the actual code from the unmanaged DLL? Or do I have to hand-write a separate set of "extern" classes in a new header file that points to the classes in the DLL?
When my C++/CLI project creates the unmanaged classes, will the unmanaged code run perfectly fine in the normal VC9 runtime, or will it be forced to run within .NET, causing more compatibility issues?
The C++ project creates lots of instances and has its own custom-implemented garbage collector, all written in plain C++, it is a DirectX sound renderer and manages lots of DirectX objects. Will all this work normally or would such Win32 functionality be affected in any way?
You can start with an ordinary native C++ project (imported from, say, Visual Studio 6.0 from well over a decade ago) and when you build it today, it will link to the current version of the VC runtime.
Then you can add a single new foo.cpp file to it, but configure that file so it has the /CLR flag enabled. This will cause the compiler to generate IL from that one file, and also link in some extra support that causes the .NET framework to be loaded into the process as it starts up, so it can JIT compile and then execute the IL.
The remainder of the application is still compiled natively as before, and is totally unaffected.
The truth is that even a "pure" CLR application is really a hybrid, because the CLR itself is (obviously) native code. A mixed C++/CLI application just extends this by allowing you to add more native code that shares the process with some CLR-hosted code. They co-exist for the lifetime of the process.
If you make a header foo.h with a declaration:
void bar(int a, int b);
You can freely implement or call this either in your native code or in the foo.cpp CLR code. The compiler/linker combination takes care of everything. There should be no need to do anything special to call into native code from within your CLR code.
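For instance (a trivial sketch): bar can live in an ordinary native .cpp file and be called from the /clr-compiled foo.cpp with no extra ceremony:

// nativeimpl.cpp -- compiled WITHOUT /clr
#include "foo.h"

void bar(int a, int b)
{
    // ordinary native implementation
}

// foo.cpp -- compiled WITH /clr
#include "foo.h"

void CalledFromManagedCode()
{
    bar(1, 2);   // the linker resolves this against the native object file
}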
You may get compile errors about incompatible switches:
/ZI - Program database for edit and continue, change it to just Program database
/Gm - you need to disable Minimal rebuild
/EHsc - C++ exceptions, change it to Yes with SEH Exceptions (/EHa)
/RTC - Runtime checks, change it to Default
Precompiled headers - change it to Not Using Precompiled Headers
/GR- - Runtime Type Information - change it to On (/GR)
All these changes only need to be made on your specific /CLR enabled files.
As mentioned by Daniel, you can fine-tune your settings at the file level. You can also play with '#pragma managed' inside files, but I wouldn't do that without reason.
Keep in mind that you can create a complete mixed-mode assembly. That means you can compile your native code unchanged into this file, PLUS a C++/CLI wrapper around that code. In the end, the same file is both a native DLL with all your exported native symbols AND a full-fledged .NET assembly (exposing C++/CLI objects) at the same time!
That also means you only have to care about exports as far as native client code outside your file is concerned. Your C++/CLI code inside the mixed DLL/assembly can access the native data structures using the usual access rules (provided simply by including the header).
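As an illustration, a wrapper inside the mixed DLL/assembly can simply hold a raw pointer to the native class and forward calls. This is only a rough sketch; NativeRenderer (and its Play method) is a placeholder for whatever native class you expose:

#include "NativeRenderer.h"   // plain native header, included as usual

public ref class RendererWrapper
{
public:
    RendererWrapper() : m_impl(new NativeRenderer()) { }
    ~RendererWrapper() { this->!RendererWrapper(); }          // destructor acts as Dispose
    !RendererWrapper() { delete m_impl; m_impl = nullptr; }   // finalizer frees the native object

    void Play() { m_impl->Play(); }   // forward straight into the native code

private:
    NativeRenderer* m_impl;   // a raw native pointer is fine inside a ref class
};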
Since you mentioned it: I did this for a non-trivial native C++ class hierarchy, including a fair amount of DirectX code, so there is no fundamental problem here.
I would advise against using P/Invoke in a .NET-driven environment. True, it works. But for anything non-trivial (say, more than 10 functions) you are certainly better off with an OO approach as provided by C++/CLI. Your C# client developers will be thankful. You have all the .NET stuff like delegates/properties, managed threading and much more at your fingertips in C++/CLI, and starting with VS 2012, somewhat usable IntelliSense too.
You can use P/Invoke to call exported functions from unmanaged DLLs. This is how the unmanaged Windows API is accessed from .NET. However, you may run into problems if your exported functions use C++ objects rather than plain C data structures.
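A declaration for a plain C-style export looks roughly like this (a sketch in C++/CLI using the classic msvcrt puts example; the same [DllImport] attribute is written in C# with equivalent syntax):

using namespace System::Runtime::InteropServices;

// Explicit P/Invoke: the runtime loads msvcrt.dll and marshals the string argument.
[DllImport("msvcrt.dll", CharSet = CharSet::Ansi, CallingConvention = CallingConvention::Cdecl)]
extern "C" int puts(const char* message);

void Demo()
{
    puts("called through P/Invoke");
}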
There also seems to be C++ interop technology that can be of use to you: http://msdn.microsoft.com/en-us/library/2x8kf7zx(v=vs.80).aspx
I am trying to implement a COM interface in my C# dll for others to consume. I have defined an interface in foo.idl.
I've run foo.idl through tlbimp to produce foo.dll, a .NET assembly. Now, to implement my interface, I reference foo.dll from my DLL.
This works perfectly as it stands with one exception: I now have to distribute two dlls instead of one. This actually goes against the requirements of the project I'm working on: deliver one DLL.
Is there a way to merge the tlbimp dll into mine, or any other way to do this (implement a COM interface in C# without the second dll)?
A good disassembler gets the job done, like Reflector. You can simply disassemble the interop assembly and copy the generated C# interface declarations into your source code. Of course you should only do this if the interface declarations and IIDs are stable.
And definitely consider upgrading to VS2010. Its "embed interop types" feature allows you to ship your assembly without the interop assembly.
You could probably cheat by using a .tlb instead of the 'glue' dll.
I'd suggest you create a mixed-mode assembly using MSVC++/CLR
http://msdn.microsoft.com/en-us/library/k8d11d4s(v=vs.100).aspx
Interop (How Do I in Visual C++)
This might have the drawback that you can't use C# in the same assembly. Should you want to add C# code to the mix, you might be able to squeeze out of your tough situation using
ILMerge
For other, possibly interesting, thoughts see my earlier answer:
Is it possible to compile a console application into a single .dll file?
I'm getting System.IO.FileNotFoundException: The specified module could not be found when running C# code that calls a C++/CLI assembly which in turn calls a pure C DLL. It happens as soon as an object is instantiated that calls the pure C DLL functions.
BackingStore is pure C.
CPPDemoViewModel is C++/CLI calling BackingStore; it has a reference to BackingStore.
I tried the simplest possible case: add a new C# unit test project that just tries to create an object defined in CPPDemoViewModel. I added a reference from the C# project to CPPDemoViewModel.
A C++/CLI test project works fine with just the added ref to CPPDemoViewModel so it's something about going between the languages.
I'm using Visual Studio 2008 SP1 with .Net 3.5 SP1. I'm building on Vista x64 but have been careful to make sure my Platform target is set to x86.
This feels like something stupid and obvious I'm missing but it would be even more stupid of me to waste time trying to solve it in private so I'm out here embarrassing myself!
This is a test for a project porting a huge amount of legacy C code which I'm keeping in a DLL with a ViewModel implemented in C++/CLI.
edit
After checking directories, I can confirm that the BackingStore.dll has not been copied.
I have the standard unique project folders created with a typical multi-project solution.
WPFViewModelInCPP
    BackingStore
    CPPViewModel
    CPPViewModelTestInCS
        bin
            Debug
    Debug
The higher-level Debug appears to be a common folder used by the C and C++/CLI projects, to my surprise.
WPFViewModelInCPP\Debug contains BackingStore.dll, CPPDemoViewModel.dll, CPPViewModelTest.dll and their associated .ilk and .pdb files
WPFViewModelInCPP\CPPViewModelTestInCS\bin\Debug contains the CPPDemoViewModel and CPPViewModelTestInCS .dll and .pdb files, but not BackingStore. However, manually copying BackingStore into that directory did not fix the error.
CPPDemoViewModel has the Copy Local property set, which I assume is responsible for copying its DLL when it is referenced. I can't add a reference from a C# project to a pure C DLL - it just says a reference to BackingStore could not be added.
I'm not sure if I have just one problem or two.
I can use an old-fashioned copying build step to copy the BackingStore.dll into any given C# project's directories, although I'd hoped the new .net model didn't require that.
Dependency Walker is telling me that the missing file is GPSVC.dll, which has been suggested to indicate security-setting issues. I suspect this is a red herring.
edit2
With a manual copy of BackingStore.dll to be adjacent to the executable, the GUI now works fine. The C# Test Project still has problems which I suspect is due to the runtime environment of a test project but I can live without that for now.
Are the C and C++ DLLs in the same directory as the C# assembly that's executing?
You may have to change your project output settings so that the C# assembly and the other DLLs all end up in the same folder.
I've often used the Dependency Walker in cases like this; it's a sanity check that shows that all the dependencies can actually be found.
Once your app is running, you may also want to try out Process Monitor on the code you are running, to see which DLLs are being referenced, and where they are located.
The answer for the GUI, other than changing output settings, was the addition of a Pre-Build Step
copy $(ProjectDir)..\Debug\BackingStore.* $(TargetDir)
The answer for the Test projects was to add the missing DLL to the Deployment tab of the testrunconfig. You can either do so by directly editing the default LocalTestRun.testrunconfig (appears in Solution under Solution Items) or right-click the Solution and Add a new test run config, which will then appear under the main Test menu.
Thanks for the answers on this SO question on test configurations for leading me to the answer.
The reason this happens is that either you are running managed code from DllMain, before the CRT has had an opportunity to be initialized. No managed code may be executed, directly or indirectly, as an effect of DllMain notifications. (See Expert C++/CLI: .NET for Visual C++ Programmers, chapter 11 and onward.)
Or you have no native entry point defined whatsoever, yet you have linked to MSVCRT. The CLR is automatically initialized for you with /clr; this detail causes a lot of confusion and must be taken into account. A mixed-mode DLL actually delay-loads the CLR by hot-patching all of the managed entry point vtables in your classes.
A number of class-initialization issues surround this topic; loader lock and delay-loading the CLR can be a bit tricky. Try to declare globals static, do not use #pragma managed/unmanaged, and isolate your code with /clr per file.
If you cannot isolate your native code from the managed code and are still having trouble (after taking some of these steps), you can also look towards hosting the CLR yourself, and perhaps going through the effort of creating a domain manager; that would ensure you are fully "in the loop" for runtime events and bootstrapping.
This is exactly why; it has nothing to do with your search path or initialization. Unfortunately the Fusion log viewer does not help much here (it, not Dependency Walker, is the usual place to look for .NET CLR assembly-binding issues).
Linking statically has nothing to do with this either. You can NOT statically link a C++/CLI application that is mixed mode.
Place your DllMain function into a file by itself (see the sketch after this list).
Ensure that this file does NOT have /CLR set in the build options (file build options)
Make sure you're linking with /MD or /MDd, and that all the dependencies you link against use the exact same CRT.
Evaluate your linker's settings for /DEFAULTLIB and /INCLUDE to identify any possible reference issues. You can declare a prototype in your code and use /INCLUDE to override default library link resolution.
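A minimal sketch of the first point, assuming a Windows DLL project (keep this one file compiled without /clr):

// dllmain.cpp -- this file must NOT be compiled with /clr, so nothing managed
// can run while the loader lock is held.
#include <windows.h>

BOOL APIENTRY DllMain(HMODULE module, DWORD reason, LPVOID reserved)
{
    // Do not call into managed code here, directly or indirectly.
    switch (reason)
    {
    case DLL_PROCESS_ATTACH:
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;
}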
Good luck, and do check that book; it's very good.
Make sure the target system has the correct MS Visual C runtime, and that you are not accidentally building the C DLL against a debug runtime.
This is an interesting dilemma. I've never heard of a problem loading native DLLs from C++/CLI after a call into it from C# before. I can only assume the problem is, as @Daniel L suggested, that your DLL simply isn't in a path the assembly loader can find.
If Daniel's suggestion doesn't work out, I suggest you try statically linking the native C code to the C++/CLI program, if you can. That would certainly solve the problem, as the .DLL would then be entirely absorbed into the C++/CLI .DLL.
Had the same problem switching to 64-bit Vista. Our application was calling Win32 DLLs, which was confusing the target build for the application. To resolve it we did the following:
Go to project properties;
Select Build tab;
Change 'Platform target:' option to x86;
Rebuild the application.
When I re-ran the application it worked.