I want to make use of .NET DLLs in Node.js. Does that mean I need to make those DLLs available to C/C++ using 'CLR hosting', à la
.NET Framework 4 Hosting Interfaces or
Hosting the Common Language Runtime
Unfortunately the example "Creating a nodejs native .Net extension" over at GitHub was a bit of a disappointment; just scroll down to the last step:
Change the "Common Language Runtime Support" option to No Common Language RunTime Support
and you know what I mean. Correction, to do that article justice: it suggests changing that option to "No Common Language RunTime Support" only for the file SharpAddon.cpp, so other .cpp files you add will have CLR support enabled (the default for a CLR project), which means you can in fact use .NET DLLs from those other .cpp files.
This question is actually a duplicate of "Using a .NET DLL in Node.js / serverside javascript", which was written at a time when there was not even a native Windows port of Node, so times might have changed, although Google makes me doubt it.
Update: node-gyp can do the manual steps below automatically when the binding.gyp file is set up properly. See this answer for the simplified procedure.
It turned out to be rather easy. After struggling with CLR hosting and getting data in and out of the host for a while, it turns out you can actually enable /clr for your node extension no problem (so far). Here's how:
follow the instructions on http://nodejs.org/api/addons.html to generate the project files
open the generated .sln in Visual Studio (I'm on VS 2010) and enable /clr in the project settings
now it probably won't build, and you have to let the (in this case actually quite helpful) error messages guide you to the flags that conflict with /clr
The flags that I had to change to make it work:
disable /EHsc (C++ exceptions)
disable /RTC1 and /RTCsu
Release: change /MT to /MD
Debug: change /MTd to /MDd
Release: change /GR- to /GR
Then you can mix managed and unmanaged code like this, referencing your .NET DLLs:
#pragma managed
#using <managed.dll>

// the managed part; the usual #include <node.h> and using namespace v8;
// from the generated addon skeleton are assumed further up in the file
void callManaged()
{
    managed::Class1^ c1 = gcnew managed::Class1();
    System::String^ result = c1->Echo("hola");
    System::Console::WriteLine("It works: " + result);
}

// the unmanaged part, using the node 0.x V8 API
#pragma unmanaged
Handle<Value> Method(const Arguments& args) {
    HandleScope scope;
    callManaged();
    return scope.Close(String::New("world"));
}
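To round the snippet off, here is a rough sketch (my own addition, not from the original answer) of the registration boilerplate that the node 0.x addon API of that era expects; the module name SharpAddon is just an example and must match the target name in the project/gyp file:

// continues the same file as above; registers Method so it is callable from JS
void Init(Handle<Object> exports) {
    // expose the unmanaged Method above as e.g. require('SharpAddon').hello()
    exports->Set(String::NewSymbol("hello"),
                 FunctionTemplate::New(Method)->GetFunction());
}

NODE_MODULE(SharpAddon, Init)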
Update: Just discovered this link with an easy how-to: http://joseoncode.com/2012/04/10/writing-your-first-native-module-for-node-dot-js-on-windows/
Sounds like edge.js is the new answer from the author of iisnode:
Edge.js supports using C# and .NET instead of writing native node.js extensions
These days, there are cmake-js and node-addon-api, which make things easier; on top of that, the stable ABI of node-addon-api means the module does not need to be recompiled when used with a newer version of Node.js.
See this answer for a short tutorial: https://stackoverflow.com/a/54339042/709537
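For a rough idea of what such a module looks like with node-addon-api (a minimal sketch of my own, not taken from the linked tutorial), the hello-world from above shrinks to:

// hello.cc - built against node-addon-api, e.g. with cmake-js or node-gyp
#include <napi.h>

Napi::String Hello(const Napi::CallbackInfo& info) {
    // a /clr-compiled helper such as callManaged() could be called from here too
    return Napi::String::New(info.Env(), "world");
}

Napi::Object Init(Napi::Env env, Napi::Object exports) {
    exports.Set("hello", Napi::Function::New(env, Hello));
    return exports;
}

NODE_API_MODULE(addon, Init)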
Related
How do you compile .cs files using C++
I have searched all through Mono's documentation and can't find a way to just compile C# code from the embedded Mono runtime in C++. I know how to open a C# .exe assembly file using the embedded Mono functions from C++, but I can't seem to find a way to just compile a .cs file to the .exe from C++.
I have also managed to compile the .cs files by calling the mcs.bat file via the CreateProcessA() function that Windows provides; however, this does not give me a way to log errors or even check whether compilation succeeded (it also feels like a hack rather than the official solution). The main reason I need this is so that I can recompile C# scripts on the fly by detecting when the source code has changed, among other conditions.
Does anyone know of a way to properly compile C# files using the embedded Mono runtime? And where to find the documentation for this? Currently I've been using the documentation here: http://docs.go-mono.com/?link=xhtml%3adeploy%2fmono-api-assembly.html which provides enough information for the most part.
Linking Mono in a DLL
Also, if you're familiar with embedding Mono: do you know how to use it in a DLL? I've managed to successfully link and compile it within a console application, but when I try to compile it as part of a dynamic library, I get unresolved external symbol errors (specifically for functions with the prefix __imp*).
Lastly, I'm using mono to embed C# as a scripting language for my game engine, however I don't know if there is a better (smaller) solution that I can use. If you know of any better solution feel free to leave a recommendation.
The Mono runtime is just that, a runtime, only for running code. But if you have the csc command installed, you can simply shell out to it:
#include <cstdlib>

int main() {
    // system() returns the command's exit status, so a non-zero value
    // means the compile failed
    int rc = std::system("csc yourfile.cs");
    return rc;
}
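Since the question specifically mentions wanting to log errors and detect failures, here is a slightly bigger sketch of the same idea (my own addition; the helper name compileCs is made up) that captures the compiler's output and exit status via _popen on Windows:

#include <cstdio>
#include <string>

// returns the compiler's exit status (0 on success) and collects its output
int compileCs(const std::string& file, std::string& log) {
    // "2>&1" folds stderr into stdout so error messages are captured as well
    std::string cmd = "csc " + file + " 2>&1";
    FILE* pipe = _popen(cmd.c_str(), "r");
    if (!pipe) return -1;

    char buffer[256];
    while (fgets(buffer, sizeof(buffer), pipe))
        log += buffer;

    return _pclose(pipe);
}

A non-zero return value means the compile failed, and log then contains the compiler's error messages.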
I wrote a Windows application using C# on .NET 2.0, and I want to hide the source code so that anyone using a decompiler tool (such as Reflector) can't read it.
I used Dotfuscator, but it only changed the function names, not the rest of the code.
UPDATE:
I want to hide the source code not because of a key that needs protecting, but to hide how the code works.
Thanks,
IL is by definition very expressive in terms of what remains in the body; you'll just have to either:
find a better (read: more expensive) obfuscator
keep the key source under your control (for example, via a web-service, so key logic is never at the client).
Well, the source code is yours, and unless you explicitly provide it you'll probably only be providing compiled binaries.
Now, these compiled binaries are IL code. To prevent someone "decompiling" and reverse engineering your IL code back to source code, you'll need to obfuscate the IL code. This is done with a code obfuscator. There are many in the marketplace.
You've already done this with dotfuscator, however, you say that it only changed the function names, not all the source code. It sounds like you're using the dotfuscator edition that comes with Visual Studio. This is effectively the "community edition" and only contains a subset of the functionality of the "professional edition". Please see this link for a comparison matrix of the features of the community edition and the professional edition.
If you want more obfuscation of your code (specifically to protect against people using tools such as Reflector), you'll need the professional edition of Dotfuscator, or another code obfuscator product that contains similar functionality.
As soon as people get their hands on your binaries they can reverse-engineer them. It's easier with languages that are compiled to bytecode (C# and Java) and harder with languages that are compiled to CPU-specific binaries, but it's always possible. Face it.
Try SmartAssembly
http://www.smartassembly.com/index.aspx
There are limits to the lengths obfuscation software can go to when hiding the contents of methods; fundamentally changing the internals without affecting correctness (and certainly performance) is extremely hard.
It is notable that code with many small methods tends to become far harder to understand once obfuscated, especially when the obfuscator shares names between methods in ways that appear to collide to the eye but not to the runtime.
Some obfuscators allow the generation of constructs which are not representable in any of the target languages; the set of operations allowable in CIL, for example, is far larger than what is expressible through C# or even C++/CLI. However, this often requires an explicit setting to enable (since it can cause problems). It can make decompilers fail, but some will just do their best and work around it (perhaps inlining the IL they cannot handle).
If you distribute the PDBs with the app, even more can be inferred from the additional symbols.
Just symbol renaming is not enough of a hindrance to reverse-engineering your app. You also need control flow obfuscation, string encryption, resource protection, meta data reduction, anti-reflector defenses, etc, etc. Try Crypto Obfuscator which supports all this and more.
Create a setup project for your application and install it on your friend's computer like any other piece of software. There are 5 steps to creating the setup project using Microsoft Visual Studio.
Step 1: Create a sample .NET project. I have named this project "TestProject"; after that, build your project in release mode.
Step 2: Add a new project by right-clicking on your solution, select a setup project and name it "TestSetup".
Step 3: Right-click on the setup project, choose Add Primary Output and select your project from the list displayed.
Step 4: Right-click the setup project and select View -> File System -> Application Folder. Now copy in whatever you want to be in the installation folder.
Step 5: Now go to your project folder and open the release folder; you can find the setup.exe file there. Double-click the "TestSetup" file and install your project on your own or any other computer.
I have a C++/CLI library that is in turn calling a C# library. That is fine, it is linking implicitly and all is good with the world. But for various reasons the libraries are not getting quite the perfect treatment by our automated build process, and they are not finding each other unless we move them to locations that we would rather not have them in, and would rather not fold into our build process.
It was suggested to me that we could write a post-build event that uses XCOPY, but let's say we don't want to do that.
Another suggestion is to explicitly load the DLL. Windows says that to link explicitly, "Applications must make a function call to explicitly load the DLL at run time." The problem is that Microsoft's example is not enough for my small mind to understand how to proceed with this idea. Worse, the only example I could find is out of date. Perhaps I am not using the right search terms, but I am having difficulty finding more about it with Google.
How do we explicitly link a C++/CLI library to a C# .dll?
----edit
OK: how do we explicitly link C++/CLI code, which exports a library using __declspec(), to a C# .dll?
There is no such thing as a "C++/CLI library"; only assemblies are supported. There is no explicit or implicit linking; binding always happens at runtime. Assemblies are found at runtime by the CLR, and the rules it uses to locate them are described in detail in the MSDN library.
Copying all dependencies into the same directory as the EXE is the sane way to go about it while you are developing the code. This is well supported by the build system; the C# and C++ rules are, however, different: C++ projects build to the solution's Debug directory, while C# projects build to the EXE project's bin\Debug directory. So yes, altering a C++ project's Output Directory setting or copying files with a post-build event is usually required to get everything together.
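If the assemblies really must live outside the CLR's normal probing path, one workable option (a sketch of my own, not part of the answer above; "ManagedLib" and the path are placeholders) is to hook AppDomain::AssemblyResolve from the C++/CLI side and load the C# assembly explicitly:

// C++/CLI, compiled with /clr
using namespace System;
using namespace System::Reflection;

ref class AssemblyResolver
{
public:
    // called by the CLR whenever normal probing fails to locate an assembly
    static Assembly^ OnAssemblyResolve(Object^ sender, ResolveEventArgs^ args)
    {
        AssemblyName^ requested = gcnew AssemblyName(args->Name);
        if (requested->Name->Equals("ManagedLib"))
            return Assembly::LoadFrom("D:\\OurLibs\\ManagedLib.dll");
        return nullptr;
    }

    // call this once, before the first type from ManagedLib is touched
    static void Install()
    {
        AppDomain::CurrentDomain->AssemblyResolve +=
            gcnew ResolveEventHandler(&AssemblyResolver::OnAssemblyResolve);
    }
};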
I'm in the process of wrapping a pure unmanaged VC++ 9 project in C++/CLI in order to use it plainly from a .NET app. I know how to write the wrappers, and that unmanaged code can be executed from .NET, but what I can't quite wrap my head around:
The unmanaged lib is a very complex C++ library and uses a lot of inlining and other features, so I cannot compile it into the /clr-marked managed DLL. I need to compile it into a separate DLL using the normal VC++ compiler.
How do I export symbols from this unmanaged code so that it can be used from the C++/CLI project? Do I mark every class I need visible as extern? Is it that simple or are there some more complexities?
How do I access the exported symbols from the C++/CLI project? Do I simply include the header files of the unmanaged source code, and will the C++ linker take the actual code from the unmanaged DLL? Or do I have to hand-write a separate set of "extern" classes in a new header file that points to the classes in the DLL?
When my C++/CLI project creates the unmanaged classes, will the unmanaged code run perfectly fine in the normal VC9 runtime, or will it be forced to run within .NET, causing more compatibility issues?
The C++ project creates lots of instances and has its own custom-implemented garbage collector, all written in plain C++, it is a DirectX sound renderer and manages lots of DirectX objects. Will all this work normally or would such Win32 functionality be affected in any way?
You can start with an ordinary native C++ project (imported from, say, Visual Studio 6.0 from well over a decade ago) and when you build it today, it will link to the current version of the VC runtime.
Then you can add a single new foo.cpp file to it, but configure that file so it has the /CLR flag enabled. This will cause the compiler to generate IL from that one file, and also link in some extra support that causes the .NET framework to be loaded into the process as it starts up, so it can JIT compile and then execute the IL.
The remainder of the application is still compiled natively as before, and is totally unaffected.
The truth is that even a "pure" CLR application is really a hybrid, because the CLR itself is (obviously) native code. A mixed C++/CLI application just extends this by allowing you to add more native code that shares the process with some CLR-hosted code. They co-exist for the lifetime of the process.
If you make a header foo.h with a declaration:
void bar(int a, int b);
You can freely implement or call this either in your native code or in the foo.cpp CLR code. The compiler/linker combination takes care of everything. There should be no need to do anything special to call into native code from within your CLR code.
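As a small illustration of that point (my own sketch, reusing the foo.h/bar names from above), the /clr-compiled foo.cpp can implement bar() with managed code while the rest of the native project calls it like any other C++ function:

// foo.cpp - the single file compiled with /clr
#include "foo.h"

void bar(int a, int b)
{
    // managed code is fine here; native callers just see an ordinary C++ function
    System::Console::WriteLine("bar called with {0} and {1}", a, b);
}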
You may get compile errors about incompatible switches:
/ZI - Program database for edit and continue, change it to just Program database
/Gm - you need to disable Minimal rebuild
/EHsc - C++ exceptions, change it to Yes with SEH Exceptions (/EHa)
/RTC - Runtime checks, change it to Default
Precompiled headers - change it to Not Using Precompiled Headers
/GR- - Runtime Type Information - change it to On (/GR)
All these changes only need to be made on your specific /CLR enabled files.
As mentioned by Daniel, you can fine-tune your settings at the file level. You can also play with '#pragma managed' inside files, but I wouldn't do that without reason.
Keep in mind that you can create a complete mixed-mode assembly. That means you can compile your native code unchanged into this file PLUS some C++/CLI wrapper around this code. In the end, you will have the same file working as a native DLL with all your exported native symbols AND as a full-fledged .NET assembly (exposing C++/CLI objects) at the same time!
That also means you only have to care about exports as far as native client code outside your file is concerned. Your C++/CLI code inside the mixed DLL/assembly can access the native data structures using the usual access rules (provided simply by including the header).
Because you mentioned it: I did this for a non-trivial native C++ class hierarchy including a fair amount of DirectX code. So, no fundamental problem here.
I would advise against the use of P/Invoke in a .NET-driven environment. True, it works. But for anything non-trivial (say, more than 10 functions) you are certainly better off with an OO approach as provided by C++/CLI. Your C# client developers will be thankful. You have all the .NET stuff like delegates/properties, managed threading and much more at your fingertips in C++/CLI, and starting with VS 2012 with a somewhat usable IntelliSense too.
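To make that OO approach concrete, here is a minimal wrapper sketch (my own example; NativeSound and its header are placeholder names, not from the answer) of a C++/CLI ref class that owns a native object and exposes it to C# callers:

// compiled with /clr inside the mixed-mode DLL/assembly
#include "NativeSound.h"   // hypothetical native class from the existing library

public ref class SoundWrapper
{
public:
    SoundWrapper() : native(new NativeSound()) { }

    // destructor maps to IDisposable::Dispose, so C# can use 'using (...)'
    ~SoundWrapper() { this->!SoundWrapper(); }

    // finalizer as a safety net if Dispose is never called
    !SoundWrapper() { delete native; native = nullptr; }

    void Play() { native->Play(); }   // forward to the native implementation

private:
    NativeSound* native;              // a plain native pointer is allowed in a ref class
};

From C# this can then be consumed with a plain using (var s = new SoundWrapper()) block, with the finalizer acting as a safety net for forgotten Dispose calls.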
You can use PInvoke to call exported functions from unmanaged DLLs. This is how unmanaged Windows API is accessed from .Net. However, you may run into problems if your exported functions use C++ objects, and not just plain C data structures.
There also seems to be C++ interop technology that can be of use to you: http://msdn.microsoft.com/en-us/library/2x8kf7zx(v=vs.80).aspx
I just looked at the source of Mono for the first time and I thought I would find a bunch of C or C++ code; instead I found 26,192 .cs files and 7 .cpp files.
I am not totally shocked, but it made me think of a question I've always had in the back of my mind:
How does a project end up being written in "itself" like this?
Was an older version of Mono more C/C++? Or was there an initial effort to create some kind of machine-coded compiler...
What's the "trick" here?
Mono's compiler is written in C#. You may want to read about compiler bootstrapping.
You should be looking for .c files instead of .cpp files: the Mono runtime is written in C, not C++.
I think it is also important to remember that Mono is both a virtual machine runtime (the JIT compiler, garbage collector, etc.) and a collection of class libraries that run on this framework (the System.Linq namespace, the XML parsers, etc.).
The majority of the .cs files you see are part of the class libraries. These are basically C# code that runs like your own C# code (with some exceptions; but basically it doesn't make sense for everyone to reinvent and re-distribute the wheel over and over, so these are the C# "base" class libraries). This is why you can download complex Mono programs at such small file sizes if Mono is already installed on the machine.
For Mono, the JIT, runtime and garbage collector are largely written in C/C++, as you would expect. If you ever get a low-level error, you will often see GNU debug tool dumps as you would in C, just with lots more useful information. The Mono framework is very good at taking any C# code and converting it to CIL code that can run anywhere, and they use whatever toolset is best suited to ensure the code does run anywhere (which in this case meant a runtime written in C, compiled on Linux).