Is compilation different for each targeted .NET platform? (C#)

In .NET we have several platforms, each of which is composed of its own runtime, its own base libraries, and its own supporting software for bootstrapping the runtime and so forth.
We can target a specific platform when we compile our code; in other words, we compile for a specific platform.
In the new .NET Core project model this is even clearer: in the project.json file we specify, in the frameworks section, the platforms we want to compile for by listing their TFMs.
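For example, a frameworks section listing two TFMs might look something like this (the TFM values here are just illustrative):

```json
{
  "frameworks": {
    "net451": {},
    "netstandard1.6": {}
  }
}
```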
My problem is that, as I understand it, the main difference between developing for one platform or another is the set of base libraries available (for the full .NET Framework we have the whole BCL, for instance). But this seems to be a "run-time issue" rather than a "compile-time issue".
The reason is that the code is deployed as IL to the specific platform, and only when it is about to run does it check whether the necessary assemblies from the required base libraries are available, right?
In that case, why is there this idea of "compiling for a specific platform"? Is the compilation process different for each platform? Is the generated IL different for each platform?

In that case, why is there this idea of "compiling for a specific platform"? Is the compilation process different for each platform? Is the generated IL different for each platform?
The IL is different, but generally only slightly; for example, the assembly flags may differ to indicate the target platform specified at compile time.
Of course, you may have conditionally-compiled code in your assembly, protected by #if directives. I assume you are not referring to that sort of difference. But just because the main part of the IL is the same from platform to platform, that doesn't necessarily mean you can run any IL on any platform.
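For illustration, a minimal sketch of that kind of conditionally-compiled code; the NET451 symbol is an assumption here (in that project model the build defines one such symbol per target framework):

```csharp
public static class PlatformInfo
{
    public static string Describe()
    {
#if NET451
        // Compiled only into the full .NET Framework 4.5.1 build of this assembly.
        return "full .NET Framework build";
#else
        // Compiled into builds for every other target (e.g. .NET Standard / .NET Core).
        return "non-desktop build";
#endif
    }
}
```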
Often, the target platform specified during compilation is a critical choice, because the managed code engages in some kind of interop with native code that's available only for a specific architecture. Another reason is that the program may require the x64 architecture for virtual address space reasons (i.e. the process expects to need more than the limited virtual address space available to x86 processes, nominally 2-4 GB depending on configuration).
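As a sketch of that native-interop point, imagine a managed wrapper over an architecture-specific vendor DLL; the DLL name and its export below are hypothetical:

```csharp
using System;
using System.Runtime.InteropServices;

internal static class NativeSdk
{
    // Hypothetical native export; a 32-bit vendor_sdk.dll can only be loaded by an x86 process.
    [DllImport("vendor_sdk.dll", CallingConvention = CallingConvention.Cdecl)]
    private static extern int InitializeDevice(int deviceId);

    public static void Initialize(int deviceId)
    {
        // Run as the wrong bitness and the DllImport fails (typically BadImageFormatException),
        // which is why compiling the managed assembly for x86 vs. x64 is a real decision.
        if (InitializeDevice(deviceId) != 0)
            throw new InvalidOperationException("Native SDK initialization failed.");
    }
}
```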

Related

Does ApiInformation not respect the app target version

Imagine the following setup:
UWP Library:
MinVersion: 10240
TargetVersion: 16299
This library checks at runtime if the UniversalApiContract Version 5 is present.
If yes, it will use the new NavigationView control.
UWP App:
MinVersion: 10240
TargetVersion: 10240
This app references the UWP Library project.
When I run this app on my Computer, which has Windows 10 Version 16299 installed, the following happens:
The UWP Library checks at runtime for the api contract. As I have the newest version of Windows 10, yes it is present.
Then it tries to create the NavigationView control, and I get a TypeLoadException with the message Could not find Windows Runtime type 'Windows.UI.Xaml.Controls.NavigationView'.
What? Why? Does the ApiInformation class not respect the target version of the running app?
What can I do to work around this issue?
I thought ApiInformation was the way to avoid this, but apparently not?!
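Roughly, the check inside the library looks like this (a simplified sketch of the scenario, not the exact code from the repo):

```csharp
using Windows.Foundation.Metadata;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

public static class NavigationFactory
{
    public static UIElement Create()
    {
        // On a 16299 machine this returns true, even though the *app* only targets 10240...
        if (ApiInformation.IsApiContractPresent("Windows.Foundation.UniversalApiContract", 5))
        {
            // ...yet this line throws TypeLoadException in the 10240-targeted app,
            // because the type is resolved against the app's view of the metadata (see the answer below).
            return new NavigationView();
        }

        // Fallback for systems where the contract really is absent.
        return new SplitView();
    }
}
```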
Here is a Github repository showcasing the error:
https://github.com/haefele/ApiInformationTargetVersionFail
If you set the target-version of the MyApp project to 16299, everything works fine.
[Edit May 29 2018]
The methods of the ApiInformation type are based on a trivial lookup of the WinRT metadata on disk: if the metadata is there, the call succeeds. This enables "light-up" of new features on new platforms without increasing your minimum version. What's important, though, is that ApiInformation knows nothing about the implementation of the API: sometimes the implementation might be missing (e.g. on early "Insider" builds of the OS) and sometimes it might not work due to "quirks" (see the example below). .NET also has a different view of the world due to the way the JIT and .NET Native toolchains work.
This can cause problems...
.NET apps use a concept of a "Union WinMD" which is the union of all known types (including Extension SDKs) that exist in the Windows SDK that corresponds to the MaxVersionTested setting of the app. If you're running the app on a down-level platform, ApiInformation will tell you the API doesn't exist, but .NET can still JIT methods based on the Union WinMD and perform some reflection tasks. If you actually try and call the API (because you forgot the ApiInformation check) you will get a MissingMethodException at runtime because the API doesn't really exist.
A different problem can occur if you include a higher-versioned .NET library inside a lower-versioned app and then try to run it on the higher-versioned build of the OS. In this case, ApiInformation will succeed because the type exists in the system metadata, but .NET will throw a MissingMethodException at runtime because the type didn't exist in the Union WinMD used to build the app.
Important: This is based on the Target Version (aka MaxVersionTested) of the app, not the library!
If you build a release version of the app, you will even see the .NET Native toolchain display a warning such as this in the Output window:
warning : ILTransform : warning ILT0003: Method 'Foo.Bar()' will always throw an exception due to the missing method 'SomeNewType.NewMethod()'. There may have been a missing assembly.
There is no good way around this, other than to build your application with the same target version as the library (so that it can resolve all the references).
Another problem you can encounter is when your app (or a library it consumes) uses APIs "from the future" that didn't exist in the OS version listed as the MaxVersionTested of the app. Many of the APIs will work, but some won't, due to incompatibilities with the simulated legacy mode the app is running in.
Hypothetical example of the library problem
Imagine that version X of the OS only supported black-and-white apps, where the background is always white and text, graphics, etc. are always black. Apps are built using this underlying assumption - including having graphics buffers that allocate only 1-bit-per-pixel, or never worrying about text being invisible because the background and foreground colours are the same. Everything is fine.
Now version Y of the OS comes out, and it supports colour graphics (say, 8-bits-per-pixel). Along with this new feature comes a pair of new APIs, SetForegroundColor() and SetBackgroundColor() that let you choose whatever colour you want. Any app (or library) that asks ApiInformation whether these two new APIs exists will succeed on version Y of the OS, and any app with a MaxVersionTested of at least Y can use them successfully. But for compatibility reasons they cannot work in an app that only targeted version X because it has no idea colours exist. Their graphics buffers are the wrong size, their text might become invisible, and so on. So the APIs will fail at runtime when used in an X-targeted app, even though the OS has the metadata (and the implementation) to support them.
Unfortunately there is no good way of handling this situation today, but it is a relatively rare occurrence. It is equivalent to a legacy Win32 library using LoadLibrary / GetProcAddress (or a legacy .NET library using reflection) to discover APIs that are "from the future."

C# Dynamic compile and replace/reload of assembly from within same assembly

I have several issues with several SDKs coming from OEM manufacturers for specific devices. An SDK is usually based on a C or C++ DLL, so I have a lot of marshaling going around (a lot === YOU CAN'T EVEN IMAGINE). Problems start with the next version of an SDK: when they extend some functions or some structures, they effectively break compatibility. In the past I made a copy of our library supporting their device and started making changes to support the new SDK. But each time our library supported only one specific SDK, and upgrades of our systems were tough (the installation script is one heavy thing too, a ~3 GB install).
I have 78 projects in the solution, commonly 4-5 libraries for each OEM manufacturer, and this is without any service tools. And yesterday I said NO MORE. I started researching how to recompile C# code at runtime and reload/replace the same assembly without quitting the app.
And the result is the following:
- The class file that defines the external C/C++ DLL API was moved to an external project referencing only System.dll. And, me being insane, I already had each SDK version's changes wrapped in #if / #elif / #endif, so I could recompile the last version of our library to support a previous version of the SDK (though that was maybe done only once). I used #defines along with CSharpCodeProvider to recompile this assembly at runtime (a rough sketch of that step follows the list below). The idea was like this:
Application loading ...
Open the main SDK file and get its file version (extract the version and identify it).
Load the original external assembly in a new AppDomain (so the domain can be destroyed later).
Extract the current version from the external assembly.
Destroy the new AppDomain to release the hook on the external assembly.
If the versions mismatch, recompile the external assembly (its source code is embedded within the parent assembly) and replace the original DLL with the freshly compiled one.
Continue loading application...
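Sketched out, the version check plus recompilation might look something like this; the file paths, the embedded source string, and the #define symbol are illustrative assumptions, and AssemblyName.GetAssemblyName stands in here for the temporary-AppDomain probe from the list above:

```csharp
using System;
using System.CodeDom.Compiler;
using System.Reflection;
using Microsoft.CSharp;

static class SdkWrapperUpdater
{
    public static void EnsureWrapperMatches(string wrapperDllPath, string embeddedSource,
                                            Version installedSdkVersion, string sdkDefineSymbol)
    {
        // AssemblyName.GetAssemblyName reads the version without loading the assembly
        // into the current AppDomain, so the file stays replaceable.
        Version wrapperVersion = AssemblyName.GetAssemblyName(wrapperDllPath).Version;
        if (wrapperVersion == installedSdkVersion)
            return; // wrapper already matches the installed SDK

        using (var provider = new CSharpCodeProvider())
        {
            var parameters = new CompilerParameters
            {
                OutputAssembly = wrapperDllPath,
                GenerateExecutable = false,
                // Selects the SDK-specific code paths guarded by #if directives in the source.
                CompilerOptions = "/define:" + sdkDefineSymbol
            };
            parameters.ReferencedAssemblies.Add("System.dll");

            CompilerResults results = provider.CompileAssemblyFromSource(parameters, embeddedSource);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException("Recompilation failed: " +
                                                    results.Errors[0].ErrorText);
        }
    }
}
```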
So far this test approach works on one live demo system, and I was amazed. Switching from one SDK to another was flawless, without any hiccups.
The code also recompiles itself only when the SDK version changes. So, with a safeguard in place, I could say this is the first metamorphic code I've written: code that recompiles/changes itself at runtime.
Unfortunately this approach requires me to add one more project for each OEM manufacturer's SDK, which effectively kills my original thought, the reason I said NO MORE. True, I now have only two libraries to maintain per OEM manufacturer, and there will be no more projects added after this. But...
I wonder, is there a better approach that would allow me to replace the DLL of the currently loaded assembly at runtime from truly within that same assembly? Or to change the executing code on the fly each time? This mainly concerns marshaled functions, classes, structures, constants, ...
Please note the code should be maintained from within the same project without any externals. Also note this project exposes only a hard-coded interface to the "outside" world (the interface lives in a referenced interface-only project; it is more complex than I describe here). But this "outside" world is blind to any OEM-specific stuff, which was the whole point of using an interface: to get exactly the same behavior across any OEM device.
Any ideas? thoughts? suggestions?

C# (MSIL) into native X86 code?

I would first like to say my goal is to convert MSIL into native x86 code. I am fine with my assemblies still needing the .NET Framework installed. NGEN is not what I want, as you still need the original assemblies.
I came across ilasm, and what I am wondering is: is this what I want? Will it produce pure assembly code?
I have looked at other projects like Mono (which does not support some of the key features my app uses) and .NET linkers, but they simply produce a single EXE bundled with the .NET Framework, which is not what I am looking for.
So far all my research has come up with is... you can't do it. I am really not sure why, as the JIT does exactly that when it loads the MSIL assembly. I have my own reasons for wanting this, so I guess my question(s) come down to this.
Is the link I posted helpful in any way?
Is there anything out there that can turn MSIL into x86 assembly?
There are various third-party code-protection packages available that hide the IL by encrypting it and packing it with a special bootloader that only unpacks it during runtime. This might be an option if you're concerned about disassembly of your code, though most of these third-party packages are also already cracked (somewhat unavoidable, unfortunately.) Simple obfuscation may ultimately be just as effective, assuming this is your underlying goal.
One of the major challenges associated with 'pre-jitting' the IL is that you end up including fixed address references in the native code. These in turn will need to be 're-based' when the native code is loaded for execution under the CLR. This means you need more than just the logic that gets compiled; you also need all of the reference context information necessary to rebase the fixed references when the code is loaded. It's a lot more than just caching code.
As with most things, the first question should be why instead of how. I assume you have a specific goal in mind, if you want to generate native code yourself (also, why x86? Why not x64 too?). This is the job of the JIT compiler - to compile an optimized instruction set on a particular platform only when needed, and execute it later.
The best source I can recommend for trying to understand how the CLR and the JIT work is taking a look at the SSCLI, an implementation of the CLR based on the ECMA-335 spec.
Have you considered not using C#? Given that the output of the C# compiler is MSIL, it would make sense to develop on a different platform if that is not what you want.
Alternatively, it sounds like NGEN does the operation you want; it just doesn't handle putting the entire thing into a single executable. You could analyze the resulting NGEN image to determine what needs to be done to accomplish that (note that NGENed images are PE files, per the documentation).
The NGEN documentation also describes where the images are stored: C:\windows\assembly\NativeImages_<CLR version>_<bitness>, for instance C:\windows\assembly\NativeImages_v2.0.50727_86. Note that .NET 3.0 and 3.5 both run on the 2.0 runtime.

What's the relation (if any) of MASM assembly language and ILASM?

What's the relation (if any) between MASM assembly language and ILASM? Is there a one-to-one conversion? I'm trying to incorporate Quantum GIS into a program I'm kind of writing as I go along! I have GIS on my computer and I have Red Gate Reflector, and neither it nor the Object Browser of Visual Studio 2008 could open one of the DLLs in Quantum (one of several whose behavior I don't have a strong clue about). I used the MASM assembly editor and "opened" the same DLL, and it spewed something I didn't expect to necessarily understand in the first place. How can I (can I?) convert that same "code" to something I can interact with in ILASM, and I'm assuming consequently in C#? Thanks a ton for reading and for all the responses to earlier questions... please bear in mind I'm relatively new to programming in C#, and even fresher to MASM and ILASM.
MASM deals with x86 instructions and is platform/processor dependent, while ILASM refers to the .NET CIL (Common Intermediate Language) instructions, which are platform/processor independent. Going from something specific to something more general is hard to achieve; that's why, AFAIK, there is no converter from MASM to ILASM (the inverse, however, does exist!)
IL is a platform-independent layer of abstraction over native code. Code written on the .NET platform in C#, VB.NET, or another .NET language all compiles down to an assembly .EXE/.DLL containing IL. Typically, when the IL is executed, the .NET runtime's JIT compiler translates it to native code on the fly (or you can pre-compile it with NGen, which stores native images in a cache from which they are loaded). This allows .NET platform code to be deployed to any platform supporting that .NET Framework, regardless of the processor or architecture of the system.
As you've seen, Reflector is great for viewing the code in an assembly because IL can easily be previewed in C# or VB.NET form. This is because IL instructions are generally a little higher level and also carry a lot of metadata that native code wouldn't normally have, such as class, method, and variable names.
It's also possible to pre-compile a .NET assembly to native code by calling Ngen.exe directly on it (setting the Visual Studio platform target only changes the architecture flags, not the IL). Once that's done, it's really difficult to make sense of the native code.
There is no relationship between MASM assembly language and ILASM. I don't see any way to convert native code to IL code. IL can be understood only by the CLR, while MASM assembly language is native machine code. The CLR turns the IL into native code at runtime.

.NET code compilation or complication?

Q1) Why is C# initially compiled to IL and then JIT compiled at runtime and run on top of a virtual machine(?). Or is it JIT compiled to native machine code?
Q2) If the second is true (JIT compiled to native machine code), then where is the .NET sandbox the code runs under?
Q3) In addition, why is the code compiled to IL in the first place? Why not simply compile to native machine code all the time? There is a tool from MS for this called NGen, but why is that optional?
The IL is JIT (Just-In-Time) compiled to native machine code as the process runs.
The use of a virtual machine layer allows .NET to behave in a consistent manner across platforms (e.g. an int is always 32 bits regardless of whether you're running on a 32- or 64-bit machine; this is not the case with C++).
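A quick way to see both halves of that point (a sketch, nothing beyond the base class library is assumed):

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(int)); // always 4 bytes in C#, on 32-bit and 64-bit machines alike
        Console.WriteLine(IntPtr.Size); // 4 in a 32-bit process, 8 in a 64-bit process
    }
}
```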
JIT compiling allows optimisations to dynamically tailor themselves to the code as it runs (e.g. apply more aggressive optimisations to bits of code that are called frequently, or make use of hardware instructions available on the specific machine like SSE2) which you can't do with a static compiler.
A1) JIT compiles to native machine code
A2) In .NET there is no such thing as a sandbox in that sense. There are AppDomains instead, and they run as part of the CLR (i.e. as part of the executing process).
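A small sketch of what that looks like from code (only base class library members are used):

```csharp
using System;

class DomainDemo
{
    static void Main()
    {
        // Managed code always executes inside some AppDomain of the host process.
        Console.WriteLine(AppDomain.CurrentDomain.FriendlyName);   // the domain this code runs in
        Console.WriteLine(AppDomain.CurrentDomain.IsFullyTrusted); // trust level used for isolation decisions
    }
}
```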
A3) NGen drawbacks from Jeffrey Richter:
NGen'd files can get out of sync. When the CLR loads an NGen'd file, it compares a number of characteristics about the previously compiled code and the current execution environment. If any of the characteristics don't match, the NGen'd file cannot be used, and the normal JIT compiler process is used instead.
Inferior Load-Time Performance (Rebasing/Binding). Assembly files are standard Windows PE files, and, as such, each contains a preferred base address. Many Windows developers are familiar with the issues surrounding base addresses and rebasing. When JIT compiling code, these issues aren't a concern because correct memory address references are calculated at run time.
Inferior Execution-Time Performance. When compiling code, NGen can't make as many assumptions about the execution environment as the JIT compiler can. This causes NGen.exe to produce inferior code. For example, NGen won't optimize the use of certain CPU instructions; it adds indirections for static field access because the actual address of the static fields isn't known until run time. NGen inserts code to call class constructors everywhere because it doesn't know the order in which the code will execute and if a class constructor has already been called.
You can use NGEN to create native versions of your .NET assemblies. Doing this means that the JIT does not have to compile them at runtime.
.NET is compiled to IL first and then to native code because the JIT was designed to optimize IL for the specific CPU the code is running on.
.NET code is compiled to IL for compatibility. Since you can create code using C#, VB.NET, etc., the JIT needs a common instruction set (IL) from which to compile to native code. If the JIT had to be aware of each language, it would need to be updated whenever a new .NET language was released.
I'm not sure about the sandbox question; my best guess is that a .NET app runs with three application domains. One domain contains the .NET runtime assemblies (mscorlib, System.dll, etc.), another domain contains your .NET code, and I can't recall what the third one is for.
Check out http://my.safaribooksonline.com/9780321584090
1. C# is compiled into CIL (or IL) because it shares a platform with the rest of the .NET languages (which is why you can write a DLL in C# and use it in VB.NET or F# without hassle). The CLR will then JIT compile the code into native machine code.
.NET can also be run on multiple platforms (Mono on *NIX and OS X). If C# compiled to native code, this wouldn't be nearly as easy.
2. There is no sandbox.
3. Covered in the answer to #1
A1) This way it's platform agnostic (Windows, Linux, Mac) and it can also use specific optimizations for your current hardware. When it gets JIT compiled it's to machine code.
A2) The whole framework (the .NET Framework) acts as the sandbox, so calls you make from your app go through the .NET Framework's checks.
A3) As in answer 1, it allows the .NET binary to work on different platforms and to apply machine-specific optimizations on the client machine on the fly.
Compiled .NET code becomes IL, which is an intermediate language in much the same way as Java's bytecode. Yes, it is possible to generate native machine code using the NGen tool. NGen binds the resulting native image to the machine, so copying the NGen'd binary to a different system would not produce the expected results. Compiling to intermediate code allows runtime decisions that can't (easily) be made with a statically compiled language like C++. It also allows the code to function on different hardware architectures, because the code becomes descriptive in the sense that it describes the intent of what should happen in a bitness-agnostic (e.g. 32- or 64-bit) way, as opposed to machine-specific code that only works on 32-bit systems or 64-bit systems but not both.
Also, NGen is optional because, as I said, it binds the binary to the system. It can be useful when you need the performance of precompiled machine code while keeping the flexibility of managed code, and you know that the binary won't be moving to a system it's not bound to.
