What code changes are required to migrate C# to 64-bit?

I learned that a working 32-bit C# Windows (or Console) Application may exhibit different functional behavior when compiled for 64-bits (by deselecting the 'Prefer 32-bit' compiler option in Visual Studio 2013). For more details, see: What causes significant loss of FP precision when compiling for 64-bit?
Surprised by the fact that multiple code changes were required (to create code that is functionally equivalent when compiled as 32-bit or 64-bit), I am asking here specifically whether anyone has come across other issues that required changes to existing C# code in order to make it function correctly as a 64-bit application.
Note: in both cases Windows 7 (or higher) 64-bit is used as the operating system.

Changes to code, no. .NET code is compiled to MSIL, which is platform independent.
The only change you have to make is to reference 64-bit assemblies instead of the 32-bit versions (or native assemblies written in C++). This is the case, for example, with ADO.NET providers that are written for a specific architecture.

No changes, because C# code is compiled into Common Intermediate Language (CIL; MSIL is the old name), which is independent of the architecture.
Except when you use P/Invoke magic like:
[DllImport("advapi32.dll", SetLastError=true)]
static extern bool AbortSystemShutdown(string lpMachineName);
With Windows DLLs or third-party DLLs you can run into trouble at run time when 32-bit DLLs are called from a C# program running as a 64-bit process.
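If the vendor ships both a 32-bit and a 64-bit build of the native library, one common workaround is to declare both imports and route by process bitness at run time. A minimal sketch, assuming hypothetical mylib32.dll/mylib64.dll libraries with an export named compute:

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Hypothetical vendor library shipped in two flavors.
    [DllImport("mylib32.dll", EntryPoint = "compute")]
    private static extern int Compute32(int x);

    [DllImport("mylib64.dll", EntryPoint = "compute")]
    private static extern int Compute64(int x);

    // Route to the import that matches the current process bitness.
    public static int Compute(int x)
    {
        return Environment.Is64BitProcess ? Compute64(x) : Compute32(x);
    }
}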

Related

Understanding 32-bit vs 64-bit in .Net

I've always skirted around the issue of building 64-bit desktop apps as I somehow got it in my head that it wasn't a trivial thing to do, plus I guess I didn't really understand the solution "configuration manager" dialog, platforms and configurations, either.
Anyway, I've just tried converting an existing application by simply changing all of the solution projects' platforms to x64, and it worked. The only issue I had was with a C++ DLL referenced by one of the projects, which needed rebuilding as x64.
So my question (finally) is: why did I only have an issue with the C++ DLL, but the many .Net assembly DLLs referenced by the projects in my solution were fine?
And what actually determines whether my application is built as 64-bit? I changed all of the solution's projects to "x64", but would it still have worked if they were left as "Any CPU", and I only changed the WPF (startup) project to "x64"?
Edit
So in the end it seems that I was simply able to set all project platforms to "Any CPU" rather than "x64" (the TFS server wasn't able to run unit tests with the latter). Even the projects referencing the unmanaged 64-bit DLL seem happy with this (not sure why).
The trick was to uncheck the Prefer 32-bit option in the WPF (startup) project's properties. The built application now runs as a 64-bit app, and TFS/unit tests run happily.
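For anyone who wants to verify the result, a quick sanity check (available on .NET 4.0 and later) is to print the process bitness at startup:

// True when running as a 64-bit process, regardless of OS bitness.
Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
// IntPtr.Size is 8 in a 64-bit process and 4 in a 32-bit one.
Console.WriteLine("Pointer size: " + IntPtr.Size);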
TL;DR: .Net assemblies are never assembled into machine-specific instructions. Instead they are compiled into MSIL, which runs under the .Net virtual machine; at run time the VM uses the JITer to produce machine instructions that the CPU can execute.
The differences between 32-bit and 64-bit that "break" object files are: pointer size, memory model, and most importantly the instruction set architecture (ISA). Let's break down each point and see why .Net assemblies are mostly unaffected.
Pointer Size (or Length)
In a 32-bit execution environment pointers are 32 bits long, and as you'd expect, in 64-bit they are 64 bits long. This fact can break your assembly in one of two ways:
Structures containing pointers will have their internal memory layout changed, as they need more (or fewer) bytes to store pointers (see the sketch after this list).
Pointer arithmetic - some clever developers like to play around with pointers: masking them, adding them and so on. In some cases the size of the pointer is essential for these computations to yield correct results.
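A minimal sketch of the first point (the Node type is invented for illustration): the same struct occupies a different number of bytes depending on the bitness of the process.

using System;
using System.Runtime.InteropServices;

struct Node
{
    public IntPtr Next;   // 4 bytes in a 32-bit process, 8 in a 64-bit one
    public int Value;
}

class Program
{
    static void Main()
    {
        Console.WriteLine(IntPtr.Size);                  // 4 or 8
        // 8 bytes on 32-bit; 16 on 64-bit (8 + 4, padded for alignment)
        Console.WriteLine(Marshal.SizeOf(typeof(Node)));
    }
}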
Why .Net assemblies are not affected
Since .Net languages are safe, you don't get to play with pointers (generally, unless you are in an unsafe context). Not having the option to play with pointers solves the second point. As for the first point, this is the reason you cannot get the size (using sizeof) of a class, nor the size of a struct containing class references.
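To see that restriction in action, a small sketch (the pointer-sized case needs the /unsafe compiler switch):

using System;

class Program
{
    static unsafe void Main()   // compile with /unsafe
    {
        Console.WriteLine(sizeof(int));     // 4 on every platform
        Console.WriteLine(sizeof(IntPtr));  // 4 or 8, depending on process bitness

        // sizeof(string) or sizeof(SomeClass) will not even compile:
        // error CS0208: Cannot take the size of a managed type
    }
}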
Memory Model
Old 32-bit processors had a feature called memory segmentation, which allowed the OS or a supervisor program to declare regions of memory that are accessed through special CPU registers called segment registers. In 64-bit mode this feature is mostly disabled. So programs compiled to work under memory segmentation may have problems working in a non-segmented environment.
Why .Net assemblies are not affected
Generally we don't deal with memory at this low a level; this is possible because of the .Net virtual machine, as explained in the next point.
Instruction Set Architecture
32-bit and 64-bit (x86 and AMD64) are related but distinct ISAs, which means that code assembled to run under one will not run under the other. So unlike the other two points, this one will break your assembly regardless of what you've written in it.
Why .Net assemblies are not affected
So how could .Net assemblies possibly be compiled as Any CPU? The trick is that when you compile a .Net language you never assemble it. When you press compile, you only compile your code into an intermediate object (known as a .Net assembly) that is not assembled (i.e. not run through an assembler).
Usually when you compile C/C++ code you first run it through a compiler that generates assembly instructions; these instructions are then passed down to an assembler, which generates machine instructions. The CPU is only capable of executing these machine instructions.
.Net languages are different. As mentioned above, when you compile a .Net language you run it through a compiler and stop there; no assembler is involved. The compiler here, too, generates assembler-like instructions, but unlike the instructions produced by a C/C++ compiler these are machine-agnostic: they are written in a language called MSIL (Microsoft Intermediate Language). No CPU knows how to execute these instructions; just like ordinary assembler instructions, they must first be assembled into machine instructions.
The trick is that this all happens at run time. When a user opens your program, they launch an instance of the .Net runtime in which your program runs; your program never runs directly against the native machine itself. When your program needs to call a method, a special component of the .Net virtual machine called the JITer (just-in-time compiler) is tasked with assembling that method from MSIL into machine instructions specific to the machine the .Net Framework was installed on.
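You can inspect which architecture flag an assembly carries without running it. A small sketch (the file path is hypothetical):

using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        // MSIL = Any CPU, X86 = 32-bit only, Amd64 = 64-bit only.
        AssemblyName name = AssemblyName.GetAssemblyName(@"C:\apps\MyApp.exe");
        Console.WriteLine(name.ProcessorArchitecture);
    }
}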
From a correctness standpoint 32/64 only matters when you are interoperating with native code, using low-level unsafe features, or allocating a lot (> 4GB) of memory. Absent those considerations you might as well just use AnyCpu. AnyCpu code will default to your host OS bitness (though that default can be altered), so for most people these days it will end up running in a 64 bit process.
There are sometimes performance reasons to explicitly prefer 32 or 64 bits. 32 bit apps will generally be more cache friendly. Compute-bound apps should perform better when run as 64 bit processes (though not always).

If C# code is JIT compiled why do I have to choose a target platform in Visual Studio when building?

My assumption could be wrong, but I thought that the main advantage of JIT compilation is that it allows a single distributable that the JIT compiler will compile for the target machine's particular hardware.
If this is true, then why can a target platform be specified when compiling C#?
Thanks.
Much of the time you don't need to, and the default is okay. However, there are some circumstances where the default Any CPU can cause problems. One example is if you have a dependency on a 32-bit unmanaged dll. If you leave it at Any CPU and try to run this on a 64-bit system, the JITter will build your .Net code into a 64-bit executable, which will then fail to successfully load the 32-bit dependency. There are other cases as well where it's advantageous to be able to specify the platform. Another example is when you're building an application that you know will be used to load very large data sets into memory. If those data sets could exceed the virtual address space available for 32-bit programs, you may need to specify that you're building a 64-bit application.
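A rough way to observe the address-space limit described above (the sizes are illustrative; the exact failure point varies with fragmentation and OS settings):

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var blocks = new List<byte[]>();
        try
        {
            for (int i = 0; i < 5000; i++)          // ~5 GB in total
                blocks.Add(new byte[1024 * 1024]);  // 1 MB per block
            Console.WriteLine("Allocated ~5 GB - this is a 64-bit process.");
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Ran out of address space - typical for 32-bit.");
        }
    }
}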

Does setting the platform when compiling a c# application make any difference?

In VS2012 (and previous versions...), you can specify the target platform when building a project. My understanding, though, is that C# gets "compiled" to CIL and is then JIT compiled when running on the host system.
Does this mean that the only reasons for specifying a target platform are to deliberately restrict users from running the software on certain architectures or to force the application to run as 32 bit on a 64 bit machine? I can't see that it would be to do with optimization, as I guess that happens at the CIL-->Native stage, which happens Just-In-Time on the host architecture?
This MS link does not seem to offer any alternative explanation, and I can find no suggestion that you should, for example, release separate 32/64-bit versions of the same application - it would seem logical that something compiled for "anycpu" should run just as well and, again, optimizations will be applied at the JIT stage.
Does this mean that the only reasons for specifying a target platform are to deliberately restrict users from running the software on certain architectures or to force the application to run as 32 bit on a 64 bit machine?
Yes, and this is critical if you're using native code mixed with your managed code. It does not change what gets optimized at runtime, however.
If your code is 100% managed, then AnyCPU (or the new AnyCPU Prefer 32-Bit) is likely fine. The compiler optimizations will be the same, and the JIT will optimize at runtime based on the current executing platform.
I can find no suggestion of the fact that you should, for example, release separate 32 / 64 bit versions of the same application
There is no reason to do this unless you're performing interop with non-managed code, in which case, that will require separate 32 and 64 bit DLLs.
Reed has a good answer here. However, I think it's also important to point out that this setting is just a flag in the DLL - in most situations it has no effect whatsoever. It is the runtime loader's (the bit of native code that starts the .NET runtime) responsibility to look at this flag and direct the appropriate version of the .NET runtime to be started up.
Because of this - the flag will mostly only matter when it is set on an EXE file - and have no effect when set on a DLL.
For example - if you have a '32-bit-flagged .NET DLL' which is used by either a 64-bit-flagged .NET EXE, or an any-cpu-flagged .NET EXE, and you run the EXE on a 64-bit machine - then the loader will start the 64-bit runtime. When it comes time to load the 32-bit DLL, it's too late - the 64-bit runtime has already been chosen, so your program will fail (I believe it's a BadImageFormatException that you will receive).
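A sketch of what that failure looks like in code (the DLL name is hypothetical):

using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        try
        {
            // Fails in a 64-bit process if the DLL was flagged x86-only.
            Assembly.LoadFrom("Legacy32BitOnly.dll");
        }
        catch (BadImageFormatException ex)
        {
            Console.WriteLine("Bitness mismatch: " + ex.Message);
        }
    }
}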

Will a .NET Windows Forms application work in a 64-bit OS or does it need to be modified?

Generally speaking, will a .NET Windows Forms application work in a 64-bit OS or does it need to be modified?
If it doesn't rely on a 32 bit external library (e.g. COM component), it'll work perfectly as a 64 bit process and will leverage its benefits (large address space, x64 instruction set, ...). If it relies on 32 bit stuff, most of the time, you can still run it as a 32 bit application by setting the target platform to x86.
Most .NET applications should work unmodified in 64 bits if they target x86 instead of Any CPU, which is the VS.NET default.
According to this link: MSDN - Migrating 32-bit Managed Code to 64-bit.
If you have 100% type safe managed code then you really can just copy it to the 64-bit platform and run it successfully under the 64-bit CLR.
But if you are using any of the following features:
Invoking platform APIs via p/invoke (see the example after this list)
Invoking COM objects
Making use of unsafe code
Using marshaling as a mechanism for sharing information
Using serialization as a way of persisting state
it indicates that the application might not be completely compatible.
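A classic instance of the p/invoke item above: declaring pointer-sized Windows values as int happens to work in a 32-bit process but truncates handles in a 64-bit one. SendMessage is the real user32 API; the broken variant is shown commented out.

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    // Broken on 64-bit: HWND, WPARAM and LPARAM are pointer-sized,
    // so declaring them as int silently truncates the upper 32 bits.
    //[DllImport("user32.dll")]
    //static extern int SendMessage(int hWnd, int msg, int wParam, int lParam);

    // Portable declaration: IntPtr tracks the process pointer size.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    public static extern IntPtr SendMessage(
        IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);
}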
For the most part it should work just fine. You should be careful if you are doing anything with native code, whether it be unsafe managed code, or interop/PInvoke, but if all of your code is managed you shouldn't have any problems.
A pure .NET application will run on a 64-bit operating system with no modifications.
If you use a C++/CLI library, use architecture specific COM components, or do any P/Invoke calls, you may need to update your application for a 64-bit environment.
Most 64-bit OS's are able to handle 32-bit apps without problems. This is why you see a Program Files (x86) folder on your 64-bit OS to handle a lot of your old 32-bit apps.
As long as you don't mix and match library platforms, you'll be fine. Target x86 when you compile and you should be good to go.

What type of binary and execution environment is used by Visual Basic 6?

We are planning to move a VB6 binary only from one Windows machine to another in production environment.
What type of Windows OS cannot run a VB6 binary?
What type of extra run time application needs to be installed to make sure that a VB6 binary would run on that machine?
Underneath the hood how is a VB6 binary different from a C# binary?
VB6 makes heavy use of COM objects that you will have to make sure are installed on the new system.
This means installing the VB6 runtime on that machine and any other COM/ActiveX controls that the application uses.
VB6 apps are quite different from C# apps. C# compiled code is represented as CIL, while VB6 compiles to either native code or the old P-code. They also use different standard libraries.
I am not sure about operating system compatibility. If it's a 32-bit app, then I imagine you could coax it into running on any version of Windows.
