As .NET matures, the JIT has become brilliantly lazy: it doesn't produce machine code unless it's needed. In general, this is a good thing.
However, if I am trying to warm up an application, I may prefer an aggressive JIT stance. Is there a way to configure a .NET application so that all methods of a class are JIT compiled simply because the class was constructed?
If yes, my favorite object-creational pattern could instantiate my application's object graph, and I would have everything JIT-ready simultaneously. That would be nice.
Can this be done?
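To make the intent concrete, this is roughly the kind of warm-up helper I have in mind; just a rough sketch using reflection and RuntimeHelpers.PrepareMethod, where MyService stands in for one of my own types:

    using System;
    using System.Reflection;
    using System.Runtime.CompilerServices;

    static class JitWarmup
    {
        // Ask the runtime to compile every method a type declares, right now.
        public static void PrepareType(Type type)
        {
            const BindingFlags All = BindingFlags.Public | BindingFlags.NonPublic |
                                     BindingFlags.Instance | BindingFlags.Static |
                                     BindingFlags.DeclaredOnly;

            foreach (MethodInfo method in type.GetMethods(All))
            {
                // Abstract methods have no body; open generic methods need
                // concrete type arguments before they can be JIT compiled.
                if (method.IsAbstract || method.ContainsGenericParameters)
                    continue;

                RuntimeHelpers.PrepareMethod(method.MethodHandle);
            }
        }
    }

    // e.g. JitWarmup.PrepareType(typeof(MyService)); during startup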
As a side note, you could use NGen.exe to produce native images of your DLLs at deployment time (NB: it's not a perfect solution, as it has some drawbacks; check the documentation carefully).
you mean, you want everything ahead-of-time compiled :-)
They finally did this for you - announced at Build 2014:
Somasegar told me that developers told Microsoft that .NET was always a very productive language to program in, but it didn’t always deliver the performance they were looking for. “We’ve been working on a lot of innovation to show developers that .NET is still a very viable platform for developers who want to build apps in this modern world,” he said. With the .NET Native ahead-of-time compiler, developers will see faster startup times, lower memory usage and overall better performance, Microsoft promises. This new feature is currently in preview and allows developers to target both the x64 and ARM platforms.
You can try it out (preview only at the moment, and possibly only working for 'universal' apps, so YMMV as to how useful it is); more info is available on MSDN.
Related
I am learning about the conversion of source code to machine code via the .NET and JRE frameworks. To start off, I did some research comparing the two processes and created this diagram. I need some help critiquing its correctness and, more importantly, filling in anything serious I missed, so I can better understand the compilation pathway.
Both .NET and Java compile down to bytecode: an intermediate language which contains instructions for a virtual machine. It's not machine code, because it cannot run directly on a physical machine. What happens instead (today at least; Java has a darker history in this regard) is that at runtime a just-in-time compiler translates the VM instructions into native code, which is then run directly. This has a major performance benefit over merely interpreting it.
They differ in this regard a little. Oracle's Java implementation (Hotspot) uses a clever mix of interpretation, measuring and JIT compiling just the parts that are heavily used and interpreting otherwise. This is to reduce initial impact by the JIT compiler (which needs to run upfront otherwise, lengthening process startup time) while still allowing good performance where needed. .NET on the other hand always JIT-compiles all code that is used (unused code is not compiled, though).
Edit (2019): By now .NET also has tiered compilation: code that turns out to run a lot is recompiled and optimized further.
As for a question you mentioned in your comments: Yes, the CLR and the JVM are the platforms such programs are run on. A virtual machine is a machine too, just less hardware-y. They both are tightly integrated with a corresponding framework, the Base Class Library for .NET and the Java class library for Java. Those are frameworks.
I saw this thread here. I was wondering if this is legit (it sounds like it) and what the drawbacks of doing this are. What does it entail to run it stand-alone on some architecture?
Thanks
Trying to create an operating system in a managed language is currently an "interesting research problem". This means that it seems possible, but there are still quite a few important issues that need to be resolved (for example, I wouldn't expect "managed windows" anytime soon).
For example, take a look at the Singularity project (also available at CodePlex). It still has some native parts, but very few of them. As far as I know, even the garbage collector is written in managed code (with some language extension that allows safe manipulation of pointers).
The trick is that even managed code is eventually compiled to native code. In .NET, the compilation is usually done by the JITter when you start the application. In Singularity, this is done in advance, so you run native code (but generated from managed code). Singularity has some other interesting aspects as well; for example, processes communicate via messages (and cannot dynamically load code), which makes it possible to do some aggressive optimizations when generating native code.
There's an open source project that's trying to achieve exactly that.
It's called the "Managed Operating System Alliance". Mainly targeted as a framework (supplying users with a compiler, libraries, interfaces, tools and an example kernel), it will also feature a complete operating system kernel and small apps.
For further information:
Website: http://mosa-project.org/projects/mosa
IRC: #mosa on freenode
It is legit. The drawbacks are clear: this is a microkernel. It is going to be a while before your video adapter driver is fully managed as well. That takes acquiring critical mass, with many devs and manufacturers jumping on the bandwagon. Difficult, but it has happened, with Linux as the obvious example.
This is being pursued by Microsoft as well. Singularity has been well publicized. It has evolved into a secret research project named Midori. There have been enough leaks about it to know its goal; Wikipedia has an article about it. I think many of the devs who worked on the original CLR joined this project. Whether it will come to a good end is an open question. If it does, the project's backer is probably enough to get that critical mass rolling.
Microsoft's Singularity project is an operating system architecture framework that will allow people to write customizable operating systems, and Microsoft's new operating system will probably be based on Singularity.
.NET is a very powerful framework; it has evolved to contain everything from metadata attributes to LINQ, and it certainly frees us from bad pointer errors.
Just like with Windows Phone and the iPhone, people will be able to write customizable operating systems for devices.
Today most firewalls and routers (the hardware ones) contain a customized Linux; that could be replaced with a Singularity kernel and your own business logic.
The Singularity kernel is small; it looks like a perfect alternative to embedded Windows/Linux.
I don't think there is any drawback, except that it is a totally new system and it will take time for hardware vendors to supply compatible devices, but that will happen in the future.
Here's the deal: I'm in the process of planning a mid-sized business application that absolutely must support Win2k. AFAIK, official .NET support for Win2k was scrapped a while ago (IIRC, it stopped at version 2.0).
Now, I already wrote (ages ago) libraries in C++ that allow me to accomplish the end result (i.e., finish this project) just as quickly as if I was writing this application with the help of the .NET Framework -- so .NET's RAD "advantage" is almost negated.
I'm sure a lot of people here deal with business applications that need to support old OS's. So, given my library situation, what advantage(s) are there for me in using .NET over native C++ and vice versa? I'm just not sure which of the two is right for the job -- because it seems that I could use either. Then again, there's that framework support issue to deal with...
I will gladly add more information, if required.
The last .NET version that runs under Windows 2000 is .NET 2.0 SP2. It does include the runtime features required by System.Core.dll (which is part of .NET 3.5).
The answer is YES, you can use .NET 3.5 SP1 under Windows 2000 if you're not going to use .NET 3.0 libraries (WCF, WF, WPF, CardSpace). But you have LINQ, LINQ to XML, LINQ to SQL.
The only thing you need to do is to deploy three core .NET 3.5 SP1 files:
System.Core.dll
System.Xml.Linq.dll (LINQ to XML)
System.Data.Linq.dll (LINQ to SQL)
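Once those three files sit next to your executable, LINQ to Objects and LINQ to XML work as usual even on a 2.0 SP2 runtime; a quick illustrative sketch (the element names are made up):

    using System;
    using System.Linq;          // extension methods live in System.Core.dll
    using System.Xml.Linq;      // XElement/XAttribute live in System.Xml.Linq.dll

    class LinqOnWin2k
    {
        static void Main()
        {
            // LINQ to Objects only needs System.Core.dll.
            int[] numbers = { 1, 2, 3, 4, 5, 6 };
            int evenSum = numbers.Where(n => n % 2 == 0).Sum();
            Console.WriteLine(evenSum);                      // 12

            // LINQ to XML additionally needs System.Xml.Linq.dll.
            var doc = new XElement("servers",
                new XElement("server", new XAttribute("name", "web01")),
                new XElement("server", new XAttribute("name", "web02")));
            Console.WriteLine(doc);
        }
    }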
Disadvantages of this method (read carefully):
Not sure whether it's permitted or forbidden by the EULA (end-user license agreement)
This scenario is not supported by Microsoft.
I'd look to see if Mono (mono-project) works for you, i.e. runs on Win2k. If it does, it would allow you to port your app to MS .NET and later OS versions should the need arise later in the project. Any .NET is going to be easier than C++, IMHO :)
The biggest difference is that you are (or your boss is) more likely to find developers to maintain your .NET code after you move on to other things.
C++ has the advantage of giving you job stability - although that might not be what you want. :)
I think, given your situation, it boils down to what you feel more comfortable writing. If C++ is a comfortable language for you, do that. It will help get you into the code and make it easier to finish.
I would also take care to keep the future in mind. If the Win2K requirement drops that might require you to rewrite if you wrote in C++. It might not. Just keep it in mind while you decide how to proceed.
You can develop with .NET but set the compiler options to target the .NET 2.0 framework. If the OS gets upgraded in the near (or far) future, you can upgrade your program to target the 3.5 framework. I would go this route as it allows for easier future maintenance by others.
Have you considered Delphi? You can download Turbo Delphi for free, and you can easily write code targeting Windows 2000. With Delphi, you get excellent RAD (arguably better than anything you'll find in C++... unless you use C++ Builder).
Delphi creates native code, and has no runtime requirements.
Of course, the downside is that if you don't know Delphi (which is Object Pascal) you have to familiarize yourself with a new language. However, if you know C++, you'll feel at home in Delphi in no time.
What can be done in VC++ (native) that can't be done with VC#?
From what I can tell, the only thing worth using native VC++ for is when you need to manage memory yourself instead of relying on the CLR garbage collector, and I haven't seen a purpose in doing that either (but that's for another question to be asked later).
Cross-platform development. Yes, Mono exists, and Java is somewhat more predictable about functioning EXACTLY the same on more platforms, but you can find a C/C++ compiler for just about any platform out there, which you can't with C#.
Also, linking against third-party libraries: while I'm sure there's a way to leverage them in C#, in C++ you can take advantage of them without interop (marshaling, etc.).
Edit: one last thing: RELIABLE memory management. Yes, you can use Dispose() and try-finally, but there's nothing quite like KNOWING the memory is gone when it's popped off the stack. Through techniques like RAII, when you use well-constructed classes you KNOW when your classes release resources, rather than waiting around for the GC to happen.
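(To make that contrast concrete: in C#, a using block gives you a deterministic Dispose(), but the object's memory still waits for the collector. A small sketch, with an arbitrary file name:)

    using System;
    using System.IO;

    class DisposeDemo
    {
        static void Main()
        {
            // Dispose() runs deterministically when the using block ends,
            // so the file handle is released right here...
            using (FileStream stream = new FileStream("log.txt", FileMode.OpenOrCreate))
            {
                stream.WriteByte(0x42);
            }

            // ...but the FileStream object's memory is reclaimed later,
            // whenever the GC runs. With C++ RAII, destruction and
            // deallocation both happen as the object leaves scope.
            Console.WriteLine("handle closed, memory still pending GC");
        }
    }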
With P/Invoke there is very little that is impossible in .NET (most obviously device drivers).
There are also things where the advice is to not use .NET (e.g. shell extensions, which get loaded into any process that opens a file dialogue1).
Finally there are things which will be much harder in .NET, if possible at all (e.g. creating a COM component that aggregates the FTM).
1 This can create a problem if that process is already using a different version of .NET. This should be alleviated in the future with .NET 4 having the ability to support side by side instances of the runtime.
I'm not sure if you're talking about language features or applications. My answer though is for applications / components.
Really there are only 2 things you cannot do in C# that you can do in C++.
You cannot use C#, or any other .Net language, to write a component for a system that only accepts native components
You cannot use C#, or any other .Net language, to alter certain properties of a CCW for which the CLR does not allow customization
The most notable item here is Device Drivers. This is a framework that only accepts native components and there is no way to plug in a managed component.
For everything else it's possible to do the same thing in C# as it is in C++. There are just a lot of cases where you simply don't want to and a native solution is better. It's possible for instance to manage and manipulate memory in C# via unsafe code or IntPtr. It's just not nearly as easy and generally there's no reason.
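For the curious, this is roughly what the unsafe/IntPtr route looks like; a minimal sketch (compile with /unsafe), not something you would normally reach for:

    using System;
    using System.Runtime.InteropServices;

    class UnmanagedBuffer
    {
        static unsafe void Main()
        {
            // Allocate 16 bytes outside the GC heap.
            IntPtr buffer = Marshal.AllocHGlobal(16);
            try
            {
                byte* p = (byte*)buffer.ToPointer();
                for (int i = 0; i < 16; i++)
                    p[i] = (byte)i;

                Console.WriteLine(p[5]);        // prints 5
            }
            finally
            {
                // Freed deterministically; the GC never sees this memory.
                Marshal.FreeHGlobal(buffer);
            }
        }
    }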
You can't write device drivers for one.
I think there are several important points:
You can do anything in C#/C++/Java/Python/Lisp or almost any other language; after all, they are all Turing complete ;)... The question is whether it suits your needs.
There is one big and extremely important limitation of C#... It runs on only a single platform, Windows... (Mono is still not mature enough.)
There are many applications where GC is just a waste of resources, applications that can't afford to give up half their memory until the next GC cycle: games, databases, video/audio processing and many other mission-critical applications.
Real-time applications (again games, video processing and so on). Non-deterministic GC makes life much harder for them.
In fact, most desktop applications (web browsers, word processors, the desktop environment itself, like Windows Explorer, KDE or GNOME) are written in compiled languages with careful thinking about resources... Otherwise, they would just be terribly bloated applications.
Whereas writing shell extensions in C# was possible on Windows XP, it is next to impossible to write shell extensions for Vista and Windows 7. Shell extensions and namespace extensions (and anything else that uses the new Properties system) must (kind of) be done in C++ unless you're into pain.
There are two obvious answers:
VC# can never run without the .NET framework. Native C++ can. That may be necessary in some areas (others have mentioned device drivers, but more common examples might simply be clients where the .NET framework is not installed). Perhaps you're distributing an application and you know not all of your customers are willing to install .NET, so your sales would go up if you made an app that just worked without the dependency on .NET. Or perhaps you're working on some mobile device where the couple of megabytes taken up by the .NET CF cannot be justified. Or shell extensions, where using .NET can cause nasty problems for the user.
And VC# can never use C++ language features. Native C++ can. (Managed C++ can too, of course, but that's a different issue.) There are, believe it or not, things that can be done more conveniently or elegantly in C++. And they're only accessible if you're programming in C++.
System calls are no problem, however. p/invoke lets you do those from C#, almost as easily as you could from C++.
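For example, declaring and calling a Win32 function from C# is only a few lines; a small sketch (GetTickCount64 picked arbitrarily):

    using System;
    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        // Win32 entry point in kernel32.dll, declared once...
        [DllImport("kernel32.dll")]
        internal static extern ulong GetTickCount64();
    }

    class Program
    {
        static void Main()
        {
            // ...and called like any other static method.
            Console.WriteLine("Uptime (ms): " + NativeMethods.GetTickCount64());
        }
    }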
inline assembler
You cannot use C++ libraries that export classes (P/Invoke can only be used for plain functions, AFAIK).
Callbacks with P/Invoke are awkward: you have to marshal a delegate as a native function pointer and keep it alive yourself.
Are C# in particular and .NET in general self-compiling yet (this is not a troll, I genuinely don't know)? If not, you can use VC++ to build C# and .NET, but you can't use C# to do the same job.
This is tongue in cheek, but it also is an answer to your question... you can screw things up much more severely in VC++ than you can in VC#. Not that you can't manage to screw things up severely in VC#, but in general you can screw them up more easily and more thoroughly in VC++.
Again, kind of tongue in cheek, but also an answer to your question. Perhaps not what you were hoping for, but... :-)
There are also hard real-time applications. Any language with a GC cannot be used, just in case it decides to collect during a time-constrained part of the code. Java was notorious for not even allowing you to try (hence the EULA about not using it for software "intended for use in the design, construction, operation or maintenance of any nuclear facility").
(yes, I know they've since made a modified version of Java for real time systems).
For example, it makes sense to use C++ if it's harder to translate the header files for existing libraries than it is to give up the existing managed libraries.
The main difference is:
C++ is a core language with which you can build stand-alone programs. These programs communicate directly with the operating system and nothing else. C++ compilers exist for more or less all platforms (operating systems).
C# is a language that conforms to the CLS. A program written in C# cannot start without a CLI engine (.NET Framework, Mono, etc.). A program written in C# communicates with the .NET framework AND with the operating system. You have a man in the middle. Like all service personnel, this man can help, but he also causes additional trouble. If you want to port, you have a different man in the middle, and so on. CLI implementations do not exist for all platforms.
In my opinion, every additional framework is an additional source of problems.
Using SSE instructions seems to be one of these cases. Some .NET runtimes will use some SSE instructions, depending on your code. But in VC++ you can use the SSE intrinsics directly. So, if you're writing multimedia code, you'd probably want C++. (C++/CLI might work as well, presumably.)
Writing fast native applications, with API calls and so on, in a modern cross-platform programming language like C# would be awesome, wouldn't it? For example, if you want to write a simple utility to help IT people install things, one that doesn't need any other components, in an easy and modern programming language? Or if you want to write a 3D game: it should be fast, and JIT would just make it slower...
Why isn't this possible? Why are there no native modern programming languages for these things?
C# and .NET run as native code. I think you misunderstand the JITter. It's not a VM: every method is compiled to fully native code by the JIT before it is executed.
Now, the "needing other components" part is a concern. Give it time, though. You'll be hard-pressed to find a Windows installation these days without at least .NET 2.0, and even a couple of mainstream Linux distros include Mono out of the box.
Don't assume the JIT makes things slower. The JIT can optimize for the exact computer running the application rather than a generic computer like a 386 or Pentium. It can even make better speed/memory trade-off decisions when generating code because it knows exactly what's available. And if JIT still makes things slower, you can NGEN them so that JITting is all done beforehand.
As proof of this, consider that Quake has been ported to the CLR a couple of times, and in my personal tests the CLR version has delivered higher frames per second in about half of the demos I've run.
Compiled .NET programs have been shown to run just as quickly as C. If you want it ultra-lean, write it in assembly for your native processor.
You can use the Microsoft NGEN.EXE tool to create a native image of a .NET assembly.
See the MSDN NGEN documentation. Microsoft already thought about what you're getting at here.
Microsoft also makes the ILMerge.exe tool to merge multiple assembly files into one. This might border on optimization and speed too.
As a side note, Mono has full ahead-of-time compiling, eliminating the need for the JIT at runtime. (I think that's how it gets away with running on the iPhone, which prohibits any JIT.)
So, does this mean that we could fully compile and link a C# program using (limited) .NET calls into a standalone EXE that would run without .NET being installed at all?
FYI: Checking a server estate of some 5000 servers revealed about 200 without even .NET 2.0.
This causes problems for code that must run on "all Windows instances". With .NET 4.0+ not including 2.0, this gets worse, as both new AND old Windows machines might not have the 'right' .NET.
There is: C. C can be used to write any application, ever!