Is C# partially interpreted or really compiled?

There is a lot of contradictory information about this. Some say C# is compiled (since it is compiled into IL, and then to native code when run); others say it's interpreted, since it needs the .NET runtime. The English Wikipedia says:
Many interpreted languages are first compiled to some form of virtual
machine code, which is then either interpreted or compiled at runtime
to native code.
So I'm quite confused. Could anyone explain this clearly?

C# is compiled into IL by the C# compiler.
This IL is then compiled just-in-time (JIT), as it's needed, into the native assembly language of the host machine. It would be possible to write a .NET runtime that interpreted the IL instead, though. Even if that were done, I'd still argue that C# is a compiled language.
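To make that concrete, here is a trivial method together with (approximately) the IL the C# compiler emits for it; you can check the real output with a disassembler such as ildasm or ILSpy:

    // Add.cs -- compile with: csc Add.cs
    static class Demo
    {
        // The C# compiler turns this method body into IL, not machine code.
        static int Add(int a, int b) => a + b;

        static void Main()
        {
            System.Console.WriteLine(Add(2, 3));
        }
    }

    // Roughly the IL emitted for Add (release build):
    //   ldarg.0   // push 'a'
    //   ldarg.1   // push 'b'
    //   add       // pop both, push the sum
    //   ret       // return the top of the stack
    // The JIT later maps these stack-machine instructions onto native
    // registers and a CPU 'add' instruction at run time.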

A purely compiled language has some advantages: speed, as a rule, and often working-set size.
A purely interpreted language has some advantages: the flexibility of not needing an explicit compilation stage, which lets us edit in place, and often easier portability.
A jitted language sits in a middle ground between the two.
That alone is a reason we might think of a jitted language as either compiled or interpreted, depending on which metric we care about and our prejudices for and against one camp or the other.
C# can also be compiled on first run, as happens in ASP.NET, which makes it close to interpreted in that case (though it's still compiled to IL and then jitted). Certainly it has pretty much all the advantages of an interpreted language in this case (compare with VBScript or JScript in classic ASP), along with much of the advantage of a compiled one.
Strictly, no language is jitted, interpreted, or compiled qua language. We can NGen C# to native code (though if it does something like dynamically loading an assembly, it will still use IL and jitting). We could write an interpreter for C or C++ (several people have done so). In its most common use case, though, C# is compiled to IL which is then jitted, which fits neither the classic definition of interpreted nor of compiled.

Too many semantics and statements based on opinion.
First off: C# isn't an interpreted language; the CLR and JVM are considered "runtimes" or "middleware", but the same name applies to things like Perl's runtime. This creates a lot of confusion among people concerned with names.
The term "interpreter", when referring to a runtime, generally means that existing code interprets some non-native code. There are two large paradigms: a parsing interpreter reads the raw source code and takes logical actions; a bytecode interpreter first compiles the code to a non-native binary representation, which requires far fewer CPU cycles to interpret.
Java originally compiled to bytecode, then went through an interpreter; now, the JVM reads the bytecode and just-in-time compiles it to native code. CIL does the same: The CLR uses just-in-time compilation to native code.
Consider all the combinations of running source code, running bytecode, compiling to native, just-in-time compilation, running source code through a compiler to just-in-time native, and so forth. The semantics of whether a language is compiled or interpreted become meaningless.
As an example: many interpreted languages use just-in-time bytecode compilation. C# compiles to CIL, which JIT compiles to native; by contrast, Perl immediately compiles a script to a bytecode, and then runs this bytecode through an interpreter. You can only run a C# assembly in CIL bytecode format; you can only run a Perl script in raw source code format.
Just-in-time compilers also run a lot of external and internal instrumentation. The runtime tracks the execution of various functions, and then adjusts the code layout to optimize branches and code organization for its particular execution flow. That means JIT code can run faster than native-compiled code (like C++ typically is, or like C# run through IL2CPP), because the JIT adjusts its optimization strategy to the actual execution case of the code as it runs.
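To illustrate the kind of adjustment meant here, a hedged sketch in C#: whether the devirtualization actually fires depends on the runtime and its settings (dynamic PGO in recent .NET versions does this sort of thing), and the Shape/Circle names are invented for the example.

    // An ahead-of-time compiler seeing only this code cannot know which
    // concrete type will dominate at run time; a profiling JIT can observe it.
    abstract class Shape { public abstract double Area(); }

    sealed class Circle : Shape
    {
        public double R;
        public override double Area() => 3.14159 * R * R;
    }

    static class Program
    {
        static double Total(Shape[] shapes)
        {
            double sum = 0;
            // If profiling shows every element is a Circle, a JIT can
            // devirtualize and inline Area() here, guarded by a cheap
            // type check -- an optimization a static compiler can't assume.
            foreach (var s in shapes)
                sum += s.Area();
            return sum;
        }

        static void Main()
        {
            var shapes = new Shape[1000];
            for (int i = 0; i < shapes.Length; i++)
                shapes[i] = new Circle { R = i };
            System.Console.WriteLine(Total(shapes));
        }
    }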
Welcome to the world of computer programming. We decided to make it extremely complicated, then attach non-descriptive names to everything. The purpose is to create flamewars over the definition of words which have no practical meaning.

If you feel, or were taught old-school, that a compiled EXE goes straight from source code to machine code, then C# is interpreted.
If you think "compiled" means converting source code into any other code, such as bytecode, then yes, it's compiled. For me, anything that takes run-time processing to work on the OS it was built for is interpreted.

Look here: http://msdn.microsoft.com/library/z1zx9t92
Source code written in C# is compiled into an intermediate language
(IL) that conforms to the CLI specification.
(...)
When the C# program is executed, the assembly is loaded into the CLR,
which might take various actions based on the information in the
manifest. Then, if the security requirements are met, the CLR performs
just in time (JIT) compilation to convert the IL code to native
machine instructions.

First off, let's understand the definitions of "interpreted" and "compiled".
"Compile" (when referring to code) means to translate code from one language to another. Typically from human readable source code into machine code that the target processer can... process.
"Interpret" (when referring to code) ALSO means to translate code from one language to another. But this time it's typically used to go from human readable source code into an intermediate code which is taken by a virtual machine which interprets it into machine code.
Just to be clear:
Source code -> Compiler -> Machine code
Source code -> Compiler -> Byte Code -> Interpreter -> Machine code
Any language can, in theory, be interpreted or compiled. Typically Java is compiled into bytecode, which is interpreted by the Java virtual machine into machine code. C# is typically compiled into bytecode, which is then compiled by the CLR, the Common Language Runtime, another virtual machine.
By and large the whole thing is a marketing gimmick. The term "interpreted" was added (or at least increased in usage) to help showcase how neat just-in-time compiling was. But they could have just used "compiled". The distinction is more a study of the English language and business trends than anything of a technical nature.

C# is both interpreted and compiled in its lifetime. C# is compiled to a virtual language which is interpreted by a VM.
The confusion stems from the fuzzy concept of a "Compiled Language".
"Compiled Language" is a misnomer, in a sense, because compiled or interpreted is not a property of the language but of the runtime.
e.g. You could write a C interpreter but people usually call it a "Compiled Language", because C implementations compile to machine code, and the language was designed with compilation in mind.

Most languages, if not all, require a translator that turns their scripts into machine code so the CPU can understand and execute them.
Each language handles this translation process differently.
For example, "AutoIt" is what we can describe as a 100% interpreted language.
Why? Because the "AutoIt" interpreter is constantly needed while its script is executing. See the example below:

    Loop, 1000
    Any-Code

The "AutoIt" interpreter has to translate "Any-Code" into machine code 1000 times, which automatically makes "AutoIt" a slow language.
C#, on the other hand, handles the translation differently: the translation step is needed only once, before the code in question executes; after that, it is not required again while the code runs.
So the "Any-Code" body is translated to machine code only once, which makes C# a fast language (a C# sketch of the same loop follows).
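For comparison, a C# version of the same loop; the JIT translates the enclosing method (loop body included) to machine code once, and the thousand iterations then run natively. AnyCode is a stand-in name for the "Any-Code" body above:

    static class LoopDemo
    {
        static void Main()
        {
            // The JIT compiles this method (loop included) to native code
            // once, the first time it is called; the 1000 iterations then
            // execute directly on the CPU with no per-iteration translation.
            for (int i = 0; i < 1000; i++)
            {
                AnyCode();
            }
        }

        static void AnyCode()
        {
            // Stand-in for the "Any-Code" body in the AutoIt example above.
        }
    }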
So basically:
A language that requires its interpreter throughout script execution is an "interpreted language".
A language that requires its translator only once, before execution, is a "compiled language".
Finally:
"AutoIt" is an interpreted language.
C# is a compiled language.

I believe this is a pretty old topic.
From my point of view, interpreted code goes through an interpreter, which translates and executes it line by line. JavaScript, for example, is interpreted: when a line of JavaScript hits an error, the script simply breaks.
Compiled code, on the other hand, goes through a compiler, which translates all the code into another form at once, without executing it first. The execution happens in another context.

If we agree with the definition of interpreter «In computer science, an interpreter is a computer program that directly executes, i.e. performs, instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program.» there is no doubt: C# is not an interpreted language.
Interpreter on Wikipedia

C#, like Java, has a hybrid language processor. Hybrid processors perform the jobs of both interpretation and compilation.

Since a computer can only execute binary code, any language leads to the production of binary code at one point or another.
The question is: does the language let you produce a program in binary code?
If yes, then it is a compiled language: by this definition, "compiled" in "compiled language" refers to compilation into binary code, not transformation into some intermediate code.
If the language leads to the production of such intermediate code for a program, it needs additional software to perform the binary compilation from this code: it is then an interpreted language.
Is a program "compiled" by C# directly executable on a machine without any other software installed on that machine? If not, then it is an interpreted language.
For an interpreted language, an interpreter generates the underlying binary code, most of the time dynamically, since this mechanism is the basis of the flexibility of such languages.
Note: sometimes this is not obvious, because the interpreter is bundled into the OS.

C# is a compiled language.
The opinion that there is an interpreter for the C# language, which I have also encountered, is probably due to projects like
C# Interpreter Console
or, for example, the famous
LINQPad
where you can write just a few lines of code and execute them immediately, which suggests a Python-like language. That is not what happens: these tools compile those lines and then execute them, like an ordinary compiled programming language (from a workflow point of view).
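A minimal sketch of that workflow using the Roslyn scripting API (this assumes the Microsoft.CodeAnalysis.CSharp.Scripting NuGet package; LINQPad's internals differ, but the principle is the same): each snippet is genuinely compiled to IL in memory and then executed, not interpreted line by line.

    using System;
    using Microsoft.CodeAnalysis.CSharp.Scripting;

    static class ScriptDemo
    {
        static async System.Threading.Tasks.Task Main()
        {
            // Each call compiles the snippet to an in-memory assembly (IL),
            // loads it, JIT-compiles it and runs it -- the same pipeline as
            // an ordinary C# program, just driven at run time.
            int result = await CSharpScript.EvaluateAsync<int>("1 + 2 * 3");
            Console.WriteLine(result); // 7
        }
    }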

Great debate going on here. I have read all the answers and want to express some conclusions, based on my research and the concept of a programming language implementation.
Programming language implementation: In computer programming, a programming language implementation is a system for executing computer programs. There are two general approaches to programming language implementation:
Compilation:
The program is read by a compiler, which translates it into some other language, such as bytecode or machine code. The translated code may either be directly executed by hardware, or serve as input to another interpreter or another compiler.
Interpretation:
An interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program.
Parse the source code and perform its behavior directly.
Translate source code into some efficient intermediate representation or object code and immediately execute that.
Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter Virtual Machine.
Conclusions:
So any language which converts the code to intermediate bytecode or machine code is a compiled language.
There are multiple types of interpreters, like bytecode interpreters, just-in-time interpreters, etc.
Famous compiled languages:
Java, C#, C, C++, Go, Kotlin, Rust
Famous interpreted languages:
JavaScript, PHP, Python
Directly compiled languages vs. languages that are compiled to bytecode first:
Bytecode interpreters (virtual machines) are generally slower than direct execution of machine code: any interpreter has some overhead when converting bytecode to actual machine instructions, and interpreters execute line by line.
So directly compiled languages like C/C++ are typically faster than Java/C#.

There is an implementation of C# that is a fully ahead-of-time compiled language.
It is RemObjects C# on the Island platform, which compiles directly to binary machine code and runs without a VM and without a runtime, using the platform API directly (Win32 on Microsoft, Cocoa on Apple, and POSIX on Linux).
RemObjects C# also interoperates directly with compiled C/C++ libraries, because its calling convention maps to the C calling convention.

Related

Is the CLR a C# compiler or are these two different things?

I am trying to understand the process of C# code compilation and execution. Websites have a lot of information about the CLR, CIL, CLS, FCL, etc., but I could not find information about what is responsible for compiling C# code into CIL. Is it the CLR, or a separate, dedicated compiler for C#?
C# code is compiled into CIL bytecode by a dedicated C# compiler (csc, today part of Roslyn). Other languages like VB.NET can be compiled into CIL bytecode as well.
Once code is compiled into the CIL standard format, an engine runs it. That engine is the CLR, or "Common Language Runtime".
Think of it like this: gas can be made by Exxon or Valero who get crude oil from different places. Once the Exxon or Valero refinery makes the crude into gas, then it can be used in any car whose engine is designed to run on the stuff.
In the analogy, Exxon and Valero refineries are "compilers" getting their materials into a common format: gas. "Gas" is bytecode, which can run on the engines designed for it. The "engine" is the CLR, which can actually use and run on the output.
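To make the analogy concrete, here are two refineries producing the same gas: a C# method and, in a comment, its VB.NET equivalent, both of which compile to essentially the same CIL. The Square example is invented for illustration:

    static class Fuel
    {
        // C# source:
        static int Square(int x) => x * x;

        // The equivalent VB.NET source would be:
        //   Function Square(x As Integer) As Integer
        //       Return x * x
        //   End Function
        //
        // Both compilers emit essentially the same CIL for the body:
        //   ldarg.0
        //   ldarg.0
        //   mul
        //   ret
        // which is why the CLR "engine" can run either.

        static void Main() => System.Console.WriteLine(Square(12));
    }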

Is JIT compiler a Compiler or Interpreter?

My question is whether the JIT compiler, which converts IL to machine language, is strictly a compiler or an interpreter.
One more question:
Are HTML and JavaScript compiled or interpreted languages?
Thanks in advance.
The JIT (just-in-time) compiler is a compiler. It performs optimizations as well as compiling to machine code (it is even called a compiler).
HTML and JavaScript are interpreted; they are read as-is by the web browser and run with minimal fixes and optimizations.
Technically, a compiler translates from one language to another language. Since a JIT compiler receives an IL as its input and outputs native machine binary, it easily fits this criteria and should be called a compiler.
Regarding Javascript, making a distinction here is more difficult. If you want to be pedantic, there's no such thing as a "compiled language" or "interpreted language". I mean, it's true that in practice most languages have one common way of running them and if that is an interpreter they are usually called interpreted languages, but interpretation or compilation are (usually) not traits of the language itself. Python is almost universally considered interpreted, but it's possible to write a compiler which compiles it to native binary code; does it still deserve the "interpreted" adjective?
Now to get to the actual answer: Javascript is typically run by an interpreter which, among other things, uses a JIT compiler itself. Is that interpreted or compiled, then? Your call.
From Wikipedia: a just-in-time (JIT) compiler, also known as a dynamic translator, is used to improve the runtime performance of computer programs.
Just-in-time compilation is the conversion of non-native code, for example bytecode, into native code just before it is executed. The JIT compiler is what compiles the IL code and outputs native code, which is cached; an interpreter, by contrast, executes the code line by line,
i.e. in the case of Java, the class files are the input to the interpreter.
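The compile-and-cache step can even be requested up front. A minimal sketch using RuntimeHelpers.PrepareMethod, which asks the runtime to JIT-compile a method before its first call (the Work method here is an invented example):

    using System;
    using System.Reflection;
    using System.Runtime.CompilerServices;

    static class PreJitDemo
    {
        static int Work(int x) => x * 2 + 1;

        static void Main()
        {
            // Force the JIT to compile Work to native code now, instead of
            // on its first call; the native code is then cached for the
            // lifetime of the process.
            MethodInfo m = typeof(PreJitDemo).GetMethod(
                nameof(Work), BindingFlags.NonPublic | BindingFlags.Static);
            RuntimeHelpers.PrepareMethod(m.MethodHandle);

            Console.WriteLine(Work(20)); // runs the already-compiled code
        }
    }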
More on JIT here :
Difference between JIT Compiler and Interpreter (a)
Difference between JIT Compiler and Interpreter (b)
JIT-Compiler in detail
What does a JIT compiler do?
Yes, HTML and JavaScript are interpreted languages, since they aren't compiled ahead of time; scripts execute without a preliminary compilation step.
Also a good read here on JavaScript/HTML not being compiled languages.
JIT processors like the IL's are compilers, mostly. JavaScript processors are interpreters, mostly. I understand your curiosity about this question, but personally I've come to think there really isn't any 'right' answer.
There are JavaScript interpreters that compile parts or all of the code for efficiency reasons. Are those really interpreters?
JIT acts at runtime, so it can be understood as a clever, highly optimized interpreter. Which is it?
It's like the "is it a plant or an animal?" question. There are living things that don't quite fit either mold: nature is what nature is, and 'classification' is a purely human intellectual effort that has its limitations. Even man-made things like 'code' are subject to the same considerations.
OK, so maybe there is one right answer:
The way JavaScript is processed (say, as of 5 years ago) is called an 'interpreter'. The way C++ is processed is considered a 'compiler'.
The way IL is processed is simply... a 'JIT'.
CIL (.NET bytecode) has features not found in native CPUs, so the JIT is most definitely a compiler. Contrary to what some write here, though, most of the optimization has already been done by that point.
HTML is not a programming language, so it is hard to say whether it is compiled or interpreted... In the sense of "is the result of compilation reused", HTML is not compiled by any browser (it is parsed every time the page is rendered).
JavaScript in older browsers is interpreted (preprocessed into an intermediate representation, but not into machine code). The latest versions of browsers have JavaScript JIT compilers, so it is much harder to say whether it is an interpreted or compiled language now.
A JIT (just-in-time) compiler is a compiler only, and not an interpreter, because it compiles (converts) certain pieces of bytecode to native machine code at run time for high performance, but it doesn't execute the instructions itself.
An interpreter, by contrast, reads and executes the instructions at runtime.
HTML and JavaScript are interpreted; they are executed directly by the browser without compilation.

What are the differences between a C#(.Net) Compiler and Java Compiler Technologies?

My professor asked us this question: What are the differences between a C#(.Net) Compiler and Java Compiler Technologies?
Both the Java and C# compilers compile to a "machine code" for an intermediate virtual machine that is independent of the ultimate execution platform: the JVM and CLR respectively.
JVM was originally designed solely to support Java. While it is possible to compile languages other than Java to run on a JVM, there are aspects of its design that are not entirely suited to certain classes of language. By contrast, the CLR and its instruction set were designed from day one to support a range of languages.
Another difference is in the way JIT compilation works. The CLR is designed to run fully compiled code: each method is JIT-compiled to native code before its first execution, rather than being interpreted. (You can also compile the bytecodes to native code ahead of time.) By contrast, the Hotspot JVMs use true "just in time" compilation: bytecode methods are initially executed by the JVM using a bytecode interpreter, which also gathers trace information about execution paths taken within the method. Methods that are executed a number of times then get compiled to native code by the JIT compiler, using the captured trace information to help optimize the code. This allows the native code to be optimized for the actual execution platform and even for the behaviour of the current run of the application.
Of course, the C# and Java languages have many significant differences, and the corresponding compilers are different because of the need to handle these linguistic differences. For example, some C# compilers do more type inferencing ... because the corresponding C# language version relies more on inferred types. (And note that both the Java and C# languages have evolved over time.)
In terms of the compiler, the largest difference I can think of (apart from the obvious "inputs" and "outputs") is the generics implementation, since both have generics, but implemented very differently (type erasure vs. runtime-assisted). The boxing model is obviously different too, but I'm not sure that is huge for the compiler.
There are obvious differences in features such as anonymous methods, anonymous inner classes, lambdas, delegates, etc., but that is hard to compare 1:1. Ultimately, though, only your professor knows the answer he is looking for (and, with all due respect to professors, don't be surprised if his answer is a year or more out of date with respect to the bleeding edge).
One difference is that the C# compiler has some type-inference capabilities that a Java compiler wouldn't have (although Java 7 may change this). As a simple example, in Java you have to type Map<String, List<String>> anagrams = new HashMap<String, List<String>>(); while in C# you can write var anagrams = new Dictionary<string, List<string>>(); (and you can build very large, complex expressions in C# without ever having to name a type).
Another difference is that the C# compiler can create expression trees, enabling you to pass a description of a function to another function. For example, (Func<int,int>) x => x * 2 is a function that takes an int and doubles it, while (Expression<Func<int,int>>) x => x * 2 is a data structure that describes a function that takes an int and doubles it. You can take this description and compile it into a function (to run locally) or translate it into SQL (to run as part of a database query).
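A short runnable version of that example; Compile() turns the expression tree back into a delegate through the same IL-and-JIT pipeline, while the tree itself remains inspectable data:

    using System;
    using System.Linq.Expressions;

    static class ExprDemo
    {
        static void Main()
        {
            // A delegate: compiled, opaque, ready to invoke.
            Func<int, int> f = x => x * 2;

            // An expression tree: a data structure describing the function.
            Expression<Func<int, int>> e = x => x * 2;
            Console.WriteLine(e);             // prints: x => (x * 2)

            // A LINQ provider could translate 'e' to SQL; here we just
            // compile it back into a delegate and run it.
            Func<int, int> g = e.Compile();
            Console.WriteLine(f(21));         // 42
            Console.WriteLine(g(21));         // 42
        }
    }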
http://www.scribd.com/doc/6256795/Comparison-Between-CLR-and-JVM
I think this will give you a basic idea.

How do languages like C# and Java avoid C/C++-like independent compilation?

For my programming languages class, I'm writing a research paper on some papers by some important people in the history of language design. One by CAR Hoare struck me as odd because it speaks against independent compilation techniques used in C and later C++ before C even became popular.
Since this is primarily an optimization to speed up compilation times, what is it about Java and C# that makes them able to avoid reliance on independent compilation? Is it a compiler technique, or are there elements of the language that facilitate it? And are there any other compiled languages that used these techniques before them?
Short answer: Java and C# don't avoid separate compilation; they make full use of it.
Where they differ is that they don't require the programmer to write a pair of separate header/implementation files for a reusable library. The user writes the definition of a class once, and the compiler extracts the information equivalent to the "header" from that single definition and includes it in the output file as "type metadata". So the output file (a .jar full of .class files in Java, or a .dll assembly in .NET-based languages) is a combination of binaries AND headers in a single package.
Then when another class is compiled and it depends on the first class, it can look at the metadata instead of having to find a separate include file.
It happens that they target a virtual machine rather than a specific chip architecture, but that's a separate issue; they could put x86 machine code in as the binary and still have the header-like metadata in the same file as well (this is in fact an option in .NET, albeit rarely used).
In C++ compilers it is common to try to speed up compilation by using "pre-compiled headers". The metadata in .NET .dll and .class files is much like a pre-compiled header - already parsed and indexed, ready for rapid look-ups.
The upshot is that in these modern languages, there is one way of doing modularization, and it has the characteristics of a perfectly organised and hand-optimised C++ modular build system - pretty nifty, speaking ASFAC++B.
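You can observe this "binaries plus headers" packaging directly: an assembly's metadata is rich enough to enumerate its types and full member signatures with no source or header files at hand. A minimal sketch, where "MyLibrary.dll" is a placeholder for any .NET assembly path:

    using System;
    using System.Reflection;

    static class MetadataDemo
    {
        static void Main()
        {
            // Load a compiled assembly -- no header files involved.
            // "MyLibrary.dll" is a placeholder for any .NET assembly.
            Assembly asm = Assembly.LoadFrom("MyLibrary.dll");

            foreach (Type t in asm.GetExportedTypes())
            {
                Console.WriteLine(t.FullName);
                foreach (MethodInfo m in t.GetMethods(
                    BindingFlags.Public | BindingFlags.DeclaredOnly |
                    BindingFlags.Instance | BindingFlags.Static))
                {
                    // Full signatures, recovered purely from metadata.
                    Console.WriteLine("  " + m);
                }
            }
        }
    }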
IMO, one of the biggest factors here is that both java and .NET use intermediate languages; that means that the compiled unit (jar/assembly) contains, as a pre-requisite, a lot of expressive metadata about the types, methods, etc; meaning that it is already laid out conveniently for reference checking. The runtime still checks anyway, in case you are pulling a fast one ;-p
This isn't very far removed from the MIDL that underpins COM, although there the TLB is often a separate entity.
If I've misunderstood your meaning, please let me know...
You could consider a java .class file to be similar to a precompiled header file in C/C++. Essentially the .class file is the intermediate form that a C/C++ linker would need as well as all of the information contained in the header (Java just doesn't have a separate header).
From your comment in another post:
"I'm basically meaning the idea in C/C++ that each source file is its own individual compilation unit. This doesn't as much seem to be the case in C# or Java."
In Java (I cannot speak for C#, but I assume it is the same) each source file is its own individual compilation unit. I am not sure why you would think it is not... perhaps we have different definitions of compilation unit?
It requires some language support (otherwise, C/C++ compilers would do it too)
In particular, it requires that the compiler generates self-contained modules, which expose metadata that other modules can reference to call into them.
.NET assemblies are a straightforward example. All the files in a project are compiled together, generating one dll. This dll can be queried by .NET to determine which types it contains, so that other assemblies can call functions defined in it.
And to make use of this, it must be legal in the language to reference other modules.
In C++, what defines the boundary of a module? The language specifies that the compiler only considers data in its current compilation unit (.cpp file + included headers). There is no mechanism for specifying "I'd like to call function Foo in module Bar, even though I don't have the prototype or anything for it at compile-time". The only mechanism you have for sharing type information between files is with #includes.
There is a proposal to add a module system to C++, but it won't be in C++0x. Last I saw, the plan was to consider it for a TR1 after 0x is out.
(It's worth mentioning that the #include system in C/C++ was originally used because it'd speed up compilation. Back in the 70's, it allowed the compiler to process the code in a simple linear scan. It didn't have to build syntax trees or other such "advanced" features. Today, the tables have turned and it's become a huge bottleneck, both in terms of usability and compilation speed.)
The object files generated by a C/C++ compiler are meant to be read only by the linker, not by the compiler.
As to other languages: IIRC Turbo Pascal had "units" which you could use without having any source code. I think the point is to create metadata along with compiled code which can then be used by the compiler to figure out the interface to the module (i.e. signatures of functions, class layout etc.)
One problem with C/C++ which prevents just replacing #include with some kind of #import is also the preprocessor, which can completely change the meaning/syntax etc of included/imported modules. This would be very difficult (if not impossible) with a Java-like module system.

CLR vs JIT

What is the difference between the JIT compiler and CLR? If you compile your code to il and CLR runs that code then what is the JIT doing? How has JIT compilation changed with the addition of generics to the CLR?
You compile your code to IL, which gets compiled to machine code and executed at runtime; this is what's called JIT.
Edit, to flesh out the answer some more (still overly simplified):
When you compile your C# code in Visual Studio, it gets turned into IL, which the CLR understands. The IL is the same for all languages running on top of the CLR (which is what enables the .NET runtime to host several languages and let them inter-operate easily).
During runtime, the IL is compiled into machine code (specific to the architecture you're on) and then executed. This process is called just-in-time compilation, or JIT for short. Only the IL that is needed is transformed into machine code, and only once (it's "cached" once compiled into machine code), just in time before it's executed, hence the name JIT.
This is what it would look like for C#:
C# Code > C# Compiler > IL > .NET Runtime > JIT Compiler > Machine code > Execution
And this is what it would look like for VB:
VB Code > VB Compiler > IL > .NET Runtime > JIT Compiler > Machine code > Execution
And as you can see, only the first two steps are unique to each language; everything after the code has been turned into IL is the same, which is, as I said before, the reason you can run several different languages on top of .NET.
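That compile-once-then-cache behaviour is observable from user code: the first call to a method pays the JIT cost, later calls run the cached native code. A rough sketch, with the caveat that exact numbers vary by machine and that tiered compilation in modern .NET blurs the picture:

    using System;
    using System.Diagnostics;

    static class JitTiming
    {
        static double Work(double x)
        {
            for (int i = 0; i < 100; i++) x = Math.Sqrt(x + i);
            return x;
        }

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            Work(1.0);                    // first call: JIT compiles Work, then runs it
            sw.Stop();
            Console.WriteLine("first call:  " + sw.ElapsedTicks + " ticks");

            sw.Restart();
            Work(1.0);                    // second call: cached native code only
            sw.Stop();
            Console.WriteLine("second call: " + sw.ElapsedTicks + " ticks");
        }
    }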
The JIT is one aspect of the CLR.
Specifically, it is the part responsible for changing CIL (hereafter called IL) produced by the original language's compiler (csc.exe for Microsoft C#, for example) into machine code native to the current processor (and the architecture it exposes in the current process, e.g. 32/64-bit). If the assembly in question was ngen'd, then the JIT process is completely unnecessary and the CLR will run the code just fine without it.
Before a method is used which has not yet been converted from the intermediate representation it is the JIT's responsibility to convert it.
Exactly when the JIT will kick in is implementation-specific and subject to change. However, the CLR design mandates that the JIT happens before the relevant code executes; JVMs, in contrast, are free to interpret the code for a while, while a separate thread creates a machine-code representation.
The 'normal' CLR uses a pre-JIT stub approach, whereby methods are JIT-compiled only as they are used. This involves having the initial native method stub be an indirection that instructs the JIT to compile the method, then modifying the original call to skip past the initial stub. The current Compact edition instead compiles all methods on a type when it is loaded.
To address the addition of Generics.
This was the last major change to the IL specification and JIT in terms of its semantics as opposed to its internal implementation details.
Several new IL instructions were added, and more metadata options were provided for instrumenting types and members.
Constraints were added at the IL level as well.
When the JIT compiles a method which has generic arguments (either explicitly or implicitly through the containing class) it may set up different code paths (machine code instructions) for each type used. In practice the JIT uses a shared implementation for all reference types since variables for these will exhibit the same semantics and occupy the same space (IntPtr.Size).
Each value type will get specific code generated for it; dealing with the reduced/increased size of the variables on the stack/heap is a major reason for this. Also, by emitting the constrained opcode before method calls, many invocations on non-reference types need not box the value to call the method (this optimization is used in non-generic cases as well). This also allows the default(T) behaviour to be handled correctly, and for comparisons to null to be stripped out as no-ops (always false) when a non-nullable value type is used.
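A sketch of what that sharing looks like from the source side; which bodies actually get shared is a CLR implementation detail, so the comments describe typical behaviour rather than a guarantee:

    static class GenericsDemo
    {
        static T Identity<T>(T x) => x;

        static void Main()
        {
            // Reference types: Identity<string> and Identity<object>
            // typically share ONE native code body -- every reference is
            // pointer-sized and has the same semantics.
            System.Console.WriteLine(Identity("hello"));
            System.Console.WriteLine(Identity((object)"world"));

            // Value types: Identity<int> and Identity<double> each get
            // their OWN specialized native body, because the values differ
            // in size and layout on the stack.
            System.Console.WriteLine(Identity(42));
            System.Console.WriteLine(Identity(2.5));
        }
    }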
If an attempt is made at runtime to create an instance of a generic type via reflection then the type parameters will be validated by the runtime to ensure they pass any constraints. This does not directly affect the JIT unless this is used within the type system (unlikely though possible).
As Jon Skeet says, JIT is part of the CLR. Basically this is what is happening under the hood:
Your source code is compiled into bytecode known as the Common Intermediate Language (CIL).
Metadata for every class and every method (and every other thing :O) is included in the PE header of the resulting executable (be it a dll or an exe).
If you're producing an executable, the PE header also includes a conventional bootstrapper which is in charge of loading the CLR (Common Language Runtime) when you execute your executable.
Now, when you execute:
1. The bootstrapper initializes the CLR (mainly by loading the mscorlib assembly) and instructs it to execute your assembly.
2. The CLR executes your main entry point.
3. Classes have a vector table which holds the addresses of their method functions, so that when you call MyMethod, this table is searched and a call to the corresponding address is made. At startup, ALL entries in all tables point to the JIT compiler.
4. When a call to one of these methods is made, the JIT is invoked instead of the actual method and takes control. The JIT then compiles the CIL code into actual assembly code for the appropriate architecture.
5. Once the code is compiled, the JIT goes into the method vector table and replaces the address with that of the compiled code, so that every subsequent call no longer invokes the JIT.
6. Finally, the JIT hands execution over to the compiled code.
7. If you call another method which hasn't yet been compiled, go back to 4... and so on.
The JIT is basically part of the CLR. The garbage collector is another. Quite where you put interop responsibilities etc is another matter, and one where I'm hugely underqualified to comment :)
I know the thread is pretty old, but I thought I might put in the picture that made me understand JIT. It's from the excellent book CLR via C# by Jeffrey Richter. In the picture, the metadata he is talking about is the metadata emitted in the assembly header, where all information about the types in the assembly is stored:
1) While compiling a .NET program, the program code is converted into Intermediate Language (IL) code.
2) Upon executing the program, the Intermediate Language code is converted into native code for the operating system as and when a method is called; this is called JIT (just-in-time) compilation.
The Common Language Runtime (CLR) is the execution engine, while the just-in-time (JIT) compiler is the compiler in the .NET Framework.
The JIT is the internal compiler of .NET which takes Microsoft Intermediate Language (MSIL) code from the CLR and compiles it to machine-specific instructions, whereas the CLR works as an engine whose main task is to provide MSIL code to the JIT, ensuring that code is fully compiled as per the machine's specification.
