Call-site explanation? - C#

Scanning the internet, I'm having trouble understanding, in simple terms, the term call site (DLR).
I've been reading that a CallSite is:
One site says:
The location in which the method is called.
One book says:
call site. This is the sort of atom of the DLR - the smallest piece of code which can be considered as a single unit. One expression may contain a lot of call sites, but the behavior is built up in the natural way, evaluating one call site at a time. For the rest of the discussion, we'll only consider a single call site at a time. It's going to be useful to have a small example of a call site to refer to, so here's a very simple one, where d is of course a variable of type dynamic:
d.Foo(10);
The call site is represented in code as a System.Runtime.CompilerServices.CallSite.
Another book says:
the compiler emits code that eventually results in an expression tree
that describes the operation, managed by a call site that the DLR will
bind at runtime. The call site essentially acts as an intermediary
between caller and callee.
Sorry, I can't see how those three explanations combine into one simple explanation.
I'd be happy to get a simple one: how can I explain to my wife what call sites are?

The first explanation has nothing to do with the DLR or the dynamic type: simply speaking, a call site is a location (or site) in the source code where a method is called.
In implementing the dynamic type, it is necessary to store information about the dynamic method calls contained in your code, so they can be invoked at runtime (the DLR needs to look up the method, resolve overloads, etc.). It seems natural that the object representing this information should also be called a "call site".
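To make this concrete, here is a simplified sketch, using the real DLR types, of roughly what the compiler generates for d.Foo(10). It is illustrative only: the real generated code stores the site in a compiler-generated static class, and the names here are made up.

using System;
using System.Runtime.CompilerServices;
using Microsoft.CSharp.RuntimeBinder;
using Binder = Microsoft.CSharp.RuntimeBinder.Binder;

class CallSiteSketch
{
    // The call-site object: created once and reused on every execution
    // of this call, so the DLR can cache its binding decisions in it.
    static CallSite<Action<CallSite, object, int>> site;

    static void CallFoo(object d)  // d plays the role of the dynamic variable
    {
        if (site == null)
        {
            // Describe the operation: "invoke a member named Foo, result discarded".
            site = CallSite<Action<CallSite, object, int>>.Create(
                Binder.InvokeMember(
                    CSharpBinderFlags.ResultDiscarded, "Foo", null,
                    typeof(CallSiteSketch),
                    new[]
                    {
                        CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null),
                        CSharpArgumentInfo.Create(
                            CSharpArgumentInfoFlags.UseCompileTimeType |
                            CSharpArgumentInfoFlags.Constant, null)
                    }));
        }
        // The DLR binds the call (or reuses a cached rule) and dispatches it.
        site.Target(site, d, 10);
    }
}

Seen this way, the three quoted explanations line up: the call site is the location of the call, the smallest unit of dynamic dispatch, and the intermediary object the DLR binds at runtime.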

OK, this is how I see it.
For this example, a call is simply a method or function that executes some code and returns.
For a program on a static language runtime (C, the CLR, etc.), a call site is essentially where a function call takes place. It's the location that the call will return to in a normal (non-exceptional) flow. Since the program is static, the call site is simply a memory location, pushed on the stack.
For a dynamic language program (Ruby, Python, etc.), the code you are calling is not worked out until runtime. This means that some form of logic is needed to manage the process of making the correct function call and then cleaning up after the call (if needed). If the dynamic language program is on .NET 4, this is done using DLR (dynamic language runtime) objects of type System.Runtime.CompilerServices.CallSite. So the call will return to a method within the CallSite object and then on to the location of the original call.
So the answer is that it depends upon how you are doing the call, and thus what platform you are using.

Related

Is there any way to auto-run some code in a DLL?

I have a third-party DLL that I need to license. It has a method that I must call from my own DLL. My DLL is referenced in a couple of projects and I don't want to make changes to every host. Is there any method I can use within my DLL which will call some method in my DLL? Like adding some static class or constructor, but without an explicit call to that class from the hosts? I am not sure if I am explaining it clearly. Please ask questions if needed.
ThirdPartyType license = new ThirdPartyType();
license.Load("license.xml");
This is the piece of licensing code that I want to place in my DLL and call from within the same DLL.
At the low level, the runtime supports "module initializers". However, C# does not provide any way of implementing them, so the closest you can manage is a static constructor ("type initializer") or just a regular constructor.
However, it is probably a bad idea to hook your licensing into either a module initializer or a type initializer, as you don't know when they will run, and it could impact code that wasn't going to access your lib. It is somewhat frowned upon to take someone's app down because your licensing code decided it was unhappy - especially if your library wasn't actively being invoked at the time.
As such: I suggest the most appropriate place to do this is in either a constructor, or a post-construction Initialize(...) method (with the tool refusing to work unless supplied with valid details).
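A minimal sketch of that suggestion, assuming ThirdPartyType is the vendor type from the question (the facade class name is hypothetical): defer the licensing work until a caller actually uses your DLL, and run it exactly once.

using System;

// Stand-in for the vendor's licensing type from the question.
public class ThirdPartyType
{
    public void Load(string path) { /* vendor licensing logic */ }
}

public class MyLibraryFacade
{
    private static readonly Lazy<ThirdPartyType> license =
        new Lazy<ThirdPartyType>(() =>
        {
            var l = new ThirdPartyType();
            l.Load("license.xml");
            return l;
        });

    public MyLibraryFacade()
    {
        // Forcing the Lazy here means licensing runs only when a caller
        // actually constructs something from this DLL, and only once.
        var _ = license.Value;
    }
}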

Code Access Security - Understanding why SecurityTransparent can call SecurityCritical

I am researching Code Access Security. It's taking some effort to get my head around, so I thought that I would finally make some use of Reflector and start investigating how .NET 4.0 uses security attributes.
Observations
The System.IO.File.Delete method is decorated with the [SecuritySafeCritical] attribute.
The System.IO.File.Delete method delegates to an internal method InternalDelete which is decorated with the [SecurityCritical] attribute.
I have a method in one of my MVC app classes called DeleteFile that is running as SecurityTransparent (which I've verified by checking DeleteFile's MethodInfo.IsSecurityCritical property)
Permissions
From my current understanding that would mean that:
System.IO.File.Delete can call InternalDelete because [SecuritySafeCritical] methods can call [SecurityCritical] so no SecurityException is thrown.
DeleteFile can call System.IO.File.Delete because [SecurityTransparent] can call [SecuritySafeCritical]
So basically, without adjusting any of the out-of-the-box security settings, this code will successfully delete a dummy file called test.txt:
using System.IO;

namespace MyTestMvcApp
{
    public class FileHelpers
    {
        // Has SecurityTransparent
        public void DeleteFile()
        {
            // Will successfully delete the file
            File.Delete("test.txt");
        }
    }
}
Confusion
Inside the InternalDelete method of System.IO.File.Delete, it uses the CodeAccessPermission.Demand method to check that all callers up the stack have the necessary permissions. What I don't quite understand is this line in the MSDN docs of CodeAccessPermission.Demand:
The permissions of the code that calls this method are not examined; the check begins from the immediate caller of that code and proceeds up the stack.
So my question is, if the DeleteFile method of my application is SecurityTransparent, how come it is allowed to call a SecurityCritical method?
This is probably a broken example, perhaps with some missing concepts, but as I said I'm still getting my head around it; the more insight people can give, the more I'll develop my understanding.
Thanks
You're mixing up two CAS enforcement mechanisms. While they do interact a bit, it's not in quite the same way you seem to be worrying about. For the purposes of full permission demands, as represented by Demand, they're essentially independent.
The CLR applies the transparency verification before executing the code. If this passes, the CLR would then verify any declarative CAS demands applied via attributes. If these pass (or are absent), the CLR would then execute the code, at which time the imperative (inline) demand would run.
The Demand documentation note about "the permissions of the code that calls this method are not examined" applies to the Demand method itself. In other words, if you have a method Foo that calls Demand, the verified call stack starts with Foo's caller, not Foo itself. For example, if you have the call chain A -> B -> C -> Foo -> Demand, only A, B, and C will be verified to check whether they have been granted the demanded permission.
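To illustrate with a hypothetical chain (not taken from the framework source): if Foo performs the demand, the walk starts at C and proceeds up through B and A.

using System.Security.Permissions;

class StackWalkDemo
{
    static void Foo()
    {
        // The stack walk begins at Foo's caller (C), then B, then A;
        // Foo's own grant set is not examined by this Demand.
        new FileIOPermission(FileIOPermissionAccess.Write, @"c:\test.txt").Demand();
    }

    static void C() { Foo(); }
    static void B() { C(); }
    static void A() { B(); }
}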

Global vs. Member functions

I've been playing around with C++ lately and wondered why there were so many global functions. Then I started thinking about programming in C# and how member functions are stored, so I guess my question is: if I have a
public class Foo {
    public void Bar() { ... }
}
and then I do something silly like adding 1,000,000 Foos to a list, does this mean I have 1,000,000 Foo objects sitting in memory, each with their own Bar() function? Or does something much more clever happen?
Thanks.
Nope, there is only one copy of each method. All instances of a class point to a structure that contains all the instance methods, each of which takes an implicit first parameter affectionately called this. When you invoke an instance method on an instance, the this pointer for that instance is passed as the first parameter to that method. That is how the method knows all the instance fields and properties for that instance.
For details see CLR via C#.
This is, of course, complicated by virtual methods. CLR via C# will spell out the distinction for you and is highly recommended if you are interested in this subject. Either way, there is still only one instance of each instance method. The issue is just how these methods are resolved.
An instance method is simply a static method with a hidden this parameter.
(Virtual methods are a little more complicated.)
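A sketch of that idea (illustrative only; the runtime supplies the hidden parameter for you):

public class Foo
{
    private int count;

    public void Bar()                         // instance method: hidden 'this'
    {
        this.count++;
    }

    public static void BarExplicit(Foo self)  // morally the same shape
    {
        self.count++;
    }
}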
In C++ a member function doesn't normally require any per-object storage (an exception - virtual functions - is discussed in the next paragraph). Normally, at each point where the function is used the compiler generates CPU-specific machine code to directly call that function, and for inline functions the call may be avoided and the function's effect may be optimally integrated into the caller's code (which can be ~10x faster for small functions such as getters and setters that simply read or write one member variable).
For those classes that have one or more virtual functions, each object will have one extra pointer to a per-class table of function pointers and other information. Thus, each object grows by the size of a pointer - typically 4 or 8 bytes.
Addressing your original observation: C++ has more non-member functions (usually in the std namespace), but a namespace serves this purpose better than a class anyway. Indeed, namespaces are effectively logical interfaces for "static" functions and data that can span many "physical" header files. Why should the logical API of a program be compromised by considerations related to physical files and their implications for build times, file-modification-timestamp-triggered make tools, etc.? In trivial cases where the namespace is in one header, C++ could use a class or struct to scope the same declarations, but that's less convenient, as it prevents the use of namespace aliases, using-directives, and Koenig lookup for implicitly searching the namespaces of a function's arguments - forcing very explicit prefixing at every point of use. It also gives the false impression that the user is intended to instantiate an object from the content.

Which languages can change class members dynamically at runtime?

I know that in Ruby you can add and modify methods of a class dynamically at runtime. What about other languages? What about C#: can you modify or add methods (and so on) at runtime, dynamically?
I think you are looking for prototype inheritance. A list of languages is mentioned in the same wikipedia page.
There is a similar question on SO which you can look up.
Yes, in C# you can add methods at runtime through reflection and System.Reflection.Emit.
In C# 4.0 you can even do it in plain C# code with ExpandoObject. This is arguably closer to the Ruby way (it's practically a carbon copy, if I remember correctly) and a lot easier to use.
Edit: All of this applies to all .NET languages, including VB.Net and F#.
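A minimal sketch using ExpandoObject (the member names are made up for illustration):

using System;
using System.Dynamic;

class ExpandoDemo
{
    static void Main()
    {
        dynamic person = new ExpandoObject();
        person.Name = "Ada";                       // add a property at runtime
        person.Greet = (Action)(() =>              // add a "method" at runtime
            Console.WriteLine("Hello, " + person.Name));
        person.Greet();                            // prints: Hello, Ada
    }
}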
The whole point of static type systems like C#'s is that all functionality defined for a particular type is known (and checked) at compile-time.
If you write
foo.jump(42);
the compiler verifies that whatever type foo has, it supports an operation called jump taking an integer parameter.
Recently, C# gained the possibility of having dynamically checked objects through the dynamic type, which basically allows what you described in a very limited context; nevertheless, the overall language is statically typed.
So what's left are dynamic languages like Ruby, where method existence is just checked at run-time (or call-time).
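A small sketch of that limited context (Player and fly are hypothetical names):

using System;

class Player
{
    public void jump(int height) { Console.WriteLine("jumped " + height); }
}

class DynamicDemo
{
    static void Main()
    {
        dynamic foo = new Player();
        foo.jump(42);  // bound at runtime; succeeds because Player has jump(int)
        foo.fly();     // also compiles, but throws RuntimeBinderException at runtime
    }
}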
I think JavaScript can change so-called prototypes to add methods to objects and basically achieve the same thing as Ruby.
Python excels at this operation - here are a bunch of examples: Python: changing methods and attributes at runtime
Lisp's object system is also quite dynamic:
http://en.wikipedia.org/wiki/Common_Lisp_Object_System
"CLOS is dynamic, meaning that not only the contents, but also the structure of its objects can be modified at runtime. CLOS supports changing class definitions on-the-fly (even when instances of the class in question already exist) as well as changing the class membership of a given instance through the change-class operator. CLOS also allows one to add, redefine and remove methods at runtime."
In C# 4 you have dynamic objects, which you can add to and modify at runtime.

How does C# compilation get around needing header files?

I've spent my professional life as a C# developer. As a student I occasionally used C but did not deeply study its compilation model. Recently I jumped on the bandwagon and began studying Objective-C. My first steps have only made me aware of holes in my pre-existing knowledge.
From my research, C/C++/ObjC compilation requires all encountered symbols to be pre-declared. I also understand that building is a two-step process. First you compile each individual source file into individual object files. These object files might have undefined "symbols" (which generally correspond to the identifiers declared in the header files). Second you link the object files together to form your final output. This is a pretty high-level explanation but it satisfies my curiosity enough. But I'd also like to have a similar high-level understanding of the C# build process.
Q: How does the C# build process get around the need for header files? I'd imagine the compilation step perhaps does two passes?
(Edit: Follow up question here How do C/C++/Objective-C compare with C# when it comes to using libraries?)
UPDATE: This question was the subject of my blog on February 4th, 2010. Thanks for the great question!
Let me lay it out for you. In the most basic sense the compiler is a "two pass compiler" because the phases that the compiler goes through are:
Generation of metadata.
Generation of IL.
Metadata is all the "top level" stuff that describes the structure of the code. Namespaces, classes, structs, enums, interfaces, delegates, methods, type parameters, formal parameters, constructors, events, attributes, and so on. Basically, everything except method bodies.
IL is all the stuff that goes in a method body -- the actual imperative code, rather than metadata about how the code is structured.
The first phase is actually implemented via a great many passes over the sources. It's way more than two.
The first thing we do is take the text of the sources and break it up into a stream of tokens. That is, we do lexical analysis to determine that
class c : b { }
is class, identifier, colon, identifier, left curly, right curly.
We then do a "top level parse" where we verify that the token streams define a grammaticaly-correct C# program. However, we skip parsing method bodies. When we hit a method body, we just blaze through the tokens until we get to the matching close curly. We'll come back to it later; we only care about getting enough information to generate metadata at this point.
We then do a "declaration" pass where we make notes about the location of every namespace and type declaration in the program.
We then do a pass where we verify that all the types declared have no cycles in their base types. We need to do this first because in every subsequent pass we need to be able to walk up type hierarchies without having to deal with cycles.
We then do a pass where we verify that all generic parameter constraints on generic types are also acyclic.
We then do a pass where we check whether every member of every type -- methods of classes, fields of structs, enum values, and so on -- is consistent. No cycles in enums, every overriding method overrides something that is actually virtual, and so on. At this point we can compute the "vtable" layouts of all interfaces, classes with virtual methods, and so on.
We then do a pass where we work out the values of all "const" fields.
At this point we have enough information to emit almost all the metadata for this assembly. We still do not have information about the metadata for iterator/anonymous function closures or anonymous types; we do those late.
We can now start generating IL. For each method body (and properties, indexers, constructors, and so on), we rewind the lexer to the point where the method body began and parse the method body.
Once the method body is parsed, we do an initial "binding" pass, where we attempt to determine the types of every expression in every statement. We then do a whole pile of passes over each method body.
We first run a pass to transform loops into gotos and labels.
(The next few passes look for bad stuff.)
Then we run a pass to look for use of deprecated types, for warnings.
Then we run a pass that searches for uses of anonymous types that we haven't emitted metadata for yet, and emit those.
Then we run a pass that searches for bad uses of expression trees. For example, using a ++ operator in an expression tree.
Then we run a pass that looks for all local variables in the body that are defined, but not used, to report warnings.
Then we run a pass that looks for illegal patterns inside iterator blocks.
Then we run the reachability checker, to give warnings about unreachable code, and tell you when you've done something like forgotten the return at the end of a non-void method.
Then we run a pass that verifies that every goto targets a sensible label, and that every label is targeted by a reachable goto.
Then we run a pass that checks that all locals are definitely assigned before use, notes which local variables are closed-over outer variables of an anonymous function or iterator, and which anonymous functions are in reachable code. (This pass does too much. I have been meaning to refactor it for some time now.)
At this point we're done looking for bad stuff, but we still have way more passes to go before we sleep.
Next we run a pass that detects missing ref arguments to calls on COM objects and fixes them. (This is a new feature in C# 4.)
Then we run a pass that looks for stuff of the form "new MyDelegate(Foo)" and rewrites it into a call to CreateDelegate.
Then we run a pass that transforms expression trees into the sequence of factory method calls necessary to create the expression trees at runtime.
Then we run a pass that rewrites all nullable arithmetic into code that tests for HasValue, and so on. (A sketch of this lowering appears after this list.)
Then we run a pass that finds all references of the form base.Blah() and rewrites them into code which does the non-virtual call to the base class method.
Then we run a pass which looks for object and collection initializers and turns them into the appropriate property sets, and so on.
Then we run a pass which looks for dynamic calls (in C# 4) and rewrites them into dynamic call sites that use the DLR.
Then we run a pass that looks for calls to removed methods. (That is, partial methods with no actual implementation, or conditional methods that don't have their conditional compilation symbol defined.) Those are turned into no-ops.
Then we look for unreachable code and remove it from the tree. No point in codegenning IL for it.
Then we run an optimization pass that rewrites trivial "is" and "as" operators.
Then we run an optimization pass that looks for switch(constant) and rewrites it as a branch directly to the correct case.
Then we run a pass which turns string concatenations into calls to the correct overload of String.Concat.
(Ah, memories. These last two passes were the first things I worked on when I joined the compiler team.)
Then we run a pass which rewrites uses of named and optional parameters into calls where the side effects all happen in the correct order.
Then we run a pass which optimizes arithmetic; for example, if we know that M() returns an int, and we have 1 * M(), then we just turn it into M().
Then we do generation of the code for anonymous types first used by this method.
Then we transform anonymous functions in this body into methods of closure classes.
Finally, we transform iterator blocks into switch-based state machines.
Then we emit the IL for the transformed tree that we've just computed.
Easy as pie!
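As an illustration of the nullable-arithmetic pass mentioned in the list above, here is roughly the shape of that lowering (a sketch, not the compiler's literal output):

using System;

class NullableLoweringSketch
{
    static void Main()
    {
        int? a = 1, b = null;

        // Source:  int? sum = a + b;
        // Lowered, approximately, to:
        int? sum = (a.HasValue && b.HasValue)
            ? new int?(a.GetValueOrDefault() + b.GetValueOrDefault())
            : default(int?);

        Console.WriteLine(sum.HasValue ? sum.Value.ToString() : "null");  // prints: null
    }
}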
I see that there are multiple interpretations of the question. I answered the intra-solution interpretation, but let me fill it out with all the information I know.
The "header file metadata" is present in the compiled assemblies, so any assembly you add a reference to will allow the compiler to pull in the metadata from those.
As for things not yet compiled that are part of the current solution, it does a two-pass compilation: first reading namespaces, type names, member names, i.e. everything but the code. Then, when this checks out, it reads the code and compiles that.
This allows the compiler to know what exists and what doesn't exist (in its universe).
To see the two-pass compiler in effect, test the following code, which has three problems: two declaration-related problems and one code problem:
using System;

namespace ConsoleApplication11
{
    class Program
    {
        public static Stringg ReturnsTheWrongType()
        {
            return null;
        }

        static void Main(string[] args)
        {
            CallSomeMethodThatDoesntExist();
        }

        public static Stringg AlsoReturnsTheWrongType()
        {
            return null;
        }
    }
}
Note that the compiler will only complain about the two Stringg types that it cannot find. If you fix those, it then complains about the method called in Main, which it also cannot find.
It uses the metadata from the referenced assemblies. That contains a full type declaration - the same thing you'd find in a header file.
Being a two-pass compiler accomplishes something else: you can use a type in one source file before it is declared in another source file.
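For example (hypothetical file names), this compiles even though Helper is declared in a file the compiler may read after the one that uses it:

// File1.cs
class Program
{
    static void Main(string[] args)
    {
        Helper.SayHi();  // declared in File2.cs; collected in the declaration pass
    }
}

// File2.cs
static class Helper
{
    public static void SayHi()
    {
        System.Console.WriteLine("Hi");
    }
}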
It's a 2-pass compiler. http://en.wikipedia.org/wiki/Multi-pass_compiler
All the necessary information can be obtained from the referenced assemblies.
So there are no header files, but the compiler does need access to the DLLs being used.
And yes, it is a two-pass compiler, but that alone doesn't explain how it gets information about library types.
