I'm currently reading a book by Daniel M. Solis called "Illustrated C# 2010." The book says:
"When a method is called or invoked ..."
What is the difference between these two terms?
From my own (personal and unpaid) research, looking at the common way these terms are used in programming literature and "in the wild", I have found that the following definitions seem to fit their usage.
Execution refers to the process of running code. The exact method does not matter: the code can be compiled or not, and run by a computer or not.
Applying/Application refers to the binding of arguments to the function. Application can be either partial or complete. In the functional programming world, partial application produces another function with fewer parameters, while complete application produces a thunk. Thunks are functions with no parameters and can help with "lazy evaluation".
Invoking/Invocation refers to the process required to schedule the function, with its fully bound arguments, for execution. Such mechanisms include pushing arguments onto the stack and transferring the PC to the new address, placing messages/objects/functions/thunks on a queue for later execution, or various other RPC systems. The exact mechanism does not matter; what matters is the notion of scheduling for future execution. Invoking requires that the function will execute.
Calling is the least well defined of the lot. It generally refers to the combined process of fully applying the function and then invoking it, usually with the added semantic that your code will wait for a return value.
Please also note that all of these terms are subjective from the point of view of the code currently being written. Invoking a function via an RPC call is only invoking it from the client's side. From the server's side the request has a different invocation point, if the function even has any "meaning" as a function on the server's side.
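To make the distinction concrete, here is a small C# sketch of applying versus invoking, using closures to stand in for partial and complete application (the names are purely illustrative):
using System;

class ApplyInvokeDemo
{
    static int Add(int a, int b) { return a + b; }

    static void Main()
    {
        // Partial application: bind one argument, get back a function of the remaining one.
        Func<int, int> addTwo = b => Add(2, b);

        // Complete application: all arguments bound, producing a parameterless thunk.
        Func<int> thunk = () => Add(2, 3);

        // Invocation/calling: the bound work actually runs.
        Console.WriteLine(addTwo(40));  // 42
        Console.WriteLine(thunk());     // 5
    }
}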
Function calling is when you call a function yourself in a program, while function invoking is when it gets called automatically.
For example, consider this program:
struct s
{
    int a, b, total;   // member renamed from 's' so it does not clash with the struct name

    s()                // constructor
    {
        a = 2;
        b = 3;
    }

    void sum()
    {
        total = a + b;
    }
};

int main()
{
    s obj;      // line 1
    obj.sum();  // line 2
}
Here, when line 1 is executed, the function (constructor, i.e. s) is invoked.
When line 2 is executed, the function sum is called.
source: web
Method invocation is a term that usually refers to indirectly calling a method (function) because of problems or difficulties in calling it directly.
For example, in the context of parallel programming: consider two threads inside one application space running in parallel. Calling a public method of an object residing on another thread throws a cross-thread invocation exception because a race may occur. The solution is to invoke the object to execute the method and yield the rest of the job to the object, which manages the parallel requests.
Another example is when you have a delegate pointing to a method somewhere. When you ask the delegate to call that (unknown) method, you Invoke the method to run.
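For instance, a minimal sketch of invoking a method through a delegate (the names here are illustrative):
using System;

class DelegateInvokeDemo
{
    static void Greet() { Console.WriteLine("Hello"); }

    static void Main()
    {
        Action handler = Greet;  // the delegate points at some method
        handler.Invoke();        // explicitly invoke whatever method it points at
        handler();               // shorthand for the same invocation
    }
}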
Maybe he simply considers the terms "call" and "invoke" synonymous, and just wants to mention both words because both terms can be encountered in the wild. Wouldn't it be possible to use "or" in that case?
When you execute the method in your code directly, it's called Calling. When someone else executes it for you, it's Invoking. This is what I understand from the Control.Invoke method.
I don't think there are any official definitions that distinguish the two terms (across all programming fields); the different explanations are made by different developers themselves. So I prefer to treat both terms as equivalent.
To "invoke" appears to mean to call a method indirectly through an intermediary mechanism. I'm sure the precise meaning gets blurred by authors. But, they must be trying to describe a different way of calling a method or the term wouldn't have arisen in the first place.
Also, the general (non computer) definition of "invoke" typically means to call out to a higher power for assistance. This would translate to asking an intermediary entity for help in getting something done.
simply "call" is when guarante that the method will be tooken
"invoke" is when we just ask for method to will be tooken in appropriate time
for example
the main thread (GUI) can modify controls by calling
but when you have another thread want to modify controls it just ask the main thread to do that when it is ready
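A rough WinForms sketch of that idea, assuming a hypothetical statusLabel control on a form:
using System;
using System.Windows.Forms;

public class StatusForm : Form
{
    private readonly Label statusLabel = new Label();

    public StatusForm()
    {
        Controls.Add(statusLabel);
    }

    // Safe to call from any thread.
    public void UpdateStatus(string text)
    {
        if (statusLabel.InvokeRequired)
        {
            // Another thread: ask the UI thread to run this method when it is ready.
            statusLabel.Invoke(new Action<string>(UpdateStatus), text);
        }
        else
        {
            // UI thread: just call it directly.
            statusLabel.Text = text;
        }
    }
}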
Related
My object in C# will call some JavaScript when constructed, through the WebBrowser control. The JavaScript will then call back to our C# code with a success or failure. I need to have the constructor of the C# object wait until the JavaScript returns the callback before it leaves the constructor. How would I go about doing this?
Essentially, I need to make sure that the object will always be properly initialized when created. This is dependent on the JavaScript calling back, which is at least slightly variable.
While you cannot use async/await in a constructor, it still may be possible to make async JavaScript call-outs and wait for their completion, depending on what you actually do in JavaScript and inside the callback from JavaScript to C#. It's done by organizing a nested message loop; here's an example taking this approach. Note, this may be dangerous as it may lead to code reentrancy.
That said, you still might be able to refactor your code to use async/await with one of the approaches described by Stephen Cleary in his blog entry.
Instead of having an async constructor (which is not possible in .NET anyway (thanks Servy), and I don't think any framework that allowed it would be a sane framework, if such a thing exists), you should:
Construct your object in such a way that you don't depend on that Javascript call;
Call whatever logic you need that will be done past .NET boundaries, and when the JavaScript is done and responds...
... Call some initialize() method (name it like that or something similar).
You could have a flag in each instance telling whether it has already passed through the post-JavaScript initialization, and some logic in your class so that its instances can only be considered to be in a valid, ready, usable state after that initialization step. Good luck and happy coding :)
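A minimal sketch of that two-phase pattern; the names (BrowserBackedObject, Initialize, DoWork) are illustrative and not taken from the original code:
using System;

public class BrowserBackedObject
{
    private bool _initialized;

    public BrowserBackedObject()
    {
        // Do only the work that does not depend on the JavaScript call-out.
    }

    // Called from the C# callback once JavaScript reports success or failure.
    public void Initialize(bool success)
    {
        if (!success)
            throw new InvalidOperationException("JavaScript initialization failed.");
        _initialized = true;
    }

    public void DoWork()
    {
        if (!_initialized)
            throw new InvalidOperationException("Not initialized yet; wait for the JavaScript callback.");
        // ... the real work goes here ...
    }
}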
For the question I have, I probably need a more convincing answer before implementing it in my solution. I am not sure whether my understanding is correct. The implementation details follow:
In a class, in the main method, a class object C is created to call an instance method that takes an integer as a parameter:
public AnyClass MyMethod(int classVar)   // AnyClass is a placeholder for whatever type is returned
{
    // Can have more implementation, using the parameter passed
    return new AnyClass(classVar);
}
In the main method, I want this method to be called on multiple threads, using the same class object; the parameter would be the value supplied by the for loop that starts the threads. Now that the same method is executing in memory, does this need any kind of locking? In my view, no. I have tested it, but I am not sure why in theory: wouldn't different threads mess up each other's parameter values? In my implementation that doesn't seem to be the case.
The only thing I cannot guarantee is which thread accesses and returns first, as that would not be in order, but if I do not care about that, is this implementation correct?
Please note, this is an attempt to describe the issue in a stand-alone manner; I have a similar implementation as part of a complex project.
Any suggestions / pointers would be great. Please let me know if you need a clarification.
No, you don't have to lock anything here:
Code is read-only, so two threads executing the same code have no problem.
Each thread has its own stack, so threads can't mess up each other's stack-based variables.
However, when two threads may see the same object, and at least one modifies it, you may need to lock that object.
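A small sketch contrasting the two cases (SharedCounter and its members are illustrative): parameters are per-thread and need no locking, while a shared, mutated field does.
class SharedCounter
{
    private readonly object _lock = new object();
    private int counter;

    // Safe without locking: 'value' lives on each calling thread's own stack.
    public int Square(int value)
    {
        return value * value;
    }

    // Needs locking: 'counter' is shared and modified by every caller.
    public void Increment()
    {
        lock (_lock)
        {
            counter++;
        }
    }
}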
You are absolutely correct!
No sort of locking is required. Locking is required when the code within the method accesses something other than what came in through the parameters. If your code accesses instance variables or singleton objects, then you might need locking. I say might because if your code accesses that external data in a read-only manner, you won't need locking.
Fundamentally, you need locking if and only if two parallel threads access and mutate (change) any data shared between them.
As for the method arguments, they are private to each thread. Two threads can invoke the same method at the same time with different arguments, and both will work fine, as long as the code is confined to working only with the data it got in the arguments.
In the sample code, you did not access any shared data, hence locking is not required. Hope you are convinced.
You are not accessing any shared resources in your method as it is written (assuming the constructor of AnyClass doesn't either).
In addition, the parameter (int) that you are passing in, is passed by value, so even if your method did change it, it would only change a local copy of it on the stack of the method being called.
So from what you've shown, there is no need to do any locking. The sort of thing where you would need to lock might be if you were passing in the same instance of an object into your method and then doing something to change the state of that object, in which case you would need to synchronize access to the state of the object.
I have the code below, which is basically calling a Domain Service in a Silverlight application.
LoadOperation<tCity> loadOperation = _dataContext.Load(query, callBack, true);
Can you tell me which operation is done first?
Is the callBack method called before loadOperation variable is assigned or after it is assigned?
Thanks
Assuming it's meant to be an asynchronous operation, it could happen either way, in theory. The asynchronous operation should occur in another thread, and if that finishes before Load returns, the callback could be called before the assignment completes.
In practice, I'd expect the async call to take much longer than whatever housekeeping Load does at the end of the method - but I also wouldn't put that assumption into the code. Unless there's explicit synchronization to ensure that the assignment occurs before the callback, I don't think it's a good idea to rely on it.
Even if at the moment the assignment always happens first, consider:
What happens if there's no network connection at the moment? The async call could fail very quickly.
What happens if some caching is added client-side? The call could succeed very quickly.
I don't know what kind of testing you're able to do against the RIA services, but sometimes you may want to be able to mock asynchronous calls by making them execute the callback on the same thread - which means the callback could happen in tests before the assignment. You could avoid this by forcing a genuinely asynchronous mock call, but handling threading in tests can get hairy; sometimes it's easiest just to make everything synchronous.
EDIT: I've been thinking about this more, and trying to work out the reasons behind my gut feeling that you shouldn't make this assumption, even though it's almost always going to be fine in reality.
Relying on the order of operations is against the spirit of asynchronicity.
You should (IMO) be setting something off, and be ready for it to come back at any time. That's how you should be thinking about it. Once you start down the slippery slope of "I'm sure I'll be able to just do a little bit of work before the response is returned" you end up in a world of uncertainty.
First, I would say write your callback without any assumptions. But aside from that I don't see how the callback could possibly occur before the assignment. The load operation would have to return immediately after the thread is spun.
There are 3 possible answers to this very specific RIA Services question:
It returns the assignment before the callback.
It may be possible for the callback to occur before the assignment.
You do not care.
Case 1:
Based on a .Net Reflector investigation of the actual load method in question, it appears impossible for it to call the callback before the return occurs. (If anyone wants to argue that they are welcome to explain the intricacies of spinning up background threads).
Case 2:
Proof that "the sky is falling" is possible would have to be shown in the reflected code. (If anyone wants to support this they are also welcome to explain the intricacies of spinning up background threads).
Case 3:
In reality, the return value of a RIA Services load method is normally used to assign a lazy loading data source. It is not used by the callback. The callback is passed its own context, of the loaded data, as a parameter.
StackOverflow is all about practical code answers, so the only practical answer is option 3:
You do not care (as you do/should not use the assignment value from the callback).
Right now I do:
Util.AssertBackgroundThread();
or
Util.AssertUIThread();
at the start of the methods. This is not too bad, but it's runtime error checking. The reason we use static languages like C# is to move more of the error checking onto the compiler's shoulders.
Now I don't think this is generally easy, but if I restrict myself to only launching threads (or using ThreadPool.QueueUserWorkItem) from my own utility methods, then it seems to me that if I mark those methods, it should be possible to do static analysis to verify that methods intended only to be run on the UI thread are indeed run only on the UI thread?
So two questions in one here.
1. Am I right that this can be checked at compile time?
2. Is there any practical way to do this in Visual Studio 2008 (with the latest ReSharper installed)?
I always liked the pattern:
public void GuiMethod(object param)
{
    if (this.InvokeRequired)
    {
        // Not on the GUI thread: re-invoke this same method onto the GUI thread.
        this.Invoke(new Action<object>(GuiMethod), param);
    }
    else
    {
        // On the GUI thread: perform the GUI work here.
    }
}
You suffer the penalty of the check and the invoke, but with this pattern you can guarantee that the method is either already running on the GUI thread or will be invoked onto the GUI thread.
The only thing I can think of to help you is to make your asserts use #if DEBUG in their bodies so that the methods are empty at release.
e.g.
public static void AssertUIThread()
{
#if DEBUG
    // the code goes here
#endif
}
That way you can check during development if you're calling methods appropriately, and the JIT will remove the call entirely in your production code.
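For what it's worth, one possible (purely hypothetical) body for such an assert, assuming the UI thread's id was recorded once at startup:
using System.Diagnostics;
using System.Threading;

public static class Util
{
    private static int _uiThreadId;

    // Hypothetical helper: call once from the UI thread, e.g. at the start of Main.
    public static void RememberUIThread()
    {
        _uiThreadId = Thread.CurrentThread.ManagedThreadId;
    }

    public static void AssertUIThread()
    {
#if DEBUG
        Debug.Assert(
            Thread.CurrentThread.ManagedThreadId == _uiThreadId,
            "This method must be called on the UI thread.");
#endif
    }
}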
I don't see a way to do this at compile-time at the moment, but I'm favoriting this question in the hopes that it'll be answered.
Edit:
The more I think about it, the more I think you might be able to do what you want using a custom FxCop rule post-compilation. The thing is... I don't know the Introspection API that FxCop provides, and it's not well documented. Or rather, it's not documented at all. The best I can do for you is provide a tutorial or two that may or may not help you. I'm currently in the middle of reading them; if I find something interesting, I'll post it.
Edit 2:
Ahah! You can analyze the caller and the callees of a method. Using the tutorial specified there, create an attribute specifically for methods that should always be called from the UI thread, and another one for methods that should only be called from a separate thread. Your custom rule checks for one of these attributes and only runs if a method has the attribute. It then analyzes the callers of that method (and their callers, and so forth, recursively) until it can determine that the caller was either on the UI thread or from a new thread.
Now we've come to the tricky part. I haven't been able to figure this part out yet, and I leave it to you to see what you can come up with, since it's late and I can't devote much time to the problem, but I'm very much interested in the solution. The problem I keep running into is that threads are all started using delegates, and I get the feeling there will be trouble going further up the caller chain than those delegates. I don't know if it'll be possible to get to the delegate; if it were possible, the delegate type could be compared to known threading delegates to determine whether the call was made on a new thread or not.
Even if that's possible, there'd be the problem of going through the delegate. If you can't, you can only be certain up to the first delegate whether or not something is on a new thread.
So, problems to solve. But, hopefully, a first step for you.
What exactly do I need delegates and threads for?
Delegates act as the logical (but safe) equivalent to function-pointers; they allow you to talk about an operation in an abstract way. The typical example of this is events, but I'm going to use a more "functional programming" example: searching in a list:
List<Person> people = ...
Person fred = people.Find( x => x.Name == "Fred");
Console.WriteLine(fred.Id);
The "lambda" here is essentially an instance of a delegate - a delegate of type Predicate<Person> - i.e. "given a person, is something true or false". Using delegates allows very flexible code - i.e. the List<T>.Find method can find all sorts of things based on the delegate that the caller passes in.
In this way, they act largely like a 1-method interface - but much more succinctly.
Delegates: Basically, a delegate is a way to reference a method. It's like a pointer to a method: you can set it to different methods that match its signature and use it to pass a reference to that method around.
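A minimal sketch of that idea (BinaryOp and the method names are illustrative):
using System;

class DelegatePointerDemo
{
    // A delegate type: any method taking two ints and returning an int matches it.
    delegate int BinaryOp(int x, int y);

    static int Add(int x, int y) { return x + y; }
    static int Multiply(int x, int y) { return x * y; }

    static void Main()
    {
        BinaryOp op = Add;
        Console.WriteLine(op(3, 4));   // 7

        op = Multiply;                 // point it at a different matching method
        Console.WriteLine(op(3, 4));   // 12
    }
}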
A thread is a sequential stream of instructions that execute one after another to complete a computation. You can have different threads running simultaneously to accomplish a specific task. At any given moment, a thread runs on a single logical processor.
Delegates are used to add methods to events dynamically.
Threads run inside of processes, and allow you to run 2 or more tasks at once that share resources.
I'd suggest having a search on these terms; there is plenty of information out there. They are pretty fundamental concepts, and Wikipedia is a high-level place to start:
http://en.wikipedia.org/wiki/Thread_(computer_science)
http://en.wikipedia.org/wiki/C_Sharp_(programming_language)
Concrete examples always help me so here is one for threads. Consider your web server. As requests arrive at the server, they are sent to the Web Server process for handling. It could handle each as it arrives, fully processing the request and producing the page before turning to the next one. But consider just how much of the processing takes place at hard drive speeds (rather than CPU speeds) as the requested page is pulled from the disk (or data is pulled from the database) before the response can be fulfilled.
By pulling threads from a thread pool and giving each request its own thread, we can take care of the non-disk needs for hundreds of requests before the disk has returned data for the first one. This will permit a degree of virtual parallelism that can significantly enhance performance. Keep in mind that there is a lot more to Web Server performance but this should give you a concrete model for how threading can be useful.
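A toy console sketch of that thread-pool idea (the request handling is simulated with Sleep; nothing here is real web-server code):
using System;
using System.Threading;

class RequestDemo
{
    // Stand-in for the disk/database-bound part of serving one request.
    static void HandleRequest(object state)
    {
        Thread.Sleep(100); // simulated slow I/O
        Console.WriteLine("Handled request " + state +
                          " on pool thread " + Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        // Each incoming "request" goes to a pool thread instead of being
        // processed one after another on the main thread.
        for (int i = 1; i <= 5; i++)
        {
            ThreadPool.QueueUserWorkItem(HandleRequest, i);
        }

        Console.WriteLine("All requests queued; the main thread is free for other work.");
        Thread.Sleep(1000); // crude wait so the demo output appears before the process exits
    }
}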
They are useful for the same reason high-level languages are useful. You don't need them for anything, since really they are just abstractions over what is really happening. They do make things significantly easier and faster to program or understand.
Marc Gravell provided a nice answer for 'what is a delegate.'
Andrew Troelsen defines a thread as
"...a path of execution within a process." ("Pro C# 2008 and the .NET 3.5 Platform," Apress)
All processes that are run on your system have at least one thread. Let's call it the main thread. You can create additional threads for any variety of reasons, but the clearest example for illustrating the purpose of threads is printing.
Let's say you open your favorite word processing application (WPA), type a few lines, and then want to print those lines. If your WPA uses the main thread to print the document, the WPA's user interface will be 'frozen' until the printing is finished. This is because the main thread has to print the lines before it can process any user interface events, i.e., button clicks, mouse movements, etc. It's as if the code were written like this:
do
{
    ProcessUserInterfaceEvents();
    PrintDocument();
} while (true);
Clearly, this is not what users want. Users want the user interface to be responsive while the document is being printed.
The answer, of course, is to print the lines in a second thread. In this way, the user interface can focus on processing user interface events while the secondary thread focuses on printing the lines.
The illusion is that both tasks happen simultaneously. On a single processor machine, this cannot be true since the processor can only execute one thread at a time. However, switching between the threads happens so fast that the illusion is usually maintained. On a multi-processor (or multi-core) machine, this can be literally true since the main thread can run on one processor while the secondary thread runs on another processor.
In .NET, threading is a breeze. You can utilize the System.Threading.ThreadPool class, use asynchronous delegates, or create your own System.Threading.Thread objects.
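A bare-bones sketch of the printing example using System.Threading.Thread (the console output stands in for a responsive UI):
using System;
using System.Threading;

class PrintDemo
{
    // Stand-in for the slow printing work.
    static void PrintDocument()
    {
        Thread.Sleep(2000);
        Console.WriteLine("Document printed.");
    }

    static void Main()
    {
        // Hand the slow work to a secondary thread...
        Thread printThread = new Thread(PrintDocument);
        printThread.Start();

        // ...while the main ("UI") thread stays free to respond to the user.
        Console.WriteLine("Still handling user interface events while printing...");
        printThread.Join(); // wait for the print thread before exiting the demo
    }
}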
If you are new to threading, I would throw out two cautions.
First, you can actually hurt your application's performance if you choose the wrong threading model. Be careful to avoid using too many threads or trying to thread things that should really happen sequentially.
Second (and more importantly), be aware that if you share data between threads, you will likely need to synchronize access to that shared data, e.g., using the lock keyword in C#. There is a wealth of information on this topic available online, so I won't repeat it here. Just be aware that you can run into intermittent, not-always-repeatable bugs if you do not do this carefully.
Your question is too vague...
But you probably just want to know how to use them in order to have a window, a time-consuming process running, and a progress bar...
So create a thread to do the time-consuming process and use delegates to update the progress bar! :)
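If it helps, here is a rough console sketch of that idea using BackgroundWorker; in a real GUI application the ProgressChanged handler is raised on the UI thread, and that is where you would update the progress bar:
using System;
using System.ComponentModel;
using System.Threading;

class ProgressDemo
{
    static void Main()
    {
        var worker = new BackgroundWorker { WorkerReportsProgress = true };

        // DoWork runs on a worker thread: the time-consuming process.
        worker.DoWork += delegate
        {
            for (int i = 1; i <= 10; i++)
            {
                Thread.Sleep(200);             // stand-in for the slow step
                worker.ReportProgress(i * 10);
            }
        };

        // ProgressChanged is delivered through a delegate; this is the hook for a progress bar.
        worker.ProgressChanged += delegate(object sender, ProgressChangedEventArgs e)
        {
            Console.WriteLine("Progress: " + e.ProgressPercentage + "%");
        };

        worker.RunWorkerAsync();
        Console.ReadLine(); // press Enter after the work finishes to end the demo
    }
}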