I have this class:
public class Statistics
{
    List<string> _list;

    public List<string> ipList
    {
        get { return _list; }
        set { _list = value; }
    }

    string _Path = @"C:\Program Files\myApp.exe";

    ProcessStartInfo ps = null;

    public getStatistics(string Path)
    {
        _Path = Path;
        getStatistics();
    }
}
I want to run the Statistics work on a different thread, and I did something like:
Statistics stat = new Statistics(some path);
Thread<List<string>> lList = new Thread<List<string>>(() => stat.getStatistics());
but the compiler error says "The non-generic type 'System.Threading.Thread' cannot be used with type arguments".
I did not post my whole class; I only want to know how to start the thread.
Thanks
You need to take a step back to start with and read the compiler error. Thread is not a generic type. It's really not at all clear what you're trying to do here, especially as you haven't even shown a parameterless getStatistics() method (which should be called GetStatistics() to follow .NET naming conventions) and the parameterized getStatistics() method you have shown doesn't have a return type.
Starting a thread with a lambda expression is the easy part:
Thread thread = new Thread(() => CallSomeMethodHere());
thread.Start();
It's not clear how that translates to your sample code though.
Or using the TPL in .NET 4, you can (and probably should) use Task or Task<T>:
Task task = Task.Factory.StartNew(() => CallSomeMethodHere());
or
Task<string> task = Task.Factory.StartNew(() => CallSomeMethodReturningString());
It's possible that you really want:
Task<List<string>> statisticsTask = Task.Factory.StartNew(() =>
{
    Statistics statistics = new Statistics(path);
    return statistics.ipList;
});
Note that here the constructor is called within the new task - which is important, as it looks like that's probably doing all the work. (That's usually a bad idea to start with, but that's another matter.)
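To consume the result later you would then wait on the task, for example (a sketch; with C# 5 / .NET 4.5 you could await it instead):
// Blocks the calling thread until the statistics are available.
List<string> ips = statisticsTask.Result;
foreach (string ip in ips)
{
    Console.WriteLine(ip);
}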
You should look at .NET naming conventions in general, btw...
I've read many explanations but none of them made sense to me.
I'm doing this in Xamarin.Forms:
public class SomeClass
{
    public SomeClass()
    {
        var request = new GeolocationRequest(GeolocationAccuracy.High, TimeSpan.FromSeconds(10));
        var cts = new CancellationTokenSource();
        var location = Task.Run<Location>(async () => await Geolocation.GetLocationAsync(request, cts.Token));
        var userLat = location.Latitude; // doesn't work
        var userLon = location.Longitude; // doesn't work
    }
}
The reason I'm doing this is that I'm trying to run everything I need, such as getting the user's location, showing it on the map, loading up some pins, etc., as soon as the Xamarin.Forms.Maps map appears.
I know it's bad practice from what you guys answered, so I'm working on changing that, but I'm still having a hard time understanding how to do it differently; it's confusing. I'm reading your articles and links to make sure I understand.
I tried to run Task.Run(async () => await) on many methods and save their values in variables, but I can't get the returned value, and that's what made me post this question: I need to change my code.
I know I could get the returned value using Task.Result, but I've read that this is bad.
How do I get the UI to load, wait for the ViewModel to do what it has to do, and then have the UI use whatever the ViewModel gives it when it is ready?
You mentioned in a comment to another answer that you can't do it async because this code is in a constructor. In that case, it is recommended to move the asynchronous code into a separate method:
public class MyClass
{
    public async Task Init()
    {
        var request = new GeolocationRequest(GeolocationAccuracy.High, TimeSpan.FromSeconds(10));
        var cts = new CancellationTokenSource();
        var location = await Geolocation.GetLocationAsync(request, cts.Token);
        var userLat = location.Latitude;
        var userLon = location.Longitude;
    }
}
You can use it the following way:
var myObject = new MyClass();
await myObject.Init();
The other methods of this class could throw an InvalidOperationException if Init() wasn't called yet; you can set a private boolean field wasInitialized to true at the end of the Init() method to support that. As an alternative, you can make your constructor private and create a static method that creates your object. This way, you can ensure that your object is always initialized correctly:
public class MyClass
{
    private double userLat;
    private double userLon;

    private MyClass() { }

    public static async Task<MyClass> CreateNewMyClass()
    {
        var result = new MyClass();
        var request = new GeolocationRequest(GeolocationAccuracy.High, TimeSpan.FromSeconds(10));
        var cts = new CancellationTokenSource();
        var location = await Geolocation.GetLocationAsync(request, cts.Token);
        result.userLat = location.Latitude;
        result.userLon = location.Longitude;
        return result;
    }
}
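You would then create an instance like this (a sketch of the call site):
var myObject = await MyClass.CreateNewMyClass();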
I think it could be this:
public async Task SomeMethodAsync()
{
    var request = new GeolocationRequest(GeolocationAccuracy.High, TimeSpan.FromSeconds(10));
    var cts = new CancellationTokenSource();
    var location = await Geolocation.GetLocationAsync(request, cts.Token);
    var userLat = location.Latitude;
    var userLon = location.Longitude;
}
Note that methods which contain await calls should be marked as async. If you need to return something from this method, then the return type will be Task<TResult>.
Constructors cannot be marked with the async keyword, and it's better not to call any async methods from a constructor, because they cannot be awaited there. As an alternative, please consider the Factory Method pattern. Check the following article by Stephen Cleary for details: Async OOP 2: Constructors.
The normal way to retrieve results from a Task<T> is to use await. However, since this is a constructor, there are some additional wrinkles.
Think of it this way: asynchronous code may take some time to complete, and you can never be sure how long. This is in conflict with the UI's requirements; when Xamarin creates your UI, it needs something to show the user right now.
So, your UI class constructor must complete synchronously. The normal way to handle this is to (immediately and synchronously) create the UI in some kind of "loading..." state and start the asynchronous operation. Then, when the operation completes, update the UI with that data.
I discuss this in more detail in my article on async data binding.
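As a rough sketch of that pattern (the class, property, and status text here are invented for illustration, not taken from the linked article), the constructor starts the work immediately and the bound properties are updated when it completes:
using System;
using System.ComponentModel;
using System.Threading;
using System.Threading.Tasks;
using Xamarin.Essentials;

public class MapViewModel : INotifyPropertyChanged
{
    private string _status = "Loading...";

    public string Status
    {
        get { return _status; }
        private set
        {
            _status = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Status)));
        }
    }

    public MapViewModel()
    {
        // Kick off the asynchronous work; the constructor itself returns immediately.
        _ = LoadAsync();
    }

    private async Task LoadAsync()
    {
        var request = new GeolocationRequest(GeolocationAccuracy.High, TimeSpan.FromSeconds(10));
        var location = await Geolocation.GetLocationAsync(request, CancellationToken.None);

        // Back on the UI context: update the bound property with the real data.
        Status = $"Lat {location.Latitude}, Lon {location.Longitude}";
    }

    public event PropertyChangedEventHandler PropertyChanged;
}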
Do the asynchronous work before constructing the object and pass in the data to the constructor. I suggest a factory.
interface IMyFactory
{
    Task<MyClass> GetMyClass();
}

class MyFactory : IMyFactory
{
    public async Task<MyClass> GetMyClass()
    {
        var request = new GeolocationRequest(GeolocationAccuracy.High, TimeSpan.FromSeconds(10));
        var cts = new CancellationTokenSource();
        var location = await Geolocation.GetLocationAsync(request, cts.Token);
        return new MyClass(location);
    }
}
class MyClass
{
    public MyClass(Location location)
    {
        var userLat = location.Latitude;
        var userLon = location.Longitude;
        //etc....
    }
}
To create an instance, instead of
var x = new MyClass();
you'd call
var factory = new MyFactory();
var x = await factory.GetMyClass();
The nice thing about this approach is that you can mock the factory (e.g. for unit tests) in a way that does not depend on the external service.
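For example, a hand-written fake that never touches the device's geolocation might look like this (a sketch; the coordinates are arbitrary, and Location(latitude, longitude) is the Xamarin.Essentials constructor):
class FakeMyFactory : IMyFactory
{
    public Task<MyClass> GetMyClass()
    {
        // No device call at all: hand back a fixed location immediately.
        return Task.FromResult(new MyClass(new Location(47.6, -122.3)));
    }
}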
I'm struggling to find simple docs of what AsyncLocal<T> does.
I've written some tests which I think tell me that the answer is "yes", but it would be great if someone could confirm that! (especially since I don't know how to write tests that would have definitive control of the threads and continuation contexts ... so it's possible that they only work coincidentally!)
As I understand it, ThreadLocal will guarantee that if you're on a different thread, then you'll get a different instance of an object.
If you're creating and ending threads, then you might end up re-using the thread again later (and thus arriving on a thread where "that thread's" ThreadLocal object has already been used a bit).
But the interaction with await is less pleasant. The thread that you continue on (even if you use .ConfigureAwait(true)) is not guaranteed to be the same thread you started on, so you may not get the same object back out of ThreadLocal on the other side.
Conversely, AsyncLocal does guarantee that you'll get the same object either side of an await call.
But I can't find anywhere that actually says that AsyncLocal will get a value that's specific to the initial thread, in the first place!
i.e.:
Suppose you have an instance method (MyAsyncMethod) that references a 'shared' AsyncLocal field (myAsyncLocal) from its class, on either side of an await call.
And suppose that you take an instance of that class and call that method in parallel a bunch of times. And suppose finally that each invocation happens to end up scheduled on a distinct thread.
I know that for each separate invocation of MyAsyncMethod, myAsyncLocal.Value will return the same object before and after the await (assuming that nothing reassigns it)
But is it guaranteed that each of the invocations will be looking at different objects in the first place?
As mentioned at the start, I've created a test to try to determine this myself. The following test passes consistently:
public class AssessBehaviourOfAsyncLocal
{
    private class StringHolder
    {
        public string HeldString { get; set; }
    }

    [Test, Repeat(10)]
    public void RunInParallel()
    {
        var reps = Enumerable.Range(1, 100).ToArray();
        Parallel.ForEach(reps, index =>
        {
            var val = "Value " + index;
            Assert.AreNotEqual(val, asyncLocalString.Value?.HeldString);

            if (asyncLocalString.Value == null)
            {
                asyncLocalString.Value = new StringHolder();
            }

            asyncLocalString.Value.HeldString = val;
            ExamineValuesOfLocalObjectsEitherSideOfAwait(val).Wait();
        });
    }

    static readonly AsyncLocal<StringHolder> asyncLocalString = new AsyncLocal<StringHolder>();

    static async Task ExamineValuesOfLocalObjectsEitherSideOfAwait(string expectedValue)
    {
        Assert.AreEqual(expectedValue, asyncLocalString.Value.HeldString);
        await Task.Delay(100);
        Assert.AreEqual(expectedValue, asyncLocalString.Value.HeldString);
    }
}
But is it guaranteed that each of the invocations will be looking at different objects in the first place?
No. Think of it logically like a parameter (not ref or out) you pass to a function. Any changes (e.g. setting properties) to the object will be seen by the caller. But if you assign a new value - it won't be seen by the caller.
So in your code sample there are:
Context for the test
-> Context for each of the parallel foreach invocations (some may be "shared" between invocations since parallel will likely reuse threads)
-> Context for the ExamineValuesOfLocalObjectsEitherSideOfAwait invocation
I am not sure if context is the right word - but hopefully you get the right idea.
So the AsyncLocal will flow (just like a parameter to a function) from the context for the test, down into the context for each of the parallel foreach invocations, and so on. This is different from ThreadLocal, which won't flow down like that.
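As a minimal sketch of that difference (an illustration, not part of the original answer), the AsyncLocal value flows into a child task and across an await, while the ThreadLocal value generally does not:
static readonly ThreadLocal<string> threadLocal = new ThreadLocal<string>();
static readonly AsyncLocal<string> asyncLocal = new AsyncLocal<string>();

static async Task DemoAsync()
{
    threadLocal.Value = "thread";
    asyncLocal.Value = "async";

    await Task.Run(() =>
    {
        // A thread-pool thread has its own ThreadLocal slot, so this is usually null.
        Console.WriteLine(threadLocal.Value ?? "<null>");

        // The AsyncLocal value flows with the ExecutionContext, so this prints "async".
        Console.WriteLine(asyncLocal.Value);
    });

    // After the await we may resume on a different thread, so ThreadLocal is unreliable here,
    // but AsyncLocal still returns "async".
    Console.WriteLine(asyncLocal.Value);
}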
Building on top of your example, have a play with:
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using NUnit.Framework;

namespace NUnitTestProject1
{
    public class AssessBehaviourOfAsyncLocal
    {
        public class Tester
        {
            public int Value { get; set; }
        }

        [Test, Repeat(50)]
        public void RunInParallel()
        {
            var newObject = new object();
            var reps = Enumerable.Range(1, 5);
            Parallel.ForEach(reps, index =>
            {
                //Thread.Sleep(index * 50); (with or without this line, the assertions below still pass)
                Assert.AreEqual(null, asyncLocalString.Value);
                asyncLocalObject.Value = newObject;
                asyncLocalTester.Value = new Tester() { Value = 1 };

                var backgroundTask = new Task(() =>
                {
                    Assert.AreEqual(null, asyncLocalString.Value);
                    Assert.AreEqual(newObject, asyncLocalObject.Value);
                    asyncLocalString.Value = "Bobby";
                    asyncLocalObject.Value = "Hello";
                    asyncLocalTester.Value.Value = 4;
                    Assert.AreEqual("Bobby", asyncLocalString.Value);
                    Assert.AreNotEqual(newObject, asyncLocalObject.Value);
                });

                var val = "Value " + index;
                asyncLocalString.Value = val;
                Assert.AreEqual(newObject, asyncLocalObject.Value);
                Assert.AreEqual(1, asyncLocalTester.Value.Value);

                backgroundTask.Start();
                backgroundTask.Wait();

                // Note that "Bobby" is not visible here
                Assert.AreEqual(val, asyncLocalString.Value);
                Assert.AreEqual(newObject, asyncLocalObject.Value);
                Assert.AreEqual(4, asyncLocalTester.Value.Value);

                ExamineValuesOfLocalObjectsEitherSideOfAwait(val).Wait();
            });
        }

        static readonly AsyncLocal<string> asyncLocalString = new AsyncLocal<string>();
        static readonly AsyncLocal<object> asyncLocalObject = new AsyncLocal<object>();
        static readonly AsyncLocal<Tester> asyncLocalTester = new AsyncLocal<Tester>();

        static async Task ExamineValuesOfLocalObjectsEitherSideOfAwait(string expectedValue)
        {
            Assert.AreEqual(expectedValue, asyncLocalString.Value);
            await Task.Delay(100);
            Assert.AreEqual(expectedValue, asyncLocalString.Value);
        }
    }
}
Notice how backgroundTask is able to see the same async local values as the code that invoked it (even though it runs on another thread). It also doesn't affect the calling code's async local string or object, since it re-assigns them. But the calling code can see its change to Tester (proving that the Task and its calling code share the same Tester instance).
I have an API similar to the following, which takes a delegate to report the progress of the operation. It also returns the task so that it can be cancelled.
If the user of this function uses the same delegate instance for multiple function calls, how does the user determine which progress report is for which function invocation.
class Program
{
    public static Task LongOperationAsync(List<String> names, Action<String> progress)
    {
        return Task.Factory.StartNew(() =>
        {
            foreach (var item in names)
            {
                // do some long operation
                progress(item.ToUpper());
            }
        });
    }

    static void Main(string[] args)
    {
        var friends = new List<String>() { "joey", "ross", "chandler", "monica", "phoebe", "rachel" };
        var seinfelds = new List<String>() { "Jerry", "kramer", "george", "elaine" };

        Action<String> resultProcessor = (result) =>
        {
            // process the result. let's say update the UI as and when results arrive
            Console.WriteLine(result);
        };

        var task1 = LongOperationAsync(friends, resultProcessor);
        var task2 = LongOperationAsync(seinfelds, resultProcessor);

        Task.WaitAll(task1, task2);
        Console.ReadLine();
    }
}
So I have 2 questions
Actually, I see three. Here's your first one (the lack of a question mark doesn't keep it from being a question :) ):
how does the user determine which progress report is for which function invocation.
How do you want the user to determine that? With the code you posted, it simply wouldn't be possible. There are a variety of ways you could fix that, but if you are talking about re-using the same delegate instance, you'll have to modify the called method:
public static Task LongOperationAsync(List<String> names, Action<String> progress, string id)
{
    return Task.Factory.StartNew(() =>
    {
        foreach (var item in names)
        {
            // do some long operation
            progress($"{id}: {item.ToUpper()}");
        }
    });
}
static void Main(string[] args)
{
    var friends = new List<String>() { "joey", "ross", "chandler", "monica", "phoebe", "rachel" };
    var seinfelds = new List<String>() { "Jerry", "kramer", "george", "elaine" };

    Action<String> resultProcessor = (result) =>
    {
        // process the result. let's say update the UI as and when results arrive
        Console.WriteLine(result);
    };

    var task1 = LongOperationAsync(friends, resultProcessor, "task1");
    var task2 = LongOperationAsync(seinfelds, resultProcessor, "task2");

    Task.WaitAll(task1, task2);
    Console.ReadLine();
}
Or a related alternative, if you want the callback itself to be able to know which task the progress is for:
public static Task LongOperationAsync(List<String> names, Action<String, String> progress, string id)
{
    return Task.Factory.StartNew(() =>
    {
        foreach (var item in names)
        {
            // do some long operation
            progress(item.ToUpper(), id);
        }
    });
}
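The call site for that variant would then look something like this (a sketch, reusing the friends and seinfelds lists from the earlier Main):
Action<String, String> resultProcessor = (result, id) =>
{
    // the callback now knows which task the progress came from
    Console.WriteLine($"{id}: {result}");
};

var task1 = LongOperationAsync(friends, resultProcessor, "task1");
var task2 = LongOperationAsync(seinfelds, resultProcessor, "task2");
Task.WaitAll(task1, task2);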
Your other questions are "primarily opinion-based" and so not really suitable for Stack Overflow, but since I'm writing anyway, here's my two cents:
• Is it always good practice to take user-provided info for a function which takes a delegate (Action or Func)?
Always? No. It depends on context.
• Is it recommended to reuse the delegate instance across multiple function calls?
This also depends on context. If you can do so conveniently, or if you are in a scenario where otherwise an enormous number of delegate instances would have to be created, then you should probably reuse an instance.
But as long as there's no serious performance issue (and usually there wouldn't be), you should write the code in the way that makes the most sense, rather than worrying about creating the object.
For example, an alternative to your code, in which the delegate is not reused, would look something like this:
static void Main(string[] args)
{
    var friends = new List<String>() { "joey", "ross", "chandler", "monica", "phoebe", "rachel" };
    var seinfelds = new List<String>() { "Jerry", "kramer", "george", "elaine" };

    Action<String, String> resultProcessor = (result, id) =>
    {
        // process the result. let's say update the UI as and when results arrive
        Console.WriteLine($"{id}: {result}");
    };

    var task1 = LongOperationAsync(friends, r => resultProcessor(r, "task1"));
    var task2 = LongOperationAsync(seinfelds, r => resultProcessor(r, "task2"));

    Task.WaitAll(task1, task2);
    Console.ReadLine();
}
In this way, the callback is customized for the needs of the client code, without requiring a change to the callee (i.e. without requiring a change to LongOperationAsync()). The callee can remain agnostic about what types of callers there might be.
Yes, you wind up having to create additional delegate instances. But so what? You've got a long operation here. You're not going to be creating even a dozen of them per second, never mind the tens of thousands it would take before performance even starts to become an issue, let alone really has to be solved.
I am educating myself on Parallel.Invoke, and parallel processing in general, for use in my current project. I need a push in the right direction to understand how you can dynamically/intelligently allocate more parallel 'threads' as required.
As an example, say you are parsing large log files. This involves reading from a file, some sort of parsing of the returned lines, and finally writing to a database.
So to me this is a typical problem that can benefit from parallel processing.
As a simple first pass, the following code implements this:
Parallel.Invoke(
    () => readFileLinesToBuffer(),
    () => parseFileLinesFromBuffer(),
    () => updateResultsToDatabase()
);
Behind the scenes:
readFileLinesToBuffer() reads each line and stores it in a buffer.
parseFileLinesFromBuffer() comes along, consumes lines from that buffer and, let's say, puts them on another buffer so that updateResultsToDatabase() can come along and consume this buffer.
So the code shown assumes that each of the three steps uses the same amount of time/resources, but let's say parseFileLinesFromBuffer() is a long-running process, so instead of running just one instance of that method you want to run two in parallel.
How can you have the code intelligently decide to do this based on any bottlenecks it might perceive?
Conceptually I can see how some approach of monitoring the buffer sizes might work, spawning a new 'thread' to consume the buffer at an increased rate for example...but I figure this type of issue has been considered in putting together the TPL library.
Some sample code would be great but I really just need a clue as to what concepts I should investigate next. It looks like maybe the System.Threading.Tasks.TaskScheduler holds the key?
Have you tried the Reactive Extensions?
http://msdn.microsoft.com/en-us/data/gg577609.aspx
Rx is a new technology from Microsoft; its focus, as stated on the official site:
The Reactive Extensions (Rx) ... is a library to compose asynchronous and event-based programs using observable collections and LINQ-style query operators.
You can download it as a NuGet package:
https://nuget.org/packages/Rx-Main/1.0.11226
Since I am currently learning Rx, I wanted to take this example and just write code for it. The code I ended up with is not actually executed in parallel, but it is completely asynchronous, and it guarantees the source lines are processed in order.
Perhaps this is not the best implementation, but like I said, I am learning Rx (making it thread-safe would be a good improvement).
This is a DTO that I am using to return data from the background threads
class MyItem
{
    public string Line { get; set; }
    public int CurrentThread { get; set; }
}
These are the basic methods doing the real work. I am simulating the time with a simple Thread.Sleep, and I am returning the thread used to execute each method via Thread.CurrentThread.ManagedThreadId. Note that the delay in ProcessLine is 4 seconds; it's the most time-consuming operation.
private IEnumerable<MyItem> ReadLinesFromFile(string fileName)
{
    var source = from e in Enumerable.Range(1, 10)
                 let v = e.ToString()
                 select v;

    foreach (var item in source)
    {
        Thread.Sleep(1000);
        yield return new MyItem { CurrentThread = Thread.CurrentThread.ManagedThreadId, Line = item };
    }
}

private MyItem UpdateResultToDatabase(string processedLine)
{
    Thread.Sleep(700);
    return new MyItem { Line = "s" + processedLine, CurrentThread = Thread.CurrentThread.ManagedThreadId };
}

private MyItem ProcessLine(string line)
{
    Thread.Sleep(4000);
    return new MyItem { Line = "p" + line, CurrentThread = Thread.CurrentThread.ManagedThreadId };
}
I am using the following method just to update the UI:
private void DisplayResults(MyItem myItem, Color color, string message)
{
    this.listView1.Items.Add(
        new ListViewItem(
            new[]
            {
                message,
                myItem.Line,
                myItem.CurrentThread.ToString(),
                Thread.CurrentThread.ManagedThreadId.ToString()
            }
        )
        {
            ForeColor = color
        }
    );
}
And finally this is the method that calls the Rx API
private void PlayWithRx()
{
    // we init the observable with the lines read from the file
    var source = this.ReadLinesFromFile("some file").ToObservable(Scheduler.TaskPool);

    source.ObserveOn(this).Subscribe(x =>
    {
        // for each line read, we update the UI
        this.DisplayResults(x, Color.Red, "Read");

        // for each line read, we subscribe the line to the ProcessLine method
        var process = Observable.Start(() => this.ProcessLine(x.Line), Scheduler.TaskPool)
            .ObserveOn(this).Subscribe(c =>
            {
                // for each line processed, we update the UI
                this.DisplayResults(c, Color.Blue, "Processed");

                // for each line processed we subscribe to the final process, the UpdateResultToDatabase method
                // finally, we update the UI when the processed line has been saved to the database
                var persist = Observable.Start(() => this.UpdateResultToDatabase(c.Line), Scheduler.TaskPool)
                    .ObserveOn(this).Subscribe(z => this.DisplayResults(z, Color.Black, "Saved"));
            });
    });
}
This process runs totally in the background.
In an async/await world, you'd have something like:
public async Task ProcessFileAsync(string filename)
{
    var lines = await ReadLinesFromFileAsync(filename);
    var parsed = await ParseLinesAsync(lines);
    await UpdateDatabaseAsync(parsed);
}
Then a caller could just do var tasks = filenames.Select(ProcessFileAsync).ToArray(); and whatever fits (WaitAll, WhenAll, etc., depending on context).
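For example, a minimal sketch of such a caller, assuming the async helpers above exist:
public async Task ProcessAllFilesAsync(IEnumerable<string> filenames)
{
    // Start one pipeline per file and let them run concurrently.
    var tasks = filenames.Select(ProcessFileAsync).ToArray();

    // Asynchronously wait until every file has been read, parsed and written.
    await Task.WhenAll(tasks);
}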
Use a couple of BlockingCollection instances. Here is an example.
The idea is that you create a producer that puts data into the collection
while (true) {
    var data = ReadData();
    blockingCollection1.Add(data);
}
Then you create any number of consumers that read from the collection
while (true) {
    var data = blockingCollection1.Take();
    var processedData = ProcessData(data);
    blockingCollection2.Add(processedData);
}
and so on
You can also let the TPL handle the number of consumers by using Parallel.ForEach
Parallel.ForEach(blockingCollection1.GetConsumingPartitioner(),
    data => {
        var processedData = ProcessData(data);
        blockingCollection2.Add(processedData);
    });
(Note that you need to use GetConsumingPartitioner, not GetConsumingEnumerable; see here.)
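Putting the fragments together, a rough, self-contained sketch of such a pipeline could look like this. Parse and WriteToDatabase are hypothetical placeholders, and a plain foreach over GetConsumingEnumerable is used for the consumers, which sidesteps the Parallel.ForEach partitioner issue noted above:
var lineBuffer = new BlockingCollection<string>(boundedCapacity: 100);
var parsedBuffer = new BlockingCollection<string>(boundedCapacity: 100);

var reader = Task.Run(() =>
{
    foreach (var line in File.ReadLines("big.log"))
        lineBuffer.Add(line);
    lineBuffer.CompleteAdding();       // tell the parsers no more lines are coming
});

// Two parsers, because parsing is assumed to be the slow stage.
var parsers = Enumerable.Range(0, 2).Select(_ => Task.Run(() =>
{
    foreach (var line in lineBuffer.GetConsumingEnumerable())
        parsedBuffer.Add(Parse(line)); // Parse is a placeholder for the real parsing logic
})).ToArray();

var writer = Task.Run(() =>
{
    foreach (var parsed in parsedBuffer.GetConsumingEnumerable())
        WriteToDatabase(parsed);       // placeholder for the database insert
});

Task.WaitAll(parsers);
parsedBuffer.CompleteAdding();         // parsers are done, so the writer can drain and finish
Task.WaitAll(reader, writer);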
In general I get C#'s lambda syntax. However the anonymous thread syntax isn't completely clear to me. Can someone explain what a thread creation like this is actually doing? Please be as detailed as possible, I'd love to have a sort of step-by-step on the magic that makes this work.
(new Thread(() => {
    DoLongRunningWork();
    MessageBox.Show("Long Running Work Finished!");
})).Start();
The part that I really don't understand is the Thread(() => ...
When I use this syntax it seems like I remove a lot of the limits of a traditional ThreadStart, such as having to invoke a method that has no parameters.
Thanks for your help!
() => ... just means that the lambda expression takes no parameters. Your example is equivalent to the following:
void worker()
{
    DoLongRunningWork();
    MessageBox.Show("Long Running Work Finished!");
}

// ...

new Thread(worker).Start();
The { ... } in the lambda lets you use multiple statements in the lambda body, where ordinarily you'd only be allowed an expression.
This:
() => 1 + 2
Is equivalent to:
() => { return (1 + 2); }
This is the anonymous way to create a thread in C#, which just starts the thread (because you are calling Start()).
The following two ways are equivalent. If you need the Thread variable to do something (for example, block the calling thread by calling thread0.Join()), then you use the second one.
new Thread(() =>
{
    Console.WriteLine("Anonymous Thread job goes here...");
}).Start();

var thread0 = new Thread(() =>
{
    Console.WriteLine("Named Thread job goes here...");
});
thread0.Start();
Now the Thread constructor part. If you look at the Thread declaration, we have the following (I omitted three other overloads).
public Thread(ThreadStart start);
Thread takes a delegate as a parameter. A delegate is a reference to a method. So Thread takes a parameter which is a delegate. ThreadStart is declared like this:
public delegate void ThreadStart();
It means you can pass to Thread any method which returns void and doesn't take any parameters. So the following examples are equivalent:
ThreadStart del = new ThreadStart(ThreadMethod);
var thread3 = new Thread(del);
thread3.Start();

ThreadStart del2 = ThreadMethod;
var thread4 = new Thread(del2);
thread4.Start();

var thread5 = new Thread(ThreadMethod);
thread5.Start();

// This must be a separate method
public static void ThreadMethod()
{
    Console.WriteLine("ThreadMethod doing important job...");
}
Now, since the ThreadMethod method is doing so little work, we can make it local and anonymous. So we don't need the ThreadMethod method at all.
new Thread(delegate ()
{
    Console.WriteLine("Anonymous method Thread job goes here...");
}).Start();
You can see that everything after delegate up to the last curly brace is equivalent to our ThreadMethod(). You can shorten the previous code further by using a lambda statement (see MSDN). This is just what you are using; see how it ends up like the following:
new Thread(() =>
{
    Console.WriteLine("Lambda statements for thread goes here...");
}).Start();
As there were some answers before I started, I will just write about how additional parameters make their way into the lambda.
In short, this thing is called a closure. Let's dissect your example, new Thread(() => _Transaction_Finalize_Worker(transId, machine, info, newConfigPath)).Start();, into pieces.
For a closure there's a difference between class fields and local variables. So let's assume that transId is a class field (thus accessible through this.transId) and the others are just local variables.
Behind the scenes, if a lambda is used in a class, the compiler creates a nested class with an unspeakable name (let's name it X for simplicity) and puts all the captured local variables there. It also writes the lambda there, so it becomes a normal method. Then the compiler rewrites your method so that it creates an X at some point and replaces access to machine, info and newConfigPath with x.machine, x.info and x.newConfigPath respectively. X also receives a reference to this, so the lambda-method can access transId via parentRef.transId.
Well, it is extremely simplified, but close to reality.
UPD:
class A
{
    private int b;

    private int Call(int m, int n)
    {
        return m + n;
    }

    private void Method()
    {
        int a = 5;
        a += 5;
        Func<int> lambda = () => Call(a, b);
        Console.WriteLine(lambda());
    }

    #region compiler rewrites Method to RewrittenMethod and adds nested class X
    private class X
    {
        private readonly A _parentRef;
        public int a;

        public X(A parentRef)
        {
            _parentRef = parentRef;
        }

        public int Lambda()
        {
            return _parentRef.Call(a, _parentRef.b);
        }
    }

    private void RewrittenMethod()
    {
        X x = new X(this);
        x.a = 5;   // mirrors "int a = 5;" in the original method
        x.a += 5;
        Console.WriteLine(x.Lambda());
    }
    #endregion
}