Thread lock clarification - C#

I am a beginner in programming.
When I execute the following code, which locks the add operation:
class ThreadSafe
{
    static List<string> list = new List<string>();
    static object obj = new object();

    static void Main()
    {
        new Thread(AddItems).Start();
        new Thread(AddItems).Start();

        foreach (string str in list)
        {
            Console.WriteLine(str);
        }
        Console.WriteLine("Count=" + list.Count.ToString());
        Console.ReadKey(true);
    }

    static void AddItems()
    {
        lock (obj)
        {
            for (int i = 1; i < 10; i++)
                list.Add("Item " + i.ToString());
        }
    }
}
I am still receiving an "InvalidOperationException". What do I need to change in the code?

The issue is that your threads are modifying the list while the main thread is trying to read it.
class ThreadSafe
{
    static List<string> list = new List<string>();
    static object obj = new object();

    static void Main()
    {
        var t1 = new Thread(AddItems);
        var t2 = new Thread(AddItems);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        foreach (string str in list)
        {
            Console.WriteLine(str);
        }
        Console.WriteLine("Count=" + list.Count.ToString());
        Console.ReadKey(true);
    }

    static void AddItems()
    {
        for (int i = 1; i < 10; i++)
            lock (obj)
            {
                list.Add("Item " + i.ToString());
            }
    }
}
The difference is that this code waits for both threads to complete before showing the results.
I also moved the lock around the specific instruction that needs to be locked, so that both threads can run concurrently.

You're enumerating over a collection with foreach (string str in list) while modifying it in AddItems(). For this code to work properly you'll either have to Thread.Join() both threads (so that both finish adding items to the list; I'm not sure, however, whether Add is thread-safe; I bet it's not, so you'll have to account for that by locking on SyncRoot), or use a ReaderWriterLock to logically separate these operations.
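If you go the reader/writer route, here is a rough sketch using ReaderWriterLockSlim (the newer replacement for ReaderWriterLock). It mirrors the question's code, but treat it as an illustration rather than a drop-in fix: the read lock only prevents the enumeration from observing a concurrent modification, so the Join calls are still needed if you want the complete result.
using System;
using System.Collections.Generic;
using System.Threading;
class ThreadSafe
{
    static readonly List<string> list = new List<string>();
    static readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();

    static void AddItems()
    {
        for (int i = 1; i < 10; i++)
        {
            rw.EnterWriteLock();            // writers exclude readers and other writers
            try { list.Add("Item " + i); }
            finally { rw.ExitWriteLock(); }
        }
    }

    static void PrintItems()
    {
        rw.EnterReadLock();                 // multiple readers may enter together
        try
        {
            foreach (string str in list)
                Console.WriteLine(str);
        }
        finally { rw.ExitReadLock(); }
    }

    static void Main()
    {
        var t1 = new Thread(AddItems);
        var t2 = new Thread(AddItems);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();               // still needed to see the complete list
        PrintItems();
        Console.WriteLine("Count=" + list.Count);
    }
}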

You are looping through the result list before the two AddItems threads have finished populating the list. So, the foreach complains that the list was updated while it was looping through that list.
Something like this should help:
System.Threading.Thread.Sleep(0); // Let the other threads get started on the list.
lock (obj)
{
    foreach (string str in list)
    {
        Console.WriteLine(str);
    }
}
Watch out though! This doesn't guarantee that the second thread will finish its job before you have read through the list provided by the first thread (assuming the first thread grabs the lock first).
You will need some other mechanism (like John Gietzen's solution) to know when both threads have finished, before reading the results.
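A CountdownEvent is one such mechanism; a minimal sketch based on the question's code:
using System;
using System.Collections.Generic;
using System.Threading;
class ThreadSafe
{
    static readonly List<string> list = new List<string>();
    static readonly object obj = new object();
    static readonly CountdownEvent done = new CountdownEvent(2);   // two worker threads

    static void Main()
    {
        new Thread(AddItems).Start();
        new Thread(AddItems).Start();
        done.Wait();                        // blocks until both threads have signalled
        foreach (string str in list)
            Console.WriteLine(str);
        Console.WriteLine("Count=" + list.Count);
    }

    static void AddItems()
    {
        for (int i = 1; i < 10; i++)
            lock (obj)
                list.Add("Item " + i);
        done.Signal();                      // this thread is finished
    }
}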

Use the debugger. :)
You receive the InvalidOperationException on the foreach.
What happens is that the foreach is executed while your threads are still running.
Therefore, you're iterating over your list, while items are being added to the list. So, the contents of the list are changing, and therefore, the foreach throws an exception.
You can avoid this problem by calling 'Join'.
static void Main()
{
    Thread t1 = new Thread(AddItems);
    Thread t2 = new Thread(AddItems);

    t1.Start();
    t2.Start();

    t1.Join();
    t2.Join();

    foreach (string str in list)
    {
        Console.WriteLine(str);
    }
    Console.WriteLine("Count=" + list.Count.ToString());
    Console.ReadKey(true);
}

I have changed the code as follows, and it looks to me as if the lock does nothing.
I expected add2 not to show up until add1 had finished, but the add1 and add2 output is interleaved.
using System;
using System.Threading;

public static class Example
{
    public static void Main()
    {
        int data = 0;
        Thread t1 = new Thread(() => add1(ref data));
        Thread t2 = new Thread(() => add2(ref data));
        t1.Start();
        t2.Start();
    }

    static void add1(ref int x)
    {
        object lockthis = new object();
        lock (lockthis)
        {
            for (int i = 0; i < 30; i++)
            {
                x += 1;
                Console.WriteLine("add1 " + x);
            }
        }
    }

    static void add2(ref int x)
    {
        object lockthis = new object();
        lock (lockthis)
        {
            for (int i = 0; i < 30; i++)
            {
                x += 3;
                Console.WriteLine("add2 " + x);
            }
        }
    }
}
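That interleaving is expected: each method creates its own lockthis object, so the two threads never compete for the same lock, and a lock only serializes threads that lock on the same object. A minimal corrected sketch with a single shared lock object (the Add helper is just to avoid repeating the loop):
using System;
using System.Threading;
public static class Example
{
    static readonly object lockthis = new object();   // one lock object shared by both threads

    public static void Main()
    {
        int data = 0;
        Thread t1 = new Thread(() => Add(ref data, 1, "add1"));
        Thread t2 = new Thread(() => Add(ref data, 3, "add2"));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
    }

    static void Add(ref int x, int step, string name)
    {
        lock (lockthis)    // both threads lock on the same object, so one loop
        {                  // finishes before the other starts
            for (int i = 0; i < 30; i++)
            {
                x += step;
                Console.WriteLine(name + " " + x);
            }
        }
    }
}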

Related

C# Multi-Threaded Tree traversal

I am trying to write a C# system that traverses a tree structure with multiple threads. Another way to look at this is that the consumer of the BlockingCollection is also the producer.
The problem I am having is telling when everything is finished.
The test I really need is whether all of the threads are blocked on TryTake.
If they are, then everything has finished, but I cannot find a way to test for this or wrap it in anything that would help achieve this.
The code below is a very simple example of what I have so far, but there is a condition in which it can fail. Suppose the first thread has just passed test.TryTake(out v, -1), has not yet executed s.Release(), and has just pulled the last item from the collection; if the second thread now evaluates if (s.CurrentCount == 0 && test.Count == 0), this can return true and incorrectly start finishing things up.
But the first thread would then continue on and try to add more to the collection.
If I could make the lines:
if (!test.TryTake(out v, -1))
    break;
s.Release();
atomic, then I believe this code would work (which is obviously not possible).
But I cannot figure out how to fix this flaw.
class Program
{
    private static BlockingCollection<int> test;

    static void Main(string[] args)
    {
        test = new BlockingCollection<int>();
        WorkClass.s = new SemaphoreSlim(2);

        WorkClass w0 = new WorkClass("A");
        WorkClass w1 = new WorkClass("B");

        Thread t0 = new Thread(w0.WorkFunction);
        Thread t1 = new Thread(w1.WorkFunction);

        test.Add(10);

        t0.Start();
        t1.Start();

        t0.Join();
        t1.Join();

        Console.WriteLine("Done");
        Console.ReadLine();
    }

    class WorkClass
    {
        public static SemaphoreSlim s;
        private readonly string _name;

        public WorkClass(string name)
        {
            _name = name;
        }

        public void WorkFunction(object t)
        {
            while (true)
            {
                int v;
                s.Wait();
                if (s.CurrentCount == 0 && test.Count == 0)
                    test.CompleteAdding();
                if (!test.TryTake(out v, -1))
                    break;
                s.Release();

                Console.WriteLine(_name + " = " + v);
                Thread.Sleep(5);
                for (int i = 0; i < v; i++)
                    test.Add(i);
            }
            Console.WriteLine("Done " + _name);
        }
    }
}
This can be parallelized using task parallelism. Every node in the tree is considered to be a task which may spawn sub-tasks. See Dynamic Task Parallelism for a more detailed description.
For a binary tree with 5 levels that writes each node to the console and waits for 5 milliseconds, as in your example, the ParallelWalk method could look as follows:
class Program
{
    internal class TreeNode
    {
        internal TreeNode(int level)
        {
            Level = level;
        }

        internal int Level { get; }
    }

    static void Main(string[] args)
    {
        ParallelWalk(new TreeNode(0));
        Console.Read();
    }

    static void ParallelWalk(TreeNode node)
    {
        if (node == null) return;

        Console.WriteLine(node.Level);
        Thread.Sleep(5);

        if (node.Level > 4) return;

        int nextLevel = node.Level + 1;

        var t1 = Task.Factory.StartNew(
            () => ParallelWalk(new TreeNode(nextLevel)));
        var t2 = Task.Factory.StartNew(
            () => ParallelWalk(new TreeNode(nextLevel)));

        Task.WaitAll(t1, t2);
    }
}
The central lines are where the tasks t1 and t2 are spawned.
By this decomposition in tasks, the scheduling is done by the Task Parallel Library and you don't have to manage a shared set of nodes anymore.
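If you prefer to keep the BlockingCollection design from the question, one common alternative for detecting completion is to count outstanding items with Interlocked: an item is outstanding from the moment it is queued until the worker that dequeued it has queued all of its children. When that count reaches zero, no new work can ever appear, so it is safe to call CompleteAdding. A rough, self-contained sketch of that idea (not a drop-in replacement for the question's code):
using System;
using System.Collections.Concurrent;
using System.Threading;
class Program
{
    private static readonly BlockingCollection<int> test = new BlockingCollection<int>();
    private static int pending;             // items queued or currently being processed

    static void Main()
    {
        Interlocked.Increment(ref pending); // count the seed item
        test.Add(10);

        Thread t0 = new Thread(() => WorkFunction("A"));
        Thread t1 = new Thread(() => WorkFunction("B"));
        t0.Start();
        t1.Start();
        t0.Join();
        t1.Join();
        Console.WriteLine("Done");
    }

    static void WorkFunction(string name)
    {
        foreach (int v in test.GetConsumingEnumerable())
        {
            Console.WriteLine(name + " = " + v);
            Thread.Sleep(5);
            for (int i = 0; i < v; i++)
            {
                Interlocked.Increment(ref pending);   // child queued
                test.Add(i);
            }
            // The dequeued item only counts as finished after its children are queued.
            if (Interlocked.Decrement(ref pending) == 0)
                test.CompleteAdding();                // nothing queued, nothing in flight
        }
        Console.WriteLine("Done " + name);
    }
}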

What is the best scenario for one fast producer multiple slow consumers?

I'm looking for the best approach to implement a one-producer, multiple-consumer multithreaded application.
Currently I'm using a single queue as a shared buffer, but it's much slower than the one-producer, one-consumer case.
I'm planning to do it like this:
Queue<item>[] buffs = new Queue<item>[N];
object[] _locks = new object[N];

static void Produce()
{
    int curIndex = 0;
    while (true)
    {
        // Produce item;
        lock (_locks[curIndex])
        {
            buffs[curIndex].Enqueue(curItem);
            Monitor.Pulse(_locks[curIndex]);
        }
        curIndex = (curIndex + 1) % N;
    }
}

static void Consume(int myIndex)
{
    item curItem;
    while (true)
    {
        lock (_locks[myIndex])
        {
            while (buffs[myIndex].Count == 0)
                Monitor.Wait(_locks[myIndex]);
            curItem = buffs[myIndex].Dequeue();
        }
        // Consume item;
    }
}

static void main()
{
    int N = 100;
    Thread[] consumers = new Thread[N];
    for (int i = 0; i < N; i++)
    {
        consumers[i] = new Thread(Consume);
        consumers[i].Start(i);
    }
    Thread producer = new Thread(Produce);
    producer.Start();
}
Use a BlockingCollection
static BlockingCollection<item> _buffer = new BlockingCollection<item>();

static void Produce()
{
    while (true)
    {
        // Produce item;
        _buffer.Add(curItem);
    }
    // eventually stop producing
    _buffer.CompleteAdding();
}

static void Consume(object index)
{
    foreach (var curItem in _buffer.GetConsumingEnumerable())
    {
        // Consume item;
    }
}

static void Main()
{
    int N = 100;
    Thread[] consumers = new Thread[N];
    for (int i = 0; i < N; i++)
    {
        consumers[i] = new Thread(Consume);
        consumers[i].Start(i);
    }
    Thread producer = new Thread(Produce);
    producer.Start();
}
If you don't want to specify the number of threads from the start, you can use Parallel.ForEach instead.
static void Consume(item curItem)
{
    // consume item
}

void Main()
{
    Thread producer = new Thread(Produce);
    producer.Start();
    // GetConsumingPartitioner comes from the ParallelExtensionsExtras samples, not the core framework.
    Parallel.ForEach(_buffer.GetConsumingPartitioner(), Consume);
}
Using more threads won't help. It may even reduce performance. I suggest you try to use the ThreadPool, where every work item is one item created by the producer. However, that doesn't guarantee that the produced items will be consumed in the order they were produced.
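A rough sketch of the ThreadPool idea, using int items so the example is self-contained:
using System;
using System.Threading;
class ThreadPoolProducer
{
    static void Produce()
    {
        for (int i = 0; i < 100; i++)
        {
            int curItem = i;                                   // the produced item
            ThreadPool.QueueUserWorkItem(_ => Consume(curItem));
        }
    }

    static void Consume(int curItem)
    {
        Console.WriteLine("Consumed " + curItem);
    }

    static void Main()
    {
        Produce();
        Thread.Sleep(1000);   // console demo only: give the queued work items time to run
    }
}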
Another way could be to reduce the number of consumers to four, for example, and modify the way they work as follows:
The producer adds the new work to the queue. There's only one global queue for all worker threads. It then sets a flag to indicate there is new work, like this:
static ManualResetEvent workPresent = new ManualResetEvent(false);
static Queue<item> workQueue = new Queue<item>();

static void Produce()
{
    while (true)
    {
        // Produce item;
        lock (workQueue)
        {
            workQueue.Enqueue(newItem);
            workPresent.Set();
        }
    }
}
The consumers wait for work to be added to the queue. Only one consumer will get to do its job. It then takes all the work from the queue and resets the flag. The producer will not be able to add new work until that is done.
static void Consume()
{
    while (true)
    {
        if (workPresent.WaitOne())
        {
            workPresent.Reset();

            Queue<item> localWorkQueue = new Queue<item>();
            lock (workQueue)
            {
                while (workQueue.Count > 0)
                    localWorkQueue.Enqueue(workQueue.Dequeue());
            }

            // Handle items in local work queue
            ...
        }
    }
}
The outcome of this, however, is a bit unpredictable. It could be that one thread does all the work and the others do nothing.
I don't see why you have to use multiple queues. Just reduce the amount of locking. Here is a sample where you can have a large number of consumers and they all wait for new work.
public class MyWorkGenerator
{
    ConcurrentQueue<object> _queuedItems = new ConcurrentQueue<object>();
    private object _lock = new object();

    public void Produce()
    {
        while (true)
        {
            _queuedItems.Enqueue(new object());
            lock (_lock)
            {
                Monitor.Pulse(_lock);   // Monitor requires the lock to be held
            }
        }
    }

    public object Consume(TimeSpan maxWaitTime)
    {
        lock (_lock)
        {
            if (!Monitor.Wait(_lock, maxWaitTime))
                return null;
        }

        object workItem;
        if (_queuedItems.TryDequeue(out workItem))
        {
            return workItem;
        }

        return null;
    }
}
Do note that Pulse() will only trigger one consumer at a time.
Example usage:
static void Main()
{
    var generator = new MyWorkGenerator();

    var consumers = new Thread[20];
    for (int i = 0; i < consumers.Length; i++)
    {
        consumers[i] = new Thread(DoWork);
        consumers[i].Start(generator);
    }

    generator.Produce();
}

public static void DoWork(object state)
{
    var generator = (MyWorkGenerator)state;

    var workItem = generator.Consume(TimeSpan.FromHours(1));
    while (workItem != null)
    {
        // do work
        workItem = generator.Consume(TimeSpan.FromHours(1));
    }
}
Note that the actual queue is hidden in the producer since, in my opinion, it's an implementation detail. The consumers don't really have to know how the work items are generated.
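If you would rather not deal with Monitor at all, the same hide-the-queue design can be sketched on top of BlockingCollection, which does the blocking and waking internally (a simplified illustration, not a drop-in replacement):
using System;
using System.Collections.Concurrent;
public class MyWorkGenerator
{
    private readonly BlockingCollection<object> _queuedItems = new BlockingCollection<object>();

    public void Produce()
    {
        while (true)
        {
            _queuedItems.Add(new object());   // wakes up one waiting consumer
        }
    }

    public object Consume(TimeSpan maxWaitTime)
    {
        object workItem;
        // Blocks until an item arrives or the timeout elapses.
        return _queuedItems.TryTake(out workItem, maxWaitTime) ? workItem : null;
    }
}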

A producer consumer queue with an additional thread for a periodic backup of data

I'm trying to implement a concurrent producer/consumer queue with multiple producers and one consumer: the producers add some data to the Queue, and the consumer dequeues these data from the queue in order to update a collection. This collection must be periodically backed up to a new file. For this purpose I created a custom serializable collection: serialization could be performed by using the DataContractSerializer.
The queue is only shared between the consumer and the producers, so access to this queue must be managed to avoid race conditions.
The custom collection is shared between the consumer and a backup thread.
The backup thread could be activated periodically using a System.Threading.Timer object: it may initially be scheduled by the consumer, and then it would be scheduled at the end of every backup procedure.
Finally, a shutdown method should stop the queuing by producers, then stop the consumer, perform the last backup and dispose the timer.
Dequeuing one item at a time may not be efficient, so I thought of using two queues: when the first queue becomes full, the producers notify the consumer by invoking Monitor.Pulse. As soon as the consumer receives the notification, the queues are swapped, so while the producers enqueue new items, the consumer can process the previous ones.
The sample that I wrote seems to work properly. I think it is also thread-safe, but I'm not sure about that. In the following code, for simplicity, I used a Queue<int>. I also used (again for simplicity) an ArrayList instead of the serializable collection.
public class QueueManager
{
    private readonly int m_QueueMaxSize;
    private readonly TimeSpan m_BackupPeriod;

    private readonly object m_SyncRoot_1 = new object();
    private Queue<int> m_InputQueue = new Queue<int>();
    private bool m_Shutdown;
    private bool m_Pulsed;

    private readonly object m_SyncRoot_2 = new object();
    private ArrayList m_CustomCollection = new ArrayList();

    private Thread m_ConsumerThread;
    private Timer m_BackupThread;
    private WaitHandle m_Disposed;

    public QueueManager()
    {
        m_ConsumerThread = new Thread(Work) { IsBackground = true };
        m_QueueMaxSize = 7;
        m_BackupPeriod = TimeSpan.FromSeconds(30);
    }

    public void Run()
    {
        m_Shutdown = m_Pulsed = false;
        m_BackupThread = new Timer(DoBackup);
        m_Disposed = new AutoResetEvent(false);
        m_ConsumerThread.Start();
    }

    public void Shutdown()
    {
        lock (m_SyncRoot_1)
        {
            m_Shutdown = true;
            Console.WriteLine("Worker shutdown...");
            Monitor.Pulse(m_SyncRoot_1);
        }

        m_ConsumerThread.Join();
        WaitHandle.WaitAll(new WaitHandle[] { m_Disposed });

        if (m_InputQueue != null) { m_InputQueue.Clear(); }
        if (m_CustomCollection != null) { m_CustomCollection.Clear(); }

        Console.WriteLine("Worker stopped!");
    }

    public void Enqueue(int item)
    {
        lock (m_SyncRoot_1)
        {
            if (m_InputQueue.Count == m_QueueMaxSize)
            {
                if (!m_Pulsed)
                {
                    Monitor.Pulse(m_SyncRoot_1); // it notifies the consumer...
                    m_Pulsed = true;
                }
                Monitor.Wait(m_SyncRoot_1); // ... and waits for Pulse
            }

            m_InputQueue.Enqueue(item);
            Console.WriteLine("{0} \t {1} >", Thread.CurrentThread.Name, item.ToString("+000;-000;"));
        }
    }

    private void Work()
    {
        m_BackupThread.Change(m_BackupPeriod, TimeSpan.FromMilliseconds(-1));
        Queue<int> m_SwapQueueRef, m_WorkerQueue = new Queue<int>();
        Console.WriteLine("Worker started!");

        while (true)
        {
            lock (m_SyncRoot_1)
            {
                if (m_InputQueue.Count < m_QueueMaxSize && !m_Shutdown) Monitor.Wait(m_SyncRoot_1);

                Console.WriteLine("\nswapping...");
                m_SwapQueueRef = m_InputQueue;
                m_InputQueue = m_WorkerQueue;
                m_WorkerQueue = m_SwapQueueRef;

                m_Pulsed = false;
                Monitor.PulseAll(m_SyncRoot_1); // all producers are notified
            }

            Console.WriteLine("Worker\t < {0}", String.Join(",", m_WorkerQueue.ToArray()));

            lock (m_SyncRoot_2)
            {
                Console.WriteLine("Updating custom dictionary...");
                foreach (int item in m_WorkerQueue)
                {
                    m_CustomCollection.Add(item);
                }
                Thread.Sleep(1000);
                Console.WriteLine("Custom dictionary updated successfully!");
            }

            if (m_Shutdown)
            {
                // schedule last backup
                m_BackupThread.Change(0, Timeout.Infinite);
                return;
            }

            m_WorkerQueue.Clear();
        }
    }

    private void DoBackup(object state)
    {
        try
        {
            lock (m_SyncRoot_2)
            {
                Console.WriteLine("Backup...");
                Thread.Sleep(2000);
                Console.WriteLine("Backup completed at {0}", DateTime.Now);
            }
        }
        finally
        {
            if (m_Shutdown) { m_BackupThread.Dispose(m_Disposed); }
            else { m_BackupThread.Change(m_BackupPeriod, TimeSpan.FromMilliseconds(-1)); }
        }
    }
}
Some objects are initialized in the Run method to allow you to restart this QueueManager after it is stopped, as shown in the code below.
public static void Main(string[] args)
{
    QueueManager queue = new QueueManager();

    var t1 = new Thread(() =>
    {
        for (int i = 0; i < 50; i++)
        {
            queue.Enqueue(i);
            Thread.Sleep(1500);
        }
    }) { Name = "t1" };

    var t2 = new Thread(() =>
    {
        for (int i = 0; i > -30; i--)
        {
            queue.Enqueue(i);
            Thread.Sleep(3000);
        }
    }) { Name = "t2" };

    t1.Start(); t2.Start(); queue.Run();
    t1.Join(); t2.Join(); queue.Shutdown();

    Console.ReadLine();

    var t3 = new Thread(() =>
    {
        for (int i = 0; i < 50; i++)
        {
            queue.Enqueue(i);
            Thread.Sleep(1000);
        }
    }) { Name = "t3" };

    var t4 = new Thread(() =>
    {
        for (int i = 0; i > -30; i--)
        {
            queue.Enqueue(i);
            Thread.Sleep(2000);
        }
    }) { Name = "t4" };

    t3.Start(); t4.Start(); queue.Run();
    t3.Join(); t4.Join(); queue.Shutdown();

    Console.ReadLine();
}
I would suggest using the BlockingCollection for a producer/consumer queue. It was designed specifically for that purpose. The producers add items using Add and the consumers use Take. If there are no items to take then it will block until one is added. It is already designed to be used in a multithreaded environment, so if you're just using those methods there's no need to explicitly use any locks or other synchronization code.
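A rough sketch of that suggestion, with a List<int> standing in for the custom collection and a placeholder Backup method standing in for the DataContractSerializer step:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
public class QueueManager
{
    private readonly BlockingCollection<int> _queue = new BlockingCollection<int>();
    private readonly List<int> _collection = new List<int>();      // stands in for the custom collection
    private readonly object _collectionLock = new object();
    private Timer _backupTimer;
    private Thread _consumer;

    public void Run()
    {
        _backupTimer = new Timer(_ => Backup(), null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
        _consumer = new Thread(() =>
        {
            // Blocks until an item is available; the loop ends after CompleteAdding
            // once the queue has been drained.
            foreach (int item in _queue.GetConsumingEnumerable())
                lock (_collectionLock)
                    _collection.Add(item);
        });
        _consumer.Start();
    }

    public void Enqueue(int item)
    {
        _queue.Add(item);          // called by the producers
    }

    public void Shutdown()
    {
        _queue.CompleteAdding();   // producers stop, the consumer drains what is left
        _consumer.Join();
        _backupTimer.Dispose();
        Backup();                  // final backup
    }

    private void Backup()
    {
        lock (_collectionLock)
        {
            // Placeholder: serialize _collection to a new file here.
            Console.WriteLine("Backup of {0} items at {1}", _collection.Count, DateTime.Now);
        }
    }
}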

loop and thread are working parallel

My case: a loop and a thread are working in parallel. I want to pause the loop until the thread has finished its work; once the thread has stopped, the loop should continue:
for (int pp = 0; pp < LstIop.Count; pp++)
{
    oCntrlImageDisplay = new CntrlImageDisplay();
    oCntrlImageEdit = new CntrlImageEdit();
    axAcroPDF1 = new AxAcroPDFLib.AxAcroPDF();
    int pages = ConvertFileIntoBinary(LstIop[pp].Input, oCntrlImageEdit);
    oCntrlImageDisplay.ImgDisplay.Image = LstIop[pp].Output;
    oCntrlImageDisplay.ImgEditor.Image = oCntrlImageDisplay.ImgDisplay.Image;
    if (t1 == null || t1.ThreadState.ToString() == "Stopped")
    {
        t1 = new Thread(() => convert(pages, LstIop[pp].Input, LstIop[pp].Output, LstIop[pp].Temp));
        t1.SetApartmentState(ApartmentState.STA);
        t1.IsBackground = true;
        CheckForIllegalCrossThreadCalls = false;
        t1.Start();
    }
}
As the others have said, there is no point in threading here, but if you're set on it, use asynchronous delegate invocation: call BeginInvoke followed by EndInvoke. For example:
delegate void T2();

static void Main(string[] args)
{
    T2 worker = new T2(Work);
    while (true)
    {
        IAsyncResult result = worker.BeginInvoke(null, null);
        //OTHER WORK TO DO
        worker.EndInvoke(result);
    }
}

static void Work()
{
    //WORK TO DO
}
Using delegates is nice because you can specify return data and send parameters:
delegate double T2(byte[] array, string text, int num);

static void Main(string[] args)
{
    T2 worker = new T2(Work);
    byte[] buffer = new byte[16];   // example arguments for the delegate
    while (true)
    {
        IAsyncResult result = worker.BeginInvoke(buffer, "some text", 42, null, null);
        //OTHER WORK TO DO
        double Returned = worker.EndInvoke(result);
    }
}

static double Work(byte[] array, string text, int num)
{
    // WORK TO DO
    return (3.4);
}
To wait for the thread to finish executing, call:
t1.Join();
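A minimal, self-contained illustration of that pattern; Process here is just a stand-in for the conversion work in the question:
using System;
using System.Threading;
class JoinInLoop
{
    static void Main()
    {
        for (int pp = 0; pp < 3; pp++)
        {
            int item = pp;                          // avoid capturing the loop variable
            var t1 = new Thread(() => Process(item));
            t1.Start();
            t1.Join();   // the loop does not continue until the thread has finished
            Console.WriteLine("Iteration {0} finished", item);
        }
    }

    static void Process(int item)
    {
        Thread.Sleep(500);   // simulate the conversion work
        Console.WriteLine("Processed item {0}", item);
    }
}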

How to call the method in thread with arguments and return some value

I would like to call a method on a thread with arguments and have it return a value. Here is an example:
class Program
{
    static void Main()
    {
        Thread FirstThread = new Thread(new ThreadStart(Fun1));
        Thread SecondThread = new Thread(new ThreadStart(Fun2));
        FirstThread.Start();
        SecondThread.Start();
    }

    public static void Fun1()
    {
        for (int i = 1; i <= 1000; i++)
        {
            Console.WriteLine("Fun1 writes:{0}", i);
        }
    }

    public static void Fun2()
    {
        for (int i = 1000; i >= 6; i--)
        {
            Console.WriteLine("Fun2 writes:{0}", i);
        }
    }
}
I know the above example runs successfully, but what if the method fun1 looks like this:
public int fun1(int i, int j)
{
    int k;
    k = i + j;
    return k;
}
Then how can I call it on a thread?
You should be able to use an anonymous method or lambda to provide full static checking:
Thread FirstThread = new Thread(() => Fun1(5, 12));
or if you want to do something with the result:
Thread FirstThread = new Thread(() => {
    int i = Fun1(5, 12);
    // do something with i
});
but note that this "do something" still runs in the context of the new thread (but with access to the other variables in the outer method (Main) courtesy of "captured variables").
If you have C# 2.0 (and not above), then:
Thread FirstThread = new Thread((ThreadStart)delegate { Fun1(5, 12); });
and
Thread FirstThread = new Thread((ThreadStart)delegate {
    int i = Fun1(5, 12);
    // do something with i
});
This may be another approach. Here the input is passed via a parameterized thread start, and the result is delivered through a delegate event that is invoked when the thread completes, so you can pick up the result as soon as the thread finishes.
public class ThreadObject
{
    public int i;
    public int j;
    public int result;
    public string Name;
}

public delegate void ResultDelegate(ThreadObject threadObject);

public partial class Form1 : Form
{
    public event ResultDelegate resultDelegate;

    public Form1()
    {
        InitializeComponent();
        resultDelegate += new ResultDelegate(resultValue);
    }

    void resultValue(ThreadObject threadObject)
    {
        MessageBox.Show("Thread Name : " + threadObject.Name + " Thread Value : " + threadObject.result);
    }

    private void button1_Click(object sender, EventArgs e)
    {
        ThreadObject firstThreadObject = new ThreadObject();
        firstThreadObject.i = 0;
        firstThreadObject.j = 100;
        firstThreadObject.Name = "First Thread";

        Thread firstThread = new Thread(Fun);
        firstThread.Start(firstThreadObject);

        ThreadObject secondThreadObject = new ThreadObject();
        secondThreadObject.i = 0;
        secondThreadObject.j = 200;
        secondThreadObject.Name = "Second Thread";

        Thread secondThread = new Thread(Fun);
        secondThread.Start(secondThreadObject);
    }

    private void Fun(object parameter)
    {
        ThreadObject threadObject = parameter as ThreadObject;
        for (; threadObject.i < threadObject.j; threadObject.i++)
        {
            threadObject.result += threadObject.i;
            Thread.Sleep(10);
        }
        // Raise the delegate event so the registered handler receives the result.
        resultDelegate(threadObject);
    }
}
I like Mark Gravell's answer. You can pass your result back out to the main thread with just a little modification:
int fun1 = 0, fun2 = 0;   // initialized so the compiler allows reading them after the Joins

Thread FirstThread = new Thread(() => {
    fun1 = Fun1(5, 12);
});
Thread SecondThread = new Thread(() => {
    fun2 = Fun2(2, 3);
});

FirstThread.Start();
SecondThread.Start();

FirstThread.Join();
SecondThread.Join();

Console.WriteLine("Fun1 returned {0}, Fun2 returned {1}", fun1, fun2);
There is a much simpler way to execute a function on a separate thread:
// Create function delegate (it can be any delegate)
var FunFunc = new Func<int, int, int>(fun1);
// Start executing function on thread pool with parameters
IAsyncResult FunFuncResult = FunFunc.BeginInvoke(1, 5, null, null);
// Do some stuff
// Wait for asynchronous call completion and get result
int Result = FunFunc.EndInvoke(FunFuncResult);
This function will be executed on a thread-pool thread, and that logic is completely transparent to your application.
And in general, I suggest executing such small tasks on the thread pool instead of a dedicated thread.
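On .NET 4 and later the same thread-pool idea is usually written with Task, which also avoids delegate BeginInvoke (not supported on .NET Core). A minimal sketch:
using System;
using System.Threading.Tasks;
class TaskExample
{
    static int Fun1(int i, int j)
    {
        return i + j;
    }

    static void Main()
    {
        Task<int> task = Task.Run(() => Fun1(1, 5));   // runs on a thread-pool thread
        // Do some stuff
        int result = task.Result;                      // blocks until the task completes
        Console.WriteLine(result);
    }
}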
You can use the ParameterizedThreadStart overload of the Thread constructor. It allows you to pass an Object as a parameter to your thread method. It's going to be a single Object parameter, so I usually create a parameter class for such threads. This object can also store the result of the thread execution, which you can read after the thread has ended.
Don't forget that accessing this object while the thread is running is possible, but is not "thread safe". You know the drill :)
Here's an example:
void Main()
{
    var thread = new Thread(Fun);
    var obj = new ThreadObject
    {
        i = 1,
        j = 15,
    };

    thread.Start(obj);
    thread.Join();

    Console.WriteLine(obj.result);
}

public static void Fun(Object obj)
{
    var threadObj = obj as ThreadObject;
    threadObj.result = threadObj.i + threadObj.j;
}

public class ThreadObject
{
    public int i;
    public int j;
    public int result;
}
For some alternatives; currying:
static ThreadStart CurryForFun(int i, int j)
{   // also needs a target object if Fun1 not static
    return () => Fun1(i, j);
}

Thread FirstThread = new Thread(CurryForFun(5, 12));
or write your own capture-type (this is broadly comparable to what the compiler does for you when you use anon-methods / lambdas with captured variables, but has been implemented differently):
class MyCaptureClass
{
    private readonly int i, j;
    int? result;

    // only available after execution
    public int Result { get { return result.Value; } }

    public MyCaptureClass(int i, int j)
    {
        this.i = i;
        this.j = j;
    }

    public void Invoke()
    {   // will also need a target object if Fun1 isn't static
        result = Fun1(i, j);
    }
}
...
MyCaptureClass capture = new MyCaptureClass(5, 12);
Thread FirstThread = new Thread(capture.Invoke);
// then in the future, access capture.Result
Try BackgroundWorker (http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx). You can pass a value to the DoWork event with DoWorkEventArgs and retrieve the value in RunWorkerCompleted.
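A minimal sketch of that approach, passing an argument to DoWork and reading the result in RunWorkerCompleted (the Tuple argument is just an example):
using System;
using System.ComponentModel;
using System.Threading;
class Program
{
    static void Main()
    {
        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) =>
        {
            var input = (Tuple<int, int>)e.Argument;    // argument passed to RunWorkerAsync
            e.Result = input.Item1 + input.Item2;       // value handed to RunWorkerCompleted
        };
        worker.RunWorkerCompleted += (s, e) =>
            Console.WriteLine("Result: " + e.Result);

        worker.RunWorkerAsync(Tuple.Create(5, 12));
        Thread.Sleep(1000);   // console demo only: give the worker time to finish
    }
}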
