Global variable not being initialized in a new thread - C#

I have tried using volatile ...
q is a class-level variable that should be accessible by any member of the class.
Would I have to create a Thread t; field in class A?
class A
{
    string q;

    public void SomeMethod()
    {
        new Thread(Method).Start();
        Console.WriteLine(q); // this writes out nothing
    }

    private void Method()
    {
        q = "Hello World";
    }
}

The main thread, the one that executes Console.WriteLine(q), is running that line and exiting before the new thread that you started has a chance to set the variable's value.
Do some research on thread synchronization.
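For instance, a minimal sketch of one way to make the main thread wait, using Thread.Join (illustrative only, not the only option):

public void SomeMethod()
{
    Thread t = new Thread(Method);
    t.Start();
    t.Join();             // block here until Method has finished setting q
    Console.WriteLine(q); // now prints "Hello World"
}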

Try this (note: this is only so you can see why your code does not work; it is NOT a pattern to use in practice - thanks for the comment, Chris):
class A
{
    string q;

    public void SomeMethod()
    {
        new Thread(Method).Start();
        // Add this so the thread finishes first (not a good permanent solution)
        Thread.Sleep(500);
        Console.WriteLine(q); // now this writes "Hello World"
    }

    private void Method()
    {
        q = "Hello World";
    }
}
Why does this work? Because it gives the worker thread a chance to finish its work before you write to the console. Without the delay, one thread is writing to the console while the other is still setting the value.
Now, a better way is to lock around q when changing or retrieving it. A simple lock would be fine in this example: just take the lock both when you set q and when you read it to write to the console.
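As a rough sketch of what that could look like (the _sync field name is just an illustrative choice; the Sleep is kept from the example above because the lock only makes access safe, it does not order the two threads):

class A
{
    readonly object _sync = new object();
    string q;

    public void SomeMethod()
    {
        new Thread(Method).Start();
        Thread.Sleep(500); // still only a demonstration of ordering, not a real strategy

        lock (_sync) // lock while retrieving
        {
            Console.WriteLine(q);
        }
    }

    private void Method()
    {
        lock (_sync) // lock while setting
        {
            q = "Hello World";
        }
    }
}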

You're not blocking execution to wait for your new thread to finish. If you just want it to write something out, you can use a WaitHandle or a really simple boolean flag like this:
class A
{
    string q;
    volatile bool hasBeenSet; // volatile so the polling loop always sees the update

    public void SomeMethod()
    {
        new Thread(Method).Start();
        while (!hasBeenSet)
        {
            Thread.Sleep(10);
        }
        Console.WriteLine(q); // now this writes "Hello World"
    }

    private void Method()
    {
        q = "Hello World";
        hasBeenSet = true;
    }
}
That being said, you should really do some research on WaitHandles and synchronization objects/patterns.
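For example, a hedged sketch of the WaitHandle flavour of this, using a ManualResetEventSlim (the field name _done is illustrative):

using System;
using System.Threading;

class A
{
    string q;
    readonly ManualResetEventSlim _done = new ManualResetEventSlim(false);

    public void SomeMethod()
    {
        new Thread(Method).Start();
        _done.Wait();           // block until the worker signals completion
        Console.WriteLine(q);   // now prints "Hello World"
    }

    private void Method()
    {
        q = "Hello World";
        _done.Set();            // publish the result and wake the waiting thread
    }
}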

Related

Invoke inside Task.Run, how to solve deadlock?

I have a static method, which can be called from anywhere. During execution it will encounter an Invoke. Obviously, when this method is called from the UI thread, it will deadlock.
Here is a repro:
public static string Test(string text)
{
    return Task.Run(() =>
    {
        App.Current.Dispatcher.Invoke(() => { });
        return text + text;
    }).Result;
}
void Button_Click(object sender, RoutedEventArgs e) => Test("text");
I've read multiple questions and something like 10 answers from @StephenCleary (even some blogs linked from those), yet I fail to understand how to achieve the following:
have a static method, which is easy to call and obtain result from anywhere (e.g. UI event handlers, tasks);
this method should block the caller, and afterwards the caller's code should continue to run in the same context;
this method shouldn't freeze UI.
The closest analogy to what Test() should behave like is MessageBox.Show().
Is it achievable?
P.S.: to keep the question short I am not attaching my various async/await attempts, nor the one that works for UI calls but looks terrible because it uses DoEvents.
You cannot.
Even just 2 of those 3 requirements can't be achieved together - "this method should block the caller" is in conflict with "this method shouldn't freeze UI".
You have to make this method either asynchronous in some way (await, callback) or make it executable in small chunks to block UI only for short periods of time using for example timer to schedule each step.
Just to reiterate what you already know - you can't block a thread and call back into it at the same time, as discussed in many questions like: await works but calling task.Result hangs/deadlocks.
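For contrast, a sketch of the asynchronous shape being recommended here, assuming the caller can be made async (which is exactly the constraint the question is trying to avoid):

public static async Task<string> TestAsync(string text)
{
    return await Task.Run(() =>
    {
        // The Dispatcher.Invoke from the repro no longer deadlocks here,
        // because the UI thread is not blocked while the task runs.
        App.Current.Dispatcher.Invoke(() => { });
        return text + text;
    });
}

async void Button_Click(object sender, RoutedEventArgs e)
{
    string result = await TestAsync("abc"); // UI stays responsive; execution resumes on the UI context
}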
To achieve something like what MessageBox does (but without creating a window), one can do something like this:
public class Data
{
    public object Lock { get; } = new object();
    public bool IsFinished { get; set; }
}

public static bool Test(string text)
{
    var data = new Data();
    Task.Run(() =>
    {
        Thread.Sleep(1000); // simulate work
        App.Current.Dispatcher.Invoke(() => { });
        lock (data.Lock)
        {
            data.IsFinished = true;
            Monitor.Pulse(data.Lock); // wake up
        }
    });
    if (App.Current.Dispatcher.CheckAccess())
        while (!data.IsFinished)
            DoEvents();
    else
        lock (data.Lock)
            while (!data.IsFinished)      // guard against the pulse firing before we start waiting
                Monitor.Wait(data.Lock);
    return false;
}

static void DoEvents() // for wpf
{
    var frame = new DispatcherFrame();
    Dispatcher.CurrentDispatcher.BeginInvoke(DispatcherPriority.Background, new Func<object, object>(o =>
    {
        ((DispatcherFrame)o).Continue = false;
        return null;
    }), frame);
    Dispatcher.PushFrame(frame);
}
The idea is simple: check whether the current thread needs Invoke (i.e. it is the UI thread), and then either run a DoEvents loop or block the thread.
Test() can be called from the UI thread or from another task.
It works (not fully tested though), but it's crappy. I hope this makes my requirements clear, and I still need an answer to my question if there is a better one than "no, you can't do this" ;)

Class Variable Unchanged by Asynchronous Method

EDIT: I know this is bad code, that's why I tagged it with anti-pattern.
OK, so this is some actual real code I found today:
public class ServiceWrapper
{
    bool thingIsDone = false; // class variable.
    // a bunch of other state variables

    public string InvokeSoap(methodArgs args)
    {
        // blah blah blah
        soapClient client = new soapClient();
        client.doThingCompleted += new doThingEventHandler(MyCompletionMethod);
        client.doThingAsync(args);
        do
        {
            string busyWork = "";
        }
        while (thingIsDone == false);
        // do some more stuff
    }

    private void MyCompletionMethod(object sender, completedEventArgs e)
    {
        // do some other stuff
        thingIsDone = true;
    }
}
OK, so I'm aware of why this is bad, obviously. But what's actually making me ask this question is that thingIsDone never seems to become true in the InvokeSoap method, even though it is set to true in MyCompletionMethod - but only when compiled in Release. It behaves as one would "expect" in Debug mode.
Adding Thread.Sleep(100) inside the while loop also fixes it. What's up with this?
The JIT compiler (or the CPU) may cache the value of thingIsDone in a register, since that thread never changes it between reads.
You need volatile to force the writes to be published and every read to go back to memory, so the other thread's update becomes visible.
You can also use Thread.Yield() in the loop:
do
{
    string busyWork = "";
    Thread.Yield();
}
while (thingIsDone == false);
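A minimal sketch of the volatile change being described; only the field declaration needs to change:

private volatile bool thingIsDone = false; // volatile: every read goes to memory,
                                           // so the Release-mode JIT cannot hoist it out of the loop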

Is it possible for a thread to call a thread?

Say I have this code:
public int A = 0;

// This is the method that will
// be run as a thread
public void Thread1()
{
    bool keepGoing = true;   // "continue" is a C# keyword, so a different name is needed
    while (keepGoing)
    {
        if (A == 2)
        {
            Thread t2 = new Thread(new ThreadStart(Thread2));
            t2.Start();
        }
        // Some other code here
    }
}

// This is the method that Thread1
// will try to run if A = 2
public void Thread2()
{
    // Coding in this thread
}
Say that the int A gets set to 2 from another method or something similar. Would Thread1 be able to create the new Thread2 from inside itself? I felt that I should ask, because I have a habit of messing up my code big time when I try to do something I don't fully understand.
Yes, it's possible for threads to create other threads.
Keep in mind that the "default single thread" that your program loads up in is just another normal thread, so you're already creating a new thread from a thread when you start Thread1.
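A small self-contained sketch of a thread starting (and waiting on) another thread, purely for illustration:

using System;
using System.Threading;

class NestedThreads
{
    static void Main()
    {
        var outer = new Thread(() =>
        {
            Console.WriteLine("outer thread running");
            var inner = new Thread(() => Console.WriteLine("inner thread running"));
            inner.Start(); // a thread may freely create and start another thread
            inner.Join();
        });
        outer.Start();
        outer.Join();
    }
}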

How to tell in C# if a class method is currently executing

Say I have the following code (please assume all the appropriate import statements):
public class CTestClass {
    // Properties
    protected Object LockObj;
    public ConcurrentDictionary<String, String> Prop_1;
    protected System.Timers.Timer Timer_1;

    // Methods
    public CTestClass () {
        LockObj = new Object ();

        Prop_1 = new ConcurrentDictionary<String, String> ();
        Prop_1.TryAdd ("Key_1", "Value_1");

        Timer_1 = new System.Timers.Timer ();
        Timer_1.Interval = (1000 * 60); // One minute
        Timer_1.Elapsed += new ElapsedEventHandler ((s, t) => Method_2 ());
        Timer_1.Enabled = true;
    } // End CTestClass ()

    public void Method_1 () {
        // Do something that requires Prop_1 to be read
        // But *__do not__* lock Prop_1
    } // End Method_1 ()

    public void Method_2 () {
        lock (LockObj) {
            // Do something with Prop_1 *__only if__* Method_1 () is not currently executing
        }
    } // End Method_2 ()
} // End CTestClass

// Main class
public class Program {
    public static void Main (string[] Args) {
        CTestClass TC = new CTestClass ();
        ParallelEnumerable.Range (0, 10)
            .ForAll (s => {
                TC.Method_1 ();
            });
    }
}
I understand it is possible to use MethodBase.GetCurrentMethod, but (short of doing messy book-keeping with global variables) is it possible to solve the problem without reflection?
Thanks in advance for your assistance.
EDIT
(a) Corrected an error with the scope of LockObj
(b) Adding a bit more by way of explanation (taken from my comment below)
I have corrected my code (in my actual project) and placed LockObj as a class property. The trouble is, Method_2 is actually fired by a System.Timers.Timer, and when it is ready to fire, it is quite possible that Method_1 is already executing. But in that event it is important to wait for Method_1 to finish executing before proceeding with Method_2.
I agree that the minimum working example I have tried to create does not make this latter point clear. Let me see if I can edit the MWE.
CODE EDITING FINISHED
ONE FINAL EDIT
I am using Visual Studio 2010 and .NET 4.0, so I do not have the async/await features that would have made my life a lot easier.
As pointed out above, you should become more familiar with the different synchronization primitives that exist in .NET.
You don't solve such problems with reflection or by analyzing which method is currently running, but by using a signaling primitive which informs anyone interested that the method is running/has finished.
First of all, ConcurrentDictionary is thread safe, so you don't need to lock for producing/consuming. So, if you only care about accessing your dictionary, no additional locking is necessary (see the sketch below).
However, if you need to mutually exclude the execution of Method_1 and Method_2, you should declare the lock object as a class member and lock each method body with it - but, as I said, that is not needed if you are going to use ConcurrentDictionary.
If you really need to know which method is executing at every moment, you can inspect the stack frames of each thread, but this is going to be slow and, I believe, not necessary for this case.
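A sketch of what that lock-free access could look like, reusing the Prop_1 dictionary from the question (the method body is illustrative):

public void Method_1()
{
    // Safe without any explicit lock: ConcurrentDictionary coordinates concurrent readers and writers itself.
    string value;
    if (Prop_1.TryGetValue("Key_1", out value))
    {
        // use value
    }
    Prop_1.AddOrUpdate("Key_1", "Value_1", (key, oldValue) => oldValue + "!");
}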
The term you're looking for is Thread Synchronisation. There are many ways to achieve this in .NET.
One of which (lock) you've discovered.
In general terms, the lock object should be accessible by all threads needing it, and initialised before any thread tries to lock it.
The lock() syntax ensures that only one thread can continue at a time for that lock object. Any other threads which try to lock that same object will halt until they can obtain the lock.
There is no ability to time out or otherwise cancel the waiting for the lock (except by terminating the thread or process).
By way of example, here's a simpler form:
public class ThreadSafeCounter
{
private object _lockObject = new Object(); // Initialise once
private int count = 0;
public void Increment()
{
lock(_lockObject) // Only one thread touches count at a time
{
count++;
}
}
public void Decrement()
{
lock (_lockObject) // Only one thread touches count at a time
{
count--;
}
}
public int Read()
{
lock (_lockObject) // Only one thread touches count at a time
{
return count;
}
}
}
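A quick usage sketch of the counter above from several threads (illustrative; assumes using System.Linq):

var counter = new ThreadSafeCounter();
var threads = Enumerable.Range(0, 4)
    .Select(_ => new Thread(() => { for (int i = 0; i < 1000; i++) counter.Increment(); }))
    .ToList();
threads.ForEach(t => t.Start());
threads.ForEach(t => t.Join());
Console.WriteLine(counter.Read()); // prints 4000: the lock prevents lost increments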
You can see this as a sort of variant of the classic readers/writers problem, where the readers don't consume the product of the writers. I think you can do it with the help of an int variable and three Mutexes.
One Mutex (mtxExecutingMeth2) guards the execution of Method_2 and blocks the execution of both Method_2 and Method_1. Method_1 must release it immediately, since otherwise you could not have other parallel executions of Method_1. But this means that you have to tell Method_2 when there are Method_1's executing, and this is done with the mtxThereAreMeth1 Mutex, which is released only when there are no more Method_1's executing. This is controlled by the value of numMeth1, which has to be protected by another Mutex (mtxNumMeth1).
I didn't give it a try, so I hope I didn't introduce any race conditions. Anyway, it should at least give you an idea of a possible direction to follow.
And this is the code:
protected int numMeth1 = 0;
protected Mutex mtxNumMeth1 = new Mutex();
protected Mutex mtxExecutingMeth2 = new Mutex();
protected Mutex mtxThereAreMeth1 = new Mutex();

public void Method_1()
{
    // if this is the first execution of Method1, tells Method2 that it has to wait
    mtxNumMeth1.WaitOne();
    if (numMeth1 == 0)
        mtxThereAreMeth1.WaitOne();
    numMeth1++;
    mtxNumMeth1.ReleaseMutex();

    // check if Method2 is executing and release the Mutex immediately in order to avoid
    // blocking other Method1's
    mtxExecutingMeth2.WaitOne();
    mtxExecutingMeth2.ReleaseMutex();

    // Do something that requires Prop_1 to be read
    // But *__do not__* lock Prop_1

    // if this is the last Method1 executing, tells Method2 that it can execute
    mtxNumMeth1.WaitOne();
    numMeth1--;
    if (numMeth1 == 0)
        mtxThereAreMeth1.ReleaseMutex();
    mtxNumMeth1.ReleaseMutex();
}

public void Method_2()
{
    mtxThereAreMeth1.WaitOne();
    mtxExecutingMeth2.WaitOne();

    // Do something with Prop_1 *__only if__* Method_1 () is not currently executing

    mtxExecutingMeth2.ReleaseMutex();
    mtxThereAreMeth1.ReleaseMutex();
}

How to ensure a thread runs only after a specific number of other threads have finished?

I have a class in C# like this:
public class MyClass
{
    public void Start() { ... }
    public void Method_01() { ... }
    public void Method_02() { ... }
    public void Method_03() { ... }
}
When I call the Start() method, an external class starts to work and creates many parallel threads, and those parallel threads call Method_01() and Method_02() from the above class. After the external class finishes its work, Method_03() is run on another parallel thread.
Threads running Method_01() or Method_02() are created before the thread for Method_03(), but there is no guarantee that they finish before the Method_03() thread starts. I mean, Method_01() or Method_02() may lose their CPU turn while Method_03() gets the CPU and runs to completion.
In the Start() method I know the total number of threads that are supposed to be created to run Method_01() and Method_02(). The question is: I'm looking for a way, using a semaphore or mutex, to ensure that the first statement of Method_03() runs only after all the threads running Method_01() or Method_02() have ended.
Three options that come to mind are:
Keep an array of Thread instances and call Join on all of them from Method_03.
Use a single CountdownEvent instance and call Wait from Method_03.
Allocate one ManualResetEvent for each Method_01 or Method_02 call and call WaitHandle.WaitAll on all of them from Method_03 (this is not very scalable).
I prefer to use a CountdownEvent because it is a lot more versatile and is still super scalable.
public class MyClass
{
    // Initialized with a count of 1 to represent the Start() thread itself; a count of 0
    // would leave the event already signaled, and AddCount would then throw.
    private CountdownEvent m_Finished = new CountdownEvent(1);

    public void Start()
    {
        for (int i = 0; i < NUMBER_OF_THREADS; i++)
        {
            m_Finished.AddCount(); // Increment to indicate another active thread.
            new Thread(Method_01).Start();
        }
        for (int i = 0; i < NUMBER_OF_THREADS; i++)
        {
            m_Finished.AddCount(); // Increment to indicate another active thread.
            new Thread(Method_02).Start();
        }
        new Thread(Method_03).Start();
        m_Finished.Signal(); // Signal to indicate that the Start thread is done.
    }

    private void Method_01()
    {
        try
        {
            // Add your logic here.
        }
        finally
        {
            m_Finished.Signal(); // Signal to indicate that this thread is done.
        }
    }

    private void Method_02()
    {
        try
        {
            // Add your logic here.
        }
        finally
        {
            m_Finished.Signal(); // Signal to indicate that this thread is done.
        }
    }

    private void Method_03()
    {
        m_Finished.Wait(); // Wait for all signals.
        // Add your logic here.
    }
}
This appears to be a perfect job for Tasks. Below I assume that Method01 and Method02 are allowed to run concurrently, with no specific order of invocation or finishing (no guarantees, just typed in from memory without testing):
int cTaskNumber01 = 3, cTaskNumber02 = 5;
Task tMaster = new Task(() => {
    for (int tI = 0; tI < cTaskNumber01; ++tI)
        new Task(Method01, TaskCreationOptions.AttachedToParent).Start();
    for (int tI = 0; tI < cTaskNumber02; ++tI)
        new Task(Method02, TaskCreationOptions.AttachedToParent).Start();
});

// after master and its children are finished, Method03 is invoked
tMaster.ContinueWith(t => Method03());

// let it go...
tMaster.Start();
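An alternative sketch on .NET 4.0 that skips the parent/child attachment and uses TaskFactory.ContinueWhenAll instead (assumes Method01/Method02/Method03 take no parameters and that System.Threading.Tasks and System.Collections.Generic are imported):

var tasks = new List<Task>();
for (int tI = 0; tI < cTaskNumber01; ++tI)
    tasks.Add(Task.Factory.StartNew(Method01));
for (int tI = 0; tI < cTaskNumber02; ++tI)
    tasks.Add(Task.Factory.StartNew(Method02));

// Method03 runs once every Method01/Method02 task has completed.
Task.Factory.ContinueWhenAll(tasks.ToArray(), completed => Method03());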
What it sounds like you need to do is to create a ManualResetEvent (initialized to unset) or some other WaitHandle for each of Method_01 and Method_02, and then have Method_03's thread use WaitHandle.WaitAll on the set of handles.
Alternatively, if you can reference the Thread variables used to run Method_01 and Method_02, you could have Method_03's thread use Thread.Join to wait on both. This assumes, however, that those threads are actually terminated when they complete execution of Method_01 and Method_02 - if they are not, you need to resort to the first solution I mention.
Why not use a static variable, static volatile int threadRuns, which is initialized with the number of times Method_01 and Method_02 will be run?
Then you modify each of those two methods to decrement threadRuns just before exit:
...
lock (typeof(MyClass)) {
    --threadRuns;
}
...
Then in the beginning of Method_03 you wait until threadRuns is 0 and then proceed:
while (threadRuns != 0)
    Thread.Sleep(10);
Did I understand the question correctly?
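A slightly tidier sketch of the same counter idea, using Interlocked and Thread.VolatileRead instead of locking on the type (field and method names are illustrative):

static int threadRuns; // set in Start() to the total number of Method_01/Method_02 runs

// just before Method_01 or Method_02 exits:
Interlocked.Decrement(ref threadRuns);

// at the beginning of Method_03:
while (Thread.VolatileRead(ref threadRuns) != 0)
    Thread.Sleep(10);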
There is actually an alternative in the Barrier class, which is new in .NET 4.0. This simplifies how you can do the signalling across multiple threads.
You could do something like the following code, but this is mostly useful when synchronizing different processing threads.
public class Synchro
{
    private Barrier _barrier;

    public void Start(int numThreads)
    {
        _barrier = new Barrier((numThreads * 2) + 1);
        for (int i = 0; i < numThreads; i++)
        {
            new Thread(Method1).Start();
            new Thread(Method2).Start();
        }
        new Thread(Method3).Start();
    }

    public void Method1()
    {
        // Do some work
        _barrier.SignalAndWait();
    }

    public void Method2()
    {
        // Do some other work.
        _barrier.SignalAndWait();
    }

    public void Method3()
    {
        _barrier.SignalAndWait();
        // Do some other cleanup work.
    }
}
I would also like to suggest, since your problem statement was quite abstract, that actual problems that used to be solved with a CountdownEvent are now often better solved using the new Parallel or PLINQ capabilities. If you were actually processing a collection or something in your code, you might have something like the following.
public class Synchro
{
    public void Start(List<someClass> collection)
    {
        new Thread(() => Method3(collection)).Start();
    }

    public void Method1(someClass item)
    {
        // Do some work.
    }

    public void Method2(someClass item)
    {
        // Do some other work.
    }

    public void Method3(List<someClass> collection)
    {
        // Do your work on each item in parallel threads.
        Parallel.ForEach(collection, x => { Method1(x); Method2(x); });
        // Do some work on the total collection like sorting or whatever.
    }
}