How do C# threads cause memory leaks when they have already finished?

Here's my code:
using System;
using System.Collections.Generic;
using System.Threading;

namespace testt
{
    class MainClass
    {
        public static List<TestObject> testjobs = new List<TestObject> ();

        public static void Main (string[] args)
        {
            Console.WriteLine ("Hello World!");
            addTask (); //put a breakpoint here
            Thread.Sleep (5000);
            deleteObj ();
            while (true) {
                Console.WriteLine ("MAIN STILL EXISTS!");
                Thread.Sleep (1500);
                GC.Collect ();
            }
        }

        public static void addTask()
        {
            for (int i = 0; i < 10; i++)
            {
                testjobs.Add (new TestObject ());
                testjobs [i].Start ();
            }
        }

        public static void deleteObj()
        {
            for (int i = 0; i < 10; i++)
            {
                testjobs [0].dispose ();
                testjobs.RemoveAt (0);
            }
            Console.WriteLine (testjobs.Count);
        }
    }

    public class TestObject
    {
        private bool _isStopRequested;
        private Thread _thread;

        public void Start()
        {
            _thread = new Thread(ThreadRoutine);
            _thread.Start();
        }

        public void Stop()
        {
            _isStopRequested = true;
            if (!_thread.Join(5000))
            {
                _thread.Abort();
            }
        }

        public void dispose()
        {
            this.Stop ();
            this._thread.Abort ();
            this._thread = null;
        }

        private void ThreadRoutine()
        {
            //while(!_isStopRequested) //THIS CAUSES THE MEMORY LEAK!!!
            {
                Thread.Sleep (1);
            }
            Console.WriteLine ("THREAD FINISHED");
        }

        ~TestObject()
        {
            Console.WriteLine ("===================TESTOBJECT DESTROYED!!===============");
        }
    }
}
If you run it with the //while(!_isStopRequested) line uncommented, the TestObject instances will not be destroyed, i.e. their destructor methods will not be called.
If you run it as is, then only 4-8 of the objects will be destroyed, not all 10 of them.
Why does this happen when the threads have fully exited? I checked with the Xamarin debugger and the threads were definitely stopped. If you put a breakpoint at addTask(); you can see the 10 threads in the debugger.
My only explanation is that each thread somehow holds a reference back to its parent TestObject instance even after it has finished. How can a thread hold a reference to its parent object when the thread has already finished?
Also, if I change Thread.Sleep(1) to Thread.Sleep(5000), the TestObjects also stop being collected.
Also, as it is, only some TestObjects get collected while others don't.
Why do these things happen? How can I ensure that ALL the TestObjects get garbage collected by the time the deleteObj() function returns?
EDIT: I just tested the exact same code in Visual Studio (.NET) and all of the objects were garbage collected, regardless of whether that line was commented out or not.
Therefore I now consider this to be a Mono-specific issue, and there was no memory leak to begin with.

Finalizers are not deterministic. You cannot rely on them being called.
If it is vitally important for your program to clean up the resource in question then you should be explicitly disposing of it, and not relying on a finalizer.
If cleaning up the resource would be nice, but you don't really care all that much whether the finalizer gets to it or not, then you can choose not to explicitly dispose of the unmanaged resources.
Also note that making a managed object eligible for garbage collection doesn't necessarily mean that it will be garbage collected. It means it can be collected whenever the collector feels like it.
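For diagnostic purposes only (not something production code should rely on), you can ask for a full collection and wait for finalizers to run; a minimal sketch:
GC.Collect();                     // collect whatever is unreachable right now
GC.WaitForPendingFinalizers();    // let the finalizer thread run ~TestObject()
GC.Collect();                     // collect the objects that were just finalized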
Finally, recognize that aborting a thread is another unreliable thing to do. There are a number of ways in which a thread that has been requested to abort will fail to do so, or will cause any number of different types of problems when it does. You should avoid using Thread.Abort unless the thread in question was designed to be aborted, and you have a strong understanding of all of the many possible pitfalls of trying to reason about a program that could throw an exception between any two operations.
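As a hedged sketch of the cooperative alternative (not the poster's exact fix): make the stop flag volatile so the worker thread is guaranteed to observe it, let Stop() rely on Join() instead of Abort(), and drop the finalizer entirely, since the class owns no unmanaged resources:
public class TestObject
{
    private volatile bool _isStopRequested;   // volatile so the worker thread sees the write
    private Thread _thread;

    public void Start()
    {
        _thread = new Thread(ThreadRoutine);
        _thread.IsBackground = true;          // don't keep the process alive on shutdown
        _thread.Start();
    }

    // Cooperative stop: signal the flag and wait for the thread to exit on its own.
    public void Stop()
    {
        _isStopRequested = true;
        _thread.Join();
    }

    private void ThreadRoutine()
    {
        while (!_isStopRequested)
        {
            Thread.Sleep (1);                 // simulated work
        }
        Console.WriteLine ("THREAD FINISHED");
    }
}
With no finalizer and no Abort call, whether or when the garbage collector reclaims the instances no longer affects correctness.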

Related

How to create a certain number of threads dynamically and assign a method when any of the threads is completed in C#?

I have a scenario in which I need to create a number of threads dynamically, based on a configurable variable. I can only start that many threads at a time, and as soon as one of the threads completes, I need to assign the next queued method to that same thread.
Can anyone help me resolve the above scenario with an example?
I have been researching for a week but have not been able to find a concrete solution.
There are many ways to approach this, but which is best depends on your specific problem.
However, let's assume that you have a collection of items that you want to do some work on, with a separate thread processing each item - up to a maximum number of simultaneous threads that you specify.
One very simple way to do that is to use PLINQ via AsParallel() and WithDegreeOfParallelism(), as the following console application demonstrates:
using System;
using System.Linq;
using System.Threading;

namespace Demo
{
    static class Program
    {
        static void Main()
        {
            int maxThreads = 4;
            var workItems = Enumerable.Range(1, 100);
            var parallelWorkItems = workItems.AsParallel().WithDegreeOfParallelism(maxThreads);
            parallelWorkItems.ForAll(worker);
        }

        static void worker(int value)
        {
            Console.WriteLine($"Worker {Thread.CurrentThread.ManagedThreadId} is processing {value}");
            Thread.Sleep(1000); // Simulate work.
        }
    }
}
If you run this and inspect the output, you'll see that multiple threads are processing the work items, but the maximum number of threads is limited to the specified value.
You should have a look at thread pooling; see Threadpooling in .NET for more information. You will most likely have to work with callbacks so that a method is called as soon as the work in one thread is done.
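A hedged sketch of that thread-pool approach (ProcessItem and the use of CountdownEvent are illustrative choices, not from the linked article): queue each work item to the pool and use a signal to know when everything has finished:
using System;
using System.Threading;

static class ThreadPoolDemo
{
    static void Main()
    {
        int itemCount = 20;
        using (var allDone = new CountdownEvent(itemCount))
        {
            for (int i = 0; i < itemCount; i++)
            {
                int item = i;                        // capture a copy for the closure
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    ProcessItem(item);               // the work assigned to a pool thread
                    allDone.Signal();                // callback-style "this item is done"
                });
            }
            allDone.Wait();                          // block until every queued item has completed
        }
    }

    static void ProcessItem(int item)
    {
        Console.WriteLine("Item {0} on pool thread {1}", item, Thread.CurrentThread.ManagedThreadId);
        Thread.Sleep(500);                           // simulate work
    }
}
Note that the pool chooses its own degree of concurrency; if the configured limit must be exact, one of the other approaches in this thread (PLINQ's WithDegreeOfParallelism or a fixed set of threads) is a better fit.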
There might be a smarter solution for you using async/await, depending on what you are trying to achieve. But since you explicitly ask about threads, here is a short class that does what you want:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class MultiThreadWorker : IDisposable
{
    private readonly ConcurrentQueue<Action> _actions = new ConcurrentQueue<Action>();
    private readonly List<Thread> _threads = new List<Thread>();
    private bool _disposed;

    private void ThreadFunc()
    {
        while (true)
        {
            Action action;
            while (!_actions.TryDequeue(out action)) Thread.Sleep(100);
            action();
        }
    }

    public MultiThreadWorker(int numberOfThreads)
    {
        for (int i = 0; i < numberOfThreads; i++)
        {
            Thread t = new Thread(ThreadFunc);
            _threads.Add(t);
            t.Start();
        }
    }

    public void Dispose()
    {
        Dispose(true);
    }

    protected virtual void Dispose(bool disposing)
    {
        _disposed = true;
        foreach (Thread t in _threads)
            t.Abort();
        if (disposing)
            GC.SuppressFinalize(this);
    }

    public void Enqueue(Action action)
    {
        if (_disposed)
            throw new ObjectDisposedException("MultiThreadWorker");
        _actions.Enqueue(action);
    }
}
This class starts the required number of threads when instantiated, like this:
int requiredThreadCount = 16; // your configured value
MultiThreadWorker mtw = new MultiThreadWorker(requiredThreadCount);
It then uses a ConcurrentQueue<T> to keep track of the tasks to do. You can add methods to the queue via
mtw.Enqueue(() => DoThisTask());
I made it IDisposable to make sure the threads are stopped in the end. Of course, this would need a little improvement, since aborting threads like this is not best practice.
The ThreadFunc itself repeatedly checks whether there are queued actions and executes them. This could also be improved a little with patterns using Monitor.Pulse and Monitor.Wait etc.; a blocking alternative is sketched below.
And as I said, async/await may lead to better solutions, but you asked for threads explicitly.
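As a hedged sketch of the blocking improvement mentioned above (an alternative shape, not the original author's code), BlockingCollection<Action> removes the Sleep-based polling and gives the threads a clean shutdown path, so Abort is no longer needed:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public class BlockingMultiThreadWorker : IDisposable
{
    private readonly BlockingCollection<Action> _actions = new BlockingCollection<Action>();
    private readonly List<Thread> _threads = new List<Thread>();

    public BlockingMultiThreadWorker(int numberOfThreads)
    {
        for (int i = 0; i < numberOfThreads; i++)
        {
            var t = new Thread(ThreadFunc) { IsBackground = true };
            _threads.Add(t);
            t.Start();
        }
    }

    private void ThreadFunc()
    {
        // GetConsumingEnumerable blocks until an action is available and
        // completes once CompleteAdding has been called and the queue is empty.
        foreach (Action action in _actions.GetConsumingEnumerable())
        {
            action();
        }
    }

    public void Enqueue(Action action)
    {
        _actions.Add(action);
    }

    public void Dispose()
    {
        _actions.CompleteAdding();     // let the worker threads drain the queue and exit
        foreach (Thread t in _threads)
            t.Join();
        _actions.Dispose();
    }
}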

c# lock() hangs on second lock attempt

EDIT2-->
Take a look at the bottom;
<--EDIT2
I encountered weird (to me, at least) behaviour.
I even created a simple WinForms project and a simple class (code below) to test it.
I always thought that if lock(m_lock) is called while a previous lock(m_lock) has not yet been released, the later call will wait and enter once the earlier one leaves the scope of the lock. Nope.
Flow of actions is:
Create Class1 object;
Call Start() method;
Call the DoSomething() method while m_lock is held in the Run() method;
Output is:
start()
Trying to acquire lock
Acquired lock
Released lock
Trying to acquire lock
Acquired lock
DoSomething() Trying to acquire lock
... hangs ...
What am I missing or doing wrong? I'm new to C# (I came from C++), so maybe there are some gotchas I'm not aware of.
And it still hangs... (by the time I finished writing this post).
EDIT-->
In the real-world code I use the lock to guard read/write/configure operations on a SerialPort (with synchronous reads/writes, not async ones). In the debugger I also see some internal WaitOne calls; I don't know whether that is relevant.
<--EDIT
Here's the example:
using System;

namespace LockTester
{
    public class Class1
    {
        object m_lock = null;
        bool m_isRunning;
        System.Threading.Thread m_thread = null;

        public Class1()
        {
            Console.WriteLine("Class1 ctor");
            m_lock = new object();
            m_isRunning = false;
        }

        public void DoSomething()
        {
            Console.WriteLine("DoSomething() Trying to acquire lock");
            lock (m_lock)
            {
                Console.WriteLine("DoSomething() Acquired lock");
            }
            Console.WriteLine("DoSomething() Released lock");
        }

        public void Start()
        {
            Console.WriteLine("start()");
            m_isRunning = true;
            if (m_thread == null)
            {
                m_thread = new System.Threading.Thread(Run);
            }
            m_thread.Start();
        }

        public void Stop()
        {
            Console.WriteLine("stop()");
            m_isRunning = false;
        }

        private void Run()
        {
            while (m_isRunning)
            {
                Console.WriteLine("Trying to acquire lock");
                lock (m_lock)
                {
                    Console.WriteLine("Acquired lock");
                    System.Threading.Thread.Sleep(1000);
                }
                Console.WriteLine("Released lock");
                System.Threading.Thread.Sleep(1000);
            }
        }
    }
}
EDIT2:
OK, found the answer. There was one more common denominator.
I had found somewhere (SO, probably) a solution to redirect Console output to a TextBox (purely for testing reasons - small test applications with a GUI that can capture the tested object's internal messages printed to the Console).
Here's the code:
It is used in my form's constructor with:
_writer = new TextBoxStreamWriter(textBox1, this);
Console.SetOut(_writer);
// Requires: using System.IO; using System.Text; using System.Windows.Forms;
public class TextBoxStreamWriter : TextWriter
{
    TextBox _output = null;
    Form _form = null;
    object _lock = new object();

    delegate void SetTextCallback(string text);

    private void SetText(string text)
    {
        // InvokeRequired compares the thread ID of the
        // calling thread to the thread ID of the creating thread.
        // If these threads are different, it returns true.
        if (_output.InvokeRequired)
        {
            SetTextCallback d = new SetTextCallback(SetText);
            _form.Invoke(d, new object[] { text });
        }
        else
        {
            _output.AppendText(text);
        }
    }

    public TextBoxStreamWriter(TextBox output, Form form)
    {
        _output = output;
        _form = form;
    }

    public override void Write(char value)
    {
        lock (_lock)
        {
            base.Write(value);
            SetText(value.ToString());
        }
    }

    public override Encoding Encoding
    {
        get { return System.Text.Encoding.UTF8; }
    }
}
Can anyone explain why this caused the problem?
When you call Form.Invoke, it will do this:
Executes the specified delegate on the thread that owns the control's underlying window handle.
The way it does this is to post a message into the message queue of the owning thread, and wait for that thread to process the message.
As such, Invoke is a blocking call that does not return until the invoked delegate has been called.
Now, the likely reason your code is blocking is that your main GUI thread is already waiting for something else to happen - most likely waiting for your external program to complete.
As such, it is not actually processing messages.
If this is the reason, then the solution is to remove the blocking part from the GUI thread. Don't sit around waiting for the external program to complete; instead, spin out a task that waits for it to complete and then raises appropriate events on the main form when it does. In the meantime, the main thread is free to process messages, update textboxes, etc.
Note that this means that if starting the external program is done in response to an event, like a button click, you may need to disable parts of the user interface while the program is running, to avoid having the user click the button twice, starting two parallel executions that will both report to the same textbox.
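For this particular TextBoxStreamWriter, a hedged sketch of one way to break the deadlock (an adjustment to the poster's SetText, not a quote from the original source) is to post the text with BeginInvoke, which queues the delegate for the GUI thread without waiting for it:
private void SetText(string text)
{
    if (_output.InvokeRequired)
    {
        // BeginInvoke queues the call and returns immediately, so the thread
        // holding m_lock never blocks waiting for the GUI thread.
        _form.BeginInvoke(new SetTextCallback(SetText), new object[] { text });
    }
    else
    {
        _output.AppendText(text);
    }
}
The trade-off is that output becomes asynchronous: text may appear in the TextBox slightly after the corresponding Console.WriteLine returns.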
Conclusion: Multithreaded programming is hard!

How can I trust in 'lock' anymore?

Back in my old unmanaged C++ days, I could trust my critical sections in a multithreaded application. So now, with .NET/C#, I was relying on the lock mechanism. By locking a resource I was confident that no other thread could access that resource within my piece of code.
This seems not to be true in .NET!
I have a Windows service application. I create a main managed thread with a hidden Form hosting a third-party OCX. Within this thread I do message pumping and polling on a list of objects. This list of objects gets modified by events fired by the OCX within this same managed thread.
I post simplified parts of my code here:
public bool Start()
{
    ServiceIsRunning = true;
    m_TaskThread = new Thread(new ParameterizedThreadStart(TaskLoop));
    m_TaskThread.SetApartmentState(ApartmentState.STA);
    m_TaskThread.Start(this);
    return true;
}

private void OnOCXEvent(object objToAdd)
{
    lock (m_ObjectList)
    {
        m_ObjectList.Add(objToAdd);
    }
}

private void CheckList()
{
    lock (m_ObjectList)
    {
        foreach (object obj in m_ObjectList)
        {
            ...
        }
    }
}

[STAThread] // OCX requirement!
private void TaskLoop(object startParam)
{
    try
    {
        ...
        while (ServiceIsRunning)
        {
            // Message pump
            Application.DoEvents();
            if (checkTimeout.IsElapsed(true))
            {
                CheckList();
            }
            // Relax process CPU time!
            Thread.Sleep(10);
        }
    }
    catch (Exception ex)
    {
        ...
    }
}
You won't believe me: I got a 'list has been modified' exception in CheckList! 8-/
So I did some logging and I noticed that OnOCXEvent gets raised while the SAME managed thread is inside the CheckList foreach loop. I'm sure of it: I got the same managed thread id in my log file, the foreach loop wasn't finished, and OnOCXEvent was called by the same managed thread!
Now I'm wondering: how can this happen? Is a single managed thread implemented with more than one Win32 thread?
I hope someone can explain why this is happening, so I can solve this issue.
Thanks,
Fabio
My Note:
I actually solved the issue by creating a copy of the list before the foreach loop, but I do not like this solution. I would also like to understand what is happening. I do not own the third-party OCX code, but the method I call within the CheckList loop logically has nothing to do with the OCX event being fired.
I strongly suspect this is just a re-entrancy issue.
Within your CheckList call you're calling an OCX method. If that does anything which can itself raise OCX events - including effectively calling Application.DoEvents - then you can end up with OnOCXEvent being called in a thread which is also executing CheckList... and that will cause the problem.
This isn't an issue with lock - it's an issue with re-entrancy.
One way to diagnose this would be to modify your CheckList and OnOCXEvent methods:
private bool inCheckList;

private void OnOCXEvent(object objToAdd)
{
    lock (m_ObjectList)
    {
        if (inCheckList)
        {
            throw new Exception("Look at this stack trace!");
        }
        m_ObjectList.Add(objToAdd);
    }
}

private void CheckList()
{
    lock (m_ObjectList)
    {
        inCheckList = true;
        foreach (object obj in m_ObjectList)
        {
            ...
        }
        inCheckList = false; // Put this in a finally block if you really want
    }
}
I strongly suspect you'll see the exception thrown with a stack trace which includes CheckList, OnOCXEvent - and a bunch of code in-between, with something that runs the message loop in the middle.
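If the stack trace confirms the re-entrancy, here is a hedged sketch of the snapshot workaround the poster mentions (assuming m_ObjectList is a List<object>): copy the list under the lock and iterate over the copy, so a re-entrant OnOCXEvent can still add to the live list without invalidating the enumeration:
private void CheckList()
{
    object[] snapshot;
    lock (m_ObjectList)
    {
        snapshot = m_ObjectList.ToArray();   // copy taken under the lock
    }

    foreach (object obj in snapshot)
    {
        // ... process obj; a re-entrant OnOCXEvent now only touches m_ObjectList,
        // not the array being enumerated ...
    }
}
Items added during the iteration are simply picked up on the next polling pass.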

How to tell in C# if a class method is currently executing

Say I have the following code (please assume all the appropriate import statements):
public class CTestClass {
    // Properties
    protected Object LockObj;
    public ConcurrentDictionary<String, String> Prop_1;
    protected System.Timers.Timer Timer_1;

    // Methods
    public CTestClass () {
        LockObj = new Object ();
        Prop_1 = new ConcurrentDictionary<String, String> ();
        Prop_1.TryAdd ("Key_1", "Value_1");
        Timer_1 = new System.Timers.Timer ();
        Timer_1.Interval = (1000 * 60); // One minute
        Timer_1.Elapsed += new ElapsedEventHandler ((s, t) => Method_2 ());
        Timer_1.Enabled = true;
    } // End CTestClass ()

    public void Method_1 () {
        // Do something that requires Prop_1 to be read
        // But *__do not__* lock Prop_1
    } // End Method_1 ()

    public void Method_2 () {
        lock (LockObj) {
            // Do something with Prop_1 *__only if__* Method_1 () is not currently executing
        }
    } // End Method_2 ()
} // End CTestClass

// Main class
public class Program {
    public static void Main (string[] Args) {
        CTestClass TC = new CTestClass ();
        ParallelEnumerable.Range (0, 10)
            .ForAll (s => {
                TC.Method_1 ();
            });
    }
}
I understand it is possible to use MethodBase.GetCurrentMethod, but (short of doing messy book-keeping with global variables) is it possible to solve the problem without reflection?
Thanks in advance for your assistance.
EDIT
(a) Corrected an error with the scope of LockObj
(b) Adding a bit more by way of explanation (taken from my comment below)
I have corrected my code (in my actual project) and placed LockObj as a class property. The trouble is, Method_2 is actually fired by a System.Timers.Timer, and when it is ready to fire, it is quite possible that Method_1 is already executing. But in that event it is important to wait for Method_1 to finish executing before proceeding with Method_2.
I agree that the minimum working example I have tried to create does not make this latter point clear. Let me see if I can edit the MWE.
CODE EDITING FINISHED
ONE FINAL EDIT
I am using Visual Studio 2010 and .NET 4.0, so I do not have the async/await features that would have made my life a lot easier.
As pointed out above, you should become more familiar with the different synchronization primitives that exist in .NET.
You don't solve such problems with reflection or by analyzing which method is currently running, but by using a signaling primitive that informs anyone interested that the method is running or has ended.
First of all, ConcurrentDictionary is thread safe, so you don't need a lock just to produce and consume items; if all you care about is access to the dictionary, no additional locking is necessary.
However, if you just need to mutually exclude the execution of Method_1 and Method_2, you should declare the lock object as a class member and lock each function body with it (a minimal sketch follows this answer) - though, as I said, that is not needed if you are only using the ConcurrentDictionary.
If you really need to know which method is executing at any given moment, you can inspect the stack frames of each thread, but that is going to be slow and, I believe, not necessary in this case.
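A minimal sketch of that mutual-exclusion suggestion, using the question's own LockObj field (note that it also serializes Method_1 calls against each other, which the poster may not want):
public void Method_1 () {
    lock (LockObj) {             // same class-level lock object used by Method_2
        // Do something that requires Prop_1 to be read
    }
}

public void Method_2 () {
    lock (LockObj) {
        // Runs only when no other thread is inside Method_1 or Method_2
    }
}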
The term you're looking for is Thread Synchronisation. There are many ways to achieve this in .NET.
One of which (lock) you've discovered.
In general terms, the lock object should be accessible by all threads needing it, and initialised before any thread tries to lock it.
The lock() syntax ensures that only one thread can continue at a time for that lock object. Any other threads which try to lock that same object will halt until they can obtain the lock.
There is no ability to time out or otherwise cancel the waiting for the lock (except by terminating the thread or process).
By way of example, here's a simpler form:
public class ThreadSafeCounter
{
    private object _lockObject = new Object(); // Initialise once
    private int count = 0;

    public void Increment()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            count++;
        }
    }

    public void Decrement()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            count--;
        }
    }

    public int Read()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            return count;
        }
    }
}
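The lock statement itself cannot time out, but the Monitor class it is built on can, for cases where waiting indefinitely is unacceptable. A minimal sketch, reusing the _lockObject field from the example above:
if (Monitor.TryEnter(_lockObject, TimeSpan.FromSeconds(5)))
{
    try
    {
        // ... touch count here, as in the methods above ...
    }
    finally
    {
        Monitor.Exit(_lockObject);
    }
}
else
{
    // Could not get the lock within 5 seconds; handle or report the timeout.
}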
You can see this as a sort of variant of the classic readers/writers problem, where the readers don't consume the product of the writers. I think you can do it with the help of an int variable and three Mutexes.
One Mutex (mtxExecutingMeth2) guards the execution of Method_2 and blocks the execution of both Method_2 and Method_1. Method_1 must release it immediately, since otherwise you could not have other parallel executions of Method_1. But this means that you have to tell Method_2 when there are Method_1's executing, and this is done using the mtxThereAreMeth1 Mutex, which is released only when there are no more Method_1's executing. This is controlled by the value of numMeth1, which has to be protected by another Mutex (mtxNumMeth1).
I didn't give it a try, so I hope I didn't introduce any race conditions. Anyway, it should at least give you an idea of a possible direction to follow.
And this is the code:
protected int numMeth1 = 0;
protected Mutex mtxNumMeth1 = new Mutex();
protected Mutex mtxExecutingMeth2 = new Mutex();
protected Mutex mtxThereAreMeth1 = new Mutex();

public void Method_1()
{
    // if this is the first execution of Method1, tells Method2 that it has to wait
    mtxNumMeth1.WaitOne();
    if (numMeth1 == 0)
        mtxThereAreMeth1.WaitOne();
    numMeth1++;
    mtxNumMeth1.ReleaseMutex();

    // check if Method2 is executing and release the Mutex immediately in order to avoid
    // blocking other Method1's
    mtxExecutingMeth2.WaitOne();
    mtxExecutingMeth2.ReleaseMutex();

    // Do something that requires Prop_1 to be read
    // But *__do not__* lock Prop_1

    // if this is the last Method1 executing, tells Method2 that it can execute
    mtxNumMeth1.WaitOne();
    numMeth1--;
    if (numMeth1 == 0)
        mtxThereAreMeth1.ReleaseMutex();
    mtxNumMeth1.ReleaseMutex();
}

public void Method_2()
{
    mtxThereAreMeth1.WaitOne();
    mtxExecutingMeth2.WaitOne();
    // Do something with Prop_1 *__only if__* Method_1 () is not currently executing
    mtxExecutingMeth2.ReleaseMutex();
    mtxThereAreMeth1.ReleaseMutex();
}
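One caveat with the sketch above: a Mutex has thread affinity, and ReleaseMutex throws if called from a thread that does not own the mutex, so the last Method_1 (possibly running on a different thread than the first one) may not be allowed to release mtxThereAreMeth1. A hedged alternative sketch using ReaderWriterLockSlim (available since .NET 3.5, so it fits the .NET 4.0 constraint mentioned earlier) expresses the same many-readers / one-writer relationship directly:
protected ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

public void Method_1()
{
    rwLock.EnterReadLock();        // any number of Method_1 calls may run concurrently
    try
    {
        // Do something that requires Prop_1 to be read
    }
    finally
    {
        rwLock.ExitReadLock();
    }
}

public void Method_2()
{
    rwLock.EnterWriteLock();       // waits until no Method_1 is running, then runs exclusively
    try
    {
        // Do something with Prop_1 only while no Method_1 is executing
    }
    finally
    {
        rwLock.ExitWriteLock();
    }
}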

Is the following C# code thread safe?

I am trying to learn threading in C#. Today I saw the following code at http://www.albahari.com/threading/:
class ThreadTest
{
    bool done;

    static void Main()
    {
        ThreadTest tt = new ThreadTest(); // Create a common instance
        new Thread (tt.Go).Start();
        tt.Go();
    }

    // Note that Go is now an instance method
    void Go()
    {
        if (!done) { done = true; Console.WriteLine ("Done"); }
    }
}
In Java, unless you declare "done" as volatile, the code will not be safe. How does the C# memory model handle this?
Thanks all for the answers, much appreciated.
Well, there's the clear race condition that they could both see done as false and execute the if body - that's true regardless of memory model. Making done volatile won't fix that, and it wouldn't fix it in Java either.
But yes, it's feasible that the change made in one thread could happen but not be visible until in the other thread. It depends on CPU architecture etc. As an example of what I mean, consider this program:
using System;
using System.Threading;

class Test
{
    private bool stop = false;

    static void Main()
    {
        new Test().Start();
    }

    void Start()
    {
        new Thread(ThreadJob).Start();
        Thread.Sleep(500);
        stop = true;
    }

    void ThreadJob()
    {
        int x = 0;
        while (!stop)
        {
            x++;
        }
        Console.WriteLine("Counted to {0}", x);
    }
}
While on my current laptop this does terminate, I've used other machines where pretty much the exact same code would run forever - it would never "see" the change to stop in the second thread.
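The usual fix for that visibility problem is to mark the flag volatile, so the JIT cannot cache its value in a register; a minimal sketch of the change to the Test class above:
private volatile bool stop = false;   // volatile: every read observes the latest write from Start()
With that change, the while (!stop) loop is guaranteed to eventually see stop become true.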
Basically, I try to avoid writing lock-free code unless it's using higher-level abstractions provided by people who really know their stuff - like the Parallel Extensions in .NET 4.
There is a way to make this code lock-free and correct easily though, using Interlocked. For example:
class ThreadTest
{
    int done;

    static void Main()
    {
        ThreadTest tt = new ThreadTest(); // Create a common instance
        new Thread (tt.Go).Start();
        tt.Go();
    }

    // Note that Go is now an instance method
    void Go()
    {
        if (Interlocked.CompareExchange(ref done, 1, 0) == 0)
        {
            Console.WriteLine("Done");
        }
    }
}
Here the change of value and the testing of it are performed as a single unit: CompareExchange will only set the value to 1 if it's currently 0, and will return the old value. So only a single thread will ever see a return value of 0.
Another thing to bear in mind: your question is fairly ambiguous, as you haven't defined what you mean by "thread safe". I've guessed at your intention, but you never made it clear. Read this blog post by Eric Lippert - it's well worth it.
No, it's not thread safe. You could potentially have one thread check the condition (if(!done)), the other thread check that same condition, and then the first thread executes the first line in the code block (done = true).
You can make it thread safe with a lock:
lock (this)
{
    if (!done)
    {
        done = true;
        Console.WriteLine("Done");
    }
}
Even in Java with volatile, both threads could enter the block with the WriteLine.
If you want mutual exclusion you need to use a real synchronisation object such as a lock.
The only way this is thread safe is if you use an atomic compare-and-set in the if test. In C# that is Interlocked.CompareExchange (which requires done to be declared as an int rather than a bool):
if (Interlocked.CompareExchange(ref done, 1, 0) == 0)
{
    Console.WriteLine("Done");
}
You should do something like this:
class ThreadTest
{
    Object myLock = new Object();
    ...

    void Go()
    {
        lock (myLock)
        {
            if (!done)
            {
                done = true;
                Console.WriteLine("Done");
            }
        }
    }
}
The reason you want to use a dedicated, private lock object rather than "this" is that any other code holding a reference to your object could also lock on it, which can lead to contention or deadlocks you don't control; a private lock object guarantees that only your own class ever takes that lock.
Another small thing you might consider is this. It is a "good practices" thing, so nothing severe.
class ThreadTest
{
    Object myLock = new Object();
    ...

    void Go()
    {
        bool justSet = false;        // track locally whether this thread was the one that set the flag
        lock (myLock)
        {
            if (!done)
            {
                done = true;
                justSet = true;
            }
        }
        // This line of code does not belong inside the lock.
        if (justSet)
            Console.WriteLine("Done");
    }
}
Never have code inside a lock that does not need to be inside a lock, because of the delay it causes: every other thread that wants the lock has to wait while it is held. If you have lots of threads, you can gain a lot of performance by removing all this unnecessary waiting.
Hope it helps :)
