In the following code I am using two threads to share the same resource, in this example a queue. Do I need to use a lock while enqueueing or dequeuing? If yes, why? The program seems to work fine without one.
class Program
{
    static Queue<string> sQ = new Queue<string>();

    static void Main(string[] args)
    {
        Thread prodThread = new Thread(ProduceData);
        Thread consumeThread = new Thread(ConsumeData);
        prodThread.Start();
        consumeThread.Start();
        Console.ReadLine();
    }

    private static void ProduceData()
    {
        for (int i = 0; i < 100; i++)
        {
            sQ.Enqueue(i.ToString());
        }
    }

    private static void ConsumeData()
    {
        while (true)
        {
            if (sQ.Count > 0)
            {
                string s = sQ.Dequeue();
                Console.WriteLine("DEQUEUE::::" + s);
            }
        }
    }
}
Yes, you do. System.Collections.Generic.Queue<T> is not thread-safe for being written to and read from at the same time. You either need to lock on the same object before enqueueing or dequeuing, or, if you are using .NET 4/4.5, use the System.Collections.Concurrent.ConcurrentQueue<T> class instead and use the TryDequeue method.
The reason your current implementation has not caused you a problem so far is timing: the producer finishes its 100 writes very quickly, so the two threads rarely touch the queue at the same moment. Under heavier or longer-running load, odds are it will throw an exception or corrupt the queue at some point.
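For illustration, here is a minimal rework of the posted program on top of ConcurrentQueue<T> (assuming .NET 4 or later). The `producing` flag and the Join calls are additions so the consumer can terminate cleanly; they are not part of the original code.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class Program
{
    // ConcurrentQueue<T> is safe for a writer and a reader running at the
    // same time, so no explicit lock is needed around Enqueue/TryDequeue.
    static readonly ConcurrentQueue<string> sQ = new ConcurrentQueue<string>();
    static volatile bool producing = true;

    static void Main(string[] args)
    {
        Thread prodThread = new Thread(() =>
        {
            for (int i = 0; i < 100; i++)
                sQ.Enqueue(i.ToString());
            producing = false;              // signal the consumer we are done
        });

        Thread consumeThread = new Thread(() =>
        {
            // Drain until the producer has finished and the queue is empty.
            while (producing || !sQ.IsEmpty)
            {
                string s;
                if (sQ.TryDequeue(out s))   // returns false on an empty queue
                    Console.WriteLine("DEQUEUE::::" + s);
            }
        });

        prodThread.Start();
        consumeThread.Start();
        prodThread.Join();
        consumeThread.Join();
    }
}
```

TryDequeue replaces the unsafe Count-then-Dequeue pattern: it checks and removes in one atomic step.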
The code below is an example of multi-threading that the prof presented in class. I am new to coding (first course). I have read about multi-threading and using locks. Reading the theory is fun: var fun = Theory.Read(multiThreading); Actually coding threads and locks baffles me.
I am trying to understand how the two threads in the code below will behave. From testing the code, it looks like Lock1 is never released and message2 is never enqueued, but I might be wrong. It looks like there is a synchronization issue. Is this an example of a deadlock?
I am also wondering why locks and threads are required when two different queues are used; I am not seeing a shared resource.
Is there a way to fix this code to prevent the synchronization issue?
private static object Lock1 = new object(); // Protect MessageQueueOne
private static object Lock2 = new object(); // Protect MessageQueueTwo
private static Queue<string> MessageQueueOne = new Queue<string>();
private static Queue<string> MessageQueueTwo = new Queue<string>();

private static void AddMessages(string message1, string message2)
{
    lock (Lock1)
    {
        // (1) Thread 1 is here...
        MessageQueueOne.Enqueue(message1);
        lock (Lock2)
        {
            MessageQueueTwo.Enqueue(message2);
        }
    }
}

private static void RemoveMessages()
{
    lock (Lock2)
    {
        if (MessageQueueTwo.Count > 0)
        {
            // (2) Thread 2 is here...
            Console.WriteLine(MessageQueueTwo.Dequeue());
        }
        lock (Lock1)
        {
            if (MessageQueueOne.Count > 0)
            {
                Console.WriteLine(MessageQueueOne.Dequeue());
            }
        }
    }
}

private static void Main()
{
    Task taskOne = Task.Run(() =>
    {
        for (int i = 0; i < 100; ++i)
        {
            AddMessages($"Message One: {DateTime.Now}", $"Message Two: {DateTime.UtcNow}");
            Thread.Sleep(25);
        }
    });
    Task taskTwo = Task.Run(() =>
    {
        for (int i = 0; i < 100; ++i)
        {
            RemoveMessages();
            Thread.Sleep(25);
        }
    });
    taskOne.Wait();
    taskTwo.Wait();
    Console.Write("Tasks are finished");
    Console.ReadKey();
}
The code in the post is a classic example of deadlock and is expected to deadlock most of the time. See the links in the Wikipedia article on deadlocks for more.
What leads to the deadlock: one thread locks Lock1 and waits for Lock2, while at the same time the other thread holds Lock2 and will only release it after acquiring Lock1, which will never be released by the waiting thread.
Standard solutions:
- listen in your class to learn the answer
- read existing examples
- if the above fails, one option is to acquire resources in a fixed order (i.e., if you need to lock on more than one resource, take Lock1 first, then Lock2, and so on) in all threads (see "Would you explain lock ordering?").
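As a sketch of that fixed-order option applied to the posted code (the message contents and loop counts here are placeholders): both methods take Lock1 before Lock2, so neither thread can hold one lock while waiting for the other.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class FixedOrder
{
    static readonly object Lock1 = new object();
    static readonly object Lock2 = new object();
    static readonly Queue<string> MessageQueueOne = new Queue<string>();
    static readonly Queue<string> MessageQueueTwo = new Queue<string>();

    static void AddMessages(string m1, string m2)
    {
        lock (Lock1)            // always Lock1 first...
        lock (Lock2)            // ...then Lock2
        {
            MessageQueueOne.Enqueue(m1);
            MessageQueueTwo.Enqueue(m2);
        }
    }

    static void RemoveMessages()
    {
        lock (Lock1)            // same order here: Lock1, then Lock2
        lock (Lock2)
        {
            if (MessageQueueTwo.Count > 0) Console.WriteLine(MessageQueueTwo.Dequeue());
            if (MessageQueueOne.Count > 0) Console.WriteLine(MessageQueueOne.Dequeue());
        }
    }

    static void Main()
    {
        Task t1 = Task.Run(() => { for (int i = 0; i < 100; i++) AddMessages("one " + i, "two " + i); });
        Task t2 = Task.Run(() => { for (int i = 0; i < 100; i++) RemoveMessages(); });
        Task.WaitAll(t1, t2);   // completes instead of hanging forever
        Console.WriteLine("done without deadlock");
    }
}
```

Taking both locks up front is slightly coarser than the original nesting, but it makes the ordering obvious and impossible to violate.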
My program needs to write messages to several files very often. As this is time-consuming, I need to optimise it. Below is an extract from my program where I try to write to files asynchronously in the background. It seems to work, but I am not sure it is best practice, because I do not dispose of the tasks (that part is commented out). I do not dispose of them because I do not want my program to wait for their completion; I simply want my message written to a few files in the background as quickly as possible. As those files can be accessed by several threads, I added a lock.
I use static methods because these methods are used everywhere in my code, and I do not want to instantiate this class everywhere just to write one line to a file (maybe that's wrong).
================== Class ==============================================
namespace test
{
    public static class MessageLOG
    {
        private static string debugFileName = Settings.DebugLOGfilename;
        private static string logFileName = Settings.LOGfilename;
        private static object DebuglockOn = new object();
        private static object LoglockOn = new object();
        private static StreamWriter DebugSW;
        private static StreamWriter LogSW;

        private static void DebugFile(string message)
        {
            uint linesCount = 0;
            string _filename = debugFileName;
            if (DebugSW == null && !string.IsNullOrEmpty(_filename))
                DebugSW = new StreamWriter(_filename);
            if (DebugSW != null)
            {
                lock (DebuglockOn)
                {
                    DebugSW.WriteLine(message);
                    linesCount++;
                    if (linesCount > 10)
                    {
                        DebugSW.Flush();
                        linesCount = 0;
                    }
                }
            }
        }

        private static void LogFile(string message)
        {
            uint linesCount = 0;
            string _filename = logFileName;
            if (LogSW == null && !string.IsNullOrEmpty(_filename))
                LogSW = new StreamWriter(_filename);
            if (LogSW != null)
            {
                lock (LoglockOn)
                {
                    LogSW.WriteLine(string.Format("{0} ({1}): {2}", DateTime.Now.ToShortDateString(), DateTime.Now.ToShortTimeString(), message));
                    linesCount++;
                    if (linesCount > 10)
                    {
                        LogSW.Flush();
                        linesCount = 0;
                    }
                }
            }
        }

        public static void LogUpdate(string message)
        {
            ThreadPool.QueueUserWorkItem(new WaitCallback((x) => LogFile(message)));
            ThreadPool.QueueUserWorkItem(new WaitCallback((x) => DebugFile(message)));
            ThreadPool.QueueUserWorkItem(new WaitCallback((x) => Debug.WriteLine(message)));
        }

        // This method will be called when the main thread is being closed
        public static void CloseAllStreams()
        {
            if (DebugSW != null)
            {
                DebugSW.Flush();
                DebugSW.Close();
            }
            if (LogSW != null)
            {
                LogSW.Flush();
                LogSW.Close();
            }
        }
    }
}
=============== main window ===========
void MainWindow()
{
    ... some code ....
    MessageLog.LogUpdate("Message text");
    ... code cont ....
    MessageLog.CloseAllStreams();
}
You should rethink your design. Your locks should not be local variables in your methods: each call would then create a new object and lock on it, which does not force any synchronization across threads (https://msdn.microsoft.com/en-us/library/c5kehkcz(v=vs.80).aspx). Since your methods are static, the locks need to be static fields, and you should have a different lock per file. You can use ThreadPool.QueueUserWorkItem (https://msdn.microsoft.com/en-us/library/kbf0f1ct(v=vs.110).aspx) instead of Tasks. The thread pool is a .NET facility that re-uses threads to run async operations, which fits your use case: you don't need control over each thread, you just need some async operation to execute and finish on its own.
A better approach would be to create a logger class that runs on its own thread. You can have a queue that multiple threads enqueue messages onto, and have the logger thread handle writing to the file. This ensures that only one thread ever writes to the file, and it also maintains logging order if you use a FIFO queue. You no longer need to lock the file writes, only the queue. You can use the .NET Monitor class (https://msdn.microsoft.com/en-us/library/system.threading.monitor(v=vs.110).aspx) to block the logger thread until a message is queued (look at the Enter/Wait/Pulse methods). To optimize further, you can keep a stream open to the file and push data to it as messages are queued; since only one thread ever accesses the file, this is safe. Just remember to close the stream when you are done. You can also set up a timer that fires periodically to flush the content. Keeping the stream open is not always recommended, especially if you anticipate other applications attempting to lock the file, but here it is likely fine. It is a design decision that should fit your application.
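A minimal sketch of that design, assuming a single log file; the class name, file path, and shutdown handling are illustrative, not from the original post:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

class BackgroundLogger : IDisposable
{
    private readonly Queue<string> queue = new Queue<string>();
    private readonly Thread worker;
    private readonly StreamWriter writer;
    private bool stopping;

    public BackgroundLogger(string path)
    {
        writer = new StreamWriter(path);
        worker = new Thread(WriteLoop) { IsBackground = true };
        worker.Start();
    }

    // Any thread may call this; only the queue is locked, never the file.
    public void Log(string message)
    {
        lock (queue)
        {
            queue.Enqueue(message);
            Monitor.Pulse(queue);   // wake the writer thread
        }
    }

    private void WriteLoop()
    {
        while (true)
        {
            string message;
            lock (queue)
            {
                while (queue.Count == 0 && !stopping)
                    Monitor.Wait(queue);       // block until a message arrives
                if (queue.Count == 0) return;  // stopping, and queue drained
                message = queue.Dequeue();
            }
            writer.WriteLine(message);          // only this thread touches the file
        }
    }

    public void Dispose()
    {
        lock (queue) { stopping = true; Monitor.Pulse(queue); }
        worker.Join();      // wait for the queue to drain
        writer.Dispose();   // flushes and closes the stream
    }

    static void Main()
    {
        using (var log = new BackgroundLogger("app.log"))
        {
            log.Log("hello");
            log.Log("world");
        }
        Console.Write(File.ReadAllText("app.log"));
    }
}
```

Dispose drains the queue before closing the stream, so no queued messages are lost on shutdown.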
You're opening a new stream and committing writes for each write action: very poor performance.
My recommendation is to use only one StreamWriter per file. The instance must be a class field, and you still need the lock to ensure it is thread-safe.
This also means you should not use a using statement in each write method.
Periodically, say every X writes, you can call Flush on the stream to commit the buffered writes to disk. The Flush must also be protected by the lock.
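A sketch of those three points together (one long-lived StreamWriter, one lock, a Flush every N writes). Note the flush counter must be a field, not a local as in the posted code, or it resets to zero on every call and never reaches the threshold. The file name and the threshold of 10 are illustrative:

```csharp
using System;
using System.IO;

static class Log
{
    private static readonly object Gate = new object();
    private static readonly StreamWriter Writer = new StreamWriter("app.log");
    private static int linesSinceFlush;   // field, so the count survives across calls

    public static void Write(string message)
    {
        lock (Gate)
        {
            Writer.WriteLine(message);
            if (++linesSinceFlush >= 10)
            {
                Writer.Flush();           // commit buffered writes to disk
                linesSinceFlush = 0;
            }
        }
    }

    public static void Close()
    {
        lock (Gate) { Writer.Flush(); Writer.Close(); }
    }
}

class Program
{
    static void Main()
    {
        for (int i = 0; i < 25; i++)
            Log.Write("line " + i);
        Log.Close();
        Console.WriteLine(File.ReadAllLines("app.log").Length);
    }
}
```
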
In my .NET program, I want to count the number of times a piece of code is hit. To make it a bit more challenging, my code usually executes on multiple threads, and I cannot control the creation/destruction of threads (and don't know when they are created)... they can even be pooled. Say:
class Program
{
    static int counter = 0;

    static void Main(string[] args)
    {
        Stopwatch sw = Stopwatch.StartNew();
        Parallel.For(0, 100000000, (a) =>
        {
            Interlocked.Increment(ref counter);
        });
        Console.WriteLine(sw.Elapsed.ToString());
    }
}
Since the performance counter and the method are hit quite often, I'd like to use a 'normal' variable instead of an atomic/interlocked integer. My second attempt was therefore to use thread-local storage in combination with IDisposable to speed things up. Because I cannot control thread creation/destruction, I have to keep track of the storage variables:
class Program
{
    static int counter = 0;

    // I don't know when threads are created / joined, which is why I need this:
    static List<WeakReference<ThreadLocalValue>> allStorage =
        new List<WeakReference<ThreadLocalValue>>();

    // The performance counter
    [ThreadStatic]
    static ThreadLocalValue local;

    class ThreadLocalValue : IDisposable
    {
        public ThreadLocalValue()
        {
            lock (allStorage)
            {
                allStorage.Add(new WeakReference<ThreadLocalValue>(this));
            }
        }

        public int ctr = 0;

        public void Dispose()
        {
            // Atomic add and exchange
            int tmp = Interlocked.Exchange(ref ctr, 0); // atomic set-to-0-with-read
            Interlocked.Add(ref Program.counter, tmp);  // atomic add
        }

        ~ThreadLocalValue()
        {
            // Make sure it's merged.
            Dispose();
        }
    }

    // Create-or-increment
    static void LocalInc()
    {
        if (local == null) { local = new ThreadLocalValue(); }
        ++local.ctr;
    }

    static void Main(string[] args)
    {
        Stopwatch sw = Stopwatch.StartNew();
        Parallel.For(0, 100000000, (a) =>
        {
            LocalInc();
        });
        lock (allStorage)
        {
            foreach (var item in allStorage)
            {
                ThreadLocalValue target;
                if (item.TryGetTarget(out target))
                {
                    target.Dispose();
                }
            }
        }
        Console.WriteLine(sw.Elapsed.ToString());
        Console.WriteLine(counter);
        Console.ReadLine();
    }
}
My question is: can we do this faster and/or prettier?
What you need is a thread-safe, non-blocking, volatile, static variable to perform the counting for you.
Thank goodness, the .NET Framework provides managed ways to do what you want.
For starters, you need a volatile, static variable to use as the counter. Declare it where all your threads can access it:
public static volatile int volatileCounter;
Here static means it is a class member rather than an instance member, and volatile prevents stale-read caching problems.
Next, you need code that increments it in a thread-safe, non-blocking way. If you don't expect your counter to exceed the limits of int (which is very likely), you can use the Interlocked class:
Interlocked.Increment(ref YourClass.volatileCounter);
(The compiler warns about passing a volatile field by ref; with the Interlocked methods that warning is safe to ignore.)
The Interlocked class guarantees that the increment is atomic, so no race condition can produce false results, and it is non-blocking in the sense that no heavyweight sync objects or thread blocking are involved.
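On the original question of doing it 'prettier': one possible alternative to the manual [ThreadStatic] plus WeakReference bookkeeping is ThreadLocal<T> constructed with trackAllValues: true (available since .NET 4.5), whose Values property exposes every thread's slot. The single-element array here is an illustrative trick to get a mutable per-thread cell:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    // One int[1] cell per thread; trackAllValues lets us enumerate
    // all cells later via slot.Values, replacing the WeakReference list.
    static readonly ThreadLocal<int[]> slot =
        new ThreadLocal<int[]>(() => new int[1], trackAllValues: true);

    static void Main()
    {
        // Each participating thread bumps only its own cell: no contention.
        Parallel.For(0, 1000000, i => slot.Value[0]++);

        // Parallel.For has completed, so it is safe to sum the per-thread counts.
        Console.WriteLine(slot.Values.Sum(v => v[0]));
    }
}
```

The trade-off is the same as the hand-rolled version: the total is only correct once all worker threads have finished, but each increment is a plain non-atomic operation on thread-private state.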
static void Main(string[] args)
{
    Test c = new Test();
    Thread oThread = new Thread(new ThreadStart(c.Lock));
    oThread.Start();
    Thread oThread2 = new Thread(new ThreadStart(c.AfterLock));
    oThread2.Start();
    Console.ReadLine();
}

public class Test
{
    public Dictionary<string, string> dic = new Dictionary<string, string>();

    public void Lock()
    {
        lock (((IDictionary)dic).SyncRoot)
        {
            for (var i = 3; i < 200; i++)
            {
                Console.WriteLine(i.ToString());
                dic.Add(i.ToString(), i.ToString());
            }
        }
    }

    public void AfterLock()
    {
        Console.WriteLine(dic["100"]);
    }
}
AfterLock throws an exception: "The given key was not present in the dictionary."
dic was locked by the first thread, so why does AfterLock not wait for the first thread's lock?
You need to lock around access to the object both when reading the data and when writing it. Synchronizing only the writes is not safe.
Nothing forces your AfterLock method to run after your Lock method. It can just as easily run before it, or, since you're not properly synchronizing the read, the two can even run in parallel. You need to add a synchronization mechanism that makes the read run after the write, since its execution depends on the write having run first (as it clearly does here).
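A minimal sketch of both fixes, assuming the intent is simply "read after the write completes": the read takes the same lock as the write, and Join provides the ordering guarantee. The class and member names follow the question's code, with a dedicated lock object added:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Test
{
    private readonly object gate = new object();
    private readonly Dictionary<string, string> dic = new Dictionary<string, string>();

    public void Writer()
    {
        lock (gate)   // same lock object for all access to dic
        {
            for (int i = 3; i < 200; i++)
                dic.Add(i.ToString(), i.ToString());
        }
    }

    public void Reader()
    {
        // The read takes the same lock; the lock alone still does not
        // guarantee the writer ran first, so check before indexing.
        lock (gate)
        {
            string value;
            if (dic.TryGetValue("100", out value))
                Console.WriteLine(value);
            else
                Console.WriteLine("not written yet");
        }
    }

    static void Main()
    {
        var t = new Test();
        var writerThread = new Thread(t.Writer);
        writerThread.Start();
        writerThread.Join();   // one simple ordering guarantee: wait for the writer
        t.Reader();
    }
}
```

Join is the bluntest ordering tool; an event (ManualResetEventSlim) or Monitor.Wait/Pulse would let the reader block without serializing the whole program.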
You should lock on a dedicated object, not the one you're trying to use.
class Example
{
    private static object cacheLock = new object();

    public static void Load()
    {
        // . . .
        lock (cacheLock)
        {
            CacheTable = (ThreadSafeDictionary<String,
                ThreadSafeDictionary<Object, Object>>)CacheTableTmp.Clone();
        }
    }
}
Here is some code that perpetually generates GUIDs. I've written it to learn about threading. In it you'll notice a lock around where I generate GUIDs and enqueue them, even though ConcurrentQueue is thread-safe. That's because my actual code will need to use NHibernate, so I must make sure that only one thread gets to fill the queue.
While monitoring this code in Task Manager, I notice the process drops its number of threads from 18 (on my machine) to 14 but no lower. Is this because my code isn't good?
Also, can someone refactor this if they see fit? I love shorter code.
class Program
{
    ConcurrentNewsBreaker Breaker;

    static void Main(string[] args)
    {
        new Program().Execute();
        Console.Read();
    }

    public void Execute()
    {
        Breaker = new ConcurrentNewsBreaker();
        QueueSome();
    }

    public void QueueSome()
    {
        ThreadPool.QueueUserWorkItem(DoExecute);
    }

    public void DoExecute(Object State)
    {
        String Id = Breaker.Pop();
        Console.WriteLine(String.Format("- {0} {1}", Thread.CurrentThread.ManagedThreadId, Breaker.Pop()));
        if (Breaker.Any())
            QueueSome();
        else
            Console.WriteLine(String.Format("- {0} XXXX ", Thread.CurrentThread.ManagedThreadId));
    }
}

public class ConcurrentNewsBreaker
{
    static readonly Object LockObject = new Object();
    ConcurrentQueue<String> Store = new ConcurrentQueue<String>();

    public String Pop()
    {
        String Result = null;
        if (Any())
            Store.TryDequeue(out Result);
        return Result;
    }

    public Boolean Any()
    {
        if (!Store.Any())
        {
            Task FillTask = new Task(FillupTheQueue, Store);
            FillTask.Start();
            FillTask.Wait();
        }
        return Store.Any();
    }

    private void FillupTheQueue(Object StoreObject)
    {
        ConcurrentQueue<String> Store = StoreObject as ConcurrentQueue<String>;
        lock (LockObject)
        {
            for (Int32 i = 0; i < 100; i++)
                Store.Enqueue(Guid.NewGuid().ToString());
        }
    }
}
You are using .NET's ThreadPool, so .NET/Windows manages the number of threads based on the amount of work waiting to be processed.
"While I monitor this code in Task Manager, I notice the process drops the number of threads from 18 (on my machine) to 14 but no less. Is this because my code isn't good?"
This does not indicate a problem. 14 is still high, unless you've got a 16-core CPU. The thread pool will try to adjust and do the work with as few threads as possible. You should start to worry when the number of threads goes up significantly.