.Net Mutex Question - c#

I have an application that uses a Mutex for cross-process synchronization of a block of code. This mechanism works great for the application's current needs. In the worst case I have noticed that about 6 threads can back up on the Mutex. It takes about 2-3 seconds to execute the synchronized code block.
I just received a new requirement asking for a priority feature on the Mutex, such that occasionally some requests for the Mutex can be deemed more important than the rest. When one of these higher-priority threads comes in, the desired behavior is for the Mutex to grant acquisition to the higher-priority request instead of the lower one.
So is there any way to control the blocked-waiter queue that Windows maintains for a Mutex? Should I consider using a different threading model?
Thanks,
Matt

Using just the Mutex, this will be a tough one to solve. I am sure someone out there is thinking about thread priorities and the like, but I would probably not consider that route.
One option would be to maintain a shared-memory structure and implement a simple priority queue in it. The shared memory can use a MemoryMappedFile. When a process wants to execute the section of code, it puts a token with a priority on the priority queue; then, whenever a waiting thread wakes up, it inspects the first token in the queue, and if that token belongs to its process it dequeues the token and executes the code.

Mutex isn't that great for a number of reasons, and as far as I know there is no way to promote one thread over another while they are running, nor a nice way to accommodate your requirement.
I just read Jeffrey Richter's "CLR via C#" (3rd edition), and there are a load of great thread-sync constructs in there, plus lots of good threading advice generally.
I wish I could remember enough of it to answer your question, but I doubt I would get it across as well as he can. Check out his website, http://www.wintellect.com/, or search for some of his Concurrent Affairs articles.
They will definitely help.

Give each thread an AutoResetEvent. Then, instead of waiting on a mutex, each thread adds its ARE to a sorted list. If there is only one ARE on the list, fire the event; otherwise wait for its ARE to fire. When a thread finishes processing, it removes its ARE from the list and fires the next one. Be sure to synchronize the list.
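A minimal in-process sketch of that idea, assuming a PriorityGate wrapper class (all names are illustrative; the original question is cross-process, so this only demonstrates the mechanism): each caller adds its event to a list sorted by priority and arrival order, waits on it, and on completion wakes the best remaining waiter.

using System;
using System.Collections.Generic;
using System.Threading;

sealed class PriorityGate
{
    private readonly object _sync = new object();
    // Sorted by descending priority, then ascending arrival order.
    private readonly SortedList<Tuple<int, long>, AutoResetEvent> _waiters =
        new SortedList<Tuple<int, long>, AutoResetEvent>(
            Comparer<Tuple<int, long>>.Create((a, b) =>
                a.Item1 != b.Item1 ? b.Item1.CompareTo(a.Item1)
                                   : a.Item2.CompareTo(b.Item2)));
    private long _arrival;

    public void Run(int priority, Action criticalSection)
    {
        var are = new AutoResetEvent(false);
        Tuple<int, long> key;
        lock (_sync)
        {
            key = Tuple.Create(priority, _arrival++);
            _waiters.Add(key, are);
            if (_waiters.Count == 1)
                are.Set();                      // nothing queued or running: go first
        }
        are.WaitOne();                          // fires when it is our turn
        try
        {
            criticalSection();
        }
        finally
        {
            lock (_sync)
            {
                _waiters.Remove(key);           // our entry stays in the list while we run
                if (_waiters.Count > 0)
                    _waiters.Values[0].Set();   // wake the highest-priority waiter
            }
        }
    }
}

The running thread keeps its entry in the list until it is done, so a count of one means nothing else is queued or running; the arrival counter keeps equal-priority callers FIFO.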

Related

What is a multithreading program and how does it work?

What is a multithreading program and how does it work exactly? I read some documents but I'm confused. I know that code is executed line by line, but I can't understand how the program manages this.
A simple answer would be appreciated. A C# example please (only animation!)
What is a multi-threading program and how does it work exactly?
The interesting part about this question is that complete books have been written on the topic, yet it is still elusive to a lot of people. I will try to explain it in the order detailed underneath.
Please note this is just to provide a gist; an answer like this can never do justice to the depth and detail required. Regarding videos, the best that I have come across are part of paid subscriptions (Wintellect and Pluralsight); check whether you can listen to them on a trial basis, assuming you don't already have a subscription:
Wintellect by Jeffrey Richter (from his book CLR via C#, which has the same chapter on Thread Fundamentals)
CLR Threading by Mike Woodring
Explanation Order
What is a thread?
Why were threads introduced; what was the main purpose?
Pitfalls, and how to avoid them using synchronization constructs
Thread vs ThreadPool
Evolution of multi-threaded programming APIs, like the Parallel API and the Task API
Concurrent collections and their usage
Async-await: a thread but no thread, and why it is best for IO
What is a thread?
It is a software implementation of a purely Windows OS concept (the multi-threaded architecture); it is the bare minimum unit of work. Every process on the Windows OS has at least one thread, and every method call is done on a thread. Each process can have multiple threads, to do multiple things in parallel (provided there is hardware support).
Other Unix-based OSes use a multi-process architecture; in fact, on Windows, even the most complex pieces of software, like Oracle.exe, run as a single process with multiple threads for different critical background operations.
Why were threads introduced; what was the main purpose?
Contrary to the perception that concurrency is the main purpose, it was robustness that led to the introduction of threads. Imagine that every process on Windows runs on the same thread (as in the initial 16-bit versions) and one of those processes crashes; in most cases that simply means a system restart to recover. Using threads for concurrent operations, since multiple of them can be invoked in each process, came into the picture later. In fact, threads are also important for utilizing a processor with multiple cores to its full ability.
Pitfalls, and how to avoid them using synchronization constructs
More threads means more work completed concurrently, but issues arise when the same memory is accessed, especially for writes, as that is when it can lead to:
Memory corruption
Race condition
Another issue is that a thread is a very costly resource: each thread has a thread environment block and kernel memory allocations, and scheduling each thread on a processor core costs time in context switching. It is quite possible that misuse causes a huge performance penalty instead of an improvement.
To avoid thread-related corruption issues, it's important to use synchronization constructs such as lock, Mutex, or Semaphore, based on the requirement. Reads alone are thread safe, but as soon as writes are involved, appropriate synchronization is needed.
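A minimal sketch, assuming a simple shared counter, of why writes need a lock: several tasks increment the same field, and the lock keyword makes the read-modify-write atomic.

using System;
using System.Threading.Tasks;

class Program
{
    static readonly object Sync = new object();
    static int _counter;

    static void Main()
    {
        var tasks = new Task[4];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                for (int n = 0; n < 100000; n++)
                {
                    lock (Sync)          // only one thread at a time mutates _counter
                    {
                        _counter++;
                    }
                }
            });
        }
        Task.WaitAll(tasks);
        Console.WriteLine(_counter);     // always 400000; without the lock, often less
    }
}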
Thread vs ThreadPool
The threads we use in C#/.NET are not the real threads; they are just managed wrappers that invoke Win32 threads. The challenge remains that users can grossly misuse them, for example by creating far more threads than required or assigning processor affinity. So isn't it better to queue a work item to a standard pool and let Windows decide when a new thread is required and when an existing thread can pick up the work item? A thread is a costly resource whose usage needs to be optimized, else it can be a bane rather than a boon.
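A small sketch of that contrast, with placeholder work: a dedicated Thread you create and manage yourself versus a work item queued to the shared ThreadPool.

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Dedicated thread: you pay for its creation, stack and scheduling yourself.
        var worker = new Thread(() => Console.WriteLine("dedicated thread"));
        worker.Start();
        worker.Join();

        // Thread pool: the runtime decides when and on which pooled thread this runs.
        using (var done = new ManualResetEventSlim())
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine("pool thread");
                done.Set();
            });
            done.Wait();
        }
    }
}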
Evolution of multi-threaded programming APIs, like the Parallel API and the Task API
From .NET 4.0 onward, a variety of new APIs (Parallel.For and Parallel.ForEach for data parallelization, and the Task API for task parallelization) have made it very simple to introduce concurrency into a system. These APIs again work on top of the thread pool internally. A Task is more like scheduling a piece of work for some time in the future. Introducing concurrency is now a breeze, though synchronization constructs are still required to avoid memory corruption and race conditions, or thread-safe collections can be used instead.
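A minimal sketch of both styles (the numbers are arbitrary): Parallel.For for data parallelism and Task.Run for task parallelism, both scheduled on the thread pool.

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Data parallelism: the loop body runs concurrently over the index range.
        long sum = 0;
        Parallel.For(0, 1000, i =>
        {
            Interlocked.Add(ref sum, i);        // the shared write still needs synchronization
        });
        Console.WriteLine(sum);                 // 499500

        // Task parallelism: independent pieces of work scheduled for the future.
        Task<int> a = Task.Run(() => 2 + 2);
        Task<int> b = Task.Run(() => 3 * 3);
        Console.WriteLine(a.Result + b.Result); // 13
    }
}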
Concurrent collections and their usage
Implementations like ConcurrentBag, ConcurrentQueue, and ConcurrentDictionary, part of System.Collections.Concurrent, are inherently thread safe, use spin-waiting internally, and are much easier and quicker to use than explicit synchronization. They are also much easier to manage and work with. There is another set of APIs, like ImmutableList in System.Collections.Immutable (available via NuGet), which are thread safe by virtue of creating another copy of the data structure internally.
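A small sketch, using a ConcurrentDictionary as the thread-safe collection: many parallel iterations update shared counts with no explicit lock.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var counts = new ConcurrentDictionary<int, int>();

        Parallel.For(0, 10000, i =>
        {
            int bucket = i % 10;
            // AddOrUpdate is atomic per key; no explicit lock is needed around it.
            counts.AddOrUpdate(bucket, 1, (key, existing) => existing + 1);
        });

        foreach (var pair in counts)
            Console.WriteLine(pair.Key + ": " + pair.Value);   // each bucket ends at 1000
    }
}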
Async-await: a thread but no thread, and why it is best for IO
This is an important aspect of concurrency, meant for IO calls (disk, network). The other APIs discussed so far are meant for compute-based concurrency, where threads matter and make things faster; but for an IO call a thread has no use except waiting for the call to return, because IO calls are processed on hardware-based queues (IO completion ports).
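A minimal async/await sketch for an IO call (the URL is just a placeholder): no thread is blocked while the HTTP response is in flight.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        DownloadAsync().GetAwaiter().GetResult();
    }

    static async Task DownloadAsync()
    {
        using (var client = new HttpClient())
        {
            // Control returns to the caller here; a thread is used again only
            // when the response arrives and the continuation runs.
            string body = await client.GetStringAsync("https://example.com/");
            Console.WriteLine(body.Length);
        }
    }
}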
A simple analogy might be found in the kitchen.
You've probably cooked using a recipe before -- start with the specified ingredients, follow the steps indicated in the recipe, and at the end you (hopefully) have a delicious dish ready to eat. If you do that, then you have executed a traditional (non-multithreaded) program.
But what if you have to cook a full meal, which includes a number of different dishes? The simple way to do it would be to start with the first recipe, do everything the recipe says, and when it's done, put the finished dish (and the first recipe) aside, then start on the second recipe, do everything it says, put the second dish (and second recipe) aside, and so on until you've gone through all of the recipes one after another. That will work, but you might end up spending 10 hours in the kitchen, and of course by the time the last dish is ready to eat, the first dish might be cold and unappetizing.
So instead you'd probably do what most chefs do, which is to start working on several recipes at the same time. For example, you might put the roast in the oven for 45 minutes, but instead of sitting in front of the oven waiting 45 minutes for the roast to cook, you'd spend the 45 minutes chopping the vegetables. When the oven timer rings, you put down your vegetable knife, pull the cooked roast out of the oven and let it cool, then go back to chopping vegetables, and so on. If you can do that, then you are successfully multitasking several recipes/programs. That is, you aren't literally working on multiple recipes at once (you still have only two hands!), but you are jumping back and forth from following one recipe to following another whenever necessary, and thereby making progress on several tasks rather than twiddling your thumbs a lot. Do this well and you can have the whole meal ready to eat in a much shorter amount of time, and everything will be hot and fresh at about the same time too. If you do this, you are executing a simple multithreaded program.
Then if you wanted to get really fancy, you might hire a few other chefs to work in the kitchen at the same time as you, so that you can get even more food prepared in a given amount of time. If you do this, your team is doing multiprocessing, with each chef taking one part of the total work and all of them working simultaneously. Note that each chef may well be working on multiple recipes (i.e. multitasking) as described in the previous paragraph.
As for how a computer does this sort of thing (no more analogies about chefs), it usually implements it using a list of ready-to-run threads and a timer. When the timer goes off (or when the currently executing thread has nothing to do for a while, because e.g. it is waiting to load data from a slow hard drive), the operating system does a context switch, in which it pauses the current thread (by putting it into a list somewhere and no longer executing instructions from that thread's code) and then pulls another thread from the list of ready-to-run threads and starts executing instructions from that thread's code instead. This repeats for as long as necessary, often with context switches happening every few milliseconds, giving the illusion that multiple programs are running "at the same time" even on a single-core CPU. (On a multi-core CPU it does this same thing on each core, and in that case it's no longer just an illusion; multiple programs really are running at the same time.)
Why don't you refer to Microsoft's very own documentation of the .NET class System.Threading.Thread?
It has a handful of simple example programs written in C# (at the bottom of the page), just as you asked for:
Thread Examples
Actually, multithreading means doing multiple pieces of work at the same time, so you can complete processing in parallel. You can take a task off your main thread, execute it some other way, and be done.

How can I share resources with threads in C#?

I have a thread reading from a specific PLC's memory and it works perfectly. Now what I want is to start another thread to test the behavior of the system (simulating the first thread) in case of a connectivity issue, and when everything is OK, continue with the first thread. But I think I'll have problems with that, because these two threads will need to use the same port.
My first idea was to abort the first thread, start the second one, and when everything's OK again, abort this thread and 'restart' the first one.
I've read some other forums and people say that aborting or suspending a thread is the worst solution, and I've read about synchronization of threads, but I don't really know if that is useful in this case because I've never used it.
My question is: what is the correct way to solve this kind of situation?
You have a shared resource that you need to coordinate thread access to. There are a number of mechanisms available in .NET for that coordination.
There is a wonderful resource that provides both an introduction to threading concepts in .NET and a discussion of advanced concepts in an approachable manner:
http://www.albahari.com/threading/
In your case, have a look at the section on locking
Exclusive locking is used to ensure that only one thread can enter particular sections of code at a time. The two main exclusive locking constructs are lock and Mutex. Of the two, the lock construct is faster and more convenient. Mutex, though, has a niche in that its lock can span applications in different processes on the computer.
http://www.albahari.com/threading/part2.aspx#_Locking
You can structure your two threads so that they must acquire a specific lock to work with the port. Have your first thread release that lock before you start the second thread, then have the first thread wait to acquire that lock again (which the second thread will hold until done).
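A hedged sketch of that structure, with placeholder method names (PollPlc, RunConnectivityTest): both threads take the same lock before touching the port, so whichever thread holds it has exclusive access.

using System;
using System.Threading;

class PlcWorker
{
    private readonly object _portLock = new object();

    public void ReaderLoop()
    {
        while (true)
        {
            lock (_portLock)        // only the lock holder may touch the port
            {
                PollPlc();
            }
            Thread.Sleep(100);      // lock released here, so the test thread can get in
        }
    }

    public void ConnectivityTest()
    {
        lock (_portLock)            // the reader is paused for as long as we hold this
        {
            RunConnectivityTest();
        }
    }

    private void PollPlc() { /* read the PLC memory over the port here */ }
    private void RunConnectivityTest() { /* simulate the connectivity failure here */ }
}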

Thread.Sleep() usage to Prevent Server Overload

I wrote some code that mass-imports a high volume of users into AD. To refrain from overloading the server, I put a Thread.Sleep() in the code, executed on every iteration.
Is this a good use of the method, or is there a better alternative (.NET 4.0 applies here)?
Does Thread.Sleep() even aid in performance? What is the cost and performance impact of sleeping a thread?
The Thread.Sleep() method just puts the thread in a paused state for the specified amount of time. There are three different ways to achieve the same sleep by calling the method on three different types, and they all have different features. Most importantly, if you use Sleep() on the main UI thread, it will stop processing messages during that pause and the GUI will look locked up. You need to use a BackgroundWorker to run the job that needs to sleep.
My opinion is to use the Thread.Sleep() method and just follow my previous advice. In your specific case I guess you'll have no issues. If you put some effort into looking for this exact topic on SO, I'm sure you'll find much better explanations of what I just summarized.
If you have no way to receive feedback from the called service, as you would in a typical event-driven system (speaking in the abstract; we could also say a callback, or any information that lets you understand how the service is affected by your call), then Sleep may be the way to go.
I think that Thread.Sleep is one way to handle this; #cHao is correct that using a timer would allow you to do this in another fashion. Essentially, you're trying to cut down the number of commands sent to the AD server over a period of time.
In using timers, you're going to need to devise a way to detect trouble (something more intuitive than a try/catch). For instance, if your server starts stalling and responding more slowly, you're going to continue stacking up commands that the server can't handle (which may cascade into other errors).
When working with AD I've seen the Domain Controller freak out when too many commands come in (similar to a DOS attack) and bring the server to a crawl or crash. I think by using the sleep method you're creating a manageable and measurable flow.
In this instance, using a thread with a low priority may slow it down, but not to any controllable level. The thread priority will only be a factor on the machine sending the commands, not on the server having to process them.
Hope this helps; cheers!
If what you want is to not overload the server, you can just reduce the priority of the thread.
Thread.Sleep() does not consume any resources. However, the correct way to do this is to set the priority of the thread to a value below Normal: Thread.CurrentThread.Priority = ThreadPriority.Lowest, for example.
Thread.Sleep is not that "evil, do not ever use it" kind of thing, but maybe (just maybe) the fact that you need to use it reflects some gap in the solution design. This is not a rule at all, though.
Personally, I have never found a situation where I had to use Thread.Sleep.
Right now I'm working on an ASP.NET MVC application that uses a background thread to load a lot of data from the database into a memory cache and afterwards write some data back to the database.
The only thing I did to prevent that thread from eating all my web server and DB processors was to reduce the thread priority to the Lowest level. That thread takes about 35 minutes to finish all the operations, instead of 7 minutes with a Normal-priority thread. By the end of the process the thread will have done about 230k selects against the database server, but this has not affected my database or web server performance in any way perceptible to the user.
Tip: remember to set the priority back to Normal if you are using a thread from the ThreadPool.
Here you can read about Thread.Priority:
http://msdn.microsoft.com/en-us/library/system.threading.thread.priority.aspx
Here is a good article about why not to use Thread.Sleep in a production environment:
http://msmvps.com/blogs/peterritchie/archive/2007/04/26/thread-sleep-is-a-sign-of-a-poorly-designed-program.aspx
EDIT: As others said here, just reducing your thread's priority may not prevent the thread from sending a large number of commands/data to AD. Maybe you'll get better results if you rethink the whole thing and use timers or something like that. I personally think that reducing the priority could resolve your problem, although you should run some tests using your data to see what happens to your server and the other servers involved in the process.
You could schedule the thread at BelowNormal priority instead. That said, that could potentially lead to your task never running if something else overloads the server. (Assuming Windows scheduling works the way the documentation on scheduling threads mentions for "some operating systems".)
That said, you said you're moving data into AD. If it's over the network, it's entirely possible the CPU impact of your code will be negligible compared to the I/O and processing on the AD side.
I don't see any issue with it except that during the time you put the thread to sleep, that thread will not be responsive. If that is your main thread, your GUI will become non-responsive. If it is a background thread, you won't be able to communicate with it (e.g. to cancel it). If the time you sleep is short, it shouldn't matter.
I don't think reducing the priority of the thread will help as 1) your code might not even be running on the server and 2) most of the work being done by the server is probably not going to be on your thread anyway.
Thread.Sleep does not aid performance (unless your thread has to wait for some resource). It incurs at least some overhead, and the amount of time that you sleep for is not guaranteed: the OS can decide to have your thread sleep longer than the amount of time you specify.
As such, it would make more sense to do a significant batch of work between calls to Thread.Sleep().
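A rough sketch of that batching idea (ImportUser, BatchSize and the delay are placeholders): import a chunk of users, then sleep once per chunk rather than once per user.

using System;
using System.Collections.Generic;
using System.Threading;

class Importer
{
    private const int BatchSize = 50;

    public void ImportAll(IEnumerable<string> users)
    {
        int inBatch = 0;
        foreach (var user in users)
        {
            ImportUser(user);                            // one AD write
            if (++inBatch == BatchSize)
            {
                inBatch = 0;
                Thread.Sleep(TimeSpan.FromSeconds(2));   // throttle once per batch
            }
        }
    }

    private void ImportUser(string user) { /* create the account in AD here */ }
}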
Thread.Sleep() is a CPU-less wait state, so its overhead should be pretty minimal. If you execute Thread.Sleep(0), you don't [necessarily] sleep, but you voluntarily surrender your time slice so the scheduler can let a lower-priority thread run.
You can also lower your thread's priority by setting Thread.Priority.
Another way of throttling your task is to use a Timer:
// instantiate a System.Threading.Timer that 'ticks' 10 times per second
// (your ideal rate might be different)
Timer timer = new Timer(ImportUserIntoActiveDirectory, null, 0, 100);
where ImportUserIntoActiveDirectory is a callback that imports just one user into AD:
private void ImportUserIntoActiveDirectory(object state)
{
    // import just one user into AD
    return;
}
This lets you dial things in. The callback is invoked on thread pool worker threads, so you don't tie up your primary thread. Let the OS do the work for you: all you do is decide on your target transaction rate.

Monitor.TryEnter()

I was wondering about the Monitor class.
As far as I know, waiting threads are not handled FIFO.
The first one that acquires the lock is not always the first one in the waiting queue.
Is this correct?
Is there some way to ensure the FIFO condition?
Regards
If you are referring to a built-in way, then no. Repeatedly calling TryEnter in a loop is by definition not fair and unfortunately neither is the simple Monitor.Enter. Technically a thread could wait forever without getting the lock.
If you want absolute fairness you will need to implement it yourself using a queue to keep track of arrival order.
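A hedged sketch of such a hand-rolled fair lock (illustrative only; no timeouts or cancellation): arrivals queue up in order, and Exit hands the lock directly to the oldest waiter.

using System;
using System.Collections.Generic;
using System.Threading;

sealed class FifoLock
{
    private readonly object _sync = new object();
    private readonly Queue<ManualResetEventSlim> _waiters = new Queue<ManualResetEventSlim>();
    private bool _held;

    public void Enter()
    {
        ManualResetEventSlim myTurn;
        lock (_sync)
        {
            if (!_held)
            {
                _held = true;               // lock was free: take it immediately
                return;
            }
            myTurn = new ManualResetEventSlim(false);
            _waiters.Enqueue(myTurn);       // otherwise join the back of the line
        }
        myTurn.Wait();                      // signalled exactly when it is our turn
        myTurn.Dispose();
    }

    public void Exit()
    {
        lock (_sync)
        {
            if (_waiters.Count > 0)
                _waiters.Dequeue().Set();   // hand the lock to the oldest waiter
            else
                _held = false;              // nobody waiting: the lock becomes free
        }
    }
}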
Is there some way to ensure the FIFO condition?
In a word: no!
I wrote a short article about this: Is the Ready Queue FIFO?
Look at this question; I think it will be very useful for you - Does lock() guarantee acquired in order requested?
especially this quote:
Because monitors use kernel objects internally, they exhibit the same roughly-FIFO behavior that the OS synchronization mechanisms also exhibit (described in the previous chapter). Monitors are unfair, so if another thread tries to acquire the lock before an awakened waiting thread tries to acquire the lock, the sneaky thread is permitted to acquire a lock.

Resource usage of ThreadPool RegisterWaitForSingleObject

I am writing a server application which processes requests from multiple clients. For the processing of requests I am using the thread pool.
Some of these requests modify a database record, and I want to restrict the access to that specific record to one threadpool thread at a time. For this I am using named semaphores (other processes are also accessing these records).
For each new request that wants to modify a record, the thread should wait in line for its turn.
And this is where the question comes in:
As I don't want the threadpool to fill up with threads waiting for access to a record, I found the RegisterWaitForSingleObject method in the threadpool.
But when I read the documentation (MSDN) under the section Remarks:
New wait threads are created automatically when required. ...
Does this mean that the threadpool will fill up with wait-threads? And how does this affect the performance of the threadpool?
Any other suggestions to boost performance are more than welcome!
Thanks!
Your solution is a viable option. In the absence of more specific details I do not think I can offer other tangible options. However, let me try to illustrate why I think your current solution is, at the very least, based on sound theory.
Let's say you have 64 requests that came in simultaneously. It is reasonable to assume that the thread pool could dispatch each one of those requests to a thread immediately. So you might have 64 threads that immediately begin processing. Now let's assume that the mutex has already been acquired by another thread and is held for a really long time. That means those 64 threads will be blocked for a long time waiting for the thread that currently owns the mutex to release it; those 64 threads are wasted doing nothing.
On the other hand, if you choose to use RegisterWaitForSingleObject, as opposed to using a blocking call to wait for the mutex to be released, then you can immediately release those 64 waiting threads (work items) and allow them to be put back into the pool. If I were to implement my own version of RegisterWaitForSingleObject, I would use the WaitHandle.WaitAny method, which allows me to specify up to 64 handles (I did not randomly choose 64 for the number of requests, after all) in a single blocking method call. I am not saying it would be easy, but I could replace my 64 waiting threads with only a single thread from the pool. I do not know how Microsoft implemented the RegisterWaitForSingleObject method, but I am guessing they did it in a manner that is at least as efficient as my strategy. To put this another way, you should be able to reduce the number of pending work items in the thread pool by at least a factor of 64 by using RegisterWaitForSingleObject.
So you see, your solution is based on sound theory. I am not saying that your solution is optimal, but I do believe your concern is unwarranted in regards to the specific question asked.
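A hedged sketch of the pattern described above (recordSemaphore and ModifyRecord are placeholders): rather than blocking a pool thread on the named semaphore, register a wait and run the update in the callback once the handle is signalled.

using System;
using System.Threading;

static class RecordUpdater
{
    public static void QueueUpdate(Semaphore recordSemaphore, object request)
    {
        // No pool thread blocks here; a shared wait thread watches the handle.
        ThreadPool.RegisterWaitForSingleObject(
            recordSemaphore,
            (state, timedOut) =>
            {
                try
                {
                    ModifyRecord(state);          // we hold a semaphore slot here
                }
                finally
                {
                    recordSemaphore.Release();    // let the next waiter in
                }
            },
            request,
            Timeout.Infinite,
            true);                                // executeOnlyOnce
    }

    static void ModifyRecord(object state) { /* update the database record here */ }
}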
IMHO you should let the database do its own synchronization. All you need to do is to ensure that you're sync'ed within your process.
The Interlocked class might be a premature optimization that is too complex to apply here. I would recommend using higher-level sync objects, such as ReaderWriterLockSlim. Or better yet, a Monitor.
An approach to this problem that I've used before is to have the first thread that gets one of these work items be responsible for any other ones that arrive while it's processing the work item(s). This is done by queueing the work items and then dropping into a critical section to process the queue. Only the 'first' thread will drop into the critical section. If a thread can't get the critical section, it'll leave and let the thread already operating in the critical section handle the queued object.
It's really not very complicated - the only thing that might not be obvious is that when leaving the critical section, the processing thread has to do it in a way that doesn't potentially leave a late-arriving workitem on the queue. Basically, the 'processing' critical section lock has to be released while holding the queue lock. If not for this one requirement, a synchronized queue would be sufficient, and the code would really be simple!
Pseudo code:
// `workitem` is an object that contains the database modification request
//
// `queue` is a Queue<WorkItem> that holds these workitem requests;
// `queueLock` is the object used to synchronize access to it
//
// `processingLock` is an object used as a lock to indicate that a thread
// is currently processing the queue
//
// Any number of threads can call this function, but only one will end up
// processing all the workitems. The other threads simply drop their
// workitem in the queue and leave.
void ThreadpoolHandleDatabaseUpdateRequest(WorkItem workitem)
{
    // put the workitem on the queue
    Monitor.Enter(queueLock);
    queue.Enqueue(workitem);
    Monitor.Exit(queueLock);

    if (!Monitor.TryEnter(processingLock))
    {
        // another thread holds the processing lock; it will handle the workitem
        return;
    }

    for (;;)
    {
        Monitor.Enter(queueLock);
        if (queue.Count == 0)
        {
            // done processing the queue; release the locks in an order that
            // ensures a late-arriving workitem can't get stranded on the queue
            Monitor.Exit(processingLock);
            Monitor.Exit(queueLock);
            break;
        }
        workitem = queue.Dequeue();
        Monitor.Exit(queueLock);

        // this acquires the database mutex, does the update and releases
        // the database mutex
        DoDatabaseModification(workitem);
    }
}
The ThreadPool creates one wait thread for roughly every 64 registered waitable objects.
Good comments are here: Thread.sleep vs Monitor.Wait vs RegisteredWaitHandle?
