I have a thread reading from a specific PLC's memory, and it works perfectly. Now I want to start a second thread that simulates the first one, to test how the system behaves during a connectivity issue, and then let the first thread continue once everything is OK. But I think I'll have problems with that, because these two threads will need to use the same port.
My first idea was to abort the first thread, start the second one, and when everything's OK again, abort the second thread and 'restart' the first one.
I've read on other forums that aborting or suspending a thread is the worst solution, and I've read about synchronization of threads, but I don't really know whether it's useful in this case because I've never used it.
My question is: what is the correct way to solve this kind of situation?
You have a shared resource that you need to coordinate thread access to. There are a number of mechanisms in .NET available for that coordination.
There is a wonderful resource that provides both an introduction to threading concepts in .NET and a discussion of advanced concepts in an approachable manner:
http://www.albahari.com/threading/
In your case, have a look at the section on locking:
Exclusive locking is used to ensure that only one thread can enter particular sections of code at a time. The two main exclusive locking constructs are lock and Mutex. Of the two, the lock construct is faster and more convenient. Mutex, though, has a niche in that its lock can span applications in different processes on the computer.
http://www.albahari.com/threading/part2.aspx#_Locking
You can structure your two threads so that they must acquire a specific lock to work with the port. Have your first thread release that lock before you start the second thread, then have the first thread wait to acquire that lock again (which the second thread will hold until done).
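For illustration, here is a minimal sketch of that structure; the class, method bodies, and timings below are placeholders rather than your actual PLC code:

using System;
using System.Threading;

class PortCoordinator
{
    // One lock object guards the shared port; whichever thread holds it
    // has exclusive use of the port.
    private readonly object portLock = new object();
    private volatile bool running = true;

    // First thread: the normal PLC read loop.
    public void ReaderLoop()
    {
        while (running)
        {
            lock (portLock)
            {
                Console.WriteLine("Reading PLC memory...");  // placeholder for your read logic
            }
            Thread.Sleep(100);  // poll interval
        }
    }

    // Second thread: the connectivity-issue simulation. While it holds
    // the lock, the reader blocks at lock(portLock) and then simply
    // carries on once the simulation releases it; no Abort is needed.
    public void SimulatorRun()
    {
        lock (portLock)
        {
            Console.WriteLine("Simulating connectivity issue...");  // placeholder for your test logic
            Thread.Sleep(2000);  // pretend the test takes a while
        }
    }

    public void Stop() { running = false; }
}

The key point is that neither thread is ever aborted: the first thread just blocks briefly on the lock while the second one owns the port, and resumes by itself.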
I see the following in Joseph Albahari's Threading book (http://www.albahari.com/threading/)
Thread.Sleep(0) relinquishes the thread's current time slice immediately, voluntarily handing over the CPU to other threads. Framework 4.0's new Thread.Yield() method does the same thing — except that it relinquishes only to threads running on the same processor.
Does the context switch happen to some other thread within the same process, or to any thread that is waiting for the CPU?
If it's the latter, is there any way to force a context switch to some other waiting thread in the same process?
I understand that thread scheduling is handled by the operating system, but I got stuck with a problem caused by Thread.Sleep(0) and am trying to find a solution for it.
Editing for more clarity about the problem:
The software has two threads (say A and B). A waits up to 20 milliseconds for a signal from B and then proceeds regardless of the signal. A sets its own signal and calls Thread.Sleep(0) to let the processor continue with B, as the software is a time-critical application where every second matters. For about a second, neither A nor B continued or was restored (we know this from the logs). We think some other process on the same processor got the CPU time slice, and we are now looking for alternatives.
The Thread.Yield method will switch to any thread that is ready to run on the current processor. It doesn't make any distinction about which process that thread exists in.
There is no way to yield only to another thread in the same process, even by P/Invoke. Windows simply doesn't support it.
An alternative would be to use some kind of co-operative multitasking, such as TPL and async/await. When you await something, such as the awaitable object returned by Task.Yield(), it enables another task queued with the scheduler to start up. It's also quite a bit more efficient than using Thread.Yield(), but if you're not using it yet this will likely require a large overhaul of your app.
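As a rough sketch of what that cooperative style looks like (the worker bodies are placeholders):

using System;
using System.Threading.Tasks;

class YieldDemo
{
    static async Task WorkerA()
    {
        Console.WriteLine("A: before yield");
        // Awaiting Task.Yield() frees the current thread immediately;
        // the rest of this method runs later as a queued continuation,
        // letting WorkerB (or anything else queued) make progress.
        await Task.Yield();
        Console.WriteLine("A: after yield");
    }

    static async Task WorkerB()
    {
        Console.WriteLine("B: running");
        await Task.Delay(10);  // non-blocking wait; the thread is released
        Console.WriteLine("B: done");
    }

    static void Main()
    {
        Task.WaitAll(WorkerA(), WorkerB());
    }
}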
Thread.Yield() will just allow the scheduler to choose another thread within the same process that is ready to run, and resume it at whatever point it was stopped. It has nothing to do with time-slicing among processes, which is a completely different thing. (And rarely of concern unless you're programming the other process(es) as well.)
Note that Yield() may have no effect at all if the current thread is the only one able to run; it will just return (more or less immediately) from the Yield() call.
Your question about "context switching to another thread in the same process" is a bit misguided. You shouldn't think in those terms. If you need to wait for another thread to finish, use Join. If you need to signal to another thread that it should stop waiting and do something, there are a variety of mechanisms to use for that.
In short, your problem will get worse if you're trying to "outguess" the thread scheduler.
Perhaps you should be more explicit about the problem you're actually having.
Thread is a wrapper around an OS thread. Because of this, scheduling of threads is performed by the OS kernel, and Yield is just a way to tell the kernel that you want to relinquish the CPU but still stay runnable (not blocked). The kernel treats your request as a good point to perform rescheduling and gives the CPU to some other waiting thread. The OS is free to give the CPU to any waiting thread in the run queue, regardless of the process it belongs to. There is no way to affect the scheduler's decision unless it is your own scheduler and you are using so-called green threads and cooperative multitasking.
In regard to your problem: you need to use explicit synchronization if you want to achieve guaranteed results.
Yielding is the wrong approach because it doesn't provide any guarantees.
There are a bunch of issues that can arise from its use.
For example, your thread B may simply not have enough time to finish its work and signal A before A is scheduled again; A can be scheduled onto another CPU core immediately after the Yield; A can even be rescheduled again before B gets a chance to execute. Finally, another application can take the CPU. If you really care about timing, raise the priorities of both threads, but synchronize them explicitly.
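To make "synchronize them explicitly" concrete for the A/B scenario described in the question, here is a minimal sketch using an AutoResetEvent with a 20 ms timeout (the thread bodies are placeholders):

using System;
using System.Threading;

class SignalDemo
{
    static readonly AutoResetEvent signal = new AutoResetEvent(false);

    static void ThreadB()
    {
        // ... do the work A is waiting for ...
        signal.Set();  // wakes A immediately, no scheduler guesswork
    }

    static void ThreadA()
    {
        // Wait up to 20 ms for B's signal, then proceed regardless.
        // Unlike Sleep(0)/Yield, this blocks A so B gets CPU time,
        // and it returns the instant B signals.
        bool signaled = signal.WaitOne(20);
        Console.WriteLine(signaled ? "Got B's signal" : "Timed out, proceeding");
    }

    static void Main()
    {
        var a = new Thread(ThreadA);
        var b = new Thread(ThreadB);
        a.Start();
        b.Start();
        a.Join();
        b.Join();
    }
}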
I am coding an application that runs many threads in the background, which have to report back to the main thread so it can update a table in the interface. In the past, the worker threads were ordinary separate classes (named Citizen) which I ran from the main thread using something like
new Thread(new ThreadStart(citizen.ProcessActions)).Start();
where the ProcessActions function was the main function which did all the background work. Before actually starting the thread, I would register event handlers so the Citizen threads could log/report some stuff to the interface. Usually there are tens of these Citizen threads (around 50), and they're pretty big classes - each has its own HTTP client and browses the web.
Is this a good way to manage threads? Probably not, to be frank; I'm pretty sure the threads aren't exiting gracefully - once the ProcessActions function is done, I remove the event handlers and that's it - yet the memory usage keeps rising with each new Citizen started.
What would be the best way to manage many (50+) threads with which you have to communicate often? I believe I wouldn't have to worry much about thread safety for Citizen variables, as I wouldn't be accessing them from any thread but their own.
I think what you're looking for is a thread pool. There's an MSDN article on them, and they should be available in C# 4.0.
The idea would be to create a thread pool, set its thread count to some high number (say 50), and then start assigning threads to tasks. If the pool needs to expand, it can, but by declaring a high number up front you get all the expensive thread creation out of the way.
It might be beneficial to 'queue' tasks that you want to get done, and assign those tasks as threads become available.
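As a rough sketch of that pattern with the built-in pool (the numbers and the work inside the lambda are illustrative stand-ins for citizen.ProcessActions):

using System;
using System.Threading;

class CitizenRunner
{
    static void Main()
    {
        // Pre-create pool threads up front so the first burst of work
        // doesn't pay the thread-creation cost (numbers are illustrative).
        ThreadPool.SetMinThreads(50, 50);

        using (var done = new CountdownEvent(50))
        {
            for (int i = 0; i < 50; i++)
            {
                int id = i;
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // Stand-in for citizen.ProcessActions().
                    Console.WriteLine("Citizen {0} processing", id);
                    done.Signal();
                });
            }
            done.Wait();  // block until all queued work items finish
        }
    }
}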
Also, memory leaks can be hard to find, but I would start by testing the simple case: take out all threads (just run one Citizen after another from the main thread) and let it run for a long time. If it's still leaking memory, your thread management isn't the issue.
I am writing a server application which processes request from multiple clients. For the processing of requests I am using the threadpool.
Some of these requests modify a database record, and I want to restrict the access to that specific record to one threadpool thread at a time. For this I am using named semaphores (other processes are also accessing these records).
For each new request that wants to modify a record, the thread should wait in line for its turn.
And this is where the question comes in:
As I don't want the thread pool to fill up with threads waiting for access to a record, I found the RegisterWaitForSingleObject method on the ThreadPool class.
But when I read the documentation (MSDN) under the section Remarks:
New wait threads are created automatically when required. ...
Does this mean that the threadpool will fill up with wait-threads? And how does this affect the performance of the threadpool?
Any other suggestions to boost performance is more than welcome!
Thanks!
Your solution is a viable option. In the absence of more specific details I do not think I can offer other tangible options. However, let me try to illustrate why I think your current solution is, at the very least, based on sound theory.
Let's say you have 64 requests that come in simultaneously. It is reasonable to assume that the thread pool could dispatch each one of those requests to a thread immediately, so you might have 64 threads that immediately begin processing. Now let's assume that the mutex has already been acquired by another thread and is held for a really long time. Those 64 threads will be blocked waiting for the thread that currently owns the mutex to release it, which means those 64 threads are wasted doing nothing.
On the other hand, if you choose to use RegisterWaitForSingleObject as opposed to a blocking call to wait for the mutex to be released, then you can immediately release those 64 waiting threads (work items) and allow them to be put back into the pool. If I were to implement my own version of RegisterWaitForSingleObject, I would use the WaitHandle.WaitAny method, which allows me to specify up to 64 handles (I did not randomly choose 64 for the number of requests, after all) in a single blocking method call. I am not saying it would be easy, but I could replace my 64 waiting threads with only a single thread from the pool. I do not know how Microsoft implemented the RegisterWaitForSingleObject method, but I am guessing they did it in a manner that is at least as efficient as my strategy. To put it another way, you should be able to reduce the number of pending work items in the thread pool by at least a factor of 64 by using RegisterWaitForSingleObject.
So you see, your solution is based on sound theory. I am not saying that your solution is optimal, but I do believe your concern is unwarranted in regard to the specific question asked.
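To make the mechanics concrete, here is a minimal sketch of registering a wait on the record's semaphore instead of blocking a pool thread on it; the semaphore name and callback body are illustrative:

using System;
using System.Threading;

class RegisteredWaitDemo
{
    static void Main()
    {
        // Named semaphore guarding one database record, as in the question.
        var recordLock = new Semaphore(1, 1, "Global\\Record42");

        // Instead of blocking a pool thread on recordLock.WaitOne(), we
        // register a callback: the pool's shared wait thread watches the
        // handle and only dispatches a worker once it is signaled.
        ThreadPool.RegisterWaitForSingleObject(
            recordLock,
            (state, timedOut) =>
            {
                if (timedOut) return;
                try
                {
                    Console.WriteLine("Modifying record...");  // placeholder
                }
                finally
                {
                    recordLock.Release();  // let the next waiter in
                }
            },
            null,
            Timeout.Infinite,  // wait as long as it takes
            true);             // run the callback only once

        Console.ReadLine();  // keep the demo process alive
    }
}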
IMHO you should let the database do its own synchronization. All you need to do is ensure that you're synchronized within your process.
The Interlocked class might be a premature optimization that is too complex to apply here. I would recommend using higher-level sync objects, such as ReaderWriterLockSlim. Or better yet, a Monitor.
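For instance, a minimal in-process sketch with ReaderWriterLockSlim; the record field is just a stand-in for whatever per-record state you guard:

using System;
using System.Threading;

class RecordCache
{
    private readonly ReaderWriterLockSlim rw = new ReaderWriterLockSlim();
    private string record = "initial";

    // Many threads may read concurrently...
    public string Read()
    {
        rw.EnterReadLock();
        try { return record; }
        finally { rw.ExitReadLock(); }
    }

    // ...but a writer gets exclusive access.
    public void Write(string value)
    {
        rw.EnterWriteLock();
        try { record = value; }
        finally { rw.ExitWriteLock(); }
    }
}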
An approach to this problem that I've used before is to have the first thread that gets one of these work items be responsible for any other ones that arrive while it's processing. This is done by queueing the work items, then dropping into a critical section to process the queue. Only the 'first' thread will enter the critical section; if a thread can't get it, it leaves and lets the thread already operating in the critical section handle the queued object.
It's really not very complicated - the only thing that might not be obvious is that, when leaving the critical section, the processing thread has to do it in a way that doesn't potentially leave a late-arriving work item on the queue. Basically, the 'processing' critical-section lock has to be released while still holding the queue lock. If not for this one requirement, a synchronized queue would be sufficient, and the code would be really simple!
Pseudo code:
// `workItem` is an object that contains the database modification request.
//
// `queue` is a Queue<WorkItem> that holds these work item requests.
//
// `processingLock` is a plain object used as a lock to indicate that
// a thread is currently processing the queue.
//
// Any number of threads can call this function, but only one will end
// up processing all the work items. The other threads simply drop
// their work item in the queue and leave.
void ThreadpoolHandleDatabaseUpdateRequest(WorkItem workItem)
{
    // Put the work item on the queue.
    Monitor.Enter(queue);
    queue.Enqueue(workItem);
    Monitor.Exit(queue);

    bool doProcessing = false;
    Monitor.TryEnter(processingLock, ref doProcessing);
    if (!doProcessing)
    {
        // Another thread has the processing lock; it'll
        // handle the work item we just queued.
        return;
    }

    for (;;)
    {
        Monitor.Enter(queue);
        if (queue.Count == 0)
        {
            // Done processing the queue. Release the locks in an
            // order that ensures a late-arriving work item won't
            // get stranded: drop the processing lock while still
            // holding the queue lock.
            Monitor.Exit(processingLock);
            Monitor.Exit(queue);
            break;
        }
        workItem = queue.Dequeue();
        Monitor.Exit(queue);

        // This acquires the database mutex, does the update, and
        // releases the database mutex.
        DoDatabaseModification(workItem);
    }
}
The ThreadPool creates one wait thread per ~64 registered waitable objects.
Good comments are here: Thread.sleep vs Monitor.Wait vs RegisteredWaitHandle?
I have a thread that I fire off every time the user scans a barcode.
Most of the time it is a fairly short-running thread, but sometimes it can take a very long time (waiting on an Invoke to the GUI thread).
I have read that it may be a good idea to use the ThreadPool for this rather than just creating my own thread for each scan.
But I have also read that if the ThreadPool runs out of threads then it will just wait until some other thread exits (not OK for what I am doing).
So, how likely is it that I am going to run out of threads? And is the benefit of the ThreadPool really worth it? (When I scan it does not seem to take too long for the scan to "run" the thread logic.)
It depends on what you mean by "a very long time" and how common that scenario is.
The MSDN topic "The Managed Thread Pool" offers good guidelines for when not to use thread pool threads:
There are several scenarios in which it is appropriate to create and manage your own threads instead of using thread pool threads:
You require a foreground thread.
You require a thread to have a particular priority.
You have tasks that cause the thread to block for long periods of time. The thread pool has a maximum number of threads, so a large number of blocked thread pool threads might prevent tasks from starting.
You need to place threads into a single-threaded apartment. All ThreadPool threads are in the multithreaded apartment.
You need to have a stable identity associated with the thread, or to dedicate a thread to a task.
Since the user will never scan more than one barcode at a time, the memory costs of the threadpool might not be worth it - I'd stick with a single thread just waiting in the background.
The point of the thread pool is to amortize the cost of creating threads, which are not inexpensive to spin up and tear down. If you have a short-running task, the cost of creating/destroying the thread can be a significant portion of the overall run-time. The maximum number of threads in the thread pool depends on the version of the .NET Framework, typically dozens to hundreds per processor. The number of threads is scaled depending on available work.
Will you run out of threads and have to wait for a thread to become available? It depends on your workload. You can get the maximum number of threads available via ThreadPool.GetMaxThreads(). Chances are (based on the description of your problem) that this number is sufficiently high.
http://msdn.microsoft.com/en-us/library/system.threading.threadpool.getmaxthreads.aspx
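For reference, querying (and, if you really must, raising) that limit looks like this; the factor of 2 below is purely illustrative:

using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        int workers, io;
        ThreadPool.GetMaxThreads(out workers, out io);
        Console.WriteLine("Max: {0} worker threads, {1} I/O threads", workers, io);

        // Only raise the cap if measurement shows you actually hit it.
        ThreadPool.SetMaxThreads(workers * 2, io);
    }
}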
Another option would be to manage your own pool of scan threads and assign them work rather than creating a new thread for every scan. Personally I would try the threadpool first and only manage your own threads if it proved necessary. Even better, I would look into async programming techniques in .NET. The methods will be run on the thread pool, but give you a much nicer programming experience than manual thread management.
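For example, a rough sketch of the async approach (ProcessScan is a hypothetical stand-in for your scan logic):

using System;
using System.Threading.Tasks;

class ScanHandler
{
    // Called on each scan. Task.Run pushes the work onto a thread-pool
    // thread, and await frees the calling thread; when called from a UI
    // context, the continuation resumes there, so no manual Invoke is
    // needed to update the interface.
    public async void OnBarcodeScanned(string barcode)
    {
        string result = await Task.Run(() => ProcessScan(barcode));
        Console.WriteLine("Scan result: " + result);  // update the GUI here
    }

    private string ProcessScan(string barcode)
    {
        return barcode.ToUpperInvariant();  // placeholder work
    }
}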
If most of the time these are short-running threads, you could use the thread pool or a BackgroundWorker, which draws threads from the pool.
An advantage I can see in your case is that the ThreadPool class puts an upper limit on the number of threads that may be active. Whether you will exhaust system resources depends on the context of your application, but exhausting a modern desktop system is really VERY hard to do.
If the software is used in a supermarket till, it is highly unlikely that you will have more than 5 barcodes being analysed at the same time. If it's run on a back-end server for a whole row of supermarket tills, then perhaps 30-100 concurrent requests might be active.
With this sort of theorycrafting it is highly unlikely that you will run out of threads, even on embedded hardware. If you have a dozen or so requests active at a time and your code works, it's OK to just leave it as it is.
A thread pool is just an abstraction, though, and you could have a queue in the middle that queues requests onto the thread pool. In this scenario, for the row-of-tills example above, I'd feel comfortable queueing 100-1000 requests against a thread pool with 10 threads.
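A minimal sketch of that queue-in-the-middle arrangement, with a BlockingCollection fed by producers and drained by a fixed set of workers; the request type and counts are illustrative:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class TillServer
{
    static void Main()
    {
        var requests = new BlockingCollection<string>(1000);  // bounded queue

        // Ten long-running consumers drain the queue; requests beyond
        // that simply wait their turn instead of growing the thread count.
        var consumers = new Task[10];
        for (int i = 0; i < consumers.Length; i++)
        {
            consumers[i] = Task.Factory.StartNew(() =>
            {
                foreach (var barcode in requests.GetConsumingEnumerable())
                    Console.WriteLine("Analysed " + barcode);
            }, TaskCreationOptions.LongRunning);
        }

        // Producer side: tills add requests as they come in.
        for (int i = 0; i < 100; i++)
            requests.Add("barcode-" + i);

        requests.CompleteAdding();  // consumers drain the queue and exit
        Task.WaitAll(consumers);
    }
}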
In .net (and on windows in general), the question should always be reversed: "Is creating a new thread worth it in this scenario?"
Creating a new thread is expensive, and doing it over and over again is almost certainly not worth it. The thread pool is cheap, and really should be the first thing you turn to when you need a new thread.
If you decide to spin up a new thread, soon you will start worrying about re-using the thread if it's already running. Then you will start worrying that sometimes the thread is running but it seems to be taking too long, and so you should make a new one. Then you're going to decide to have a thread not exit immediately upon finishing work, but to wait a little while in case new work comes in. And then... bam! You've created your own thread pool. At which point you should just back up and use the system-provided one.
The folks who mentioned that the thread pool might "run out of threads" were well-intentioned, but they did you a disservice. The limit on the number of threads in the thread pool is quite large. If you run into it, you have other problems.
(And, of course, since .net 2.0, you can set the maximum number of threads, so you can tweak the number if you absolutely have to.)
Others have directed you to MSDN: "The Managed Thread Pool". I will repeat that direction, as the article is good, but in my mind does not sell the thread pool hard enough. :)
I have an application that uses a Mutex for cross-process synchronization of a block of code. This mechanism works great for the application's current needs. In the worst case I have noticed about 6 threads backed up on the Mutex. It takes about 2-3 seconds to execute the synchronized code block.
I just received a new requirement asking for a priority feature on the Mutex, such that occasionally some requests for the Mutex can be deemed more important than the rest. When one of these higher-priority threads comes in, the desired behavior is for the Mutex to grant acquisition to the higher-priority request instead of the lower ones.
So, is there any way to control the queue of threads blocked on the Mutex that Windows maintains? Should I consider using a different threading model?
Thanks,
Matt
Using just the Mutex, this will be a tough one to solve. I am sure someone out there is thinking about thread priorities, etc., but I would probably not go down that route.
One option would be to maintain a shared memory structure and implement a simple priority queue. The shared memory can use a MemoryMappedFile. When a process wants to execute the section of code, it puts a token with a priority on the priority queue; then, whenever a thread wakes up, it inspects the priority queue and checks the first token in the queue. If the token belongs to its process, it can dequeue the token and execute the code.
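A minimal sketch of the shared-memory side; the map name, mutex name, and single-slot 'queue' layout are all illustrative, and a real priority queue would need a proper serialized layout:

using System;
using System.IO.MemoryMappedFiles;
using System.Threading;

class SharedPriorityToken
{
    static void Main()
    {
        // Shared memory and a named mutex, both visible to every process.
        using (var mmf = MemoryMappedFile.CreateOrOpen("MyApp.PriorityQueue", 4096))
        using (var mutex = new Mutex(false, "MyApp.PriorityQueue.Lock"))
        using (var view = mmf.CreateViewAccessor())
        {
            // Guard every read/write of the shared structure with the mutex.
            mutex.WaitOne();
            try
            {
                // Toy layout: slot 0 holds the highest pending priority.
                int topPriority = view.ReadInt32(0);
                int myPriority = 10;
                if (myPriority > topPriority)
                    view.Write(0, myPriority);  // post our (higher) token
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}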
A Mutex isn't that great for a number of reasons, and as far as I know there is no way to promote one thread over another while they are running, nor a nice way to accommodate your requirement.
I just read Jeffrey Richter's "CLR via C#" (3rd edition), and there are a load of great thread-sync constructs in there, and lots of good threading advice generally.
I wish I could remember enough of it to answer your question, but I doubt I would get it across as well as he can. Check out his website: http://www.wintellect.com/ or search for some of his Concurrent Affairs articles; they will definitely help.
Give each thread an AutoResetEvent. Then, instead of waiting on a mutex, each thread adds its ARE to a sorted list. If there is only one ARE on the list, the thread fires its event; otherwise it waits for its ARE to fire. When a thread finishes processing, it removes its ARE from the list and fires the next one. Be sure to synchronize access to the list.
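A minimal in-process sketch of that scheme (the class shape and names are mine, and error handling is omitted):

using System;
using System.Collections.Generic;
using System.Threading;

class PrioritizedGate
{
    // One entry per waiting or running thread, sorted so the highest
    // priority sits at index 0. Every touch of the list is under lock.
    private readonly List<Tuple<int, AutoResetEvent>> waiters =
        new List<Tuple<int, AutoResetEvent>>();

    public AutoResetEvent Enter(int priority)
    {
        var are = new AutoResetEvent(false);
        lock (waiters)
        {
            waiters.Add(Tuple.Create(priority, are));
            waiters.Sort((a, b) => b.Item1.CompareTo(a.Item1));  // high first
            if (waiters.Count == 1)
                are.Set();  // nobody else is inside; go straight in
        }
        are.WaitOne();  // block until it's our turn
        return are;     // token identifying this thread's entry
    }

    public void Exit(AutoResetEvent token)
    {
        lock (waiters)
        {
            waiters.RemoveAll(w => w.Item2 == token);  // drop our own entry
            if (waiters.Count > 0)
                waiters[0].Item2.Set();  // wake the highest-priority waiter
        }
    }
}

Usage would be var token = gate.Enter(priority); ... gate.Exit(token); around the synchronized block. Note this only prioritizes threads within a single process.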