Simulate preemptive scheduling mechanisms in C#

I have researched this and realised that there is actually no way to implement real preemptive scheduling in C#, am I right?
So I have to simulate it using cooperative scheduling mechanisms.
Actually I have to make an API, so my idea is to use some token, and if that token is set, then I have to stop the currently running Task, run the Task with the higher priority, and then continue the stopped Task.
So here is my question: I cannot find a good way to stop a Task and continue it later, so is there any way to do that? One idea I have is to do nothing in a while loop until the token is set, so it will look as if I stopped that Task and continued it later (this is just a simulation), but I hope there is a better way.
I have also found the EventWaitHandle class, but I am not sure whether it lets me manage individual Threads or Tasks.
Actually, I have to make my own scheduler (of course nothing really sophisticated, just a simulation). I give it Tasks with priorities and the scheduler has to schedule them, but if at some moment I add one more Task with a higher priority, I have to stop the current Task, run the higher-priority Task, and when it finishes, continue the stopped Task. So I would like to know whether there is any good way to suspend a Task (but without using the Suspend method, because it can cause deadlocks and is not recommended).
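One way to get the stop-and-resume behaviour without Suspend is to give each simulated task a pause gate that it checks between small units of work, using ManualResetEventSlim. A minimal sketch of that idea, with made-up class and member names:

using System;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: a work item that can be "preempted" cooperatively.
// The gate starts open; Pause() closes it, Resume() opens it again.
class PausableWork
{
    private readonly ManualResetEventSlim gate = new ManualResetEventSlim(true);

    public void Pause()  => gate.Reset();   // the task will block at its next checkpoint
    public void Resume() => gate.Set();     // the task continues from where it stopped

    public Task RunAsync(string name) => Task.Run(() =>
    {
        for (int step = 0; step < 50; step++)
        {
            gate.Wait();        // cooperative "preemption point"
            Thread.Sleep(20);   // simulated unit of work
        }
        Console.WriteLine(name + " finished");
    });
}

A toy scheduler could then keep these objects in a priority queue, call Pause() on the running one when a higher-priority item arrives, run the new item to completion, and call Resume() afterwards.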

Related

Is it good practice to always wait on a task to complete?

Sorry if it is a dumb question. I'm confused about Wait() and its variants with regard to the Task Parallel Library.
Every single example I've seen waits on tasks to complete - is this considered good practice?
My scenario is this: I'm developing a Windows service that will run continuously. I would like to start a number of tasks, but I don't care whether they run to completion - I will set a cancellation token with an expiration that will throw an error if something goes awry. So I don't see the need to wait for completion, but every darn example uses it...
It really depends on what your situation needs. If, for instance, you want to launch a sub-process to do something, say fire off an email in parallel, you can do so without waiting.
However, if you need to act upon the result, or on some structure that is affected by the task's behaviour, you will need to wait.
If your tasks are self contained and do not interact and/or depend on each other, then I do not see why you would need to wait.
You only need to wait on a task if the code that is waiting requires the output of the task before it can proceed. If you don't need that output, don't wait.
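For the service scenario in the question, a fire-and-forget shape with a token that expires on its own might look roughly like the sketch below; the class and method names are placeholders.

using System;
using System.Threading;
using System.Threading.Tasks;

class ServiceWorker
{
    // Fire-and-forget: nothing waits on the returned task.
    public void StartBackgroundJob()
    {
        // The token cancels itself after 5 minutes, so runaway work is bounded.
        var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5));

        Task.Run(async () =>
        {
            while (!cts.Token.IsCancellationRequested)
            {
                await DoOneUnitOfWorkAsync(cts.Token);
            }
        }, cts.Token)
        .ContinueWith(t => Console.Error.WriteLine(t.Exception),   // observe faults so they aren't silently lost
                      TaskContinuationOptions.OnlyOnFaulted);
    }

    // Placeholder for the real work.
    private Task DoOneUnitOfWorkAsync(CancellationToken token) => Task.Delay(1000, token);
}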

Does Thread.Yield let the CPU context switch to another thread in the same process, or to any thread on the same processor?

I see the following in Joseph Albahari's Threading book (http://www.albahari.com/threading/):
Thread.Sleep(0) relinquishes the thread’s current time slice immediately, voluntarily handing over the CPU to other threads. Framework 4.0’s new Thread.Yield() method does the same thing — except that it relinquishes only to threads running on the same processor.
Does the context switch happen to some other thread within the same process, or to any of the threads that are waiting to get the CPU?
If the answer is the latter, is there any way to context switch to some other thread in the same process that is in a wait state?
I understand that thread scheduling is taken care of by the operating system, but I got stuck with a problem because of Thread.Sleep(0) and am trying to find a solution for it.
Editing for more clarity about the problem:
The software has two threads (say A and B), and A will wait for a signal from B for 20 milliseconds and proceed regardless of whether it arrives. A sets the signal, and to let the processor continue with B, Thread.Sleep(0) is applied, since the software is a time-critical application where every second matters. For about a second neither A nor B continued (we know this from the logs). We suspect some other process on the same processor got the CPU time slice, and we are now looking for alternatives.
The Thread.Yield method will switch to any thread that is ready to run on the current processor. It doesn't make any distinction about which process that thread exists in.
There is no way to yield specifically to another thread in the same process, even by P/Invoke. Windows simply doesn't support it.
An alternative would be to use some kind of co-operative multitasking, such as TPL and async/await. When you await something, such as the awaitable object returned by Task.Yield(), it enables another task queued with the scheduler to start up. It's also quite a bit more efficient than using Thread.Yield(), but if you're not using it yet this will likely require a large overhaul of your app.
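As a rough illustration of that cooperative route, awaiting Task.Yield() gives other work queued on the scheduler a chance to run before the method continues (sketch only, names invented):

using System.Threading.Tasks;

class YieldDemo
{
    // A long-running loop that periodically gives other queued tasks a turn.
    static async Task BusyWorkAsync()
    {
        for (int i = 0; i < 1000; i++)
        {
            DoOneStep(i);         // placeholder for the real work
            await Task.Yield();   // lets other work queued on the scheduler run
        }
    }

    static void DoOneStep(int i) { /* ... */ }
}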
Thread.Yield() will just allow the scheduler to choose another thread within the same process that is ready to run, and resume it at whatever point it was stopped. It has nothing to do with time-slicing among processes, which is a completely different thing. (And rarely of concern unless you're programming the other process(es) as well.)
Note that the Yield() may have no effect at all, if the current thread is the only one able to run. It will just return (relatively immediately) from the Yield() call.
Your question about "context switching to another thread in the same process" is a bit misguided. You shouldn't think in those terms. If you need to wait for another thread to finish, use Join. If you need to signal another thread that it should stop waiting and do something, there are a variety of mechanisms for that.
In short, your problem will get worse if you're trying to "outguess" the thread scheduler.
Perhaps you should be more explicit about the problem you're actually having.
Thread is a wrapper around OS threads. Because of this, scheduling of Threads is performed by the OS kernel, and Yield is just a way to tell the kernel that you want to relinquish the CPU but still stay runnable (unblocked). The kernel will treat your request as a good point to perform a reschedule and give the CPU to some other waiting thread. The OS is free to give the CPU to any waiting thread from the run queue, regardless of the process it belongs to. There is no way to influence the scheduler's decision unless it is your own scheduler and you use so-called green threads and cooperative multitasking.
In regard to your problem: you need to use explicit synchronization if you want to achieve guaranteed results.
Yielding is the wrong way because it doesn't provide any guarantees.
There are a bunch of issues that can appear from its use.
For example, your thread B may simply not have enough time to finish its work and signal A before A is scheduled again; A can be scheduled onto another CPU core immediately after the Yield; A can even be rescheduled again before B gets a chance to execute. Finally, another application can take the CPU. If you really care about timing, raise the priorities of both threads, but synchronize them explicitly.
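To make the explicit synchronization concrete for the A/B scenario, a timed wait on an event, rather than Thread.Sleep(0), could look like the sketch below. The 20 ms figure comes from the question; everything else is illustrative:

using System;
using System.Threading;

class SignalDemo
{
    // B sets this event when its work is done; A waits on it with a timeout.
    static readonly AutoResetEvent bFinished = new AutoResetEvent(false);

    static void ThreadA()
    {
        // Wait up to 20 ms for B's signal, then proceed either way.
        bool signalled = bFinished.WaitOne(TimeSpan.FromMilliseconds(20));
        Console.WriteLine(signalled ? "got B's signal" : "timed out, continuing anyway");
    }

    static void ThreadB()
    {
        // ... do the work A is waiting for ...
        bFinished.Set();   // wakes A immediately, no yield needed
    }
}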

The "bag of tasks" concept in C#, enqueue,pause,cancel logical tasks

The app I'm developing is composed this way:
A producer task scans the file system for text files and puts a reference to them in a bag.
Many consumer tasks take file refs from the bag concurrently and read the files (and do some short work with their content)
I must be able to pause and resume the whole process.
I've tried using the TPL, creating a task for every file ref as it is put in the bag (in this case the bag is just a concept; the producer directly creates the consumer tasks as it finds files), but this way I don't have control over the tasks I create - I can't (or I don't know how to) pause them. I could write some code to suspend the thread currently executing a task, but that would defeat the point of working with logical tasks instead of manually creating threads, wouldn't it? I would want something like "tasks already assigned to a physical thread can complete, but waiting logical tasks should not start until the resume command".
How can I achieve this? Can it be done with the TPL or should I use something else?
EDIT:
Your answers are all valid, but my main doubt remains unanswered. We are talking about tasks: if I use the TPL, my producer and my many consumers will be tasks (right?), not threads (well, OK, at the moment of execution tasks will be mapped onto threads). Every synchronization mechanism I've found (like the one proposed in the comment, "ManualResetEventSlim") works at the thread level.
E.g. the description of the Wait() method of "ManualResetEventSlim" is "Blocks the current thread until the current ManualResetEventSlim is set."
My knowledge of tasks is purely academic; I don't know how things work in the "real world", but it seems logical to me that I need a way to coordinate (wait/signal/...) tasks at the task level, or things could get weird... for example, two tasks may be mapped onto the same thread, but one is supposed to signal the other, which is waiting, and then we deadlock. I'm a bit confused. This is why I asked whether my app can use the TPL instead of old-style simple threads.
Yes, you can do that. First, you have a main thread, your application. There you have two workers, represented by threads. The first worker would be a producer and the second worker would be a consumer.
When your application starts, you start the workers. Both of them operate on the concurrent collection, the bag. The producer searches for files and puts references into the bag, and the consumer takes references from the bag and starts a task per reference.
When you want to signal a pause, simply pause the producer. If you do that, the consumer also stops working once there is nothing left in the bag. If this is not the desired behaviour, you can simply define that pausing the producer also clears the bag - back your bag up first and then clear it. This way all running tasks will finish their job and the consumer will not start new tasks, but it can still run and wait for results.
EDIT:
Based on your edit: I don't know how to achieve it exactly the way you want, but although it is nice to try new technologies, don't let your mind be clouded by them. Using the ThreadPool is also a fine approach. It will take more time to start the application, but once it is running, consuming will be faster, because you already have workers ready.
It is not a bad idea; you can specify a maximum number of workers. If you create a task for every item in the bag, it will consume more memory, because you will keep allocating and releasing memory. This will not happen with the ThreadPool.
Sure, you can use the TPL for this. And maybe also Reactive Extensions and LINQ to simplify grouping and pausing/resuming the work.
If you have just a short job for each file, it is a pretty good idea not to disturb the handler function with cancellations. You can just suspend queueing the workers instead.
I imagine something like this:
Your directory scanner thread puts the found files into an observable collection.
The consumer thread subscribes to the collection changes and gets/removes the files and assigns them to workers.
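A rough shape of the pause-gate idea, using BlockingCollection rather than Rx just to keep the sketch self-contained; all names are illustrative:

using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class FilePipeline
{
    private readonly BlockingCollection<string> bag = new BlockingCollection<string>();
    private readonly ManualResetEventSlim pauseGate = new ManualResetEventSlim(true);

    public void Pause()  => pauseGate.Reset();   // running items finish; no new ones start
    public void Resume() => pauseGate.Set();

    public void AddFile(string path) => bag.Add(path);        // called by the producer
    public void CompleteAdding()     => bag.CompleteAdding(); // producer is done scanning

    public Task StartConsumers(int count) =>
        Task.WhenAll(Enumerable.Range(0, count).Select(_ => Task.Run(() =>
        {
            foreach (var file in bag.GetConsumingEnumerable())
            {
                pauseGate.Wait();    // checked before each file, not in the middle of one
                ProcessFile(file);   // the "short work" on the file's content
            }
        })));

    private void ProcessFile(string path) { /* read the file and do the short work */ }
}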

Stopping all thread in .NET ThreadPool?

I am using the ThreadPool in .NET to make some web requests in the background, and I want to have a "Stop" button that cancels all the threads even if they are in the middle of making a request, so a simple bool won't do the job.
How can I do that?
Your situation is pretty much the canonical use-case for the Cancellation model in the .NET framework.
The idea is that you create a CancellationToken object and make it available to the operation that you might want to cancel. Your operation occasionally checks the token's IsCancellationRequested property, or calls ThrowIfCancellationRequested.
You can create a CancellationToken, and request cancellation through it, by using the CancellationTokenSource class.
This cancellation model integrates nicely with the .NET Task Parallel Library, and is pretty lightweight, more so than using system objects such as ManualResetEvent (though that is a perfectly valid solution too).
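As a sketch of that cancellation model for the web-request case (the class, fields, and response handling are placeholders), the same token is passed both to the queued work and to the request itself, so an in-flight call is aborted as well:

using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

class Downloader
{
    private readonly CancellationTokenSource cts = new CancellationTokenSource();
    private readonly HttpClient client = new HttpClient();

    public void QueueRequest(string url)
    {
        Task.Run(async () =>
        {
            cts.Token.ThrowIfCancellationRequested();              // work queued after Stop exits right away
            var response = await client.GetAsync(url, cts.Token);  // an in-flight request is aborted on cancel
            // ... handle the response ...
        }, cts.Token);
    }

    public void Stop() => cts.Cancel();   // wired to the "Stop" button
}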
The correct way to handle this is to have a flag object that you signal.
The code running in those threads needs to check that flag periodically to see if it should exit.
For instance, a ManualResetEvent object is suitable for this.
You could then ask the threads to exit like this:
evt.Set();
and inside the threads you would check for it like this:
if (evt.WaitOne(0))
return; // or otherwise exit the thread
Secondly, since you're using the thread pool, what happens is that all the items you've queued up will still be processed, but if you add the if-statement above to the very start of the thread method, it will exit immediately. If that is not good enough you should build your own system using normal threads, that way you have complete control.
Oh, and just to make sure, do not use Thread.Abort. Ask the threads to exit nicely, do not outright kill them.
If you are going to stop/cancel something processing in another thread, the ThreadPool is not the best choice; you should use Thread instead and manage all of them in a container (e.g. a global List<Thread>). That guarantees you have full control over all the threads.
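If you do go the manual-thread route, a minimal shape of that container approach, using the same flag pattern described above and invented names, could be:

using System.Collections.Generic;
using System.Threading;

class ThreadManager
{
    private readonly List<Thread> threads = new List<Thread>();
    private readonly ManualResetEvent stopRequested = new ManualResetEvent(false);

    public void Start(int count)
    {
        for (int i = 0; i < count; i++)
        {
            var t = new Thread(Worker) { IsBackground = true };
            threads.Add(t);
            t.Start();
        }
    }

    public void StopAll()
    {
        stopRequested.Set();                    // ask every worker to exit
        foreach (var t in threads) t.Join();    // wait until they actually have
    }

    private void Worker()
    {
        while (!stopRequested.WaitOne(0))       // the periodic check mentioned above
        {
            // ... do one unit of work ...
        }
    }
}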

How to know that a thread in a Thread Pool hangs/freezes

I have a queue of tasks for the ThreadPool, and each task has a tendency to freeze, locking up all the resources it is using. These can't be released unless the service is restarted.
Is there a way in the ThreadPool to know that one of its threads is frozen? I have an idea of using a timeout (though I still don't know how to write it), but I think it's not safe because the length of time for processing is not uniform.
I don't want to be too presumptuous here, but a good dose of actually finding out what the problem is and fixing it is the best course with deadlocks.
Run a debug version of your service and wait until it deadlocks. It will stay deadlocked as this is a wonderful property of deadlocks.
Attach the Visual Studio debugger to the service.
"Break All".
Bring up your threads windows, and start spelunking...
Unless you have a sound architecture/design/reason to choose victims in the first place, don't do it - period. It's pretty much a recipe for disaster to arbitrarily bash threads over the head when they're in the middle of something.
(This is perhaps a bit low-level, but at least it is a simple solution. As I don't know C#'s API, this is a general solution for any language using thread pools.)
Insert a watchdog task after each real task that updates a time value with the current time. If that value is older than your maximum task run time (say 10 seconds), you know that something is stuck.
Instead of setting a time and polling it, you could continuously set and reset some timers 10 secs into the future. When it triggers, a task has hung.
The best way is probably to wrap each task in a "Watchdog" Task class that does this automatically. That way, upon completion, you'd clear the timer, and you could also set a per-task timeout, which might be useful.
You obviously need one time/timer object for each thread in the threadpool, but that's solvable via thread-local variables.
Note that this solution does not require you to modify your tasks' code. It only modifies the code putting tasks into the pool.
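A small sketch of that wrapper idea (the 5 and 10 second figures and all names are only examples): each wrapped work item records when it started, and a timer flags anything that has been running longer than the limit.

using System;
using System.Collections.Concurrent;
using System.Threading;

class TaskWatchdog
{
    // Start time of each currently running work item, keyed by an id.
    private readonly ConcurrentDictionary<int, DateTime> running = new ConcurrentDictionary<int, DateTime>();
    private readonly TimeSpan limit = TimeSpan.FromSeconds(10);   // example limit
    private readonly Timer timer;

    public TaskWatchdog()
    {
        // Poll every 5 seconds for work items that have exceeded the limit.
        timer = new Timer(state =>
        {
            foreach (var entry in running)
                if (DateTime.UtcNow - entry.Value > limit)
                    Console.WriteLine("work item " + entry.Key + " appears to be stuck");
        }, null, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
    }

    // Wrap the real work so start/finish bookkeeping happens automatically.
    public WaitCallback Wrap(int id, Action work) => state =>
    {
        running[id] = DateTime.UtcNow;
        try { work(); }
        finally { running.TryRemove(id, out _); }
    };
}

Queueing would then look like ThreadPool.QueueUserWorkItem(watchdog.Wrap(1, () => RealWork())), where RealWork stands in for the actual task body.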
One way is to use a watchdog timer (a solution usually done in hardware but applicable to software as well).
Have each thread set a thread-specific value to 1 at least once every five seconds (for example).
Then your watchdog timer wakes every ten seconds (again, this is an example figure only) and checks to ensure that all the values are 1. If they're not 1, then a thread has locked up.
The watchdog timer then sets them all to 0 and goes back to sleep for the next cycle.
Provided your worker threads are written in such a way that they can set the values in a timely manner under non-frozen conditions, this scheme will work okay.
The first thread that locks up will not set its value to 1, and this will be detected by the watchdog timer on the next cycle.
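In code, that heartbeat scheme might be shaped roughly like this; the 5 and 10 second intervals are the example figures above and the rest is illustrative:

using System;
using System.Threading;

class Heartbeats
{
    // One slot per worker; workers write 1, the watchdog reads and clears.
    private readonly int[] alive;
    private readonly Timer watchdog;

    public Heartbeats(int workerCount)
    {
        alive = new int[workerCount];
        // Wake every 10 seconds and see which workers reported in.
        watchdog = new Timer(_ =>
        {
            for (int i = 0; i < alive.Length; i++)
            {
                if (Interlocked.Exchange(ref alive[i], 0) == 0)
                    Console.WriteLine("worker " + i + " has not reported; it may be hung");
            }
        }, null, TimeSpan.FromSeconds(10), TimeSpan.FromSeconds(10));
    }

    // Each worker calls this at least once every 5 seconds from its main loop.
    public void Beat(int workerIndex) => Interlocked.Exchange(ref alive[workerIndex], 1);
}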
However, a better solution is to find out why the threads are freezing in the first place and fix that.
