I am trying to understand a section of code written in C#:
static Semaphore _transactionReceived;
static TransactionsSession _transactionsSession;
static void StartTransactionsStream()
{
WriteNewLine("Starting transactions stream ...");
_transactionsSession = new TransactionsSession(AccountID);
_transactionReceived = new Semaphore(0, 100);
_transactionsSession.DataReceived += OnTransactionReceived;
_transactionsSession.StartSession();
bool success = _transactionReceived.WaitOne(10000);
if (success)
WriteNewLine("Good news!. Transactions stream is functioning.");
else
WriteNewLine("Bad news!. Transactions stream is not functioning.");
}
but I am having trouble understanding what is happening in the code, particularly with regard to the Semaphore class. Specifically, what are the following lines doing?
_transactionReceived = new Semaphore(0, 100);
and
_transactionReceived.WaitOne(10000)
I have viewed and (re)viewed the System.Threading.Semaphore documentation, and I see that the constructor "Initializes a new instance of the Semaphore class, specifying the initial number of entries and the maximum number of concurrent entries." But what does it mean when there are 0 entries?
Additionally, I see that the WaitOne(Int32) call "Blocks the current thread until the current WaitHandle receives a signal, using a 32-bit signed integer to specify the time interval in milliseconds." But again, what does WaitOne mean in the context of the code cycle?
Any pointers or general comments about how this is executing would be helpful.
Many thanks!
But what does it mean when there are 0 entries?
Exactly that; the semaphore currently has no entries available, out of a maximum of 100.
If you had constructed the semaphore with new Semaphore(1, 100);, then there would have been 1 entry available, with room for another 99.
Each call to Semaphore.Release() adds one entry, so it would take 99 more releases to reach the maximum of 100 entries.
what does WaitOne mean in the context of the code cycle?
If the semaphore has entries available, i.e. its current count is greater than zero, WaitOne decrements the count and returns true immediately.
Otherwise it blocks the current thread until an entry becomes available (typically because another thread calls Semaphore.Release()), at which point the method returns true.
If you specify an int millisecondsTimeout, that is the maximum amount of time WaitOne will block waiting for an entry to be released.
If that timeout is exceeded, the method returns false.
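To make the mechanics concrete, here is a minimal, self-contained sketch (the names are made up for illustration): a semaphore that starts with 0 entries, a worker thread that releases it, and a timed wait just like the one in your code.
using System;
using System.Threading;
class SemaphoreDemo
{
    static void Main()
    {
        // Starts with 0 entries available; at most 100 can ever be available.
        var signal = new Semaphore(0, 100);
        // Simulate an event arriving 2 seconds later on another thread.
        new Thread(() =>
        {
            Thread.Sleep(2000);
            signal.Release(); // adds one entry back, waking one waiter
        }).Start();
        // Blocks until an entry is available or 10 seconds have passed.
        bool success = signal.WaitOne(10000);
        Console.WriteLine(success
            ? "Got a signal within 10 seconds."
            : "Timed out after 10 seconds.");
    }
}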
An explanation of semaphores
A Semaphore is a synchronization object that allows a limited degree of parallelism in a code section.
For the sake of simplicity, suppose you are instantiating a fresh new semaphore in a code block (no shared instance, global variable or other evil). Since multiple threads can execute the same piece of code at the same time, the semaphore guarantees that only x of them can execute the same block at the same time.
Think of a thread as a worker person. Not by coincidence, threads are often called worker threads.
But what does it mean when there are 0 entries?
The semaphore is in a red state, so no one can execute a particular code section until some thread unlocks the semaphore. For example, you could create a GUI where multiple threads race for the same action, and by the press of a button you unlock the semaphore and allow one thread to go.
But again, what does WaitOne that mean in the context of the code cycle?
It means that one of the following happens:
The semaphore is in a green state, i.e. it has permits available. The thread does not wait; the semaphore is decremented and the operation proceeds.
The semaphore is in a red state, i.e. it has no permits available. Then either WaitOne waits up to 10 seconds (10000 ms) and returns false because no permit became available in that time,
or someone else unlocks the semaphore within that window and the thread that invoked WaitOne is good to go.
About your code
There must be some other method that releases the semaphore, but it is not shown in the example. In fact, you have a red semaphore that you wait on, but apparently nobody to release it. I believe one of these two lines hides a Semaphore.Release call:
_transactionsSession.DataReceived += OnTransactionReceived;
_transactionsSession.StartSession();
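The question doesn't show OnTransactionReceived, but if that hypothesis is right, the handler presumably looks something along these lines (a sketch only; the real signature of the DataReceived event is not shown):
// Hypothetical sketch - the actual delegate signature is not in the question.
static void OnTransactionReceived(object data)
{
    // Each received transaction returns one entry to the semaphore,
    // so the WaitOne(10000) in StartTransactionsStream can succeed.
    _transactionReceived.Release();
}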
Related
I am trying to understand how Parallel.Invoke creates and reuses threads.
I ran the following example code (from MSDN, https://msdn.microsoft.com/en-us/library/dd642243(v=vs.110).aspx):
using System;
using System.Threading;
using System.Threading.Tasks;
class ThreadLocalDemo
{
static void Main()
{
// Thread-Local variable that yields a name for a thread
ThreadLocal<string> ThreadName = new ThreadLocal<string>(() =>
{
return "Thread" + Thread.CurrentThread.ManagedThreadId;
});
// Action that prints out ThreadName for the current thread
Action action = () =>
{
// If ThreadName.IsValueCreated is true, it means that we are not the
// first action to run on this thread.
bool repeat = ThreadName.IsValueCreated;
Console.WriteLine("ThreadName = {0} {1}", ThreadName.Value, repeat ? "(repeat)" : "");
};
// Launch eight of them. On 4 cores or less, you should see some repeat ThreadNames
Parallel.Invoke(action, action, action, action, action, action, action, action);
// Dispose when you are done
ThreadName.Dispose();
}
}
As I understand it, Parallel.Invoke tries to create 8 threads here - one for each action. So it creates the first thread, runs the first action, and by that gives a ThreadName to the thread. Then it creates the next thread (which gets a different ThreadName) and so on.
If it cannot create a new thread, it will reuse one of the threads created before. In this case, the value of repeat will be true and we can see this in the console output.
Is this correct until here?
The second-last comment ("Launch eight of them. On 4 cores or less, you should see some repeat ThreadNames") implies that the threads created by Invoke correspond to the available cpu threads of the processor: on 4 cores we have 8 cpu threads, at least one is busy (running the operating system and stuff), so Invoke can only use 7 different threads, so we must get at least one "repeat".
Is my interpretation of this comment correct?
I ran this code on my PC which has an Intel® Core™ i7-2860QM processor (i.e. 4 cores, 8 cpu threads). I expected to get at least one "repeat", but I didn't. When I changed the Invoke to take 10 instead of 8 actions, I got this output:
ThreadName = Thread6
ThreadName = Thread8
ThreadName = Thread6 (repeat)
ThreadName = Thread5
ThreadName = Thread3
ThreadName = Thread1
ThreadName = Thread10
ThreadName = Thread7
ThreadName = Thread4
ThreadName = Thread9
So I have at least 9 different threads in the console application. This contradicts the fact that my processor only has 8 threads.
So I guess some of my reasoning from above is wrong. Does Parallel.Invoke work differently than what I described above? If yes, how?
If you pass fewer than 10 items to Parallel.Invoke, and you don't specify MaxDegreeOfParallelism in the options (so: your case), it will just run them all in parallel on the thread pool scheduler, using roughly the following code:
var actions = new [] { action, action, action, action, action, action, action, action };
var tasks = new Task[actions.Length];
for (int index = 1; index < tasks.Length; ++index)
tasks[index] = Task.Factory.StartNew(actions[index]);
tasks[0] = new Task(actions[0]);
tasks[0].RunSynchronously();
Task.WaitAll(tasks);
So it is just a regular Task.Factory.StartNew. If you look at the maximum number of threads in the thread pool:
int th, io;
ThreadPool.GetMaxThreads(out th, out io);
Console.WriteLine(th);
You will see some big number, like 32767. So the number of threads on which Parallel.Invoke is executed (in your case) is not limited to the number of CPU cores at all; even on a 1-core CPU it might run 8 threads in parallel.
You might then ask why any threads are reused at all. Because when work on a thread pool thread is done, that thread is returned to the pool and is ready to accept new work. The actions in your example do essentially no work and complete very fast, so sometimes the first thread started via Task.Factory.StartNew has already completed your action and returned to the pool before all the subsequent threads were started. That thread is then reused.
By the way, you can see (repeat) in your example with 8 actions, and even with 7 if you try hard enough, on an 8-core (16 logical cores) processor.
UPDATE to answer your comment. The thread pool scheduler will not necessarily create new threads immediately. There is a minimum and a maximum number of threads in the thread pool. I showed above how to see the maximum; to see the minimum:
int th, io;
ThreadPool.GetMinThreads(out th, out io);
This number will usually be equal to the number of cores (so, for example, 8). Now, when you request a new action to be performed on a thread pool thread and the number of threads in the pool is below that minimum, a new thread is created immediately. However, once the pool already has at least the minimum number of threads, a certain delay is introduced before another new thread is created (unfortunately I don't remember exactly how long, about 500 ms).
I highly doubt that the statement you added in your comment takes 2-3 seconds to execute; for me it executes in 0.3 seconds at most. So when the first 8 threads have been created by the thread pool, there is that ~500 ms delay before the 9th is created. During that delay some (or all) of the first 8 threads have completed their job and become available for new work, so there is no need to create a new thread and they are reused.
To verify this, introduce a bigger delay:
// Requires: using System; using System.Linq; using System.Threading; using System.Threading.Tasks;
static void Main()
{
// Thread-Local variable that yields a name for a thread
ThreadLocal<string> ThreadName = new ThreadLocal<string>(() =>
{
return "Thread" + Thread.CurrentThread.ManagedThreadId;
});
// Action that prints out ThreadName for the current thread
Action action = () =>
{
// If ThreadName.IsValueCreated is true, it means that we are not the
// first action to run on this thread.
bool repeat = ThreadName.IsValueCreated;
Console.WriteLine("ThreadName = {0} {1}", ThreadName.Value, repeat ? "(repeat)" : "");
Thread.Sleep(1000000);
};
int th, io;
ThreadPool.GetMinThreads(out th, out io);
Console.WriteLine("cpu:" + Environment.ProcessorCount);
Console.WriteLine(th);
Parallel.Invoke(Enumerable.Repeat(action, 100).ToArray());
// Dispose when you are done
ThreadName.Dispose();
Console.ReadKey();
}
You will see that the thread pool now has to create new threads every time (many more than there are cores), because it cannot reuse previous threads while they are busy.
You can also increase the minimum number of threads in the thread pool, like this:
int th, io;
ThreadPool.GetMinThreads(out th, out io);
ThreadPool.SetMinThreads(100, io);
This removes the delay (until 100 threads have been created), and you will notice that in the example above.
Behind the scenes, threads are organized by (and belong to) the task scheduler. The primary purpose of the task scheduler is to keep all CPU cores busy with useful work as much as possible.
Under the hood, the scheduler uses the thread pool, and the size of the thread pool is the knob for fine-tuning how usefully operations execute on the CPU cores.
Now this requires some analysis. For instance, thread switching costs CPU cycles and is not useful work. On the other hand, while one thread executes one task on a core, all other tasks are stalled and make no progress on that core. I believe that is the core reason why the scheduler usually starts two threads per core, so that at least some movement is visible in case one task takes long to complete (like several seconds).
There are corollaries to this basic mechanism. When some tasks take a long time to complete, the scheduler starts new threads to compensate. That means a long-running task now has to compete for the core with short-running tasks. In that way, short tasks complete one after another, and the long task slowly progresses to its completion as well.
The bottom line is that your observations about threads are generally correct, but not entirely true in specific situations. In a concrete execution of a number of tasks, the scheduler might choose to spin up more threads, or to keep going with the default. That is why you will sometimes notice that the number of threads differs.
Remember the goal of the game: utilize the CPU cores with useful work as much as possible, while at the same time keeping all tasks moving so that the application doesn't look frozen. Historically, people tried to reach these goals with many different techniques. Analysis showed that many of those techniques were applied randomly and didn't really increase CPU utilization. That analysis led to the introduction of task schedulers in .NET, so that the fine-tuning can be coded once and done well.
So I have at least 9 different threads in the console application. This contradicts the fact that my processor only has 8 threads.
A thread is a very much overloaded term. It can mean, at the very least: (1) something you sew with, (2) a bunch of code with associated state, that is represented by an OS handle, and (3) an execution pipeline of a CPU. The Thread.CurrentThread refers to (2), the "processor thread" that you mentioned refers to (3).
The existence of a (2)-thread is not predicated on the existence of (3)-thread, and the number of (2)-threads that exist on any particular system is pretty much limited by available memory and OS design. The existence of (2)-thread doesn't imply execution of (2)-thread at any given time (unless you use APIs that guarantee that).
Furthermore, if a (2)-thread executes at some point - implying a temporary 1:1 binding between a (2)-thread and a (3)-thread - there is no implication that the thread will continue executing in general, and of course no implication that it will keep executing on the same (3)-thread if it continues executing at all.
So, even if you have "caught" the execution of a (2)-thread on a (3)-thread by some side effect, e.g. console output, as you did, that doesn't necessarily imply anything about any other (2)-threads and (3)-threads at that point.
On to your code:
// If ThreadName.IsValueCreated is true, it means that we are not the
// first action to run on this thread. <-- this refers to (2)-thread, NOT (3)-thread.
Parallel.Invoke is not precluded (in terms of its specification) from creating as many new (2)-threads as there are arguments passed to it. The actual number of (2)-threads created may be anywhere from zero on up: to call Parallel.Invoke there must already be an existing (2)-thread running the code that calls this API, so no new (2)-threads need to be created at all, for example. Whether the (2)-threads created by Parallel.Invoke execute on any particular number of (3)-threads concurrently is beyond your control as well.
So that explains the behavior you saw. You conflated (2)-threads with (3)-threads, and assumed that Parallel.Invoke does something specific that it is in fact not guaranteed to do. Citing the documentation:
No guarantees are made about the order in which the operations execute or whether they execute in parallel.
This implies that Invoke is free to run the actions on dedicated (2)-threads if it so wishes, and that is what you observed.
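If you want to see the distinction for yourself, here is a small sketch (my own, not from the question) that creates far more OS-level (2)-threads than the machine has (3)-threads:
using System;
using System.Threading;
class ManyThreadsDemo
{
    static void Main()
    {
        // Number of (3)-threads, i.e. logical processors.
        Console.WriteLine("(3)-threads: " + Environment.ProcessorCount);
        // Create 100 (2)-threads. All 100 exist at once, even on a
        // machine with only 4 or 8 (3)-threads.
        for (int i = 0; i < 100; i++)
            new Thread(() => Thread.Sleep(5000)).Start();
        Console.WriteLine("100 (2)-threads created and alive.");
    }
}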
I am making a library that allows access to a system-wide shared resource and would like a mutex-like lock on it. I have used the Mutex class in the past to synchronize operations in different threads or processes.
In UI applications a problem can occur. The library I'm making is used in multiple products, of which some are plugins that sit in the same host UI application. Because of this, the UI thread is the same for each instance of the library - so mutex.WaitOne() will return true even if the resource is already being accessed.
The 'resource' is the user's attention. I don't want more than one specific child window open regardless of which host process wants to open it. Additionally, it may be a different thread that knows when the mutex can be released (child window closed).
Is there a class, or pattern I can apply, that will allow me to easily solve this?
To summarize my intentions, this might be the ideal fictional class:
var specialMutex = new SpecialMutex("UserToastNotification");
specialMutex.WaitOne(0); // Returns true only once, even on the same thread,
// and is respected across different processes.
specialMutex.Release(); // Can be called from threads other than the one
// that called WaitOne();
Yes, Release looks dangerous, but it's only called by the resource.
I think you want a Semaphore with an initial count of 1. Any call to WaitOne() on a Semaphore tries to decrement the count, regardless of the thread. And any call to Release, regardless of the thread that calls it, increments the count.
So if a single thread initializes a semaphore with a count of 1 and then calls WaitOne, the count goes to 0. If that same thread calls WaitOne again on the same semaphore, the thread will block waiting for a release.
Some other thread could come along and call Release to increment the count.
So, whereas a Semaphore isn't exactly like a Mutex, it might be similar enough to let your program work.
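A sketch of that idea, assuming Windows, where a named semaphore is a kernel object shared across processes (the name below is made up to match the fictional example in the question):
using System.Threading;
// initialCount 1, maximumCount 1 -> effectively a binary, cross-process gate.
// The "Global\" prefix makes the semaphore visible across sessions on Windows.
var specialMutex = new Semaphore(1, 1, @"Global\UserToastNotification");
if (specialMutex.WaitOne(0)) // true only for the first acquirer, in any process
{
    try
    {
        // show the child window ...
    }
    finally
    {
        // Unlike Mutex, a Semaphore has no thread affinity, so this Release
        // could equally be issued by a different thread (e.g. when the
        // window's close event fires).
        specialMutex.Release();
    }
}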
You could use a compare/exchange operation to accomplish this. Something like this:
class Lock
{
    private int locked = 0; // 0 = free, 1 = held
    // Returns true for exactly one caller until Leave is called,
    // regardless of which thread calls it.
    public bool Enter() { return Interlocked.CompareExchange(ref locked, 1, 0) == 0; }
    // Resets the flag so Enter can succeed again; safe from any thread.
    public void Leave() { Interlocked.CompareExchange(ref locked, 0, 1); }
}
Here, Enter will only ever return true once, regardless of which thread calls it, until you call Leave.
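Usage might look like this (a sketch; note that, unlike a named kernel object, this only coordinates code within a single process):
var windowLock = new Lock();
if (windowLock.Enter()) // succeeds for exactly one caller
{
    try
    {
        // open the child window ...
    }
    finally
    {
        windowLock.Leave(); // may run on a different thread than Enter
    }
}
else
{
    // the window is already open; do nothing
}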
I'm writing an application that should simulate the behavior of a PLC. This means I have to run several threads, making sure only one thread at a time is active and all others are suspended.
For example:
thread 1 repeats every 130ms and blocks all other threads. The effective runtime is 30ms and the remaining 100ms before the thread restarts can be used by other threads.
thread 2 repeats every 300ms and blocks all threads except for thread 1. The effective runtime is 50ms (the remaining 250ms can be used by other threads). Thread 2 is paused while thread 1 is executing code, and once thread 1 goes back to sleep (its remaining 100ms) it resumes from where it was paused.
thread 3 repeats every 1000ms. The effective runtime is 100ms. This thread continues execution only if all other threads are suspended.
The highest priority is that each task completes before it is due to run again; otherwise I have to react. Therefore a thread that is supposed to be blocked must not keep running past a certain point: otherwise, on a multicore processor, it would execute its code anyway and merely wait to hand over its results.
I read several posts and learned that Thread.Suspend is not recommended, and that semaphore or monitor operations let code run up to a specific, fixed point in it, whereas I have to pause the threads exactly where execution has arrived when another thread (with higher "priority") is called.
I also looked at thread priority settings, but they don't seem 100% reliable, since the system can override priorities.
Is there a correct or at least solid way to code the blocking mechanism?
I don't think you need to burden yourself with Threads at all. Instead, you can use Tasks with a prioritised TaskScheduler (it's not too hard to write or find by googling).
This makes the code quite easy to write, for example the highest priority thread might be something like:
// (this loop lives inside an async Task method)
while (!cancellationRequested)
{
var repeatTask = Task.Delay(130);
// Do your high priority work
await repeatTask;
}
Your other tasks will have a similar basic layout, but they will be given a lower priority in the task scheduler (this is usually handled by the task scheduler having a separate queue for each task priority). Once in a while they can check whether there is a higher-priority task pending, and if so, do await Task.Yield(); (see the sketch after this answer). In fact, in your case it seems you don't even need real queues - that makes this a lot easier and, even better, lets you use Task.Yield really efficiently.
The end result is that all three of your periodic tasks are efficiently run on just a single thread (or even no thread at all if they're all waiting).
This does rely on coöperative multi-tasking, of course. It's not really possible to handle full blown real-time like pre-emption on Windows - and partial pre-emption solutions tend to be full of problems. If you're in control of most of the time spent in the task (and offload any other work using asynchronous I/O), the coöperative solution is actually far more efficient, and can give you a lot less latency (though it really shouldn't matter much).
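For illustration, a lower-priority task might be structured like this (a sketch: workSteps, Execute and HigherPriorityWorkPending are placeholders you would supply, not real APIs):
// Inside an async Task method of the lower-priority job.
while (!cancellationRequested)
{
    var repeatTask = Task.Delay(300); // thread 2's 300 ms period
    foreach (var step in workSteps)   // placeholder for the job's work items
    {
        step.Execute();
        // Cooperative preemption point: give way whenever the
        // higher-priority task has become runnable.
        if (HigherPriorityWorkPending)  // placeholder flag
            await Task.Yield();
    }
    await repeatTask;
}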
I hope I don't misunderstand your question :)
One possible approach to your problem might be to use a concurrent queue: https://msdn.microsoft.com/de-de/library/dd267265(v=vs.110).aspx
For example, you create an enum to control your state and initialize the queue:
private ConcurrentQueue<Action> _clientActions = new ConcurrentQueue<Action>();
private Statuskatalog _status = Statuskatalog.Idle;
private enum Statuskatalog
{
    Idle,
    Busy
};
Create a timer and a timer callback to process the queue:
Timer _taskTimer = new Timer(ProcessPendingTasks, null, 100, 333);
private void ProcessPendingTasks(object x)
{
_status = Statuskatalog.Busy;
_taskTimer.Change(Timeout.Infinite, Timeout.Infinite);
Action currentTask;
while( _clientActions.TryDequeue( out currentTask ))
{
var task = new Task(currentTask);
task.Start();
task.Wait();
}
_status=Statuskatalog.Idle;
}
Now you only have to add your tasks as delegates to the queue:
_clientActions.Enqueue(delegate { **Your task** });
if (_status == Statuskatalog.Idle) _taskTimer.Change(0, 333);
On this basis you can manage the special requirements you were asking about.
Hope this is what you were looking for.
I need to use a semaphore in my application, which includes multiple threads. My usage is probably a common scenario, but I'm stuck with the APIs.
In my usage the semaphore can be posted from multiple places, whereas only one thread waits on the semaphore.
Now, I need the semaphore to be a binary one, i.e. I need to make sure that if multiple threads post to the semaphore simultaneously, its count stays at one and no error is thrown. How can I accomplish this?
In short I require the following code to work.
private static Semaphore semaphoreResetMapView = new Semaphore(0, 1); // Limiting the max value of semaphore to 1.
void threadWait(){
while (true){
semaphoreResetMapView.WaitOne();
<code>
}
}
void Main(){
Thread tThread = new Thread(threadWait);
tThread.Start();
semaphoreResetMapView.Release(1);
semaphoreResetMapView.Release(1);
semaphoreResetMapView.Release(1); // Multiple Releases should not throw an error. Rather saturate the value of semaphore to 1.
}
I will appreciate any help on this.
It sounds like you don't really need a semaphore - you just need an AutoResetEvent. Your "posting" threads would just call Set, and the waiting thread would call WaitOne.
Or you could just use Monitor.Wait and Monitor.Pulse...
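A sketch of your example reshaped around AutoResetEvent (same structure as your code; while the event is already signaled, additional Set calls are harmless no-ops rather than errors):
private static AutoResetEvent resetMapView = new AutoResetEvent(false);
void threadWait()
{
    while (true)
    {
        resetMapView.WaitOne(); // consumes the signal and auto-resets
        // <code>
    }
}
void Main()
{
    Thread tThread = new Thread(threadWait);
    tThread.Start();
    resetMapView.Set();
    resetMapView.Set();
    resetMapView.Set(); // extra Sets while signaled simply coalesce into one
}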
I have asked a similar question before here, but after much thought, and implementations from those that answered me, I found that my approach might have been incorrect.
When I implement the solution given to me on this previous question the following test result appeared:
When I 'simulate' multiple tasks running concurrently on multiple threads from the thread pool (by making the threads sleep for random intervals of 1 to 20 seconds, for instance), the model seems to work fine. I set the system to poll every second to see if it can spawn another thread, and all seems fine. Longer-running (sleeping) threads complete later on, and threads start and die all over the place. If I happen to run out of threads (I set it to spawn no more than 10), it sits and waits for one to become available.
When I make the system do actual processing in each thread, however (which takes anything from 3 seconds upwards and involves reading data, generating XMLs, saving data, sending emails and the like), the system spawns 1, 2 or 3 threads, does the processing, then just closes the threads (3...2...1...) and reports 0 threads running (I added Console.WriteLines everywhere to document the process). It then hangs at 0 threads, never picking up any more work!
So I decided to state my issue again in the hope that someone has a solution. I have tried various approaches so far:
ThreadPool: there's always the mention that you shouldn't overwork the ThreadPool and that jobs have to be 'quick', but what is the definition of 'quick'? How do I know how big/busy the ThreadPool is?
Threads: it's always stated that Threads are expensive and that you have to handle their starting and ending, but how do I limit them? I have tried Semaphores, 'lock' objects, public variables, but to no avail.
So here is what I would like to accomplish:
I have the same job that needs to run at regular intervals, e.g. like Gmail checking its server for new email every 5 seconds.
If there is work to be done (i.e. you have new emails to be delivered to your inbox), spawn an async thread and let it start the work. This work will typically take longer than the interval stated in (1), hence the async thread. If an interval passes and the system checks again and sees there is more work, it spawns another thread and lets it start the work.
As in my example, all the jobs are the same kind of job (checking for new mail) and are totally independent of each other; they do not influence each other. If one of them fails, the rest can continue working without issue.
I need a limit on how many concurrent threads (a maximum number of threads) I can have. If I pick '10', the system should start checking for jobs as in (1) and keep spawning threads as in (1) until it reaches 10 threads. All new attempts on an interval to spawn a new thread should simply fail (do nothing) until a thread is released again. Here I suppose the choice is: (a) when a thread is released, there will already be some work queued, waiting to be given to the newly freed thread, or (b) on the next interval, check if there is new work and assign it to the newly freed thread.
If there is no work, the system should typically sit and wait, having no threads; in essence the only thing that should be running is some sort of timer.
I currently use the sample in the previous question to do the following:
I start a timer that ticks every second
On every tick I call ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork))
In DoWork I instantiate a class and call various methods that do some work
...but this leads to what I mentioned before: only 3 threads that die off, and then nothing.
I was thinking of doing the following:
Set the ThreadPool to 10 threads
Start a timer, and on each tick ThreadPool.QueueUserWorkItem, and just keep doing this, hoping the ThreadPool will handle everything else. Isn't this what the ThreadPool is supposed to do?
Any help will be fantastic! (Sorry for the involved explanation!)
Try to have a look at the Semaphore class. You can use it to set a limit on how many threads can concurrently access a particular resource (and when I say resource, it can be anything).
Ok, edited for details:
In your class managing the threads, you create:
Semaphore concurrentThreadsEnforcer = new Semaphore(value1, value2);
Then, each thread you start will call:
concurrentThreadsEnforcer.WaitOne();
That will either take one slot from the semaphore and give it to the new thread, or block the new thread until a slot becomes available.
Whenever your new thread finishes its work, he (I like personalizing) MUST call, for obvious reasons:
concurrentThreadsEnforcer.Release().
Now, regarding the constructor: the second parameter is fairly simple; it states how many concurrent threads can access the resource at any given time.
The first one is a bit trickier: the difference between the second parameter and the first states how many semaphore slots are reserved for the calling thread. That is, all your newly spawned threads have access to the number of slots given by the first parameter, and the remainder, up to the second parameter's value, is reserved for the original thread that created the semaphore (the calling thread).
In your case, for 10 max threads, you would use:
... = new Semaphore(10, 10);
Since I posted a story anyway, let me give more details.
The way I would do it in the new threads is like this:
bool acquired = false;
try
{
    acquired = concurrentThreadsEnforcer.WaitOne();
    // Do some work here
} // Optional catch statements
finally
{
    if (acquired)
        concurrentThreadsEnforcer.Release();
}
I would use a combination of BlockingCollection and Parallel.ForEach
Something like this:
// Assumes a Job class and a timer whose tick handler calls TimerTick (not shown).
private BlockingCollection<Job> jobs = new BlockingCollection<Job>();
private Task jobprocessor;
public void StartWork() {
timer.Start();
jobprocessor = Task.Factory.StartNew(RunJobs);
}
public void EndWork() {
timer.Stop();
jobs.CompleteAdding();
jobprocessor.Wait();
}
public void TimerTick() {
var job = new Job();
if (job.NeedsMoreWork())
jobs.Add(job);
}
public void RunJobs() {
var options = new ParallelOptions { MaxDegreeOfParallelism = 10 };
Parallel.ForEach(jobs.GetConsumingPartitioner(), options,
job => job.DoSomething());
}