Prioritised-writer access to a shared resource? - C#

Does the .NET Framework provide a way to implement access to a shared resource such that some Writers trying to access that resource have priority over others?
My problem has the following constraints:
1. Only 1 concurrent write request to the resource can be granted
2. There are many Writers waiting for access to this resource, but some Writers have precedence over others (starvation of low-priority Writers is okay).
3. Thread affinity is a NON-requirement. One thread can set the lock, but another may reset it.
4. All Writer threads are from the same process.
In short, I need a primitive that exposes its wait queue and lets me modify it. If no such thing exists, any tips on how I can build one myself from the already available classes, such as Semaphore?

Here is some quick'n'dirty code I came up with. I will refine it, but as a POC this works...
public class PrioritisedLock
{
    private List<CountdownEvent> waitQueue;     // wait queue for the shared resource
    private Semaphore waitQueueSemaphore;       // ensures safe access to the wait queue itself

    public PrioritisedLock()
    {
        waitQueue = new List<CountdownEvent>();
        waitQueueSemaphore = new Semaphore(1, 1);
    }

    public bool WaitOne(int position = 0)
    {
        // CountdownEvent needs an initial count of 1,
        // otherwise it is created in the signaled state
        position++;
        bool containsGrantedRequest = false; // does the wait queue still contain the object which owns the lock?
        CountdownEvent thisRequest = position < 1 ? new CountdownEvent(1) : new CountdownEvent(position);
        int leastPositionMagnitude = Int32.MaxValue;

        waitQueueSemaphore.WaitOne();

        // insert the request at the appropriate position
        foreach (CountdownEvent cdEvent in waitQueue)
        {
            if (cdEvent.CurrentCount > position)
                cdEvent.AddCount();
            else if (cdEvent.CurrentCount == position)
                thisRequest.AddCount();
            if (cdEvent.CurrentCount == 0)
                containsGrantedRequest = true;
        }
        waitQueue.Add(thisRequest);

        foreach (CountdownEvent cdEvent in waitQueue)
            if (cdEvent.CurrentCount < leastPositionMagnitude)
                leastPositionMagnitude = cdEvent.CurrentCount;

        // if nobody holds the lock, grant the lock to the current request
        if (containsGrantedRequest == false && thisRequest.CurrentCount == leastPositionMagnitude)
            thisRequest.Signal(leastPositionMagnitude);

        waitQueueSemaphore.Release();

        // now do the actual wait for this request; if it is already signaled, it ends immediately
        thisRequest.Wait();
        return true;
    }

    public int Release()
    {
        int waitingCount = 0, i = 0, positionLeastMagnitude = Int32.MaxValue;
        int grantedIndex = -1;

        waitQueueSemaphore.WaitOne();

        foreach (CountdownEvent cdEvent in waitQueue)
        {
            if (cdEvent.CurrentCount <= 0)
            {
                grantedIndex = i;
                break;
            }
            i++;
        }

        // remove the request which is already fulfilled
        if (grantedIndex != -1)
            waitQueue.RemoveAt(grantedIndex);

        // find the wait count of the first element in the queue
        foreach (CountdownEvent cdEvent in waitQueue)
            if (cdEvent.CurrentCount < positionLeastMagnitude)
                positionLeastMagnitude = cdEvent.CurrentCount;

        // decrement the wait counter for each waiting object, so that the first object in the queue is now signaled
        foreach (CountdownEvent cdEvent in waitQueue)
        {
            waitingCount++;
            cdEvent.Signal(positionLeastMagnitude);
        }

        waitQueueSemaphore.Release();
        return waitingCount;
    }
}
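For completeness, here is how I intend to call it. This is only a usage sketch; the thread bodies and the priority values are placeholders:

using System.Threading;

PrioritisedLock plock = new PrioritisedLock();

// high-priority writer: position 0 queues ahead of larger positions
new Thread(() =>
{
    plock.WaitOne(0);
    try { /* write to the shared resource */ }
    finally { plock.Release(); }
}).Start();

// low-priority writer: a larger position value waits behind position 0
new Thread(() =>
{
    plock.WaitOne(5);
    try { /* write to the shared resource */ }
    finally { plock.Release(); }
}).Start();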

Use a priority queue to keep the list of pending requests. See here: Priority queue in .NET.
Use the standard Monitor functionality to lock and signal what to do and when, as proposed by kenny.
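A rough sketch of that idea, with a SortedDictionary standing in for the priority queue and Monitor for the signalling. The class and member names are made up and this is untested; lower priority values win, and Release can be called from a different thread than the one that acquired (per constraint 3):

using System.Collections.Generic;
using System.Threading;

public class PriorityWriterLock
{
    private readonly object gate = new object();
    private readonly SortedDictionary<int, int> waiters = new SortedDictionary<int, int>(); // priority -> waiter count
    private bool held;

    public void Acquire(int priority)
    {
        lock (gate)
        {
            int n;
            waiters.TryGetValue(priority, out n);
            waiters[priority] = n + 1;
            // wait until the lock is free and no better-priority writer is waiting
            while (held || FirstPriority() != priority)
                Monitor.Wait(gate);
            held = true;
            if (--waiters[priority] == 0)
                waiters.Remove(priority);
        }
    }

    public void Release() // may be called by any thread in the process
    {
        lock (gate)
        {
            held = false;
            Monitor.PulseAll(gate); // wake everyone; only the best-priority waiter proceeds
        }
    }

    private int FirstPriority()
    {
        foreach (var kv in waiters) return kv.Key; // SortedDictionary iterates in key order
        return int.MaxValue;
    }
}

Low-priority writers can starve under a steady stream of high-priority ones, which the question says is acceptable.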

Related

Can a thread jump over lock()?

I have a class that provides thread-safe access to LinkedList<> (adding and reading items).
class LinkedListManager
{
    public static object locker = new object();
    public static LinkedList<AddXmlNodeArgs> tasks { get; set; }
    public static EventWaitHandle wh { get; set; }

    public void AddItemThreadSafe(AddXmlNodeArgs task)
    {
        lock (locker)
            tasks.AddLast(task);
        wh.Set();
    }

    public LinkedListNode<AddXmlNodeArgs> GetNextItemThreadSafe(LinkedListNode<AddXmlNodeArgs> prevItem)
    {
        LinkedListNode<AddXmlNodeArgs> nextItem;
        if (prevItem == null)
        {
            lock (locker)
                return tasks.First;
        }
        lock (locker) // *1
            nextItem = prevItem.Next;
        if (nextItem == null) // *2
        {
            wh.WaitOne();
            return prevItem.Next;
        }
        lock (locker)
            return nextItem;
    }
}
I have 3 threads: 1st - writes data to tasks; 2nd and 3rd - read data from tasks.
In 2nd and 3rd threads I retrieve data from tasks by calling GetNextItemThreadSafe().
The problem is that sometimes GetNextItemThreadSafe() returns null even when the parameter of the method (prevItem) is not null.
Question:
Can a thread somehow jump over lock(locker) (// *1) and get to // *2 at once ??
I think it's the only way to get a return value = null from GetNextItemThreadSafe()...
I've spent a whole day looking for the mistake, but it's extremely hard because it seems almost impossible to debug step by step (tasks contains 5,000 elements and the error occurs whenever it wants). Btw, sometimes the program works fine - without the exception.
I'm new to threads so maybe I'm asking silly questions...
Not clear what you're trying to achieve. Are both threads supposed to get the same elements of the linked list? Or are you trying to have two threads process the tasks out of the list in parallel? If it's the second case, then what you are doing cannot work. You'd better look at BlockingCollection, which is thread-safe and designed for this kind of multi-threaded producer/consumer pattern.
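For the parallel-consumers case, a minimal BlockingCollection sketch; Process and the thread setup here are placeholders, not your actual code:

using System.Collections.Concurrent;
using System.Threading;

BlockingCollection<AddXmlNodeArgs> tasks = new BlockingCollection<AddXmlNodeArgs>();

// producer thread(s):
//     tasks.Add(task);
//     tasks.CompleteAdding(); // call once, when no more work will arrive

// each consumer thread: GetConsumingEnumerable() blocks until an item
// is available, and completes once CompleteAdding() has been called
new Thread(() =>
{
    foreach (AddXmlNodeArgs task in tasks.GetConsumingEnumerable())
        Process(task); // hypothetical per-item processing method
}).Start();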
A lock is only active while executing the block of code that follows the lock statement. Since you lock multiple times on single statements, this effectively degenerates to locking only the single statement that follows each lock, after which another thread is free to jump in and consume the data. Perhaps what you meant is this:
public LinkedListNode<AddXmlNodeArgs> GetNextItemThreadSafe(LinkedListNode<AddXmlNodeArgs> prevItem)
{
    LinkedListNode<AddXmlNodeArgs> returnItem;
    lock (locker) // lock the entire method contents to make it atomic
    {
        if (prevItem == null)
        {
            returnItem = tasks.First;
        }
        else
        {
            LinkedListNode<AddXmlNodeArgs> nextItem = prevItem.Next; // *1
            if (nextItem == null) // *2
            {
                // wh.WaitOne(); // waiting in a locked block is not a good idea
                returnItem = prevItem.Next;
            }
            else
            {
                returnItem = nextItem;
            }
        }
    }
    return returnItem;
}
Note that only assignments (as opposed to returns) occur within the locked block and there is a single return point at the bottom of the method.
I think the solution is the following:
In your Add method, add the node and set the EventWaitHandle both inside the same lock
In the Get method, inside a lock, check if the next element is empty and inside the same lock, Reset the EventWaitHandle. Outside of the lock, wait on the EventWaitHandle.
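A sketch of that arrangement against the class from the question, assuming wh was created as a ManualResetEvent (untested):

public void AddItemThreadSafe(AddXmlNodeArgs task)
{
    lock (locker)
    {
        tasks.AddLast(task);
        wh.Set(); // signal inside the same lock as the add
    }
}

public LinkedListNode<AddXmlNodeArgs> GetNextItemThreadSafe(LinkedListNode<AddXmlNodeArgs> prevItem)
{
    while (true)
    {
        lock (locker)
        {
            var next = prevItem == null ? tasks.First : prevItem.Next;
            if (next != null)
                return next;
            wh.Reset(); // nothing to read yet; reset inside the same lock
        }
        wh.WaitOne(); // wait outside the lock for the next Set()
    }
}

Because the Reset happens under the lock, a writer cannot sneak in between the empty check and the Reset; if it adds and Sets right after the reader releases the lock, the WaitOne returns immediately.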

ManualResetEvent - Backlog of events

I'm having some issues with ManualResetEvent and a backlog of events. My application subscribes to messages and then runs a long task.
The issue I have is that I am receiving more messages than I can process. The task takes around 5 s to process, but I'm receiving a new message every 2-3 s.
Ideally, what I want to do is ignore any new events until I've finished processing the task, then start 'listening' again. At present I appear to be backlogging the events in the order they are received and processed. As you can imagine, after a couple of hours the message being processed is very old.
I cannot run the long-running task from multiple threads.
Maybe I need some kind of queuing mechanism where I take the most recent message (Last In, First Out) and clear the rest of the queue?
Any ideas?
I'm also calling ManualResetEvent.Set() at the end of the long-running process - from my research I understand this is correct? Should I Reset() at the beginning of the long-running task to cause the thread to block, then Set() at the end?
Create a circular buffer that you treat as a LIFO queue (a stack). So, say you want a maximum of 10 entries in the queue:
const int MaxItems = 10;
Item[] theQueue = new Item[MaxItems]; // slots start out null
int insertPoint = 0;
object myLock = new object();

void Enqueue(Item t)
{
    lock (myLock)
    {
        theQueue[insertPoint] = t; // overwrites the oldest item when full
        insertPoint = (insertPoint + 1) % MaxItems;
    }
}

Item Dequeue()
{
    lock (myLock)
    {
        int takeFrom = insertPoint - 1;
        if (takeFrom < 0)
            takeFrom = MaxItems - 1;
        if (theQueue[takeFrom] != null)
        {
            var rslt = theQueue[takeFrom];
            theQueue[takeFrom] = null; // clear the slot so the empty check stays correct
            insertPoint = takeFrom;
            return rslt;
        }
        // queue is empty. Either return null or throw an exception.
        return null;
    }
}
Of course you'll want to wrap that all up into a nice object. But that's the basic idea.
How about this:
At any given time, you have one message being processed, and one message waiting to be processed.
When you receive a new message, you overwrite the message waiting to be processed.
Your processing thread waits until there is a message waiting to be processed, then marks it as being processed, processes it, and starts again.
Add some syncing logic, and you get this code:
private object _sync = new object();
private Message _beingProcessed;
private Message _waitingToBeProcessed;

public void OnMessageReceived(Message message)
{
    lock (_sync)
    {
        _waitingToBeProcessed = message; // overwrite whatever was waiting
        Monitor.Pulse(_sync);
    }
}

public void DoWork()
{
    while (true)
    {
        lock (_sync)
        {
            while (_waitingToBeProcessed == null)
            {
                Monitor.Wait(_sync);
            }
            _beingProcessed = _waitingToBeProcessed;
            _waitingToBeProcessed = null;
        }
        Process(_beingProcessed); // do the actual work outside the lock
    }
}

C#: Locking a Queue properly for Iteration

I am using a Queue (C#) to store data that has to be sent to any client connecting.
My lock object is private readonly:
private readonly object completedATEQueueSynched = new object();
Only two methods enqueue:
1) started by mouse movement, executed by the main-form thread:
public void handleEddingToolMouseMove(MouseEventArgs e)
{
    AbstractTrafficElement de = new...
    sendElementToAllPlayers(de);
    lock (completedATEQueueSynched)
    {
        completedATEQueue.Enqueue(de);
    }
}
2) started on a button event, also executed by the main-form thread (does not matter here, but better safe than sorry):
public void handleBLC(EventArgs e)
{
    AbstractTrafficElement de = new...
    sendElementToAllPlayers(de);
    lock (completedATEQueueSynched)
    {
        completedATEQueue.Enqueue(de);
    }
}
This method is called by the thread responsible for the specific connected client. Here it is:
private void sendSetData(TcpClient c)
{
    NetworkStream clientStream = c.GetStream();
    lock (completedATEQueueSynched)
    {
        foreach (AbstractTrafficElement ate in MainForm.completedATEQueue)
        {
            binaryF.Serialize(clientStream, ate);
        }
    }
}
If a client connects while I am moving my mouse, a deadlock occurs.
If I lock only the iteration, an InvalidOperationException is thrown, because the queue changed.
I have tried the synchronized Queue wrapper as well, but it doesn't work for iterating (even in combination with locks).
Any ideas? I just don't see my mistake.
You can reduce the contention, probably enough to make it acceptable:
private void sendSetData(TcpClient c)
{
    IEnumerable<AbstractTrafficElement> list;
    lock (completedATEQueueSynched)
    {
        list = MainForm.completedATEQueue.ToList(); // take a snapshot
    }
    NetworkStream clientStream = c.GetStream();
    foreach (AbstractTrafficElement ate in list)
    {
        binaryF.Serialize(clientStream, ate);
    }
}
But of course a snapshot introduces its own bit of timing logic. What exactly does 'all elements' mean at any given moment?
Looks like ConcurrentQueue is what you want.
UPDATE
Yes, it works fine; TryDequeue uses Interlocked.CompareExchange and SpinWait internally. A lock is not a good choice here because it is too expensive; take a look at SpinLock, and don't forget about Data Structures for Parallel Programming.
Here is Enqueue from ConcurrentQueue; as you can see, only SpinWait and Interlocked.Increment are used. Looks pretty nice:
public void Enqueue(T item)
{
    SpinWait spinWait = new SpinWait();
    while (!this.m_tail.TryAppend(item, ref this.m_tail))
        spinWait.SpinOnce();
}

internal void Grow(ref ConcurrentQueue<T>.Segment tail)
{
    this.m_next = new ConcurrentQueue<T>.Segment(this.m_index + 1L);
    tail = this.m_next;
}

internal bool TryAppend(T value, ref ConcurrentQueue<T>.Segment tail)
{
    if (this.m_high >= 31)
        return false;
    int index = 32;
    try
    {
    }
    finally
    {
        index = Interlocked.Increment(ref this.m_high);
        if (index <= 31)
        {
            this.m_array[index] = value;
            this.m_state[index] = 1;
        }
        if (index == 31)
            this.Grow(ref tail);
    }
    return index <= 31;
}
Henk Holterman's approach is good if the rate of enqueue/dequeue on the queue is not very high. Here I think you are capturing mouse movements. If you expect to generate a lot of data in the queue, the above approach is not fine: the lock becomes a point of contention between the network code and the enqueue code, and the granularity of that lock is the whole queue.
In this case I'll recommend what GSerjo mentioned: ConcurrentQueue. I've looked into the implementation of this queue. It is very granular: it operates at the level of single elements in the queue. While one thread is dequeuing, other threads can enqueue in parallel without stopping.
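For illustration, a minimal sketch of the question's code on top of ConcurrentQueue; ToArray takes a consistent snapshot without any explicit locks (names taken from the question):

using System.Collections.Concurrent;

ConcurrentQueue<AbstractTrafficElement> completedATEQueue = new ConcurrentQueue<AbstractTrafficElement>();

// producers: no lock needed
completedATEQueue.Enqueue(de);

// consumer: ToArray() takes a consistent snapshot to iterate over
foreach (AbstractTrafficElement ate in completedATEQueue.ToArray())
    binaryF.Serialize(clientStream, ate);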

C# threading: starting new threads when one finishes, without waiting on Join

I've searched all morning and I can't seem to find the answer to this question.
I have an array of threads, each doing work; I loop through the IDs, joining each one and then starting new threads. What's the best way to detect when a thread has finished so I can fire off a new thread without waiting for each thread to finish?
EDIT: added a code snippet; maybe this will help.
if (threadCount > maxItems)
{
    threadCount = maxItems;
}
threads = new Thread[threadCount];
for (int i = 0; i < threadCount; i++)
{
    threads[i] = new Thread(delegate() { this.StartThread(); });
    threads[i].Start();
}
while (loopCounter < threadCount)
{
    if (loopCounter == (threadCount - 1))
    {
        loopCounter = 0;
    }
    if (threads[loopCounter].ThreadState == ThreadState.Stopped)
    {
        threads[loopCounter] = new Thread(delegate() { this.StartThread(); });
        threads[loopCounter].Start();
    }
}
Rather than creating new thread each time, why not just have each thread call a function that returns the next ID (or null if there's no more data to process) when it's finished with the current one? That function will obviously have to be threadsafe, but should reduce your overhead versus watching for finished threads and starting new ones.
so,
void RunWorkerThreads(int threadCount)
{
    for (int i = 0; i < threadCount; ++i)
    {
        new Thread(() =>
        {
            while (true)
            {
                var nextItem = GetNextItem();
                if (nextItem == null) break;
                /* do work */
            }
        }).Start();
    }
}

T GetNextItem()
{
    lock (_lockObject)
    {
        // return the next item
    }
}
I'd probably pull GetNextItem and "do work" out and pass them as parameters to RunWorkerThreads to make it more generic - so it would be RunWorkerThreads<T>(int count, Func<T> getNextItem, Action<T> workDoer) - but that's up to you.
Note that Parallel.ForEach() does essentially this, plus gives you ways of monitoring and aborting and such, so there's probably no need to reinvent the wheel here.
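For example, a minimal Parallel.ForEach sketch; items and DoWork stand in for your own collection and per-item method:

using System.Threading.Tasks;

Parallel.ForEach(items,
    new ParallelOptions { MaxDegreeOfParallelism = threadCount },
    item => DoWork(item)); // blocks until every item has been processed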
You can check the thread's ThreadState property and when it's Stopped you can kick off a new thread.
http://msdn.microsoft.com/en-us/library/system.threading.thread.threadstate.aspx
http://msdn.microsoft.com/en-us/library/system.threading.threadstate.aspx
Get each thread, as the last thing it does, to signal that it is done. That way there needs to be no waiting at all.
Even better, move to a higher level of abstraction, e.g. a thread pool, and let someone else worry about such details.
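One way to express that "signal when done" idea is a SemaphoreSlim that caps the number of in-flight threads. This is a sketch only; WorkItem, workItems and DoWork are placeholders:

using System.Threading;

SemaphoreSlim slots = new SemaphoreSlim(threadCount, threadCount);
foreach (WorkItem item in workItems)
{
    slots.Wait(); // blocks here until some running thread signals it is done
    WorkItem current = item; // avoid the foreach-capture pitfall on older compilers
    new Thread(() =>
    {
        try { DoWork(current); }
        finally { slots.Release(); } // the "I am done" signal
    }).Start();
}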

ObjectPool implementation deadlocks

I have implemented a generic ObjectPool class but have found that it sometimes deadlocks (it happens at Monitor.Wait(poolLock)).
Can anyone spot the error?
public class ObjectPool<T> where T : new()
{
    private readonly object poolLock = new object();
    Stack<T> stack = null;

    public ObjectPool(int count)
    {
        stack = new Stack<T>(count);
        for (int i = 0; i < count; i++)
            stack.Push(new T());
    }

    public T Get()
    {
        lock (poolLock)
        {
            // if no more left, wait for one to get pushed
            while (stack.Count < 1)
                Monitor.Wait(poolLock);
            return stack.Pop();
        }
    }

    public void Put(T item)
    {
        lock (poolLock)
        {
            stack.Push(item);
            // if adding the first item, send a signal
            if (stack.Count == 1)
                Monitor.Pulse(poolLock);
        }
    }
}
Usage:
try
{
    service = myPool.Get();
}
finally
{
    if (service != null)
        myPool.Put(service);
}
The deadlock is probably happening with stack.Count > 0. That means you have a Wait/Pulse problem. It is not a bad idea to always call Pulse after a Push(), or at least when Count < 5 or so. Remember that the Wait/Pulse mechanism does not have a memory.
A scenario:
1. Thread A tries to Get from an empty pool, and does a Wait()
2. Thread B tries to Get from an empty pool, and does a Wait()
3. Thread C Puts into the pool, does a Pulse()
4. Thread D Puts back into the pool and does not Pulse (Count == 2)
5. Thread A is activated and Gets its item.
6. Thread B is left waiting, with little hope for recovery.
I see it a little more clearly now. I must have a reader lock, right?
public T Get()
{
    lock (readerLock)
    {
        lock (poolLock)
        {
            // if no more left, wait for one to get pushed
            while (stack.Count < 1)
                Monitor.Wait(poolLock);
            return stack.Pop();
        }
    }
}
Just a guess, but what about removing the stack.Count == 1 condition and always issuing a Pulse inside the Put function? Maybe two Puts are being called quickly in sequence and only one waiting thread is being awakened.
Henk answered your question. The following condition is not correct:
if (stack.Count == 1)
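A corrected Put along those lines, pulsing on every return as Henk suggests:

public void Put(T item)
{
    lock (poolLock)
    {
        stack.Push(item);
        Monitor.Pulse(poolLock); // pulse on every returned item, so no waiter is left behind
    }
}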
