Lock and Mutex are showing different results - c#

I was trying out some concepts related to lock and Mutex in C# threading. However, I found that using Mutex gave me correct results, while the results using lock were inconsistent.
With lock construct:
class BankAccount
{
    private int balance;
    public object padlock = new object();

    public int Balance { get => balance; private set => balance = value; }

    public void Deposit(int amount)
    {
        lock (padlock)
        {
            balance += amount;
        }
    }

    public void Withdraw(int amount)
    {
        lock (padlock)
        {
            balance -= amount;
        }
    }

    public void Transfer(BankAccount where, int amount)
    {
        lock (padlock)
        {
            balance = balance - amount;
            where.Balance = where.Balance + amount;
        }
    }
}
static void Main(string[] args)
{
    var ba1 = new BankAccount();
    var ba2 = new BankAccount();

    var task = Task.Factory.StartNew(() =>
    {
        for (int j = 0; j < 1000; ++j)
            ba1.Deposit(100);
    });
    var task1 = Task.Factory.StartNew(() =>
    {
        for (int j = 0; j < 1000; ++j)
            ba2.Deposit(100);
    });
    var task2 = Task.Factory.StartNew(() =>
    {
        for (int j = 0; j < 1000; ++j)
            ba1.Transfer(ba2, 100);
    });

    Task.WaitAll(task, task1, task2);
    Console.WriteLine($"Final balance is {ba1.Balance}.");
    Console.WriteLine($"Final balance is {ba2.Balance}.");
    Console.ReadLine();
}
The code was giving an incorrect balance for ba2, while ba1 was correctly set to 0. This happens even though each operation is wrapped in a lock statement.
With Mutex construct:
class BankAccount
{
    private int balance;

    public int Balance { get => balance; private set => balance = value; }

    public void Deposit(int amount)
    {
        balance += amount;
    }

    public void Withdraw(int amount)
    {
        balance -= amount;
    }

    public void Transfer(BankAccount where, int amount)
    {
        balance = balance - amount;
        where.Balance = where.Balance + amount;
    }
}
static void Main(string[] args)
{
    var ba1 = new BankAccount();
    var ba2 = new BankAccount();
    var mutex1 = new Mutex();
    var mutex2 = new Mutex();

    var task = Task.Factory.StartNew(() =>
    {
        for (int j = 0; j < 1000; ++j)
        {
            var lockTaken = mutex1.WaitOne();
            try
            {
                ba1.Deposit(100);
            }
            finally
            {
                if (lockTaken)
                {
                    mutex1.ReleaseMutex();
                }
            }
        }
    });
    var task1 = Task.Factory.StartNew(() =>
    {
        for (int j = 0; j < 1000; ++j)
        {
            var lockTaken = mutex2.WaitOne();
            try
            {
                ba2.Deposit(100);
            }
            finally
            {
                if (lockTaken)
                {
                    mutex2.ReleaseMutex();
                }
            }
        }
    });
    var task2 = Task.Factory.StartNew(() =>
    {
        for (int j = 0; j < 1000; ++j)
        {
            bool haveLock = Mutex.WaitAll(new[] { mutex1, mutex2 });
            try
            {
                ba1.Transfer(ba2, 100);
            }
            finally
            {
                if (haveLock)
                {
                    mutex1.ReleaseMutex();
                    mutex2.ReleaseMutex();
                }
            }
        }
    });

    Task.WaitAll(task, task1, task2);
    Console.WriteLine($"Final balance is {ba1.Balance}.");
    Console.WriteLine($"Final balance is {ba2.Balance}.");
    Console.ReadLine();
}
With this approach I was getting correct balances every time I ran it.
I am not able to figure out why the first approach is not working correctly. Am I missing something with respect to lock statements?

The main problem is with this line:
public int Balance { get => balance; private set => balance = value; }
You are allowing external code to meddle with the balance field without the protection of the padlock. You also allow out-of-order reads of the balance field, because of the lack of a memory barrier, or, even worse, torn reads if you later replace the int type with the more appropriate decimal.
The second problem can be solved by protecting the read with the padlock.
public int Balance { get { lock (padlock) return balance; } }
As for the Transfer method, it can now be implemented without direct access to the other BankAccount's balance, like this:
public void Transfer(BankAccount where, int amount)
{
    Withdraw(amount);
    where.Deposit(amount);
}
This Transfer implementation is not atomic though, since an exception in the where.Deposit method could lead to the amount vanishing into thin air. Also, other threads are not prevented from reading inconsistent values for the two BankAccounts' Balances. This is why people generally use databases equipped with the ACID properties for this kind of work.
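Putting the answer's suggestions together, the corrected class could look like this; a sketch combining the lock-protected getter with the composed Transfer from above:

```csharp
class BankAccount
{
    private readonly object padlock = new object();
    private int balance;

    // Reads now take the same lock as writes, so no stale
    // or torn values can be observed.
    public int Balance { get { lock (padlock) return balance; } }

    public void Deposit(int amount)
    {
        lock (padlock) { balance += amount; }
    }

    public void Withdraw(int amount)
    {
        lock (padlock) { balance -= amount; }
    }

    // Built from the two locked operations; each step is safe,
    // but the transfer as a whole is still not atomic.
    public void Transfer(BankAccount where, int amount)
    {
        Withdraw(amount);
        where.Deposit(amount);
    }
}
```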

Both versions give the same results on my machine (VS2017, .NET Framework 4.7.2) and work fine, so perhaps it is a difference with your system.
Final balance is 0.
Final balance is 200000.
Mutexes are historically and originally intended for inter-process synchronization.
So within the process where the mutex is created, the owning thread is never blocked by its own mutex unless it has released it, as in the code provided in the question.
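For example, a Mutex is re-entrant for its owning thread: a second WaitOne on the same thread returns immediately, and each successful WaitOne needs a matching ReleaseMutex. A minimal sketch:

```csharp
using System;
using System.Threading;

class MutexReentrancy
{
    public static void Main()
    {
        var mutex = new Mutex();

        mutex.WaitOne();   // first acquisition by this thread
        mutex.WaitOne();   // re-entrant: same owner, returns immediately

        // One ReleaseMutex per successful WaitOne, or the mutex
        // stays owned and other threads block forever.
        mutex.ReleaseMutex();
        mutex.ReleaseMutex();

        Console.WriteLine("released");
    }
}
```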
Using an operating system mutex object to synchronize threads is a bad practice and an anti-pattern.
Use Semaphore or Monitor within a process if you are having problems with lock and volatile.
Mutex : "A synchronization primitive that can also be used for interprocess synchronization."
Semaphore : "Limits the number of threads that can access a resource or pool of resources concurrently."
Monitor : "Provides a mechanism that synchronizes access to objects."
lock : "The lock statement acquires the mutual-exclusion lock for a given object, executes a statement block, and then releases the lock. While a lock is held, the thread that holds the lock can again acquire and release the lock. Any other thread is blocked from acquiring the lock and waits until the lock is released."
volatile : "The volatile keyword indicates that a field might be modified by multiple threads that are executing at the same time. The compiler, the runtime system, and even hardware may rearrange reads and writes to memory locations for performance reasons. Fields that are declared volatile are not subject to these optimizations. Adding the volatile modifier ensures that all threads will observe volatile writes performed by any other thread in the order in which they were performed. There is no guarantee of a single total ordering of volatile writes as seen from all threads of execution."
Hence you may try to add volatile:
private volatile int balance;
You can also set the locker object as static to be shared between instances if needed:
static private object padlock = new object();
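In code, the two suggestions combine roughly like this (a sketch; make padlock static only if every instance really should share a single lock):

```csharp
class BankAccount
{
    // static: one lock shared by every BankAccount instance, so
    // cross-account operations serialize on it.
    private static readonly object padlock = new object();

    // volatile: the field is re-read from memory on every access
    // instead of being cached in a register.
    private volatile int balance;

    public int Balance { get { lock (padlock) return balance; } }

    public void Deposit(int amount)
    {
        lock (padlock) { balance += amount; }
    }

    public void Withdraw(int amount)
    {
        lock (padlock) { balance -= amount; }
    }
}
```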

Related

Hitting Synchronization LockException when resizing concurrent dictionary

Would anyone know why I hit the SynchronizationLockException when attempting a resize operation?
Based on the documentation for this error, I understand this happens when the current thread doesn't own the lock, but based on the TryResize function, I think this thread should own all the locks.
To illustrate the bug, I deliberately kept the load factor equal to 0, so that after the very first add operation the implementation will attempt a resize operation.
Here is the test I ran:
public async Task ThreeThreadAdd()
{
MyConcurrentDictionary<int, int> dict = new MyConcurrentDictionary<int, int>();
var task1 = Task.Run(() => dict.TryAdd(1, 1));
var sameBucketAsTask1 = Task.Run(() => dict.TryAdd(11, 1));
var task2 = Task.Run(() => dict.TryAdd(2, 2));
await Task.WhenAll(task1, sameBucketAsTask1, task2);
Assert.AreEqual(3, dict.Count());
}
Here is the implementation:
namespace DictionaryImplementations
{
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
public class MyConcurrentDictionary<TKey, TValue>
{
internal class Entry<TKey, TValue>
{
internal TKey key;
internal TValue value;
}
internal class Table
{
internal readonly object[] locks;
internal readonly List<Entry<TKey, TValue>>[] buckets;
internal Table(object[] locks, List<Entry<TKey, TValue>>[] buckets)
{
this.locks = locks;
this.buckets = buckets;
}
}
private double loadFactor;
private int count;
private volatile int bucketSize;
private volatile Table table;
public MyConcurrentDictionary()
{
//// starting with size 2 to illustrate resize issue
int size = 2;
this.bucketSize = size;
object[] locks = new object[size];
for (int i = 0; i < locks.Length; i++)
{
locks[i] = new object();
}
List<Entry<TKey, TValue>>[] buckets = new List<Entry<TKey, TValue>>[size];
for (int i = 0; i < buckets.Length; i++)
{
buckets[i] = new List<Entry<TKey, TValue>>();
}
Table table = new Table(locks, buckets);
this.table = table;
this.loadFactor = 0;
}
private void TryAcquireLocks(int inclusiveStart, int exclusiveEnd)
{
for (int i = inclusiveStart; i < exclusiveEnd; i++)
{
while (!Monitor.TryEnter(this.table.locks[i], 100))
{
continue;
}
}
}
private void ReleaseLocks(int inclusiveStart, int exclusiveEnd)
{
for (int i = inclusiveStart; i < exclusiveEnd; i++)
{
Monitor.Exit(this.table.locks[i]);
}
}
/// <returns>true if the k/v pair was added, false if key already exists</returns>
public bool TryAdd(TKey key, TValue value)
{
int hashCode = key.GetHashCode();
// is the volatile read safe?
int index = hashCode % this.bucketSize;
// acquire the lock
this.TryAcquireLocks(index, index + 1);
try
{
foreach (var entry in this.table.buckets[index])
{
if (entry.key.Equals(key))
{
return false;
}
}
Entry<TKey, TValue> newEntry = new Entry<TKey, TValue>()
{
key = key,
value = value
};
this.table.buckets[index].Add(newEntry);
Interlocked.Increment(ref this.count);
return true;
}
finally
{
this.ReleaseLocks(index, index + 1);
// attempt resize operation
this.TryResize();
}
}
public bool TryRemove(TKey key, out TValue oldValue)
{
oldValue = default(TValue);
int hashCode = key.GetHashCode();
// is this volatile read safe?
int index = hashCode % this.bucketSize;
// acquire the lock
this.TryAcquireLocks(index, index + 1);
try
{
bool found = false;
int entryIndex = 0;
foreach (var entry in this.table.buckets[index])
{
if (!entry.key.Equals(key))
{
entryIndex++;
}
else
{
found = true;
break;
}
}
if (!found)
{
return false;
}
this.table.buckets[index].RemoveAt(entryIndex);
// `volatile` doesn't work in this hashmap model since we have locks for each bucket
// since increment isn't an atomic operation, using `volatile` alone will not help
Interlocked.Decrement(ref this.count);
return true;
}
finally
{
this.ReleaseLocks(index, index + 1);
}
}
public int Count()
{
// `Interlock` should flush all caches so that we observe latest value
return this.count;
}
public bool ContainsKey(TKey key)
{
int hashCode = key.GetHashCode();
int index = hashCode % this.bucketSize;
// acquire the lock
// in this case, we need to take a lock to guard against collection being modified
this.TryAcquireLocks(index, index + 1);
try
{
List<Entry<TKey, TValue>> bucket = this.table.buckets[index];
return bucket.Any(item => item.key.Equals(key));
}
finally
{
this.ReleaseLocks(index, index + 1);
}
}
private void TryResize()
{
double currentLoad = (this.count * (1.0)) / this.bucketSize;
if (currentLoad < this.loadFactor)
{
return;
}
// locks are re-entrant for the same thread. So, we should not deadlock when acquiring same lock
this.TryAcquireLocks(0, this.bucketSize);
// store a reference to the locks array before the resize
var prevLockReference = this.table.locks;
try
{
int newBucketSize = this.bucketSize * 2;
object[] newLocks = new object[newBucketSize];
Array.Copy(this.table.locks, newLocks, this.table.locks.Length);
for (int i = this.table.locks.Length; i < newBucketSize; i++)
{
newLocks[i] = new object();
}
List<Entry<TKey, TValue>>[] newBuckets = new List<Entry<TKey, TValue>>[newBucketSize];
for (int i = 0; i < newBuckets.Length; i++)
{
newBuckets[i] = new List<Entry<TKey, TValue>>();
}
// re-compute distribution
foreach (List<Entry<TKey, TValue>> bucket in this.table.buckets)
{
foreach (Entry<TKey, TValue> entry in bucket)
{
int hashCode = entry.key.GetHashCode();
int newIndex = hashCode % newBucketSize;
newBuckets[newIndex].Add(new Entry<TKey, TValue>() { key = entry.key, value = entry.value });
}
}
Table newTable = new Table(newLocks, newBuckets);
// perform new assignments
// volatile reads will flush the cache
this.bucketSize = newBucketSize;
this.table = newTable;
}
finally
{
for (int i = 0; i < prevLockReference.Length; i++)
{
Monitor.Exit(prevLockReference[i]);
}
}
}
}
}
That's a nice brain teaser. I will leave all the constructive criticism about your code for the end.
The bug hunt
The bug is in TryResize. A thread comes in trying to resize when the bucket size is, say, 2. It gets to the entry of your critical section:
// locks are re-entrant for the same thread. So, we should not deadlock when acquiring same lock
this.TryAcquireLocks(0, this.bucketSize);
It acquires both locks and goes on its merry way to resize the dictionary. The logic is all fine, you copy the locks to a new array, reallocate buckets, re-compute the distribution...
Then a Spoiler thread comes along and also TryResizes. It gets to the critical section, invokes TryAcquireLocks(0, 2), tries to acquire the first lock and hangs.
In the meantime the first thread finishes recalculation, assigns this.bucketSize = 4, reassigns the internal table along with its locks and enters finally to release both locks.
Now the Spoiler thread wakes up, because it can now acquire lock number 0. It loops again, looks at the new table since it's correctly volatile, and acquires lock number 1. But here's the kicker -- the Spoiler thread never witnessed the this.bucketSize reassignment. It is not aware that there are twice as many locks to acquire now, since it's executing TryAcquireLocks(0, 2). So it only acquires the first 2 locks in the table!
And that's it: not only is the critical section's precondition violated, but when this Spoiler thread executes its finally block it will try to release all 4 locks, since the loop there explicitly goes up to the Length of the lock table. It doesn't own the 2 new locks though, only the original first 2, so you get a SynchronizationLockException.
An immediate fix is to introduce a new method that will always acquire all locks, even if their count increases between calls:
private void TryAcquireAllLocks()
{
for (int i = 0; i < this.bucketSize; i++)
{
while (!Monitor.TryEnter(this.table.locks[i], 100))
{
continue;
}
}
}
And then replace the bugged line in TryResize with
this.TryAcquireAllLocks();
The "please don't put this on production" section, a.k.a. just use ConcurrentDictionary
One of the reasons why you should probably never implement such complicated structures by yourself is this bug you just got. It's very non-trivial to track it down, and it takes a lot of reading to understand all the code that you've written and convince someone it's correct.
Your code already contains a lot of bugs waiting to happen. You're prone to the same error as old versions of Monitor.Enter, where your thread can die while holding a lock and deadlock the application: what happens when a thread has acquired some of the locks it needs for an operation and then dies? It'll never release them, and no one will ever get to use the dictionary again! I also don't get why you're passing a timeout to Monitor.Enter if you always retry acquiring the lock right after.
If you're writing this code as an exercise, great, do so, test it, and then post it to Code Review StackExchange to get some quality feedback. Actually sounds like a great exercise.
But please, for the sake of us all, don't use your own implementation in production. The BCL version is well audited by experts whose only job is to make sure their standard implementations work, there's no way your custom code is going to be more robust than theirs.

Is using two lock statements one after another thread safe?

I've been learning about multi-thread programming and working on the dining philosophers problem. I'm trying to cause a deadlock without sleeping any threads. Here is the code snippet that I'm using:
public class Program
{
const int NumberOfPhilosophers = 5;
const int NumberOfForks = 5;
const int EatingTimeInMs = 20;
static object[] forks = new object[NumberOfForks];
static Thread[] philosopherEatingThreads = new Thread[NumberOfPhilosophers];
public static void Main(string[] args)
{
for (int i = 0; i < NumberOfForks; i++)
{
forks[i] = new object();
}
for (int i = 0; i < NumberOfPhilosophers; i++)
{
int philosopherIndex = i;
philosopherEatingThreads[i] = new Thread(() => { DoWork(philosopherIndex); })
{
Name = philosopherIndex.ToString()
};
philosopherEatingThreads[philosopherIndex].Start();
}
}
public static void DoWork(int philosopherIndex)
{
int fork1Index = philosopherIndex;
int fork2Index = (philosopherIndex + 1 ) % NumberOfForks;
var fork1 = forks[fork1Index];
var fork2 = forks[fork2Index];
lock (fork1)
{
lock (fork2)
{
Thread.Sleep(EatingTimeInMs);
}
}
}
}
I wasn't able to see any deadlocks after trying a couple of times. I know that not experiencing a deadlock does not mean that this code is thread-safe.
For example, when I change the lock statement and add latency I cause deadlock.
lock (fork1)
{
Thread.Sleep(10);
lock (fork2)
{
Thread.Sleep(EatingTimeInMs);
}
}
I have two questions:
Is using two lock statements one after another an atomic operation?
If using Thread.Sleep() causes a deadlock in a code snippet, does that mean that the code snippet is not thread-safe?
Thank you!

Multiple Threads but not locking still gives correct result

I imagined the result would be a negative value, due to the lack of locking and multiple threads sharing the same object. I have tested this many times in both release and debug builds, and every time the result is correct. Why is it still correct?
Code :
static BankAccount ThisbankAccount = new BankAccount(10000);
public static void WithdrawMoney()
{
for(int i = 0; i < 1000; i++)
ThisbankAccount.WithdrawMoney(25);
}
static void Main(string[] args)
{
Thread client1 = new Thread(WithdrawMoney);
Thread client2 = new Thread(WithdrawMoney);
Thread client3 = new Thread(WithdrawMoney);
client1.Start();
client2.Start();
client3.Start();
client3.Join();
Console.WriteLine( ThisbankAccount.Balance);
Console.ReadLine();
}
}
public class BankAccount
{
object Acctlocker = new object();
public BankAccount(int initialAmount)
{
m_balance = initialAmount;
}
public void WithdrawMoney(int amnt)
{
// lock(Acctlocker)
// {
if (m_balance - amnt >= 0)
{
m_balance -= amnt;
}
// }
}
public int Balance
{
get
{
return m_balance;
}
}
private int m_balance;
}
Just because something works now doesn't mean it is guaranteed to work. Race conditions are hard to trigger and might take years to surface. And when they surface, they can be very hard to track down and diagnose.
To see your problem in action, change this code:
if (m_balance - amnt >= 0)
{
m_balance -= amnt;
}
to:
if (m_balance - amnt >= 0)
{
Thread.Sleep(10);
m_balance -= amnt;
}
That introduces a slow enough code path to highlight the problem really easily.
The reason you aren't spotting it with your current code is that the operations you are doing (subtraction and comparisons) are very fast. So the window for the race condition is very small - and you are lucky enough for it not to occur. But, over unlimited time, it definitely will occur.
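For reference, restoring the commented-out lock makes the withdrawal safe. A sketch of the corrected class (the Balance getter is also locked here, which the original code omits):

```csharp
public class BankAccount
{
    private readonly object Acctlocker = new object();
    private int m_balance;

    public BankAccount(int initialAmount)
    {
        m_balance = initialAmount;
    }

    public int Balance
    {
        get { lock (Acctlocker) { return m_balance; } }
    }

    public void WithdrawMoney(int amnt)
    {
        lock (Acctlocker)
        {
            // The check and the subtraction now execute as one
            // atomic unit: no other thread can change m_balance
            // between the test and the decrement, so the balance
            // can never go negative.
            if (m_balance - amnt >= 0)
            {
                m_balance -= amnt;
            }
        }
    }
}
```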

Multithread access to list C#

I have a task to show the difference between synchronized and unsynchronized multithreading. Therefore I wrote an application simulating withdrawing money from clients' bank accounts. Each of some number of threads chooses a random user and withdraws money from the account.
Every thread should withdraw from every account once. The first time the threads are synchronized, but the second time they are not. So there should be a difference between the account balances produced by synchronized and unsynchronized threads, and the difference should vary with the number of users and threads. But in my application I only get a difference with 1000 threads. I need the unsynchronized threads' results to differ strongly from the synchronized threads' ones.
The class User:
public class User : IComparable
{
public string Name { get; set; }
public int Start { get; set; }
public int FinishSync { get; set; }
public int FinishUnsync { get; set; }
public int Hypothetic { get; set; }
public int Differrence { get; set; }
...
}
The method which withdraws money:
public void Withdraw(ref List<User> users, int sum, bool isSync)
{
int ind = 0;
Thread.Sleep(_due);
var rnd = new Random(DateTime.Now.Millisecond);
//_used is the list of users already withdrawn from by this thread
while (_used.Count < users.Count)
{
while (_used.Contains(ind = rnd.Next(0, users.Count))) ; //choosing a random user
if (isSync) //isSync = true if threads are synchronized
{
if (Monitor.TryEnter(users[ind]))
{
try
{
users[ind].FinishSync = users[ind].FinishSync - sum;
}
finally
{
Monitor.Exit(users[ind]);
}
}
}
else
{
lock (users[ind])
{
users[ind].FinishUnsync = users[ind].FinishUnsync - sum;
}
}
_used.Add(ind);
}
done = true;
}
And the threads are created this way:
private void Withdrawing(bool IsSync)
{
if (IsSync)
{
for (int i = 0; i < _num; i++)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, true); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
_threads[i].Join();
}
}
else
{
for (int i = 0; i < _num; ++i)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, false); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
}
}
}
I've changed the Withdrawer class this way, because the problem could have been in creating threads separately from the delegate:
class Withdrawer
{
private List<int>[] _used;
private int _due;
private int _pause;
public int done;
private List<Thread> _threads;
public Withdrawer(List<User> users, int n, int due, int pause, int sum)
{
_due = due;
_pause = pause;
done = 0;
_threads = new List<Thread>(users.Count);
InitializeUsed(users, n);
CreateThreads(users, n, sum, false);
_threads.Clear();
while (done < n) ;
Array.Clear(_used,0,n-1);
InitializeUsed(users, n);
CreateThreads(users, n, sum, true);
}
private void InitializeUsed(List<User> users, int n)
{
_used = new List<int>[n];
for (int i = 0; i < n; i++)
{
_used[i] = new List<int>(users.Count);
for (int j = 0; j < users.Count; j++)
{
_used[i].Add(j);
}
}
}
private void CreateThreads(List<User> users, int n, int sum, bool isSync)
{
for (int i = 0; i < n; i++)
{
_threads.Add(new Thread(delegate() { Withdraw(users, sum, isSync); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
}
}
public void Withdraw(List<User> users, int sum, bool isSync)
{
int ind = 0;
var rnd = new Random();
while (_used[int.Parse(Thread.CurrentThread.Name)].Count > 0)
{
int x = rnd.Next(_used[int.Parse(Thread.CurrentThread.Name)].Count);
ind = _used[int.Parse(Thread.CurrentThread.Name)][x];
if (isSync)
{
lock (users[ind])
{
Thread.Sleep(_due);
users[ind].FinishSync -= sum;
}
}
else
{
Thread.Sleep(_due);
users[ind].FinishUnsync -= sum;
}
_used[int.Parse(Thread.CurrentThread.Name)][x] = _used[int.Parse(Thread.CurrentThread.Name)][_used[int.Parse(Thread.CurrentThread.Name)].Count - 1];
_used[int.Parse(Thread.CurrentThread.Name)].RemoveAt(_used[int.Parse(Thread.CurrentThread.Name)].Count - 1);
Thread.Sleep(_pause);
}
done++;
}
}
Now the problem is that the FinishUnsync values are correct, while the FinishSync values are absolutely not.
Thread.Sleep(_due);
and
Thread.Sleep(_pause);
are used to "hold" the resource, because per my task the thread should get the resource, hold it for _due ms, and after processing wait _pause ms before finishing.
Your code isn't doing anything useful, and doesn't show the difference between synchronized and unsynchronized access. There are many things you'll need to address.
Comments in your code say that _used is a list of users that have been accessed by the thread. You're apparently creating that on a per-thread basis. If that's true, I don't see how. From the looks of things I'd say that _used is accessible to all threads. I don't see anywhere that you're creating a per-thread version of that list. And the naming convention indicates that it's at class scope.
If that list is not per-thread, that would go a long way towards explaining why your data is always the same. You also have a real race condition here because you're updating the list from multiple threads.
Assuming that _used really is a per-thread data structure...
You have this code:
if (isSync) //isSync = if threads syncroized
{
if (Monitor.TryEnter(users[ind]))
{
try
{
users[ind].FinishSync = users[ind].FinishSync - sum;
}
finally
{
Monitor.Exit(users[ind]);
}
}
}
else
{
lock (users[ind])
{
users[ind].FinishUnsync = users[ind].FinishUnsync - sum;
}
}
Both of these provide synchronization. In the isSync case, a second thread will fail to do its update if a thread already has the user locked. In the second case, the second thread will wait for the first to finish, and then will do the update. In either case, the use of Monitor or lock prevents concurrent update.
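The skip-versus-wait behavior of Monitor.TryEnter can be shown with a tiny, deterministic demo (a sketch; the class and variable names are illustrative):

```csharp
using System;
using System.Threading;

class TryEnterVsLock
{
    static readonly object gate = new object();

    public static void Main()
    {
        bool acquired = true;

        lock (gate)                    // main thread holds the lock
        {
            var t = new Thread(() =>
            {
                // TryEnter with no timeout reports failure immediately
                // instead of waiting, so the "update" would be skipped.
                acquired = Monitor.TryEnter(gate);
                if (acquired) Monitor.Exit(gate);
            });
            t.Start();
            t.Join();                  // joined while we still hold the lock
        }

        Console.WriteLine(acquired ? "acquired" : "skipped"); // prints "skipped"
    }
}
```

A plain lock in the second thread would instead block until the main thread left the lock statement, and the update would always happen, just serially.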
Still, you would potentially see a difference if multiple threads could be executing the isSync code at the same time. But you won't see a difference because in your synchronized case you never let more than one thread execute. That is, you have:
if (IsSync)
{
for (int i = 0; i < _num; i++)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, true); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
_threads[i].Join();
}
}
else
{
for (int i = 0; i < _num; ++i)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, false); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
}
}
So in the IsSync case, you start a thread and then wait for it to complete before you start another thread. Your code is not multithreaded. And in the "unsynchronized" case you're using a lock to prevent concurrent updates. So in one case you prevent concurrent updates by only running one thread at a time, and in the other case you prevent concurrent updates by using a lock. There will be no difference.
Something else worth noting is that your method of randomly selecting a user is highly inefficient, and could be part of the problem you're seeing. Basically what you're doing is picking a random number and checking to see if it's in a list. If it is, you try again, etc. And the list keeps growing. Quick experimentation shows that I have to generate about 7,000 random numbers between 0 and 1,000 before I get all of them. So your threads spend a huge amount of time trying to find the next unused account, making it less likely that they process the same user account at the same time.
You need to do three things. First, change your Withdraw method so it does this:
if (isSync) //isSync = true if threads are synchronized
{
// synchronized. prevent concurrent updates.
lock (users[ind])
{
users[ind].FinishSync = users[ind].FinishSync - sum;
}
}
else
{
// unsynchronized. It's a free-for-all.
users[ind].FinishUnsync = users[ind].FinishUnsync - sum;
}
Your Withdrawing method should be the same regardless of whether IsSync is true or not. That is, it should be:
for (int i = 0; i < _num; ++i)
{
_withdrawers.Add(new Withdrawer(Users.Count, _due, _pause));
_threads.Add(new Thread(delegate()
{ _withdrawers[i].Withdraw(ref Users, _sum, false); }));
_threads[i].Name = i.ToString();
_threads[i].Start();
}
Now you always have multiple threads running. The only difference is whether access to the user account is synchronized.
Finally, make your _used list a list of indexes into the users list. Something like:
_used = new List<int>(users.Count);
for (int i = 0; i < users.Count; ++i)
{
    _used.Add(i);
}
Now, when you select a user, you do this:
var x = rnd.Next(_used.Count);
ind = _used[x];
// now remove the item from _used
_used[x] = _used[_used.Count-1];
_used.RemoveAt(_used.Count-1);
That way you can generate all users more efficiently. It will take n random numbers to generate n users.
A couple of nitpicks:
I have no idea why you have the Thread.Sleep call in the Withdraw method. What benefit do you think it provides?
There's no real reason to pass DateTime.Now.Millisecond to the Random constructor. Just calling new Random() will use Environment.TickCount for the seed. Unless you really want to limit the seed to numbers between 0 and 1,000.

Using Interlocked.CompareExchange to increment a counter until a value

I need to increment a counter until it reaches a particular number. I can use two parallel tasks to increment the number. Instead of using a lock to check that the number has not reached the maximum allowed value before incrementing, I thought of using Interlocked.CompareExchange in the following manner:
public class CompareExchangeStrategy
{
private int _counter = 0;
private int _max;
public CompareExchangeStrategy(int max)
{
_max = max;
}
public void Increment()
{
Task task1 = new Task(new Action(DoWork));
Task task2 = new Task(new Action(DoWork));
task1.Start();
task2.Start();
Task[] tasks = new Task[2] { task1, task2 };
Task.WaitAll(tasks);
}
private void DoWork()
{
while (true)
{
int initial = _counter;
if (initial >= _max)
{
break;
}
int computed = initial + 1;
Interlocked.CompareExchange(ref _counter, computed, initial);
}
}
}
This code takes longer to execute (for _max = 1,000,000) than the lock approach:
public class LockStrategy
{
private int _counter = 0;
private int _max;
private readonly object _lockObject = new object();
public LockStrategy(int max)
{
_max = max;
}
public void Increment()
{
Task task1 = new Task(new Action(DoWork));
Task task2 = new Task(new Action(DoWork));
task1.Start();
task2.Start();
Task[] tasks = new Task[2] { task1, task2 };
Task.WaitAll(tasks);
}
private void DoWork()
{
while (true)
{
lock (_lockObject)
{
if (_counter < _max)
{
_counter++;
}
else
{
break;
}
}
}
}
}
There might be a problem with the way I am using Interlocked.CompareExchange, but I have not been able to figure it out. Is there a better way to perform the above logic without a lock (i.e., with Interlocked methods only)?
Update
I was able to come up with a version which performs as well as the lock version (for iterations = 1,000,000, and better for > 1,000,000 iterations).
SpinWait spinwait = new SpinWait();
int lockFlag = 0; // named lockFlag because 'lock' is a reserved keyword in C#
while (true)
{
    if (Interlocked.CompareExchange(ref lockFlag, 1, 0) != 1)
    {
        if (_counter < _max)
        {
            _counter++;
            Interlocked.Exchange(ref lockFlag, 0);
        }
        else
        {
            Interlocked.Exchange(ref lockFlag, 0);
            break;
        }
    }
    else
    {
        spinwait.SpinOnce();
    }
}
The difference is made by the spin: if the task is unable to increment the variable on the first try, it spins, giving the other task an opportunity to progress instead of performing a busy wait.
I suspect lock does pretty much the same; it could employ a strategy of spinning to let the thread currently owning the lock finish executing.
The problem here is that you are actually doing a lot more work in the Interlocked version - by which I mean more iterations. This is because a lot of the time the CompareExchange isn't doing anything, because the value was changed by the other thread. You can see this by adding a total to each loop:
int total = 0;
while (true)
{
int initial = Thread.VolatileRead(ref _counter);
if (initial >= _max)
{
break;
}
int computed = initial + 1;
Interlocked.CompareExchange(ref _counter, computed, initial);
total++;
}
Console.WriteLine(total);
(note I also added a VolatileRead to ensure _counter isn't held in a register)
I get many more iterations (via total) than you might expect here. The point is that when using Interlocked in this way, you need to add a strategy for what happens if the value has changed, i.e. a retry strategy.
For example, a crude retry strategy might be:
while (true)
{
int initial = Thread.VolatileRead(ref _counter);
if (initial >= _max)
{
break;
}
int computed = initial + 1;
if (Interlocked.CompareExchange(ref _counter, computed, initial)
!= initial) continue;
total++;
}
which is to say: keep retrying until you make it work - any "doing" code would only happen after that check (where the total++ line is currently). This, however, makes the code more expensive.
If lock is cheaper: use lock. There's nothing wrong with lock, and indeed it is very optimized internally. Lock-free is not automatically the same as "fastest" or indeed "simplest".
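If you do want the lock-free route, the bound check and the retry can be folded into one helper; IncrementToMax below is an illustrative name, not an existing BCL API:

```csharp
using System.Threading;

static class BoundedCounter
{
    // Atomically increments 'counter' unless it has already reached
    // 'max'. Returns false once the bound is hit.
    public static bool IncrementToMax(ref int counter, int max)
    {
        while (true)
        {
            int observed = Volatile.Read(ref counter);
            if (observed >= max)
                return false;

            // The CAS succeeds only if nobody changed the counter since
            // we read it; otherwise loop and retry with the fresh value.
            if (Interlocked.CompareExchange(ref counter, observed + 1, observed) == observed)
                return true;
        }
    }
}
```

Each worker then just runs `while (BoundedCounter.IncrementToMax(ref _counter, _max)) { }`, and the counter ends at exactly `_max` regardless of how many threads participate.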
I've managed to achieve almost the same performance as lockstrategy using the following code:
public class CompareExchangeStrategy {
volatile private int _counter = 0;
private int _max;
public CompareExchangeStrategy(int max) {
_max = max;
}
public void Increment() {
Task task1 = new Task(new Action(DoWork));
Task task2 = new Task(new Action(DoWork));
task1.Start();
task2.Start();
Task[] tasks = new Task[2] { task1, task2 };
Task.WaitAll(tasks);
}
private void DoWork() {
while(true) {
if(Interlocked.Add(ref _counter, 1) >= _max)
break;
}
}
}