I have a state machine with two transitions triggered in the same function.
static readonly object _object = new object();

lock (_object)
{
    // I want the Host to receive the SMTrans01 event first.
    Obj.StateMachine.Send((int)MyStateMachine.EventType.SMTrans01, new object[2] { Obj, MyStateMachine.EventType.SMTrans01 });
}
lock (_object)
{
    // And then I want the Host to receive the SMTrans02 event.
    Obj.StateMachine.Send((int)MyStateMachine.EventType.SMTrans02, new object[2] { Obj, MyStateMachine.EventType.SMTrans02 });
}
I implemented my state machine code as above, but I'm not sure whether I understand the lock statement correctly.
I need the events to arrive in the right order: the Host receives SMTrans01 first, and then SMTrans02.
After testing, I found that sometimes the Host receives the SMTrans02 event first, so the lock statement does not seem to work, and I don't know why.
Are there any good ways to approach this?
It looks like your problem has nothing to do with threads and locking.
I suspect that your Send method is asynchronous. The solution is to use a synchronous approach instead. Do not send the second event until you get an acknowledgement that the first event has been handled.
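For example, here is a minimal sketch of that acknowledgement idea, assuming the host can call back into your code once it has processed the first event (the _ack field, OnSMTrans01Handled, and SendInOrder are illustrative names, not part of your existing API):

static readonly ManualResetEventSlim _ack = new ManualResetEventSlim(false);

// Invoked by the host once it has finished handling SMTrans01 (assumed hook).
void OnSMTrans01Handled()
{
    _ack.Set();
}

void SendInOrder()
{
    _ack.Reset();
    Obj.StateMachine.Send((int)MyStateMachine.EventType.SMTrans01, new object[2] { Obj, MyStateMachine.EventType.SMTrans01 });

    // Wait for the host to acknowledge SMTrans01 before sending SMTrans02.
    _ack.Wait();

    Obj.StateMachine.Send((int)MyStateMachine.EventType.SMTrans02, new object[2] { Obj, MyStateMachine.EventType.SMTrans02 });
}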
Alternatively, rewrite your receiving code so that it can handle the events coming out of order.
If you provide the code for Send and describe how your method is being called, that would help in debugging the problem.
If the order matters:
EventType _state = EventType.SMTrans02;

if (_state == EventType.SMTrans02)
{
    _state = EventType.SMTrans01;
    Obj.StateMachine.Send((int)_state, new object[2] { Obj, _state });
}
else
{
    _state = EventType.SMTrans02;
    Obj.StateMachine.Send((int)_state, new object[2] { Obj, _state });
}
For more complex situations you may use a switch block or even the State pattern.
You need a lock only to synchronize the threads that may raise those events.
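A sketch of what that lock looks like when several threads can trigger the sends (reusing the question's names; note that the lock only makes the two sends atomic with respect to other callers, and still cannot force an asynchronous Send to be processed in order by the host):

private static readonly object _sync = new object();

public void SendBothTransitions()
{
    lock (_sync)
    {
        // No other thread can interleave its own sends between these two.
        Obj.StateMachine.Send((int)EventType.SMTrans01, new object[2] { Obj, EventType.SMTrans01 });
        Obj.StateMachine.Send((int)EventType.SMTrans02, new object[2] { Obj, EventType.SMTrans02 });
    }
}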
Assuming the following case:
public Hashtable map = new Hashtable();

public void Cache(String fileName)
{
    if (!map.ContainsKey(fileName))
    {
        map.Add(fileName, new Object());
        _Cache(fileName);
    }
}
private void _Cache(String fileName)
{
    lock (map[fileName])
    {
        if (File Already Cached)   // pseudocode
            return;
        else
        {
            // cache file
        }
    }
}
Given the following consumers:
Task.Run(() => {
    Cache("A");
});

Task.Run(() => {
    Cache("A");
});
Would it be possible in any way for the Cache method to throw a duplicate-key exception, meaning that both tasks hit map.Add and try to add the same key?
Edit:
Would using the following data structure solve this concurrency problem?
public class HashMap<Key, Value>
{
    private HashSet<Key> Keys = new HashSet<Key>();
    private List<Value> Values = new List<Value>();

    public int Count => Keys.Count;

    public Boolean Add(Key key, Value value)
    {
        int oldCount = Keys.Count;
        Keys.Add(key);
        if (oldCount != Keys.Count)
        {
            Values.Add(value);
            return true;
        }
        return false;
    }
}
Yes, of course it would be possible. Consider the following fragment:
if (!map.ContainsKey(fileName))
{
    map.Add(fileName, new Object());
Thread 1 may execute if (!map.ContainsKey(fileName)) and find that the map does not contain the key, so it will proceed to add it, but before it gets the chance to add it, Thread 2 may also execute if (!map.ContainsKey(fileName)), at which point it will also find that the map does not contain the key, so it will also proceed to add it. Of course, that will fail.
EDIT (after clarifications)
So, the problem seems to be how to keep the main map locked for as short a time as possible, and how to prevent cached objects from being initialized twice.
This is a complex problem, so I cannot give you a ready-to-run answer (especially since I do not currently have a C# development environment handy), but generally speaking, I think that you should proceed as follows:
Fully guard your map with lock().
Keep your map locked as little as possible; when an object is not found to be in the map, add an empty object to the map and exit the lock immediately. This will ensure that this map will not become a point of contention for all requests coming in to the web server.
After the check-if-present-and-add-if-not fragment, you are holding an object which is guaranteed to be in the map. However, this object may or may not be initialized at this point. That's fine; we will take care of that next.
Repeat the lock-and-check idiom, this time with the cached object: every single incoming request interested in that specific object will need to lock it, check whether it is initialized, and if not, initialize it. Of course, only the first request will suffer the penalty of initialization. Also, any requests that arrive before the object has been fully initialized will have to wait on their lock until the object is initialized. But that's all very fine; that's exactly what you want. (See the sketch below.)
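A rough sketch of those steps applied to the question's file cache might look like this (the CacheEntry type and the placeholder initialization step are illustrative assumptions, not ready-to-run code):

private sealed class CacheEntry
{
    public bool Initialized; // guarded by locking the entry itself
}

private readonly Dictionary<string, CacheEntry> map = new Dictionary<string, CacheEntry>();
private readonly object mapLock = new object();

public void Cache(string fileName)
{
    CacheEntry entry;

    // Keep the map locked only long enough to find or add the (possibly
    // uninitialized) entry, so the map never becomes a point of contention.
    lock (mapLock)
    {
        if (!map.TryGetValue(fileName, out entry))
        {
            entry = new CacheEntry();
            map.Add(fileName, entry);
        }
    }

    // Lock-and-check again, this time on the entry itself. Only the first
    // request pays the initialization cost; concurrent requests for the
    // same file wait here until initialization completes.
    lock (entry)
    {
        if (!entry.Initialized)
        {
            // ...cache the file here (placeholder)...
            entry.Initialized = true;
        }
    }
}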
I have a web method, upload transaction (an ASMX web service), that takes an XML file, validates it, and stores its content in a SQL Server database. We noticed that a certain user can submit the same file twice at the same time, so we can end up with the same codes twice in our database (we cannot use a unique index or do anything else at the database level; don't ask me why).

I thought I could use the lock statement on the user-id string, but I don't know whether that would solve the issue. Alternatively, I could keep a cached object storing all user-id requests and, if two requests arrive from the same user id, execute the first one and reject the second with an error message.

If anyone has any ideas, please help.
Locking on strings is bad. Blocking your web server's threads is bad.
AsyncLocker is a handy class that I wrote to allow locking on any type that behaves nicely as a key in a dictionary. It also requires asynchronous awaiting before entering the critical section (as opposed to the normal blocking behaviour of locks):
public class AsyncLocker<T>
{
    private LazyDictionary<T, SemaphoreSlim> semaphoreDictionary =
        new LazyDictionary<T, SemaphoreSlim>();

    public async Task<IDisposable> LockAsync(T key)
    {
        var semaphore = semaphoreDictionary.GetOrAdd(key, () => new SemaphoreSlim(1, 1));
        await semaphore.WaitAsync();
        return new ActionDisposable(() => semaphore.Release());
    }
}
It depends on the following two helper classes:
LazyDictionary:
public class LazyDictionary<TKey, TValue>
{
    //here we use Lazy<TValue> as the value in the dictionary
    //to guard against the fact that the initializer function
    //in ConcurrentDictionary.GetOrAdd *can*, under some conditions,
    //run more than once per key, with the result of all but one of
    //the runs being discarded.
    //If this happens, only uninitialized Lazy values are discarded.
    //Only the Lazy that actually made it into the dictionary is
    //materialized by accessing its Value property.
    private ConcurrentDictionary<TKey, Lazy<TValue>> dictionary =
        new ConcurrentDictionary<TKey, Lazy<TValue>>();

    public TValue GetOrAdd(TKey key, Func<TValue> valueGenerator)
    {
        var lazyValue = dictionary.GetOrAdd(key,
            k => new Lazy<TValue>(valueGenerator));
        return lazyValue.Value;
    }
}
ActionDisposable:
public sealed class ActionDisposable : IDisposable
{
    //useful for making arbitrary IDisposable instances
    //that perform an Action when Dispose is called
    //(after a using block, for instance)
    private Action action;

    public ActionDisposable(Action action)
    {
        this.action = action;
    }

    public void Dispose()
    {
        var action = this.action;
        if (action != null)
        {
            action();
        }
    }
}
Now, if you keep a static instance of this somewhere:
static AsyncLocker<string> userLock = new AsyncLocker<string>();
you can use it in an async method, leveraging the delights of LockAsync's IDisposable return type to write a using statement that neatly wraps the critical section:
using (await userLock.LockAsync(userId))
{
    //user with userId only allowed in this section
    //one at a time.
}
If we need to wait before entering, it's done asynchronously, freeing up the thread to service other requests, instead of blocking until the wait is over and potentially messing up your server's performance under load.
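Putting it together with the static userLock above might look like this (a sketch only; UploadTransactionAsync and ValidateAndStoreAsync are illustrative names, and a classic ASMX service would need the Begin/End asynchronous pattern rather than async/await):

public async Task UploadTransactionAsync(string userId, string xmlContent)
{
    using (await userLock.LockAsync(userId))
    {
        // A second request from the same userId awaits here until the
        // first one completes, so duplicate submissions are serialized
        // instead of racing each other into the database.
        // ValidateAndStoreAsync is a hypothetical stand-in for your
        // validation and database write.
        await ValidateAndStoreAsync(userId, xmlContent);
    }
}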
Of course, when you need to scale to more than one webserver, this approach will no longer work, and you'll need to synchronize using a different means (probably via the DB).
There are a great number of articles available regarding thread-safe caching; here's an example:
private static object _lock = new object();

public void CacheData()
{
    SPListItemCollection oListItems;

    oListItems = (SPListItemCollection)Cache["ListItemCacheName"];
    if (oListItems == null)
    {
        lock (_lock)
        {
            // Ensure that the data was not loaded by a concurrent thread
            // while waiting for the lock.
            oListItems = (SPListItemCollection)Cache["ListItemCacheName"];
            if (oListItems == null)
            {
                oListItems = DoQueryToReturnItems();
                Cache.Add("ListItemCacheName", oListItems, ..);
            }
        }
    }
}
However, this example depends on the request for the cache also rebuilding the cache.
I'm looking for a solution where the request and rebuild are separate. Here's the scenario.
I have a web service that I want to monitor for certain types of error. If an error occurs, I create a monitor object and cache it; it is updatable and is locked accordingly during updates. All's well so far.
Elsewhere, I check for the existence of the cached object and the data it contains. This would work straight out of the box except for one particular scenario.
If the cache object is being updated (say, a status change), I would like to wait and get the latest info rather than the current info, which, if returned, would be out of date. So in my fetch code I need to check whether the object is currently being created or updated, and if so, wait and then retry.
As I pointed out, there are many examples of cache-locking patterns, but I can't seem to find one for this scenario. Any ideas as to how to go about this would be appreciated.
You can try the following code, which uses two locks. The write lock in the setter is quite simple and protects the cache from being written by more than one thread. The getter uses a simple double-checked lock.
Now, the trick is in the Refresh() method, which uses the same lock as the getter. The method takes the lock and, as a first step, removes the list from the cache. That causes any getter to fail the first null check and wait for the lock. In the meantime, the method fetches the items, sets the cache again, and releases the lock.
When control comes back to a getter, it reads the cache again, and now it contains the list.
public class CacheData
{
    private static object _readLock = new object();
    private static object _writeLock = new object();

    public SPListItemCollection ListItem
    {
        get
        {
            var oListItems = (SPListItemCollection)Cache["ListItemCacheName"];
            if (oListItems == null)
            {
                lock (_readLock)
                {
                    oListItems = (SPListItemCollection)Cache["ListItemCacheName"];
                    if (oListItems == null)
                    {
                        oListItems = DoQueryToReturnItems();
                        Cache.Add("ListItemCacheName", oListItems, ..);
                    }
                }
            }
            return oListItems;
        }
        set
        {
            lock (_writeLock)
            {
                Cache.Add("ListItemCacheName", value, ..);
            }
        }
    }

    public void Refresh()
    {
        lock (_readLock)
        {
            Cache.Remove("ListItemCacheName");
            var oListItems = DoQueryToReturnItems();
            ListItem = oListItems;
        }
    }
}
You can make the method and property static if you do not need a CacheData instance.
I hope I can word this correctly. I have a WCF service that I'm using with duplex channel communication, in which a client registers with the service. The service's registration method returns a value. I want that registration method to also invoke the callback method that notifies clients of the registration (I have my reasons for this, and explaining them here would only confuse the issue).

The problem is that the client's implemented callback has to run on the main application thread to work correctly (due mostly to integration with a third-party application). The service registration call also occurs on that same thread, so the client effectively locks up: it is waiting for the return value of the registration method, which prevents the callback method from being able to run. If I tell the service to invoke the callbacks for all contexts other than the one just registered, it works just fine; but if I include that context, it obviously locks up, because that thread is already blocked. I can set the callback attribute property UseSynchronizationContext to false, but then the callback method is called on a separate thread from the main one, and the rest of the program will not work. Any help would be greatly appreciated.
Here's basically that registration method (first draft):
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
    UseSynchronizationContext = false,
    ConcurrencyMode = ConcurrencyMode.Multiple,
    Namespace = "http://MyApp/Design/CADServiceTypeLibrary/2012/12")]
public class DTOTransactionService : IDTOTransactionService, IDisposable
{
    //some more stuff

    public CADManager RegisterCADManager(int processID, bool subscribeToMessages)
    {
        List<CADManager> cadMgrs = this.CADManagers;
        bool registered = false;

        //Create new CADManager mapped to process id
        CADManager regCADManager = new CADManager(processID);

        //Add to CADManagers list and subscribe to messages
        if (regCADManager.IsInitialized)
        {
            cadMgrs.Add(regCADManager);
            this.CADManagers = cadMgrs;

            //Subscribe to callbacks
            if (subscribeToMessages)
                SubscribeCallBack(regCADManager.ID);

            registered = true;
        }

        //Send registration change notification
        RegistrationState state;
        if (registered)
            state = RegistrationState.Registered;
        else
            state = RegistrationState.RegistrationException;

        foreach (CallBackSubscriber subscriber in this.CallBackSubscribers)
        {
            subscriber.CallBackProxy.CADManagerRegistrationNotification(regCADManager.ID, state);
        }

        return regCADManager;
    }
}
I think I've got it figured out. Looking a little deeper, what's happening is this: the call to the service method expects a return value, and the callback occurs on the same thread as the client method that is waiting for that return value, which produces the deadlock. I then decided to try invoking the callback methods in the service on a different thread, that is, working around the thread whose method had not yet received a return value from the service. It worked! Was this the right approach? I have enough experience to be dangerous here, so if someone else's experience shows this to be the wrong way to handle it, I'm all ears.
Thread notifyThread = new Thread(delegate()
{
    foreach (CallBackSubscriber subscriber in this.CallBackSubscribers)
    {
        subscriber.CallBackProxy.CADManagerRegistrationNotification(regCADManager.ID, state);
    }
});
notifyThread.Start(); // the thread must be started explicitly
Update:
Yes, the threading and deadlock condition was the issue; however, the more appropriate fix I recently discovered is to use SynchronizationContext. To use it, create a field or property of type SynchronizationContext, and assign it SynchronizationContext.Current while executing in the context you wish to capture. Then use its Post() method (handing it a delegate via a SendOrPostCallback object) in the callback method invoked by the service. A short example:
private SynchronizationContext _appSyncContext = null;

private DTOCommunicationsService()
{
    // Capture the main thread's synchronization context.
    _appSyncContext = SynchronizationContext.Current;

    //Sets up the service proxy, etc, etc
    Open();
}

// Callback method
public void ClientSubscriptionNotification(string clientID, SubscriptionState subscriptionState)
{
    SendOrPostCallback callback = delegate(object state)
    {
        object[] inputArgs = (object[])state;
        string argClientID = (string)inputArgs[0];
        SubscriptionState argSubState = (SubscriptionState)inputArgs[1];

        //Do stuff with arguments
    };

    // Marshal the work back onto the captured (main) context.
    _appSyncContext.Post(callback, new object[] { clientID, subscriptionState });
}
I'd like to use .NET's Lazy<T> class to implement thread-safe caching. Suppose we had the following setup:
class Foo
{
    Lazy<string> cachedAttribute;

    Foo()
    {
        invalidateCache();
    }

    string initCache()
    {
        string returnVal = "";
        //CALCULATE RETURNVAL HERE
        return returnVal;
    }

    public String CachedAttr
    {
        get
        {
            return cachedAttribute.Value;
        }
    }

    void invalidateCache()
    {
        cachedAttribute = new Lazy<string>(initCache, true);
    }
}
My questions are:
Would this work at all?
How would the locking have to work?
I feel like I'm missing a lock somewhere near invalidateCache, but for the life of me I can't figure out where.
I'm sure there's a problem with this somewhere; I just haven't figured out what it is.
[EDIT]
Ok, well it looks like I was right; there were things I hadn't thought about. If a thread sees an outdated cache, that would be a very bad thing, so it looks like Lazy is not safe enough. The property is accessed a lot, though, so I was engaging in premature optimization in the hope that I could learn something and have a pattern to use in the future for thread-safe caching. I'll keep working on it.
P.S.: I decided to make the object thread-unsafe and instead have access to the object be carefully controlled.
Well, it's not thread-safe in the sense that one thread could still see the old value after another thread sees the new value following invalidation, because the first thread might not have seen the change to cachedAttribute. In theory, that situation could perpetuate forever, although it's pretty unlikely :)
Using Lazy<T> as a cache of unchanging values seems like a better idea to me (more in line with how it was intended), but if you can cope with the possibility of using an old "invalidated" value for an arbitrarily long period in another thread, I think this would be okay.
cachedAttribute is a shared resource that needs to be protected from concurrent modification.
Protect it with a lock:
private readonly object gate = new object();

public string CachedAttr
{
    get
    {
        Lazy<string> lazy;
        lock (gate)                        // 1. Lock
        {
            lazy = this.cachedAttribute;   // 2. Get current Lazy<string>
        }                                  // 3. Unlock
        return lazy.Value;                 // 4. Get value of Lazy<string>
                                           //    outside lock
    }
}

void InvalidateCache()
{
    lock (gate)                            // 1. Lock
    {                                      // 2. Assign new Lazy<string>
        cachedAttribute = new Lazy<string>(initCache, true);
    }                                      // 3. Unlock
}
or use Interlocked.Exchange:
void InvalidateCache()
{
    Interlocked.Exchange(ref cachedAttribute, new Lazy<string>(initCache, true));
}
volatile might work as well in this scenario, but it makes my head hurt.
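For reference, the volatile variant is just a field modifier (a sketch only; volatile is permitted here because Lazy<string> is a reference type, and it guarantees that every read observes the most recently assigned instance):

class Foo
{
    // volatile publishes each newly assigned Lazy<string> to all threads,
    // so no lock or Interlocked call is needed just to swap in a fresh
    // cache instance on invalidation.
    private volatile Lazy<string> cachedAttribute;

    void InvalidateCache()
    {
        cachedAttribute = new Lazy<string>(initCache, true);
    }

    string initCache()
    {
        return ""; // recalculate the cached value here
    }
}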