I need to synchronize a sequence of operations that contains an asynchronous part.
The method looks in an image cache and returns the image if it's there (in reality it invokes a callback). Otherwise it has to download it from the server. The download operation is asynchronous and fires an event on completion.
This is the (simplified) code.
private Dictionary<string, Bitmap> Cache;
public void GetImage(string fileName, Action<Bitmap> onGetImage)
{
if (Cache.ContainsKey(fileName))
{
onGetImage(Cache[fileName]);
}
else
{
var server = new Server();
server.ImageDownloaded += server_ImageDownloaded;
server.DownloadImageAsync(fileName, onGetImage); // last arg is just passed to the handler
}
}
private void server_ImageDownloaded(object sender, ImageDownloadedEventArgs e)
{
Cache.Add(e.Name, e.Bitmap);
var onGetImage = (Action<Bitmap>)e.UserState;
onGetImage(e.Bitmap);
}
The problem: if two threads call GetImage at almost the same time, they will both call the server and both try to add the same image to the cache. What I should do is acquire a lock at the beginning of GetImage and release it at the end of the server_ImageDownloaded handler.
Obviously this is not doable with the lock construct, and it would not make sense anyway, because it would be difficult to guarantee that the lock is released in every case.
What I thought I could do instead is use a lambda in place of the event handler. That way I can put a lock around the whole section: the Cache dictionary is locked at the beginning of GetImage and released only at the end of the ImageDownloaded handler.
private Dictionary<string, Bitmap> Cache;
public void GetImage(string fileName, Action<Bitmap> onGetImage)
{
lock(Cache)
{
if (Cache.ContainsKey(fileName))
{
onGetImage(Cache[fileName]);
}
else
{
var server = new Server();
server.ImageDownloaded += (s, e) =>
{
Cache.Add(e.Name, e.Bitmap);
onGetImage(e.Bitmap);
};
server.DownloadImageAsync(fileName, onGetImage); // last arg is just passed to the handler
}
}
}
Is this safe? Or is the lock released as soon as GetImage returns, leaving the lambda expression unprotected?
Is there a better approach to solve this problem?
SOLUTION
In the end the solution was a bit of a mix of all the answers and comments; unfortunately I cannot mark all of them as the answer. So here is my final code (some null checks/error cases/etc. removed for clarity).
private readonly object ImageCacheLock = new object();
private Dictionary<Guid, BitmapImage> ImageCache { get; set; }
private Dictionary<Guid, List<Action<BitmapImage>>> PendingHandlers { get; set; }
public void GetImage(Guid imageId, Action<BitmapImage> onDownloadCompleted)
{
lock (ImageCacheLock)
{
if (ImageCache.ContainsKey(imageId))
{
// The image is already cached, we can just grab it and invoke our callback.
var cachedImage = ImageCache[imageId];
onDownloadCompleted(cachedImage);
}
else if (PendingHandlers.ContainsKey(imageId))
{
// Someone already started a download for this image: we just add our callback to the queue.
PendingHandlers[imageId].Add(onDownloadCompleted);
}
else
{
// The image is not cached and nobody is downloading it: we add our callback and start the download.
PendingHandlers.Add(imageId, new List<Action<BitmapImage>>() { onDownloadCompleted });
var server = new Server();
server.DownloadImageCompleted += DownloadCompleted;
server.DownloadImageAsync(imageId);
}
}
}
private void DownloadCompleted(object sender, ImageDownloadCompletedEventArgs e)
{
List<Action<BitmapImage>> handlersToExecute = null;
BitmapImage downloadedImage = null;
lock (ImageCacheLock)
{
if (e.Error != null)
{
// ...
}
else
{
// ...
ImageCache.Add(e.imageId, e.bitmap);
downloadedImage = e.bitmap;
}
// Gets a reference to the callbacks that are waiting for this image and removes them from the waiting queue.
handlersToExecute = PendingHandlers[e.imageId];
PendingHandlers.Remove(e.imageId);
}
// If the download was successful, executes all the callbacks that were waiting for this image.
if (downloadedImage != null)
{
foreach (var handler in handlersToExecute)
handler(downloadedImage);
}
}
The lambda expression is converted into a delegate within a lock, but the body of the lambda expression will not automatically acquire the lock for the Cache monitor when the delegate is executed. So you may want:
server.ImageDownloaded += (s, e) =>
{
lock (Cache)
{
Cache.Add(e.Name, e.Bitmap);
}
onGetImage(e.Bitmap);
};
You have another potential problem here. This code:
if (Cache.ContainsKey(fileName))
{
onGetImage(Cache[fileName]);
}
If some other thread removes the image from the cache after your call to ContainsKey but before the next line executes, the indexer will throw a KeyNotFoundException.
If you're using Dictionary in a multi-threaded context where concurrent threads can be reading and writing, then you need to protect every access with a lock of some kind. lock is convenient, but ReaderWriterLockSlim will give better performance when reads greatly outnumber writes.
I would also suggest that you re-code the above to be:
Bitmap bmp;
if (Cache.TryGetValue(fileName, out bmp))
{
onGetImage(bmp);
}
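If you go the ReaderWriterLockSlim route, here is a minimal sketch of what I mean (it assumes the same Cache field; the CacheLock field and the two helper methods are names I'm making up for illustration):
private readonly ReaderWriterLockSlim CacheLock = new ReaderWriterLockSlim(); // System.Threading

private bool TryGetCachedImage(string fileName, out Bitmap bmp)
{
    // Any number of readers can hold the read lock at the same time.
    CacheLock.EnterReadLock();
    try { return Cache.TryGetValue(fileName, out bmp); }
    finally { CacheLock.ExitReadLock(); }
}

private void AddCachedImage(string fileName, Bitmap bmp)
{
    // Writers are exclusive: nothing reads while the dictionary is mutated.
    CacheLock.EnterWriteLock();
    try { Cache[fileName] = bmp; }
    finally { CacheLock.ExitWriteLock(); }
}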
If you're running .NET 4.0, then I would strongly suggest that you look into using ConcurrentDictionary.
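A rough sketch of that idea, assuming the Dictionary field is replaced with a ConcurrentDictionary (System.Collections.Concurrent). Note that this only makes individual lookups and inserts safe; by itself it does not stop two threads from starting the same download, which is what the pending-handlers approach in the accepted solution addresses:
private readonly ConcurrentDictionary<string, Bitmap> Cache = new ConcurrentDictionary<string, Bitmap>();

public void GetImage(string fileName, Action<Bitmap> onGetImage)
{
    Bitmap bmp;
    if (Cache.TryGetValue(fileName, out bmp))
    {
        onGetImage(bmp);
        return;
    }
    var server = new Server();
    server.ImageDownloaded += (s, e) =>
    {
        // TryAdd is simply a no-op if another thread already cached this image.
        Cache.TryAdd(e.Name, e.Bitmap);
        onGetImage(e.Bitmap);
    };
    server.DownloadImageAsync(fileName, onGetImage); // userState kept to match the original signature
}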
Why don't you just keep a collection of the image filenames that are currently being downloaded, and have the code for a thread be:
public void GetImage(string fileName, Action<Bitmap> onGetImage)
{
lock(Cache)
{
if (Cache.ContainsKey(fileName))
{
onGetImage(Cache[fileName]);
}
else if (downloadingCollection.Contains(fileName))
{
while (!Cache.ContainsKey(fileName))
{
System.Threading.Monitor.Wait(Cache);
}
onGetImage(Cache[fileName]);
}
else
{
var server = new Server();
downloadingCollection.Add(fileName);
server.ImageDownloaded += (s, e) =>
{
lock (Cache)
{
downloadingCollection.Remove(fileName);
Cache.Add(e.Name, e.Bitmap);
System.Threading.Monitor.PulseAll(Cache);
}
onGetImage(e.Bitmap);
};
server.DownloadImageAsync(fileName, onGetImage); // last arg is just passed to the handler
}
}
}
That is more or less the standard monitor pattern, or it would be if you refactored the lambda expression into a member function, the same way GetImage is one. You should really do that: it will make the monitor logic easier to reason about.
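For instance, the lambda could become a named handler along these lines (just a sketch; it reuses the UserState mechanism from your original server_ImageDownloaded, and GetImage would subscribe it with server.ImageDownloaded += ImageDownloaded):
private void ImageDownloaded(object sender, ImageDownloadedEventArgs e)
{
    var onGetImage = (Action<Bitmap>)e.UserState;
    lock (Cache)
    {
        downloadingCollection.Remove(e.Name);
        Cache.Add(e.Name, e.Bitmap);
        // Wake up any threads waiting for this image inside GetImage.
        System.Threading.Monitor.PulseAll(Cache);
    }
    onGetImage(e.Bitmap);
}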
Related
I have some code that loads up an AppDomain (call it domain) and calls an object method within that domain. The purpose is to get a list of items from a USB device, using the device API to retrieve the information. The API requires a callback to return the information.
var domain = AppDomain.CreateDomain(
    $"BiometricsDomain{System.IO.Path.GetRandomFileName()}");
var proxy = domain.CreateInstanceAndUnwrap(typeof(Proxy).Assembly.FullName, typeof(Proxy).FullName
    ?? throw new InvalidOperationException()) as Proxy;
var ids = proxy.GetIdentifications();
The proxy code loaded into the domain is as follows
public class Proxy : MarshalByRefObject
{
public List<String> GetIdentifications()
{
var control = new R100DeviceControl();
control.OnUserDB += Control_OnUserDB;
control.Open();
int nResult = control.DownloadUserDB(out int count);
// need to be able to return the list here but obviously that is not
// going to work.
}
private void Control_OnUserDB(List<String> result)
{
// Get the list of string from here
}
}
Is there a way to wait on the device and return the information as needed when the callback is called? Since GetIdentifications() has already returned by then, I don't know how to get the list back to the caller.
You can consider wrapping the Event-Based Asynchronous Pattern (EAP) operations as one task by using a TaskCompletionSource<TResult> so that the event can be awaited.
public class Proxy : MarshalByRefObject {
public List<String> GetIdentifications() {
var task = GetIdentificationsAsync();
return task.Result;
}
private Task<List<String>> GetIdentificationsAsync() {
var tcs = new TaskCompletionSource<List<string>>();
try {
var control = new R100DeviceControl();
Action<List<string>> handler = null;
handler = result => {
// Once event raised then set the
// Result property on the underlying Task.
control.OnUserDB -= handler;//optional to unsubscribe from event
tcs.TrySetResult(result);
};
control.OnUserDB += handler;
control.Open();
int count = 0;
//call async event
int nResult = control.DownloadUserDB(out count);
} catch (Exception ex) {
//Bubble the error up to be handled by calling client
tcs.TrySetException(ex);
}
// Return the underlying Task. The client code
// waits on the Result property, and handles exceptions
// in the try-catch block there.
return tcs.Task;
}
}
You can also improve on it by adding the ability to cancel, using a CancellationToken, for longer-than-expected callbacks.
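A sketch of that improvement (the method shape and the 30-second timeout are assumptions of mine, not part of the pattern itself):
private Task<List<String>> GetIdentificationsAsync(CancellationToken token) {
    var tcs = new TaskCompletionSource<List<string>>();
    // If the token fires (e.g. a timeout), the waiting caller is released with a cancellation.
    token.Register(() => tcs.TrySetCanceled());
    try {
        var control = new R100DeviceControl();
        Action<List<string>> handler = null;
        handler = result => {
            control.OnUserDB -= handler;
            tcs.TrySetResult(result);
        };
        control.OnUserDB += handler;
        control.Open();
        int count = 0;
        control.DownloadUserDB(out count);
    } catch (Exception ex) {
        tcs.TrySetException(ex);
    }
    return tcs.Task;
}
// Usage: give the device 30 seconds before giving up.
// var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
// var ids = GetIdentificationsAsync(cts.Token).Result;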
With that in place the proxy can be called as before; the call blocks on the task's Result until the event fires:
List<string> ids = proxy.GetIdentifications();
Reference How to: Wrap EAP Patterns in a Task
NOTE: Though there may be more elegant solutions to the problem of asynchronous processing, the fact that this occurs in a child AppDomain warrants following child AppDomain best practices (see the link below), i.e.:
do not allow code meant for a child AppDomain to be executed in the parent domain
do not allow complex types to bubble to the parent AppDomain
do not allow exceptions to cross AppDomain boundaries in the form of custom exception types
OP:
I am using it for fault tolerance
First I would probably add an Open (or similar) method to give the data time to materialise.
var proxy = domain.CreateInstanceAndUnwrap(typeof(Proxy).Assembly.FullName, typeof(Proxy).FullName
    ?? throw new InvalidOperationException()) as Proxy;
proxy.Open(); // <------ new method here
// ...
// ...some time later...
// ...
var ids = proxy.GetIdentifications();
Then in your proxy make these changes so that the data processing occurs in the background and, by the time you call GetIdentifications, the data may already be there.
public class Proxy : MarshalByRefObject
{
ConcurrentBag<string> _results = new ConcurrentBag<string>();
public void Open()
{
var control = new R100DeviceControl();
control.OnUserDB += Control_OnUserDB;
control.Open();
// you may need to store nResult and count in a field?
var nResult = control.DownloadUserDB(out int count);
}
public List<String> GetIdentifications()
{
var copy = new List<string>();
while (_results.TryTake(out var x))
{
copy.Add(x);
}
return copy;
}
private void Control_OnUserDB(List<String> result)
{
    // Copy the strings delivered by the callback into the bag.
    foreach (var item in result)
    {
        _results.Add(item);
    }
}
}
Now you could probably improve upon GetIdentifications to accept a timeout, in case it is called before any data is ready, or in case you call it again before subsequent data has arrived.
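Something like the following, purely as a sketch (the 100 ms poll interval is an arbitrary choice):
public List<String> GetIdentifications(TimeSpan timeout)
{
    var deadline = DateTime.UtcNow + timeout;
    var copy = new List<string>();
    do
    {
        // Drain whatever the callback has delivered so far.
        while (_results.TryTake(out var x))
        {
            copy.Add(x);
        }
        if (copy.Count > 0) break;
        System.Threading.Thread.Sleep(100);
    } while (DateTime.UtcNow < deadline);
    return copy;
}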
More
How to: Run Partially Trusted Code in a Sandbox
Not sure why you just don't maintain a little state and then wait for the results in the call:
public class Proxy : MarshalByRefObject
{
bool runningCommand;
int lastResult;
List<String> lastUserDb;
R100DeviceControl deviceControl;
R100DeviceControl DeviceControl { get { if (deviceControl == null) { deviceControl = new R100DeviceControl(); deviceControl.OnUserDB += Control_OnUserDB; } return deviceControl; } }
public List<String> GetIdentifications()
{
    if (runningCommand) return null;
    DeviceControl.Open();
    runningCommand = true;
    lastResult = DeviceControl.DownloadUserDB(out int count);
    // Wait until the callback has delivered the data before returning it.
    while (runningCommand) System.Threading.Thread.Sleep(50);
    return lastUserDb;
}
private void Control_OnUserDB(List<String> result)
{
    // Get the list of string from here
    lastUserDb = result;
    runningCommand = false;
}
}
Once you have a pattern like this you can easily switch between sync and async. Before, it was a little harder to follow because the async logic was mixed in; this way you can implement the sync method and then add an async wrapper if you desire.
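For example, the async wrapper can be as thin as this (a sketch; it just pushes the blocking call onto the thread pool):
public Task<List<String>> GetIdentificationsAsync()
{
    return Task.Run(() => GetIdentifications());
}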
Basically, I have a "DON'T DO THIS" Sentinel scenario. Because Sentinel is not safe in such scenario, I've implemented the following
var main = "192.168.XXX.YY:6379,abortConnect=false";
var backup = "192.168.XXX.YY:6379,abortConnect=false";
IConnectionMultiplexer redis = ConnectionMultiplexer.Connect(main);
redis.ConnectionFailed += (src, args) =>
{
if ((src as ConnectionMultiplexer).Configuration != backup) {
using (var writer = new StringWriter()) {
writer.Write(backup);
(src as ConnectionMultiplexer).Configure(writer);
/**
* Just for checking. It does not save
**/
(src as ConnectionMultiplexer).GetDatabase().StringSet("aaa", "bbb");
}
}
};
So, when my main connection is down, I change the configuration by calling (src as ConnectionMultiplexer).Configure(writer), so that the ConnectionMultiplexer can use the new configuration. However, the ConnectionMultiplexer continues to use the old one.
Question: how can I change the ConnectionMultiplexer configuration from within the ConnectionFailed event?
I looked at the source code of the library, and the desired functionality does not seem to exist. There is an internal Reconfigure method, but it only tries to connect to the other servers already in the configuration.
I would suggest refactoring, if your application is not very large. Make a wrapper over ConnectionMultiplexer and pass the wrapper to the objects that use the connection. We wrap a GetConnection method that returns the one shared connection object; everyone who needs the connection calls this method instead of storing the connection. Inside the wrapper, subscribe to ConnectionFailed with a handler that creates a new connection to the backup.
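A rough sketch of such a wrapper, assuming a main/backup pair of configuration strings (the class and member names are only illustrative):
public class RedisConnectionWrapper
{
    private readonly object _sync = new object();
    private readonly string _backup;
    private ConnectionMultiplexer _mux;

    public RedisConnectionWrapper(string main, string backup)
    {
        _backup = backup;
        _mux = Create(main);
    }

    // Callers always ask the wrapper for the connection instead of storing it themselves.
    public ConnectionMultiplexer GetConnection()
    {
        lock (_sync) { return _mux; }
    }

    private ConnectionMultiplexer Create(string config)
    {
        var mux = ConnectionMultiplexer.Connect(config);
        mux.ConnectionFailed += (s, e) =>
        {
            // On failure, rebuild the multiplexer against the backup endpoint.
            lock (_sync)
            {
                _mux = Create(_backup);
            }
        };
        return mux;
    }
}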
Not sure if that would be acceptable; it is not exactly switching the config, but more like rebuilding the multiplexer:
private static Lazy<IConnectionMultiplexer> _redisMux = new Lazy<IConnectionMultiplexer>(CreateMultiplexer);
public static IConnectionMultiplexer Multiplexer { get { return _redisMux.Value; } }
private const string Main = "192.168.XXX.YY:6379,abortConnect=false";
private const string Backup = "192.168.XXX.YY:6379,abortConnect=false";
private static string ActiveConfig = Main;
private static ConnectionMultiplexer CreateMultiplexer()
{
var mux = ConnectionMultiplexer.Connect(ActiveConfig);
mux.ConnectionFailed += OnConnectionFailed;
return mux;
}
[MethodImpl(MethodImplOptions.Synchronized)]
private static void OnConnectionFailed(object sender, ConnectionFailedEventArgs e)
{
ActiveConfig = Backup;
try { Multiplexer.Dispose(); } catch { }
_redisMux = new Lazy<IConnectionMultiplexer>(CreateMultiplexer);
}
I'm writing a Windows Service that will execute different data-import logic, from different data sources, and eventually write everything to a single target: an MS CRM instance. Right now, the only thing I think will be problematic is the writing-to-CRM part. The concurrent reading of data from different (sometimes the same) data sources shouldn't really be an issue (I may be wrong on this...), so I came up with a way to make sure there are no concurrent writes (creates or updates) to CRM.
Here's the general design for the moment:
What happens when the service starts:
Timers = new List<System.Timers.Timer>();
CrmTransactionQueue.Lock = new object(); //Static class. The object for locking purposes...
System.Threading.Thread.Sleep(20000); //for debugging purpose so I can attach to process before everything kicks in...
//retrieve all types that are extending BaseSyncStrategy..
var strategyTypes = Assembly.GetExecutingAssembly().GetTypes().Where(x => x.BaseType == typeof(BaseSyncStrategy));
foreach (Type strategyType in strategyTypes)
{
//create a instance of that type....
var strategy = (BaseSyncStrategy)Activator.CreateInstance(strategyType);
//create a timer for each of these, they will have different intervals...
System.Timers.Timer t = new System.Timers.Timer
{
Interval = strategy.Interval * 1000,
AutoReset = false,
Enabled = true
};
Timers.Add(t);
t.Elapsed += (sender, e) => TimerElapsed(sender, e, strategy);
t.Start();
}
What happens when a timer's interval expires:
private void TimerElapsed(object sender, ElapsedEventArgs e, BaseSyncStrategy strategy)
{
//get timer back
var timer = (Timer)sender;
try
{
strategy.Execute();
}
catch (Exception ex)
{
Logger.WriteEntry(EventLogEntryType.Error, $"Error executing strategy {strategy.GetType().Name}: ", ex);
}
timer.Start();
}
And within all the Execute methods of objects extending BaseSyncStrategy, each time I want to update or create something in the target CRM instance, I do this:
XrmServiceContext XrmCtx = new XrmServiceContext();
//....
//code that fetches data from foreign sources and creates CRM entities...
//....
Action<XrmServiceContext> action = (XrmServiceContext ctx) =>
{
//write those created/updated objects
//ctx lets me query entities and write back to CRM...
};
CrmTransactionQueue.Execute(action, XrmCtx);
And the simple code to make sure (I think) that no concurrent writes to CRM happen:
public static class CrmTransactionQueue
{
public static object Lock { get; set; }
public static void Execute(Action<XrmServiceContext> transaction, XrmServiceContext Ctx)
{
lock (Lock)
{
transaction.Invoke(Ctx);
}
}
}
Is this a sound design, or is there a better way to do this?
I am trying to resolve the dreaded .WindowsPhone.exe!{<ID>}_Quiesce_Hang issue in my WinRT (Windows Phone 8.1) app.
At the moment I have the following handling of the Windows.UI.Xaml.Application.Suspending event:
private void App_Suspending(object iSender, SuspendingEventArgs iArgs)
{
SuspendingDeferral clsDeferral = null;
object objLock = new object();
try
{
clsDeferral = iArgs.SuspendingOperation.GetDeferral();
DateTimeOffset clsDeadline = iArgs.SuspendingOperation.Deadline;
//This task is to ensure that the clsDeferral.Complete()
//is called before the deadline.
System.Threading.Tasks.Task.Run(
async delegate
{
//Reducing the timeout by 1 second just in case.
TimeSpan clsTimeout = clsDeadline.Subtract(DateTime.UtcNow).Subtract(TimeSpan.FromSeconds(1));
if (clsTimeout.TotalMilliseconds > 100)
{
await System.Threading.Tasks.Task.Delay(clsTimeout);
}
DeferrerComplete(objLock, ref clsDeferral);
});
//Here I execute the suspending code i.e. I serializing the app
//state and save it in files. This may take more than clsTimeout
//on some devices.
...
//I do not call the Complete method here because the above
//suspending code is old-fashioned asynchronous, i.e. not async but
//returns before the job is done.
//DeferrerComplete(objLock, ref clsDeferral);
}
catch
{
DeferrerComplete(objLock, ref clsDeferral);
}
}
private static void DeferrerComplete(object iLock, ref SuspendingDeferral ioDeferral)
{
lock (iLock)
{
if (ioDeferral != null)
{
try
{
ioDeferral.Complete();
}
catch
{
}
ioDeferral = null;
}
}
}
I have read the answer about the _Quiesce_Hang problem. I get the idea that it might be related to app storage activity. So my question is: what am I missing? Does my handling of the Suspending event look OK?
I have the following code:
public class EmailJobQueue
{
private EmailJobQueue()
{
}
private static readonly object JobsLocker = new object();
private static readonly Queue<EmailJob> Jobs = new Queue<EmailJob>();
private static readonly object ErroredIdsLocker = new object();
private static readonly List<long> ErroredIds = new List<long>();
public static EmailJob GetNextJob()
{
lock (JobsLocker)
{
lock (ErroredIdsLocker)
{
// If there are no jobs or they have all errored then get some new ones - if jobs have previously been skipped then this will re-get them
if (!Jobs.Any() || Jobs.All(j => ErroredIds.Contains(j.Id)))
{
var db = new DBDataContext();
foreach (var emailJob in db.Emailing_SelectSend(1))
{
// Don't re-add jobs that already exist
if (Jobs.All(j => j.Id != emailJob.Id) && !ErroredIds.Contains(emailJob.Id))
{
Jobs.Enqueue(new EmailJob(emailJob));
}
}
}
while (Jobs.Any())
{
var curJob = Jobs.Dequeue();
// Check the job has not previously errored - if they all have then eventually we will exit the loop
if (!ErroredIds.Contains(curJob.Id))
return curJob;
}
return null;
}
}
}
public static void ReInsertErrored(long id)
{
lock (ErroredIdsLocker)
{
ErroredIds.Add(id);
}
}
}
I then start 10 threads which do this:
var email = EmailJobQueue.GetNextJob();
if (email != null)
{
// Breakpoint here
}
The thing is that if I put a breakpoint where the comment is and add one item to the queue, then the breakpoint gets hit multiple times. Is this an issue with my code or a peculiarity of the VS debugger?
Thanks,
Joe
It appears as if you are getting your jobs from the database:
foreach (var emailJob in db.Emailing_SelectSend(1))
Is that database call marking the records as unavailable for selection in future queries? If not, I believe that's why you're hitting the breakpoint multiple times.
For example, if I replace that call to the database with the following, I see your behavior.
// MockDB is a static configured as `MockDB.Enqueue(new EmailJob{Id = 1})`
private static IEnumerable<EmailJob> GetJobFromDB()
{
return new List<EmailJob>{MockDB.Peek()};
}
However, if I actually Dequeue from the mock db, it only hits the breakpoint once.
private static IEnumerable<EmailJob> GetJobFromDB()
{
var list = new List<EmailJob>();
if (MockDB.Any())
list.Add(MockDB.Dequeue());
return list;
}
This is a side effect of debugging a multi-threaded piece of your application.
You are seeing the breakpoint being hit on each thread. Debugging a multi-threaded piece of the application is tricky because you're actually debugging all threads at the same time. In fact, at times, it will jump between classes while you're stepping through because it's doing different things on all of those threads, depending on your application.
Now, to address whether or not it's thread-safe: that really depends on how you're using the resources on those threads. If you're just reading, it's likely thread-safe. But if you're writing, you'll need at least a lock on shared objects:
lock (someLockObject)
{
// perform the write operation
}