I'm writing a Windows Service that will execute different data import logic, from different data sources, eventually writing everything to a single target: an MS CRM instance. Right now, the only thing I think will be problematic is the writing-to-CRM part. The concurrent reading of data from the different (sometimes identical) data sources shouldn't really be an issue (I may be wrong on this...), so I came up with a way to make sure there are no concurrent writes (creates or updates) to CRM.
Here's the general design for the moment:
What happens when the service starts:
Timers = new List<System.Timers.Timer>();
CrmTransactionQueue.Lock = new object(); //static class; the object used for locking purposes...
System.Threading.Thread.Sleep(20000); //for debugging purpose so I can attach to process before everything kicks in...
//retrieve all types that directly extend BaseSyncStrategy (x.BaseType only matches direct subclasses)
var strategyTypes = Assembly.GetExecutingAssembly().GetTypes().Where(x => x.BaseType == typeof(BaseSyncStrategy));
foreach (Type strategyType in strategyTypes)
{
//create an instance of that type....
var strategy = (BaseSyncStrategy)Activator.CreateInstance(strategyType);
//create a timer for each of these, they will have different intervals...
System.Timers.Timer t = new System.Timers.Timer
{
Interval = strategy.Interval * 1000,
AutoReset = false,
Enabled = true
};
Timers.Add(t);
t.Elapsed += (sender, e) => TimerElapsed(sender, e, strategy);
t.Start();
}
What happens when a timer's interval expires:
private void TimerElapsed(object sender, ElapsedEventArgs e, BaseSyncStrategy strategy)
{
//get the timer back from the sender argument
var timer = (System.Timers.Timer)sender;
try
{
strategy.Execute();
}
catch (Exception ex)
{
Logger.WriteEntry(EventLogEntryType.Error, $"Error executing strategy {strategy.GetType().Name}: ", ex);
}
//restart the timer only after the strategy finishes (AutoReset is false, so it won't re-fire on its own)
timer.Start();
}
And within all the Execute methods of objects extending BaseSyncStrategy, each time I want to update or create something in the target CRM instance, I do this:
XrmServiceContext XrmCtx = new XrmServiceContext();
//....
//code that fetches data from foreign sources and creates CRM entities...
//....
Action<XrmServiceContext> action = (XrmServiceContext ctx) =>
{
//write those created/updated objects
//ctx lets me query entities and write back to CRM...
};
CrmTransactionQueue.Execute(action, XrmCtx);
And the simple code to make sure (I think) no concurrent writes to CRM happen:
public static class CrmTransactionQueue
{
public static object Lock { get; set; }
public static void Execute(Action<XrmServiceContext> transaction, XrmServiceContext Ctx)
{
lock (Lock)
{
transaction.Invoke(Ctx);
}
}
}
Is this a sound design, or is there a better way to do this?
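For comparison, a common alternative to serializing writers with a shared lock is a single dedicated writer thread draining a producer/consumer queue. A minimal sketch, assuming the XrmServiceContext and Logger types from the service above (the CrmWriteQueue name is illustrative):
public static class CrmWriteQueue
{
    //requires System.Collections.Concurrent and System.Threading
    private static readonly BlockingCollection<Action<XrmServiceContext>> Work =
        new BlockingCollection<Action<XrmServiceContext>>();
    static CrmWriteQueue()
    {
        var writer = new Thread(() =>
        {
            //a single consumer thread means writes are serialized without any locking
            foreach (var action in Work.GetConsumingEnumerable())
            {
                using (var ctx = new XrmServiceContext())
                {
                    try { action(ctx); }
                    catch (Exception ex) { Logger.WriteEntry(EventLogEntryType.Error, "CRM write failed: ", ex); }
                }
            }
        }) { IsBackground = true };
        writer.Start();
    }
    public static void Enqueue(Action<XrmServiceContext> transaction) => Work.Add(transaction);
}
Strategies would then call CrmWriteQueue.Enqueue(action) instead of CrmTransactionQueue.Execute(action, XrmCtx); the trade-off is that Enqueue returns before the write has actually happened.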
I've been working on a hobby project developed in C# + Xamarin Forms + Prism + EF Core + Sqlite, debugging in a UWP app.
I've written the following code to store tick data received from the broker in Sqlite.
First, the OnTick call back that receives the ticks (approx. 1 tick per sec per instrument):
private void OnTick(Tick tickData)
{
foreach (var instrument in IntradayInstruments.Where(i => i.InstrumentToken == tickData.InstrumentToken))
{
instrument.UpdateIntradayCandle(tickData);
}
}
And the UpdateIntradayCandle method is:
public void UpdateIntradayCandle(Tick tick)
{
if (LastIntradayCandle != null)
{
if (LastIntradayCandle.Open == 0m)
{
LastIntradayCandle.Open = tick.LastPrice;
}
if (LastIntradayCandle.High < tick.LastPrice)
{
LastIntradayCandle.High = tick.LastPrice;
}
if (LastIntradayCandle.Low == 0m)
{
LastIntradayCandle.Low = tick.LastPrice;
}
else if (LastIntradayCandle.Low > tick.LastPrice)
{
LastIntradayCandle.Low = tick.LastPrice;
}
LastIntradayCandle.Close = tick.LastPrice;
}
}
The LastIntradayCandle is a property:
object _sync = new object();
private volatile IntradayCandle _lastIntradayCandle;
public IntradayCandle LastIntradayCandle
{
get
{
lock (_sync)
{
return _lastIntradayCandle;
}
}
set
{
lock (_sync)
{
_lastIntradayCandle = value;
}
}
}
Now, LastIntradayCandle is replaced periodically, say every 5 minutes: a new candle is put in place for updating, from a different thread, by a System.Threading.Timer scheduled to run every 5 minutes.
public void AddNewIntradayCandle()
{
if (LastIntradayCandle != null)
{
LastIntradayCandle.IsClosed = true;
}
var newIntradayCandle = new IntradayCandle { Open = 0m, High = 0m, Low = 0m, Close = 0m };
LastIntradayCandle = newIntradayCandle;
IntradayCandles.Add(newIntradayCandle);
}
Now, the problem is, I'm getting 0s in Open, High or Low, but never in Close, with Open having the most zeroes. This happens very randomly.
My thinking is that if any of the Open, High, Low or Close values gets updated, the tick must have had a value to grab, yet somehow one or more assignments in the UpdateIntradayCandle method are not running. Having zeroes is a strict NO for the purpose of the app.
I'm neither formally trained as a programmer nor an expert, just a self-taught hobbyist, and I have definitely never attempted multi-threading before.
So, I request you to please point me what I am doing wrong, or better still, what should I be doing to make it work.
Multithreading and EF Core do not mix: an EF Core context is not thread safe, so you have to create a new context for each thread. Making your own objects thread safe is also wasted effort here.
So, schematically, you have to do the following, and you can remove the locks from your object:
private void OnTick(Tick tickData)
{
using var ctx = new MyDbContext(...);
foreach (var instrument in ctx.IntradayInstruments.Where(i => i.InstrumentToken == tickData.InstrumentToken))
{
instrument.UpdateIntradayCandle(tickData);
}
ctx.SaveChanges();
}
I have some code that loads up an AppDomain (call it domain) and calls an object function within that domain. The purpose is to get a list of items from a USB device, using the device API to retrieve the information. The API requires a callback to return the information.
var domain = AppDomain.CreateDomain(
    $"BiometricsDomain{System.IO.Path.GetRandomFileName()}");
var proxy = domain.CreateInstanceAndUnwrap(
    typeof(Proxy).Assembly.FullName,
    typeof(Proxy).FullName ?? throw new InvalidOperationException()) as Proxy;
var ids = proxy.GetIdentifications();
The proxy code loaded into the domain is as follows
public class Proxy : MarshalByRefObject
{
public List<String> GetIdentifications()
{
var control = new R100DeviceControl();
control.OnUserDB += Control_OnUserDB;
control.Open();
int nResult = control.DownloadUserDB(out int count);
// need to be able to return the list here but obviously that is not
// going to work.
}
private void Control_OnUserDB(List<String> result)
{
// Get the list of string from here
}
}
Is there a way to wait on the device and return the information as needed when the callback is called? Since GetIdentifications() has already returned by the time the callback fires, I don't know how to get the list back to the caller.
You can consider wrapping the Event-Based Asynchronous Pattern (EAP) operations as one task by using a TaskCompletionSource<TResult> so that the event can be awaited.
public class Proxy : MarshalByRefObject {
public List<String> GetIdentifications() {
var task = GetIdentificationsAsync();
return task.Result;
}
private Task<List<String>> GetIdentificationsAsync() {
var tcs = new TaskCompletionSource<List<string>>();
try {
var control = new R100DeviceControl();
Action<List<string>> handler = null;
handler = result => {
// Once event raised then set the
// Result property on the underlying Task.
control.OnUserDB -= handler;//optional to unsubscribe from event
tcs.TrySetResult(result);
};
control.OnUserDB += handler;
control.Open();
int count = 0;
//call async event
int nResult = control.DownloadUserDB(out count);
} catch (Exception ex) {
//Bubble the error up to be handled by calling client
tcs.TrySetException(ex);
}
// Return the underlying Task. The client code
// waits on the Result property, and handles exceptions
// in the try-catch block there.
return tcs.Task;
}
}
You can also improve on it by adding the ability to cancel, using a CancellationToken, in case a callback takes longer than expected.
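A minimal sketch of that cancellation hook, assuming the same TaskCompletionSource as above (the token parameter and the 30-second timeout are illustrative additions, not part of the original code):
private Task<List<String>> GetIdentificationsAsync(CancellationToken token) {
    var tcs = new TaskCompletionSource<List<string>>();
    // If the token fires before the device answers, transition the task to Canceled.
    token.Register(() => tcs.TrySetCanceled());
    // ... subscribe to OnUserDB and call DownloadUserDB exactly as above ...
    return tcs.Task;
}
// Caller side: give up after 30 seconds.
// using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30))) {
//     var ids = GetIdentificationsAsync(cts.Token).Result;
// }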
With that in place, the proxy can be called as before (GetIdentifications blocks on the task's Result internally):
List<string> ids = proxy.GetIdentifications();
Reference: How to: Wrap EAP Patterns in a Task
NOTE: Though there may be more elegant solutions to the problem of asynchronous processing, the fact that this occurs in a child AppDomain warrants child AppDomain best practices. (see links below)
i.e.
do not allow code meant for a child AppDomain to be executed in the parent domain
do not allow complex types to bubble to the parent AppDomain
do not allow exceptions to cross AppDomain boundaries in the form of custom exception types
OP:
I am using it for fault tolerance
First I would probably add an Open (or similar) method to give the data time to materialise.
var proxy = domain.CreateInstanceAndUnwrap(
    typeof(Proxy).Assembly.FullName,
    typeof(Proxy).FullName ?? throw new InvalidOperationException()) as Proxy;
proxy.Open(); // <------ new method here
// ... some time later ...
var ids = proxy.GetIdentifications();
Then make these changes in your proxy so that data processing occurs in the background; by the time you call GetIdentifications, data may already be ready.
public class Proxy : MarshalByRefObject
{
ConcurrentBag<string> _results = new ConcurrentBag<string>();
R100DeviceControl _control; // keep the control in a field so the event subscription stays alive
int _nResult;
public void Open()
{
_control = new R100DeviceControl();
_control.OnUserDB += Control_OnUserDB;
_control.Open();
// store nResult (and count, if you need it) in fields rather than locals
_nResult = _control.DownloadUserDB(out int count);
}
public List<String> GetIdentifications()
{
var copy = new List<string>();
while (_results.TryTake(out var x))
{
copy.Add(x);
}
return copy;
}
private void Control_OnUserDB(List<String> result)
{
// Take the strings from the callback and stash them for later retrieval
foreach (var item in result)
    _results.Add(item);
}
}
Now you could probably improve upon GetIdentifications to accept a timeout, in case it is called before any data is ready, or called again before subsequent data arrives.
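A minimal sketch of such a timeout, assuming a simple polling loop over the ConcurrentBag is acceptable (the 100 ms poll interval is an illustrative value):
public List<String> GetIdentifications(TimeSpan timeout)
{
    var deadline = DateTime.UtcNow + timeout;
    var copy = new List<string>();
    // keep draining the bag until something arrives or the deadline passes
    while (copy.Count == 0 && DateTime.UtcNow < deadline)
    {
        while (_results.TryTake(out var x))
        {
            copy.Add(x);
        }
        if (copy.Count == 0)
            System.Threading.Thread.Sleep(100); // avoid a hot loop
    }
    return copy;
}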
More
How to: Run Partially Trusted Code in a Sandbox
Not sure why you just don't maintain a little state and then wait for the results in the call:
public class Proxy : MarshalByRefObject
{
    volatile bool runningCommand;
    int lastResult;
    List<String> lastIdentifications;
    R100DeviceControl deviceControl;
    R100DeviceControl DeviceControl
    {
        get
        {
            if (deviceControl == null)
            {
                deviceControl = new R100DeviceControl();
                deviceControl.OnUserDB += Control_OnUserDB;
            }
            return deviceControl;
        }
    }
    public List<String> GetIdentifications()
    {
        if (runningCommand) return null;
        DeviceControl.Open();
        runningCommand = true;
        lastResult = DeviceControl.DownloadUserDB(out int count);
        // wait for the callback to clear the flag (an event would be cleaner than spinning)
        while (runningCommand) System.Threading.Thread.Sleep(50);
        return lastIdentifications;
    }
    private void Control_OnUserDB(List<String> result)
    {
        // Get the list of strings from here
        lastIdentifications = result;
        runningCommand = false;
    }
}
Once you have a pattern like this you can easily switch between sync and async. Integrating the async logic directly makes the code a little harder to understand; this way you can implement the sync method first and then add an async wrapper if you desire.
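For instance, once the synchronous version works, the async wrapper can be as small as this (a sketch; Task.Run simply moves the blocking call onto the thread pool):
public Task<List<String>> GetIdentificationsAsync()
{
    // offload the blocking call so callers can await it without tying up their own thread
    return Task.Run(() => GetIdentifications());
}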
Basically, I have a "DON'T DO THIS" Sentinel scenario. Because Sentinel is not safe in such a scenario, I've implemented the following:
var main = "192.168.XXX.YY:6379,abortConnect=false";
var backup = "192.168.XXX.YY:6379,abortConnect=false";
IConnectionMultiplexer redis = ConnectionMultiplexer.Connect(main);
redis.ConnectionFailed += (src, args) =>
{
if ((src as ConnectionMultiplexer).Configuration != backup) {
using (var writer = new StringWriter()) {
writer.Write(backup);
(src as ConnectionMultiplexer).Configure(writer);
/**
* Just for checking. It does not save
**/
(src as ConnectionMultiplexer).GetDatabase().StringSet("aaa", "bbb");
}
}
};
So, when my main connection is down, I change the configuration by calling (src as ConnectionMultiplexer).Configure(writer), so that the ConnectionMultiplexer can use the new configuration. However, the ConnectionMultiplexer continues to use the old one.
Question: How can I change ConnectionMultiplexer.Configuration in the ConnectionFailed event?
I looked at the source code of the library, and it seems the desired functionality isn't there. There is an internal Reconfigure method, but it tries to connect to the other servers from the configuration.
I would suggest refactoring, if your application is not very large: make a wrapper over ConnectionMultiplexer and pass the wrapper to the objects that use the connection. Expose a GetConnection method that returns the single shared instance; everyone who needs the connection calls this method instead of storing the connection themselves. Inside the wrapper, subscribe to the ConnectionFailed event with a handler that creates a new connection to the backup.
Not sure if this would be acceptable; it's not exactly switching the config, but more like rebuilding the multiplexer:
private static Lazy<ConnectionMultiplexer> _redisMux = new Lazy<ConnectionMultiplexer>(CreateMultiplexer);
public static IConnectionMultiplexer Multiplexer { get { return _redisMux.Value; } }
private const string Main = "192.168.XXX.YY:6379,abortConnect=false";
private const string Backup = "192.168.XXX.YY:6379,abortConnect=false";
private static string ActiveConfig = Main;
private static ConnectionMultiplexer CreateMultiplexer()
{
var mux = ConnectionMultiplexer.Connect(ActiveConfig);
mux.ConnectionFailed += OnConnectionFailed;
return mux;
}
[MethodImpl(MethodImplOptions.Synchronized)]
private static void OnConnectionFailed(object sender, ConnectionFailedEventArgs e)
{
ActiveConfig = Backup;
try { Multiplexer.Dispose(); } catch { }
_redisMux = new Lazy<ConnectionMultiplexer>(CreateMultiplexer);
}
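Note that callers must always go through the Multiplexer property rather than caching the instance themselves, since the Lazy<ConnectionMultiplexer> is swapped out on failure; the [MethodImpl(MethodImplOptions.Synchronized)] attribute keeps two concurrent failure events from rebuilding the multiplexer twice.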
I am currently using the Change Notifications in Active Directory Domain Services in .NET, as described in this blog. This returns all events that happen on a selected object (or in the subtree of that object). I now want to filter the list of events for creation and deletion (and maybe undeletion) events.
I would like to tell the ChangeNotifier class to only observe create/delete/undelete events. The alternative is to receive all events and filter them on my side. I know that in the case of the deletion of an object, the attribute list that is returned will contain the attribute isDeleted with the value True. But is there a way to see if an event represents the creation of an object? In my tests the value of uSNChanged is always uSNCreated + 1 for user objects, and the two are equal for OUs, but can this be relied on in high-frequency ADs? It is also possible to compare the created and changed timestamps. And how can I tell if an object has been undeleted?
Just for the record, here is the main part of the code from the blog:
public class ChangeNotifier : IDisposable
{
static void Main(string[] args)
{
using (LdapConnection connect = CreateConnection("localhost"))
{
using (ChangeNotifier notifier = new ChangeNotifier(connect))
{
//register some objects for notifications (limit 5)
notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);
notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);
Console.WriteLine("Waiting for changes...");
Console.WriteLine();
Console.ReadLine();
}
}
}
static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
Console.WriteLine(e.Result.DistinguishedName);
foreach (string attrib in e.Result.Attributes.AttributeNames)
{
foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
{
Console.WriteLine("\t{0}: {1}", attrib, item);
}
}
Console.WriteLine();
Console.WriteLine("====================");
Console.WriteLine();
}
LdapConnection _connection;
HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();
public ChangeNotifier(LdapConnection connection)
{
_connection = connection;
_connection.AutoBind = true;
}
public void Register(string dn, SearchScope scope)
{
SearchRequest request = new SearchRequest(
dn, //root the search here
"(objectClass=*)", //very inclusive
scope, //any scope works
null //we are interested in all attributes
);
//register our search
request.Controls.Add(new DirectoryNotificationControl());
//we will send this async and register our callback
//note how we would like to have partial results
IAsyncResult result = _connection.BeginSendRequest(
request,
TimeSpan.FromDays(1), //set timeout to a day...
PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
Notify,
request
);
//store the hash for disposal later
_results.Add(result);
}
private void Notify(IAsyncResult result)
{
//since our search is long running, we don't want to use EndSendRequest
PartialResultsCollection prc = _connection.GetPartialResults(result);
foreach (SearchResultEntry entry in prc)
{
OnObjectChanged(new ObjectChangedEventArgs(entry));
}
}
private void OnObjectChanged(ObjectChangedEventArgs args)
{
if (ObjectChanged != null)
{
ObjectChanged(this, args);
}
}
public event EventHandler<ObjectChangedEventArgs> ObjectChanged;
#region IDisposable Members
public void Dispose()
{
foreach (var result in _results)
{
//end each async search
_connection.Abort(result);
}
}
#endregion
}
public class ObjectChangedEventArgs : EventArgs
{
public ObjectChangedEventArgs(SearchResultEntry entry)
{
Result = entry;
}
public SearchResultEntry Result { get; set; }
}
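For reference, the heuristics described in the question could be sketched inside the change handler roughly like this (untested, and as the answer below explains, not fully reliable):
static void ClassifyChange(SearchResultEntry entry)
{
    //deletion: the isDeleted attribute comes back with the value TRUE
    if (entry.Attributes.Contains("isdeleted") &&
        string.Equals((string)entry.Attributes["isdeleted"][0], "TRUE", StringComparison.OrdinalIgnoreCase))
    {
        Console.WriteLine("deleted: {0}", entry.DistinguishedName);
        return;
    }
    //creation heuristic from the question: uSNCreated and uSNChanged (almost) equal
    if (entry.Attributes.Contains("usncreated") && entry.Attributes.Contains("usnchanged"))
    {
        long created = long.Parse((string)entry.Attributes["usncreated"][0]);
        long changed = long.Parse((string)entry.Attributes["usnchanged"][0]);
        if (changed - created <= 1)
            Console.WriteLine("probably created: {0}", entry.DistinguishedName);
    }
}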
I participated in a design review about five years back on a project that started out using AD change notification. Very similar questions to yours were asked. I can share what I remember, and I don't think things have changed much since then. We ended up switching to DirSync.
It didn't seem possible to get just creates and deletes from AD change notifications. We also found that, when monitoring a large directory, change notification produced enough events that notification processing could bottleneck and fall behind. This API is not designed for scale, but as I recall performance/latency were not the primary reason we switched.
Yes, the USN relationship for new objects generally holds, although I think there are multi-DC scenarios where you can get usncreated == usnchanged for a new user. But we didn't test that extensively, because...
The important thing for us was that change notification only gives you reliable object-creation detection under the unrealistic assumption that your machine is up 100% of the time! In production systems there are always cases where you need to reboot and catch up or re-synchronize, and we switched to DirSync because it has a robust way to handle those scenarios.
In our case, a missed object create could block email to a new user for an indeterminate time. That obviously wouldn't be good; we needed to be sure. For AD change notifications, getting that resync right would have taken more work and been hard to test. For DirSync it's more natural, and there's a fast-path resume mechanism that usually avoids a full resync. For safety, I think we triggered a full re-synchronize every day.
DirSync is not as real-time as change notification, but it's possible to get ~30-second average latency by issuing the DirSync query once a minute.
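For anyone taking the DirSync route, a rough polling sketch with System.DirectoryServices might look like this (the LDAP path is a placeholder, and LoadCookie/SaveCookie are hypothetical persistence helpers; the cookie must survive restarts for catch-up to work):
//run this once a minute
byte[] cookie = LoadCookie(); //hypothetical: read the last DirSync cookie from disk
using (var root = new DirectoryEntry("LDAP://dc=example,dc=net"))
using (var searcher = new DirectorySearcher(root, "(objectClass=user)"))
{
    searcher.DirectorySynchronization = cookie == null
        ? new DirectorySynchronization(DirectorySynchronizationOptions.None)
        : new DirectorySynchronization(DirectorySynchronizationOptions.None, cookie);
    using (SearchResultCollection results = searcher.FindAll())
    {
        foreach (SearchResult result in results)
        {
            //only objects changed since the last cookie come back here
            Console.WriteLine(result.Path);
        }
    }
    SaveCookie(searcher.DirectorySynchronization.GetDirectorySynchronizationCookie()); //hypothetical
}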
I have the following code:
public class EmailJobQueue
{
private EmailJobQueue()
{
}
private static readonly object JobsLocker = new object();
private static readonly Queue<EmailJob> Jobs = new Queue<EmailJob>();
private static readonly object ErroredIdsLocker = new object();
private static readonly List<long> ErroredIds = new List<long>();
public static EmailJob GetNextJob()
{
lock (JobsLocker)
{
lock (ErroredIdsLocker)
{
// If there are no jobs or they have all errored then get some new ones - if jobs have previously been skipped then this will re get them
if (!Jobs.Any() || Jobs.All(j => ErroredIds.Contains(j.Id)))
{
var db = new DBDataContext();
foreach (var emailJob in db.Emailing_SelectSend(1))
{
// Dont re add jobs that exist
if (Jobs.All(j => j.Id != emailJob.Id) && !ErroredIds.Contains(emailJob.Id))
{
Jobs.Enqueue(new EmailJob(emailJob));
}
}
}
while (Jobs.Any())
{
var curJob = Jobs.Dequeue();
// Check the job has not previously errored - if they all have then eventually we will exit the loop
if (!ErroredIds.Contains(curJob.Id))
return curJob;
}
return null;
}
}
}
public static void ReInsertErrored(long id)
{
lock (ErroredIdsLocker)
{
ErroredIds.Add(id);
}
}
}
I then start 10 threads which do this:
var email = EmailJobQueue.GetNextJob();
if (email != null)
{
// Breakpoint here
}
The thing is, if I put a breakpoint where the comment is and add one item to the queue, the breakpoint gets hit multiple times. Is this an issue with my code, or a peculiarity of the VS debugger?
Thanks,
Joe
It appears as if you are getting your jobs from the database:
foreach (var emailJob in db.Emailing_SelectSend(1))
Is that database call marking the records as unavailable for selection in future queries? If not, I believe that's why you're hitting the breakpoint multiple times.
For example, if I replace that call to the database with the following, I see your behavior.
// MockDB is a static configured as `MockDB.Enqueue(new EmailJob{Id = 1})`
private static IEnumerable<EmailJob> GetJobFromDB()
{
return new List<EmailJob>{MockDB.Peek()};
}
However, if I actually Dequeue from the mock db, it only hits the breakpoint once.
private static IEnumerable<EmailJob> GetJobFromDB()
{
var list = new List<EmailJob>();
if (MockDB.Any())
list.Add(MockDB.Dequeue());
return list;
}
This is a side effect of debugging a multi-threaded piece of your application.
You are seeing the breakpoint being hit on each thread. Debugging a multi-threaded piece of the application is tricky because you're actually debugging all threads at the same time. In fact, at times, it will jump between classes while you're stepping through because it's doing different things on all of those threads, depending on your application.
Now, to address whether or not it's thread-safe: that really depends on how you're using the resources on those threads. If you're just reading, it's likely thread-safe. But if you're writing, you'll need to use at least the lock statement on shared objects:
lock (someLockObject)
{
// perform the write operation
}