I have a synchronous method that calls a method which collates a bunch of data on a custom object and stores it in a table entry in a Firebird database located on a server.
On the server, a monitoring process keeps watching that first table for new entries using a database event (a table trigger raises an event which is captured by the monitor). When this event is raised, the data is sent to a third-party black-box service to be processed with a proprietary library, which takes anywhere from nearly nothing up to 1 minute to reply.
The third-party service replies with some data, which is entered into a second table on the database. This second table has another trigger that the client's program monitors. The client's program must either wait until the third party replies with data, or time out (after the same 1 minute).
I'm currently delving into the world of database events and I've reached an impasse:
Currently I have a key press that runs a synchronous method which, according to an application setting, either runs another synchronous method, which runs flawlessly, or a method that inserts an entry into a Firebird database. This database is monitored by another process, which reads that entry, does some work, and inserts the new data into another table.
Back in the main program, the method has an event handler which is triggered when the new data is inserted. However, as it is an event, the rest of the method runs its course and ends prematurely, before the event handler has a chance to read the new data.
In pseudo code:
void MainWindow_KeyDown(object sender, KeyEventArgs e)
{
if (e.Key == X)
{
MakeADecision();
}
}
void MakeADecision()
{
if (Properties.Settings.Default.MySetting) Console.Write(DoLocalStuff());
else Console.Write(DoRemoteStuff());
}
string DoRemoteStuff()
{
using (OldDataTableAdapter)
using (NewDataTableAdapter)
{
OldDataTableAdapter.Insert(OldData);
var revent = new FbRemoteEvent(MyConnectionString);
revent.RemoteEventCounts += (sender, e) =>
{
NewDataTableAdapter.Fill(NewDataDataTable);
NewData = NewDataDataTable[0].MYCOLUMN;
};
revent.QueueEvents("MY_FB_EVENT");
}
return NewData;
}
As you can see, the issue here is that DoRemoteStuff reaches its return before the event can be triggered. I tried turning DoRemoteStuff() into an async method, but I don't know how to use events with async methods. Can anyone please help me with this? Any tips or hints on how to work with async methods?
A possible solution would be to use a TaskCompletionSource so you can convert your method to an async method. This is based on "Is it possible to await an event instead of another async method?".
void MakeADecision()
{
if (Properties.Settings.Default.MySetting)
{
Console.Write(DoLocalStuff());
}
else
{
// Consider making MakeADecision async and awaiting here instead;
// blocking on .Result from a UI thread can deadlock with await continuations.
NewData = DoRemoteStuff().Result;
Console.Write(NewData);
}
}
async Task<string> DoRemoteStuff()
{
Task<string> task;
using (OldDataTableAdapter)
{
OldDataTableAdapter.Insert(OldData);
task = WaitForEvent(MyConnectionString);
}
return await task;
}
private async Task<string> WaitForEvent(string connectionString)
{
var taskCompletionSource = new TaskCompletionSource<string>();
var revent = new FbRemoteEvent(connectionString);
revent.RemoteEventCounts += (sender, e) =>
{
using (NewDataTableAdapter)
{
NewDataTableAdapter.Fill(NewDataDataTable);
string newData = NewDataDataTable[0].MYCOLUMN;
taskCompletionSource.SetResult(newData);
}
((FbRemoteEvent)sender).Dispose();
};
revent.QueueEvents("MY_FB_EVENT");
return await taskCompletionSource.Task;
}
Some things to point out:
You need to explicitly dispose the event to avoid a memory leak
The using for NewDataTableAdapter belongs within the event handler
The MakeADecision method seems like a candidate to be made async as well
A word of warning, my C# is a bit rusty (and I have never done much with async), so I'm not sure if this is the idiomatic way of doing it. I also did not test the code as written above (I wrote and tested a simpler version, but I may have introduced bugs while transforming your code to a similar solution).
This solution may also be subject to a race condition between inserting the new data (which triggers the event) and registering for the event (unless the Dispose at the end of the using block is what commits the data); consider moving the WaitForEvent call before the insert, as sketched below. Also consider the possibility of receiving the event for an update made by another change.
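For illustration, a rough, untested sketch of that reordering, reusing the WaitForEvent helper above (it registers the handler and calls QueueEvents before its first await, so the subscription is in place before the insert):
async Task<string> DoRemoteStuff()
{
    // Subscribe first so MY_FB_EVENT raised by the insert cannot be missed.
    Task<string> newDataTask = WaitForEvent(MyConnectionString);
    using (OldDataTableAdapter)
    {
        OldDataTableAdapter.Insert(OldData);
    }
    return await newDataTask;
}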
Related
I have a number of web posts inside my application that need to send text data to a server, but other than awaiting completion of the post they shouldn't hold up the methods they are called from (they are large data posts that would slow down logic that shouldn't be slowed down).
Currently I'm discarding the task, as that appeared to be the correct method; however, logs on the server end indicate the connection seems to be closed before the data is successfully sent, meaning I'm losing most of the data in transit.
private void DoSomethingandPost()
{
BeforeMethod();
PushWebDataAsync(TheData1);
PushWebDataAsync(TheData2);
AfterMethod();
}
public static async void PushWebDataAsync(string Data)
{
// ...makes changes to the data...
try
{
_ = pushDataAync(Data);
}
catch (Exception e)
{
_ = pushDataAync(Data);
}
}
public System.Threading.Tasks.Task<System.Xml.XmlNode> pushDataAync(string Data)
{
return base.Channel.pushDataAync(Data);
}
My gut feeling is that if AfterMethod returns before the data has completed sending, the connection to the server is cut and so the data isn't fully transmitted.
What I'm trying to achieve is that DoSomethingandPost() completes and exits, but the two async posts continue on their own until complete and then exit.
If AfterMethod must run after the two PushWebDataAsync calls, then make PushWebDataAsync return a Task, make DoSomethingandPost async, and await the push methods before calling AfterMethod. DoSomethingandPost will return at the first await statement and do the rest of its work at some later time. If you want to run the pushes concurrently, then do:
var task1 = PushWebDataAsync(TheData1);
var task2 = PushWebDataAsync(TheData2);
await Task.WhenAll(new []{task1, task2});
...
It is good practice to avoid async void since this makes it impossible for the caller to know if the call succeeded or not. If you know this will never be needed, like in the event handler for a button, then it is good practice to handle any exception that may be thrown.
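To make that concrete, here is a rough, untested sketch of what the Task-returning version might look like, keeping your method names and leaving the WCF plumbing elided as in the question:
private async Task DoSomethingandPost()
{
    BeforeMethod();
    // Start both posts, then wait for both before AfterMethod runs.
    var task1 = PushWebDataAsync(TheData1);
    var task2 = PushWebDataAsync(TheData2);
    await Task.WhenAll(task1, task2);
    AfterMethod();
}
public async Task PushWebDataAsync(string Data)
{
    // ...makes changes to the data...
    try
    {
        await pushDataAync(Data);
    }
    catch (Exception)
    {
        // The retry is awaited as well, so failures are observable to the caller.
        await pushDataAync(Data);
    }
}
If the caller of DoSomethingandPost cannot await it (for example, a UI event handler), that is the one place where async void is commonly tolerated, as mentioned above.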
First of all, my Main is STAThread and I am not able to change this without facing problems with the rest of my code.
I am currently using Rapi2 to pull and push files between my PDA and computer. Since there is quite a bit of number crunching, I would like to do this on a separate thread. The first thing I do is create a RemoteDeviceManager and then add an event handler for when a device connects.
public void Initialize()
{
_deviceManager = new RemoteDeviceManager();
_deviceManager.DeviceConnected += DeviceConnected;
}
As you can see, when my device connects it triggers DeviceConnected.
This is where I end up pulling and pushing a database and doing some number work.
private void DeviceConnected(object sender, RemoteDeviceConnectEventArgs e)
{
if (e.Device == null) return;
// ... (unimportant code)
}
Now the problem here is that I want to run the code inside DeviceConnected in a new thread, but I am unable to access e inside the new thread since it was initialized outside that thread.
So what I tried next was to create a new thread before calling Initialize.
public Watcher()
{
_dataThread = new Thread(Initialize);
_dataThread.IsBackground = true;
_dataThread.Name = "Data Thread";
_dataThread.SetApartmentState(ApartmentState.MTA);
_dataThread.Start();
}
But the thread dies and thus never fires my event handler.
I tried many different ways to make it work or keep my thread alive but without any success. I hope someone here is able to give me some hints.
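Not a full answer, but a minimal, untested sketch of the closure point mentioned above: the values needed from e can be captured into locals on the event thread and handed to a background task (this assumes Task.Run is available, i.e. .NET 4.5+, and that the RemoteDevice object tolerates being used from another thread):
private void DeviceConnected(object sender, RemoteDeviceConnectEventArgs e)
{
    if (e.Device == null) return;
    // Capture what the work needs while still on the event's thread;
    // the lambda closes over the local, not over e itself.
    var device = e.Device;
    Task.Run(() =>
    {
        // ... pull/push the database and do the number crunching with device ...
    });
}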
I'm trying to build a file download actor, using Akka.net. It should send messages on download completion but also report download progress.
In .NET there are classes supporting asynchronous operations using more than one event. For example WebClient.DownloadFileAsync has two events: DownloadProgressChanged and DownloadFileCompleted.
Preferably, one would use the task-based async version and the .PipeTo extension method. But I can't see how that would work with an asynchronous operation that exposes two events, as is the case with WebClient.DownloadFileAsync. Even with WebClient.DownloadFileTaskAsync you still need to handle DownloadProgressChanged using an event handler.
The only way I found to use this was to hook up two event handlers upon creation of my actor. Then, in the handlers, I send messages to Self and the Sender. For this, I must refer to some private fields of the actor from inside the event handlers. This feels wrong to me, but I cannot see another way out.
Is there a safer way to use multiple event handlers in an Actor?
Currently, my solution looks like this (_client is a WebClient instance created in the constructor of the actor):
public void HandleStartDownload(StartDownload message)
{
_self = Self;
_downloadRequestor = Sender;
_uri = message.Uri;
_guid = message.Guid;
_tempPath = Path.GetTempFileName();
_client.DownloadFileAsync(_uri, _tempPath);
}
private void Client_DownloadFileCompleted(object sender, System.ComponentModel.AsyncCompletedEventArgs e)
{
var completedMessage = new DownloadCompletedInternal(_guid, _tempPath);
_downloadRequestor.Tell(completedMessage);
_self.Tell(completedMessage);
}
private void Client_DownloadProgressChanged(object sender, DownloadProgressChangedEventArgs e)
{
var progressedMessage = new DownloadProgressed(_guid, e.ProgressPercentage);
_downloadRequestor.Tell(progressedMessage);
_self.Tell(progressedMessage);
}
So when the download starts, some fields are set. Additionally, I make sure I Become a state where further StartDownload messages are stashed, until the DownloadCompleted message is received by Self:
public void Ready()
{
Receive<StartDownload>(message => {
HandleStartDownload(message);
Become(Downloading);
});
}
public void Downloading()
{
Receive<StartDownload>(message => {
Stash.Stash();
});
Receive<DownloadCompleted>(message => {
Become(Ready);
Stash.UnstashAll();
});
}
For reference, here's the entire Actor, but I think the important stuff is in this post directly: https://gist.github.com/AaronLenoir/4ce5480ecea580d5d283c5d08e8e71b5
I must refer to some private fields of the actor from inside the event handlers. This feels wrong to me, but I cannot see another way out.
Is there a safer way to use multiple event handlers in an Actor?
There's nothing inherently wrong with an actor having internal state, or with members that are part of that state raising events which are handled within the actor. It is no more wrong than it would be in an OO approach.
The only real concern is if that internal state gets mixed between multiple file download requests, but I think your current code is sound.
A possibly more palatable approach may be to look at the FileDownloadActor as a single use actor, fire it up, download the file, tell the result to the sender and then kill the actor. Starting up actors is a cheap operation, and this completely sidesteps the possibility of sharing the internal state between multiple download requests.
Unless of course you specifically need to queue downloads to run sequentially as your current code does - but the queue could be managed by another actor altogether and still treat the download actors as temporary.
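Purely as an illustration of that single-use idea (untested, reusing the message types from the question, and capturing Self and Sender into locals so the WebClient handlers do not rely on mutable actor fields):
public class SingleUseFileDownloadActor : ReceiveActor
{
    public SingleUseFileDownloadActor()
    {
        Receive<StartDownload>(message =>
        {
            var self = Self;            // captured for use inside the callbacks
            var requestor = Sender;
            var tempPath = Path.GetTempFileName();
            var client = new WebClient();
            client.DownloadProgressChanged += (s, e) =>
                requestor.Tell(new DownloadProgressed(message.Guid, e.ProgressPercentage));
            client.DownloadFileCompleted += (s, e) =>
            {
                requestor.Tell(new DownloadCompletedInternal(message.Guid, tempPath));
                client.Dispose();
                // Single use: tell the actor to stop itself once the download finishes.
                self.Tell(PoisonPill.Instance);
            };
            client.DownloadFileAsync(message.Uri, tempPath);
        });
    }
}
A parent (or the queueing actor) would then spawn one of these per download, e.g. Context.ActorOf(Props.Create(() => new SingleUseFileDownloadActor())), and simply let it die when it is done.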
I don't know if that is your case, but I see people treating Actors as micro services when they are simply objects. Remember Actors have internal state.
Now think about scalability: you can't scale messages to one Actor in a distributed Actor System. The messages you're sending to one Actor will be executed on the node executing that Actor.
If you want to execute download operations in parallel (for example), you do as Patrick said and create one Actor per download operation; that Actor can then be executed on any available node.
I'm working on a really big project developed by two teams, one (mainly) for the database, and one (where I am) mainly for the GUI and helper classes as an interface between GUI and DB.
Obviously, there are errors in communication, and - of course - we can't assume 100Mbit bandwidth & super-fast server computer.
Language is C# .NET, target "framework" is WPF and Silverlight.
When a user clicks a button, the GUI asks the DB (through helper classes) for information. Let's say... pizza types. The server should answer "{Funghi, Frutti di mare, Prosciutto}". When the DB sends its answer, we receive a "database.Ready" event and fill our datagrid.
BUT if the user clicks the button while we haven't received the answer yet, the GUI sends another request to the database, and the whole system tries to serve the user.
But it can't, because when the second request is sent, the first is disposed just as we want to read it, so NullReferenceExceptions occur.
I've solved this by implementing a kind of semaphore which closes when user input occurs and opens when the Ready event handler (and the functions it calls) finishes working.
Problem:
If I don't receive the Ready event, no user input is allowed, but this is wrong.
Question:
Is there a common (or at least working) solution to stop waiting for the Ready event and...
1) re-send the request a few times, hoping we receive our pizza types?
AND/OR
2) drop the request, tell the user that the database failed to send the answer, and re-open the semaphore?
I can't post code here as this code is the property of a corporation, I'd rather like to have theoretical solutions, which are okay for professionals too.
Sorry for the long post, and thank you for your answers!
I assume that you are already using a background thread to dispatch the query to the database and wait for its response. You can use the Task API that was introduced in .NET 4.0 to cancel such a request. For that, you pass in a CancellationToken that signals the status to the executing task. You can obtain a CancellationToken from a CancellationTokenSource, as shown in the following code:
public partial class MainWindow : Window
{
private readonly CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
public MainWindow()
{
InitializeComponent();
}
private void Button_CallDatabase(object sender, RoutedEventArgs e)
{
Task.Factory.StartNew(CallDatabase, _cancellationTokenSource.Token);
}
private void Button_OnNavigate(object sender, RoutedEventArgs e)
{
// If you navigate, you can cancel the background task and thus
// it will not execute any further
_cancellationTokenSource.Cancel();
}
private void CallDatabase()
{
// This simulates a DB call
for (var i = 0; i < 50; i++)
{
Thread.Sleep(100);
}
// Check if cancellation was requested
if (_cancellationTokenSource.Token.IsCancellationRequested)
{
Debug.WriteLine("Request cancelled");
return;
}
Debug.WriteLine("Update Controls with DB infos.");
}
}
Note that this example is simplified; you can and should use this in another component (e.g. a view model).
If you still want to use the Ready event, you could also just unregister from it when you navigate away, so that no further actions will be performed when it is raised.
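In code, that second option is just an unsubscription in the navigation/teardown path (the names below are made up, since the actual helper classes weren't shown):
// Hypothetical names: stop listening so a late Ready event is ignored,
// then re-open the input "semaphore".
database.Ready -= Database_Ready;
inputAllowed = true;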
I'm using RavenDB as a denormalized read model populated from domain events. I found a problem: when two events (let's call them Created and Updated) are denormalized at the same time, loading the document to be updated by the Updated event occurs before the changes made by the Created event have been saved. I've come up with a solution based on the Changes API to wait for the document creation:
public static T WaitAndLoad<T>(this IDocumentSession @this, ValueType id)
where T : class
{
var fullId = @this.Advanced.DocumentStore.Conventions.FindFullDocumentKeyFromNonStringIdentifier(id, typeof(T), false);
var ev = new ManualResetEvent(false);
var cancelation = new CancellationTokenSource();
@this.Advanced.DocumentStore
.Changes()
.ForDocument(fullId)
.Subscribe(change =>
{
if (change.Type == DocumentChangeTypes.Put)
{
ev.Set();
}
}, cancelation.Token);
try
{
var existing = @this.Load<T>(id);
if (existing != null)
{
return existing;
}
ev.WaitOne();
return @this.Load<T>(id);
}
finally
{
cancelation.Cancel();
}
}
Unfortunately, the second call to Load returns null, because the Id of the document is already in the knownMissingIds field of InMemoryDocumentSessionOperations and no request to the server is made.
Is there any other way to wait until the document is created?
Well, I'm not sure what mechanism you are using for event processing, but I have been in a similar situation with something like NServiceBus. I don't think this is exactly a RavenDB problem. You would probably have the same issue if you were writing to a SQL Server database.
The generalized problem is, Create and Update events are fired off, but they are received and processed in the wrong order. What to do?
Well the general advice is that your event handlers should be idempotent, and should retry when failed. So if Update is received first, it will throw an exception and be scheduled for retry. Then Create comes through, then Update retries and all is good.
Specifically, blocking and waiting in the handler for the Update event is not advised, because if you had several of these they could block all worker threads and the Create events would never come through.
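As a rough sketch of such an idempotent, retry-friendly handler (the message and document types are hypothetical, and the retry scheduling would normally be done by the bus, e.g. NServiceBus, rather than by hand):
public void Handle(Updated message)
{
    using (var session = _documentStore.OpenSession())
    {
        var doc = session.Load<MyReadModel>(message.Id);
        if (doc == null)
        {
            // The Created event has not been processed yet: fail fast and let the
            // bus retry this message later instead of blocking a worker thread.
            throw new InvalidOperationException("Read model not created yet, will retry.");
        }
        doc.Apply(message); // idempotent update of the denormalized document
        session.SaveChanges();
    }
}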