I have the following actor, where I am trying to restart it and resend the failing message back to it:
public class BuildActor : ReceivePersistentActor
{
    public override string PersistenceId => "asdad3333";

    private readonly IActorRef _nextActorRef;

    public BuildActor(IActorRef nextActorRef)
    {
        _nextActorRef = nextActorRef;
        Command<Workload>(x => Build(x));
        RecoverAny(workload =>
        {
            Console.WriteLine("Recovering");
        });
    }

    public void Build(Workload Workload)
    {
        var context = Context;
        var self = Self;
        Persist(Workload, async x =>
        {
            //after this line executes
            //application goes into break mode
            //does not execute PreStart or Recover
            var workload = await BuildTask(Workload);
            _nextActorRef.Tell(workload);
            context.Stop(self);
        });
    }

    private Task<Workload> BuildTask(Workload Workload)
    {
        //works as expected if method made synchronous
        return Task.Run(() =>
        {
            //simulate exception
            if (Workload.ShowException)
            {
                throw new Exception();
            }
            return Workload;
        });
    }

    protected override void PreRestart(Exception reason, object message)
    {
        if (message is Workload workload)
        {
            Console.WriteLine("Prestart");
            workload.ShowException = false;
            Self.Tell(message);
        }
    }
}
Inside the success handler of Persist I am trying to simulate an exception being thrown, but on exception the application goes into break mode and the PreRestart hook is not invoked. However, if I make the BuildTask method synchronous by removing Task.Run, then on exception both the PreRestart and Recover<T> methods are invoked.
I would really appreciate it if someone could point me to the recommended pattern for this and tell me where I am going wrong.
Most probably, Akka.Persistence is not the right solution for your problem here.
Akka.Persistence uses event-sourcing principles for storing an actor's state. A few key points are important in this context:
What you're sending to the actor is a command. It describes a job you want done. Executing that command may involve some actual processing and may eventually lead to persisting the actor's linear state-change history in the form of events.
In Akka.NET the Persist method is used only to store events. Events describe the fact that something has already happened; because of that, they cannot be denied and they cannot fail (which is exactly what you're doing in your Persist callback).
When an actor restarts at any point in time, it will always try to recreate its own state by replaying all the events persisted up to the last known point in time. For this reason it's important that the Recover method focuses only on rebuilding the actor's state (it can be called multiple times over the same event) and never produces side effects (an example of a side effect is sending an email). Any exception thrown there means the actor's state is irrecoverably corrupted and the actor will be killed.
If you want to resend the message to your actor, you could:
Put a reliable message queue (e.g. RabbitMQ or Azure Service Bus) or log (Kafka or Event Hub) in front of your actor processing pipeline. This is actually the most reasonable option in many cases.
Use the at-least-once delivery semantics from Akka.Persistence - but IMHO only if for some reason you cannot use the first solution.
The most simplistic and unreliable option (since messages reside only in memory and are never persisted) is the dead-letter queue. Every unhandled message is sent there; you can subscribe to it and filter the incoming data to detect which messages should be sent again to their recipients (see the sketch below).
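A minimal sketch of that dead-letter option, assuming a dedicated monitor actor that re-sends Workload messages it finds in dead letters (the DeadLetterMonitor name and the naive re-send policy are illustrative, not part of the original question):
using Akka.Actor;
using Akka.Event;

// Subscribes to the system's dead letters and re-sends Workload messages
// to their original recipient. Purely in-memory and best-effort.
public class DeadLetterMonitor : ReceiveActor
{
    public DeadLetterMonitor()
    {
        Receive<DeadLetter>(dead =>
        {
            if (dead.Message is Workload workload)
            {
                // Naive re-send; a real implementation should cap the number of retries.
                dead.Recipient.Tell(workload, dead.Sender);
            }
        });
    }

    protected override void PreStart() =>
        Context.System.EventStream.Subscribe(Self, typeof(DeadLetter));

    protected override void PostStop() =>
        Context.System.EventStream.Unsubscribe(Self, typeof(DeadLetter));
}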
I am creating a Windows service in C#. Its purpose is to consume info from a feed on the Internet. I get the data by using zeromq's pub/sub architecture (my service is a subscriber only). To debug the service I "host" it in a WPF control panel. This allows me to start, run, and stop the service without having to install it. The problem I am seeing is that when I call my stop method it appears as though the service continues to write to the database. I know this because I put a Debug.WriteLine() where the writing occurs.
More info on the service:
I am attempting to construct my service in a fashion that allows it to write to the database asynchronously. This is accomplished by using a combination of threads and the ThreadPool.
public void StartDataReceiver() // Entry point to service from WPF host
{
    // setup zmq subscriber socket
    receiverThread = new Thread(SpawnReceivers);
    receiverThread.Start();
}

internal void SpawnReceivers()
{
    while (!stopEvent.WaitOne(0))
    {
        // subscriber.Recv() blocks when there is no data to receive (according to the
        // zmq docs), so this loop should remain under control and work items are only
        // queued in the pool when there is data to process.
        ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessReceivedData), subscriber.Recv());
    }
}

internal void ProcessReceivedData(Object recvdData)
{
    // cast recvdData from object -> byte[]
    // convert byte[] -> JSON string
    // deserialize JSON -> MyData
    using (MyDataEntities context = new MyDataEntities())
    {
        // build up EF model object
        Debug.WriteLine("Write obj to db...");
        context.MyDatas.Add(myEFModel);
        context.SaveChanges();
    }
}

internal void QData(Object recvdData)
{
    Debug.WriteLine("Queued obj in queue...");
    q.Enqueue((byte[])recvdData);
}

public void StopDataReceiver()
{
    stopEvent.Set();
    receiverThread.Join();
    subscriber.Dispose();
    zmqContext.Dispose();
    stopEvent.Reset();
}
The above code shows the methods I am concerned with. When I debug the WPF host with ProcessReceivedData set as the method queued to the thread pool, everything seems to work as expected until I stop the service by calling StopDataReceiver. As far as I can tell, the thread pool never queues any more work items (I checked this by placing a breakpoint on that line), but I continue to see "Write obj to db..." in the output window, and when I 'Break All' in the debugger a little green arrow appears on the context.SaveChanges(); line, indicating that is where execution is currently halted. When I test some more and have the thread pool queue up the method QData instead, everything works as expected: I see "Queued obj in queue..." messages in the output window until I stop the service, and once I do, no more messages appear.
TL;DR:
I don't know how to determine whether Entity Framework is just slowing things way down and the messages I am seeing are simply the thread pool clearing its backlog of work items, or whether there is something larger at play. How do I go about diagnosing something like this?
Would a better solution be to queue the incoming JSON strings as byte[], like I do in the QData method, and then have the thread pool queue up a different method to work on clearing that queue? I feel that solution would only shift the problem around rather than actually solve it.
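For what it's worth, a minimal sketch of that queue-then-drain idea using a BlockingCollection<byte[]>, so the receive loop only enqueues raw frames and a single dedicated worker owns all the EF writes (the existing stopEvent, subscriber, and ProcessReceivedData members are reused; the DrainQueue method is illustrative):
using System.Collections.Concurrent;
using System.Threading;

// Receive loop stays fast: it only enqueues raw frames.
private readonly BlockingCollection<byte[]> _workQueue = new BlockingCollection<byte[]>();

internal void SpawnReceivers()
{
    while (!stopEvent.WaitOne(0))
    {
        _workQueue.Add(subscriber.Recv());
    }
    _workQueue.CompleteAdding(); // tells the drain thread that no more data is coming
}

// One dedicated consumer drains the queue and owns all database access.
internal void DrainQueue()
{
    foreach (var recvdData in _workQueue.GetConsumingEnumerable())
    {
        ProcessReceivedData(recvdData); // existing deserialization + EF save
    }
}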
Could another solution be to write a separate service dedicated to clearing that queue? The problem I see with writing another service is that I would probably have to use WCF (or possibly zmq) to communicate between the two services, which would obviously add overhead and possibly hurt performance.
The critical part in all of this is getting the data off the wire fast enough, because the publisher I am subscribed to is configured to begin discarding messages if my subscriber can't keep up.
I've got a C# console app running on Windows Server 2003 whose purpose is to read a table called Notifications and a field called "NotifyDateTime" and send an email when that time is reached. I have it scheduled via Task Scheduler to run hourly, check to see if the NotifyDateTime falls within that hour, and then send the notifications.
It seems like because I have the notification date/times in the database that there should be a better way than re-running this thing every hour.
Is there a lightweight process/console app I could leave running on the server that reads in the day's notifications from the table and issues them exactly when they're due?
I thought about a Windows service, but that seems like overkill.
My suggestion is to write a simple application that uses Quartz.NET.
Create 2 jobs:
The first fires once a day, reads all the pending notification times planned for that day from the database, and creates triggers based on them.
The second, registered for the triggers prepared by the first job, sends your notifications.
What's more, I strongly advise you to create a Windows service for this purpose rather than leaving a lone console application constantly running. A console application can be accidentally terminated by anyone who has access to the server under the same account, and if the server is restarted you have to remember to start the application again manually, while a service can be configured to start automatically.
If you have a web application you could host this logic e.g. within the IIS application pool process, although that is a bad idea: such a process is periodically restarted by default, so you would have to change its default configuration to be sure it is still running in the middle of the night when the application is not used; otherwise your scheduled tasks will be terminated.
UPDATE (code samples):
Manager class, internal logic for scheduling and unscheduling jobs. For safety reasons implemented as a singleton:
internal class ScheduleManager
{
    private static readonly ScheduleManager _instance = new ScheduleManager();
    private readonly IScheduler _scheduler;

    private ScheduleManager()
    {
        var properties = new NameValueCollection();
        properties["quartz.scheduler.instanceName"] = "notifier";
        properties["quartz.threadPool.type"] = "Quartz.Simpl.SimpleThreadPool, Quartz";
        properties["quartz.threadPool.threadCount"] = "5";
        properties["quartz.threadPool.threadPriority"] = "Normal";

        var sf = new StdSchedulerFactory(properties);
        _scheduler = sf.GetScheduler();
        _scheduler.Start();
    }

    public static ScheduleManager Instance
    {
        get { return _instance; }
    }

    public void Schedule(IJobDetail job, ITrigger trigger)
    {
        _scheduler.ScheduleJob(job, trigger);
    }

    public void Unschedule(TriggerKey key)
    {
        _scheduler.UnscheduleJob(key);
    }
}
First job, for gathering required information from the database and scheduling notifications (second job):
internal class Setup : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        try
        {
            foreach (var kvp in DbMock.ScheduleMap)
            {
                var email = kvp.Value;
                var notify = new JobDetailImpl(email, "emailgroup", typeof(Notify))
                {
                    JobDataMap = new JobDataMap {{"email", email}}
                };
                var time = new DateTimeOffset(DateTime.Parse(kvp.Key).ToUniversalTime());
                var trigger = new SimpleTriggerImpl(email, "emailtriggergroup", time);
                ScheduleManager.Instance.Schedule(notify, trigger);
            }
            Console.WriteLine("{0}: all jobs scheduled for today", DateTime.Now);
        }
        catch (Exception e) { /* log error */ }
    }
}
Second job, for sending emails:
internal class Notify : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        try
        {
            var email = context.MergedJobDataMap.GetString("email");
            SendEmail(email);
            ScheduleManager.Instance.Unschedule(new TriggerKey(email));
        }
        catch (Exception e) { /* log error */ }
    }

    private void SendEmail(string email)
    {
        Console.WriteLine("{0}: sending email to {1}...", DateTime.Now, email);
    }
}
Database mock, just for purposes of this particular example:
internal class DbMock
{
    public static IDictionary<string, string> ScheduleMap =
        new Dictionary<string, string>
        {
            {"00:01", "foo@gmail.com"},
            {"00:02", "bar@yahoo.com"}
        };
}
Main entry of the application:
public class Program
{
    public static void Main()
    {
        FireStarter.Execute();
    }
}

public class FireStarter
{
    public static void Execute()
    {
        var setup = new JobDetailImpl("setup", "setupgroup", typeof(Setup));
        var midnight = new CronTriggerImpl("setuptrigger", "setuptriggergroup",
                                           "setup", "setupgroup",
                                           DateTime.UtcNow, null, "0 0 0 * * ?");
        ScheduleManager.Instance.Schedule(setup, midnight);
    }
}
Output: the setup job logs that all jobs have been scheduled for today, and then each Notify job logs "sending email to ..." when its trigger fires.
If you're going to use a service, just put this main logic into the OnStart method (I advise starting the actual logic in a separate thread so that you don't wait for the service to start, and likewise avoid possible timeouts - not an issue in this particular example, obviously, but in general):
protected override void OnStart(string[] args)
{
    try
    {
        var thread = new Thread(x => WatchThread(new ThreadStart(FireStarter.Execute)));
        thread.Start();
    }
    catch (Exception e) { /* log error */ }
}
If so, encapsulate the logic in a wrapper, e.g. WatchThread, which will catch any errors from the thread:
private void WatchThread(object pointer)
{
    try
    {
        ((Delegate) pointer).DynamicInvoke();
    }
    catch (Exception e) { /* log error and stop service */ }
}
You're trying to implement a polling approach, where a job monitors a record in the database for changes.
With this approach you hit the database periodically, so if the one-hour delay is later reduced to one minute, this solution turns into a performance bottleneck.
Approach 1
For this scenario, use a queue-based approach to avoid those issues; you can also scale up the number of instances if you are sending a lot of emails.
I understand there is a program that updates NotifyDateTime in a table; the same program can push a message onto a queue announcing that there is a notification to handle.
A Windows service watches this queue for incoming messages, and when a message arrives it performs the required operation (i.e. sending the email).
Approach 2
http://msdn.microsoft.com/en-us/library/vstudio/zxsa8hkf(v=vs.100).aspx
You can also invoke C# code from a SQL Server stored procedure if you are using MS SQL Server, but in this case you are making your SQL Server process send mail, which is not good practice.
However, you can invoke a web service or WCF service from there, which can send the emails.
But Approach 1 is fault-tolerant, scalable, trackable, and asynchronous, and it doesn't burden your database or app: you have a separate process to send email.
Queues
Use MSMQ, which is part of Windows Server (a minimal sketch follows below).
You can also try https://www.rabbitmq.com/dotnet.html
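A minimal sketch of the MSMQ option, assuming a private queue named EmailNotifications; the queue path, the string message body, and the SendNotificationEmail helper are illustrative, not part of the original answer:
using System.Messaging;
using System.Threading;

const string QueuePath = @".\Private$\EmailNotifications";

// Producer side: the program that updates NotifyDateTime also enqueues a message.
void EnqueueNotification(string email)
{
    if (!MessageQueue.Exists(QueuePath))
        MessageQueue.Create(QueuePath);

    using (var queue = new MessageQueue(QueuePath))
    {
        queue.Send(email, "notification"); // body serialized with the default XmlMessageFormatter
    }
}

// Consumer side: the Windows service blocks on Receive and sends the email.
void ConsumeNotifications(CancellationToken token)
{
    using (var queue = new MessageQueue(QueuePath))
    {
        queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
        while (!token.IsCancellationRequested)
        {
            Message message = queue.Receive();           // blocks until a message arrives
            SendNotificationEmail((string)message.Body); // hypothetical helper
        }
    }
}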
Pre-scheduled tasks (at undefined times) are generally a pain to handle, as opposed to scheduled tasks where Quartz.NET seems well suited.
Furthermore, a distinction should be made between fire-and-forget tasks that shouldn't be interrupted or changed (e.g. retries, notifications) and tasks that need to be actively managed (e.g. campaigns or communications).
For fire-and-forget tasks a message queue is well suited. If the destination is unreliable, you will have to opt for retry levels (e.g. try sending (max twice), retry after 5 minutes, try sending (max twice), retry after 15 minutes), which at least requires specifying message-specific TTLs with a send queue and a retry queue. Here's an explanation with a link to code for setting up a retry-level queue.
The managed pre-scheduled tasks will require a database queue approach (see the CodeProject article on designing a database queue for scheduled tasks). This will allow you to update, remove or reschedule notifications, provided you keep track of ownership identifiers (e.g. specify a user id and you can delete all pending notifications when the user should no longer receive them, such as being deceased or unsubscribed).
Scheduled e-mail tasks (including any communication tasks) require finer grained control (expiration, retry and time-out mechanisms). The best approach to take here is to build a state machine that is able to process the e-mail task through its steps (expiration, pre-validation, pre-mailing steps such as templating, inlining css, making links absolute, adding tracking objects for open tracking, shortening links for click tracking, post-validation and sending and retrying).
Hopefully you are aware that the .NET SmtpClient isn't fully compliant with the MIME specifications and that you should be using a SaaS e-mail provider such as Amazon SES, Mandrill, Mailgun, Customer.io or Sendgrid. I'd suggest you look at Mandrill or Mailgun. Also, if you have some time, take a look at MimeKit, which you can use to construct MIME messages for providers that accept raw e-mail but whose APIs don't necessarily support things like attachments, custom headers or DKIM signing.
I hope this sets you on the right path.
Edit
You will have to use a service to poll at specific intervals (e.g. 15 seconds or 1 minute). The database load can be somewhat reduced by checking out a certain number of due tasks at a time and keeping an internal pool of messages due for sending (with a time-out mechanism in place). When no messages are returned, just 'sleep' the polling for a while. I would advise against building such a system against a single table in a database; instead, design an independent e-mail scheduling system that you can integrate with. A polling loop along these lines is sketched below.
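A minimal sketch of such a polling loop, under stated assumptions: EmailTask, FetchDueTasks and SendTask are hypothetical placeholders for your own task row, checkout query and state-machine processing, and the batch size and idle delay are arbitrary:
using System;
using System.Collections.Generic;
using System.Threading;

// Minimal placeholder for a checked-out task row (illustrative only).
class EmailTask { public long Id; public string Recipient; }

class EmailPoller
{
    public void PollLoop(CancellationToken token)
    {
        var idleDelay = TimeSpan.FromSeconds(15); // back off when nothing is due

        while (!token.IsCancellationRequested)
        {
            // Check out a bounded batch of due tasks (hypothetical data-access helper).
            IReadOnlyList<EmailTask> batch = FetchDueTasks(maxCount: 100);

            if (batch.Count == 0)
            {
                // Nothing due: 'sleep' the polling, waking early if cancellation is requested.
                token.WaitHandle.WaitOne(idleDelay);
                continue;
            }

            foreach (var task in batch)
            {
                SendTask(task); // hypothetical: runs the expiration/validation/send state machine
            }
        }
    }

    // Hypothetical stubs standing in for your own data access and sending logic.
    private IReadOnlyList<EmailTask> FetchDueTasks(int maxCount) => Array.Empty<EmailTask>();
    private void SendTask(EmailTask task) { }
}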
I would turn it into a service instead.
You can use a System.Threading.Timer callback for each of the scheduled times (see the sketch below).
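A minimal sketch of that idea, assuming the service reads the day's notifications at startup and creates one one-shot timer per notification; the Notification type and the scheduling wrapper are illustrative:
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative notification record.
class Notification { public string Email; public DateTime NotifyAt; }

class NotificationService
{
    // Keep references so the timers aren't garbage collected before they fire.
    private readonly List<Timer> _timers = new List<Timer>();

    public void ScheduleToday(IEnumerable<Notification> todaysNotifications)
    {
        foreach (var n in todaysNotifications)
        {
            var due = n.NotifyAt - DateTime.Now;
            if (due < TimeSpan.Zero) due = TimeSpan.Zero; // already due: fire immediately

            // One-shot timer: a period of Timeout.InfiniteTimeSpan means it fires only once.
            _timers.Add(new Timer(state => SendNotification((Notification)state),
                                  n, due, Timeout.InfiniteTimeSpan));
        }
    }

    private void SendNotification(Notification n)
    {
        Console.WriteLine("{0}: sending email to {1}", DateTime.Now, n.Email);
    }
}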
Scheduled tasks can be scheduled to run just once at a specific time (as opposed to hourly, daily, etc.), so one option would be to create the scheduled task when the specific field in your database changes.
You don't mention which database you use, but some databases support the notion of a trigger, e.g. in SQL: http://technet.microsoft.com/en-us/library/ms189799.aspx
If you know ahead of time when the emails need to be sent, then I suggest you wait on an event handle with the appropriate timeout. At midnight, read the table, then wait on an event handle with the timeout set to expire when the next email needs to be sent. After sending the email, wait again with the timeout set based on the next mail that should be sent (see the sketch below).
Also, based on your description, this should probably be implemented as a service but it is not required.
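A minimal sketch of that approach, assuming the day's send times are read at midnight and that a stop event is also used to wake the loop early when the service shuts down; GetTodaysSendTimes and SendEmailsDueAt are hypothetical helpers:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

class NotificationScheduler
{
    private readonly ManualResetEvent _stopEvent = new ManualResetEvent(false);

    public void RunSchedule()
    {
        // Hypothetical helper: the day's send times, read from the table at midnight.
        var sendTimes = new Queue<DateTime>(GetTodaysSendTimes().OrderBy(t => t));

        while (sendTimes.Count > 0)
        {
            TimeSpan timeout = sendTimes.Peek() - DateTime.Now;
            if (timeout < TimeSpan.Zero) timeout = TimeSpan.Zero;

            // WaitOne returns true only if the stop event was signaled;
            // false means the timeout elapsed and the next email is due.
            if (_stopEvent.WaitOne(timeout))
                return; // service is stopping

            SendEmailsDueAt(sendTimes.Dequeue()); // hypothetical helper
        }
    }

    // Hypothetical stubs standing in for the table read and the actual send.
    private IEnumerable<DateTime> GetTodaysSendTimes() => Enumerable.Empty<DateTime>();
    private void SendEmailsDueAt(DateTime due) { }
}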
I dealt with the same problem about three years ago. I changed the process several times before it was good enough; let me tell you why:
The first implementation used a special daemon from the web host which called an IIS website. The website checked the caller's IP, then checked the database and sent the emails. This worked until one day I got a lot of very angry emails from users whose mailboxes I had thoroughly spammed. The drawback of keeping emails in the database and sending them via SMTP is that there is NOTHING that ensures a DB-to-SMTP transaction. You are never sure whether the email has been successfully sent or not. Sending an email can succeed, can fail, or can be a false positive or a false negative (the SMTP client tells you the email was not sent, but it was). There was a problem with the SMTP server: the server returned false (email not sent), but the email was actually sent. The daemon kept resending the email every hour for a whole day before the angry emails appeared.
Second implementation: to prevent spamming, I changed the algorithm so that the email is considered sent even if sending failed (my email notifications were not too important). My first piece of advice: don't launch the daemon too often, because these false-negative SMTP errors make users upset.
After several months there were some changes on the server and the daemon stopped working well. I got an idea from Stack Overflow: bind a .NET timer to the web application domain. It wasn't a good idea, because it seems that IIS can restart the application from time to time because of memory leaks, and the timer never fires if the restarts happen more often than the timer ticks.
The last implementation: every hour the Windows scheduler fires a Python batch job which reads a local website, which in turn fires the ASP.NET code. The advantage is that the Windows scheduler calls the local batch and the website reliably, and IIS doesn't hang since it has the ability to restart. The timer site is part of my website, so it is still a single project (you could use a console app instead). Simple is better. It just works!
Your first choice is the correct option in my opinion. Task Scheduler is the MS-recommended way to perform periodic jobs. Moreover it's flexible, can report failures to ops, and is optimized and amortized amongst all tasks in the system.
Creating any console-style app that runs all the time is fragile: it can be shut down by anyone, needs an open session, doesn't restart automatically, and so on.
The other option is creating some kind of service. It's guaranteed to be running all the time, so that would at least work. But what was your motivation?
"It seems like because I have the notification date/times in the database that there should be a better way than re-running this thing every hour."
Oh yeah, optimization... So you want to add a new permanently running service to your computer so that you avoid one potentially unneeded SQL query every hour? The cure looks worse than the disease to me.
And I haven't even mentioned all the drawbacks of the service. On one hand, your scheduled task uses no resources when it isn't running; it's very simple and lightweight, and the query is efficient (provided you have the right index).
On the other hand, if your service crashes it's probably gone for good. It needs a way to be notified of new e-mails that may need to be sent earlier than what's currently scheduled. It permanently uses computer resources, such as memory. Worse, it may contain memory leaks.
I think that the cost/benefit ratio is very low for any solution other than the trivial periodic task.
I wonder what the best way is to publish and subscribe to channels using BookSleeve. I currently implement several static methods (see below) that let me publish content to a specific channel, with the newly created channel being stored in private static Dictionary<string, RedisSubscriberConnection> subscribedChannels;.
Is this the right approach, given that I want to publish to channels and subscribe to channels within the same application (note: my wrapper is a static class)? Is it enough to create one channel even if I want to both publish and subscribe? Obviously I would not publish to the same channel as I subscribe to within the same application. But I tested the following:
RedisClient.SubscribeToChannel("Test").Wait();
RedisClient.Publish("Test", "Test Message");
and it worked.
Here my questions:
1) Will it be more efficient to setup a dedicated publish channel and a dedicated subscribe channel rather than using one channel for both?
2) What is the difference between "channel" and "PatternSubscription" semantically? My understanding is that I can subscribe to several "topics" through PatternSubscription() on the same channel, correct? But if I want to have different callbacks invoked for each "topic" I would have to setup a channel for each topic correct? Is that efficient or would you advise against that?
Here the code snippets.
Thanks!!!
public static Task<long> Publish(string channel, byte[] message)
{
    return connection.Publish(channel, message);
}

public static Task SubscribeToChannel(string channelName)
{
    string subscriptionString = ChannelSubscriptionString(channelName);
    RedisSubscriberConnection channel = connection.GetOpenSubscriberChannel();
    subscribedChannels[subscriptionString] = channel;
    return channel.PatternSubscribe(subscriptionString, OnSubscribedChannelMessage);
}

public static Task UnsubscribeFromChannel(string channelName)
{
    string subscriptionString = ChannelSubscriptionString(channelName);
    if (subscribedChannels.Keys.Contains(subscriptionString))
    {
        RedisSubscriberConnection channel = subscribedChannels[subscriptionString];
        Task task = channel.PatternUnsubscribe(subscriptionString);
        //remove channel subscription
        channel.Close(true);
        subscribedChannels.Remove(subscriptionString);
        return task;
    }
    else
    {
        return null;
    }
}

private static string ChannelSubscriptionString(string channelName)
{
    return channelName + "*";
}
1: there is only one channel in your example (Test); a channel is just the name used for a particular pub/sub exchange. It is, however, necessary to use 2 connections due to specifics of how the redis API works. A connection that has any subscriptions cannot do anything else except:
listen to messages
manage its own subscriptions (subscribe, psubscribe, unsubscribe, punsubscribe)
However, I don't understand this:
private static Dictionary<string, RedisSubscriberConnection>
You shouldn't need more than one subscriber connection unless you are catering for something specific to you. A single subscriber connection can handle an arbitrary number of subscriptions. A quick check on client list on one of my servers, and I have one connection with (at time of writing) 23,002 subscriptions. Which could probably be reduced, but: it works.
2: pattern subscriptions support wildcards; so rather than subscribing to /topic/1, /topic/2/ etc you could subscribe to /topic/*. The name of the actual channel used by publish is provided to the receiver as part of the callback signature.
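For example (a sketch based on the connection and handler shape used in the question's code; the exact callback signature may vary between BookSleeve versions):
// Handler signature assumed to be (channelName, payload), matching the
// OnSubscribedChannelMessage handler in the question.
var sub = connection.GetOpenSubscriberChannel();

// One pattern subscription covers /topic/1, /topic/2, ...
sub.PatternSubscribe("/topic/*", (channelName, payload) =>
{
    // channelName tells you which concrete channel the publish went to,
    // so a single callback can route messages for all matching topics.
    Console.WriteLine("{0}: {1} bytes", channelName, payload.Length);
});

// Publishing still targets a concrete channel name.
connection.Publish("/topic/1", Encoding.UTF8.GetBytes("hello"));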
Either can work. It should be noted that the performance of publish is impacted by the total number of unique subscriptions - but frankly it is still stupidly fast (as in: 0ms) even if you have many tens of thousands of subscribed channels using subscribe rather than psubscribe.
But from the publish documentation:
Time complexity: O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client).
I recommend reading the redis documentation of pub/sub.
Edit for follow on questions:
a) I assume I would have to "publish" synchronously (using Result or Wait()) if I want to guarantee the order of sending items from the same publisher is preserved when receiving items, correct?
that won't make any difference at all; since you mention Result / Wait(), I assume you're talking about BookSleeve - in which case the multiplexer already preserves command order. Redis itself is single threaded, and will always process commands on a single connection in order. However: the callbacks on the subscriber may be executed asynchronously and may be handed (separately) to a worker thread. I am currently investigating whether I can force this to be in-order from RedisSubscriberConnection.
Update: from 1.3.22 onwards you can set the CompletionMode to PreserveOrder - then all callbacks will be completed sequentially rather than concurrently.
b) after making adjustments according to your suggestions I get a great performance when publishing few items regardless of the size of the payload. However, when sending 100,000 or more items by the same publisher performance drops rapidly (down to 7-8 seconds just to send from my machine).
Firstly, that time sounds high - testing locally I get (for 100,000 publications, including waiting for the response for all of them) 1766ms (local) or 1219ms (remote) (that might sound counter-intuitive, but my "local" isn't running the same version of redis; my "remote" is 2.6.12 on Centos; my "local" is 2.6.8-pre2 on Windows).
I can't make your actual server faster or speed up the network, but: in case this is packet fragmentation, I have added (just for you) a SuspendFlush() / ResumeFlush() pair. This disables eager-flushing (i.e. when the send-queue is empty; other types of flushing still happen); you might find this helps:
conn.SuspendFlush();
try {
    // start lots of operations...
} finally {
    conn.ResumeFlush();
}
Note that you shouldn't Wait until you have resumed, because until you call ResumeFlush() there could be some operations still in the send-buffer. With that all in place, I get (for 100,000 operations):
local: 1766ms (eager-flush) vs 1554ms (suspend-flush)
remote: 1219ms (eager-flush) vs 796ms (suspend-flush)
As you can see, it helps more with remote servers, as it will be putting fewer packets through the network.
I cannot use transactions because later on the to-be-published items are not all available at once. Is there a way to optimize with that knowledge in mind?
I think that is addressed by the above - but note that recently CreateBatch was added too. A batch operates a lot like a transaction - just: without the transaction. Again, it is another mechanism to reduce packet fragmentation. In your particular case, I suspect the suspend/resume (on flush) is your best bet.
Do you recommend having one general RedisConnection and one RedisSubscriberConnection or any other configuration to have such wrapper perform desired functions?
As long as you're not performing blocking operations (blpop, brpop, brpoplpush etc), or putting oversized BLOBs down the wire (potentially delaying other operations while it clears), then a single connection of each type usually works pretty well. But YMMV depending on your exact usage requirements.
Can anyone point me to a good working solution to the following problem?
The application I'm working on needs to communicate over TCP to software running on another system. Some of the requests I send to that system can take a long time to complete (up to 15sec).
In my application I have a number of threads, including the main UI thread, which can access the service which communicates with the remote system. There is only a single instance of the service which is accessed by all threads.
I need to only allow a single request to be processed at a time, i.e. it needs to be serialized, otherwise bad things happen with the TCP comms.
Attempted Solutions so far
Initially I tried using lock() with a static object to protect each 'command' method, as follows:
lock (_cmdLock)
{
    SetPosition(position);
}
However, I found that sometimes it wouldn't release the lock, even though there are timeouts on the remote system and on the TCP comms. Additionally, if two calls came in from the same thread (e.g. a user double-clicked a button) then it would get past the lock - after reading up about locking again, I now know that the same thread won't block on a lock it already holds.
I then tried to use AutoResetEvents to only allow a single call through at a time, but without the locking it wouldn't work with multiple threads. The following is the code I used to send a command (from the calling thread) and process a command request (running in the background on its own thread):
private static AutoResetEvent _cmdProcessorReadyEvent = new AutoResetEvent(false);
private static AutoResetEvent _resultAvailableEvent = new AutoResetEvent(false);
private static AutoResetEvent _sendCommandEvent = new AutoResetEvent(false);

// This method is called to send each command and can run on different threads
private bool SendCommand(Command cmd)
{
    // Wait for processor thread to become ready for next cmd
    if (_cmdProcessorReadyEvent.WaitOne(_timeoutSec + 500))
    {
        lock (_sendCmdLock)
        {
            _currentCommand = cmd;
        }
        // Tell the processor thread that there is a command present
        _sendCommandEvent.Set();
        // Wait for a result from the processor thread
        if (!_resultAvailableEvent.WaitOne(_timeoutSec + 500))
            _lastCommandResult.Timeout = true;
    }
    return _lastCommandResult.Success;
}

// This method runs in a background thread while the app is running
private void ProcessCommand()
{
    try
    {
        do
        {
            // Indicate that we are ready to process another command
            _cmdProcessorReadyEvent.Set();
            _sendCommandEvent.WaitOne();
            lock (_sendCmdLock)
            {
                _lastCommandResult = new BaseResponse(false, false, "No Command");
                RunCOMCommand(_currentCommand);
            }
            _resultAvailableEvent.Set();
        } while (_processCommands);
    }
    catch (Exception ex)
    {
        _lastCommandResult.Success = false;
        _lastCommandResult.Timeout = false;
        _lastCommandResult.LastError = ex.Message;
    }
}
I haven't tried implementing a queue of command requests, as the calling code expects everything to be synchronous - i.e. the previous command must have completed before I send the next one.
Additional Background
The software running on the remote system is a 3rd party product and I don't have access to it, it is used to control a laser marking machine with an integrated XY table.
I'm actually using a legacy VB6 DLL to communicate with the laser as it has all the code for formatting commands and processing the responses. This VB6 DLL uses a WinSock control for the comms.
I'm not sure why a queueing solution wouldn't work.
Why not put each request, plus the details for a callback with the result, on a queue? Your application would queue these requests, and the module interfacing with your 3rd-party system could take each queue item in turn, process it, and return the result.
I think it's a cleaner separation of concerns between modules rather than implementing locking around request dispatch etc. Your requestor is largely oblivious of the serialisation constraints, and the 3rd-party interfacing module can look after serialisation, managing timeouts and other errors etc.
Edit: In the Java world we have BlockingQueues which are synchronised for consumers/publishers and make this sort of thing quite easy. I'm not sure if you have the same in the C# world. A quick search suggests not, but there's source code floating around for this sort of thing (if anyone in the C# world can shed some light that would be appreciated)
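For reference, .NET 4 does ship a close equivalent: BlockingCollection<T> in System.Collections.Concurrent. A minimal sketch of this answer's queueing idea built on it; the CommandRequest wrapper and the dispatcher wiring are illustrative, and RunCOMCommand (assumed here to return a BaseResponse rather than set a shared field as in the question's code) stands in for the existing VB6-DLL call:
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Wraps a command together with a handle the caller can block on for the result.
class CommandRequest
{
    public Command Command; // the question's Command type
    public TaskCompletionSource<BaseResponse> Completion = new TaskCompletionSource<BaseResponse>();
}

class CommandDispatcher
{
    private readonly BlockingCollection<CommandRequest> _queue = new BlockingCollection<CommandRequest>();

    public CommandDispatcher()
    {
        // Single worker thread: requests are processed strictly one at a time.
        new Thread(ProcessQueue) { IsBackground = true }.Start();
    }

    // Called from any thread; blocks until this particular request has been processed.
    public BaseResponse Send(Command cmd)
    {
        var request = new CommandRequest { Command = cmd };
        _queue.Add(request);
        return request.Completion.Task.Result; // synchronous, matching the existing calling code
    }

    private void ProcessQueue()
    {
        foreach (var request in _queue.GetConsumingEnumerable())
        {
            try
            {
                // Assumed to wrap the question's RunCOMCommand call and return its BaseResponse.
                request.Completion.SetResult(RunCOMCommand(request.Command));
            }
            catch (Exception ex)
            {
                request.Completion.SetException(ex);
            }
        }
    }

    // Hypothetical stand-in for the existing VB6 DLL call.
    private BaseResponse RunCOMCommand(Command cmd)
    {
        throw new NotImplementedException();
    }
}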