How to do multiple publication scheduling in a C# service?

I have a service which listens for position updates coming from an upstream system. There are multiple consumers of these position updates:
One consumer wants to get each update as soon as possible
One consumer wants to get an update every 30 seconds
One consumer wants to get an update when 50 updates have accumulated
One consumer wants to get an update when 50 updates have accumulated or 30 seconds have passed, whichever is earlier
The above can change at any time, and new variations can be added.
How can I make this configurable and scalable, and what kind of programming approach should I use?
I am developing in C#, as a Windows Service.

It sounds like you are describing a scenario where the service is an intermediary between the publishing source (the service itself is a subscriber) and re-broadcasts this information to N subscribers, each according to their own schedule.
So, assuming an update is a single position update and not some sort of aggregation like a rolling average or a buffer (e.g. just the latest position of a car every 30 seconds, not all its positions over the last 30 seconds), you need to maintain some information for each subscriber:
a subscription. Who is the consumer, and how do I notify it? (e.g. callback, reply queue, etc.)
a specification. What does the consumer want, and when? (e.g. every 50 ticks)
state:
time since last send
number of updates since last send
...
As the service receives updates, it must evaluate each consumer's specification against that consumer's state for every update from the source; something like:
if (consumer.Spec.Matches(consumer.State, updateMessage))
    SendUpdate(consumer.Subscription.Callback, updateMessage);
The above assumes your spec is directly executable by the service (i.e. the consumers are in-process, or the spec was serialized and can be deserialized by the service). If this isn't the case, your spec could instead be a DSL (e.g. a parseable representation that the server could compile into something it can execute). Another approach is to think of the spec as an instruction set. For example:
public enum FrequencyUnit
{
SecondsSinceLastSend,
UpdatesSinceLastSend,
}
public class Frequency
{
public double Value { get; set; }
public FrequencyUnit Unit { get; set; }
}
public enum Operator
{
Every, // Unary: e.g. every update; every 10 sec; every 5 updates
Or, // Nary: e.g. every 50 or every 20 sec (whichever's first)
And, // Nary: e.g. 19 messages and 20 sec have passed
// etc.
}
public class UpdateSpec
{
public Frequency[] Frequencies { get; set; }
public Operator Operator { get; set; }
}
These are pretty flexible, and can be configured on the server in code, or built up by reading XML or similar. They could also be passed to the service from the consumer itself upon registration. For example, an IService.Register() method could take the subscription and the specification as arguments.
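As a sketch of how the service might evaluate such a spec on each update, the following repeats the types above and adds a hypothetical SubscriberState and evaluator; everything beyond the spec types themselves is illustrative, not part of any framework:

```csharp
using System;
using System.Linq;

public enum FrequencyUnit { SecondsSinceLastSend, UpdatesSinceLastSend }
public enum Operator { Every, Or, And }
public class Frequency { public double Value { get; set; } public FrequencyUnit Unit { get; set; } }
public class UpdateSpec { public Frequency[] Frequencies { get; set; } public Operator Operator { get; set; } }

// Hypothetical per-subscriber state, as listed above.
public class SubscriberState
{
    public DateTime LastSend { get; set; } = DateTime.UtcNow;
    public int UpdatesSinceLastSend { get; set; }
}

public static class SpecEvaluator
{
    // True when the spec says it is time to notify this subscriber.
    public static bool Matches(UpdateSpec spec, SubscriberState state, DateTime now)
    {
        bool Satisfied(Frequency f) => f.Unit switch
        {
            FrequencyUnit.SecondsSinceLastSend => (now - state.LastSend).TotalSeconds >= f.Value,
            FrequencyUnit.UpdatesSinceLastSend => state.UpdatesSinceLastSend >= f.Value,
            _ => false,
        };

        return spec.Operator switch
        {
            Operator.Every => Satisfied(spec.Frequencies[0]), // unary
            Operator.Or    => spec.Frequencies.Any(Satisfied),
            Operator.And   => spec.Frequencies.All(Satisfied),
            _ => false,
        };
    }
}
```

On a match, the service would send the update, reset the subscriber's state (LastSend and the counter), and move on.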
The last bit is scalability. I described the service looping over the consumers for each update. This won't scale well, because the loop may block receiving updates from the source or, if asynchronous with respect to the source, would likely accumulate updates faster than it can process them.
A strategy to deal with this is to add an internal queue to the information you maintain for each subscriber. The service, upon receiving an update, would enqueue it to each internal queue. Service tasks (TPL-based), thread-pool threads, or long-lived threads would then dequeue and evaluate the update as above. There are many possible variations and optimizations of this.
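A minimal sketch of that per-subscriber queue, assuming in-process consumers (PositionUpdate and SubscriberWorker are illustrative names, not part of any framework):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class PositionUpdate
{
    public string Symbol { get; set; }
    public double Price { get; set; }
}

public class SubscriberWorker
{
    private readonly BlockingCollection<PositionUpdate> _queue = new BlockingCollection<PositionUpdate>();
    private readonly Action<PositionUpdate> _deliver;

    public SubscriberWorker(Action<PositionUpdate> deliver)
    {
        _deliver = deliver;
        // One long-running consumer task per subscriber; evaluating the
        // subscriber's spec would happen inside this loop.
        Task.Factory.StartNew(() =>
        {
            foreach (var update in _queue.GetConsumingEnumerable())
                _deliver(update);
        }, TaskCreationOptions.LongRunning);
    }

    // Called by the service's receive loop; only an in-memory enqueue,
    // so the receiver is never blocked by a slow subscriber.
    public void Enqueue(PositionUpdate update) => _queue.Add(update);
}
```

The receive loop stays fast because Enqueue only adds to an in-memory collection; each subscriber then drains its own queue at its own pace.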

Related

Looking for advice on how to implement a "volume" property in MVVM C# Xamarin

So I am developing an app that can adjust (amongst other things) the volume of a device. I started with a very simple Model which implements INotifyPropertyChanged; as far as I can tell, there is no need for a ViewModel in such a simple scenario. INPC is raised when the volume property is set, and the Model generates a TCP message to tell the device to change the volume.
However, this is where it gets complicated. The volume does not have to be changed by the app, it could also be changed directly on the device, or even by another phone with the app. The only way to get these changes from the device is to poll it periodically.
So what I think is reasonable is to change the structure a bit. Now I have a DeviceModel which represents the actual device, and I add a VolumeViewModel. The DeviceModel class now handles generating the TCP messages. It also periodically polls the device. However, let's say the DeviceModel finds that the volume changed. How should this propagate back to the VolumeViewModel such that all changes are two-way, both from the UI and from the actual device? If I put INPC in the DeviceModel, it seems my VolumeViewModel becomes superfluous. Perhaps for this simple contrived example that's fine, but let's say the device is more complicated than just one volume. I was thinking the VM could contain a reference to the Model, and the volume property could just be a reference to the volume in the DeviceModel, but that still doesn't really solve my problem.
If the DeviceModel volume changes, the reference isn't changing, so it seems to me this would not trigger the setter for the volume property in the VolumeViewModel. Do I have the ViewModel inject an event handler into the Model to be called when polling sees a different volume? Do I use INPC in both (and what would implementing it that way look like)?
The set direction is clear, and you want to get the value explicitly. So we need something like:
class MyDeviceService : IDeviceService
{
public async Task SetVolumeAsync(int volume) { }
public async Task<int> GetVolumeAsync() { }
}
// ViewModel
class DeviceViewModel : INotifyPropertyChanged
{
public int Volume { get{ ... } set { ... } }
public DeviceViewModel(IDeviceService service) { ... }
}
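Filled in, that skeleton might look like the following sketch (the IDeviceService members match the service above; the INPC plumbing is standard, and the fire-and-forget call is an assumption for brevity):

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

public interface IDeviceService
{
    Task SetVolumeAsync(int volume);
    Task<int> GetVolumeAsync();
}

class DeviceViewModel : INotifyPropertyChanged
{
    private readonly IDeviceService _service;
    private int _volume;

    public DeviceViewModel(IDeviceService service) => _service = service;

    public int Volume
    {
        get => _volume;
        set
        {
            if (_volume == value) return;
            _volume = value;
            OnPropertyChanged();
            // Fire-and-forget here for brevity; real code should await
            // this and handle failures.
            _ = _service.SetVolumeAsync(value);
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged([CallerMemberName] string name = null) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```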
For the update you have different options:
Callback
Pro:
Easy to implement
Con:
only one subscriber
looks like a bad implementation of events (in our scenario)
class MyDeviceService
{
public Action<int> VolumeChangedCallback { get; set; }
public async Task SetVolumeAsync(int volume) { }
public async Task<int> GetVolumeAsync() { }
// producer (the null-conditional guards against a missing subscriber)
VolumeChangedCallback?.Invoke(newVolume);
}
// consumer
myDeviceService.VolumeChangedCallback = v => Volume = v;
// deregistration
myDeviceService.VolumeChangedCallback = null;
Event
Pro:
Language feature (built in)
Multiple subscribers
Con:
???
class MyDeviceService
{
public event EventHandler<VolumeChangedEventArgs> VolumeChanged;
public async Task SetVolumeAsync(int volume) { }
public async Task<int> GetVolumeAsync() { }
// producer (the null-conditional avoids a NullReferenceException when there are no subscribers)
VolumeChanged?.Invoke(this, new VolumeChangedEventArgs(newVolume));
}
// consumer
myDeviceService.VolumeChanged += OnVolumeChanged;
// needs deregistration
myDeviceService.VolumeChanged -= OnVolumeChanged;
Messaging
Pro:
Easy Sub / Unsub
Multiple subscribers
Multiple senders
Receiver does not need to know the sender
Con:
external library needed (but one is included in Xamarin.Forms, MvvmCross, and other MVVM frameworks)
class MyDeviceService
{
public static string VolumeMessageKey = "Volume";
public async Task SetVolumeAsync(int volume) { }
public async Task<int> GetVolumeAsync() { }
// producer
MessagingCenter.Send<MyDeviceService, int>(this,
VolumeMessageKey, newVolume);
}
// consumer (the callback receives the sender and the payload)
MessagingCenter.Subscribe<MyDeviceService, int>(this,
MyDeviceService.VolumeMessageKey, (sender, newVolume) => Volume = newVolume);
// needs deregistration
MessagingCenter.Unsubscribe<MyDeviceService, int>(this,
MyDeviceService.VolumeMessageKey);
Observable
Using Reactive extensions is always nice, if you have event streams.
Pro:
Easy Sub / Unsub
Multiple subscribers
Filterable like IEnumerable (e.g. Where(volume => volume > 10))
Con:
external library needed, just for this one case
high learning effort, due to a totally new approach
class MyDeviceService
{
public IObservable<int> VolumeUpdates { get; }
public async Task SetVolumeAsync(int volume) { }
public async Task<int> GetVolumeAsync() { }
}
// consumer
_volumeSubscription = myDeviceService.VolumeUpdates
.Subscribe(newVolume => Volume = newVolume);
// deregistration
// - implicitly, if object gets thrown away (but not deterministic because of GC)
// - explicitly:
_volumeSubscription.Dispose();
Conclusion
I left INPC out of the model, because that's just events, but worse: you have to compare property names.
If you look at these examples, you will see that they differ mainly in how you subscribe and unsubscribe. The main difference is the flexibility they offer. Personally, I'd go for Reactive Extensions ;) But events and messaging are fine, too. So go for the approach that you and your team members understand best. You just have to remember:
ALWAYS deregister! ^^
I am presuming that you intend to show a UI to the user that displays the current volume (such as a slider widget). Therefore your real challenge is the fact that any attempts to manipulate that slider cannot be immediately confirmed - it may take some time for the device to respond, and once it does it may not even accept the request (or might be overridden by local manipulation). Yet you still have a need to show the mobile app user that their request is being processed - or else they will assume it is malfunctioning.
I've had to solve this in an app as well - although my example was a much more complicated situation. My app is used to control large installations of irrigation management hardware, with many devices (with varying versions of firmware and varying degrees of remote control capabilities). But ultimately the problem was the same. I solved it with standard MVVM.
For each device, create a viewmodel that tracks two distinct values: the actual last known (reported) status of the hardware, and any "pending" value that was recently requested by the app. Bind the visual controls to the "pending" values via standard INPC bindings. In the setters for those values, if the new value differs from the last known hardware status, trigger an async request asking the device to transition to the desired status. For the rest of the time, just poll the device status using whatever mechanism makes sense for you (push notifications might be better, but in my case the infrastructure I was working with could only support active polling). On each poll, update the reported hardware status values, and also the pending values (unless a different value is still pending).
In the app UI, you probably want to show the actual hardware status values as well as the "pending" values that the user is allowed to manipulate. For sliders, you might want to implement a "ghost" slider thumb that reflects the reported hardware value (read-only). For switches, you might want to disable them until the hardware reports the same value as the pending value. Whatever makes sense for your app's design language.
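A stripped-down sketch of that two-value pattern (all names are illustrative; SendVolumeRequestAsync stands in for the app's real device request, and the timeout discussed next would additionally clear the in-flight flag):

```csharp
using System.ComponentModel;
using System.Threading.Tasks;

class VolumeViewModel : INotifyPropertyChanged
{
    private int _reportedVolume;     // last value the hardware confirmed
    private int _pendingVolume;      // value the user asked for
    private bool _requestInFlight;   // true while we wait for the device

    public event PropertyChangedEventHandler PropertyChanged;

    // Read-only status shown to the user (e.g. the "ghost" slider thumb).
    public int ReportedVolume
    {
        get => _reportedVolume;
        private set { _reportedVolume = value; Raise(nameof(ReportedVolume)); }
    }

    // The value the UI controls bind to and manipulate.
    public int PendingVolume
    {
        get => _pendingVolume;
        set
        {
            if (_pendingVolume == value) return;
            _pendingVolume = value;
            Raise(nameof(PendingVolume));
            if (value != ReportedVolume)
            {
                _requestInFlight = true;
                _ = SendVolumeRequestAsync(value); // fire-and-forget for brevity
            }
        }
    }

    // Called by the polling loop with the latest hardware status.
    public void OnStatusPolled(int hardwareVolume)
    {
        ReportedVolume = hardwareVolume;
        if (_requestInFlight && hardwareVolume == _pendingVolume)
            _requestInFlight = false;       // our request has been honored
        if (!_requestInFlight)
            PendingVolume = hardwareVolume; // keep pending in sync when idle
    }

    // Placeholder: this is where the TCP message to the device would go.
    private Task SendVolumeRequestAsync(int volume) => Task.CompletedTask;

    private void Raise(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```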
This leaves the final edge case of how to deal with situations where the hardware does not (or cannot) respect a request. Perhaps the user tries to turn up the volume to 11 when the device can only go up to 10. Or maybe someone presses a physical pushbutton on the device to mute it. Or maybe someone else is running the mobile app and fighting you for control of it. In any event, it is easily solved by establishing a maximum wait timeout for pending manipulations. For example, any volume change requests that aren't met after 10 seconds are assumed to be pre-empted and the UI would just stop waiting for it by setting the pending value = last reported value.
Anyhow, good luck! It's a challenging thing to handle well, but worth the effort!

How do I persist state between two of the same actors deployed on different cluster nodes? (akka.net)

If I have a setup like below, let's say I'll have 3 nodes joined to a cluster, and I use round robin pool.
var worker = cluster.ActorOf(Props.Create<Worker>().WithRouter(
new ClusterRouterPool(
new RoundRobinPool(5),
new ClusterRouterPoolSettings(30, true, 1))), "worker");
The "worker" simply remembers how many messages it has processed like below
public class Worker : TypedActor, IHandle<int>
{
    readonly List<int> processed;
    public Worker()
    {
        processed = new List<int>();
    }
    public void Handle(int message)
    {
        System.Threading.Thread.Sleep(new Random().Next(1000, 2000));
        processed.Add(message);
        Console.WriteLine("WORKER ({0}) [{1}:{2}], processed: {3}", message, Context.Self.Path, Cluster.Get(Context.System).SelfUniqueAddress.Address.Port, processed.Count);
    }
}
Is there any way to synchronize the processed list between different actors on different cluster nodes? Is this something that Akka.Cluster.Sharding will eventually do? Or am I doing something which makes no sense at all?
In general your problem seems closest to what the JVM Akka eventuate and ddata plugins offer. The general side effect in every case where many actors work on the same piece of data is eventual consistency: since your state is 'shared' between many actors working on multiple machines, the actual state at a particular point in time may be blurred, and will differ depending on which actor's point of view you take.
At the moment I haven't heard of any finished, production-ready options in .NET land for your case, but Akka.DistributedData, which is currently under development, will allow you to complete your task. It's an Akka implementation of CRDTs (conflict-free replicated data types).
What CRDTs give you is access to eventually consistent data types that can be replicated over different nodes in a distributed cluster, up to the moment when the total state is consistent across the whole application. In your case you could replace your processed list with a GSet, which would allow you to add your elements to one data set in a distributed fashion.
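To illustrate the semantics rather than the Akka.DistributedData API (which, as noted, was still under development), a grow-only set is a set whose only mutation is add; replicas can then be merged with a plain union, and the merge order never matters, which is what makes the type eventually consistent:

```csharp
using System.Collections.Generic;

// Minimal illustration of G-Set (grow-only set) CRDT semantics.
// This is a teaching sketch, not the Akka.DistributedData GSet type.
public class GrowOnlySet<T>
{
    private readonly HashSet<T> _items = new HashSet<T>();

    public void Add(T item) => _items.Add(item);

    // Merging two replicas is a set union. Union is commutative,
    // associative, and idempotent, so replicas converge regardless of
    // the order in which merges happen.
    public GrowOnlySet<T> Merge(GrowOnlySet<T> other)
    {
        var merged = new GrowOnlySet<T>();
        merged._items.UnionWith(_items);
        merged._items.UnionWith(other._items);
        return merged;
    }

    public int Count => _items.Count;
    public bool Contains(T item) => _items.Contains(item);
}
```

Two workers on different nodes would each add the messages they processed to their own replica, and the merged set would eventually contain everything processed cluster-wide.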
If you don't want to wait, take a risk, or build a CRDT on your own, you could use third-party solutions like Riak.
PS: Akka.Cluster.Sharding has a different purpose: it automatically distributes your actors evenly across your cluster, even when the number of nodes changes, so that only one instance of a specific actor is present in the current cluster scope.

Worker Role process - Configuration value polling

I have a Worker Role which processes items off a queue. It is basically an infinite loop which pops items off the queue and processes them asynchronously.
I have two configuration settings (PollingInterval and MessageGetLimit) which I want the worker role to pick up when they change (so with no restart required).
private TimeSpan PollingInterval
{
get
{
return TimeSpan.FromSeconds(Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("PollingIntervalSeconds")));
}
}
private int MessageGetLimit
{
get
{
return Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("MessageGetLimit"));
}
}
public override void Run()
{
while (true)
{
var messages = queue.GetMessages(MessageGetLimit);
if (messages.Count() > 0)
{
ProcessQueueMessages(messages);
}
else
{
Task.Delay(PollingInterval).Wait(); // without Wait/await, Task.Delay alone would not pause the loop
}
}
}
Problem:
During peak hours, the while loop could be running a couple of times per second. This means that it would be querying the config items up to 100,000 times per day.
Is this detrimental or inefficient?
John's answer is a good one, using the Environment Changing/Changed events to modify your settings without restarts, but I think perhaps a better method is for you to use an exponential back-off policy to make your polling more efficient. By making the code behave smarter on its own, you will reduce how often you are in there tweaking it. Remember that each time you update these environment settings, the change has to be rolled out to all of the instances, which can take a little time depending on how many instances you have running. Also, you are putting a step in here that a human has to be involved in.
You are using Windows Azure Storage queues, which means each time your GetMessages executes it makes a call to the service and retrieves 0 or more messages (up to your MessageGetLimit). Each time it asks, you'll get charged a transaction. Now, understand that transactions are really cheap. Even 100,000 transactions a day is $0.01/day. However, don't underestimate the speed of a loop. :) You may get more throughput than that, and if you have multiple worker role instances this adds up (though it will still be a really small amount of money compared to actually running the instances themselves).
A more efficient path would be to put in an exponential back-off approach to reading your messages off the queue. Check out this post by Maarten for a simple example: http://www.developerfusion.com/article/120619/advanced-scenarios-with-windows-azure-queues/. Couple a back-off approach with auto-scaling of the worker roles based on queue depth and you'll have a solution that relies less on a human adjusting settings. Put in minimum and maximum values for instance counts, adjust the number of messages to pull based on how often messages have been present on recent polls, etc. There are a lot of options here that will reduce your involvement and keep the system efficient.
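The shape of such a back-off schedule can be sketched as a pure function (the bounds below are illustrative, not recommendations):

```csharp
using System;

// Exponential back-off for queue polling: the delay doubles after every
// empty poll and resets to the minimum as soon as a message is found.
public static class Backoff
{
    public static readonly TimeSpan MinDelay = TimeSpan.FromSeconds(1);
    public static readonly TimeSpan MaxDelay = TimeSpan.FromMinutes(1);

    // Next sleep interval, given the current one and whether the last
    // poll returned any messages.
    public static TimeSpan NextDelay(TimeSpan current, bool gotMessages) =>
        gotMessages
            ? MinDelay
            : TimeSpan.FromTicks(Math.Min(current.Ticks * 2, MaxDelay.Ticks));
}
```

The worker's Run loop would then sleep for the current delay whenever GetMessages returns nothing, and call NextDelay after every poll.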
Also, you might look at Windows Azure Service Bus Queues in that they implement long polling, so it results in much fewer transactions while waiting for work to hit the queue.
Upfront disclaimer: I haven't used RoleEnvironment myself.
The MSDN documentation for GetConfigurationSettingValue states that the configuration is read from disk. http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.getconfigurationsettingvalue.aspx. So it is sure to be slow when called often.
The MSDN documentation also shows that there is an event fired when a setting changes. http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changed.aspx. You can use this event to only reload the settings when they have actually changed.
Here is one (untested, not compiled) approach.
private TimeSpan mPollingInterval;
private int mMessageGetLimit;
public override void Run()
{
// Refresh the configuration members only when they change.
RoleEnvironment.Changed += RoleEnvironmentChanged;
// Initialize them for the first time
RefreshRoleEnvironmentSettings();
while (true)
{
var messages = queue.GetMessages(mMessageGetLimit);
if (messages.Count() > 0)
{
ProcessQueueMessages(messages);
}
else
{
Task.Delay(mPollingInterval).Wait(); // as above, Wait() makes the delay actually block
}
}
}
private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
RefreshRoleEnvironmentSettings();
}
private void RefreshRoleEnvironmentSettings()
{
mPollingInterval = TimeSpan.FromSeconds(Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("PollingIntervalSeconds")));
mMessageGetLimit = Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("MessageGetLimit"));
}

Inserting data in background/async task what is the best way?

I have a very quick/lightweight MVC action that is requested very often, and I need to maintain minimal response time under heavy load.
What I need to do, from time to time depending on conditions, is insert a small amount of data into SQL Server (log a unique id for statistics, for ~1-5% of requests).
I don't need the inserted data for the response, and if I lose some of it because of an application restart or similar, I'll survive.
I imagine that I could queue the inserts somehow and do them in the background, maybe even with some kind of buffering: wait until the queue collects 100 inserts and then make them in one pass.
I'm pretty sure somebody must have done/seen such an implementation before; there's no need to reinvent the wheel, so if somebody could point me in the right direction, I would be thankful.
You could trigger a background task from your controller action that will do the insertion (fire and forget):
public ActionResult Insert(SomeViewModel model)
{
Task.Factory.StartNew(() =>
{
// do the inserts
});
return View();
}
Be aware though that IIS could recycle the application at any time which would kill any running tasks.
Create a class that will store the data that needs to be pushed to the server, and a queue to hold the objects:
Queue<LogData> loggingQueue = new Queue<LogData>();
public class LogData
{
    public string DataToLog { get; set; }
}
Then create a timer or some other mechanism within the app that will be triggered every now and then to post the queued data to the database.
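Putting the queue and the batched flush together, a sketch might look like this (LogEntry plays the role of the LogData class above; BufferedLogWriter, the flush delegate, and the batch size are all illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class LogEntry
{
    public string DataToLog { get; set; }
}

// Buffers log entries and flushes them in batches, so the hot request
// path only pays for an in-memory enqueue.
public class BufferedLogWriter
{
    private readonly BlockingCollection<LogEntry> _queue = new BlockingCollection<LogEntry>();
    private readonly int _batchSize;
    private readonly Action<List<LogEntry>> _flush; // e.g. one multi-row INSERT

    public BufferedLogWriter(int batchSize, Action<List<LogEntry>> flush)
    {
        _batchSize = batchSize;
        _flush = flush;
        Task.Factory.StartNew(Pump, TaskCreationOptions.LongRunning);
    }

    public void Enqueue(LogEntry item) => _queue.Add(item);
    public void Complete() => _queue.CompleteAdding();

    private void Pump()
    {
        var batch = new List<LogEntry>(_batchSize);
        foreach (var item in _queue.GetConsumingEnumerable())
        {
            batch.Add(item);
            if (batch.Count >= _batchSize)
            {
                _flush(batch);
                batch = new List<LogEntry>(_batchSize);
            }
        }
        if (batch.Count > 0) _flush(batch); // flush the remainder on shutdown
    }
}
```

As with the fire-and-forget task above, anything still sitting in the buffer is lost if IIS recycles the application, which the question says is acceptable here.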
I agree with Darin Dimitrov's approach, although I would add that you could simply use this task to write to MSMQ on the machine. From there you could write a service that reads the queue and inserts the data into the database. That way you could throttle the service that reads the data, or even move the queue onto a different machine.
If you wanted to take this one step further you could use something like nServiceBus and a pub/sub model to write the events into the database.

C# - Alternative to System.Timers.Timer, to call a function at a specific time

I want to call a specific function in my C# application at a specific time. At first I thought about using a Timer (System.Timers.Timer), but that soon became impossible to use. Why?
Simple. The Timer class requires an Interval in milliseconds, but considering that I might want the function to be executed, let's say, in a week, that would mean:
7 days = 168 hours;
168 hours = 10,080 minutes;
10,080 minutes = 604,800 seconds;
604,800 seconds = 604,800,000 milliseconds;
So the interval would be 604,800,000.
Now let's remember that the accepted Interval range is effectively that of int, which goes from -2,147,483,648 to 2,147,483,647.
That makes Timer useless, not in this exact case, but for anything more than about 25 days, since we cannot set an Interval bigger than 2,147,483,647 milliseconds.
So I need a solution where I could specify when the function should be called. Something like this:
solution.ExecuteAt = "30-04-2010 15:10:00";
solution.Function = "functionName";
solution.Start();
So when the System Time would reach "30-04-2010 15:10:00" the function would be executed in the application.
How can this problem be solved?
Additional information: What will these functions do?
Getting climate information and based on that information:
Starting / Shutting down other applications (most of them console based);
Sending custom commands to those console applications;
Power down, rebooting, sleep, hibernate the computer;
And if possible schedule the BIOS to power up the computer;
EDIT:
It would seem that the Interval property's type is actually double; however, if you set a value bigger than an int to the Interval and call Start(), it throws an exception whose message mentions the valid range [0, Int32.MaxValue].
EDIT 2:
Jørn Schou-Rode suggested using NCron to handle the scheduling tasks, and at first look this seems like a good solution, but I would like to hear from someone who has worked with it.
Your "Start()" method should spawn a thread that wakes up at a defined interval, checks the time, and if you haven't reached the desired time, goes back to sleep.
I would recommend that you just write a program that deals with the business part of it and then execute that program when necessary by using Windows Task Scheduler.
One approach to task scheduling, similar to that proposed by klausbyskov, is to build your scheduling service on top of an existing .NET scheduling framework/library. Compared to using the Windows Task Scheduler, this has the advantages of (a) allowing several jobs to be defined in the same project and (b) keeping jobs and scheduling logic "together", i.e. not relying on server settings prone to getting lost in system upgrades/replacements.
I know of two open-source projects that offer this kind of functionality:
"Quartz.NET is a full-featured, open source job scheduling system that can be used from smallest apps to large scale enterprise systems." I have never actually used this framework myself, but from studying the website, I have the impression of a very solid tool, providing many cool features. The fact that there is a [quartz-net] tag on Stack Overflow might also indicate that it is actually used in the wild.
"NCron is a light-weight library for building and deploying scheduled background jobs on the .NET server platform." It does not have half as many features as Quartz.NET, and it does not have any tag on Stackoverflow, but the author (yours truly) believes that its low-friction API makes it somewhat easier to get started with.
Building your scheduling service on top of NCron, you can schedule a CleanupJob for weekly execution using a single line of code:
service.Weekly().Run<CleanupJob>();
Ok, you will need around three lines of boilerplate code on top of that to actually turn your project into a Windows service, but it sounds more impressive when I claim it can be done with one line of code ;)
You could write some sort of wrapper class for a Timer which takes a DateTime instance. Then you perform the following steps:
Determine the difference between DateTime.Now and the desired time.
If the difference (in milliseconds) is larger than the maximum allowed value for the Timer.Interval property (Int32.MaxValue), set the Interval to that maximum allowed value and start the timer.
Now, when the timer elapses the first time, you simply go back to step 1.
At some time, the difference will be smaller than the maximum allowed value for the Interval property, and then you could fire an event in your wrapper which ultimately calls the desired method.
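The steps above can be sketched as a wrapper around System.Timers.Timer (the class name is illustrative; the timer is re-armed in chunks so the Int32.MaxValue-millisecond Interval limit is never exceeded):

```csharp
using System;
using System.Timers;

// Fires a callback at an absolute time, re-arming an ordinary Timer in
// chunks no larger than Int32.MaxValue milliseconds.
public class LongDelayTimer
{
    private readonly Timer _timer = new Timer { AutoReset = false };
    private readonly DateTime _executeAt;
    private readonly Action _callback;

    public LongDelayTimer(DateTime executeAt, Action callback)
    {
        _executeAt = executeAt;
        _callback = callback;
        _timer.Elapsed += (s, e) => Arm(); // on each wake-up, re-check the clock
    }

    public void Start() => Arm();

    private void Arm()
    {
        double remainingMs = (_executeAt - DateTime.Now).TotalMilliseconds;
        if (remainingMs <= 0)
        {
            _callback(); // the target time has been reached
            return;
        }
        // Sleep at most Int32.MaxValue ms, then re-evaluate on the next Elapsed.
        _timer.Interval = Math.Min(remainingMs, int.MaxValue);
        _timer.Start();
    }
}
```

Note that this schedules against the wall clock each time it wakes, so it also copes with modest clock adjustments between wake-ups.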
Use the System.Threading.Timer:
var timer = new System.Threading.Timer(
    _ => { },                  // pass here a delegate to the method
    null,
    TimeSpan.FromDays(7),      // execute the method after 7 days
    Timeout.InfiniteTimeSpan); // fire once; no periodic signalling
You can use the System.Threading.Timer class, which provides a constructor accepting an interval expressed as an Int64, which should be enough for your needs.
Now for the other stuff :
You can start/stop/configure programs using the Process class (I don't really get what you mean by "custom commands").
You cannot reboot, shut down or control the local BIOS using native .NET classes. Rebooting/shutting down is possible through interop (calling native Windows APIs from .NET), and scheduling the BIOS is just impossible. Or maybe with a special server motherboard? I don't know.
The System.Threading.Timer class has the same limitation (it would throw an ArgumentOutOfRangeException according to MSDN).
There seems to be no .NET Framework class natively able to circumvent the Int32.MaxValue-milliseconds upper bound.
public static class Scheduler
{
private const long TimerGranularity = 100;
static Scheduler()
{
ScheduleTimer = new Timer(Callback, null, Timeout.Infinite, Timeout.Infinite);
Tasks = new SortedQueue<Task>();
}
private static void Callback(object state)
{
    if (Tasks.Count == 0)
        return; // nothing scheduled yet
    var first = Tasks.Peek();
    if (first.ExecuteAt < DateTime.Now)
    {
        Tasks.Dequeue();
        var executionThread = new Thread(() => first.Function());
        executionThread.Start();
    }
}
private static Timer ScheduleTimer { get; set; }
public static void Start()
{
ScheduleTimer.Change(0, TimerGranularity);
}
public static void Add(Task task)
{
Tasks.Enqueue(task);
}
public static SortedQueue<Task> Tasks { get; set; }
}
public class Task : IComparable<Task>
{
public Func<Boolean> Function { get; set; }
public DateTime ExecuteAt { get; set; }
public int CompareTo(Task other)
{
return ExecuteAt.CompareTo(other.ExecuteAt);
}
}
The solution I'd use is something similar to the example above: a Scheduler class that manages all the Tasks (so as to avoid having a timer for each task we schedule).
Tasks are added to a queue capable of sorted insertion. Note that SortedQueue<T> is not a type in the .NET Framework but a hypothetical, easy-to-code collection capable of sorted insertion on a comparable type T.
The scheduler wakes every TimerGranularity milliseconds and checks for the first task whose ExecuteAt time has passed, then executes it on a separate thread.
Additional effort could be spent collecting all surpassed tasks (instead of only the first one), but I left that out for the sake of clarity.
There is a NuGet package called Quartz.NET.
You can use it exactly for this.
