I have a Worker Role which processes items off a queue. It is basically an infinite loop which pops items off of the queue and asynchronously processes them.
I have two configuration settings (PollingInterval and MessageGetLimit) which I want the worker role to pick up when changed (so with no restart required).
private TimeSpan PollingInterval
{
    get
    {
        return TimeSpan.FromSeconds(Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("PollingIntervalSeconds")));
    }
}

private int MessageGetLimit
{
    get
    {
        return Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("MessageGetLimit"));
    }
}
public override void Run()
{
    while (true)
    {
        var messages = queue.GetMessages(MessageGetLimit);
        if (messages.Any())
        {
            ProcessQueueMessages(messages);
        }
        else
        {
            // Task.Delay(...) on its own returns immediately; it must be
            // waited on to actually pause the loop.
            Task.Delay(PollingInterval).Wait();
        }
    }
}
Problem:
During peak hours, the while loop could be running a couple of times per second. This means that it would be querying the config items up to 100,000 times per day.
Is this detrimental or inefficient?
John's answer is a good one, using the Environment Changing/Changed events to modify your settings without restarts, but I think perhaps a better method is for you to use an exponential back-off policy to make your polling more efficient. By having the code behave more intelligently on its own, you will reduce how often you are in there tweaking it. Remember that each time you update these environment settings the change has to be rolled out to all of the instances, which can take a little time depending on how many instances you have running. Also, you are putting a step in here that requires a human to be involved.
You are using Windows Azure Storage Queues, which means each time your GetMessages call executes it makes a call to the service and retrieves 0 or more messages (up to your MessageGetLimit). Each request is billed as a transaction. Now, understand that transactions are really cheap: even 100,000 transactions a day is about $0.01/day. However, don't underestimate the speed of a loop. :) You may get more throughput than that, and if you have multiple worker role instances this adds up (though it will still be a really small amount of money compared to actually running the instances themselves).
A more efficient path would be to put in an exponential back-off approach to reading your messages off the queue. Check out this post by Maarten for a simple example: http://www.developerfusion.com/article/120619/advanced-scenarios-with-windows-azure-queues/. Couple a back-off approach with auto-scaling of the worker roles based on queue depth and you'll have a solution that relies less on a human adjusting settings. Put in minimum and maximum values for instance counts, adjust the number of messages to pull based on whether more work was waiting the very next time you asked, etc. There are a lot of options here that will reduce your involvement and give you an efficient system.
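For illustration, here is a minimal sketch of what such a back-off loop might look like inside Run(), reusing the queue, MessageGetLimit and ProcessQueueMessages members from the question. The one-second floor and two-minute ceiling are assumed values you would tune for your workload:

TimeSpan minInterval = TimeSpan.FromSeconds(1);   // assumed floor
TimeSpan maxInterval = TimeSpan.FromMinutes(2);   // assumed ceiling
TimeSpan interval = minInterval;

while (true)
{
    var messages = queue.GetMessages(MessageGetLimit);
    if (messages.Any())
    {
        ProcessQueueMessages(messages);
        interval = minInterval; // work found: go back to aggressive polling
    }
    else
    {
        Thread.Sleep(interval);
        // No work: double the wait, capped at the ceiling.
        interval = TimeSpan.FromSeconds(Math.Min(interval.TotalSeconds * 2, maxInterval.TotalSeconds));
    }
}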
Also, you might look at Windows Azure Service Bus Queues, in that they implement long polling, which results in far fewer transactions while waiting for work to hit the queue.
Upfront disclaimer: I haven't used RoleEnvironment myself.
The MSDN documentation for GetConfigurationSettingValue states that the configuration is read from disk: http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.getconfigurationsettingvalue.aspx. So calling it several times per second is likely to be wasteful.
The MSDN documentation also shows that there is an event fired when a setting changes. http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changed.aspx. You can use this event to only reload the settings when they have actually changed.
Here is one (untested, not compiled) approach.
private TimeSpan mPollingInterval;
private int mMessageGetLimit;

public override void Run()
{
    // Refresh the configuration members only when they change.
    RoleEnvironment.Changed += RoleEnvironmentChanged;
    // Initialize them for the first time.
    RefreshRoleEnvironmentSettings();

    while (true)
    {
        var messages = queue.GetMessages(mMessageGetLimit);
        if (messages.Any())
        {
            ProcessQueueMessages(messages);
        }
        else
        {
            Task.Delay(mPollingInterval).Wait(); // Wait() so the loop actually pauses
        }
    }
}

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
    RefreshRoleEnvironmentSettings();
}

private void RefreshRoleEnvironmentSettings()
{
    mPollingInterval = TimeSpan.FromSeconds(Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("PollingIntervalSeconds")));
    mMessageGetLimit = Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("MessageGetLimit"));
}
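One caveat, which John's answer (mentioned above) also touches on: a configuration change first raises RoleEnvironment.Changing before it is applied, and a handler that sets e.Cancel = true there forces the instance to restart in order to pick up the change. If you want these two settings applied in place, any Changing handler you register should leave the change uncancelled, for example:

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // Leaving e.Cancel as false applies the change without recycling the instance;
    // setting it to true would force a restart.
    e.Cancel = false;
}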
We're running SignalR in a stand-alone ASP.Net app running in a virtual directory off our main ASP.Net website.
In our SignalR hub implementation, we have a static ConcurrentDictionary<int, UserState> variable maintaining some light-weight user state across individual connections. Over time that variable will be added to based upon client-side actions (i.e. as new users start interacting with our website). This variable is essentially providing some simple state tracking across connections.
We don't particularly want to add a special SignalR backplane which would require additional infrastructure dependencies as our data load is likely to be relatively lightweight and tracking this in-memory should be sufficient.
When a user has been inactive for a long-enough period of time (let's say 1 hour) we want to remove them from the dictionary variable. Whatever process does this should be guaranteed to run on a consistent basis - so, not dependent upon user behaviour, but instead upon a timed duration.
I have what I believe to be a good solution for doing this:
public class UserStateService : IUserStateService
{
    private static readonly ConcurrentDictionary<int, UserState> recentUsers = new ConcurrentDictionary<int, UserState>();
    private static Timer timer;

    public static void StartCleanup()
    {
        timer = new Timer( CleanupRecentUsers, null, 0, 60000 );
    }

    public static void StopCleanup()
    {
        timer.Dispose();
    }

    private static void CleanupRecentUsers( object state )
    {
        var now = DateTime.UtcNow;
        var oldUsers = recentUsers.Select( p => p.Value ).Where( u => u.LastActionTime.AddHours( 1 ) > now );
        foreach ( var user in oldUsers )
        {
            UserState removedUser;
            recentUsers.TryRemove( user.UserId, out removedUser );
        }
    }

    // other code for adding/updating user state.
}
As mentioned, I think this is a good solution. However, I'm not very conversant in thread management (though I'm aware that dealing with static objects in ASP.Net is dangerous).
StartCleanup() and StopCleanup() are called once each at the start and end of the application lifecycle, respectively. The UserStateService is supplied to our Hub classes via our IoC container (StructureMap) and is currently not scoped with any special lifecycle handling (i.e. it's not singleton or thread-scoped, simply a new instance per request).
We're already using static concurrent dictionaries in our production app and they're working fine without any known instances of performance issues. What I'm not sure about is running a Timer loop here.
So, my question is, are there any obvious risks here relating to threads being blocked/locked (or CPU use generally going out of control for any reason) that I need to mitigate or which could make this approach unworkable?
There's no particular problem with using a Timer in the way that you suggest.
However, there are a couple of problems with your code.
First, you have:
var oldUsers = recentUsers
    .Select( p => p.Value )
    .Where( u => u.LastActionTime.AddHours( 1 ) > now );
That will delete any user whose last activity was within the last hour. So anybody you saw a minute ago will be removed. The result is that your recentUsers list will probably be empty most of the time. At best, it will contain users who were last seen at least an hour ago.
I think you want to change that to <. Or, to think about it another way:
.Where(u => (now - u.LastActionTime) > TimeSpan.FromHours(1));
There might also be a race condition in that a user selected for removal might make a request before the removal actually occurs, so you end up removing a user that just made a request. The time window for that race condition is pretty narrow, though, and probably isn't worth worrying about.
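For completeness, a corrected CleanupRecentUsers might look like this, using the same members as in the question; the ToList() call is my addition, to snapshot the selection before the dictionary is mutated:

private static void CleanupRecentUsers( object state )
{
    var now = DateTime.UtcNow;
    // Select users idle for more than an hour; they are removed below.
    var oldUsers = recentUsers.Values
        .Where( u => (now - u.LastActionTime) > TimeSpan.FromHours( 1 ) )
        .ToList();
    foreach ( var user in oldUsers )
    {
        UserState removedUser;
        recentUsers.TryRemove( user.UserId, out removedUser );
    }
}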
I'm trying to send my RS-232 device multiple SerialPort.Write commands right after each other. However, I don't think it can handle multiple WRITE commands at once.
Currently, I'm just using Thread.Sleep(500) to delay between WRITEs, but is there a way to detect when the perfect time to send is? Or with buffers?
Example code:
Interface
private void btn_infuse_Click(object sender, RoutedEventArgs e) {
    cmd.SetTargetInfusionVolume(spmanager, double.Parse(tbox_targetvolume.Text));
    cmd.StartInfuse(spmanager); // cmd object of Command class
}
Command class
public void StartInfuse(SerialPortManager spm) {
    spm.Write("RUN"); // spm object of SerialPortManager class
}
public void SetTargetInfusionVolume(SerialPortManager spm, double num) {
    spm.Write("MLT " + num.ToString());
}
SerialPortManager class
public void Write(string sData) {
    if (oSerialPort.IsOpen) { // oSerialPort object of SerialPort class
        try {
            oSerialPort.Write(sData + "\r");
        }
        catch { MessageBox.Show("error"); }
    }
}
If your serial port settings (especially, as Hans Passant mentioned, flow control) are correct, then the problem is most likely that your device can't process messages fast enough to keep up with you if you send them too quickly, or that it expects "silent" gaps between messages in order to delineate them.
In this case, a Sleep() to introduce a transmission delay is a very reasonable approach. You will need to find a sensible delay that guarantees the device handles your messages successfully, ideally without stalling your application for too long.
All too often this involves trial and error, but consult the documentation for the device, as quite a few devices use gaps in transmission to indicate the end of a message packet. For example, a device may expect you to be silent for a short time after each message: if it specifies 10 bits' worth of "silent time" on a 2400 bps link, that corresponds to 10/2400ths of a second, or just over 4 milliseconds. This can all be complicated a bit by Windows, though, as it tends to buffer data (i.e. it will hang on to it for a few milliseconds to see if you are going to ask it to transmit anything more), so you may need a significantly longer delay than should strictly be required - maybe 15-20 ms. And of course, I could be barking up the wrong tree, and you may find you need something as large as 500 ms to get it to work.
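If a fixed Thread.Sleep(500) before every write turns out to be overkill, one option is to enforce only a minimum gap since the previous write, for example with a Stopwatch inside your SerialPortManager. This is just a sketch; the 20 ms gap is an assumed starting point, not a known requirement of your device:

private readonly Stopwatch sinceLastWrite = Stopwatch.StartNew();
private const int MinimumGapMs = 20; // assumed inter-message gap; tune for your device

public void Write(string sData) {
    if (!oSerialPort.IsOpen) return;
    // Wait out only the remainder of the gap, instead of a fixed delay every time.
    long remaining = MinimumGapMs - sinceLastWrite.ElapsedMilliseconds;
    if (remaining > 0) {
        Thread.Sleep((int)remaining);
    }
    try {
        oSerialPort.Write(sData + "\r");
    }
    catch { MessageBox.Show("error"); }
    sinceLastWrite.Restart();
}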
I have a very quick/lightweight MVC action that is requested very often, and I need to maintain minimal response time under heavy load.
What I need to do, from time to time depending on conditions, is insert a small amount of data into SQL Server (log a unique id for statistics, for ~1-5% of requests).
I don't need the inserted data for the response, and if I lose some of it because of an application restart or something, I'll survive.
I imagine that I could queue the inserts somehow and do them in the background, maybe even with some kind of buffering - e.g. wait until the queue collects 100 inserts and then make them in one pass.
I'm pretty sure somebody must have done/seen such an implementation before; there's no need to reinvent the wheel, so if somebody could point me in the right direction, I would be thankful.
You could trigger a background task from your controller action that will do the insertion (fire and forget):
public ActionResult Insert(SomeViewModel model)
{
    Task.Factory.StartNew(() =>
    {
        // do the inserts
    });

    return View();
}
Be aware though that IIS could recycle the application at any time which would kill any running tasks.
Create a class that will store the data that needs to be pushed to the server, and a queue to hold the objects:

Queue<LogData> loggingQueue = new Queue<LogData>();

public class LogData
{
    public int DataToLog { get; set; } // type assumed; the original omitted it
}
Then create a timer or some other mechanism within the app that will be triggered every now and then to post the queued data to the database.
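A rough sketch of that idea follows, using a System.Threading.Timer and a ConcurrentQueue so request threads can enqueue without locking. The five-second flush period and 100-item batch cap are arbitrary, and the actual SQL insert is left as a placeholder:

private static readonly ConcurrentQueue<LogData> loggingQueue = new ConcurrentQueue<LogData>();
private static readonly Timer flushTimer = new Timer(FlushQueue, null, 5000, 5000);

public static void Enqueue(LogData data)
{
    loggingQueue.Enqueue(data); // called from the controller action; never blocks
}

private static void FlushQueue(object state)
{
    var batch = new List<LogData>();
    LogData item;
    while (batch.Count < 100 && loggingQueue.TryDequeue(out item))
    {
        batch.Add(item);
    }
    if (batch.Count > 0)
    {
        // Insert the whole batch in one round trip, e.g. via SqlBulkCopy
        // or a table-valued parameter (omitted here).
    }
}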
I agree with @Darin Dimitrov's approach, although I would add that you could simply use that task to write to MSMQ on the machine. From there you could write a service that reads the queue and inserts the data into the database. That way you could throttle the service that reads the data, or even move the queue onto a different machine.
If you wanted to take this one step further, you could use something like NServiceBus and a pub/sub model to write the events into the database.
I have one service which is listening for position updates coming from an upstream system. There are multiple consumers of these position updates:
One consumer wants to get an update as soon as possible.
One consumer wants to get an update every 30 seconds.
One consumer wants to get an update when 50 updates have accumulated.
One consumer wants to get an update when 50 updates have accumulated or 30 seconds have passed, whichever is earlier.
The above can change at any time, and new variations can be added.
How can I make this configurable and scalable, and what kind of programming approach should I use?
I am developing a Windows service in C#.
It sounds like you are describing a scenario where the service is an intermediary between the publishing source (the service itself is a subscriber) and the service re-broadcasts this information to N subscribers, but according to their schedule.
So assuming an update is a single position update and not some sort of aggregation like a rolling average or a buffer (e.g. just the latest position of a car every 30 seconds, not all its positions from the last 30 seconds), then you need to maintain some information for each subscriber:
a subscription. Who is the consumer? How do I notify it? (e.g. callback, reply queue, etc.)
a specification. What does the consumer want and when? (e.g. every 50 ticks)
state
time since last send
number of updates since last send
...
As the service receives updates, for each consumer it must evaluate the specification against the state for each update from the source; something like:
if (consumer.Spec.Matches(consumer.State, updateMessage))
    SendUpdate(consumer.Subscription.Callback, updateMessage);
The above assumes your spec is directly executable by the service (i.e. the consumers are in-process, or the spec was serialized and can be deserialized by the service). If this isn't the case, your spec could perhaps be expressed in a DSL (e.g. a parseable representation that the server could compile into something it can execute). Another approach is thinking of the spec as an instruction set. For example:
public enum FrequencyUnit
{
    SecondsSinceLastSend,
    UpdatesSinceLastSend,
}

public class Frequency
{
    public double Value { get; set; }
    public FrequencyUnit Unit { get; set; }
}

public enum Operator
{
    Every, // Unary: e.g. every update; every 10 sec; every 5 updates
    Or,    // Nary: e.g. every 50 or every 20 sec (whichever's first)
    And,   // Nary: e.g. 19 messages and 20 sec have passed
    // etc.
}

public class UpdateSpec
{
    public Frequency[] Frequencies { get; set; }
    public Operator Operator { get; set; }
}
These are pretty flexible and can be configured on the server in code, or built up by reading XML or something. They could also be passed to the service from the consumer itself upon registration. For example, an IService.Register() could expose an interface taking in the subscription and the specification.
The last bit is scalability. I described the service looping over the consumers for each update. This won't scale well, because the loop may block receiving the updates from the source or, if asynchronous with the source, would likely accumulate updates faster than it processes them.
A strategy to deal with this is to add an internal queue to the information you maintain for each subscriber. The service, upon receiving an update, would enqueue it to each internal queue. Service tasks (TPL-based), thread-pool threads, or long-lived threads would then dequeue and evaluate the update as above. There are many possible variations and optimizations of this.
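As a rough sketch of that layout - the Consumer, Subscription, ConsumerState and PositionUpdate types and the consumers collection are illustrative, not a real API, while Spec.Matches and SendUpdate come from the pseudocode above:

// Each consumer owns a queue, so a slow consumer never blocks ingestion.
public class Consumer
{
    public Subscription Subscription { get; set; }
    public UpdateSpec Spec { get; set; }
    public ConsumerState State { get; set; }
    public ConcurrentQueue<PositionUpdate> Pending = new ConcurrentQueue<PositionUpdate>();
}

// On every update from the upstream source: enqueue only, never evaluate inline.
void OnUpstreamUpdate(PositionUpdate update)
{
    foreach (var consumer in consumers)
        consumer.Pending.Enqueue(update);
}

// One long-running task per consumer drains its queue and applies the spec.
void StartWorker(Consumer consumer)
{
    Task.Factory.StartNew(() =>
    {
        while (true)
        {
            PositionUpdate update;
            while (consumer.Pending.TryDequeue(out update))
            {
                if (consumer.Spec.Matches(consumer.State, update))
                    SendUpdate(consumer.Subscription.Callback, update);
            }
            Thread.Sleep(10); // crude idle wait; a BlockingCollection would avoid polling
        }
    }, TaskCreationOptions.LongRunning);
}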
I want to call a specific function in my C# application at a specific time. At first I thought about using a Timer (System.Timers.Timer), but that soon became impossible to use. Why?
Simple. The Timer class requires an Interval in milliseconds, but consider that I might want the function to be executed, let's say, in a week. That would mean:
7 days = 168 hours;
168 hours = 10,080 minutes;
10,080 minutes = 604,800 seconds;
604,800 seconds = 604,800,000 milliseconds;
So the interval would be 604,800,000;
Now let's remember that the Interval accepted data type is int, and as we know int range goes from -2,147,483,648 to 2,147,483,647.
That makes Timer useless, not just in this case, but in any case where the delay is more than about 25 days, since we cannot set an Interval bigger than 2,147,483,647 milliseconds.
So I need a solution where I could specify when the function should be called. Something like this:
solution.ExecuteAt = "30-04-2010 15:10:00";
solution.Function = "functionName";
solution.Start();
So when the System Time would reach "30-04-2010 15:10:00" the function would be executed in the application.
How can this problem be solved?
Additional information: What will these functions do?
Getting climate information and based on that information:
Starting / Shutting down other applications (most of them console based);
Sending custom commands to those console applications;
Power down, rebooting, sleep, hibernate the computer;
And if possible schedule the BIOS to power up the computer;
EDIT:
It would seem that the Interval accepted data type is actually double; however, if you set a value bigger than an int to the Interval and call Start(), it throws an exception saying the interval must be in the range [0, Int32.MaxValue].
EDIT 2:
Jørn Schou-Rode suggested using NCron to handle the scheduling tasks, and at first look this seems a good solution, but I would like to hear from someone who has worked with it.
Your "Start()" method should spawn a thread that wakes up at a defined interval, checks the time, and if you haven't reached the desired time, goes back to sleep.
I would recommend that you just write a program that deals with the business part of it and then execute that program when necessary by using Windows Task Scheduler.
One approach to task scheduling, similar to that proposed by klausbyskov, is to build your scheduling service on top of an existing .NET scheduling framework/library. Compared to using the Windows Task Scheduler, this has the advantages of (a) allowing several jobs to be defined in the same project and (b) keeping jobs and scheduling logic "together" - i.e. not relying on server settings prone to getting lost in system upgrades/replacements.
I know of two open-source projects that offer this kind of functionality:
"Quartz.NET is a full-featured, open source job scheduling system that can be used from smallest apps to large scale enterprise systems." I have never actually used this framework myself, but from studying the website, I have the impression of a very solid tool, providing many cool features. The fact that there [quartz-net] tag on Stackoverflow might also indicate that it is actually used in the wild.
"NCron is a light-weight library for building and deploying scheduled background jobs on the .NET server platform." It does not have half as many features as Quartz.NET, and it does not have any tag on Stackoverflow, but the author (yours truly) believes that its low-friction API makes it somewhat easier to get started with.
Building your scheduling service on top of NCron, you can schedule a CleanupJob for weekly execution using a single line of code:
service.Weekly().Run<CleanupJob>();
OK, you will need around three lines of boilerplate code on top of that to actually turn your project into a Windows service, but it sounds more impressive when I claim that it can be done with one line of code ;)
You could write some sort of wrapper class for a Timer which takes a DateTime instance. Then you perform the following steps:
Determine the difference between DateTime.Now and the desired time.
If the difference (in milliseconds) is larger than the maximum allowed value for the Timer.Interval property (Int32.MaxValue, per the question's edit), set the Interval to that maximum allowed value and start the timer.
Now, when the timer elapses the first time, you simply go back to step 1.
At some time, the difference will be smaller than the maximum allowed value for the Interval property, and then you could fire an event in your wrapper which ultimately calls the desired method.
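A minimal sketch of such a wrapper; the class and member names are mine, and error handling and thread safety are omitted:

public class ScheduledTimer
{
    private readonly System.Timers.Timer timer = new System.Timers.Timer();
    private readonly DateTime executeAt;
    private readonly Action callback;

    public ScheduledTimer(DateTime executeAt, Action callback)
    {
        this.executeAt = executeAt;
        this.callback = callback;
        timer.AutoReset = false;          // fire once per arming; we re-arm manually
        timer.Elapsed += (s, e) => Arm();
    }

    public void Start() { Arm(); }

    private void Arm()
    {
        double remaining = (executeAt - DateTime.Now).TotalMilliseconds;
        if (remaining <= 0)
        {
            callback(); // target time reached (or already passed): fire
            return;
        }
        // Step 2: never exceed the timer's Int32.MaxValue ceiling; if the target
        // is further away, we simply wake up and re-arm (steps 3 and 4 above).
        timer.Interval = Math.Min(remaining, int.MaxValue);
        timer.Start();
    }
}

// Usage: new ScheduledTimer(new DateTime(2010, 4, 30, 15, 10, 0), functionName).Start();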
Use the System.Threading.Timer:
var timer = new System.Threading.Timer(
    delegate { },          // pass here a delegate to the method
    null,
    TimeSpan.FromDays(7),  // execute the method after 7 days
    TimeSpan.Zero);        // zero period: fire once, no periodic signaling
// Keep a reference to the timer alive, or it may be garbage collected before it fires.
You can use the System.Threading.Timer class, which provides a constructor accepting an interval expressed as an Int64, which should be enough for your needs.
Now for the other stuff:
You can start/stop/configure programs using the Process class (I don't really get what you call "custom commands").
You cannot reboot or shut down the machine or control the BIOS using native .NET classes. Rebooting/shutting down is possible through interop (calling native Windows APIs from .NET), and scheduling the BIOS to power up the computer is just impossible. Or maybe with a special server motherboard? I don't know.
The System.Threading.Timer class has the same limitation (it would throw an ArgumentOutOfRangeException according to MSDN).
There seems to be no .NET Framework class natively able to circumvent the Int32.MaxValue milliseconds upper bound.
public static class Scheduler
{
    private const long TimerGranularity = 100;

    static Scheduler()
    {
        ScheduleTimer = new Timer(Callback, null, Timeout.Infinite, Timeout.Infinite);
        Tasks = new SortedQueue<Task>();
    }

    private static void Callback(object state)
    {
        if (Tasks.Count == 0)
            return; // nothing scheduled yet

        var first = Tasks.Peek();
        if (first.ExecuteAt < DateTime.Now)
        {
            Tasks.Dequeue();
            // Run the task on its own thread so a slow task
            // does not delay the ones behind it.
            var executionThread = new Thread(() => first.Function());
            executionThread.Start();
        }
    }

    private static Timer ScheduleTimer { get; set; }

    public static void Start()
    {
        ScheduleTimer.Change(0, TimerGranularity);
    }

    public static void Add(Task task)
    {
        Tasks.Enqueue(task);
    }

    public static SortedQueue<Task> Tasks { get; set; }
}

public class Task : IComparable<Task>
{
    public Func<Boolean> Function { get; set; }
    public DateTime ExecuteAt { get; set; }

    public int CompareTo(Task other)
    {
        return ExecuteAt.CompareTo(other.ExecuteAt);
    }
}
The solution I'd use is something similar to the above example: a single Scheduler class that manages all the Tasks (to avoid having one timer per scheduled task).
Tasks are added to a queue that performs sorted insertion. Note that SortedQueue<T> is not a type in the .NET Framework but a hypothetical, easy-to-code collection capable of sorted insertion on a comparable type T.
The scheduler awakes every TimerGranularity milliseconds and checks for the first task whose ExecuteAt time has been surpassed, then executes it on a separate thread.
Additional effort could be spent handling all surpassed tasks in one pass (instead of only the first), but I left that out for the sake of clarity.
There is a NuGet package called Quartz.NET.
You can use it exactly for this.