I have been given a windows service written by a previous intern at my current internship that monitors an archive and alerts specific people through emails and pop-ups should one of the recorded values go outside a certain range. It currently uses a timer to check the archive every 30 seconds, and I have been asked if I would be able to update it to allow a choice of time depending on what "tag" is being monitored. It uses an XML file to keep track of which tags are being monitored. Would creating multiple timers in the service be the most efficient way of going about this? I'm not really sure what approach to take.
The service is written in C# using .NET 3.5.
Depending on the granularity, you could use a single timer whose interval is a common factor of the intervals they want. Say the XML file specifies that each archive is to be checked every so many minutes: set up a timer that fires once a minute, and on each tick check how long it has been since each archive was last checked and whether it is due again.
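A minimal sketch of that idea, assuming the XML file yields an interval in seconds per tag; `TagScheduler` and `CheckTag` are invented names:

```csharp
using System;
using System.Collections.Generic;
using System.Timers;

class TagScheduler
{
    private readonly Dictionary<string, int> _intervalSeconds;   // tag -> interval, read from the XML file
    private readonly Dictionary<string, DateTime> _lastRun = new Dictionary<string, DateTime>();
    private readonly Timer _timer = new Timer(1000);             // one base tick per second (the common factor)

    public TagScheduler(Dictionary<string, int> intervalSeconds)
    {
        _intervalSeconds = intervalSeconds;
        _timer.Elapsed += OnTick;
        _timer.Start();
    }

    private void OnTick(object sender, ElapsedEventArgs e)
    {
        // If a check can outlast a tick, guard against overlapping ticks
        // (e.g. with a lock, or by setting AutoReset = false and restarting).
        foreach (KeyValuePair<string, int> entry in _intervalSeconds)
        {
            DateTime last;
            if (!_lastRun.TryGetValue(entry.Key, out last) ||
                (DateTime.UtcNow - last).TotalSeconds >= entry.Value)
            {
                _lastRun[entry.Key] = DateTime.UtcNow;
                CheckTag(entry.Key);   // the existing archive check and alerting logic
            }
        }
    }

    private void CheckTag(string tag) { /* existing alert logic */ }
}
```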
If you're getting a chance to re-architect, I would move away from a service to a set of scheduled tasks. Write it so one task handles one archive, then write a controller program that sets up the scheduled tasks (and can stop them, change them, etc.). The API for scheduled tasks on Windows 7 is nice and understandable, and unlike a service you can impose restrictions like "don't run if the computer is on battery" or "only run if the machine is idle", along with your preferences for what to do if a chance to run the task was missed. Seven or eight scheduled tasks, each on its own schedule, running the same executable of yours and passing in the archive path and the email address, is a lot neater than one service trying to juggle everything at once. Plus the machine will start up faster when there isn't yet another autostart service on it.
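If you go that route, here is a hedged sketch of the controller, shelling out to the built-in schtasks.exe rather than using the COM API; the task name, path and schedule are invented, and paths containing spaces need extra quoting:

```csharp
using System.Diagnostics;

static void CreateArchiveTask(string taskName, string archivePath, string email, int minutes)
{
    // schtasks /Create registers a task; /F overwrites an existing one,
    // /SC MINUTE /MO n runs it every n minutes.
    string args = string.Format(
        "/Create /F /TN \"{0}\" /SC MINUTE /MO {1} /TR \"C:\\Tools\\ArchiveCheck.exe {2} {3}\"",
        taskName, minutes, archivePath, email);

    using (Process p = Process.Start("schtasks.exe", args))
    {
        p.WaitForExit();   // a non-zero ExitCode means the task was not created
    }
}
```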
Efficient? Possibly not - especially if you have lots of tags, as each timer takes a tiny but finite amount of resources.
An alternative approach might be to have one timer that fires every second, and when that happens you check a list of outstanding requests.
This has the benefit of being easier to debug if things go wrong as there's only one active thread.
As in most code maintenance situations, however, it depends on your existing code, your ability, and what you feel most comfortable with.
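For illustration, a minimal sketch of the one-second-timer approach described above, with each outstanding request carrying its own due time; the type and member names are invented:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Check
{
    public TimeSpan Interval;
    public DateTime NextDue;
    public Action Run;
}

class SingleTimerScheduler
{
    private readonly List<Check> _checks = new List<Check>();
    private readonly object _gate = new object();
    private Timer _timer;

    public void Add(TimeSpan interval, Action run)
    {
        lock (_gate)
            _checks.Add(new Check { Interval = interval, NextDue = DateTime.UtcNow, Run = run });
    }

    public void Start()
    {
        // A single timer fires every second; the lock keeps only one
        // tick active at a time, which is what makes debugging easy.
        _timer = new Timer(Tick, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
    }

    private void Tick(object state)
    {
        lock (_gate)
        {
            DateTime now = DateTime.UtcNow;
            foreach (Check check in _checks)
            {
                if (check.NextDue <= now)
                {
                    check.NextDue = now + check.Interval;
                    check.Run();   // the actual monitoring work
                }
            }
        }
    }
}
```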
I would suggest just using one timer scheduled at a common divisor of the required intervals.
For example, configure your timer to fire every second; you can then handle any interval (1 second, 2 seconds, ...) by counting the corresponding number of timer ticks.
I have developed a .NET program for a SCADA solution to control a heavy machine, but I have a problem related to time management in the application and I am looking for some wise advice.
I use a WinForms timer to regularly check and record the values of some variables related to the process being controlled. The application runs for about 40 hours without interruption. At the beginning the timer does its job, at a 5-minute interval, recording the values to the database. But by the end of the 40 hours, the same timer, with its configuration unchanged, is invoked only once per hour.
So my question basically is: what's the best way to ensure certain code runs at fixed periodic intervals in a C# program? I don't really need a pure real-time solution, just to ensure a function is always called at fixed time intervals. Those intervals are not particularly critical; we are talking about 5-minute periods. It is not important how long the code takes to run, but it is important that it is always executed at the same interval.
Is it a better option to run the application as a service rather than a regular user-space program? Is it a better option to develop the "time-critical" part in C++ and communicate with the C# code via sockets or similar?
At the beginning the timer does its job, at a 5-minute interval...
I don't really need a pure real-time solution...
Windows Task Scheduler is built with exactly this in mind. With it you can simply have it run an .exe of your choosing, with optional arguments. If the schedules are static, you are arguably better off setting up the schedule in the Task Scheduler UI, or, if they are complex, via the COM API.
Much better than having yet another process hanging about counting down when Task Scheduler already does exactly that.
I have to refactor a fairly time-consuming process in one of my applications and after doing some research I think it's a perfect match for using TPL. I wanted to clarify my understanding of it and ask if there are any more issues which I should take into account.
In a few words, I have a Windows service which runs overnight and sends out emails with data updates to around 10,000 users. At present, the whole process takes around 8 hours to complete. I would like to reduce it to 2 hours at most.
The application workflow follows the steps below:
1. Iterate through all users list
2. Check if this user has to be notified
3. If so, create an email body by calling external service
4. Send an email
Analysis of the code has shown that step 3 is the most time-consuming one, taking around 3.5 sec to complete. That means that when processing 10,000 users, my application waits well over 6 hours in total for responses from the external service! I think that is reason enough to try to introduce some asynchronous and parallel processing.
So, my plan is to use the Parallel class and its ForEach method to iterate through the users in step 1. As I understand it, this should distribute the processing of the users across separate threads, making them run in parallel? The operations are completely independent of each other and none returns a value; any exception thrown will be persisted to the logs database. As for step 3, I would like to convert the call to the external service into an async call. As I understand it, this would release the thread's resources so it could be reused by the Parallel class to start processing the next user in the list?
I read through the MS documentation on the TPL, especially the Potential Pitfalls in Data and Task Parallelism document, and the only point I'm not sure about is "Avoid Writing to Shared Memory Locations": I am using a local integer to count the total number of emails processed. As for the rest, I'm quite positive they don't apply to my scenario.
My question, before I write any implementation, is: is what I'm trying to achieve possible (especially the async/await part for the external service call)? Should I be aware of any other obstacles that might affect my implementation? Is there a better way of improving the workflow?
Just to clarify, I'm using .NET 4.0.
Yes, you can use the TPL for your problem. If you cannot influence your external service, then this might be the best way.
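On the async/await part of the question: async/await shipped with C# 5 / .NET 4.5, so on .NET 4.0 you would need the Microsoft.Bcl.Async package (and the C# 5 compiler); plain Parallel.ForEach alone will already overlap the blocking calls. A minimal sketch, where NeedsNotification, BuildBody and SendEmail stand in for your steps 2-4; note the counter is updated with Interlocked, which takes care of the "shared memory" concern:

```csharp
using System.Threading;
using System.Threading.Tasks;

int processed = 0;   // shared across threads: increment with Interlocked, never with ++

Parallel.ForEach(users,
    new ParallelOptions { MaxDegreeOfParallelism = 16 },   // cap the load on the external service
    user =>
    {
        if (!NeedsNotification(user)) return;   // step 2
        string body = BuildBody(user);          // step 3: the slow (~3.5 s) external call
        SendEmail(user, body);                  // step 4
        Interlocked.Increment(ref processed);
    });
```

With ~3.5 s per call and 16 concurrent requests, the six-plus hours of waiting shrinks to well under an hour, assuming the external service can actually sustain that concurrency (see the next answer about testing that).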
However, you can make the biggest gains if you can get your external source to accept batches, because then the source itself can optimize the work. Right now you have the overhead of 10,000 messages to serialize, send, process, receive, and deserialize, much of which could be done once. In addition, your external source might be able to optimize its work if it knows it will receive multiple records.
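Purely illustrative, assuming the external source offered a batched endpoint (BuildBodies is invented): the 10,000 round-trips could collapse into about a hundred.

```csharp
using System.Collections.Generic;

static IEnumerable<List<T>> Batch<T>(IEnumerable<T> source, int size)
{
    // Group a sequence into fixed-size buckets.
    var bucket = new List<T>(size);
    foreach (T item in source)
    {
        bucket.Add(item);
        if (bucket.Count == size)
        {
            yield return bucket;
            bucket = new List<T>(size);
        }
    }
    if (bucket.Count > 0) yield return bucket;
}

// One round-trip per 100 users instead of 100 round-trips:
foreach (List<User> batch in Batch(usersToNotify, 100))
{
    IDictionary<User, string> bodies = BuildBodies(batch);   // hypothetical batched call
    foreach (KeyValuePair<User, string> pair in bodies)
        SendEmail(pair.Key, pair.Value);
}
```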
So the bottom line is: if you need to optimize locally, the TPL is fine. If you want to optimize the whole process for real gains, try to find out whether your external source can help you, because that is where you can make actual progress.
You didn't show any code, and I'm assuming that step 4 (send an e-mail) is not that fast either.
In the presented case, unless the external service from step 3 (create an email body by calling an external service) processes requests in parallel and supports a good load of simultaneous requests, you will not gain much from this refactor.
In other words, test the external service and the e-mail server first for:
Parallel request execution
The way to test this is to send at least 2 simultaneous requests and observe how long it takes to process them.
If it takes about double the time of a single request, the requests involve some serial processing: either they're being queued, or some broad lock is being taken.
Load test
Go up to 4, 8, 12, 16, 20, etc., and see where performance starts to degrade.
You should cap the number of simultaneous requests at a level that keeps execution efficient (say, above 80% of what a single request achieves), assuming you're the sole consumer,
or a few requests below the point where degradation starts (e.g. divided by the number of consumers), to leave the external service available for other consumers.
Only then can you decide whether the refactor is worthwhile. If you can't change the external service or the e-mail server, you must weigh whether they offer enough parallel capacity without degrading.
Even so, be realistic. Don't let your service push the external service and the e-mail server to their limits in production.
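A rough sketch of such a probe (CallExternalService is a placeholder): measure the wall-clock time for N simultaneous calls and compare it with the single-call baseline.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

static TimeSpan MeasureWallTime(int concurrency)
{
    var tasks = new Task[concurrency];
    var sw = Stopwatch.StartNew();
    for (int i = 0; i < concurrency; i++)
        tasks[i] = Task.Factory.StartNew(CallExternalService, TaskCreationOptions.LongRunning);
    Task.WaitAll(tasks);
    return sw.Elapsed;   // ~= single-call time if truly parallel, ~= N x single if serialized
}

// Usage: compare MeasureWallTime(1) against 2, 4, 8, ... and note where
// the elapsed time starts climbing instead of holding steady.
```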
We've built this app that needs to have some calculations done on a remote machine (actually a MatLab server). We're using web services to connect to the MatLab server and perform the calculations.
In order to speed things up, we've used Parallel.ForEach() to have multiple service calls going at the same time. If we're very conservative and set ParallelOptions.MaxDegreeOfParallelism (DOP) to 4 or so, everything works fine.
However, if we let the framework decide on the DOP, it will spawn so many threads that it brings the remote machine to its knees and timeouts start occurring (> 10 minutes).
How can we solve this issue? What I would LOVE to be able to do is use the response time to throttle the calls: if the response time is less than 30 sec, keep adding threads; as soon as it's over 30 sec, use fewer. Any suggestions?
N.B. Related to the response in this question: https://stackoverflow.com/a/20192692/896697
The simplest way would be to tune for the best number of concurrent requests and hardcode it, as you have done so far; however, there are some nicer options if you are willing to put in some effort.
You could move from Parallel.ForEach to using a thread pool. That way, as results come back from the remote server, you can manually or programmatically tune the number of available threads, reducing or increasing it as things slow down or speed up, or even killing threads if needed.
You could also do a variant of the above using Tasks, which are the newer way of doing parallel/async work in .NET.
Another option would be to use a timer and/or jobs model to schedule jobs every x milliseconds, which could then be throttled or relaxed as results return from the server. The easiest way to get started with that would be Quartz.Net.
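On the response-time throttling idea from the question, here is a minimal sketch that adjusts the degree of parallelism between batches; CallMatlabService and WorkItem are placeholders for your service call and payload.

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

static void ProcessAdaptively(IList<WorkItem> items)
{
    int dop = 4;                               // the conservative value known to work
    var threshold = TimeSpan.FromSeconds(30);  // the 30-second target from the question

    for (int offset = 0; offset < items.Count; offset += 50)
    {
        List<WorkItem> batch = items.Skip(offset).Take(50).ToList();
        long totalMs = 0;

        Parallel.ForEach(batch,
            new ParallelOptions { MaxDegreeOfParallelism = dop },
            item =>
            {
                var sw = Stopwatch.StartNew();
                CallMatlabService(item);       // the web-service call to the MatLab server
                Interlocked.Add(ref totalMs, sw.ElapsedMilliseconds);
            });

        var average = TimeSpan.FromMilliseconds((double)totalMs / batch.Count);
        if (average > threshold && dop > 1)
            dop--;                             // the server is straining: back off
        else if (average < TimeSpan.FromTicks(threshold.Ticks / 2))
            dop++;                             // plenty of headroom: push a little harder
    }
}
```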
I want to create a service that will monitor changes to web pages i.e. the page content has been updated. I am trying to think of the best way to achieve this and at present I am considering a couple of options. Note that there could be hundreds of pages to monitor and the interval for checking could be seconds or hours (configurable).
1. Create a windows service for each page to monitor
2. Create a windows service that spawns a thread for each page to monitor
Now, I am unsure which of these is the better approach and whether there is an alternative I haven't considered. I thought option 1 would have the benefit of isolating each monitoring task, but at the expense of overhead in terms of physical resources and the effort to create and maintain all those services. Option 2 would be slightly more complex but cleaner. Obviously it also loses the isolation, in that if the service fails then all monitoring fails.
I have done something similar, and I solved it by having a persisted queue (a SQL Server table) that stores the remote Uri along with the interval and a DateTime for the last time it ran.
I can then get all entries that are due to run by selecting the ones where lastRun + interval < now.
If your smallest intervals are in the region of seconds, you probably want to use the ThreadPool so that you can issue several requests at the same time. (Remember to adjust the maxConnections setting in your app.config accordingly.)
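A sketch of that due-check, assuming a Monitors table with Uri, IntervalSeconds and LastRun columns (the schema and names are invented):

```csharp
using System.Data.SqlClient;
using System.Threading;

static void RunDueChecks(string connectionString)
{
    const string dueQuery =
        @"SELECT Uri FROM Monitors
          WHERE DATEADD(second, IntervalSeconds, LastRun) < GETUTCDATE()";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(dueQuery, conn))
    {
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                ThreadPool.QueueUserWorkItem(CheckPage, reader.GetString(0));
        }
    }
}

static void CheckPage(object uri)
{
    // Fetch the page, compare it with the stored copy, then update
    // the row's LastRun so the entry isn't selected again immediately.
}
```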
I would use one Windows service (have a look at the TopShelf project for that) and then have Quartz.Net trigger the jobs. With Quartz, you can control whether a job has to wait for its previous run to finish, etc.
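A hedged sketch with the Quartz.Net 2.x fluent API (the job, URL and interval are illustrative); the [DisallowConcurrentExecution] attribute is what makes a job wait for its previous run to finish:

```csharp
using Quartz;
using Quartz.Impl;

[DisallowConcurrentExecution]   // don't start this job again while a run is in progress
public class CheckPageJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        string url = context.JobDetail.JobDataMap.GetString("url");
        // fetch the page and compare it with the stored copy here
    }
}

public static class MonitorSetup
{
    public static void Start()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<CheckPageJob>()
            .UsingJobData("url", "http://example.com")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInSeconds(30).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}
```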
Creating one Windows service is the way to go. Regarding the failure of that Windows service, there are several measures you can take to deal with it; for example, configure Windows to automatically restart the service on failure.
I would recommend a thread-pool approach and/or a System.Threading.Timer in combination with a ConcurrentDictionary or ConcurrentQueue.
I have to create an app that will read in some info from a db, process the data, write changes back to the db, and then send an email with these changes to some users or groups. I will be writing this in c#, and this process must be run once a week at a particular time. This will be running on a Windows 2008 Server.
In the past, I would always go the route of creating a Windows service with a timer, putting the time/day for it to run in the app.config file so that it can be changed, with only a restart needed to pick up the update.
Recently, though, I have seen blog posts and such that recommend writing a console application and then using a scheduled task to execute it.
I have read many posts talking to this very issue, but have not seen a definitive answer about which process is better.
What do any of you think?
Thanks for any thoughts.
If it is a once-per-week application, why waste the resources to have it running in the background for the rest of the week?
A console application seems much more appropriate.
The typical rule of thumb I use goes something like this. First, I ask a few questions:
1. Frequency of Execution
2. Frequency of changes to #1
3. Triggering Mechanism
Basically, from there: if the frequency of execution is daily or less frequent, I'll almost always lean towards a scheduled task. Then, looking at the frequency of changes: if there is high demand for schedule changes, I'll also lean towards scheduled tasks, since they allow schedule changes without code changes. Lastly, if there is any thought of a trigger other than time, I'll lean towards a Windows service to help "future-proof" the application; say, for example, the requirement changes to run every time a user drops a file in folder X.
The basic rule I follow is: if you need to be running continuously because events of interest can happen at any time, use a service (or daemon in UNIX).
If you just want to periodically do something, use a scheduled task (or cron).
The clincher here is your phrase "must be run once a week at a particular time" - go for a scheduled task.
If you have only one application and it needs to run once a week, the scheduler is probably the better choice, as there is no need for a separate service process sitting idle on the system most of the time.