In my application I have a small number of threads (5) performing the following method non-stop:
private void ThreadMethod()
{
    while (true)
    {
        if (CurrentItem != null)
        {
            HandleCurrentItem();
        }

        Thread.Sleep(200);
    }
}
From what I've seen around, this is not a recommended practice, but most of the arguments against it are that you lose responsiveness, you cannot cancel the thread, or the timing isn't precise. None of those are an issue for me; however, I'm concerned about wasting too much CPU on this. From what I've seen here at 01:05:35, the processor gets full utilization when you call the Sleep method.
My questions:
Is this a decent solution in my scenario?
If not, how to do it better?
Note: I'm using .Net Framework 4.0
Note 2: those threads are located in different instances of a class, so the CurrentItem is a different object for each thread.
You could make the method async and have it await Task.Delay instead:
private async void ThreadMethod()
{
    while (true)
    {
        if (CurrentItem != null)
        {
            HandleCurrentItem();
        }

        await Task.Delay(200);
    }
}
This won't block the thread.
Note: the async/await keywords will only work with .NET 4.0 on Visual Studio 2012+ by using Microsoft.Bcl.Async (you can get this package on NuGet).
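If I recall correctly, on .NET 4.0 the Microsoft.Bcl.Async package does not add Task.Delay itself (that method only arrived in .NET 4.5); it exposes the same helper on a TaskEx class instead. So the loop body would look roughly like this:

// With Microsoft.Bcl.Async on .NET 4.0, the delay helper lives on TaskEx rather than Task.
await TaskEx.Delay(200);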
You can also use this snippet (credits to: Calvin Fisher):
new System.Threading.ManualResetEvent(false).WaitOne(1000);
You could also use a Timer instead of sleeping when you want to execute something 200 milliseconds after the last action. This won't spin-lock the processor.
var timer = new Timer(200);
timer.Elapsed += (sender, args) =>
{
    if (CurrentItem != null)
        HandleCurrentItem();
};
timer.AutoReset = true;
timer.Start();
The .NET Framework Class Library includes four classes named Timer, each of which offers different functionality:
System.Timers.Timer, which fires an event and executes the code in one or more event sinks at regular intervals. The class is intended for use as a server-based or service component in a multithreaded environment; it has no user interface and is not visible at runtime.
System.Threading.Timer, which executes a single callback method on a thread pool thread at regular intervals. The callback method is defined when the timer is instantiated and cannot be changed. Like the System.Timers.Timer class, this class is intended for use as a server-based or service component in a multithreaded environment; it has no user interface and is not visible at runtime.
System.Windows.Forms.Timer, a Windows Forms component that fires an event and executes the code in one or more event sinks at regular intervals. The component has no user interface and is designed for use in a single-threaded environment; it executes on the UI thread.
System.Web.UI.Timer, an ASP.NET component that performs asynchronous or synchronous web page postbacks at a regular interval.
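For the polling scenario in the question, a System.Threading.Timer version would look roughly like this. This is a minimal sketch; it assumes the same CurrentItem and HandleCurrentItem members as the original code, and it keeps the timer in a field so it isn't garbage collected:

// Fires every 200 ms on a thread-pool thread. Note that callbacks can overlap
// if HandleCurrentItem() takes longer than the period.
private System.Threading.Timer _pollTimer;

private void StartPolling()
{
    _pollTimer = new System.Threading.Timer(_ =>
    {
        if (CurrentItem != null)
        {
            HandleCurrentItem();
        }
    }, null, dueTime: 0, period: 200);
}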
You can achieve a clean implementation by using Hangfire. This will give you more control over your tasks, and you will also get feedback on whether the function executed or failed.
You can create a scheduled job like this:
RecurringJob.AddOrUpdate(() => Console.WriteLine("Recurring!"), Cron.Daily);
There are many more options which you can explore in the documentation.
You can use an AutoResetEvent:
private void ThreadMethod()
{
    AutoResetEvent _restEvent = new AutoResetEvent(false);
    _restEvent.Reset();

    while (true)
    {
        if (CurrentItem != null)
        {
            HandleCurrentItem();
        }

        _restEvent.WaitOne(200); // Wait with a timeout in ms.
    }
}
Related
I have a WPF window that creates and starts a timer in its constructor. The timer elapsed event triggers a method (SyncPTUpdate) which uses BeginInvoke to place a call to another method (PTProgressUpdateInThread) onto the Window's thread. This then calls a WCF call asynchronously (using the TAP pattern, auto-generated by VS 2013).
When I make the WCF call artificially long in duration (using Thread.Sleep in the server component), the UI of my WPF application freezes. Not initially, but after a few seconds have gone by.
Where am I going wrong?
public delegate void PTProgressDelegate();

// this method is called from a periodically firing timer (System.Timers)
private async void SyncPTUpdate(object sender, ElapsedEventArgs e)
{
    await this.Dispatcher.BeginInvoke(DispatcherPriority.Normal, new PTProgressDelegate(PTProgressUpdateInThread));
}

private async void PTProgressUpdateInThread()
{
    PTMapFieldClient = new FieldClient();
    ServiceField.BlokPTProgress PTProgressFromServer = await PTMapFieldClient.GetPTProgressAsync(variousparametershere);
    PTMapFieldClient.Close();

    // now use the results of the WCF call to update the Window UI
    //...
}
Dispatcher.BeginInvoke() is often presented as a way to do things async, with the benefit of not having to create/involve another thread.
But that means it is only beneficial for small, lightweight jobs. The best thing here is to push the call(s) to PTProgressUpdateInThread() onto the ThreadPool, or use a threaded Timer.
You're not using the results after await anyway.
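A rough sketch of that idea follows. UpdateProgressUi is a placeholder for whatever UI update the question's code performs, and the sketch relies on the fact that a System.Timers.Timer Elapsed handler already runs on a thread-pool thread:

private async void SyncPTUpdate(object sender, ElapsedEventArgs e)
{
    // The Elapsed event fires on a thread-pool thread, so the slow WCF call
    // can be awaited here without ever touching the UI thread.
    var client = new FieldClient();
    var progress = await client.GetPTProgressAsync(variousparametershere);
    client.Close();

    // Marshal only the cheap UI update back to the dispatcher.
    this.Dispatcher.BeginInvoke(new Action(() => UpdateProgressUi(progress)));
}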
I am currently working on a C# project where I need to perform a task so many times every so many seconds.
For example, write to the console 5 times every 1 second. How could I go about doing this?
Thanks for any help you can provide.
You can use a Timer and bind an event handler to the Timer.Elapsed event.
using System.Timers;
Sample code:
Timer timer = new Timer();
timer.Interval = 1000; // fire once per second
timer.Elapsed += (sender, eventArgs) =>
{
    for (int i = 0; i < 5; i++)
    {
        Console.Write(i);
    }
};
timer.Start();
Is this a console application, or do you run this on another thread?
For short stuff like this, use a timer. There are two main ones to choose from:
System.Threading.Timer
And:
System.Windows.Forms.Timer
The former uses the ThreadPool, the latter uses UI events. They both expose the ability to specify an interval, and either a callback method or an event to attach to in order to run custom code.
For longer periods of inactivity, look into scheduling either with the Windows scheduler (the OS one) or a scheduling framework such as Quartz.NET.
Do note that the accuracy of the timers varies, but not really within margins that humans can detect :-)
Also note that the callback of the threaded timer will return on an arbitrary ThreadPool thread, so you could effectively end up "multi-threading" the code without realising it.
There is also System.Timers.Timer, which exposes an event. An article about the different timers available can be found here.
I don't think it will affect you, but it's still worth knowing: Windows is not a real-time OS. If you ask for something to be done every X milliseconds, it won't be exact, and how far out it will be depends on a variety of things.
You could create a thread to do it. Especially useful if you want to do a lot of processing! Here's an example of a thread doing work every 1s (1000ms):
private volatile bool running;
private Thread thread;

public void Start()
{
    running = true;
    thread = new Thread(new ParameterizedThreadStart(ThreadFunction));
    thread.Start();
}

public virtual void ThreadFunction(object o)
{
    while (running)
    {
        // Do work
        Thread.Sleep(1000);
    }
}
Try this:
while (true)
{
    for (int i = 0; i < 5; i++)
    {
        Console.WriteLine("Hello");
    }

    // this will pause for 1 sec (1000 ms)
    Thread.Sleep(1000);
}
I have a program that uses threads to perform time-consuming processes sequentially. I want to be able to monitor the progress of each thread similar to the way that the BackgroundWorker.ReportProgress/ProgressChanged model does. I can't use ThreadPool or BackgroundWorker due to other constraints I'm under. What is the best way to allow/expose this functionality? Overload the Thread class and add a property/event? Another more-elegant solution?
Overload the Thread class and add a
property/event?
If by "overload" you actually mean inherit then no. The Thread is sealed so it cannot be inherited which means you will not be able to add any properties or events to it.
Another more-elegant solution?
Create a class that encapsulates the logic that will be executed by the thread. Add a property or event (or both) which can be used to obtain progress information from it.
public class Worker
{
    private readonly Thread m_Thread;

    public event EventHandler<ProgressEventArgs> Progress;

    public Worker()
    {
        // The thread must be created here; an instance method cannot be
        // referenced from a field initializer.
        m_Thread = new Thread(Run);
    }

    public void Start()
    {
        m_Thread.Start();
    }

    private void Run()
    {
        while (true)
        {
            // Do some work.
            OnProgress(new ProgressEventArgs(...));
            // Do some work.
        }
    }

    private void OnProgress(ProgressEventArgs args)
    {
        // Get a copy of the multicast delegate so that we can do the
        // null check and invocation safely. This works because delegates are
        // immutable. Remember to create a memory barrier so that a fresh read
        // of the delegate occurs every time. This is done via a simple lock below.
        EventHandler<ProgressEventArgs> local;
        lock (this)
        {
            local = Progress;
        }

        if (local != null)
        {
            local(this, args);
        }
    }
}
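A minimal usage sketch, assuming ProgressEventArgs carries whatever progress data you need:

var worker = new Worker();
worker.Progress += (sender, args) =>
{
    // React to progress updates here, e.g. marshal them to the UI thread.
    Console.WriteLine("Progress reported");
};
worker.Start();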
Update:
Let me be a little more clear on why a memory barrier is necessary in this situation. The barrier prevents the read from being moved before other instructions. The most likely optimization is not from the CPU, but from the JIT compiler "lifting" the read of Progress outside of the while loop. This movement gives the impression of "stale" reads. Here is a semi-realistic demonstration of the problem.
class Program
{
    static event EventHandler Progress;

    static void Main(string[] args)
    {
        var thread = new Thread(
            () =>
            {
                var local = GetEvent();
                while (local == null)
                {
                    local = GetEvent();
                }
            });
        thread.Start();

        Thread.Sleep(1000);
        Progress += (s, a) => { Console.WriteLine("Progress"); };

        thread.Join();
        Console.WriteLine("Stopped");
        Console.ReadLine();
    }

    static EventHandler GetEvent()
    {
        //Thread.MemoryBarrier();
        var local = Progress;
        return local;
    }
}
It is imperative that a Release build is run without the vshost process. Either one will disable the optimization that manifests the bug (I believe this is not reproducible in framework versions 1.0 and 1.1 as well, due to their more primitive optimizations). The bug is that "Stopped" is never displayed even though it clearly should be. Now, uncomment the call to Thread.MemoryBarrier and notice the change in behavior. Also keep in mind that even the most subtle changes to the structure of this code currently inhibit the compiler's ability to make the optimization in question. One such change would be to actually invoke the delegate. In other words, you cannot currently reproduce the stale read problem using the null check followed by an invocation pattern, but there is nothing in the CLI specification (that I am aware of anyway) that prohibits a future hypothetical JIT compiler from reapplying that "lifting" optimization.
I tried this some time ago and it worked for me.
Create a List-like class with locks.
Have your threads add data to an instance of the class you created.
Place a timer in your Form or wherever you want to record the log/progress.
Write code in the Timer.Tick event to read the messages the threads output.
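A rough sketch of that approach; the class and member names here are just illustrative:

// A tiny thread-safe message list the worker threads write into.
public class ProgressLog
{
    private readonly object _gate = new object();
    private readonly List<string> _messages = new List<string>();

    // Called from the worker threads.
    public void Add(string message)
    {
        lock (_gate) { _messages.Add(message); }
    }

    // Called from the UI timer's Tick handler; returns and clears pending messages.
    public List<string> Drain()
    {
        lock (_gate)
        {
            var copy = new List<string>(_messages);
            _messages.Clear();
            return copy;
        }
    }
}

In the form's Timer.Tick handler you would then call Drain() and append the results to whatever control displays the log/progress.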
You might also want to check out the Event-based Asynchronous Pattern.
Provide each thread with a callback that returns a status object. You can use the thread's ManagedThreadId to keep track of separate threads, such as using it as a key to a Dictionary<int, object>. You can invoke the callback from numerous places in the thread's processing loop or call it from a timer fired from within the thread.
You can also use the return argument on a callback to signal the thread to pause or halt.
I've used callbacks with great success.
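A minimal sketch of the callback idea; the delegate and dictionary shown here are illustrative, not a specific API:

// Callback signature: the thread reports a status object; the return value can ask it to stop.
public delegate bool ReportStatus(object status);

private readonly object _statusLock = new object();
private readonly Dictionary<int, object> _statusByThread = new Dictionary<int, object>();

// Each worker thread calls this from its processing loop.
private bool OnStatus(object status)
{
    lock (_statusLock)
    {
        _statusByThread[Thread.CurrentThread.ManagedThreadId] = status;
    }
    return true; // return false to ask the thread to pause or halt
}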
Microsoft just announced the new C# Async feature. Every example I've seen so far is about asynchronously downloading something from HTTP. Surely there are other important async things?
Suppose I'm not writing a new RSS client or Twitter app. What's interesting about C# Async for me?
Edit: I had an Aha! moment while watching Anders' PDC session. In the past I have worked on programs that used "watcher" threads. These threads sit waiting for something to happen, like watching for a file to change. They aren't doing work, they're just idle, and notify the main thread when something happens. These threads could be replaced with await/async code in the new model.
Ooh, this sounds interesting. I'm not playing with the CTP just yet, just reviewing the whitepaper. After seeing Anders Hejlsberg's talk about it, I think I can see how it could prove useful.
As I understand it, async makes writing asynchronous calls easier to read and implement, very much in the same way that writing iterators is easier right now (as opposed to writing the functionality out by hand). This is essential for blocking processes, since no useful work can be done until they unblock. If you were downloading a file, you couldn't do anything useful until you got that file, letting the thread go to waste. Consider how you would call a function which you know will block for an undetermined length of time and return some result, then process it (e.g., store the results in a file). How would you write that? Here's a simple example:
static object DoSomeBlockingOperation(object args)
{
    // block for 5 minutes
    Thread.Sleep(5 * 60 * 1000);
    return args;
}

static void ProcessTheResult(object result)
{
    Console.WriteLine(result);
}

static void CalculateAndProcess(object args)
{
    // let's calculate! (synchronously)
    object result = DoSomeBlockingOperation(args);
    // let's process!
    ProcessTheResult(result);
}
Ok, good, we have it implemented. But wait, the calculation takes minutes to complete. What if we wanted an interactive application that does other things while the calculation takes place (such as rendering the UI)? This is no good, since we called the function synchronously and we have to wait for it to finish, effectively freezing the application while the thread waits to be unblocked.
Answer: call the expensive function asynchronously. That way we're not bound to waiting for the blocking operation to complete. But how do we do that? We'd call the function asynchronously and register a callback function to be called when it unblocks, so we can process the result.
static void CalculateAndProcessAsyncOld(object args)
{
    // obtain a delegate to call asynchronously
    Func<object, object> calculate = DoSomeBlockingOperation;

    // define the callback when the call completes so we can process afterwards
    AsyncCallback cb = ar =>
    {
        Func<object, object> calc = (Func<object, object>)ar.AsyncState;
        object result = calc.EndInvoke(ar);
        // let's process!
        ProcessTheResult(result);
    };

    // let's calculate! (asynchronously)
    calculate.BeginInvoke(args, cb, calculate);
}
Note: sure, we could start another thread to do this, but that would mean spawning a thread that just sits there waiting to be unblocked before doing some useful work. That would be a waste.
Now the call is asynchronous and we don't have to worry about waiting for the calculation to finish before processing; it's done asynchronously and will finish when it can. As an alternative to calling the code asynchronously directly, you could use a Task:
static void CalculateAndProcessAsyncTask(object args)
{
    // create a task
    Task<object> task = new Task<object>(DoSomeBlockingOperation, args);

    // define the callback when the call completes so we can process afterwards
    task.ContinueWith(t =>
    {
        // let's process!
        ProcessTheResult(t.Result);
    });

    // let's calculate! (asynchronously)
    task.Start();
}
Now we called our function asynchronously. But what did it take to get it that way? First of all, we needed the delegate/task to be able to call it asynchronously, we needed a callback function to be able to process the results, and then we called the function. We've turned a two-line function call into much more just to call something asynchronously. Not only that, the logic in the code has gotten more complex than it was or could be. Although using a task helped simplify the process, we still needed to do stuff to make it happen. We just want to run asynchronously, then process the result. Why can't we just do that? Well, now we can:
// need to have an asynchronous version
static async Task<object> DoSomeBlockingOperationAsync(object args)
{
    // marking a method async doesn't move work off the calling thread by itself;
    // offload the blocking call to the thread pool and await the resulting task
    return await Task.Factory.StartNew(() => DoSomeBlockingOperation(args));
}

static async void CalculateAndProcessAsyncNew(object args)
{
    // let's calculate! (asynchronously)
    object result = await DoSomeBlockingOperationAsync(args);
    // let's process!
    ProcessTheResult(result);
}
Now, this was a very simplified example with simple operations (calculate, process). Imagine if each operation couldn't conveniently be put into a separate function but instead had hundreds of lines of code. That's a lot of added complexity just to gain the benefit of asynchronous calling.
Another practical example used in the whitepaper is using it on UI apps. Modified to use the above example:
private async void doCalculation_Click(object sender, RoutedEventArgs e) {
    doCalculation.IsEnabled = false;
    await DoSomeBlockingOperationAsync(GetArgs());
    doCalculation.IsEnabled = true;
}
If you've done any UI programming (be it WinForms or WPF) and attempted to call an expensive function within a handler, you'll know this is handy. Using a background worker for this wouldn't be much help, since the background thread would just be sitting there waiting until it can work.
Suppose you had a way to control some external device, let's say a printer. And you wanted to restart the device after a failure. Naturally it will take some time for the printer to start up and be ready for operation. You might have to account for the restart not helping and attempt to restart again. You have no choice but to wait for it. Not if you did it asynchronously.
static async void RestartPrinter()
{
    Printer printer = GetPrinter();
    do
    {
        printer.Restart();
        printer = await printer.WaitUntilReadyAsync();
    } while (printer.HasFailed);
}
Imagine writing the loop without async.
One last example I have. Imagine if you had to do multiple blocking operations in a function and wanted to call asynchronously. What would you prefer?
static void DoOperationsAsyncOld()
{
    Task op1 = new Task(DoOperation1Async);
    op1.ContinueWith(t1 =>
    {
        Task op2 = new Task(DoOperation2Async);
        op2.ContinueWith(t2 =>
        {
            Task op3 = new Task(DoOperation3Async);
            op3.ContinueWith(t3 =>
            {
                DoQuickOperation();
            });
            op3.Start();
        });
        op2.Start();
    });
    op1.Start();
}
static async void DoOperationsAsyncNew()
{
    await DoOperation1Async();
    await DoOperation2Async();
    await DoOperation3Async();
    DoQuickOperation();
}
Read the whitepaper, it actually has a lot of practical examples like writing parallel tasks and others.
I can't wait to start playing with this either in the CTP or when .NET 5.0 finally makes it out.
The main scenarios are any scenario that involves high latency. That is, lots of time between "ask for a result" and "obtain a result". Network requests are the most obvious example of high latency scenarios, followed closely by I/O in general, and then by lengthy computations that are CPU bound on another core.
However, there are potentially other scenarios that this technology will mesh nicely with. For example, consider scripting the logic of an FPS game. Suppose you have a button click event handler. When the player clicks the button you want to play a siren for two seconds to alert the enemies, and then open the door for ten seconds. Wouldn't it be nice to say something like:
button.Disable();
await siren.Activate();
await Delay(2000);
await siren.Deactivate();
await door.Open();
await Delay(10000);
await door.Close();
await Delay(1000);
button.Enable();
Each task gets queued up on the UI thread, so nothing blocks, and each one resumes the click handler at the right point after its job is finished.
I've found another nice use-case for this today: you can await user interaction.
For example, if one form has a button that opens another form:
Form toolWindow;

async void button_Click(object sender, EventArgs e) {
    if (toolWindow != null) {
        toolWindow.Focus();
    } else {
        toolWindow = new Form();
        toolWindow.Show();
        await toolWindow.OnClosed();
        toolWindow = null;
    }
}
Granted, this isn't really any simpler than
toolWindow.Closed += delegate { toolWindow = null; }
But I think it nicely demonstrates what await can do. And once the code in the event handler is non-trivial, await makes programming much easier. Think about the user having to click a sequence of buttons:
async void ButtonSeries()
{
    for (int i = 0; i < 10; i++) {
        Button b = new Button();
        b.Text = i.ToString();
        this.Controls.Add(b);
        await b.OnClick();
        this.Controls.Remove(b);
    }
}
Sure, you could do this with normal event handlers, but it would require you to take apart the loop and convert it into something much harder to understand.
Remember that await can be used with anything that gets completed at some point in the future. Here's the extension method Button.OnClick() to make the above work:
public static AwaitableEvent OnClick(this Button button)
{
    return new AwaitableEvent(h => button.Click += h, h => button.Click -= h);
}

sealed class AwaitableEvent
{
    Action<EventHandler> register, deregister;

    public AwaitableEvent(Action<EventHandler> register, Action<EventHandler> deregister)
    {
        this.register = register;
        this.deregister = deregister;
    }

    public EventAwaiter GetAwaiter()
    {
        return new EventAwaiter(this);
    }
}

sealed class EventAwaiter
{
    AwaitableEvent e;
    Action callback;

    public EventAwaiter(AwaitableEvent e) { this.e = e; }

    public bool BeginAwait(Action callback)
    {
        this.callback = callback;
        e.register(Handler);
        return true;
    }

    public void Handler(object sender, EventArgs e)
    {
        callback();
    }

    public void EndAwait()
    {
        e.deregister(Handler);
    }
}
Unfortunately it doesn't seem possible to add the GetAwaiter() method directly to EventHandler (allowing await button.Click;) because then the method wouldn't know how to register/deregister that event.
It's a bit of boilerplate, but the AwaitableEvent class can be re-used for all events (not just UI). And with a minor modification and adding some generics, you could allow retrieving the EventArgs:
MouseEventArgs e = await button.OnMouseDown();
I could see this being useful with some more complex UI gestures (drag'n'drop, mouse gestures, ...) - though you'd have to add support for cancelling the current gesture.
There are some samples and demos in the CTP that don't use the network, and even some that don't do any I/O.
And it does apply to all multithreaded / parallel problem areas (that already exist).
Async and Await are a new (easier) way of structuring all parallel code, be it CPU-bound or I/O bound. The biggest improvement is in areas where before C#5 you had to use the APM (IAsyncResult) model, or the event model (BackgroundWorker, WebClient). I think that is why those examples lead the parade now.
A GUI clock is a good example; say you want to draw a clock, that updates the time shown every second. Conceptually, you want to write
while true do
    sleep for 1 second
    display the new time on the clock
and with await (or with F# async) to asynchronously sleep, you can write this code to run on the UI thread in a non-blocking fashion.
http://lorgonblog.wordpress.com/2010/03/27/f-async-on-the-client-side/
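In C# terms, a rough sketch of the same idea might look like this. It assumes a label called clockLabel on the form and Task.Delay (.NET 4.5, or the async targeting pack on 4.0); called from the UI thread, the loop never blocks it:

async void RunClock()
{
    while (true)
    {
        clockLabel.Text = DateTime.Now.ToLongTimeString();
        await Task.Delay(1000); // yields the UI thread instead of blocking it
    }
}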
The async extensions are useful in some cases when you have an asynchronous operation. An asynchronous operation has a definite start and completion. When asynchronous operations complete, they may have a result or an error. (Cancellation is treated as a special kind of error).
Asynchronous operations are useful in three situations (broadly speaking):
Keeping your UI responsive. Any time you have a long-running operation (whether CPU-bound or I/O-bound), make it asynchronous.
Scaling your servers. Using asynchronous operations judiciously on the server side may help your servers to scale. For example, asynchronous ASP.NET pages may make use of async operations. However, this is not always a win; you need to evaluate your scalability bottlenecks first.
Providing a clean asynchronous API in a library or shared code. async is excellent for reusability.
As you begin to adopt the async way of doing things, you'll find the third situation becoming more common. async code works best with other async code, so asynchronous code kind of "grows" through the codebase.
There are a couple of types of concurrency where async is not the best tool:
Parallelization. A parallel algorithm may use many cores (CPUs, GPUs, computers) to solve a problem more quickly.
Asynchronous events. Asynchronous events happen all the time, independent of your program. They often do not have a "completion." Normally, your program will subscribe to an asynchronous event stream, receive some number of updates, and then unsubscribe. Your program can treat the subscribe and unsubscribe as a "start" and "completion", but the actual event stream never really stops.
Parallel operations are best expressed using PLINQ or Parallel, since they have a lot of built-in support for partitioning, limited concurrency, etc. A parallel operation may easily be wrapped in an awaitable by running it from a ThreadPool thread (Task.Factory.StartNew).
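For example, a CPU-bound PLINQ query might be wrapped in an awaitable roughly like this (IsPrime here is just a placeholder predicate):

// Run the parallel computation on a thread-pool thread and await its completion.
List<int> primes = await Task.Factory.StartNew(() =>
    Enumerable.Range(2, 1000000).AsParallel().Where(IsPrime).ToList());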
Asynchronous events do not map well to asynchronous operations. One problem is that an asynchronous operation has a single result at its point of completion. Asynchronous events may have any number of updates. Rx is the natural language for dealing with asynchronous events.
There are some mappings from an Rx event stream to an asynchronous operation, but none of them are ideal for all situations. It's more natural to consume asynchronous operations by Rx, rather than the other way around. IMO, the best way of approaching this is to use asynchronous operations in your libraries and lower-level code as much as possible, and if you need Rx at some point, then use Rx from there on up.
Here is probably a good example of how not to use the new async feature (that's not writing a new RSS client or Twitter app): mid-method overload points in a virtual method call. To be honest, I am not sure there is any way to create more than a single overload point per method.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;

namespace AsyncText
{
    class Program
    {
        static void Main(string[] args)
        {
            Derived d = new Derived();
            TaskEx.Run(() => d.DoStuff()).Wait();
            System.Console.Read();
        }

        public class Base
        {
            protected string SomeData { get; set; }

            protected async Task DeferProcessing()
            {
                await TaskEx.Run(() => Thread.Sleep(1));
                return;
            }

            public async virtual Task DoStuff()
            {
                Console.WriteLine("Begin Base");
                Console.WriteLine(SomeData);
                await DeferProcessing();
                Console.WriteLine("End Base");
                Console.WriteLine(SomeData);
            }
        }

        public class Derived : Base
        {
            public async override Task DoStuff()
            {
                Console.WriteLine("Begin Derived");
                SomeData = "Hello";
                var x = base.DoStuff();
                SomeData = "World";
                Console.WriteLine("Mid 1 Derived");
                await x;
                Console.WriteLine("EndDerived");
            }
        }
    }
}
Output Is:
Begin Derived
Begin Base
Hello
Mid 1 Derived
End Base
World
EndDerived
With certain inheritance hierarchies (namely when using the command pattern) I find myself wanting to do stuff like this occasionally.
Here is an article showing how to use the 'async' syntax in a non-networked scenario that involves UI and multiple actions.
If I have a Windows Service that needs to execute a task every 30 seconds, which is better to use: the Timer class, or a loop that executes the task and then sleeps for a number of seconds?
class MessageReceiver
{
    public MessageReceiver()
    {
    }

    public void CommencePolling()
    {
        while (true)
        {
            try
            {
                this.ExecuteTask();
                System.Threading.Thread.Sleep(30000);
            }
            catch (Exception)
            {
                // log the exception
            }
        }
    }

    public void ExecuteTask()
    {
        // do stuff
    }
}
class MessageReceiver
{
    public MessageReceiver()
    {
    }

    public void CommencePolling()
    {
        var timer = new Timer()
        {
            AutoReset = true,
            Interval = 30000,
            Enabled = true
        };
        timer.Elapsed += Timer_Tick;
    }

    public void Timer_Tick(object sender, ElapsedEventArgs args)
    {
        try
        {
            // do stuff
        }
        catch (Exception)
        {
            // log the exception
        }
    }
}
The Windows service will create an instance of the MessageReceiver class and execute the CommencePolling method on a new thread.
I think it really depends on your requirement.
case 1.
Suppose you want to run this.ExecuteTask() every five minutes starting from 12:00 AM (i.e., 12:00, 12:05, ...), and suppose the execution time of this.ExecuteTask() varies (for example, from 30 sec to 2 min). In that case, using a timer instead of Thread.Sleep() seems to be the easier way of doing it (at least for me).
However, you can achieve this behavior with Thread.Sleep() as well by calculating the offset while taking timestamps on a thread wake-up and on a completion of this.ExecuteTask().
case 2.
Suppose instead you want to perform the task 5 minutes after each completion of this.ExecuteTask(); then using Thread.Sleep() seems easier. Again, you can achieve this behavior with a timer as well, by resetting the timer and calculating the offset each time this.ExecuteTask() completes.
Note 1: for case 1, you should be very careful about the following scenario: what if this.ExecuteTask() sometimes takes longer than the period (e.g., it starts at 12:05 and completes at 12:13 in the example above)?
What does this mean to your application and how will it be handled?
a. Total failure - abort the service, or abort the current (12:05) execution at 12:10 and launch the 12:10 execution.
b. Not a big deal (skip the 12:10 one and run this.ExecuteTask() at 12:15).
c. Not a big deal, but you need to launch the 12:10 execution immediately after the 12:05 task finishes (what if it keeps taking more than 5 min??).
d. Need to launch the 12:10 execution even though the 12:05 execution is currently running.
e. anything else?
For the policy you select above, does your choice of implementation (either a timer or Thread.Sleep()) make it easy to support that policy?
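One common way to guarantee the executions never overlap, whichever policy you choose, is to make the timer non-reentrant: turn off AutoReset, do the work, and only re-arm the timer when the work has finished. A minimal sketch of that idea (it effectively gives you the case 2 timing, where the next run starts 30 seconds after the previous one completes):

private System.Timers.Timer _timer;

public void CommencePolling()
{
    _timer = new System.Timers.Timer(30000) { AutoReset = false };
    _timer.Elapsed += (sender, args) =>
    {
        try
        {
            ExecuteTask();
        }
        catch (Exception)
        {
            // log the exception
        }
        finally
        {
            _timer.Start(); // re-arm only after the work completes, so ticks never overlap
        }
    };
    _timer.Start();
}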
Note 2: there are several timers you can use in .NET. Please see the following document (even though it's a bit aged, it seems to be a good start): Comparing the Timer Classes in the .NET Framework Class Library
Are you doing anything else during that wait? Using Thread.Sleep would block, preventing you from doing other things. From a performance point of view I don't think you'd see too much difference, but I would avoid using Thread.Sleep myself.
There are three timers to choose from: System.Windows.Forms.Timer runs on the main thread, whereas System.Timers.Timer and System.Threading.Timer raise their callbacks on separate (thread-pool) threads.
I believe both methods are equivalent. There will be a thread either way: either because you create one, or because the library implementing the Timer class creates one.
Using the Timer class might be slightly less expensive resource-wise, since the thread implementing timers probably monitors other timeouts as well.
I think the answers to this question will help.
Not answered by me but by John Saunders (above); the answer can be found here: For a windows service, which is better, a wait-spin or a timer?