I'm writing a simplified asynchronous, event-driven Timer class. I'm just wondering whether it will work under all conditions and whether it's thread safe, i.e. whether there is any chance of it failing while invoking Elapsed, reading the Enabled property, or setting the AutoReset field.
namespace Sandbox
{
    public class AsyncTimer
    {
        public volatile bool AutoReset = true;
        volatile bool enabled = false;
        public volatile int Interval;
        CancellationTokenSource? cts;
        public volatile Action? Elapsed;

        public bool Enabled { get { return enabled; } }

        public AsyncTimer(int interval) => Interval = interval;

        public void Start(bool startElapsed = false)
        {
            if (startElapsed) Elapsed?.Invoke();
            enabled = true;
            cts = new();
            _ = Task.Run(() => RunTimerAsync());
        }

        public void Stop()
        {
            enabled = false;
            cts?.Cancel();
        }

        async void RunTimerAsync()
        {
            while (enabled && !cts!.IsCancellationRequested)
            {
                await Task.Delay(Interval);
                Elapsed?.Invoke();
                if (!AutoReset) cts.Cancel();
            }
        }
    }
}
As far as I can see, this is just a wrapper around Threading.Timer with a bunch of extra machinery that does not add any actual functionality. Your timer works by calling Task.Delay, which is itself just a wrapper around Threading.Timer, so you might as well cut out the middleman.
Most of the functionality you expose is already provided by that timer via its .Change method. If you want to provide a more intuitive interface, I would suggest wrapping that timer, or providing some extension methods, instead.
If you want the behavior that guarantees the event is not raised concurrently, and that the execution time is added to the period, you should wrap the timer, set a due-time, and use an infinite period. Then, at the end of your event handler, you call .Change again to restart the timer (a sketch of this appears at the end of this answer).
If you write a simple wrapper around Threading.Timer you will have a much easier time ensuring thread safety, since Threading.Timer is itself thread safe.
As it is, I think your class is probably more or less thread safe, but I'm fairly sure it can produce unexpected behavior. For example, calling .Start() multiple times starts multiple loops; I would expect such a method to be idempotent.
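For illustration, here is a minimal sketch of that wrapper approach, assuming a due-time plus an infinite period and re-arming at the end of the callback; the class and member names (SimpleTimer, OnTick) are invented for this example:
// Sketch of wrapping System.Threading.Timer with a due-time and an infinite
// period, re-arming the timer at the end of each callback. Illustrative only.
public sealed class SimpleTimer : IDisposable
{
    private readonly System.Threading.Timer _timer;
    private readonly int _interval;

    public bool AutoReset { get; set; } = true;
    public event Action? Elapsed;

    public SimpleTimer(int interval)
    {
        _interval = interval;
        // Create the timer disabled (infinite due-time, infinite period).
        _timer = new System.Threading.Timer(OnTick, null, Timeout.Infinite, Timeout.Infinite);
    }

    public void Start() => _timer.Change(_interval, Timeout.Infinite);

    public void Stop() => _timer.Change(Timeout.Infinite, Timeout.Infinite);

    private void OnTick(object? state)
    {
        Elapsed?.Invoke();
        // Re-arm only after the handler has finished, so callbacks never overlap.
        if (AutoReset) _timer.Change(_interval, Timeout.Infinite);
    }

    public void Dispose() => _timer.Dispose();
}
Because the timer is only re-armed after Elapsed has finished, callbacks never overlap and the handler's execution time is effectively added to the interval.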
Related
I have been given an application that boils down to a producer-consumer pattern. Several threads do some work and update a single data set so that several more threads can consume that data and do their own work with it. At the moment it's not terribly complex: all the consuming threads wait on the data set until one of the producers calls a PulseAll.
There is now a desire to have one of the consumer threads consume from two different data sets whenever either of the sets changes. The team's desire to keep refactoring to a minimum, combined with my limited experience with threading, has made it hard to find a clean solution.
The quick and dirty solution was to do the waiting and pulsing on a separate object and have the consumer threads check for changes in their data set before continuing. There does not seem to be a way for one thread to wait on two objects without replacing the plain threads with a more robust threading tool (thread pools, tasks, etc.), unless I'm failing to Google the right thing.
If you are willing to do a little refactoring, I would recommend switching from Monitor to one of the EventWaitHandle-derived classes.
Depending on the behavior you want, you may want AutoResetEvent, which will act more like Monitor.Enter(obj)/Monitor.Exit(obj):
private readonly object _lockobj = new Object();

public void LockResource()
{
    Monitor.Enter(_lockobj);
}

public void FreeResource()
{
    Monitor.Exit(_lockobj);
}

// Which is roughly the same as:

private readonly AutoResetEvent _lockobj = new AutoResetEvent(true);

public void LockResource()
{
    _lockobj.WaitOne();
}

public void FreeResource()
{
    _lockobj.Set();
}
Or you may want ManualResetEvent, which will act more like Monitor.Wait(obj)/Monitor.PulseAll(obj):
private readonly object _lockobj = new Object();

public void LockResource()
{
    Monitor.Enter(_lockobj);
}

public bool WaitForResource()
{
    // Must be called while holding the lock.
    // Returns true once the lock has been reacquired.
    return Monitor.Wait(_lockobj);
}

public void SignalAll()
{
    Monitor.PulseAll(_lockobj);
}

// Is very close to:

private readonly ManualResetEvent _lockobj = new ManualResetEvent(true);

public bool LockResource()
{
    // Returns true if the reset succeeded (i.e. the "lock" was taken).
    return _lockobj.Reset();
}

public void WaitForResource()
{
    // Does not require holding a lock.
    // If _lockobj is in the signaled state this call does not block.
    _lockobj.WaitOne();
}

public void SignalAll()
{
    _lockobj.Set();
}
One event can wake up multiple threads; to have one thread handle multiple events, you can do:
ManualResetEvent resetEvent0 = ...
ManualResetEvent resetEvent1 = ...

public int WaitForEvent()
{
    int i = WaitHandle.WaitAny(new WaitHandle[] { resetEvent0, resetEvent1 });
    return i;
}
and i will be the index of the reset event that had Set() called on it.
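As a rough sketch of how that could look for the two data sets in the question (the field and method names here are invented, and each producer is assumed to call Set() on its event after publishing a change):
// Hypothetical wiring for one consumer watching two data sets.
private readonly ManualResetEvent _dataSetAChanged = new ManualResetEvent(false);
private readonly ManualResetEvent _dataSetBChanged = new ManualResetEvent(false);

private void ConsumerLoop()
{
    var handles = new WaitHandle[] { _dataSetAChanged, _dataSetBChanged };
    while (true)
    {
        int signalled = WaitHandle.WaitAny(handles);
        if (signalled == 0)
        {
            _dataSetAChanged.Reset();   // manual reset: clear before reading
            // ... read data set A ...
        }
        else
        {
            _dataSetBChanged.Reset();
            // ... read data set B ...
        }
    }
}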
Consider the following abstract class:
public abstract class Worker {
    protected bool shutdown;
    protected Thread t;

    /// <summary>
    /// Defines that we have an auto unpause scheduled.
    /// </summary>
    private bool _unpauseScheduled;

    /// <summary>
    /// When paused, schedule an automatic unpause when we
    /// reach this DateTime.
    /// </summary>
    private DateTime pauseUntil;

    private bool _isStopped = true;
    public bool IsStopped {
        get {
            return t.ThreadState == ThreadState.Stopped;
        }
    }

    private bool _isPaused = false;
    public bool IsPaused {
        get {
            return _isPaused;
        }
    }

    private string stringRepresentation;

    public Worker() {
        t = new Thread(ThreadFunction);
        stringRepresentation = "Thread id:" + t.ManagedThreadId;
        t.Name = stringRepresentation;
    }

    public Worker(string name) {
        t = new Thread(ThreadFunction);
        stringRepresentation = name;
        t.Name = stringRepresentation;
    }

    public void Start() {
        OnBeforeThreadStart();
        t.Start();
    }

    public void ScheduleStop() {
        shutdown = true;
    }

    public void SchedulePause() {
        OnPauseRequest();
        _isPaused = true;
    }

    public void SchedulePause(int seconds) {
        _unpauseScheduled = true;
        pauseUntil = DateTime.Now.AddSeconds(seconds);
        SchedulePause();
    }

    public void Unpause() {
        _isPaused = false;
        _unpauseScheduled = false;
    }

    public void ForceStop() {
        t.Abort();
    }

    /// <summary>
    /// The main thread loop.
    /// </summary>
    private void ThreadFunction() {
        OnThreadStart();
        while (!shutdown) {
            OnBeforeLoop();
            if (!IsPaused) {
                if (!OnLoop()) {
                    break;
                }
            } else {
                // Check for auto-unpause.
                if (_unpauseScheduled && pauseUntil < DateTime.Now) {
                    Unpause();
                }
            }
            OnAfterLoop();
            Thread.Sleep(1000);
        }
        OnShutdown();
    }

    public abstract void OnBeforeThreadStart();
    public abstract void OnThreadStart();
    public abstract void OnBeforeLoop();
    public abstract bool OnLoop();
    public abstract void OnAfterLoop();
    public abstract void OnShutdown();
    public abstract void OnPauseRequest();

    public override string ToString() {
        return stringRepresentation;
    }
}
I use this class to create Threads that are designed to run for the lifetime of the application, but also with the ability to pause and stop the threads as needed.
I can't shake the feeling that my implementation is naive, though. My use of Thread.Sleep() gives me pause. I am still learning the ins and outs of threads, and I am looking to see what others might do instead.
The Worker derived objects need to be able to do the following:
Run for the lifetime of the application (or as long as needed)
Be able to stop safely (finish what it was doing in OnLoop())
Be able to stop unsafely (disregard what is happening in OnLoop())
Be able to pause execution for a certain amount of time (or indefinitely)
Now, my implementation works, but that is not good enough for me. I want to use good practice, and I could use some review of this to help me with that.
I can't shake the feeling that my implementation is naive, though. My use of Thread.Sleep() gives me pause. I am still learning the ins and outs of threads, and I am looking to see what others might do instead.
Your intuitions are good here; this is a naive approach, and any time you sleep a thread in production code you should think hard about whether you're making a mistake. You're paying for that worker; why are you paying for it to sleep?
The right way to put a thread to sleep until it is needed is not to sleep and poll in a loop. Use an appropriate wait handle instead; that's what wait handles are for.
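As a simplified sketch of that idea applied to the pause logic above, assuming a ManualResetEventSlim gate (the _pauseGate field is an addition for this example, and the timed auto-unpause is omitted):
// Sketch: block while paused instead of polling once a second.
// _pauseGate is set (open) while running and reset (closed) while paused.
private readonly ManualResetEventSlim _pauseGate = new ManualResetEventSlim(true);

public void SchedulePause() => _pauseGate.Reset();   // close the gate: loop blocks
public void Unpause() => _pauseGate.Set();           // open the gate: loop resumes

private void ThreadFunction()
{
    while (!shutdown)
    {
        // Blocks only while paused; costs nothing while the gate is open.
        // (A real implementation would also open the gate in ScheduleStop
        // so a paused worker can still shut down.)
        _pauseGate.Wait();
        if (!OnLoop()) break;
    }
}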
But a better approach still would be to put an idle thread back into a pool of threads; if the work needs to be started up again in the future, schedule it onto a new worker thread. A thread that can sleep forever is a huge waste of resources; remember, a thread is a million bytes of memory by default. Would you allocate a bunch of million-byte arrays and then never use them?
You should study the design of the Task Parallel Library for additional inspiration. The insight of the TPL is that threads are workers, but what you care about is getting tasks completed. Your approach puts a thin layer on top of threads, but it does not get past the fact that threads are workers; managing workers is a pain. State your tasks, and let the TPL assign them to workers.
You might also examine the assumptions around the up-to-date-ness of your various flags. They have no locks and are not volatile, and therefore reads and writes can be moved forwards and backwards in time basically at the whim of the CPU.
You also have some non-threading bugs to think about. For example, suppose you decide to pause for thirty minutes, but at five minutes before clocks "spring forward" for daylight savings time. Do you pause for half an hour, or five minutes? Which do you actually intend?
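If the intent is "thirty minutes of elapsed time", one way to sidestep that particular problem is to schedule the unpause against UTC instead of local time; a minimal sketch, reusing the fields above:
// Sketch: UTC does not observe daylight saving transitions, so a 30-minute
// pause is always 30 minutes of elapsed time.
private DateTime pauseUntilUtc;

public void SchedulePause(int seconds) {
    _unpauseScheduled = true;
    pauseUntilUtc = DateTime.UtcNow.AddSeconds(seconds);
    SchedulePause();
}

// ...and in the loop:
if (_unpauseScheduled && DateTime.UtcNow >= pauseUntilUtc) {
    Unpause();
}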
I am working on some interesting concepts related to wrapping threads.
I have called it Fiber for now.
http://net7mma.codeplex.com/SourceControl/latest#Concepts/Classes/Threading/Threading.cs
Eric Lippert is correct about paying a worker to sleep, in some regard; although if you imagine Eric Lippert is paid a salary as opposed to by the hour, then technically he is paid to sleep, just like any other salaried employee.
How does this relate to the concept at hand?
What about Priority? The CPU(s) executing your code are contending with their own pipelines for execution context as well as with requests from the scheduler.
No one has mentioned reducing the Priority, which will reduce the amount of time the scheduler gives to executing that context.
Lowering the Priority will thus give more cycles to other contexts, and it will also reduce your processor's power consumption, making your application run longer if it has a limited source of power (unless, of course, you're using the excess heat to provide additional power to your system).
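For what it's worth, lowering the priority is a one-line change on the wrapped thread (shown here against the t field of the Worker class above):
// Hint to the scheduler that this worker should yield to normal-priority work.
t.Priority = ThreadPriority.BelowNormal;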
I have a class which implements an endless worker thread like this example, in my case representing a body. During runtime I will have between 0 and ~8 instances live at any time, with instances constantly being created and destroyed.
Most of the time this class has a lifecycle of 30 seconds to 5 minutes but occasionally there may be a number of instances created and destroyed in a relatively short period of time. This is where I tend to run into performance issues given the low spec hardware this code is running on.
I would now like to rewrite the behavior so that I use a ThreadPool for my collection of running workers and I am struggling to find the correct way to structure the code.
Basically the code I have at the moment is something like
public class BodyCollection : IReadOnlyDictionary<ulong, TrackedBody>
{
    public void Update()
    {
        if (createNew)
        {
            var body = new TrackedBody();
            body.BeginTracking();
            this.Add(1234, body);
        }
        if (remove)
        {
            TrackedBody body = this[1234];
            body.StopTracking();
            this.Remove(body);
        }
    }
}
public class TrackedBody
{
    private readonly Thread _BiometricsThread;
    private volatile bool _Continue = true;

    public TrackedBody()
    {
        _BiometricsThread = new Thread(RunBiometricsThread);
    }

    public void BeginTracking()
    {
        _BiometricsThread.Start();
    }

    public void StopTracking()
    {
        _Continue = false;
    }

    private void RunBiometricsThread()
    {
        while (_Continue)
        {
            System.Threading.Thread.Sleep(1000);
        }
    }
}
So how do I re-write the above to utilize a ThreadPool correctly and so that I can cancel running threads on the ThreadPool as required? Do I use CancellationTokens or ManualResetEvents to control the threads?
I strongly believe you should be using more modern methods of asynchronous programming. We are going to use the Task Parallel Library here because it gives you the features you want for free:
Tracking completion
Cancellation
Thread pool
public class TrackedBody
{
    public Task BeginTrackingAsync(CancellationToken cancellation)
    {
        return Task.Run(() => RunBiometricsThread(cancellation));
    }

    private void RunBiometricsThread(CancellationToken cancellation)
    {
        while (!cancellation.IsCancellationRequested)
        {
            // Block for up to a second; the wait returns early when
            // cancellation is requested, so the loop exits promptly.
            cancellation.WaitHandle.WaitOne(1000);
        }
    }
}
Note that I have removed the async keyword. This was doing nothing on its own.
You can use the task to track the state of the ongoing work. You can use the cancellation token to stop all work.
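As a sketch of how this might be wired into the existing BodyCollection, each tracked body could be paired with its own CancellationTokenSource (the TrackedBodyHandle type below is invented for this example):
// Sketch: pair each tracked body with its own CancellationTokenSource
// so stopping one body cancels only that body's work.
public class TrackedBodyHandle
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();

    public TrackedBody Body { get; } = new TrackedBody();
    public Task Tracking { get; private set; } = Task.CompletedTask;

    // Start tracking on the thread pool.
    public void Start() => Tracking = Body.BeginTrackingAsync(_cts.Token);

    // Request cancellation; the Tracking task completes once the loop exits.
    public void Stop() => _cts.Cancel();
}
StopTracking then becomes a call to Stop(), and the Tracking task can be observed to know when the loop has actually finished.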
Is there anything wrong with this code, or can it be done more efficiently? In particular, I'm a little concerned about the code within Parallel.ForEach firing/invoking a delegate. Could this potentially cause any issues?
I ask because currently the consumers are often unable to keep up with the items being produced, leading to memory issues.
public delegate void DataChangedDelegate(DataItem obj);

public class Consumer
{
    public DataChangedDelegate OnCustomerChanged;
    public DataChangedDelegate OnOrdersChanged;

    private CancellationTokenSource cts;
    private CancellationToken ct;
    private BlockingCollection<DataItem> queue;

    public Consumer(BlockingCollection<DataItem> queue) {
        this.queue = queue;
        Start();
    }

    private void Start() {
        cts = new CancellationTokenSource();
        ct = cts.Token;
        Task.Factory.StartNew(() => DoWork(), ct);
    }

    private void DoWork() {
        Parallel.ForEach(queue.GetConsumingPartitioner(), item => {
            if (item.DataType == DataTypes.Customer) {
                OnCustomerChanged(item);
            } else if (item.DataType == DataTypes.Order) {
                OnOrdersChanged(item);
            }
        });
    }
}
In particular, I'm a little concerned about the code within Parallel.ForEach firing/invoking a delegate. Could this potentially cause any issues?
In general terms, there's nothing wrong with calling a delegate from within the Parallel.ForEach method.
However, it does make it more difficult to control thread safety, as the delegate will take on the requirements to handle all data synchronization correctly. This is mostly an issue since the main reason to use a delegate is to allow the "method" that you're calling to be passed in, which means it's being supplied externally.
This means, for example, that if a delegate happens to call code that tries to update a user interface, you may be in trouble, as it will get called from a background/ThreadPool thread.
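One way to guard against that, sketched under the assumption that the Consumer is constructed on the UI thread, is to capture the SynchronizationContext at construction and post the callbacks through it (the _uiContext field and Raise helper are additions for this example):
// Sketch: capture the constructing thread's SynchronizationContext and
// marshal delegate invocations through it, so UI-bound handlers are not
// called directly from the Parallel.ForEach worker threads.
private readonly SynchronizationContext _uiContext = SynchronizationContext.Current;

private void Raise(DataChangedDelegate handler, DataItem item) {
    if (handler == null) return;

    if (_uiContext != null) {
        // Post marshals the call asynchronously to the captured context.
        _uiContext.Post(_ => handler(item), null);
    } else {
        handler(item);
    }
}
DoWork would then call Raise(OnCustomerChanged, item) instead of invoking the delegate directly.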
I have to test a method which does a certain amount of work after an interval.
while (running)
{
    ...
    // Work
    ...
    Thread.Sleep(Interval);
}
Interval is passed in as a parameter to the class, so I can just pass in 0 or 1, but I was interested in how to mock the system clock if that weren't the case.
In my test I'd like to be able to simply set the time forward by TimeSpan Interval and have the thread wake up.
I've never written tests for code which acts upon the executing thread before and I'm sure there are some pitfalls to avoid - please feel free to elaborate on what approach you use.
Thanks!
If you do not wish to test the fact that the thread actually sleeps, a more straightforward approach (and one that is actually possible) is to have an ISleepService. You can then mock it out so your tests don't sleep, while the production implementation performs the real Thread.Sleep.
ISleepService sleepService = Container.Resolve<ISleepService>();

..

while (running)
{
    ...
    // Work
    ...
    sleepService.Sleep(Interval);
}
Example using Moq:
public interface ISleepService
{
    void Sleep(int interval);
}

[Test]
public void Test()
{
    const int Interval = 1000;

    Mock<ISleepService> sleepService = new Mock<ISleepService>();
    sleepService.Setup(s => s.Sleep(It.IsAny<int>()));

    _container.RegisterInstance(sleepService.Object);

    SomeClass someClass = _container.Resolve<SomeClass>();
    someClass.DoSomething(interval: Interval);

    // Do some asserting.
    // Optionally assert that the sleep service was called.
    sleepService.Verify(s => s.Sleep(Interval));
}

private class SomeClass
{
    private readonly ISleepService _sleepService;

    public SomeClass(IUnityContainer container)
    {
        _sleepService = container.Resolve<ISleepService>();
    }

    public void DoSomething(int interval)
    {
        while (true)
        {
            _sleepService.Sleep(interval);
            break;
        }
    }
}
Update
On a design/maintenance note, if it is painful to change the constructor of SomeClass, or to add dependency injection points to users of the class, then a service-locator-type pattern can help out here, e.g.:
private class SomeClass
{
    private readonly ISleepService _sleepService;

    public SomeClass()
    {
        _sleepService = ServiceLocator.Container.Resolve<ISleepService>();
    }

    public void DoSomething(int interval)
    {
        while (true)
        {
            _sleepService.Sleep(interval);
            break;
        }
    }
}
You can't really mock the system clock.
If you need to be able to alter the suspend behavior of code like this, you will need to refactor it so that you are not calling Thread.Sleep() directly.
I would create a singleton service which could be injected into the application when it is under test. The singleton service would include methods that allow an external caller (like a unit test) to cancel a sleep operation.
Alternatively, you could use an EventWaitHandle (such as a ManualResetEvent) and its WaitOne() method, which takes a timeout parameter. That way you can signal the handle to cancel the "sleep", or let it time out:
// Publicly available so external code (e.g. a test) can cancel the sleep.
public ManualResetEvent CancellableSleep = new ManualResetEvent(false);

// In your code under test, use this instead of Thread.Sleep()...
while (running) {
    // .. work ..
    CancellableSleep.WaitOne(Interval); // suspends the thread for up to Interval
}

// External code can cancel the sleep by doing:
CancellableSleep.Set(); // trigger the handle...