I am writing a Windows service which creates a couple of parallel tasks. The following is a sample code snippet:
private static void TaskMethod1()
{
    // A bunch of operations happen here; for this sample they can be replaced with a sleep of ~25 minutes
}

private static async Task TaskMethod()
{
    while (runningService)
    {
        // This will create more than one task in parallel, and each task can take up to 30 minutes to finish
        Task.Run(() => TaskMethod1());
    }
}
internal static void Start()
{
    runningService = true;
    Task1 = Task.Run(() => TaskMethod());
}

internal static void Stop()
{
    runningService = false;
    Task1.Wait();
}
Now when I stop the service, it will not create any new tasks because runningService = false, but the Windows service doesn't wait 30 minutes for the already running tasks to finish.
I have read that there is an x-minute timeout for a service and that it can be changed using registry settings. I was wondering whether there is a way to make the service wait for each task to finish, instead of hardcoding that time via the registry.
In my .NET Core services I use the IHostApplicationLifetime interface to intercept when my service is being closed, by registering an action to call using IHostApplicationLifetime.ApplicationStopped.Register(Action callback).
Then in the callback you could wait for the task to complete.
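For illustration, here is a minimal sketch of that idea in a BackgroundService-style worker. The names are placeholders, and it registers on ApplicationStopping (which fires while shutdown is still in progress) rather than ApplicationStopped; the Register pattern is the same either way:
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class Worker : BackgroundService
{
    private readonly IHostApplicationLifetime _lifetime;
    private Task _runningJob = Task.CompletedTask; // placeholder for the real long-running work

    public Worker(IHostApplicationLifetime lifetime)
    {
        _lifetime = lifetime;

        // Block shutdown until the in-flight job has finished.
        _lifetime.ApplicationStopping.Register(() => _runningJob.Wait());
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Stand-in for a job that could take up to 30 minutes.
        _runningJob = Task.Run(() => Thread.Sleep(TimeSpan.FromMinutes(1)));
        return _runningJob;
    }
}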
I have a background service that is started when the application starts up. The background service creates multiple tasks based on how many workers are configured. As I run various trials and monitor the open connections on the DB, the number of open connections always matches the number of workers I set. Say I set 32 workers; then 32 open connections are always shown when I check with a query. FYI, I am using Postgres as the DB server. To check the open connections while the application is running, I use the query below:
select * from pg_stat_activity where application_name = 'myapplication';
Below is the background service code.
public class MessagingService : BackgroundService {
    private readonly IServiceProvider _services; // assumed constructor injection (the original snippet references _services without showing it)
    private int worker = 32;

    public MessagingService(IServiceProvider services) {
        _services = services;
    }

    protected override async Task ExecuteAsync(CancellationToken cancellationToken) {
        var tasks = new List<Task>();
        for (int i = 0; i < worker; i++) {
            tasks.Add(DoJob(cancellationToken));
        }
        while (!cancellationToken.IsCancellationRequested) {
            try {
                var completed = await Task.WhenAny(tasks);
                tasks.Remove(completed);
            } catch (Exception) {
                await Task.Delay(1000, cancellationToken);
            }
            if (!cancellationToken.IsCancellationRequested) {
                tasks.Add(DoJob(cancellationToken));
            }
        }
    }

    private async Task DoJob(CancellationToken cancellationToken) {
        using (var scope = _services.CreateScope()) {
            var service = scope.ServiceProvider
                .GetRequiredService<MessageService>();
            try {
                // do select and update query on the DB; if nothing is found return false, otherwise send mail
                if (!await service.Run(cancellationToken)) {
                    await Task.Delay(1000, cancellationToken);
                }
            } catch (Exception) {
                await Task.Delay(1000, cancellationToken);
            }
        }
    }
}
The workflow is not right: it keeps creating tasks and leaves the connections open and idle. Also, CPU and memory usage are high when running those tasks. How can I make it so that when no record is found in the DB, only 1 worker keeps running, and when one or more records are found the worker count increases up to the preset maximum, then decreases again when there are fewer records than the maximum? If this question is too vague or opinion-based then please let me know and I will try my best to make it as specific as possible.
Update: Purpose
The purpose of this service is to perform email delivery. There is another API that is used to create scheduled jobs. Once a job is added to the DB, this service performs the email delivery at the scheduled time. E.g. 5k scheduled jobs are added to the DB, the scheduled time to perform them is '2021-12-31 08:00:00', and the time of creation is '2021-12-31 00:00:00'. The service keeps looping from 00:00:00 until 08:00:00 with 32 workers running at the same time, and only then starts doing the email delivery. How can I make this more efficient, so that when there is no job scheduled only 1 worker is running, when it sees 5k scheduled jobs it fully utilises all the workers, and after the 5k jobs are completed it goes back to 1 worker?
My suggestion is to spare yourself from the burden of manually creating and maintaining worker tasks, by using an ActionBlock<T> from the TPL Dataflow library. This component is a combination of an input queue and an Action<T> delegate. You specify the delegate in its constructor, and you feed it messages with its Post method. The component invokes the delegate for each message it receives, with the specified degree of parallelism. When there are no more messages to send, you notify it by invoking its Complete method, and then await its Completion so that you know that all the work that was delegated to it has completed.
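In isolation, the shape of that API looks like this (a minimal sketch; the message type and the work inside the delegate are placeholders):
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // NuGet package: System.Threading.Tasks.Dataflow

class ActionBlockShapeDemo
{
    static async Task Main()
    {
        var block = new ActionBlock<string>(async message =>
        {
            await Task.Delay(100);           // stand-in for real work
            Console.WriteLine(message);
        }, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

        block.Post("hello");
        block.Post("world");

        block.Complete();                    // signal that no more messages will be posted
        await block.Completion;              // wait until all posted work has finished
    }
}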
Below is a rough demonstration of how you could use this component:
protected override async Task ExecuteAsync(CancellationToken cancellationToken)
{
    var processor = new ActionBlock<Job>(async job =>
    {
        await ProcessJob(job);
        await MarkJobAsCompleted(job);
    }, new ExecutionDataflowBlockOptions()
    {
        MaxDegreeOfParallelism = 32
    });

    try
    {
        while (true)
        {
            Task delayTask = Task.Delay(TimeSpan.FromSeconds(60), cancellationToken);
            Job[] jobs = await FetchReadyToProcessJobs();
            foreach (var job in jobs)
            {
                await MarkJobAsPending(job);
                processor.Post(job);
            }
            await delayTask; // Will throw when the token is canceled
        }
    }
    finally
    {
        processor.Complete();
        await processor.Completion;
    }
}
The FetchReadyToProcessJobs method is supposed to connect to the database, and fetch all the jobs whose time has come to be processed. In the above example this method is invoked every 60 seconds. The Task.Delay is created before invoking the method, and awaited after the returned jobs have been posted to the ActionBlock<T>. This way the interval between invocations will be stable and consistent.
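For illustration only (the answer leaves the data access open), a hypothetical FetchReadyToProcessJobs using Npgsql against an assumed jobs table could look like this; _connectionString, the table and column names, and the Job properties are all assumptions to be adapted to the real schema:
// Requires: using Npgsql; using System.Collections.Generic; using System.Threading.Tasks;
// Hypothetical sketch: _connectionString, the "jobs" table/columns and the Job shape are assumptions.
private async Task<Job[]> FetchReadyToProcessJobs()
{
    var jobs = new List<Job>();

    await using var connection = new NpgsqlConnection(_connectionString);
    await connection.OpenAsync();

    await using var command = new NpgsqlCommand(
        "SELECT id, recipient FROM jobs WHERE status = 'Scheduled' AND scheduled_at <= now()",
        connection);

    await using var reader = await command.ExecuteReaderAsync();
    while (await reader.ReadAsync())
    {
        jobs.Add(new Job { Id = reader.GetInt32(0), Recipient = reader.GetString(1) });
    }

    return jobs.ToArray();
}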
I have a headless UWP application that uses an external library to connect to a serial device and send some commands. It runs an infinite loop (while true) with a 10-minute pause between iterations. The measurement process takes around 4 minutes.
The external library needs to run 3 measurements, and after each one it signals by raising an event. When the event is raised the 4th time I know that I can return the results.
After 4 hours (+/- a few seconds) the library stops raising events (usually it raises the event once or twice and then halts; no errors, nothing).
In DoMeasureAsync() below I implemented a CancellationTokenSource that was supposed to cancel the TaskCompletionSource after 8 minutes, so that the task returns and the loop continues.
Problem:
When the measurement does not complete (the NMeasureCompletionSource never gets its result set in class CMeasure), the task from nMeasureCompletionSource is never cancelled. The delegate defined in RespondToCancellationAsync() should run after the 8 minutes.
If the measurement runs ok, I can see in the logs that the code in the
taskAtHand.ContinueWith((x) =>
{
    Logger.LogDebug("Disposing CancellationTokenSource...");
    cancellationTokenSource.Dispose();
});
gets called.
Edit:
Is it possible that the GC kicks in after the 4 hours and deallocates some variables, making the app unable to send the commands to the sensor? - It turned out this is not the case.
What am I missing here?
// this gets called in a while (true) loop
public Task<PMeasurement> DoMeasureAsync()
{
    nMeasureCompletionSource = new TaskCompletionSource<PMeasurement>();
    cancellationTokenSource = new CancellationTokenSource(TimeSpan.FromMinutes(8));

    var t = cMeasure.Run(nMeasureCompletionSource, cancellationTokenSource.Token);

    var taskAtHand = nMeasureCompletionSource.Task;
    taskAtHand.ContinueWith((x) =>
    {
        Logger.LogDebug("Disposing CancellationTokenSource...");
        cancellationTokenSource.Dispose();
    });

    return taskAtHand;
}
public class CMeasure
{
    public async Task Run(TaskCompletionSource<PMeasurement> tcs, CancellationToken cancellationToken)
    {
        try
        {
            NMeasureCompletionSource = tcs;
            CancellationToken = cancellationToken;
            CancellationToken.Register(async () => await RespondToCancellationAsync(), useSynchronizationContext: false);

            CloseDevice(); // Closing the device if for some reason it is still open
            await Task.Delay(2500);
            TheDevice = await GetDevice();

            measurementsdone = 0;
            Process(); // start the first measurement
        }
        catch (Exception ex)
        {
            DisconnectCommManagerAndCloseDevice();
            NMeasureCompletionSource.SetException(ex);
        }
    }

    public async Task RespondToCancellationAsync()
    {
        if (!NMeasureCompletionSource.Task.IsCompleted)
        {
            Logger.LogDebug("Measure Completion Source is not completed. Cancelling...");
            NMeasureCompletionSource.SetCanceled();
        }
        DisconnectCommManagerAndCloseDevice();
        await Task.Delay(2500);
    }

    private void Process()
    {
        if (measurementsdone < 3)
        {
            var message = Comm.Measure(m); // start a new measurement on the device
        }
        else
        {
            ...
            NMeasureCompletionSource.SetResult(result);
        }
    }

    // the method called when the event is raised by the external library
    private void Comm_EndMeasurement(object sender, EventArgs e)
    {
        measurementsdone++;
        Process();
    }
}
After more testing I have reached the conclusion that there is no memory leak and that all the objects are disposed. The cancellation works well also.
So far it appears that my problem comes from the execution of the headless app on the Raspberry Pi. Although I am using the deferral = taskInstance.GetDeferral(); it seems that the execution is stopped at some point...
I will test more and come back with the results (possibly in a new post, but I will put a link here as well).
Edit:
Here is the new post: UWP - Headless app stops after 3 or 4 hours
Edit 2:
The problem came from a 3rd party library that I had to use, which had to be called differently from a headless app. Internally it was creating its own TaskScheduler if SynchronizationContext.Current was null.
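As a purely illustrative sketch (this is my assumption about one possible workaround, not the library's documented fix): one way to avoid that code path is to make sure a SynchronizationContext is installed on the calling thread before invoking the library.
using System.Threading;

// Hypothetical workaround sketch: ensure SynchronizationContext.Current is not null
// before calling into the third-party library, so it does not fall back to
// creating its own TaskScheduler.
static void EnsureSynchronizationContext()
{
    if (SynchronizationContext.Current == null)
    {
        SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());
    }
}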
I am creating a console program which can test reads/writes to a cache by simulating multiple clients, and have written the following code. Please help me understand:
Is this the correct way to achieve the multi-client simulation?
What more can I do to make it a genuine load test?
static void Main()
{
    List<Task<long>> taskList = new List<Task<long>>();

    for (int i = 0; i < 500; i++)
    {
        taskList.Add(TestAsync());
    }

    Task.WaitAll(taskList.ToArray());

    double averageTime = taskList.Average(t => t.Result);
}

public static async Task<long> TestAsync()
{
    // Measures the total time taken (with a Stopwatch) around the cache read / write call
    return await Task.Factory.StartNew(() =>
    {
        // ... call cache read / write here and return the elapsed milliseconds
        return 0L;
    });
}
I adjusted your code slightly to see how many threads we have at a particular time.
static volatile int currentExecutionCount = 0;

static void Main(string[] args)
{
    List<Task<long>> taskList = new List<Task<long>>();
    var timer = new Timer(Print, null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));

    for (int i = 0; i < 1000; i++)
    {
        taskList.Add(DoMagic());
    }

    Task.WaitAll(taskList.ToArray());

    timer.Change(Timeout.Infinite, Timeout.Infinite);
    timer = null;

    // to check that we have all the threads executed
    Console.WriteLine("Done " + taskList.Sum(t => t.Result));
    Console.ReadLine();
}

static void Print(object state)
{
    Console.WriteLine(currentExecutionCount);
}

static async Task<long> DoMagic()
{
    return await Task.Factory.StartNew(() =>
    {
        Interlocked.Increment(ref currentExecutionCount);

        // place your code here
        Thread.Sleep(TimeSpan.FromMilliseconds(1000));

        Interlocked.Decrement(ref currentExecutionCount);
        return 4;
    }
    // this hint tells the scheduler to use new threads rather than pooled ones
    , TaskCreationOptions.LongRunning
    );
}
The result: inside a virtual machine I see from 2 to 10 threads running simultaneously if I don't use the hint. With the hint, up to 100. And on a real machine I can see 1000 threads at once; Process Explorer confirms this. A detail on the hint that may be helpful: TaskCreationOptions.LongRunning tells the scheduler that the work will run for a long time, so it typically runs the task on its own dedicated thread rather than on a thread-pool thread.
If it is very busy, then apparently your clients have to wait a while before their requests are serviced. Your program does not measure this, because your stopwatch starts running when the servicing of the request starts.
If you also want to measure what happens to the average time before a request is finished, you should start your stopwatch when the request is made, not when the request is serviced.
Your program takes only threads from the thread pool. If you start more tasks than there are threads, some tasks will have to wait before TestAsync starts running. This wait time would be measured if you remembered the time Task.Run was called.
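A sketch of that change (CacheReadWrite is a placeholder for the real cache call, not an existing method):
// Requires: using System.Diagnostics; using System.Threading.Tasks;
public static async Task<long> TestAsync()
{
    var stopwatch = Stopwatch.StartNew();    // started when the request is made...
    await Task.Run(() => CacheReadWrite());  // ...so time spent waiting for a free thread is included
    return stopwatch.ElapsedMilliseconds;
}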
Besides the flaw in time measurements, how many service requests do you expect simultaneously? Are there enough free threads in your thread pool to simulate this? If you expect about 50 service requests at the same time, and the size of your thread pool is only 20 threads, then you'll never run 50 service requests at the same time. Vice versa: if your thread pool is way bigger than your number of expected simultaneous service requests, then you'll measure longer times than is actually the case.
Consider changing the number of threads in your thread pool, and make sure no one else uses any threads of the pool.
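For example (a sketch with illustrative numbers), the pool's minimums can be inspected and raised like this:
using System;
using System.Threading;

class ThreadPoolSetup
{
    static void Main()
    {
        // Inspect the current minimums.
        ThreadPool.GetMinThreads(out int workerThreads, out int completionPortThreads);
        Console.WriteLine($"Min worker threads: {workerThreads}, min IO threads: {completionPortThreads}");

        // Illustrative number: allow ~50 simulated clients to run without waiting for pool growth.
        ThreadPool.SetMinThreads(50, completionPortThreads);
    }
}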
In Brief
I have a Windows Service that executes several jobs as async Tasks in parallel. However, when OnStop is called, it seems that these are all immediately terminated instead of being allowed to stop in a more graceful manner.
In more detail
Each job represents an iteration of work, so having completed its work the job then needs to run again.
To accomplish this, I am writing a proof-of-concept Windows Service that:
runs each job as an awaited async TPL Task (these are all I/O bound tasks)
each job is run iteratively within a loop
each job's loop is run in parallel
When I run the Service, I see everything executing as I expect. However, when I Stop the service, it seems that everything stops dead.
Okay - so how is this working?
In the Service I have a cancellation token and a TaskCompletionSource:
private static CancellationTokenSource _cancelSource = new CancellationTokenSource();
private TaskCompletionSource<bool> _jobCompletion = new TaskCompletionSource<bool>();

private Task<bool> AllJobsCompleted { get { return _jobCompletion.Task; } }
The idea is that when every Job has gracefully stopped, then the Task AllJobsCompleted will be marked as completed.
The OnStart simply starts running these jobs:
protected override async void OnStart(string[] args)
{
    _cancelSource = new CancellationTokenSource();
    var jobsToRun = GetJobsToRun(); // details of jobs not relevant

    Task.Run(() => this.RunJobs(jobsToRun, _cancelSource.Token).ConfigureAwait(false), _cancelSource.Token);
}
The Task RunJobs will run each job in a parallel loop:
private async Task RunJobs(IEnumerable<Jobs> jobs, CancellationToken cancellationToken)
{
    var parallelOptions = new ParallelOptions { CancellationToken = cancellationToken };
    int jobsRunningCount = jobs.Count();
    object lockObject = new object();

    Parallel.ForEach(jobs, parallelOptions, async (job, loopState) =>
    {
        try
        {
            do
            {
                await job.DoWork().ConfigureAwait(false); // could take 5 seconds
                parallelOptions.CancellationToken.ThrowIfCancellationRequested();
            } while (true);
        }
        catch (OperationCanceledException)
        {
            lock (lockObject) { jobsRunningCount--; }
        }
    });

    do
    {
        await Task.Delay(TimeSpan.FromSeconds(5));
    } while (jobsRunningCount > 0);

    _jobCompletion.SetResult(true);
}
So, what should be happening is that when each job finishes its current iteration, it should see that the cancellation has been signalled and it should then exit the loop and decrement the counter.
Then, when jobsRunningCount reaches zero, we update the TaskCompletionSource. (There may be a more elegant way of achieving this; one possibility is sketched below.)
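As an aside (an illustration added here, not part of the original question): if each job loop is started as its own task, Task.WhenAll can replace the manual counter and the polling delay. The names follow the question's code; treat this as a sketch rather than a drop-in replacement.
// Sketch only: one task per job loop, awaited together instead of counted by hand.
// Requires: using System.Linq;
private async Task RunJobs(IEnumerable<Jobs> jobs, CancellationToken cancellationToken)
{
    var jobLoops = jobs.Select(job => Task.Run(async () =>
    {
        while (!cancellationToken.IsCancellationRequested)
        {
            await job.DoWork().ConfigureAwait(false); // could take 5 seconds
        }
    })).ToList();

    await Task.WhenAll(jobLoops).ConfigureAwait(false);
    _jobCompletion.SetResult(true);
}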
So, for the OnStop:
protected override async void OnStop()
{
    this.RequestAdditionalTime(100000); // some large number

    _cancelSource.Cancel();
    TraceMessage("Task cancellation requested."); // Last thing traced

    try
    {
        bool allStopped = await this.AllJobsCompleted;
        TraceMessage(string.Format("allStopped = '{0}'.", allStopped));
    }
    catch (Exception e)
    {
        TraceMessage(e.Message);
    }
}
What I expect is this:
Click [STOP] on the Service
The Service should take some time to stop
I should see a trace statement "Task cancellation requested."
I should see a trace statement saying either "allStopped = true", or the exception message
And when I debug this using a WPF Form app, I get this.
However, when I install it as a service:
Click [STOP] on the Service
The Service stops almost immediately
I only see the trace statement "Task cancellation requested."
What do I need to do to ensure the OnStop doesn't kill off my parallel async jobs and waits for the TaskCompletionSource?
Your problem is that OnStop is async void. So, when it does await this.AllJobsCompleted, what actually happens is that it returns from OnStop, which the SCM interprets as having stopped, and terminates the process.
This is one of the rare scenarios where you'd need to block on a task, because you cannot allow OnStop to return until after the task completes.
This should do it:
protected override void OnStop()
{
    this.RequestAdditionalTime(100000); // some large number

    _cancelSource.Cancel();
    TraceMessage("Task cancellation requested."); // Last thing traced

    try
    {
        bool allStopped = this.AllJobsCompleted.GetAwaiter().GetResult();
        TraceMessage(string.Format("allStopped = '{0}'.", allStopped));
    }
    catch (Exception e)
    {
        TraceMessage(e.Message);
    }
}
I have a folder on my Windows server where people will be uploading CSV files: C:\Uploads.
I want to write a simple Windows service application that will scan this uploads folder every 5 seconds, collect the files, and process them in parallel (a thread per file?). However, the main scanning process should not overlap with itself, i.e. locking is required.
So, I was experimenting with it like this:
I am aware this is not Windows service code; it's a console app to test ideas...
Updated Code, based on dcastro's reply
class Program
{
    static Timer _InternalTimer;
    static Object _SyncLock = new Object();

    static void Main(string[] args)
    {
        _InternalTimer = new Timer(InitProcess, null, 0, 5000); // Sync cycle is every 5 sec
        Console.ReadKey();
    }

    private static void InitProcess(Object state)
    {
        ConsoleLog("Starting Process");
        StartProcess();
    }

    static void StartProcess()
    {
        bool lockTaken = false;
        try
        {
            Monitor.TryEnter(_SyncLock, ref lockTaken);
            if (lockTaken)
            {
                ConsoleLog("Lock Acquired. Doing some dummy work...");

                List<string> fileList = new List<string>()
                {
                    "fileA.csv",
                    "fileB.csv"
                };

                Parallel.ForEach(fileList, (string fileName) =>
                {
                    ConsoleLog("Processing File: " + fileName);
                    Thread.Sleep(10000); // 10 sec to process each file
                });

                GC.Collect();
            }
            else
            {
                ConsoleLog("Sync Is Busy, Skipping Cycle");
            }
        }
        finally
        {
            if (lockTaken)
                Monitor.Exit(_SyncLock);
        }
    }

    static void ConsoleLog(String Message)
    {
        Console.WriteLine("[{0}]: {1}",
            DateTime.UtcNow.ToString("HH:mm:ss tt"),
            Message);
    }
}
When it runs, it looks like this:
Does this look right? Any help/tips on improving this will be much appreciated.
It seems fine to me, apart from the fact that you don't need to start a task with Task.Factory.StartNew. The System.Threading.Timer already executes your callback on the ThreadPool, so there's no need to launch yet another task that will also be run on the thread pool.
Also, if your timer ticks every 5 seconds, and you expect it to take about 10 seconds to process the files, then your threads will begin to queue up waiting for the lock to be released. That is what happened in the example you posted.
If this is the case, I would either increase the timer's period to more than 10 secs, or use Monitor.TryEnter instead of a regular lock. TryEnter will try to acquire the lock, and return immediately regardless of whether or not the lock was taken. If the lock is currently taken by another thread, you just skip this tick entirely.
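A related variation (my illustration, building on the "increase the period" idea rather than something stated in the original answer): re-arm a one-shot timer after each cycle finishes, so ticks can never overlap no matter how long processing takes. Names are reused from the question's code.
// Sketch: a one-shot timer re-armed after each cycle, so cycles can never overlap.
// The 5000 ms delay between cycles is illustrative.
static Timer _InternalTimer;

static void Main(string[] args)
{
    _InternalTimer = new Timer(InitProcess, null, 0, Timeout.Infinite); // fire once, immediately
    Console.ReadKey();
}

private static void InitProcess(Object state)
{
    try
    {
        ConsoleLog("Starting Process");
        StartProcess(); // existing processing logic
    }
    finally
    {
        // Schedule the next run 5 seconds after this one has finished.
        _InternalTimer.Change(5000, Timeout.Infinite);
    }
}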