Forcing certain code to always run on the same thread - c#

We have an old 3rd party system (let's call it Junksoft® 95) that we interface with via PowerShell (it exposes a COM object) and I'm in the process of wrapping it in a REST API (ASP.NET Framework 4.8 and WebAPI 2). I use the System.Management.Automation NuGet package to create a PowerShell instance in which I instantiate Junksoft's COM API as a dynamic object that I then use:
//I'm omitting some exception handling and maintenance code for brevity
powerShell = System.Management.Automation.PowerShell.Create();
powerShell.AddScript(@"Add-Type -Path C:\Path\To\Junksoft\Scripting.dll");
powerShell.AddScript("New-Object Com.Junksoft.Scripting.ScriptingObject");
dynamic junksoftAPI = powerShell.Invoke()[0];
//Now we issue commands to junksoftAPI like this:
junksoftAPI.Login(user,pass);
int age = junksoftAPI.GetAgeByCustomerId(custId);
List<string> names = junksoftAPI.GetNames();
This works fine when I run all of this on the same thread (e.g. in a console application). However, for some reason this usually doesn't work when I put junksoftAPI into a System.Web.Caching.Cache and use it from different controllers in my web app. I say usually because this actually works when ASP.NET happens to give the incoming call to the thread that junksoftAPI was created on. If it doesn't, Junksoft 95 gives me an error.
Is there any way for me to make sure that all interactions with junksoftAPI happen on the same thread?
Note that I don't want to turn the whole web application into a single-threaded application! The logic in the controllers and elsewhere should happen like normal on different threads. It should only be the Junksoft interactions that happen on the Junksoft-specific thread, something like this:
[HttpGet]
public async Task<IHttpActionResult> GetAge(...)
{
//finding customer ID in database...
...
int custAge = await Task.Run(() => {
//this should happen on the Junksoft-specific thread and not the next available thread
var cache = System.Web.HttpRuntime.Cache;
var junksoftAPI = cache.Get(...); //This has previously been added to cache on the Junksoft-specific thread
return junksoftAPI.GetAgeByCustomerId(custId);
});
//prepare a response using custAge...
}

You can create your own singleton worker thread to achieve this. Here is the code, which you can plug into your web application.
public class JunkSoftRunner
{
private static JunkSoftRunner _instance;
//singleton pattern to restrict all the actions to be executed on a single thread only.
public static JunkSoftRunner Instance => _instance ?? (_instance = new JunkSoftRunner());
private readonly SemaphoreSlim _semaphore;
private readonly AutoResetEvent _newTaskRunSignal;
private TaskCompletionSource<object> _taskCompletionSource;
private Func<object> _func;
private JunkSoftRunner()
{
_semaphore = new SemaphoreSlim(1, 1);
_newTaskRunSignal = new AutoResetEvent(false);
var contextThread = new Thread(ThreadLooper)
{
Priority = ThreadPriority.Highest
};
contextThread.Start();
}
private void ThreadLooper()
{
while (true)
{
//wait till the next task signal is received.
_newTaskRunSignal.WaitOne();
//next task execution signal is received.
try
{
//try execute the task and get the result
var result = _func.Invoke();
//task executed successfully, set the result
_taskCompletionSource.SetResult(result);
}
catch (Exception ex)
{
//task execution threw an exception, set the exception and continue with the looper
_taskCompletionSource.SetException(ex);
}
}
}
public async Task<TResult> Run<TResult>(Func<TResult> func, CancellationToken cancellationToken = default(CancellationToken))
{
//allows only one thread to run at a time.
await _semaphore.WaitAsync(cancellationToken);
//thread has acquired the semaphore and entered
try
{
//create new task completion source to wait for func to get executed on the context thread
_taskCompletionSource = new TaskCompletionSource<object>();
//set the function to be executed by the context thread
_func = () => func();
//signal the waiting context thread that it is time to execute the task
_newTaskRunSignal.Set();
//wait and return the result till the task execution is finished on the context/looper thread.
return (TResult)await _taskCompletionSource.Task;
}
finally
{
//release the semaphore to allow other threads to acquire it.
_semaphore.Release();
}
}
}
Console Main Method for testing:
public class Program
{
//testing the junk soft runner
public static void Main()
{
//get the singleton instance
var softRunner = JunkSoftRunner.Instance;
//simulate web request on different threads
for (var i = 0; i < 10; i++)
{
var taskIndex = i;
//launch a web request on a new thread.
Task.Run(async () =>
{
Console.WriteLine($"Task{taskIndex} (ThreadID:'{Thread.CurrentThread.ManagedThreadId}') Launched");
return await softRunner.Run(() =>
{
Console.WriteLine($"->Task{taskIndex} Completed On '{Thread.CurrentThread.ManagedThreadId}' thread.");
return taskIndex;
});
});
}
}
}
Output:
Notice that, although the function was launched from different threads, that portion of code was always executed on the same context thread, with ID '5'.
But beware that, though all the web requests are executed on independent threads, they will eventually wait for their work to get executed on the singleton worker thread. This will eventually create a bottleneck in your web application; that is, however, inherent to the single-thread design you asked for.
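For completeness, here is a rough sketch (not part of the answer above) of how a WebAPI controller might route its Junksoft calls through the runner. It assumes the dynamic COM wrapper was itself created inside a Run call at start-up and cached under a hypothetical "junksoftAPI" key, and it reuses GetAgeByCustomerId from the question:
[HttpGet]
public async Task<IHttpActionResult> GetAge(int custId)
{
    // Assumption: the dynamic Junksoft wrapper was created earlier inside
    // JunkSoftRunner.Instance.Run(...) and cached under this hypothetical key.
    dynamic junksoftAPI = HttpContext.Current.Cache.Get("junksoftAPI");

    // Every Junksoft interaction is marshalled onto the single worker thread.
    int custAge = await JunkSoftRunner.Instance.Run(
        () => (int)junksoftAPI.GetAgeByCustomerId(custId));

    return Ok(custAge);
}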

Here is how you could issue commands to the Junksoft API from a dedicated STA thread, using a BlockingCollection class:
public class JunksoftSTA : IDisposable
{
private readonly BlockingCollection<Action<Lazy<dynamic>>> _pump;
private readonly Thread _thread;
public JunksoftSTA()
{
_pump = new BlockingCollection<Action<Lazy<dynamic>>>();
_thread = new Thread(() =>
{
var lazyApi = new Lazy<dynamic>(() =>
{
var powerShell = System.Management.Automation.PowerShell.Create();
powerShell.AddScript(@"Add-Type -Path C:\Path\To\Junksoft.dll");
powerShell.AddScript("New-Object Com.Junksoft.ScriptingObject");
dynamic junksoftAPI = powerShell.Invoke()[0];
return junksoftAPI;
});
foreach (var action in _pump.GetConsumingEnumerable())
{
action(lazyApi);
}
});
_thread.SetApartmentState(ApartmentState.STA);
_thread.IsBackground = true;
_thread.Start();
}
public Task<T> CallAsync<T>(Func<dynamic, T> function)
{
var tcs = new TaskCompletionSource<T>(
TaskCreationOptions.RunContinuationsAsynchronously);
_pump.Add(lazyApi =>
{
try
{
var result = function(lazyApi.Value);
tcs.SetResult(result);
}
catch (Exception ex)
{
tcs.SetException(ex);
}
});
return tcs.Task;
}
public Task CallAsync(Action<dynamic> action)
{
return CallAsync<object>(api => { action(api); return null; });
}
public void Dispose() => _pump.CompleteAdding();
public void Join() => _thread.Join();
}
The purpose of using the Lazy class is for surfacing a possible exception during the construction of the dynamic object, by propagating it to the callers.
...exceptions are cached. That is, if the factory method throws an exception the first time a thread tries to access the Value property of the Lazy<T> object, the same exception is thrown on every subsequent attempt.
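As a small aside, here is a sketch of my own (not from the answer or the documentation) demonstrating that exception-caching behavior:
var lazy = new Lazy<dynamic>(() =>
{
    // Simulates the COM initialization failing.
    throw new InvalidOperationException("COM init failed");
});

try { dynamic value = lazy.Value; } catch (InvalidOperationException) { /* first caller sees the failure */ }
try { dynamic value = lazy.Value; } catch (InvalidOperationException) { /* every later caller gets the same cached exception */ }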
Usage example of the JunksoftSTA class:
// A static field stored somewhere
public static readonly JunksoftSTA JunksoftStatic = new JunksoftSTA();
await JunksoftStatic.CallAsync(api => { api.Login("x", "y"); });
int age = await JunksoftStatic.CallAsync(api => api.GetAgeByCustomerId(custId));
In case you find that a single STA thread is not enough to serve all the requests in a timely manner, you could add more STA threads, all of them running the same code (private readonly Thread[] _threads; etc). The BlockingCollection class is thread-safe and can be consumed concurrently by any number of threads.
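A rough sketch of that multi-thread variant (assuming the rest of the class stays the same, and that CreateJunksoftApi() is a hypothetical helper containing the PowerShell factory shown above):
// Inside JunksoftSTA: the single _thread field becomes a small pool.
private readonly Thread[] _threads;

public JunksoftSTA(int threadCount = 2)
{
    _pump = new BlockingCollection<Action<Lazy<dynamic>>>();
    _threads = Enumerable.Range(0, threadCount).Select(_ =>
    {
        var thread = new Thread(() =>
        {
            // Each worker thread gets its own lazily created Junksoft instance.
            var lazyApi = new Lazy<dynamic>(CreateJunksoftApi);
            foreach (var action in _pump.GetConsumingEnumerable())
                action(lazyApi);
        });
        thread.SetApartmentState(ApartmentState.STA);
        thread.IsBackground = true;
        thread.Start();
        return thread;
    }).ToArray();
}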

If you had not said that it was a 3rd-party tool, I would have assumed it was a GUI class. For practical reasons, it is a very bad idea to have multiple threads write to them; .NET enforces a strict "only the creating thread shall write" rule from 2.0 onward.
Web servers in general, and ASP.NET in particular, use a pretty big thread pool. We are talking tens to hundreds of threads per core. That means it is really hard to nail any request down to a specific thread. You might as well not try.
Again, looking at the GUI classes might be your best bet. You could basically make a single thread whose sole purpose is imitating a GUI's event queue. The Main/UI thread of your average Windows Forms application is responsible for creating every GUI class instance. It is kept alive by polling/processing the event queue, and it ends only when it receives a cancel command via that queue. Dispatching just puts orders into the queue, so cross-threading issues are avoided.
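A minimal sketch of such a hand-rolled "event queue" thread (my own illustration of the idea, using a BlockingCollection as the queue):
using System;
using System.Collections.Concurrent;
using System.Threading;

public sealed class SingleThreadQueue : IDisposable
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

    public SingleThreadQueue()
    {
        // The dedicated "UI-like" thread: it owns the resource and processes one order at a time.
        var loop = new Thread(() =>
        {
            foreach (var work in _queue.GetConsumingEnumerable())
                work();
        }) { IsBackground = true };
        loop.Start();
    }

    // "Dispatching" just puts an order into the queue.
    public void Post(Action work) => _queue.Add(work);

    // The "cancel command": completing the queue ends the loop.
    public void Dispose() => _queue.CompleteAdding();
}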

Related

How to write a long running activity to call web services in WF 4.0

I created an activity which executes a web request and stores the result into the database. I found out that for these long running activities I should write some different code so that the workflow engine thread won't be blocked.
public sealed class WebSaveActivity : NativeActivity
{
protected override void Execute(NativeActivityContext context)
{
GetAndSave(); // This takes 1 hour to accomplish.
}
}
How should I rewrite this activity to meet the requirements for a long-running activity?
You could either spawn a thread within your existing process using e.g. ThreadPool.QueueUserWorkItem() so the rest of your workflow will continue to run if that is desired. Be sure to understand first what multithreading and thread synchronization means, though.
Or you could look into Hangfire or similar components to offload the entire job into a different process.
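A minimal sketch of the first option, reusing GetAndSave() from the question (illustrative only):
protected override void Execute(NativeActivityContext context)
{
    // Queue the long-running work on a pool thread so the workflow engine thread is not blocked.
    // Note: the workflow itself is not notified when the work finishes; the persistence/bookmark
    // answer below shows a pattern that does resume the workflow.
    ThreadPool.QueueUserWorkItem(_ => GetAndSave());
}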
EDIT:
Based on your comment, you could look into the Task-based Asynchronous Pattern (TAP): Link 1, Link 2. It would give you a nice model for writing code that continues to work on things that can be done while waiting for the result of your long-running action. I am, however, not certain this covers all your needs. In Windows Workflow Foundation specifically, you might want to look into some form of workflow hibernation/persistence.
This scenario is where using WF's persistence feature shines. It allows you to persist a workflow instance to a database, to allow for some long running operation to complete. Once that completes, a second thread or process can re-hydrate the workflow instance and allow it to resume.
First you specify to the workflow application a workflow instance store. Microsoft provides a SQL workflow instance store implementation you can use, and provides the SQL scripts you can run on your SQL Server.
namespace MySolution.MyWorkflowApp
{
using System.Activities;
using System.Activities.DurableInstancing;
using System.Activities.Statements;
using System.Threading;
internal static class Program
{
internal static void Main(string[] args)
{
var autoResetEvent = new AutoResetEvent(false);
var workflowApp = new WorkflowApplication(new Sequence());
workflowApp.InstanceStore = new SqlWorkflowInstanceStore("server=mySqlServer;initial catalog=myWfDb;...");
workflowApp.Completed += e => autoResetEvent.Set();
workflowApp.Unloaded += e => autoResetEvent.Set();
workflowApp.Aborted += e => autoResetEvent.Set();
workflowApp.Run();
autoResetEvent.WaitOne();
}
}
}
Your activity would spin up a secondary process / thread that will actually perform the save operation. There is a variety of ways you could do this:
On a secondary thread
By invoking a web method asynchronously that actually does the heavy lifting of performing the save operation
Your activity would look like this:
public sealed class WebSaveActivity : NativeActivity
{
public InArgument<MyBigObject> ObjectToSave { get; set; }
protected override bool CanInduceIdle
{
get
{
// This notifies the WF engine that the activity can be unloaded / persisted to an instance store.
return true;
}
}
protected override void Execute(NativeActivityContext context)
{
var currentBigObject = this.ObjectToSave.Get(context);
currentBigObject.WorkflowInstanceId = context.WorkflowInstanceId;
StartSaveOperationAsync(this.ObjectToSave.Get(context)); // This method should offload the actual save process to a thread or even a web method, then return immediately.
// This tells the WF engine that the workflow instance can be suspended and persisted to the instance store.
context.CreateBookmark("LongSaveOperation", AfterSaveCompletesCallback);
}
private void AfterSaveCompletesCallback(NativeActivityContext context, Bookmark bookmark, object value)
{
// Do more things after the save completes.
var saved = (bool) value;
if (saved)
{
// yay!
}
else
{
// boo!!!
}
}
}
The bookmark creation signals to the WF engine that the workflow instance can be unloaded from memory until something wakes up the workflow instance.
In your scenario, you'd like the workflow to resume once the long save operation completes. Let's assume the StartSaveOperationAsync method writes a small message to a queue of some sort, which a second thread or process polls to perform the save operations:
public static void StartSaveOperationAsync(MyBigObject myObjectToSave)
{
var targetQueue = new MessageQueue(@".\private$\pendingSaveOperations");
var message = new Message(myObjectToSave);
targetQueue.Send(message);
}
In my second process, I can then poll the queue for new save requests and re-hydrate the persisted workflow instance so it can resume after the save operation finishes. Assume that the following method is in a different console application:
internal static void PollQueue()
{
var targetQueue = new MessageQueue(@".\private$\pendingSaveOperations");
while (true)
{
// This waits for a message to arrive on the queue.
var message = targetQueue.Receive();
var myObjectToSave = message.Body as MyBigObject;
// Perform the long running save operation
LongRunningSave(myObjectToSave);
// Once the save operation finishes, you can resume the associated workflow.
var autoResetEvent = new AutoResetEvent(false);
var workflowApp = new WorkflowApplication(new Sequence());
workflowApp.InstanceStore = new SqlWorkflowInstanceStore("server=mySqlServer;initial catalog=myWfDb;...");
workflowApp.Completed += e => autoResetEvent.Set();
workflowApp.Unloaded += e => autoResetEvent.Set();
workflowApp.Aborted += e => autoResetEvent.Set();
// I'm assuming the object to save has a field somewhere that refers the workflow instance that's running it.
workflowApp.Load(myObjectToSave.WorkflowInstanceId);
workflowApp.ResumeBookmark("LongSaveOperation", true); // The 'true' parameter is just our way of saying the save completed successfully. You can use any object type you desire here.
autoResetEvent.WaitOne();
}
}
private static void LongRunningSave(object myObjectToSave)
{
throw new NotImplementedException();
}
public class MyBigObject
{
public Guid WorkflowInstanceId { get; set; } = Guid.NewGuid();
}
Now the long running save operation will not impede the workflow engine, and it'll make more efficient use of system resources by not keeping workflow instances in memory for long periods of time.

Thread Safety in Concurrent Queue C#

I have a MessagesManager thread to which different threads may send messages; this MessagesManager thread is then responsible for publishing those messages inside SendMessageToTcpIP() (the entry point of the MessagesManager thread).
class MessagesManager : IMessageNotifier
{
//private
private readonly AutoResetEvent _waitTillMessageQueueEmptyARE = new AutoResetEvent(false);
private ConcurrentQueue<string> MessagesQueue = new ConcurrentQueue<string>();
public void PublishMessage(string Message)
{
MessagesQueue.Enqueue(Message);
_waitTillMessageQueueEmptyARE.Set();
}
public void SendMessageToTcpIP()
{
//keep waiting till a new message comes
while (MessagesQueue.Count() == 0)
{
_waitTillMessageQueueEmptyARE.WaitOne();
}
//Copy the Concurrent Queue into a local queue - keep dequeuing the item once it is inserts into the local Queue
Queue<string> localMessagesQueue = new Queue<string>();
while (!MessagesQueue.IsEmpty)
{
string message;
bool isRemoved = MessagesQueue.TryDequeue(out message);
if (isRemoved)
localMessagesQueue.Enqueue(message);
}
//Use the Local Queue for further processing
while (localMessagesQueue.Count() != 0)
{
TcpIpMessageSenderClient.ConnectAndSendMessage(localMessagesQueue.Dequeue().PadRight(80, ' '));
Thread.Sleep(2000);
}
}
}
The different threads (3-4 of them) send their messages by calling PublishMessage(string Message) (using the same MessagesManager object). Once a message comes in, I push it into the concurrent queue and notify SendMessageToTcpIP() by calling _waitTillMessageQueueEmptyARE.Set(). Inside SendMessageToTcpIP(), I copy the messages from the concurrent queue into a local queue and then publish them one by one.
QUESTIONS: Is it thread safe to do enqueuing and dequeuing in this way? Could there be some strange effects due to it?
While this is probably thread-safe, there are built-in classes in .NET to help with the "many publishers, one consumer" pattern, like BlockingCollection. You can rewrite your class like this:
class MessagesManager : IDisposable {
// note that your ConcurrentQueue is still in play, passed to constructor
private readonly BlockingCollection<string> MessagesQueue = new BlockingCollection<string>(new ConcurrentQueue<string>());
public MessagesManager() {
// start consumer thread here
new Thread(SendLoop) {
IsBackground = true
}.Start();
}
public void PublishMessage(string Message) {
// no need to notify here, will be done for you
MessagesQueue.Add(Message);
}
private void SendLoop() {
// this blocks until new items are available
foreach (var item in MessagesQueue.GetConsumingEnumerable()) {
// ensure that you handle exceptions here, or whole thing will break on exception
TcpIpMessageSenderClient.ConnectAndSendMessage(item.PadRight(80, ' '));
Thread.Sleep(2000); // only if you are sure this is required
}
}
public void Dispose() {
// this will "complete" GetConsumingEnumerable, so your thread will complete
MessagesQueue.CompleteAdding();
MessagesQueue.Dispose();
}
}
.NET already provides ActionBlock<T> that allows posting messages to a buffer and processing them asynchronously. By default, only one message is processed at a time.
Your code could be rewritten as:
//In an initialization function
ActionBlock<string> _hmiAgent=new ActionBlock<string>(async msg=>{
TcpIpMessageSenderClient.ConnectAndSendMessage(msg.PadRight(80, ' '));
await Task.Delay(2000);
});
//In some other thread ...
foreach ( ....)
{
_hmiAgent.Post(someMessage);
}
// When the application closes
_hmiAgent.Complete();
await _hmiAgent.Completion;
ActionBlock offers many benefits - you can specify a limit to the number of items it can accept in a buffer and specify that multiple messages can be processed in parallel. You can also combine multiple blocks in a processing pipeline. In a desktop application, a message can be posted to a pipeline in response to an event, get processed by separate blocks and results posted to a final block that updates the UI.
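For example, a bounded, parallel block could be configured roughly like this (the option values are arbitrary and just for illustration):
var options = new ExecutionDataflowBlockOptions
{
    BoundedCapacity = 100,        // at most 100 messages buffered; Post returns false when the buffer is full
    MaxDegreeOfParallelism = 4    // process up to 4 messages concurrently
};

var sendBlock = new ActionBlock<string>(async msg =>
{
    TcpIpMessageSenderClient.ConnectAndSendMessage(msg.PadRight(80, ' '));
    await Task.Delay(2000);
}, options);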
Padding, for example, could be performed by an intermediary TransformBlock<TIn, TOut>. This transformation is trivial and the cost of using the block is greater than that of the method call, but it's just an illustration:
//In an initialization function
TransformBlock<string, string> _hmiAgent = new TransformBlock<string, string>(
    msg => msg.PadRight(80, ' '));
ActionBlock<string> _tcpBlock = new ActionBlock<string>(async msg => {
    TcpIpMessageSenderClient.ConnectAndSendMessage(msg);
    await Task.Delay(2000);
});
var linkOptions=new DataflowLinkOptions{PropagateCompletion = true};
_hmiAgent.LinkTo(_tcpBlock, linkOptions);
The posting code doesn't change at all
_hmiAgent.Post(someMessage);
When the application terminates, we need to wait for the _tcpBlock to complete:
_hmiAgent.Complete();
await _tcpBlock.Completion;
Visual Studio 2015+ itself uses TPL Dataflow for such scenarios
Bar Arnon provides a better example in TPL Dataflow Is The Best Library You're Not Using, that shows how both synchronous and asynchronous methods can be used in a block.
The code is thread-safe, since both ConcurrentQueue and AutoResetEvent are thread-safe, and your strings are only ever read, never written to.
However, you have to make sure you call SendMessageToTcpIP in some sort of a loop.
Otherwise, you have a dangerous race condition - some messages may get lost:
while (!MessagesQueue.IsEmpty)
{
string message;
bool isRemoved = MessagesQueue.TryDequeue(out message);
if (isRemoved)
localMessagesQueue.Enqueue(message);
}
//<<--- what happens if another thread enqueues a message here?
while (localMessagesQueue.Count() != 0)
{
TcpIpMessageSenderClient.ConnectAndSendMessage(localMessagesQueue.Dequeue().PadRight(80, ' '));
Thread.Sleep(2000);
}
Other than that, AutoResetEvent is an extremely heavy object. It uses a kernel object to synchronize threads, so every call is a system call which may be costly. Consider using a user-mode synchronization construct instead (doesn't .NET provide some sort of condition variable?).
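For reference, the closest thing .NET offers to a condition variable is Monitor.Wait/Monitor.Pulse on a lock object (it still falls back to kernel waits under contention). A rough sketch of that variant, replacing both the ConcurrentQueue and the AutoResetEvent:
private readonly object _gate = new object();
private readonly Queue<string> _messages = new Queue<string>();

public void PublishMessage(string message)
{
    lock (_gate)
    {
        _messages.Enqueue(message);
        Monitor.Pulse(_gate); // wake the consumer if it is waiting
    }
}

public void SendMessageToTcpIP()
{
    while (true)
    {
        string message;
        lock (_gate)
        {
            while (_messages.Count == 0)
                Monitor.Wait(_gate); // releases the lock while waiting, reacquires it when pulsed
            message = _messages.Dequeue();
        }
        // Send outside the lock so publishers are not blocked by the slow send.
        TcpIpMessageSenderClient.ConnectAndSendMessage(message.PadRight(80, ' '));
    }
}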
This is a refactored code snippet showing how I would implement this functionality:
class MessagesManager {
private readonly AutoResetEvent messagesAvailableSignal = new AutoResetEvent(false);
private readonly ConcurrentQueue<string> messageQueue = new ConcurrentQueue<string>();
public void PublishMessage(string Message) {
messageQueue.Enqueue(Message);
messagesAvailableSignal.Set();
}
public void SendMessageToTcpIP() {
while (true) {
messagesAvailableSignal.WaitOne();
while (!messageQueue.IsEmpty) {
string message;
if (messageQueue.TryDequeue(out message)) {
TcpIpMessageSenderClient.ConnectAndSendMessage(message.PadRight(80, ' '));
}
}
}
}
}
Points to note here:
This drains the queue completely: if there is at least one message, it will process all of them
The 2000ms Thread sleep is removed
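Since SendMessageToTcpIP now loops forever, the caller is expected to start it once on a dedicated thread, for example (my addition, not part of the original answer):
var manager = new MessagesManager();

// Dedicated consumer thread; PublishMessage can then be called from any other thread.
new Thread(manager.SendMessageToTcpIP) { IsBackground = true }.Start();

manager.PublishMessage("hello");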

How to stop a thread if thread takes too long

I have a situation where I export data to a file, and I have been asked to provide a Cancel button which, when clicked, stops the export if it is taking too much time.
I started the export to the file in a thread, and I try to abort the thread on the button click, but it does not work.
I searched on Google and found that Abort() is not recommended, but what else should I use to achieve this?
My current code is:
private void ExportButtonClick(object param)
{
IList<Ur1R2_Time_Points> data = ct.T_UR.ToList();
DataTable dtData = ExportHelper.ToDataTable(data);
thread = new Thread(new ThreadStart(()=>ExportHelper.DataTableToCsv(dtData, "ExportFile.csv")));
thread.SetApartmentState(ApartmentState.STA);
thread.IsBackground = true;
thread.Name = "PDF";
thread.Start();
}
private void StopButtonClick(object param)
{
if (thread.Name == "PDF")
{
thread.Interrupt();
thread.Abort();
}
}
Aborting a thread is a bad idea, especially when dealing with files. You won't have a chance to clean up half-written files or clean-up inconsistent state.
It won't harm the .NET runtime, but it can hurt your own application, e.g. if the worker method leaves global state, files or database records in an inconsistent state.
It's always preferable to use cooperative cancellation - the thread periodically checks a coordination construct like a ManualResetEvent or CancellationToken. You can't use a simple variable like a Boolean flag, as this can lead to race conditions, eg if two or more threads try to set it at the same time.
You can read about cancellation in .NET in the Cancellation in Managed Threads section of MSDN.
The CancellationToken/CancellationTokenSource classes were added in .NET 4 to make cancellation easier than passing around events.
In your case, you should modify your DataTableToCsv to accept a CancellationToken. That token is generated by a CancellationTokenSource class.
When you call CancellationTokenSource.Cancel the token's IsCancellationRequested property becomes true. Your DataTableToCsv method should check this flag periodically. If it's set, it should exit any loops, delete any inconsistent files etc.
Timeouts are directly supported with CancelAfter. Essentially, CancelAfter starts a timer that will fire Cancel when it expires.
Your code could look like this:
CancellationTokenSource _exportCts = null;
private void ExportButtonClick(object param)
{
IList<Ur1R2_Time_Points> data = ct.T_UR.ToList();
DataTable dtData = ExportHelper.ToDataTable(data);
_exportCts=new CancellationTokenSource();
var token=_exportCts.Token;
thread = new Thread(new ThreadStart(()=>
ExportHelper.DataTableToCsv(dtData, "ExportFile.csv",token)));
thread.SetApartmentState(ApartmentState.STA);
thread.IsBackground = true;
thread.Name = "PDF";
_exportCts.CancelAfter(10000);
thread.Start();
}
private void StopButtonClick(object param)
{
if (_exportCts!=null)
{
_exportCts.Cancel();
}
}
DataTableToCsv should contain code similar to this:
foreach(var row in myTable)
{
if (token.IsCancellationRequested)
{
break;
}
//else continue with processing
var line=String.Join(",", row.ItemArray);
writer.WriteLine(line);
}
You can clean up your code quite a bit by using tasks instead of raw threads:
private async void ExportButtonClick(object param)
{
IList<Ur1R2_Time_Points> data = ct.T_UR.ToList();
DataTable dtData = ExportHelper.ToDataTable(data);
_exportCts=new CancellationTokenSource();
var token=_exportCts.Token;
_exportCts.CancelAfter(10000);
await Task.Run(()=> ExportHelper.DataTableToCsv(dtData, "ExportFile.csv",token));
MessageBox.Show("Finished");
}
You could also speed it up by using asynchronous operations, e.g. to read data from the database or write to text files without blocking or using threads. Windows IO (both file and network) is asynchronous at the driver level. Methods like StreamWriter.WriteLineAsync don't use threads to write to a file.
Your Export button handler could become :
private async void ExportButtonClick(object param)
{
IList<Ur1R2_Time_Points> data = ct.T_UR.ToList();
DataTable dtData = ExportHelper.ToDataTable(data);
_exportCts=new CancellationTokenSource();
var token=_exportCts.Token;
_exportCts.CancelAfter(10000);
await ExportHelper.DataTableToCsv(dtData, "ExportFile.csv", token);
MessageBox.Show("Finished");
}
and DataTableToCsv :
public async Task DataTableToCsv(DataTable table, string file,CancellationToken token)
{
...
foreach(var row in myTable)
{
if (token.IsCancellationRequested)
{
break;
}
//else continue with processing
var line=String.Join(",", row.ItemArray);
await writer.WriteLineAsync(line);
}
}
You can use a boolean flag. Use a volatile boolean for that.
In the helper do something like:
this.aborted = false;
while(!finished && !aborted) {
//process one row
}
Whenever you want to cancel the operation, you call a method to set aborted to true:
public void Abort() {
this.aborted = true;
}
Have a read here: https://msdn.microsoft.com/en-us/library/system.threading.threadabortexception(v=vs.110).aspx
When a call is made to the Abort method to destroy a thread, the common language runtime throws a ThreadAbortException. ThreadAbortException is a special exception that can be caught, but it will automatically be raised again at the end of the catch block. When this exception is raised, the runtime executes all the finally blocks before ending the thread. Because the thread can do an unbounded computation in the finally blocks or call Thread.ResetAbort to cancel the abort, there is no guarantee that the thread will ever end. If you want to wait until the aborted thread has ended, you can call the Thread.Join method. Join is a blocking call that does not return until the thread actually stops executing.
Since Thread.Abort() is executed by another thread, it can happen at any time, and when it does, a ThreadAbortException is thrown on the target thread.
Inside ExportHelper.DataTableToCsv:
catch(ThreadAbortException e) {
Thread.ResetAbort();
}
On StopButtonClick
if (thread.Name == "PDF")
{
thread.Interrupt();
thread.Join();
}
To stop a thread, one option is Thread.Abort. However, this method throws a ThreadAbortException on the target thread when it is called from another thread, which is not recommended.
The second option to stop a thread is by using a shared variable that both your target thread and your calling thread can access.
See the example:
public static class Program
{
public static void ThreadMethod(object o)
{
for (int i = 0; i < (int)o; i++)
{
Console.WriteLine("ThreadProc: {0}", i);
Thread.Sleep(0);
}
}
public static void Main()
{
bool stopped = false;
Thread t = new Thread(new ThreadStart(() =>
{
while (!stopped)
{
Console.WriteLine("Running...");
Thread.Sleep(1000);
}
}));
t.Start();
Console.WriteLine("Press any key to exit");
Console.ReadKey();
stopped = true;
t.Join();
}
}
//Source: Programming in C# (book)

Tasks appear to be blocking one another

I have a method called WaitForAction, which takes an Action delegate and executes it in a new Task. The method blocks until the task completes or until a timeout expires. It uses ManualResetEvent to wait for timeout/completion.
The following code shows an attempt to test the method in a multi-threaded environment.
class Program
{
public static void Main()
{
List<Foo> list = new List<Foo>();
for (int i = 0; i < 10; i++)
{
Foo foo = new Foo();
list.Add(foo);
foo.Bar();
}
SpinWait.SpinUntil(() => list.Count(f => f.finished || f.failed) == 10, 2000);
Debug.WriteLine(list.Count(f => f.finished));
}
}
public class Foo
{
public volatile bool finished = false;
public volatile bool failed = false;
public void Bar()
{
Task.Factory.StartNew(() =>
{
try
{
WaitForAction(1000, () => { });
finished = true;
}
catch
{
failed = true;
}
});
}
private void WaitForAction(int iMsToWait, Action action)
{
using (ManualResetEvent waitHandle = new ManualResetEvent(false))
{
Task.Factory.StartNew(() =>
{
action();
waitHandle.SafeSet();
});
if (waitHandle.SafeWaitOne(iMsToWait) == false)
{
throw new Exception("Timeout");
}
}
}
}
As the Action is doing nothing I would expect the 10 tasks started by calling Foo.Bar 10 times to complete well within the timeout. Sometimes this happens, but usually the program takes 2 seconds to execute and reports that only 2 instances of Foo 'finished' without error. In other words, 8 calls to WaitForAction have timed out.
I'm assuming that WaitForAction is thread safe, as each call on a Task-provided thread has its own stack. I have more or less proved this by logging the thread ID and wait handle ID for each call.
I realise that the code presented is a daft example, but I am interested in the principle. Is it possible for the task scheduler to be scheduling a task running the action delegate to the same threadpool thread that is already waiting for another action to complete? Or is there something else going on that I've missed?
Task.Factory utilizes the ThreadPool by default. With every call to WaitHandle.WaitOne, you block a worker thread. The .Net 4/4.5 thread pool starts with a small number of worker threads depending on your hardware platform (e.g., 4 on my machine) and it re-evaluates the pool size periodically (I believe it is every 1 second), creating new workers if necessary.
Since your program blocks all worker threads, and the thread pool doesn't grow fast enough, your wait handles time out, as you saw.
To confirm this, you can either 1) increase the timeouts or 2) increase the beginning thread pool size by adding the following line to the beginning of your program:
ThreadPool.SetMinThreads(32, 4);
then you should see the timeouts don't occur.
I believe your question was more academic than anything else, but you can read about a better implementation of a task timeout mechanism here, e.g.
var task = Task.Run(someAction);
if (task == await Task.WhenAny(task, Task.Delay(millisecondsTimeout)))
await task;
else
throw new TimeoutException();
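Applied to the code in the question, WaitForAction could be rewritten in that style so that no worker thread is blocked while waiting (a sketch; I kept the original signature apart from making it async):
private async Task WaitForActionAsync(int iMsToWait, Action action)
{
    var task = Task.Run(action);
    if (task != await Task.WhenAny(task, Task.Delay(iMsToWait)))
    {
        throw new Exception("Timeout");
    }
    await task; // propagates any exception thrown by the action
}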

How to track if an async/awaitable task is running

I'm trying to transition from the Event-based Asynchronous Pattern, where I tracked running methods using unique IDs and the AsyncOperationManager. As this has now been dropped from Windows 8 Apps, I'm trying to get a similar effect with async/await but can't quite figure out how.
What I'm trying to achieve is something like
private async Task updateSomething()
{
if(***the method is already running***)
{
runagain = true;
}
else
{
await someMethod();
if (runagain)
{
run the method again
}
}
}
The part I'm struggling with is finding out if the method is running. I've tried creating a Task and looking at the status of both that and the .Status of the async method, but they don't appear to be the correct place to look.
Thanks
UPDATE: This is the current code I use in .net 4 to achieve the same result. _updateMetaDataAsync is a class based on the Event-Based Asynchronous Pattern.
private void updateMetaData()
{
if (_updateMetaDataAsync.IsTaskRunning(_updateMetaDataGuid_CheckAllFiles))
{
_updateMetaDataGuid_CheckAllFiles_Again = true;
}
else
{
_updateMetaDataGuid_CheckAllFiles_Again = false;
_updateMetaDataAsync.UpdateMetaDataAsync(_updateMetaDataGuid_CheckAllFiles);
}
}
private void updateMetaDataCompleted(object sender, UpdateMetaDataCompletedEventArgs e)
{
if (_updateMetaDataGuid_CheckAllFiles_Again)
{
updateMetaData();
}
}
async/await itself is intended to be used to create sequential operations executed asynchronously from the UI thread. You can get it to do parallel operations, but generally the operations "join" back to the UI thread with some sort of result. (there's also the possibility of doing "fire-and-forget" types of asynchronous operations with await but it's not recommended). i.e. there's nothing inherent to async/await to support progress reporting.
You can get progress out of code using async/await; but you need to use new progress interfaces like IProgress<T>. For more info on progress reporting with async/await, see http://blogs.msdn.com/b/dotnet/archive/2012/06/06/async-in-4-5-enabling-progress-and-cancellation-in-async-apis.aspx. Migrating to this should just be a matter of calling an IProgress delegate instead of a Progress event.
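A minimal example of that migration (illustrative only; the delay and percentage values are made up):
public async Task UpdateSomethingAsync(IProgress<int> progress)
{
    for (int percent = 0; percent <= 100; percent += 10)
    {
        await Task.Delay(100);          // stand-in for a unit of real work
        progress?.Report(percent);      // replaces raising a Progress event
    }
}

// Caller, e.g. on the UI thread. Progress<T> captures the current SynchronizationContext,
// so the callback runs back on the UI thread:
// await UpdateSomethingAsync(new Progress<int>(p => progressBar.Value = p));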
If you're using a Task you've created, you can check the Task's Status property (or just see Task.IsCompleted if completion is the only state you are interested in).
That being said, await will not "return" until the operation either completes, raises an exception, or cancels. You can basically safely assume that, if you're still waiting on the "await", your task hasn't completed.
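Applied to the pseudo-code in the question, tracking the stored Task could look roughly like this (a sketch of the idea; SomeMethodAsync is a hypothetical stand-in for the awaited someMethod(), and it assumes all calls come from the same UI thread, otherwise locking is needed):
private Task _updateTask;
private bool _runAgain;

private async Task UpdateSomethingAsync()
{
    if (_updateTask != null && !_updateTask.IsCompleted)
    {
        // The method is already running; just remember to run it again when it finishes.
        _runAgain = true;
        return;
    }

    do
    {
        _runAgain = false;
        _updateTask = SomeMethodAsync(); // hypothetical stand-in for someMethod() in the question
        await _updateTask;
    } while (_runAgain);
}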
SemaphoreSlim queueToAccessQueue = new SemaphoreSlim(1);
object queueLock = new object();
long queuedRequests = 0;
Task _loadingTask;
public void RetrieveItems() {
lock (queueLock) {
queuedRequests++;
if (queuedRequests == 1) { // 1 is the minimum size of the queue before another instance is queued
_loadingTask = _loadingTask?.ContinueWith(async _ => {
RunTheMethodAgain();
await queueToAccessQueue.WaitAsync();
queuedRequests = 0; // indicates that the queue has been cleared;
queueToAccessQueue.Release();
}) ?? Task.Run(async () => {
RunTheMethodAgain();
await queueToAccessQueue.WaitAsync();
queuedRequests = 0; // indicates that the queue has been cleared;
queueToAccessQueue.Release();
});
}
}
}
public void RunTheMethodAgain() {
// run the method again
}
The added bonus is that you can see how many items are sitting in the queue!
