This is my function that takes an IEnumerable<string> source and searches all the files inside this path:
public void search()
{
Task.Factory.StartNew(() =>
{
try
{
Parallel.ForEach(_source,
new ParallelOptions
{
MaxDegreeOfParallelism = 5 //limit number of parallel threads here
},
file =>
{
FileChecker fileChecker = new FileChecker();
string result = fileChecker.check(file);
if (result != null)
OnFileAddEvent(result);
});
}
catch (Exception)
{ }
}).ContinueWith
(t =>
{
OnFinishSearchEvent();
}
, TaskScheduler.FromCurrentSynchronizationContext() //to ContinueWith (update UI) from UI thread
);
}
public void search2()
{
Task.Factory.StartNew(() =>
{
var filtered = _source.AsParallel()
.WithDegreeOfParallelism(5)
.Where(file =>
{
try
{
FileChecker fileChecker = new FileChecker();
string result = fileChecker.check(file);
if (result != null)
OnFileAddEvent(result);
return true;
}
catch (Exception)
{
return false;
}
});
return filtered.ToList();
}).ContinueWith
(t =>
{
OnFinishSearchEvent();
}
, TaskScheduler.FromCurrentSynchronizationContext() //to ContinueWith (update UI) from UI thread
);
}
It looks like you have a fundamental misunderstanding of multithreading and exception handling.
Exception handling in .NET typically starts with a managed exception being thrown. The runtime then walks up the call stack until it finds an appropriate try/catch block.
In this case, Task.Factory.StartNew(Action) is a wrapper that pushes the work described by the Action delegate onto the thread pool. The top of the call stack is therefore the Action delegate described in...
file =>
{
// check my file before adding to my Listbox
}
So when an exception bubbles up, there is nothing "up" the call stack to catch it.
The solution, then, is either to add a root-level try/catch in the delegate you pass into the Task.Factory.StartNew(Action) method, as others have described,
or, more generally, to add an OnError-style continuation on the Task returned by Task.Factory.StartNew(Action). However, I would also add that this entire method worries me as highly flawed: none of the worker threads should be adding to the ListBox. The ListBox should ONLY ever be accessed by the STA thread that constructed it.
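A minimal sketch of that OnError-style continuation (DoSearchWork and OnSearchFailed are placeholder names of mine, not methods from the question):
var task = Task.Factory.StartNew(() => DoSearchWork());

// Runs only if the body above threw; the exception surfaces as t.Exception,
// an AggregateException wrapping the original one.
task.ContinueWith(
    t => OnSearchFailed(t.Exception.InnerException),
    CancellationToken.None,
    TaskContinuationOptions.OnlyOnFaulted,
    TaskScheduler.FromCurrentSynchronizationContext()); // marshal the handler back to the UI thread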
Ultimately I would change the entire method to the following...
public void search()
{
Task.Factory.StartNew(() =>
{
var filtered = source.AsParallel()
.WithDegreeOfParallelism(5)
.Where(file =>
{
try
{
//Some filter function...
}
catch(Exception)
{
return false;
}
});
return filtered.ToList();
}).ContinueWith
(t =>
{
foreach(var result in t.Result)
{
MyListBox.Items.Add(result);
}
OnFinishSearchEvent();
}
, TaskScheduler.FromCurrentSynchronizationContext() //to ContinueWith (update UI) from UI thread
);
}
You must catch the exception inside the Task:
public void search()
{
    try
    {
        Task.Factory.StartNew(() =>
        {
            try
            {
                Parallel.ForEach(source,
                    new ParallelOptions
                    {
                        MaxDegreeOfParallelism = 5 //limit number of parallel threads here
                    },
                    file =>
                    {
                        // check my file before adding to my Listbox
                    });
            }
            catch (Exception ex)
            {
                // this runs on the worker thread; log/handle the loop's exception here
            }
        }).ContinueWith
        (t =>
        {
            OnFinishSearchEvent();
        }
        , TaskScheduler.FromCurrentSynchronizationContext() //to ContinueWith (update UI) from UI thread
        );
    }
    catch (Exception ex)
    {
        // only catches exceptions thrown while starting the task, not ones thrown inside it
    }
}
When performing a long-running operation, I noticed that I could kick-start a long-running sub-operation right off the start line and do other stuff while it fetches results from caches/databases.
The given operation is:
public async Task<Fichaclis> Finalize()
{
using (TransactionScope transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
transactionTimer.Start();
var agendasTransitionTask = ExecuteAgendas();
... DO ALOT OF SYNC OPERATIONS ...
await agendasTransitionTask;
transaction.Complete();
}
}
private Task ExecuteAgendas()
{
return ags.GetAgendas().ContinueWith((prev) =>
{
var currentAgendas = prev.Result;
foreach (var item in currentAgendas)
{
... DO QUICK SYNC STUFF...
}
return ags.BulkEditAgenda(currentAgendas);
});
}
GetAgendas is a method used all over with the following signature:
public async Task<List<Agendas>> GetAgendas()
Because it's widely used, I believe the problem is not there. As for BulkEditAgenda:
public async Task BulkEditAgenda(IEnumerable<Agendas> agendas)
{
if (agendas == null || agendas.Count() == 0)
{
return;
}
var t1 = AddOrUpdateCache(agendas);
var t2 = Task.Factory.StartNew(() =>
{
try
{
foreach (var item in agendas)
{
EditNoReconnection(item);
}
Save();
}
catch (Exception ex)
{
//log
throw;
}
});
await Task.WhenAll(t1, t2);
}
EditNoReconnection and Save are both sync methods.
private Task AddOrUpdateCache(IEnumerable<Agendas> agendas)
{
var tasks = new List<Task>();
foreach (var item in agendas)
{
tasks.Add(TryGetCache(item)
.ContinueWith((taskResult) =>
{
...DO QUICK SYNC STUFF...
})
);
}
return Task.WhenAll(tasks);
}
TryGetCache is also a widely used method, so I think it's safe... its signature is private Task<AgendasCacheLookupResult> TryGetCache(
So, summing up the issue at hand: for a small set of items, transaction.Complete() (in the sync section of the Finalize method) executes before Save() (inside BulkEditAgenda). For a regular or large number of items, it works as expected.
This means that I'm not chaining the Tasks correctly, or my understanding of how async/await + Tasks/ContinueWith works is fundamentally incorrect. Where am I wrong?
The problem is most likely here:
private Task ExecuteAgendas()
{
return ags.GetAgendas().ContinueWith((prev) =>
{
var currentAgendas = prev.Result;
foreach (var item in currentAgendas)
{
... DO QUICK SYNC STUFF...
}
return ags.BulkEditAgenda(currentAgendas);
});
}
First, what you return from this method is the continuation task (the result of ContinueWith). But the body of that continuation ends when you do
return ags.BulkEditAgenda(currentAgendas);
so the continuation completes potentially before the BulkEditAgenda task has completed (you never wait for BulkEditAgenda to finish). That means this line
await agendasTransitionTask;
returns while BulkEditAgenda is still in progress. To clarify even more: what ExecuteAgendas actually returns here is a Task<Task>; the outer task is the continuation, and the inner Task, which nobody awaits, is the one representing your running BulkEditAgenda.
To fix it, just use async/await like you do everywhere else:
private async Task ExecuteAgendas() {
    var currentAgendas = await ags.GetAgendas();
foreach (var item in currentAgendas) {
// do stuff
}
await ags.BulkEditAgenda(currentAgendas);
}
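If you prefer to keep the ContinueWith form instead, the nested task can be flattened with Unwrap(), so that awaiting ExecuteAgendas also waits for BulkEditAgenda. Just a sketch, not tested against your types:
private Task ExecuteAgendas()
{
    return ags.GetAgendas().ContinueWith(prev =>
    {
        var currentAgendas = prev.Result;
        foreach (var item in currentAgendas)
        {
            // ... DO QUICK SYNC STUFF ...
        }
        return ags.BulkEditAgenda(currentAgendas);
    }).Unwrap(); // turns the Task<Task> into a Task that completes when BulkEditAgenda does
}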
I have implemented a service named ExamClient which has two operations: Ping, which returns a basic string meaning the service is available, and FindStudy, which searches the DB and may take a long time to complete.
On the other side I have several endpoints of ExamClient, and I want to run FindStudy per endpoint as a task, so in a dispatcher I have something like this:
public FindStudies_DTO_OUT FindStudies(FindStudies_DTO_IN findStudies_DTO_IN)
{
List<Study_C> ret = new List<Study_C>();
List<Task> tasks = new List<Task>();
foreach (var sp in Cluster)
{
string serviceAddress = sp.GetLibraryAddress(ServiceLibrary_C.PCM) + "/Exam.svc";
var task = Task.Run(() =>
{
ExamClient examClient = new ExamClient(serviceAddress.GetBinding(), new EndpointAddress(serviceAddress), Token);
var ping = Task.Run(() =>
{
examClient.Ping();
});
if (!ping.Wait(examClient.Endpoint.Binding.OpenTimeout))
{
Logging.Log(LoggingMode.Warning, "Timeout on FindStudies for:{0}, address:{1}", sp.Name, serviceAddress);
return new List<Study_C>(); // if return null then need to manage it on ret.AddRange(t.Result);
}
return (examClient.FindStudies(findStudies_DTO_IN).Studies.Select(x =>
{
x.StudyInstanceUID = string.Format("{0}|{1}", sp.Name, x.StudyInstanceUID);
x.InstitutionName = sp.Name;
return x;
}));
});
task.ContinueWith(t =>
{
lock (ret)
{
ret.AddRange(t.Result);
}
}, TaskContinuationOptions.OnlyOnRanToCompletion);
task.ContinueWith(t =>
{
Logging.Log(LoggingMode.Error, "FindStudies failed for :{0}, address:{1}, EXP:{2}", sp.Name, serviceAddress, t.Exception.ToString());
}, TaskContinuationOptions.OnlyOnFaulted);
tasks.Add(task);
}
try
{
Task.WaitAll(tasks.ToArray());
}
catch (AggregateException aggEx)
{
foreach (Exception exp in aggEx.InnerExceptions)
{
Logging.Log(LoggingMode.Error, "Error while FindStudies EXP:{0}", exp.ToString());
}
}
return new FindStudies_DTO_OUT(ret.Sort(findStudies_DTO_IN.SortColumnName, findStudies_DTO_IN.SortOrderBy));
}
First I have to run Ping per endpoint to know the connection is established,
and only after that FindStudy.
If there are three endpoints in Cluster, six tasks run in parallel: 3 for Ping and 3 for FindStudy.
I think something is wrong with how my code handles exceptions...
So what is the best way to implement this scenario?
Thanks in advance.
Let me throw in my answer to simplify things and remove unnecessary code blocks, with a bit of explanation along the way.
public FindStudies_DTO_OUT FindStudies(FindStudies_DTO_IN findStudies_DTO_IN)
{
    // Thread-safe collection
    var ret = new ConcurrentBag<Study_C>();
    // Loop the cluster list, process each item in parallel and wait for all of them to finish.
    // This handles the parallelism better than Task.Run.
    Parallel.ForEach(Cluster, sp =>
    {
        var serviceAddress = sp.GetLibraryAddress(ServiceLibrary_C.PCM) + "/Exam.svc";
        ExamClient examClient = new ExamClient(serviceAddress.GetBinding(), new EndpointAddress(serviceAddress), Token);
        // declare the result variable outside the try/catch so it is visible below
        // (FindStudies_DTO_OUT is assumed to be the service's return type)
        FindStudies_DTO_OUT result = null;
        try
        {
            examClient.Ping();
            result = examClient.FindStudies(findStudies_DTO_IN);
        }
        catch (TimeoutException timeoutEx)
        {
            // abort examClient to dispose the channel properly
            Logging.Log(LoggingMode.Warning, "Timeout on FindStudies for:{0}, address:{1}", sp.Name, serviceAddress);
        }
        catch (FaultException fault)
        {
            Logging.Log(LoggingMode.Error, "FindStudies failed for :{0}, address:{1}, EXP:{2}", sp.Name, serviceAddress, fault.ToString());
        }
        catch (Exception ex)
        {
            // anything else
        }
        // add exception types as needed for proper logging
        // use an inverted if to reduce nested conditions
        if (result == null)
            return;
        var study_c = result.Studies.Select(x =>
        {
            x.StudyInstanceUID = string.Format("{0}|{1}", sp.Name, x.StudyInstanceUID);
            x.InstitutionName = sp.Name;
            return x;
        });
        // Thread-safe collection: ConcurrentBag has no AddRange, so add items one by one
        foreach (var study in study_c)
            ret.Add(study);
    });
    // ConcurrentBag has no Sort, so convert to a list first (assumes the same custom Sort extension used in the question)
    return new FindStudies_DTO_OUT(ret.ToList().Sort(findStudies_DTO_IN.SortColumnName, findStudies_DTO_IN.SortOrderBy));
}
Note: I haven't tested the code, but the gist is there. I also feel like a Task.Run inside another Task.Run is a bad idea; I can't remember which article I read that in (probably from Stephen Cleary, not sure).
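By the way, the nested Task.Run around Ping in the original code was only there to enforce a timeout. A simpler route, sketched below on the assumption that GetBinding() can be replaced by (or configured like) a BasicHttpBinding and that these timeout values suit you, is to let WCF enforce the timeout itself, so Ping() throws a TimeoutException that the catch block above already handles:
// A binding whose timeouts make a hung endpoint fail fast, removing the need
// to race Ping() against a manual timeout task.
var binding = new BasicHttpBinding
{
    OpenTimeout = TimeSpan.FromSeconds(10), // time allowed to open the channel
    SendTimeout = TimeSpan.FromSeconds(30)  // per-call timeout covering Ping and FindStudies
};
var examClient = new ExamClient(binding, new EndpointAddress(serviceAddress), Token);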
I'm working with a DB (via SQLite.NET PCL, not the async version). At the moment I have a ListView with some data (taken from the DB), and I also have a SearchBar/Entry (it doesn't matter which) where the user can input some value; then, via LINQ, I make a query and update the ItemsSource of my list.
So the problem is performance, because my DB has a million records and a simple LINQ query runs very slowly. In other words, when the user types too fast, the application lags heavily and sometimes crashes.
To resolve this problem, a few things come to mind (theoretical solutions):
1) Put the method that queries the DB on a Task (to unblock my main UI thread).
2) Initialize a timer, start it, and:
if 1 second has passed, then run my query method on a Task (similar to a background thread);
if 1 second hasn't passed, then exit the anonymous method.
Something like that (or similar); any suggestions are welcome. Thanks!
UPD:
So, to be honest, I tried a lot and didn't get a good result.
BTW, my current code (snippets):
1) My search method
public void QueryToDB(string filter)
{
this.BeginRefresh ();
if (string.IsNullOrWhiteSpace (filter))
{
this.ItemsSource = SourceData.Select(x => x.name); // Source data is my default List of items
}
else
{
var t = App.DB_Instance.FilterWords<Words>(filter); // FilterWords is a method where I make direct requests to the database
this.ItemsSource = t.Select(x => x.name);
}
this.EndRefresh ();
}
2) SearchBar.TextChanged (anonymous method)
searchBar.TextChanged += async (sender, e) =>
{
    ViewModel.isBusy = true; // I also have an indicator to show progress while the query is running
    await Task.Run(() => // my background work, runs fine
    {
        listview.QueryToDB(searchBar.Text);
    });
    ViewModel.isBusy = false; // after the method finishes, the indicator turns off
};
The main problem is how to implement the part where the query to update the ItemsSource of my list runs only after 1 second has passed (every time the user types something into the SearchBar, the timer must reset to zero).
Any help will be appreciated, thanks!
P.S. Sorry for my English!
One way to do it is to combine async Task.Run and CancellationTokenSource:
CancellationTokenSource cancellationTokenSource;
searchView.TextChanged += async (sender, e) =>
{
if (cancellationTokenSource != null) cancellationTokenSource.Cancel();
cancellationTokenSource = new CancellationTokenSource();
var cancellationToken = cancellationTokenSource.Token;
var searchBar = (sender as SearchBar);
if (searchBar != null)
{
string searchText = searchBar.Text;
try
{
await Task.Delay(650, cancellationToken);
if (cancellationToken.IsCancellationRequested) return;
var searchResults = await Task.Run(() =>
{
return ViewModel.Search(searchText);
});
if (cancellationToken.IsCancellationRequested) return;
ViewModel.YouItems.Repopulate(searchResults);
}
catch (OperationCanceledException)
{
// Expected
}
catch (Exception ex)
{
Logger.Error(ex);
}
}
};
You want to wait before actually performing your search. Killing a search task midway can cause undefined behavior.
You want to save the current search filter and compare it again 1 second later. If it hasn't changed, do the search. Otherwise, abort:
searchBar.TextChanged += async (sender, e) =>
{
var filter = searchBar.Text;
await Task.Run(() =>
{
Thread.Sleep(1000);
if (filter == searchBar.Text)
listview.QueryToDB(searchBar.Text);
});
};
To keep the view model updated, move your isBusy assignments inside QueryToDB, because that is when your view model is truly busy:
public void QueryToDB(string filter)
{
this.BeginRefresh ();
ViewModel.isBusy = true;
// do your search
ViewModel.isBusy = false;
this.EndRefresh ();
}
I'm currently working on a small project that uses TPL Dataflow, and I'm a little bit confused about UI notifications. I want to separate my "Pipeline" from the UI in another class called PipelineService, but I'm unable to notify the UI about cancelled operations or data that should show up in the UI. How can this be handled correctly?
Code:
private void btnStartPipeline_Click(object sender, EventArgs e)
{
btnStartPipeline.Enabled = false;
btnStopPipeline.Enabled = true;
cancellationToken = new CancellationTokenSource();
if (head == null)
{
head = pipeline.SearchPipeline();
}
head.Post(AppDirectoryNames.STORE_PATH);
}
private void btnStopPipeline_Click(object sender, EventArgs e)
{
cancellationToken.Cancel();
}
These methods belong to Form1.cs; head is of type ITargetBlock<string>.
public ITargetBlock<string> SearchPipeline()
{
var search = new TransformBlock<string, IEnumerable<FileInfo>>(path =>
{
try
{
return Search(path);
}
catch (OperationCanceledException)
{
return Enumerable.Empty<FileInfo>();
}
});
var move = new ActionBlock<IEnumerable<FileInfo>>(files =>
{
try
{
Move(files);
}
catch (OperationCanceledException ex)
{
throw ex;
}
});
var operationCancelled = new ActionBlock<object>(delegate
{
form.Invoke(form._update);
},
new ExecutionDataflowBlockOptions
{
TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext()
});
search.LinkTo(move);
search.LinkTo(operationCancelled);
return search;
}
Invoke doesn't seem to have any effect with the delegate method. What am I doing wrong here?
At first, I didn't understand why you thought your code should work. The way you set up your dataflow network, each IEnumerable<FileInfo> generated by the search block is first offered to the move block. Only if the move block declined it (which never happens here) would it be sent to the operationCancelled block. That doesn't seem to be what you want at all.
After looking at the walkthrough you seem to be basing your code on, it handles cancellation similarly to you, but with one significant difference: it uses LinkTo() with a predicate, which rejects the message that signifies cancellation. If you want to do the same, you also need to use LinkTo() with a predicate. And since I don't think an empty sequence is a good choice to signify cancellation, I think you should switch to null too.
Also, you don't need to use form.Invoke() if you're already using TaskScheduler.FromCurrentSynchronizationContext(); they do basically the same thing.
public ITargetBlock<string> SearchPipeline()
{
var search = new TransformBlock<string, IEnumerable<FileInfo>>(path =>
{
try
{
return Search(path);
}
catch (OperationCanceledException)
{
return null;
}
});
var move = new ActionBlock<IEnumerable<FileInfo>>(files =>
{
try
{
Move(files);
}
catch (OperationCanceledException)
{
// swallow the exception; we don't want to fault the block
}
});
var operationCancelled = new ActionBlock<object>(_ => form._update(),
new ExecutionDataflowBlockOptions
{
TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext()
});
search.LinkTo(move, files => files != null);
search.LinkTo(operationCancelled);
return search;
}
I have the following code that throws an exception:
ThreadPool.QueueUserWorkItem(state => action());
When the action throws an exception, my program crashes. What is the best practice for handling this situation?
Related: Exceptions on .Net ThreadPool Threads
You can add try/catch like this:
ThreadPool.QueueUserWorkItem(state =>
{
try
{
action();
}
catch (Exception ex)
{
OnException(ex);
}
});
If you have access to action's source code, insert a try/catch block in that method; otherwise, create a new tryAction method which wraps the call to action in a try/catch block.
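A minimal sketch of such a wrapper (WrapWithTryCatch is a name I made up; OnException stands for whatever error handler you use):
// Wraps any Action so exceptions are routed to a handler instead of
// escaping the ThreadPool thread and killing the process.
static Action WrapWithTryCatch(Action action, Action<Exception> onException)
{
    return () =>
    {
        try { action(); }
        catch (Exception ex) { onException(ex); }
    };
}

// Usage:
ThreadPool.QueueUserWorkItem(state => WrapWithTryCatch(action, OnException)());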
If you're using .Net 4.0, it might be worth investigating the Task class because it can take care of this for you.
The equivalent of your original code, but using Tasks, looks like
Task.Factory.StartNew(state => action(), state);
To deal with exceptions you can add a continuation to the Task returned by StartNew. It might look like this:
var task = Task.Factory.StartNew(state => action(), state);
task.ContinueWith(t =>
{
var exception = t.Exception.InnerException;
// handle the exception here
// (note that we access InnerException, because tasks always wrap
// exceptions in an AggregateException)
},
TaskContinuationOptions.OnlyOnFaulted);
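One more thing worth knowing (a side note, not required for the approach above): in .NET 4.0, a faulted Task whose exception is never observed will still tear the process down when the task gets finalized, so a last-resort handler can be registered globally. A sketch, where LogException is a placeholder for your own logging:
TaskScheduler.UnobservedTaskException += (sender, e) =>
{
    LogException(e.Exception); // e.Exception is an AggregateException
    e.SetObserved();           // prevents the .NET 4.0 escalation policy from crashing the process
};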
On the other thread (in the method you are "queueing" up), add a try/catch clause. Then, in the catch, place the caught exception into a shared Exception variable (visible to the main thread).
Then in your main thread, when all queued items have finished (use a wait-handle array for this), check whether some thread populated that shared variable with an exception. If it did, rethrow it or handle it as appropriate.
Here's some sample code from a recent project where I used this.
HasException is a shared boolean:
private void CompleteAndQueuePayLoads(
IEnumerable<UsagePayload> payLoads, string processId)
{
List<WaitHandle> waitHndls = new List<WaitHandle>();
int defaultMaxwrkrThreads, defaultmaxIOThreads;
ThreadPool.GetMaxThreads(out defaultMaxwrkrThreads,
out defaultmaxIOThreads);
ThreadPool.SetMaxThreads(
MDMImportConfig.MAXCONCURRENTIEEUSAGEREQUESTS,
defaultmaxIOThreads);
int qryNo = 0;
foreach (UsagePayload uPL in payLoads)
{
ManualResetEvent txEvnt = new ManualResetEvent(false);
UsagePayload uPL1 = uPL;
int qryNo1 = ++qryNo;
ThreadPool.QueueUserWorkItem(
delegate
{
try
{
Thread.CurrentThread.Name = processId +
"." + qryNo1;
if (!HasException && !uPL1.IsComplete)
IEEDAL.GetPayloadReadings(uPL1,
processId, qryNo1);
if (!HasException)
UsageCache.PersistPayload(uPL1);
if (!HasException)
SavePayLoadToProcessQueueFolder(
uPL1, processId, qryNo1);
}
catch (MeterUsageImportException iX)
{
log.Write(log.Level.Error,
"Delegate failed " iX.Message, iX);
lock (locker)
{
HasException = true;
X = iX;
foreach (ManualResetEvent
txEvt in waitHndls)
txEvt.Set();
}
}
finally { lock(locker) txEvnt.Set(); }
});
waitHndls.Add(txEvnt);
}
util.WaitAll(waitHndls.ToArray());
ThreadPool.SetMaxThreads(defaultMaxwrkrThreads,
defaultmaxIOThreads);
lock (locker) if (X != null) throw X;
}
What I usually do is create a big try/catch block inside the action() method, store the exception in a private variable, and then handle it on the main thread.
Simple code:
public class Test
{
    private AutoResetEvent _eventWaitThread = new AutoResetEvent(false);
    private Exception _error; // written by the worker, read by the main thread
    private void Job()
    {
        Action act = () =>
        {
            try
            {
                // do work...
            }
            catch (Exception ex)
            {
                _error = ex; // store it so the main thread can handle it
            }
            finally
            {
                _eventWaitThread.Set();
            }
        };
        ThreadPool.QueueUserWorkItem(x => act());
        _eventWaitThread.WaitOne(10 * 1000 * 60);
        if (_error != null)
        {
            // handle (or rethrow) the stored exception here, on the calling thread
        }
    }
}
}