To call a method asynchronously from a loop in C#

I have a requirement where I have to pull data from the Sailthru API. The problem is that it takes forever if I make the calls synchronously, as the response time depends on the amount of data. I am very new to threading and tried something out, but it didn't seem to work as expected. Could anyone please guide me?
Below is my sample code:
public void GetJobId()
{
Hashtable BlastIDs = getBlastIDs();
foreach (DictionaryEntry entry in BlastIDs)
{
Hashtable blastStats = new Hashtable();
blastStats.Add("stat", "blast");
blastStats.Add("blast_id", entry.Value.ToString());
//Function call 1
//Thread newThread = new Thread(() =>
//{
GetBlastDetails(entry.Value.ToString());
//});
//newThread.Start();
}
}
public void GetBlastDetails(string blast_id)
{
Hashtable tbData = new Hashtable();
tbData.Add("job", "blast_query");
tbData.Add("blast_id", blast_id);
SailthruResponse response = client.ApiPost("job", tbData);
object data = response.RawResponse;
JObject jtry = new JObject();
jtry = JObject.Parse(response.RawResponse.ToString());
if (jtry.SelectToken("job_id") != null)
{
//Function call 2
Thread newThread = new Thread(() =>
{
GetJobwiseDetail(jtry.SelectToken("job_id").ToString(), client,blast_id);
});
newThread.Start();
}
}
public void GetJobwiseDetail(string job_id, SailthruClient client,string blast_id)
{
Hashtable tbData = new Hashtable();
tbData.Add("job_id", job_id);
SailthruResponse response;
response = client.ApiGet("job", tbData);
JObject jtry = new JObject();
jtry = JObject.Parse(response.RawResponse.ToString());
string status = jtry.SelectToken("status").ToString();
if (status != "completed")
{
//Function call 3
Thread.Sleep(3000);
Thread newThread = new Thread(() =>
{
GetJobwiseDetail(job_id, client,blast_id);
});
newThread.Start();
string str = "test sleeping thread";
}
else {
string export_url = jtry.SelectToken("export_url").ToString();
TraceService(export_url);
SaveCSVDataToDB(export_url,blast_id);
}
}
I want Function call 1 to start asynchronously (or maybe after a gap of 3 seconds, to avoid load on the processor). In Function call 3 I am calling the same function again, with a delay of 3 seconds, if the status is not completed, to give the job time to finish.
Also, correct me if my question sounds stupid.

You should never use Thread.Sleep like that because, among other things, you don't know whether 3000 ms will be enough. Also, instead of the Thread class you should use the Task class, which is a much better option because it provides additional features such as thread-pool management. I don't have access to an IDE, but you should try Task.Factory.StartNew to invoke your request asynchronously; your function GetJobwiseDetail should then return the value that you want to save to the database, and you can use .ContinueWith with a delegate that saves that result to the database (a rough sketch follows). If you are able to use .NET 4.5, you should try the async/await feature.
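Something along these lines, purely as a sketch: it assumes GetJobwiseDetail is refactored to return the export URL, and SaveResultToDB is a hypothetical method that persists it.
// Sketch only: GetJobwiseDetail is assumed to return the export URL,
// and SaveResultToDB is a hypothetical method that writes it to the database.
Task<string> jobTask = Task.Factory.StartNew(
    () => GetJobwiseDetail(job_id, client, blast_id));

jobTask.ContinueWith(
    t => SaveResultToDB(t.Result, blast_id),
    TaskContinuationOptions.OnlyOnRanToCompletion);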
Easier solution:
Parallel.ForEach
Read up on it a little; you need to know almost nothing about threading. It runs the loop iterations in parallel on thread-pool threads, roughly as sketched below.
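Applied to the loop from the question, that could look roughly like this (a sketch only; it needs using directives for System.Linq and System.Threading.Tasks and reuses the names from the question):
Hashtable blastIDs = getBlastIDs();
Parallel.ForEach(blastIDs.Cast<DictionaryEntry>(), entry =>
{
    // each iteration runs on a thread-pool thread
    GetBlastDetails(entry.Value.ToString());
});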

First of all, avoid at all costs starting new Threads in your code; those threads will suck up your memory like a collapsing star, because each of them gets roughly 1 MB allocated for its stack.
Now for the code - depending on the framework you can choose from the following:
ThreadPool.QueueUserWorkItem for older versions of .NET Framework
Parallel.ForEach for .NET Framework 4
async and await for .NET Framework 4.5
TPL Dataflow also for .NET Framework 4.5
The code you show fits quite well with Dataflow, and I wouldn't suggest using async/await here because its usage would transform your example into a sort of fire-and-forget mechanism, which is against the recommendations for using async/await.
To use Dataflow you'll need in broad lines:
one TransformBlock which will take a string as an input and will return the response from the API
one BroadcastBlock that will broadcast the response to
two ActionBlocks; one to store data in the database and the other one to call TraceService
The code should look like this:
var downloader = new TransformBlock<string, SailthruResponse>(jobId =>
{
var data = new Hashtable();
data.Add("job_id", jobId);
return client.ApiGet("job", data);
});
var broadcaster = new BroadcastBlock<SailthruResponse>(response => response);
var databaseWriter = new ActionBlock<SailthruResponse>(response =>
{
// save to database...
});
var tracer = new ActionBlock<SailthruResponse>(response =>
{
//TraceService() call
});
var options = new DataflowLinkOptions{ PropagateCompletion = true };
// link blocks
downloader.LinkTo(broadcaster, options);
broadcaster.LinkTo(databaseWriter, options);
broadcaster.LinkTo(tracer, options);
// process values
foreach (DictionaryEntry entry in getBlastIDs())
{
downloader.Post(entry.Value.ToString());
}
downloader.Complete();
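One small addition to this sketch: if the caller also needs to know when the database writes and the tracing have finished, it can await the terminal blocks after completing the pipeline (inside an async method):
await Task.WhenAll(databaseWriter.Completion, tracer.Completion);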

Try it with async, as below:
async Task Task_MethodAsync()
{
// . . .
// The method has no return statement.
}
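On its own that skeleton does not do much. As a hedged sketch of how it could apply to the question, the polling in GetJobwiseDetail might look like this with async/await, using Task.Delay instead of a sleeping thread (the Sailthru calls are left synchronous, as in the original code):
public async Task GetJobwiseDetailAsync(string job_id, SailthruClient client, string blast_id)
{
    while (true)
    {
        Hashtable tbData = new Hashtable();
        tbData.Add("job_id", job_id);
        SailthruResponse response = client.ApiGet("job", tbData);
        JObject jtry = JObject.Parse(response.RawResponse.ToString());

        if (jtry.SelectToken("status").ToString() == "completed")
        {
            string export_url = jtry.SelectToken("export_url").ToString();
            TraceService(export_url);
            SaveCSVDataToDB(export_url, blast_id);
            return;
        }

        // wait without blocking a thread, then poll again
        await Task.Delay(3000);
    }
}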

Task Factory for each loop with await

I'm new to tasks and have a question regarding their usage. Does Task.Factory fire for all items in the foreach loop, or does it block at the 'await', basically making the program single-threaded? If I am thinking about this correctly, the foreach loop starts all the tasks and the .GetAwaiter().GetResult(); blocks the main thread until the last task is complete.
Also, I just want some anonymous tasks to load the data. Would this be a correct implementation? I'm not referring to exception handling, as this is simply an example.
To provide clarity, I am loading data into a database from an outside API. This one is using the FRED database. (https://fred.stlouisfed.org/), but I have several I will hit to complete the entire transfer (maybe 200k data points). Once they are done I update the tables, refresh market calculations, etc. Some of it is real time and some of it is End-of-day. I would also like to say, I currently have everything working in docker, but have been working to update the code using tasks to improve execution.
class Program
{
private async Task SQLBulkLoader()
{
foreach (var fileListObj in indicators.file_list)
{
await Task.Factory.StartNew( () =>
{
string json = this.GET(/* API call */);
SeriesObject obj = JsonConvert.DeserializeObject<SeriesObject>(json);
DataTable dataTableConversion = ConvertToDataTable(obj.observations);
dataTableConversion.TableName = fileListObj.series_id;
using (SqlConnection dbConnection = new SqlConnection("SQL Connection"))
{
dbConnection.Open();
using (SqlBulkCopy s = new SqlBulkCopy(dbConnection))
{
s.DestinationTableName = dataTableConversion.TableName;
foreach (var column in dataTableConversion.Columns)
s.ColumnMappings.Add(column.ToString(), column.ToString());
s.WriteToServer(dataTableConversion);
}
Console.WriteLine("File: {0} Complete", fileListObj.series_id);
}
});
}
}
static void Main(string[] args)
{
Program worker = new Program();
worker.SQLBulkLoader().GetAwaiter().GetResult();
}
}
Awaiting the task returned from Task.Factory.StartNew does indeed make it effectively single-threaded. You can see a simple demonstration of this with this short LINQPad example:
for (var i = 0; i < 3; i++)
{
var index = i;
$"{index} inline".Dump();
await Task.Run(() =>
{
Thread.Sleep((3 - index) * 1000);
$"{index} in thread".Dump();
});
}
Here we wait less as we progress through the loop. The output is:
0 inline
0 in thread
1 inline
1 in thread
2 inline
2 in thread
If you remove the await in front of StartNew you'll see it runs in parallel. As others have mentioned, you can certainly use Parallel.ForEach, but for a demonstration of doing it a bit more manually, you can consider a solution like this:
var tasks = new List<Task>();
for (var i = 0; i < 3; i++)
{
var index = i;
$"{index} inline".Dump();
tasks.Add(Task.Factory.StartNew(() =>
{
Thread.Sleep((3 - index) * 1000);
$"{index} in thread".Dump();
}));
}
Task.WaitAll(tasks.ToArray());
Notice now how the result is:
0 inline
1 inline
2 inline
2 in thread
1 in thread
0 in thread
This is a typical problem that C# 8.0 async streams are going to solve very soon.
Until C# 8.0 is released, you can use the AsyncEnumerator library:
using System.Collections.Async;
class Program
{
private async Task SQLBulkLoader() {
await indicators.file_list.ParallelForEachAsync(async fileListObj =>
{
...
await s.WriteToServerAsync(dataTableConversion);
...
},
maxDegreeOfParalellism: 3,
cancellationToken: default);
}
static void Main(string[] args)
{
Program worker = new Program();
worker.SQLBulkLoader().GetAwaiter().GetResult();
}
}
I do not recommend using Parallel.ForEach and Task.WhenAll as those functions are not designed for asynchronous streaming.
You'll want to add each task to a collection and then use Task.WhenAll to await all of the tasks in that collection:
private async Task SQLBulkLoader()
{
var tasks = new List<Task>();
foreach (var fileListObj in indicators.file_list)
{
tasks.Add(Task.Factory.StartNew(() => { /* Doing Stuff */ }));
}
await Task.WhenAll(tasks.ToArray());
}
My take on this: the most time-consuming operations will be getting the data using a GET request and the actual call to WriteToServer using SqlBulkCopy. If you take a look at that class, you will see that there is a native async WriteToServerAsync method (docs here). Always use those before creating tasks yourself using Task.Run.
The same applies to the http GET call. You can use the native HttpClient.GetAsync (docs here) for that.
Doing that you can rewrite your code to this:
private async Task ProcessFileAsync(string series_id)
{
string json = await GetAsync();
SeriesObject obj = JsonConvert.DeserializeObject<SeriesObject>(json);
DataTable dataTableConversion = ConvertToDataTable(obj.observations);
dataTableConversion.TableName = series_id;
using (SqlConnection dbConnection = new SqlConnection("SQL Connection"))
{
dbConnection.Open();
using (SqlBulkCopy s = new SqlBulkCopy(dbConnection))
{
s.DestinationTableName = dataTableConversion.TableName;
foreach (var column in dataTableConversion.Columns)
s.ColumnMappings.Add(column.ToString(), column.ToString());
await s.WriteToServerAsync(dataTableConversion);
}
Console.WriteLine("File: {0} Complete", series_id);
}
}
private async Task SQLBulkLoaderAsync()
{
var tasks = indicators.file_list.Select(f => ProcessFileAsync(f.series_id));
await Task.WhenAll(tasks);
}
Both operations (the HTTP call and the SQL Server call) are I/O calls. Using the native async/await pattern, there won't even be a thread created or used; see this question for a more in-depth explanation. That is why, for IO-bound operations, you should never have to use Task.Run (or Task.Factory.StartNew; but do mind that Task.Run is the recommended approach of the two).
Sidenote: if you are using HttpClient in a loop, please read this about how to correctly use it.
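For reference, a minimal sketch of that advice: create one HttpClient and reuse it for every request instead of newing one up per call (the GetAsync wrapper below is illustrative and takes a url, unlike the parameterless call used above):
// one shared instance for the lifetime of the application
private static readonly HttpClient httpClient = new HttpClient();

private static async Task<string> GetAsync(string url)
{
    using (var response = await httpClient.GetAsync(url))
    {
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}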
If you need to limit the number of parallel actions you could also use TPL Dataflow, as it plays very nicely with Task-based IO-bound operations. The SQLBulkLoaderAsync should then be modified to this (leaving the ProcessFileAsync method from earlier in this answer intact):
private async Task SQLBulkLoaderAsync()
{
var ab = new ActionBlock<string>(ProcessFileAsync, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 5 });
foreach (var file in indicators.file_list)
{
ab.Post(file.series_id);
}
ab.Complete();
await ab.Completion;
}
Use a Parallel.ForEach loop to enable data parallelism over any System.Collections.Generic.IEnumerable<T> source.
// Method signature: Parallel.ForEach(IEnumerable<TSource> source, Action<TSource> body)
Parallel.ForEach(fileList, (currentFile) =>
{
//Doing Stuff
Console.WriteLine("Processing {0} on thread {1}", currentFile, Thread.CurrentThread.ManagedThreadId);
});
Why didn't you try this? :) This program will not start parallel tasks (in a foreach); it will block, but the logic in the task will be done on a separate thread from the thread pool (only one at a time, while the main thread is blocked).
The proper approach in your situation is to use Parallel.ForEach:
How can I convert this foreach code to Parallel.ForEach?
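A rough sketch of that conversion for the loop in the question (the method name is illustrative and the bulk-copy body is elided; note that Parallel.ForEach blocks the calling thread until all iterations have finished):
private void SQLBulkLoaderParallel()
{
    Parallel.ForEach(indicators.file_list, fileListObj =>
    {
        string json = this.GET(/* API call */);
        SeriesObject obj = JsonConvert.DeserializeObject<SeriesObject>(json);
        DataTable dataTableConversion = ConvertToDataTable(obj.observations);
        dataTableConversion.TableName = fileListObj.series_id;
        // ... SqlBulkCopy work exactly as in the original foreach body ...
        Console.WriteLine("File: {0} Complete", fileListObj.series_id);
    });
}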

How to use Threads for Processing Many Tasks

I have a C# requirement for individually processing a 'great many' (perhaps > 100,000) records. Running this process sequentially is proving to be very slow with each record taking a good second or so to complete (with a timeout error set at 5 seconds).
I would like to try running these tasks asynchronously by using a set number of worker 'threads' (I use the term 'thread' here cautiously as I am not sure if I should be looking at a thread, or a task or something else).
I have looked at the ThreadPool, but I can't imagine it could queue the volume of requests required. My ideal pseudo code would look something like this...
public void ProcessRecords() {
SetMaxNumberOfThreads(20);
MyRecord rec;
while ((rec = GetNextRecord()) != null) {
var task = WaitForNextAvailableThreadFromPool(ProcessRecord(rec));
task.Start()
}
}
I will also need a mechanism that the processing method can report back to the parent/calling class.
Can anyone point me in the right direction with perhaps some example code?
A possible simple solution would be to use a TPL Dataflow block which is a higher abstraction over the TPL with configurations for degree of parallelism and so forth. You simply create the block (ActionBlock in this case), Post everything to it, wait asynchronously for completion and TPL Dataflow handles all the rest for you:
var block = new ActionBlock<MyRecord>(
rec => ProcessRecord(rec),
new ExecutionDataflowBlockOptions{MaxDegreeOfParallelism = 20});
MyRecord rec;
while ((rec = GetNextRecord()) != null)
{
block.Post(rec);
}
block.Complete();
await block.Completion;
Another benefit is that the block starts working as soon as the first record arrives and not only when all the records have been received.
If you need to report back on each record you can use a TransformBlock to do the actual processing and link an ActionBlock to it that does the updates:
var transform = new TransformBlock<MyRecord, Report>(rec =>
{
ProcessRecord(rec);
return GenerateReport(rec);
}, new ExecutionDataflowBlockOptions{MaxDegreeOfParallelism = 20});
var reporter = new ActionBlock<Report>(report =>
{
RaiseEvent(report) // Or any other mechanism...
});
transform.LinkTo(reporter, new DataflowLinkOptions { PropagateCompletion = true });
MyRecord rec;
while ((rec = GetNextRecord()) != null)
{
transform.Post(rec);
}
transform.Complete();
await reporter.Completion; // await the linked reporter block so the updates have finished too
Have you thought about using parallel processing with Actions?
I.e., create a method to process a single record, add each record's method call as an Action to a list, and then perform a Parallel.ForEach on the list.
Dim list As New List(Of Action)
list.Add(New Action(Sub() MyMethod(myParameter)))
Parallel.ForEach(list, Sub(t) t.Invoke())
This is in VB.NET, but I think you get the gist; a C# equivalent is sketched below.
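For reference, the same idea in C# (MyMethod and myParameter are placeholders, as in the snippet above):
var list = new List<Action>();
list.Add(() => MyMethod(myParameter));
Parallel.ForEach(list, t => t());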

Query on Queues and Thread Safety

Thread safety is not an aspect that I have worried about much, as the simple apps and libraries I have written usually only run on the main thread, or do not directly modify properties or fields in any classes that I needed to worry about before.
However, I have started working on a personal project in which I am using a WebClient to download data asynchronously from a remote server. There is a Queue<Uri> that contains a pre-built queue of URIs from which to download data.
So consider the following snippet (this is not my real code, but something I hope illustrates my question):
private WebClient webClient = new WebClient();
private Queue<Uri> requestQueue = new Queue<Uri>();
public Boolean DownloadNextASync()
{
if (webClient.IsBusy)
return false;
if (requestQueue.Count == 0)
return false;
var uri = requestQueue.Dequeue();
webClient.DownloadDataAsync(uri);
return true;
}
If I am understanding correctly, this method is not thread-safe (assuming this specific instance of the object is known to multiple threads). My reasoning is that webClient could become busy between the IsBusy check and the DownloadDataAsync() call, and requestQueue could become empty between the Count check and when the next item is dequeued.
My question is what is the best way to handle this type of situation to make it thread-safe?
This is more of an abstract question as I realize for this specific method that there would have to be an exceptionally inconvenient timing for this to actually cause a problem, and to cover that case I could just wrap the method in an appropriate try-catch since both pieces would throw an exception. But is there another option? Would a lock statement be applicable here?
If you're targeting .Net 4.0, you could use the Task Parallel Library for help:
var queue = new BlockingCollection<Uri>();
var maxClients = 4;
// Optionally provide another producer/consumer collection for the data
// var data = new BlockingCollection<Tuple<Uri,byte[]>>();
// Optionally implement CancellationTokenSource
var clients = (from id in Enumerable.Range(0, maxClients)
select Task.Factory.StartNew(
() =>
{
var client = new WebClient();
while (!queue.IsCompleted)
{
Uri uri;
if (queue.TryTake(out uri))
{
byte[] datum = client.DownloadData(uri); // synchronous call, but "async" from the caller's view because it runs on a worker task
// Optionally pass datum along to the other collection
// or work on it here
}
else Thread.SpinWait(100);
}
})).ToList(); // materialize the query so the worker tasks start now
// Add URI's to search
// queue.Add(...);
// Notify our clients that we've added all the URI's
queue.CompleteAdding();
// Wait for all of our clients to finish
Task.WaitAll(clients.ToArray());
To use this approach for progress indication, you can use TaskCompletionSource<TResult> to wrap the event-based asynchronous pattern:
public static Task<byte[]> DownloadAsync(Uri uri, Action<double> progress)
{
var source = new TaskCompletionSource<byte[]>();
Task.Factory.StartNew(
() =>
{
var client = new WebClient();
client.DownloadProgressChanged
+= (sender, e) => progress(e.ProgressPercentage);
client.DownloadDataCompleted
+= (sender, e) =>
{
if (!e.Cancelled)
{
if (e.Error == null)
{
source.SetResult((byte[])e.Result);
}
else
{
source.SetException(e.Error);
}
}
else
{
source.SetCanceled();
}
};
// the download has to be started, otherwise the task never completes
client.DownloadDataAsync(uri);
});
return source.Task;
}
Used like so:
// var urls = new List<Uri>(...);
// var progressBar = new ProgressBar();
Task.Factory.StartNew(
() =>
{
foreach (var uri in urls)
{
var task = DownloadAsync(
uri,
p =>
progressBar.Invoke(
new MethodInvoker(
delegate { progressBar.Value = (int)(100 * p); }))
);
// Will Block!
// data = task.Result;
}
});
I highly recommend reading "Threading In C#" by Joseph Albahari. I have taken a look through it in preparation for my first (mis)adventure into threading and it's pretty comprehensive.
You can read it here: http://www.albahari.com/threading/
Both of the thread-safety concerns you raised are valid. Furthermore, both WebClient and Queue are documented as not being thread-safe (see the bottom of their MSDN pages). For example, if two threads were dequeuing simultaneously, they might cause the queue to become internally inconsistent or return nonsensical values. Suppose the implementation of Dequeue() were something like:
1. var valueToDequeue = this._internalList[this._startPointer];
2. this._startPointer = (this._startPointer + 1) % this._internalList.Count;
3. return valueToDequeue;
and two threads each executed line 1 before either continued to line 2, then both would return the same value (there are other potential issues here as well). This would not necessarily throw an exception, so you should use a lock statement to guarantee that only one thread can be inside the method at a time:
private readonly object _lock = new object();
...
lock (this._lock) {
// body of method
}
You could also lock on the WebClient or the Queue if you know that no-one else will be synchronizing on them.
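Applied to the method from the question, a locked version could look like this (a sketch only; the fields are the ones declared in the question):
private readonly object _lock = new object();

public Boolean DownloadNextASync()
{
    lock (_lock)
    {
        if (webClient.IsBusy)
            return false;

        if (requestQueue.Count == 0)
            return false;

        var uri = requestQueue.Dequeue();
        webClient.DownloadDataAsync(uri);
        return true;
    }
}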

Parallel.Invoke - Dynamically creating more 'threads'

I am educating myself on Parallel.Invoke, and parallel processing in general, for use in a current project. I need a push in the right direction to understand how you can dynamically/intelligently allocate more parallel 'threads' as required.
As an example. Say you are parsing large log files. This involves reading from file, some sort of parsing of the returned lines and finally writing to a database.
So to me this is a typical problem that can benefit from parallel processing.
As a simple first pass the following code implements this.
Parallel.Invoke(
()=> readFileLinesToBuffer(),
()=> parseFileLinesFromBuffer(),
()=> updateResultsToDatabase()
);
Behind the scenes
readFileLinesToBuffer() reads each line and stores it in a buffer.
parseFileLinesFromBuffer() comes along, consumes lines from that buffer and, let's say, puts them on another buffer so that updateResultsToDatabase() can come along and consume that buffer.
So the code shown assumes that each of the three steps uses the same amount of time/resources, but let's say parseFileLinesFromBuffer() is a long-running process, so instead of running just one instance of that method you want to run two in parallel.
How can you have the code intelligently decide to do this based on any bottlenecks it might perceive?
Conceptually I can see how some approach of monitoring the buffer sizes might work, spawning a new 'thread' to consume the buffer at an increased rate for example...but I figure this type of issue has been considered in putting together the TPL library.
Some sample code would be great but I really just need a clue as to what concepts I should investigate next. It looks like maybe the System.Threading.Tasks.TaskScheduler holds the key?
Have you tried the Reactive Extensions?
http://msdn.microsoft.com/en-us/data/gg577609.aspx
Rx is a new technology from Microsoft; its focus, as stated on the official site, is:
The Reactive Extensions (Rx) ... is a library to compose asynchronous and event-based programs using observable collections and LINQ-style query operators.
You can download it as a NuGet package:
https://nuget.org/packages/Rx-Main/1.0.11226
Since I am currently learning Rx, I wanted to take this example and just write code for it. The code I ended up with is not actually executed in parallel, but it is completely asynchronous, and it guarantees the source lines are processed in order.
Perhaps this is not the best implementation, but like I said, I am learning Rx (thread safety would be a good improvement).
This is a DTO that I am using to return data from the background threads
class MyItem
{
public string Line { get; set; }
public int CurrentThread { get; set; }
}
These are the basic methods doing the real work. I am simulating the work with a simple Thread.Sleep, and I am returning the thread used to execute each method via Thread.CurrentThread.ManagedThreadId. Note that ProcessLine sleeps for 4 seconds; it is the most time-consuming operation.
private IEnumerable<MyItem> ReadLinesFromFile(string fileName)
{
var source = from e in Enumerable.Range(1, 10)
let v = e.ToString()
select v;
foreach (var item in source)
{
Thread.Sleep(1000);
yield return new MyItem { CurrentThread = Thread.CurrentThread.ManagedThreadId, Line = item };
}
}
private MyItem UpdateResultToDatabase(string processedLine)
{
Thread.Sleep(700);
return new MyItem { Line = "s" + processedLine, CurrentThread = Thread.CurrentThread.ManagedThreadId };
}
private MyItem ProcessLine(string line)
{
Thread.Sleep(4000);
return new MyItem { Line = "p" + line, CurrentThread = Thread.CurrentThread.ManagedThreadId };
}
The following method is used just to update the UI:
private void DisplayResults(MyItem myItem, Color color, string message)
{
this.listView1.Items.Add(
new ListViewItem(
new[]
{
message,
myItem.Line ,
myItem.CurrentThread.ToString(),
Thread.CurrentThread.ManagedThreadId.ToString()
}
)
{
ForeColor = color
}
);
}
And finally this is the method that calls the Rx API
private void PlayWithRx()
{
// we initialize the observable with the lines read from the file
var source = this.ReadLinesFromFile("some file").ToObservable(Scheduler.TaskPool);
source.ObserveOn(this).Subscribe(x =>
{
// for each line read, we update the UI
this.DisplayResults(x, Color.Red, "Read");
// for each line read, we subscribe the line to the ProcessLine method
var process = Observable.Start(() => this.ProcessLine(x.Line), Scheduler.TaskPool)
.ObserveOn(this).Subscribe(c =>
{
// for each line processed, we update the UI
this.DisplayResults(c, Color.Blue, "Processed");
// for each line processed we subscribe to the final process the UpdateResultToDatabase method
// finally, we update the UI when the line processed has been saved to the database
var persist = Observable.Start(() => this.UpdateResultToDatabase(c.Line), Scheduler.TaskPool)
.ObserveOn(this).Subscribe(z => this.DisplayResults(z, Color.Black, "Saved"));
});
});
}
This process runs entirely in the background.
in an async/await world, you'd have something like:
public async Task ProcessFileAsync(string filename)
{
var lines = await ReadLinesFromFileAsync(filename);
var parsed = await ParseLinesAsync(lines);
await UpdateDatabaseAsync(parsed);
}
Then a caller could just do var tasks = filenames.Select(ProcessFileAsync).ToArray(); and whatever fits the context (Task.WaitAll, Task.WhenAll, etc.).
Use a couple of BlockingCollections. Here is an example.
The idea is that you create a producer that puts data into the collection
while (true) {
var data = ReadData();
blockingCollection1.Add(data);
}
Then you create any number of consumers that reads from the collection
while (true) {
var data = blockingCollection1.Take();
var processedData = ProcessData(data);
blockingCollection2.Add(processedData);
}
and so on
You can also let the TPL handle the number of consumers by using Parallel.ForEach:
Parallel.ForEach(blockingCollection1.GetConsumingPartitioner(),
data => {
var processedData = ProcessData(data);
blockingCollection2.Add(processedData);
});
(Note that you need to use GetConsumingPartitioner, not GetConsumingEnumerable; see here.)
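A slightly more complete sketch of how those pieces could be wired together; Data, ProcessedData, ReadAllData and ProcessData are placeholders for your own types and methods, and the consumer count is arbitrary:
// Requires System.Collections.Concurrent, System.Linq and System.Threading.Tasks.
var blockingCollection1 = new BlockingCollection<Data>();
var blockingCollection2 = new BlockingCollection<ProcessedData>();

// single producer
var producer = Task.Factory.StartNew(() =>
{
    foreach (var data in ReadAllData())
        blockingCollection1.Add(data);
    blockingCollection1.CompleteAdding();   // lets the consumers drain and exit
});

// a fixed number of consumers
var consumers = Enumerable.Range(0, 4)
    .Select(_ => Task.Factory.StartNew(() =>
    {
        foreach (var data in blockingCollection1.GetConsumingEnumerable())
            blockingCollection2.Add(ProcessData(data));
    }))
    .ToArray();

Task.WaitAll(consumers);
blockingCollection2.CompleteAdding();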

What is the best way to do multithreading or asynchronous task in .NET with return value?

What would be the best way to do multithreading or asynchronous task in the following situation in C#?
The simplified situation:
An HTTP request needs to make 5 or more web service calls. Upon completion, each web service call will receive and return a string list as a result. The caller (of the 5 web service calls) needs to merge the 5 results into a single string list and return it to the HTTP caller.
Because each thread needs to return a value in the end, I am wondering whether asynchronous delegates are the way to go. I am not very experienced in this area, so I am asking for opinions and/or suggestions.
Thank you!
You should have a look at QueueUserWorkItem. This would allow you to do each call on a separate thread and collect the string value for each particular call, e.g.:
ManualResetEvent[] calls = new ManualResetEvent[5];
string[] results = new string[5];
calls[0] = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(t =>
{
results[0] = // do webservice call
calls[0].Set();
});
calls[1] = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(t =>
{
results[1] = // do webservice call
calls[1].Set();
});
....
// wait for all calls to complete
WaitHandle.WaitAll(calls);
// merge the results into a comma delimited string
string resultStr = String.Join(", ", results);
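The same pattern can be collapsed into a loop; a sketch, where CallService stands in for the individual web service calls:
var calls = new ManualResetEvent[5];
var results = new string[5];

for (int i = 0; i < 5; i++)
{
    int index = i;                       // copy for the closure
    calls[index] = new ManualResetEvent(false);
    ThreadPool.QueueUserWorkItem(_ =>
    {
        results[index] = CallService(index);   // hypothetical web service call
        calls[index].Set();
    });
}

WaitHandle.WaitAll(calls);
string resultStr = String.Join(", ", results);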
Here's a short piece of code using .NET 4.0 that utilizes the new System.Collections.Concurrent namespace, which encapsulates the concurrency code:
class StackOverflowParallel
{
public void Execute()
{
List<int> codeParam = new List<int>(){1,2,3};
ConcurrentBag<string> result = new ConcurrentBag<string>();
Parallel.For(0, codeParam.Count, i => DoSomething(i).ForEach(result.Add));
// return result here as List, Array....
}
List<string> DoSomething(int value)
{
return new List<string>(){"1","2","3","4"};
}
}
You need to use a callback to accomplish what you're asking.
The easiest way to do this if you're not using the 4.0 framework is as follows:
var strings = new List<string>();
for (var i = 0; i < 5; i++)
{
    var index = i; // copy the loop variable so each closure captures its own value
    ThreadPool.QueueUserWorkItem(x =>
    {
        // List<string> is not thread-safe, so serialize access to it
        lock (strings) strings.AddRange(SomeWebServiceCall(index));
    });
}
This requires some synchronization after the loop to ensure the threads complete before you move on (and, as the comment notes, access to the shared list needs to be serialized). In the 4.0 framework, the Task Parallel Library does that waiting for you:
Parallel.For(0, 5, i => strings.AddRange(someWebServiceCall(i)));
Asynchronous delegates were useful when the 1.0 framework came out, but they are a lot of extra plumbing to deal with since the addition of anonymous delegates in 2.0. In 4.0, the TPL makes them even more irrelevant. IAsyncResult relies on an explicit callback, whereas you can now use implicit callbacks (as demonstrated above).
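For completeness, the asynchronous-delegate pattern mentioned in the question looks roughly like this on the classic .NET Framework (delegate BeginInvoke/EndInvoke are not supported on .NET Core and later), assuming SomeWebServiceCall returns a List<string> as above:
Func<List<string>> call = () => SomeWebServiceCall(0);
IAsyncResult ar = call.BeginInvoke(null, null);   // runs on a thread-pool thread
// ... do other work here ...
List<string> result = call.EndInvoke(ar);         // blocks until the call completes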
