This is just a self-learning exercise; I am pretty new to TPL and threading. I am using a generic Singleton class and creating ~10K instances to check whether my code returns the same instance or creates a new one every time. I create the instances asynchronously using Task.Factory inside a for loop. To validate each creation, I return a string containing the following information, collected into a list of strings:
Iteration counter
Name of the instance
Hash code of the instance
Thread ID
I then display the list of strings in a list box.
My Queries
On running it, I found a few things:
The value of i inside the for loop is getting duplicated for different instances.
For those 10K iterations, only 8-9 threads are created instead of the expected 10K threads. I was expecting 10K threads to pop up, do their individual task, and then disappear gracefully.
Can I use this in my projects, as class libraries, irrespective of the platforms - Web, Windows or Mobile?
Please do leave a note on my 10K-threads idea :). Is it a good or a bad approach to multithreading?
My code
Singleton Class
public sealed class Singleton<T> where T : class
{
    static Singleton() { }
    private Singleton() { }
    public static T Instance { get; } = Activator.CreateInstance<T>();
}
Class: SingletonInThread
public class SingletonInThread
{
    /// <summary>
    /// Method responsible for creation of same instance, 10K times using Task.Factory.StartNew()
    /// </summary>
    /// <returns></returns>
    public async Task<IEnumerable<string>> LoopAsync()
    {
        List<Task<string>> list = new List<Task<string>>();
        for (int i = 0; i <= 9999; i++)
        {
            list.Add(Task.Factory.StartNew(() => CreateAndLogInstances(i)));
        }
        return await Task.WhenAll<string>(list);
    }

    /// <summary>
    /// Creates new instance of Logger and logs its creation with few details. Kind of Unit of Work.
    /// </summary>
    /// <param name="i"></param>
    /// <returns></returns>
    private string CreateAndLogInstances(int i)
    {
        var instance = Singleton<Logger>.Instance;
        return $"Instance{i}. Name of instance= {instance.ToString()} && Hashcode ={instance.GetHashCode()} && ThreadId= {Thread.CurrentThread.ManagedThreadId}";
    }
}
Frontend
On the UI side, in the button click event, I populate the list box:
private async void button1_Click(object sender, EventArgs e)
{
    IEnumerable<string> list = await new SingletonInThread().LoopAsync();
    foreach (var item in list)
    {
        listBox1.Items.Add(item);
    }
}
Also, I noticed that my UI gets blocked while populating the list box with 10K items. Please help me populate it asynchronously. I know about BackgroundWorker, BeginInvoke, and MethodInvoker. Is there anything other than those in TPL?
Output
Update
As suggested, if I use Parallel.For, then instead of 10K strings I get a random count such as 9491 or 9326, i.e. fewer than 10K. I don't know why.
Here's my updated code for the LoopAsync method using Parallel.For:
public IEnumerable<string> LoopAsync()
{
    List<string> list = new List<string>();
    Parallel.For(0, 9999, i =>
    {
        list.Add(CreateAndLogInstances(i));
    });
    return list;
}
The value of i inside the for loop is getting duplicated for different instances.
This doesn't have anything to do with threading/parallel/asynchrony or singleton instances. You're seeing this because closures capture variables, not values. So this code:
for (int i = 0; i <= 9999; i++)
{
    list.Add(Task.Factory.StartNew(() => CreateAndLogInstances(i)));
}
is passing the variable i to the closure () => CreateAndLogInstances(i), not the current value of i. To capture the current value and use that in your closure, you would need a separate variable per closure, as recommended in a comment:
for (int i = 0; i <= 9999; i++)
{
    var index = i;
    list.Add(Task.Factory.StartNew(() => CreateAndLogInstances(index)));
}
For those 10K iterations, only 8-9 threads are created instead of the expected 10K threads. I was expecting 10K threads to pop up, do their individual task, and then disappear gracefully.
No, you would very much not want that to happen. Thread creation and destruction has a lot of overhead. StartNew and Parallel queue work to the thread pool, and the thread pool will grow quickly to a certain point and then grow slowly, on purpose. This is because on, e.g., an 8-core machine, there is no point in having 10k threads because they cannot all run anyway.
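A quick way to see this for yourself is a small console sketch (this is not the question's code) that records which thread each task ran on; on a typical machine the 10,000 tasks end up multiplexed onto a small number of pool threads:
var tasks = Enumerable.Range(0, 10000)
    .Select(_ => Task.Factory.StartNew(() => Thread.CurrentThread.ManagedThreadId))
    .ToArray();
Task.WaitAll(tasks);

// count how many distinct pool threads actually did the work
int distinctThreads = tasks.Select(t => t.Result).Distinct().Count();
Console.WriteLine($"Processors: {Environment.ProcessorCount}, distinct pool threads used: {distinctThreads}");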
Can I use this in my projects, as class libraries, irrespective of the platforms - Web, Windows or Mobile?
I never recommend using parallel processing on web applications, because your web host has already parallelized your requests. So doing additional parallel processing tends to burden your web server and potentially make it much less responsive to other requests.
Also, I noticed that my UI gets blocked while populating the list box with 10K items. Please help me populate it asynchronously.
You normally want to avoid making 10k UI updates at practically the same time. Parallel processing doesn't help with a UI because all UI updates have to be done on the UI thread. Either put all the results in the list with a single call, or use something like control virtualization.
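One way to do that in this example (a sketch that reuses the question's button1_Click, assuming WinForms) is to suspend repainting with BeginUpdate/EndUpdate and add all the items in a single AddRange call:
private async void button1_Click(object sender, EventArgs e)
{
    IEnumerable<string> list = await new SingletonInThread().LoopAsync();

    listBox1.BeginUpdate();   // stop the control from repainting after every item
    try
    {
        listBox1.Items.AddRange(list.ToArray());   // one UI update instead of 10,000
    }
    finally
    {
        listBox1.EndUpdate();
    }
}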
Adding the same object to a WinForms list box multiple times results in multiple lines in the list box, e.g.:
private void Form1_Load(object sender, EventArgs e)
{
    string foo = "Hello, world";
    listBox1.Items.Add(foo);
    listBox1.Items.Add(foo);
    listBox1.Items.Add(foo);
}
yields three lines proclaiming Hello, world. So, it isn't unexpected that you receive 10,000 lines in your example. But are they the same object, or are you creating multiple objects?
I created my own Logger class:
public class Logger
{
    static private Random rnd = new Random();

    public int Id { get; } = rnd.Next();

    public override string ToString()
    {
        return Id.ToString();
    }
}
Indeed, each output line has the same Id, thus indicating the same object instance was used in each case. You also output the call to GetHashCode(), which also is the same in each case, indicating a high probability that you are dealing with only one instance.
Related
I have multiple threads generating items and sticking them in a common ConcurrentQueue:
private ConcurrentQueue<GeneratedItem> queuedItems = new ConcurrentQueue<GeneratedItem>();

private void BunchOfThreads()
{
    // ...
    queuedItems.Enqueue(new GeneratedItem(...));
    // ...
}
I have another single consumer thread but the way it needs to work in the context of this application is, occasionally, it just needs to grab everything currently in the threads' queue, removing it from that queue, all in one shot. Something like:
private Queue<GeneratedItem> GetAllNewItems()
{
    return queuedItems.TakeEverything(); // <-- not a real method
}
I think I looked through all the documentation (for the collection and its implemented interfaces) but I didn't seem to find anything like a "concurrently take all objects from queue", or even "concurrently swap contents with another queue".
I could do this no problem if I ditch the ConcurrentQueue and just protect a normal Queue with a lock, like this:
private Queue<GeneratedItem> queuedItems = new Queue<GeneratedItem>();

private void BunchOfThreads()
{
    // ...
    lock (queuedItems)
    {
        queuedItems.Enqueue(new GeneratedItem(...));
    }
    // ...
}

private Queue<GeneratedItem> GetAllNewItems()
{
    lock (queuedItems)
    {
        Queue<GeneratedItem> newItems = new Queue<GeneratedItem>(queuedItems);
        queuedItems.Clear();
        return newItems;
    }
}
But, I like the convenience of the ConcurrentQueue and also since I'm just learning C# I'm curious about the API; so my question is, is there a way to do this with one of the concurrent collections?
Is there perhaps some way to access whatever synchronization object ConcurrentQueue uses and lock it for myself for my own purposes so that everything plays nicely together? Then I can lock it, take everything, and release?
It depends on what you want to do. As per the comments in the source code:
//number of snapshot takers, GetEnumerator(), ToList() and ToArray() operations take snapshot.
Taking a snapshot works by internally calling ToList(), which in turn relies on m_numSnapshotTakers and a spin mechanism:
/// <summary>
/// Copies the <see cref="ConcurrentQueue{T}"/> elements to a new <see
/// cref="T:System.Collections.Generic.List{T}"/>.
/// </summary>
/// <returns>A new <see cref="T:System.Collections.Generic.List{T}"/> containing a snapshot of
/// elements copied from the <see cref="ConcurrentQueue{T}"/>.</returns>
private List<T> ToList()
{
// Increments the number of active snapshot takers. This increment must happen before the snapshot is
// taken. At the same time, Decrement must happen after list copying is over. Only in this way, can it
// eliminate race condition when Segment.TryRemove() checks whether m_numSnapshotTakers == 0.
Interlocked.Increment(ref m_numSnapshotTakers);
List<T> list = new List<T>();
try
{
//store head and tail positions in buffer,
Segment head, tail;
int headLow, tailHigh;
GetHeadTailPositions(out head, out tail, out headLow, out tailHigh);
if (head == tail)
{
head.AddToList(list, headLow, tailHigh);
}
else
{
head.AddToList(list, headLow, SEGMENT_SIZE - 1);
Segment curr = head.Next;
while (curr != tail)
{
curr.AddToList(list, 0, SEGMENT_SIZE - 1);
curr = curr.Next;
}
//Add tail segment
tail.AddToList(list, 0, tailHigh);
}
}
finally
{
// This Decrement must happen after copying is over.
Interlocked.Decrement(ref m_numSnapshotTakers);
}
return list;
}
If a snapshot is all you want, then you are in luck. However, there is seemingly no built in way to get and remove all the items from a ConcurrentQueue in a thread safe manner. You will need to bake your own synchronisation by using lock or similar. Or roll your own (which might not be all that difficult looking at the source).
There is no such method, because it is ambiguous what TakeEverything should actually do:
Take item by item until Queue is empty and then return taken items.
Lock whole access to the queue, take a snapshot (take all items in a loop) = clear the queue, unlock, return the snapshot.
Consider the first scenario and imagine that other threads are writing to the queue while you are removing items one by one - should the TakeEverything method include those in the result?
If yes, then you can just write it as:
public List<GeneratedItem> TakeEverything()
{
    var list = new List<GeneratedItem>();
    while (queuedItems.TryDequeue(out var item))
    {
        list.Add(item);
    }
    return list;
}
If no, then I would still use ConcurrentQueue (because the instance members - methods and properties - of an ordinary Queue are not thread safe) and implement a custom lock for every read/write access, so you make sure you are not adding items while "taking everything" from the queue.
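If you go that route, a rough sketch (the type and member names here are mine, not from the question) of a plain Queue<T> guarded by a private lock object, where TakeEverything swaps the whole queue out atomically, could look like this:
public class LockedBatchQueue<T>
{
    private readonly object _gate = new object();
    private Queue<T> _items = new Queue<T>();

    public void Enqueue(T item)
    {
        lock (_gate)
        {
            _items.Enqueue(item);
        }
    }

    public Queue<T> TakeEverything()
    {
        lock (_gate)
        {
            // swap the whole queue out so producers never see a half-cleared collection
            Queue<T> taken = _items;
            _items = new Queue<T>();
            return taken;
        }
    }
}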
Say, I have a static class like this
static class PCstatus
{
    public static class Cpu
    {
        // CPU loads
        public static int lt;
        public static int l1;
        public static int l2;
        public static int l3;
        public static int l4;
        // CPU temperature
        public static double t0;
        // Frequency
    }
}
which I'm using as a storage space (should I be doing that?).
And I have 5-6 threads that periodically change different variables in this class (note: no two threads change the same value), e.g.:
First thread:
PCstatus.lt = 0;
Thread.Sleep(1000);
Second thread:
PCstatus.l1 = 0;
Thread.Sleep(1000);
And then I have another thread that periodically reads all the values from the class, parses them, and sends them over serial.
Is this a sane way to do it? There is no locking mechanism in the class, so theoretically, one of the threads could try to change a var while the final thread is reading it.
I'm not sure whether such a thing can happen; I've run this program for days and so far haven't noticed any strange behavior.
I could implement a locking mechanism in the class (e.g. a bool _isBeingUsed flag) and make the threads check that value before performing any operation, but I'm not sure if it's necessary.
I know the proper way to output values from threads is to use delegates, but if it's not really necessary, I could do without the added complexity they bring.
Reads and writes to int values in C# are atomic, so you'll never have to worry about data shearing.
However, writing to multiple values within the class is not atomic, so in your example:
First thread:
PCstatus.lt = 0;
Thread.Sleep(1000);
Second thread:
PCstatus.l1 = 0;
Thread.Sleep(1000);
There's no guarantee that, just because thread 3 sees that lt is 0, it will also see that l1 is zero. You've potentially got data race issues here.
Also, just because a thread writes to a variable, it doesn't mean that other threads will see its value immediately. CPU instruction reordering, compiler reordering, and CPU caching strategies may conspire to prevent the write from making its way back to main memory and into another thread.
If you're only ever going to change single values from a thread then use methods on the Interlocked class to ensure that your changes are visible across threads. They use a memory barrier to ensure that read/writes to variables propagate across threads.
If you're going to write multiple values in one hit, or if you want to read multiple values in one hit then you'll need to use a lock.
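To make that concrete, here is a hedged sketch (the field and type names are illustrative, not from the question) showing both patterns: Interlocked for a single int, and a lock where two related values must be read as one consistent snapshot:
static class StatusStore
{
    private static int _lt;

    private static readonly object _gate = new object();
    private static int _l1, _l2;

    // single-value update/read: Interlocked gives atomicity plus a memory barrier
    public static void SetLt(int value) => Interlocked.Exchange(ref _lt, value);
    public static int GetLt() => Interlocked.CompareExchange(ref _lt, 0, 0); // atomic read idiom

    // multi-value update/read: a lock keeps the pair consistent
    public static void SetLoads(int l1, int l2)
    {
        lock (_gate) { _l1 = l1; _l2 = l2; }
    }

    public static (int l1, int l2) GetLoads()
    {
        lock (_gate) { return (_l1, _l2); }
    }
}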
No locking is required, but you should declare those fields volatile to ensure that updates from one thread can be picked up immediately by other threads.
See: https://msdn.microsoft.com/en-us/library/x13ttww7.aspx
Note that you can't declare a double to be volatile. I think for your application you could probably just use a float instead. Otherwise you can use a class that contains an immutable double value.
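As a sketch of that last suggestion (class and member names are mine, not from the answer), you can wrap the double in a small immutable object and publish it through a volatile reference, since reference assignments are atomic:
public sealed class TempReading
{
    public TempReading(double value) { Value = value; }
    public double Value { get; }
}

public static class CpuTemp
{
    // the reference is volatile; the double inside never changes after construction
    private static volatile TempReading _current = new TempReading(0.0);

    public static void Set(double value) => _current = new TempReading(value);
    public static double Get() => _current.Value;
}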
And I have 5-6 threads that periodically change different variables in this class
Instead of having a single storage location for the results of 5-6 workers, you can have each worker expose an event. Then anyone who needs the results can subscribe to it and keep local storage, which means no threading issues anymore.
Something like:
public static class CPUStats
{
    public static EventHandler<CPUEventArgs> Measured;

    static CPUStats()
    {
        Task.Factory.StartNew(() =>
        {
            while (true)
            {
                ... // poll CPU data periodically
                Measured?.Invoke(null, new CPUEventArgs() { LT = lt, L1 = l1, ... });
            }
        }, TaskCreationOptions.LongRunning);
    }
}
public static class StatsWriter
{
    static int lt;
    static int l1;
    ...

    static StatsWriter()
    {
        CPUStats.Measured += (s, e) =>
        {
            lt = e.LT;
            l1 = e.L1;
        };
    }

    public static void Save()
    {
        var text = $"{DateTime.Now} CPU[{lt},{l1}...]";
        ... // save text
    }
}
So I have a list with 900+ entries in C#. For every entry in the list a method has to be executed, and these must all run at the same time. First I thought of doing this:
public void InitializeThread()
{
    Thread myThread = new Thread(run);
    myThread.Start();
}

public void run()
{
    foreach (Object o in ObjectList)
    {
        othermethod();
    }
}
Now the problem here is that this will execute 1 method at a time for each entry in the list. But I want every single one of them to be running at the same time.
Then I tried making a separate thread for each entry, like this:
public void InitializeThread()
{
    foreach (Object o in ObjectList)
    {
        Thread myThread = new Thread(run);
        myThread.Start();
    }
}

public void run()
{
    while ( /* thread is allowed to run */ )
    {
        // do stuff
    }
}
But this seems to give me System.OutOfMemoryException errors (not a surprise, since the list has almost 1,000 entries).
Is there a way to successfully run all those methods at the same time, either using multiple threads or only one?
What I'm ultimately trying to achieve is this: I have a GMap and want to have a few markers on it. These markers represent trains. A marker pops up on the GMap at a certain point in time and disappears when it reaches its destination. All the trains move about at the same time on the map.
If I need to post more of the code I tried please let me know.
Thanks in advance!
What you're looking for is Parallel.ForEach:
Executes a foreach operation on an IEnumerable in which iterations may run in parallel.
And you use it like this:
Parallel.ForEach(ObjectList, (obj) =>
{
    // Do parallel work here on each object
});
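As a side note, Parallel.ForEach blocks until all iterations have completed, and it schedules the work on thread-pool threads rather than creating one thread per entry. If the per-item work is heavy you can cap the parallelism with ParallelOptions; a small variant (the option value here is only an example):
var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
Parallel.ForEach(ObjectList, options, obj =>
{
    // Do parallel work here on each object
});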
I'm using a subscriber/notifier pattern to raise and consume events from my .Net middle-tier in C#. Some of the events are raised in "bursts", for instance, when data is persisted from a batch program importing a file. This executes a potentially long-running task, and I'd like to avoid firing the event several times a second by implementing a "quiet period", whereby the event system waits until the event stream slows down to process the event.
How should I do this when the Publisher takes an active role in notifying subscribers? I don't want to wait until an event comes in to check to see if there are others waiting out the quiet period...
There is no host process to poll the subscription model at the moment. Should I abandon the publish/subscribe pattern or is there a better way?
Here's a rough implementation that might point you in a direction. In my example, the task that involves notification is saving a data object. When an object is saved, the Saved event is raised. In addition to a simple Save method, I've implemented BeginSave and EndSave methods as well as an overload of Save that works with those two for batch saves. When EndSave is called, a single BatchSaved event is fired.
Obviously, you can alter this to suit your needs. In my example, I kept track of a list of all objects that were saved during a batch operation, but this may not be something that you'd need to do...you may only care about how many objects were saved or even simply that a batch save operation was completed. If you anticipate a large number of objects being saved, then storing them in a list as in my example may become a memory issue.
EDIT: I added a "threshold" concept to my example that attempts to prevent a large number of objects being held in memory. This causes the BatchSaved event to fire more frequently, though. I also added some locking to address potential thread safety, though I may have missed something there.
class DataConcierge<T>
{
// *************************
// Simple save functionality
// *************************
public void Save(T dataObject)
{
// perform save logic
this.OnSaved(dataObject);
}
public event DataObjectSaved<T> Saved;
protected void OnSaved(T dataObject)
{
var saved = this.Saved;
if (saved != null)
saved(this, new DataObjectEventArgs<T>(dataObject));
}
// ************************
// Batch save functionality
// ************************
Dictionary<BatchToken, List<T>> _BatchSavedDataObjects = new Dictionary<BatchToken, List<T>>();
System.Threading.ReaderWriterLockSlim _BatchSavedDataObjectsLock = new System.Threading.ReaderWriterLockSlim();
int _SavedObjectThreshold = 17; // if the number of objects being stored for a batch reaches this threshold, then those objects are to be cleared from the list.
public BatchToken BeginSave()
{
// create a batch token to represent this batch
BatchToken token = new BatchToken();
_BatchSavedDataObjectsLock.EnterWriteLock();
try
{
_BatchSavedDataObjects.Add(token, new List<T>());
}
finally
{
_BatchSavedDataObjectsLock.ExitWriteLock();
}
return token;
}
public void EndSave(BatchToken token)
{
List<T> batchSavedDataObjects;
_BatchSavedDataObjectsLock.EnterWriteLock();
try
{
if (!_BatchSavedDataObjects.TryGetValue(token, out batchSavedDataObjects))
throw new ArgumentException("The BatchToken is expired or invalid.", "token");
this.OnBatchSaved(batchSavedDataObjects); // this causes a single BatchSaved event to be fired
if (!_BatchSavedDataObjects.Remove(token))
throw new ArgumentException("The BatchToken is expired or invalid.", "token");
}
finally
{
_BatchSavedDataObjectsLock.ExitWriteLock();
}
}
public void Save(BatchToken token, T dataObject)
{
List<T> batchSavedDataObjects;
// the read lock prevents EndSave from executing before this Save method has a chance to finish executing
_BatchSavedDataObjectsLock.EnterReadLock();
try
{
if (!_BatchSavedDataObjects.TryGetValue(token, out batchSavedDataObjects))
throw new ArgumentException("The BatchToken is expired or invalid.", "token");
// perform save logic
this.OnBatchSaved(batchSavedDataObjects, dataObject);
}
finally
{
_BatchSavedDataObjectsLock.ExitReadLock();
}
}
public event BatchDataObjectSaved<T> BatchSaved;
protected void OnBatchSaved(List<T> batchSavedDataObjects)
{
lock (batchSavedDataObjects)
{
var batchSaved = this.BatchSaved;
if (batchSaved != null)
batchSaved(this, new BatchDataObjectEventArgs<T>(batchSavedDataObjects));
}
}
protected void OnBatchSaved(List<T> batchSavedDataObjects, T savedDataObject)
{
// add the data object to the list storing the data objects that have been saved for this batch
lock (batchSavedDataObjects)
{
batchSavedDataObjects.Add(savedDataObject);
// if the threshold has been reached
if (_SavedObjectThreshold > 0 && batchSavedDataObjects.Count >= _SavedObjectThreshold)
{
// then raise the BatchSaved event with the data objects that we currently have
var batchSaved = this.BatchSaved;
if (batchSaved != null)
batchSaved(this, new BatchDataObjectEventArgs<T>(batchSavedDataObjects.ToArray()));
// and clear the list to ensure that we are not holding on to the data objects unnecessarily
batchSavedDataObjects.Clear();
}
}
}
}
class BatchToken
{
static int _LastId = 0;
static object _IdLock = new object();
static int GetNextId()
{
lock (_IdLock)
{
return ++_LastId;
}
}
public BatchToken()
{
this.Id = GetNextId();
}
public int Id { get; private set; }
}
class DataObjectEventArgs<T> : EventArgs
{
public T DataObject { get; private set; }
public DataObjectEventArgs(T dataObject)
{
this.DataObject = dataObject;
}
}
delegate void DataObjectSaved<T>(object sender, DataObjectEventArgs<T> e);
class BatchDataObjectEventArgs<T> : EventArgs
{
public IEnumerable<T> DataObjects { get; private set; }
public BatchDataObjectEventArgs(IEnumerable<T> dataObjects)
{
this.DataObjects = dataObjects;
}
}
delegate void BatchDataObjectSaved<T>(object sender, BatchDataObjectEventArgs<T> e);
In my example, I choose to use a token concept in order to create separate batches. This allows smaller batch operations running on separate threads to complete and raise events without waiting for a larger batch operation to complete.
I made separate events: Saved and BatchSaved. However, these could just as easily be consolidated into a single event.
EDIT: fixed race conditions pointed out by Steven Sudit on accessing the event delegates.
EDIT: revised locking code in my example to use ReaderWriterLockSlim rather than Monitor (i.e. the "lock" statement). I think there were a couple of race conditions, such as between the Save and EndSave methods. It was possible for EndSave to execute, causing the list of data objects to be removed from the dictionary. If the Save method was executing at the same time on another thread, it would be possible for a data object to be added to that list, even though it had already been removed from the dictionary.
In my revised example, this situation can't happen and the Save method will throw an exception if it executes after EndSave. These race conditions were caused primarily by me trying to avoid what I thought was unnecessary locking. I realized that more code needed to be within a lock, but decided to use ReaderWriterLockSlim instead of Monitor because I only wanted to prevent Save and EndSave from executing at the same time; there wasn't a need to prevent multiple threads from executing Save at the same time. Note that Monitor is still used to synchronize access to the specific list of data objects retrieved from the dictionary.
EDIT: added usage example
Below is a usage example for the above sample code.
static void DataConcierge_Saved(object sender, DataObjectEventArgs<Program.Customer> e)
{
Console.WriteLine("DataConcierge<Customer>.Saved");
}
static void DataConcierge_BatchSaved(object sender, BatchDataObjectEventArgs<Program.Customer> e)
{
Console.WriteLine("DataConcierge<Customer>.BatchSaved: {0}", e.DataObjects.Count());
}
static void Main(string[] args)
{
DataConcierge<Customer> dc = new DataConcierge<Customer>();
dc.Saved += new DataObjectSaved<Customer>(DataConcierge_Saved);
dc.BatchSaved += new BatchDataObjectSaved<Customer>(DataConcierge_BatchSaved);
var token = dc.BeginSave();
try
{
for (int i = 0; i < 100; i++)
{
var c = new Customer();
// ...
dc.Save(token, c);
}
}
finally
{
dc.EndSave(token);
}
}
This resulted in the following output:
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 17
DataConcierge<Customer>.BatchSaved: 15
The threshold in my example is set to 17, so a batch of 100 items causes the BatchSaved event to fire 6 times.
I am not sure if I understood your question correctly, but I would try to fix the problem at the source - make sure the events are not raised in "bursts". You could consider implementing batch operations, which could be used from the file-importing program. This would be treated as a single operation in your middle tier and raise a single event.
I think it will be very tricky to implement some reasonable solution if you can't make the change outlined above - you could try to wrap your publisher in a "caching" publisher, which would implement some heuristic to cache the events if they are coming in bursts. The easiest would be to cache an event if another one of the same type is being currently processed (so your batch would cause at least 2 events - one at the very beginning, and one at the end). You could wait for a short time and only raise an event when the next one hasn't come during that time, but you get a time lag even if there is a single event in the pipeline. You also need to make sure you will raise the event from time to time even if there is constant queue of events - otherwise the publishers will potentially get starved.
The second option is tricky to implement and will contain heuristics, which might go very wrong...
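For what it's worth, here is a rough sketch of that "caching publisher" idea under the simplest heuristic (all names here are illustrative): buffer incoming notifications and raise a single event once nothing new has arrived for a quiet period. It deliberately omits the "flush periodically even under constant load" refinement mentioned above:
public class QuietPeriodPublisher
{
    private readonly TimeSpan _quietPeriod;
    private readonly object _gate = new object();
    private readonly System.Threading.Timer _timer;
    private int _pendingCount;

    // payload: how many events were buffered during the burst
    public event EventHandler<int> BurstCompleted;

    public QuietPeriodPublisher(TimeSpan quietPeriod)
    {
        _quietPeriod = quietPeriod;
        _timer = new System.Threading.Timer(_ => Flush());
    }

    // call this from the original event handler
    public void Notify()
    {
        lock (_gate)
        {
            _pendingCount++;
            // restart the countdown; the buffered event only fires once things go quiet
            _timer.Change(_quietPeriod, System.Threading.Timeout.InfiniteTimeSpan);
        }
    }

    private void Flush()
    {
        int count;
        lock (_gate)
        {
            count = _pendingCount;
            _pendingCount = 0;
        }
        if (count > 0)
            BurstCompleted?.Invoke(this, count);
    }
}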
Here's one idea that's just fallen out of my head. I don't know how workable it is and can't see an obvious way to make it more generic, but it might be a start. All it does is provide a buffer for button click events (substitute with your event as necessary).
class ButtonClickBuffer
{
    public event EventHandler BufferedClick;

    public ButtonClickBuffer(Button button, int queueSize)
    {
        this.queueSize = queueSize;
        button.Click += this.button_Click;
    }

    private int queueSize;
    private List<EventArgs> queuedEvents = new List<EventArgs>();

    private void button_Click(object sender, EventArgs e)
    {
        queuedEvents.Add(e);
        if (queuedEvents.Count >= queueSize)
        {
            if (this.BufferedClick != null)
            {
                foreach (var args in this.queuedEvents)
                {
                    this.BufferedClick(sender, args);
                }
                queuedEvents.Clear();
            }
        }
    }
}
So your subscriber, instead of subscribing as:
this.button1.Click += this.button1_Click;
Would use a buffer, specifying how many events to wait for:
ButtonClickBuffer buffer = new ButtonClickBuffer(this.button1, 5);
buffer.BufferedClick += this.button1_Click;
It works in a simple test form I knocked up, but it's far from production-ready!
You said you didn't want to wait for an event to see if there is a queue waiting, which is exactly what this does. You could substitute the logic inside the buffer to spawn a new thread which monitors the queue and dispatches events as necessary. God knows what threading and locking issues might arise from that!
I've a little problem with this code:
This is the "main" method of the app:
private Thread main_process;
private Clases.GestorTR processor;

public void begin()
{
    processor = new Clases.GestorTR();
    main_process = new Thread(new ThreadStart(processor.ExecuteP));
    main_process.Start();
}
I've created a Thread to process the other "transaction threads" to avoid blocking the GUI.
This is the ExecuteP method on the processor object:
public void ExecuteP()
{
    // Read a DataTable of DB transactions, filled with numbers
    foreach (DataRow dr in dtResults.Rows)
    {
        int Local_number = Convert.ToInt32(dr["autonum"].ToString());
        ThreadStart starter;
        starter = delegate { new QueryBD.QueryCounter(Local_number); };
        new Thread(starter).Start();
    }
}
This is the QueryCounter method of the QueryBD class:
....
private void QueryCounter(int _counter)
{
    logs.log("ON QUERY_PROCESS: " + _counter);
}
...
Now, the problem: when calling the delegate, some threads are crossing parameters. For example, in the foreach loop the log shows the correct sequence (1,2,3,4,5,6,7,8), but in the QueryCounter method (called each time from the new thread) the log shows, for example, (1,1,1,4,5,6,6,8). I've also tried using locks, but the problem is the same. I also tested the ThreadPool approach, with the same result.
I think I'm missing something in the foreach loop, because if I debug the first run, the thread is started but nothing appears in the log.
Thanks!
You should try to change some parts of your code like this:
public void ExecuteP()
{
    QueryBD facade = new QueryBD();
    foreach (DataRow dr in dtResults.Rows)
    {
        int Local_number = Convert.ToInt32(dr["autonum"].ToString());
        new Thread(new ParameterizedThreadStart(facade.QueryCounter)).Start(Local_number);
    }
}
public void QueryCounter(object _counter)
{
    ...
}
Hope it works.
Btw, I've created one object called facade and I'm passing that object to various threads. This can result in side effects if there is thread-sensitive code in the facade object, so you could also consider locking there:
public void QueryCounter(object _counter)
{
    lock (this)
    {
        //
    }
}
or providing a new QueryBD to each thread, but that can affect performance.
EDIT: Hey, 4 things:
When using a ParameterizedThreadStart, the variable passed to the thread's Start method (thread.Start(variable)) is copied at the time of the call, and that copy is what the child thread uses. An anonymous delegate works differently: it keeps a reference to the variable, so by the time the child thread uses it, the parent thread may already have changed it. That is why you saw unpredictable behaviour. (See the short sketch after this list for the difference.)
Better explanation you can find here: Differing behavior when starting a thread: ParameterizedThreadStart vs. Anonymous Delegate. Why does it matter?.
Performance depends. If creating your object is heavy (e.g. it opens a new DB connection each time it is created), performance can be seriously affected by creating many such objects; that is where the lock is better. If creating the object is light, you can create as many objects as you want. It depends.
If you want your code to run in a defined order, you shouldn't use threads at all. If you want to preserve execution order, sequential invoking is the right way - see Hans Passant's explanation.
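To illustrate point 1, here is a small sketch (not the question's code) contrasting a delegate that captures a shared loop variable with ParameterizedThreadStart, which copies the value at the moment Start is called:
for (int i = 0; i < 8; i++)
{
    // captures the variable i itself; by the time the thread runs, i may have moved on
    new Thread(() => Console.WriteLine("captured: " + i)).Start();

    // copies the current value of i into the thread's argument when Start(i) is called
    new Thread(n => Console.WriteLine("copied: " + n)).Start(i);
}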