There is already an open DataReader ... even though it is not - c#

Note: I've gone through millions of questions where the issue is not disposing the reader/connection properly, or where the error is caused by badly handled lazy loading. I believe this issue is a different one, and probably related to MySQL's .NET connector.
I'm using a MySQL server (5.6) database extensively through its .NET connector (6.8.3). All tables are created with the MyISAM engine for performance reasons. I have only one process with one thread (update: in fact, that's not true, see below) accessing the DB sequentially, so there is no need for transactions and concurrency.
Today, after many hours of processing, the following piece of code:
public IEnumerable<VectorTransition> FindWithSourceVector(double[] sourceVector)
{
var sqlConnection = this.connectionPool.Take();
this.selectWithSourceVectorCommand.Connection = sqlConnection;
this.selectWithSourceVectorCommand.Parameters["#epsilon"].Value
= this.epsilonEstimator.Epsilon.Min() / 10;
for (int d = 0; d < this.dimensionality; ++d)
{
this.selectWithSourceVectorCommand.Parameters["#source_" + d.ToString()]
.Value = sourceVector[d];
}
// *** the following line (201) throws the exception presented below
using (var reader = this.selectWithSourceVectorCommand.ExecuteReader())
{
while (reader.Read())
{
yield return ReaderToVectorTransition(reader);
}
}
this.connectionPool.Putback(sqlConnection);
}
threw the following exception:
MySqlException: There is already an open DataReader associated with this Connection which must be closed first.
Here is the relevant part of the stack trace:
at MySql.Data.MySqlClient.ExceptionInterceptor.Throw(Exception exception)
at MySql.Data.MySqlClient.MySqlConnection.Throw(Exception ex)
at MySql.Data.MySqlClient.MySqlCommand.CheckState()
at MySql.Data.MySqlClient.MySqlCommand.ExecuteReader(CommandBehavior behavior)
at MySql.Data.MySqlClient.MySqlCommand.ExecuteReader()
at implementation.VectorTransitionsMySqlTable.<FindWithSourceVector>d__27.MoveNext() in C:\Users\bartoszp...\implementation\VectorTransitionsMySqlTable.cs:line 201
at System.Linq.Enumerable.<TakeIterator>d__3a`1.MoveNext()
at System.Linq.Buffer`1..ctor(IEnumerable`1 source)
at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)
at implementation.VectorTransitionService.Add(VectorTransition vectorTransition) in C:\Users\bartoszp...\implementation\VectorTransitionService.cs:line 38
at Program.Go[T](Environment`2 p, Space parentSpace, EpsilonEstimator epsilonEstimator, ThresholdEstimator thresholdEstimator, TransitionTransformer transitionTransformer, AmbiguityCalculator ac, VectorTransitionsTableFactory vttf, AxesTableFactory atf, NeighbourhoodsTableFactory ntf, AmbiguitySamplesTableFactory astf, AmbiguitySampleMatchesTableFactory asmtf, MySqlConnectionPool connectionPool, Boolean rejectDuplicates, Boolean addNew) in C:\Users\bartoszp...\Program.cs:line 323
The connectionPool.Take returns the first connection that satisfies the following predicate:
private bool IsAvailable(MySqlConnection connection)
{
var result = false;
try
{
if (connection != null
&& connection.State == System.Data.ConnectionState.Open)
{
result = connection.Ping();
}
}
catch (Exception e)
{
Console.WriteLine("Ping exception: " + e.Message);
}
return result && connection.State == System.Data.ConnectionState.Open;
}
(This is related to my previous question, where I resolved a different but similar issue: MySQL fatal error during information_schema query (software caused connection abort))
The FindWithSourceVector method is called by the following piece of code:
var existing
= this.vectorTransitionsTable
.FindWithSourceVector(vectorTransition.SourceVector)
.Take(2)
.ToArray();
(I need to find at most two duplicate vectors) - this is the VectorTransitionService.cs:line 38 part of the stack trace.
Now the most interesting part: when the debugger stopped execution after the exception occurred, I investigated the sqlConnection object and found that it doesn't have a reader associated with it (picture below)!
Why is this happening (apparently at "random" - this method was being called almost every minute for the last ~20h)? Can I avoid it (in ways other than guess-adding some sleeps when Ping throws an exception and praying it'll help)?
Additional information regarding the implementation of the connection pool:
Get is intended for methods that run only simple queries and don't use readers, so the returned connection can be used in a re-entrant way. It is not used directly in this example (because a reader is involved):
public MySqlConnection Get()
{
var result = this.connections.FirstOrDefault(IsAvailable);
if (result == null)
{
Reconnect();
result = this.connections.FirstOrDefault(IsAvailable);
}
return result;
}
The Reconnect method just iterates through the whole array and recreates and reopens the connections.
Take uses Get but also removes the returned connection from the list of available connections, so that when a method holding a reader calls other methods that also need a connection, the connection is not shared. That is not the case here, as FindWithSourceVector is simple (it doesn't call other methods that use the DB); Take is used purely by convention: if a reader is involved, use Take:
public MySqlConnection Take()
{
var result = this.Get();
var index = Array.IndexOf(this.connections, result);
this.connections[index] = null;
return result;
}
Putback just puts a connection into the first empty spot, or simply forgets about it if the connection pool is full:
public void Putback(MySqlConnection mySqlConnection)
{
int index = Array.IndexOf(this.connections, null);
if (index >= 0)
{
this.connections[index] = mySqlConnection;
}
else if (mySqlConnection != null)
{
mySqlConnection.Close();
mySqlConnection.Dispose();
}
}

I suspect this is the problem, at the end of the method:
this.connectionPool.Putback(sqlConnection);
You're only taking two elements from the iterator - so you never complete the while loop unless there's actually only one value returned from the reader. Now you're using LINQ, which will automatically be calling Dispose() on the iterator, so your using statement will still be disposing of the reader - but you're not putting the connection back in the pool. If you do that in a finally block, I think you'll be okay:
var sqlConnection = this.connectionPool.Take();
try
{
// Other stuff here...
using (var reader = this.selectWithSourceVectorCommand.ExecuteReader())
{
while (reader.Read())
{
yield return ReaderToVectorTransition(reader);
}
}
}
finally
{
this.connectionPool.Putback(sqlConnection);
}
Or ideally, if your connection pool is your own implementation, make Take return something which implements IDisposable and returns the connection back to the pool when it's done.
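For illustration, here is a minimal sketch of such a disposable wrapper, assuming a hypothetical TakeScoped method on the pool (the PooledConnection type and the TakeScoped name are made up here, not part of the original pool):
using MySql.Data.MySqlClient;

public sealed class PooledConnection : IDisposable
{
    private readonly MySqlConnectionPool pool;
    public MySqlConnection Connection { get; private set; }

    public PooledConnection(MySqlConnectionPool pool, MySqlConnection connection)
    {
        this.pool = pool;
        this.Connection = connection;
    }

    public void Dispose()
    {
        // Runs even if the caller abandons the iterator early (e.g. via .Take(2)),
        // because disposing the iterator runs the pending using/finally blocks.
        if (this.Connection != null)
        {
            this.pool.Putback(this.Connection);
            this.Connection = null;
        }
    }
}
The iterator would then wrap its body in using (var lease = this.connectionPool.TakeScoped()) instead of calling Take and Putback by hand.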
Here's a short but complete program to demonstrate what's going on, without any actual databases involved:
using System;
using System.Collections.Generic;
using System.Linq;
class DummyReader : IDisposable
{
private readonly int limit;
private int count = -1;
public int Count { get { return count; } }
public DummyReader(int limit)
{
this.limit = limit;
}
public bool Read()
{
count++;
return count < limit;
}
public void Dispose()
{
Console.WriteLine("DummyReader.Dispose()");
}
}
class Test
{
static IEnumerable<int> FindValues(int valuesInReader)
{
Console.WriteLine("Take from the pool");
using (var reader = new DummyReader(valuesInReader))
{
while (reader.Read())
{
yield return reader.Count;
}
}
Console.WriteLine("Put back in the pool");
}
static void Main()
{
var data = FindValues(2).Take(2).ToArray();
Console.WriteLine(string.Join(",", data));
}
}
As written - modelling the situation with the reader only finding two values - the output is:
Take from the pool
DummyReader.Dispose()
0,1
Note that the reader is disposed, but we never get as far as returning anything from the pool. If you change Main to model the situation where the reader only has one value, like this:
var data = FindValues(1).Take(2).ToArray();
Then we get all the way through the while loop, so the output changes:
Take from the pool
DummyReader.Dispose()
Put back in the pool
0
I suggest you copy my program and experiment with it. Make sure you understand everything about what's going on... then you can apply it to your own code. You might want to read my article on iterator block implementation details too.

TyCobb and Jon Skeet correctly guessed that the problem was the pool implementation and multi-threading. I forgot that I actually did start some tiny Tasks in the Reconnect method. The first connection was created and opened synchronously, but all the others were opened asynchronously.
The idea was that because I only need one connection at a time, the others can reconnect on different threads. However, because I didn't always put the connection back (as explained in Jon's answer), reconnecting happened quite frequently, and because the system was quite loaded these reconnection threads weren't fast enough, which eventually led to race conditions. The fix is to reconnect in a simpler and more straightforward manner:
private void Reconnect()
{
for (int i = 0; i < connections.Length; ++i)
{
if (!IsAvailable(this.connections[i]))
{
this.ReconnectAt(i);
}
}
}
private void ReconnectAt(int index)
{
try
{
this.connections[index] = new MySqlConnection(this.connectionString);
this.connections[index].Open();
}
catch (MySqlException mse)
{
Console.WriteLine("Reconnect error: " + mse.Message);
this.connections[index] = null;
}
}

Related

How to get data from sql database using dapper in parallel/multiple threads?

I am trying to get data from SQL Server using Dapper. I have a requirement to export 460K records stored in an Azure SQL database. I decided to get the data in batches, so I fetch 10k records in each batch. I planned to get the records in parallel, so I added the async methods to a list of tasks and did Task.WhenAll. The code works fine when I run it locally, but after deploying to the k8s cluster I get a data read issue for some records. I am new to multi-threading and I don't know how to handle this issue. I tried to take a lock inside the method, but the system crashes. Below is my code; it might be clumsy because I was trying many solutions to fix the issue.
for (int i = 0; i < numberOfPages; i++)
{
tableviewWithCondition.startRow = startRow;
resultData.Add(_tableviewRepository.GetTableviewRowsByPagination(tableviewExportCondition.TableviewName, modelMappingGroups, tableviewWithCondition.startRow, builder, pageSize, appName, i));
startRow += tableviewWithCondition.pageSize;
}
foreach(var task in resultData)
{
if (task != null)
{
dataToExport.AddRange(task.Result);
}
}
This is the method I implemented to get data from the Azure SQL database using Dapper.
public async Task<(IEnumerable<int> unprocessedData, IEnumerable<dynamic> rowData)> GetTableviewRowsByPagination(string tableName, IEnumerable<MappingGroup> tableviewAttributeDetails,
int startRow, SqlBuilder builder, int pageSize = 100, AppNameEnum appName = AppNameEnum.OptiSoil, int taskNumber = 1)
{
var _unitOfWork = _unitOfWorkServices.Build(appName.ToString());
List<int> unprocessedData = new List<int>();
try
{
var columns = tableviewAttributeDetails.Select(c => { return $"{c.mapping_group_value} [{c.attribute}]"; });
var joinedColumn = string.Join(",", columns);
builder.Select(joinedColumn);
var selector = builder.AddTemplate($"SELECT /**select**/ FROM {tableName} with (nolock) /**innerjoin**/ /**where**/ /**orderby**/ OFFSET {startRow} ROWS FETCH NEXT {(pageSize == 0 ? 100 : pageSize)} ROWS ONLY");
using (var connection = _unitOfWork.Connection)
{
connection.Open();
var data = await connection.QueryAsync(selector.RawSql, selector.Parameters);
Console.WriteLine($"data completed for task{taskNumber}");
return (unprocessedData, data);
}
}
catch(Exception ex)
{
Console.WriteLine($"Exception: {ex.Message}");
if (ex.InnerException != null)
Console.WriteLine($"InnerException: {ex.InnerException.Message}");
Console.WriteLine($"Error in fetching from row {startRow}");
unprocessedData.Add(startRow);
return (unprocessedData, null);
}
finally
{
_unitOfWork.Dispose();
}
}
The above code works fine locally, but on the server I get the issue below.
Exception: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 35 - An internal exception was caught).
InnerException: The WriteAsync method cannot be called when another write operation is pending.
How can I avoid this issue when fetching data in parallel tasks?
You're using the same connection and trying to execute multiple commands over it (I'm assuming this because of the naming). Also, should you be disposing the unit of work?
Rather than:
using (var connection = _unitOfWork.Connection)
{
connection.Open();
var data = await connection.QueryAsync(selector.RawSql, selector.Parameters);
Console.WriteLine($"data completed for task{taskNumber}");
return (unprocessedData, data);
}
Create a new connection for each item, if this is what you truly want to do. I imagine, and this is an educated guess, that it's working locally because of timing.
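A rough sketch of that, assuming a plain connection string is available rather than the question's _unitOfWork (the names here are illustrative, not the question's API):
using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Microsoft.Data.SqlClient;

public static class PageFetcher
{
    // Each call opens its own connection, so parallel tasks never share one
    // physical connection object; ADO.NET connection pooling keeps this cheap.
    public static async Task<IEnumerable<dynamic>> FetchPageAsync(
        string connectionString, string sql, object parameters)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            await connection.OpenAsync();
            return await connection.QueryAsync(sql, parameters);
        }
    }
}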
Also look into Task.WhenAll; it's a better way to collect all the results. Rather than:
foreach(var task in resultData)
{
if (task != null)
{
dataToExport.AddRange(task.Result);
}
}
calling .Result on a task is usually bad practice.
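For example, here is a sketch of the calling loop using await Task.WhenAll instead of reading task.Result (variable names follow the question, the surrounding method is assumed to be async, and the tuple shape matches the repository method above):
var tasks = new List<Task<(IEnumerable<int> unprocessedData, IEnumerable<dynamic> rowData)>>();
for (int i = 0; i < numberOfPages; i++)
{
    tableviewWithCondition.startRow = startRow;
    tasks.Add(_tableviewRepository.GetTableviewRowsByPagination(
        tableviewExportCondition.TableviewName, modelMappingGroups,
        startRow, builder, pageSize, appName, i));
    startRow += tableviewWithCondition.pageSize;
}

var results = await Task.WhenAll(tasks);   // no blocking .Result calls
foreach (var (unprocessed, rows) in results)
{
    if (rows != null)
    {
        dataToExport.AddRange(rows);
    }
}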

Best Practices for SQL Statements/Connections in Get() Request

For simple lookups, I need to perform some SQL statements on a DB2 machine. I'm not able to use an ORM at the moment. I have a working example in the code below, but I'm wondering if it can be optimized further, as this essentially creates a connection on each request. And that just seems like bad programming.
Is there a way I can optimize this Get() request to leave a connection open? Nesting using statements seems dirty as well. How should I handle the fact that Get() really wants to return a User object no matter what, even on error? Can I open this connection at the start of the program so that I can use it over and over again? What are some of the best practices for this?
public class UsersController : ApiController
{
String constr = WebConfigurationManager.ConnectionStrings["DB2Connection"].ConnectionString;
public User Get([FromUri] User cst)
{
if (cst == null)
{
throw new HttpResponseException(HttpStatusCode.NotFound);
}
else
{
using (OdbcConnection DB2Conn = new OdbcConnection(constr))
{
DB2Conn.Open();
using (OdbcCommand com = new OdbcCommand(
// Generic SQL Statement
"SELECT * FROM [TABLE] WHERE customerNumber = ?", DB2Conn))
{
com.Parameters.AddWithValue("#var", cst.customerNumber);
using (OdbcDataReader reader = com.ExecuteReader())
{
try
{
while (reader.Read())
{
cst.name = (string)reader["name"];
return cst;
}
}
catch
{
throw;
}
}
}
}
return cst;
}
}
}
I found a great question that doesn't really have detailed answers; I feel like similar solutions exist for both of these questions...
And that just seems like bad programming.
Why do you think that?
The underlying system should be maintaining connections in a connection pool for you. Creating a connection should be very optimized already.
From a logical perspective, what you're doing now is exactly what you want to be doing. Create the connection, use it, and dispose of it immediately. This allows other threads/processes/etc. to use it from the connection pool now that you're done with it.
This also avoids the myriad of problems which arise from manually maintaining your open connections outside of the code that uses them.
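As a small illustration of that point (a sketch, assuming ODBC connection pooling is enabled for the driver and a valid connection string), repeated open/close cycles are typically served from the pool after the first one:
using System;
using System.Data.Odbc;
using System.Diagnostics;

class PoolingDemo
{
    static void Main()
    {
        // Illustrative connection string; replace with a real DSN.
        const string constr = "DSN=SampleDb;Uid=user;Pwd=secret;";

        for (int i = 0; i < 3; i++)
        {
            var sw = Stopwatch.StartNew();
            using (var conn = new OdbcConnection(constr))
            {
                conn.Open();   // after the first iteration this usually comes from the pool
            }
            sw.Stop();
            Console.WriteLine($"Open/close #{i + 1}: {sw.ElapsedMilliseconds} ms");
        }
    }
}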
Is there a way I can optimize this Get() request to leave a connection open?
Have you measured an actual performance problem? If not, there's nothing to optimize.
And there's a very good chance that hanging on to open connections in a static context in your web application is going to have drastic performance implications.
In short... You're already doing this correctly. (Well, except for that unnecessary try/catch. You can remove that.)
Edit: If you're just looking to improve the readability of the code (which itself is a matter of personal preference), this seems readable to me:
public User Get([FromUri] User cst)
{
if (cst == null)
throw new HttpResponseException(HttpStatusCode.NotFound);
using (var DB2Conn = new OdbcConnection(constr))
using (var com = new OdbcCommand("SELECT * FROM [TABLE] WHERE customerNumber = ?", DB2Conn))
{
com.Parameters.AddWithValue("#var", cst.customerNumber);
DB2Conn.Open();
using (OdbcDataReader reader = com.ExecuteReader())
while (reader.Read())
{
cst.name = (string)reader["name"]
return cst;
}
}
return cst;
}
Note that you can further improve it by re-addressing the logic of that SQL query. Since you're fetching one value from one record, you don't need to loop over a data reader. Just fetch a single value and return it. Note that this is free-hand and untested, but it might look something like this:
public User Get([FromUri] User cst)
{
if (cst == null)
throw new HttpResponseException(HttpStatusCode.NotFound);
using (var DB2Conn = new OdbcConnection(constr))
using (var com = new OdbcCommand("SELECT name FROM [TABLE] WHERE customerNumber = ? FETCH FIRST 1 ROWS ONLY", DB2Conn))
{
com.Parameters.AddWithValue("#var", cst.customerNumber);
DB2Conn.Open();
cst.name = (string)com.ExecuteScalar();
}
return cst;
}
@David's answer addresses your actual questions perfectly, but here are some other observations that may make your code a little more palatable to you:
remove the try/catch block - all you're doing is re-throwing the exception, which is what will happen if you don't use a try/catch at all. Don't catch the exception unless you can do something about it. (I see now that @David's answer addresses that - either it was added after I read it or I missed it - my apologies for the overlap, but it's worth reinforcing)
Change your query to just pull name and use ExecuteScalar instead of ExecuteReader. You are taking the name value from the first record and exiting the while loop. ExecuteScalar returns the value from the first column of the first record, so you can eliminate the while loop and the using block there.

Monitor.TryEnter for multiple resources

I tried searching for this but did not find a suggestion well suited to the issue I am facing.
My issue is that we have a list/stack of available resources (calculation engines). These resources are used to perform certain calculations.
The request to perform a calculation is triggered by an external process. So when a request for calculation is made, I need to check whether any of the available resources is currently free (not performing another calculation); if none is, wait for some time and check again.
I was wondering what the best way to implement this is. I have the following code in place, but I'm not sure it is very safe.
If you have any further suggestions, that will be great:
void Process(int retries = 0) {
CalcEngineConnection connection = null;
bool securedConnection = false;
foreach (var calcEngineConnection in _connections) {
securedConnection = Monitor.TryEnter(calcEngineConnection);
if (securedConnection) {
connection = calcEngineConnection;
break;
}
}
if (securedConnection) {
//Dequeue the next request
var calcEnginePool = _pendingPool.Dequeue();
//Perform the operation and exit.
connection.RunCalc(calcEnginePool);
Monitor.Exit(connection);
}
else {
if (retries < 10)
retries += 1;
Thread.Sleep(200);
Process(retries);
}
}
I'm not sure that using Monitor is the best approach here anyway, but if you do decide to go that route, I'd refactor the above code to:
bool TryProcessWithRetries(int retries) {
for (int attempt = 0; attempt < retries; attempt++) {
if (TryProcess()) {
return true;
}
Thread.Sleep(200);
}
// Throw an exception here instead?
return false;
}
bool TryProcess() {
foreach (var connection in _connections) {
if (TryProcess(connection)) {
return true;
}
}
return false;
}
bool TryProcess(CalcEngineConnection connection) {
if (!Monitor.TryEnter(connection)) {
return false;
}
try {
var calcEnginePool = _pendingPool.Dequeue();
connection.RunCalc(calcEnginePool);
} finally {
Monitor.Exit(connection);
}
return true;
}
This decomposes the three pieces of logic:
Retrying several times
Trying each connection in a collection
Trying a single connection
It also avoids using recursion for the sake of it, and puts the Monitor.Exit call into a finally block, which it absolutely should be in.
You could replace the middle method implementation with:
return _connections.Any(TryProcess);
... but that may be a little too "clever" for its own good.
Personally I'd be tempted to move TryProcess into CalcEngineConnection itself - that way this code doesn't need to know about whether or not the connection is able to process something - it's up to the object itself. It means you can avoid having publicly visible locks, and also it would be flexible if some resources could (say) process two requests at a time in the future.
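A rough sketch of that idea (the lock field and the shape of the work delegate are assumptions, since CalcEngineConnection isn't shown in the question):
using System;
using System.Threading;

public class CalcEngineConnection
{
    private readonly object gate = new object();

    // Returns false immediately if this engine is busy; otherwise runs the
    // supplied work while holding this engine's private lock.
    public bool TryProcess(Action<CalcEngineConnection> work)
    {
        if (!Monitor.TryEnter(gate))
        {
            return false;
        }
        try
        {
            work(this);
            return true;
        }
        finally
        {
            Monitor.Exit(gate);
        }
    }
}
The caller would then do something like connection.TryProcess(c => c.RunCalc(_pendingPool.Dequeue())) and never needs to see the lock at all.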
There are multiple issues that could potentially occur, but let's simplify your code first:
void Process(int retries = 0)
{
foreach (var connection in _connections)
{
if(Monitor.TryEnter(connection))
{
try
{
//Dequeue the next request
var calcEnginePool = _pendingPool.Dequeue();
//Perform the operation and exit.
connection.RunCalc(calcEnginePool);
}
finally
{
// Release the lock
Monitor.Exit(connection);
}
return;
}
}
if (retries < 10)
{
Thread.Sleep(200);
Process(retries+1);
}
}
This will correctly protect your connection, but note that one of the assumptions here is that your _connections list is safe and it will not be modified by another thread.
Furthermore, you might want to use a thread-safe queue for the _connections, because at certain load levels you might end up using only the first few connections (not sure if that will make a difference). In order to use all of your connections relatively evenly, I would place them in a queue and dequeue them. This also guarantees that no two threads use the same connection, and you don't have to use Monitor.TryEnter().
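For instance, a sketch using ConcurrentQueue (the connection and request types are assumed from the question):
using System.Collections.Concurrent;
using System.Threading;

class CalcEngineDispatcher
{
    private readonly ConcurrentQueue<CalcEngineConnection> connections;

    public CalcEngineDispatcher(ConcurrentQueue<CalcEngineConnection> connections)
    {
        this.connections = connections;
    }

    // A dequeued engine is invisible to other threads until it is enqueued
    // again, so no Monitor.TryEnter is needed.
    public bool TryProcess(object request, int retries = 10)
    {
        for (int attempt = 0; attempt < retries; attempt++)
        {
            if (connections.TryDequeue(out var connection))
            {
                try
                {
                    connection.RunCalc(request);
                    return true;
                }
                finally
                {
                    connections.Enqueue(connection);   // always return the engine
                }
            }
            Thread.Sleep(200);
        }
        return false;
    }
}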

Need to know if my threading lock does what it is supposed to in .Net?

I have an application that, before it creates a thread, calls the database to pull X amount of records. When the records are retrieved from the database, a locked flag is set so those records are not pulled again.
Once a thread has completed, it will pull some more records from the database. When I call the database from a thread, should I set a lock on that section of code so it is called by only one thread at a time? Here is an example of my code (I commented the area where I have the lock):
private void CreateThreads()
{
for(var i = 1; i <= _threadCount; i++)
{
var adapter = new Dystopia.DataAdapter();
var records = adapter.FindAllWithLocking(_recordsPerThread,_validationId,_validationDateTime);
if(records != null && records.Count > 0)
{
var paramss = new ArrayList { i, records };
ThreadPool.QueueUserWorkItem(ThreadWorker, paramss);
}
this.Update();
}
}
private void ThreadWorker(object paramList)
{
try
{
var parms = (ArrayList) paramList;
var stopThread = false;
var threadCount = (int) parms[0];
var records = (List<Candidates>) parms[1];
var runOnce = false;
var adapter = new Dystopia.DataAdapter();
var lastCount = records.Count;
var runningCount = 0;
while (_stopThreads == false)
{
if (records.Count > 0)
{
foreach (var record in records)
{
var proc = new ProcRecords();
var rec = record;
proc.Validate(ref rec);
adapter.Update(rec);
if (_stopThreads)
{
break;
}
}
//This is where I think I may need to sync the threads.
//Is this correct?
lock(this){
records = adapter.FindAllWithLocking(_recordsPerThread, _validationId, _validationDateTime);
}
}
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
SQL to Pull records:
WITH cte AS (
SELECT TOP (@topCount) *
FROM Candidates WITH (READPAST)
WHERE
isLocked = 0 and
isTested = 0 and
validated = 0
)
UPDATE cte
SET
isLocked = 1,
validationID = @validationId,
validationDateTime = @validationDateTime
OUTPUT INSERTED.*;
You shouldn't need to lock your threads as the database should be doing this on the request for you.
I see a few issues.
First, you are testing _stopThreads == false, but you have not revealed whether this is a volatile read. Read the second half of this answer for a good description of what I am talking about.
Second, the lock is pointless because adapter is a local reference to a non-shared object and records is a local reference which is just being replaced. I am assuming that the adapter makes a separate connection to the database, but if it shares an existing connection then some type of synchronization may need to take place, since ADO.NET connection objects are not typically thread-safe.
Now, you probably will need locking somewhere to publish the results from the work item. I do not see where the results are being published to the main thread so I cannot offer any guidance here.
By the way, I would avoid showing a message box from a ThreadPool thread. The reason being that this will hang that thread until the message box closes.
You shouldn't lock(this), since it's really easy to create deadlocks that way; you should create a separate lock object. If you search for "lock(this)" you can find numerous articles on why.
Here's an SO question on lock(this)
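A minimal sketch of that change, reusing the field names from the question (the lock field name is illustrative):
// A private object that only this class can lock on.
private readonly object _recordsLock = new object();

// ...inside ThreadWorker, instead of lock(this):
lock (_recordsLock)
{
    records = adapter.FindAllWithLocking(_recordsPerThread, _validationId, _validationDateTime);
}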

Dual-queue producer-consumer in .NET (forcing member variable flush)

I have a thread which produces data in the form of simple object (record). The thread may produce a thousand records for each one that successfully passes a filter and is actually enqueued. Once the object is enqueued it is read-only.
I have one lock, which I acquire once the record has passed the filter, and I add the item to the back of the producer_queue.
On the consumer thread, I acquire the lock, confirm that the producer_queue is not empty,
set consumer_queue to equal producer_queue, create a new (empty) queue, and set it as producer_queue. Without any further locking I process consumer_queue until it's empty, and repeat.
Everything works beautifully on most machines, but on one particular dual-quad server I see in ~1/500k iterations an object that is not fully initialized when I read it out of consumer_queue. The condition is so fleeting that when I dump the object after detecting the condition the fields are correct 90% of the time.
So my question is this: how can I assure that the writes to the object are flushed to main memory when the queue is swapped?
Edit:
On the producer thread:
(producer_queue above is m_fillingQueue; consumer_queue above is m_drainingQueue)
private void FillRecordQueue() {
while (!m_done) {
int count;
lock (m_swapLock) {
count = m_fillingQueue.Count;
}
if (count > 5000) {
Thread.Sleep(60);
} else {
DataRecord rec = GetNextRecord();
if (rec == null) break;
lock (m_swapLock) {
m_fillingQueue.AddLast(rec);
}
}
}
}
In the consumer thread:
private DataRecord Next(bool remove) {
bool drained = false;
while (!drained) {
if (m_drainingQueue.Count > 0) {
DataRecord rec = m_drainingQueue.First.Value;
if (remove) m_drainingQueue.RemoveFirst();
if (rec.Time < FIRST_VALID_TIME) {
throw new InvalidOperationException("Detected invalid timestamp in Next(): " + rec.Time + " from record " + rec);
}
return rec;
} else {
lock (m_swapLock) {
m_drainingQueue = m_fillingQueue;
m_fillingQueue = new LinkedList<DataRecord>();
if (m_drainingQueue.Count == 0) drained = true;
}
}
}
return null;
}
The producer is rate-limited, so it can't get too far ahead of the consumer.
The behavior I see is that sometimes the Time field is reading as DateTime.MinValue; by the time I construct the string to throw the exception, however, it's perfectly fine.
Have you tried the obvious: is the microcode update applied on the fancy 8-core box (via BIOS update)? Did you run Windows Update to get the latest processor driver?
At first glance, it looks like you're locking your containers. So I am recommending the systems approach, as it sounds like you're not seeing this issue on a good ol' dual-core box.
Assuming these are in fact the only methods that interact with the m_fillingQueue variable, and that DataRecord cannot be changed after GetNextRecord() creates it (read-only properties hopefully?), then the code at least on the face of it appears to be correct.
In which case I suggest that GregC's answer would be the first thing to check; make sure the failing machine is fully updated (OS / drivers / .NET Framework), because the lock statement should involve all the required memory barriers to ensure that the rec variable is fully flushed out of any caches before the object is added to the list.
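If the goal is simply a safe hand-off between one producer and one consumer, a different option (not what the poster did) is to drop the manual queue swap and use a concurrent collection, which provides the required memory barriers on its own; a sketch:
using System.Collections.Concurrent;

// Shared between the two threads; DataRecord is the poster's type.
private readonly ConcurrentQueue<DataRecord> m_records = new ConcurrentQueue<DataRecord>();

// Producer thread:
// m_records.Enqueue(rec);

// Consumer thread:
// DataRecord rec;
// if (m_records.TryDequeue(out rec)) { /* rec is fully visible here */ }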
