Thread Safety with Parallel.For - C#

I have an application that runs on a machine with multiple processors. When I run the code in Visual Studio on my development machine, it runs fairly quickly. When I run the published version on the server with the same inputs, it runs more slowly. I'm working on a theory here: my development laptop has a single processor with a higher clock speed than the multiple processors on the server. Since the application is single-threaded, it seems logical that it would run faster locally. So if I can add some multithreading to the application to make use of the additional processors on the server, I might be able to improve performance. That's the theory, anyway, and my experience with multithreaded applications is limited.
The crux of the application is a loop which calls several methods. A simplified version would look something like:
public DataTable MyMethod()
{
    DataTable MyDataTable = new DataTable();
    // <add columns to the data table>
    for (int counter = 1; counter <= MaxCounter; counter++)
    {
        // <generate some values>
        ComputeOutputsByRecipe(id, ref MyDataTable);
    }
    return MyDataTable;
}
private void ComputeOutputsByRecipe(int RecipeID, ref DataTable Results)
{
    switch (RecipeID)
    {
        case 1:
            ProcessRecipe_1(ref Results);
            break;
        case 2:
            // <repeat for supported recipe IDs>
    }
}
private void ProcessRecipe_1(ref DataTable Results)
{
    // <do some processing>
    DataRow dr = Results.NewRow();
    // <populate the new data row>
    Results.Rows.Add(dr);
}
So what I'm looking to do is replace the for loop with Parallel.For to take advantage of multiple threads running on multiple processors. But since each iteration of the loop writes to a reference parameter, I'm concerned about thread safety. Now, the order in which the data gets written to this data table is not important, and I don't read from the data table until after the looping has completed, so I don't think that's a problem. But since Add() is an instance method, I'm concerned about what would happen.
So the question is: is it safe to add rows to a DataTable like this if the for loop in my example is replaced with Parallel.For() and I don't read from the data table until after the loop has completed?

No, it is not safe. DataTable is not designed to be mutated from multiple concurrent threads.
You'll need to synchronize access to it in order to ensure it works properly.
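If you do want to parallelize the loop, one approach is to keep the parallel work away from the DataTable entirely: collect the computed row values in a thread-safe collection and populate the table on the calling thread afterwards. Here's a minimal sketch of that idea; ComputeRowValues is a hypothetical stand-in for the per-iteration work:

using System.Collections.Concurrent;
using System.Data;
using System.Threading.Tasks;

public DataTable MyMethod()
{
    var results = new ConcurrentBag<object[]>();

    // The loop body only touches thread-safe state.
    Parallel.For(1, MaxCounter + 1, counter =>
    {
        object[] rowValues = ComputeRowValues(counter); // hypothetical helper
        results.Add(rowValues);
    });

    // Back on a single thread: safe to mutate the DataTable now.
    DataTable MyDataTable = new DataTable();
    // <add columns to the data table>
    foreach (object[] values in results)
        MyDataTable.Rows.Add(values);

    return MyDataTable;
}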

Related

Make a segment of code in a C# thread un-interruptible

I'm using a C# thread in which I want to take a snapshot of 28 dictionaries (stored in an array for easy access) that are being populated by event handler methods. In the thread, I loop over the array of dictionaries and store snapshots, taken with the ToList() method, in another array of lists, so that I can do some work in the thread afterwards even though the collections keep changing in real time. The problem is that I want the snapshots of all dictionaries to be as close in time as possible, so I don't want my thread to be interrupted while I'm looping over the array. The code looks something like this:
public void ThreadProc_Scoring()
{
    List<KeyValuePair<KeyValuePair<DateTime, double>, double>>[] SnapBOUGHT = new List<KeyValuePair<KeyValuePair<DateTime, double>, double>>[27];
    List<KeyValuePair<KeyValuePair<DateTime, double>, double>>[] SnapSOLD = new List<KeyValuePair<KeyValuePair<DateTime, double>, double>>[27];
    int index;
    int NormIdx;
    DateTime actual = new DateTime();
    double score = 0;
    double totAScore = 0;
    double totBScore = 0;
    double AskNormalization = 1;
    double BidNormalization = 1;

    while (true)
    {
        Thread.Sleep(15000);

        // Get a snapshot of the BOUGHT/SOLD collections
        // Enter un-interruptible section
        foreach (var symbol in MySymbols)
        {
            index = symbol.Value;
            SnapBOUGHT[index] = BOUGHT[index].ToList();
            SnapSOLD[index] = SOLD[index].ToList();
        }
        // Exit un-interruptible section

        // Do some work on the snapshots
    }
}
If you have any other suggestions on how to approach the situation, I'd be grateful for the help. Thank you!
You are looking for a critical section, i.e. a lock, used like:
lock (myLockObject)
{
    // do uninterruptible work
}
where myLockObject is an object that is shared by all threads that use the resources. Any thread that uses the dictionaries will need to take this lock using the same object. You might be able to use a ReaderWriterLockSlim to reduce contention, but that would require more details about your specific use case. If you are using regular dictionaries you will need some kind of locking anyway, since dictionaries are not thread-safe; this can be avoided by using ConcurrentDictionary.
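A rough sketch under the question's assumptions (BOUGHT and SOLD are arrays of dictionaries, MySymbols maps each symbol to its array index; the method names here are made up): every reader and writer takes the same lock object.

private static readonly object _dictLock = new object();

// Called from the event handlers that populate the collections.
private void OnBought(int index, KeyValuePair<DateTime, double> key, double value)
{
    lock (_dictLock)
    {
        BOUGHT[index][key] = value;
    }
}

// The snapshot section of ThreadProc_Scoring.
private void TakeSnapshots()
{
    lock (_dictLock)
    {
        foreach (var symbol in MySymbols)
        {
            int index = symbol.Value;
            SnapBOUGHT[index] = BOUGHT[index].ToList();
            SnapSOLD[index] = SOLD[index].ToList();
        }
    }
}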
If you do not depend on the dictionary snapshots being taken at the same time, you can simply use ConcurrentDictionary, run your loop without locking, and deal with the consequences.
The idea of "un-interruptible" does not really exist in C#; it is the OS that controls all scheduling of threads. Processors have a feature to disable interrupts that has roughly that effect, but it is used for system-level programming, not regular client software.
Assuming that you are targeting a multitasking operating system (like Windows, Linux, Android, macOS, iOS, Solaris etc), the closest you can get to an un-interruptible section is to run this section on a thread configured with the maximum priority:
var thread = new Thread(GetTheSnapshots);
thread.Priority = ThreadPriority.Highest;
thread.Start();
You should not expect it to be consistently and deterministically un-interruptible though, since the operating system is not obligated to schedule the thread on a time-slice of infinite duration.

Improve performance of a nested loop

I have simplified my program for this example, so I basically load in a file and add the values from the file into a list.
IList<string> MyList = new List<string>();

void Main()
{
    foreach (Row r in InputFile)
    {
        foreach (Cell c in r)
        {
            AddToTheList(c.Value);
        }
    }
}

public void AddToTheList(string value)
{
    MyList.Add(value);
}
I am looking to speed up the processing of the loop; I do not care about the order in which the values are added.
I am thinking about running the loops in parallel and/or treating the AddToTheList method as an asynchronous fire and forget.
What is the simplest way to make the code use the server's processing power and speed up the total time to process the file?
Update: If the inner loop is heavy enough to make this task CPU-bound (rather than IO-bound), then you could partition the loop using Parallel.ForEach. Here's an example:
Parallel.ForEach(InputFile, row =>
{
    foreach (Cell c in row)
        AddToTheList(c.Value); // AddToTheList must be made thread-safe, e.g. by locking around the Add
});
Or, change the AddToTheList signature to return the value you need, and use PLINQ instead.
MyList = InputFile.AsParallel()
                  .SelectMany(row => row.AsParallel()
                                        .Select(cell => TransformCell(cell.Value)))
                  .ToList();

public string TransformCell(string value)
{
    return value + " something";
}
Making AddToTheList a fire-and-forget async method is almost certainly not a good option. Exceptions thrown by that method would go unhandled, and depending on which framework you're using, these may crash the application.
Parallelizing the calls to AddToTheList is no good - this task is IO-bound.
The bottleneck is in how fast you can read data from disk.
Parallelizing disk access would be no good either. Having two or more threads reading the same file won't be any faster - they'll have to take turns anyway. See this answer to "Is it possible to use threads to speed up file reading?"
Use as many threads as you have files.
It depends. If parsing rows and cells and adding values to the list is simple, doing things in parallel will not help you - you will be limited by I/O, which is a lot slower than the CPU.
However, if parsing the rows takes time, and you're not really adding to a List but rather doing something more complicated, you can read the rows from the file and then handle them in parallel - just preallocate the memory for the results (List lets you do that) and have each parallel worker write to its own positions in the list.
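To make that concrete, here's a minimal sketch of the read-first, parse-in-parallel approach; Row, ReadAllRows, and ParseRow are hypothetical stand-ins for the real file-handling code:

// Sequential and I/O-bound: read the whole file first.
List<Row> rows = ReadAllRows(inputFile);

// Pre-allocate one result slot per row so no locking is needed.
var results = new string[rows.Count];

// CPU-bound parsing runs in parallel; each iteration writes only its own slot.
Parallel.For(0, rows.Count, i =>
{
    results[i] = ParseRow(rows[i]);
});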

Multi Threading with LINQ to SQL

I am writing a WinForms application. I am pulling data from my database, performing some actions on that data set and then plan to save it back to the database. I am using LINQ to SQL to perform the query to the database because I am only concerned with 1 table in our database so I didn't want to implement an entire ORM for this.
I have it pulling the dataset from the DB. However, the dataset is rather large. So currently what I am trying to do is separate the dataset into 4 relatively equal sized lists (List<object>).
Then I have a separate background worker to run through each of those lists, perform the action and report its progress while doing so. I have it planned to consolidate those sections into one big list once all 4 background workers have finished processing their section.
But I keep getting an error while the background workers are processing their unique list. Do the objects maintain their tie to the DataContext for the LINQ to SQL even though they have been converted to List objects? Any ideas how to fix this? I have minimal experience with multi-threading so if I am going at this completely wrong, please tell me.
Thanks guys. If you need any code snippets or any other information just ask.
Edit: Oops, I completely forgot to give the error message. In the DataContext designer.cs, it gives the error "An item with the same key has already been added." in the SendPropertyChanging function.
private List<MyObject> quarter1; // a field, so the worker can see it

private void Setup()
{
    quarter1 = _listFromDB.Take(5000).ToList();
    bgw1.RunWorkerAsync();
}

private void bgw1_DoWork(object sender, DoWorkEventArgs e)
{
    e.Result = functionToExecute(bgw1, quarter1);
}

private List<MyObject> functionToExecute(BackgroundWorker caller, List<MyObject> myList)
{
    int progress = 0;
    foreach (MyObject obj in myList)
    {
        string newString1 = createString();
        obj.strText = newString1;
        // report progress here
        caller.ReportProgress(progress++);
    }
    return myList;
}
This same function is called by all four workers and is given a different list for myList depending on which worker called the function.
Because a real answer has yet to be posted, I'll give it a shot.
Given that you haven't shown any LINQ-to-SQL code (no usage of DataContext) - I'll take an educated guess that the DataContext is shared between the threads, for example:
using (MyDataContext context = new MyDataContext())
{
    // This is just some placeholder query. Note there is no ToList() here,
    // so execution is deferred: listFromDB is an IQueryable<>.
    var listFromDB = context.SomeTable.Where(st => st.Something == true);

    System.Threading.Tasks.Task.Factory.StartNew(() =>
    {
        var list1 = listFromDB.Take(5000).ToList(); // runs the SQL query
        // call some function on list1
    });

    System.Threading.Tasks.Task.Factory.StartNew(() =>
    {
        var list2 = listFromDB.Take(5000).ToList(); // runs the SQL query
        // call some function on list2
    });
}
Now, the error you got - "An item with the same key has already been added." - happens because the DataContext object is not thread-safe! A lot of work happens in the background: the DataContext has to load objects from SQL, track their states, and so on. This background work is what throws the error (because each thread is running the query, the DataContext gets accessed concurrently).
At least this is my own personal experience, having come across the same error while sharing the DataContext between multiple threads. You only have two options in this scenario:
1) Before starting the threads, call .ToList() on the query, making listFromDB not an IQueryable<> but an actual List<>. This means the query has already run and the threads operate on an actual List, not on the DataContext.
2) Move the DataContext definition into each thread. Because the DataContext is no longer shared, there are no more errors.
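A rough sketch of option 2, reusing the assumed names from above:

System.Threading.Tasks.Task.Factory.StartNew(() =>
{
    // This thread owns its own DataContext, so nothing is shared.
    using (var context = new MyDataContext())
    {
        var list1 = context.SomeTable.Where(st => st.Something == true)
                           .Take(5000)
                           .ToList();
        // call some function on list1
    }
});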
A third option would be to rewrite the scenario into something else entirely, like you did (for example, making everything sequential on a single background thread)...
First of all, I don't really see why you'd need multiple worker threads at all. (Are these lists in separate databases/tables/servers? Do you really want to show 4 progress bars if you have 4 lists, or are you somehow merging these progress reports into one weird progress bar? :D)
Also, you're trying to speed up processing updates to your database, but you never send LINQ to SQL any saves (SubmitChanges calls), so you're not really batching transactions; you'll just save everything at the end in one big transaction. Is that really what you're aiming for? The progress bar will just stop at 100% and then spend a lot of time on the SQL side.
Just create one background thread and process everything sequentially, but batch a save transaction every couple of rows (I'd suggest something like every 1000 rows, but you should experiment with this). It'll be fast, even with millions of rows.
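A minimal sketch of that batching idea, reusing the question's names where possible (MyDataContext and SomeTable are placeholders):

using (var context = new MyDataContext())
{
    int count = 0;
    foreach (MyObject obj in context.SomeTable)
    {
        obj.strText = createString();

        // Submit a batch of updates every 1000 rows instead of saving
        // everything in one giant transaction at the end.
        if (++count % 1000 == 0)
            context.SubmitChanges();
    }

    context.SubmitChanges(); // flush the remainder
}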
If you really need this multithreaded solution:
The "an item with the same key has already been added" error suggests that you are adding the same item to multiple "myLists", or adding the same item to the same list twice; otherwise how would there be any errors at all?
Using Parallel LINQ (PLINQ), you can take advantage of multiple CPU cores for processing your data. But if your application is going to run on a single-core CPU, then splitting the data into pieces won't give you a performance benefit; instead it will incur some context-switching overhead.
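For example, a minimal PLINQ sketch reusing the question's names (MyObject, _listFromDB, createString):

// Spread the per-record work across the available cores,
// then collect the results into a list.
List<MyObject> processed = _listFromDB
    .AsParallel()
    .Select(obj =>
    {
        obj.strText = createString();
        return obj;
    })
    .ToList();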
Hope it helps!

Asp.net -- Multithreading in C#

I have a huge amount of data to process, which takes an awful amount of time, so I thought threading might do the job for me more quickly.
What I do: call SQL stored procedures from the ASP.NET front end; the processing takes place there and takes almost 30 hours.
What I need: I have split the data into different batches and created a stored procedure for each. Now I need all of the stored procedures to run at the same time on a single button click.
Please help!
I used the code below, but it doesn't seem to run in parallel.
protected void Button3_Click(object sender, EventArgs e)
{
    Thread t1 = new Thread(Method1);
    Thread t2 = new Thread(Method2);
    t1.Start();
    t2.Start();
    t1.Join();
    t2.Join();
}

void Method1()
{
    for (int i = 0; i < 10000; i++)
    {
        Response.Write("hello1" + i);
        Response.Write("<br>");
    }
}

void Method2()
{
    for (int i = 0; i < 10000; i++)
    {
        Response.Write("hello2" + i);
    }
}
You probably don't want to be doing this directly in ASP.NET for a variety of reasons, such as the worker process has limited execution time.
Also note that the SqlConnection etc also have their own time limits.
What you should really do is queue up the work to do (using IPC or another database table etc) and have something like a Windows service or external process in a scheduled task pick up and process through the queue.
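As a rough sketch of that split (the WorkQueue table and its columns are made up for illustration), the button click would only enqueue the work and return:

// In the ASP.NET page: enqueue the batch and return immediately.
protected void Button3_Click(object sender, EventArgs e)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO WorkQueue (BatchName, Status) VALUES (@batch, 'Pending')", conn))
    {
        cmd.Parameters.AddWithValue("@batch", "NightlyRecalc");
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    // A Windows service (or scheduled task) polls WorkQueue and runs the
    // stored procedures outside of the web worker process's lifetime limits.
}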
Hell, you could even kick off a job within SQL Server and have that directly do the work.
Threading doesn't magically speed up your process.
If you don't know what you are doing, server-side threading is not a good idea in general.
SQL Server will probably time out over a 30-hour job :)
For a 30-hour job, ASP.NET is not the way to go. This is a big process and you shouldn't handle it within ASP.NET. As an alternative, you might want to write a Windows service. Pass your parameters to it (maybe with MSMQ or some other kind of messaging system), do your processing, and send progress back to the web application, showing it with SignalR or AJAX polling.
Narendran, start here:
http://www.albahari.com/threading/
This is the best threading tutorial I have seen online, and the respective book is also very good.
Make sure you spend enough time going through the whole tutorial (I have done it and believe me, it's worth it!).
As said above, using the Join method of the Thread class in this case defeats the purpose of using threads. Instead of using Join, use lock (see Basic Synchronization in the above tutorial) to make sure the threads are synchronized.
Also, as mentioned, before doing any multithreading, run those stored procedures on the SQL server directly and all together. If it still takes 30 hours for them to execute, then threading won't help. If you see less than 30 hours, then you may benefit from multithreading.
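If they do finish faster when run together, a minimal sketch of firing the stored procedures in parallel from C# might look like this (the procedure names and connectionString are placeholders; assumes System.Data and System.Data.SqlClient):

string[] procedures = { "usp_ProcessBatch1", "usp_ProcessBatch2", "usp_ProcessBatch3" };

Parallel.ForEach(procedures, procName =>
{
    // Each procedure gets its own connection.
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(procName, conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandTimeout = 0; // long-running job: disable the command timeout
        conn.Open();
        cmd.ExecuteNonQuery();
    }
});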

Multi threading C# application with SQL Server database calls

I have a SQL Server database with 500,000 records in table main. There are also three other tables called child1, child2, and child3. The many-to-many relationships between child1, child2, child3, and main are implemented via the three relationship tables: main_child1_relationship, main_child2_relationship, and main_child3_relationship. I need to read the records in main, update main, and also insert new rows into the relationship tables as well as new records into the child tables. The records in the child tables have uniqueness constraints, so the pseudo-code for the actual calculation (CalculateDetails) would be something like:
for each record in main
{
    find its child1-like qualities
    for each one of its child1 qualities
    {
        find the record in child1 that matches that quality
        if found
        {
            add a record to main_child1_relationship to connect the two records
        }
        else
        {
            create a new record in child1 for the quality mentioned
            add a record to main_child1_relationship to connect the two records
        }
    }
    ...repeat the above for child2
    ...repeat the above for child3
}
This works fine as a single-threaded app, but it is too slow. The processing in C# is pretty heavy duty and takes too long. I want to turn this into a multi-threaded app.
What is the best way to do this? We are using LINQ to SQL.
So far my approach has been to create a new DataContext object for each batch of records from main and use ThreadPool.QueueUserWorkItem to process it. However, these batches are stepping on each other's toes: one thread adds a record, then the next thread tries to add the same one, and ... I am getting all kinds of interesting SQL Server deadlocks.
Here is the code:
int skip = 0;
List<int> thisBatch;
Queue<List<int>> allBatches = new Queue<List<int>>();

do
{
    thisBatch = allIds
        .Skip(skip)
        .Take(numberOfRecordsToPullFromDBAtATime).ToList();
    allBatches.Enqueue(thisBatch);
    skip += numberOfRecordsToPullFromDBAtATime;
} while (thisBatch.Count() > 0);

while (allBatches.Count() > 0)
{
    RRDataContext rrdc = new RRDataContext();
    var currentBatch = allBatches.Dequeue();

    lock (locker)
    {
        runningTasks++;
    }

    System.Threading.ThreadPool.QueueUserWorkItem(x =>
        ProcessBatch(currentBatch, rrdc));

    lock (locker)
    {
        while (runningTasks > MAX_NUMBER_OF_THREADS)
        {
            Monitor.Wait(locker);
            UpdateGUI();
        }
    }
}
And here is ProcessBatch:
private static void ProcessBatch(
    List<int> currentBatch, RRDataContext rrdc)
{
    var topRecords = GetTopRecords(rrdc, currentBatch);
    CalculateDetails(rrdc, topRecords);
    rrdc.Dispose();

    lock (locker)
    {
        runningTasks--;
        Monitor.Pulse(locker);
    }
}
And
private static List<Record> GetTopRecords(RecipeRelationshipsDataContext rrdc,
                                          List<int> thisBatch)
{
    List<Record> topRecords;
    topRecords = rrdc.Records
        .Where(x => thisBatch.Contains(x.Id))
        .OrderBy(x => x.OrderByMe).ToList();
    return topRecords;
}
CalculateDetails is best explained by the pseudo-code at the top.
I think there must be a better way to do this. Please help. Many thanks!
Here's my take on the problem:
When using multiple threads to insert/update/query data in SQL Server, or any database, deadlocks are a fact of life. You have to assume they will occur and handle them appropriately.
That's not to say we shouldn't attempt to limit the occurrence of deadlocks. However, while it's easy to read up on the basic causes of deadlocks and take steps to prevent them, SQL Server will always surprise you :-)
Some reasons for deadlocks:
Too many threads - try to limit the number of threads to a minimum, but of course we want more threads for maximum performance.
Not enough indexes. If selects and updates aren't selective enough, SQL will take out larger range locks than is healthy. Try to specify appropriate indexes.
Too many indexes. Updating indexes causes deadlocks, so try to reduce indexes to the minimum required.
Transaction isolation level too high. The default isolation level when using .NET's TransactionScope is Serializable, whereas the default in SQL Server is Read Committed. Reducing the isolation level can help a lot (if appropriate, of course).
This is how I might tackle your problem:
I wouldn't roll my own threading solution; I would use the Task Parallel Library. My main method would look something like this:
using (var dc = new TestDataContext())
{
    // Get all the ids of interest.
    // I assume you mark successfully updated rows in some way
    // in the update transaction.
    List<int> ids = dc.TestItems.Where(...).Select(item => item.Id).ToList();

    var problematicIds = new List<ErrorType>();

    // Either allow the Task Parallel Library to select what it considers
    // the optimum degree of parallelism by omitting the ParallelOptions
    // parameter, or specify what you want.
    Parallel.ForEach(ids, new ParallelOptions { MaxDegreeOfParallelism = 8 },
        id => CalculateDetails(id, problematicIds));
}
Execute the CalculateDetails method with retries for deadlock failures
private static void CalculateDetails(int id, List<ErrorType> problematicIds)
{
    try
    {
        // Handle deadlocks
        DeadlockRetryHelper.Execute(() => CalculateDetails(id));
    }
    catch (Exception e)
    {
        // Too many deadlock retries (or some other exception).
        // Record it so we can diagnose the problem or retry later.
        problematicIds.Add(new ErrorType(id, e));
    }
}
The core CalculateDetails method
private static void CalculateDetails(int id)
{
    // Creating a new DataContext is not expensive.
    // No need to create it outside of this method.
    using (var dc = new TestDataContext())
    {
        // TODO: adjust the IsolationLevel to minimize deadlocks.
        // If you don't need to change the isolation level
        // then you can remove the TransactionScope altogether.
        using (var scope = new TransactionScope(
            TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
        {
            TestItem item = dc.TestItems.Single(i => i.Id == id);

            // work done here

            dc.SubmitChanges();
            scope.Complete();
        }
    }
}
And of course my implementation of a deadlock retry helper
public static class DeadlockRetryHelper
{
    private const int MaxRetries = 4;
    private const int SqlDeadlock = 1205;

    public static void Execute(Action action, int maxRetries = MaxRetries)
    {
        if (HasAmbientTransaction())
        {
            // A deadlock blows out the containing transaction,
            // so there is no point retrying if we're already in one.
            action();
            return;
        }

        int retries = 0;

        while (retries < maxRetries)
        {
            try
            {
                action();
                return;
            }
            catch (Exception e)
            {
                if (IsSqlDeadlock(e))
                {
                    retries++;
                    // Delay subsequent retries - not sure if this helps or not
                    Thread.Sleep(100 * retries);
                }
                else
                {
                    throw;
                }
            }
        }

        // Final attempt: let any deadlock exception propagate.
        action();
    }

    private static bool HasAmbientTransaction()
    {
        return Transaction.Current != null;
    }

    private static bool IsSqlDeadlock(Exception exception)
    {
        if (exception == null)
        {
            return false;
        }

        var sqlException = exception as SqlException;
        if (sqlException != null && sqlException.Number == SqlDeadlock)
        {
            return true;
        }

        if (exception.InnerException != null)
        {
            return IsSqlDeadlock(exception.InnerException);
        }

        return false;
    }
}
One further possibility is to use a partitioning strategy.
If your tables can naturally be partitioned into several distinct sets of data, then you can either use SQL Server partitioned tables and indexes, or you could manually split your existing tables into several sets of tables. I would recommend using SQL Server's partitioning, since the second option would be messy. Also, built-in partitioning is only available on SQL Server Enterprise Edition.
If partitioning is possible for you, you could choose a partition scheme that breaks your data into, let's say, 8 distinct sets. Now you can use your original single-threaded code, but have 8 threads each targeting a separate partition. Now there won't be any (or at least a minimal number of) deadlocks.
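A minimal sketch of that shape, using a made-up modulo partitioning over the ids and the TestDataContext from the earlier snippets:

const int PartitionCount = 8;

// One worker per partition; each runs the original single-threaded logic
// over a disjoint slice of the data, so the workers never contend.
Parallel.ForEach(Enumerable.Range(0, PartitionCount), partition =>
{
    using (var dc = new TestDataContext())
    {
        var ids = dc.TestItems
                    .Where(item => item.Id % PartitionCount == partition)
                    .Select(item => item.Id)
                    .ToList();

        foreach (var id in ids)
            CalculateDetails(id); // the original logic, unchanged
    }
});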
I hope that makes sense.
Overview
The root of your problem is that the L2S DataContext, like the Entity Framework's ObjectContext, is not thread-safe. As explained in this MSDN forum exchange, support for asynchronous operations in the .NET ORM solutions is still pending as of .NET 4.0; you'll have to roll your own solution, which, as you've discovered, isn't always easy to do when your framework assumes single-threadedness.
I'll take this opportunity to note that L2S is built on top of ADO.NET, which itself fully supports asynchronous operation - personally, I would much prefer to deal directly with that lower layer and write the SQL myself, just to make sure that I fully understood what was transpiring over the network.
SQL Server Solution?
That being said, I have to ask - must this be a C# solution? If you can compose your solution out of a set of insert/update statements, you can just send over the SQL directly and your threading and performance problems vanish.* It seems to me that your problems are related not to the actual data transformations to be made, but center around making them performant from .NET. If .NET is removed from the equation, your task becomes simpler. After all, the best solution is often the one that has you writing the smallest amount of code, right? ;)
Even if your update/insert logic can't be expressed in a strictly set-relational manner, SQL Server does have a built-in mechanism for iterating over records and performing logic - while they are justly maligned for many use cases, cursors may in fact be appropriate for your task.
If this is a task that has to happen repeatedly, you could benefit greatly from coding it as a stored procedure.
*of course, long-running SQL brings its own problems like lock escalation and index usage that you'll have to contend with.
C# Solution
Of course, it may be that doing this in SQL is out of the question - maybe your code's decisions depend on data that comes from elsewhere, for example, or maybe your project has a strict 'no-SQL-allowed' convention. You mention some typical multithreading bugs, but without seeing your code I can't really be helpful with them specifically.
Doing this from C# is obviously viable, but you need to deal with the fact that a fixed amount of latency will exist for each and every call you make. You can mitigate the effects of network latency by using pooled connections, enabling multiple active result sets, and using the asynchronous Begin/End methods for executing your queries. Even with all of those, you will still have to accept that there is a cost to shipping data from SQL Server to your application.
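As a rough sketch of those ingredients on .NET 4.0-era ADO.NET (the connection string and query are placeholders; the classic Begin/End methods require Asynchronous Processing=true in the connection string):

var builder = new SqlConnectionStringBuilder(connectionString)
{
    MultipleActiveResultSets = true, // MARS
    AsynchronousProcessing = true    // needed for Begin/End on SqlCommand
};

using (var conn = new SqlConnection(builder.ConnectionString))
using (var cmd = new SqlCommand("SELECT id FROM main", conn))
{
    conn.Open();
    IAsyncResult ar = cmd.BeginExecuteReader();

    // Do other useful work while the query is in flight...

    using (SqlDataReader reader = cmd.EndExecuteReader(ar))
    {
        while (reader.Read())
        {
            // consume rows
        }
    }
}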
One of the best ways to keep your code from stepping all over itself is to avoid sharing mutable data between threads as much as possible. That would mean not sharing the same DataContext across multiple threads. The next best approach is to lock critical sections of code that touch the shared data - lock blocks around all DataContext access, from the first read to the final write. That approach might just obviate the benefits of multithreading entirely; you can likely make your locking more fine-grained, but be ye warned that this is a path of pain.
Far better is to keep your operations separate from each other entirely. If you can partition your logic across 'main' records, that's ideal - that is to say, as long as there aren't relationships between the various child tables, and as long as one record in 'main' doesn't have implications for another, you can split your operations across multiple threads like this:
private IList<int> GetMainIds()
{
    using (var context = new MyDataContext())
        return context.Main.Select(m => m.Id).ToList();
}

private void FixUpSingleRecord(int mainRecordId)
{
    using (var localContext = new MyDataContext())
    {
        var main = localContext.Main.FirstOrDefault(m => m.Id == mainRecordId);

        if (main == null)
            return;

        foreach (var childOneQuality in main.ChildOneQualities)
        {
            // If child one is not found, create it
            // Create the relationship if needed
        }

        // Repeat for ChildTwo and ChildThree

        localContext.SubmitChanges();
    }
}

public void FixUpMain()
{
    var ids = GetMainIds();
    foreach (var id in ids)
    {
        var localId = id; // Avoid closing over an iteration variable
        ThreadPool.QueueUserWorkItem(delegate { FixUpSingleRecord(localId); });
    }
}
Obviously this is as much a toy example as the pseudocode in your question, but hopefully it gets you thinking about how to scope your tasks such that there is no (or minimal) shared state between them. That, I think, will be the key to a correct C# solution.
EDIT: Responding to updates and comments
If you're seeing data consistency issues, I'd advise enforcing transaction semantics - you can do this by using a System.Transactions.TransactionScope (add a reference to System.Transactions). Alternately, you might be able to do this on an ADO.NET level by accessing the inner connection and calling BeginTransaction on it (or whatever the DataConnection method is called).
You also mention deadlocks. That you're battling SQL Server deadlocks indicates that the actual SQL queries are stepping on each other's toes. Without knowing what is actually being sent over the wire, it's difficult to say in detail what's happening and how to fix it. Suffice to say that SQL deadlocks result from SQL queries, and not necessarily from C# threading constructs - you need to examine what exactly is going over the wire. My gut tells me that if each 'main' record is truly independent of the others, then there shouldn't be a need for row and table locks, and that Linq to SQL is likely the culprit here.
You can get a dump of the raw SQL emitted by L2S in your code by setting the DataContext.Log property to something, e.g. Console.Out. Though I've never personally used it, I understand that LINQPad offers L2S facilities, and you may be able to get at the SQL there, too.
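For example, with a hypothetical MyDataContext:

using (var context = new MyDataContext())
{
    context.Log = Console.Out; // every SQL statement L2S generates is written here

    // ... run the queries under investigation ...
}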
SQL Server Management Studio will get you the rest of the way there - using the Activity Monitor, you can watch for lock escalation in real time. Using the Query Analyzer, you can get a view of exactly how SQL Server will execute your queries. With those, you should be able to get a good notion of what your code is doing server-side, and in turn how to go about fixing it.
I would recommend moving all the XML processing into the SQL server, too. Not only will all your deadlocks disappear, but you will see such a boost in performance that you will never want to go back.
It will be best explained by an example. In this example I assume that the XML blob already is going into your main table (I call it closet). I will assume the following schema:
CREATE TABLE closet (id int PRIMARY KEY, xmldoc ntext)
CREATE TABLE shoe(id int PRIMARY KEY IDENTITY, color nvarchar(20))
CREATE TABLE closet_shoe_relationship (
closet_id int REFERENCES closet(id),
shoe_id int REFERENCES shoe(id)
)
And I expect that your data (main table only) initially looks like this:
INSERT INTO closet(id, xmldoc) VALUES (1, '<ROOT><shoe><color>blue</color></shoe></ROOT>')
INSERT INTO closet(id, xmldoc) VALUES (2, '<ROOT><shoe><color>red</color></shoe></ROOT>')
Then your whole task is as simple as the following:
INSERT INTO shoe(color)
SELECT DISTINCT CAST(CAST(xmldoc AS xml).query('//shoe/color/text()') AS nvarchar) AS color
FROM closet

INSERT INTO closet_shoe_relationship(closet_id, shoe_id)
SELECT closet.id, shoe.id
FROM shoe JOIN closet
    ON CAST(CAST(closet.xmldoc AS xml).query('//shoe/color/text()') AS nvarchar) = shoe.color
But given that you will do a lot of similar processing, you can make your life easier by declaring your main blob as XML type, and further simplifying to this:
INSERT INTO shoe(color)
SELECT DISTINCT CAST(xmldoc.query('//shoe/color/text()') AS nvarchar)
FROM closet
INSERT INTO closet_shoe_relationship(closet_id, shoe_id)
SELECT closet.id, shoe.id
FROM shoe JOIN closet
ON CAST(xmldoc.query('//shoe/color/text()') AS nvarchar) = shoe.color
There are additional performance optimizations possible, like pre-computing repeatedly invoked XPath results in a temporary or permanent table, or converting the initial population of the main table into a BULK INSERT, but I don't expect that you will really need those to succeed.
SQL Server deadlocks are normal and to be expected in this type of scenario - MS's recommendation is that these should be handled on the application side rather than the DB side.
However, if you do need to make sure that a stored procedure is only called once, then you can use a SQL mutex via sp_getapplock. Here's an example of how to implement this:
BEGIN TRAN

DECLARE @mutex_result int;

EXEC @mutex_result = sp_getapplock @Resource = 'CheckSetFileTransferLock',
                                   @LockMode = 'Exclusive';

IF (@mutex_result < 0)
BEGIN
    ROLLBACK TRAN
END

-- do some stuff

EXEC @mutex_result = sp_releaseapplock @Resource = 'CheckSetFileTransferLock'

COMMIT TRAN
This may be obvious, but looping through each tuple and doing your work in your application process involves a lot of per-record overhead.
If possible, move some or all of that processing to the SQL server by rewriting your logic as one or more stored procedures.
If:
You don't have a lot of time to spend on this issue and need to fix it right now
You are sure that your code is written so that different threads will NOT modify the same record
You are not afraid
Then ... you can just add WITH (NOLOCK) to your queries so that MSSQL doesn't apply the locks.
To be used with caution :)
But anyway, you didn't tell us where the time is lost (in the single-threaded version). If it's in the code, I'd advise you to write everything in the DB directly to avoid continuous data exchange. If it's in the DB, I'd advise checking indexes (too many?), I/O, CPU, etc.
