C# method to lock SQL Server table

I have a C# program that needs to perform a group of mass updates (20k+) to a SQL Server table. Since other users can update these records one at a time via an intranet website, we need to build the C# program with the capability of locking the table down. Once the table is locked to prevent another user from making any alterations/searches, we will then need to perform the requested updates/inserts.
Since we are processing so many records, we cannot use TransactionScope (which seemed the easiest way at first) because our transaction winds up being handled by the MSDTC service. We need to use another method.
Based on what I've read on the internet using a SqlTransaction object seemed to be the best method, however I cannot get the table to lock. When the program runs and I step through the code below, I'm still able to perform updates and search via the intranet site.
My question is twofold. Am I using the SqlTransaction properly? If so (or even if not), is there a better method for obtaining a table lock that still allows the currently running program to search and perform updates?
I would like for the table to be locked while the program executes the code below.
C#
SqlConnection dbConnection = new SqlConnection(dbConn);
dbConnection.Open();
using (SqlTransaction transaction = dbConnection.BeginTransaction(IsolationLevel.Serializable))
{
    //Instantiate validation object with zip and channel values
    _allRecords = GetRecords();
    validation = new Validation();
    validation.SetLists(_allRecords);
    while (_reader.Read())
    {
        try
        {
            record = new ZipCodeTerritory();
            _errorMsg = string.Empty;
            //Convert row to ZipCodeTerritory type
            record.ChannelCode = _reader[0].ToString();
            record.DrmTerrDesc = _reader[1].ToString();
            record.IndDistrnId = _reader[2].ToString();
            record.StateCode = _reader[3].ToString().Trim();
            record.ZipCode = _reader[4].ToString().Trim();
            record.LastUpdateId = _reader[7].ToString();
            record.ErrorCodes = _reader[8].ToString();
            record.Status = _reader[9].ToString();
            record.LastUpdateDate = DateTime.Now;
            //Handle DateTime types separately
            DateTime value = new DateTime();
            if (DateTime.TryParse(_reader[5].ToString(), out value))
            {
                record.EndDate = Convert.ToDateTime(_reader[5].ToString());
            }
            else
            {
                _errorMsg += "Invalid End Date; ";
            }
            if (DateTime.TryParse(_reader[6].ToString(), out value))
            {
                record.EffectiveDate = Convert.ToDateTime(_reader[6].ToString());
            }
            else
            {
                _errorMsg += "Invalid Effective Date; ";
            }
            //Do not process if we're missing LastUpdateId
            if (string.IsNullOrEmpty(record.LastUpdateId))
            {
                _errorMsg += "Missing last update Id; ";
            }
            //Make sure primary key is valid
            if (_reader[10] != DBNull.Value)
            {
                int id = 0;
                if (int.TryParse(_reader[10].ToString(), out id))
                {
                    record.Id = id;
                }
                else
                {
                    _errorMsg += "Invalid Id; ";
                }
            }
            //Validate business rules if data is properly formatted
            if (string.IsNullOrWhiteSpace(_errorMsg))
            {
                _errorMsg = validation.ValidateZipCode(record);
            }
            //Skip record if any errors found
            if (!string.IsNullOrWhiteSpace(_errorMsg))
            {
                _issues++;
                //Convert to ZipCodeError type in case we have data/formatting errors
                _errors.Add(new ZipCodeError(_reader), _errorMsg);
                continue;
            }
            else if (flag)
            {
                //Separate updates to appropriate list
                SendToUpdates(record);
            }
        }
        catch (Exception ex)
        {
            _errors.Add(new ZipCodeError(_reader), "Job crashed reading this record, please review all columns.");
            _issues++;
        }
    }//End while
    //Updates occur in one of three methods below. If I step through the code,
    //and stop the program here, before I enter any of the methods, and then
    //make updates to the same records via our intranet site the changes
    //made on the site go through. No table locking has occurred at this point.
    if (flag)
    {
        if (_insertList.Count > 0)
        {
            Updates.Insert(_insertList, _errors);
        }
        if (_updateList.Count > 0)
        {
            _updates = Updates.Update(_updateList, _errors);
            _issues += _updateList.Count - _updates;
        }
        if (_autotermList.Count > 0)
        {
            //_autotermed = Updates.Update(_autotermList, _errors);
            _autotermed = Updates.UpdateWithReporting(_autotermList, _errors);
            _issues += _autotermList.Count - _autotermed;
        }
    }
    transaction.Commit();
}

SQL doesn't really provide a way to exclusively lock a table: it's designed to try to maximize concurrent use while keeping ACID.
You could try using these table hints on your queries:
TABLOCK
Specifies that the acquired lock is applied at the table level. The type of lock that
is acquired depends on the statement being executed. For example, a SELECT statement
may acquire a shared lock. By specifying TABLOCK, the shared lock is applied to the
entire table instead of at the row or page level. If HOLDLOCK is also specified, the
table lock is held until the end of the transaction.
TABLOCKX
Specifies that an exclusive lock is taken on the table.
UPDLOCK
Specifies that update locks are to be taken and held until the transaction completes.
UPDLOCK takes update locks for read operations only at the row-level or page-level. If
UPDLOCK is combined with TABLOCK, or a table-level lock is taken for some other
reason, an exclusive (X) lock will be taken instead.
XLOCK
Specifies that exclusive locks are to be taken and held until the transaction
completes. If specified with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply
to the appropriate level of granularity.
HOLDLOCK/SERIALIZABLE
Makes shared locks more restrictive by holding them until a transaction is completed,
instead of releasing the shared lock as soon as the required table or data page is no
longer needed, whether the transaction has been completed or not. The scan is
performed with the same semantics as a transaction running at the SERIALIZABLE
isolation level. For more information about isolation levels, see SET TRANSACTION
ISOLATION LEVEL (Transact-SQL).
Alternatively, you could try SET TRANSACTION ISOLATION LEVEL SERIALIZABLE:
Statements cannot read data that has been modified but not yet committed by other
transactions.
No other transactions can modify data that has been read by the current transaction
until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the
range of keys read by any statements in the current transaction until the current
transaction completes.
Range locks are placed in the range of key values that match the search conditions of
each statement executed in a transaction. This blocks other transactions from updating
or inserting any rows that would qualify for any of the statements executed by the
current transaction. This means that if any of the statements in a transaction are
executed a second time, they will read the same set of rows. The range locks are held
until the transaction completes. This is the most restrictive of the isolation levels
because it locks entire ranges of keys and holds the locks until the transaction
completes. Because concurrency is lower, use this option only when necessary. This
option has the same effect as setting HOLDLOCK on all tables in all SELECT statements
in a transaction.
But almost certainly, lock escalation will cause blocking and your users will be pretty much dead in the water (in my experience).
So...
Wait until you have a scheduled maintenance window. Set the database to single-user mode, make your changes, and bring it back online.
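If you go that route, here is a minimal sketch of toggling single-user mode from C#. The database name and connection string are placeholders, and the connection needs rights to ALTER the database:
using (SqlConnection conn = new SqlConnection(adminConnectionString))
{
    conn.Open();
    // Kick everyone else out and roll back their open transactions immediately.
    new SqlCommand("ALTER DATABASE [MyDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE", conn).ExecuteNonQuery();
    try
    {
        // ... run the mass updates/inserts here ...
    }
    finally
    {
        // Let everyone back in, even if the updates fail.
        new SqlCommand("ALTER DATABASE [MyDatabase] SET MULTI_USER", conn).ExecuteNonQuery();
    }
}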

Try this: when you get records from your table (in the GetRecords() function?) use the TABLOCKX hint:
SELECT * FROM Table1 (TABLOCKX)
It will queue all other reads and updates outside your transaction until the transaction is either committed or rolled back.
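For the hint to block the intranet users, the SELECT has to run on the same connection and transaction that later performs the updates. A minimal sketch, reusing the question's variable names (the table name ZipCodeTerritory is an assumption based on the question's code):
using (SqlConnection dbConnection = new SqlConnection(dbConn))
{
    dbConnection.Open();
    using (SqlTransaction transaction = dbConnection.BeginTransaction(IsolationLevel.Serializable))
    {
        // TABLOCKX + HOLDLOCK takes an exclusive table lock and holds it until Commit/Rollback.
        SqlCommand selectCmd = new SqlCommand(
            "SELECT * FROM ZipCodeTerritory WITH (TABLOCKX, HOLDLOCK)", dbConnection, transaction);
        using (SqlDataReader reader = selectCmd.ExecuteReader())
        {
            // ... read and validate rows ...
        }
        // Every subsequent command must also be given this connection and transaction,
        // otherwise it runs outside the lock (which is why the original code sees no blocking).
        // new SqlCommand(updateSql, dbConnection, transaction).ExecuteNonQuery();
        transaction.Commit();
    }
}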

It's all about isolation level here. Change your transaction isolation level to ReadCommitted (I didn't look up the enum value in C#, but that should be close). When you execute the first update/insert to the table, SQL will start locking, and no one will be able to read the data you're changing/adding until you commit or roll back the transaction, provided they are not performing dirty reads (using NOLOCK in their SQL, or having the connection isolation level set to Read Uncommitted). Be careful though: depending on how you're inserting/updating data, you may lock the whole table for the duration of your transaction, which would cause timeout errors at the client when they try to read from this table while your transaction is open. Without seeing the SQL behind the updates, I can't tell if that will happen here.

As someone has pointed out, the transaction doesn't seem to be used after being taken out.
From the limited information we have on the app/purpose, it's hard to tell, but from the code snippet, it seems to me we don't need any locking. We are getting some data from source X (in this case _reader) and then inserting/updating into destination Y.
All the validation happens against the source data to make sure it's correct, it doesn't seem like we're making any decision or care for what's in the destination.
If the above is true, then a better approach would be to load all this data into a temporary table (it can be a true temp table "#" or a real table that we destroy afterwards, but the purpose is the same), and then in a single SQL statement do a mass insert/update from the temp table into our destination. Assuming the db schema is in decent shape, 20 (or even 30) thousand records should go through almost instantly, without any need to wait for a maintenance window or lock out users for extended periods of time.
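A rough sketch of that staging-table approach, assuming a destination table named ZipCodeTerritory and a DataTable (validatedRows) already filled by the reader loop; all names here are illustrative, not from the question:
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    // 1. Create an empty staging table with the same shape and bulk copy the validated rows into it.
    new SqlCommand("SELECT TOP 0 * INTO #Staging FROM ZipCodeTerritory", conn).ExecuteNonQuery();
    using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "#Staging";
        bulk.WriteToServer(validatedRows);
    }
    // 2. One set-based statement moves everything across; wrap it in a transaction if needed.
    string sql = @"UPDATE t SET t.ChannelCode = s.ChannelCode, t.EndDate = s.EndDate
                   FROM ZipCodeTerritory t
                   INNER JOIN #Staging s ON s.Id = t.Id";
    new SqlCommand(sql, conn).ExecuteNonQuery();
}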
Also, to strictly answer the question about using a transaction, below is a simple sample of how to properly use one; there should be plenty of other samples and info on the web.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction())
    {
        SqlCommand cmd1 = conn.CreateCommand();
        cmd1.Transaction = tran;
        // ... set cmd1.CommandText / parameters and execute ...
        tran.Commit();
    }
}

Related

Will DbContextTransaction.BeginTransaction prevent this race condition

I have a method that needs to "claim" a payment number to ensure it is available at a later time. I cannot just get a new payment number when ready to commit to the database, as the number is added to a signed token, and then the payment number is taken from the signed token later on when committing to the database to allow the token to be linked to the payment afterwards.
Payment numbers are sequential and the current method used in existing code is:
Create a Payment
Get the last payment number from the database
Increment the payment number
Use this payment number for the Payment
Update the database with the incremented payment number
In my service I am trying to prevent the following race-condition:
My service reads the payment number (eg. 100)
Another service uses and updates the payment number (now 101)
My service increments the number locally (to 101) and updates the database (still 101)
This would produce two payments with a payment number of 100.
Here is my implementation so far, in my Transaction class:
private DbSet<PaymentIdentifier> paymentIdentifier;
//...
private int ClaimNextPaymentNumber()
{
    int nextPaymentNumber = -1;
    using (var dbTransaction = db.Database.BeginTransaction())
    {
        int lastPaymentNumber = paymentIdentifier.ElementAt(0).Identifier;
        nextPaymentNumber = lastPaymentNumber + 1;
        paymentIdentifier.ElementAt(0).Identifier = nextPaymentNumber;
        db.SaveChanges();
        dbTransaction.Commit();
    }
    return nextPaymentNumber;
}
The PaymentIdentifier table has a single row and a single column "Identifier" (hence the .ElementAt(0)). I am unable to change the database structure as there is lots of legacy code relying on it that is very brittle.
Will having the code wrapped in a transaction (as I have done) protect against the race condition, or is there some Entity Framework / PostgreSQL idiosyncrasies I need to deal with to protect the identifier from being read whilst performing the transaction?
Thank you!
(As a side point, I believe lots of legacy code in the other software connecting to the database simply ignores the race condition and relies on it being "very fast")
It helps you with the race condition only if all code, including the legacy code, uses this method. If there is still code that continues using client-side incrementing without a transaction, you'll get the same problem. Just exchange 'My service' and 'Another service' in your description:
1. Another service reads the payment number (eg. 100) without a transaction
2. My service uses and updates the payment number (now 101) with a transaction
3. Another service increments the number locally (to 101) and updates the database (still 101) without a transaction
Note that you can replace your code with a simpler one by executing this query without an explicit transaction.
update PaymentIdentifier set Identifier = Identifier + 1 returning Identifier;
But again, it will not solve your concurrency problem until you replace all the places where the Identifier is incremented. If you can change that, you would be better off using a SEQUENCE or generators that will safely provide you with incremental ids.
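If you do keep the single-row table, a minimal sketch of running that UPDATE ... RETURNING from EF6; whether SqlQuery will run an UPDATE and how the identifiers need quoting depends on your provider and on how the table was created, so treat this as an assumption:
private int ClaimNextPaymentNumber()
{
    // UPDATE ... RETURNING is a single atomic statement in PostgreSQL,
    // so no explicit transaction or client-side read-increment-write is needed.
    return db.Database
             .SqlQuery<int>("UPDATE \"PaymentIdentifier\" SET \"Identifier\" = \"Identifier\" + 1 RETURNING \"Identifier\";")
             .Single();
}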
A transaction does not automatically lock your table. A transaction just ensures that multiple changes to the database are done altogether or not at all (see the A (atomic) in ACID). But the thing you want is that only one session can read, add one, and update the value, and only after that is done is the next session allowed to do the same thing.
So you now have different possibilities:
Use a sequence: you can get the next value, for example, like this: SELECT nextval('mysequencename'). If two sessions try to get a value at the same time, they will get two different values.
If you have more complex needs and want to store every "token" with additional data in a table, so every token is a row with additional columns, you could use table locking. With this you can restrict access to the table, so only one session is allowed to access it at a time. But make sure that you hold the locks for as short a time as possible, because this will become your performance bottleneck.
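A possible sketch of that table-locking option with EF6 and PostgreSQL; LOCK TABLE is standard PostgreSQL, but the quoting of the table name and the EF call pattern are assumptions:
using (var dbTransaction = db.Database.BeginTransaction())
{
    // Blocks every other session's access to the table until this transaction ends.
    db.Database.ExecuteSqlCommand("LOCK TABLE \"PaymentIdentifier\" IN ACCESS EXCLUSIVE MODE");

    var row = paymentIdentifier.ElementAt(0);
    row.Identifier = row.Identifier + 1;
    db.SaveChanges();

    dbTransaction.Commit(); // the lock is released here
}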
The database prevents the race condition by throwing a concurrency violation error in this case. So I looked at how this is handled in the legacy code (following the suggestion by @sergey-l) and it uses a simple retry mechanism. So I did the same:
private int ClaimNextPaymentNumber()
{
    DbContextTransaction dbTransaction;
    bool failed;
    int paymentNumber = -1;
    do
    {
        failed = false;
        using (dbTransaction = db.Database.BeginTransaction())
        {
            try
            {
                paymentNumber = TryToClaimNextPaymentNumber();
            }
            catch (DbUpdateConcurrencyException ex)
            {
                failed = true;
                ResetForClaimPaymentNumberRetry(ex);
            }
            dbTransaction.Commit();
            concurrencyExceptionRetryCount = 0;
        }
    }
    while (failed);
    return paymentNumber;
}

C# How do I prevent data loss from crashes during a long running query?

I have the following code that takes about an hour to run through a few hundred thousand rows:
public void Recording(int rowindex)
{
    using (OleDbCommand cmd = new OleDbCommand())
    {
        try
        {
            using (OleDbConnection connection = new OleDbConnection(Con))
            {
                cmd.Connection = connection;
                connection.Open();
                using (OleDbTransaction Scope = connection.BeginTransaction(SD.IsolationLevel.ReadCommitted))
                {
                    try
                    {
                        string Query = @"UPDATE [" + SetupAction.currentTable + "] set Description=@Description, Description_Department=@Description_Department, Accounts=@Accounts where ID=@ID";
                        cmd.Parameters.AddWithValue("@Description", VirtualTable.Rows[rowindex][4].ToString());
                        cmd.Parameters.AddWithValue("@Description_Department", VirtualTable.Rows[rowindex][18].ToString());
                        cmd.Parameters.AddWithValue("@Accounts", VirtualTable.Rows[rowindex][22].ToString());
                        cmd.Parameters.AddWithValue("@ID", VirtualTable.Rows[rowindex][0].ToString());
                        cmd.CommandText = Query;
                        cmd.Transaction = Scope;
                        cmd.ExecuteNonQuery();
                        Scope.Commit();
                    }
                    catch (OleDbException odex)
                    {
                        MessageBox.Show(odex.Message);
                        Scope.Rollback();
                    }
                }
            }
        }
        catch (OleDbException ex)
        {
            MessageBox.Show("SQL: " + ex);
        }
    }
}
It works as I expect it to; however, today my program crashed while running the query (in a for loop where rowindex is the index into a datatable). The computer crashed, and when I rebooted and ran the program again, it said:
Multi-step OleDB operation generated errors: followed by my connection string.
What happened is that the database is entirely uninteractable; even Microsoft Access's recovery methods can't seem to help out here.
I've read that this may be caused when the data structure of the database is altered from what it was expected to be. My question is: how do I prevent this, since I can't really detect when my program will stop functioning all of a sudden?
There could be a way for me to restructure it somehow, maybe there's a function I don't know about. Perhaps it is sending something of an empty query when the crash happens, but I don't know how to stop it.
The Jet/ACE database engine already attempts to avoid corruption and to automatically recover from catastrophic events (lost connections, computer crashing). Transactions can further protect against inconsistent data by committing (or discarding) multiple operations altogether. But eventually there may be some coincidental system failure which terminates an operation at a critical write position, thereby creating critical inconsistencies in the database file. Making regular and timely backups is part of an overall solution. For very long operations it might be worth making an automated copy of the entire database file prior to the operation.
Otherwise, an extreme alternative is to
Create a second intermediate database into which all data is first inserted. (Only needs to be done once.)
In this intermediate database, create linked tables to relevant tables in the permanent, working database.
Also in the intermediate database, create an indexed local table that mirrors the linked table structure into which data will be inserted. OR if the intermediate database and table already exist, clear the local table (i.e. delete all rows).
Have your current software insert into the local intermediate table.
Run a single query which then updates the linked table from the temporary table. Wrap that update in a transaction.
Here's where the linked table has the benefit that it can be referenced in an SQL query just like any local table. You only have to explicitly open the intermediate database. In other words, just perform a simple query like UPDATE LocalTable INNER JOIN LinkedTable ON LocalTable.UpdateID = LinkedTable.ID SET LinkedTable.Data = LocalTable.Data
The benefit of this process is that the single query that updates one Access table from another can be very fast, possibly much faster than the multiple update operations in your code. This could reduce the likelihood that errors in your update code will negatively affect your database. It of course doesn't completely eliminate the random computer crash that can affect the database, but reducing the time during which multiple connections and update queries are executing might make it less likely.
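A rough sketch of the last two steps, assuming the intermediate .accdb already contains a local LocalTable and a linked LinkedTable; the names mirror the query above and the path is a placeholder:
// Open the intermediate database; the linked table transparently points at the working database.
using (OleDbConnection conn = new OleDbConnection(
    @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\Intermediate.accdb"))
{
    conn.Open();
    using (OleDbTransaction tran = conn.BeginTransaction())
    {
        OleDbCommand cmd = conn.CreateCommand();
        cmd.Transaction = tran;
        // Single set-based update from the local staging table into the linked (permanent) table.
        cmd.CommandText = "UPDATE LocalTable INNER JOIN LinkedTable ON LocalTable.UpdateID = LinkedTable.ID " +
                          "SET LinkedTable.Data = LocalTable.Data";
        cmd.ExecuteNonQuery();
        tran.Commit();
    }
}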
I think your catch block is wrong, because if you get an exception other than OleDbException, you will not roll back the transaction
try
{
    // ...
    Scope.Commit();
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
    Scope.Rollback();
}
That is, Exception instead of OleDbException. Exceptions could come from anywhere and not necessarily Ole DB, and you still want to roll back everything you've done so far in that case.
That being said, if you have a few hundred thousand rows, I would seriously consider batching the updates and processing just a few thousand per iteration, with a transaction per iteration.
In terms of transactional behavior, the main question would be: do you really want to roll back everything you have updated so far in case of failure, or just retry/continue where you left off? If the answer is that you want to retry/continue, then you will likely want to create a BatchUpdateTask table or similar, with all the information you need for each iteration.
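A loose sketch of that batching idea, reusing the question's connection and per-row update; the batch size and the UpdateSingleRow helper are made up for illustration:
const int BatchSize = 2000;
using (OleDbConnection connection = new OleDbConnection(Con))
{
    connection.Open();
    for (int start = 0; start < VirtualTable.Rows.Count; start += BatchSize)
    {
        // One transaction (one commit) per batch instead of per row.
        using (OleDbTransaction tran = connection.BeginTransaction())
        {
            int end = Math.Min(start + BatchSize, VirtualTable.Rows.Count);
            for (int rowindex = start; rowindex < end; rowindex++)
            {
                UpdateSingleRow(connection, tran, rowindex); // builds and runs the UPDATE for one row
            }
            tran.Commit();
        }
    }
}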

How to totally lock a row in Entity Framework

I am working with a situation where we are dealing with money transactions.
For example, I have a table of user wallets, with their balance in that row.
UserId; Wallet Id; Balance
Now in our website and web services, every time a certain transaction happens, we need to:
check that there are enough funds available to perform that transaction;
deduct the costs of the transaction from the balance.
How and what is the correct way to go about locking that row / entity for the entire duration of my transaction?
From what I have read, there are some solutions where EF marks an entity and then compares that mark when it saves it back to the DB. However, what does it do when another user / program has already edited the amount?
Can I achieve this with EF? If not what other options do I have?
Would calling a stored procedure possibly allow me to lock the row properly, so that no one else can access that row in SQL Server whilst program A has the lock on it?
EF doesn't have a built-in locking mechanism; you would probably need to use a raw query like:
using (var scope = new TransactionScope(...))
{
    using (var context = new YourContext(...))
    {
        var wallet =
            context.ExecuteStoreQuery<UserWallet>("SELECT UserId, WalletId, Balance FROM UserWallets WITH (UPDLOCK) WHERE ...");
        // your logic
        scope.Complete();
    }
}
You can set the isolation level on the transaction in Entity Framework to ensure no one else can change it:
YourDataContext.Database.BeginTransaction(IsolationLevel.RepeatableRead)
RepeatableRead
Summary:
Locks are placed on all data that is used in a query, preventing other users from updating the data. Prevents non-repeatable reads but phantom rows are still possible.
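A small sketch of how that check-and-deduct might look with that isolation level (EF6 API; the entity and property names are assumptions based on the question):
using (var context = new YourContext())
using (var tran = context.Database.BeginTransaction(System.Data.IsolationLevel.RepeatableRead))
{
    // The row read here stays locked against other writers until Commit/Rollback.
    var wallet = context.UserWallets.Single(w => w.WalletId == walletId);
    if (wallet.Balance < cost)
        throw new InvalidOperationException("Insufficient funds");

    wallet.Balance -= cost;
    context.SaveChanges();
    tran.Commit();
}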
The whole point of a transactional database is that the consumer of the data determines how isolated their view of the data should be.
Irrespective of whether your transaction is serialized, someone else can perform a dirty read on the same data that you just changed but did not commit.
You should firstly concern yourself with the integrity of your view, and only then accept a degradation of the quality of that view to improve system performance where you are sure it is required.
Wrap everything in a TransactionScope with Serializable isolation level and you personally cannot really go wrong. Only drop the isolation level when you see it is genuinely required (i.e. when getting things wrong sometimes is OK).
Someone asks about this here: SQL Server: preventing dirty reads in a stored procedure
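For completeness, a minimal sketch of a TransactionScope with the Serializable isolation level spelled out explicitly (it is already the default for TransactionScope, but being explicit documents the intent; the context name is an assumption):
var options = new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.Serializable };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var context = new YourContext())
{
    // ... read balance, check funds, deduct ...
    context.SaveChanges();
    scope.Complete();
}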

How do I lock just the relevant rows in a .NET transaction

I have a Project Table, a Stakeholder Table, and an Association Table (which takes a ProjectID and a StakeholderID as foreign keys).
I want to delete a single Project but must first delete all that Project's rows in the Association Table.
Here is the method. ProjectRow is a strongly typed DataRow created with the DataSet Designer.
public void RemoveProject(ProjectRow project)
{
    try
    {
        var associations = from a in ds.Association.AsEnumerable()
                           where a.Project == project.ProjID
                           select a;
        foreach (DataRow assoc in associations)
        {
            assoc.Delete();
        }
        project.Delete();
        using (TransactionScope scope = new TransactionScope())
        {
            assocTableAdapter.Update(ds.Association);
            System.Threading.Thread.Sleep(40000); // to test the transaction.
            projTableAdapter.Update(ds.Project);
            scope.Complete();
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
This method does achieve the effect required (it stops associations being added to the deleted project during the transaction), but it seems to place a read and write lock on all the tables, so I cannot even read from the Project Table during the sleep period.
I would like to be able to add other Project/Stakeholder pairs to the Association Table during the transaction. How do I achieve this?
Cheers.
A few links below, but in short: you can hint that you'd like row-level locking, and the database engine may or may not take the suggestion. However, since you're letting the library handle the deletes, who knows what it's doing (short of turning on Profiler and capturing statements). It could very well be issuing table locks, or you may simply have the misfortune of the row locks escalating to page locks, with the rows you are attempting to access in your query outside the transaction happening to be on the same page.
Is it possible to force row level locking in SQL Server?
Why is SQL Server 2008 blocking SELECT's on long transaction INSERT's?
https://dba.stackexchange.com/questions/6512/difference-between-row-level-and-page-level-locking-and-consequences
What's a body to do? You need to balance your concurrency needs against your risk of bad data. Here's a fun poster about SQL Server isolation levels.
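If you do want to hand-roll the deletes with an explicit hint instead of letting the table adapters decide, a hypothetical sketch follows; ROWLOCK is only a request and escalation can still occur, and the column names are taken from the question's LINQ query:
using (TransactionScope scope = new TransactionScope())
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open(); // enlists in the ambient TransactionScope
    SqlCommand cmd = new SqlCommand(
        "DELETE FROM Association WITH (ROWLOCK) WHERE Project = @projId", conn);
    cmd.Parameters.AddWithValue("@projId", project.ProjID);
    cmd.ExecuteNonQuery();
    // ... delete the Project row, then complete ...
    scope.Complete();
}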

How to speed up LINQ inserts with SQL CE?

History
I have a list of "records" (3,500) which I save to XML and compress on exit of the program. Since:
the number of the records increases
only around 50 records need to be updated on exit
saving takes about 3 seconds
I needed another solution -- an embedded database. I chose SQL CE because it works with VS without any problems and the license is OK for me (I compared it to Firebird, SQLite, EffiProz, db4o and BerkeleyDB).
The data
The record structure: 11 fields, 2 of them make up the primary key (nvarchar + byte). The other fields are bytes, datetimes, doubles and ints.
I don't use any relations, joins, indices (except for the primary key), triggers, views, and so on. It is actually a flat Dictionary -- pairs of Key+Value. I modify some of them, and then I have to update them in the database. From time to time I add some new "records" and I need to store (insert) them. That's all.
LINQ approach
I have a blank database (file), so I make 3500 inserts in a loop (one by one). I don't even check if the record already exists because the db is blank.
Execution time? 4 minutes, 52 seconds. I fainted (mind you: XML + compress = 3 seconds).
SQL CE raw approach
I googled a bit, and despite such claims as here:
LINQ to SQL (CE) speed versus SqlCe
stating it is the fault of SQL CE itself, I gave it a try.
The same loop but this time inserts are made with SqlCeResultSet (DirectTable mode, see: Bulk Insert In SQL Server CE) and SqlCeUpdatableRecord.
The outcome? Are you sitting comfortably? Well... 0.3 seconds (yes, a fraction of a second!).
The problem
LINQ is very readable; raw operations are quite the opposite. I could write a mapper which translates all column indexes to meaningful names, but it seems like reinventing the wheel -- after all, it is already done in... LINQ.
So maybe there is a way to tell LINQ to speed things up? QUESTION -- how to do it?
The code
LINQ
foreach (var entry in dict.Entries.Where(it => it.AlteredByLearning))
{
    PrimLibrary.Database.Progress record = null;
    record = new PrimLibrary.Database.Progress();
    record.Text = entry.Text;
    record.Direction = (byte)entry.dir;
    db.Progress.InsertOnSubmit(record);
    record.Status = (byte)entry.LastLearningInfo.status.Value;
    // ... and so on
    db.SubmitChanges();
}
Raw operations
SqlCeCommand cmd = conn.CreateCommand();
cmd.CommandText = "Progress";
cmd.CommandType = System.Data.CommandType.TableDirect;
SqlCeResultSet rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable);
foreach (var entry in dict.Entries.Where(it => it.AlteredByLearning))
{
    SqlCeUpdatableRecord record = null;
    record = rs.CreateRecord();
    int col = 0;
    record.SetString(col++, entry.Text);
    record.SetByte(col++, (byte)entry.dir);
    record.SetByte(col++, (byte)entry.LastLearningInfo.status.Value);
    // ... and so on
    rs.Insert(record);
}
Do more work per transaction.
Commits are generally very expensive operations for a typical relational database, as the database must wait for disk flushes to ensure data is not lost (ACID guarantees and all that). Conventional HDD disk IO without specialty controllers is very slow at this sort of operation: the data must be flushed to the physical disk -- perhaps only 30-60 commits can occur per second with an IO sync in between!
See the SQLite FAQ: INSERT is really slow - I can only do few dozen INSERTs per second. Ignoring the different database engine, this is the exact same issue.
Normally, LINQ2SQL creates a new implicit transaction inside SubmitChanges. To avoid this implicit transaction/commit (commits are expensive operations) either:
Call SubmitChanges less (say, once outside the loop) or;
Setup an explicit transaction scope (see TransactionScope).
One example of using a larger transaction context is:
using (var ts = new TransactionScope()) {
    // LINQ2SQL will automatically enlist in the transaction scope.
    // SubmitChanges now will NOT create a new transaction/commit each time.
    DoImportStuffThatRunsWithinASingleTransaction();
    // Important: Make sure to COMMIT the transaction.
    // (The transaction used for SubmitChanges is committed to the DB.)
    // This is when the disk sync actually has to happen,
    // but it only happens once, not 3500 times!
    ts.Complete();
}
However, the semantics of an approach using a single transaction or a single call to SubmitChanges are different than that of the code above calling SubmitChanges 3500 times and creating 3500 different implicit transactions. In particular, the size of the atomic operations (with respect to the database) is different and may not be suitable for all tasks.
For LINQ2SQL updates, changing the optimistic concurrency model (disabling it or just using a timestamp field, for instance) may result in small performance improvements. The biggest improvement, however, will come from reducing the number of commits that must be performed.
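Applied to the question's LINQ loop, the minimal change is just hoisting SubmitChanges out of the loop (optionally inside a TransactionScope), roughly:
using (var ts = new TransactionScope())
{
    foreach (var entry in dict.Entries.Where(it => it.AlteredByLearning))
    {
        var record = new PrimLibrary.Database.Progress();
        record.Text = entry.Text;
        record.Direction = (byte)entry.dir;
        record.Status = (byte)entry.LastLearningInfo.status.Value;
        // ... and so on
        db.Progress.InsertOnSubmit(record);
    }
    db.SubmitChanges(); // one flush/commit for all 3500 rows instead of one per row
    ts.Complete();
}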
Happy coding.
I'm not positive on this, but it seems like the db.SubmitChanges() call should be made outside of the loop. Maybe that would speed things up?
