I am trying to insert some data into two MySQL tables.
The second table stores the first table's row Id as a foreign key.
I have this code that works fine, but it is super slow. What is the best/fastest way to speed it up?
string ConnectionString = "server=localhost; password=1234; database=DB; user=Jack";
MySqlConnection mConnection = new MySqlConnection(ConnectionString);
mConnection.Open();
for (int i = 0; i < 100000; i++)
{
    string insertPerson = "INSERT INTO myentities(Name) VALUES (@first_name);"
        + "INSERT INTO secondtable(Id, Name, myentities) VALUES (@ID, @city, LAST_INSERT_ID());";
    MySqlCommand command = new MySqlCommand(insertPerson, mConnection);
    command.Parameters.AddWithValue("@first_name", "Jack");
    command.Parameters.AddWithValue("@ID", i + 1);
    command.Parameters.AddWithValue("@city", "Frank");
    command.ExecuteNonQuery();
    command.Parameters.Clear();
}
I found the following code in another Stack Overflow question, but it inserts data into a single table only, not into multiple tables connected through a foreign key.
This code is pretty fast, but I am not sure how to make it work with multiple tables.
public static void BulkToMySQL()
{
    string ConnectionString = "server=192.168.1xxx";
    StringBuilder sCommand = new StringBuilder("INSERT INTO User (FirstName, LastName) VALUES ");
    using (MySqlConnection mConnection = new MySqlConnection(ConnectionString))
    {
        List<string> Rows = new List<string>();
        for (int i = 0; i < 100000; i++)
        {
            Rows.Add(string.Format("('{0}','{1}')", MySqlHelper.EscapeString("test"), MySqlHelper.EscapeString("test")));
        }
        sCommand.Append(string.Join(",", Rows));
        sCommand.Append(";");
        mConnection.Open();
        using (MySqlCommand myCmd = new MySqlCommand(sCommand.ToString(), mConnection))
        {
            myCmd.CommandType = CommandType.Text;
            myCmd.ExecuteNonQuery();
        }
    }
}
The fastest approach is a strategy that avoids calling MySQL in a loop via the .NET MySQL Connector, especially for i = 0 to 99999. You achieve this either through CASE A: direct db table manipulation, or CASE B: CSV-to-db imports with LOAD DATA INFILE.
For CASE B: it is often wise to bring that data into a staging table or tables first. Checks can be made for data readiness depending on the particular circumstances; you may be getting external data that needs scrubbing (ETL). Another benefit is that you avoid committing unholy data, not fit for consumption, to your production tables, which leaves an abort option open to you.
Now onto performance anecdotes. With MySQL and .NET Connector version 6.9.9.0 in late 2016, I can achieve up to 40x performance gains by going this route. It may seem unnatural not to call an INSERT query in a loop, but I don't, at least not for bulk data ingest. Ok, sure, in small loops, but not even for 500 rows of bulk data. You will experience noticeable UX improvements if you re-craft some of these routines.
The above is for data that truly came from external sources. For CASE A: normal data that is already in your db, the above does not apply. In those situations you strive to craft your SQL so the data is massaged as much as possible (read: 100%) on the server side, without bringing it back to the client only to loop over Connector calls pushing it back into the server. This does not necessarily mandate stored procedures. Client-side calls that operate on the data in place, without down-to-client transfers and back up, are what you shoot for.
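As a concrete illustration of CASE B, here is a minimal sketch using the Connector's MySqlBulkLoader wrapper around LOAD DATA INFILE. The file path, staging table name, and CSV layout are assumptions for the example:

// requires MySql.Data.MySqlClient
// Minimal sketch: bulk-load a CSV into a staging table (names assumed)
var bulk = new MySqlBulkLoader(mConnection)
{
    TableName = "myentities_staging",  // assumed staging table
    FileName = @"C:\data\people.csv",  // assumed CSV path
    FieldTerminator = ",",
    LineTerminator = "\n",
    NumberOfLinesToSkip = 1            // skip the CSV header row
};
bulk.Columns.Add("Name");              // map CSV columns to table columns
int rowsLoaded = bulk.Load();          // one round trip instead of 100,000

Once the rows are staged and scrubbed, a single server-side INSERT ... SELECT can move them into the production tables.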
You can gain some improvement by moving unnecessary operations out of the loop, since anything you do there is repeated 100,000 times:
string insertPerson =
    "INSERT INTO myentities(Name) VALUES (@first_name);"
    + "INSERT INTO secondtable(Id, Name, myentities) VALUES (@ID, @city, LAST_INSERT_ID());";

string ConnectionString = "server=localhost; password=1234; database=DB; user=Jack";

using (var connection = new MySqlConnection(ConnectionString))
using (var command = new MySqlCommand(insertPerson, connection))
{
    // guessing at column types and lengths here
    command.Parameters.Add("@first_name", MySqlDbType.VarChar, 50).Value = "Jack";
    var id = command.Parameters.Add("@ID", MySqlDbType.Int32);
    command.Parameters.Add("@city", MySqlDbType.VarChar, 50).Value = "Frank";
    connection.Open();
    for (int i = 1; i <= 100000; i++)
    {
        id.Value = i;
        command.ExecuteNonQuery();
    }
}
But mostly, you try to avoid this scenario. Instead, you'd do something like use a numbers table to project the results for both tables in advance. There are also some things you can do around foreign key constraints: locking (you need to lock the whole table to avoid bad keys if someone else inserts, or tries to read, partially inserted records), transaction logging (you can set it to log only the batch rather than each change), and foreign key enforcement (you can turn it off while you handle the insert). A sketch of the transaction idea follows below.
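For instance, here is a minimal sketch of the batching ideas above, reusing the insertPerson SQL and connection string from the previous block. Committing once instead of 100,000 times avoids per-statement commit overhead; disabling foreign_key_checks is optional and assumes you trust the batch:

using (var connection = new MySqlConnection(ConnectionString))
{
    connection.Open();
    // optional: relax FK enforcement for the duration of the load (assumes trusted data)
    new MySqlCommand("SET foreign_key_checks = 0;", connection).ExecuteNonQuery();
    using (var transaction = connection.BeginTransaction())
    using (var command = new MySqlCommand(insertPerson, connection, transaction))
    {
        command.Parameters.Add("@first_name", MySqlDbType.VarChar, 50).Value = "Jack";
        var id = command.Parameters.Add("@ID", MySqlDbType.Int32);
        command.Parameters.Add("@city", MySqlDbType.VarChar, 50).Value = "Frank";
        for (int i = 1; i <= 100000; i++)
        {
            id.Value = i;
            command.ExecuteNonQuery();
        }
        transaction.Commit(); // a single commit for the whole batch
    }
    new MySqlCommand("SET foreign_key_checks = 1;", connection).ExecuteNonQuery();
}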
I want to insert data into a table using a stored procedure. But I have the database table column names as parameters, and I want to use those parameters as column names in the SP to insert the data. Do you have any idea how to insert data using column names passed as parameters?
cmd3.Connection = conn;
cmd3.CommandType = CommandType.StoredProcedure;
cmd3.CommandText = "CustomerInfoProcedure";
cmd3.Parameters.AddWithValue("@Col0", Col0);
cmd3.Parameters.AddWithValue("@Col1", Col1);
cmd3.Parameters.AddWithValue("@Col2", Col2);

-- the insert query used in the SP, but it gives an error
insert into #AgentDetails (@Col0, @Col1, @Col2)
values (@eCol0, @eCol1, @eCol2);
To my knowledge (although I haven't worked with SQL Server for a while), you can't really do that directly. If you insist on having it this way, you have a couple of (bad) options:
Use sp_executesql and build your query dynamically by concatenating relevant strings (MSDN). This approach will likely result in fairly slow queries (since they can't be optimized before they are generated) and has huge security downsides (think SQL injection).
Have a set of prepared queries inside your SP that cover all possible combinations of the input parameters. This will alleviate the performance and security concerns, but it will leave you (depending on the number of field combinations you want to support) with huge, complicated code in your SP that will be hard to maintain.
UPD:
After seeing your comment: in your case it will be much better to handle column reordering in the code and only have a single signature for the SP. E.g.
var value0 = Col0 == "field0" ? eCol0 : Col1 == "field0" ? eCol1 : eCol2;
var value1 = ...
cmd3.Connection = conn;
cmd3.CommandType = CommandType.StoredProcedure;
cmd3.CommandText = "CustomerInfoProcedure";
cmd3.Parameters.AddWithValue("@value0", value0);
cmd3.Parameters.AddWithValue("@value1", value1);
cmd3.Parameters.AddWithValue("@value2", value2);

-- the insert query used in the SP
insert into #AgentDetails (field0, field1, field2)
values (@value0, @value1, @value2);
UPD2
If you have a large number of variables, then similar approach would work as long as all the values and column-field mapping are stored in appropriate data structures. E.g. let's say that you have an array of column names in the order they follow in your spreadsheet, e.g. taken from a spreadsheet header:
string[] columnsNames; // ["field1", "field2", "field10", "field0", ...]
, an array of values that you need to insert, per row, in the same order as the columnsNames:
string[] values; // ["value1", "value2", "value10", "value0", ...]
and an array where the column names are listed in the order they need to be in for your SP parameters:
// This list can be made into a constant, but you need
// to keep it in sync with the SP signature
string[] parameterOrder = { "field0", "field1", "field2", ... };
In this case you can use logic like this one to add your data right into the correct place:
// This dictionary will be used for field position lookups
var columnOrderDict = new Dictionary<string, int>();
for (var i = 0; i < columnsNames.Length; i++)
{
    columnOrderDict[columnsNames[i]] = i;
}

cmd3.Connection = conn;
cmd3.CommandType = CommandType.StoredProcedure;
cmd3.CommandText = "CustomerInfoProcedure";

for (var j = 0; j < parameterOrder.Length; j++)
{
    var currentFieldName = parameterOrder[j];
    if (columnOrderDict.ContainsKey(currentFieldName))
    {
        cmd3.Parameters.AddWithValue(currentFieldName, values[columnOrderDict[currentFieldName]]);
    }
    else
    {
        cmd3.Parameters.AddWithValue(currentFieldName, DBNull.Value);
    }
}
This code is built on multiple assumptions, such as that the column headers in your spreadsheet will exactly match the stored procedure parameter names, but I hope it gives you enough of a hint to build your own logic.
Also don't forget proper validation: currently the only thing this code guards against is a field that the SP needs being missing from the input data. You also need to validate the data format, check that the number of values matches the number of headers, and so on.
I want to create a table dynamically and fill it with random numbers, but when I execute the code, MySQL throws a syntax error.
for (int i = 0; i < 100; i++)
{
    // gameSession is an int; I tried Parameters.AddWithValue but still have the same problem.
    MySqlCommand dice = new MySqlCommand("INSERT INTO " + gameSession.ToString() + " (Dice1, Dice2) VALUES (@dice1, @dice2)", myConnection);
    Random dice1 = new Random();
    dice.Parameters.AddWithValue("@dice1", dice1.Next(1, 7));
    Random dice2 = new Random();
    dice.Parameters.AddWithValue("@dice2", dice2.Next(1, 7));
    if (myConnection.State == ConnectionState.Closed)
    {
        myConnection.Open();
    }
    dice.ExecuteNonQuery();
}
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near INSERT INTO '7571877 ...
One major reason you have an error in your syntax is that the table is not quoted. Check out the MySQL manual for identifiers (which includes table names).
Just as you can't use bare numbers as variable names, you can't use them as table names without escaping the identifiers. A couple of solutions: prefix the table name with a letter, or use a single table. A short sketch of both follows.
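For illustration, a minimal sketch of both fixes (the backtick quoting is MySQL-specific, and the s prefix is just an arbitrary letter):

// Option 1: backtick-quote the numeric identifier (MySQL-specific)
string sql1 = "INSERT INTO `" + gameSession + "` (Dice1, Dice2) VALUES (@dice1, @dice2)";
// Option 2: prefix with a letter so the identifier no longer starts with a digit
string sql2 = "INSERT INTO s" + gameSession + " (Dice1, Dice2) VALUES (@dice1, @dice2)";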
Answering the question in your title:
You can create a table only if it doesn't already exist by using the IF NOT EXISTS clause. The SQL looks like this:
CREATE TABLE IF NOT EXISTS table_name (
    dice1 INTEGER,
    dice2 INTEGER
);
If you insist on going down this route, you should create your table names with a standard prefix letter:
"s" + gameSession.ToString()
Be warned: having a separate table for each session complicates your database maintenance, particularly when dealing with abandoned sessions. It's a lot easier to remove rows from a single table than to find all the tables that are abandoned.
Another serious concern is database security. When building an application against a database, it is far better to grant a user only insert and update privileges than to grant the ability to create and drop any table in your database. Since the table names are dynamically created, you can't easily maintain per-table privileges. If this is web facing, you are opening yourself up to serious liability for very little gain.
Answering the need:
A better design is to create a single table with a session id. Otherwise, you will have any number of tables, each holding only a single session's records. You would create the table once, like this:
CREATE TABLE IF NOT EXISTS DiceRolls (
    session INTEGER,
    dice1 INTEGER,
    dice2 INTEGER
);
Your inserts would only need to change a little bit:
var random = new Random(); // create one instance; new Random() per loop iteration can repeat seeds
for (int i = 0; i < 100; i++)
{
    MySqlCommand dice = new MySqlCommand("INSERT INTO DiceRolls (Session, Dice1, Dice2) VALUES (@session, @dice1, @dice2)", myConnection);
    dice.Parameters.AddWithValue("@session", gameSession);
    dice.Parameters.AddWithValue("@dice1", random.Next(1, 7));
    dice.Parameters.AddWithValue("@dice2", random.Next(1, 7));
    if (myConnection.State == ConnectionState.Closed)
    {
        myConnection.Open();
    }
    dice.ExecuteNonQuery();
}
This makes cleanup a lot easier, as you can run a batch job that deletes everything that isn't a current session, as in the sketch below.
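For example, a minimal cleanup sketch; the ActiveSessions table is an assumption standing in for however current sessions are tracked:

// Hypothetical batch cleanup: remove rolls for sessions that are no longer active
var cleanup = new MySqlCommand(
    "DELETE FROM DiceRolls WHERE session NOT IN (SELECT session FROM ActiveSessions)",
    myConnection);
cleanup.ExecuteNonQuery();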
The problem is with your table name. Make sure the table name is correct, then use this function:
void InsertDice(string connectionString, string tableName, int count)
{
    var commandText = String.Format("INSERT INTO `{0}` (Dice1, Dice2) VALUES (@dice1, @dice2)", tableName);
    var random = new Random(DateTime.Now.Millisecond);
    using (var connection = new MySqlConnection(connectionString))
    using (var command = connection.CreateCommand())
    {
        connection.Open();
        command.CommandText = commandText;
        for (int i = 0; i < count; i++)
        {
            command.Parameters.AddWithValue("@dice1", random.Next(1, 7));
            command.Parameters.AddWithValue("@dice2", random.Next(1, 7));
            command.ExecuteNonQuery();
            command.Parameters.Clear();
        }
    }
}
I have the following code and it works fine, but the insert took more than three hours.
How can I optimize the insert into the SQL table?
foreach (var sheetName in GetExcelSheetNames(connectionString))
{
    using (OleDbConnection con1 = new OleDbConnection(connectionString))
    {
        var dt = new DataTable();
        string query = string.Format("SELECT * FROM [{0}]", sheetName);
        con1.Open();
        OleDbDataAdapter adapter = new OleDbDataAdapter(query, con1);
        adapter.Fill(dt);
        using (SqlConnection con = new SqlConnection(consString))
        {
            con.Open();
            for (int i = 2; i < dt.Rows.Count; i++)
            {
                for (int j = 1; j < dt.Columns.Count; j += 3)
                {
                    try
                    {
                        var s = dt.Rows[i][0].ToString();
                        var dt1 = DateTime.Parse(s, CultureInfo.GetCultureInfo("fr-FR"));
                        var s1 = dt.Rows[i][j].ToString();
                        var s2 = dt.Rows[i][j + 1].ToString();
                        var s3 = sheetName.Remove(sheetName.Length - 1);

                        SqlCommand command = new SqlCommand("INSERT INTO [Obj CA MPX] ([CA TTC],[VAL MRG TTC],[CA HT],[VAL MRG HT],[Rayon],[Date],[Code Site]) VALUES (@ca, @val, @catHT, @valHT, @rayon, @date, @sheetName)", con);
                        command.Parameters.Add("@date", SqlDbType.Date).Value = dt1;
                        command.Parameters.AddWithValue("@ca", s1);
                        command.Parameters.AddWithValue("@val", s2);
                        command.Parameters.AddWithValue("@rayon", dt.Rows[0][j].ToString());
                        command.Parameters.AddWithValue("@sheetName", s3);
                        command.Parameters.AddWithValue("@catHT", DBNull.Value);
                        command.Parameters.AddWithValue("@valHT", DBNull.Value);
                        command.ExecuteNonQuery();
                    }
                    catch
                    {
                    }
                }
            }
        }
    }
}
Maybe you should save it as a file and use BULK INSERT:
https://msdn.microsoft.com/de-de/library/ms188365%28v=sql.120%29.aspx
SQL Server has the option of using a Bulk Insert.
Here is a good article on importing a csv.
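Since the code already fills a DataTable per sheet, another option in the same spirit is SqlBulkCopy, which streams the rows to SQL Server in one bulk operation. A minimal sketch, assuming the DataTable has first been shaped so its columns match the destination table:

using (var bulkCopy = new SqlBulkCopy(consString))
{
    bulkCopy.DestinationTableName = "[Obj CA MPX]";
    // explicit mappings; assumes dt was reshaped to these column names beforehand
    bulkCopy.ColumnMappings.Add("CA TTC", "CA TTC");
    bulkCopy.ColumnMappings.Add("Date", "Date");
    // ... map the remaining columns the same way
    bulkCopy.WriteToServer(dt);
}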
You should first read this article from Eric Lippert: Which is faster?.
Keep this in mind while trying to optimize your process.
The insert took 3 hours, but did you insert 10 items or 900,000,000,000 items?
If it's the latter, maybe 3 hours is pretty good.
What is your database? SQL Server 2005 Express? SQL Server 2014 Enterprise?
The advice could differ.
Without more details, we can only give you suggestions that may or may not apply to your configuration.
Here are some off the top of my head:
Is the bottleneck on the DB side? Check the execution plan, add indexes if needed
Beware of AddWithValue, it can prevent the use of indexes in your query
If you are loading a lot of data into a non-live database, you could use a lighter recovery model to avoid generating a lot of useless logs (a bulk load will automatically use BULK_LOGGED, or you can switch to the SIMPLE recovery model with alter database [YourDB] set recovery SIMPLE; don't forget to re-enable the FULL recovery model afterwards)
Are there alternatives to loading the data from an Excel file? Could you use another database instead, or convert the Excel file to CSV?
What does the performance monitor tell you? Maybe you need better hardware (more RAM, faster disks, RAID), or you could move some heavily used files (mdf, ldf) onto separate disks.
You could copy the Excel file several times and use parallelization, loading into different tables that will be partitions of your final table.
This list could continue forever.
Here is an interesting article about optimizing data loading: We Loaded 1TB in 30 Minutes with SSIS, and So Can You
This article is focused on SSIS, but some of the advice applies beyond it.
You can put several (e.g. 100) inserts into a single string using a StringBuilder. Use an index in the parameter names. Note that you can have a maximum of 2100 parameters per query.
StringBuilder batch = new StringBuilder();
for (int i = 0; i < pageSize; i++)
{
    batch.AppendFormat(
        @"INSERT INTO [Obj CA MPX] ([CA TTC],[VAL MRG TTC], ...) VALUES (@ca{0}, @val{0}, ...)",
        i);
    batch.AppendLine();
    batch.AppendLine();
}
SqlCommand command = new SqlCommand(batch.ToString(), con);

// append parameters, using the index
for (int i = 0; i < pageSize; i++)
{
    command.Parameters.Add("@date" + i, SqlDbType.Date).Value = dt1[i];
    command.Parameters.AddWithValue("@ca" + i, s1[i]);
    // ...
}
command.ExecuteNonQuery();
Of course this is not finished, you have to integrate the pages into your existing loops, which may not be too simple.
Alternatively, you can skip parameters and put the argument values directly into the query, properly escaped. This way you can create much larger batches (I would put 1000 to 10000 inserts into one batch) and it's much easier to implement. A sketch follows below.
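A minimal sketch of the no-parameters variant; the EscapeSql helper is a hypothetical stand-in (at minimum it must double up single quotes), and s1/s2 stand for the per-row values from the earlier loop:

// Hypothetical escaping helper: at minimum, double up single quotes
static string EscapeSql(string s) => s.Replace("'", "''");

var batch = new StringBuilder();
for (int i = 0; i < 5000; i++) // larger batches become practical without parameters
{
    batch.AppendFormat(
        "INSERT INTO [Obj CA MPX] ([CA TTC],[VAL MRG TTC]) VALUES ('{0}','{1}');",
        EscapeSql(s1), EscapeSql(s2));
    batch.AppendLine();
}
using (var command = new SqlCommand(batch.ToString(), con))
{
    command.ExecuteNonQuery();
}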
I have a list called ListTypes that holds 10 types of products. The code below loops and runs the stored procedure once for each product type, storing the results in the list ListIds. This is killing my SQL box, since I have over 200 users executing this constantly all day.
I know it's not good architecture to loop a SQL statement, but this is the only way I made it work. Any ideas how I can do this without looping? Maybe a LINQ statement? I've never used LINQ at this magnitude. Thank you.
protected void GetIds(string Type, string Sub)
{
    LinkedIds.Clear();
    using (SqlConnection cs = new SqlConnection(connstr))
    {
        for (int x = 0; x < ListTypes.Count; x++)
        {
            cs.Open();
            SqlCommand select = new SqlCommand("spUI_LinkedIds", cs);
            select.CommandType = System.Data.CommandType.StoredProcedure;
            select.Parameters.AddWithValue("@Type", Type);
            select.Parameters.AddWithValue("@Sub", Sub);
            select.Parameters.AddWithValue("@TransId", ListTypes[x]);
            SqlDataReader dr = select.ExecuteReader();
            while (dr.Read())
            {
                ListIds.Add(Convert.ToInt32(dr["LinkedId"]));
            }
            cs.Close();
        }
    }
}
Not a full answer, but this wouldn't fit in a comment. You can at least update your existing code to be more efficient like this:
protected List<int> GetIds(string Type, string Sub, IEnumerable<int> types)
{
    var result = new List<int>();
    using (SqlConnection cs = new SqlConnection(connstr))
    using (SqlCommand select = new SqlCommand("spUI_LinkedIds", cs))
    {
        select.CommandType = System.Data.CommandType.StoredProcedure;
        // Don't use AddWithValue! Be explicit about your DB types.
        // I had to guess here. Replace with the actual types from your database.
        select.Parameters.Add("@Type", SqlDbType.VarChar, 10).Value = Type;
        select.Parameters.Add("@Sub", SqlDbType.VarChar, 10).Value = Sub;
        var TransID = select.Parameters.Add("@TransId", SqlDbType.Int);
        cs.Open();
        foreach (int type in types)
        {
            TransID.Value = type;
            using (SqlDataReader dr = select.ExecuteReader())
            {
                while (dr.Read())
                {
                    result.Add((int)dr["LinkedId"]);
                }
            }
        }
    }
    return result;
}
Note that this way you only open and close the connection once. Normally in ADO.Net it's better to use a new connection and re-open it for each query. The exception is in a tight loop like this. Also, the only thing that changes inside the loop this way is the one parameter value. Finally, it's better to design methods that don't rely on other class state. This method no longer needs to know about the ListTypes and ListIds class variables, which makes it possible to (among other things) do better unit testing on the method.
Again, this isn't a full answer; it's just an incremental improvement. What you really need to do is write another stored procedure that accepts a table-valued parameter, and build on the query from your existing stored procedure to JOIN with that parameter, so that all of this fits into a single SQL statement. But until you share your stored procedure code, this is about as much help as I can give you. A client-side sketch of that approach follows below.
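For reference, here is a minimal client-side sketch of the table-valued-parameter approach, dropping into the method above. The dbo.IntList type and the spUI_LinkedIds_Tvp procedure are hypothetical; you would create them on the server first, e.g. CREATE TYPE dbo.IntList AS TABLE (Id int):

// Pack the ids into a DataTable matching the (hypothetical) dbo.IntList shape
var idTable = new DataTable();
idTable.Columns.Add("Id", typeof(int));
foreach (int type in types)
    idTable.Rows.Add(type);

using (var cs = new SqlConnection(connstr))
using (var select = new SqlCommand("spUI_LinkedIds_Tvp", cs)) // hypothetical TVP-based SP
{
    select.CommandType = CommandType.StoredProcedure;
    select.Parameters.Add("@Type", SqlDbType.VarChar, 10).Value = Type;
    select.Parameters.Add("@Sub", SqlDbType.VarChar, 10).Value = Sub;
    var tvp = select.Parameters.AddWithValue("@TransIds", idTable);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.IntList";
    cs.Open();
    using (var dr = select.ExecuteReader())
        while (dr.Read())
            result.Add((int)dr["LinkedId"]);
}

This trades the per-type round trips for a single call that the server can JOIN against.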
Besides the improvements others wrote about:
You could insert your IDs into a temp table and then make one
SELECT * FROM WhatEverTable WHERE transid IN (SELECT transid FROM #tempTable)
On MSSQL this works really fast.
If you're not using MSSQL, it's possible that one big SELECT with joins is faster than a SELECT ... IN. You have to test these cases yourself on your DBMS. A sketch of the temp-table idea follows below.
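A minimal sketch of the temp-table idea, assuming MSSQL. Note the temp table only lives as long as the connection, so everything runs on the same open connection:

using (var conn = new SqlConnection(connstr))
{
    conn.Open();
    // the temp table is scoped to this connection
    new SqlCommand("CREATE TABLE #tempTable (transid int)", conn).ExecuteNonQuery();

    // bulk-load the ids into the temp table
    var ids = new DataTable();
    ids.Columns.Add("transid", typeof(int));
    foreach (int t in ListTypes) ids.Rows.Add(t);
    using (var bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "#tempTable";
        bulk.WriteToServer(ids);
    }

    // one set-based query instead of one query per id
    var select = new SqlCommand(
        "SELECT * FROM WhatEverTable WHERE transid IN (SELECT transid FROM #tempTable)", conn);
    using (var dr = select.ExecuteReader())
    {
        while (dr.Read()) { /* read rows */ }
    }
}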
According to your comment:
The idea is lets say I have a table and I have to get all records from the table that has this 10 types of products. How can I get all of this products? But this number is dynamic.
So... why use a stored procedure at all? Why not query the table?
// If [Type] and [Sub] are external inputs (e.g. they come from a user request),
// they should be sanitized (remove or escape backslashes and apostrophes).
// create connection
string queryTmpl = "SELECT LinkedId FROM [yourTable] WHERE [TYPE] = '{0}' AND [SUB] = '{1}' AND [TRANSID] IN ({2})";
string query = string.Format(queryTmpl, Type, Sub, string.Join(", ", ListTypes));
SqlCommand select = new SqlCommand(query, cs);
// and so forth
To use LINQ to SQL you would need to map the table to a class, which would make the query simpler to perform.
Replaces Question: Update multiple rows into SQL table
Here's a code snippet to update an exam results set.
The DB structure is as given, but I can submit stored procedures for inclusion (which are a pain to modify, so I save that until the end).
The question: is there a better way, using SQL Server 2005 and .NET 2.0?
string update = #"UPDATE dbo.STUDENTAnswers
SET ANSWER=#answer
WHERE StudentID =#ID and QuestionNum =#qnum";
SqlCommand updateCommand = new SqlCommand( update, conn );
conn.Open();
string uid = Session["uid"].ToString();
for (int i= tempStart; i <= tempEnd; i++)
{
updateCommand.Parameters.Clear();
updateCommand.Parameters.AddWithValue("#ID",uid);
updateCommand.Parameters.AddWithValue("#qnum",i);
updateCommand.Parameters.AddWithValue("#answer", Request.Form[i.ToString()]);
try
{
updateCommand.ExecuteNonQuery();
}
catch { }
}
A few things stand out:
You don't show where the SqlConnection is instantiated, so it's not clear that you're disposing it properly.
You shouldn't be swallowing exceptions in the loop - better to handle them in a top level exception handler.
You're instantiating new parameters on each iteration through the loop - you could just reuse the parameters.
Putting this together it could look something like the following (if you don't want to use a transaction, i.e. don't care if some but not all updates succeed):
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlCommand updateCommand = new SqlCommand(update, conn))
    {
        string uid = Session["uid"].ToString();
        updateCommand.Parameters.AddWithValue("@ID", uid);
        updateCommand.Parameters.Add("@qnum", System.Data.SqlDbType.Int);
        updateCommand.Parameters.Add("@answer", System.Data.SqlDbType.VarChar);
        for (int i = tempStart; i <= tempEnd; i++)
        {
            updateCommand.Parameters["@qnum"].Value = i;
            updateCommand.Parameters["@answer"].Value = Request.Form[i.ToString()];
            updateCommand.ExecuteNonQuery();
        }
    }
}
Or to use a transaction to ensure all or nothing:
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction transaction = conn.BeginTransaction())
    {
        using (SqlCommand updateCommand = new SqlCommand(update, conn, transaction))
        {
            string uid = Session["uid"].ToString();
            updateCommand.Parameters.AddWithValue("@ID", uid);
            updateCommand.Parameters.Add("@qnum", System.Data.SqlDbType.Int);
            updateCommand.Parameters.Add("@answer", System.Data.SqlDbType.VarChar);
            for (int i = tempStart; i <= tempEnd; i++)
            {
                updateCommand.Parameters["@qnum"].Value = i;
                updateCommand.Parameters["@answer"].Value = Request.Form[i.ToString()];
                updateCommand.ExecuteNonQuery();
            }
            transaction.Commit();
        }
    } // Transaction will be disposed and rolled back here if an exception is thrown
}
Finally, another problem is that you are mixing UI code (e.g. Request.Form) with data access code. It would be more modular and testable to separate these - e.g. by splitting your application into UI, Business Logic and Data Access layers.
For 30 updates I think you're on the right track, although the comment about the need for a using around updateCommand is correct.
We've found the best performing way to do bulk updates (>100 rows) is via the SqlBulkCopy class to a temporary table followed by a stored procedure call to populate the live table.
An issue I see is with when you open your connection.
I would at least open the connection just before each update and close it right after.
If your loop takes time to execute, you will have your connection open for a long time.
It is a good rule never to open your connection until you need it.
You can bulk update using OpenXML. Create an XML document containing all your questions and answers and use that to update the values; a sketch follows below.
Edit: If you stick with your current solution, you should at least wrap your SqlConnection and SqlCommand in using blocks to make sure they get disposed.
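A minimal sketch of the OpenXML route, written as the UPDATE from this question and assuming SQL Server 2005; the XML shape, column sizes, and the reuse of conn and uid from the question are assumptions for the example:

// Build an XML document from the posted answers (shape is an assumption)
string xml = "<Answers>"
    + "<Answer QuestionNum=\"1\" Answer=\"a\"/>"
    + "<Answer QuestionNum=\"2\" Answer=\"c\"/>"
    + "</Answers>";

string sql = @"
    DECLARE @h int;
    EXEC sp_xml_preparedocument @h OUTPUT, @xml;
    UPDATE s SET ANSWER = x.Answer
    FROM dbo.STUDENTAnswers s
    JOIN OPENXML(@h, '/Answers/Answer', 1)
         WITH (QuestionNum int, Answer varchar(255)) x
      ON s.QuestionNum = x.QuestionNum
    WHERE s.StudentID = @ID;
    EXEC sp_xml_removedocument @h;";

using (SqlCommand cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@xml", xml);
    cmd.Parameters.AddWithValue("@ID", uid);
    cmd.ExecuteNonQuery();
}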
Emit a single UPDATE that goes against a values table:
UPDATE s SET ANSWER=a FROM dbo.STUDENTAnswers s JOIN (
    SELECT 1 as q, 'answer1' as a
    UNION ALL SELECT 2, 'answer2' -- etc...
) x ON s.QuestionNum=x.q AND StudentID=@ID
so you just put this together like this:
using (SqlCommand updateCommand = new SqlCommand())
{
    updateCommand.CommandType = CommandType.Text;
    updateCommand.Connection = conn;
    if (conn.State != ConnectionState.Open) conn.Open();

    StringBuilder sb = new StringBuilder("UPDATE s SET ANSWER=a FROM dbo.STUDENTAnswers s JOIN (");
    string fmt = "SELECT {0} as q, @A{0} as a";
    for (int i = tempStart; i < tempEnd; i++)
    {
        sb.AppendFormat(fmt, i);
        fmt = " UNION ALL SELECT {0}, @A{0}";
        updateCommand.Parameters.AddWithValue("@A" + i.ToString(), Request.Form[i.ToString()]);
    }
    sb.Append(") x ON s.QuestionNum=x.q AND StudentID=@ID");
    updateCommand.CommandText = sb.ToString();
    updateCommand.Parameters.AddWithValue("@ID", uid);
    updateCommand.ExecuteNonQuery();
}
This has the advantage of being an all-or-nothing operation (as if you'd wrapped several updates in a transaction) and will run faster, since:
The table and associated indexes are looked at/updated once
You only pay for the latency between your application and the database server once, rather than on each update