I have a C# project that uses SQL Server Compact Edition and Entity Framework for data access. I need to insert or update a large number of rows (5,000+) in the database: if the key exists, update the record; if not, insert it. I cannot find a way to do this with Compact Edition and EF without horrible performance, i.e. it takes 2+ minutes on a Core i7 machine. I have tried searching for the record to see if it exists, then inserting if not or updating if it does; the search is the killer there. Compiling the search query gave only a small improvement. Another thing I've tried is inserting the record in a try/catch and updating if it fails, but that forces me to call SaveChanges on every record to get the exception, rather than once at the end, which is a performance killer. Obviously I can't use stored procedures, since it is Compact Edition. I've also looked at executing T-SQL directly against the database somehow, but the lack of procedural statements in Compact seems to rule that out.
I've searched the world over and am out of ideas. I really want to use Compact over Express if I can, for the deployment benefits and the ability to keep users from digging around in the database. Any suggestions would be appreciated.
Thanks
When we're using SQL CE (and SQL Server 2005 Express, for that matter) we always call an update first, then call an insert if the update reports a row count of 0. This is very simple to implement and does not require expensive try/catch blocks for control flow.
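A minimal sketch of that pattern with plain ADO.NET against SQL CE (the table "Items" and its "Id"/"Value" columns are hypothetical, not from the answer):

```csharp
// Update-first upsert: try the UPDATE, and only INSERT when no row
// was affected. Table and column names are placeholders.
static void Upsert(SqlCeConnection con, int id, string value)
{
    using (var cmd = con.CreateCommand())
    {
        cmd.CommandText = "UPDATE Items SET Value = @value WHERE Id = @id";
        cmd.Parameters.AddWithValue("@id", id);
        cmd.Parameters.AddWithValue("@value", value);

        // ExecuteNonQuery returns the affected row count;
        // zero means the key is not present yet, so insert instead.
        if (cmd.ExecuteNonQuery() == 0)
        {
            cmd.CommandText = "INSERT INTO Items (Id, Value) VALUES (@id, @value)";
            cmd.ExecuteNonQuery();
        }
    }
}
```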
Maybe you could obtain the result you seek by using simple queries.
Let's say the table you want to insert into or update is like this:
TABLE original
id integer,
value char(100)
First, you could create a temporary table with the new values (you can use SELECT INTO or other ways to create it):
TABLE temp
id integer,
value char(100)
Now you need to do two things: update the matching rows in original, then insert the new values:
UPDATE original
SET original.value = temp.value
FROM original, temp
WHERE original.id = temp.id
INSERT INTO original
SELECT * from temp
WHERE temp.id not IN (select o.id from original o)
Given your problem statement, I'm going to guess that this software assumes a relatively beefy environment. Have you considered taking the existence check away from SQL CE and doing it yourself? Essentially, grab a sorted list of all the IDs (keys?) from the relevant table and check every object's key against that list before queueing it for insertion.
This makes a few assumptions that would be bad news with a typical DB, but that you can probably get away with in sqlce. E.g., it assumes that rows won't be inserted or significantly modified by a different user while you're performing this insert.
If the list of keys is too long to reasonably hold in memory for such a check, I'm afraid I'd say that sqlce just might not be the right tool for the job. :(
I'm not sure if this is feasible or not, as I haven't used the Entity Framework, but have you tried running the update first and checking the rowcount -- inserting if no rows were updated? This may be faster than catching exceptions. It's generally a bad practise to use exceptions for control flow, and often slows things down dramatically.
If you can write the SQL directly, then the fastest way would be to get all the data into a temporary table and then update what exists and insert the rest (as in Andrea Bertani's example above). You should get slightly better results by using a left join on the original table in your insert's select, excluding any rows where the original table's key is not null:
INSERT INTO original
SELECT * FROM temp
LEFT JOIN original ON original.id = temp.id
WHERE original.id IS NULL
I would recommend using SqlCeResultSet directly. You lose the nice type-safeness of EF, but performance is incredibly fast. We switched from ADO.NET 2.0-style TypeDataSets to SqlCeResultSet and SqlCeDataReader and saw 20 to 50 times increases in speed.
See SqlCeResultSet. For a .NETCF project I removed almost all SQL code in favor of this class.
Just search for "SqlCeResultSet" here and on MSDN.
A quick overview:
Open the resultSet.
If you need seek (for existence check) you will have to provide an index for the result set.
Seek on the result set and read to check whether you found the row. This is extremely fast even on tables with tens of thousands of rows (because the seek uses the index).
Insert or update the record (see SqlCeResultSet.NewRecord).
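A rough sketch of those steps (the table name, index name, and column ordinals here are assumptions for illustration, not from the original answer):

```csharp
// Seek on an index; update the row if found, otherwise insert a new record.
static void Upsert(SqlCeConnection con, int id, string value)
{
    using (var cmd = new SqlCeCommand("Items", con))
    {
        cmd.CommandType = CommandType.TableDirect;
        cmd.IndexName = "IX_Items_Id"; // Seek requires an index on the result set

        using (SqlCeResultSet rs = cmd.ExecuteResultSet(
            ResultSetOptions.Updatable | ResultSetOptions.Scrollable))
        {
            if (rs.Seek(DbSeekOptions.FirstEqual, id) && rs.Read())
            {
                rs.SetString(1, value); // ordinal 1 = Value column (assumed)
                rs.Update();
            }
            else
            {
                SqlCeUpdatableRecord rec = rs.CreateRecord();
                rec.SetInt32(0, id);
                rec.SetString(1, value);
                rs.Insert(rec);
            }
        }
    }
}
```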
We have successfully developed a project with a SQL CE database whose main product table has over 65,000 rows (read/write, with 4 indexes).
SQL Server Compact Edition is pretty early in its development at this point. Also, depending on your device, memory/disk access can be pretty slow, and the overhead of SQL CE plus .NET type safety is fairly heavy. It works best with a fairly static data store.
I suggest you either use a lighter-weight API or consider SQLite.
Related
I have a program in C# that inserts 4,500,000 records into SQL Server using ExecuteNonQuery,
and it takes too long to insert.
I need a fast way to insert them that takes at most about 10 minutes; when I insert the same 4,500,000 records from another table into my table via Management Studio, it takes 3 minutes.
The SqlBulkCopy class is designed for fast insertion of large sets of data.
However, you need to understand that with that kind of size, disk access speed and network latency/bandwidth/saturation come into play.
Your example of populating one table from another is not valid in such a scenario as you are copying on the same machine.
As Oded has already stated, the starting point for this is SqlBulkCopy. However, if you have control over the database, you should also check that the Recovery Model on the database is set to Simple or "Bulk Logged". Without this you will take a heavy hit from SQL Server creating journal entries. You also need to ensure that the TableLock option of SqlBulkCopyOptions is set.
These two are straightforward. It can also be worth playing with the SqlBulkCopy BatchSize setting and with the transaction model (see UseInternalTransaction), but these settings are harder to give advice on, as the optimal values can be quite different for different scenarios. If TableLock and checking the Recovery Model don't quite get you the speed you want, you can start playing with the batch size, but it is more of a grey area.
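Putting those options together, a hedged sketch (the destination table name and the source DataTable are assumptions):

```csharp
// Bulk load with a table lock and an explicit batch size.
using (var bulk = new SqlBulkCopy(connectionString,
    SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction))
{
    bulk.DestinationTableName = "dbo.TargetTable";
    bulk.BatchSize = 5000;    // worth tuning per scenario, as noted above
    bulk.BulkCopyTimeout = 0; // disable the timeout for very large loads
    bulk.WriteToServer(sourceDataTable);
}
```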
I am trying to figure out the best way to design my C# application which utilizes data from a SQL Server backend.
My application periodically has to update 55K rows each one at a time from a loop in my application. Before it does an update it needs to check if the record to be updated exists.
If it exists it updates one field.
If not it performs an insert of 4 fields.
The table to be updated has 600K rows.
What is the most efficient way to handle these updates/inserts from my application?
Should I create a dictionary in C# and load the 600K records, querying the dictionary first instead of the database?
Is this a faster approach?
Should I use a stored procedure?
What’s the best way to achieve maximum performance based on this scenario?
You could use SqlBulkCopy to upload to a temp table then have a SQL Server job do the merge.
You should try to avoid "update 55K rows each one at a time from a loop". That will be very slow.
Instead, try to find a way to batch the updates (n at a time). Look into SQL Server table-valued parameters as a way to send a set of data to a stored procedure.
Here's an article on updating multiple rows with TVPs: http://www.sqlmag.com/article/sql-server-2008/using-table-valued-parameters-to-update-multiple-rows
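As a hedged sketch of the TVP approach (the type, table, and column names here are invented for illustration): define a table type, then write a set-based procedure that updates the matches and inserts the rest:

```sql
CREATE TYPE dbo.RowBatch AS TABLE
(
    Id    int PRIMARY KEY,
    Value varchar(100)
);
GO

CREATE PROCEDURE dbo.UpsertRows
    @rows dbo.RowBatch READONLY
AS
BEGIN
    -- update the rows that already exist
    UPDATE t
    SET t.Value = r.Value
    FROM dbo.Target t
    JOIN @rows r ON r.Id = t.Id;

    -- insert the rest
    INSERT INTO dbo.Target (Id, Value)
    SELECT r.Id, r.Value
    FROM @rows r
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Id = r.Id);
END
```

From C#, the batch would be passed as a SqlParameter with SqlDbType.Structured and TypeName set to "dbo.RowBatch".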
What if you did something like this, instead?
By some means, get those 55,000 rows of data into the database, if they're not already there. (If you're currently getting those rows from some query, arrange instead for the query results to be stored in a temporary table on that database; this might be a proper application for a stored procedure.)
Now, you could express the operations that you need to perform, perhaps, as two separate SQL queries: one to do the updates, and one or more others to do the inserts. The first query might use a clause such as "WHERE FOO IN (SELECT BAR FROM #TEMP_TABLE ...)" to identify the rows to be updated. The others might use "WHERE FOO NOT IN (...)"
This is, to be precise, exactly the sort of thing that I would expect to need a stored procedure for, because, if you think about it, the SQL server itself is precisely the right party to be doing the work: it's the only one around that already has the data you intend to manipulate on hand. It alone doesn't have to "transmit" those 55,000 rows anywhere. Perfect.
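The two queries described above might look something like this (table and column names, including #TEMP_TABLE, are placeholders):

```sql
-- update the rows whose keys are already present
UPDATE p
SET p.Value = t.Value
FROM Production p
JOIN #TEMP_TABLE t ON t.Foo = p.Foo;

-- insert the staged rows that have no match yet
INSERT INTO Production (Foo, Value)
SELECT t.Foo, t.Value
FROM #TEMP_TABLE t
WHERE t.Foo NOT IN (SELECT p.Foo FROM Production p);
```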
I need to find the best way to insert or update data in a database using SQL Server and ASP.NET. It is a standard scenario: if the data exists it is updated, if not it is inserted. I know there are many topics here about that, but none has answered what I need to know.
So my problem is that there is really no problem when you update/insert 5K-10K rows, but what about 50K and more?
My first idea was to use the SQL Server 2008 MERGE command, but I have some performance concerns if it will be 50K+ rows. Also, I don't know whether I can merge data this way based not on the primary key (an int id) but on another unique key in the table (to be precise, a product serial number that will not change over time).
My second idea was to first get all the product serials, compare the new data's serials against them, divide the data into rows to insert and rows to update, and then do one bulk insert and one bulk update.
I just don't know which will be better. With MERGE I don't know what the performance will be, and it is supported only by SQL Server 2008+, but it looks quite simple; the second option doesn't need SQL 2008 and the batches should be fast, but first selecting all serials and dividing based on them could carry a performance penalty.
What is your opinion - which should I choose?
MERGE performs way better, because "one of the most important advantages of the MERGE statement is that all the data is read and processed only once".
You don't need a primary key; you can join on one or more fields that make your records unique.
There should be no problem performing the merge on the serial number as you've described it. You may want to read Optimizing MERGE Statement Performance for Microsoft's recommended best practices when using MERGE.
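A hedged sketch of such a MERGE keyed on the serial number (table and column names are assumptions):

```sql
MERGE dbo.Product AS target
USING dbo.ProductStaging AS source
    ON target.SerialNumber = source.SerialNumber
WHEN MATCHED THEN
    -- key exists: refresh the changing columns
    UPDATE SET target.Name  = source.Name,
               target.Price = source.Price
WHEN NOT MATCHED BY TARGET THEN
    -- new serial number: insert the row
    INSERT (SerialNumber, Name, Price)
    VALUES (source.SerialNumber, source.Name, source.Price);
```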
I need to update about 250k rows on a table and each field to update will have a different value depending on the row itself (not calculated based on the row id or the key but externally).
I tried with a parametrized query but it turns out to be slow (I still can try with a table-value parameter, SqlDbType.Structured, in SQL Server 2008, but I'd like to have a general way to do it on several databases including MySql, Oracle and Firebird).
Making a huge concat of individual updates is also slow (BUT about 2 times faster than making thousands of individual calls (roundtrips!) using parametrized queries)
What about creating a temp table and running an update joining my table and the tmp one? Will it work faster?
How slow is "slow"?
The main problem with this is that it creates an enormous entry in the database's log file (if there's a power failure halfway through the update, the database needs a log of each action so that it can roll back). That is most likely where the "slowness" comes from, more than anything else (obviously, with such a large number of rows there are other ways to make the thing inefficient - doing one DB round trip per update would be unbearably slow, for example - I'm just saying that once you eliminate the obvious things, you'll still find it's pretty slow).
There's a few ways you can do it more efficiently. One would be to do the update in chunks, 1,000 rows at a time, say. That way, the database writes lots of small log entries, rather than one really huge one.
Another way would be to turn off - or turn "down" - the database's logging for the duration of the update. In SQL Server, for example, you can set the Recovery Model to "Simple" or "Bulk-Logged", which speeds things up considerably (with the caveat that you are more at risk if there's a power failure or something during the update).
Edit Just to expand a little more, probably the most efficient way to actually execute the queries in the first place would be to do a BULK INSERT of all the new rows into a temporary table, and then do a single UPDATE of the existing table from that (or to do the UPDATE in chunks of 1,000 as I said above). Most of my answer was addressing the problem once you've implemented it like that: you'll still find it's pretty slow...
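The chunked variant might be sketched like this (the staging/target names and the 1,000-row batch size are assumptions): repeatedly pull a batch out of the staging table and apply it, so each iteration writes only a small log entry:

```sql
DECLARE @batch TABLE (Id int PRIMARY KEY, Value varchar(100));

WHILE 1 = 1
BEGIN
    DELETE @batch;

    -- take the next 1,000 staged rows
    DELETE TOP (1000) FROM #Staging
    OUTPUT deleted.Id, deleted.Value INTO @batch;

    IF @@ROWCOUNT = 0 BREAK; -- staging table drained

    -- apply this batch to the real table
    UPDATE t
    SET t.Value = b.Value
    FROM dbo.Target t
    JOIN @batch b ON b.Id = t.Id;
END
```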
Call a stored procedure if possible.
If the columns updated are part of indexes you could
drop these indexes
do the update
re-create the indexes.
If you need these indexes to retrieve the data, well, it doesn't help.
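For completeness, a sketch of that drop/update/recreate sequence (the index, table, and staging names are invented):

```sql
-- 1. drop the nonclustered index covering the updated column
DROP INDEX IX_Target_Value ON dbo.Target;

-- 2. do the update
UPDATE t
SET t.Value = s.Value
FROM dbo.Target t
JOIN #Staging s ON s.Id = t.Id;

-- 3. re-create the index
CREATE NONCLUSTERED INDEX IX_Target_Value ON dbo.Target (Value);
```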
You should use SqlBulkCopy with the KeepIdentity option (SqlBulkCopyOptions.KeepIdentity) set.
As part of a SqlTransaction, run a query to SELECT all the records that need updating and then DELETE them, returning those selected (and now removed) records. Read them into C# in a single batch. Update the records in memory on the C# side, now that you've narrowed the selection, and then SqlBulkCopy those updated records back, keys and all. Don't forget to commit the transaction. It's more work, but it's very fast.
Here's what I would do:
Retrieve the entire table, that is, the columns you need in order to calculate/retrieve/find/produce the changes externally
Calculate/produce those changes
Run a bulk insert to a temporary table, uploading the information you need server-side in order to do the changes. This would require the key information + new values for all the rows you intend to change.
Run SQL on the server to copy new values from the temporary table into the production table.
Pros:
Running the final step server-side is faster than running tons and tons of individual SQL, so you're going to lock the table in question for a shorter time
Bulk insert like this is fast
Cons:
Requires extra space in your database for the temporary table
Produces more log data, logging both the bulk insert and the changes to the production table
Here are things that can make your updates slow:
executing updates one by one through parametrized query
solution: do update in one statement
large transaction creates big log entry
see codeka's answer
updating indexes (RDBMS will update index after each row. If you change indexed column, it could be very costly on large table)
if you can, drop indices before update and recreate them after
updating field that has foreign key constraint - for each inserted record RDBMS will go and look for appropriate key
if you can, disable foreign key constraints before update and enable them after update
triggers and row level checks
if you can, disable triggers before update and enable them after
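The constraint and trigger suggestions above can be sketched like this (the table name is an assumption; re-enabling with WITH CHECK makes SQL Server validate the existing rows when the constraints come back):

```sql
-- disable checks and triggers for the duration of the big update
ALTER TABLE dbo.Target NOCHECK CONSTRAINT ALL;
ALTER TABLE dbo.Target DISABLE TRIGGER ALL;

-- ... run the large update here ...

-- re-enable, validating the data against the constraints again
ALTER TABLE dbo.Target ENABLE TRIGGER ALL;
ALTER TABLE dbo.Target WITH CHECK CHECK CONSTRAINT ALL;
```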
I have one stored procedure which inserts data into 3 tables (doing UPSERTs) and has some rudimentary logic (IF-THEN-ELSE).
I need to execute this Sproc millions of times (From a C# app) using different parameters and I need it to be FAST.
What is the best way to do so?
Does anybody know an off-the-shelf document indexer, open-source or not, besides Lucene or SQL Server FTS?
I am trying to build a document word index. For each word in the document I insert the word, docID, and word position into the DB.
This happens 100,000 times for 100 documents, for example.
The sproc: there are 3 tables to insert into; for each one I do an UPSERT.
The C# app :
using (SqlConnection con = new SqlConnection(_connectionString))
{
    con.Open();
    SqlTransaction trans = con.BeginTransaction();
    SqlCommand command = new SqlCommand("add_word", con, trans);
    command.CommandType = System.Data.CommandType.StoredProcedure;

    string[] TextArray;
    for (int i = 0; i < Document.NumberOfFields; i++)
    {
        ...
        Addword(..., command); // updates parameters with new values and calls ExecuteNonQuery
    }

    trans.Commit(); // commit the batched work
}
I forgot to mention: this code produces deadlocks in SQL Server. I have no idea why this happens.
Drop all the indexes on the table(s) you are loading, then add them back in once the load is complete. This will prevent a lot of thrashing / reindexing for each change.
Make sure the database has allocated enough physical file space prior to the load, so it doesn't have to spend time constantly grabbing it from the file system as you load. Usually databases are set to grow by something like 10% when full, at which point SQL Server blocks queries until more space is allocated. When loading the amount of data you are talking about, SQL Server would have to do a lot of blocking.
Look into bulk load / bulk copy if possible.
Do all of your IF-THEN-ELSE logic in code. Just send the actual values you want stored to the s'proc when they're ready. You might even run two threads: one to evaluate the data and queue it up, the other to write the queue to the DB server.
Look into Off The Shelf programs that do exactly what you are talking about with indexing the documents. Most likely they've solved these problems.
Get rid of the Transaction requirements if possible. Try to keep the s'proc calls as simple as possible.
See if you can limit the words you are storing. For example, if you don't care about the words "it", "as", "I", etc then filter them out BEFORE calling the s'proc.
If you want to quickly bulk INSERT data from C#, check out the SqlBulkCopy class (.NET 2.0 onwards).
This might seem like a rudimentary approach, but it should work and it should be fast. You can just generate a huge text file with a list of SQL statements and then run it from the command line. If I'm not mistaken, it should be possible to batch commands using the GO statement. Alternatively, you can do it directly from your application, concatenating several SQL commands as strings and executing them in batches. It seems that what you are trying to do is a one-time task and that the data does not come directly from user input, so you should be able to handle escaping yourself.
I'm sure there are more sophisticated ways to do this (SqlBulkCopy looks like a good start), so please consider this just a suggestion. I would spend some time investigating whether there are more elegant ways first.
Also, I would make sure that the logic in the stored procedure is as simple as possible and that the table does not have any indexes. They should be added later.
This is probably too generic a requirement - in order for the procedure itself to be fast, we would need to see it and have some knowledge of your db schema.
On the other hand, if you want to know the best way to execute the same (optimized or not) procedure as fast as possible, the usual answer is to do some sort of caching on the client and call the procedure as few times as possible, batching your operations.
If this is in a loop, what people usually do - instead of calling the procedure on each iteration - is build/populate some caching data structure and call down to the stored procedure when the loop exits (or after a given number of iterations, if you need this to happen more often), batching the operations that you cached (e.g. you can pass an XML string down to your sp, which will then parse it, put the stuff in temp tables and go from there - you can save a whole lot of overhead this way).
Another common solution for this is to use SQL Server bulk operations.
To come back to the stored procedure: keep in mind that optimizing your T-SQL and db schema (with indexes etc.) can have a glorious impact on your performance.
Try using XML to do that.
You will only need to execute it once.
Example:
DECLARE @XMLDoc XML
SET @XMLDoc = '<words><word>test</word><word>test2</word></words>'

CREATE PROCEDURE add_words
(
    @XMLDoc XML
)
AS
DECLARE @handle INT
EXEC sp_xml_preparedocument @handle OUTPUT, @XMLDoc

INSERT INTO TestTable
SELECT * FROM OPENXML (@handle, '/words/word', 2) WITH
(
    word varchar(100) 'text()'
)

EXEC sp_xml_removedocument @handle
If you're trying to optimize for speed, consider simply upgrading your SQL Server hardware. Putting some RAM and a blazing fast RAID in your server may be the most cost effective long-term solution to speed up your query speed. Hardware is relatively cheap compared to developer time.
Heed the words of Jeff Atwood:
Coding Horror: Hardware is Cheap, Programmers are Expensive
The communication with the database will likely be a bottleneck in this case, especially if the DB is on another machine. I suggest sending the entire document to the database and writing a sproc that splits it into words, or using SQL Server-hosted managed code.
Assuming this is an app where there would not be contention between multiple users, try this approach instead:
Insert your parameters into a table set up for that purpose
Change your SP to loop through that table and perform its work on each row
Call the SP once
Have the SP truncate the table of inputs when it is complete
This will eliminate the overhead of calling the SP millions of times, and the inserts of the parameters into the table can be concatenated ("INSERT INTO foo(v) VALUES('bar'); INSERT INTO foo(v) VALUES('bar2'); INSERT INTO foo(v) VALUES('bar3');").
Disadvantage: the SP is going to take a long time to execute, and there won't be any feedback of progress, which isn't terribly user-friendly.
To move a lot of data over to the server, use either SqlBulkCopy or a table-valued parameter if you are on 2008. If you need speed, do not execute a stored procedure once per row; develop a set-based one that processes all (or a large batch of) rows.
--Edited since the question was edited.
The biggest issue is to make sure the stored proc is correctly tuned. Your C# code is about as fast as you are going to get it.