I am inserting 8,500 rows into a SQLite database. It takes more than 30 seconds on a Core 2 Duo.
It uses about 70% of the CPU during that time, so the problem is CPU usage.
I am using a transaction.
I create the database, tables and inserts on the fly in a temp file, so I don't need to worry about corruption, etc.
I just tried the following, but it doesn't help:
PRAGMA journal_mode = OFF;
PRAGMA synchronous = OFF;
What else can I do?
If I run the same script in the Firefox SQLite Manager plugin, it runs instantly.
I ran the profiler.
All of the time is spent in:
27 sec System.Data.SQLite.SQLite3.Prepare(SQLiteConnection, String, SQLiteStatement, UInt32, String&)
This method calls these three methods:
12 sec System.Data.SQLite.SQLiteConvert.UTF8ToString(IntPtr, Int32)
9 sec System.Data.SQLite.SQLiteConvert.ToUTF8(String)
4 sec System.Data.SQLite.UnsafeNativeMethods.sqlite3_prepare_interop(IntPtr, IntPtr, Int32, IntPtr&, IntPtr&, Int32&)
You asked to see the insert. Here it is:
INSERT INTO [Criterio] ([cd1],[cd2],[cd3],[dc4],[dc5],[dt6],[dc7],[dt8],[dt9],[dt10],[dt11])VALUES('FFFFFFFF-FFFF-FFFF-FFFF-B897A4DE6949',10,20,'',NULL,NULL,'',NULL,julianday('2011-11-25 17:00:00'),NULL,NULL);
Table:
CREATE TABLE Criterio (
cd1 integer NOT NULL,
cd2 text NOT NULL,
dc3 text NOT NULL,
cd4 integer NOT NULL,
dt5 DATE NOT NULL DEFAULT CURRENT_TIMESTAMP,
dt6 DATE NULL,
dt7 DATE NULL,
dc8 TEXT NULL,
dt9 datetime NULL,
dc10 TEXT NULL,
dt11 datetime NULL,
PRIMARY KEY (cd2 ASC, cd1 ASC)
);
C# Code:
scriptSql = System.IO.File.ReadAllText(@"C:\Users\Me\Desktop\ScriptToTest.txt");
using (DbCommand comando = Banco.GetSqlStringCommand(scriptSql))
{
try
{
using (TransactionScope transacao = new TransactionScope())
{
Banco.ExecuteNonQuery(comando);
transacao.Complete();
}
}
catch (Exception ex)
{
Logging.ErroBanco(comando, ex);
throw;
}
}
I don't know why pst deleted his answer so I'll re-post the same information from it as this appears to be the correct answer.
According to the SQLite FAQ - INSERT is really slow - I can only do few dozen INSERTs per second
Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second
...
By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction.
So basically you need to group your INSERTs into fewer transactions.
Update: So the problem is probably mostly due to the sheer size of the SQL script - SQLite needs to parse the entire script before it can execute it, and the parser is designed for small scripts, not massive ones. This is why you are seeing so much time spent in the SQLite3.Prepare method.
Instead, you should use a parameterised query and insert the records in a loop in your C# code. For example, if your data were in CSV format in your text file, you could use something like this:
using (TransactionScope txn = new TransactionScope())
{
    using (DbCommand cmd = Banco.GetSqlStringCommand(sql))
    {
        string line = null;
        while ((line = reader.ReadLine()) != null)
        {
            // Set the parameters for the command at this point based on the current line
            Banco.ExecuteNonQuery(cmd);
        }
    }
    // Complete the transaction once, after all the rows have been inserted
    txn.Complete();
}
Have you tried a parameterized insert? In my experience, transactions will improve speed quite a bit, but parameterized queries have a much bigger impact.
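For illustration, here is a minimal sketch of the combined approach (one transaction, one parameterised command reused for every row) using the System.Data.SQLite provider directly; the database path, the 'rows' source and the reduced column list are assumptions, not the original code:

// requires a reference to System.Data.SQLite
using (var conn = new SQLiteConnection(@"Data Source=C:\Users\Me\Desktop\Test.db"))
{
    conn.Open();
    using (var txn = conn.BeginTransaction())
    using (var cmd = new SQLiteCommand(
        "INSERT INTO Criterio (cd1, cd2) VALUES (@cd1, @cd2);", conn, txn))
    {
        // The statement is prepared once and only the parameter values change,
        // so SQLite3.Prepare is no longer called 8,500 times.
        cmd.Parameters.Add("@cd1", System.Data.DbType.Int64);
        cmd.Parameters.Add("@cd2", System.Data.DbType.String);

        foreach (var row in rows) // 'rows' is a hypothetical in-memory source
        {
            cmd.Parameters["@cd1"].Value = row.Cd1;
            cmd.Parameters["@cd2"].Value = row.Cd2;
            cmd.ExecuteNonQuery();
        }

        txn.Commit(); // one commit for all 8,500 inserts
    }
}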
Related
I have a strange performance issue with executing a simple merge SQL command on Entity Framework 6.
First my Entity Framework code:
var command = @"MERGE [StringData] AS TARGET
    USING (VALUES (@DCStringID_Value, @TimeStamp_Value)) AS SOURCE ([DCStringID], [TimeStamp])
    ON TARGET.[DCStringID] = SOURCE.[DCStringID] AND TARGET.[TimeStamp] = SOURCE.[TimeStamp]
    WHEN MATCHED THEN
        UPDATE
        SET [DCVoltage] = @DCVoltage_Value,
            [DCCurrent] = @DCCurrent_Value
    WHEN NOT MATCHED THEN
        INSERT ([DCStringID], [TimeStamp], [DCVoltage], [DCCurrent])
        VALUES (@DCStringID_Value, @TimeStamp_Value, @DCVoltage_Value, @DCCurrent_Value);";
using (EntityModel context = new EntityModel())
{
for (int i = 0; i < 100; i++)
{
var entity = _buffer.Dequeue();
context.ContextAdapter.ObjectContext.ExecuteStoreCommand(command, new object[]
{
new SqlParameter("@DCStringID_Value", entity.DCStringID),
new SqlParameter("@TimeStamp_Value", entity.TimeStamp),
new SqlParameter("@DCVoltage_Value", entity.DCVoltage),
new SqlParameter("@DCCurrent_Value", entity.DCCurrent),
});
}
}
Execution time ~20 seconds.
This looked a little slow, so I tried running the same command directly in Management Studio (also 100 times in a row).
SQL Server Management Studio:
Execution time <1 second.
Ok that is strange!?
Some tests:
First, I compared both execution plans (Entity Framework and SSMS), but they are absolutely identical.
Second, I tried using a transaction inside my code.
using (PowerdooModel context = PowerdooModel.CreateModel())
{
using (var dbContextTransaction = context.Database.BeginTransaction())
{
try
{
for (int i = 0; i < 100; i++)
{
context.ContextAdapter.ObjectContext.ExecuteStoreCommand(command, new object[]
{
new SqlParameter("@DCStringID_Value", entity.DCStringID),
new SqlParameter("@TimeStamp_Value", entity.TimeStamp),
new SqlParameter("@DCVoltage_Value", entity.DCVoltage),
new SqlParameter("@DCCurrent_Value", entity.DCCurrent),
});
}
dbContextTransaction.Commit();
}
catch (Exception)
{
dbContextTransaction.Rollback();
}
}
}
Third, I added 'OPTION (RECOMPILE)' to avoid parameter sniffing.
Execution time is still ~10 seconds, which is still very poor performance.
Question: what am I doing wrong? Please give me a hint.
-------- Some more tests - edited 18.11.2016 ---------
If I execute the commands inside a transaction (like above), the following times come up:
Duration complete: 00:00:06.5936006
Average Command: 00:00:00.0653457
Commit: 00:00:00.0590299
Isn't it strange that the commit takes almost no time, while the average command still takes nearly as long as before?
In SSMS you are running one long batch that contains 100 separate MERGE statements. In C# you are running 100 separate batches, which obviously takes longer.
Running 100 separate batches in 100 separate transactions is obviously slower than 100 batches in 1 transaction. Your measurements confirm that and show you how much longer it takes.
To make it efficient, use a single MERGE statement that processes all 100 rows from a table-valued parameter in one go. See also Table-Valued Parameters for .NET Framework.
Often a table-valued parameter is a parameter of a stored procedure, but you don't have to use a stored procedure. It could be a single statement, but instead of multiple simple scalar parameters you'd pass the whole table at once.
I have never used Entity Framework, so I can't show you a C# example of how to call it. I'm sure that if you search for "how to pass a table-valued parameter in Entity Framework" you'll find an example. I use the DataTable class to pass a table as a parameter.
I can show you an example of T-SQL stored procedure.
First, you define a table type that pretty much follows the definition of your StringData table:
CREATE TYPE dbo.StringDataTableType AS TABLE(
DCStringID int NOT NULL,
TimeStamp datetime2(0) NOT NULL,
DCVoltage float NOT NULL,
DCCurrent float NOT NULL
)
Then you use it as a type for the parameter:
CREATE PROCEDURE dbo.MergeStringData
@ParamRows dbo.StringDataTableType READONLY
AS
BEGIN
SET NOCOUNT ON;
SET XACT_ABORT ON;
BEGIN TRANSACTION;
BEGIN TRY
MERGE INTO dbo.StringData WITH (HOLDLOCK) as Dst
USING
(
SELECT
TT.DCStringID
,TT.TimeStamp
,TT.DCVoltage
,TT.DCCurrent
FROM
@ParamRows AS TT
) AS Src
ON
Dst.DCStringID = Src.DCStringID AND
Dst.TimeStamp = Src.TimeStamp
WHEN MATCHED THEN
UPDATE SET
Dst.DCVoltage = Src.DCVoltage
,Dst.DCCurrent = Src.DCCurrent
WHEN NOT MATCHED BY TARGET THEN
INSERT
(DCStringID
,TimeStamp
,DCVoltage
,DCCurrent)
VALUES
(Src.DCStringID
,Src.TimeStamp
,Src.DCVoltage
,Src.DCCurrent)
;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
-- TODO: handle the error
ROLLBACK TRANSACTION;
END CATCH;
END
Again, it doesn't have to be a stored procedure, it can be just a MERGE statement with one table-valued parameter.
I'm pretty sure that it would be much faster than your loop with 100 separate queries.
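For completeness, here is a hedged ADO.NET sketch of how the procedure above could be called from C# with a DataTable as the table-valued parameter (the connection string is an assumption, and this uses plain ADO.NET rather than Entity Framework):

// requires System.Data, System.Data.SqlClient
DataTable rows = new DataTable();
rows.Columns.Add("DCStringID", typeof(int));
rows.Columns.Add("TimeStamp", typeof(DateTime));
rows.Columns.Add("DCVoltage", typeof(double));
rows.Columns.Add("DCCurrent", typeof(double));

for (int i = 0; i < 100; i++)
{
    var entity = _buffer.Dequeue(); // same buffer as in the question
    rows.Rows.Add(entity.DCStringID, entity.TimeStamp, entity.DCVoltage, entity.DCCurrent);
}

using (SqlConnection connection = new SqlConnection(connectionString)) // assumed connection string
using (SqlCommand command = new SqlCommand("dbo.MergeStringData", connection))
{
    command.CommandType = CommandType.StoredProcedure;

    SqlParameter tvp = command.Parameters.AddWithValue("@ParamRows", rows);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.StringDataTableType";

    connection.Open();
    command.ExecuteNonQuery(); // one round trip for all 100 rows
}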
Details on why there should be HOLDLOCK hint with MERGE: “UPSERT” Race Condition With MERGE
A side note:
It is not strange that Commit in your last test is very fast. It doesn't do much, because everything is already written to the database. If you tried to do Rollback, that would take some time.
It looks like the difference is that you are using OPTION (RECOMPILE) in your EF code only, but when you run the code in SSMS there is no recompile.
Recompiling 100 times would certainly add some time to execution.
Related to the answer from @Vladimir Baranov, I took a look at how Entity Framework supports bulk operations (in my case, a bulk merge).
Short answer... no! Bulk delete, update and merge operations are not supported.
Then I found this sweet little library!
zzzproject Entity Framework Extensions
It provides a finished extension that works the way @Vladimir Baranov described:
Create a temporary table in SQL Server.
Bulk Insert data with .NET SqlBulkCopy into the temporary table.
Perform a SQL statement between the temporary table and the destination table.
Drop the temporary table from the SQL Server.
You can find a short interview by Jonathan Allen about the functions and how it can dramatically improve Entity Framework performance with bulk operations.
For me it really kicks up the performance from 10 seconds to under 1 second. Nice!
To be clear, I don't work for them and I'm not affiliated with them; this is not an advertisement. Yes, the library is commercial, but the basic version is free.
I am receiving (streamed) data from an external source (over Lightstreamer) into my C# application.
My C# application receives data from the listener. The data from the listener are stored in a queue (ConcurrentQueue).
The queue gets drained every 0.5 seconds with TryDequeue into a DataTable. The DataTable is then copied into a SQL database using SqlBulkCopy.
The SQL database then processes the newly arrived data from the staging table into the final table.
I currently receive around 300,000 rows per day (this could increase strongly within the next few weeks), and my goal is to stay under 1 second from the time I receive the data until it is available in the final SQL table.
Currently the maximum I have to process is around 50 rows per second.
Unfortunately, as I receive more and more data, my logic is getting slower (still far under 1 second, but I want to keep improving). The main bottleneck (so far) is the processing of the staging data (on the SQL database) into the final table.
In order to improve the performance, I would like to switch the staging table into a memory-optimized table. The final table is already a memory-optimized table so they will work together fine for sure.
My questions:
Is there a way to use SqlBulkCopy (from C#) with memory-optimized tables? (As far as I know, there is no way yet.)
Any suggestions for the fastest way to write the received data from my C# application into the memory-optimized staging table?
EDIT (with solution):
After the comments/answers and performance evaluations, I decided to give up the bulk insert and use SqlCommand to hand over an IEnumerable with my data as a table-valued parameter to a natively compiled stored procedure that stores the data directly in my memory-optimized final table (as well as a copy in the "staging" table, which now serves as an archive). Performance increased significantly (even though I have not yet parallelized the inserts; that will come at a later stage).
Here is part of the code:
Memory-optimized user-defined table type (to hand over the data from C# to the SQL stored procedure):
CREATE TYPE [Staging].[CityIndexIntradayLivePrices] AS TABLE(
[CityIndexInstrumentID] [int] NOT NULL,
[CityIndexTimeStamp] [bigint] NOT NULL,
[BidPrice] [numeric](18, 8) NOT NULL,
[AskPrice] [numeric](18, 8) NOT NULL,
INDEX [IndexCityIndexIntradayLivePrices] NONCLUSTERED
(
[CityIndexInstrumentID] ASC,
[CityIndexTimeStamp] ASC,
[BidPrice] ASC,
[AskPrice] ASC
)
)
WITH ( MEMORY_OPTIMIZED = ON )
Natively compiled stored procedure to insert the data into the final table and into staging (which serves as an archive in this case):
create procedure [Staging].[spProcessCityIndexIntradayLivePricesStaging]
(
@ProcessingID int,
@CityIndexIntradayLivePrices Staging.CityIndexIntradayLivePrices readonly
)
with native_compilation, schemabinding, execute as owner
as
begin atomic
with (transaction isolation level=snapshot, language=N'us_english')
-- store prices
insert into TimeSeries.CityIndexIntradayLivePrices
(
ObjectID,
PerDateTime,
BidPrice,
AskPrice,
ProcessingID
)
select Objects.ObjectID,
CityIndexTimeStamp,
CityIndexIntradayLivePricesStaging.BidPrice,
CityIndexIntradayLivePricesStaging.AskPrice,
@ProcessingID
from @CityIndexIntradayLivePrices CityIndexIntradayLivePricesStaging,
Objects.Objects
where Objects.CityIndexInstrumentID = CityIndexIntradayLivePricesStaging.CityIndexInstrumentID
-- store data in staging table
insert into Staging.CityIndexIntradayLivePricesStaging
(
ImportProcessingID,
CityIndexInstrumentID,
CityIndexTimeStamp,
BidPrice,
AskPrice
)
select @ProcessingID,
CityIndexInstrumentID,
CityIndexTimeStamp,
BidPrice,
AskPrice
from @CityIndexIntradayLivePrices
end
IEnumerable<SqlDataRecord> filled with the data from the queue:
private static IEnumerable<SqlDataRecord> CreateSqlDataRecords()
{
// set columns (the sequence is important as the sequence will be accordingly to the sequence of columns in the table-value parameter)
SqlMetaData MetaDataCol1;
SqlMetaData MetaDataCol2;
SqlMetaData MetaDataCol3;
SqlMetaData MetaDataCol4;
MetaDataCol1 = new SqlMetaData("CityIndexInstrumentID", SqlDbType.Int);
MetaDataCol2 = new SqlMetaData("CityIndexTimeStamp", SqlDbType.BigInt);
MetaDataCol3 = new SqlMetaData("BidPrice", SqlDbType.Decimal, 18, 8); // precision 18, 8 scale
MetaDataCol4 = new SqlMetaData("AskPrice", SqlDbType.Decimal, 18, 8); // precision 18, 8 scale
// define sql data record with the columns
SqlDataRecord DataRecord = new SqlDataRecord(new SqlMetaData[] { MetaDataCol1, MetaDataCol2, MetaDataCol3, MetaDataCol4 });
// remove each price row from queue and add it to the sql data record
LightstreamerAPI.PriceDTO PriceDTO = new LightstreamerAPI.PriceDTO();
while (IntradayQuotesQueue.TryDequeue(out PriceDTO))
{
DataRecord.SetInt32(0, PriceDTO.MarketID); // city index market id
DataRecord.SetInt64(1, Convert.ToInt64((PriceDTO.TickDate.Replace(@"\/Date(", "")).Replace(@")\/", ""))); // @ is used to avoid problems with \ being treated as an escape character
DataRecord.SetDecimal(2, PriceDTO.Bid); // bid price
DataRecord.SetDecimal(3, PriceDTO.Offer); // ask price
yield return DataRecord;
}
}
Handling the data every 0.5 seconds:
public static void ChildThreadIntradayQuotesHandler(Int32 CityIndexInterfaceProcessingID)
{
try
{
// open new sql connection
using (SqlConnection TimeSeriesDatabaseSQLConnection = new SqlConnection("Data Source=XXX;Initial Catalog=XXX;Integrated Security=SSPI;MultipleActiveResultSets=false"))
{
// open connection
TimeSeriesDatabaseSQLConnection.Open();
// endless loop to keep thread alive
while(true)
{
// ensure queue has rows to process (otherwise no need to continue)
if(IntradayQuotesQueue.Count > 0)
{
// define stored procedure for sql command
SqlCommand InsertCommand = new SqlCommand("Staging.spProcessCityIndexIntradayLivePricesStaging", TimeSeriesDatabaseSQLConnection);
// set command type to stored procedure
InsertCommand.CommandType = CommandType.StoredProcedure;
// define sql parameters (table-value parameter gets data from CreateSqlDataRecords())
SqlParameter ParameterCityIndexIntradayLivePrices = InsertCommand.Parameters.AddWithValue("@CityIndexIntradayLivePrices", CreateSqlDataRecords()); // table-valued parameter
SqlParameter ParameterProcessingID = InsertCommand.Parameters.AddWithValue("@ProcessingID", CityIndexInterfaceProcessingID); // processing id parameter
// set sql db type to structured for the table-valued parameter (structured = special data type for specifying structured data contained in table-valued parameters)
ParameterCityIndexIntradayLivePrices.SqlDbType = SqlDbType.Structured;
// execute stored procedure
InsertCommand.ExecuteNonQuery();
}
// wait 0.5 seconds
Thread.Sleep(500);
}
}
}
catch (Exception e)
{
// handle error (standard error messages and update processing)
ThreadErrorHandling(CityIndexInterfaceProcessingID, "ChildThreadIntradayQuotesHandler (handler stopped now)", e);
};
}
Use SQL Server 2016 (it's not RTM yet, but it's already much better than 2014 when it comes to memory-optimized tables). Then use either a memory-optimized table variable or just blast a whole lot of native stored procedure calls in a transaction, each doing one insert, depending on what's faster in your scenario (this varies). A few things to watch out for:
Doing multiple inserts in one transaction is vital to save on network roundtrips. While in-memory operations are very fast, SQL Server still needs to confirm every operation.
Depending on how you're producing data, you may find that parallelizing the inserts can speed things up (don't overdo it; you'll quickly hit the saturation point). Don't try to be very clever yourself here; leverage async/await and/or Parallel.ForEach (a rough sketch follows after these notes).
If you're passing a table-valued parameter, the easiest way of doing it is to pass a DataTable as the parameter value, but this is not the most efficient way of doing it -- that would be passing an IEnumerable<SqlDataRecord>. You can use an iterator method to generate the values, so only a constant amount of memory is allocated.
You'll have to experiment a bit to find the optimal way of passing through data; this depends a lot on the size of your data and how you're getting it.
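To illustrate the first two notes above, here is a rough sketch, not the author's code: the 'batches' source, the natively compiled procedure name dbo.InsertPriceRow, its parameters and the connection string are invented for illustration.

// requires System.Data, System.Data.SqlClient, System.Threading.Tasks
Parallel.ForEach(batches, new ParallelOptions { MaxDegreeOfParallelism = 4 }, batch =>
{
    using (var connection = new SqlConnection(connectionString)) // assumed connection string
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            foreach (var row in batch)
            {
                using (var command = new SqlCommand("dbo.InsertPriceRow", connection, transaction))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    command.Parameters.AddWithValue("@InstrumentID", row.InstrumentID);
                    command.Parameters.AddWithValue("@Bid", row.Bid);
                    command.Parameters.AddWithValue("@Ask", row.Ask);
                    command.ExecuteNonQuery();
                }
            }
            // One commit per batch: many inserts share one transaction, as noted above.
            transaction.Commit();
        }
    }
});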
Batch the data from the staging table into the final table in row counts of less than 5k (I typically use 4k), and do not insert them in a single transaction; instead, implement programmatic transactions if needed. Staying under 5k inserted rows keeps the row locks from escalating into a table lock, which has to wait until everyone else gets out of the table.
Are you sure it's your logic slowing down and not the actual transactions to the database? Entity Framework for example is "sensitive", for lack of a better term, when trying to insert a ton of rows and gets pretty slow.
There's a third party library, BulkInsert, on Codeplex which I've used and it's pretty nice to do bulk inserting of data: https://efbulkinsert.codeplex.com/
You could also write your own extension method on DbContext, if you are using EF, that does this based on record count: anything under 5,000 rows uses Save(), anything over that invokes your own bulk insert logic.
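A rough sketch of what such an extension method might look like (the name SaveSmart and the 5,000-row threshold are illustrative, and the bulk path is left as a stub rather than a real implementation):

// requires System, System.Collections.Generic, System.Data.Entity
public static class DbContextExtensions
{
    public static void SaveSmart<T>(this DbContext context, IList<T> entities) where T : class
    {
        if (entities.Count < 5000)
        {
            // Small batches: let Entity Framework track and save them normally.
            context.Set<T>().AddRange(entities);
            context.SaveChanges();
        }
        else
        {
            // Large batches: hand off to your own bulk insert logic
            // (e.g. SqlBulkCopy over a DataTable, or a library such as EF.BulkInsert).
            BulkInsert(context, entities);
        }
    }

    private static void BulkInsert<T>(DbContext context, IList<T> entities) where T : class
    {
        throw new NotImplementedException("Plug in your preferred bulk insert mechanism here.");
    }
}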
I have a file (which has 10 million records) like below:
line1
line2
line3
line4
.......
......
10 million lines
So basically I want to insert 10 million records into the database by reading the file and uploading it to SQL Server.
C# code
System.IO.StreamReader file =
new System.IO.StreamReader(@"c:\test.txt");
while((line = file.ReadLine()) != null)
{
// insertion code goes here
//DAL.ExecuteSql("insert into table1 values("+line+")");
}
file.Close();
but insertion will take a long time.
How can I insert 10 million records in the shortest time possible using C#?
Update 1:
Bulk INSERT:
BULK INSERT DBNAME.dbo.DATAs
FROM 'F:\dt10000000\dt10000000.txt'
WITH
(
ROWTERMINATOR =' \n'
);
My Table is like below:
DATAs
(
DatasField VARCHAR(MAX)
)
but I am getting following error:
Msg 4866, Level 16, State 1, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
Below code worked:
BULK INSERT DBNAME.dbo.DATAs
FROM 'F:\dt10000000\dt10000000.txt'
WITH
(
FIELDTERMINATOR = '\t',
ROWTERMINATOR = '\n'
);
Please do not create a DataTable to load via BulkCopy. That is an ok solution for smaller sets of data, but there is absolutely no reason to load all 10 million rows into memory before calling the database.
Your best bet (outside of BCP / BULK INSERT / OPENROWSET(BULK...)) is to stream the contents from the file into the database via a Table-Valued Parameter (TVP). By using a TVP you can open the file, read a row & send a row until done, and then close the file. This method has a memory footprint of just a single row. I wrote an article, Streaming Data Into SQL Server 2008 From an Application, which has an example of this very scenario.
A simplistic overview of the structure is as follows. I am assuming the same import table and field name as shown in the question above.
Required database objects:
-- First: You need a User-Defined Table Type
CREATE TYPE ImportStructure AS TABLE (Field VARCHAR(MAX));
GO
-- Second: Use the UDTT as an input param to an import proc.
-- Hence "Tabled-Valued Parameter" (TVP)
CREATE PROCEDURE dbo.ImportData (
@ImportTable dbo.ImportStructure READONLY
)
AS
SET NOCOUNT ON;
-- maybe clear out the table first?
TRUNCATE TABLE dbo.DATAs;
INSERT INTO dbo.DATAs (DatasField)
SELECT Field
FROM @ImportTable;
GO
C# app code to make use of the above SQL objects is below. Notice how, rather than filling up an object (e.g. a DataTable) and then executing the Stored Procedure, in this method it is the executing of the Stored Procedure that initiates the reading of the file contents. The input parameter of the Stored Proc isn't a variable; it is the return value of a method, GetFileContents. That method is called when the SqlCommand calls ExecuteNonQuery, which opens the file, reads a row and sends the row to SQL Server via the IEnumerable<SqlDataRecord> and yield return constructs, and then closes the file. The Stored Procedure just sees a Table Variable, @ImportTable, that can be accessed as soon as the data starts coming over (note: the data does persist for a short time, even if not the full contents, in tempdb).
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.IO;
using Microsoft.SqlServer.Server;
private static IEnumerable<SqlDataRecord> GetFileContents()
{
SqlMetaData[] _TvpSchema = new SqlMetaData[] {
new SqlMetaData("Field", SqlDbType.VarChar, SqlMetaData.Max)
};
SqlDataRecord _DataRecord = new SqlDataRecord(_TvpSchema);
StreamReader _FileReader = null;
try
{
_FileReader = new StreamReader("{filePath}");
// read a row, send a row
while (!_FileReader.EndOfStream)
{
// You shouldn't need to call "_DataRecord = new SqlDataRecord" as
// SQL Server already received the row when "yield return" was called.
// Unlike BCP and BULK INSERT, you have the option here to create a string
// call ReadLine() into the string, do manipulation(s) / validation(s) on
// the string, then pass that string into SetString() or discard if invalid.
_DataRecord.SetString(0, _FileReader.ReadLine());
yield return _DataRecord;
}
}
finally
{
_FileReader.Close();
}
}
The GetFileContents method above is used as the input parameter value for the Stored Procedure as shown below:
public static void test()
{
SqlConnection _Connection = new SqlConnection("{connection string}");
SqlCommand _Command = new SqlCommand("ImportData", _Connection);
_Command.CommandType = CommandType.StoredProcedure;
SqlParameter _TVParam = new SqlParameter();
_TVParam.ParameterName = "@ImportTable";
_TVParam.TypeName = "dbo.ImportStructure";
_TVParam.SqlDbType = SqlDbType.Structured;
_TVParam.Value = GetFileContents(); // return value of the method is streamed data
_Command.Parameters.Add(_TVParam);
try
{
_Connection.Open();
_Command.ExecuteNonQuery();
}
finally
{
_Connection.Close();
}
return;
}
Additional notes:
With some modification, the above C# code can be adapted to batch the data in (a rough sketch of one way to do this follows after these notes).
With minor modification, the above C# code can be adapted to send in multiple fields (the example shown in the "Streaming Data..." article linked above passes in 2 fields).
You can also manipulate the value of each record in the SELECT statement in the proc.
You can also filter out rows by using a WHERE condition in the proc.
You can access the TVP Table Variable multiple times; it is READONLY but not "forward only".
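A hedged sketch of one way to batch the streaming approach, under the assumption that the proc no longer does the TRUNCATE (otherwise each batch would wipe the previous one): GetFileContents gains a row limit, and the caller executes the proc once per batch.

// Variation of the GetFileContents method above: yield at most 'batchSize' rows per call.
private static IEnumerable<SqlDataRecord> GetFileContents(StreamReader fileReader, int batchSize)
{
    SqlMetaData[] tvpSchema = { new SqlMetaData("Field", SqlDbType.VarChar, SqlMetaData.Max) };
    SqlDataRecord dataRecord = new SqlDataRecord(tvpSchema);

    int rowsSent = 0;
    while (rowsSent < batchSize && !fileReader.EndOfStream)
    {
        dataRecord.SetString(0, fileReader.ReadLine());
        rowsSent++;
        yield return dataRecord;
    }
}

// Caller: keep one file and one command open, executing the proc once per batch.
using (StreamReader fileReader = new StreamReader("{filePath}"))
{
    while (!fileReader.EndOfStream)
    {
        _TVParam.Value = GetFileContents(fileReader, 10000); // nothing is read until ExecuteNonQuery enumerates it
        _Command.ExecuteNonQuery();
    }
}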
Advantages over SqlBulkCopy:
SqlBulkCopy is INSERT-only whereas using a TVP allows the data to be used in any fashion: you can call MERGE; you can DELETE based on some condition; you can split the data into multiple tables; and so on.
Due to a TVP not being INSERT-only, you don't need a separate staging table to dump the data into.
You can get data back from the database by calling ExecuteReader instead of ExecuteNonQuery. For example, if there was an IDENTITY field on the DATAs import table, you could add an OUTPUT clause to the INSERT to pass back INSERTED.[ID] (assuming ID is the name of the IDENTITY field). Or you can pass back the results of a completely different query, or both since multiple results sets can be sent and accessed via Reader.NextResult(). Getting info back from the database is not possible when using SqlBulkCopy yet there are several questions here on S.O. of people wanting to do exactly that (at least with regards to the newly created IDENTITY values).
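As a small illustration of that last point, a hedged sketch assuming dbo.DATAs had an IDENTITY column named ID and the proc's INSERT were given an OUTPUT clause:

// In the proc: INSERT INTO dbo.DATAs (DatasField) OUTPUT INSERTED.[ID] SELECT Field FROM @ImportTable;
// C# side: read the returned IDs instead of calling ExecuteNonQuery().
List<int> newIds = new List<int>(); // requires System.Collections.Generic
using (SqlDataReader reader = _Command.ExecuteReader())
{
    while (reader.Read())
    {
        newIds.Add(reader.GetInt32(0)); // each row is one INSERTED.[ID] value
    }
    // reader.NextResult() could be used here if the proc returned additional result sets.
}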
For more info on why it is sometimes faster for the overall process, even if slightly slower on getting the data from disk into SQL Server, please see this whitepaper from the SQL Server Customer Advisory Team: Maximizing Throughput with TVP
In C#, the best solution is to let SqlBulkCopy read the file. To do this, you need to pass an IDataReader directly to the SqlBulkCopy.WriteToServer method. Here is an example: http://www.codeproject.com/Articles/228332/IDataReader-implementation-plus-SqlBulkCopy
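The linked article implements an IDataReader over the file itself; as a minimal illustration of just the WriteToServer(IDataReader) overload, here is a hedged sketch that streams rows from one table into another (the table names and connection strings are assumptions):

// requires System.Data, System.Data.SqlClient
using (var sourceConnection = new SqlConnection(sourceConnectionString))
using (var targetConnection = new SqlConnection(targetConnectionString))
{
    sourceConnection.Open();
    targetConnection.Open();

    using (var sourceCommand = new SqlCommand("SELECT DatasField FROM dbo.DATAs_Staging", sourceConnection))
    using (IDataReader reader = sourceCommand.ExecuteReader())
    using (var bulkCopy = new SqlBulkCopy(targetConnection))
    {
        bulkCopy.DestinationTableName = "dbo.DATAs";
        bulkCopy.WriteToServer(reader); // rows are streamed; they are never all held in memory
    }
}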
The best way is a mix between your 1st and 2nd solutions: create a DataTable, add rows to it in the loop, and then use SqlBulkCopy to upload it to the database in one connection.
One other thing to pay attention to is that bulk copy is a very sensitive operation where almost any mistake will void the copy; for example, if you declare the column name in the DataTable as "text" and in the DB it is "Text", it will throw an exception. Good luck.
If you want to insert 10 million records in the shortest time directly using a SQL query for testing purposes, you can use this query:
CREATE TABLE TestData(ID INT IDENTITY (1,1), CreatedDate DATETIME)
GO
INSERT INTO TestData(CreatedDate) SELECT GetDate()
GO 10000000
I want to know if we are already getting the fastest SQL Server write performance for our application.
We created a sample application that performs a BulkCopy operation to a local SQL Server database. The BulkCopy operation writes 100,000 rows of data from a DataTable in memory. The table being inserted into has no indexes. This is because we just want to get the maximum write speed of SQL Server.
Here is the schema of the table we are inserting into:
CREATE TABLE [dbo].[HistorySampleValues](
[HistoryParameterID] [bigint] NOT NULL,
[SourceTimeStamp] [datetime2](7) NOT NULL,
[ArchiveTimestamp] [datetime2](7) NOT NULL,
[ValueStatus] [int] NOT NULL,
[ArchiveStatus] [int] NOT NULL,
[IntegerValue] [int] SPARSE NULL,
[DoubleValue] [float] SPARSE NULL,
[StringValue] [varchar](100) SPARSE NULL,
[EnumNamedSetName] [varchar](100) SPARSE NULL,
[EnumNumericValue] [int] SPARSE NULL,
[EnumTextualValue] [varchar](256) SPARSE NULL
) ON [PRIMARY]
We measure the performance from our C# code.
public double PerformBulkCopy()
{
DateTime timeToBulkCopy = DateTime.Now;
double bulkCopyTimeSpentMs = -1.0;
DataTable historySampleValuesDataTable = CreateBulkCopyRecords();
//start the timer here
timeToBulkCopy = DateTime.Now;
using (SqlConnection sqlConn = ConnectDatabase())
{
sqlConn.Open();
using (SqlTransaction sqlTransaction = sqlConn.BeginTransaction())
{
try
{
using (SqlBulkCopy sqlBulkCopy = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.KeepIdentity, sqlTransaction))
{
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_HISTORY_PARMETER_ID, SqlServerDatabaseStrings.SQL_FIELD_HISTORY_PARMETER_ID);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_SOURCE_TIMESTAMP, SqlServerDatabaseStrings.SQL_FIELD_SOURCE_TIMESTAMP);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_VALUE_STATUS, SqlServerDatabaseStrings.SQL_FIELD_VALUE_STATUS);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ARCHIVE_STATUS, SqlServerDatabaseStrings.SQL_FIELD_ARCHIVE_STATUS);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_INTEGER_VALUE, SqlServerDatabaseStrings.SQL_FIELD_INTEGER_VALUE);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_DOUBLE_VALUE, SqlServerDatabaseStrings.SQL_FIELD_DOUBLE_VALUE);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_STRING_VALUE, SqlServerDatabaseStrings.SQL_FIELD_STRING_VALUE);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_NAMEDSET_NAME, SqlServerDatabaseStrings.SQL_FIELD_ENUM_NAMEDSET_NAME);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_NUMERIC_VALUE, SqlServerDatabaseStrings.SQL_FIELD_ENUM_NUMERIC_VALUE);
sqlBulkCopy.ColumnMappings.Add(SqlServerDatabaseStrings.SQL_FIELD_ENUM_TEXTUAL_VALUE, SqlServerDatabaseStrings.SQL_FIELD_ENUM_TEXTUAL_VALUE);
sqlBulkCopy.DestinationTableName = SqlServerDatabaseStrings.SQL_TABLE_HISTORYSAMPLEVALUES;
sqlBulkCopy.WriteToServer(historySampleValuesDataTable);
}
sqlTransaction.Commit();
//end the timer here
bulkCopyTimeSpentMs = DateTime.Now.Subtract(timeToBulkCopy).TotalMilliseconds;
}
catch (Exception ex)
{
sqlTransaction.Rollback();
}
CleanUpDatabase(sqlConn);
}
sqlConn.Close();
}
return bulkCopyTimeSpentMs;
}
I have tried the different overloads of SqlBulkCopy.WriteToServer(): DataTable, DataReader and DataRow[].
On a machine with this specs:
i3-2120 CPU @ 3.30GHz
8GB of RAM
Seagate Barracuda 7200.12 ST3500413AS 500GB 7200 RPM
I am getting a throughput of ~150K-160K rows inserted per second using the different overloads.
Btw, I'm not including the creation of the DataTable in the measurement because I just wanted to get the actual performance of SQLBulkCopy as a baseline.
I am asking now, given our sample data and the sample table, is this the most we can get out of SQL Server SE? Or is there something we can do to make this even faster?
Let me know if there is more information you need about our setup.
Both SSIS and the bcp utility will perform better.
If you have to use SqlBulkCopy, make sure that you've covered all the basics: disabling indexes and triggers, switching to the simple or bulk-logged recovery model to minimize transaction logging, taking an exclusive lock on the table, and so on. Also fine-tune the BatchSize property (depending on your data, it could be a value between 100 and 5000), and maybe UseInternalTransaction as well.
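As a rough illustration of those SqlBulkCopy knobs (the values shown are placeholders to tune, not recommendations):

// TableLock takes a bulk update lock on the destination; UseInternalTransaction makes
// each batch its own transaction, so a failure only rolls back the current batch.
using (var bulkCopy = new SqlBulkCopy(
    connectionString, // assumed connection string
    SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction))
{
    bulkCopy.DestinationTableName = "dbo.HistorySampleValues";
    bulkCopy.BatchSize = 5000;      // tune somewhere between ~100 and ~5000 for your data
    bulkCopy.BulkCopyTimeout = 0;   // disable the default 30-second timeout for large loads
    bulkCopy.WriteToServer(historySampleValuesDataTable);
}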
Here's a link to MSDN you'll probably find helpful:
http://msdn.microsoft.com/en-us/library/ms190421.aspx
Please read The Data Loading Performance Guide, and also How to analyse SQL Server performance. You can't solve a performance problem without first identifying the bottleneck; the second link will help you measure properly where your bottleneck is. Right now it could be anywhere; for instance, it could be that your client is not sending data faster than ~160k rows/sec (paging of a large DataTable object could easily explain that).
As a general, non-specific and non-committal answer, you need to achieve minimally logged operations using reasonably sized batches, which implies committing every N rows (usually something in the 10k-100k row range). And make sure your database is properly set up for minimally logged operations; see Operations That Can Be Minimally Logged.
But again, the most important thing is to measure so you identify the bottleneck.
As a side note, I really do hope the SPARSE columns are there because your real table has 1000+ columns; for a table like the one posted here, the use of SPARSE is just detrimental and offers no benefit.
I am doing a loop insert as seen below (Method A); it seems that calling the database on every single loop iteration isn't a good idea. The alternative I found is to pass a comma-delimited string and loop over it in my SProc instead to do the inserts, so there is only one call to the DB. Will there be any significant improvement in terms of performance?
Method A:
foreach (DataRow row in dt.Rows)
{
userBll = new UserBLL();
UserId = (Guid)row["UserId"];
// Call userBll method to insert into SQL Server with UserId as one of the parameter.
}
Method B:
string UserIds = "Tom, Jerry, 007"; // Assuming we already concatenate the strings. So no loops this time here.
userBll = new UserBLL();
// Call userBll method to insert into SQL Server with 'UserIds' as parameter.
Method B SProc / Perform a loop insert in the SProc.
if right(rtrim(@UserIds), 1) <> ','
    SELECT @string = @UserIds + ','
SELECT @pos = patindex('%,%', @UserIds)
while @pos <> 0
begin
    SELECT @piece = left(@v, (@pos - 1))
    -- Perform the insert here
    SELECT @UserIds = stuff(@string, 1, @pos, '')
    SELECT @pos = patindex('%,%', @UserIds)
end
Fewer queries usually mean faster processing. That said, a co-worker of mine had some success with the .NET Framework's wrapper of T-SQL BULK INSERT, which is provided by the Framework as SqlBulkCopy.
This MSDN blog entry shows how to use it.
The main "API" sample is this (taken from the linked article as-is, it writes the contents of a DataTable to SQL):
private void WriteToDatabase()
{
// get your connection string
string connString = "";
// connect to SQL
using (SqlConnection connection =
new SqlConnection(connString))
{
// make sure to enable triggers
// more on triggers in next post
SqlBulkCopy bulkCopy =
new SqlBulkCopy
(
connection,
SqlBulkCopyOptions.TableLock |
SqlBulkCopyOptions.FireTriggers |
SqlBulkCopyOptions.UseInternalTransaction,
null
);
// set the destination table name
bulkCopy.DestinationTableName = this.tableName;
connection.Open();
// write the data in the "dataTable"
bulkCopy.WriteToServer(dataTable);
connection.Close();
}
// reset
this.dataTable.Clear();
this.recordCount = 0;
}
The linked article explains what needs to be done to leverage this mechanism.
In my experience, there are three things you don't want to have to do for each record:
Open/close a sql connection per row. This concern is handled by ADO.NET connection pooling. You shouldn't have to worry about it unless you have disabled the pooling.
Database roundtrip per row. This tends to be less about the network bandwidth or network latency and more about the client side thread sleeping. You want a substantial amount of work on the client side each time it wakes up or you are wasting your time slice.
Open/close the sql transaction log per row. Opening and closing the log is not free, but you don't want to hold it open too long either. Do many inserts in a single transaction, but not too many.
On any of these, you'll probably see a lot of improvement going from 1 row per request to 10 rows per request. You can achieve this by building up 10 insert statements on the client side before transmitting the batch.
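A hedged sketch of that batching idea (the table name dbo.Users, the UserId column and the userIds source are invented for illustration): ten parameterised INSERT statements are concatenated into one command, so each round trip does ten rows of work.

// requires System.Data.SqlClient, System.Text; 'connection' is an open SqlConnection
const int batchSize = 10;
var sql = new StringBuilder();

using (var command = new SqlCommand())
{
    command.Connection = connection;
    for (int i = 0; i < batchSize; i++)
    {
        sql.AppendFormat("INSERT INTO dbo.Users (UserId) VALUES (@UserId{0});", i);
        command.Parameters.AddWithValue("@UserId" + i, userIds[i]); // userIds: assumed data source
    }
    command.CommandText = sql.ToString();
    command.ExecuteNonQuery(); // one round trip, ten inserts
}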
Your approach of sending a list into a proc has been written about in extreme depth by Sommarskog.
If you are looking for better insert performance with multiple input values of a given type, I would recommend you look at table valued parameters.
And a sample can be found here, showing some example code that uses them.
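For instance, a minimal sketch (the table type dbo.UserIdList, the procedure dbo.InsertUsers and the column names are invented for illustration) that passes all the user IDs in a single call:

// SQL side (assumed objects):
//   CREATE TYPE dbo.UserIdList AS TABLE (UserId UNIQUEIDENTIFIER NOT NULL);
//   CREATE PROCEDURE dbo.InsertUsers @UserIds dbo.UserIdList READONLY
//   AS INSERT INTO dbo.Users (UserId) SELECT UserId FROM @UserIds;

// C# side ('connection' is an open SqlConnection, 'dt' is the DataTable from Method A):
DataTable userIdTable = new DataTable();
userIdTable.Columns.Add("UserId", typeof(Guid));
foreach (DataRow row in dt.Rows)
{
    userIdTable.Rows.Add((Guid)row["UserId"]);
}

using (SqlCommand command = new SqlCommand("dbo.InsertUsers", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    SqlParameter tvp = command.Parameters.AddWithValue("@UserIds", userIdTable);
    tvp.SqlDbType = SqlDbType.Structured;
    tvp.TypeName = "dbo.UserIdList";
    command.ExecuteNonQuery(); // all IDs inserted in one call
}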
You can use bulk insert functionality for this.
See this blog for details: http://blogs.msdn.com/b/nikhilsi/archive/2008/06/11/bulk-insert-into-sql-from-c-app.aspx