I have a small process that works on a few SQL tables. The tables were originally keyed on Guid primary keys, but for efficiency we're converting them to BigInt identity columns.
I have a batch insert that adds an item to the primary key table, then several items to the foreign key table. With Guids this was easy: I'd create the Guid in the code and pass it in as a parameter.
I'm curious what the best approach is for an identity column. I know I can do:
declare @id bigint
insert into PrimaryKeyTable (...) Values (...)
select @id = Scope_Identity()
and get back the primary key.
Is the best approach to split the batch in two, and pass the value back in from the code for the foreign key inserts? Or is there a way to do all the inserts in one SQL statement? Is there a general public opinion on the matter, or a best practice? Thank you for any guidance.
You need to use OUTPUT
INSERT...
OUTPUT INSERTED.ID
This allows you to do a batch insert, and it will spit out the batched identity IDs and whatever else you explicitly set to output; a sketch follows.
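A minimal sketch of the pattern, assuming a PrimaryKeyTable whose identity column is named ID and which has a Name column (both names are illustrative, not from the question):

DECLARE @NewIds TABLE (ID bigint)

-- OUTPUT ... INTO captures every generated identity, even for multi-row inserts
INSERT INTO PrimaryKeyTable (Name)
OUTPUT INSERTED.ID INTO @NewIds (ID)
VALUES ('first row'), ('second row')

-- the captured IDs can now drive the foreign key inserts in the same batch
SELECT ID FROM @NewIds

This keeps everything in one round trip: the foreign key inserts can select straight from @NewIds instead of passing values back through the client.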
I am trying to implement a bulk insert of data from a DataTable. In my MS SQL destination table I have a primary key column that is not an identity column, so I have to increment it manually. But that isn't possible in code, because there will be multiple threads working on the same table. Please give me a suggestion if you have any.
public void BulkInsert(DataTable dtTable)
{
    DataTable dtProductSold = dtTable;

    // create the SqlBulkCopy object; using ensures it is disposed
    using (SqlBulkCopy objbulk = new SqlBulkCopy(ConStr.ToString()))
    {
        // assign the destination table name
        objbulk.DestinationTableName = "BatchData_InvReportMapping";

        // map the source columns to the destination columns
        objbulk.ColumnMappings.Add("InvPK", "InvPK");
        objbulk.ColumnMappings.Add("DateValue", "DateValue");
        objbulk.ColumnMappings.Add("TextValue", "TextValue");
        objbulk.ColumnMappings.Add("NumericValue", "NumericValue");
        objbulk.ColumnMappings.Add("ErrorValue", "ErrorValue");

        // bulk insert the records into the database
        objbulk.WriteToServer(dtProductSold);
    }
}
Thanks in advance,
This is too long for a comment.
If you have a primary key column, then you need to take responsibility for its being unique and non-NULL when you insert rows. SQL Server offers a very handy mechanism to help with this, which is the identity column.
If you do not have an identity, then you basically have two options:
Load data that has a valid primary key column.
Create a trigger that assigns the value when rows are loaded in.
Oh, wait. The default option for bulk insert is not to fire triggers, so the second choice really isn't a good option.
Instead, modify the table to have an identity primary key column. Then define a view on the table without the primary key and do the bulk insert into the view. The primary key will then be assigned automatically.
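A sketch of the view approach, reusing the table from the question above (the column types are assumptions):

CREATE TABLE dbo.BatchData_InvReportMapping (
    InvPK int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    DateValue datetime NULL,
    TextValue varchar(255) NULL,
    NumericValue decimal(18, 4) NULL,
    ErrorValue varchar(255) NULL
)
GO
-- a view exposing everything except the identity column
CREATE VIEW dbo.BatchData_InvReportMapping_Load
AS
SELECT DateValue, TextValue, NumericValue, ErrorValue
FROM dbo.BatchData_InvReportMapping
GO

Point SqlBulkCopy's DestinationTableName at the view instead of the table, drop the InvPK column mapping, and the identity assigns the primary key for you.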
EDIT:
There is a third option, which might be feasible. Load the data into a staging table. Then insert from the staging table into the final table, calculating the primary key value. Something like this:
insert into finaltable (pk, . . .)
    select coalesce(m.maxpk, 0) + s.seqnum, . . .
    from (select row_number() over (order by (select null)) as seqnum,
                 . . .
          from stagingtable
         ) s cross join
         (select max(pk) as maxpk   -- coalesce above handles an empty finaltable
          from finaltable
         ) m;
I had one idea.
Generally we use tables to store the records; even if you insert the data through the front end, it is ultimately stored in a table. So I suggest using a sequence with an insert trigger on the table: when you insert data into the table, the trigger fires first, the sequence is incremented, and the incremented value is stored along with the other values in the row. Try this; in Oracle 11g we don't have IDENTITY, so we use a sequence plus an insert trigger for the identity column.
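The question is about SQL Server, which also has sequences as of SQL Server 2012, and there the trigger isn't needed: the sequence can be bound to the key column as a default. A sketch, with the sequence name assumed:

CREATE SEQUENCE dbo.InvPKSequence AS int START WITH 1 INCREMENT BY 1
GO
-- bind the sequence to the key column as its default
ALTER TABLE dbo.BatchData_InvReportMapping
    ADD CONSTRAINT DF_BatchData_InvPK
    DEFAULT (NEXT VALUE FOR dbo.InvPKSequence) FOR InvPK
GO

Rows inserted without an explicit InvPK then draw their key from the sequence, which is safe under concurrent writers.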
Create a table called Ids, with a VARCHAR(50) TableName column and an INT Id column.
When you want to generate your IDs, read the relevant row and increment it by the number of rows you want to insert, within the same transaction.
You can now bulk insert these rows whenever you want without worrying about other threads inserting them.
This is similar to how NHibernate's HiLo generator works; see the sketch after the link below.
http://weblogs.asp.net/ricardoperes/making-better-use-of-the-nhibernate-hilo-generator
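A minimal sketch of the reservation step as a single atomic UPDATE (the table follows the description above; the block size is an assumption):

CREATE TABLE dbo.Ids (TableName varchar(50) NOT NULL PRIMARY KEY, Id int NOT NULL)

DECLARE @blockSize int = 5000
DECLARE @lastId int

-- the compound assignment reads and increments the counter in one statement,
-- so two threads can never reserve the same range
UPDATE dbo.Ids
SET @lastId = Id = Id + @blockSize
WHERE TableName = 'BatchData_InvReportMapping'

-- this connection now owns IDs (@lastId - @blockSize + 1) through @lastId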
We are converting database primary keys from GUIDs to auto-incremented INTs. We have data that we parse from text files and put into two C# DataTables Claim and ClaimCharge that we have been using to bulk insert into identically named tables in the database. In the database, ClaimCharge.ClaimID is a foreign key to Claim.ID and several claim charges exist for one claim.
With GUIDs we generated the Claim and ClaimCharge IDs in C#, so bulk inserting was no problem. But with INTs, I don't know what the Claim.ID will be, so I can't assign ClaimCharge.ClaimID. I need some ideas on how this could be accomplished with INTs.
For instance, if the Claim table could be manually locked against inserts, I could:
1. Bulk insert into alternate tables named ClaimBulkData and ClaimChargeBulkData. These tables would still use GUIDs for convenience in keeping the relationship maintained between C# and SQL.
2. Manually lock the Claim table against inserts (don't know if this is possible) and get the MAX(ID).
3. Increment all of the data in ClaimBulkData using MAX(ID).
4. Associate ClaimChargeBulkData to ClaimBulkData using the newly updated INT.
5. Insert the data into the real Claim table as a set with IDENTITY_INSERT ON, using some kind of exception to the imaginary lock created in step 2.
6. Release the manually created lock against inserts on the Claim table (again, I don't know if this is possible).
7. Insert the data into the real ClaimCharge table.
I want to avoid inserting the data one row at a time in either C# or T-SQL.
Why not just add the new auto-increment column to the master tables -- you will then have both GUID and autoid column so you can fix up the foreign key relationship (one master table at a time)
i.e.,
Assume you have Master1, Detail1, and Detail2
alter table Master1 add ID int identity(1,1) not null
GO
alter table Detail1 add Master1ID int null
GO
alter table Detail2 add Master1ID int null
GO
Then update Detail1 and Detail2 by joining to Master1 on the old GUID key, to set the corresponding value of Master1ID in each table.
You can then add the foreign keys based on Master1ID to Detail1 and Detail2; see the sketch below.
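A sketch of that fix-up for one detail table, assuming the old GUID columns are named MasterGuid on Master1 and Master1Guid on Detail1 (those names are guesses, not from the answer):

UPDATE d
SET d.Master1ID = m.ID
FROM Detail1 d
INNER JOIN Master1 m ON m.MasterGuid = d.Master1Guid
GO
ALTER TABLE Detail1
    ADD CONSTRAINT FK_Detail1_Master1
    FOREIGN KEY (Master1ID) REFERENCES Master1 (ID)
GO

Repeat the same pair of statements for Detail2.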
At this point you should have a complete set of data based on both sets of keys, and you can test update views, etc. to make sure they work with the new integer ids
Finally, once all is cool, drop the unneeded GUID foreign keys and the GUID columns themselves.
You can always run a database shrink once you get everything clean and converted, if your intent with this restructuring was to reduce overall disk usage. The point is that much of the work in a process like this is fixing up the foreign keys.
We have an ASP.NET/MSSQL based web app which generates orders with sequential order numbers.
When a user saves a form, a new order is created as follows:
SELECT MAX(order_number) FROM order_table, call this max_order_number
set new_order_number = max_order_number + 1
INSERT a new order record, with this new_order_number (it's just a field in the order record, not a database key)
If I enclose the above 3 steps in single transaction, will it avoid duplicate order numbers from being created, if two customers save a new order at the same time? (And let's say the system is eventually on a web farm with multiple IIS servers and one MSSQL server).
I want to avoid two customers selecting the same MAX(order_number) due to concurrency somewhere in the system.
What isolation level should be used? Thank you.
Why not just use an Identity as the order number?
Edit:
As far as I know, you can make the current order_number column an Identity (you may have to reset the seed, it's been a while since I've done this). You might want to do some tests.
Here's a good read about what actually goes on when you change a column to an Identity in SSMS. The author mentions how this may take a while if the table already has millions of rows.
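If you would rather do the conversion by hand, what SSMS generates is essentially a rebuild-and-swap. A rough sketch, with names assumed to match the question:

CREATE TABLE dbo.order_table_new (
    order_number int IDENTITY(1,1) NOT NULL
    -- ...the other order columns...
)
GO
SET IDENTITY_INSERT dbo.order_table_new ON
INSERT INTO dbo.order_table_new (order_number /*, other columns */)
SELECT order_number /*, other columns */ FROM dbo.order_table
SET IDENTITY_INSERT dbo.order_table_new OFF
GO
-- reseed so new rows continue after the current maximum
DBCC CHECKIDENT ('dbo.order_table_new', RESEED)
GO
EXEC sp_rename 'dbo.order_table', 'order_table_old'
EXEC sp_rename 'dbo.order_table_new', 'order_table'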
Using an identity is by far the best idea. I create all my tables like this:
CREATE TABLE mytable (
mytable_id int identity(1, 1) not null primary key,
name varchar(50)
)
The "identity" flag means, "Let SQL Server assign this number for me". The (1, 1) means that identity numbers should start at 1 and be incremented by 1 each time someone inserts a record into the table. Not Null means that nobody should be allowed to insert a null into this column, and "primary key" means that we should create a clustered index on this column. With this kind of a table, you can then insert your record like this:
-- We don't need to insert into mytable_id column; SQL Server does it for us!
INSERT INTO mytable (name) VALUES ('Bob Roberts')
But to answer your literal question, I can give a lesson about how transactions work. It's certainly possible, although not optimal, to do this:
-- Begin a transaction - this means everything within this region will be
-- executed atomically, provided we take the right locks along the way.
BEGIN TRANSACTION

DECLARE @id bigint

-- Retrieve the maximum order number from the table. The UPDLOCK and HOLDLOCK
-- hints hold the lock until commit, so no other transaction can read the same
-- MAX; under the default READ COMMITTED isolation alone, two transactions
-- could still pick the same number.
SELECT @id = MAX(order_number) FROM order_table WITH (UPDLOCK, HOLDLOCK)

-- While we hold those locks, this insert cannot collide with another writer
INSERT INTO order_table (order_number) VALUES (@id + 1)

-- Committing the transaction releases the locks and allows other programs
-- to work on the order table
COMMIT TRANSACTION
Just keep in mind that declaring your table with an identity primary key column does this all for you automatically.
The risk is two processes selecting the MAX(order_number) before one of them inserts the new order. A safer way is to do it in one step:
INSERT INTO order_table
    (order_number, /* other fields */)
SELECT MAX(order_number) + 1,   -- SQL Server doesn't allow a subquery in a VALUES clause
       /* other values */
FROM order_table WITH (UPDLOCK, HOLDLOCK)   -- keeps two concurrent inserts from reading the same MAX
I agree with G_M; use an Identity field. When you add your record, just
INSERT INTO order_table (/* other fields */)
VALUES (/* other fields */) ; SELECT SCOPE_IDENTITY()
The return value from SCOPE_IDENTITY() will be your order number.
What is the best approach when generating a primary key for a table?
That is, when the data received by the database has no naturally unique field and so can't be used as a primary key.
In the code, what is the best way to manage a primary key for the table rows?
Thanks.
First recommendation: stay away from uniqueidentifier for any primary key. Although it has some attractions, like being easy to generate client side, it makes it almost impossible to have any useful indexes on the primary key, because random GUIDs badly fragment a clustered index. If I could go back in time and ban uniqueidentifiers from 99% of the places they have been used, it would have saved more than 3 man-years of DBA/development time in the last 2 years.
Here is what I would recommend, using the INT IDENTITY as a primary key.
create table YourTableName (
    pkID int not null identity primary key,
    -- ...the rest of the columns declared here
)
where pkID is the name of your primary key column.
This should do what you are looking for.
AUTO_INCREMENT in MySQL, IDENTITY in SQL Server.
IDENTITY in SQL Server
And if you need to know what your new ID was while inserting the data, use the OUTPUT clause of the INSERT statement, so that a copy of the new rows is put into a table-type parameter or table variable.
If for some reason generating a unique ID in SQL is not suitable for you, generate GUIDs in your app - a GUID has a very high level of uniqueness (though it is not strictly guaranteed). SQL Server has a dedicated GUID column type called uniqueidentifier; a sketch follows the link below.
http://msdn.microsoft.com/en-us/library/ms187942.aspx
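If you do go the GUID route, here's a sketch of the server-side version (table and column names are illustrative):

CREATE TABLE dbo.Orders (
    OrderID uniqueidentifier NOT NULL
        CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID()
        PRIMARY KEY,
    CustomerName varchar(50) NULL
)

NEWSEQUENTIALID() produces sequential GUIDs, which fragment a clustered index far less than the random ones NEWID() returns; it can only be used in a column default like this.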
A database exists with two tables:
Data_t : DataID, the primary key, IDENTITY(1,1). Also has another field, LEFT TINYINT.
Data_Link_t : DataID, a PK and FK where DataID must exist in Data_t. Also has another field, RIGHT SMALLINT.
Coming from a Microsoft Access environment into C# and SQL Server, I'm looking for a good method of importing a record into this relationship.
The record contains information that belongs on both sides of this join (possibly inserting/updating upwards of 5,000 records at once). It would be a bonus to process the entire batch with some kind of LINQ list-type command, but even if this is done record by record, the key goal is that BOTH sides of each record are processed in the same step.
There are countless approaches, and I'm looking at too many to decide which way to go, so I thought it faster to ask the general public. Is LINQ to SQL an option for inserting/updating a big list like this? Should I go record by record? What approach should I use to add a record to normalized tables that, when joined, make up the full record?
Sounds like a case where I'd write a small stored proc and call that from C# - e.g. as a function on my Linq-to-SQL data context object.
Something like:
CREATE PROCEDURE dbo.InsertData(@Left TINYINT, @Right SMALLINT)
AS BEGIN
    DECLARE @DataID INT

    INSERT INTO dbo.Data_t([LEFT]) VALUES(@Left)
    SELECT @DataID = SCOPE_IDENTITY();

    INSERT INTO dbo.Data_Link_t(DataID, [RIGHT]) VALUES(@DataID, @Right)
END
If you import that into your data context, you could call this something like:
using(YourDataContext ctx = new YourDataContext())
{
    foreach(YourObjectType obj in YourListOfObjects)
    {
        ctx.InsertData(obj.Left, obj.Right);
    }
}
and let the stored proc handle all the rest (all the details, like determining and using the IDENTITY from the first table in the second one) for you.
I have never tried it myself, but you might be able to do exactly what you are asking for by creating an updateable view and then inserting records into the view.
UPDATE
I just tried it, and it doesn't look like it will work.
Msg 4405, Level 16, State 1, Line 1
View or function 'Data_t_and_Data_Link_t' is not updatable because the modification affects multiple base tables.
I guess this is just one more thing for all the Relational Database Theory purists to hate about SQL Server.
ANOTHER UPDATE
Further research has found a way to do it. It can be done with a view and an "instead of" trigger.
create table Data_t
(
DataID int not null identity primary key,
[LEFT] tinyint
)
GO
create table Data_Link_t
(
DataID int not null primary key foreign key references Data_T (DataID),
[RIGHT] smallint
)
GO
create view Data_t_and_Data_Link_t
as
select
d.DataID,
d.[LEFT],
dl.[RIGHT]
from
Data_t d
inner join Data_Link_t dl on dl.DataID = d.DataID
GO
create trigger trgInsData_t_and_Data_Link_t on Data_t_and_Data_Link_T
instead of insert
as
insert into Data_t ([LEFT]) select [LEFT] from inserted
-- @@IDENTITY returns only the last generated value, so this trigger is only
-- correct for single-row inserts like the example below
insert into Data_Link_t (DataID, [RIGHT]) select @@IDENTITY, [RIGHT] from inserted
go
insert into Data_t_and_Data_Link_t ([LEFT],[RIGHT]) values (1, 2)