Insert multiple SQL rows via stored proc - C#

I have looked at some related topics but my question isn't quite answered:
C# - Inserting multiple rows using a stored procedure
Insert Update stored proc on SQL Server
Efficient Multiple SQL insertion
I have the following setup for running my stored procedure in the code-behind of my web application. The thing is, I am now faced with inserting multiple products, and I would like to do it all in one ExecuteNonQuery rather than run a foreach loop n times.
I am not sure how to do this, or whether it can be done, with my current setup.
The code should be somewhat self-explanatory, but if clarification is needed let me know. Thanks.
SqlDatabase database = new SqlDatabase(transMangr.ConnectionString);
DbCommand commandWrapper = StoredProcedureProvider.GetCommandWrapper(database, "proc_name", useStoredProc);
database.AddInParameter(commandWrapper, "#ProductID", DbType.Int32, entity._productID);
database.AddInParameter(commandWrapper, "#ProductDesc", DbType.String, entity._desc);
...more parameters...
Utility.ExecuteNonQuery(transMangr, commandWrapper);
Proc
ALTER PROCEDURE [dbo].[Products_Insert]
-- Add the parameters for the stored procedure here
@ProductID int,
@Link varchar(max),
@ProductDesc varchar(max),
@Date datetime
AS BEGIN
SET NOCOUNT ON;
INSERT INTO [dbo].[Products]
(
[CategoryID],
[Link],
[Desc],
[Date]
)
VALUES
(
@ProductID,
@Link,
@ProductDesc,
@Date
)
END

You should be fine running your stored procedure in a loop. Just make sure that you commit rarely, not after every insert.
For alternatives, you have already found the discussion about loading data.
Personally, I like set-based SQL inserts of the form insert into myTable select *, literalValue from someOtherTable;
But that will probably not do in your case.
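For concreteness, here is a minimal sketch of that loop with a single commit, using plain System.Data.SqlClient rather than the Enterprise Library wrappers from the question (the connection string, entity fields and proc name are assumptions):
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction())
    {
        foreach (var entity in entities)
        {
            using (var cmd = new SqlCommand("dbo.Products_Insert", conn, tran))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@ProductID", entity._productID);
                cmd.Parameters.AddWithValue("@ProductDesc", entity._desc);
                // ...more parameters...
                cmd.ExecuteNonQuery();
            }
        }
        tran.Commit(); // one commit for the whole batch, not one per insert
    }
}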

You could pass all your data as a table-valued parameter - MSDN has a pretty good write-up about it here
Something along the lines of the following should work:
CREATE TABLE dbo.tSegments
(
SegmentID BIGINT NOT NULL CONSTRAINT pkSegment PRIMARY KEY CLUSTERED,
SegCount BIGINT NOT NULL
);
CREATE TYPE dbo.SegmentTableType AS TABLE
(
SegmentID BIGINT NOT NULL
);
CREATE PROCEDURE dbo.sp_addSegments
@Segments dbo.SegmentTableType READONLY
AS
BEGIN
MERGE INTO dbo.tSegments AS T
USING @Segments AS S
ON T.SegmentID = S.SegmentID
WHEN MATCHED THEN UPDATE SET T.SegCount = T.SegCount + 1
WHEN NOT MATCHED THEN INSERT (SegmentID, SegCount) VALUES (S.SegmentID, 1);
END
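On the C# side, a table-valued parameter is typically passed as a DataTable through a structured SqlParameter. A sketch with plain SqlClient (names match the SQL above; connection handling is an assumption):
var segments = new DataTable();
segments.Columns.Add("SegmentID", typeof(long)); // shape must match dbo.SegmentTableType
segments.Rows.Add(1L);
segments.Rows.Add(2L);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.sp_addSegments", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter p = cmd.Parameters.AddWithValue("@Segments", segments);
    p.SqlDbType = SqlDbType.Structured;  // mark it as a TVP
    p.TypeName = "dbo.SegmentTableType"; // the server-side table type
    conn.Open();
    cmd.ExecuteNonQuery();               // all rows travel in one round trip
}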

Define the commandWrapper and the parameters for the command outside of the loop; then, within the loop, just assign the parameter values and execute the proc.
SqlDatabase database = new SqlDatabase(transMangr.ConnectionString);
DbCommand commandWrapper = StoredProcedureProvider.GetCommandWrapper(database, "proc_name", useStoredProc);
database.AddInParameter(commandWrapper, "#ProductID", DbType.Int32 );
database.AddInParameter(commandWrapper, "#ProductDesc", DbType.String);
...more parameters...
foreach (var entity in entities)
{
database.SetParameterValue(commandWrapper, "@ProductID", entity._productID);
database.SetParameterValue(commandWrapper, "@ProductDesc", entity._desc);
//..more parameters...
Utility.ExecuteNonQuery(transMangr, commandWrapper);
}

Not ideal from a purist point of view, but sometimes you are limited by frameworks and libraries: you are forced to call stored procedures in a certain way, bind parameters in a certain way, and have connections managed by pools as part of your framework.
In such circumstances, a method we have found to work is to simply write your stored procedure with a lot of parameters, usually a name followed by a number, e.g. @ProductId1, @ProductDesc1, @ProductId2, @ProductDesc2, up to a number you decide on, say 32.
You can use some form of scripting language to produce the lines for this.
You can get the stored procedure to insert all the values first into a table variable that allows nulls, then do bulk inserts / merges on this data in a way similar to Johnv2020's answer. You might remove the null rows first.
It will usually be more efficient than doing it one at a time (partly because of the database operations itself, and partly because of your framework's overheads in getting the connection to call the procedure etc.)
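A hypothetical sketch of binding such numbered parameters from C# - the proc name, the slot count and the entity fields are all invented for illustration:
const int batchSize = 32; // must match the number of parameter pairs the proc declares
using (var cmd = conn.CreateCommand())
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandText = "dbo.Products_InsertBatch"; // hypothetical proc with @ProductId1..@ProductId32
    for (int i = 1; i <= batchSize; i++)
    {
        var entity = i <= entities.Count ? entities[i - 1] : null;
        // Unused slots are sent as NULL; the proc is written to skip null rows.
        cmd.Parameters.AddWithValue("@ProductId" + i, (object)entity?._productID ?? DBNull.Value);
        cmd.Parameters.AddWithValue("@ProductDesc" + i, (object)entity?._desc ?? DBNull.Value);
    }
    cmd.ExecuteNonQuery();
}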

Related

Passing an array of objects to SQL stored procedure

I have 2 tables
payment (payment_id, otherCosts, GarageCosts)
spareparts (payment_id, sparepartId, sparePartQty)
In the payment table, payment_id is auto-generated. Apart from the otherCosts and GarageCosts values, in my C# ASP.NET application there is an array of objects with
{ sparepartId : 'Somevalue', sparePartQty : 'somevalue' }
What I need to do in a stored procedure is first enter the record into the payment table with the GarageCosts and otherCosts values, then return the last generated payment_id and enter it into the spareparts table as the payment_id for each of the value pairs in the array.
What is the way to achieve this? Please help.
Based on your tags I am going to assume we are talking about SQL Server / T-SQL.
You could do all this in one stored procedure:
CREATE PROCEDURE dbo.Foo ... /* input parameters */
AS
BEGIN
DECLARE @PaymentId int
INSERT INTO payment(otherCosts, GarageCosts) VALUES (...)
SET @PaymentId = SCOPE_IDENTITY()
INSERT INTO spareparts(payment_id, sparepartId, sparePartQty) VALUES(@PaymentId, ...)
END
GO
You may also want to look into @@IDENTITY, but make sure you read about @@IDENTITY vs. SCOPE_IDENTITY() and understand the risks associated with the first one.
If you need to have two separate sprocs you can do that too, and here is what the first sproc would look like. Note that @PaymentId is an output parameter, which means that the caller can retrieve it and pass it to the second procedure.
CREATE PROCEDURE dbo.Foo
/* input parameters */
@PaymentId int OUT
AS
BEGIN
INSERT INTO payment(otherCosts, GarageCosts) VALUES (...)
SET @PaymentId = SCOPE_IDENTITY()
END
GO
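Retrieving that output parameter from C# looks roughly like this (a SqlClient sketch; the input parameters are elided just as in the proc above):
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.Foo", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // ...add the input parameters here...
    SqlParameter idParam = cmd.Parameters.Add("@PaymentId", SqlDbType.Int);
    idParam.Direction = ParameterDirection.Output; // filled in by the proc
    conn.Open();
    cmd.ExecuteNonQuery();
    int paymentId = (int)idParam.Value; // pass this to the second procedure
}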
Edit - after the scope of the question was clarified:
If you need to call the second stored procedure and pass it an array of parameters, with SQL Server 2008 or newer you can use TVP (Table Value Parameters). To see how you can use them in stored procedures and how you can pass them from C# code see Table Value Parameters in SQL Server 2008 and .NET (C#) or Table-Valued Parameters.
You can also use TVPs with the solution where you only have one sproc.
To solve your problem, try this:
First, insert your data into the payment table with the otherCosts and GarageCosts values.
Then create a procedure to get the latest stored payment_id from the payment table:
create procedure select_last_payment_id
as
begin
select top 1 payment_id
from payment
order by payment_id desc
end
Lastly, get that payment_id by running the stored procedure, assign it to the payment_id of the spareparts table, and store the spareparts data.
Hope it works for you.

How to prevent SQL injection using a stored procedure while inserting records?

I am new to SQL Server. I am trying to insert records into a table using the stored procedure shown below.
I would like suggestions on the stored procedure below. Also:
can I prevent SQL injection?
is it the right way?
Correct me if I missed anything in the procedure below that could lead to SQL injection.
CREATE PROCEDURE [dbo].[spInsertParamTable]
@CmpyCode nvarchar(50),
@Code nvarchar(50),
@DisplayCode nvarchar(50),
@TotalDigit int,
@Nos bigint,
@Identity int OUTPUT
AS
BEGIN
INSERT tblParamTable (CmpyCode, Code, DisplayCode, TotalDigit, Nos)
VALUES (@CmpyCode, @Code, @DisplayCode, @TotalDigit, @Nos)
END
SELECT @Identity = SCOPE_IDENTITY();
RETURN @Identity
SQL Injection specifically refers to injecting SQL code into an existing SQL query that's built up via string concatenation and executed dynamically. It is almost always of the form:
@dynamicSQL = "select * from sensitivetable where field = " + @injectableParameter
sp_executesql @dynamicSQL
For this particular stored procedure, the worst an attacker could do is insert unhelpful values into your tblParamTable.
However, if these values are then used in a dynamically-built query later on, then this merely becomes a second-order attack: insert values on page 1, see results of dynamic query on page 2. (I only mention this since your table is named tblParamTable, suggesting it might contain parameters for later re-use.)
Can I prevent SQL injection?
You already are - there is no way to "inject" code into your SQL statement since you're using parameters.
Is it the right way?
Well, there's not one "right" way - but I don't see anything seriously wrong with what you're doing. A few suggestions:
You don't need to RETURN your output parameter value. Setting it is enough.
You have the last SELECT outside of the BEGIN/END block, which isn't hurting anything but for consistency you should put everything inside BEGIN/END (or leave them out altogether).
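For completeness, this is roughly what the parameterized call looks like from C# (a sketch; variable names and connection handling are assumptions). Because every value travels as a typed parameter, nothing the user types can change the SQL text itself:
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.spInsertParamTable", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@CmpyCode", cmpyCode); // values only, never SQL text
    cmd.Parameters.AddWithValue("@Code", code);
    cmd.Parameters.AddWithValue("@DisplayCode", displayCode);
    cmd.Parameters.AddWithValue("@TotalDigit", totalDigit);
    cmd.Parameters.AddWithValue("@Nos", nos);
    SqlParameter id = cmd.Parameters.Add("@Identity", SqlDbType.Int);
    id.Direction = ParameterDirection.Output;
    conn.Open();
    cmd.ExecuteNonQuery();
    int newId = (int)id.Value;
}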

Should you make multiple insert calls or pass XML?

I have an account creation process, and basically when the user signs up I have to make entries in multiple tables, namely User, Profile, and Addresses. There will be one entry in the User table, one entry in Profile, and 2-3 entries in the Address table, so at most there will be 5 entries. My question is: should I pass XML of this to my stored procedure and parse it there, or should I create a transaction object in my C# code, keep the connection open, and insert the addresses one by one in a loop?
How do you approach this scenario? Can making multiple calls degrade the performance even though the connection is open?
No offence, but you're overthinking this.
Gather your information and, when you have it all together, create a transaction and insert the new rows one at a time. There's no performance hit here, as the transaction will be short-lived.
A problem would arise if you created the transaction on the connection, inserted the user row, then waited for the user to enter more profile information, inserted that, then waited for them to add address information, and then inserted that. DO NOT DO THIS: that is a needlessly long-running transaction and will create problems.
However, your scenario (where you have all the data) is a correct use of a transaction: it ensures your data integrity, will not put any strain on your database, and will not - on its own - create deadlocks.
Hope this helps.
P.S. The drawback of the XML approach is the added complexity: your code needs to know the schema of the XML, and your stored procedure needs to know it too. The stored procedure also has the added complexity of parsing the XML before inserting the rows. I really don't see the advantage of the extra complexity for what is a simple, short-running transaction.
If you want to insert records into multiple tables, then using an XML parameter is a complex method. Creating XML in .NET and extracting records from it for three different tables is complex in SQL Server.
Executing queries within a transaction is an easy approach, but performance degrades somewhat because of the switching between .NET code and SQL Server.
The best approach is to use table-valued parameters in the stored procedure. Create three data tables in .NET code and pass them to the stored procedure.
-- Create types TargetUDT1, TargetUDT2 and TargetUDT3, one per target table, with all the fields that need to be inserted
CREATE TYPE [TargetUDT1] AS TABLE
(
[FirstName] [varchar](100) NOT NULL,
[LastName] [varchar](100) NOT NULL,
[Email] [varchar](200) NOT NULL
)
-- Now write the stored procedure as follows.
CREATE PROCEDURE AddToTarget(
@TargetUDT1 TargetUDT1 READONLY,
@TargetUDT2 TargetUDT2 READONLY,
@TargetUDT3 TargetUDT3 READONLY)
AS
BEGIN
INSERT INTO [Target1]
SELECT * FROM @TargetUDT1
INSERT INTO [Target2]
SELECT * FROM @TargetUDT2
INSERT INTO [Target3]
SELECT * FROM @TargetUDT3
END
In .NET, create the three DataTables, fill in the values, and call the sp normally.
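Something like the following sketch on the C# side (plain SqlClient assumed; only the first table is shown filled in, and its columns match TargetUDT1 above):
var t1 = new DataTable();
t1.Columns.Add("FirstName", typeof(string));
t1.Columns.Add("LastName", typeof(string));
t1.Columns.Add("Email", typeof(string));
t1.Rows.Add("Ada", "Lovelace", "ada@example.com");
var t2 = new DataTable(); // shape these to match TargetUDT2 / TargetUDT3
var t3 = new DataTable();

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.AddToTarget", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add(new SqlParameter("@TargetUDT1", SqlDbType.Structured) { TypeName = "TargetUDT1", Value = t1 });
    cmd.Parameters.Add(new SqlParameter("@TargetUDT2", SqlDbType.Structured) { TypeName = "TargetUDT2", Value = t2 });
    cmd.Parameters.Add(new SqlParameter("@TargetUDT3", SqlDbType.Structured) { TypeName = "TargetUDT3", Value = t3 });
    conn.Open();
    cmd.ExecuteNonQuery(); // one call inserts into all three tables
}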
For example, assuming your XML is as below:
<StoredProcedure>
<User>
<UserName></UserName>
</User>
<Profile>
<FirstName></FirstName>
</Profile>
<Address>
<Data></Data>
<Data></Data>
<Data></Data>
</Address>
</StoredProcedure>
this would be your stored procedure:
INSERT INTO Users (UserName) SELECT UserName FROM OPENXML(@idoc, 'StoredProcedure/User', 2)
WITH ( UserName NVARCHAR(256))
where @idoc is a handle to the parsed document, obtained as follows, and @doc is the input to the stored procedure:
DECLARE @idoc INT
--Create an internal representation of the XML document.
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc
Using a similar technique you would run the 3 inserts in a single stored procedure. Note that it is a single call to the database, and all the address elements will be inserted in that one call.
Update
Just so as not to mislead you, here is a complete stored procedure so you understand what you are going to do:
USE [DBNAME]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE PROCEDURE [dbo].[procedure_name]
@doc [ntext]
WITH EXECUTE AS CALLER
AS
DECLARE @idoc INT
DECLARE @ErrorProfile INT
SET @ErrorProfile = 0
--Create an internal representation of the XML document.
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc
BEGIN TRANSACTION
INSERT INTO Users (UserName)
SELECT UserName FROM OPENXML(@idoc,'StoredProcedure/User',2)
WITH ( UserName NVARCHAR(256) )
-- Insert Address
-- Insert Profile
SELECT @ErrorProfile = @@ERROR
IF @ErrorProfile = 0
BEGIN
COMMIT TRAN
END
ELSE
BEGIN
ROLLBACK TRAN
END
EXEC sp_xml_removedocument @idoc
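Calling such a procedure from C# is then a single command with the whole document in one parameter (a sketch; the XML string is abbreviated and plain SqlClient is assumed):
string doc = "<StoredProcedure><User><UserName>jdoe</UserName></User>" +
             "<Profile><FirstName>John</FirstName></Profile>" +
             "<Address><Data>a1</Data><Data>a2</Data></Address></StoredProcedure>";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.procedure_name", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@doc", SqlDbType.NText).Value = doc; // the whole batch in one parameter
    conn.Open();
    cmd.ExecuteNonQuery(); // one round trip inserts user, profile and every address
}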
Have you actually noticed any performance problems? What you are trying to do is very straightforward, and many applications do this day in, day out. Be careful not to be drawn into premature optimization.
Database inserts should be very cheap. As you have suggested, create a new transaction scope, open your connection, run your inserts, commit the transaction, and finally dispose of everything.
using (var tran = new TransactionScope())
using (var conn = new SqlConnection(YourConnectionString))
{
    conn.Open(); // the connection enlists in the ambient transaction when opened
    using (var insertCommand1 = conn.CreateCommand())
    using (var insertCommand2 = conn.CreateCommand())
    {
        insertCommand1.CommandText = "..."; // SQL to insert
        insertCommand2.CommandText = "..."; // SQL to insert
        insertCommand1.ExecuteNonQuery();
        insertCommand2.ExecuteNonQuery();
    }
    tran.Complete();
}
Bundling all your logic into a stored procedure and using XML brings added complications: you need additional logic in your database, you have to transform your entities into an XML blob, and your code becomes harder to unit test.
There are a number of things you can do to make the code easier to use. The first step would be to push your database logic into a reusable database layer and use the concept of a repository to read and write your objects from the database.
You could of course make your life a lot easier and have a look at any of the ORM (Object-relational mapping) libraries that are available. They take away the pain of talking to the database and handle that for you.

Better ways for merging data from two sources using Linq

I have some C#/LINQ code used to merge data from an Excel file into a DB, and it needs better performance.
There are:
1. A list read from the Excel file: List<Score> newScoreList
2. A DB table named Scores, with primary key (peopleId, testDate)
I need to merge data from the list to the table, and if there is any duplicate data, update it.
My current solution is:
1) Find the duplicate data with this LINQ expression:
var dupliData =
from newScore in newScoreList
from oldScore in db.Scores
where newScore.peopleId == oldScore.peopleId && newScore.testDate == oldScore.testDate
select oldScore;
2) Delete the duplicate data.
db.Scores.DeleteAllOnSubmit(dupliData);
3) Insert the new data from list.
db.Scores.InsertAllOnSubmit(newScoreList);
Could anybody give me a better solution?
I really hate stored procedures in general, but this is probably a perfect case for using one. My T-SQL is rusty, but this should give you the idea.
CREATE PROCEDURE dbo.InsertOrUpdateScore
(
@id as Int,
@date as DateTime,
@result as varchar(20)
)
AS
if not exists(SELECT id FROM Scores WHERE id = @id AND date = @date)
begin
INSERT INTO Scores (id, date, result) values (@id, @date, @result)
end
else
begin
UPDATE Scores
SET result = @result
WHERE id = @id AND date = @date
end
GO
Now, in the LINQ to SQL designer, select the Score entity and change its INSERT and UPDATE behaviour to use the stored procedure you just created. Make sure the user accessing the database has EXECUTE permission on the sproc.
This should perform quite a bit quicker than your version. You're trading an IN clause for N SELECTs on an index, which may be quicker; moreover, the result set of the IN clause is no longer transported back to the client over the network, which could save quite a bit of time.
Profile exactly how long your method is taking before implementing this, so you can gauge if this is truly quicker.
I'm not sure if this is the only way to create a Score in your application, but you might want to consider the case where you're INSERTing a record that doesn't yet have an ID. You'll need to modify the sproc to allow @id to be null and handle the INSERT appropriately.
Then it should just be:
db.Scores.InsertAllOnSubmit(newScoreList);
If you are using SQL Server 2008, you can use the MERGE statement:
http://www.builderau.com.au/program/sqlserver/soa/Using-SQL-Server-2008-s-MERGE-statement/0,339028455,339283059,00.htm

MySql Batching Stored Procedure Calls with .Net / Connector?

Is there a way to batch stored procedure calls in MySql with the .Net / Connector to increase performance?
Here's the scenario... I'm using a stored procedure that accepts a few parameters as input. This procedure basically checks to see whether an existing record should be updated or a new one inserted (I'm not using INSERT INTO .. ON DUPLICATE KEY UPDATE because the check involves date ranges, so I can't really make a primary key out of the criteria).
I want to call this procedure a lot of times (let's say batches of 1000 or so). I can, of course, use one MySqlConnection and one MySqlCommand instance, keep changing the parameter values, and call .ExecuteNonQuery().
I'm wondering if there's a better way to batch these calls?
The only thought that comes to mind is to manually construct a string like 'call sp_myprocedure(@parama_1, @paramb_1); call sp_myprocedure(@parama_2, @paramb_2); ...', and then create all the appropriate parameters. I'm not convinced this will be any better than calling .ExecuteNonQuery() a bunch of times.
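For illustration, that manually batched version might look like the sketch below (MySQL Connector/NET; the item property names are invented). Whether it actually beats repeated ExecuteNonQuery() calls is exactly what would need measuring:
var sql = new StringBuilder();
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        for (int i = 0; i < items.Count; i++)
        {
            sql.Append("call sp_myprocedure(@parama_" + i + ", @paramb_" + i + ");");
            cmd.Parameters.AddWithValue("@parama_" + i, items[i].ParamA);
            cmd.Parameters.AddWithValue("@paramb_" + i, items[i].ParamB);
        }
        cmd.CommandText = sql.ToString();
        cmd.ExecuteNonQuery(); // one round trip for the whole batch
    }
}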
Any advice? Thanks!
EDIT: More info
I'm actually trying to store data from an external data source on a regular basis. Basically, I'm taking RSS feeds of domain auctions (from various sources like GoDaddy, Pool, etc.) and updating a table with the auction info using this stored procedure (let's call it sp_storeSale). Now, in the table where the sale info gets stored I want to keep historical records of sales for a given domain, so I have a domain table and a sale table. The sale table has a many-to-one relationship with the domain table.
Here's the stored procedure:
-- --------------------------------------------------------------------------------
-- Routine DDL
-- Note: comments before and after the routine body will not be stored by the server
-- --------------------------------------------------------------------------------
DELIMITER $$
CREATE PROCEDURE `DomainFace`.`sp_storeSale`
(
middle VARCHAR(63),
extension VARCHAR(10),
brokerId INT,
endDate DATETIME,
url VARCHAR(500),
category INT,
saleType INT,
priceOrBid DECIMAL(10, 2),
currency VARCHAR(3)
)
BEGIN
DECLARE existingId BIGINT DEFAULT NULL;
DECLARE domainId BIGINT DEFAULT 0;
SET @domainId = fn_getDomainId(@middle, @extension);
SET @existingId = (
SELECT id FROM sale
WHERE
domainId = @domainId
AND brokerId = @brokerId
AND UTC_TIMESTAMP() BETWEEN startDate AND endDate
);
IF @existingId IS NOT NULL THEN
UPDATE sale SET
endDate = @endDate,
url = @url,
category = @category,
saleType = @saleType,
priceOrBid = @priceOrBid,
currency = @currency
WHERE
id = @existingId;
ELSE
INSERT INTO sale (domainId, brokerId, startDate, endDate, url,
category, saleType, priceOrBid, currency)
VALUES (@domainId, @brokerId, UTC_TIMESTAMP(), @endDate, @url,
@category, @saleType, @priceOrBid, @currency);
END IF;
END
As you can see, I'm basically looking for an existing record that is not 'expired' but has the same domain and broker, in which case I assume the auction is not over yet and the data is an update to the existing auction. Otherwise, I assume the auction is over, the existing row is a historical record, and the data I've got is for a new auction, so I create a new record.
Hope that clears up what I'm trying to achieve :)
I'm not entirely sure what you're trying to do, but it sounds kind of housekeeping- or maintenance-related, so I won't be too ashamed of posting the following suggestion.
Why don't you move all of your logic into the database and process it all server-side?
The following example uses a cursor (shock/horror) but it's perfectly acceptable to use them in such circumstances.
If you can avoid using cursors at all, great, but the main point of my suggestion is moving the logic from your application tier back into the data tier to save on round trips. You'd call the following sproc once and it would process the entire range of data in a single call.
call house_keeping(curdate() - interval 1 month, curdate());
Also, if you can provide just a bit more information about what you're trying to do we might be able to suggest other approaches.
Example stored procedure
drop procedure if exists house_keeping;
delimiter #
create procedure house_keeping
(
in p_start_date date,
in p_end_date date
)
begin
declare v_done tinyint default 0;
declare v_id int unsigned;
declare v_expired_date date;
declare v_cur cursor for
select id, expired_date from foo where
expired_date between p_start_date and p_end_date;
declare continue handler for not found set v_done = 1;
open v_cur;
repeat
fetch v_cur into v_id, v_expired_date;
/*
if <some condition> then
insert ...
else
update ...
end if;
*/
until v_done end repeat;
close v_cur;
end #
delimiter ;
Just in case you think I'm completely mad for suggesting cursors, you might want to read this:
Optimal MySQL settings for queries that deliver large amounts of data?
Hope this helps :)
