If I execute a set of SQL commands within a single SqlCommand.ExecuteReader() call, can I consider them to run under a transaction by default, or do I have to express the transaction explicitly?
If not, would the .NET pattern using (var scope = new TransactionScope()) { ... } be enough to make it transactional?
The first part of the command batch declares a table variable based on the values of the table being updated in the second part:
DECLARE @tempTable TABLE (ID int,...)
INSERT INTO @tempTable (ID,...)
SELECT [ID],.. FROM [Table] WHERE ...
The second part is a set of items that should be updated or inserted:
IF EXISTS(SELECT 1 FROM @tempTable WHERE ID=@id{0})
BEGIN
UPDATE [Table]
SET ... WHERE ...
END ELSE BEGIN
INSERT INTO [Table] ([ID], ...) VALUES (@id{0}, ...)
END
After they're concatenated, I execute them with a single SqlCommand.ExecuteReader() call.
The reason for this batching is that I'd like to minimize the recurring transfer of parameters between SQL Server and my app, since they're constants within the scope of the batch.
The reason I'm asking is to avoid redundant code and to understand whether the original data in [Table] (the source of @tempTable) could change before the last update/insert fragment of the batch finishes.
Executing a SqlCommand won't begin any transaction.
You have two choices:
Use "BeginTransaction" from connection and passing transaction object to SqlCommand
Use TransactionScope, which uses DTC and allows you to manage transactions across multiple dbs
I would use the second.
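For completeness, a rough sketch of the first option, an ADO.NET transaction opened on the connection itself (the connection string and the SQL text are placeholders, not your actual statements):
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString)) // placeholder connection string
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction())
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.Transaction = tran; // the command must be enlisted explicitly
        cmd.CommandText = @"
            DELETE FROM ...;
            INSERT INTO ...;";
        cmd.ExecuteNonQuery();
        tran.Commit(); // or tran.Rollback() in your error handling
    }
}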
Related
Is a single call to ExecuteNonQuery() atomic or does it make sense to use Transactions if there are multiple sql statements in a single DbCommand?
See my example for clarification:
using (var ts = new TransactionScope())
{
using (DbCommand lCmd = pConnection.CreateCommand())
{
lCmd.CommandText = @"
DELETE FROM ...;
INSERT INTO ...";
lCmd.ExecuteNonQuery();
}
ts.Complete();
}
If you don't ask for a transaction, you (mostly) don't get one. SQL Server wants everything in transactions and so, by default (with no other transaction management), for each separate statement, SQL Server will create a transaction and automatically commit it. So in your sample (if there was no TransactionScope), you'll get two separate transactions, both independently committed or rolled back (on error).
(Unless you've turned IMPLICIT_TRANSACTIONS on for that connection, in which case you'll get one transaction, but you need an explicit COMMIT or ROLLBACK at the end. The only people I've found using this mode are people porting from Oracle who are trying to minimize changes. I wouldn't recommend turning it on for greenfield work because it'll just confuse people used to SQL Server's defaults.)
It's not. The SQL engine will treat this text as two separate statements. TransactionScope is required (or any other form of transaction, e.g. an explicit BEGIN TRAN ... COMMIT in the SQL text if you prefer).
No. As the above answers say, the command (as opposed to the individual statements within it) will not be run inside a transaction.
This is easy to verify.
Sample code:
create table t1
(
Id int not null,
Name text
)
using (var conn = new SqlConnection(...))
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = @"
        insert into t1 values (1, 'abc');
        insert into t1 values (null, 'pqr');
    ";
    cmd.ExecuteNonQuery();
}
The second statement will fail. But the first statement will execute and you'll have a row in the table.
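If you wrap the ExecuteNonQuery() call above in a try/catch and then count the rows, you can see that the first insert was committed on its own. A minimal check, assuming the t1 table from the sample and a connectionString variable:
using System;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString)) // connectionString is assumed to exist
{
    conn.Open();
    using (var check = conn.CreateCommand())
    {
        // With no surrounding transaction this prints 1: the first insert survived the error.
        check.CommandText = "select count(*) from t1;";
        int rows = (int)check.ExecuteScalar();
        Console.WriteLine(rows);
    }
}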
For testing, I INSERT a bunch of rows into a SQL Server DB and I want to clean them up at the end. While this should be a dedicated test-only database, I want to be well-behaved just in case. For the same reason, I don't want to make assumptions about what data may already be in the DB when deleting my test data, e.g. deleting all rows or all rows with dates in a certain range.
When inserting rows, is there a way I can capture a reference to the actual rows inserted, so I can later delete exactly those rows regardless of whether the data in them has been modified? That way I know which rows I 'own' for the test.
I'm calling SQL Server queries from C# code within the MSTest framework.
Yes, it is possible with the OUTPUT clause.
You need to create an intermediate table variable to capture the inserted records' identity values.
DECLARE @TAB TABLE(ID INT)
INSERT INTO TABLE1
output inserted.Table1PK into @TAB
SELECT id,name ... ETC
Now check your table variable:
SELECT * FROM @TAB
You can also dump the table variable's data into an actual table if you want to keep it.
For more info, look at OUTPUT Clause (Transact-SQL).
Now you can delete the TABLE1 records like this:
DELETE FROM TABLE1 WHERE ID IN (SELECT ID FROM @TAB)
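From the C# test code, the same idea works without the intermediate table variable: the OUTPUT clause can return the generated keys directly to the client, and the test keeps them for cleanup. A rough sketch, assuming the TABLE1/Table1PK names from the answer above, a hypothetical name column, and an existing connectionString variable:
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

var insertedIds = new List<int>();

using (var conn = new SqlConnection(connectionString)) // connectionString is assumed to exist
{
    conn.Open();

    // OUTPUT inserted.Table1PK sends the generated key back as a result set.
    using (var insert = conn.CreateCommand())
    {
        insert.CommandText =
            "INSERT INTO TABLE1 (name) OUTPUT inserted.Table1PK VALUES (@name);";
        insert.Parameters.AddWithValue("@name", "test row");
        using (var reader = insert.ExecuteReader())
        {
            while (reader.Read())
                insertedIds.Add(reader.GetInt32(0));
        }
    }

    // Later, delete exactly the rows the test created, whatever they contain by then.
    using (var delete = conn.CreateCommand())
    {
        delete.CommandText = "DELETE FROM TABLE1 WHERE Table1PK = @id;";
        var idParam = delete.Parameters.Add("@id", SqlDbType.Int);
        foreach (int id in insertedIds)
        {
            idParam.Value = id;
            delete.ExecuteNonQuery();
        }
    }
}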
You can use transactions and roll back the transaction instead of committing it at the end of your test.
You can do this with an explicit transaction, or by running SET IMPLICIT_TRANSACTIONS ON; at the start of your session and not committing the transactions.
begin transaction test /* explicit transaction */
insert into TestTable (pk,col) values
(1,'val');
update TestTable
set pk=2
where pk=1;
select *
from TestTable
where pk = 2;
rollback transaction
--commit transaction
I wouldn't do this in a production environment because the transaction will hold locks until the transaction completes, but if you are testing combinations of inserts, updates, and deletes where the pk might change, this could be safer than trying to track all of your changes to manually roll them back.
This would also protect you from making mistakes in your tests that you aren't able to manually roll back.
Related: How to: Write a SQL Server Unit Test that Runs within the Scope of a Single Transaction
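In C#/MSTest terms the same idea is simply a TransactionScope that is never completed, so everything the test inserted is rolled back when the scope is disposed. A sketch, reusing the TestTable from the T-SQL example above; the test class and connection string are illustrative:
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CleanupTests
{
    private const string connectionString = "..."; // assumed test connection string

    [TestMethod]
    public void InsertsAreRolledBackWhenTheScopeIsNotCompleted()
    {
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open(); // the connection enlists in the ambient transaction

            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "insert into TestTable (pk, col) values (1, 'val');";
                cmd.ExecuteNonQuery();
            }

            // ... run assertions against the data here ...

            // No scope.Complete(): disposing the scope rolls everything back.
        }
    }
}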
In my .NET code, inside a database transaction (using TransactionScope), I could include a nested block with TransactionScopeOption.Suppress, which ensures that the commands inside the nested block are committed even if the outer block rolls back.
Following is a code sample:
using (TransactionScope txnScope = new TransactionScope(TransactionScopeOption.Required))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Business(Value) Values('Some Value')");
using (TransactionScope txnLogging = new TransactionScope(TransactionScopeOption.Suppress))
{
db.ExecuteNonQuery(CommandType.Text, "Insert Into Logging(LogMsg) Values('Log Message')");
txnLogging.Complete();
}
// Something goes wrong here. Logging is still committed
txnScope.Complete();
}
I was trying to find out whether this could be done in T-SQL. A few people have recommended OPENROWSET, but it doesn't look very 'elegant' to use. Besides, I think it is a bad idea to put connection information in T-SQL code.
I've used SQL Service Broker in the past, but it too uses transactional messaging, which means the message is not posted to the queue until the database transaction is committed.
My requirement: our application's stored procedures are fired by a third-party application, within a transaction initiated outside the stored procedure. I want to be able to catch and log any errors (in a database table in the same database) within my stored procedures, then re-throw the exception so the third-party app rolls back the transaction and knows that the operation has failed (and can do whatever is required in case of a failure).
You can set up a loopback linked server with the 'remote proc transaction promotion' option set to false and then access it from T-SQL, or use a CLR procedure in SQL Server to create a new connection outside the transaction and do your work there.
Both methods are suggested in How to create an autonomous transaction in SQL Server 2008.
Both methods involve creating new connections. There is an open Connect item requesting this functionality be provided natively.
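For the CLR route, the key detail is Enlist=false in the connection string, so the new connection does not join the caller's transaction (the assembly needs at least EXTERNAL_ACCESS permission to open such a connection). A rough, untested sketch of a SQLCLR logging procedure; the table, database and connection details are illustrative only:
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public static class AutonomousLogging
{
    [SqlProcedure]
    public static void LogMessage(string logMsg)
    {
        // Enlist=false keeps this connection out of the surrounding transaction,
        // so the INSERT commits even if the caller later rolls back.
        using (var conn = new SqlConnection(
            "Data Source=.;Initial Catalog=MyDb;Integrated Security=true;Enlist=false"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "INSERT INTO dbo.ErrorLog (LogMsg) VALUES (@msg);";
                cmd.Parameters.AddWithValue("@msg", logMsg);
                cmd.ExecuteNonQuery();
            }
        }
    }
}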
Values in a table variable exist beyond a ROLLBACK.
So in the following example, all the rows that were going to be deleted can be inserted into a persisted table and queried later on thanks to a combination of OUTPUT and table variables.
-- First, create our table
CREATE TABLE [dbo].[DateTest] ([Date_Test_Id] INT IDENTITY(1, 1), [Test_Date] datetime2(3));
-- Populate it with 15,000,000 rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT
TOP (15000000)
DATEADD(DAY, 0, ABS(CHECKSUM(NEWID())) % 42734)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];
BEGIN TRAN;
BEGIN TRY
DECLARE @logger TABLE ([Date_Test_Id] INT, [Test_Date] DATETIME);
-- Delete every 1000th row
DELETE FROM [dbo].[DateTest]
OUTPUT deleted.Date_Test_Id, deleted.Test_Date INTO @logger
WHERE [Date_Test_Id] % 1000 = 0;
-- Make it fail
SELECT 1/0
-- So this will never happen
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRAN
SELECT * INTO dbo.logger FROM @logger;
END CATCH;
SELECT * FROM dbo.logger;
DROP TABLE dbo.logger;
I have to configure a system which provides me with an area to input SQL statements.
It is important to notice that we cannot modify the system we are configuring.
I believe the system was built in C# (.NET for sure, but C# is a guess).
Anyway, I'm trying to create a script that would:
create a temporary table
create a temporary procedure (which inserts into the table created)
call the temporary procedure 4 times
read the temp table as a response to the system's call.
Something like:
CREATE Procedure #myTempProcedure(
@param1 nvarchar(max)
) as
begin
insert #tempTable (col1, col2) select aCol, bCol from table2 where col2 = @param1;
end;
CREATE TABLE #tempTable
(col1 nvarchar(512),
(col2 nvarchar(512));
EXEC #myTempProcedure N'val1';
EXEC #myTempProcedure N'val2';
EXEC #myTempProcedure N'val3';
EXEC #myTempProcedure N'val4';
select col1, col2 from #tempTable;
The system is very likely executing my script via the C# SqlCommand.ExecuteReader() method, as I can reproduce the problem in a simple C# application I created.
The problem is that when executing this script, the system (or SQL Server) treats the entire script as the procedure body and seems to disregard the ; in line 6 of the example above. My intention with that ; was to mark the end of the procedure creation.
Executing this script in Management Studio requires a GO to be placed at line 7 of the example above; otherwise the same problem the system reports also happens in Management Studio.
Is there a GO equivalent I could use in this script to get it to work?
Or is there a better way to script this?
I have a background in Oracle, and I'm still learning SQL Server's usual tricks... The system accepts multiple commands apart from the CREATE PROCEDURE here, so I'm inclined to believe there is a SQL Server trick I could use.
Thank you in advance!
The problem is that syntactically there is no way to create a procedure and then do something after it in the same batch. The compiler doesn't know where it ends, and things like semi-colon don't fix it (because semi-colon only terminates a statement, not a batch).
Using dynamic SQL (and fixing one syntax error), this works:
EXEC('
CREATE Procedure ##myTempProcedure(
@param1 nvarchar(max)
) as
begin
insert #tempTable (col1, col2) select aCol, bCol from table2 where col2 = @param1;
end;
');
CREATE TABLE #tempTable
(
col1 nvarchar(512),
col2 nvarchar(512)
);
EXEC ##myTempProcedure N'val1';
EXEC ##myTempProcedure N'val2';
EXEC ##myTempProcedure N'val3';
EXEC ##myTempProcedure N'val4';
select col1, col2 from #tempTable;
EXEC('DROP PROC ##myTempProcedure;');
1) Please look at the rights on the server; you may have issues if you cannot change or add anything to the system, i.e. CREATE PROCEDURE statements.
2) You could do a small exercise:
open a connection object using SqlConnection()
keep the connection open until you have executed all your statements, i.e.
a) create your #table
b) execute your insert statement
c) select * from your #table
This should get you back the data you intend to get from your temp table. Note I skipped the entire proc here.
Instead of creating a stored procedure, you can execute SQL statements delimited by semicolons; you can execute multiple statements that way. Also, if you want to create a temp table and load it with data, you can use the same connection with multiple SQL commands.
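A rough sketch of that "same connection, multiple commands" approach, reusing the temp table and insert from the question (connectionString, table2 and the parameter values are placeholders); the temp table stays alive for as long as the one SqlConnection remains open:
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString)) // placeholder connection string
{
    conn.Open(); // #tempTable lives for the lifetime of this session

    using (var create = conn.CreateCommand())
    {
        create.CommandText =
            "CREATE TABLE #tempTable (col1 nvarchar(512), col2 nvarchar(512));";
        create.ExecuteNonQuery();
    }

    using (var insert = conn.CreateCommand())
    {
        insert.CommandText =
            "insert into #tempTable (col1, col2) select aCol, bCol from table2 where col2 = @param1;";
        insert.Parameters.AddWithValue("@param1", "val1");
        insert.ExecuteNonQuery(); // repeat with val2..val4, changing the parameter value
    }

    using (var select = conn.CreateCommand())
    {
        select.CommandText = "select col1, col2 from #tempTable;";
        using (var reader = select.ExecuteReader())
        {
            while (reader.Read())
            {
                // reader.GetString(0), reader.GetString(1) ...
            }
        }
    }
}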
Given that the proc definition doesn't change, and that there is no real harm in the proc existing beyond the end of this particular process, it could just as easily be a regular (i.e. non-temporary) Stored Procedure that just happens to exist in tempdb. The benefit of using a regular Stored Procedure created in tempdb is that you do not need to worry about potential name collisions when using global temporary stored procedures. The script simply needs to ensure that the stored procedure exists. But there is no need to remove the Stored Procedure manually or have it automatically cleaned up.
The following code is adapted from @RBarryYoung's answer:
IF (OBJECT_ID(N'tempdb.dbo.myTempProcedure') IS NULL)
BEGIN
USE [tempdb];
EXEC('
CREATE PROCEDURE dbo.myTempProcedure(
@param1 NVARCHAR(MAX)
) AS
BEGIN
INSERT INTO #tempTable (col1, col2)
SELECT aCol, bCol
FROM table2
WHERE col2 = @param1;
END;
');
END;
CREATE TABLE #tempTable
(
col1 NVARCHAR(512),
col2 NVARCHAR(512)
);
EXEC tempdb.dbo.myTempProcedure N'val1';
EXEC tempdb.dbo.myTempProcedure N'val2';
EXEC tempdb.dbo.myTempProcedure N'val3';
EXEC tempdb.dbo.myTempProcedure N'val4';
SELECT col1, col2 FROM #tempTable;
The only difference here is that a non-temporary Stored Procedure does not execute in the context of the current database, but instead, like any other non-temporary Stored Procedure, runs within the context of the database where it exists, which in this case is tempdb. So the table that is being selected from (i.e. table2) needs to be fully-qualified. This means that if the proc needs to run in multiple databases and reference objects local to each of them, then this approach is probably not an option.
I have an account creation process, and basically when the user signs up I have to make entries in multiple tables, namely User, Profile, and Addresses. There will be 1 entry in the User table, 1 entry in Profile, and 2-3 entries in the Address table, so at most there will be 5 entries. My question is: should I pass the data as XML to my stored procedure and parse it there, or should I create a transaction object in my C# code, keep the connection open, and insert the addresses one by one in a loop?
How do you approach this scenario? Can making multiple calls degrade performance even though the connection is open?
No offence, but you're overthinking this.
Gather your information and, when you have it all together, create a transaction and insert the new rows one at a time. There's no performance hit here, as the transaction will be short-lived.
A problem would arise if you created the transaction on the connection, inserted the user row, then waited for the user to enter more profile information, inserted that, then waited for them to add address information, and then inserted that. DO NOT DO THIS: it is a needlessly long-running transaction and will create problems.
However, your scenario (where you have all the data up front) is a correct use of a transaction: it ensures your data integrity, will not put any strain on your database, and will not, on its own, create deadlocks.
Hope this helps.
P.S. The drawback of the XML approach is the added complexity: your code needs to know the schema of the XML, and your stored procedure needs to know it too. The stored procedure also has the added complexity of parsing the XML before inserting the rows. I really don't see the advantage of the extra complexity for what is a simple, short-running transaction.
If you want to insert records into multiple tables, then using an XML parameter is a complex method: creating the XML in .NET and extracting records from it for three different tables is complex in SQL Server.
Executing queries within a transaction is an easy approach, but performance degrades a little because of the switching between .NET code and SQL Server.
The best approach is to use table-valued parameters in the stored procedure: create three DataTables in .NET code and pass them to the stored procedure.
--Create Type TargetUDT1,TargetUDT2 and TargetUDT3 for each type of table with all fields which needs to insert
CREATE TYPE [TargetUDT1] AS TABLE
(
[FirstName] [varchar](100)NOT NULL,
[LastName] [varchar](100)NOT NULL,
[Email] [varchar](200) NOT NULL
)
--Now write down the sp in following manner.
CREATE PROCEDURE AddToTarget(
@TargetUDT1 TargetUDT1 READONLY,
@TargetUDT2 TargetUDT2 READONLY,
@TargetUDT3 TargetUDT3 READONLY)
AS
BEGIN
INSERT INTO [Target1]
SELECT * FROM @TargetUDT1
INSERT INTO [Target2]
SELECT * FROM @TargetUDT2
INSERT INTO [Target3]
SELECT * FROM @TargetUDT3
END
In .NET, create three DataTables, fill in the values, and call the stored procedure normally.
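On the .NET side this is one DataTable per user-defined table type, passed as a structured parameter. A sketch for the first parameter (the column values and connectionString are made up; @TargetUDT2 and @TargetUDT3 would be added the same way, and omitted TVP parameters default to an empty table):
using System.Data;
using System.Data.SqlClient;

// Build a DataTable whose columns match the TargetUDT1 type.
var target1 = new DataTable();
target1.Columns.Add("FirstName", typeof(string));
target1.Columns.Add("LastName", typeof(string));
target1.Columns.Add("Email", typeof(string));
target1.Rows.Add("John", "Doe", "john.doe@example.com");

using (var conn = new SqlConnection(connectionString)) // placeholder connection string
using (var cmd = new SqlCommand("AddToTarget", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // SqlDbType.Structured marks the parameter as a table-valued parameter.
    var p = cmd.Parameters.AddWithValue("@TargetUDT1", target1);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.TargetUDT1";

    conn.Open();
    cmd.ExecuteNonQuery();
}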
For example, assuming your XML is as below:
<StoredProcedure>
<User>
<UserName></UserName>
</User>
<Profile>
<FirstName></FirstName>
</Profile>
<Address>
<Data></Data>
<Data></Data>
<Data></Data>
</Address>
</StoredProcedure>
This would be your stored procedure:
INSERT INTO Users (UserName) SELECT UserName FROM OPENXML(@idoc,'StoredProcedure/User',2)
WITH ( UserName NVARCHAR(256))
where the following provides the @idoc variable's value and @doc is the input parameter of the stored procedure:
DECLARE @idoc INT
--Create an internal representation of the XML document.
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc
Using a similar technique, you would run the three inserts in a single stored procedure. Note that it is a single call to the database, and the multiple address elements will be inserted in that one call to the stored procedure.
Update
Just so as not to mislead you, here is a complete stored procedure so you understand what you are going to do:
USE [DBNAME]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE PROCEDURE [dbo].[procedure_name]
@doc [ntext]
WITH EXECUTE AS CALLER
AS
DECLARE @idoc INT
DECLARE @ErrorProfile INT
SET @ErrorProfile = 0
--Create an internal representation of the XML document.
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc
BEGIN TRANSACTION
INSERT INTO Users (UserName)
SELECT UserName FROM OPENXML(@idoc,'StoredProcedure/User',2)
WITH ( UserName NVARCHAR(256) )
-- Insert Address
-- Insert Profile
SELECT @ErrorProfile = @@Error
IF @ErrorProfile = 0
BEGIN
COMMIT TRAN
END
ELSE
BEGIN
ROLLBACK TRAN
END
EXEC sp_xml_removedocument @idoc
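Calling this from C# is then a single round trip, with the whole XML passed as the one @doc parameter (procedure, element and column names follow the example above; connectionString is a placeholder):
using System.Data;
using System.Data.SqlClient;

string doc = @"
<StoredProcedure>
  <User><UserName>jdoe</UserName></User>
  <Profile><FirstName>John</FirstName></Profile>
  <Address><Data>Home address</Data><Data>Work address</Data></Address>
</StoredProcedure>";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.procedure_name", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@doc", SqlDbType.NText).Value = doc; // matches the ntext parameter

    conn.Open();
    cmd.ExecuteNonQuery(); // one call inserts the user, profile and every address
}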
Have you actually noticed any performance problems? What you are trying to do is very straightforward, and many applications do this day in, day out. Be careful not to be drawn into premature optimization.
Database inserts should be very cheap. As you have suggested, create a new transaction scope, open your connection, run your inserts, complete the transaction, and finally dispose of everything.
using (var tran = new TransactionScope())
using (var conn = new SqlConnection(YourConnectionString))
using (var insertCommand1 = conn.CreateCommand())
using (var insertCommand2 = conn.CreateCommand())
{
    conn.Open(); // enlists the connection in the ambient transaction

    insertCommand1.CommandText = ...; // SQL to insert
    insertCommand2.CommandText = ...; // SQL to insert
    insertCommand1.ExecuteNonQuery();
    insertCommand2.ExecuteNonQuery();

    tran.Complete();
}
Bundling all your logic into a stored procedure and using XML adds complications: you need additional logic in your database, you have to transform your entities into an XML blob, and your code becomes harder to unit test.
There are a number of things you can do to make the code easier to work with. The first step would be to push your database logic into a reusable database layer and use the concept of a repository to read and write your objects to and from the database.
You could of course make your life a lot easier and have a look at any of the ORM (Object-relational mapping) libraries that are available. They take away the pain of talking to the database and handle that for you.
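For illustration only, with something like Entity Framework the whole unit of work becomes a single SaveChanges call, which EF wraps in one database transaction for you; the context and entity types below are hypothetical:
// Hypothetical EF context and entity classes; SaveChanges commits all pending
// inserts inside a single database transaction.
using (var db = new AccountContext())
{
    var user = new User { UserName = "jdoe" };
    var profile = new Profile { FirstName = "John", User = user };

    db.Users.Add(user);
    db.Profiles.Add(profile);
    db.Addresses.Add(new Address { Data = "Home address", User = user });
    db.Addresses.Add(new Address { Data = "Work address", User = user });

    db.SaveChanges();
}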