ChangeConflictException when updating rows with LINQ-to-SQL - c#

I have a form which contains a data grid and a save button.
When the user clicks the save button I check for new rows by checking a specific column. If its value is 0 I insert the row into the database, and if the column value is not 0 then I update that row.
I can insert correctly but when updating an exception occurs:
ChangeConflictException was unhandled, 1 of 6 updates failed.
I have checked the update statement and I'm sure it's correct. What is the problem? Can anyone help me?
int id;
for (int i = 0; i < dgvInstructores.Rows.Count - 1; i++)
{
    id = int.Parse(dgvInstructores.Rows[i].Cells["ID"].Value.ToString());
    if (id == 0)
    {
        dataClass.procInsertInstructores(name, nationalNum, tel1, tel2,
                                         address, email);
        dataClass.SubmitChanges();
    }
    else
    {
        dataClass.procUpdateInstructores(id, name, nationalNum, tel1, tel2,
                                         address, email);
        dataClass.SubmitChanges();
    }
}
I'm using LINQ to query a SQL Server 2005 database, and VS 2008.
The stored procedure for 'procUpdateInstructores' is:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
ALTER proc [dbo].[procUpdateInstructores]
    @ID int,
    @name varchar(255),
    @NationalNum varchar(25),
    @tel1 varchar(15),
    @tel2 varchar(15),
    @address varchar(255),
    @email varchar(255)
as
begin
    BEGIN TRANSACTION
    update dbo.Instructores
    set
        Name = @name, NationalNum = @NationalNum,
        tel1 = @tel1, tel2 = @tel2, address = @address, email = @email
    where ID = @ID
    IF (@@ROWCOUNT > 0) AND (@@ERROR = 0)
    BEGIN
        COMMIT TRANSACTION
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION
    END
end

In my experience (working with .NET forms and MVC with LINQ-to-SQL), I have found several times that if the form collection contains the ID parameter of the data object then the update invariably fails.
Even if the ID is the actual ID, it is still flagged as 'propertyChanged' when you bind it, update it, or assign it to another variable.
As such, can we see the code for your stored procs? More specifically, the update proc?
The code you have posted above is fine; the exception should be coming from your stored proc.
However, if you are confident that the proc is correct, then perhaps look at the HTML code being used to generate the table. Some bugs might be present with respect to 0/1 on ID columns, etc.

In the absence of further information (what your SQL or C# update code looks like...) my first recommendation would be to do SubmitChanges once, outside the for loop, rather than submitting changes once per row.

It appears in this case that you are using a DataGridView (thus WinForms). I further guess that your dataClass is persisted on the form so that you loaded and bound the DataGridView from the same dataClass that you are trying to save the changes to in this example.
Assuming you are databinding the DataGridView to entities returned via LINQ to SQL, when you edit the values, you are marking the entity in question that it is needing to be updated when the next SubmitChanges is called.
In your update, you are calling dataClass.procUpdateInstructores(id, name, nationalNum, tel1, tel2, address, email); which immediately issues the stored procedure against the database, setting the new values as they have been edited. The next line is the kicker. Since your data context still thinks the object is dirty, SubmitChanges tries to send another update statement to your database with the original values that it fetched as part of the Where clause (to check for concurrency). Since the stored proc already updated those values, the Where clause can't find a matching row and thus returns a concurrency exception.
Your best bet in this case is to modify the LINQ to SQL model to use your stored procedures for updates and inserts rather than the runtime generated versions. Then in your parsing code, simply call SubmitChanges without calling procUpdateInstructores manually. If your dbml is configured correctly, it will call the stored proc rather than the dynamic update statement.
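A rough sketch of what the save handler could shrink to once the DBML maps the Instructores entity's insert/update behaviors to those stored procedures (the handler name and the data binding here are assumptions for illustration, not the poster's actual code):
// Sketch only: assumes the DataGridView is data-bound to Instructores
// entities loaded from this same dataClass, and that the DBML maps the
// entity's Insert/Update behaviors to procInsertInstructores /
// procUpdateInstructores.
private void btnSave_Click(object sender, EventArgs e)
{
    // Editing cells in the bound grid has already marked those entities as
    // dirty, so a single SubmitChanges issues the mapped stored procedures
    // for every new or changed row; no manual loop is needed.
    dataClass.SubmitChanges();
}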
Also, FWIW, your stored proc doesn't seem to be doing anything more than the generated SQL would. Actually, LINQ to SQL would give you more functionality since you aren't doing any concurrency checking in your stored proc anyway. If you are required to use stored procs by your DBA or some security policy, you can retain them, but you may want to consider bypassing them if this is all your stored procs are doing and rely on the runtime generated SQL for updates.

Related

Explain Code First CRUD auto-generated SQL for Identity column

Code First auto-generates an insert procedure as below for a table that has ProductID as the primary key (identity column).
CREATE PROCEDURE [dbo].[InsertProducts]
    @ProductName [nvarchar](max),
    @Date [datetime]
AS
BEGIN
    INSERT dbo.ProductsTable([ProductName], [Date])
    VALUES (@ProductName, @Date)
    -- identity stuff starts here
    DECLARE @ProductID int
    SELECT @ProductID = [ProductID]
    FROM dbo.FIT_StorageLocations
    WHERE @@ROWCOUNT > 0 AND [ProductID] = scope_identity()
    SELECT t0.[ProductID]
    FROM dbo.ProductsTable AS t0
    WHERE @@ROWCOUNT > 0 AND t0.[ProductID] = @ProductID
END
GO
Could you please explain the code that handles the identity column? Also, if an insert procedure is to be manually written from scratch, would it be handled differently?
If, for example, I remove this auto-generated code, I encounter one of the following errors:
Procedure .... expects parameter '@ProductID', which was not supplied
Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=472540 for information on understanding and handling optimistic concurrency exceptions.
In the app, this is how I call the procedure which works fine until I try to mess with the code first auto generated SQL:
using (var db = new AppContext())
{
    var record = new ProductObj()
    {
        ProductName = this.ProductName,
        Date = DateTime.UtcNow
    };
    db.ProductDbSet.Add(record);
    db.SaveChanges();
}
I guess there are two things to be explained here.
Why a SELECT statement when I insert stuff?
Let's first see what a regular insert by Entity Framework looks like. By "regular" I mean an insert without mapping CUD actions to stored procedures. The normal pattern is:
INSERT [dbo].[Product]([Name], ...)
VALUES (@0, ...)
SELECT [Id]
FROM [dbo].[Product]
WHERE @@ROWCOUNT > 0 AND [Id] = scope_identity()
So the INSERT is followed by a SELECT. This is because EF needs to know the identity value that the database assigns to the new Product to assign it to the entity object's Product.ProductId property and to track the entity. If for some reason you'd decide to do an update immediately after the insert, EF will be able to generate an update statement like UPDATE ... WHERE Id = @0.
When the insert is handled by a stored procedure, the sproc should return the new Id value in a way that looks like the regular insert. It expects to receive a one-column result set of which the column is named after the identity column. It should contain one row, the new identity value.
So that's why there is a SELECT statement in there, and why EF complains if you remove it. But, you might ask, does EF really need 7 lines of code to get an assigned identity value?
Why so much code?
Honestly, I have to speculate a bit here, because it isn't documented as far as I can find. But let's look at a minimal working version:
INSERT [dbo].[Products]([Name])
VALUES (@Name)
SELECT scope_identity() AS ProductId;
This does the job. It's even the standard example of many tutorials, including official ones, on mapping CUD actions to stored procedures.
But a database can be stuffed with triggers, constraints, defaults, etc. It's hard to predict their influence on the returned scope_identity() under the wide range of circumstances EF may encounter. So EF wants to guarantee that the returned value really belongs to the newly inserted record, and that a record has actually been inserted in the first place. That's why it adds the SELECT from the Product table, including the @@ROWCOUNT.
To implement these safeguards, a minimal version would be:
INSERT [dbo].[Products]([Name])
VALUES (@Name)
SELECT t0.[ProductId]
FROM [dbo].[Products] AS t0
WHERE @@ROWCOUNT > 0 AND t0.[ProductId] = scope_identity()
Same as in the regular insert.
That's as far as I can follow EF. It puzzles me a bit that this single SELECT apparently is enough for a regular INSERT but not for a stored procedure. I can't explain why there are two SELECTs in the generated code.

Insert multiple sql rows via stored proc

I have looked at some related topics but my question isn't quite answered:
C# - Inserting multiple rows using a stored procedure
Insert Update stored proc on SQL Server
Efficient Multiple SQL insertion
I have the following kind of setup when running my stored procedure in the code behind for my web application. The thing is I am now faced with the possibility of inserting multiple products and I would like to do it all in one ExecuteNonQuery rather than do a foreach loop and run it n number of times.
I am not sure how to do this, or if it can be, with my current setup.
The code should be somewhat self explanatory but if clarification is needed let me know. Thanks.
SqlDatabase database = new SqlDatabase(transMangr.ConnectionString);
DbCommand commandWrapper = StoredProcedureProvider.GetCommandWrapper(database, "proc_name", useStoredProc);
database.AddInParameter(commandWrapper, "@ProductID", DbType.Int32, entity._productID);
database.AddInParameter(commandWrapper, "@ProductDesc", DbType.String, entity._desc);
...more parameters...
Utility.ExecuteNonQuery(transMangr, commandWrapper);
Proc
ALTER PROCEDURE [dbo].[Products_Insert]
    -- Add the parameters for the stored procedure here
    @ProductID int,
    @Link varchar(max),
    @ProductDesc varchar(max),
    @Date DateTime
AS BEGIN
    SET NOCOUNT ON;
    INSERT INTO [dbo].[Prodcuts]
    (
        [CategoryID],
        [Link],
        [Desc],
        [Date]
    )
    VALUES
    (
        @ProductID,
        @Link,
        @ProductDesc,
        @Date
    )
END
You should be fine running your stored procedure in a loop. Just make sure that you commit rarely, not after every insert.
For alternatives, you have already found the discussion about loading data.
Personally, I like SQL bulk insert of the form insert into myTable (select *, literalValue from someOtherTable);
But that will probably not do in your case.
You could pass all your data as a table-valued parameter - MSDN has a pretty good write-up about it here
Something along the lines of the following should work
CREATE TABLE dbo.tSegments
(
SegmentID BIGINT NOT NULL CONSTRAINT pkSegment PRIMARY KEY CLUSTERED,
SegCount BIGINT NOT NULL
);
CREATE TYPE dbo.SegmentTableType AS TABLE
(
SegmentID BIGINT NOT NULL
);
CREATE PROCEDURE dbo.sp_addSegments
    @Segments dbo.SegmentTableType READONLY
AS
BEGIN
    MERGE INTO dbo.tSegments AS tSeg
    USING @Segments AS S
        ON tSeg.SegmentID = S.SegmentID
    WHEN MATCHED THEN UPDATE SET tSeg.SegCount = tSeg.SegCount + 1
    WHEN NOT MATCHED THEN INSERT VALUES (S.SegmentID, 1);
END
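For completeness, the C# side of calling a proc with a table-valued parameter looks roughly like this with plain ADO.NET (System.Data / System.Data.SqlClient) rather than the SqlDatabase wrapper used above; 'connectionString' and the sample values are placeholders, and the names follow the example proc and type above:
// Sketch: pass a DataTable as the table-valued parameter @Segments.
var segments = new DataTable();
segments.Columns.Add("SegmentID", typeof(long));
segments.Rows.Add(1L);                     // one row per segment to merge
segments.Rows.Add(2L);

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.sp_addSegments", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    var p = cmd.Parameters.AddWithValue("@Segments", segments);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.SegmentTableType";   // the table type created above

    conn.Open();
    cmd.ExecuteNonQuery();                 // one round trip for all rows
}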
Define the commandWrapper and parameters for the command outside of the loop, and then within the loop you just assign the parameter values and execute the proc.
SqlDatabase database = new SqlDatabase(transMangr.ConnectionString);
DbCommand commandWrapper = StoredProcedureProvider.GetCommandWrapper(database, "proc_name", useStoredProc);
database.AddInParameter(commandWrapper, "@ProductID", DbType.Int32);
database.AddInParameter(commandWrapper, "@ProductDesc", DbType.String);
...more parameters...
foreach (var entity in entities)
{
    database.SetParameterValue(commandWrapper, "@ProductID", entity._productID);
    database.SetParameterValue(commandWrapper, "@ProductDesc", entity._desc);
    //..more parameters...
    Utility.ExecuteNonQuery(transMangr, commandWrapper);
}
Not ideal from a purist point of view, but sometimes one is limited by frameworks and libraries: you are forced to call stored procedures in a certain way, bind parameters in a certain way, and connections are managed by pools as part of your framework.
In such circumstances, a method we have found to work is simply to write your stored procedure with a lot of parameters, usually a name followed by a number, e.g. @ProductId1, @ProductDesc1, @ProductId2, @ProductDesc2, up to a number you decide on, possibly 32.
You can use some form of scripting language to produce the lines for this.
You can get the stored procedure to insert all the values first into a table parameter that allows nulls, then do bulk inserts/merges on this data in a way similar to Johnv2020's answer. You might remove the null rows first.
It will usually be more efficient than doing it one at a time (partly because of the database operations themselves, and partly because of your framework's overheads in getting the connection to call the procedure, etc.)
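A rough sketch of that numbered-parameter pattern, trimmed to two 'slots' for brevity. The proc name, table name and column list are placeholders, a generated version would carry the parameter list up to 32 or however many slots you settle on, and the row constructor syntax needs SQL Server 2008 or later (on 2005, use one INSERT per slot):
-- Sketch: numbered parameters gathered into a table variable; unused
-- (all-NULL) slots are dropped before the single INSERT.
CREATE PROCEDURE dbo.Products_InsertBatch
    @ProductID1 int = NULL, @ProductDesc1 varchar(max) = NULL,
    @ProductID2 int = NULL, @ProductDesc2 varchar(max) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @rows TABLE (ProductID int, ProductDesc varchar(max));

    INSERT INTO @rows (ProductID, ProductDesc)
    VALUES (@ProductID1, @ProductDesc1),
           (@ProductID2, @ProductDesc2);

    INSERT INTO dbo.Products (ProductID, [Desc])
    SELECT ProductID, ProductDesc
    FROM @rows
    WHERE ProductID IS NOT NULL;
END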

Cannot alter table with LINQ TO SQL and stored procedure

I have to alter the table Statistic: when I add a new metric to the table Metric, I add a column to the table Statistic.
I used a stored procedure that alters the table Statistic, so here is the code:
CREATE PROCEDURE dbo.addnewmetricInstat
(
    @MetricName varchar(254),
    @TypeMetric varchar(254)
)
AS
IF (@TypeMetric = 'int')
Begin
    alter table Statistic
    add @MetricName int null
end
ELSE if (@TypeMetric = 'string')
begin
    alter table Statistic
    add @MetricName varchar(254) null
end
Then I successfully called the stored procedure, but the column is not added. The code I used in C# for calling this stored procedure is:
using (DataClassesDataContext db = new DataClassesDataContext("Data Source=EMEA-TUN-WS0367\\SQLEXPRESS;Initial Catalog=Perfgas;Integrated Security=True"))
{
db.addnewmetricInstat(metric.MetricName, metric.Type);
db.SubmitChanges();
}
First, you should not be trying to call the SP from your application until you have tested it in SSMS. That way you know whether the problem is your call or the SP. It will save you much debugging time if you do this.
Your proc is the problem. You will need to use dynamic SQL for this. Right now you are trying to add a column called MetricName because you didn't use the variable. This would work once, but of course the second time you run it, you will get an error because the column already exists. However, you can't just throw a variable into an ALTER TABLE statement; you must use dynamic SQL.
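A sketch of the dynamic SQL version, keeping just the two types the original proc handles (QUOTENAME guards against injection through @MetricName; adjust types and lengths as needed):
-- Sketch: the column name cannot be a variable in static SQL, so the
-- ALTER TABLE statement is built as a string and executed dynamically.
ALTER PROCEDURE dbo.addnewmetricInstat
    @MetricName varchar(254),
    @TypeMetric varchar(254)
AS
BEGIN
    DECLARE @sql nvarchar(max);

    IF @TypeMetric = 'int'
        SET @sql = N'ALTER TABLE dbo.Statistic ADD '
                 + QUOTENAME(@MetricName) + N' int NULL';
    ELSE IF @TypeMetric = 'string'
        SET @sql = N'ALTER TABLE dbo.Statistic ADD '
                 + QUOTENAME(@MetricName) + N' varchar(254) NULL';

    IF @sql IS NOT NULL
        EXEC sp_executesql @sql;
END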

Should you make multiple insert calls or pass XML?

I have an account creation process, and basically when the user signs up I have to make entries in multiple tables, namely User, Profile and Addresses. There will be 1 entry in the User table, 1 entry in Profile and 2-3 entries in the Address table. So, at most there will be 5 entries. My question is: should I pass XML of this to my stored procedure and parse it in there, or should I create a transaction object in my C# code, keep the connection open and insert the addresses one by one in a loop?
How do you approach this scenario? Can making multiple calls degrade the performance even though the connection is open?
No offence, but you're overthinking this.
Gather your information, when you have it all together, create a transaction and insert the new rows one at a time. There's no performance hit here, as the transaction will be short lived.
A problem would arise if you created the transaction on the connection, inserted the user row, then waited for the user to enter more profile information, inserted that, then waited for them to add address information, then inserted that. DO NOT DO THIS; it is a needlessly long-running transaction and will create problems.
However, your scenario (where you have all the data) is a correct use of a transaction: it ensures your data integrity, will not put any strain on your database, and will not, on its own, create deadlocks.
Hope this helps.
P.S. The drawback of the XML approach is the added complexity: your code needs to know the schema of the XML, and your stored procedure needs to know the XML schema too. The stored procedure has the added complexity of parsing the XML and then inserting the rows. I really don't see the advantage of the extra complexity for what is a simple, short-running transaction.
If you want to insert records into multiple tables, then using an XML parameter is a complex method. Creating the XML in .NET and extracting records from it for three different tables is complex in SQL Server.
Executing the queries within a transaction is an easy approach, but some performance is lost switching between .NET code and SQL Server for each statement.
The best approach is to use table-valued parameters in the stored procedure. Create three DataTables in .NET code and pass them to the stored procedure.
--Create types TargetUDT1, TargetUDT2 and TargetUDT3, one per target table, with all the fields that need to be inserted
CREATE TYPE [TargetUDT1] AS TABLE
(
    [FirstName] [varchar](100) NOT NULL,
    [LastName] [varchar](100) NOT NULL,
    [Email] [varchar](200) NOT NULL
)
--Now write the stored procedure in the following manner.
CREATE PROCEDURE AddToTarget(
    @TargetUDT1 TargetUDT1 READONLY,
    @TargetUDT2 TargetUDT2 READONLY,
    @TargetUDT3 TargetUDT3 READONLY)
AS
BEGIN
    INSERT INTO [Target1]
    SELECT * FROM @TargetUDT1
    INSERT INTO [Target2]
    SELECT * FROM @TargetUDT2
    INSERT INTO [Target3]
    SELECT * FROM @TargetUDT3
END
In .NET, create the three DataTables, fill in the values, and call the SP normally.
For example, assuming your XML is as below
<StoredProcedure>
<User>
<UserName></UserName>
</User>
<Profile>
<FirstName></FirstName>
</Profile>
<Address>
<Data></Data>
<Data></Data>
<Data></Data>
</Address>
</StoredProcedure>
this would be your stored procedure
INSERT INTO Users (UserName)
SELECT UserName FROM OPENXML(@idoc, 'StoredProcedure/User', 2)
WITH ( UserName NVARCHAR(256) )
where @idoc holds the handle of the prepared XML document and @doc is the XML input to the stored procedure:
DECLARE @idoc INT
--Create an internal representation of the XML document.
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc
Using a similar technique you would run the three inserts in a single stored procedure. Note that it is a single call to the database, and multiple address elements will be inserted in that single call to the stored procedure.
Update
Update: so as not to mislead you, here is a complete stored procedure so you can see what you are going to do
USE [DBNAME]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE PROCEDURE [dbo].[procedure_name]
    @doc [ntext]
WITH EXECUTE AS CALLER
AS
    DECLARE @idoc INT
    DECLARE @ErrorProfile INT
    SET @ErrorProfile = 0
    --Create an internal representation of the XML document.
    EXEC sp_xml_preparedocument @idoc OUTPUT, @doc

    BEGIN TRANSACTION

    INSERT INTO Users (UserName)
    SELECT UserName FROM OPENXML(@idoc, 'StoredProcedure/User', 2)
    WITH ( UserName NVARCHAR(256) )
    -- Insert Address
    -- Insert Profile

    SELECT @ErrorProfile = @@ERROR
    IF @ErrorProfile = 0
    BEGIN
        COMMIT TRAN
    END
    ELSE
    BEGIN
        ROLLBACK TRAN
    END

    EXEC sp_xml_removedocument @idoc
Have you noticed any performance problems? What you are trying to do is very straightforward, and many applications do it day in, day out. Be careful not to be drawn into any premature optimization.
Database inserts should be very cheap. As you have suggested, create a new transaction scope, open your connection, run your inserts, commit the transaction and finally dispose of everything.
using (var tran = new TransactionScope())
using (var conn = new SqlConnection(YourConnectionString))
using (var insertCommand1 = conn.CreateCommand())
using (var insertCommand2 = conn.CreateCommand())
{
    insertCommand1.CommandText = "..."; // SQL to insert
    insertCommand2.CommandText = "..."; // SQL to insert

    conn.Open();
    insertCommand1.ExecuteNonQuery();
    insertCommand2.ExecuteNonQuery();

    tran.Complete();
}
Bundling all your logic into a stored procedure and using XML gives you added complications: you will need additional logic in your database, you now have to transform your entities into an XML blob, and your code has become harder to unit test.
There are a number of things you can do to make the code easier to use. The first step would be to push your database logic into a reusable database layer and use the concept of a repository to read and write your objects from the database.
You could of course make your life a lot easier and have a look at any of the ORM (Object-relational mapping) libraries that are available. They take away the pain of talking to the database and handle that for you.

C# database update

I'm stuck on a little problem concerning the database.
Once a month I get an XML file with customer information (name, address, city, etc.). My primary key is a customer number which is provided in the XML file.
I have no trouble inserting the information into the database:
var cmd = new SqlCommand("insert into [customer_info]
(customer_nr, firstname, lastname, address_1, address_2, address_3.......)");
//some code
cmd.ExecuteNonQuery();
Now, I would like to update my table or just fill it with new information. How can I achieve this?
I've tried using TableAdapter but it does not work.
And I'm only permitted to add one XML because I can only have one customer_nr as primary key.
So basically how do I update or fill my table with new information?
Thanks.
One way would be to bulk insert the data into a new staging table in the database (you could use SqlBulkCopy for this for optimal insert speed). Once it's in there, you could then index the customer_nr field and then run 2 statements:
-- UPDATE existing customers
UPDATE ci
SET ci.firstname = s.firstname,
ci.lastname = s.lastname,
... etc
FROM StagingTable s
INNER JOIN Customer_Info ci ON s.customer_nr = ci.customer_nr
-- INSERT new customers
INSERT Customer_Info (customer_nr, firstname, lastname, ....)
SELECT s.customer_nr, s.firstname, s.lastname, ....
FROM StagingTable s
LEFT JOIN Customer_Info ci ON s.customer_nr = ci.customer_nr
WHERE ci.customer_nr IS NULL
Finally, drop your staging table.
Alternatively, instead of the 2 statements, you could just use the MERGE statement if you are using SQL Server 2008 or later, which allows you to do INSERTs and UPDATEs via a single statement.
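For reference, a sketch of the MERGE form against the same staging table (column list abbreviated to match the example above; requires SQL Server 2008 or later):
-- Sketch: update matching customers and insert the new ones in one statement.
MERGE Customer_Info AS ci
USING StagingTable AS s
    ON ci.customer_nr = s.customer_nr
WHEN MATCHED THEN
    UPDATE SET ci.firstname = s.firstname,
               ci.lastname  = s.lastname
WHEN NOT MATCHED THEN
    INSERT (customer_nr, firstname, lastname)
    VALUES (s.customer_nr, s.firstname, s.lastname);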
If I understand your question correctly - if the customer already exists you want to update their information, and if they don't already exist you want to insert a new row.
I have a lot of problems with hard-coded SQL commands in your code, so I would firstly be very tempted to refactor what you have done. However, to achieve what you want, you will need to execute a SELECT on the primary key, if it returns any results you should execute an UPDATE else you should execute an INSERT.
It would be best to do this in something like a stored procedure: you can pass the information to the stored procedure and it can then decide whether to UPDATE or INSERT. This would also reduce the overhead of making several calls from your code to the database (a stored procedure would be much quicker).
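A minimal sketch of such a procedure for the customer_info table from the question (the proc name, abbreviated column list and varchar sizes are assumptions):
-- Sketch: the proc decides between UPDATE and INSERT, so the application
-- makes a single call per customer.
CREATE PROCEDURE dbo.Customer_Upsert
    @customer_nr int,
    @firstname   varchar(100),
    @lastname    varchar(100)
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM customer_info WHERE customer_nr = @customer_nr)
        UPDATE customer_info
        SET firstname = @firstname,
            lastname  = @lastname
        WHERE customer_nr = @customer_nr;
    ELSE
        INSERT INTO customer_info (customer_nr, firstname, lastname)
        VALUES (@customer_nr, @firstname, @lastname);
END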
AdaTheDev has indeed given a good suggestion.
But in case you must insert/update from .NET code, then you can:
Create a stored procedure that will handle the insert/update, i.e. instead of using a direct insert query as the command text, you make a call to the stored proc. The SP will check whether the row exists and then update (or insert).
Use a TableAdapter - but this would be tedious. First you have to set up both the insert and update commands. Then you have to query the database to get the existing customer numbers, and then update the corresponding rows in the DataTable, setting their RowState to Modified. I would rather not go this way.
