I have a SQL Server database with several tables. One of them has an "ID" column (primary key), a "Name" column and other columns that I won't mention here for the sake of simplicity. The "ID" column is auto-increment and unique, and when I add a row using SQL Server Management Studio, the "ID" column increments properly. The database is old and the current auto-increment value is at 1244 or so.
Now, I have created a C# project that uses a TYPED DataSet to work with data from the database. My DataSet starts empty, is filled using table adapters, and new rows are added by my program, but there's a problem I have never stumbled upon so far: when my program adds a new row to the DataSet and then updates the database (using the table adapter), the "ID" column in my database gets the correct auto-incremented number (1245, 1246 etc.), BUT the "ID" column in my DataSet gets "-1", "-2" instead! What's the problem? How can I tell my DataSet to use the auto-increment seed specified by the database instead of generating its own NEGATIVE (???) primary key numbers?
EDIT:
I get and compare rows using this:
dsNames.tbNamesRow[] TMP = basedataset.tbNames.Select() as dsNames.tbNamesRow[];
foreach (dsNames.tbNamesRow row in TMP)
{
    string Name = row.Name;
    bool Found = Name == Search;
    if (CompareDelegate != null)
        Found = CompareDelegate(Name, Search);
    if (Found)
    {
        int ID = row.ID;
        break;
    }
}
My original comment was kind of incorrect; I assumed you were retrieving the value from the database and THAT DataSet had incorrect values in it.
The way ADO.NET prevents collisions with its disconnected DataSet is by assigning negative IDENTITY column values, because, being disconnected, it cannot know a positive number that is NOT already taken. These (negative) values are unique within that transaction.
When you try to commit your changes, the ADO.NET engine determines the proper SQL to produce the correct result.
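If you want to see or control that placeholder behaviour explicitly, these are the DataColumn properties involved. This is only a minimal sketch on a plain (untyped) table with an int "ID" column; with a typed DataSet the same properties are set in the DataSet designer, and the real values only flow back if the adapter's InsertCommand refreshes the inserted row (e.g. with a SELECT ... SCOPE_IDENTITY()):
using System.Data;

// Hand out obviously-temporary negative IDs in memory; SQL Server assigns the
// real IDENTITY values when the rows are inserted.
DataTable myTable = new DataTable("tbNames");
DataColumn idColumn = myTable.Columns.Add("ID", typeof(int));
idColumn.AutoIncrement = true;
idColumn.AutoIncrementSeed = -1;   // first placeholder value
idColumn.AutoIncrementStep = -1;   // count downwards: -1, -2, -3, ...
idColumn.ReadOnly = true;
myTable.PrimaryKey = new[] { idColumn };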
Related
I know I can do a bulk insert into my table with an identity column by not specifying the SqlBulkCopyOptions.KeepIdentity as mentioned here.
What I would like to be able to do is get the identity values that the server generates and put them in my datatable, or even a list. I saw this post, but I want my code to be general, and I can't have a version column in all my tables. Any suggestions are much appreciated. Here is my code:
public void BulkInsert(DataTable dataTable, string DestinationTbl, int batchSize)
{
    // Get the DataTable
    DataTable dtInsertRows = dataTable;

    using (SqlBulkCopy sbc = new SqlBulkCopy(sConnectStr))
    {
        sbc.DestinationTableName = DestinationTbl;

        // Number of records to be processed in one go
        sbc.BatchSize = batchSize;

        // Add your column mappings here
        foreach (DataColumn dCol in dtInsertRows.Columns)
        {
            sbc.ColumnMappings.Add(dCol.ColumnName, dCol.ColumnName);
        }

        // Finally write to server
        sbc.WriteToServer(dtInsertRows);
    }
}
AFAIK, you can't.
The only way (that I know of) to get the value(s) of the identity field is by using either SCOPE_IDENTITY() when you insert row-by-row, or the OUTPUT approach when inserting an entire set.
The 'simplest' approach probably would be to SqlBulkCopy the records into the table and then fetch them back again later on. The problem is that it could be hard to properly (and quickly) fetch those rows from the server again (e.g. it would be rather ugly (and slow) to have a WHERE clause with IN (guid1, guid2, .., guid999998, guid999999) =)
I'm assuming performance is an issue here, as you're already using SqlBulkCopy, so I'd suggest going for the OUTPUT approach, in which case you'll first need a staging table to SqlBulkCopy your records into. Said table should include some kind of batch identifier (a GUID?) so as to allow multiple threads to run side by side. You'll need a stored procedure to INSERT <table> OUTPUT inserted.* SELECT the data from the staging table into the actual destination table and also clean up the staging table again. The returned recordset from said procedure would then match 1:1 with the original dataset responsible for filling the staging table, but of course you should NOT rely on its order. In other words: your next challenge will be matching the returned identity fields back to the original records in your application.
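A rough sketch of the C# side of that, purely for illustration: the staging table dbo.TargetStaging (same columns as the destination plus a BatchId uniqueidentifier), the destination table dbo.Target with an IDENTITY column Id, and the stored procedure dbo.MoveStagingBatch are all assumed names you would have to create yourself.
// Requires System.Data and System.Data.SqlClient.
// The assumed procedure would contain something along the lines of:
//   INSERT INTO dbo.Target (ColA, ColB)
//   OUTPUT inserted.*
//   SELECT ColA, ColB FROM dbo.TargetStaging WHERE BatchId = @BatchId;
//   DELETE FROM dbo.TargetStaging WHERE BatchId = @BatchId;
Guid batchId = Guid.NewGuid();

// 1) Stamp every row with batchId and SqlBulkCopy them into dbo.TargetStaging here...

// 2) Move them into the real table and read the generated identity values back:
using (SqlConnection con = new SqlConnection(sConnectStr))
using (SqlCommand cmd = new SqlCommand("dbo.MoveStagingBatch", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@BatchId", SqlDbType.UniqueIdentifier).Value = batchId;

    con.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            int generatedId = reader.GetInt32(reader.GetOrdinal("Id"));
            // ...match generatedId back to the originating record using your own key
        }
    }
}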
Thinking things over, I'd say that in all cases -- except the row-by-row & SCOPE_IDENTITY() approach, which is going to be dog-slow -- you'll need to have (or add) a 'key' to your data to link the generated IDs back to the original data =/
You can take an approach similar to the one described above by deroby, but instead of retrieving the rows back via a WHERE IN (guid1, etc.), you match them back up to the in-memory rows you inserted based on their order.
So I would suggest adding a column to the table that ties each row to a SqlBulkCopy transaction, and then doing the following to match the generated IDs back to the in-memory collection of rows you just inserted:
1. Create a new Guid and set this value on all the rows in the bulk copy, mapped to the new column.
2. Run the WriteToServer method of the BulkCopy object.
3. Retrieve all the rows that have that same key.
4. Iterate through this list, which will be in the order they were added; these will be in the same order as the in-memory collection of rows, so you will then know the generated ID for each item.
This will give you better performance than giving each individual row a unique key. So after you bulk insert the data table you could do something like this (in my example I have a list of objects from which I create the data table, and then map the generated IDs back to them):
List<myObject> myCollection = new List<myObject>();
Guid identifierKey = Guid.NewGuid();

// Do your bulk insert where all the rows inserted have the identifierKey
// set on the new column. In this example you would create a data table based
// off the myCollection object.
// Identifier is a column specifically for matching a group of rows to a sql
// bulk copy command.
var myAddedRows = myDbContext.DatastoreRows.AsNoTracking()
    .Where(d => d.Identifier == identifierKey)
    .ToList();

for (int i = 0; i < myAddedRows.Count; i++)
{
    var savedRow = myAddedRows[i];
    var inMemoryRow = myCollection[i];
    int generatedId = savedRow.Id;

    // Now you know the generated ID for the in-memory object, so you can set
    // a property on it to store the value.
    inMemoryRow.GeneratedId = generatedId;
}
I am using a LINQ-to-SQL class. I am inserting a new row using the LINQ method object.InsertOnSubmit().
I need to get the value that SQL Server generates (using IDENTITY) for the table's primary key column.
I need that value at the time the new row is inserted into the table, so that I can set the same value on another column of the same table at insert time only.
I cannot update the row after inserting it, because the table has an UPDATE TRIGGER.
I tried the following:
_db.EmpNews.InsertOnSubmit(_EmpNews);
...
_EmpNews.DisplaySeq = _EmpNews.ID;
...
_db.SubmitChanges();
Where ID is the auto-generated (Identity) column.
The first question really is: why would you need to store the same value in two separate columns in the same table? What do you need this for? Doesn't seem to make a lot of sense to me....
Since the value of the IDENTITY column is only available once the row has actually been inserted, there is no way to get that value and set it to another column before the row has indeed been saved to the database table.
That basically leaves three options to get that value and store it somewhere else:
you can write an AFTER INSERT trigger that just sets the other column to the value that's just been inserted into the IDENTITY column
you could wrap the whole saving process into a stored procedure which you call from your C# code (instead of just saving the object): you would do the INSERT of the row, then get the newly created IDENTITY value and update the row again with that new value. But that would cause an UPDATE to happen - which you seem to say is impossible for you because of an UPDATE trigger (not quite clear on why this should be a problem....)
you can write two lines of C# code to get the IDENTITY value after it's been inserted (and available in the ID property of your object) and then store the object a second time (a short sketch of that follows below). But that, too, would cause an UPDATE to happen - which you seem to say is impossible for you because of an UPDATE trigger (not quite clear on why this should be a problem....)
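For illustration, that third option with the objects from your own snippet would look roughly like this (assuming DisplaySeq is the column that should receive the copy):
_db.EmpNews.InsertOnSubmit(_EmpNews);
_db.SubmitChanges();                   // row is inserted; LINQ-to-SQL fills _EmpNews.ID from the IDENTITY
_EmpNews.DisplaySeq = _EmpNews.ID;     // copy the generated value into the other column
_db.SubmitChanges();                   // second save - this is the UPDATE your trigger would fire on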
So I guess your best option would be an INSERT trigger to do this.
Try something like this:
CREATE TRIGGER trInsertEmpNews
ON dbo.EmpNews AFTER INSERT
AS BEGIN
    UPDATE dbo.EmpNews
    SET DisplaySeq = i.ID
    FROM INSERTED i
    WHERE dbo.EmpNews.ID = i.ID
END
I have a piece of code which copies data from an Excel spreadsheet to an MSSQL table using a DataReader and SqlBulkCopy. It worked fine until I created a primary key on the table, and now it fails. I first delete the contents of the SQL table before filling it again with the data from Excel.
As it is only a small amount of data I am moving, I wondered if there was a better way to do this than using BulkCopy?
Update: below is the relevant code, and the error I receive is:
"The given value of type String from the data source cannot be converted to type float of the specified target column."
using (OleDbConnection connection = new OleDbConnection(excelConnectionString))
{
    connection.Open();

    OleDbCommand cmd = new OleDbCommand(
        "SELECT Name, Date, Amount FROM ExcelNamedRange", connection);

    using (OleDbDataReader dr = cmd.ExecuteReader())
    {
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(sqlConnectionString))
        {
            bulkCopy.DestinationTableName = "SqlTable";
            bulkCopy.WriteToServer(dr);
        }
    }
}
SqlBulkCopy maps the fields automatically. But since you added a primary key, that default mapping is no longer valid.
You will have to set ColumnMappings to tell your SqlBulkCopy object explicitly how to map the fields.
Do this for all your fields, except the primary key (assuming you use an identity on the PK).
For example:
_bulkCopyEngine.ColumnMappings.Add("fieldname_from", "fieldname_to");
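Applied to the code from the question above, that could look something like this; the destination's identity primary key (assumed here to be called "Id") is deliberately left unmapped so SQL Server generates it:
// Map only the columns that exist in the Excel named range.
bulkCopy.ColumnMappings.Add("Name", "Name");
bulkCopy.ColumnMappings.Add("Date", "Date");
bulkCopy.ColumnMappings.Add("Amount", "Amount");
// The identity column "Id" (an assumed name) is not mapped on purpose.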
Creating a primary key suggests you are enforcing a domain constraint (a good thing).
Therefore, your actual problem is not that you need another way to perform the bulk insert, but that you need to find out why you have duplicate keys (the precise reason for enforcing the PK).
BulkCopy should work just fine, so your problem seems to be a duplicate key (what's the error message?). Either you have data there that is wrong, or the primary key you created is too narrow.
What you could also do is push the data into a staging table first (no keys/indexes etc., just a plain table) and then use an UPDATE (or MERGE, on SQL Server 2008) statement to put it into the actual table.
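A rough sketch of that staging-table idea, reusing the connection string and reader from the question; the staging table name SqlTable_Staging and the join on [Name] are assumptions for illustration only:
using (SqlConnection con = new SqlConnection(sqlConnectionString))
{
    con.Open();

    // 1) Bulk copy into a plain staging table (no keys, no indexes).
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(con))
    {
        bulkCopy.DestinationTableName = "SqlTable_Staging";
        bulkCopy.WriteToServer(dr);   // dr = the OleDbDataReader from the question
    }

    // 2) Merge the staging rows into the real table so existing keys get updated
    //    instead of causing duplicate-key failures (SQL Server 2008+).
    string merge =
        "MERGE SqlTable AS target " +
        "USING SqlTable_Staging AS source ON target.[Name] = source.[Name] " +
        "WHEN MATCHED THEN UPDATE SET target.[Date] = source.[Date], target.[Amount] = source.[Amount] " +
        "WHEN NOT MATCHED THEN INSERT ([Name], [Date], [Amount]) VALUES (source.[Name], source.[Date], source.[Amount]);";

    using (SqlCommand cmd = new SqlCommand(merge, con))
    {
        cmd.ExecuteNonQuery();
    }
}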
GJ
OK, that seems to be a different problem altogether: there seems to be a value in your ExcelNamedRange that cannot be cast to one of the columns in SqlTable. Can you see any? Maybe a division-by-0 error, etc.?
Also make sure the columns line up. I'm not sure exactly how SqlBulkCopy maps the columns; I think it just puts the first column from the NamedRange into the first column of SqlTable, etc. So make sure they're in the right order (or see what happens if you change the names).
BulkCopy is the fastest way to insert data into MSSQL from C#.
I have my DataSet filled and I allow a user to add a row to the data table. Once they confirm they want to add the row, I add it to the DataSet and proceed to update the database table.
MyTypedDataSet.MyDataTableRow newRow = dataSetObject.MyDataTable.NewMyDataTableRow();
newRow.Description = "New Row";
dataSetObject.MyDataTable.AddMyDataTableRow( newRow );
I can update the data table by using a SqlConnection and calling SqlDataAdapter.Update(), but the problem is that the ID value from the DataSet, which is still negative, gets carried over into the database.
How do I update my DataTable so that newly added rows have the correctly incremented ID?
You didn't indicate the RDBMS you're using, but given your comments I'm assuming you're using SQL Server, and that you're using the IDENTITY property to increment the key value.
If new rows are not getting the correct next identity value, you can run the following to reset the value.
DBCC CHECKIDENT('table-name', RESEED)
This will reseed the identity value using the max value in the table.
You can view the current information about the identity using the IDENT_CURRENT, IDENT_INCR and IDENT_SEED functions.
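For example, a quick way to inspect those values from C# (a sketch only; "MyTable" and the connection string are assumptions, so substitute your own names):
// Read the current identity value, increment and seed for a table.
// Requires System.Data.SqlClient.
using (SqlConnection con = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "SELECT IDENT_CURRENT('MyTable') AS CurrentValue, " +
    "IDENT_INCR('MyTable') AS Increment, " +
    "IDENT_SEED('MyTable') AS Seed", con))
{
    con.Open();
    using (SqlDataReader r = cmd.ExecuteReader())
    {
        if (r.Read())
        {
            Console.WriteLine("current: {0}, increment: {1}, seed: {2}", r[0], r[1], r[2]);
        }
    }
}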
Do not take the ID data from your client. Rather use a generated ID for your table.
Maybe you could use a stored procedure (CRUD) which captures MAX(col) and puts the next ID in the new row created.
Operations:
Deleting the row currently selected in the DataGridView from the DataSet:
FuDataSet.FuRow row = (FuDataSet.FuRow) ((DataRowView)FuBindingSource.Current).Row;
row.Delete();
To add a new Row I'm doing:
FuDataSet.FuRow row = FuDataSet.Fus.NewFuRow();
row.Someting = "Some initial Content";
row.SomethingElse = "More Initial Content";
...
FuDataSet.Fus.AddFuRow(row);
Saving user changes in current row in Dataset:
FuDataSet.FuRow row = (FuDataSet.FuRow) (((DataRowView) FuBindingSource.Current).Row);
row.Something = someTextBox.Text;
...
Save in Database:
Validate();
FuBindingSource.EndEdit();
FuTableAdapter.Update(FuDataSet.Fus);  // <-- Exception here
I'm using the standard DataGridView, DataSet, TableAdapter, BindingSource scheme that VS puts up automatically after defining the database structure. There is only a single table involved, and SQL Server Compact 3.5 is used.
Now my problem is that I get a Concurrency Exception (DeletedRowInaccessibleException) each time I'm doing this (starting with an empty database):
Create a new row, delete this row, save to database, create a new row, save to database, delete this row, save to database <- Exception
I think there is some synchronization problem between the database and the DataSet.
If I reload the database after each save via FuTableAdapter.Fill(FuDataSet.Fus), the problem is gone. However, I don't think this can be the intended way.
I hope someone can help me out and spot a flaw in the design, or explain to me what may be going wrong.
Thank you!
Does your table have an auto-increment identity column as the primary key? If so, it might not be updating the dataset table with the new value after the insert, so when you come to delete the row, it cannot be found in the database. That could explain why it works once you call the Fill() method.
You will need to somehow return the primary key on the insert so that the dataset table stays in sync with the database. If you are using a stored procedure to do the inserts, the primary key can be returned using an out parameter. I'm not sure what the best way is if you are using a SQL INSERT statement in the command, but you will then have to get the primary key back from the database table and assign it to the dataset table row.
I'm not sure if you are doing this after saving, but calling FuDataSet.AcceptChanges() will help the dataset track new changes after the database has been updated.
What you have listed there is correct. When a new row is created in the dataset table, it creates its own ID. When you save to the database, the database table creates its own ID as well, which in most cases will be different from the one in the dataset.
When you created the table adapter for that table, you had to supply a SQL statement to create the dataset table. Under the Advanced Options button, there is a checkbox called "Refresh the data table". Check that to have a SQL statement added after the insert and update to retrieve the identity column.
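Just as an illustration (with the names from your snippet assumed), with full SQL Server that option makes the generated InsertCommand look roughly like an INSERT followed by a SELECT that pulls the new identity back into the dataset row:
// Illustration only; column names are assumed from the question.
string generatedInsertCommand =
    "INSERT INTO [Fus] ([Something], [SomethingElse]) VALUES (@Something, @SomethingElse); " +
    "SELECT ID, Something, SomethingElse FROM Fus WHERE (ID = SCOPE_IDENTITY())";
// SQL Server Compact only runs one statement per command, which is likely why
// the checkbox can be greyed out in your case.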
If the checkbox is disabled, then I am not sure what else you could do, other than reload the data after each save, which will not be optimal.
Sorry I cannot be of more assistance. Best of luck