I have an identity (auto-increment integer) column in my data table in a SQL Server database.
When I start my program and add a new record to this table, the identity column always equals -1. For the next record it becomes -2, and so on. I add new records this way:
http://msdn.microsoft.com/en-us/library/5ycd1034.aspx
However, when I restart my program, all the identity values are renumbered (they become 1, 2, ...).
Any ideas why this happens? It would be no issue if I could delete these records without restarting. I use SQL Server 2008.
Also, is there any way to specify a MAX size for a column's data type through the GUI (when adding a table in the Visual Studio 2012 Server Explorer)?
Why they are negative I don't know, but when you reload the application those records already exist in the database and have the IDs that were assigned when they were committed to the database; that's why they show the real values after a restart.
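As an aside, the negative placeholders usually come from the DataColumn's auto-increment settings: the dataset designer typically sets both the seed and the step to -1 so that client-side placeholder values can never collide with real server-assigned IDs. A minimal sketch of a column configured that way (the column name is just an example):

var idColumn = new DataColumn("id_column", typeof(int))
{
    AutoIncrement = true,
    AutoIncrementSeed = -1, // first client-side placeholder is -1
    AutoIncrementStep = -1  // then -2, -3, ...
};
dataTable.Columns.Add(idColumn);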
But you don't need to delete the records from the DataTable; you just need to refresh each row after committing it to the database. There are a number of ways to do this, and the right one depends on exactly how you're accessing your data now, but you can do things like append a SELECT SCOPE_IDENTITY() to the INSERT command and then use ExecuteScalar to commit the row, like this:
var insertCmd = "INSERT INTO tbl (fld1, fld2) VALUES (@fld1, @fld2); SELECT CAST(SCOPE_IDENTITY() AS int)";

using (var c = new SqlConnection(connString))
using (var cmd = new SqlCommand(insertCmd, c))
{
    cmd.Parameters.AddWithValue("@fld1", fld1Value);
    cmd.Parameters.AddWithValue("@fld2", fld2Value);

    c.Open();
    var result = cmd.ExecuteScalar();

    int id;
    if (result != null && int.TryParse(result.ToString(), out id))
    {
        // update the DataTable row with the server-assigned identity
        dataTable.Rows[index]["id_column"] = id;
        dataTable.AcceptChanges();
    }
}
You could even choose to reload the entire DataTable after performing the update.
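If you go that route, here is a minimal sketch using a SqlDataAdapter (the table and column names are the hypothetical ones from the snippet above):

using (var c = new SqlConnection(connString))
using (var adapter = new SqlDataAdapter("SELECT id_column, fld1, fld2 FROM tbl", c))
{
    dataTable.Clear();        // drop the stale client-side rows
    adapter.Fill(dataTable);  // re-read everything, including the real identity values
}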
Related
One of the columns in my SQL Server database, called ID, is auto-generated. When I insert new row(s) using SqlDataAdapter.Update(table) and accept the changes with AcceptChanges(), the ID column in the table is set to -1 instead of the new auto-generated ID value. The database insertion works, and the new row(s) are inserted into the database with sequential auto-generated ID values.
How do I force the SqlDataAdapter or the DataTable to get back the correct ID values?
I have solved the issue. For future reference:

operationDBAdapter.InsertCommand.CommandText =
    @"INSERT INTO Operations (operationType, agentID, resetTime, description, enabled, logLevel)
      VALUES (@operationType, @agentID, @resetTime, @description, @enabled, @logLevel);
      SELECT ID, operationType, agentID, resetTime, description, enabled, logLevel
      FROM Operations WHERE (ID = SCOPE_IDENTITY())";
After the insertion command, just select rows from the same table.
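One detail worth noting, as an assumption about this adapter setup rather than something from the original post: the adapter maps the returned record back onto the inserted DataRow according to the command's UpdatedRowSource, so it can help to set it explicitly:

// map the first record returned by the INSERT ... SELECT back onto the inserted DataRow
operationDBAdapter.InsertCommand.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;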
I am trying to create a temp table from a select statement so that I can get the schema information from the temp table.
I am able to achieve this in SQL Server with the following code:
-- This creates the temp table
SELECT location.id, location.name INTO #URM_TEMP_TABLE FROM location

-- This retrieves column information from the temp table
SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#U%'
If I run the code in C# like so:
using (CONN = new SqlConnection(Settings.Default.UltrapartnerDBConnectionString))
{
    var commandText = ReportToDisplay.ReportQuery.ToLower().Replace("from", "into #URM_TEMP_TABLE from");
    using (SqlCommand command = CONN.CreateCommand())
    {
        // Create temp table
        CONN.Open();
        command.CommandText = commandText;
        int retVal = command.ExecuteNonQuery();
        CONN.Close();

        // Get column data from temp table
        command.CommandText = "SELECT * FROM TEMPDB.INFORMATION_SCHEMA.Columns WHERE TABLE_NAME like '#U%'";
        CONN.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                ColumnsForReport.Add(new ListBoxCheckBoxItemModel
                {
                    Name = reader["COLUMN_NAME"].ToString(),
                    DataType = reader["DATA_TYPE"].ToString(),
                    IsSelected = false,
                    RVMCommandModel = this
                });
            }
        }
        CONN.Close();

        // Drop table
        command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
        CONN.Open();
        command.ExecuteNonQuery();
        CONN.Close();
    }
}
Everything works until it gets to the drop statement: Cannot drop the table '#URM_TEMP_TABLE'
ExecuteNonQuery returns 2547, which is the number of rows the temp table is supposed to contain. However, it seems that the table does not actually get created by this. Is ExecuteNonQuery the right method to call?
Temporary tables are only in scope for the current session. In the code you've posted, you're opening a connection, creating a temp table, and closing the connection;
then you're opening another connection (a new session) and attempting to drop a table which is not in scope for that session.
You would need to drop the temp table within the same connection, or possibly make it a global temp table (##), though in this case, with two separate connections, a global temp table would still fall out of scope.
Additionally, as it was pointed out in the comments your temp tables will be cleaned up automatically - but if you really did want to drop them, you must do so from the session that created them.
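A minimal sketch of the fix, reusing the objects from the question: the point is a single open connection for the whole create/read/drop sequence.

CONN.Open();
try
{
    // create the temp table
    command.CommandText = commandText;  // SELECT ... INTO #URM_TEMP_TABLE FROM ...
    command.ExecuteNonQuery();

    // read the column metadata while the session (and the temp table) is still alive
    command.CommandText = "SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#U%'";
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... populate ColumnsForReport as before ...
        }
    }

    // same session, so the drop now succeeds
    command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
    command.ExecuteNonQuery();
}
finally
{
    CONN.Close();
}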
EDIT: taken from another SO thread:
Global temporary tables in SQL Server
Global temporary tables operate much like local temporary tables; they
are created in tempdb and cause less locking and logging than
permanent tables. However, they are visible to all sessions, until the
creating session goes out of scope (and the global ##temp table is no
longer being referenced by other sessions). If two different sessions
try the above code, if the first is still active, the second will
receive the following:
Server: Msg 2714, Level 16, State 6, Line 1 There is already an object
named '##people' in the database.
I have yet to see a valid justification for the use of a global ##temp
table. If the data needs to persist to multiple users, then it makes
much more sense, at least to me, to use a permanent table. You can
make a global ##temp table slightly more permanent by creating it in
an autostart procedure, but I still fail to see how this is
advantageous over a permanent table. With a permanent table, you can
deny permissions; you cannot deny users from a global ##temp table.
Looks like global temp tables still go out of scope... they're just bad to use in general IMO. Can you just drop the table in the same session or rethink your solution?
I have found a strange phenomenon on MS SQL Server.
Let's say we have a table:
CREATE TABLE [testTable]
(
[ID] [numeric](11, 0) NOT NULL,
[Updated] [datetime] NULL,
PRIMARY KEY (ID)
);
I run a simple select based on the Updated field:
SELECT TOP 10000 ID, Updated
FROM testTable
WHERE Updated>='2013-05-22 08:55:12.152'
ORDER BY Updated
And now comes the fun part: how can the result set contain duplicate records, meaning the same ID in two records with different Updated values?
It seems to me that the Updated value was changed while the query ran, and so the row was included one more time in the result set. But is that possible?
UPDATE:
Source code I using for downloading data from SQL server:
using (SqlCommand cmd = new SqlCommand(sql, Connection) { CommandTimeout = commandTimeout })
{
using (System.Data.SqlClient.SqlDataAdapter adapter = new System.Data.SqlClient.SqlDataAdapter(cmd))
{
DataTable retVal = new DataTable();
adapter.Fill(retVal);
return retVal;
}
}
Connection = SqlConnection
sql = "SELECT TOP 10000 ...."
Your question seems to lack some details, but here are my ideas.
The first case I'd think of is that you are somehow selecting those IDs twice (it could be a join, a group by, ...). Please check manually inside your table (in SQL Server itself rather than through a function or method) to see if there are duplicated IDs. If there are, the issue is that your primary key hasn't been set correctly. Otherwise, you will need to provide all the relevant code used to select the data in order to get more help.
Another case might be that someone or something altered the primary key so that it covers both ID and Updated, allowing the same ID to be inserted twice as long as the Updated values differ.
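A sketch of one way to check which columns the primary key actually covers, using the standard INFORMATION_SCHEMA views (nothing here is specific to your schema beyond the table name):

SELECT kcu.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
    ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
WHERE tc.TABLE_NAME = 'testTable'
    AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY';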
You may also try this query to see whether there actually are duplicated IDs in your table:

SELECT ID, COUNT(*) AS cnt
FROM testTable
GROUP BY ID
HAVING COUNT(*) > 1
ORDER BY ID;
I hope this helps.
I'm trying to insert records using a high performance table parameter method ( http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/ ), and I'm curious if it's possible to retrieve back the identity values for each record I insert.
At the moment, the answer appears to be no - I insert the data, then retrieve back the identity values, and they don't match. Specifically, they don't match about 75% of the time, and they don't match in unpredictable ways. Here's some code that replicates this issue:
// Create a DataTable with 100k rows
DataTable dt = new DataTable();
dt.Columns.Add(new DataColumn("item_id", typeof(int)));
dt.Columns.Add(new DataColumn("comment", typeof(string)));
for (int i = 0; i < 100000; i++) {
    dt.Rows.Add(new object[] { 0, i.ToString() });
}

// Insert these records and retrieve back the identity
using (SqlConnection conn = new SqlConnection("Data Source=localhost;Initial Catalog=testdb;Integrated Security=True")) {
    conn.Open();
    using (SqlCommand cmd = new SqlCommand("proc_bulk_insert_test", conn)) {
        cmd.CommandType = CommandType.StoredProcedure;

        // Adding a "structured" parameter allows you to insert tons of data with low overhead
        SqlParameter param = new SqlParameter("@mytable", SqlDbType.Structured);
        param.Value = dt;
        cmd.Parameters.Add(param);
        SqlDataReader dr = cmd.ExecuteReader();

        // Set all the records' identity values
        int i = 0;
        while (dr.Read()) {
            dt.Rows[i].ItemArray = new object[] { dr.GetInt32(0), dt.Rows[i].ItemArray[1] };
            i++;
        }
        dr.Close();
    }

    // Do all the records' ID numbers match what I received back from the database?
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM bulk_insert_test WHERE item_id >= @base_identity ORDER BY item_id ASC", conn)) {
        cmd.Parameters.AddWithValue("@base_identity", (int)dt.Rows[0].ItemArray[0]);
        SqlDataReader dr = cmd.ExecuteReader();
        DataTable dtresult = new DataTable();
        dtresult.Load(dr);
    }
}
The database is defined using this SQL Server script:

CREATE TABLE bulk_insert_test (
    item_id int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    comment varchar(20)
)
GO
CREATE TYPE bulk_insert_table_type AS TABLE ( item_id int, comment varchar(20) )
GO
CREATE PROCEDURE proc_bulk_insert_test
    @mytable bulk_insert_table_type READONLY
AS
DECLARE @TableOfIdentities TABLE (IdentValue INT)

INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id INTO @TableOfIdentities (IdentValue)
SELECT comment FROM @mytable

SELECT * FROM @TableOfIdentities
Here's the problem: the values returned from proc_bulk_insert_test are not in the same order as the original records were inserted. Therefore, I can't programmatically assign each record the item_id value I received back from the OUTPUT statement.
It seems like the only valid solution is to SELECT back the entire list of records I just inserted, but frankly I'd prefer any solution that would reduce the amount of data piped across my SQL Server's network card. Does anyone have better solutions for large inserts while still retrieving identity values?
EDIT: Let me try to clarify the question a bit more. The problem is that I would like my C# program to learn what identity values SQL Server assigned to the data I just inserted. The order isn't essential, but I would like to be able to take an arbitrary set of records in C#, insert them using the fast table parameter method, and then assign their auto-generated ID numbers in C# without having to requery the entire table back into memory.
Given that this is an artificial test set, I attempted to condense it into the smallest readable bit of code possible. Let me describe the methods I have used to resolve this issue:
In my original code, in the application this example came from, I would insert about 15 million rows using 15 million individual insert statements, retrieving the identity value after each insert. This worked but was slow.
I revised the code to use high performance table parameters for insertion. I would then dispose of all the objects in C# and read the entire objects back from the database. However, the original records had dozens of columns with lots of varchar and decimal values, so this method was very network-traffic intensive, although it was fast and it worked.
I then began researching whether it was possible to use the table parameter insert while asking SQL Server to report back just the identity values. I tried SCOPE_IDENTITY() and OUTPUT but haven't been successful so far with either.
Basically, this problem would be solved if SQL Server would always insert the records in exactly the order I provided them. Is it possible to make SQL Server insert records in exactly the order they are provided in a table-valued parameter insert?
EDIT2: This approach seems very similar to what Cade Roux cites below:
http://www.sqlteam.com/article/using-the-output-clause-to-capture-identity-values-on-multi-row-inserts
However, in the article, the author uses a magic unique value, "ProductNumber", to connect the inserted information from the "output" value to the original table value parameter. I'm trying to figure out how to do this if my table doesn't have a magic unique value.
Your TVP is an unordered set, just like a regular table. It only has an order when you specify one. Not only do you not have any way to indicate actual order here, you're also just doing a SELECT * at the end with no ORDER BY. What order do you expect here? You've told SQL Server, effectively, that you don't care. That said, I implemented your code and had no problems getting the rows back in the right order. I modified the procedure slightly so that you can actually tell which identity value belongs to which comment:
DECLARE @TableOfIdentities TABLE (IdentValue INT, comment varchar(20))

INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id, Inserted.comment
    INTO @TableOfIdentities (IdentValue, comment)
SELECT comment FROM @mytable

SELECT * FROM @TableOfIdentities
Then I called it using this code (we don't need all the C# for this):
DECLARE @t bulk_insert_table_type;
INSERT @t VALUES (5,'foo'),(2,'bar'),(3,'zzz');
SELECT * FROM @t;
EXEC dbo.proc_bulk_insert_test @t;
Results:
1 foo
2 bar
3 zzz
If you want to make sure the output is in the order of identity assignment (which isn't necessarily the same "order" that your unordered TVP has), you can add ORDER BY item_id to the last select in your procedure.
If you want to insert into the destination table so that your identity values are in an order that is important to you, then you have a couple of options:
add a column to your TVP and insert the order into that column, then use a cursor to iterate over the rows in that order, and insert one at a time. Still more efficient than calling the entire procedure for each row, IMHO.
add a column to your TVP that indicates order, and use an ORDER BY on the insert. This isn't guaranteed, but it is relatively reliable, particularly if you eliminate parallelism issues using MAXDOP 1; a sketch follows this list.
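A sketch of that second option, assuming a hypothetical TVP type with an extra ord column (neither the type nor the column exists in the original script):

-- hypothetical type: CREATE TYPE ordered_insert_type AS TABLE (ord int, comment varchar(20))
INSERT INTO bulk_insert_test (comment)
SELECT comment
FROM @mytable
ORDER BY ord          -- ask SQL Server to assign identities in ord order
OPTION (MAXDOP 1);    -- reduce the chance of parallelism reordering the rows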
In any case, you seem to be placing a lot of relevance on ORDER. What does your order actually mean? If you want to place some meaning on order, you shouldn't be doing so using an IDENTITY column.
You specify no ORDER BY on this: SELECT * FROM @TableOfIdentities, so there's no guarantee of order. If you want them in the same order they were sent, do an INNER JOIN from that table to the data that was inserted, with an ORDER BY that matches the order the rows were sent in.
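For instance, here is a sketch using the widened @TableOfIdentities from the answer above, assuming the comment values are unique enough to join on (they are in this test data):

-- join the captured identities back to the TVP rows they came from
SELECT t.IdentValue, s.comment
FROM @TableOfIdentities AS t
INNER JOIN @mytable AS s ON s.comment = t.comment
ORDER BY t.IdentValue;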
I have records in table1; if the records exist, they must be copied into table2. I want to delete the records in table1 once they have all been copied into table2. I'm still a beginner with databases. After some research I found tutorials on the internet about how to connect to a database, and the code was easy to understand, so I came up with this program. This code only does the copy part; I still lack the delete part. Can you help me figure out how to do the delete part? I found two references on MSDN, but I'm not sure about, and don't fully understand, the code given there.
try
{
    // create connection
    System.Data.SqlClient.SqlConnection sqlConnection1 =
        new System.Data.SqlClient.SqlConnection("Data Source=.dbname;Integrated Security=True;User Instance=True");

    // command queries
    System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand();
    cmd.CommandType = System.Data.CommandType.Text;
    cmd.CommandText = "INSERT INTO tblSend (ip, msg, date) SELECT ip, msg, date FROM tblOutbox";
    cmd.Connection = sqlConnection1;

    sqlConnection1.Open();   // open con
    cmd.ExecuteNonQuery();   // execute query
    sqlConnection1.Close();  // close con
}
catch (System.Exception excep)
{
    MessageBox.Show(excep.Message);
}
If I replace the query with this:
cmd.CommandText = "DELETE FROM tblSend WHERE id = 5";
it only deletes one row. But what if many records are involved? Do I need to consider EOF handling? Do I need to use a DataGridView? The code I wrote doesn't use a DataGridView at all; I don't want the records to be displayed, I just want this to run in the background.
No, you do not need to worry about EOF or use a DataGridView. Just as you can use the ExecuteNonQuery method to insert multiple rows, you can do the same with DELETE.
Data manipulation statements such as INSERT, UPDATE and DELETE do not generate a result set, hence you would normally use ExecuteNonQuery to run them. All the data manipulation runs in the database server engine.
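For example, here is a minimal sketch of the copy-then-delete sequence wrapped in a transaction so the delete only happens if the copy succeeded (the table and column names come from the question; connectionString is assumed):

using (var con = new System.Data.SqlClient.SqlConnection(connectionString))
{
    con.Open();
    using (var tx = con.BeginTransaction())
    using (var cmd = con.CreateCommand())
    {
        cmd.Transaction = tx;

        // copy everything from tblOutbox into tblSend
        cmd.CommandText = "INSERT INTO tblSend (ip, msg, date) SELECT ip, msg, date FROM tblOutbox";
        cmd.ExecuteNonQuery();

        // then delete the copied rows; no loop or EOF check is needed
        cmd.CommandText = "DELETE FROM tblOutbox";
        cmd.ExecuteNonQuery();

        tx.Commit();  // both statements succeed, or neither does
    }
}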
If I understand correctly, you need all the data from table1 in table2 and then want to empty table1.
Options
1) If you only need to do this once, you could rename table1 to table2 and recreate table1:

-- move the records to table2 (OK, I assume table2 does not exist yet ;))
EXEC sp_rename 'table1', 'table2';
-- create a new, empty table1 with the same structure as table2
SELECT * INTO table1 FROM table2 WHERE 1 = 2;
2) Do a separate copy and delete, assuming you have something like a primary key:

-- copy the records
INSERT INTO table2 (field1, field2, ...) SELECT field1, field2, ... FROM table1;
-- and delete them
DELETE FROM table1;

3) Do it in C#, but as this seems like a database problem to me, I would not go as far as pulling all the records to the client and then sending them back.
DELETE FROM tblSend WHERE id = 5;
This will delete all rows that match the WHERE condition.
I am not sure I understand the relevance of the DataGridView. If it is databound, it will automatically remove the records as well. You only need to issue the delete query once and the rest should happen automatically, assuming you have the databinding correct.
What do you mean by "if they exist"? Compared to what?
To delete multiple records from table1, you have to make a loop which goes through your table and compares.
Pseudo code:
foreach (whatever as whut)
    row = select whatever from table1
    if (whut == row)
        copy row from table1 to table2
        delete from table1 where whut.id == row.id
DELETE FROM tblSend WHERE id = 5;
This is one solution for deleting a record.
If you want to reseed the identity so that the next inserted row gets ID 1 again, use this code:
DBCC CHECKIDENT('tblSend', RESEED, 0);
Then press F5 to run it.