I have records in table1; if records exist, they must be copied into table2. Once all the records have been copied into table2, I want to delete them from table1. I'm still a beginner with databases. After some research I found tutorials on the internet on how to connect to a database, and the code was easy to understand, so I came up with this program. This code only does the copy part; I'm still missing the delete part. Can you help me figure out how to do the delete part? I found two references on MSDN, but I'm not sure about them and don't understand the code given.
try
{
    //create connection
    System.Data.SqlClient.SqlConnection sqlConnection1 =
        new System.Data.SqlClient.SqlConnection("Data Source=.dbname;Integrated Security=True;User Instance=True");

    //command queries
    System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand();
    cmd.CommandType = System.Data.CommandType.Text;
    cmd.CommandText = "INSERT INTO tblSend (ip, msg, date) SELECT ip, msg, date FROM tblOutbox";
    cmd.Connection = sqlConnection1;

    sqlConnection1.Open();  // open connection
    cmd.ExecuteNonQuery();  // execute query
    sqlConnection1.Close(); // close connection
}
catch (System.Exception excep)
{
    MessageBox.Show(excep.Message);
}
If I replace the query with this: cmd.CommandText = "DELETE tblSend WHERE id = 5"; it only deletes one row. But what if many records are involved? Do I need to worry about EOF? Do I need to use a DataGridView? The code I wrote doesn't use a DataGridView at all; I don't want the records to be displayed, I just want it to run in the background.
No, you do not need to worry about EOF or about using a DataGridView. Just as you can use the ExecuteNonQuery method to insert multiple rows, you can do the same with DELETE.
Data manipulation statements such as INSERT, UPDATE and DELETE do not generate a result set, so you would normally use ExecuteNonQuery to run them. All of the data manipulation runs in the database server engine.
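For example, here is a minimal sketch of what the copy-and-delete could look like in your own code. The table and column names come from your question; the transaction and the placeholder connection string are my additions (the transaction just makes sure nothing is deleted unless the copy succeeded):
using System.Data.SqlClient;

string connectionString = "...";   // use the same connection string that already works in your program
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction())
    {
        try
        {
            using (var copy = new SqlCommand(
                "INSERT INTO tblSend (ip, msg, date) SELECT ip, msg, date FROM tblOutbox", conn, tran))
            {
                copy.ExecuteNonQuery();   // copies every row in one statement, no loop needed
            }
            using (var del = new SqlCommand("DELETE FROM tblOutbox", conn, tran))
            {
                del.ExecuteNonQuery();    // returns the number of rows removed, if you want to check it
            }
            tran.Commit();                // make both changes permanent together
        }
        catch
        {
            tran.Rollback();              // nothing is copied or deleted if something failed
            throw;
        }
    }
}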
If I understand correctly, you need all the data from table1 to end up in table2 and then want to empty table1.
Options
1) If you only need to do it once, you could rename table1 to table2 and recreate table1 (MySQL syntax shown; on SQL Server you would use sp_rename and SELECT ... INTO instead):
-- move the records to table2 (OK, I assume table2 does not exist ;))
RENAME TABLE table1 TO table2;
-- create a new, empty table1 with the same structure as table2
CREATE TABLE table1 AS SELECT * FROM table2 WHERE 1=2;
2) Do a separate copy and delete assuming you have something like a primary key
-- copy the records
INSERT INTO table2 (field1, field2, ...) SELECT field1, field2, ... FROM table1;
-- and delete them
DELETE FROM table1;
3) Do it in C#, but since this looks like a database problem to me, I would not go as far as pulling all the records to the client and then pushing them back.
DELETE FROM tblSend WHERE id = 5;
This will delete all rows that match the WHERE condition.
I am not sure I understand the relevance of the DataGridView. If it is databound, it will automatically remove the records as well. You only need to issue the delete query once and the rest should happen automatically, assuming you have the databinding correct.
What do you mean by "if they exist"? Compared to what?
To delete multiple records from table1, you have to make a loop that goes through your table and compares.
Pseudo code:
foreach (whatever as whut)
    row = SELECT whatever FROM table1
    if (whut == row)
        copy row from table1 to table2
        DELETE FROM table1 WHERE id = row.id
DELETE FROM tblSend WHERE id = 5;
This is one way to delete a record.
If you want to reset the identity seed to 0 again, use this:
DBCC CHECKIDENT('tblSend', RESEED, 0);
Then press F5 to run it.
Related
I have a Windows Forms application (C#) that reads some data from a MySQL database. In a new version I needed to add a new column to one of the tables (to add some functionality). Sometimes I need to restore the database (from a dump file). If I restore the old table from the old database (without the new column) I get an "unknown column" error.
How should I alter my SQL command to select data from this table? If 'newcolumn' exists, I need to select its data; if not, I need to select NULL.
MySqlDataAdapter da = new MySqlDataAdapter(
    "SELECT my_id AS Id, myColumn1 AS Column1, myColumn2 AS Column2, " +
    "newcolumn AS NewColumn",   // here, if 'newcolumn' does not exist, I need NULL
    connection);
da.Fill(izpis_podatkov);
Thank you!
If you restore the database but leave the code as it is then there's a mismatch between code and database schema. The simplest option would be to alter the table to add the missing column after you do the restore. Something like:
ALTER TABLE yourtable ADD newcolumn VARCHAR(255)
Obviously, changing the table, column name and data type to the appropriate values for your situation.
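If you only need this after a restore, you could also run the check-and-alter once from your C# application. Here is a rough sketch using MySql.Data; the table and column names ('newtable', 'newcolumn') are placeholders, and the connection is assumed to be open already:
using System;
using MySql.Data.MySqlClient;

// Sketch: after restoring the old dump, add the missing column, but only if it is not there yet.
static void EnsureNewColumn(MySqlConnection connection)
{
    const string checkSql =
        "SELECT COUNT(*) FROM information_schema.COLUMNS " +
        "WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'newtable' AND COLUMN_NAME = 'newcolumn'";

    using (var check = new MySqlCommand(checkSql, connection))
    {
        long count = Convert.ToInt64(check.ExecuteScalar());
        if (count == 0)
        {
            using (var alter = new MySqlCommand(
                "ALTER TABLE newtable ADD newcolumn VARCHAR(255) NULL", connection))
            {
                alter.ExecuteNonQuery();
            }
        }
    }
}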
EDIT:
You could do something closer to what you actually asked by creating a stored procedure that would check for the existence of the column and add it if it is not there:
CREATE PROCEDURE `MyStoredProc` ()
BEGIN
    IF NOT EXISTS (SELECT * FROM information_schema.COLUMNS
                   WHERE TABLE_SCHEMA = 'db_name'
                     AND TABLE_NAME = 'newtable'
                     AND COLUMN_NAME = 'newcolumn') THEN
        ALTER TABLE newtable ADD newcolumn VARCHAR(255) NULL;
    END IF;

    SELECT my_id AS Id, myColumn1 AS Column1, myColumn2 AS Column2, newcolumn AS NewColumn FROM newtable;
END
You would then need to change your C# code to call this stored procedure:
command = new MySqlCommand(procName, connection);
command.CommandType = CommandType.StoredProcedure;
da = new MySqlDataAdapter(command);
da.Fill(izpis_podatkov);
I don't know MySql that well so please check the syntax of the stored procedure first!
You can't use a SQL Data Manipulation Language query in MySQL that mentions a nonexistent column in a table. The query planner in the MySQL server rejects it even if the column only shows up in a conditional context like IFNULL().
You could use a UNION ALL operation, like so
SELECT myColumn1, myColumn2, newColumn
FROM newTable
UNION ALL
SELECT myColumn1, myColumn2, NULL as newColumn
FROM oldTable
This will give you back the rows of the newTable and the rows of the oldTable as if they were a single table. It basically hides your two tables behind the UNION, pretending they have the same layout, making an alias for the missing column in oldTable.
Then, if you restore the oldTable from backup, you can truncate (remove all rows from) the newTable, if that makes sense in your application.
You can use the query I showed in a CREATE VIEW statement; then your production software will see it as if it were a table.
CREATE VIEW table AS
SELECT myColumn1, myColumn2, newColumn
FROM newTable
UNION ALL
SELECT myColumn1, myColumn2, NULL as newColumn
FROM oldTable;
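Once the view exists, your data access code can stay as it is and simply query the view. A small sketch, assuming the view carries the name your production code expects ('myView' below is a placeholder) and reusing the 'connection' and 'izpis_podatkov' objects from your question:
using MySql.Data.MySqlClient;

// The application queries the view exactly as if it were the original table.
MySqlDataAdapter da = new MySqlDataAdapter(
    "SELECT myColumn1 AS Column1, myColumn2 AS Column2, newColumn AS NewColumn FROM myView",
    connection);
da.Fill(izpis_podatkov);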
I check my SQL database to see if a column exists and create it if not, but I also want to insert a string into that column, and only if the column didn't exist before.
Otherwise I handle that information in my C# code.
So far I have this code :
string query = "IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'tabela' AND COLUMN_NAME = 'coluna') ALTER TABLE tabela ADD coluna varchar(50)" ;
SqlCommand command = new SqlCommand(query, con);
command.ExecuteNonQuery();
How should I do this?
Change your query to execute a block after the IF (pseudocode):
IF NOT EXISTS(...)
BEGIN
ALTER TABLE MyTable ...;
INSERT INTO MyTable ...;
END
Be sure to put semicolons at the end of the ALTER and INSERT statements, since you are sending them in a single command from the application, so SQL Server sees them as one batch.
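One caveat from my side (not part of the block above): if the table already exists, SQL Server resolves column names when it parses the batch, so an INSERT that references the brand-new column in the same batch as the ALTER can fail with an "Invalid column name" error. Here is a sketch of one way around that, sending the ALTER and the INSERT as two separate commands; the table, column and inserted value are the placeholders from your question, and 'con' is the open SqlConnection from your code:
using System.Data.SqlClient;

// First add the column if it is missing and remember whether we just added it.
string ensureColumn =
    "IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS " +
    "               WHERE TABLE_NAME = 'tabela' AND COLUMN_NAME = 'coluna') " +
    "BEGIN ALTER TABLE tabela ADD coluna varchar(50); SELECT 1; END " +   // column was just added
    "ELSE SELECT 0;";                                                     // column already existed

bool columnWasAdded;
using (SqlCommand cmd = new SqlCommand(ensureColumn, con))
{
    columnWasAdded = (int)cmd.ExecuteScalar() == 1;
}

if (columnWasAdded)
{
    // A separate command (separate batch), so the new column is visible to the parser by now.
    using (SqlCommand insert = new SqlCommand("INSERT INTO tabela (coluna) VALUES (@valor)", con))
    {
        insert.Parameters.AddWithValue("@valor", "your string here");   // placeholder value
        insert.ExecuteNonQuery();
    }
}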
You can write a trigger or a stored procedure in which you first check whether the column exists and then do what's needed. You can achieve this entirely with SQL triggers (or even stored procedures); C# has nothing to do with it :)
I have an identity (auto increment integer) column in my data table in a SQL Server database.
When I start my program and add a new record to this table, the identity column always equals -1. For the next record it becomes -2, and so on. I add new records this way:
http://msdn.microsoft.com/en-us/library/5ycd1034.aspx
However, when I restart my program all identity values are reordered (they become 1, 2, ...).
Any ideas why this happens? It would be no issue if I could delete these records without restarting. I use SQL Server 2008.
Also, is there any way to specify the MAX size for a column data type through the GUI (when adding a table in the Visual Studio 2012 Server Explorer)?
Why they are negative I don't know. However, when you reload the application those records already exist in the database and have IDs that were assigned when they were committed to the database; that's why they have real values on restart.
But, you don't need to delete the records from the DataTable, you just need to refresh that row after committing it to the database. There are a number of ways to do this, and would depend significantly on exactly how you're accessing your data now, but you can do things like tack on the SELECT SCOPE_IDENTITY() command with the INSERT command and then use ExecuteScalar to commit the row, like this:
var insertCmd = "INSERT INTO tbl (fld1, fld2) VALUES (@fld1, @fld2); SELECT SCOPE_IDENTITY();";
using (var c = new SqlConnection(connString))
using (var cmd = new SqlCommand(insertCmd, c))
{
    cmd.Parameters.AddWithValue("@fld1", fld1Value);
    cmd.Parameters.AddWithValue("@fld2", fld2Value);
    c.Open();
    var result = cmd.ExecuteScalar();
    int id;
    if (int.TryParse(Convert.ToString(result), out id))
    {
        // update the DataTable row here
        dataTable.Rows[index]["id_column"] = id;
        dataTable.AcceptChanges();
    }
}
You could even choose to reload the entire DataTable after performing the update.
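If you go the reload route, here is a minimal sketch (the names tbl, fld1, fld2, id_column, connString and dataTable are the same placeholders used above):
using System.Data;
using System.Data.SqlClient;

// Discard the locally generated (negative) ids and re-read the table,
// so the DataTable shows the identity values the server actually assigned.
using (var c = new SqlConnection(connString))
using (var da = new SqlDataAdapter("SELECT id_column, fld1, fld2 FROM tbl", c))
{
    dataTable.Clear();
    da.Fill(dataTable);   // Fill opens and closes the connection itself
}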
I'm trying to insert records using a high performance table parameter method ( http://www.altdevblogaday.com/2012/05/16/sql-server-high-performance-inserts/ ), and I'm curious if it's possible to retrieve back the identity values for each record I insert.
At the moment, the answer appears to be no - I insert the data, then retrieve back the identity values, and they don't match. Specifically, they don't match about 75% of the time, and they don't match in unpredictable ways. Here's some code that replicates this issue:
// Create a datatable with 100k rows
DataTable dt = new DataTable();
dt.Columns.Add(new DataColumn("item_id", typeof(int)));
dt.Columns.Add(new DataColumn("comment", typeof(string)));
for (int i = 0; i < 100000; i++) {
dt.Rows.Add(new object[] { 0, i.ToString() });
}
// Insert these records and retrieve back the identity
using (SqlConnection conn = new SqlConnection("Data Source=localhost;Initial Catalog=testdb;Integrated Security=True")) {
conn.Open();
using (SqlCommand cmd = new SqlCommand("proc_bulk_insert_test", conn)) {
cmd.CommandType = CommandType.StoredProcedure;
// Adding a "structured" parameter allows you to insert tons of data with low overhead
SqlParameter param = new SqlParameter("@mytable", SqlDbType.Structured);
param.Value = dt;
cmd.Parameters.Add(param);
SqlDataReader dr = cmd.ExecuteReader();
// Set all the records' identity values
int i = 0;
while (dr.Read()) {
dt.Rows[i].ItemArray = new object[] { dr.GetInt32(0), dt.Rows[i].ItemArray[1] };
i++;
}
dr.Close();
}
// Do all the records' ID numbers match what I received back from the database?
using (SqlCommand cmd = new SqlCommand("SELECT * FROM bulk_insert_test WHERE item_id >= @base_identity ORDER BY item_id ASC", conn)) {
cmd.Parameters.AddWithValue("@base_identity", (int)dt.Rows[0].ItemArray[0]);
SqlDataReader dr = cmd.ExecuteReader();
DataTable dtresult = new DataTable();
dtresult.Load(dr);
}
}
The database is defined using this SQL server script:
CREATE TABLE bulk_insert_test (
item_id int IDENTITY (1, 1) NOT NULL PRIMARY KEY,
comment varchar(20)
)
GO
CREATE TYPE bulk_insert_table_type AS TABLE ( item_id int, comment varchar(20) )
GO
CREATE PROCEDURE proc_bulk_insert_test
    @mytable bulk_insert_table_type READONLY
AS
DECLARE @TableOfIdentities TABLE (IdentValue INT)

INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id INTO @TableOfIdentities(IdentValue)
SELECT comment FROM @mytable

SELECT * FROM @TableOfIdentities
Here's the problem: the values returned from proc_bulk_insert_test are not in the same order as the original records were inserted. Therefore, I can't programmatically assign each record the item_id value I received back from the OUTPUT statement.
It seems like the only valid solution is to SELECT back the entire list of records I just inserted, but frankly I'd prefer any solution that would reduce the amount of data piped across my SQL Server's network card. Does anyone have better solutions for large inserts while still retrieving identity values?
EDIT: Let me try clarifying the question a bit more. The problem is that I would like my C# program to learn what identity values SQL Server assigned to the data that I just inserted. The order isn't essential; but I would like to be able to take an arbitrary set of records within C#, insert them using the fast table parameter method, and then assign their auto-generated ID numbers in C# without having to requery the entire table back into memory.
Given that this is an artificial test set, I attempted to condense it into as small of a readable bit of code as possible. Let me describe what methods I have used to resolve this issue:
In my original code, in the application this example came from, I would insert about 15 million rows using 15 million individual insert statements, retrieving back the identity value after each insert. This worked but was slow.
I revised the code using high performance table parameters for insertion. I would then dispose of all of the objects in C#, and read back from the database the entire objects. However, the original records had dozens of columns with lots of varchar and decimal values, so this method was very network traffic intensive, although it was fast and it worked.
I now began research to figure out whether it was possible to use the table parameter insert, while asking SQL Server to just report back the identity values. I tried scope_identity() and OUTPUT but haven't been successful so far on either.
Basically, this problem would be solved if SQL Server would always insert the records in exactly the order I provided them. Is it possible to make SQL server insert records in exactly the order they are provided in a table value parameter insert?
EDIT2: This approach seems very similar to what Cade Roux cites below:
http://www.sqlteam.com/article/using-the-output-clause-to-capture-identity-values-on-multi-row-inserts
However, in the article, the author uses a magic unique value, "ProductNumber", to connect the inserted information from the "output" value to the original table value parameter. I'm trying to figure out how to do this if my table doesn't have a magic unique value.
Your TVP is an unordered set, just like a regular table. It only has order when you specify as such. Not only do you not have any way to indicate actual order here, you're also just doing a SELECT * at the end with no ORDER BY. What order do you expect here? You've told SQL Server, effectively, that you don't care. That said, I implemented your code and had no problems getting the rows back in the right order. I modified the procedure slightly so that you can actually tell which identity value belongs to which comment:
DECLARE @TableOfIdentities TABLE (IdentValue INT, comment varchar(20))

INSERT INTO bulk_insert_test (comment)
OUTPUT Inserted.item_id, Inserted.comment
INTO @TableOfIdentities(IdentValue, comment)
SELECT comment FROM @mytable

SELECT * FROM @TableOfIdentities
Then I called it using this code (we don't need all the C# for this):
DECLARE @t bulk_insert_table_type;
INSERT @t VALUES(5,'foo'),(2,'bar'),(3,'zzz');
SELECT * FROM @t;
EXEC dbo.proc_bulk_insert_test @t;
Results:
1 foo
2 bar
3 zzz
If you want to make sure the output is in the order of identity assignment (which isn't necessarily the same "order" that your unordered TVP has), you can add ORDER BY item_id to the last select in your procedure.
If you want to insert into the destination table so that your identity values are in an order that is important to you, then you have a couple of options:
add a column to your TVP and insert the order into that column, then use a cursor to iterate over the rows in that order, and insert one at a time. Still more efficient than calling the entire procedure for each row, IMHO.
add a column to your TVP that indicates order, and use an ORDER BY on the insert. This isn't guaranteed, but is relatively reliable, particularly if you eliminate parallelism issues using MAXDOP 1.
In any case, you seem to be placing a lot of relevance on ORDER. What does your order actually mean? If you want to place some meaning on order, you shouldn't be doing so using an IDENTITY column.
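For completeness, a sketch of what the C# side could look like against the modified procedure above: since it now returns (IdentValue, comment) pairs, the client can match on the comment instead of relying on row order. This assumes the comment values are unique, as they are in your test data; 'cmd' and 'dt' are the SqlCommand and DataTable from your code.
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Read the (IdentValue, comment) pairs returned by the modified procedure
// and copy each identity back onto the matching DataTable row.
var idByComment = new Dictionary<string, int>();
using (SqlDataReader dr = cmd.ExecuteReader())
{
    while (dr.Read())
    {
        idByComment[dr.GetString(1)] = dr.GetInt32(0);   // comment -> IdentValue
    }
}

foreach (DataRow row in dt.Rows)
{
    row["item_id"] = idByComment[(string)row["comment"]];
}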
You specify no ORDER BY on this: SELECT * FROM @TableOfIdentities, so there's no guarantee of order. If you want them in the same order they were sent, do an INNER JOIN from that to the data that was inserted, with an ORDER BY that matches the order the rows were sent in.
I have a table table1 with fields id(int), name(nchar), grade(real).
The following code isn't working. There are no errors or warnings, and the code executes fine, but the number of affected rows is 0.
MS SQL Server
sqlConnection1.Open();
SqlCommand cmd = new SqlCommand("Delete from [table1] where [id] = 1", sqlConnection1);
int c = cmd.ExecuteNonQuery();
sqlConnection1.Close();
All other queries are working well.
A slight expansion of what others have already asked. Are you certain that there are records to be deleted in your target table? Moreover, are you certain you are getting the table from the right database? It's possible the default is tempdb, for instance, and that just happens to have a table with the target name and with an id column.
First do a SELECT from the SQL prompt to ensure there are rows of the kind you are looking for:
SELECT TOP 10 * FROM [database].[schema].[table1] WHERE [id] = 1
If that provides results, try changing your command to explicitly state the database and schema as well:
DELETE FROM [database].[schema].[table1] WHERE [id] = 1
Thoughts:
is there a row with [id] = 1?
do you have a trigger that is firing?
my guess would be the second... the number is after triggers have been taken into account, and is the number of rows from the last operation.
I know this might sound silly, but is there data in the call with an ID of 1? Can you see it executing via SQL Profiler? What happens if you execute it via SSMS?
Does the query work when you run it in Sql Management studio?
Apart from Marc Gravell's comment about triggers, you should also check whether there are foreign key constraints with ON DELETE RESTRICT in place (and whether the error message somehow disappears before getting to you...).