I am writing a simple database management tool in C#. I am working on a function that inserts a new row into the database, but I have run into a problem: I need to detect which ID numbers are not already taken. I have done some research but haven't found a clear answer.
Example table:
ID Name
---------------
1 John
2 Linda
4 Mark
5 Jessica
How would I add a function that automatically detects that ID 3 is empty, and places a new entry there?
Edit: My real question is: when I want to insert a new row via C#, how do I handle a column which is auto-increment? An example would be fantastic :)
I don't like giving answers like this...but I am going to anyway on this occasion.
Don't
What if you store more data in another table which has a foreign key to the ID in this table? If you reuse numbers you are asking for trouble with referential integrity down the line.
I assume your field is an int? If so, auto-increment gives you more than enough values for most purposes, makes your insert simpler, and maintains integrity.
Edit: You might have a very good reason to do it, but I wanted to make the point in case somebody comes along and sees this later on who thinks it is a good idea.
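To answer the edit directly: with an auto-increment (IDENTITY) column you leave the ID out of the INSERT entirely and let SQL Server assign it; SCOPE_IDENTITY() then returns the value it chose. A minimal ADO.NET sketch, assuming SQL Server and an illustrative dbo.People table with ID INT IDENTITY and Name:

using System;
using System.Data.SqlClient;

class InsertExample
{
    // Insert a row without touching the IDENTITY column, and
    // return the ID that SQL Server generated for it.
    static int InsertPerson(string connectionString, string name)
    {
        const string sql =
            "INSERT INTO dbo.People (Name) VALUES (@name); " +
            "SELECT CAST(SCOPE_IDENTITY() AS INT);";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}

SCOPE_IDENTITY() only sees identity values generated in your own scope, so concurrent inserts by other connections can't leak into the value you read back.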
SQL:
SELECT ID FROM [Table] -- fetch every ID and find the gap client-side, as in the C# below
OR, to find all the gaps server-side:
SELECT t.ID
FROM ( SELECT number + 1 AS ID
       FROM master.dbo.spt_values
       WHERE Type = 'p'
         AND number <= ( SELECT MAX(ID) - 1 FROM #Table )
     ) t
LEFT JOIN #Table ON t.ID = [#Table].ID
WHERE [#Table].ID IS NULL
C#
DataTable dt = new DataTable();
// Populate dt from SQL, e.g. with a SqlDataAdapter running the SELECT ID query above
// (requires System.Data and System.Linq)
var tableInts = dt.Rows.Cast<DataRow>().Select(row => row.Field<int>("ID")).ToList();
// Guard the empty-table case, and run the candidate range one past the
// maximum so a gap-free table still yields the next free ID
var maxId = tableInts.Count == 0 ? 0 : tableInts.Max();
var minInt = Enumerable.Range(1, maxId + 1).Except(tableInts).Min();
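And if the ID column is an IDENTITY but you still want to place a row into the gap you just found, note that SQL Server only accepts an explicit ID while SET IDENTITY_INSERT is ON for that table. A hedged sketch continuing from minInt above (dbo.People is an illustrative name):

// An explicit value can only go into an IDENTITY column while
// IDENTITY_INSERT is ON, and the column list must name ID explicitly.
const string sql =
    "SET IDENTITY_INSERT dbo.People ON; " +
    "INSERT INTO dbo.People (ID, Name) VALUES (@id, @name); " +
    "SET IDENTITY_INSERT dbo.People OFF;";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@id", minInt);
    cmd.Parameters.AddWithValue("@name", "NewName");
    conn.Open();
    cmd.ExecuteNonQuery();
}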
Try this? It assumes #temp already holds the candidate IDs you want to check:
SELECT #temp.Id
FROM #temp
LEFT JOIN table1 ON #temp.Id = table1.Id
WHERE table1.Id IS NULL
But my suggestion is: just auto-increment the field.
To do that, set the column's IDENTITY property to true and make it the primary key (NOT NULL).
If you need extra logic on inserts, you can use triggers, which are like stored procedures except that they run instead of, or after, an INSERT, UPDATE, or DELETE.
Google triggers.
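For reference, a minimal sketch of creating such a column from C# (assuming SQL Server; dbo.People is an illustrative name):

using System.Data.SqlClient;

class CreateTableExample
{
    // IDENTITY(1,1) starts at 1 and increments by 1 on each insert;
    // PRIMARY KEY implies NOT NULL.
    const string CreateTableSql = @"
        CREATE TABLE dbo.People (
            ID   INT IDENTITY(1,1) PRIMARY KEY,
            Name NVARCHAR(100) NOT NULL
        );";

    static void CreateTable(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(CreateTableSql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}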
From "How do I find a 'gap' in running counter with SQL?":
select MIN(ID)
from ( select 0 AS ID
       union all
       select [YourIdColumn] + 1
       from [YourTable]
       where --Filter the rest of your key--
     ) foo
left join [YourTable]
       on [YourIdColumn] = ID
      and --Filter the rest of your key--
where [YourIdColumn] is null
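To wire that into the C# function the question asks for, the same pattern collapses to one scalar query. A minimal sketch, seeded with 1 on the assumption that your IDs start at 1 (the dbo.People name is illustrative); for the example table {1, 2, 4, 5} it returns 3, and for an empty table it returns 1:

using System.Data.SqlClient;

class GapFinder
{
    // Returns the smallest unused ID.
    static int FindFirstGap(string connectionString)
    {
        const string sql = @"
            SELECT MIN(foo.ID)
            FROM ( SELECT 1 AS ID
                   UNION ALL
                   SELECT ID + 1 FROM dbo.People ) foo
            LEFT JOIN dbo.People p ON p.ID = foo.ID
            WHERE p.ID IS NULL;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}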
I have the following query which is executed in a single string through C#.
SET @out_param := '';

SELECT sightinguid
FROM pc
INNER JOIN c ON uid = id
WHERE
    -- ... other conditions
    AND (@out_param := CONCAT_WS(',', sightinguid, @out_param))
LIMIT 50 FOR UPDATE;

UPDATE pc
SET last_accessed_timestamp = NOW()
WHERE sightinguid IN (@out_param);

SELECT @out_param;
I am basically trying to put the first 50 values of the first query in a comma-separated string and return this string at the end. Before doing so, I would like the update statement to run on those same records. However, only the very first sightinguid is being updated. When I hardcode multiple values in the sightinguid IN (@out_param) part, it works and updates them all, so I am assuming something is wrong with that part.
I cannot put the SELECT in a subquery and update from there because of the LIMIT 50 part: MySQL does not allow LIMIT in that kind of subquery.
Any ideas?
As you found, you can't use IN like that with a variable: @out_param is one comma-separated string, so IN compares sightinguid against a single value rather than a list.
Anyway, a simple workaround would be to use a temporary table to store information between the two queries:
CREATE TEMPORARY TABLE temp (
    sightinguid <same type as pc.sightinguid>
);

INSERT INTO temp
SELECT sightinguid  -- select1
FROM pc
INNER JOIN c ON uid = id
WHERE
    -- ... other conditions
    AND (@out_param := CONCAT_WS(',', sightinguid, @out_param))
LIMIT 50 FOR UPDATE;

UPDATE pc
SET last_accessed_timestamp = NOW()
WHERE sightinguid IN (SELECT sightinguid FROM temp);

DROP TABLE temp;
SELECT @out_param;
If temporary tables are not an option (whatever the reason), then you're going to have to do something like what is suggested here or here: basically, limit a subquery of the subquery. Like:
UPDATE pc
SET last_accessed_timestamp = NOW()
WHERE sightinguid IN (
    SELECT sightinguid FROM (
        SELECT sightinguid  -- select2
        FROM pc
        INNER JOIN c ON uid = id
        WHERE -- ... other conditions
        LIMIT 50
    ) tmp
)
Also, one more thing of note: using LIMIT without an ORDER BY is non-deterministic, i.e. the row order can differ between executions. So, following the example above, you COULD get two different result sets from select1 and select2.
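Since the question runs this through C# anyway, here is how the temporary-table variant might look as separate commands on one connection (MySQL temporary tables are only visible to the connection that created them), returning the ids as a list instead of the @out_param string. A rough sketch using the MySql.Data connector; table and column names follow the question, and CHAR(36) for sightinguid is an assumption:

using System.Collections.Generic;
using MySql.Data.MySqlClient;

class BatchTouch
{
    // Marks up to 50 matching rows and returns their ids. All commands
    // share one connection because the temporary table is per-connection.
    static List<string> TouchBatch(string connectionString)
    {
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                Exec(conn, tx, "CREATE TEMPORARY TABLE temp (sightinguid CHAR(36))");
                Exec(conn, tx, @"INSERT INTO temp
                                 SELECT sightinguid
                                 FROM pc INNER JOIN c ON uid = id
                                 -- ... other conditions
                                 LIMIT 50 FOR UPDATE");
                Exec(conn, tx, @"UPDATE pc SET last_accessed_timestamp = NOW()
                                 WHERE sightinguid IN (SELECT sightinguid FROM temp)");

                var ids = new List<string>();
                using (var cmd = new MySqlCommand("SELECT sightinguid FROM temp", conn, tx))
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        ids.Add(reader.GetString(0));

                tx.Commit();
                return ids;
            }
        }
    }

    static void Exec(MySqlConnection conn, MySqlTransaction tx, string sql)
    {
        using (var cmd = new MySqlCommand(sql, conn, tx))
            cmd.ExecuteNonQuery();
    }
}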
So let's say I have a table in SQL Server that serves as a queue for items that need processing. Something like this:
Id (bigint)
BatchGuid (guid)
BatchProcessed (bit)
...
...along with some other columns describing the item that needs to be processed, etc. So there are many running consumers that add records to this table as needed to indicate that an item needs to be processed.
So now let's say I have a job that is in charge of getting a batch of items from this table and processing them. Say we want to let it process 10 at a time. Now also assume that this job can have many instances running at once, so it is concurrently accessing the table (along with any other consumers who may be adding new records to the queue).
I was planning to do something like this:
using (var tx = new TransactionScope(TransactionScopeOption.Required,
       new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
{
    var batchGuid = Guid.NewGuid();
    // executeSql here is shorthand for running a parameterized SqlCommand
    executeSql("update top(10) [QueueTable] set [BatchGuid] = @batchGuid where [BatchGuid] is null");
    var itemsToProcess = executeSql("select * from [QueueTable] where [BatchGuid] = @batchGuid");
    tx.Complete();
}
So basically what I'd be doing is starting a transaction as serializable, marking 10 items with a specific GUID, then getting those 10 items, then committing.
Is this a feasible strategy? I believe the isolation level of serializable will basically lock the whole table to prevent read/write until the transaction is complete - is this correct? Basically the transaction will block all other read/write operations on the table? I believe this is what I want in this case as I don't want to read dirty data and I don't want concurrent running jobs to stomp on each other when marking a batch of 10 to process.
Any insights as to whether I'm on the right track with this would be much appreciated. If there are better ways to accomplish this I'd welcome alternatives as well.
Serializable isolation mode does not necessarily lock the whole table. If you have an index on BatchGuid you will probably do ok, but if not then SQL will probably escalate to a table lock.
A few things you may want to look at:
Using the OUTPUT clause you can combine your UPDATE and SELECT into one query.
You may need the UPDLOCK hint if you have multiple processes running this query.
You can do this in a single statement if you use the OUTPUT clause:
UPDATE TOP (10) [QueueTable]
SET [BatchGuid] = @batchGuid
OUTPUT inserted.*
WHERE [BatchGuid] IS NULL;
Or more specifically:
var itemsToProcess = executeSql("update top(10) [QueueTable] set [BatchGuid] = @batchGuid output inserted.* where [BatchGuid] is null");
It is personal preference, I suppose, but I have never been a fan of the UPDATE TOP(n) syntax, because you can't specify an ORDER BY, and in most cases when specifying TOP you want to specify an ORDER BY too. I much prefer using something like:
UPDATE q
SET [BatchGuid] = @batchGuid
OUTPUT inserted.*
FROM ( SELECT TOP (10) *
       FROM dbo.QueueTable
       WHERE BatchGuid IS NULL
       ORDER BY ID
     ) AS q;
ADDENDUM
In response to the comment, I don't believe there is any chance of a race condition, though I was not 100% certain. The reason I don't believe so is that although the query reads as a SELECT plus an UPDATE, that is syntactic sugar: it is just an update, and it uses exactly the same plan and locks as the TOP query. However, since I didn't know for sure, I decided to test.
First, I set up a sample table in tempdb, plus a logging table to record updated IDs:
USE TempDB;
GO
CREATE TABLE dbo.T (ID BIGINT NOT NULL IDENTITY PRIMARY KEY, Col UNIQUEIDENTIFIER NULL);
INSERT dbo.T (Col)
SELECT TOP 1000000 NULL
FROM sys.all_objects a, sys.all_objects b;
CREATE TABLE dbo.T2 (ID BIGINT NOT NULL PRIMARY KEY);
Then in 10 different SSMS windows I ran this:
WHILE 1 = 1
BEGIN
    DECLARE @ID UNIQUEIDENTIFIER = NEWID();

    UPDATE t
    SET Col = @ID
    OUTPUT inserted.ID INTO dbo.T2 (ID)
    FROM ( SELECT TOP 10 *
           FROM dbo.T
           WHERE Col IS NULL
           ORDER BY ID
         ) AS t;

    IF @@ROWCOUNT = 0
        RETURN;
END
The whole process ran for 20 minutes, updating ~500,000 rows, before I stopped all 10 threads. Updating the same row twice would have thrown a primary key violation on the insert into T2 and killed that thread, and all 10 threads had to be stopped manually, so there was no race condition. To confirm this, I ran the following:
SELECT Col, COUNT(*)
FROM dbo.T
WHERE Col IS NOT NULL
GROUP BY Col
HAVING COUNT(*) <> 10;
Which, as expected, returned no rows.
I am happy to be proved wrong, and to concede I was lucky that none of those iterations clashed, but I don't believe it was luck. I really believe there is a single lock, so it doesn't matter whether you wrap it in a transaction; you just need the correct isolation level.
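Tying this back to the C# in the question, the claim-a-batch step can then be that one statement. A hedged sketch: the READPAST hint alongside UPDLOCK is a common addition so concurrent workers skip rows another worker has already locked instead of blocking on them, and the column names follow the question:

using System;
using System.Data;
using System.Data.SqlClient;

class QueueWorker
{
    // Atomically claims up to 10 unclaimed rows and returns them.
    // UPDLOCK stops two workers claiming the same rows; READPAST lets
    // a worker skip rows that another worker currently has locked.
    static DataTable ClaimBatch(string connectionString)
    {
        const string sql = @"
            UPDATE q
            SET    BatchGuid = @batchGuid
            OUTPUT inserted.*
            FROM ( SELECT TOP (10) *
                   FROM dbo.QueueTable WITH (UPDLOCK, READPAST)
                   WHERE BatchGuid IS NULL
                   ORDER BY Id ) AS q;";

        var batch = new DataTable();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@batchGuid", Guid.NewGuid());
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                batch.Load(reader);
        }
        return batch;
    }
}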
First of all, I am sorry if this question is too obvious; I am quite new to SQL.
So, I have a list of IDs (of variable length, depending on how many products the user chooses), and I want to check whether all of them are in a table. If any one of them is not, the result of the query should be empty; if all of them are there, the result should be all the rows with those IDs.
How can I do this?
Best regards,
Flavio
Do a LEFT JOIN from the list to the table on the ID field. You'll get a null where there is no matching record.
You can even add a WHERE clause like WHERE List.ID IS NULL to see only the IDs that aren't in the table.
Edit: Original Poster did not say they were using C# when I wrote this answer
UNTESTED:
Not sure if this is the most efficient, but it seems like it should work.
First it counts how many items from your list are in the table; then it cross joins that one-row count to a query returning the rows for your list, keeping the results only when the count matches the size of the list you provided.
SELECT *
FROM Table
CROSS JOIN ( SELECT COUNT(*) cnt
             FROM Table
             WHERE ID IN (yourlist) ) b
WHERE b.cnt = yourCount
  AND ID IN (yourlist)
Running two IN statements seems like it would be slow overall, but my first step when writing SQL is usually to get something that works, then improve performance if needed.
Get the list of ids into a table (you can pass them as a table-valued parameter to a stored proc), then in the stored proc write the following, assuming the list of ids from C# is in the table variable @idList:
SELECT *
FROM myTable
WHERE id IN (SELECT id FROM @idList)
  AND NOT EXISTS ( SELECT *
                   FROM @idList
                   WHERE id NOT IN (SELECT id FROM myTable) )
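On the C# side, the list travels as a table-valued parameter. A rough sketch: it assumes you have created a user-defined table type and a stored procedure wrapping the query above (the names dbo.IdList and dbo.GetRowsIfAllPresent are illustrative):

using System.Data;
using System.Data.SqlClient;

class IdListCheck
{
    // Assumes: CREATE TYPE dbo.IdList AS TABLE (id INT NOT NULL);
    // and a proc dbo.GetRowsIfAllPresent(@idList dbo.IdList READONLY)
    // containing the query above.
    static DataTable GetRowsIfAllPresent(string connectionString, int[] ids)
    {
        var idTable = new DataTable();
        idTable.Columns.Add("id", typeof(int));
        foreach (var id in ids)
            idTable.Rows.Add(id);

        var result = new DataTable();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetRowsIfAllPresent", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var p = cmd.Parameters.AddWithValue("@idList", idTable);
            p.SqlDbType = SqlDbType.Structured;
            p.TypeName = "dbo.IdList";

            conn.Open();
            using (var reader = cmd.ExecuteReader())
                result.Load(reader);
        }
        return result;
    }
}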
I am trying to insert null into a datetime column which allows null. My query seems to work fine, but it is putting 1900-01-01 00:00:00.000 instead of null into the datetime column. Why is this happening, and how do I fix it?
I created my own table to test this, and nulling is NOT a problem there. But it is a problem in another database, so I think it must be something to do with that database.
When inserting with an INSERT query, don't specify the column name and don't give it any value. That inserts null into the field, provided the column has no default constraint.
For example, if Entry_Date is my nullable datetime column in the abc table, then my insert statement would be:
Insert into abc (Entry_Id, Entry_Value) values (1, 1000);
By not mentioning the column, it is left null. Hope it helps.
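If the insert is built in C#, the parameter value is another classic culprit: DBNull.Value is what writes SQL NULL, while an empty string converts to 1900-01-01 in a datetime column, which is exactly the symptom described. A minimal sketch against the abc table from the example:

using System;
using System.Data.SqlClient;

class NullDateInsert
{
    // Pass DBNull.Value (not C# null, and never "") to store SQL NULL.
    static void Insert(string connectionString, DateTime? entryDate)
    {
        const string sql =
            "INSERT INTO abc (Entry_Id, Entry_Value, Entry_Date) " +
            "VALUES (@id, @value, @date);";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", 1);
            cmd.Parameters.AddWithValue("@value", 1000);
            cmd.Parameters.AddWithValue("@date",
                entryDate.HasValue ? (object)entryDate.Value : DBNull.Value);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}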
To see all the triggers in the database, use the following query:
select
t.name as [TableName],
tr.name as [TriggerName],
m.[definition]
from sys.triggers tr
join sys.sql_modules m on m.object_id = tr.object_id
join sys.tables t on t.object_id = tr.parent_id
To see all the default constraints, use the following query:
select
t.name as [TableName],
c.name as [ColumnName],
dc.[name] as [ConstraintName],
dc.[definition]
from sys.tables t
join sys.columns c on c.object_id = t.object_id
join sys.objects do on do.object_id = c.default_object_id
join sys.default_constraints dc on dc.object_id= do.object_id
If your insert statement is OK, then the only likely reason is a trigger that alters the inserted value (providing there isn't a bug in your SQL Server :-)).
So check whether the table you're inserting into has triggers and what they do.
To see the list of triggers, either select from sys.triggers in the database containing the table, or in SQL Server Management Studio expand the table in Object Explorer and open Triggers; then you can inspect each one. Check INSTEAD OF triggers first, but have a look at the AFTER triggers too if the INSTEAD OF triggers don't explain it.
The other option is that the insert statement has a bug and the column falls back to a 1900 default. In that case, are you sure you are inserting into the column you intend? Do you use INSERT Table (list of columns) VALUES, and do the column order and value order match?
In my application, I want to show the records newly added by an import operation in a GridView. Is there any method in SQL to retrieve the newly added rows?
I tried doing it in code by taking the difference between the table contents before and after the insertion, and it works, but it makes the application very slow. So I want to do it in the database itself.
I'm using MySQL and ASP.NET.
Eg:
table may have these records before the import operation
ID Name
1 A
2 B
3 C
and after import the table may be like this.
ID Name
1 A
2 B
3 C
4 D
5 E
6 F
I want result like
ID Name
4 D
5 E
6 F
You need an AUTO_INCREMENT column defined on the table, or alternatively you can use a TIMESTAMP field, to retrieve newly added records. Try this:
SELECT *
FROM table_name
ORDER BY id DESC
LIMIT 10;
For a single-row insert you can use LAST_INSERT_ID() after your INSERT query:
SELECT LAST_INSERT_ID();
For multi-row insert you can follow these steps:
START TRANSACTION;
SELECT MAX(id) INTO @var_max_id FROM table_name;
INSERT INTO table_name VALUES (..), (..), ...;
SELECT MAX(id) INTO @var_max_id_new FROM table_name;
COMMIT;

SELECT *
FROM table_name
WHERE id BETWEEN (@var_max_id + 1) AND @var_max_id_new;
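From C#, the connector exposes the same information directly, which avoids the extra MAX(id) round trips: MySqlCommand.LastInsertedId returns the first auto-increment value the INSERT generated, and the affected-row count gives you the rest. A hedged sketch; it assumes a plain multi-row INSERT, whose ids MySQL allocates as one consecutive block:

using MySql.Data.MySqlClient;

class ImportExample
{
    // Inserts several rows and returns the id range they received.
    // LastInsertedId is the id of the FIRST inserted row; a plain
    // multi-row INSERT gets one consecutive block of ids.
    static (long First, long Last) InsertBatch(string connectionString)
    {
        const string sql =
            "INSERT INTO table_name (Name) VALUES ('D'), ('E'), ('F');";

        using (var conn = new MySqlConnection(connectionString))
        using (var cmd = new MySqlCommand(sql, conn))
        {
            conn.Open();
            int rows = cmd.ExecuteNonQuery();   // 3
            long first = cmd.LastInsertedId;    // id assigned to 'D'
            return (first, first + rows - 1);
        }
    }
}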
I think this will be simpler:
SELECT MAX(id) INTO @Max_table_Id FROM table;
-- insert operation here
SELECT * FROM table WHERE id > @Max_table_Id;
In case you use auto incremental IDs for your records, you can use:
SELECT * FROM [table] ORDER BY [id column] DESC LIMIT [number of records]
Otherwise you should add a TIMESTAMP column to your records for this purpose and select by this column.
Personally, if there is an option, I wouldn't use the record IDs for this, as that is not what they are for. Record IDs can change throughout the lifetime of an application, and they don't necessarily represent the order in which the items were added, especially in data import/export scenarios. I'd prefer to create dedicated columns to store such information, e.g. "CreatedAt", "ModifiedAt".
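For instance, a CreatedAt column that MySQL stamps automatically lets you select an import's rows by time rather than by id. A small sketch; the schema change and names are illustrative:

using System;
using MySql.Data.MySqlClient;

class CreatedAtExample
{
    // One-time schema change, run once against the database:
    //   ALTER TABLE table_name
    //   ADD COLUMN CreatedAt TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;

    // After an import, fetch everything stamped since the import began.
    static MySqlDataReader NewRowsSince(MySqlConnection openConnection, DateTime importStartedAt)
    {
        var cmd = new MySqlCommand(
            "SELECT * FROM table_name WHERE CreatedAt >= @since", openConnection);
        cmd.Parameters.AddWithValue("@since", importStartedAt);
        return cmd.ExecuteReader();
    }
}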