Unable to update desired rows - C#

I have the following query, which is executed as a single string through C#.
SET @out_param := '';
SELECT
    sightinguid
FROM
    pc
INNER JOIN
    c ON uid = id
WHERE
    -- ... other conditions
    AND (@out_param := CONCAT_WS(',', sightinguid, @out_param))
LIMIT 50 FOR UPDATE;
UPDATE pc
SET
    last_accessed_timestamp = NOW()
WHERE
    sightinguid IN (@out_param);
SELECT @out_param;
I am basically trying to put the first 50 values of the first query into a comma-separated string, and return this string at the end. Before doing so, I would like the UPDATE statement to run on those same records. However, only the very first sightinguid is being updated. When I hardcode multiple values in the sightinguid IN (@out_param) part it works and updates them all - so I am assuming there is something wrong with that part.
I cannot put the SELECT in a subquery and update from there, because of the LIMIT 50 part, since MySQL does not allow LIMIT inside an IN subquery.
Any ideas?

As you suspected, I don't think you can use IN like that with a variable: the whole comma-separated string is compared as a single value, which is why only the first sightinguid matches.
Anyway, a simple workaround would be to use a temporary table to store information between the two queries:
CREATE TEMPORARY TABLE temp(
    sightinguid <type of sightinguid>
);
INSERT INTO temp
SELECT -- select1
    sightinguid
FROM
    pc
INNER JOIN
    c ON uid = id
WHERE
    -- ... other conditions
    AND (@out_param := CONCAT_WS(',', sightinguid, @out_param))
LIMIT 50 FOR UPDATE;
UPDATE pc
SET
    last_accessed_timestamp = NOW()
WHERE
    sightinguid IN (SELECT sightinguid FROM temp);
DROP TABLE temp;
SELECT @out_param;
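Since the whole thing is sent as a single string from C#, the batch above can be run with one MySqlCommand. A minimal sketch, assuming MySql.Data, placeholder connection-string values and an INT sightinguid; "Allow User Variables=true" is needed so @out_param is not mistaken for a command parameter:
using System;
using MySql.Data.MySqlClient;

class BatchRunner
{
    static void Main()
    {
        // Connection string values are placeholders.
        var connString = "Server=localhost;Database=mydb;Uid=user;Pwd=pass;Allow User Variables=true";

        const string sql = @"
            SET @out_param := '';
            CREATE TEMPORARY TABLE temp (sightinguid INT);  -- assuming sightinguid is INT
            INSERT INTO temp
                SELECT sightinguid
                FROM pc
                INNER JOIN c ON uid = id
                WHERE /* ... other conditions ... */
                    (@out_param := CONCAT_WS(',', sightinguid, @out_param))
                LIMIT 50 FOR UPDATE;
            UPDATE pc
                SET last_accessed_timestamp = NOW()
                WHERE sightinguid IN (SELECT sightinguid FROM temp);
            DROP TEMPORARY TABLE temp;
            SELECT @out_param;";

        using (var conn = new MySqlConnection(connString))
        using (var cmd = new MySqlCommand(sql, conn))
        {
            conn.Open();
            // Only the final SELECT returns a result set, so ExecuteScalar yields the CSV string.
            var csv = Convert.ToString(cmd.ExecuteScalar());
            Console.WriteLine(csv);
        }
    }
}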
If temporary tables are not an option (whatever the reason), then you're going to have to do something like what is suggested here or here: basically, LIMIT a subquery of the subquery. Like:
UPDATE pc
SET
    last_accessed_timestamp = NOW()
WHERE
    sightinguid IN (
        SELECT sightinguid FROM (
            SELECT -- select2
                sightinguid
            FROM pc
            INNER JOIN c
                ON uid = id
            WHERE -- ... other conditions
            LIMIT 50
        ) tmp
    )
Also, one more thing of note that I forgot to mention previously: using LIMIT without ORDER BY can produce non-deterministic results, i.e. the rows can come back in a different order each time. So, following the example I wrote, you COULD get two different result sets from select1 and select2.


Execute SELECT for all returned rows from another SELECT within the same query

With this query:
SELECT id FROM org.employees WHERE {some_condition}
For every row from the above query, I need to call:
SELECT * FROM org.work_schedule(@employeeId, @fromDate, @toDate)
where org.work_schedule is a table-valued function that processes all of the employee's available work schedules and constraints and returns two DATETIME columns (start, end) representing the availability of the given employee for the provided date range.
I am thinking of using a cursor on the first query and feeding a temporary table that would be returned. Is this the only solution?
The project is in C# and I could also accomplish this in C# directly, but I suspect it would be more efficient to do this entirely in SQL (SQL Server 2008).
This seems localized, so I would generalize the question as:
How can I execute a query (SELECT) for every row returned by another query (SELECT) and return all the results in one call (dynamically doing SELECT UNION SELECT UNION ...)?
Thanks
You should use OUTER APPLY or CROSS APPLY instead of a cursor:
SELECT *
FROM ( SELECT id
       FROM org.employees
       WHERE {some_condition}) A
OUTER APPLY org.work_schedule(A.id, @fromDate, @toDate) B
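If it helps on the C# side, the APPLY query can be issued as one SqlCommand. A rough sketch, where the connection string, the filter on org.employees and the start/end column names of the function's output are assumptions:
using System;
using System.Data;
using System.Data.SqlClient;

class ScheduleQuery
{
    static void Main()
    {
        var connString = "Server=.;Database=Org;Integrated Security=true"; // placeholder

        const string sql = @"
            SELECT A.id, B.[start], B.[end]
            FROM ( SELECT id
                   FROM org.employees
                   WHERE department_id = @departmentId ) A   -- example condition
            OUTER APPLY org.work_schedule(A.id, @fromDate, @toDate) B;";

        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.Add("@departmentId", SqlDbType.Int).Value = 1;
            cmd.Parameters.Add("@fromDate", SqlDbType.DateTime).Value = new DateTime(2013, 1, 1);
            cmd.Parameters.Add("@toDate", SqlDbType.DateTime).Value = new DateTime(2013, 1, 31);

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // One row per employee per availability window; start/end come back
                // as NULL for employees with no availability (OUTER APPLY keeps them).
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1} - {2}", reader["id"], reader["start"], reader["end"]);
                }
            }
        }
    }
}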

sp_executesql runs in milliseconds in SSMS but takes 3 seconds from ADO.NET [duplicate]

This question already has an answer here: Stored Proc slower from application than Management Studio (1 answer). Closed 9 years ago.
This is my dynamic query, used on a search form, which runs in milliseconds in SSMS (roughly 300 to 400 ms):
exec sp_executesql N'set arithabort off;
set transaction isolation level read uncommitted;
With cte as
(Select ROW_NUMBER() OVER
(Order By Case When d.OldInstrumentID IS NULL
THEN d.LastStatusChangedDateTime Else d.RecordingDateTime End
desc) peta_rn,
d.DocumentID
From Documents d
Inner Join Users u on d.UserID = u.UserID
Inner Join IGroupes ig on ig.IGroupID = d.IGroupID
Inner Join ITypes it on it.ITypeID = d.ITypeID
Where 1=1
And (CreatedByAccountID = @0 Or DocumentStatusID = @1 Or DocumentStatusID = @2 )
And (d.JurisdictionID = @3 Or DocumentStatusID = @4 Or DocumentStatusID = @5)
AND ( d.DocumentStatusID = 9 )
)
Select d.DocumentID, d.IsReEfiled, d.IGroupID, d.ITypeID, d.RecordingDateTime,
d.CreatedByAccountID, d.JurisdictionID,
Case When d.OldInstrumentID IS NULL THEN d.LastStatusChangedDateTime
Else d.RecordingDateTime End as LastStatusChangedDateTime,
dbo.FnCanChangeDocumentStatus(d.DocumentStatusID,d.DocumentID) as CanChangeStatus,
d.IDate, d.InstrumentID, d.DocumentStatusID,ig.Abbreviation as IGroupAbbreviation,
u.Username, j.JDAbbreviation, inf.DocumentName,
it.Abbreviation as ITypeAbbreviation, d.DocumentDate,
ds.Abbreviation as DocumentStatusAbbreviation,
Upper(dbo.GetFlatDocumentName(d.DocumentID)) as FlatDocumentName
From Documents d
Left Join IGroupes ig On d.IGroupID = ig.IGroupID
Left Join ITypes it On d.ITypeID = it.ITypeID
Left Join Users u On u.UserID = d.UserID
Left Join DocumentStatuses ds On d.DocumentStatusID = ds.DocumentStatusID
Left Join InstrumentFiles inf On d.DocumentID = inf.DocumentID
Left Join Jurisdictions j on j.JurisdictionID = d.JurisdictionID
Inner Join cte on cte.DocumentID = d.DocumentID
Where 1=1
And peta_rn >= @6 AND peta_rn <= @7
Order by peta_rn',
N'@0 int,@1 int,@2 int,@3 int,@4 int,@5 int,@6 bigint,@7 bigint',
@0=44,@1=5,@2=9,@3=1,@4=5,@5=9,@6=94200,@7=94250
This SQL is built in C# code and the WHERE clauses are added dynamically based on the values the user has searched for in the search form. It takes roughly 3 seconds to move from one page to the next. I already have the necessary indexes on most of the columns I search on.
Any idea why my ADO.NET code would be slow?
Update: Not sure if execution plans would help but here they are:
It is possible that SQL Server has created an inappropriate query plan for the ADO.NET connection. We have seen similar issues with ADO; the usual solution is to clear any cached query plans and run the slow query again - this may produce a better plan.
The most general way to clear the cached plans is to update statistics for the involved tables, e.g.:
UPDATE STATISTICS Documents WITH FULLSCAN
Do the same for the other tables involved and then run your slow query from ADO.NET (without running it in SSMS first).
Note that such timing inconsistencies may hint at a bad query or database design - at least for us that is usually the case :)
If you run a query repeatedly in SSMS, the database may re-use a previously created execution plan, and the required data may already be cached in memory.
There are a couple of things I notice in your query:
- the CTE joins Users, IGroupes and ITypes, but the joined records are not used in the SELECT
- the CTE performs an ORDER BY on a calculated expression (notice the 85% cost of the (unindexed) Sort); replacing the CASE expression with a persisted computed column that can be indexed would probably speed up execution
- note that the ORDER BY is executed on the data resulting from joining 4 tables
- the WHERE condition of the CTE states AND d.DocumentStatusID = 9, yet also ANDs in conditions on other DocumentStatusID values, which that filter makes redundant
- paging is performed on the result of 8 joined tables; most likely, creating an intermediate CTE that filters the first CTE on peta_rn would improve performance
.NET strings are Unicode by default, which equates to NVARCHAR as opposed to VARCHAR.
When you are doing a WHERE ID = @foo in .NET, you are likely to be implicitly doing
WHERE CONVERT(NVARCHAR, ID) = @foo
The result is that this WHERE clause can't use the index and the table must be scanned. The solution is to pass each parameter into the SqlCommand as a DbParameter with the type explicitly set to VARCHAR (SqlDbType.VarChar / DbType.AnsiString) in the case of strings.
A similar situation could of course occur with int types if the .NET parameter is "wider" than the SQL column equivalent.
PS The easiest way to "prove" this issue is to run your query in SSMS with the following declared above it
DECLARE @p0 INT = 123
DECLARE @p1 NVARCHAR(100) = 'foobar' -- etc.
and compare with
DECLARE @p0 INT = 123
DECLARE @p1 VARCHAR(100) = 'foobar' -- etc.
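In C# that means typing the parameter explicitly rather than relying on AddWithValue. A rough sketch, where the SomeVarcharColumn name and the length 50 are just placeholders:
using System.Data;
using System.Data.SqlClient;

class VarcharParameterExample
{
    static object FindDocumentId(string connString, string name)
    {
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(
            "SELECT TOP 1 DocumentID FROM Documents WHERE SomeVarcharColumn = @name", conn))
        {
            // AddWithValue would send the string as NVARCHAR, forcing an implicit
            // conversion against a VARCHAR column and preventing an index seek.
            // Declaring the parameter as VARCHAR with the column's length avoids that.
            cmd.Parameters.Add("@name", SqlDbType.VarChar, 50).Value = name;
            conn.Open();
            return cmd.ExecuteScalar();
        }
    }
}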

Find Unused ID in Database using C# and SQL

I am trying to code a simple database management tool in C#. I am in the process of coding a function to insert a new row into the database, but I have run into a problem. I need to be able to detect which ID numbers are not already taken. I have done some research but haven't found any clear answers.
Example table:
ID Name
---------------
1 John
2 Linda
4 Mark
5 Jessica
How would I add a function that automatically detects that ID 3 is empty, and places a new entry there?
Edit: My real question is: when I want to insert a new row via C#, how do I handle a column which is auto-increment? An example would be fantastic :)
I don't like giving answers like this...but I am going to anyway on this occasion.
Don't
What if you store more data in another table which has a foreign key to the ID in this table? If you reuse numbers you are asking for trouble with referential integrity down the line.
I assume your field is an int? If so, an auto increment should give more than enough for most purposes. It makes your insert simpler, and maintains integrity.
Edit: You might have a very good reason to do it, but I wanted to make the point in case somebody comes along and sees this later on who thinks it is a good idea.
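To address the edit (how to insert when the column is auto-increment): simply leave the ID out of the INSERT and, if you need the generated value, read SCOPE_IDENTITY() back. A sketch for SQL Server, assuming a hypothetical People(ID IDENTITY, Name) table like the example and a placeholder connection string:
using System;
using System.Data.SqlClient;

class AutoIncrementInsert
{
    static void Main()
    {
        var connString = "Server=.;Database=MyDb;Integrated Security=true"; // placeholder
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand(
            // ID is omitted: the IDENTITY column is filled in by the database.
            "INSERT INTO People (Name) VALUES (@name); SELECT CAST(SCOPE_IDENTITY() AS int);",
            conn))
        {
            cmd.Parameters.AddWithValue("@name", "Linda");
            conn.Open();
            int newId = (int)cmd.ExecuteScalar();   // the ID the database just generated
            Console.WriteLine(newId);
        }
    }
}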
SQL:
SELECT ID From TABLE
OR
SELECT t.ID
FROM ( SELECT number + 1 AS ID
FROM master.dbo.spt_values
WHERE Type = 'p'
AND number <= ( SELECT MAX(ID) - 1
FROM #Table
)
) t
LEFT JOIN #Table ON t.ID = [#Table].ID
WHERE [#Table].ID IS NULL
C#
DataTable dt = new DataTable();
// Populate dt with the SQL above (e.g. SELECT ID FROM TABLE); assumes at least one row.
var tableInts = dt.Rows.Cast<DataRow>().Select(row => row.Field<int>("ID")).ToList<int>();
// Range up to Max + 1 so a candidate always exists, even when the existing IDs have no gap.
var allInts = Enumerable.Range(1, tableInts.Max() + 1).ToList();
var minInt = allInts.Except(tableInts).Min();
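The "populate dt with SQL" step can be a plain SqlDataAdapter fill; a small sketch, where the connection string and the People table name are placeholders:
using System.Data;
using System.Data.SqlClient;

class IdLoader
{
    static DataTable LoadIds(string connString)
    {
        var dt = new DataTable();
        // Pulls only the ID column, which is all the gap-finding code above needs.
        using (var adapter = new SqlDataAdapter("SELECT ID FROM People", connString))
        {
            adapter.Fill(dt);
        }
        return dt;
    }
}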
SELECT #temp.Id
FROM #temp
LEFT JOIN table1 ON #temp.Id = table1.Id
WHERE table1.Id IS NULL
Try this?
But my suggestion is: just auto-increment the field.
You do that by setting the IDENTITY property of the column and making it the primary key too (not null).
To handle inserts you might need triggers, which are like stored procedures, but they can act in place of an insert, update or delete, or before/after an insert/update/delete.
Google triggers.
from How do I find a "gap" in running counter with SQL?
select MIN(ID)
from (
    select 0 ID
    union all
    select [YourIdColumn] + 1
    from [YourTable]
    where --Filter the rest of your key--
) foo
left join [YourTable]
    on [YourIdColumn] = ID
    and --Filter the rest of your key--
where [YourIdColumn] is null

How to get the newly added records

In my application, I want to show the records newly added by an import operation in a GridView. Is there any method in SQL to retrieve the newly added rows?
I tried doing it in code by taking the difference before and after the insertion, and it works perfectly, but it makes the application very slow. So I want to do it in the database itself.
I'm using MySQL and ASP.NET.
Eg:
table may have these records before the import operation
ID Name
1 A
2 B
3 C
and after import the table may be like this.
ID Name
1 A
2 B
3 C
4 D
5 E
6 F
I want result like
ID Name
4 D
5 E
6 F
You need to have an AUTO_INCREMENT column defined on the table, or alternatively you can use a TIMESTAMP field, to retrieve newly added records. Try this:
SELECT *
FROM table_name
ORDER BY id DESC
LIMIT 10;
For a single-row insert you can use LAST_INSERT_ID() after your INSERT query:
SELECT LAST_INSERT_ID();
For a multi-row insert you can follow these steps:
START TRANSACTION;
SELECT MAX(id) INTO @var_max_id FROM table_name;
INSERT INTO table_name VALUES(..),(..),...;
SELECT MAX(id) INTO @var_max_id_new FROM table_name;
COMMIT;
SELECT *
FROM table_name
WHERE id BETWEEN (@var_max_id + 1) AND @var_max_id_new;
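From C# (assuming the MySql.Data connector), the single-row case does not even need a separate query, since the command exposes the value LAST_INSERT_ID() would return. A sketch with a placeholder connection string and the Name column from the example:
using System;
using MySql.Data.MySqlClient;

class ImportInsert
{
    static void Main()
    {
        var connString = "Server=localhost;Database=mydb;Uid=user;Pwd=pass"; // placeholder
        using (var conn = new MySqlConnection(connString))
        using (var cmd = new MySqlCommand("INSERT INTO table_name (Name) VALUES (@name)", conn))
        {
            cmd.Parameters.AddWithValue("@name", "D");
            conn.Open();
            cmd.ExecuteNonQuery();
            // Same value SELECT LAST_INSERT_ID() would return on this connection.
            long newId = cmd.LastInsertedId;
            Console.WriteLine(newId);
        }
    }
}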
I think this will be simpler:
SELECT MAX(id) INTO @Max_table_Id FROM table;
-- insert operation here
SELECT * FROM table WHERE id > @Max_table_Id;
In case you use auto-incremented IDs for your records, you can use:
SELECT * FROM [table] ORDER BY [id column] DESC LIMIT [number of records]
Otherwise you should add a TIMESTAMP column to your records for this purpose and select by this column.
Personally, if there is an option, I wouldn't use the record IDs for this, as it is not what they are for. Record IDs can change throughout the lifetime of an application and they don't necessarily represent the order in which the items were added. Especially in data import/export scenarios. I'd prefer to create special columns to store such information, e.g. "CreatedAt", "ModifiedAt".

Returning 2 result sets from a sproc that has a CTE into a DataSet

My stored proc looks like:
WITH MYCTE(...)
AS
(
..
)
SELECT cte.*
FROM MYCTE cte
SELECT *
FROM table1
INNER JOIN MYCTE ...
Things were working fine with a single result set; then I added the last SELECT statement to my sproc and now I get an error saying it doesn't know what MYCTE is.
Why am I not allowed to do this?
Assuming it works somehow (with your advice), do I have to change my DataSet.Fill call to bring back 2 result sets (tables)?
Are you trying to return a single result set? Then you'll have to do a UNION or else use a temp table or table variable to combine them.
But to answer the question: CTEs are only good for a single SELECT (or UPDATE, DELETE, etc.); they're actually part of that one statement. So when you run the second SELECT, it has no idea what MYCTE is supposed to be.
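As for the DataSet.Fill part of the question: no change is needed on the C# side, because a single Fill call turns each result set into its own DataTable. A sketch, where the connection string and procedure name are placeholders:
using System;
using System.Data;
using System.Data.SqlClient;

class TwoResultSets
{
    static void Main()
    {
        var connString = "Server=.;Database=MyDb;Integrated Security=true"; // placeholder
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand("dbo.MySproc", conn))   // hypothetical sproc name
        using (var adapter = new SqlDataAdapter(cmd))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var ds = new DataSet();
            adapter.Fill(ds);                 // fills Tables[0] and Tables[1]

            DataTable fromCte = ds.Tables[0]; // rows from the first SELECT (the CTE query)
            DataTable joined = ds.Tables[1];  // rows from the second SELECT
            Console.WriteLine("{0} + {1} rows", fromCte.Rows.Count, joined.Rows.Count);
        }
    }
}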
