Why is exec sp_executesql much slower than inline SQL? - c#

I have tested this query in Management Studio and it executes very fast (less than a second):
declare @p_Message_0 varchar(3) = 'whh'
declare @p_CreatedTime_0 datetime = '2015-06-01'
SELECT count(1) FROM (SELECT * FROM [Logs](nolock) WHERE CONTAINS([Message], @p_Message_0) AND [CreatedTime]<@p_CreatedTime_0) t
SELECT t2.* FROM (SELECT t.*,ROW_NUMBER() OVER (ORDER BY Id DESC) as rownum FROM (SELECT * FROM [Logs](nolock) t WHERE CONTAINS([Message], @p_Message_0) AND [CreatedTime]<@p_CreatedTime_0) t) t2 WHERE rownum>0 AND rownum<=20
The execution plan looks like this:
Then I moved it into C# ADO.NET, where it runs as:
exec sp_executesql N'SELECT count(1) FROM (SELECT * FROM [Logs](nolock) WHERE CONTAINS([Message], @p_Message_0) AND [CreatedTime]<@p_CreatedTime_0) t
SELECT t2.* FROM (SELECT t.*,ROW_NUMBER() OVER (ORDER BY Id desc) as rownum FROM (SELECT * FROM [Logs](nolock) t WHERE CONTAINS([Message], @p_Message_0) AND [CreatedTime]<@p_CreatedTime_0) t) t2 WHERE rownum>0 AND rownum<=20',N'@p_Message_0 varchar(3),@p_CreatedTime_0 datetime',@p_Message_0='whh',@p_CreatedTime_0='2015-06-01'
This one runs really slowly (about 30 seconds). The execution plan looks like:
I don't know what makes these two plans different. SQL Server is 2008 R2 with SP2, and I have tried a parameter hint and OPTION (RECOMPILE); neither works for me.

Try updating statistics. The first version uses a local variable holding a recent date. Variables aren't sniffed, so you get a guessed distribution. The second version uses a parameter, which can be sniffed.
If the statistics haven't been updated recently, SQL Server will think no rows exist for that date and will build a plan on that basis, such as a nested loops plan that is estimated to execute the TVF once but actually executes it many times.
AKA the ascending date problem.
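A minimal sketch of both ideas, assuming the Logs table from the question lives in the dbo schema: refresh the statistics so the optimizer sees that rows exist for recent CreatedTime values, or, if the hints you tried did not include it, ask for the same un-sniffed estimate the variable version gets with OPTIMIZE FOR UNKNOWN:
-- Refresh statistics so recent CreatedTime values are represented (dbo schema is an assumption)
UPDATE STATISTICS dbo.Logs WITH FULLSCAN;

-- Alternatively, make the parameterized statements behave like the variable version;
-- OPTIMIZE FOR UNKNOWN gives the same density-based guess that local variables get.
SELECT count(1)
FROM (SELECT * FROM [Logs](nolock)
      WHERE CONTAINS([Message], @p_Message_0) AND [CreatedTime] < @p_CreatedTime_0) t
OPTION (OPTIMIZE FOR (@p_CreatedTime_0 UNKNOWN));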

Related

Why does System.Data.Linq generate ROW_NUMBER() for Paging instead of OFFSET/FETCH for SQL Server 2012

We are using LINQ and Entity Framework to access a SQL Server 2012 database. We are having some performance issues; after some investigation we were able to fix some of the problems, but I would like to use a SQL query with OFFSET/FETCH instead of the ROW_NUMBER() and BETWEEN syntax.
The performance difference is not huge; OFFSET/FETCH is quicker by about 10%. Do you have any idea why the generated query uses the ROW_NUMBER() and BETWEEN syntax? What can I do to force LINQ to generate an OFFSET/FETCH query?
C# code:
var orders = dc.Orders.OrderBy(q => q.LastModifiedTimestamp)
    .Skip(skipCount)
    .Take(takeCount)
    .ToList();
The currently generated query:
-- Region Parameters
DECLARE @p0 Int = 10
DECLARE @p1 Int = 10
-- EndRegion
SELECT [t2].[OrderId], [t2].[CustomerId]
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY [t1].[OrderId], [t1].[CustomerId]) AS [ROW_NUMBER],
        [t1].[OrderId], [t1].[CustomerId]
    FROM (
        SELECT DISTINCT [t0].[OrderId], [t0].[CustomerId]
        FROM [Order] AS [t0]
        ) AS [t1]
    ) AS [t2]
WHERE [t2].[ROW_NUMBER] BETWEEN @p0 + 1 AND @p0 + @p1
ORDER BY [t2].[ROW_NUMBER]
The preferred query:
SELECT *
FROM [Order]
ORDER BY LastModifiedTimestamp
OFFSET 10000 ROWS
FETCH NEXT 10000 ROWS ONLY
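For reference, here is a parameterized form of the preferred query (just a sketch of what the statement would look like if it were sent as raw SQL rather than generated by LINQ; the parameter names are placeholders):
DECLARE @skip INT = 10000
DECLARE @take INT = 10000

SELECT *
FROM [Order]
ORDER BY LastModifiedTimestamp
OFFSET @skip ROWS
FETCH NEXT @take ROWS ONLY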

SQL, is a date within range of two dates

I have a SQL Server database, managed through SQL Server Management Studio, interfacing with a C# application, and I am struggling with a certain query.
The database is for a campsite booking system, which consists of the following tables relevant to the query.
BOOKING(BookingID, StaffID, CustomerID, PitchID, StartDate, EndDate)
PITCH(PitchID, TypeOfPitch, Capacity)
One pitch can occur in many bookings.
I am looking to create a query which will check the availability of a pitch on a certain date, which is input from a dateTimePicker. The query should return the available pitches and display them in a DataGridView. Here is what I have so far.
SELECT * FROM dbo.PITCH, dbo.Booking
WHERE @Date
NOT BETWEEN dbo.BOOKING.[Start Date] AND dbo.BOOKING.[End Date]
This SQL is not working; it returns a pitch for every booking in the table.
All the C# around the SQL is working; I'm just not great at SQL queries and need some help!
Thanks in advance
SELECT * FROM dbo.PITCH
WHERE PitchID NOT IN
(
-- sub-query to take reserved pitches
select PitchID from dbo.Booking
where @Date BETWEEN [Start Date] AND [End Date]
)
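A quick way to test the query above in Management Studio is to bind @Date to a sample value; in the application the value would come in as a parameter from the dateTimePicker:
DECLARE @Date date = '2015-08-01' -- sample value only

SELECT * FROM dbo.PITCH
WHERE PitchID NOT IN
(
    SELECT PitchID FROM dbo.Booking
    WHERE @Date BETWEEN [Start Date] AND [End Date]
)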
The following should do what you want. The subquery pulls the pitches that are booked on the given date, so you want the pitches not in that query.
Select *
From dbo.PITCH
WHERE PitchID Not In (
select PitchID
from dbo.Booking
where @Date between [Start Date] And [End Date])
DECLARE @Date DATETIME
SELECT
 b.BookingID
,b.StaffID
,b.CustomerID
,b.PitchID
,b.StartDate
,b.EndDate
,p.Capacity
FROM dbo.Pitch p
INNER JOIN Booking b ON b.PitchID = p.PitchID
WHERE @Date BETWEEN b.StartDate AND b.EndDate
ORDER BY @Date DESC
Since you want to show the date picked from the picker at the top, you need an ORDER BY clause as well.
Get into the habit of writing queries like this, as it will help you solve and analyse problems more easily.
Hope it helps.

How to insert huge dummy data into SQL Server

The development team has finished their application, and as a tester I need to insert 1,000,000 records into the 20 tables for performance testing.
I went through the tables and there are relationships between all of them.
To insert that much dummy data into the tables I would need to understand the application completely in a very short span of time, and I don't have the dummy data yet either.
Is there any way in SQL Server to insert this much data?
Please share your approaches.
Currently I am planning to create the dummy data in Excel, but I am not sure about the relationships between the tables.
I found on Google that SQL Profiler will provide the order of execution, but I am still waiting for access so I can analyse this.
One more thing I found on Google is that a Red Gate tool can be used.
Is there any script or other solution to perform this task in a simple way?
I am very sorry if this is a common question; I am working on a real-world SQL scenario for the first time, but I do have knowledge of SQL.
Why don't you generate those records in SQL Server? Here is a script that generates a table variable with 1,000,000 rows:
DECLARE @values TABLE (DataValue int, RandValue INT)
;WITH mycte AS
(
    SELECT 1 DataValue
    UNION all
    SELECT DataValue + 1
    FROM mycte
    WHERE DataValue + 1 <= 1000000
)
INSERT INTO @values(DataValue,RandValue)
SELECT
    DataValue,
    convert(int, convert (varbinary(4), NEWID(), 1)) AS RandValue
FROM mycte m
OPTION (MAXRECURSION 0)

SELECT
    v.DataValue,
    v.RandValue,
    (SELECT TOP 1 [User_ID] FROM tblUsers ORDER BY NEWID())
FROM @values v
In the table variable @values you will have a random int value (column RandValue) which can be used to generate values for other columns. You also have an example of getting a random foreign key.
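For example, a sketch of using @values to fill a real table (the target table dbo.SomeTable and its columns are hypothetical; tblUsers is taken from the script above):
-- Hypothetical target table: dbo.SomeTable(SomeNumber int, SomeAmount int, User_ID int)
INSERT INTO dbo.SomeTable (SomeNumber, SomeAmount, User_ID)
SELECT
    v.DataValue,                                              -- sequential 1..1000000
    ABS(v.RandValue % 10000),                                 -- bounded value derived from RandValue
    (SELECT TOP 1 [User_ID] FROM tblUsers ORDER BY NEWID())   -- random existing foreign key
FROM @values v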
Below is a simple procedure I wrote to insert millions of dummy records into a table. I know it's not the most efficient, but it serves the purpose; for a million records it takes around 5 minutes. You need to pass the number of records you want to generate when executing the procedure.
IF EXISTS (SELECT 1 FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[DUMMY_INSERT]') AND type in (N'P', N'PC'))
BEGIN
DROP PROCEDURE DUMMY_INSERT
END
GO
CREATE PROCEDURE DUMMY_INSERT (
@noOfRecords INT
)
AS
BEGIN
    DECLARE @count int
    SET @count = 1;
    WHILE (@count <= @noOfRecords)
    BEGIN
        INSERT INTO [dbo].[LogTable] ([UserId],[UserName],[Priority],[CmdName],[Message],[Success],[StartTime],[EndTime],[RemoteAddress],[TId])
        VALUES(1,'user_'+CAST(@count AS VARCHAR(256)),1,'dummy command','dummy message.',0,convert(varchar(50),dateadd(D,Round(RAND() * 1000,1),getdate()),121),convert(varchar(50),dateadd(D,Round(RAND() * 1000,1),getdate()),121),'160.200.45.1',1);
        SET @count = @count + 1;
    END
END
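As a follow-up, a set-based variant of the same insert (a sketch that reuses the numbers-CTE idea from the first answer and assumes the same [dbo].[LogTable] columns). RAND() is evaluated only once per set-based statement, so CHECKSUM(NEWID()) is used here to vary the dates per row:
;WITH nums AS
(
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM nums WHERE n + 1 <= 1000000
)
INSERT INTO [dbo].[LogTable] ([UserId],[UserName],[Priority],[CmdName],[Message],[Success],[StartTime],[EndTime],[RemoteAddress],[TId])
SELECT
    1,
    'user_' + CAST(n AS VARCHAR(256)),
    1,
    'dummy command',
    'dummy message.',
    0,
    convert(varchar(50), dateadd(D, ABS(CHECKSUM(NEWID()) % 1000), getdate()), 121),
    convert(varchar(50), dateadd(D, ABS(CHECKSUM(NEWID()) % 1000), getdate()), 121),
    '160.200.45.1',
    1
FROM nums
OPTION (MAXRECURSION 0);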
You can use a cursor to repeat existing data.
For example, this simple code:
Declare @SYMBOL nchar(255), --sample V
@SY_ID int --sample V
Declare R2 Cursor
For SELECT [Symbol],[SY_ID] -- the columns you want to duplicate
FROM [TableName]
For Read Only;
Open R2
Fetch Next From R2 INTO @SYMBOL,@SY_ID
While (@@FETCH_STATUS <> -1)
Begin
Insert INTO [TableName] ([Symbol],[SY_ID])
Values (@SYMBOL,@SY_ID)
Fetch Next From R2 INTO @SYMBOL,@SY_ID
End
Close R2
Deallocate R2
/*wait a ... moment*/
SELECT COUNT(*) --check result
FROM [TableName]

Execute SELECT for all returned rows from another SELECT within the same query

With this query:
SELECT id FROM org.employees WHERE {some_condition}
For every row from the above query, I need to call:
SELECT * FROM org.work_schedule(@employeeId, @fromDate, @toDate)
where org.work_schedule is a table-valued function that processes all of the employee's available work schedules and constraints and returns two DATETIME columns (start, end) representing the availability of the given employee for the provided date range.
I am thinking of using a cursor on the first query to feed a temporary table that would then be returned. Is this the only solution?
The project is in C# and I could also accomplish this in C# directly, but I suspect it would be more efficient to do this entirely in SQL (SQL Server 2008).
This seems localized, so I would generalize the question as:
How can I execute a query (SELECT) for every row returned by another query (SELECT) and return the entire result in one call (dynamically doing SELECT UNION SELECT UNION ...)?
Thanks
You should use OUTER APPLY or CROSS APPLY instead of a cursor:
SELECT *
FROM ( SELECT id
FROM org.employees
WHERE {some_condition}) A
OUTER APPLY org.work_schedule(A.id, @fromDate, @toDate) B
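For example (the date values below are placeholders). OUTER APPLY keeps employees for whom the function returns no rows, with NULL start/end, while CROSS APPLY filters them out:
DECLARE @fromDate datetime = '2015-01-01' -- placeholder
DECLARE @toDate datetime = '2015-01-31' -- placeholder

SELECT e.id, ws.*
FROM org.employees AS e
OUTER APPLY org.work_schedule(e.id, @fromDate, @toDate) AS ws
-- WHERE {some_condition} -- the same filter as in the first query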

sp_executesql runs in milliseconds in SSMS but takes 3 seconds from ado.net [duplicate]

This question already has an answer here:
Stored Proc slower from application than Management Studio
(1 answer)
Closed 9 years ago.
This is my dynamic query, used on a search form, which runs in milliseconds in SSMS (roughly 300 to 400 ms):
exec sp_executesql N'set arithabort off;
set transaction isolation level read uncommitted;
With cte as
(Select ROW_NUMBER() OVER
(Order By Case When d.OldInstrumentID IS NULL
THEN d.LastStatusChangedDateTime Else d.RecordingDateTime End
desc) peta_rn,
d.DocumentID
From Documents d
Inner Join Users u on d.UserID = u.UserID
Inner Join IGroupes ig on ig.IGroupID = d.IGroupID
Inner Join ITypes it on it.ITypeID = d.ITypeID
Where 1=1
And (CreatedByAccountID = @0 Or DocumentStatusID = @1 Or DocumentStatusID = @2 )
And (d.JurisdictionID = @3 Or DocumentStatusID = @4 Or DocumentStatusID = @5)
AND ( d.DocumentStatusID = 9 )
)
Select d.DocumentID, d.IsReEfiled, d.IGroupID, d.ITypeID, d.RecordingDateTime,
d.CreatedByAccountID, d.JurisdictionID,
Case When d.OldInstrumentID IS NULL THEN d.LastStatusChangedDateTime
Else d.RecordingDateTime End as LastStatusChangedDateTime,
dbo.FnCanChangeDocumentStatus(d.DocumentStatusID,d.DocumentID) as CanChangeStatus,
d.IDate, d.InstrumentID, d.DocumentStatusID,ig.Abbreviation as IGroupAbbreviation,
u.Username, j.JDAbbreviation, inf.DocumentName,
it.Abbreviation as ITypeAbbreviation, d.DocumentDate,
ds.Abbreviation as DocumentStatusAbbreviation,
Upper(dbo.GetFlatDocumentName(d.DocumentID)) as FlatDocumentName
From Documents d
Left Join IGroupes ig On d.IGroupID = ig.IGroupID
Left Join ITypes it On d.ITypeID = it.ITypeID
Left Join Users u On u.UserID = d.UserID
Left Join DocumentStatuses ds On d.DocumentStatusID = ds.DocumentStatusID
Left Join InstrumentFiles inf On d.DocumentID = inf.DocumentID
Left Join Jurisdictions j on j.JurisdictionID = d.JurisdictionID
Inner Join cte on cte.DocumentID = d.DocumentID
Where 1=1
And peta_rn>=@6 AND peta_rn<=@7
Order by peta_rn',
N'@0 int,@1 int,@2 int,@3 int,@4 int,@5 int,@6 bigint,@7 bigint',
@0=44,@1=5,@2=9,@3=1,@4=5,@5=9,@6=94200,@7=94250
This SQL is built in C# code and the WHERE clauses are added dynamically based on the values the user has searched for in the search form. It takes roughly 3 seconds to move from one page to the next. I already have the necessary indexes on most of the columns I search on.
Any idea why my ADO.NET code would be slow?
Update: Not sure if the execution plans would help, but here they are:
It is possible that SQL Server has created an inappropriate query plan for the ADO.NET connection. We have seen similar issues with ADO; the usual solution is to clear any query plans and run the slow query again - this may create a better plan.
The most general way to clear query plans is to update statistics for the involved tables. For example:
UPDATE STATISTICS Documents WITH FULLSCAN
Do the same for the other tables involved and then run your slow query from ADO.NET (do not run it in SSMS first).
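For example, for the other tables referenced in the query (the dbo schema is an assumption):
UPDATE STATISTICS dbo.Users WITH FULLSCAN
UPDATE STATISTICS dbo.IGroupes WITH FULLSCAN
UPDATE STATISTICS dbo.ITypes WITH FULLSCAN
UPDATE STATISTICS dbo.DocumentStatuses WITH FULLSCAN
UPDATE STATISTICS dbo.InstrumentFiles WITH FULLSCAN
UPDATE STATISTICS dbo.Jurisdictions WITH FULLSCAN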
Note that such timing inconsistencies may hint at a bad query or database design - at least for us that is usually the case :)
If you run a query repeatedly in SSMS, the database may re-use a previously created execution plan, and the required data may already be cached in memory.
There are a couple of things I notice in your query:
the CTE joins Users, IGroupes and ITypes, but the joined records are not used in the SELECT
the CTE performs an ORDER BY on a calculated expression (notice the 85% cost in the (unindexed) Sort)
replacing the CASE expression with a persisted computed column, which can be indexed, would probably speed up execution
note that the ORDER BY is executed on data resulting from joining 4 tables
the WHERE condition of the CTE states AND d.DocumentStatusID = 9, yet it also ANDs conditions on other DocumentStatusID values
paging is performed on the result of 8 JOINed tables
most likely creating an intermediate CTE which filters the first CTE on peta_rn will improve performance (a sketch of this idea follows below)
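A minimal sketch of that last suggestion, showing only the shape (the dynamic filters and the full column list from the original query go where the comments indicate):
With cte as
(
    Select ROW_NUMBER() OVER (Order By Case When d.OldInstrumentID IS NULL
                                             THEN d.LastStatusChangedDateTime
                                             Else d.RecordingDateTime End desc) peta_rn,
           d.DocumentID
    From Documents d
    Where d.DocumentStatusID = 9              -- plus the other dynamic filters
),
cte_page as
(
    Select DocumentID, peta_rn
    From cte
    Where peta_rn >= @6 And peta_rn <= @7     -- page before joining the other tables
)
Select d.DocumentID, cte_page.peta_rn         -- ...remaining columns as in the original query
From cte_page
Inner Join Documents d On d.DocumentID = cte_page.DocumentID
-- Left Joins to IGroupes, ITypes, Users, DocumentStatuses, InstrumentFiles, Jurisdictions as before
Order by cte_page.peta_rn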
.NET by default uses UTF-16 strings, which equate to NVARCHAR as opposed to VARCHAR.
When you are doing a WHERE ID = @foo in .NET, you are likely to be implicitly doing
WHERE CONVERT(NVARCHAR, ID) = @foo
The result is that this WHERE clause can't use an index and must be resolved with a table scan. The solution is to pass each parameter into the SqlCommand as a DbParameter with the DbType set to VARCHAR (in the case of strings).
A similar situation can of course occur with int types if the .NET parameter is "wider" than the SQL column equivalent.
PS: The easiest way to "prove" this issue is to run your query in SSMS with the following declared above it:
DECLARE @p0 INT = 123
DECLARE @p1 NVARCHAR(100) = 'foobar' -- etc.
and compare with
DECLARE @p0 INT = 123
DECLARE @p1 VARCHAR(100) = 'foobar' -- etc.
