Problem with SQL Query Tracking - c#

Okay so here's my issue.
The user can go onto my site and retrieve 8 records at a time, and is then given the option to load more. These 8 records can be sorted by a param passed into the proc. When I get these 8 records on the front end I have their IDs (hidden from the user, obviously), but the IDs are not in any particular order because the records can be sorted by a variety of things.
When they click "Load More", I should be able to get the next 8 records from the database, sorted in the SAME fashion as the first 8 were.
For example, "Give me the top 8 records sorted by age" -> click Load More -> give me the next 8 oldest records without showing me the ones I just saw.
How can I call the proc and make sure none from the first result set are returned though? I only want to return 8 records at a time for efficiency reasons.
SELECT TOP 8
m.message,
m.votes,
(geography::Point(@latitude, @longitude, 4326).STDistance(m.point)) * 0.000621371192237334 as distance,
m.location,
datediff(hour,m.timestamp, getdate()) as age,
m.messageId,
ml.voted,
ml.flagged
FROM
tblMessages m
left join tblIPMessageLink ml on m.messageid = ml.messageid
WHERE
m.timestamp >= DATEADD(day, DATEDIFF(day, 0, @date), 0)
and
m.timestamp < DATEADD(day, DATEDIFF(day, 0, @date), 1)
ORDER BY
CASE WHEN @sort = 'votes1' THEN m.votes END DESC,
CASE WHEN @sort = 'votes2' THEN m.votes END ASC,
CASE WHEN @sort = 'age1' THEN datediff(hour,m.timestamp, getdate()) END ASC,
CASE WHEN @sort = 'age2' THEN datediff(hour,m.timestamp, getdate()) END DESC,
CASE WHEN @sort = 'distance1' THEN (geography::Point(@latitude, @longitude, 4326).STDistance(m.point)) * 0.000621371192237334 END ASC,
CASE WHEN @sort = 'distance2' THEN (geography::Point(@latitude, @longitude, 4326).STDistance(m.point)) * 0.000621371192237334 END DESC
That's my current query. How would I change it to work with paging?

Use row_number.
Example:
Call 1:
;WITH cte AS(SELECT *,row_number() OVER( ORDER BY name) AS rows FROM sysobjects)
SELECT * FROM cte WHERE ROWS BETWEEN 1 AND 8
ORDER BY rows
Call 2:
;WITH cte AS(SELECT *,row_number() OVER( ORDER BY name) AS rows FROM sysobjects)
SELECT * FROM cte WHERE ROWS BETWEEN 9 AND 16
ORDER BY rows
Of course you want to use parameters instead of hardcoding the numbers; that way you can reuse the query. If the column can be sorted arbitrarily then you might need to use dynamic SQL.
Edit: here is what it should look like. You probably also want to return the max row number so that you know how many rows can potentially be returned.
You can also make rows per page dynamic, in which case it would be something like
where Rows between @StartRow and (@StartRow + @RowsPerPage) - 1
Make sure to read Dynamic Search Conditions in T-SQL Version for SQL 2008 to see how you can optimize this to get plan reuse and a better plan in general.
Anyway, here is the proc, untested of course since I can't run it here:
DECLARE @StartRow INT, @EndRow INT
--SELECT @StartRow = 1, @EndRow = 8
;WITH cte AS (SELECT ROW_NUMBER() OVER (ORDER BY
CASE WHEN @sort = 'votes1' THEN m.votes END DESC,
CASE WHEN @sort = 'votes2' THEN m.votes END ASC,
CASE WHEN @sort = 'age1' THEN datediff(hour,m.timestamp, getdate()) END ASC,
CASE WHEN @sort = 'age2' THEN datediff(hour,m.timestamp, getdate()) END DESC,
CASE WHEN @sort = 'distance1' THEN (geography::Point(@latitude, @longitude, 4326).STDistance(m.point)) * 0.000621371192237334 END ASC,
CASE WHEN @sort = 'distance2' THEN (geography::Point(@latitude, @longitude, 4326).STDistance(m.point)) * 0.000621371192237334 END DESC
) AS rows,
m.message,
m.votes,
(geography::Point(@latitude, @longitude, 4326).STDistance(m.point)) * 0.000621371192237334 as distance,
m.location,
datediff(hour,m.timestamp, getdate()) as age,
m.messageId,
ml.voted,
ml.flagged
FROM
tblMessages m
left join tblIPMessageLink ml on m.messageid = ml.messageid
WHERE
m.timestamp >= DATEADD(day, DATEDIFF(day, 0, @date), 0)
and
m.timestamp < DATEADD(day, DATEDIFF(day, 0, @date), 1)
)
SELECT *
FROM cte WHERE ROWS BETWEEN @StartRow AND @EndRow
ORDER BY rows
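On the C# side, each "Load More" click just bumps a page counter and recomputes the row window before calling the proc. Here is a rough sketch, assuming the query above is wrapped in a stored procedure named dbo.GetMessagesPage; the proc name, connection string and the sort/date/location variables are placeholders, not from the original post.
// using System.Data; using System.Data.SqlClient;
int pageSize = 8;                           // the question loads 8 records at a time
int startRow = (page - 1) * pageSize + 1;   // page is 1-based: page 2 -> rows 9..16
int endRow = page * pageSize;
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.GetMessagesPage", con))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@StartRow", startRow);
    cmd.Parameters.AddWithValue("@EndRow", endRow);
    cmd.Parameters.AddWithValue("@sort", sort);          // e.g. "votes1"
    cmd.Parameters.AddWithValue("@date", date);
    cmd.Parameters.AddWithValue("@latitude", latitude);
    cmd.Parameters.AddWithValue("@longitude", longitude);
    con.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // map message, votes, distance, location, age, messageId, voted, flagged
        }
    }
}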

David Hayden has a nice article on paging. You'll just need to keep track of the number of records and the offset.
Also, you'll still need to merge and re-sort the records on the client every time they load more.
Here's the SP from that article:
CREATE PROCEDURE dbo.ShowLog
@PageIndex INT,
@PageSize INT
AS
BEGIN
WITH LogEntries AS (
SELECT ROW_NUMBER() OVER (ORDER BY Date DESC)
AS Row, Date, Description
FROM LOG)
SELECT Date, Description
FROM LogEntries
WHERE Row between
(@PageIndex - 1) * @PageSize + 1 and @PageIndex * @PageSize
END
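And a rough sketch of calling that proc from C# with a SqlDataAdapter; the helper method name and connection string are placeholders:
// using System.Data; using System.Data.SqlClient;
static DataTable LoadLogPage(string connectionString, int pageIndex, int pageSize)
{
    using (var con = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.ShowLog", con))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@PageIndex", SqlDbType.Int).Value = pageIndex;
        cmd.Parameters.Add("@PageSize", SqlDbType.Int).Value = pageSize;
        var pageTable = new DataTable();
        new SqlDataAdapter(cmd).Fill(pageTable);   // Fill opens/closes the connection itself
        return pageTable;
    }
}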

Related

Insert all dates between start and end date into table [closed]

Let's say I have 2 tables.
The first one is "Orders".
Select * from Orders
gives me these results:
Order_ID Date_Start Date_End Order_Name
2059 2020-11-13 00:00:00.000 2020-11-14 00:00:00.000 order1
2060 2020-12-12 00:00:00.000 2020-12-22 00:00:00.000 order2
and the second table is "Dates".
This is the desired result for the Dates table. I need to insert the dates between the two dates into that table for each order ID.
Date Type1 Type2 Type3 Type4 Type5 Order_ID
2020-11-13 00:00:00.000 NULL NULL NULL NULL NULL 2059
2020-11-14 00:00:00.000 NULL NULL NULL NULL NULL 2059
I hope this is clearer now.
This is actually quite simple using OVER/PARTITION and then dateadd().
First, we need to generate however many rows you want in your final list. To do this, pick any table that has at least as many rows as you need; it could be an employee table, customers, orders, whatever. For your example it just needs at least 14 rows. From that, we create a temp result set giving you a simple run of numbers 1 through whatever: 10, 14, 127, as long as the table has that many records.
Now, the PARTITION BY / ORDER BY is part of the trick. You can't partition by a constant, but you CAN partition by an expression. So take the "ID" column of whatever table and multiply it by 0, which always gives you 0. Your partitioning then groups all rows under that single value of 0, so every record falls into one group and gets assigned a row number within it. Tricky, huh? Finish that off with a "TOP 14" and you get the 14 rows that form your list basis.
SELECT top 10
ROW_NUMBER() OVER(PARTITION BY SomeTableID * 0 order by SomeTableID * 0) AS "MyRow"
FROM
SomeTable
So now, I have a result set with 10 rows in it with the values running from 1 to 10.
Now, let's build the dates. As long as you are building consecutively, such as per day, per month, per year, or whatever pattern, use one date as your baseline and keep adding. In the sample below I use the current date and keep adding 1 month, but again, you can do it for days, weeks, whatever.
select
dateadd( month, Counter.MyRow, convert( date, getdate() )) ListOfDates
from
( SELECT top 10 ROW_NUMBER()
OVER(PARTITION BY SomeTableID * 0 order by SomeTableID * 0) AS "MyRow"
FROM SomeTable ) Counter
So the above example returns 10 rows, one per month starting one month from today, and generates
2020-11-20
2020-12-20
2021-01-20
...
2021-08-20
FOLLOW-UP.
Your query is failing because you are explicitly concatenating strings to build your command... BAD technique. You should parameterize your queries. Build a SQL Command object, add parameters, and THEN call your fill.
var sqlcmd = new SqlCommand("", con);
sqlcmd.CommandText =
@"WITH theDates AS
(
SELECT @parmStartDate as theDate
UNION ALL
SELECT DATEADD(day, 1, theDate)
FROM theDates
WHERE DATEADD(day, 1, theDate) <= @parmEndDate
)
SELECT theDate
FROM theDates
OPTION(MAXRECURSION 0)";
sqlcmd.Parameters.AddWithValue("@parmStartDate", dataGridView.CurrentRow.Cells[2].Value);
sqlcmd.Parameters.AddWithValue("@parmEndDate", dataGridView.CurrentRow.Cells[3].Value);
var ds = new DataSet();
var dtbl2 = new DataTable();
// pass the pre-formatted and parameterized query command to the SQL Data Adapter
var sda2 = new SqlDataAdapter(sqlcmd);
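To finish that sketch, the adapter still needs to be told to run the command and load the results; something along these lines, using the names from the snippet above:
sda2.Fill(dtbl2);        // Fill opens and closes the connection as needed
ds.Tables.Add(dtbl2);    // optional: keep the result in the DataSet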
Here is some SQL to build a dynamic date range table. You will need to customize it for your needs in the /* replace with your column after join */ and /* Join your table */ sections.
/*
script to build table with dynamic columns
*/
DROP TABLE IF EXISTS #tempDateRange
DROP TABLE IF EXISTS #dateRangeTable
DECLARE @StartDate datetime = DATEADD(DAY, -14, GETDATE()),
@EndDate datetime = GETDATE()
/*
Generate date range table
*/
SELECT DATEADD(DAY, nbr - 1, @StartDate) AS [Date],
UPPER(LEFT(DATENAME(mm, DATEADD(DAY, nbr - 1, @StartDate)), 3)) AS [MonthShort],
MONTH( DATEADD(DAY, nbr - 1, @StartDate)) AS [Month],
YEAR(DATEADD(DAY, nbr - 1, @StartDate)) AS [Year],
CONCAT(UPPER(LEFT(DATENAME(mm, DATEADD(DAY, nbr - 1, @StartDate)), 3)), '-', YEAR(DATEADD(DAY, nbr - 1, @StartDate))) AS MonthYear
INTO #tempDateRange
FROM ( SELECT TOP(DATEDIFF(DAY, @StartDate, @EndDate)) ones.n + 10*tens.n + 100*hundreds.n + 1000*thousands.n AS Nbr
FROM (VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) ones(n),
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) tens(n),
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) hundreds(n),
(VALUES(0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) thousands(n)
WHERE ones.n + 10*tens.n + 100*hundreds.n + 1000*thousands.n BETWEEN 1 AND DATEDIFF(DAY, @StartDate, @EndDate)
ORDER BY 1
) nbrs
WHERE nbr - 1 <= DATEDIFF(DAY, @StartDate, @EndDate)
/*
Generate columns for date range
*/
DECLARE
@columns NVARCHAR(MAX) = ''
SELECT @columns+=QUOTENAME(convert(nvarchar(10), Date, 120)) + ' NVARCHAR(10),'
FROM (
SELECT DISTINCT Date, [Month], [Year] FROM #tempDateRange
) x
ORDER BY x.[Year], x.[Month]
SET @columns = LEFT(@columns, LEN(@columns) - 1);
DECLARE @sql NVARCHAR(MAX) = ''
SET @sql = '
INSERT #dateRangeTable
SELECT *
FROM (
SELECT a.TestData AS Data, /* replace with your column after join */
convert(nvarchar(10), Date, 120) AS [Date]
FROM #tempDateRange [date]
/* Join your table */
LEFT JOIN (
SELECT ''start test'' AS TestData, CAST('''+CONVERT(NVARCHAR, @StartDate)+''' AS DATE) AS TargetDate
UNION
SELECT ''end test'' AS TestData, CAST('''+CONVERT(NVARCHAR, DATEADD(DAY, -1, @EndDate))+''' AS DATE) AS TargetDate
) AS a ON CAST(a.TargetDate AS DATE) = CAST(date.[date] AS DATE)
WHERE [date].[Date] BETWEEN CAST('''+CONVERT(NVARCHAR, @StartDate)+''' AS DATE) AND CAST('''+CONVERT(NVARCHAR, @EndDate)+''' AS DATE)
) o
PIVOT(
MAX(Data)
FOR [Date] IN ('+ REPLACE(@columns, 'NVARCHAR(10)', '') +')
) AS pivot_table;
'
SET @sql = N'
DROP TABLE IF EXISTS #dateRangeTable
CREATE TABLE #dateRangeTable('+@columns+')
' +
@sql
+ N'
SELECT * FROM #dateRangeTable
DROP TABLE IF EXISTS #dateRangeTable
'
PRINT (@sql)
--EXECUTE sp_executesql @sql

How to select +1 record from MSSQL using EF with single query?

In short:
I have records that have a CreationTime column in the database. I want to select records from the last 2 days PLUS one record that follows (sorted by creation date desc), which can be any age.
So from these records (knowing that today's date is 11th March) I want to select all records that are at most 2 days old, + 1:
1. 2019-03-11
2. 2019-03-11
3. 2019-03-10
4. 2019-03-08
5. 2019-03-07
6. 2019-03-16
So the result should contain records 1, 2, 3 and 4 (record 4, even though it is 3 days old, is that "+1" record I need).
I'm using MSSQL and .NET 4.6.1 Entity Framework.
IMO the cleaner way to achieve this is to write two queries: the first gets data from the last two days and the second gets the latest record older than 2 days.
To get records from last 2 days:
select * from MyTable where CreationTime between getdate() - 2 and getdate()
To get additional record:
select top 1 * from MyTable where CreationTime < getdate() - 2 order by CreationTime desc
Using EF with LINQ methods (dc is database context):
To get records from last 2 days:
dc.Entities.Where(e => e.CreationTime <= DateTime.Now && e.CreationTime >= DateTime.Now.AddDays(-2));
additional record:
dc.Entities.Where(e => e.CreationTime < DateTime.Now.AddDays(-2)).OrderByDescending(e => e.CreationTime).First();
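If the caller needs a single list, the two results can simply be stitched together in memory. A minimal sketch, assuming the same dc context and entity shape as above:
var cutoff = DateTime.Now.AddDays(-2);
var result = dc.Entities
    .Where(e => e.CreationTime >= cutoff && e.CreationTime <= DateTime.Now)
    .OrderByDescending(e => e.CreationTime)
    .ToList();
var extra = dc.Entities
    .Where(e => e.CreationTime < cutoff)
    .OrderByDescending(e => e.CreationTime)
    .FirstOrDefault();              // null if nothing older than 2 days exists
if (extra != null)
    result.Add(extra);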
Try the following logic:
DECLARE @T TABLE
(
SeqNo INT IDENTITY(1,1),
MyDate DATETIME
)
INSERT INTO @T
VALUES(GETDATE())
,(DATEADD(MINUTE,-23,GETDATE()))
,(DATEADD(MINUTE,-78,GETDATE()))
,(DATEADD(MINUTE,-5443,GETDATE()))
,(DATEADD(MINUTE,-34,GETDATE()))
,(DATEADD(MINUTE,-360,GETDATE()))
,(DATEADD(MINUTE,-900,GETDATE()))
,(DATEADD(MINUTE,-1240,GETDATE()))
,(DATEADD(MINUTE,-3600,GETDATE()))
;WITH CTE
AS
(
SELECT
RN = ROW_NUMBER() OVER(PARTITION BY CAST(MyDate AS DATE) ORDER BY MyDate DESC),
DateSeq = DATEDIFF(DAY,MyDate,GETDATE()),
*
FROM @T
)
SELECT
*
FROM CTE
WHERE
DateSeq <2
OR
(
DateSeq = 2
AND
RN = 1
)
You can try the following query.
DECLARE @table TABLE(StartDate DATETIME)
INSERT INTO @table
VALUES('2019-03-11'),('2019-03-11'),('2019-03-10'),
('2019-03-08'),('2019-03-07'),('2019-03-16')
SELECT * FROM @table WHERE StartDate BETWEEN GETDATE()-4 AND GETDATE()
For getting the older 4th entry (the "+1" record):
SELECT * FROM @table
ORDER BY (select null)
OFFSET (select Count(*) from @table where StartDate BETWEEN GETDATE()-2 AND
GETDATE()) ROWS
FETCH NEXT 1 ROWS ONLY

How to use string type column in SQL pivot

I have a table like below
Name Year Bonus
---- ----- ------
Ram 2011 1000
Ram 2011 2000
Shyam 2011 'No Bonus'
Shyam 2012 5000
I want to display the total bonus year-wise for each person. I tried the below query:
SELECT [Year],[Ram],[Shyam] FROM
(SELECT Name, [Year] , Bonus FROM Employee )Tab1
PIVOT
(
SUM(Bonus) FOR Name IN (Ram,Shyam)) AS Tab2
ORDER BY [Tab2].[Year]
My Output Should be like below
Name 2011 2012
---- ------ ------
Ram 3000 0
Shyam 'No Bonus' 5000
But it is not working.
Can anyone help me on this?
If your DBMS is SQL Server you can try to use a conditional SUM aggregate in a CTE,
then use CAST with COALESCE to build the result.
;WITH CTE AS(
SELECT Year,Name,
SUM(CASE WHEN Bonus LIKE '%[0-9]%' THEN CAST(Bonus AS DECIMAL) ELSE 0 END) total,
COUNT(CASE WHEN Bonus = 'No Bonus' THEN 1 END) cnt
FROM T
GROUP BY Year,Name
)
SELECT Name,
coalesce(MAX(CASE WHEN Year = 2011 THEN CAST(total AS VARCHAR(50)) END),'No Bonus') '2011',
coalesce(MAX(CASE WHEN Year = 2012 THEN CAST(total AS VARCHAR(50)) END),'No Bonus') '2012'
FROM CTE
GROUP BY Name
sqlfiddle
If you want to create columns dynamically you can try to use dynamic PIVOT.
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX);
;WITH CTE AS(
SELECT Year,Name,
SUM(CASE WHEN Bonus LIKE '%[0-9]%' THEN CAST(Bonus AS DECIMAL) ELSE 0 END) total,
COUNT(CASE WHEN Bonus = 'No Bonus' THEN 1 END) cnt
FROM T
GROUP BY Year,Name
)
SELECT @cols = STUFF((SELECT distinct ',coalesce(MAX(CASE WHEN cnt > 0 and Year = ' + cast(Year as varchar(5)) + ' THEN ''No Bonus'' WHEN Year = ' + cast(Year as varchar(5)) + ' and cnt = 0 THEN CAST(total AS VARCHAR(50)) END),''0'')' + QUOTENAME(Year)
FROM CTE c
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = '
;WITH CTE AS(
SELECT Year,Name,
SUM(CASE WHEN Bonus LIKE ''%[0-9]%'' THEN CAST(Bonus AS DECIMAL) ELSE 0 END) total,
COUNT(CASE WHEN Bonus = ''No Bonus'' THEN 1 END) cnt
FROM T
GROUP BY Year,Name
)
SELECT Name, ' + @cols + '
from CTE
GROUP BY Name'
exec(@query)
sqlfiddle
Based on my understanding of your problem, the following query will do, though it is not the ideal solution.
You can modify the query if you need to make it dynamic.
SELECT [Name]
, case when [2011] = 0 then 'No Bonus' when [2011] is null then '0' else cast([2011] as varchar(50)) end as [2011]
, case when [2012] = 0 then 'No Bonus' when [2012] is null then '0' else cast([2012] as varchar(50)) end as [2012]
FROM
(SELECT Name, [Year] , cast(Bonus as int) Bonus FROM Employee)Tab1
PIVOT
(
SUM(Bonus) FOR Year IN ([2011],[2012])) AS Tab2
ORDER BY [Tab2].[Name]
You do need to store 0 in the table rather than 'No Bonus', though, and then handle it in the PIVOT.
I would just give up on storing numbers as strings. I don't see the difference between 0/NULL and 'No Bonus', except that the latter makes queries prone to really bad type conversion problems.
So, my advice is to do:
SELECT [Year],[Ram],[Shyam]
FROM (SELECT Name, [Year], TRY_CONVERT(int, Bonus) as Bonus
FROM Employee
) e
PIVOT (SUM(Bonus) FOR Name IN (Ram, Shyam)) AS Tab2
ORDER BY [Tab2].[Year] ;
You probably don't like that solution -- although I really do strongly recommend it because I have spent way too many hours debugging problems with numbers and dates stored as strings.
So, if you persist with storing values as strings, use conditional aggregation and a bunch of logic:
select year,
coalesce( convert(varchar(255),
sum(case when name = 'Ram'
then try_convert(int, bonus)
end)),
'No Bonus'
) as Ram,
coalesce( convert(varchar(255),
sum(case when name = 'Shyam'
then try_convert(int, bonus)
end)),
'No Bonus'
) as Shyam
from employee e
group by year
order by year;

Given a sorted row of n numbers in SQL Server, how do I find at what point the sum of the rows reaches the value k?

Assume I have the below rows of numbers and the max quantity value is 10.
Quantity BatchValue
2 0
4 0
4 0
6 1
8 2
The summation 2+4+4 gives me a value less than or equal to the max quantity 10, so the batch value for those rows becomes 0. The pending rows are 6 and 8; they cannot be summed to stay within the max quantity, so they will be separate. Can we get an SQL query or an algorithm that can do this?
Here's a nice running sum routine you can use
create table #temp (rowid int identity, quantity int)
insert #temp
select quantity from yourtable order by your order
declare @holding table (quantity int, runningsum int)
declare @quantity int
declare @running int=0
declare @iterator int = 1
while @iterator<=(select max(rowid) from #temp)
begin
select @quantity=quantity from #temp where rowid=@iterator
set @running=@quantity+@running
insert @holding
select @quantity, @running
set @iterator=@iterator+1
end
Edited code from Daniel Marcus above to give the actual output requested in the question.
CREATE TABLE #temp(rowid int identity(1,1), quantity int)
INSERT INTO #temp
SELECT 2
UNION ALL SELECT 4
UNION ALL SELECT 4
UNION ALL SELECT 6
UNION ALL SELECT 8
declare @batchValue int = 0, @maxquantity int = 10
declare @holding table (quantity int, batchvalue int)
declare @quantity int
declare @running int=0
declare @iterator int = 1
while @iterator<=(select max(rowid) from #temp)
begin
select @quantity=quantity from #temp where rowid=@iterator
set @running=@quantity+@running
-- Newly added condition
if (@running > @maxquantity) BEGIN
SET @batchValue = @batchValue + 1 -- increment the batch value
insert @holding select @quantity, @batchValue
SET @running = @quantity -- reset the running value
END
ELSE
insert @holding select @quantity, @batchValue
set @iterator=@iterator+1
end
SELECT * FROM @holding
DROP TABLE #temp
Hope the snippet works for your purpose. I tested this in SQL Azure and it provides the result you mentioned.
--The below query will help you if you are working on SQL Server 2012 or higher
CREATE TABLE #RUN_TOT(ID INT)
INSERT INTO #RUN_TOT VALUES(2),(4),(4),(6),(8)
;WITH CTE AS
(
SELECT ID,
ROW_NUMBER() OVER(ORDER BY ID) RNUM,
CASE
WHEN SUM(ID) OVER(ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) <=
(SELECT MAX(ID) FROM #RUN_TOT) THEN 0
ELSE
ID
END VAL
FROM #RUN_TOT
)
SELECT ID,VAL FROM CTE WHERE VAL=0
UNION ALL
SELECT ID,ROW_NUMBER() OVER(ORDER BY VAL) VAL FROM CTE WHERE VAL<>0

SQL Execution Time Slowing Down, Code First

I've got a bit of a strange one here. I'm using Entity Framework Code First in a console app that runs a batch process. The code loops round a series of dates executing a stored procedure every time.
Currently it loops about 300 times, and over time each execution gets slower and slower until near the end, when it's crawling.
I've tried memory profiling and that's not it. Here's example code.
_dbContext = new FooContext();
_barService = new BarService(new GenericRepository<Bar>(), _dbContext);
for (var date = lastCalculatedDate.AddDays(1); date <= yesterday; date = date.AddDays(1))
{
_barService.CalculateWeightings(date);
}
And all CalculateWeightings does is (I'm using nlog as well)
public void CalculateWeightings(DateTime dateTime)
{
_logger.Info("Calculating weightings for {0}", dateTime);
Context.Database.ExecuteSqlCommand("EXEC CalculateWeightings @dateTime", new SqlParameter("@dateTime", dateTime));
}
The stored procedure just populates a table with some records. Nothing complicated; the table ends up with a couple of thousand rows in it, so the problem isn't there.
Any thoughts?
For those of you wanting to see the SQL: it's a bit of a behemoth, but I can't see any reason it would slow down over time. The number of rows dealt with is pretty low.
CREATE PROCEDURE [dbo].[CalculateWeightings]
@StartDate DateTime,
@EndDate DateTime,
@TradedMonthStart DateTime,
@InstrumentGroupId int
AS
BEGIN
---- GET ALL THE END OF DAY PRICINGS FOR MONTHLYS ----
SELECT
ROW_NUMBER() OVER
(
PARTITION BY RawTrades.FirstSequenceItemName,
CONVERT(VARCHAR, RawTrades.LastUpdate, 103)
ORDER BY RawTrades.FirstSequenceItemName, RawTrades.LastUpdate DESC
) AS [Row],
RawTrades.FirstSequenceItemID AS MonthId,
Sequences.ActualStartMonth,
Sequences.ActualEndMonth,
RawTrades.FirstSequenceItemName AS [MonthName],
CONVERT(VARCHAR, RawTrades.LastUpdate, 103) AS LastUpdate,
RawTrades.Price
INTO #monthly
FROM RawTrades
INNER JOIN Sequences ON RawTrades.FirstSequenceItemId = Sequences.SequenceItemId AND RawTrades.FirstSequenceId = Sequences.SequenceId
WHERE RawTrades.FirstSequenceID IN (SELECT MonthlySequenceId FROM Instruments WHERE InstrumentGroupId = @InstrumentGroupId)
AND [Action] <> 'Remove'
AND LastUpdate >= @StartDate
AND LastUpdate < @EndDate
AND ActualStartMonth >= @TradedMonthStart
ORDER BY RawTrades.FirstSequenceItemID, RawTrades.LastUpdate DESC
---- GET ALL THE END OF DAY PRICINGS FOR QUARTERLYS ----
SELECT
ROW_NUMBER() OVER
(
PARTITION BY RawTrades.FirstSequenceItemName,
CONVERT(VARCHAR, RawTrades.LastUpdate, 103)
ORDER BY RawTrades.FirstSequenceItemName, RawTrades.LastUpdate DESC
) AS [Row],
CONVERT(VARCHAR, RawTrades.LastUpdate, 103) AS LastUpdate,
Sequences.ActualStartMonth,
Sequences.ActualEndMonth,
RawTrades.Price
INTO #quarterly
FROM RawTrades
INNER JOIN Sequences ON RawTrades.FirstSequenceItemId = Sequences.SequenceItemId AND RawTrades.FirstSequenceId = Sequences.SequenceId
WHERE RawTrades.FirstSequenceID IN (SELECT QuarterlySequenceId FROM Instruments WHERE InstrumentGroupId = @InstrumentGroupId)
AND Action <> 'Remove'
AND LastUpdate >= @StartDate
AND LastUpdate < @EndDate
AND RawTrades.Price > 20
ORDER BY RawTrades.FirstSequenceItemID, RawTrades.LastUpdate DESC
---- GET ALL THE END OF DAY PRICINGS FOR SEASONALS ----
SELECT
ROW_NUMBER() OVER
(
PARTITION BY RawTrades.FirstSequenceItemName,
CONVERT(VARCHAR, RawTrades.LastUpdate, 103)
ORDER BY RawTrades.FirstSequenceItemName, RawTrades.LastUpdate DESC
) AS [Row],
CONVERT(VARCHAR, RawTrades.LastUpdate, 103) AS LastUpdate,
Sequences.ActualStartMonth,
Sequences.ActualEndMonth,
RawTrades.Price
INTO #seasonal
FROM RawTrades
INNER JOIN Sequences ON RawTrades.FirstSequenceItemId = Sequences.SequenceItemId AND RawTrades.FirstSequenceId = Sequences.SequenceId
WHERE RawTrades.FirstSequenceID IN (SELECT SeasonalSequenceId FROM Instruments WHERE InstrumentGroupId = @InstrumentGroupId)
AND Action <> 'Remove'
AND LastUpdate >= @StartDate
AND LastUpdate < @EndDate
AND RawTrades.Price > 20
ORDER BY RawTrades.FirstSequenceItemID, RawTrades.LastUpdate DESC
---- BEFORE WE INSERT RECORDS MAKE SURE WE DON'T ADD DUPLICATES ----
DELETE FROM LiveCurveWeightings
WHERE InstrumentGroupId = @InstrumentGroupId
AND CalculationDate = @EndDate
---- CALCULATE AND INSERT THE WEIGHTINGS ----
INSERT INTO LiveCurveWeightings (InstrumentGroupId, CalculationDate, TradedMonth, QuarterlyWeighting, SeasonalWeighting)
SELECT
@InstrumentGroupId,
@EndDate,
#monthly.ActualStartMonth,
AVG(COALESCE(#monthly.Price / #quarterly.Price,1)) AS QuarterlyWeighting,
AVG(COALESCE(#monthly.Price / #seasonal.Price,1)) AS SeasonalWeighting
FROM #monthly
LEFT JOIN #quarterly
ON #monthly.ActualStartMonth >= #quarterly.ActualStartMonth
AND #monthly.ActualEndMonth <= #quarterly.ActualEndMonth
AND #quarterly.[Row] = 1
AND #monthly.LastUpdate = #quarterly.LastUpdate
LEFT JOIN #seasonal
ON #monthly.ActualStartMonth >= #seasonal.ActualStartMonth
AND #monthly.ActualEndMonth <= #seasonal.ActualEndMonth
AND #seasonal.[Row] = 1
AND #monthly.LastUpdate = #seasonal.LastUpdate
WHERE #monthly.[Row] = 1
GROUP BY #monthly.ActualStartMonth
DROP TABLE #monthly
DROP TABLE #quarterly
DROP TABLE #seasonal
END
I think this issue may be due to your EF tracking graph getting too large. If you re-use your context in a batch operation with change tracking on, every time you perform an operation it needs to enumerate the graph. With a few hundred items this isn't an issue, but when you get into the thousands it can become a massive problem. Take a look at my article on this and see if you think it matches the issue.
If you look at the graph below for insert operations, you can see that at around 1,000 inserts (when tracking is on) execution time starts to spike sharply (also note the log scales on the axes).
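One common mitigation that follows from that diagnosis (sketched here as an illustration, not as the article's exact fix) is to avoid re-using a single tracked context for the whole loop, for example by recreating the context per iteration and turning off automatic change detection:
for (var date = lastCalculatedDate.AddDays(1); date <= yesterday; date = date.AddDays(1))
{
    // A fresh context per iteration keeps the change-tracking graph small.
    using (var ctx = new FooContext())
    {
        ctx.Configuration.AutoDetectChangesEnabled = false;   // EF 4.1+ DbContext flag
        var barService = new BarService(new GenericRepository<Bar>(), ctx);
        barService.CalculateWeightings(date);
    }
}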
