C# calculate difference between two rows based on a SQL query

I have a task to solve. I am trying to display the operation time of two machines (number1 & number2) in a diagram. Therefore I store the information in a table. The columns are id, date, number1, number2.
Let's assume I have this specific dataset:
id | date     | number1 | number2
1  | 24.09.14 | 100     | 120
2  | 01.10.14 | 150     | 160
For displaying the information I need to retrieve the following data.
((number1(2) - number1(1)) + (number2(2) - number1(1)))/2 / (number of days (date2 - date1))
This should result in the following specific numbers.
((150 - 100 + 160 - 120)/2)/7 ≈ 6.43
Or in plain words: the result should be the average daily operation time across all of my machines. Subtracting Saturdays and Sundays from the number of days would be nice but is not necessary.
I hope that you understand my question. In essence I am facing the problem that I don't know how to work with different rows from a simple SQL query.
The programming language is C# in a Razor-based web project.

First, I doubt that you have only 2 records in the database. Here is some code that performs the calculation for every 2 rows in the DataSet.
var rows = dst.Tables[0].Rows;

if (rows.Count % 2 != 0)
    Console.WriteLine("Wrong record count");

for (int i = 0; i < rows.Count - 1; i += 2)
{
    int number1Row1 = Convert.ToInt32(rows[i]["Number1"]);
    int number2Row1 = Convert.ToInt32(rows[i]["Number2"]);
    int number1Row2 = Convert.ToInt32(rows[i + 1]["Number1"]);
    int number2Row2 = Convert.ToInt32(rows[i + 1]["Number2"]);
    DateTime dateRow1 = Convert.ToDateTime(rows[i]["Date"]);
    DateTime dateRow2 = Convert.ToDateTime(rows[i + 1]["Date"]);

    // divide by 2.0 to avoid integer division, then by the day span between the two rows
    double calc = ((number1Row2 - number1Row1) + (number2Row2 - number2Row1)) / 2.0
                  / (dateRow2 - dateRow1).TotalDays;
    Console.WriteLine(calc);
}
It is written to be as clear as possible.
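If the table can grow beyond two rows and you want a figure for every consecutive pair of records (row 1 vs row 2, row 2 vs row 3, ...), a sketch along these lines may be easier to follow. It assumes the same dst DataSet and column names as above, and needs a reference to System.Data.DataSetExtensions for AsEnumerable:

```csharp
// order by date so "previous" really means the earlier reading
var rows = dst.Tables[0].AsEnumerable()
              .OrderBy(r => r.Field<DateTime>("Date"))
              .ToList();

for (int i = 1; i < rows.Count; i++)
{
    double diff1 = Convert.ToDouble(rows[i]["Number1"]) - Convert.ToDouble(rows[i - 1]["Number1"]);
    double diff2 = Convert.ToDouble(rows[i]["Number2"]) - Convert.ToDouble(rows[i - 1]["Number2"]);
    double days  = (rows[i].Field<DateTime>("Date") - rows[i - 1].Field<DateTime>("Date")).TotalDays;

    // average daily operation time across both machines for this interval
    Console.WriteLine((diff1 + diff2) / 2.0 / days);
}
```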

Your formula probably has a mistake, judging by your numerical sample:
((number1(2) - number1(1)) + (number2(2) - number2(1)))/2 / (number of days (date2 - date1))
If the values of the id column are chronological and have no holes (1, 2, 3, 4, ... is OK, but 1, 3, 4, 6 is not), you can try the following script:
SELECT t2.number1 , t1.number1, t2.number2 , t1.number2 , DATEDIFF(DAY, t1.date, t2.date)
, (((t2.number1 - t1.number1) + t2.number2 - t1.number2) /2 ) / DATEDIFF(DAY, t1.date, t2.date) as result
FROM #tmp t1
INNER JOIN #tmp t2 ON t1.id + 1 = t2.id
--- I create a #tmp table for test
CREATE table #tmp
(
id int,
Date DateTime,
number1 float,
number2 float
)
--- insert samples data
INSERT INTO #tmp (id, Date, number1, number2) VALUES (1, '2014-09-24T00:00:00', 100, 120), (2, '2014-10-01T00:00:00', 150, 160)
It works great on my SQL Server.

Yes, you can do it with a SQL query. Try the query below.
SELECT
N1.Date as PeriodStartDate,
N2.Date as PeriodEndDate,
CAST(CAST((((N2.number1- N1.number1) + (n2.number2 - N1.number2))/2) AS DECIMAL(18,2))/(datediff(d,n1.date,n2.date)) AS DECIMAL(18,2) ) AS AverageDailyOperation
FROM
[dbo].[NumberTable] N1
INNER JOIN
[dbo].[NumberTable] N2
ON N2.Date>N1.Date
I have assumed the table name is NumberTable, and I added PeriodStartDate and PeriodEndDate to make the output meaningful; you can remove them as needed. Note that with more than two rows, the ON N2.Date > N1.Date join pairs every row with every later row, not just consecutive ones.
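If your SQL Server is 2012 or later, a window-function variant of the same idea avoids relying on consecutive rows or gap-free ids entirely. This is a sketch against the same assumed NumberTable:

```sql
-- LAG fetches the previous row's value ordered by Date; the first row yields NULL
SELECT
    LAG(Date) OVER (ORDER BY Date) AS PeriodStartDate,
    Date AS PeriodEndDate,
    (((number1 - LAG(number1) OVER (ORDER BY Date))
    +  (number2 - LAG(number2) OVER (ORDER BY Date))) / 2.0)
    / DATEDIFF(DAY, LAG(Date) OVER (ORDER BY Date), Date) AS AverageDailyOperation
FROM [dbo].[NumberTable];
```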

Related

Select the latest hour for rates

I have this scenario where I need to get the exchange rate for several coin pairs. I have 2 tables, one with info related to a bank operation and another with the daily exchange rates used by the bank. I'm starting to learn about data analytics, so please be patient. My English is not that great either.
Consider this example:
Table 1 (Bank Operations):
Op Number | Coin_1 | Coin_2 | Date | Hour 1 | Weekday |
1 | EUR | GBP | 2020/06/01 | 03:30 | Monday |
Table 2 (Exchange rates):
Coin_1 | Coin_2 | Date | Hour 2 | Weekday | Rate
EUR | GBP | 2020/03/01 | 11:30 | Friday | 0.6
EUR | GBP | 2020/03/01 | 18:30 | Friday | 0.5
EUR | GBP | 2020/06/01 | 12:30 | Monday | 0.55
Note: The exchange rates are not updated on weekends.
I do not know how to get this value. Using a Script Component? If so, can you help me with the algorithm? I've done all the ETL needed this far, but can't seem to find a workaround for this task.
This can be done in SQL using the LEAD windowing function and some datetime math.
create table #t1(
[Case] int,
[Op Number] int,
[Coin_1] varchar(10),
[Coin_2] varchar(10),
[Date] date,
[Hour 1] time,
[Weekday] varchar(10)
)
insert into #t1 values
( 1, 1, 'EUR', 'GBP', '2020/06/01', '03:30', 'Monday')
create table #t2(
[Case] int,
[Coin_1] varchar(10),
[Coin_2] varchar(10),
[Date] date,
[Hour 2] time,
[Weekday] varchar(10),
[Rate] decimal(10,2)
)
insert into #t2 values
( 1, 'EUR', 'GBP', '2020/03/01', '11:30', 'Friday', 0.6),
( 1, 'EUR', 'GBP', '2020/03/01', '18:30', 'Friday', 0.5 ),
( 1, 'EUR', 'GBP', '2020/06/01', '12:30', 'Monday', 0.55)
; with t1 as (
select *, dt = CAST(CONCAT([Date], ' ', [hour 1]) AS datetime2(0))
from #t1
)
, x as (
select *, dt = CAST(CONCAT([Date], ' ', [hour 2]) AS datetime2(0))
from #t2
)
, t2 as (
select [Case],
[Coin_1],
[Coin_2],
[Rate],
[Date],
[Hour 2],
[Weekday],
dt as start_dt,
isnull(lead(dt) over(partition by [case] order by dt asc), '20990101') end_dt
from x
)
select *
from t1
inner join t2 on t2.[case] = t1.[case]
and t1.dt >= t2.start_dt
and t1.dt < t2.end_dt
If this is a learning exercise, great: use the componentry of SSIS to do it. If this is real-world stuff, trust my experience on this: trying to use the SSIS pieces to make this happen will not be pleasant.
One of the bigger challenges in your existing data model is that you store date and time separately. I assume the source system stores them as date and time(0) data types. I create an actual datetime2 column in my queries so that I can let the fine engineers at Microsoft worry about getting the comparison logic correct.
Instead of a lead/lag solution as Steve proposes, I saw this as an OUTER APPLY with TOP 1 problem.
CREATE TABLE dbo.BankOperations
(
CaseNumber int
, Coin_1 char(3)
, Coin_2 char(3)
, TransactionDate date
, TransactionTime time(0)
);
CREATE TABLE dbo.ExchangeRates
(
CaseNumber int
, Coin_1 char(3)
, Coin_2 char(3)
, TransactionDate date
, TransactionTime time(0)
, Rate decimal(4, 2)
);
INSERT INTO
dbo.BankOperations
VALUES
(
1, 'EUR', 'GBP', '2020-06-01', '03:30'
)
-- boundary checking exact
,( 2, 'EUR', 'GBP', '2020-06-01', '12:30')
-- boundary beyond/not defined
,( 3, 'EUR', 'GBP', '2020-06-01', '13:30')
-- boundary before
,( 4, 'EUR', 'GBP', '2020-03-01', '10:30')
-- boundary first at
,( 5, 'EUR', 'GBP', '2020-03-01', '11:30')
INSERT INTO
dbo.ExchangeRates
VALUES
(
1, 'EUR', 'GBP', '2020-03-01', '11:30', .6
)
, (
2, 'EUR', 'GBP', '2020-03-01', '18:30', .5
)
, (
3, 'EUR', 'GBP', '2020-06-01', '12:30', .55
);
-- Creating a temp table version of the above as the separate date and time fields will
-- crush performance at scale (so too might duplicating data as we're about to do)
SELECT
X.*
, CAST(CONCAT(X.TransactionDate, 'T', X.TransactionTime) AS datetime2(0)) AS IsThisWorking
INTO
#BankOperations
FROM
dbo.BankOperations AS X;
SELECT
X.*
, CAST(CONCAT(X.TransactionDate, 'T', X.TransactionTime) AS datetime2(0)) AS IsThisWorking
INTO
#ExchangeRates
FROM
dbo.ExchangeRates AS X;
-- Option A for pinning data
-- Outer apply will use the TOP 1 to get the closest without going over
SELECT
BO.*
-- assuming surrogate key
, EX.CaseNumber
, EX.Rate
FROM
#BankOperations AS BO
OUTER APPLY
(
SELECT TOP 1 *
FROM
#ExchangeRates AS ER
WHERE
-- Match based on all of our keys
ER.Coin_1 = BO.Coin_1
AND ER.Coin_2 = BO.Coin_2
-- Eliminate
AND BO.IsThisWorking >= ER.IsThisWorking
ORDER BY
ER.IsThisWorking DESC
)EX
;
-- Option B
-- Use lead/lag function to get the value
-- but my brain isn't seeing it at the moment
/*
SELECT
BO.*
-- assuming surrogate key
, LAG()
FROM
#BankOperations AS BO
INNER JOIn #ExchangeRates
*/
If I were forced to provide a purely SSIS based answer, I'd use the Lookup Component and rather than the default FULL Cache, I'd operate it in None. The performance implication is that for every row that enters the buffer, we are going to fire off a query to the source system to retrieve the one row of data. Depending on volume, this may be "heavy."
As a source, you have an OLE DB Source component pointed at BankOperations. That flows into a Lookup which we'll parameterize.
SELECT TOP 1 *
FROM
dbo.ExchangeRates AS ER
CROSS APPLY (SELECT CAST(CONCAT(ER.TransactionDate, 'T', ER.TransactionTime) AS datetime2(0)) AS IsThisWorking) ITW
WHERE
-- Match based on all of our keys
ER.Coin_1 = ?
AND ER.Coin_2 = ?
-- Eliminate what's too new
AND CAST(CONCAT(?, 'T', ?) AS datetime2(0)) >= ITW.IsThisWorking
ORDER BY
ITW.IsThisWorking DESC
All the ? in there are ordinal-specific placeholders, starting at 0. What we're looking to do is mimic the logic of the original query. Full disclosure: it's been ages since I've done a parameterized none/partial-cache lookup, so some of the finer points you'll have to read up on. What I do remember is that you'll be clicking on advanced "stuff" to get this to work.
A different approach I've seen using SSIS componentry involves two sources and a join. I think it was Matt Masson who demoed this technique, but it's been years since I've had to do it. Again, you'll have better performance if you do this in your source query, as this approach requires two sorts plus the blocking transform of a join.
The best Script Component approach is going to emulate the parameterized Lookup component approach. It remains synchronous (1 row in, 1 row out) and we'd enrich the data flow by adding our Rate column.
Pseudocode, approximately:
// make local variables with values from the row buffer
var coin_1 = Row.coin1;
var coin_2 = Row.coin2;
var transactionDate = Row.IsThisWorking;
// standard ADO.NET parameterized query stuff here
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlCommand command = conn.CreateCommand())
    {
        command.CommandText = "SELECT TOP 1 ER.Rate FROM dbo.ExchangeRates AS ER WHERE @txnDate >= ER.IsThisWorking AND ER.Coin_1 = @coin1 AND ER.Coin_2 = @coin2 ORDER BY ER.IsThisWorking DESC;";
        // I don't remember the exact syntax
        command.Parameters.AddWithValue("@txnDate", transactionDate);
        command.Parameters.AddWithValue("@coin1", coin_1);
        command.Parameters.AddWithValue("@coin2", coin_2);
        Row.Rate = (decimal)command.ExecuteScalar();
    }
}

How to count the number of columns with specific data in MS SQL

I have a temp table as shown in the following screenshot. I populate this table using an SP as an intermediate step to generate a report.
This table contains Employee ID, PID (working location) and days from 1st to 31st. If an employee has worked a day shift it's denoted by D, and night shifts are denoted by N. If an employee has worked both shifts it's denoted by D/N. Now I have to get the totals into the last columns as follows.
sub_totals - total of "D" and "N" separately. ex. "15/08"
shift_totals - total of shifts together. ex. "23" (All "D"s and "N"s)
day_totals - number of days worked (count of D, N and D/N) ex. "20"
Note: When calculating day_totals, "D/N" should be treated as one day worked.
Finally I want to show this on a report. (development language is C# and I'm using ADO.NET)
Could someone please show me how to do this in SQL if possible?
You would have to do something like this:
SELECT
EmployeeID,
PID,
(
CASE WHEN [1] = 'D' OR [1] = 'N' THEN 1 WHEN [1] = 'D/N' THEN 2 ELSE 0 END
+ CASE WHEN [2] = ...
+ ...
+ CASE WHEN [31] = 'D' OR [31] = 'N' .....
) AS shift_totals,
(
CASE WHEN [1] IS NOT NULL THEN 1 ELSE 0 END
+ CASE WHEN [2] IS ...
+ ...
) AS day_totals
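The same pattern extends to sub_totals (e.g. "15/08"). A sketch, assuming the temp table is named #shifts (a hypothetical name), counting D and N separately (with D/N counting toward both) and concatenating the results:

```sql
SELECT EmployeeID, PID,
       CAST(
           CASE WHEN [1] IN ('D', 'D/N') THEN 1 ELSE 0 END
         + CASE WHEN [2] IN ('D', 'D/N') THEN 1 ELSE 0 END
         -- ... repeat through [31]
       AS varchar(10)) + '/' +
       CAST(
           CASE WHEN [1] IN ('N', 'D/N') THEN 1 ELSE 0 END
         + CASE WHEN [2] IN ('N', 'D/N') THEN 1 ELSE 0 END
         -- ... repeat through [31]
       AS varchar(10)) AS sub_totals
FROM #shifts;
```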

Multi-column duplicates query

I'm using t-sql from csharp to query a database table that contains 5 columns of
integers which tracks the number of times certain actions have taken place in a
program. There are also other columns in the table.
Example:
Num1 Num2 Num3 Num4 Num5
1 15 22 23 32
15 4 21 17 19
6 5 15 18 20
I need to construct a query that returns each duplicated set of integers and a
count of all rows where all of these 5 columns, when considered as a set of values, have been duplicated.
To clarify further, I need to know how many times Num1=6, Num2=5, Num3=15, Num4=18, and Num5=20, if it does occur. I also need to know if any other sets of duplicates occur in these five columns.
I know some SQL, but this is a complex query that I need help with. I've tried
many subqueries etc, but I just can't figure out the right combination of
SELECT and ORDER BY's to make it work. The data table in question has about 7000
records in it and is expected to grow no larger than about 10k, so performance is secondary.
THANKS in advance.
This looks like a straightforward SELECT COUNT with a GROUP BY on the five columns.
Something along the lines of:
SELECT Num1, Num2, Num3, Num4, Num5, COUNT(*) AS cnt FROM tableA GROUP BY Num1, Num2, Num3, Num4, Num5
You can do this with a join on a subquery that takes only the duplicates and counts them:
select a.Col1, a.Col2, a.Col3, a.Num1, a.Num2, a.Num3, a.Num4, a.Num5, t.cnt as numberOfDuplicates
from tableA a
join (select Num1, Num2, Num3, Num4, Num5, count(*) as cnt
from tableA
group by Num1, Num2, Num3, Num4, Num5
having count(*) > 1
) t
on a.Num1 = t.Num1 and a.Num2 = t.Num2 and a.Num3 = t.Num3 and a.Num4 = t.Num4 and a.Num5 = t.Num5

Method to find hits in comma-separated number string on SQL Server

I have a Windows forms (c#) application and a table in SQL Server that has two columns like this:
ticket (int) | numbers (string)
12345 | '01, 02, 04, 05, 09, 10, 23'
This table may have 100,000 rows or more.
What I have to do is find the amount of hits given an array of numbers, like a lottery.
There are prizes for 12 hits, 11 hits and 9 hits, for example, and for each raffle I have to search for which tickets won with 12, 11 or 9 hits.
So, what is the best way to approach this? I need the best performance.
For now I have this code:
string sentSQL = " SELECT ticket, numbers FROM tableA";
/* CODE TO PERFORM THE CONNECTION */
/*...*/
DbDataReader reader = connection.ExecuteReader();
int hits12 = 0, hits11 = 0, hits9 = 0;
int count;
while (reader.Read())
{
count = 0;
string numbers = reader["numbers"].ToString();
string ticketNumber = reader["ticket"].ToString();
int maxJ = balls.Count; //balls is the ArrayList with the numbers currently extracted in the raffle
for (int j = 0; j < maxJ; j++)
{
if (numbers.Contains(balls[j].ToString())) // note: substring match, so "1" also matches "10", "15", ...
{
count++;
}
}
switch (count)
{
case 12:
hits12++;
break;
case 11:
hits11++;
break;
case 9:
hits9++;
break;
}
}
This is working, but maybe there is a better method.
I'm using SQL Server 2012; maybe there is a function that can help me?
Edit: Can I perform a SUM of the CHARINDEX of each number to get the amount of hits inside the SQL query?
You currently have a totally tacky solution. Store one number per row instead:
create table ticket (
ticketId int not null -- PK
)
create table TicketNumbers (
ticketId int not null,
numberSelected int not null
)
TicketNumbers has an FK to Ticket, and a PK of ticketId + numberSelected.
select t.ticketId, count(*) CorrectNumbers
from ticket t
inner join TicketNumbers tn on tn.ticketId = t.TicketId
where tn.numberSelected in (9, 11, 12, 15) -- list all winning numbers
group by t.ticketId
order by count(*) desc
Cheers -
One simple way to improve this is to update your SELECT statement to get only records with numbers greater than your first ball number and less than your last ball number + 1...
Example (probably not correct SQL):
SELECT ticket, numbers FROM tableA WHERE '10' < numbers AND '43' > numbers
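Staying client-side for a moment: the Contains check in the original loop does substring matching, so ball 1 would also "hit" on "10", "12", and so on. Splitting the stored string into exact tokens avoids that. A sketch, assuming the numbers are stored zero-padded as in the example ('01, 02, ...') and that balls holds integers:

```csharp
// build the drawn set once, zero-padded to match the stored format
var ballSet = new HashSet<string>(
    balls.Cast<object>().Select(b => Convert.ToInt32(b).ToString("00")));

while (reader.Read())
{
    int count = reader["numbers"].ToString()
                                 .Split(',')
                                 .Select(s => s.Trim())
                                 .Count(ballSet.Contains);
    // ... switch on count as before
}
```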

Get UnitsSold for Today, Same day last week and same day last year

I'm trying to build a query with multiple subqueries so that I can databind the results to a chart.
This is my current query:
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold,
{ fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( Day(DBO.[ORDER].PAYMENTDATE) = 01 )
AND ( Month(DBO.[ORDER].PAYMENTDATE) = 08 )
AND ( Year(DBO.[ORDER].PAYMENTDATE) = 2013 )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
This brings back two columns, UnitsSold and MyHour, based on yesterday's data - this works great.
However I also want that same data for the same day last week and the same day last year. I can provide the MONTH/DAY/YEAR values myself via C#; I'm just not sure how to write this complicated query.
Let me know if you need any more info.
Thanks,
Michael
The NRF Retail Calendar was created specifically to address the business problem of sales comparatives. The retail industry solved this problem in the 1930s by standardizing the calendar into a "4-5-4" calendar, where the first month of each quarter has 4 weeks, the second has 5 weeks, and the third has 4, which makes 52 weeks in 4 quarters, with 364 days per year. They addressed the leftover-days problem by periodically making 53-week years (more details here, quote below). It accounts for leap years, and ensures a Friday is always compared to a Friday.
What is the purpose of the 4-5-4 Calendar?
The 4-5-4 Calendar serves as a voluntary guide for the retail industry and ensures sales comparability between years by dividing the year into months based on a 4 weeks โ€“ 5 weeks โ€“ 4 weeks format. The layout of the calendar lines up holidays and ensures the same number of Saturdays and Sundays in comparable months. Hence, like days are compared to like days for sales reporting purposes.
Step 1: Set it up
By modeling this calendar as a dbo.RetailTime table, you make it much easier to compare sales from the correct dates - TY week 26 day 6 compares to LY week 26 day 6, whatever actual calendar dates those were. The calendar is an abstraction of the concept of time.
Something like this:
public interface IRetailTime
{
DateTime Date { get; } // does not have "time" info ("00:00:00")
int DayHour { get; }
int WeekDay { get; }
int YearWeek { get; }
int YearMonth { get; }
int YearQuarter { get; }
int Year { get; }
}
You could flesh this out further by adding QuarterDay, QuarterWeek, QuarterMonth, MonthDay, and MonthWeek fields, depending on your reporting needs. Retailers typically concatenate the Year with the YearWeek to identify each calendar week, so week 26 of year 2013 would be "201326".
Then you write a script to import the NRF calendar data into your model, and you can create a function, stored procedure, view or whatever to give you the RetailTimeId for LY and, heck, why not, LLY (2 years ago) fields (which could both be null) for each Id in your calendar table.
The result gives you something like this (below assumes hour-level granularity, with 24 hours per day):
RetailTimeId LYId LLYId
1 NULL NULL
2 NULL NULL
... ... ...
8737 1 NULL
8738 2 NULL
... ... ...
17472 8737 1
17473 8738 2
... ... ...
This gives you a lookup table (persisting it to an actual dbo.RetailTimeLookup table doesn't hurt) with an Id for LY & LLY, for each Id in your dbo.RetailTime table (RetailTimeId). You'll want a unique index on the RetailTimeId column, but not on the other two, because of 53-week years, where you'll probably want to compare the 53rd week against the 1st week of that same year.
Step 2: Correlate with your sales data
The next step is to look up the Id that corresponds to your PaymentDate, by matching the "date" part (without the "time" part) with RetailTime.Date and the "time" part (well, just the hour) with RetailTime.DayHour. This can be an expensive operation, so you may prefer having a scheduled overnight process (ETL) that populates a SalesInfo data table with the RetailTimeId for the PaymentDate already looked up, so your sales data is formatted like this:
public interface ISalesInfo
{
int RetailTimeId { get; }
int UnitsSold { get; }
}
All that's missing is a join with the TY/LY/LLY lookup view from above, and you can now "slice" your sales figures across a "time" dimension. I used to have a view for this year's sales and another for last year's sales at the lowest granularity level, like this:
CREATE VIEW vwSALES_TY AS
SELECT t.Id RetailTimeId,
t.Year,
t.YearQuarter,
t.YearMonth,
t.YearWeek,
t.WeekDay,
t.DayHour,
sales.UnitsSold Units -- total units sold
--,sales.CostAmount CostAmount -- cost value of sold units
--,sales.RetailAmount RetailAmount -- full-price value of sold units
--,sales.CurrentAmount CurrentAmount -- actual sale value of sold units
FROM dbo.RetailTime t
INNER JOIN dbo.SalesInfo sales ON t.Id = sales.RetailTimeId
WHERE t.Year = 2013
GO
CREATE VIEW vwSALES_LY AS
SELECT t.Id RetailTimeId,
t.Year,
t.YearQuarter,
t.YearMonth,
t.YearWeek,
t.WeekDay,
t.DayHour,
sales.UnitsSold Units -- total units sold
--,sales.CostAmount CostAmount -- cost value of sold units
--,sales.RetailAmount RetailAmount -- full-price value of sold units
--,sales.CurrentAmount CurrentAmount -- actual sale value of sold units
FROM dbo.RetailTime t
INNER JOIN dbo.SalesInfo sales ON t.Id = sales.RetailTimeId
WHERE t.Year = 2012
GO
The meaning of numbers
I put CostAmount, RetailAmount and CurrentAmount in there because, from a business standpoint, knowing units sold is good, but it doesn't tell you how profitable those sales were. You might have sold twice as many units LY, but if you gave them away at a high discount, your gross margin (GM%) might have been very slim or even negative, and selling half as many units TY might turn out to be a much, much better situation... if inventory is turning at a healthy rate. Every single bit of information is related to another, one way or another.
GM% is (1 - CostAmount/CurrentAmount)*100 - that's the profitability figure every suit needs to know. %Discount is (1 - CurrentAmount/RetailAmount)*100 - that's how discounted your sales were. A "units sold" figure alone doesn't tell much; there's a saying in the retail world that goes "sales are for vanity, profits are for sanity". But I'm drifting. The idea is to include as much information as possible in your granular sales data - and that includes product (ideally a SKU), point of sale and even client info, if available. Anything that's missing can never make it onto a report.
Step 3: ...Profit!
With a view that gives you TY sales and another that gives you LY sales ready to be lined-up, all that's left to do is ask the database:
SELECT t.Year,
t.YearQuarter,
t.YearMonth,
t.YearWeek,
t.WeekDay,
t.DayHour,
SUM(ISNULL(ty.Units,0)) UnitsTY,
SUM(ISNULL(ly.Units,0)) UnitsLY
FROM dbo.RetailTime t
INNER JOIN dbo.RetailTimeLookup lookup ON t.Id = lookup.RetailTimeId
LEFT JOIN dbo.vwSALES_TY ty ON lookup.RetailTimeId = ty.RetailTimeId
LEFT JOIN dbo.vwSALES_LY ly ON lookup.LYId = ly.RetailTimeId
WHERE t.Year = 2013
Now this will give you TY vs LY for each hour of each day of retail calendar year 2013 (preserving 2012 history where there's not yet a record in 2013), but that's not yet exactly what you want, although all the information is already there.
If you took the above and selected into a temporary table (or used it as a sub-query), you would need to do something like this in order to fetch only the figures you're interested in:
SELECT t.DayHour,
SUM(lw.UnitsTY) LastWeekUnitsTY,
SUM(lw.UnitsLY) LastWeekUnitsLY,
SUM(tw.UnitsTY) ThisWeekUnitsTY,
SUM(tw.UnitsLY) ThisWeekUnitsLY
FROM (SELECT DayHour FROM #above GROUP BY DayHour) t
LEFT JOIN (SELECT DayHour, UnitsTY, UnitsLY
FROM #above
WHERE YearWeek = 25 AND WeekDay = 6) lw
ON t.DayHour = lw.DayHour
LEFT JOIN (SELECT DayHour, UnitsTY, UnitsLY
FROM #above
WHERE YearWeek = 26 AND WeekDay = 6) tw
ON t.DayHour = tw.DayHour
GROUP BY t.DayHour
...But this would compare only the Friday's sales. If you wanted to calculate a week-to-date (WTD) amount that lines up against the previous year, you would simply replace WeekDay = 6 with WeekDay <= 6 in both WHERE clauses. That's why I put the SUMs and a GROUP BY.
Note
The %variance between TY and LY is (TY/LY - 1) * 100. If you have more than a single point of sale (store), you may have had fewer stores LY than TY, and that thwarts the comparison. Retailers have addressed this other problem with door-for-door %variances, often referred to as "comp increase". This is achieved by not only lining up When (the "time" dimension), but also Where (the "store" dimension), only accounting for stores that were open LY and ignoring "non-comp stores". For reports that break figures down by product hierarchy, the What also requires a join with some product data.
One last thing
The idea is to compare apples with apples - there's a reason why you need to pull these numbers: every retailer wants to know whether they're improving over LY figures. Anyone can divide two numbers and come up with a percentage figure. Unfortunately in the real-life business world, reporting accurate data is not always that simple.
Disclaimer: I have worked 9 years in the retail industry.
If I understand your question correctly, you want just one query with 3 results.
You could use a UNION ALL.
This will combine the 3 queries with the different date intervals.
You will get back one result set containing all three days.
UPDATE
You could combine the queries like this (not tested, I'm not at my PC):
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold, { fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( Day(DBO.[ORDER].PAYMENTDATE) = 01 )
AND ( Month(DBO.[ORDER].PAYMENTDATE) = 08 )
AND ( Year(DBO.[ORDER].PAYMENTDATE) = 2013 )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
union all
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold, { fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( Day(DBO.[ORDER].PAYMENTDATE) = 24 )
AND ( Month(DBO.[ORDER].PAYMENTDATE) = 07 )
AND ( Year(DBO.[ORDER].PAYMENTDATE) = 2013 )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
union all
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold, { fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( Day(DBO.[ORDER].PAYMENTDATE) = 01 )
AND ( Month(DBO.[ORDER].PAYMENTDATE) = 08 )
AND ( Year(DBO.[ORDER].PAYMENTDATE) = 2012 )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
You will get all 3 days' data back in one result set. If you want, you can also add a literal column saying which is which (today, last week, last year) for the chart.
Instead of using the supplied values as parts of a date, if you combine them into a properly cast DATE variable you can use DATEADD():
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold,
{ fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( DBO.[ORDER].PAYMENTDATE = @date
OR DBO.[ORDER].PAYMENTDATE = Dateadd(WEEK, -1, @date)
OR DBO.[ORDER].PAYMENTDATE = Dateadd(YEAR, -1, @date) )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
Also keep in mind that if you've got DATETIME as the data type on either side of the comparison, you'd want to CAST it to DATE to ignore the time portion.
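Since the MONTH/DAY/YEAR values come from C#, the @date parameter above could be supplied along these lines; the sql string and the open connection are assumed, and the variable names are illustrative:

```csharp
// build the reference date from the values you already have in C#
var date = new DateTime(2013, 08, 01);

using (var command = new SqlCommand(sql, connection))
{
    // SqlDbType.Date drops the time portion, so the equality comparisons work
    command.Parameters.Add("@date", SqlDbType.Date).Value = date;
    // ... execute and read UnitsSold/MyHour as usual
}
```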
