I have a scenario where I need to get the exchange rate for several coin pairs. I have 2 tables, one with info related to a bank operation and another with the daily exchange rates considered by the bank. I'm starting to learn data analytics, so please be patient. My English is not that great either.
Consider this example:
Table 1 (Bank Operations):
Op Number | Coin_1 | Coin_2 | Date       | Hour 1 | Weekday
1         | EUR    | GBP    | 2020/06/01 | 03:30  | Monday
Table 2 (Exchange rates):
Coin_1 | Coin_2 | Date       | Hour 2 | Weekday | Rate
EUR    | GBP    | 2020/03/01 | 11:30  | Friday  | 0.6
EUR    | GBP    | 2020/03/01 | 18:30  | Friday  | 0.5
EUR    | GBP    | 2020/06/01 | 12:30  | Monday  | 0.55
Note: The exchange rates are not updated on weekends.
I do not know how I will get this value. Using a Script Component? If so, can you help me with the algorithm? I've done all the ETL needed this far, but I can't seem to find a workaround for this task.
This can be done in SQL using the LEAD window function and some datetime maths.
create table #t1(
[Case] int,
[Op Number] int,
[Coin_1] varchar(10),
[Coin_2] varchar(10),
[Date] date,
[Hour 1] time,
[Weekday] varchar(10)
)
insert into #t1 values
( 1, 1, 'EUR', 'GBP', '2020/06/01', '03:30', 'Monday')
create table #t2(
[Case] int,
[Coin_1] varchar(10),
[Coin_2] varchar(10),
[Date] date,
[Hour 2] time,
[Weekday] varchar(10),
[Rate] decimal(10,2)
)
insert into #t2 values
( 1, 'EUR', 'GBP', '2020/03/01', '11:30', 'Friday', 0.6),
( 1, 'EUR', 'GBP', '2020/03/01', '18:30', 'Friday', 0.5 ),
( 1, 'EUR', 'GBP', '2020/06/01', '12:30', 'Monday', 0.55)
; with t1 as (
select *, dt = CAST(CONCAT([Date], ' ', [hour 1]) AS datetime2(0))
from #t1
)
, x as (
select *, dt = CAST(CONCAT([Date], ' ', [hour 2]) AS datetime2(0))
from #t2
)
, t2 as (
select [Case],
[Coin_1],
[Coin_2],
[Rate],
[Date],
[Hour 2],
[Weekday],
dt as start_dt,
isnull(lead(dt) over(partition by [case] order by dt asc), '20990101') end_dt
from x
)
select *
from t1
inner join t2 on t2.[case] = t1.[case]
and t1.dt >= t2.start_dt
and t1.dt < t2.end_dt
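As a sanity check against the sample data: the Monday 03:30 operation falls inside the window that opens with Friday's 18:30 rate and closes at Monday's 12:30 update, so the join returns Rate = 0.5. The weekend gap needs no special handling - the last Friday rate's window simply stays open until the next rate row appears.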
If this is a learning exercise, great, use the componentry of SSIS to do it. If this is real-world stuff, trust my experience on this: trying to use the SSIS pieces to make this happen will not be pleasant.
One of the bigger challenges in your existing data model is that you store date and time separately. I assume the source system stores them as date and time(0) data types. I create an actual datetime2 column in my queries so that I can let the fine engineers at Microsoft worry about getting the comparison logic correct.
Instead of a lead/lag solution as Steve proposes, I saw this as an OUTER APPLY with TOP 1 problem.
CREATE TABLE dbo.BankOperations
(
CaseNumber int
, Coin_1 char(3)
, Coin_2 char(3)
, TransactionDate date
, TransactionTime time(0)
);
CREATE TABLE dbo.ExchangeRates
(
CaseNumber int
, Coin_1 char(3)
, Coin_2 char(3)
, TransactionDate date
, TransactionTime time(0)
, Rate decimal(4, 2)
);
INSERT INTO
dbo.BankOperations
VALUES
(
1, 'EUR', 'GBP', '2020-06-01', '03:30'
)
-- boundary checking exact
,( 2, 'EUR', 'GBP', '2020-06-01', '12:30')
-- boundary beyond/not defined
,( 3, 'EUR', 'GBP', '2020-06-01', '13:30')
-- boundary before
,( 4, 'EUR', 'GBP', '2020-03-01', '10:30')
-- boundary first at
,( 5, 'EUR', 'GBP', '2020-03-01', '11:30');
INSERT INTO
dbo.ExchangeRates
VALUES
(
1, 'EUR', 'GBP', '2020-03-01', '11:30', .6
)
, (
2, 'EUR', 'GBP', '2020-03-01', '18:30', .5
)
, (
3, 'EUR', 'GBP', '2020-06-01', '12:30', .55
);
-- Creating a temp table version of the above as the separate date and time fields will
-- crush performance at scale (so too might duplicating data as we're about to do)
SELECT
X.*
, CAST(CONCAT(X.TransactionDate, 'T', X.TransactionTime) AS datetime2(0)) AS IsThisWorking
INTO
#BankOperations
FROM
dbo.BankOperations AS X;
SELECT
X.*
, CAST(CONCAT(X.TransactionDate, 'T', X.TransactionTime) AS datetime2(0)) AS IsThisWorking
INTO
#ExchangeRates
FROM
dbo.ExchangeRates AS X;
-- Option A for pinning data
-- Outer apply will use the TOP 1 to get the closest rate without going over
SELECT
BO.*
-- assuming surrogate key
, EX.CaseNumber
, EX.Rate
FROM
#BankOperations AS BO
OUTER APPLY
(
SELECT TOP 1 *
FROM
#ExchangeRates AS ER
WHERE
-- Match based on all of our keys
ER.Coin_1 = BO.Coin_1
AND ER.Coin_2 = BO.Coin_2
-- Eliminate what's too new
AND BO.IsThisWorking >= ER.IsThisWorking
ORDER BY
ER.IsThisWorking DESC
)EX
;
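For what it's worth, hand-checking the boundary-test rows above against the sample rates, Option A should return:
CaseNumber  Rate
1           0.50  -- Monday 03:30, last update was Friday 18:30
2           0.55  -- exactly at the Monday 12:30 update
3           0.55  -- after the Monday update
4           NULL  -- before any rate existed; OUTER APPLY keeps the row
5           0.60  -- exactly at the first update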
-- Option B
-- Use lead/lag function to get the value
-- but my brain isn't seeing it at the moment
/*
SELECT
BO.*
-- assuming surrogate key
, LAG()
FROM
#BankOperations AS BO
INNER JOIN #ExchangeRates
*/
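For completeness, here is a sketch of how Option B could look - the same LEAD trick as the first answer, turning each rate row into a half-open validity window (this assumes the #BankOperations/#ExchangeRates temp tables built above; treat it as an untested sketch):
-- Option B sketch: turn each rate row into a [start, end) validity window
WITH RateWindows AS
(
    SELECT
        ER.*
    ,   LEAD(ER.IsThisWorking) OVER
            (PARTITION BY ER.Coin_1, ER.Coin_2 ORDER BY ER.IsThisWorking) AS NextWorking -- NULL for the newest rate
    FROM #ExchangeRates AS ER
)
SELECT
    BO.*
,   RW.Rate
FROM #BankOperations AS BO
INNER JOIN RateWindows AS RW
    ON RW.Coin_1 = BO.Coin_1
    AND RW.Coin_2 = BO.Coin_2
    AND BO.IsThisWorking >= RW.IsThisWorking
    AND (BO.IsThisWorking < RW.NextWorking OR RW.NextWorking IS NULL);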
If I were forced to provide a purely SSIS based answer, I'd use the Lookup Component and rather than the default FULL Cache, I'd operate it in None. The performance implication is that for every row that enters the buffer, we are going to fire off a query to the source system to retrieve the one row of data. Depending on volume, this may be "heavy."
As a source, you have an OLE DB Source component pointed at BankOperations. That flows into a Lookup which we'll parameterize.
SELECT TOP 1 *
FROM
dbo.ExchangeRates AS ER
CROSS APPLY (SELECT CAST(CONCAT(ER.TransactionDate, 'T', ER.TransactionTime) AS datetime2(0)) AS IsThisWorking) ITW
WHERE
-- Match based on all of our keys
ER.Coin_1 = ?
AND ER.Coin_2 = ?
-- Eliminate what's too new
AND CAST(CONCAT(?, 'T', ?) AS datetime2(0)) >= ITW.IsThisWorking
ORDER BY
ITW.IsThisWorking DESC
All the ? in there are ordinal-specific placeholders, starting at 0. What we're looking to do is mimic the logic of the original query. Full disclosure: it's been ages since I've done a parameterized none/partial-cache lookup, so some of the finer points you'll have to read up on. What I do remember is that you'll be clicking on advanced "stuff" to get this to work.
A different approach I've seen using SSIS componentry involves two sources and a Merge Join. I think it was Matt Masson who demoed this technique, but it's been years since I've had to do it. Again, you'll have better performance if you do this in your source query, as this approach will require two sorts plus the blocking Merge Join transform.
The best Script Component approach is going to emulate the parameterized Lookup Component approach. It remains synchronous (1 row in, 1 row out), and we'd enrich the data flow by adding our Rate column.
Pseudocode, approximately:
// make local variables with values from the row buffer
var coin_1 = Row.coin1;
var coin_2 = Row.coin2;
var transactionDate = Row.IsThisWorking;
// standard ADO.NET parameterized query stuff here
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlCommand command = new SqlCommand())
    {
        command.Connection = conn;
        command.CommandText = "SELECT TOP 1 ER.Rate FROM dbo.ExchangeRates AS ER WHERE @txnDate >= ER.IsThisWorking AND ER.Coin_1 = @coin1 AND ER.Coin_2 = @coin2 ORDER BY ER.IsThisWorking DESC;";
        // I don't remember exact syntax
        command.Parameters.AddWithValue("@txnDate", transactionDate);
        command.Parameters.AddWithValue("@coin1", coin_1);
        command.Parameters.AddWithValue("@coin2", coin_2);
        // execute and copy the single Rate value into the output column
        Row.Rate = (decimal)command.ExecuteScalar();
    }
}
I have an old database with a table with a dateEntered column (varchar datatype), which can contain dates in various formats like:
A> 'wk231216' which means (Week)23-(Date)12-(Year)2016
B> '231216' which means (Week)23-(Date)12-(Year)2016
C> 'wk132717' which means (week)13-(Date)27-(year)2017
Now I need to convert the above dates into the 'YYYY-MM-DD' format:
A> should become 2016-06-12 (wk23 of 2016 is in June (06))
B> should become 2016-06-12
C> should become 2017-03-27
Can anyone suggest how to achieve this?
Thank You!!
Don't use a UDF, it won't scale; don't use a cursor, it won't scale either.
As a starter for ten (either there is a mistake in your examples, or I don't follow the logic 100%), you could try something like this:
DECLARE @table TABLE (naff_date VARCHAR(50));
INSERT INTO @table SELECT 'wk231216';
INSERT INTO @table SELECT '231216';
INSERT INTO @table SELECT 'wk132717';
--Preformat
WITH x AS (SELECT CASE WHEN naff_date LIKE 'wk%' THEN SUBSTRING(naff_date, 3, 50) ELSE naff_date END AS naff_date FROM @table WHERE LEN(naff_date) IN (6, 8))
SELECT
'20' + RIGHT(naff_date, 2) + '-' + RIGHT('0' + CONVERT(VARCHAR(2), MONTH(CONVERT(INT, DATEADD(WEEK, CONVERT(INT, SUBSTRING(naff_date, 1, 2)), '20170101')))), 2) + '-' + SUBSTRING(naff_date, 3, 2) AS new_date
FROM
x
WHERE
ISNUMERIC(naff_date) = 1;
There are lots of "ifs and buts" in there, and I did my best to remove any dates that would cause an error. I would suggest trying this, but obviously adjusting the query to use your real table, then writing the results somewhere else for sanity checking, hoovering up errors, etc.
I get the following results:
new_date
2016-06-12
2016-06-12
2017-04-27
So the same for the first two examples, but a month out for the other (though I think my result is correct here?).
I guess if your week was "zero based" then this would work better; it also no longer assumes every year lines up with 2017 (as my original answer did, which ignored leap years):
DECLARE @table TABLE (naff_date VARCHAR(50));
INSERT INTO @table SELECT 'wk231216';
INSERT INTO @table SELECT '231216';
INSERT INTO @table SELECT 'wk132717';
--Preformat
WITH x AS (SELECT CASE WHEN naff_date LIKE 'wk%' THEN SUBSTRING(naff_date, 3, 50) ELSE naff_date END AS naff_date FROM @table WHERE LEN(naff_date) IN (6, 8)),
--Get the year, week and day
y AS (
SELECT
'20' + RIGHT(naff_date, 2) AS [year],
SUBSTRING(naff_date, 1, 2) AS [week],
SUBSTRING(naff_date, 3, 2) AS [day]
FROM
x
WHERE
ISNUMERIC(naff_date) = 1)
--Now we have enough information for the whole date
SELECT
[year] + '-' + RIGHT('0' + CONVERT(VARCHAR(2), MONTH(DATEADD(WEEK, CONVERT(INT, [week]) - 1, CONVERT(DATE, [year] + '0101')))), 2) + '-' + [day] AS new_date
FROM
y;
Being new to SQL, I'm sure there are a lot of better ways of doing this, but you could create the following function:
CREATE FUNCTION ConvertFromWeekDateToMyDate(@input VARCHAR(10))
RETURNS DateTime
AS BEGIN
declare
@Year char(4),
@Week TINYINT,
@Day TINYINT,
@month TINYINT,
@hasWk bit
set @hasWk = case when LEFT(@input, 2) = 'wk' then 1 else 0 end
set @Week = case when @hasWk = 1 then substring(@input, 3, 2) else substring(@input, 1, 2) end
set @Day = case when @hasWk = 1 then substring(@input, 5, 2) else substring(@input, 3, 2) end
set @Year = '20' + RIGHT(@input, 2)
set @month = DATEPART(MM,CAST(CONVERT(CHAR(3),
DATEADD(WW,@Week - 1,
CONVERT(datetime,'01/01/'+ CONVERT(char(4),@Year))),100) + ' 1900' AS DATETIME))
RETURN DATEFROMPARTS (@Year, @month, @Day)
END
Then call it similar to:
update YourTable set DateColumn = dbo.ConvertFromWeekDateToMyDate(DateColumn)
You can tweak the variable types, names, function names, etc.. as needed but it should get you started.
The above is using the Week-To-Month conversion code from this SO post ► Get Month from Calendar Week (SQL Server)
Testing the above Function using the following inputs:
SELECT dbo.ConvertFromWeekDateToMyDate('wk231216') as 'wk231216',
dbo.ConvertFromWeekDateToMyDate('231216') as '231216',
dbo.ConvertFromWeekDateToMyDate('wk132717') as 'wk132717'
Gave me this result (which seems to match your expected result):
wk231216 231216 wk132717
----------------------- ----------------------- -----------------------
2016-06-12 00:00:00.000 2016-06-12 00:00:00.000 2017-03-27 00:00:00.000
I have a task to solve. I am trying to display the operation time of two machines (number1 & number2) in a diagram. Therefore I store information in a table. The columns are id, date, number1, number2.
Let's assume I have this specific dataset:
id | date     | number1 | number2
1  | 24.09.14 | 100     | 120
2  | 01.10.14 | 150     | 160
For displaying the information I need to retrieve the following data.
((number1(2)- number1(1)) + number2(2) - number1(1))/2)/(number of days (date2 - date1))
This should result in the following specific numbers.
((150-100 + 160-120)/2)/7= 6,42
Or in plain words: the result should be the average daily operation time across all of my machines. Subtracting Saturdays and Sundays from the number of days would be nice, but is not necessary.
I hope that you understand my question. In essence I am facing the problem that I don't know how to work with different rows from a simple SQL query.
The programming language is C# in a Razor-based web project.
First, I doubt that you have only 2 records in the database. Here is some code that makes the calculation for every 2 rows in the DataSet.
// check once, before the loop, that the rows pair up evenly
if (dst.Tables[0].Rows.Count % 2 != 0)
    Console.WriteLine("Wrong records count");
for (int i = 0; i < dst.Tables[0].Rows.Count - 1; i += 2)
{
    int number1First = Convert.ToInt32(dst.Tables[0].Rows[i]["Number1"]);
    int number2First = Convert.ToInt32(dst.Tables[0].Rows[i]["Number2"]);
    int number1Second = Convert.ToInt32(dst.Tables[0].Rows[i + 1]["Number1"]);
    int number2Second = Convert.ToInt32(dst.Tables[0].Rows[i + 1]["Number2"]);
    DateTime dateFirst = Convert.ToDateTime(dst.Tables[0].Rows[i]["Date"]);
    DateTime dateSecond = Convert.ToDateTime(dst.Tables[0].Rows[i + 1]["Date"]);
    // ((delta of number1 + delta of number2) / 2) / days between the two rows
    double calc = ((number1Second - number1First) + (number2Second - number2First)) / 2.0
                  / (dateSecond - dateFirst).TotalDays;
    Console.WriteLine(calc);
}
I wrote it to be as clear as possible to understand.
Your formula probably has a mistake, compared with your numerical sample:
((number1(2) - number1(1)) + number2(2) - number2(1))/2)/(number of days (date2 - date1))
If the values of the id column are chronological and have no holes (1, 2, 3, 4, ... is OK, but 1, 3, 4, 6 is not), you can try the following script:
SELECT t1.number1, t2.number1, t1.number2, t2.number2, DATEDIFF(DAY, t1.date, t2.date)
, (((t2.number1 - t1.number1) + t2.number2 - t1.number2) / 2) / DATEDIFF(DAY, t1.date, t2.date) as result
FROM #tmp t1
INNER JOIN #tmp t2 ON t1.id + 1 = t2.id
--- I create a #tmp table for test
CREATE table #tmp
(
id int,
Date DateTime,
number1 float,
number2 float
)
--- insert sample data
INSERT INTO #tmp (id, Date, number1, number2) VALUES (1, '2014-09-24T00:00:00', 100, 120), (2, '2014-10-01T00:00:00', 150, 160)
It works great on my SQL Server.
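For the sample rows, that is ((150 - 100) + (160 - 120)) / 2 = 45, divided by DATEDIFF(DAY, '2014-09-24', '2014-10-01') = 7 days, giving 6.43 - the 6,42 from the question, allowing for rounding.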
Yes, you can do it with a SQL query. Try the query below.
SELECT
N1.Date as PeriodStartDate,
N2.Date as PeriodEndDate,
CAST(CAST((((N2.number1- N1.number1) + (n2.number2 - N1.number2))/2) AS DECIMAL(18,2))/(datediff(d,n1.date,n2.date)) AS DECIMAL(18,2) ) AS AverageDailyOperation
FROM
[dbo].[NumberTable] N1
INNER JOIN
[dbo].[NumberTable] N2
ON N2.Date>N1.Date
I have assumed the table name is NumberTable, and I have added PeriodStartDate and PeriodEndDate to make the output meaningful. You can remove them as per your need.
I'm trying to build a query with multiple subqueries so that I can databind the results to a chart.
This is my current query:
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold,
{ fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( Day(DBO.[ORDER].PAYMENTDATE) = 01 )
AND ( Month(DBO.[ORDER].PAYMENTDATE) = 08 )
AND ( Year(DBO.[ORDER].PAYMENTDATE) = 2013 )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
This brings back two columns, UnitsSold and MyHour, based on yesterday's data - this works great.
However, I also want to get that same data for the same day last week and the same day last year. I can provide the MONTH/DAY/YEAR values myself via C# - I'm just not sure how to build this complicated query.
Let me know if you need any more info.
Thanks,
Michael
The NRF Retail Calendar was created specifically to address the business problem of sales comparatives - the retail industry solved this problem in the 1930s by standardizing the calendar into a "4-5-4" calendar, where the first month of each quarter has 4 weeks, the second has 5 weeks, and the third has 4, making 52 weeks in 4 quarters, with 364 days per year. They addressed the leftover days by periodically making 53-week years (more details here; quote below). It accounts for leap years, and ensures a Friday is always compared to a Friday.
What is the purpose of the 4-5-4 Calendar?
The 4-5-4 Calendar serves as a voluntary guide for the retail industry and ensures sales comparability between years by dividing the year into months based on a 4 weeks – 5 weeks – 4 weeks format. The layout of the calendar lines up holidays and ensures the same number of Saturdays and Sundays in comparable months. Hence, like days are compared to like days for sales reporting purposes.
Step 1: Set it up
By modeling this calendar into some dbo.RetailTime table, you make it much easier to compare sales from the correct dates - TY week 26 day 6 compares to LY week 26 day 6, whichever actual calendar dates those were. The calendar is an abstraction of the concept of time.
Something like this:
public interface IRetailTime
{
DateTime Date { get; } // does not have "time" info ("00:00:00")
int DayHour { get; }
int WeekDay { get; }
int YearWeek { get; }
int YearMonth { get; }
int YearQuarter { get; }
int Year { get; }
}
You could flesh this out further by adding fields QuarterDay, QuarterWeek, QuarterMonth, MonthDay, and MonthWeek, depending on your reporting needs. Retailers typically concatenate the Year with the YearWeek to identify each calendar week, so week 26 of year 2013 would be "201326".
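For reference, a minimal T-SQL sketch of what the dbo.RetailTime table behind that interface could look like (column names assumed from the interface above):
CREATE TABLE dbo.RetailTime
(
    Id          int IDENTITY(1,1) PRIMARY KEY
,   [Date]      date NOT NULL -- no "time" portion
,   DayHour     int NOT NULL  -- 0..23
,   [WeekDay]   int NOT NULL  -- 1..7
,   YearWeek    int NOT NULL  -- 1..53
,   YearMonth   int NOT NULL  -- 1..12
,   YearQuarter int NOT NULL  -- 1..4
,   [Year]      int NOT NULL
);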
Then you write a script to import the NRF calendar data into your model, and you can create a function, stored procedure, view, or whatever, to give you the LY and, heck why not, LLY (2 years ago) RetailTimeIds (which could both be null) for each Id in your calendar table.
The result gives you something like this (below assumes hour-level granularity, with 24 hours per day):
RetailTimeId LYId LLYId
1 NULL NULL
2 NULL NULL
... ... ...
8737 1 NULL
8738 2 NULL
... ... ...
17472 8737 1
17473 8738 2
... ... ...
This gives you a lookup table (persisting it to an actual dbo.RetailTimeLookup table doesn't hurt) with an Id for LY & LLY, for each Id in your dbo.RetailTime table (RetailTimeId). You'll want a unique index on the RetailTimeId column, but not on the other two, because of 53-week years, where you'll probably want to compare the 53rd week against the 1st week of that same year.
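One way that lookup could be built, as a hedged sketch: a self-join on the retail coordinates (week/day/hour), which is exactly the "week 26 day 6 vs week 26 day 6" alignment; the 53-week special case still needs the manual handling described above:
SELECT
    ty.Id  AS RetailTimeId
,   ly.Id  AS LYId
,   lly.Id AS LLYId
INTO dbo.RetailTimeLookup
FROM dbo.RetailTime AS ty
LEFT JOIN dbo.RetailTime AS ly
    ON  ly.[Year] = ty.[Year] - 1
    AND ly.YearWeek = ty.YearWeek
    AND ly.[WeekDay] = ty.[WeekDay]
    AND ly.DayHour = ty.DayHour
LEFT JOIN dbo.RetailTime AS lly
    ON  lly.[Year] = ty.[Year] - 2
    AND lly.YearWeek = ty.YearWeek
    AND lly.[WeekDay] = ty.[WeekDay]
    AND lly.DayHour = ty.DayHour;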
Step 2: Correlate with your sales data
The next step is to look up the Id that corresponds to your PaymentDate, by matching the "date" part (without the "time" part) with RetailTime.Date and the "time" part (well, just the hour) with RetailTime.DayHour. This can be an expensive operation, so you may prefer a scheduled overnight process (ETL) that populates a "SalesInfo" data table with the RetailTimeId for the PaymentDate already looked up, so your sales data is formatted like this:
public interface ISalesInfo
{
int RetailTimeId { get; }
int UnitsSold { get; }
}
All that's missing is a join with the TY/LY/LLY lookup view from above, and you can now "slice" your sales figures across a "time" dimension - I used to have a view for this year sales and another for last year sales at the lowest granularity level, like this:
CREATE VIEW vwSALES_TY AS
SELECT t.Id RetailTimeId,
t.Year,
t.YearQuarter,
t.YearMonth,
t.YearWeek,
t.WeekDay,
t.DayHour,
sales.UnitsSold Units -- total units sold
--,sales.CostAmount CostAmount -- cost value of sold units
--,sales.RetailAmount RetailAmount -- full-price value of sold units
--,sales.CurrentAmount CurrentAmount -- actual sale value of sold units
FROM dbo.RetailTime t
INNER JOIN dbo.SalesInfo sales ON t.Id = sales.RetailTimeId
WHERE t.Year = 2013
GO
CREATE VIEW vwSALES_LY AS
SELECT t.Id RetailTimeId,
t.Year,
t.YearQuarter,
t.YearMonth,
t.YearWeek,
t.WeekDay,
t.DayHour,
sales.UnitsSold Units -- total units sold
--,sales.CostAmount CostAmount -- cost value of sold units
--,sales.RetailAmount RetailAmount -- full-price value of sold units
--,sales.CurrentAmount CurrentAmount -- actual sale value of sold units
FROM dbo.RetailTime t
INNER JOIN dbo.SalesInfo sales ON t.Id = sales.RetailTimeId
WHERE t.Year = 2012
GO
The meaning of numbers
I put CostAmount, RetailAmount and CurrentAmount in there because, from a business standpoint, knowing units sold is good, but it doesn't tell you how profitable those sales were - you might have sold twice as many units LY, but if you gave them away at a high discount, your gross margin (GM%) might have been very slim or even negative, and selling half as many units TY might turn out to be a much, much better situation... if inventory is turning at a healthy rate - every single bit of information is related to another, one way or another.
GM% is (1-CostAmount/CurrentAmount)*100 - that's the profitability figure every suit needs to know. %Discount is (1-CurrentAmount/RetailAmount)*100 - that's how discounted your sales were. A "units sold" figure alone doesn't tell much; there's a saying in the Retail World that goes "Sales is for vanity, profits for sanity". But I'm drifting. The idea is to include as much information as possible in your granular sales data - and that does include product (ideal is a SKU), point of sale and even client info, if that's available. Anything that's missing can never make it onto a report.
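To make those two formulas concrete: an item that cost 60 and actually sold for 100 has GM% = (1 - 60/100) * 100 = 40%, and if its full price was 125, its %Discount is (1 - 100/125) * 100 = 20%.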
Step 3: ...Profit!
With a view that gives you TY sales and another that gives you LY sales ready to be lined-up, all that's left to do is ask the database:
SELECT t.Year,
t.YearQuarter,
t.YearMonth,
t.YearWeek,
t.WeekDay,
t.DayHour,
SUM(ISNULL(ty.Units,0)) UnitsTY,
SUM(ISNULL(ly.Units,0)) UnitsLY
FROM dbo.RetailTime t
INNER JOIN dbo.RetailTimeLookup lookup ON t.Id = lookup.RetailTimeId
LEFT JOIN dbo.vwSALES_TY ty ON lookup.RetailTimeId = ty.RetailTimeId
LEFT JOIN dbo.vwSALES_LY ly ON lookup.LYId = ly.RetailTimeId
WHERE t.Year = 2013
Now this will give you TY vs LY for each hour of each day of retail calendar year 2013 (preserving 2012 history where there's not yet a record in 2013), but that's not yet exactly what you want, although all the information is already there.
If you took the above and selected into a temporary table (or used it as a sub-query), you would need to do something like this in order to fetch only the figures you're interested in:
SELECT t.DayHour,
SUM(lw.UnitsTY) LastWeekUnitsTY,
SUM(lw.UnitsLY) LastWeekUnitsLY,
SUM(tw.UnitsTY) ThisWeekUnitsTY,
SUM(tw.UnitsLY) ThisWeekUnitsLY
FROM (SELECT DayHour FROM #above GROUP BY DayHour) t
LEFT JOIN (SELECT DayHour, UnitsTY, UnitsLY
FROM #above
WHERE YearWeek = 25 AND WeekDay = 6) lw
ON t.DayHour = lw.DayHour
LEFT JOIN (SELECT DayHour, UnitsTY, UnitsLY
FROM #above
WHERE YearWeek = 26 AND WeekDay = 6) tw
ON t.DayHour = tw.DayHour
GROUP BY t.DayHour
...But this would be comparing only the sales of the Friday. If you wanted to calculate a week-to-date (WTD) amount that lines up against the previous year, you would simply replace WeekDay = 6 with WeekDay <= 6 in both WHERE clauses. That's why I put SUMs and a GROUP BY.
Note
The %variance between TY and LY is (TY/LY - 1) * 100. If you have more than a single point of sale (/store), you may have fewer stores LY than TY and that thwarts the comparison. Retailers have addressed this other problem with door-for-door %variances, often referred to as "comp increase". This is achieved by not only lining up When (the "time" dimension), but also Where (the "store" dimension), only accounting for stores that were opened LY, ignoring "non-comp stores". For reports that break figures down a product hierarchy, the What also requires a join with some product data.
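As a quick example of that %variance formula: 110 units TY against 100 units LY is (110/100 - 1) * 100 = +10%.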
One last thing
The idea is to compare apples with apples - there's a reason why you need to pull these numbers: every retailer wants to know whether they're improving over LY figures. Anyone can divide two numbers and come up with a percentage figure. Unfortunately in the real-life business world, reporting accurate data is not always that simple.
Disclaimer: I have worked 9 years in the retail industry.
If I understand your question correctly, you want just one query with 3 results.
You could use a union all.
This will combine the 3 queries with the different date intervals.
You will get back one resultset with 3 rows.
UPDATE
You could combine the queries like this (not tested, not at my PC):
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold, { fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( Day(DBO.[ORDER].PAYMENTDATE) = 01 )
AND ( Month(DBO.[ORDER].PAYMENTDATE) = 08 )
AND ( Year(DBO.[ORDER].PAYMENTDATE) = 2013 )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
union all
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold, { fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( Day(DBO.[ORDER].PAYMENTDATE) = 24 )
AND ( Month(DBO.[ORDER].PAYMENTDATE) = 07 )
AND ( Year(DBO.[ORDER].PAYMENTDATE) = 2013 )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
union all
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold, { fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( Day(DBO.[ORDER].PAYMENTDATE) = 01 )
AND ( Month(DBO.[ORDER].PAYMENTDATE) = 08 )
AND ( Year(DBO.[ORDER].PAYMENTDATE) = 2012 )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
You will get 3 rows with your data. If you want, you can also add a fake column saying which is which (today, last week, last year) for the chart.
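For example (column name assumed), prepend a string literal to each branch: SELECT 'today' AS Period, Sum(DBO.ORDERLINE.QTY) AS UnitsSold, ... in the first query, then 'lastweek' in the second and 'lastyear' in the third, so the chart code can tell the rows apart.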
Instead of using the supplied values as parts of a date, if you combine them into a properly cast DATE variable you can use DATEADD():
SELECT TOP (100) PERCENT Sum(DBO.ORDERLINE.QTY) AS UnitsSold,
{ fn HOUR(dbo.[Order].PaymentDate) } AS MyHour
FROM DBO.[ORDER]
INNER JOIN DBO.ORDERLINE
ON DBO.[ORDER].ORDERID = DBO.ORDERLINE.ORDERID
WHERE ( DBO.[ORDER].WEBSITEID = 2 )
AND ( DBO.[ORDER].ORDERSTATUSID = 2 )
AND ( DBO.[ORDER].PAYMENTDATE = @date
OR DBO.[ORDER].PAYMENTDATE = Dateadd(WEEK, -1, @date)
OR DBO.[ORDER].PAYMENTDATE = Dateadd(YEAR, -1, @date) )
GROUP BY { fn HOUR(dbo.[Order].PaymentDate) }
Also keep in mind that if you've got DATETIME as your data type on either end of the equation, you'd want to CAST it as DATE to ignore the TIME portion.
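Note that @date is not declared in the snippet above; it would come in from your C# parameters, or for testing you could declare it yourself, e.g. (value assumed):
DECLARE @date date = '2013-08-01';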
I am looking for a way that would allow me to have a column where I would simply manually input a number, and the list would then sort itself based on that number. This is using C# as the language for the listing, and it's MS SQL.
I don't have much information, but if anyone wants to know anything else, feel free to ask and I will try to answer to the best of my ability.
They are currently stored as strings.
The thing that causes it to get thrown out is that some of the lists contain ranges while some contain fractions, both of which are displayed the same, i.e. 1/2 can mean 1-2 or 1/2 (half).
All of the SQL connection is done using NHibernate.
The reason for not simply sorting it normally is that the list is currently ordered using fractions, and that works fine; however, when a value gets over 1, it seems to break and throws all of them to the bottom of the list.
Example of how I would like this to work:
I would have a column in my database named "DisplayOrder" or something along those lines.
If the database row says "1", this would be the first item to appear in the list.
If the database row says "8", this would be the 8th item to appear in the list.
I've put together a quick function to parse your fractions and convert them to floats, which can then be used for comparison and sorting. You may need more in the "sanitization" step, possibly normalizing whitespace, removing other characters, etc., and you may have to check the divisor for zero (only you know your data).
create function dbo.FractionToFloat(@fraction varchar(100))
returns float
as
begin
declare @input varchar(100)
, @output float
, @whole int
, @dividend int
, @divisor int
-- Sanitize input
select @input = ltrim(rtrim(replace(replace(@fraction, '''', ''), '"', '')))
select @whole = cast(case
when charindex('/', @input) = 0 then @input
when charindex(' ', @input) = 0 then '0'
else left(@input, charindex(' ', @input) - 1)
end as int)
select @dividend = cast(case
when charindex('/', @input) = 0 then '0'
when charindex(' ', @input) = 0 then left(@input, charindex('/', @input) - 1)
else substring(@input, charindex(' ', @input) + 1, charindex('/', @input) - charindex(' ', @input) - 1)
end as int)
select @divisor = cast(case
when charindex('/', @input) = 0 then '1'
else right(@input, charindex('/', reverse(@input)) - 1)
end as int)
select @output = cast(@whole as float) + (cast(@dividend as float) / cast(@divisor as float))
return @output
end
This way, you can simply order by the function's output like so:
select *
from MyTable
order by dbo.FractionToFloat(MyFractionColumn)
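As a quick check, dbo.FractionToFloat('1 1/2') should return 1.5 and dbo.FractionToFloat('3/4') should return 0.75, so values below and above 1 now sort together numerically.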
I wouldn't normally suggest rolling your own parser that you have to maintain as the data changes, but this seems simple enough on the surface, and probably better than manually maintaining an ordinal column. Also, if you had to compare various units (feet to inches, minutes to seconds), then this gets more complicated. I'm removing anything that looks like a unit in your demo data.
The sorting result seems perfectly normal, since I really think that the database doesn't understand what "1 1/4" means.
Which type does your database field have? Text/varchar/string, I guess?
Maybe you should create a scalar function to convert the fraction values to numeric ones, and then call:
SELECT field1, field2, dbo.ParseAndConvertProc(DisplayOrder) AS disp_order FROM YourTable ORDER BY disp_order
Ignoring for the moment that 1/2 can mean between 1 and 2, you can deal with the madness and try to get some order out of it.
CREATE TABLE #Fractions (
Frac varchar(15)
);
INSERT #Fractions VALUES
('1 1/2'), ('1 1/4'), ('1"'), ('1/2'), ('1/4'), ('1/8'),
('2 1/2'), ('2"'), ('3"'), ('3/4'), ('3/8'), ('4"');
WITH Canonical AS (
SELECT
Frac,
F,
CharIndex(' ', F) S,
CharIndex('/', F) D
FROM
(SELECT Frac, Replace(Frac, '"', '') F FROM #Fractions) F
WHERE
F NOT LIKE '%[^0-9 /]%'
), Parts AS (
SELECT
Frac,
Convert(int, Left(F, CASE WHEN D = 0 THEN Len(F) ELSE S END)) W,
Convert(int, CASE WHEN D > 0 THEN Substring(F, S + 1, D - S - 1) ELSE '' END) N,
Convert(int, CASE WHEN D > 0 THEN Substring(F, D + 1, 2147483647) ELSE '1' END) D
FROM Canonical
)
SELECT
Frac,
W + N * 1.0 / D Calc
FROM Parts
ORDER BY Calc;
DROP TABLE #Fractions;
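If the parsing is right, Calc should order the sample values as 1/8, 1/4, 3/8, 1/2, 3/4, 1", 1 1/4, 1 1/2, 2", 2 1/2, 3", 4" - the numeric order a human would expect.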
But I don't recommend this. Use my query as a base to get the correct value, and switch to using decimal values.
Note you also have to search your data for characters this code didn't account for, using the pattern match above with LIKE instead of NOT LIKE. Then remove them or account for them somehow.
If what you said is true, that 1/2 can mean two things and you can tell the difference between them, then you can do something about it: store the range in two columns. If they are the same, then there is no range.
If you can't tell the difference between the two meanings then you're just majorly messed up and may as well quit your job and go race sled dogs in Alaska.
I am building a C#/ASP.NET app with an SQL backend. I am on deadline and finishing up my pages; out of left field, one of my designers incorporated a full text search on one of my pages. My "searches" up until this point have been filters, being able to narrow a result set by certain factors and column values.
Being that I'm on deadline (you know, 3 hours of sleep a night, at the point where I look like something the cat ate and threw up), I was expecting this page to be very similar to the others, and I'm trying to decide whether or not to make a stink. I have never done a full text search on a page before... is this a mountain to climb, or is there a simple solution?
thank you.
First off, you need to enable full-text search indexing on the production servers, so if that's not in scope, you're not going to want to go with this.
However, if that's already ready to go, full-text searching is relatively simple.
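The one-time setup per table is small - a minimal sketch, assuming the Tbl_Users table used below has a unique key index named PK_Tbl_Users (that index name is an assumption):
-- create a default full-text catalog, then index the column to search
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT;
CREATE FULLTEXT INDEX ON Tbl_Users (UserName)
    KEY INDEX PK_Tbl_Users;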
T-SQL has 4 predicates used for full text search:
FREETEXT
FREETEXTTABLE
CONTAINS
CONTAINSTABLE
FREETEXT is the simplest, and can be done like this:
SELECT UserName
FROM Tbl_Users
WHERE FREETEXT (UserName, 'bob' )
Results:
JimBob
Little Bobby Tables
FREETEXTTABLE works the same as FREETEXT, except it returns the results as a table.
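A hedged sketch of what that looks like - FREETEXTTABLE returns [KEY] and [RANK] columns you join back to the base table (the UserId key column here is an assumption):
SELECT u.UserName, ft.[RANK]
FROM FREETEXTTABLE(Tbl_Users, UserName, 'bob') AS ft
INNER JOIN Tbl_Users AS u
    ON u.UserId = ft.[KEY]
ORDER BY ft.[RANK] DESC;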
The real power of T-SQL's full text search comes from the CONTAINS (and CONTAINSTABLE) predicate...This one is huge, so I'll just paste its usage in:
CONTAINS
( { column | * } , '< contains_search_condition >'
)
< contains_search_condition > ::=
{ < simple_term >
| < prefix_term >
| < generation_term >
| < proximity_term >
| < weighted_term >
}
| { ( < contains_search_condition > )
{ AND | AND NOT | OR } < contains_search_condition > [ ...n ]
}
< simple_term > ::=
word | " phrase "
< prefix term > ::=
{ "word * " | "phrase * " }
< generation_term > ::=
FORMSOF ( INFLECTIONAL , < simple_term > [ ,...n ] )
< proximity_term > ::=
{ < simple_term > | < prefix_term > }
{ { NEAR | ~ } { < simple_term > | < prefix_term > } } [ ...n ]
< weighted_term > ::=
ISABOUT
( { {
< simple_term >
| < prefix_term >
| < generation_term >
| < proximity_term >
}
[ WEIGHT ( weight_value ) ]
} [ ,...n ]
)
This means you can write queries such as:
SELECT UserName
FROM Tbl_Users
WHERE CONTAINS(UserName, '"little*" NEAR tables')
Results:
Little Bobby Tables
Good luck :)
Full text search in SQL Server is really easy: a bit of configuration and a slight tweak on the query side and you are good to go! Being familiar with the process, I have done it for clients in under 20 minutes.
Here is the 2008 MSDN article; links go out to the 2005 versions from there.
I have used dtSearch before for adding full text searching to files and databases, and their stuff is pretty cheap and easy to use.
Short of adding all that and configuring SQL, this script will search through all columns in a database and tell you which columns contain the values you are looking for. I know it's not the "proper" solution, but it may buy you some time.
/*This script will find any text value in the database*/
/*Output will be directed to the Messages window. Don't forget to look there!!!*/
SET NOCOUNT ON
DECLARE @valuetosearchfor varchar(128), @objectOwner varchar(64)
SET @valuetosearchfor = '%staff%' --should be formatted as a like search
SET @objectOwner = 'dbo'
DECLARE @potentialcolumns TABLE (id int IDENTITY, sql varchar(4000))
INSERT INTO @potentialcolumns (sql)
SELECT
('if exists (select 1 from [' +
[tabs].[table_schema] + '].[' +
[tabs].[table_name] +
'] (NOLOCK) where [' +
[cols].[column_name] +
'] like ''' + @valuetosearchfor + ''' ) print ''SELECT * FROM [' +
[tabs].[table_schema] + '].[' +
[tabs].[table_name] +
'] (NOLOCK) WHERE [' +
[cols].[column_name] +
'] LIKE ''''' + @valuetosearchfor + '''''' +
'''') as 'sql'
FROM information_schema.columns cols
INNER JOIN information_schema.tables tabs
ON cols.TABLE_CATALOG = tabs.TABLE_CATALOG
AND cols.TABLE_SCHEMA = tabs.TABLE_SCHEMA
AND cols.TABLE_NAME = tabs.TABLE_NAME
WHERE cols.data_type IN ('char', 'varchar', 'nchar', 'nvarchar', 'text', 'ntext')
AND tabs.table_schema = @objectOwner
AND tabs.TABLE_TYPE = 'BASE TABLE'
ORDER BY tabs.table_catalog, tabs.table_name, cols.ordinal_position
DECLARE @count int
SET @count = (SELECT MAX(id) FROM @potentialcolumns)
PRINT 'Found ' + CAST(@count as varchar) + ' potential columns.'
PRINT 'Beginning scan...'
PRINT ''
PRINT 'These columns contain the values being searched for...'
PRINT ''
DECLARE @iterator int, @sql varchar(4000)
SET @iterator = 1
WHILE @iterator <= (SELECT MAX(id) FROM @potentialcolumns)
BEGIN
SET @sql = (SELECT [sql] FROM @potentialcolumns where [id] = @iterator)
IF (@sql IS NOT NULL) and (RTRIM(LTRIM(@sql)) <> '')
BEGIN
--SELECT @sql --use when checking sql output
EXEC (@sql)
END
SET @iterator = @iterator + 1
END
PRINT ''
PRINT 'Scan completed'
I've been there. It works like a charm until you start to consider scalability and advanced search functionality, like searching over multiple columns while giving each one a different weight.
For example, the only way to search over Title and Summary columns together is to have a computed column with SearchColumn = CONCAT(Title, Summary) and an index over SearchColumn. Weighting? SearchColumn = CONCAT(CONCAT(Title, Title), Summary), something like that. ;) Filtering? Forget about it.
"How hard is it" is a tough question to answer. For example, someone who's already done it 10 times will probably reckon it's a snap. All I can really say is that you're likely to find it a lot easier if you use something like NLucene rather than rolling your own.