I have a table, shown below, that was created using the following code:
SELECT Orders.ID, Orders.CHECKIN_DT_TM, Orders.CATALOG_TYPE,
Orders.ORDER_STATUS, Orders.ORDERED_DT_TM, Orders.COMPLETED_DT_TM,
Min(DateDiff("n",Orders.ORDERED_DT_TM,Orders.COMPLETED_DT_TM)) AS
Time_to_complete
FROM Orders
GROUP BY Orders.ORDER_ID, Orders.ID,
Orders.CHECKIN_DT_TM, Orders.CATALOG_TYPE, Orders.ORDER_STATUS,
Orders.ORDERED_DT_TM, Orders.COMPLETED_DT_TM
HAVING (((Orders.CATALOG_TYPE)="radiology"));
ID Time_to_complete ... .....
1 5
1 7
1 8
2 23
2 6
3 7
4 16
4 14
I'd like to extend this code to select the smallest Time_to_complete value per subject ID, leaving the desired table:
ID Time_to_complete ... .....
1 5
2 6
3 7
4 14
I'm using Access and prefer to continue using Access to finish this code but I do have the option to use SQL Server if this is not possible in Access. Thanks!
I suspect you need a correlated subquery:
SELECT O.*, DateDiff("n", O.ORDERED_DT_TM, O.COMPLETED_DT_TM) AS Time_to_complete
FROM Orders O
WHERE DateDiff("n", O.ORDERED_DT_TM, O.COMPLETED_DT_TM) = (SELECT Min(DateDiff("n", O1.ORDERED_DT_TM, O1.COMPLETED_DT_TM))
FROM Orders O1
WHERE O1.ORDER_ID = O.ORDER_ID AND . . .
);
EDIT: If you want unique records then you can do this instead:
SELECT O.*, DateDiff("n", O.ORDERED_DT_TM, O.COMPLETED_DT_TM) AS Time_to_complete
FROM Orders O
WHERE o.pk = (SELECT TOP (1) o1.pk
FROM Orders O1
WHERE O1.ORDER_ID = O.ORDER_ID AND . . .
ORDER BY DateDiff("n", O1.ORDERED_DT_TM, O1.COMPLETED_DT_TM) ASC
);
pk is your identity column, the one that uniquely identifies an entry in the Orders table, so change it accordingly.
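For anyone who wants to experiment outside Access, here is a minimal runnable sketch of the correlated-subquery technique using Python's sqlite3, with a simplified Orders table (just ID and a precomputed Time_to_complete, since SQLite has no DateDiff; names are taken from the sample data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders (ID INTEGER, Time_to_complete INTEGER);
INSERT INTO Orders VALUES (1,5),(1,7),(1,8),(2,23),(2,6),(3,7),(4,16),(4,14);
""")

# Correlated subquery: keep only rows whose value equals the per-ID minimum
rows = conn.execute("""
SELECT O.ID, O.Time_to_complete
FROM Orders O
WHERE O.Time_to_complete = (SELECT MIN(O1.Time_to_complete)
                            FROM Orders O1
                            WHERE O1.ID = O.ID)
ORDER BY O.ID
""").fetchall()
print(rows)  # [(1, 5), (2, 6), (3, 7), (4, 14)]
```

The same shape works in Access and SQL Server; only the DateDiff computation differs.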
Have a look at this:
DECLARE @myTable AS TABLE (ID INT, Time_to_complete INT);
INSERT INTO @myTable
VALUES (1, 5)
, (1, 7)
, (1, 8)
, (2, 23)
, (2, 6)
, (3, 7)
, (4, 16)
, (4, 14);
WITH cte AS
(SELECT *
, ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Time_to_complete) AS RN
FROM @myTable)
SELECT cte.ID
, cte.Time_to_complete
FROM cte
WHERE RN = 1;
Results :
ID Time_to_complete
----------- ----------------
1 5
2 6
3 7
4 14
It uses row numbers over groups, then selects the first row for each group. You should be able to adjust your code to use this technique; if in doubt, wrap your entire query in a CTE first, then apply the technique here.
It's worth becoming familiar with this process, as it gets used in a lot of places - especially around de-duping data.
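The ROW_NUMBER approach ports directly to any engine with window functions; here is a minimal sketch using Python's sqlite3 (SQLite 3.25+ is assumed for window-function support) with the same sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE myTable (ID INTEGER, Time_to_complete INTEGER);
INSERT INTO myTable VALUES (1,5),(1,7),(1,8),(2,23),(2,6),(3,7),(4,16),(4,14);
""")

# Number rows within each ID group by ascending Time_to_complete,
# then keep only the first row of each group.
rows = conn.execute("""
WITH cte AS (
    SELECT ID, Time_to_complete,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Time_to_complete) AS RN
    FROM myTable
)
SELECT ID, Time_to_complete FROM cte WHERE RN = 1 ORDER BY ID
""").fetchall()
print(rows)  # [(1, 5), (2, 6), (3, 7), (4, 14)]
```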
Try this:
DECLARE @myTable AS TABLE (ID INT, Time_to_complete INT);
INSERT INTO @myTable
VALUES (1, 5)
, (1, 7)
, (1, 8)
, (2, 23)
, (2, 6)
, (3, 7)
, (4, 16)
, (4, 14);
SELECT O.ID, O.Time_to_complete
FROM @myTable O
WHERE O.Time_to_complete = (SELECT MIN(m.Time_to_complete) FROM @myTable m
                            WHERE O.ID = m.ID
);
Result :
ID Time_to_complete
1 5
2 6
3 7
4 14
My table structure is like this:
select * from TimeTable;
userid | 1am|2am|3am| 4am| 5am
1002 | 1 |1 |1 | 1 | 1
1003 | 1 |1 |1 | 1 | 1
1004 | 1 |1 |1 | 1 | 1
1005 | 1 |1 |1 | 1 | 1
I want to select users that have the value 1 in a specific time column.
I have used the following query but it is throwing this error:
com.microsoft.sqlserver.jdbc.SQLServerException: Conversion failed
when converting the varchar value '[ 3PM]' to data type int.
select * from UserDetail u,TimeTable t
where u.userid=t.userid
and CONCAT(SUBSTRING(CONVERT(VARCHAR, getdate(), 100), 13, 2) ,'',RIGHT(CONVERT(VARCHAR, GETDATE(), 100),2)) = 1
When I use a hardcoded column name, as below, it works fine; I want to select the column name dynamically.
select * from UserDetail u,TimeTable t
where u.userid = t.userid
and [3AM] = 1
Option 1
Use a long list of short-circuited WHERE clauses, like below. This seems fine given the table design and is quite easy to understand.
DECLARE @currentHour int = DATEPART(HOUR, getdate())
SELECT * FROM TimeTable
WHERE
(@currentHour = 1 AND [1am] = 1) OR
(@currentHour = 2 AND [2am] = 1) OR
(@currentHour = 3 AND [3am] = 1) OR
(@currentHour = 4 AND [4am] = 1) OR
(@currentHour = 5 AND [5am] = 1) -- Etc
Option 2
Use UNPIVOT or VALUES to rotate the hour-based columns into rows. As part of the rotation you can translate each column into a number indicating the hour, which you can then compare with the current time's hour component.
Option 3
Use dynamic SQL, which might or might not be OK for your environment or usage.
Below is the UNPIVOT approach (Option 2), which is a bit more complex to understand.
create table Test (id int, [1am] int, [2am] int, [3am] int, [4am] int)
insert Test values
(1, 1, 2, 3, 4)
, (2, 11, 12, 13, 14)
, (3, 1, 21, 23, 24)
, (4, 31, 32, 33, 34)
declare @time datetime = '2018-01-01 01:10:00' -- change this to getdate()
;WITH MatchingIds (id) AS
(
SELECT id
FROM
(SELECT
id,
[1am] AS [1], [2am] AS [2], [3am] AS [3], [4am] AS [4] --etc
FROM Test) AS Source
UNPIVOT
(val FOR hour IN
([1], [2], [3], [4]) --etc
) AS asRows
WHERE
hour = CONVERT(VARCHAR, DATEPART(HOUR, @time))
AND val = 1
)
SELECT * FROM MatchingIds
-- MatchingIds now contains the rows that match your criteria
-- This can be joined with other tables to generate your full result
Intermediate output from MatchingIds for the above example, with the time parameter set to around 1am:
| id |
|----|
| 1 |
| 3 |
http://sqlfiddle.com/#!18/85c9c/5
Try:
select * from UserDetail u,TimeTable t
where u.userid=t.userid
and
(select COLUMN_NAME
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'tblName'
AND TABLE_CATALOG = 'DBNAME'
AND COLUMN_NAME = CONCAT(SUBSTRING(CONVERT(VARCHAR, getdate(), 100), 13, 2) ,'',RIGHT(CONVERT(VARCHAR, GETDATE(), 100),2))) = 1
The expression CONCAT(SUBSTRING(CONVERT(VARCHAR, getdate(), 100), 13, 2), '', RIGHT(CONVERT(VARCHAR, GETDATE(), 100), 2)) produces a value of type varchar, not query syntax. What you are currently doing is similar, in SQL terms, to:
select * from UserDetail u,TimeTable t
where u.userid=t.userid
and '12PM' = 1
Even if the parser accepted your query, the condition would always be false, as '12PM' is not the same as 12PM or [12PM]. You cannot dynamically modify a query from within the query itself to make it work.
Yes, you could use tricks like dynamic execution (build the entire query as a string and then execute it), but please don't. The real problem in your code is a rather bad table design. Consider redesigning your TimeTable table so that the time of day is a single column and the actual values are data in that column. You'll save yourself a lot of headache later.
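To illustrate the suggested redesign, here is a minimal sketch using Python's sqlite3 with hypothetical column names (hourOfDay, available): once each hour is a row rather than a column, the lookup becomes an ordinary parameterized WHERE clause with no dynamic SQL at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TimeTable (userid INTEGER, hourOfDay INTEGER, available INTEGER);
-- one row per user per hour instead of one column per hour
INSERT INTO TimeTable VALUES (1002, 3, 1), (1003, 3, 1), (1004, 3, 0);
""")

current_hour = 3  # in production this would come from the current time
rows = conn.execute(
    "SELECT userid FROM TimeTable WHERE hourOfDay = ? AND available = 1 ORDER BY userid",
    (current_hour,),
).fetchall()
print(rows)  # [(1002,), (1003,)]
```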
I have something like this:
Transaction Customer
1 Cust1
2 Cust2
3 Cust3
4 Cust4
TransID Code
2 A
2 B
2 D
3 A
4 B
4 C
If I want to be able to do something like "IF Customer 'Cust1' Has code 'A'", how should I best build a view? I want to end up being able to query something like "Select Customer from View where Code in [Some list of codes]" OR "Select Cust1 from View Having Codes in [Some list of codes]"
While I can do something like
Customer | Codes
Cust1 | A, B, D
Etc.
SELECT Transaction FROM Tbl WHERE Codes LIKE '%A%'
This seems to me to be an impractical way to do it.
Here's how I'd do it
;with xact_cust (xact, cust) as
(
select 1, 'cust1' union all
select 2, 'cust2' union all
select 3, 'cust3' union all
select 4, 'cust4'
), xact_code (xact, code) as
(
select 2, 'A' union all
select 2, 'B' union all
select 2, 'D' union all
select 3, 'A' union all
select 4, 'B' union all
select 4, 'C'
)
select Cust, Code
from xact_cust cust
inner join xact_code code
on cust.xact = code.xact
where exists (select 1
from xact_code i
where i.xact = code.xact
and i.code = 'A')
If you NEED the codes serialized into a delimited list, take a look at this article: What this query does to create comma delimited list SQL Server?
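For illustration, here is a minimal sketch of that serialized, delimited list using Python's sqlite3, where SQLite's group_concat stands in for the FOR XML PATH trick (note that SQLite does not guarantee the concatenation order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE xact_code (xact INTEGER, code TEXT);
INSERT INTO xact_code VALUES (2,'A'),(2,'B'),(2,'D'),(3,'A'),(4,'B'),(4,'C');
""")

# One row per transaction, with its codes serialized into a comma-separated list
rows = conn.execute("""
SELECT xact, group_concat(code, ', ') AS codes
FROM xact_code
GROUP BY xact
ORDER BY xact
""").fetchall()
print(rows)
```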
Here's another option...
IF OBJECT_ID('tempdb..#CustomerTransaction', 'U') IS NOT NULL
DROP TABLE #CustomerTransaction;
CREATE TABLE #CustomerTransaction (
TransactionID INT NOT NULL PRIMARY KEY,
Customer CHAR(5) NOT NULL
);
INSERT #CustomerTransaction (TransactionID, Customer) VALUES
(1, 'Cust1'), (2, 'Cust2'), (3, 'Cust3'),
(4, 'Cust4'), (5, 'Cust5');
IF OBJECT_ID('tempdb..#TransactionCode', 'U') IS NOT NULL
DROP TABLE #TransactionCode;
CREATE TABLE #TransactionCode (
TransactionID INT NOT NULL,
Code CHAR(1) NOT NULL
);
INSERT #TransactionCode (TransactionID, Code) VALUES
(2, 'A'), (2, 'B'), (2, 'D'), (3, 'A'), (4, 'B'), (4, 'C');
--SELECT * FROM #CustomerTransaction ct;
--SELECT * FROM #TransactionCode tc;
--=============================================================
SELECT
ct.TransactionID,
ct.Customer,
CodeList = STUFF(tcx.CodeList, 1, 1, '')
FROM
#CustomerTransaction ct
CROSS APPLY (
SELECT
', ' + tc.Code
FROM
#TransactionCode tc
WHERE
ct.TransactionID = tc.TransactionID
ORDER BY
tc.Code ASC
FOR XML PATH('')
) tcx (CodeList);
Results...
TransactionID Customer CodeList
------------- -------- -----------
1 Cust1 NULL
2 Cust2 A, B, D
3 Cust3 A
4 Cust4 B, C
5 Cust5 NULL
I need to count the total number of records in a table, 'a', where a field in 'a', say 'type', has a certain value, 'v'. From all the records where a.type = 'v', I need to group these twice: first by field 'b_id', and again by month. The date range for these records must be restricted to the last year from the current date.
I already have the totals for the 'b_id' field with ISNULL() as follows:
SELECT ISNULL((
SELECT COUNT(*)
FROM a
WHERE a.type = 'v'
AND b.b_id = a.b_id
), 0) AS b_totals
The data lies in table a, and is joined on table b. 'b_id' is the primary key for table b, and is found in table a (though it is not part of a's key). The key for a is irrelevant to the data I need to pull, but can be stated as "a_id" for simplicity.
How do I:
Restrict these records to the past twelve months from the current date.
Take the total for any and all values of b.id, and categorize them by month. This is in addition to the totals of b.id by year. The date is stored in field "date_occurred" in table 'a' as a standard date/time type.
The schema at the end should look something like this, assuming that the current month is October and the year is 2016:
b.id | b_totals | Nov. 2015 | Dec. 2015 | Jan. 2016 .... Oct. 2016
__________________________________________________________________
ID_1 1 0 0 0 1
ID_2 3 2 0 1 0
ID_3 5 1 1 3 0
EDIT: I should probably clarify that I'm counting the records in table 'a' where field 'type' has a certain value 'v'. From these records, I need to group them by building and then by month/date. I updated my ISNULL query to make this clearer, as well as the keys for a and b. "date_occurred" should be in table a, not b; that was a mistake/typo on my end.
If it helps, the best way I can describe the data from a high level without giving away any sensitive data:
'b' is a table of locations, and 'b.b_id' is the ID for each location
'a' is a table of events. The location for these events is found in 'a.b_id' and joined on 'b.b_id'. The date that each event occurred is in 'a.date_occurred'
I need to restrict the type of events to a certain value. In this case, the type is field 'type.' This is the "where" clause in my ISNULL SQL query that gets the totals by location.
From all the events of this particular type, I need to count how many times this event occurred in the past year for each location. Once I have these totals from the past year, I need to count them by month.
Table structure:
The table structure of a is something like
a.a_id | a.b_id | a.type | a.date_occurred
Again, I do not need the ID's from a: just a series of counts based on type, b_id, and date_occurred.
EDIT 2: I restricted the totals of b_id to the past year with the following query:
SELECT ISNULL((
SELECT COUNT(*)
FROM a
WHERE a.type = 'v'
AND b.b_id = a.b_id
AND a.date_occurred BETWEEN DATEADD(yyyy, -1, GETDATE()) AND GETDATE()
), 0) AS b_totals
Now I need to do this with a PIVOT and the months.
In an attempt to make this sufficiently detailed, given the absolute minimum of detail provided in the question, I have created these 2 example tables with some data:
CREATE TABLE Bexample
([ID] int)
;
INSERT INTO Bexample
([ID])
VALUES
(1),
(2),
(3),
(4),
(5),
(6),
(7),
(8),
(9)
;
CREATE TABLE Aexample
([ID] int, [B_PK] int, [SOME_DT] datetime)
;
INSERT INTO Aexample
([ID], [B_PK], [SOME_DT])
VALUES
(1, 1, '2015-01-01 00:00:00'),
(2, 2, '2015-02-01 00:00:00'),
(3, 3, '2015-03-01 00:00:00'),
(4, 4, '2015-04-01 00:00:00'),
(5, 5, '2015-05-01 00:00:00'),
(6, 6, '2015-06-01 00:00:00'),
(7, 7, '2015-07-01 00:00:00'),
(8, 8, '2015-08-01 00:00:00'),
(9, 9, '2015-09-01 00:00:00'),
(10, 1, '2015-10-01 00:00:00'),
(11, 2, '2015-11-01 00:00:00'),
(12, 3, '2015-12-01 00:00:00'),
(13, 1, '2016-01-01 00:00:00'),
(14, 2, '2016-02-01 00:00:00'),
(15, 3, '2016-03-01 00:00:00'),
(16, 4, '2016-04-01 00:00:00'),
(17, 5, '2016-05-01 00:00:00'),
(18, 6, '2016-06-01 00:00:00'),
(19, 7, '2016-07-01 00:00:00'),
(20, 8, '2016-08-01 00:00:00'),
(21, 9, '2016-09-01 00:00:00'),
(22, 1, '2016-10-01 00:00:00'),
(23, 2, '2016-11-01 00:00:00'),
(24, 3, '2016-12-01 00:00:00')
;
Now, using those tables and data I can generate a result table like this:
id Nov 2015 Dec 2015 Jan 2016 Feb 2016 Mar 2016 Apr 2016 May 2016 Jun 2016 Jul 2016 Aug 2016 Sep 2016 Oct 2016
1 0 0 1 0 0 0 0 0 0 0 0 1
2 1 0 0 1 0 0 0 0 0 0 0 0
3 0 1 0 0 1 0 0 0 0 0 0 0
4 0 0 0 0 0 1 0 0 0 0 0 0
5 0 0 0 0 0 0 1 0 0 0 0 0
6 0 0 0 0 0 0 0 1 0 0 0 0
7 0 0 0 0 0 0 0 0 1 0 0 0
8 0 0 0 0 0 0 0 0 0 1 0 0
9 0 0 0 0 0 0 0 0 0 0 1 0
Using a query that needs both a common table expression (CTE) and dynamic SQL to produce that result:
"Dynamic SQL" is a query that generates SQL which is then executed. This is needed because the column names change month to month. So, for the dynamic SQL we declare 2 variables that will hold the generated SQL: one stores the column names, which get used in 2 places, and the other holds the completed query. Note that instead of executing this you may display the generated SQL as you develop your solution (see the comments near EXECUTE at the end of the query).
In addition to the example tables and data, we also have a "time series" of 12 months to consider. This is "dynamic" as it is calculated from today's date and I have assumed that if today is any day within November 2016, that "the last 12 months" starts at 1 Nov 2015, and concludes at 31 Oct 2016 (i.e. 12 full months, no partial months).
The core of calculating this is here:
DATEADD(month,-12, DATEADD(month, DATEDIFF(month,0,GETDATE()), 0) )
which first locates the first day of the current month with DATEADD(month, DATEDIFF(month,0,GETDATE()), 0), then deducts a further 12 months from that date. With that as a start date, a "recursive CTE" is used to generate 12 rows, one for each of the past 12 full months.
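The same "first day of the month, 12 months back" arithmetic can be sketched in plain Python as a stand-in for the recursive CTE (not a translation of the exact T-SQL): count months from an epoch, step back 12, and take the first day of each month.

```python
from datetime import date

def last_12_month_starts(today: date) -> list:
    """First day of each of the 12 full months preceding the current month."""
    # Month index counted from year 0, analogous to DATEDIFF(month, 0, GETDATE())
    current = today.year * 12 + (today.month - 1)
    starts = []
    for m in range(current - 12, current):
        starts.append(date(m // 12, m % 12 + 1, 1))
    return starts

months = last_12_month_starts(date(2016, 11, 15))
print(months[0], months[-1])  # 2015-11-01 2016-10-01
```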
The purpose of these 12 rows is to ensure that when we consider the actual table data there will be no gaps in the 12 columns. This is achieved by using the generated 12 rows as the "from table" in our query, and the "A" table is LEFT JOINED based on the year/month of a date column [some_dt] to the 12 monthly rows.
So, we generate 12 rows and join the sample data to them, which is used to generate the SQL necessary for a "PIVOT" of the data. Here it is useful to actually see an example of that generated SQL, which looks like this:
SELECT id, [Nov 2015],[Dec 2015],[Jan 2016],[Feb 2016],[Mar 2016],[Apr 2016],[May 2016],[Jun 2016],[Jul 2016],[Aug 2016],[Sep 2016],[Oct 2016] FROM
(
select
format([mnth],'MMM yyyy') colname
, b.id
, a.b_pk
from #mylist
cross join bexample b
left join aexample a on #mylist.mnth = DATEADD(month, DATEDIFF(month,0,a.some_dt), 0)
and b.id = a.b_pk
) sourcedata
pivot
(
count([b_pk])
FOR [colname] IN ([Nov 2015],[Dec 2015],[Jan 2016],[Feb 2016],[Mar 2016],[Apr 2016],[May 2016],[Jun 2016],[Jul 2016],[Aug 2016],[Sep 2016],[Oct 2016])
) p
So, hopefully you can see in that generated SQL code that the dynamically created 12 rows become 12 columns. Note that because we are executing "dynamic sql" the 12 rows we generated as a CTE need to be stored as a "temporary table" (#mylist).
The query to generate AND execute that SQL is this.
DECLARE @cols AS VARCHAR(MAX)
DECLARE @query AS VARCHAR(MAX)
;with mylist as (
select DATEADD(month,-12, DATEADD(month, DATEDIFF(month,0,GETDATE()), 0) ) as [mnth]
union all
select DATEADD(month,1,[mnth])
from mylist
where [mnth] < DATEADD(month,-1, DATEADD(month, DATEDIFF(month,0,GETDATE()), 0) )
)
select [mnth]
into #mylist
from mylist
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(format([mnth],'MMM yyyy'))
FROM #mylist
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
SET @query = 'SELECT id, ' + @cols + ' FROM
(
select
format([mnth],''MMM yyyy'') colname
, b.id
, a.b_pk
from #mylist
cross join bexample b
left join aexample a on #mylist.mnth = DATEADD(month, DATEDIFF(month,0,a.some_dt), 0)
and b.id = a.b_pk
) sourcedata
pivot
(
count([b_pk])
FOR [colname] IN (' + @cols + ')
) p '
--select @query -- use select to inspect the generated sql
execute(@query) -- once satisfied that sql is OK, use execute
drop table #mylist
You can see this working at: http://rextester.com/VVGZ39193
I want to share another attempt at explaining the issues faced by your requirements.
To follow this you MUST understand the sample data. I have 2 tables, @Events (a) and @Locations (b). The column names should be easy to follow, I hope.
declare @Events table
( [id] int IDENTITY(1007,2)
, [b_id] int
, [date_occurred] datetime
, [type] varchar(20)
)
;
INSERT INTO @Events
([b_id], [date_occurred],[type])
VALUES
(1, '2015-01-11 00:00:00','v'),
(2, '2015-02-21 00:00:00','v'),
(3, '2015-03-11 00:00:00','v'),
(4, '2015-04-21 00:00:00','v'),
(5, '2015-05-11 00:00:00','v'),
(6, '2015-06-21 00:00:00','v'),
(1, '2015-07-11 00:00:00','v'),
(2, '2015-08-11 00:00:00','v'),
(3, '2015-09-11 00:00:00','v'),
(5, '2015-10-11 00:00:00','v'),
(5, '2015-11-21 00:00:00','v'),
(6, '2015-12-21 00:00:00','v'),
(1, '2016-01-21 00:00:00','v'),
(2, '2016-02-21 00:00:00','v'),
(3, '2016-03-21 00:00:00','v'),
(4, '2016-04-21 00:00:00','v'),
(5, '2016-05-21 00:00:00','v'),
(6, '2016-06-21 00:00:00','v'),
(1, '2016-07-11 00:00:00','v'),
(2, '2016-08-21 00:00:00','v'),
(3, '2016-09-21 00:00:00','v'),
(4, '2016-10-11 00:00:00','v'),
(5, '2016-11-11 00:00:00','v'),
(6, '2016-12-11 00:00:00','v');
declare @Locations table
([id] int, [name] varchar(13))
;
INSERT INTO @Locations
([id], [name])
VALUES
(1, 'Atlantic City'),
(2, 'Boston'),
(3, 'Chicago'),
(4, 'Denver'),
(5, 'Edgbaston'),
(6, 'Melbourne')
;
OK. So with that data we can easily create a set of counts using this query:
select
b.id
, b.name
, format(a.date_occurred,'yyyy MMM') mnth
, count(*)
FROM @Events a
inner join @Locations b ON b.id = a.b_id
WHERE a.type = 'v'
and a.date_occurred >= DATEADD(month,-12, DATEADD(month, DATEDIFF(month,0,GETDATE()), 0) )
group by
b.id
, b.name
, format(a.date_occurred,'yyyy MMM')
And that output looks like this:
id name mnth
-- ------------- -------- -
1 Atlantic City 2016 Jan 1
1 Atlantic City 2016 Jul 1
2 Boston 2016 Aug 1
2 Boston 2016 Feb 1
3 Chicago 2016 Mar 1
3 Chicago 2016 Sep 1
4 Denver 2016 Apr 1
4 Denver 2016 Oct 1
5 Edgbaston 2015 Nov 1
5 Edgbaston 2016 May 1
5 Edgbaston 2016 Nov 1
6 Melbourne 2015 Dec 1
6 Melbourne 2016 Dec 1
6 Melbourne 2016 Jun 1
So, with a "simple" query that is easy to pass parameters into, the output is BY ROWS,
and the column headings are FIXED.
Now do you understand why transposing those rows into columns, with VARIABLE COLUMN HEADINGS, forces the use of dynamic SQL?
Your requirements, no matter how many words you throw at them, lead to complexity in the SQL.
You can run the above data/query here: https://data.stackexchange.com/stackoverflow/query/574718/count-and-group-records-by-month-and-field-value-from-the-last-year
I am working on a Data Warehouse project and the client provides daily sales data. On-hand quantities are provided on most lines but are sometimes missing. I need help on how to fill in those missing values based on prior on-hand and sales information.
Here's a sample data:
Line# Store Item OnHand SalesUnits DateKey
-----------------------------------------------
1 001 A 100 20 1
2 001 A 80 10 2
3 001 A null 30 3 --[OH updated with 70 (80-10)]
4 001 A null 5 4 --[OH updated with 40 (70-30)]
5 001 A 150 10 5 --[OH untouched]
6 001 B null 4 1 --[OH untouched - new item]
7 001 B 80 12 2
8 001 B null 10 3 --[OH updated with 68 (80-12)]
Lines 1 and 2 are not to be updated because OnHand quantities exist.
Lines 3 and 4 are to be updated based on their preceding rows.
Line 5 is to be left untouched because OnHand is provided.
Line 6 is to be left untouched because it is the first row for Item B
Is there a way I can do this in a set operation? I know I can do it easily using a fast_forward cursor but it will take a long time (15M+ rows).
Thanks for your help!
Test data:
declare @t table(
Line# int, Store char(3), Item char, OnHand int, SalesUnits int, DateKey int
)
insert @t values
(1, '001', 'A', 100, 20, 1),
(2, '001', 'A', 80 , 10, 2),
(3, '001', 'A', null, 30, 3),
(4, '001', 'A', null, 5, 4),
(5, '001', 'A', 150, 10, 5),
(6, '001', 'B', null, 4, 1),
(7, '001', 'B', null, 4, 2),
(8, '001', 'B', 80, 12, 3),
(9, '001', 'B', null, 10, 4)
;with a as
(
select Line#, Store, Item, OnHand, SalesUnits, DateKey, 1 correctdata from @t where DateKey = 1
union all
select t.Line#, t.Store, t.Item, coalesce(t.OnHand, a.onhand - a.salesunits), t.SalesUnits, t.DateKey, t.OnHand from @t t
join a on a.DateKey = t.datekey - 1 and a.item = t.item and a.store = t.store
)
update t
set OnHand = a.onhand
from @t t join a on a.line# = t.line#
where a.correctdata is null
Script to populate using cursor:
declare @datekey int, @store int, @item char, @Onhand int,
@calculatedonhand int, @salesunits int, @laststore int, @lastitem char
DECLARE sales_cursor
CURSOR FOR
SELECT datekey+1, store, item, OnHand -SalesUnits, salesunits
FROM @t sales
order by store, item, datekey
OPEN sales_cursor;
FETCH NEXT FROM sales_cursor
INTO @datekey, @store, @item, @Onhand, @salesunits
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @calculatedonhand = case when @laststore = @store and @lastitem = @item
then coalesce(@onhand, @calculatedonhand - @salesunits) else null end
,@laststore = @store, @lastitem = @item
UPDATE s
SET onhand=@calculatedonhand
FROM @t s
WHERE datekey = @datekey and @store = store and @item = item
and onhand is null and @calculatedonhand is not null
FETCH NEXT FROM sales_cursor
INTO @datekey, @store, @item, @Onhand, @salesunits
END
CLOSE sales_cursor;
DEALLOCATE sales_cursor;
I recommend you use the cursor version; I doubt you can get decent performance using the recursive query. I know people on here hate cursors, but when your table is that size, it can be the only solution.
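For comparison, the carry-forward logic itself is easy to state procedurally. Here is a minimal Python sketch over the sample rows from the question (not T-SQL, and ignoring the performance question) showing the intended "prior on-hand minus prior sales" fill per store/item:

```python
rows = [  # (line, store, item, on_hand, sales_units, date_key)
    (1, '001', 'A', 100, 20, 1),
    (2, '001', 'A', 80, 10, 2),
    (3, '001', 'A', None, 30, 3),
    (4, '001', 'A', None, 5, 4),
    (5, '001', 'A', 150, 10, 5),
    (6, '001', 'B', None, 4, 1),
    (7, '001', 'B', 80, 12, 2),
    (8, '001', 'B', None, 10, 3),
]

filled = []
last = {}  # (store, item) -> (on_hand, sales_units) from the previous row
for line, store, item, on_hand, sales, datekey in sorted(rows, key=lambda r: (r[1], r[2], r[5])):
    key = (store, item)
    if on_hand is None and key in last and last[key][0] is not None:
        on_hand = last[key][0] - last[key][1]  # prior OH minus prior sales
    filled.append((line, store, item, on_hand, sales, datekey))
    last[key] = (on_hand, sales)  # store the filled value so fills chain forward

for r in sorted(filled):
    print(r)
```

Line 3 becomes 70, line 4 chains off that to 40, line 8 becomes 68, and lines with a provided on-hand (or no prior row, like line 6) are untouched, matching the expected results in the question.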