Printing the current value and previous value between the date range - sql-server

I have sample data like this:
ID  DATE        TIME  STATUS
------------------------------------
A   01-01-2000  0900  ACTIVE
A   05-02-2000  1000  INACTIVE
A   01-07-2000  1300  ACTIVE
B   01-05-2005  1000  ACTIVE
B   01-08-2007  1050  ACTIVE
C   01-01-2010  0900  ACTIVE
C   01-07-2010  1900  INACTIVE
From the above data set, if we focus only on ID = 'A', we see that A was initially active, became inactive on 05-02-2000, and stayed inactive until 01-07-2000.
In other words, A was inactive from 05-Feb-2000 to 01-Jul-2000.
My questions are:
If I execute a query with (ID=A, Date=01-04-2000) it should give me
A 05-02-2000 1000 INACTIVE
because that date is not present in the data set, so the query should fall back to the previous row and print that.
Also, if my condition is (ID=A, Date=01-07-2000), it should print not only the row present in the table for that date, but also the previous row:
A 05-02-2000 1000 INACTIVE
A 01-07-2000 1300 ACTIVE
I would really appreciate it if anyone could help me solve this query. I am trying my best to solve it.
Thank you, everyone.
Any take on this?
Afaq

Something like the following should work:
SELECT ID, Date, Time, Status
from (select ID, Date, Time, Status,
             row_number() over (order by Date desc) Ranking
      from MyTable
      where ID = @SearchId
        and Date <= @SearchDate) xx
where Ranking < 3
order by Date, Time
This will return at most two rows. It's not clear whether Date and Time are date/time-typed columns, or whether you really are using reserved words as column names, so you'll have to fuss with that. (I left Time out of the ranking, but you could easily add it to the various orderings and filterings.)
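If the column names really do collide with reserved words, bracket-quoting them is usually enough; a minimal sketch (using the sample columns and the assumed @SearchId/@SearchDate parameters):

-- Bracket reserved words used as column names.
SELECT [ID], [Date], [Time], [Status]
FROM MyTable
WHERE [ID] = @SearchId
  AND [Date] <= @SearchDate
ORDER BY [Date] DESC, [Time] DESC;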
Given the revised criteria, it gets a bit trickier, as the inclusion or exclusion of a row depends upon the value returned in a different row. Here, the “second” row, if there are two or more rows, is included only if the “first” row equals a particular value. The standard way to do this is to query the data to get the max value, then query it again while referencing the result of the first set.
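A rough sketch of that two-step approach (assuming @SearchId/@SearchDate parameters and at most one row per date) might look like this:

-- Step 1: find the most recent date at or before the search date.
DECLARE @LatestDate DATE;

SELECT @LatestDate = MAX(Date)
FROM MyTable
WHERE ID = @SearchId
  AND Date <= @SearchDate;

-- Step 2: return that row, plus the immediately preceding row,
-- but only when the latest date is an exact hit on the search date.
SELECT TOP 2 ID, Date, Time, Status
FROM MyTable
WHERE ID = @SearchId
  AND Date <= @LatestDate
  AND (Date = @LatestDate OR @LatestDate = @SearchDate)
ORDER BY Date DESC, Time DESC;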
However, you can do a lot of screwy things with row_number. Work on this:
SELECT ID, Date, Time, Status
from (select ID, Date, Time, Status
             ,row_number() over (partition by case when Date = @SearchDate then 0 else 1 end
                                 order by case when Date = @SearchDate then 0 else 1 end
                                          ,Date desc) Ranking
      from MyTable
      where ID = @SearchId
        and Date <= @SearchDate) xx
where Ranking = 1
order by Date, Time
You'll have to resolve the date/time issue, since this only works against dates.

Basically you need to pull a row if, for the specified date, it is:
1) the last record, or
2) the last inactive record.
And the two conditions may match the same row as well as two distinct rows.
Here's how this logic could be implemented in SQL Server 2005+:
WITH ranked AS (
    SELECT
        ID,
        Date,
        Time,
        Status,
        RankOverall  = ROW_NUMBER() OVER (ORDER BY Date DESC),
        RankByStatus = ROW_NUMBER() OVER (PARTITION BY Status ORDER BY Date DESC)
    FROM Activity
    WHERE ID = @ID
      AND Date <= @Date
)
SELECT
    ID,
    Date,
    Time,
    Status
FROM ranked
WHERE RankOverall = 1
   OR (Status = 'INACTIVE' AND RankByStatus = 1)

Related

Calculating Days Between Dates in Separate Rows For Same UnitID

I am trying to calculate the time a commercial real estate space sits vacant. I have move-in & move-out dates for each tenant that has occupied that unit. It is easy to calculate the occupied time of each tenant as that data is within the same row. However, I want to calculate the vacant time: the time between move-out of the previous tenant and move-in of the next tenant. These dates appear in separate rows.
Here is a sample of what I have currently:
SELECT
uni_vch_UnitNo AS UnitNumber,
uty_vch_Code AS UnitCode,
uty_int_Id AS UnitID, tul_int_FacilityId AS FacilityID,
tul_dtm_MoveInDate AS Move_In_Date,
tul_dtm_MoveOutDate AS Move_Out_Date,
DATEDIFF(day, tul_dtm_MoveInDate, tul_dtm_MoveOutDate) AS Occupancy_Days
FROM TenantUnitLeases
JOIN units
ON tul_int_UnitId = uni_int_UnitId
JOIN UnitTypes
ON uni_int_UnitTypeId = uty_int_Id
WHERE
tul_int_UnitId = '26490'
ORDER BY tul_dtm_MoveInDate ASC
Is there a way to assign an id to each row in chronological, sequential order and find the difference between row 2 move-in date less row 1 move-out date and so on?
Thank you in advance for the help.
I can't really tell which tables provide which columns for your query. Please alias and dot-qualify them in the future.
If you're using SQL 2012 or later, you've got LEAD and LAG functions which do exactly what you want: bring a "leading" or "lagging" row into a current row. See if this works (hopefully it should at least get you started):
SELECT
uni_vch_UnitNo AS UnitNumber,
uty_vch_Code AS UnitCode,
uty_int_Id AS UnitID, tul_int_FacilityId AS FacilityID,
tul_dtm_MoveInDate AS Move_In_Date,
tul_dtm_MoveOutDate AS Move_Out_Date,
DATEDIFF(day, tul_dtm_MoveInDate, tul_dtm_MoveOutDate) AS Occupancy_Days
, LAG(tul_dtm_MoveOutDate) over (partition by uni_vch_UnitNo order by tul_dtm_MoveOutDate) as Previous_Move_Out_Date
, DATEDIFF(day,LAG(tul_dtm_MoveOutDate) over (partition by uni_vch_UnitNo order by tul_dtm_MoveOutDate),tul_dtm_MoveInDate) as Days_Vacant
FROM TenantUnitLeases
JOIN units
ON tul_int_UnitId = uni_int_UnitId
JOIN UnitTypes
ON uni_int_UnitTypeId = uty_int_Id
WHERE
tul_int_UnitId = '26490'
ORDER BY tul_dtm_MoveInDate ASC
Just comparing a value from the current row with a value in the previous row is functionality provided by the lag() function.
Try this in your query:
select...
tul_dtm_MoveInDate AS Move_In_Date,
tul_dtm_MoveOutDate AS Move_Out_Date,
DateDiff(day, Lag(tul_dtm_MoveOutDate,1) over(partition by uty_vch_Code, tul_int_FacilityId order by tul_dtm_MoveInDate), tul_dtm_MoveInDate) DaysVacant,
...
This needs a window function or a correlated subquery. The goal is to provide the previous move-out date for each row, which is in turn a function of that row. The term 'window' in this context means applying an aggregate function over a smaller range than the whole set.
If you had a function called GetPreviousMoveOutDate, its parameters would be the key to filter on and the range to search within that filter. So we would pass the UnitID as the key and the MoveInDate for this row, and the function should return the most recent MoveOutDate for the same unit that falls before the passed-in date. By taking the max date before this one, we ensure we get only the previous occupancy, if it exists.
To use a subquery in ANSI SQL you just add the select as a column. This works on MS SQL as well as other DB platforms; however, it requires aliasing the table names so they can be referenced in the query more than once. I've updated your sample SQL with aliases using the AS syntax, although it looks redundant alongside your table naming convention. I added a uni_dtm_UnitFirstAvailableDate column to your units table to handle the first vacancy, but this could be a default:
SELECT
    uni.uni_vch_UnitNo AS UnitNumber,
    uty.uty_vch_Code AS UnitCode,
    uty.uty_int_Id AS UnitID,
    tul.tul_int_FacilityId AS FacilityID,
    tul.tul_dtm_MoveInDate AS Move_In_Date,
    tul.tul_dtm_MoveOutDate AS Move_Out_Date,
    DATEDIFF(day, tul.tul_dtm_MoveInDate, tul.tul_dtm_MoveOutDate) AS Occupancy_Days,
    -- select the date:
    (SELECT MAX(prev_tul.tul_dtm_MoveOutDate)
     FROM TenantUnitLeases AS prev_tul
     WHERE prev_tul.tul_int_UnitId = tul.tul_int_UnitId
       AND prev_tul.tul_dtm_MoveOutDate < tul.tul_dtm_MoveInDate
       AND prev_tul.tul_dtm_MoveOutDate IS NOT NULL
    ) AS previous_moveout,
    -- use the date in a function:
    DATEDIFF(day,
             ISNULL(
                 (SELECT MAX(prev_tul.tul_dtm_MoveOutDate)
                  FROM TenantUnitLeases AS prev_tul
                  WHERE prev_tul.tul_int_UnitId = tul.tul_int_UnitId
                    AND prev_tul.tul_dtm_MoveOutDate < tul.tul_dtm_MoveInDate
                    AND prev_tul.tul_dtm_MoveOutDate IS NOT NULL
                 ), uni.uni_dtm_UnitFirstAvailableDate), -- handle first occupancy
             tul.tul_dtm_MoveInDate
    ) AS Vacancy_Days
FROM TenantUnitLeases AS tul
JOIN units AS uni
    ON tul.tul_int_UnitId = uni.uni_int_UnitId
JOIN UnitTypes AS uty
    ON uni.uni_int_UnitTypeId = uty.uty_int_Id
WHERE
    tul.tul_int_UnitId = '26490'
ORDER BY tul.tul_dtm_MoveInDate ASC

Creating sequential date ranges for items in a queue

I have a table 'item_queue' containing, items, groups and a sequence number.
Each item is unique and is held against a group with a number indicating the sequence. The count is a total for that item e.g.
group_id|item_id|sequence_order_number|count
--------------------------------------------
A |123 |1 |20
A |124 |2 |30
B |125 |1 |10
Given this information, I am trying to set up sequential start and end dates.
The start datetime of the first item for a group is the current time. For example, assume the start of item 123 is '2019-04-04 12:00:00.000'; then
the end datetime would be the start plus count minutes, so '2019-04-04 12:20:00.000'.
The start of item 124 would equal that end date, as it is the next in the sequence for that group. Its end is then calculated the same way, giving '2019-04-04 12:50:00.000'.
Item 125 would start the clock again at '2019-04-04 12:00:00.000', as it is in a different group.
I have attempted a few ways to do this, and I think the answer is a recursive CTE, but I can't wrap my head around making it work for one or multiple groups. My unsuccessful attempt for a single group:
;with cte as
(
select
group_id,
item_id,
count,
GETDATE() as start_datetime,
DATEADD(MINUTE, count, GETDATE()) as end_datetime,
iq.sequence_order_number
from item_queue iq
where iq.group_id = 'A'
union all
select
group_id,
item_id,
count,
cte.end_datetime,
DATEADD(MINUTE, count, cte.end_datetime) as end_datetime,
iq.sequence_order_number
from item_queue iq
inner join cte
on cte.group_id = iq.group_id
and cte.sequence_order_number > iq.sequence_order_number
where iq.group_id = 'A'
)
select * from cte
I suspect the answer may involve a row number window something like
ROW_NUMBER() OVER (Partition By iq.group_id Order By iq.sequence_order_number ASC)
But I have had trouble using it recursively.
I am using SQL Server 2012, without the ability to upgrade this database.
The minutes you want to add are practically a cumulative sum. The sum() over() window function is available in 2012 and performs exactly that. Try:
select
*,
isnull(sum([count]) over
(
partition by group_id
order by item_id asc
rows between unbounded PRECEDING and 1 PRECEDING
)
,0) as cum_count_start,
sum([count]) over ( partition by group_id order by item_id asc ) as cum_count_end
from item_queue
You already know how to use dateadd after this point.
What the individual window function clauses do:
partition by group_id : separate (partition) the calculations for each group_id value subset
order by item_id asc : define a virtual ordering of the rows over which the window range is applied
rows between ... : the actual window. For the start date, we want to consider all the rows from the start (thus unbounded preceding) up to the previous one (thus 1 preceding), since you don't want the start date to include the current row's [count]. Note that omitting this clause, as we did for cum_count_end, is equivalent to rows between unbounded preceding and current row.
The isnull(...,0) is needed because for the first row of each group_id you want to add 0 to the start date, but the window function sees no rows and returns NULL, so we need to change this to 0.
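To round this out, here is a sketch of the dateadd step applied to those running totals, assuming (as the question describes) that GETDATE() is the common start time for every group; it orders by sequence_order_number, though item_id gives the same result for the sample data:

select
    group_id,
    item_id,
    sequence_order_number,
    [count],
    -- start = common start time plus the minutes of all previous items in the group
    DATEADD(MINUTE,
            isnull(sum([count]) over (partition by group_id
                                      order by sequence_order_number asc
                                      rows between unbounded preceding and 1 preceding), 0),
            GETDATE()) as start_datetime,
    -- end = common start time plus the minutes of all items up to and including this one
    DATEADD(MINUTE,
            sum([count]) over (partition by group_id
                               order by sequence_order_number asc),
            GETDATE()) as end_datetime
from item_queue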

T-SQL - Get last as-at date SUM(Quantity) was not negative

I am trying to find a way to get the last date, by location and product, on which a sum was positive. The only way I can think to do it is with a cursor, and if that's the case I may as well just do it in code. Before I go down that route, I was hoping someone might have a better idea?
Table:
Product, Date, Location, Quantity
The scenario is: I find the quantity by location and product at a particular date; if it is negative I need to get the sum and date when the group was last positive.
select
Product,
Location,
SUM(Quantity) Qty,
SUM(Value) Value
from
ProductTransactions PT
where
Date <= @AsAtDate
group by
Product,
Location
I am looking for the last date where the sum of the transactions up to and including it is positive.
Based on your revised question and your comment, here is another solution that I hope answers your question.
select Product, Location, max(Date) as Date
from (
select a.Product, a.Location, a.Date from ProductTransactions as a
join ProductTransactions as b
on a.Product = b.Product and a.Location = b.Location
where b.Date <= a.Date
group by a.Product, a.Location, a.Date
having sum(b.Value) >= 0
) as T
group by Product, Location
The subquery (table T) produces a list of {product, location, date} rows for which the sum of the values prior (and inclusive) is positive. From that set, we select the last date for each {product, location} pair.
This can be done in a set based way using windowed aggregates in order to construct the running total. Depending on the number of rows in the table this could be a bit slow but you can't really limit the time range going backwards as the last positive date is an unknown quantity.
I've used a CTE for convenience to construct the aggregated data set but converting that to a temp table should be faster. (CTEs get executed each time they are called whereas a temp table will only execute once.)
The basic theory is to construct the running totals for all of the previous days using the OVER clause to partition and order the SUM aggregates. This data set is then used and filtered to the expected date. When a row in that table has a quantity less than zero it is joined back to the aggregate data set for all previous days for that product and location where the quantity was greater than zero.
Since this may return multiple positive date rows the ROW_NUMBER() function is used to order the rows based on the date of the positive quantity day. This is done in descending order so that row number 1 is the most recent positive day. It isn't possible to use a simple MIN() here because the MIN([Date]) may not correspond to the MIN(Quantity).
WITH x AS (
SELECT [Date],
Product,
[Location],
SUM(Quantity) OVER (PARTITION BY Product, [Location] ORDER BY [Date] ASC) AS Quantity,
SUM([Value]) OVER(PARTITION BY Product, [Location] ORDER BY [Date] ASC) AS [Value]
FROM ProductTransactions
WHERE [Date] <= @AsAtDate
)
SELECT [Date], Product, [Location], Quantity, [Value], Positive_date, Positive_date_quantity
FROM (
SELECT x1.[Date], x1.Product, x1.[Location], x1.Quantity, x1.[Value],
x2.[Date] AS Positive_date, x2.[Quantity] AS Positive_date_quantity,
ROW_NUMBER() OVER (PARTITION BY x1.Product, x1.[Location] ORDER BY x2.[Date] DESC) AS Positive_date_row
FROM x AS x1
LEFT JOIN x AS x2 ON x1.Product=x2.Product AND x1.[Location]=x2.[Location]
AND x2.[Date]<x1.[Date] AND x1.Quantity<0 AND x2.Quantity>0
WHERE x1.[Date] = @AsAtDate
) AS y
WHERE Positive_date_row=1
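If you do convert the CTE to a temp table as suggested above, the shape would be roughly as follows (a sketch; #running_totals is an illustrative name):

-- Materialise the running totals once instead of re-evaluating the CTE.
SELECT [Date],
       Product,
       [Location],
       SUM(Quantity) OVER (PARTITION BY Product, [Location] ORDER BY [Date] ASC) AS Quantity,
       SUM([Value]) OVER (PARTITION BY Product, [Location] ORDER BY [Date] ASC) AS [Value]
INTO #running_totals
FROM ProductTransactions
WHERE [Date] <= @AsAtDate;

-- ...then join #running_totals to itself exactly as x is joined to itself above.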
Do you mean that you want to get the last date on which the group's running quantity became positive?
For example, if you are using SQL Server 2012+:
In the following scenario, when the date reaches 01/03/2017 the running sum of quantity comes to 1 (-10 + 5 + 6).
Is it possible for the running quantity to go negative again on a later date?
;WITH tb(Product, Location,[Date],Quantity) AS(
SELECT 'A','B',CONVERT(DATETIME,'01/01/2017'),-10 UNION ALL
SELECT 'A','B','01/02/2017',5 UNION ALL
SELECT 'A','B','01/03/2017',6 UNION ALL
SELECT 'A','B','01/04/2017',2
)
SELECT t.Product, t.Location, SUM(t.Quantity) AS Qty,
       MIN(CASE WHEN t.CurrentSum > 0 THEN t.Date ELSE NULL END) AS LastPositiveDate
FROM (
    SELECT *, SUM(tb.Quantity) OVER (ORDER BY [Date]) AS CurrentSum FROM tb
) AS t
GROUP BY t.Product, t.Location
Product Location Qty LastPositiveDate
------- -------- ----------- -----------------------
A B 3 2017-01-03 00:00:00.000

Filtering values based on the date ranges

Requesting your help in achieving the following result from the data set below.
I have the below result set
CampaignName  Matchfrom   MatchTo
a             08-09-2013  07-11-2013
a             10-09-2013  10-11-2013
a             08-11-2013  07-01-2014
a             09-11-2013  08-01-2014
The above set is sorted on the matchfrom date column. The first row is considered the master.
Now the query should filter out the rows whose matchfrom lies within the date range of the master.
This I achieved using a self join. But the third row is completely outside the range of the master (1st row), so it should now be considered the master, and it should filter out the 4th row.
Final result set will be like the below, marked as pass and fail
CampaignName  Matchfrom   MatchTo
a             08-09-2013  07-11-2013  PASS
a             10-09-2013  10-11-2013  FAIL
a             08-11-2013  07-01-2014  PASS
a             09-11-2013  08-01-2014  FAIL
Can someone advise me on this?
With your data you'll have to do a bit more scrubbing, but the code below should get you going in the right direction. You have to be careful because MatchFrom and MatchTo in your "Master Record" go in the opposite direction from all of your other data.
CREATE TABLE #tmpCampaign(
CampaignName varchar(1),
Matchfrom Date,
MatchTo Date
)
INSERT INTO #tmpCampaign VALUES
('a','08-09-2013','07-11-2013'),
('a','10-09-2013','10-11-2013'),
('a','08-11-2015','07-01-2014'),
('a','09-11-2013','08-01-2014')
;WITH Campaign AS(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY campaignName ORDER BY MatchFrom) as CampRank
FROM #tmpCampaign)
SELECT c1.*, c2.MatchFrom as MasterFrom, c2.MatchTo as MasterTo,
CASE WHEN c1.Matchfrom >= c2.MatchFrom AND c1.Matchfrom <= c2.MatchTo THEN 'Pass'
ELSE 'Fail' END as PassFail
FROM Campaign as c1
JOIN Campaign as c2
ON c1.CampaignName = c2.CampaignName and c2.CampRank = 1
This may create a problem when date duplication happens, but for your result set I have picked the date key and done the partitioning according to that to achieve the results:
;With Cte as
(
    select Campaignname,
           matchfrom,
           matchto,
           ROW_NUMBER() OVER (PARTITION BY right(matchfrom, len(matchfrom) - charindex('-', matchfrom) - 3)
                              ORDER BY Campaignname) RN
    from #tmpCampaign
)
select Campaignname,
       matchfrom,
       matchto,
       Case when RN = 1 then 'Pass' ELSE 'Fail' END
from Cte

how to create a mssql view for getting last state information

Let's say I have two tables, one for object records and one for activity records about these objects.
I'm inserting a new record into this activity table every time an object is inserted or updated.
To put it simply, assume I have four fields in the activity table: objectId, type, status and date.
When an object is about to be updated, I'm planning to get the last state of the object and look for the changes. If there is a difference between the new value and the previous value, I'll set the field to the new input; otherwise I'll set it to null. So, for example, in an update where the user only changes the status value of the object but leaves the type value the same, I'll insert a new row with a null value for type and a new value for status.
SELECT * FROM Activity;
oid  type  status  date
-----------------------------------------
1    0     1       2009.03.05 17:58:07
1    null  2       2009.03.06 07:00:00
1    1     null    2009.03.07 20:18:07
1    3     null    2009.03.08 07:00:00
So I have to create a view that tells me the current state of my object, like:
SELECT * FROM ObjectStateView Where oid = 1;
oid  type  status  date
-----------------------------------------
1    3     2       2009.03.08 07:00:00
How do I achieve this?
Assuming date can be used to find latest record:
CREATE VIEW foo
AS
SELECT
    A.oid,
    (SELECT TOP 1 type FROM Activity At
     WHERE At.oid = A.oid AND At.date <= MAX(A.date) AND type IS NOT NULL
     ORDER BY At.date DESC) AS type,
    (SELECT TOP 1 status FROM Activity Ast
     WHERE Ast.oid = A.oid AND Ast.date <= MAX(A.date) AND status IS NOT NULL
     ORDER BY Ast.date DESC) AS status,
    MAX(A.date) AS date
FROM
    Activity A
GROUP BY
    A.oid
GO
Edit: if you want a JOIN (untested)
CREATE VIEW foo
AS
SELECT TOP 1
A.oid,
At.type,
Ast.status,
A.date
FROM
Activity A
LEFT JOIN
(SELECT TOP 1 oid, date, type FROM Activity WHERE type IS NOT NULL ORDER BY date DESC) At ON A.OID = At.oid
LEFT JOIN
(SELECT TOP 1 oid, date, status FROM Activity WHERE status IS NOT NULL ORDER BY date DESC) Ast ON A.OID = Ast.oid
ORDER BY date DESC
GO
Should have added this earlier:
It will scale exponentially because you have to touch the table 11 different times.
A better solution would be to maintain a "current" table and maintain it via a trigger on activity.
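A minimal sketch of that trigger idea, assuming a hypothetical one-row-per-object table called ObjectCurrentState (the table and trigger names are illustrative, not part of the original schema):

CREATE TRIGGER trg_Activity_Current
ON Activity
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    MERGE ObjectCurrentState AS target
    USING (SELECT oid, type, status, date FROM inserted) AS src
        ON target.oid = src.oid
    WHEN MATCHED THEN
        UPDATE SET
            -- NULL in an activity row means "unchanged", per the question
            type   = COALESCE(src.type, target.type),
            status = COALESCE(src.status, target.status),
            date   = src.date
    WHEN NOT MATCHED THEN
        INSERT (oid, type, status, date)
        VALUES (src.oid, src.type, src.status, src.date);
END
GO

(MERGE needs SQL Server 2008+; on older versions the same effect can be had with an UPDATE followed by an INSERT for missing oids.)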
Have you considered using the MAX function?
select oid, type, status, MAX(date) as max_date
from ObjectStateView
where oid = 1
group by oid, type, status
Not really sure why you'd want the nulls in there. You can track what's changed between inputs by comparing the latest entry to the previous. Then the current state of the object is the latest entry in the table. You can determine if an object has changed by creating a hash of the parts of the object that you want to track changes to and storing that as an extra column.
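If you go the hash route, something along these lines could work (a sketch only; the computed column is illustrative, and this exact syntax needs SQL Server 2012+ for HASHBYTES with SHA2_256 and CONCAT):

-- Persisted hash of the tracked fields: detecting a change becomes a
-- single comparison against the previous row's hash.
ALTER TABLE Activity ADD state_hash AS
    HASHBYTES('SHA2_256',
              CONCAT(CAST(type AS varchar(20)), '|', CAST(status AS varchar(20))))
    PERSISTED;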
Historical values:
Since you track changes, you may want to see the status of the object historically:
SELECT a.oid,
a.date,
a_type.type,
a_status.status
FROM Activity a
LEFT JOIN Activity a_type
ON a_type.oid = a.oid
AND a_type.date = (SELECT TOP 1 date FROM Activity WHERE oid = a.oid AND date <= a.date AND type IS NOT NULL ORDER BY date DESC)
LEFT JOIN Activity a_status
ON a_status.oid = a.oid
AND a_status.date = (SELECT TOP 1 date FROM Activity where oid = a.oid AND date <= a.date AND status IS NOT NULL ORDER BY date DESC)
which will return:
oid date type status
----------- ---------- ----------- -----------
1 2009-03-05 0 1
1 2009-03-06 0 2
1 2009-03-07 1 2
1 2009-03-08 3 2
Performance consideration:
On the other hand, if you have more than just a few fields and the table is big, performance would become an issue. In that case it would also make sense to store/cache the whole values in another table, MyDataHistory, which would contain data like the table shown above. Then selecting the current (latest) version is trivial using a SQL view that filters the latest row (1 row only) by oid and date.
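As a sketch of that final view over the hypothetical MyDataHistory table (same oid/type/status/date columns), keeping only the latest row per oid:

CREATE VIEW ObjectStateView
AS
SELECT oid, type, status, date
FROM (
    SELECT oid, type, status, date,
           ROW_NUMBER() OVER (PARTITION BY oid ORDER BY date DESC) AS rn
    FROM MyDataHistory
) AS h
WHERE rn = 1
GO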
