PostgreSQL Average and Group By - database

So I have a table with the columns id (INTEGER), temperature (REAL), and time (TIMESTAMP), and I want to compute the average temperature by hour, but I can't work it out.
For a sample with:
temperature | time
------------+---------------------
      21.88 | 2018-06-01 07:30:00
      23.21 | 2018-06-01 07:45:00
      23.57 | 2018-06-01 08:15:00
      24.91 | 2018-06-01 08:30:00
       25.5 | 2018-06-01 08:45:00
      25.98 | 2018-06-01 09:00:00
            | 2018-06-01 09:30:00
      24.45 | 2018-06-01 09:45:00
            | 2018-06-01 10:00:00
And the query:
SELECT DISTINCT ON (DATE_PART('hour', time)) time, avg(temperature)
FROM Measure
GROUP BY time, DATE_PART('hour', time);
I get:
time | avg
---------------------+------------------
2018-06-01 07:30:00 | 21.8799991607666
2018-06-01 08:15:00 | 23.5699996948242
2018-06-01 09:00:00 | 25.9799995422363
Something is happening here, but it's not an average...
Resolved:
Thanks to the comments, I got a correct result with the query:
SELECT DISTINCT ON (DATE_TRUNC('hour', time)) DATE_TRUNC('hour', time), avg(temperature)
FROM Measure
GROUP BY DATE_TRUNC('hour', time), DATE_TRUNC('day', time);
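For what it's worth, the original query failed because GROUP BY time puts every row in its own group (so avg is taken over a single row), and DISTINCT ON then keeps one arbitrary row per hour. Once the grouping is done on the truncated timestamp, the DISTINCT ON and the day-level grouping are redundant; a minimal equivalent sketch, assuming the Measure table above:

SELECT DATE_TRUNC('hour', time) AS hour, AVG(temperature) AS avg_temperature
FROM Measure
GROUP BY DATE_TRUNC('hour', time)
ORDER BY hour;

Note that AVG() ignores NULLs, so the rows with missing temperatures (09:30:00 and 10:00:00 above) do not affect the averages.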

Related

Working hours between two dates in Snowflake

How to calculate working hours between two dates in Snowflake without creating tables?
I have tried functions like DATEDIFF and timestamp arithmetic, but I could not reach a solution.
I would like to get something like this:
+---------------------+---------------------+---------------+
| create_Task | Solved_Task | BusinessHours |
+---------------------+---------------------+---------------+
| 2012-03-05 09:00:00 | 2012-03-05 15:00:00 | 6.000000 |
| 2012-03-05 10:00:00 | 2012-03-06 10:00:00 | 8.000000 |
| 2012-03-05 11:00:00 | 2012-03-06 10:00:00 | 7.000000 |
| 2012-03-05 10:00:00 | 2012-03-06 15:00:00 | 13.000000 |
| 2012-03-09 16:00:00 | 2012-03-12 10:00:00 | 2.000000 |
| 2012-03-06 16:00:00 | 2012-03-15 10:00:00 | 50.000000 |
| 2012-03-09 16:00:00 | 2012-03-19 10:00:00 | 42.000000 |
+---------------------+---------------------+---------------+
I would also like to be able to specify the working hours, so that I can then calculate the business hours.
One way to do this is by creating a working hours table. Then you can run a fairly simple query:
select
    t.id
    , sum(datediff('second',
        -- take the later of the two start times
        (case when t.start <= w.working_day_start_timestamp
              then w.working_day_start_timestamp
              else t.start
         end),
        -- take the earlier of the two end times
        (case when t.end >= w.working_day_end_timestamp
              then w.working_day_end_timestamp
              else t.end
         end)
      )) / 3600 -- convert seconds to hours
    as working_hour_diff
from
    working_days_times w
    cross join time_intervals t
where -- keep only intersecting intervals
    (
        t.start <= w.working_day_end_timestamp
        and
        t.end >= w.working_day_start_timestamp
    )
    and -- keep only working days
    w.is_working_day
group by
    t.id
If you need a function, this article describes the implementation of a JavaScript UDF in Snowflake:
https://medium.com/dandy-engineering-blog/how-to-calculate-the-number-of-working-hours-between-two-timestamps-in-sql-b5696de66e51
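For reference, a minimal sketch of the working-hours calendar table the query above assumes, with one row per calendar day (table and column names are taken from the query; the types are an assumption):

create table working_days_times (
    working_day_start_timestamp timestamp, -- e.g. 2012-03-05 09:00:00
    working_day_end_timestamp   timestamp, -- e.g. 2012-03-05 17:00:00
    is_working_day              boolean    -- false for weekends and holidays
);

The time_intervals table stands in for the task data (t.id, t.start, t.end); since end is a reserved word in most dialects, it may need quoting in practice.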

How to add 1 month to 30th or 31st Jan and after Feb it should take 30th or 31st Mar respectively

I was facing this problem and spent a lot of time on it today, so I thought I would share it here:
I have a table where we store a debitDate, and we have a stored procedure that every month sets the debit date in the table to the next month.
So if the debit date is 29th Jan, 2020 -> 29th Feb, 2020 -> 29th March, 2020 - it should go on like this. I am using the DATEADD() function in the stored procedure.
But for the 30th and 31st I am facing an issue. It should work like below in upcoming years:
Desired Behaviour:
30th Jan, 2020 -> 29th Feb, 2020 -> 30th Mar, 2020 -> 30th Apr, 2020
30th Jan, 2021 -> 28th Feb, 2021 -> 30th Mar, 2021 -> 30th Apr, 2021
31st Jan, 2020 -> 29th Feb, 2020 -> 31st Mar, 2020 -> 30th Apr, 2020
Issue:
30th Jan, 2020 -> 29th Feb, 2020 -> 29th Mar, 2020 -> 29th Apr, 2020
30th Jan, 2021 -> 28th Feb, 2021 -> 28th Mar, 2021 -> 28th Apr, 2021
31st Jan, 2020 -> 29th Feb, 2020 -> 29th Mar, 2020 -> 29th Apr, 2020
Solution 1:
As a solution, I thought I could add a new column previousDebitDate to the table, and when we update the debit date, check whether the previousDebitDate day is 30 or 31.
If true then
DATEADD(MONTH, 2, @previousDebitDate)
else
DATEADD(MONTH, 1, @debitDate)
If anyone has a better solution please feel free to post your answer.
Solution 2:
For this issue a better solution is to add debitDay as a new column to the table, store only the day part (e.g. 30), and calculate each month's debit date on the fly.
I think Solution 2 is better! Thanks @Arvo!!!
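A minimal sketch of Solution 2, assuming a debitDay column that stores only the day number and an @targetMonth variable holding any date in the month being billed (both names are illustrative):

DECLARE @targetMonth DATE = '2021-02-15';
SELECT DATEFROMPARTS(
           YEAR(@targetMonth),
           MONTH(@targetMonth),
           -- clamp the stored day to the last day of the target month
           CASE WHEN debitDay > DAY(EOMONTH(@targetMonth))
                THEN DAY(EOMONTH(@targetMonth))
                ELSE debitDay
           END) AS debitDate
FROM Debits; -- hypothetical table holding the debitDay column

With debitDay = 30 this yields 2021-02-28 for February and 2021-03-30 for March, matching the desired behaviour above.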
Maybe I've understood correctly and maybe not, but here's what I think you're looking for:
CREATE TABLE Data
(
    Dates DATE
);
INSERT Data(Dates) VALUES
('2020-01-30');

WITH CTE AS
(
    SELECT Dates,
           DATEADD(Month, 1, Dates) NextMonth,
           DAY(EOMONTH(DATEADD(Month, 1, Dates))) LastDay
    FROM Data
    UNION ALL
    SELECT DATEADD(Month, 1, Dates),
           DATEADD(Month, 1, NextMonth),
           DAY(EOMONTH(DATEADD(Month, 1, NextMonth)))
    FROM CTE
    WHERE Dates <= '2021-12-31'
)
SELECT Dates, NextMonth,
       DATEFROMPARTS(YEAR(Dates), MONTH(NextMonth),
                     CASE WHEN LastDay > 30 THEN 30 ELSE LastDay END) Value
FROM CTE;
Which returns:
+------------+------------+------------+
| Dates | NextMonth | Value |
+------------+------------+------------+
| 2020-01-30 | 2020-02-29 | 2020-02-29 |
| 2020-02-29 | 2020-03-29 | 2020-03-30 |
| 2020-03-29 | 2020-04-29 | 2020-04-30 |
| 2020-04-29 | 2020-05-29 | 2020-05-30 |
| 2020-05-29 | 2020-06-29 | 2020-06-30 |
| 2020-06-29 | 2020-07-29 | 2020-07-30 |
| 2020-07-29 | 2020-08-29 | 2020-08-30 |
| 2020-08-29 | 2020-09-29 | 2020-09-30 |
| 2020-09-29 | 2020-10-29 | 2020-10-30 |
| 2020-10-29 | 2020-11-29 | 2020-11-30 |
| 2020-11-29 | 2020-12-29 | 2020-12-30 |
| 2020-12-29 | 2021-01-29 | 2020-01-30 |
| 2021-01-29 | 2021-02-28 | 2021-02-28 |
| 2021-02-28 | 2021-03-28 | 2021-03-30 |
| 2021-03-28 | 2021-04-28 | 2021-04-30 |
| 2021-04-28 | 2021-05-28 | 2021-05-30 |
| 2021-05-28 | 2021-06-28 | 2021-06-30 |
| 2021-06-28 | 2021-07-28 | 2021-07-30 |
| 2021-07-28 | 2021-08-28 | 2021-08-30 |
| 2021-08-28 | 2021-09-28 | 2021-09-30 |
| 2021-09-28 | 2021-10-28 | 2021-10-30 |
| 2021-10-28 | 2021-11-28 | 2021-11-30 |
| 2021-11-28 | 2021-12-28 | 2021-12-30 |
| 2021-12-28 | 2022-01-28 | 2021-01-30 |
| 2022-01-28 | 2022-02-28 | 2022-02-28 |
+------------+------------+------------+
Much better:
WITH CTE AS
(
    SELECT 1 N, Dates, Dates ExpectedValue
    FROM Data
    UNION ALL
    SELECT N+1,
           DATEADD(Month, 1, Dates),
           DATEFROMPARTS(YEAR(ExpectedValue),
                         MONTH(DATEADD(Month, 1, ExpectedValue)),
                         CASE WHEN DAY(EOMONTH(DATEADD(Month, 1, ExpectedValue))) > 30 THEN 30
                              ELSE DAY(EOMONTH(DATEADD(Month, 1, ExpectedValue)))
                         END)
    FROM CTE
    WHERE N < 15
)
SELECT *
FROM CTE
ORDER BY N;
Returns:
+----+------------+---------------+
| N | Dates | ExpectedValue |
+----+------------+---------------+
| 1 | 2020-01-30 | 2020-01-30 |
| 2 | 2020-02-29 | 2020-02-29 |
| 3 | 2020-03-29 | 2020-03-30 |
| 4 | 2020-04-29 | 2020-04-30 |
| 5 | 2020-05-29 | 2020-05-30 |
| 6 | 2020-06-29 | 2020-06-30 |
| 7 | 2020-07-29 | 2020-07-30 |
| 8 | 2020-08-29 | 2020-08-30 |
| 9 | 2020-09-29 | 2020-09-30 |
| 10 | 2020-10-29 | 2020-10-30 |
| 11 | 2020-11-29 | 2020-11-30 |
| 12 | 2020-12-29 | 2020-12-30 |
| 13 | 2021-01-29 | 2020-01-30 |
| 14 | 2021-02-28 | 2020-02-29 |
| 15 | 2021-03-28 | 2020-03-30 |
+----+------------+---------------+
Here is a db<>fiddle
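One caveat, visible near the year boundaries of both result sets (e.g. 2020-01-30 where 2021-01-30 is expected): DATEFROMPARTS takes its year from the old date rather than from the incremented one, so the year never advances past December. A sketch of a corrected recursive member for the second query, changing only the year argument:

SELECT N+1,
       DATEADD(Month, 1, Dates),
       DATEFROMPARTS(YEAR(DATEADD(Month, 1, ExpectedValue)), -- year taken from the incremented date
                     MONTH(DATEADD(Month, 1, ExpectedValue)),
                     CASE WHEN DAY(EOMONTH(DATEADD(Month, 1, ExpectedValue))) > 30 THEN 30
                          ELSE DAY(EOMONTH(DATEADD(Month, 1, ExpectedValue)))
                     END)
FROM CTE
WHERE N < 15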

Reduce data in SQL table created due to a bug

Due to a software bug that was unfortunately not obvious enough in the development environment to be noticed, we created massive loads of SQL records that we do not actually need. The records do not harm data integrity or anything else; they are simply unnecessary.
We are looking at a database schema like the following:
entity_static (just some static data that won't change):
id | val1 | val2 | val3
-----------------------
1 | 50 | 183 | 93
2 | 60 | 823 | 123
entity_dynamic (some dynamic data we need a historical record of):
id | entity_static_id | val1 | val2 | valid_from | valid_to
-------------------------------------------------------------------------------
1 | 1 | 50 | 75 | 2018-01-01 00:00:00 | 2018-01-01 00:59:59
2 | 1 | 50 | 75 | 2018-01-01 01:00:00 | 2018-01-01 01:59:59
3 | 1 | 50 | 75 | 2018-01-01 02:00:00 | 2018-01-01 02:59:59
4 | 1 | 50 | 75 | 2018-01-01 03:00:00 | 2018-01-01 03:59:59
5 | 2 | 60 | 75 | 2018-01-01 00:00:00 | 2018-01-01 00:59:59
6 | 2 | 60 | 75 | 2018-01-01 01:00:00 | 2018-01-01 01:59:59
7 | 2 | 60 | 75 | 2018-01-01 02:00:00 | 2018-01-01 02:59:59
8 | 2 | 60 | 75 | 2018-01-01 03:00:00 | 2018-01-01 03:59:59
There are some more columns besides val1 and val2, this is just an example.
The entity_dynamic table describes what parameters were valid for a given period of time. It is not a recording for a certain point in time (like sensor data).
Therefore all equal records could easily be aggregated into one record like the following:
id | entity_static_id | val1 | val2 | valid_from | valid_to
-------------------------------------------------------------------------------
1 | 1 | 50 | 75 | 2018-01-01 00:00:00 | 2018-01-01 03:59:59
5 | 2 | 60 | 75 | 2018-01-01 00:00:00 | 2018-01-01 03:59:59
It is possible that the data in the valid_to column is NULL.
My question now is: with what query can I aggregate similar records with consecutive validity ranges into one record? Grouping should be done by the foreign key entity_static_id.
with entity_dynamic as
(
    select *
    from (values
         ('1','1','50','75','2018-01-01 00:00:00','2018-01-01 00:59:59')
        ,('2','1','50','75','2018-01-01 01:00:00','2018-01-01 01:59:59')
        ,('3','1','50','75','2018-01-01 02:00:00','2018-01-01 02:59:59')
        ,('4','1','50','75','2018-01-01 03:00:00','2018-01-01 03:59:59')
        ,('5','2','60','75','2018-01-01 00:00:00','2018-01-01 00:59:59')
        ,('6','2','60','75','2018-01-01 01:00:00','2018-01-01 01:59:59')
        ,('7','2','60','75','2018-01-01 02:00:00','2018-01-01 02:59:59')
        ,('8','2','60','75','2018-01-01 03:00:00','2018-01-01 03:59:59')
        ,('9','1','60','75','2018-01-01 04:00:00','2018-01-01 04:59:59')
        ,('10','1','60','75','2018-01-01 05:00:00','2018-01-01 05:59:59')
        ,('11','2','70','75','2018-01-01 04:00:00','2018-01-01 04:59:59')
        ,('12','2','70','75','2018-01-01 05:00:00','2018-01-01 05:59:59')
        ,('13','2','60','75','2018-01-01 06:00:00','2018-01-01 06:59:59')
    ) a(id, entity_static_id, val1, val2, valid_from, valid_to)
)
First, add a row number for each unique combination of val1 and val2 within each entity_static_id (unique_group), and an overall row number per entity_static_id ordered by valid_from descending (rn):
,step1 as
(
    select
        id, entity_static_id, val1, val2, valid_from, valid_to
        ,row_number() over (partition by entity_static_id, val1, val2 order by valid_from) unique_group
        ,row_number() over (partition by entity_static_id order by valid_from desc) rn
    from entity_dynamic
)
This gives:
+----------------------------------------------------------------------------------------+
|id|entity_static_id|val1|val2|valid_from |valid_to |unique_group|rn|
+----------------------------------------------------------------------------------------+
|10|1 |60 |75 | 2018-01-01 05:00:00 | 2018-01-01 05:59:59|2 |1 |
|9 |1 |60 |75 | 2018-01-01 04:00:00 | 2018-01-01 04:59:59|1 |2 |
|4 |1 |50 |75 | 2018-01-01 03:00:00 | 2018-01-01 03:59:59|4 |3 |
|3 |1 |50 |75 | 2018-01-01 02:00:00 | 2018-01-01 02:59:59|3 |4 |
|2 |1 |50 |75 | 2018-01-01 01:00:00 | 2018-01-01 01:59:59|2 |5 |
|1 |1 |50 |75 | 2018-01-01 00:00:00 | 2018-01-01 00:59:59|1 |6 |
|13|2 |60 |75 | 2018-01-01 06:00:00 | 2018-01-01 06:59:59|5 |1 |
|12|2 |70 |75 | 2018-01-01 05:00:00 | 2018-01-01 05:59:59|2 |2 |
|11|2 |70 |75 | 2018-01-01 04:00:00 | 2018-01-01 04:59:59|1 |3 |
|8 |2 |60 |75 | 2018-01-01 03:00:00 | 2018-01-01 03:59:59|4 |4 |
|7 |2 |60 |75 | 2018-01-01 02:00:00 | 2018-01-01 02:59:59|3 |5 |
|6 |2 |60 |75 | 2018-01-01 01:00:00 | 2018-01-01 01:59:59|2 |6 |
|5 |2 |60 |75 | 2018-01-01 00:00:00 | 2018-01-01 00:59:59|1 |7 |
+----------------------------------------------------------------------------------------+
Step 2 adds the unique-group row number to the overall row number. Since the latter is descending, consecutive rows with equal values get the same sum, called tar in this example:
,step2 as
(
    select
        *
        ,unique_group + rn tar
    from step1
)
Step 2 gives:
+--------------------------------------------------------------------------------------------+
|id|entity_static_id|val1|val2|valid_from |valid_to |unique_group|rn|tar|
+--------------------------------------------------------------------------------------------+
|10|1 |60 |75 | 2018-01-01 05:00:00 | 2018-01-01 05:59:59|2 |1 |3 |
|9 |1 |60 |75 | 2018-01-01 04:00:00 | 2018-01-01 04:59:59|1 |2 |3 |
|4 |1 |50 |75 | 2018-01-01 03:00:00 | 2018-01-01 03:59:59|4 |3 |7 |
|3 |1 |50 |75 | 2018-01-01 02:00:00 | 2018-01-01 02:59:59|3 |4 |7 |
|2 |1 |50 |75 | 2018-01-01 01:00:00 | 2018-01-01 01:59:59|2 |5 |7 |
|1 |1 |50 |75 | 2018-01-01 00:00:00 | 2018-01-01 00:59:59|1 |6 |7 |
|13|2 |60 |75 | 2018-01-01 06:00:00 | 2018-01-01 06:59:59|5 |1 |6 |
|12|2 |70 |75 | 2018-01-01 05:00:00 | 2018-01-01 05:59:59|2 |2 |4 |
|11|2 |70 |75 | 2018-01-01 04:00:00 | 2018-01-01 04:59:59|1 |3 |4 |
|8 |2 |60 |75 | 2018-01-01 03:00:00 | 2018-01-01 03:59:59|4 |4 |8 |
|7 |2 |60 |75 | 2018-01-01 02:00:00 | 2018-01-01 02:59:59|3 |5 |8 |
|6 |2 |60 |75 | 2018-01-01 01:00:00 | 2018-01-01 01:59:59|2 |6 |8 |
|5 |2 |60 |75 | 2018-01-01 00:00:00 | 2018-01-01 00:59:59|1 |7 |8 |
+--------------------------------------------------------------------------------------------+
Finally, you can find the valid_from and valid_to dates by using min and max, grouping by the appropriate columns:
select
    min(id) id
    ,entity_static_id
    ,val1
    ,val2
    ,min(valid_from) valid_from
    ,max(valid_to) valid_to
from step2
group by entity_static_id, val1, val2, tar
order by entity_static_id, valid_from
In full, the code is:
with entity_dynamic as
(
    select *
    from (values
         ('1','1','50','75','2018-01-01 00:00:00','2018-01-01 00:59:59')
        ,('2','1','50','75','2018-01-01 01:00:00','2018-01-01 01:59:59')
        ,('3','1','50','75','2018-01-01 02:00:00','2018-01-01 02:59:59')
        ,('4','1','50','75','2018-01-01 03:00:00','2018-01-01 03:59:59')
        ,('5','2','60','75','2018-01-01 00:00:00','2018-01-01 00:59:59')
        ,('6','2','60','75','2018-01-01 01:00:00','2018-01-01 01:59:59')
        ,('7','2','60','75','2018-01-01 02:00:00','2018-01-01 02:59:59')
        ,('8','2','60','75','2018-01-01 03:00:00','2018-01-01 03:59:59')
        ,('9','1','60','75','2018-01-01 04:00:00','2018-01-01 04:59:59')
        ,('10','1','60','75','2018-01-01 05:00:00','2018-01-01 05:59:59')
        ,('11','2','70','75','2018-01-01 04:00:00','2018-01-01 04:59:59')
        ,('12','2','70','75','2018-01-01 05:00:00','2018-01-01 05:59:59')
        ,('13','2','60','75','2018-01-01 06:00:00','2018-01-01 06:59:59')
    ) a(id, entity_static_id, val1, val2, valid_from, valid_to)
)
,step1 as
(
    select
        id, entity_static_id, val1, val2, valid_from, valid_to
        ,row_number() over (partition by entity_static_id, val1, val2 order by valid_from) unique_group
        ,row_number() over (partition by entity_static_id order by valid_from desc) rn
    from entity_dynamic
)
,step2 as
(
    select
        *
        ,unique_group + rn tar
    from step1
)
select
    min(id) id
    ,entity_static_id
    ,val1
    ,val2
    ,min(valid_from) valid_from
    ,max(valid_to) valid_to
from step2
group by entity_static_id, val1, val2, tar
order by entity_static_id, valid_from
The result is:
+------------------------------------------------------------------------+
|id|entity_static_id|val1|val2|valid_from |valid_to |
+------------------------------------------------------------------------+
|1 |1 |50 |75 | 2018-01-01 00:00:00 | 2018-01-01 03:59:59|
|10|1 |60 |75 | 2018-01-01 04:00:00 | 2018-01-01 05:59:59|
|5 |2 |60 |75 | 2018-01-01 00:00:00 | 2018-01-01 03:59:59|
|11|2 |70 |75 | 2018-01-01 04:00:00 | 2018-01-01 05:59:59|
|13|2 |60 |75 | 2018-01-01 06:00:00 | 2018-01-01 06:59:59|
+------------------------------------------------------------------------+
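If the goal is to physically delete the redundant rows rather than just view the collapsed ranges, here is a hedged sketch (T-SQL syntax; the collapsed helper table is an assumption, not part of the original answer): materialize the final query, stretch the surviving rows, and delete the rest.

-- materialize the collapsed ranges: append INTO collapsed to the final select above
select min(id) id, entity_static_id, val1, val2,
       min(valid_from) valid_from, max(valid_to) valid_to
into collapsed
from step2
group by entity_static_id, val1, val2, tar;

-- stretch each surviving row over its whole validity range
update d
set valid_to = c.valid_to
from entity_dynamic d
join collapsed c on c.id = d.id;

-- drop every row that is now covered by a surviving one
delete d
from entity_dynamic d
where not exists (select 1 from collapsed c where c.id = d.id);

Note that the first statement only works as part of the full WITH query above, since step2 is a CTE.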
If a group is defined by entity_static_id alone (i.e. all rows of an entity collapse into a single range), then this should be all you need:
with entity_dynamic as
(
    select *
    from (values
         ('1','1','50','75','2018-01-01 00:00:00','2018-01-01 00:59:59')
        ,('2','1','50','75','2018-01-01 01:00:00','2018-01-01 01:59:59')
        ,('3','1','50','75','2018-01-01 02:00:00','2018-01-01 02:59:59')
        ,('4','1','50','75','2018-01-01 03:00:00','2018-01-01 03:59:59')
        ,('5','2','60','75','2018-01-01 00:00:00','2018-01-01 00:59:59')
        ,('6','2','60','75','2018-01-01 01:00:00','2018-01-01 01:59:59')
        ,('7','2','60','75','2018-01-01 02:00:00','2018-01-01 02:59:59')
        ,('8','2','60','75','2018-01-01 03:00:00','2018-01-01 03:59:59')
    ) a(id, entity_static_id, val1, val2, valid_from, valid_to)
)
, entity_dynamicPlus as
(
    select *
        , row_number() over (partition by entity_static_id order by valid_to asc ) as rnA
        , row_number() over (partition by entity_static_id order by valid_to desc) as rnD
    from entity_dynamic
)
select eStart.id, eStart.entity_static_id, eStart.val1, eStart.val2, eStart.valid_from, eEnd.valid_to
from entity_dynamicPlus as eStart
join entity_dynamicPlus as eEnd
    on  eStart.entity_static_id = eEnd.entity_static_id
    and eStart.rnA = 1
    and eEnd.rnD = 1
order by eStart.entity_static_id

Self join using case statement in SQL Server

Below is the data in a table Star. I want a query that returns only one record per StarID per assessdate; but if there are rows with the same assessdate for one StarID, it should compare the askdate values and return the record with the most recent askdate.
StarID  | assessdate              | artid | pep | manager         | Notes                      | followup | askdate
DEC1660 | 2016-05-18 00:00:00.000 | 20979 | Yes | BRIGGS, SIMON   | NULL                       | 6 Weeks  | NULL
DEC1660 | 2016-05-19 00:00:00.000 | 20982 | No  | BRIGGS, SIMON   | Other, sdf, AZT, TDF, RAL  | 12 Weeks | 2016-05-11 00:00:00.000
ANW4477 | 2016-05-27 00:00:00.000 | 21008 | Yes | Mundt, Susan    | NFV, DRV, MVC, Other, test | 6 Weeks  | 2016-05-27 00:00:00.000
ANW4477 | 2016-05-28 00:00:00.000 | 21011 | No  | Henley, Rebecca | NULL                       | 12 Weeks | NULL
REP2893 | 2016-05-30 00:00:00.000 | 21305 | Yes | Henley, Rebecca | AZT, 3TC                   | 12 Weeks | 2016-05-30 00:00:00.000
REP2893 | 2016-05-30 00:00:00.000 | 21305 | Yes | Henley, Rebecca | TDF, FTC                   | 12 Weeks | 2016-06-02 00:00:00.000
Thanks in advance!
WITH X AS (
    SELECT *
         , ROW_NUMBER() OVER (PARTITION BY StarID, assessdate
                              ORDER BY askdate DESC) rn
    FROM Star
)
SELECT *
FROM X
WHERE rn = 1
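For reference, the same result can be written without a CTE using a common SQL Server idiom (a sketch with the same partitioning and ordering as above):

SELECT TOP 1 WITH TIES *
FROM Star
ORDER BY ROW_NUMBER() OVER (PARTITION BY StarID, assessdate
                            ORDER BY askdate DESC);

For the sample data, both forms keep the REP2893 row with askdate 2016-06-02, since its two rows share the same assessdate.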

Payment analysis - SQL server query

Please can someone give me some pointers.
We are required to send out a Statutory Arrears Notification.
The criterion for sending it out is that an account has missed 2 full payments or the equivalent thereof, i.e. if they are due to pay £100 a month but have only been paying £50 a month for 4 months, they are due the notice.
I have pulled together a query which separates the repayment schedule into date ranges, and for each range I have tallied up the payments made within that period. For each row I have also tallied up DueToDate and PaidToDate.
The issue I have is working out some form of scoring system for each row and then, at the end of the query, tallying it up to give an overall score which determines whether the notice is due or not.
The results structure is like this:
DueDate | DateFrom | DateTo | AmountDue | AmountPaid | DueToDate | PaidToDate
If you mean that you already have a query that produces a result set like the following:
DueDate | DateFrom | DateTo | AmountDue | AmountPaid | DueToDate | PaidToDate
20120301 | 20120201 | 20120229 | 100.00 | 50.00 | 100.00 | 50.00
20120401 | 20120301 | 20120331 | 100.00 | 50.00 | 200.00 | 100.00
20120501 | 20120401 | 20120430 | 100.00 | 50.00 | 300.00 | 150.00
20120601 | 20120501 | 20120531 | 100.00 | 50.00 | 400.00 | 200.00
Then here are two ways forward, depending on whether the AmountDue per month is constant. If it is, then two missed payments amount to twice the monthly due, and you can use:
select *
from QueryResult
where DueToDate - PaidToDate >= 2 * AmountDue;
If it is not constant, then you can use LAG() (SQL Server 2012+) to add the AmountDue from the prior row to the current one:
;WITH Lagged AS (
    select *, PriorAmount = LAG(AmountDue, 1, 0) OVER (order by DueDate)
    from QueryResult
)
select *
from Lagged
where DueToDate - PaidToDate >= AmountDue + PriorAmount;
Or just keep a running total in yet another column in your original query, e.g.
DueDate | DateFrom | DateTo | AmountDue | AmountPaid | DueToDate | PaidToDate | TwoPeriods
20120301 | 20120201 | 20120229 | 100.00 | 50.00 | 100.00 | 50.00 | 100.00
20120401 | 20120301 | 20120331 | 100.00 | 50.00 | 200.00 | 100.00 | 200.00
20120501 | 20120401 | 20120430 | 100.00 | 50.00 | 300.00 | 150.00 | 200.00
20120601 | 20120501 | 20120531 | 100.00 | 50.00 | 400.00 | 200.00 | 200.00
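That running-total column can also be computed with a window frame instead of extending the original query. A sketch, assuming SQL Server 2012+ and using QueryResult to stand in for the existing result set:

select *
     , TwoPeriods = sum(AmountDue) over (order by DueDate
                                         rows between 1 preceding and current row)
from QueryResult;

The notice is then due for any row where DueToDate - PaidToDate >= TwoPeriods; in the sample above that first happens on the 20120601 row (400.00 - 200.00 >= 200.00).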
