I've got a dataset that has an id, a start date, and a claim value (in dollars) in each row. Most ids have more than one row, and some span over 50 rows. The earliest date for each ID/claim varies, and the claim values are mostly different.
I'd like to do a rolling sum of the claim values for IDs that have claims within 365 days of each other, to report each ID whose claims exceed a limiting value across any such period. So for an ID that had a claim date on 1 January, I'd sum all claims to 31 December (inclusive). Most IDs have several years of data, so for the example above I'd also need to check that, if they had a claim on 1 May, they hadn't exceeded the limit by 30 April the following year, and so on. I normally see this referred to as a 'rolling sum'. My site has many SAS products including Base, STAT, ETS, and others.
I'm currently testing code on a small mock dataset, and so far I've converted a thin file to a fat file with one column for each claim value and one for each claim date. The mock dataset is similar to the client dataset that I'll be using. Here's what I've done so far (noting that the mock data uses day numbers rather than dates - I'm not at the stage where I want to test on real data yet).
data original_data;
input ppt $1. day claim;
datalines;
a 1 7
a 2 12
a 4 12
a 6 18
a 7 11
a 8 10
a 9 14
a 10 17
b 1 27
b 2 12
b 3 14
b 4 12
b 6 18
b 7 11
b 8 10
b 9 14
b 10 17
c 4 2
c 6 4
c 8 8
;
run;
proc sql;
create table ppt_counts as
select ppt, count(*) as ppts
from work.original_data
group by ppt;
select cats('value_', max(ppts) ) into :cats
from work.ppt_counts;
select cats('dates_',max(ppts)) into :cnts
from work.ppt_counts;
quit;
%put &cats;
%put &cnts;
data flipped;
set original_data;
by ppt;
array vars(*) value_1 - &cats.;
array dates(*) dates_1 - &cnts.;
retain value_1 - &cats. dates_1 - &cnts.;
if first.ppt then do;
    i = 1;
    /* clear the retained arrays for the new ppt */
    do j = 1 to dim(vars);
        vars(j) = .;
        dates(j) = .;
    end;
end;
vars(i) = claim;
dates(i) = day;
if last.ppt then output;
i + 1;
drop i j;
run;
data output;
set work.flipped;
max_date =max(of dates_1 - &cnts.);
max_value =max(of value_1 - &cats.);
run;
This doesn't give me even close to what I need - I'm not sure how to structure the code to make this correct.
What I need to end up with is one row for each time an ID exceeds the rolling limit on claim value (say, in the mock data, where claims exceed 75 across a seven-day period), including the sum of those claims. So there may be multiple rows per ID, and a claim counted in one row may also be counted in another row for the same ID.
type of output:
ID sum of claims
a $85
a $90
b $80
On separate rows.
Any help appreciated.
Thanks
If you need to perform a rolling sum, you can do this with proc expand. The code below will perform a rolling sum of 5 days for each group. First, expand your data to fill in any missing gaps:
proc expand data = original_data
out = original_data_expanded
from = day;
by ppt;
id day;
convert claim / method=none;
run;
Any days with gaps will have missing value of claim. Now we can calculate a moving sum and ignore those missing days when performing the moving sum:
proc expand data = original_data_expanded
out = want(where=(NOT missing(claim)));
by ppt;
id day;
convert claim = rolling_sum / transform=(movsum 5) method=none;
run;
Output:
ppt day rolling_sum claim
a 1 7 7
a 2 19 12
a 4 31 12
a 6 42 18
a 7 41 11
...
b 9 53 14
b 10 70 17
c 4 2 2
c 6 6 4
c 8 14 8
The reason we use two PROC EXPAND steps is that within a single step the rolling sum is calculated before the days are expanded, and we need the rolling sum to occur after the expansion. You can test this by running the code as a single step:
/* Performs moving sum, then expands */
proc expand data = original_data
out = test
from = day;
by ppt;
id day;
convert claim = rolling_sum / transform=(movsum 5) method=none;
run;
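To get from the rolling sum to the output the question asks for (one row per time an ID's rolling sum exceeds the limit), one could simply filter the two-step result above; a minimal sketch, assuming the limit of 75 from the question. Note that with the 5-day window used above no mock row reaches 75; a movsum 7 window would match the question's seven-day example.
proc sql;
create table exceeded as
select ppt, day, rolling_sum as sum_of_claims
from want
where rolling_sum > 75;  /* the limiting value from the question */
quit;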
Use a SQL self join with the dates being within 365 days of each other. This is time/resource intensive if you have a very large data set.
Assuming you have a date variable, the INTNX function is probably a better way to calculate the date interval than adding 365, depending on how you want to account for leap years.
If you have a claim ID to group on, that would also be better than the GROUP BY clause in this example.
data have;
input ppt $1. day claim;
datalines;
a 1 7
a 2 12
a 4 12
a 6 18
a 7 11
a 8 10
a 9 14
a 10 17
b 1 27
b 2 12
b 3 14
b 4 12
b 6 18
b 7 11
b 8 10
b 9 14
b 10 17
c 4 2
c 6 4
c 8 8
;
run;
proc sql;
create table want as
select a.*, sum(b.claim) as total_claim
from have as a
left join have as b
    on a.ppt = b.ppt
    and b.day between a.day and a.day + 365
    /* or: b.day between a.day and intnx('year', a.day, 1, 's') */
group by 1, 2, 3;
quit;
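Since the original question ultimately needs one row per time an ID goes over the limit, here is a minimal sketch extending the join above, assuming the mock data's seven-day window and the limit of 75 from the question; only the HAVING test is new:
proc sql;
create table exceeded as
select a.ppt, a.day, sum(b.claim) as total_claim
from have as a
left join have as b
    on a.ppt = b.ppt
    and b.day between a.day and a.day + 6  /* 7-day window in the mock data */
group by a.ppt, a.day
having sum(b.claim) > 75;                  /* the limiting value */
quit;
For the real data, swap the window condition for the 365-day (or INTNX-based) version.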
Assuming that you have only one claim per day, you could just use a circular array to keep track of the previous N days of claims and generate the rolling sum from it. By circular array I mean one where the indexes wrap around back to the beginning when you increment past the end; the MOD() function converts any integer into an index into the array.
Then to get the running sum, just add all of the elements in the array.
An extra DO loop zeroes out the days skipped when there are days with no claims.
%let N=5;
data want;
set original_data;
by ppt;
/* circular buffer holding the most recent &N days of claims */
array claims[0:%eval(&n-1)] _temporary_;
lagday=lag(day);
/* reset the buffer at the start of each PPT */
if first.ppt then call missing(of lagday claims[*]);
/* zero out any days skipped since the previous claim */
do index=max(sum(lagday,1),day-&n+1) to day-1;
claims[mod(index,&n)]=0;
end;
claims[mod(day,&n)]=claim;
running_sum=sum(of claims[*]);
drop index lagday;
run;
Results:
OBS ppt day claim running_sum
1 a 1 7 7
2 a 2 12 19
3 a 4 12 31
4 a 6 18 42
5 a 7 11 41
6 a 8 10 51
7 a 9 14 53
8 a 10 17 70
9 b 1 27 27
10 b 2 12 39
11 b 3 14 53
12 b 4 12 65
13 b 6 18 56
14 b 7 11 55
15 b 8 10 51
16 b 9 14 53
17 b 10 17 70
18 c 4 2 2
19 c 6 4 6
20 c 8 8 14
Working in a known domain of date integers, you can use a single large array to store the claim at each date and slice out the 365 days to be summed. The bookkeeping required by the modular approach is not needed.
Example:
data have;
call streaminit(20230202);
do id = 1 to 10;
do date = '01jan2012'd to '02feb2023'd;
date + rand('integer', 25);
claim = rand('integer', 5, 100);
output;
end;
end;
format date yymmdd10.;
run;
options fullstimer;
data want;
set have;
by id;
array claims(100000) _temporary_;  /* one slot per SAS date value */
array slice (365) _temporary_;     /* the 365 days being summed */
if first.id then call missing(of claims(*));
claims(date) = claim;
/* copy the 365 array elements for dates date-365 through date-1
   into slice (8 bytes per numeric element) */
call pokelong(
peekclong(
addrlong (claims(date-365))
, 8*365)
,
addrlong(slice(1))
);
rolling_sum_365 = sum(of slice(*));
/* amount that dropped out of the window since the previous claim */
if dif1(date) < 365 then
claims_out_365 = lag(claim) - dif1(rolling_sum_365);
if first.id then claims_out_365 = .;
run;
Note: SAS Date 100,000 is 16OCT2233
Serial Number   1      2      3      4      5      6      7      8      9      10     11     12     13
Parameter 1     22.95  23.46  22.71  23.41  23.36  23.18  23.52  23.35  22.86  22.47  22.63  23.72  23.22
Parameter 2     19.97  20.83  19.22  20.39  20.33  20.44  20.62  20.26  21.22  20.31  20.53  20.62  20.65

Serial Number   14     15     16     17     18     19     20     21     22     23     24     25     26
Parameter 1     23.17  22.80  23.18  23.15  23.12  23.16  23.22  23.58  23.68  23.33  23.52  23.54  23.48
Parameter 2     20.14  19.43  20.66  20.09  20.52  20.41  20.63  20.98  21.15  19.97  20.72  20.71  20.32
We have to divide this table into groups of 4 columns such that:
1. The max difference between the Parameter 1 values of the columns in a group is less than 1.
2. The max difference between the Parameter 2 values of the columns in a group is less than 0.5.
For example, columns 4, 5, 6 and 7 form a valid group.
I need an algorithm that will return the maximum number of groups that this table can be divided into, and will also return those groups.
In this case the maximum number of groups possible is 6, and the remaining 2 columns do not fall into any group. The algorithm should return these 6 groups.
I've looked for an example question like this; I ask for grace if it's been answered (I thought it would have been, but I had a hard time finding meaningful results with the terms I searched).
I work at a manufacturing plant where, at every manufacturing operation, a part is issued a new serial number. The database table I have to work with has the part's serial number recorded in the Container field and its previous serial number recorded in the From_Container field.
I'm trying to SUM the Extended_Cost column for parts we've had to re-do operations on.
Here's a sample of data from tbl_Container:
Container From_Container Extended_Cost Part_Key Operation
10 9 10 PN_100 60
9 8 10 PN_100 50
8 7 10 PN_100 40
7 6 10 PN_100 30
6 5 10 PN_100 20
5 4 10 PN_100 50
4 3 10 PN_100 40
3 2 10 PN_100 30
2 1 10 PN_100 20
1 100 10 PN_100 10
In this example the SUM I would expect returned is 40, because operations 20, 30, 40 and 50 were all re-done and cost $10 each.
So far I've been able to do this by rejoining the table to itself 10 times using aliases in the following fashion:
LEFT OUTER JOIN tbl_Container AS FCP_1
ON tbl_Container.From_Container = FCP_1.Container
AND FCP_1.Operation <= tbl_Container.Operation
AND tbl_Container.Part_Key = FCP_1.Part_Key
And then using SUM to add the Extended_Cost fields together. However, I'm violating the DRY principle and there has got to be a better way.
Thank you in advance for your help,
Me
You can try this query. It uses a recursive CTE to walk the container chain in order, numbering each step (I); any row whose Operation already appeared at an earlier step is a re-done operation, and those rows' Extended_Cost values are summed.
;WITH CTE AS
(
SELECT TOP 1 *, I = 0 FROM tbl_Container C ORDER BY Container
UNION ALL
SELECT T.*, I = I + 1 FROM CTE
INNER JOIN tbl_Container T
ON CTE.Container = T.From_Container
AND CTE.Part_Key = T.Part_Key
)
SELECT Part_Key, SUM(T1.Extended_Cost) Sum_Extended_Cost FROM CTE T1
WHERE
EXISTS( SELECT * FROM
CTE T2 WHERE
T1.Operation = T2.Operation
AND T1.I > T2.I )
GROUP BY Part_Key
Result:
Part_Key Sum_Extended_Cost
---------- -----------------
PN_100 40
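If the table ever holds more than one part's chain, a hedged variant is to anchor the recursion at every chain start rather than at TOP 1, where a chain start is assumed to be a row whose From_Container does not itself appear as a Container:
;WITH CTE AS
(
    /* anchor: the first container of each chain */
    SELECT C.*, I = 0
    FROM tbl_Container C
    WHERE NOT EXISTS (SELECT 1 FROM tbl_Container P
                      WHERE P.Container = C.From_Container)
    UNION ALL
    /* walk each chain forward, numbering the steps */
    SELECT T.*, I = CTE.I + 1
    FROM CTE
    INNER JOIN tbl_Container T
        ON CTE.Container = T.From_Container
        AND CTE.Part_Key = T.Part_Key
)
SELECT Part_Key, SUM(T1.Extended_Cost) AS Sum_Extended_Cost
FROM CTE T1
WHERE EXISTS (SELECT 1 FROM CTE T2
              WHERE T2.Part_Key = T1.Part_Key
                AND T2.Operation = T1.Operation
                AND T2.I < T1.I)
GROUP BY Part_Key;
On the sample data this returns the same 40 for PN_100, since containers 6 through 9 repeat operations 20 through 50.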
I have a dataset in the below format:
Date 1 Date 1 Date 1 Date 2 Date 2 Date 3 Date 3
Product 1 10 20 10 5 10 20 30
Product 2 5 5 10 10 10 5 30
Product 3 30 10 5 10 30 30 40
Product 4 5 10 10 20 5 10 20
and I am trying to sum the sales of the products by the date, to create the below:
Date 1 Date 2 Date 3
Product 1 40 15 50
Product 3 45 40 70
Product 4 25 25 30
Product 2 20 20 35
The products in the second table will often be in a different order, so a simple SUMIF will not suffice.
I've attempted a combination of SUM, INDEX and MATCH, as well as SUM with a nested IF function, but no amount of Googling or trial and error is getting me there. I keep bringing back the value in one cell, but not managing to sum.
With the following setup, I used the formula below in B11 and then copied it across and down to get what was wanted:
=SUMIF($B$1:$H$1,B$10,INDIRECT("$B" & MATCH($A11,$A$1:$A$5,0) & ":$H" & MATCH($A11,$A$1:$A$5,0)))
The MATCH finds the product's row in the original table, INDIRECT turns that row into a B:H range, and SUMIF then adds the cells in that range whose column header in B1:H1 matches the date header in row 10.
Say I have a table of subtractions and divisions sorted by date:
tblFactors
dt sub divide
2014-07-01 1 1
2014-06-01 0 5
2014-05-01 2 1
2014-04-01 0 3
I have another table of values, sorted by date:
tblValues
dt val
2014-07-05 4
2014-06-15 5
2014-05-15 21
2014-04-14 31
2014-03-15 71
I need to perform some sequential calculations. For the first row in tblFactors, I need to subtract 1 from every val where tblValues.dt < '2014-07-01'.
Next, I need to process the second row in tblFactors. There is nothing to subtract, but divide = 5 means that I need to divide every val by 5 where tblValues.dt < '2014-06-01'. The tricky thing is that this division operates on the already-modified val from the step before (20 / 5, not 21 / 5).
Each row in tblFactors would process in this manner, giving a sequence like this:

             Original   Row 1:       Row 2:        Row 3:       Row 4:
dt           val        subtract 1   divide by 5   subtract 2   divide by 3
2014-07-05   4
2014-06-15   5          4
2014-05-15   21         20           4
2014-04-14   31         30           6             4
2014-03-15   71         70           14            12           4
This would leave me with:
qryValues
dt val
2014-07-05 4
2014-06-15 4
2014-05-15 4
2014-04-14 4
2014-03-15 4
Right now I'm doing vector multiplications in loops in R. I was wondering if there is a clever way to accomplish this in native SQL. I tried doing some aggregations, but I've had limited success.
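For what it's worth, here is a minimal sketch of one way to express this with a recursive CTE (SQL Server syntax; other engines with recursive CTEs are similar). It applies the factor rows one at a time, from the most recent date backwards, carrying the modified val forward between steps. Table and column names come from the question; the row numbering, the FLOAT cast, and the assumption that the factor dates are distinct are mine.
;WITH f AS
(
    /* number the factor rows from the most recent date backwards */
    SELECT dt, sub, divide,
           ROW_NUMBER() OVER (ORDER BY dt DESC) AS rn
    FROM tblFactors
),
calc AS
(
    /* step 0: the unmodified values */
    SELECT v.dt, CAST(v.val AS FLOAT) AS val, 0 AS rn
    FROM tblValues v
    UNION ALL
    /* step rn: apply factor row rn to every value dated before it */
    SELECT c.dt,
           CASE WHEN c.dt < f.dt
                THEN (c.val - f.sub) / f.divide
                ELSE c.val
           END,
           f.rn
    FROM calc c
    INNER JOIN f ON f.rn = c.rn + 1
)
SELECT dt, val
FROM calc
WHERE rn = (SELECT MAX(rn) FROM f)
ORDER BY dt DESC;
With the sample data this returns val = 4 for every row, matching qryValues.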