Obtaining data from an array into a dataframe

So I have 2 datasets. The first one is a data frame:
df1 <- data.frame(user=c(1:10), h01=c(3,3,6,8,9,10,4,1,2,5), h12=c(5,5,3,4,1,2,8,8,9,10), a=numeric(10))
The first column represents the user id, h01 represents the id of the cell phone antenna the user is connected to during a period of time (00:00 - 1:00AM), and h12 represents the same but between 1:00AM and 2:00AM.
And then I have an array:
array1 <- array(c(23,12,63,11,5,6,9,41,23,73,26,83,41,51,29,10,1,5,30,2), dim=c(10,2))
The rows represent the cell phone antenna id, the columns represent the time periods, and the values in array1 represent how many people are connected to the antenna during that period. So array1[1,1] gives how many people are connected to antenna 1 between 00:00 and 1:00, array1[2,2] gives how many people are connected to antenna 2 between 1:00 and 2:00, and so on.
What I want to do is, for each user in df1, obtain from array1 how many people in total are connected to the same antennas in the same time periods, and place that value in column a.
For example, the first user is connected to antenna 3 between 00:00 and 1:00AM and to antenna 5 between 1:00AM and 2:00AM, so the value in a should be array1[3,1] plus array1[5,2].
I used a for loop to do this:
aux1 <- df1[,2]
aux2 <- df1[,3]
for(i in 1:length(df1$user)){
df1[i,4] <- sum(array1[aux1[i],1],array1[aux2[i],2])
}
which gives
user h01 h12 a
1 1 3 5 92
2 2 3 5 92
3 3 6 3 47
4 4 8 4 92
5 5 9 1 49
6 6 10 2 156
7 7 4 8 16
8 8 1 8 28
9 9 2 9 42
10 10 5 10 7
This loop works and gives the correct values. The problem is that the 2 datasets (df1 and array1) are really big: df1 has over 20,000 users and 24 time periods, and array1 has over 1300 antennas, not to mention that this data corresponds to users from one socioeconomic level, and I have 5 in total, so simplifying the code is mandatory.
I would love it if someone could show me a different approach to this, especially if it's without a for loop.

Try this approach, which replaces the loop with vectorized indexing: subsetting array1 with the whole h01 and h12 columns looks up all users' counts at once, and the two resulting vectors are added elementwise:
df1$a <- array1[df1$h01,1] + array1[df1$h12,2]
The same idea extends to more time periods by adding one indexed column per period.

Related

Sum values across any 365-day period

I've got a dataset that has id, start date and a claim value (in dollars) in each row - most ids have more than one row - some span over 50 rows. The earliest date for each ID/claim varies, and the claim values are mostly different.
I'd like to do a rolling sum of the value of IDs that have claims within 365 days of each other, to report each ID that has claims that have exceeded a limiting value across each period. So for an ID that had a claim date on 1 January, I'd sum all claims to 31 December (inclusive). Most IDs have several years of data so for the example above, I'd also need to check that if they had a claim on 1 May that they hadn't exceeded the limit by 30 April the following year and so on. I normally see this referred to as a 'rolling sum'. My site has many SAS products including base, stat, ets, and others.
I'm currently testing code on a small mock dataset, and so far I've converted a thin file to a fat file with one column for each claim value and each date of the claim. The mock dataset is similar to the client dataset that I'll be using. Here's what I've done so far (noting that the mock data uses days rather than dates - I'm not at the stage where I want to test on real data yet).
data original_data;
input ppt $1. day claim;
datalines;
a 1 7
a 2 12
a 4 12
a 6 18
a 7 11
a 8 10
a 9 14
a 10 17
b 1 27
b 2 12
b 3 14
b 4 12
b 6 18
b 7 11
b 8 10
b 9 14
b 10 17
c 4 2
c 6 4
c 8 8
;
run;
proc sql;
create table ppt_counts as
select ppt, count(*) as ppts
from work.original_data
group by ppt;
select cats('value_', max(ppts) ) into :cats
from work.ppt_counts;
select cats('dates_',max(ppts)) into :cnts
from work.ppt_counts;
quit;
%put &cats;
%put &cnts;
data flipped;
set original_data;
by ppt;
array vars(*) value_1 - &cats.;
array dates(*) dates_1 - &cnts.;
array m_vars value_1 - &cats.;
array m_dates dates_1 - &cnts.;
if first.ppt then do;
i=1;
do over m_vars;
m_vars="";
end;
do over m_dates;
m_dates="";
end;
end;
if first.ppt then do;
i=1;
end;
vars(i)=claim;
dates(i)=day;
if last.ppt then output;
i+1;
retain value_1 - &cats. dates_1 - &cnts. 0;
run;
data output;
set work.flipped;
max_date =max(of dates_1 - &cnts.);
max_value =max(of value_1 - &cats.);
run;
This doesn't give me even close to what I need - I'm not sure how to structure the code to make this correct.
What I need to end up with is one row per time that an ID exceeds the yearly limit of claim value (say, in the mock data, if a claim total exceeds 75 across a seven-day period), including the sum of the claims. So it's likely that there may be multiple lines per ID, and the claims from one row may also be included in the claims for the same ID on another row.
type of output:
ID sum of claims
a $85
a $90
b $80
On separate rows.
Any help appreciated.
Thanks
If you need to perform a rolling sum, you can do this with proc expand. The code below will perform a rolling sum over 5 days for each group. First, expand your data to fill in any gaps in the days:
proc expand data = original_data
out = original_data_expanded
from = day;
by ppt;
id day;
convert claim / method=none;
run;
Any days with gaps will have a missing value of claim. Now we can calculate a moving sum over the expanded data and ignore those missing days when performing the moving sum:
proc expand data = original_data_expanded
out = want(where=(NOT missing(claim)));
by ppt;
id day;
convert claim = rolling_sum / transform=(movsum 5) method=none;
run;
Output:
ppt day rolling_sum claim
a 1 7 7
a 2 19 12
a 4 31 12
a 6 42 18
a 7 41 11
...
b 9 53 14
b 10 70 17
c 4 2 2
c 6 6 4
c 8 14 8
The reason we use two proc expand steps is that within a single step the rolling sum is calculated before the days are expanded, and we need the rolling sum to occur after the expansion. You can test this by running the above code all in a single step:
/* Performs moving sum, then expands */
proc expand data = original_data
out = test
from = day;
by ppt;
id day;
convert claim = rolling_sum / transform=(movsum 5) method=none;
run;
Use a SQL self join with the dates being within 365 days of itself. This is time/resource intensive if you have a very large data set.
Assuming you have a date variable, intnx is probably a better way to calculate the date interval than adding 365, depending on how you want to account for leap years.
If you have a claim id to group on, that would also be better than using the group by clause in this example.
data have;
input ppt $1. day claim;
datalines;
a 1 7
a 2 12
a 4 12
a 6 18
a 7 11
a 8 10
a 9 14
a 10 17
b 1 27
b 2 12
b 3 14
b 4 12
b 6 18
b 7 11
b 8 10
b 9 14
b 10 17
c 4 2
c 6 4
c 8 8
;
run;
proc sql;
create table want as
select a.*, sum(b.claim) as total_claim
from have as a
left join have as b
on a.ppt=b.ppt and
b.day between a.day and a.day+365
/* alternatively: b.day between a.day and intnx('year', a.day, 1, 's') */
group by 1, 2, 3;
quit;
Assuming that you have only one claim per day, you could just use a circular array to keep track of the previous N days of claims and generate the rolling sum. By circular array I mean one where the indexes wrap around back to the beginning when you increment past the end. You can use the MOD() function to convert any integer into an index into the array.
Then to get the running sum just add all of the elements in the array.
Add an extra DO loop to zero out the days skipped when there are days with no claims.
%let N=5;
data want;
set original_data;
by ppt ;
array claims[0:%eval(&n-1)] _temporary_;
lagday=lag(day);
if first.ppt then call missing(of lagday claims[*]);
do index=max(sum(lagday,1),day-&n+1) to day-1;
claims[mod(index,&n)]=0;
end;
claims[mod(day,&n)]=claim;
running_sum=sum(of claims[*]);
drop index lagday ;
run;
Results:
OBS ppt day claim running_sum
1 a 1 7 7
2 a 2 12 19
3 a 4 12 31
4 a 6 18 42
5 a 7 11 41
6 a 8 10 51
7 a 9 14 53
8 a 10 17 70
9 b 1 27 27
10 b 2 12 39
11 b 3 14 53
12 b 4 12 65
13 b 6 18 56
14 b 7 11 55
15 b 8 10 51
16 b 9 14 53
17 b 10 17 70
18 c 4 2 2
19 c 6 4 6
20 c 8 8 14
Working in a known domain of date integers, you can use a single large array to store the claims at each date and slice out the 365 days to be summed. This avoids the bookkeeping needed for the modular approach.
Example:
data have;
call streaminit(20230202);
do id = 1 to 10;
do date = '01jan2012'd to '02feb2023'd;
date + rand('integer', 25);
claim = rand('integer', 5, 100);
output;
end;
end;
format date yymmdd10.;
run;
options fullstimer;
data want;
set have;
by id;
array claims(100000) _temporary_;
array slice (365) _temporary_;
if first.id then call missing(of claims(*));
claims(date) = claim;
call pokelong(
peekclong(addrlong(claims(date-365)), 8*365),
addrlong(slice(1))
);
rolling_sum_365 = sum(of slice(*));
if dif1(claim) < 365 then
claims_out_365 = lag(claim) - dif1(rolling_sum_365);
if first.id then claims_out_365 = .;
run;
Note: SAS Date 100,000 is 16OCT2233

How to convert data frame column values into an array without a loop

I have a data frame like this:
df = pd.DataFrame({'A': [10,10,11,14], 'B':[2,3,3,5]})
It looks like this:
A B
0 10 2
1 10 3
2 11 3
3 14 5
I want to convert to this, with A as the row index, and store B's values inside the array or matrix:
10 2 3
11 3
14 5
Is there a Python way of doing this without looping over each row in the data frame df?
Many thanks.
Use groupby:
df.groupby('A')
Then you can (for instance) get the mean of the grouped version by:
df.groupby('A').mean()
which results in:
B
A
10 2.5
11 3.0
14 5.0
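
If the goal is to actually collect B's values per group rather than aggregate them, the same groupby can feed the list constructor. A minimal sketch (apply(list) here; .agg(list) is an equivalent spelling in recent pandas):
import pandas as pd

df = pd.DataFrame({'A': [10, 10, 11, 14], 'B': [2, 3, 3, 5]})

# One entry per distinct A; each entry is the list of B values in that group.
b_per_a = df.groupby('A')['B'].apply(list)
print(b_per_a)
# A
# 10    [2, 3]
# 11       [3]
# 14       [5]
# Name: B, dtype: object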

Need some simple logic help, been stuck for a few hours

The problem asks to take any amount of numbers and find the highest possible sum of differences (using absolute value) between consecutive numbers. For example, the numbers 1, 2 and 3 would be arranged 3 1 2 to get a sum of 3 (|3-1| = 2, and |1-2| = 1).
Now my first thought was to take the highest number in the list followed by the lowest number and arrange them that way through to the end, but that doesn't work out, as the end of the list winds up with all of the middle numbers accumulating almost no difference. The only other thing I have thought of is to try every single possible order and return the highest sum, but with a longer list this will take way too long, and I assume there must be a better way.
For reference here are some sample input and output numbers
9 2 5 3 1 -> 21
7 3 4 5 5 7 6 8 5 4 -> 24
Any help at all would be much appreciated, even if it's just pointing me in the right direction.
There are 2 approaches to this problem.
Approach 1:
Brute force.
Approach 2:
Figure out an algorithm for how to arrange the numbers.
I always like approach 2 better if it is feasible.
It seems reasonable that you would get a high sum if you order the numbers high-low-high-low-high...
So start by sorting the numbers and then divide them into two equally large groups of low and high numbers. If there is an odd number of numbers the middle number will be left over.
Then you just pick numbers alternately from the two groups.
It is easy to prove that the order of the interior numbers doesn't matter as long as you stick with the high-low-high-low ordering.
However, since the start and end numbers only have one neighbour each, the first and last positions should hold the middle numbers.
Finally, if you have an odd number of numbers, place the leftover number at the start or the end, whichever gives the bigger difference (a code sketch follows the examples below).
Example:
7 3 4 5 5 7 6 8 5 4 -> [sort] -> 3 4 4 5 5 5 6 7 7 8
high numbers: 5 6 7 7 8
low numbers: 3 4 4 5 5
Arranged:
5 3 6 4 7 4 7 5 8 5 = 24
Example:
9 2 5 3 1 -> [sort] -> 1 2 3 5 9
high numbers: 5 9
low numbers: 1 2
left over: 3
Arranged:
3 5 1 9 2 = 21 (3 goes at the start, because |3-5| > |3-2|)
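A minimal Python sketch of this strategy (the function name and construction details are one reading of the recipe above, not code from the thread):
def max_adjacent_diff_sum(nums):
    # Sort, then split into a low half and a high half; with an odd
    # count the median is left over.
    s = sorted(nums)
    if len(s) < 2:
        return s, 0
    n, half = len(s), len(s) // 2
    low, high = s[:half], s[n - half:]
    # The largest low and the smallest high take the end positions,
    # where a number only has one neighbour; interior order is free.
    order = [low[-1]]
    for i in range(half - 1):
        order += [high[i + 1], low[i]]
    order.append(high[0])
    if n % 2:  # odd count: the median goes to whichever end differs more
        mid = s[half]
        if abs(mid - order[0]) >= abs(mid - order[-1]):
            order.insert(0, mid)
        else:
            order.append(mid)
    total = sum(abs(a - b) for a, b in zip(order, order[1:]))
    return order, total

print(max_adjacent_diff_sum([9, 2, 5, 3, 1]))                # sum is 21
print(max_adjacent_diff_sum([7, 3, 4, 5, 5, 7, 6, 8, 5, 4])) # sum is 24
The arrangements it prints can differ from the ones worked out above, but the totals match, since the interior order doesn't matter.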

Adding and multiplying tables' data by values in another table

Say I have a table of subtractions and divisions sorted by date:
tblFactors
dt sub divide
2014-07-01 1 1
2014-06-01 0 5
2014-05-01 2 1
2014-04-01 0 3
I have another table of values, sorted by date:
tblValues
dt val
2014-07-05 4
2014-06-15 5
2014-05-15 21
2014-04-14 31
2014-03-15 71
I need to perform some sequential calculations. For the first row in tblFactors, I need to subtract 1 from every val where tblValues.dt < '2014-07-01'.
Next, I need to process the second row in tblFactors. There is nothing to subtract. However, the divide = 5 means that I need to divide every val by 5 where tblValues.dt < '2014-06-01'. The tricky thing is that I need to do this on the modified val from the row before (divide 20 / 5, not 21 / 5).
Each row in tblFactors would process in this manner, giving a sequence like this:
Dt          Original Val   Row 1 (Subtract 1)   Row 2 (Divide by 5)   Row 3 (Subtract 2)   Row 4 (Divide by 3)
7/5/2014    4
6/15/2014   5              4
5/15/2014   21             20                   4
4/14/2014   31             30                   6                     4
3/15/2014   71             70                   14                    12                   4
This would leave me with:
qryValues
dt val
2014-07-05 4
2014-06-15 4
2014-05-15 4
2014-04-14 4
2014-03-15 4
Right now I'm doing vector multiplications over loops in R. I was wondering if there is a clever way to accomplish this in native SQL. I tried doing some aggregations, but I've had limited success.
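One way to see the structure: each factor row is an affine step x -> (x - sub) / divide, and because the factors are sorted by date descending, the steps that apply to a given value are always a prefix of the factor list, so each prefix can be pre-composed into a single multiplier and offset. A sketch of that idea in Python/pandas (the data mirrors the tables above; an illustration of the composition trick, not the asker's R code or a SQL answer):
import numpy as np
import pandas as pd

factors = pd.DataFrame({
    'dt':     pd.to_datetime(['2014-07-01', '2014-06-01', '2014-05-01', '2014-04-01']),
    'sub':    [1, 0, 2, 0],
    'divide': [1, 5, 1, 3],
})
values = pd.DataFrame({
    'dt':  pd.to_datetime(['2014-07-05', '2014-06-15', '2014-05-15', '2014-04-14', '2014-03-15']),
    'val': [4.0, 5.0, 21.0, 31.0, 71.0],
})

# Pre-compose the affine steps: after the first k factor rows,
# a value x becomes m[k] * x + b[k].
m, b = [1.0], [0.0]
for f in factors.itertuples():
    m.append(m[-1] / f.divide)
    b.append((b[-1] - f.sub) / f.divide)
m, b = np.array(m), np.array(b)

# A factor applies to values dated strictly before it, so k is just a
# count of factor dates later than each value's date.
k = (values['dt'].values[:, None] < factors['dt'].values[None, :]).sum(axis=1)
values['val'] = m[k] * values['val'] + b[k]
print(values)  # every val comes out as 4.0
In SQL the same prefix trick could be expressed by joining each value to a pre-computed table of cumulative multipliers and offsets per factor date, avoiding any row-by-row processing.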

Assigning a single value to all cells within a specified time period, matrix format

I have the following example dataset, which consists of the # of fish caught per check of a net. The nets are not checked at uniform intervals. The day of the check is denoted in Julian days, as well as the number of days the net had been fishing since last checked (or since its deployment, in the case of the first check):
http://textuploader.com/9ybp
Site_Number Check_Day_Julian Set_Duration_Days Fish_Caught
2 5 3 100
2 10 5 70
2 12 2 65
2 15 3 22
100 4 3 45
100 10 6 20
100 18 8 8
450 10 10 10
450 14 4 4
In any case, I would like to turn the raw data above into the following format:
http://textuploader.com/9y3t
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
2 0 0 100 100 100 70 70 70 70 70 65 65 22 22 22 0 0 0
100 0 45 45 45 20 20 20 20 20 20 8 8 8 8 8 8 8 8
450 10 10 10 10 10 10 10 10 10 10 4 4 4 4 0 0 0 0
This is a matrix which assigns the # of fish caught during the period to EACH of the days that were within that period. The columns of the matrix are Julian days, the rows are site numbers.
I have tried to do this with some matrix functions, but I have had much difficulty trying to populate all the fields that are within the time period but that I do not necessarily have a row of data for.
I had posted my small bit of code here, but upon reflection, my approach is quite archaic and a bit off point. Can anyone suggest a method to convert the data into the matrix provided? I've been scratching my head and Googling all day but now I am stumped.
Cheers,
C
Two answers, the second one is faster but a bit low level.
Solution #1:
library(IRanges)
with(d, {
ir <- IRanges(end=Check_Day_Julian, width=Set_Duration_Days)
cov <- coverage(split(ir, Site_Number),
weight=split(Fish_Caught, Site_Number),
width=max(end(ir)))
do.call(rbind, lapply(cov, as.vector))
})
Solution #2:
with(d, {
ir <- IRanges(end=Check_Day_Julian, width=Set_Duration_Days)
site <- factor(Site_Number, unique(Site_Number))
m <- matrix(0, length(levels(site)), max(end(ir)))
ind <- cbind(rep(site, width(ir)), as.integer(ir))
m[ind] <- rep(Fish_Caught, width(ir))
m
})
I don't see a super obvious matrix transformation here. This is all I've got, assuming the raw data is in a data.frame called dd:
dd$Site_Number<-factor(dd$Site_Number)
mm<-matrix(0, nrow=nlevels(dd$Site_Number), ncol=18)
for(i in 1:nrow(dd)) {
# fill the set-duration window ending on the check day
mm[as.numeric(dd[i,1]), (dd[i,2]-dd[i,3]+1):dd[i,2] ] <- dd[i,4]
}
mm
