I am trying to iterate over the rows of a DataFrame in Julia to generate a new column for the data frame. I haven't come across a clear example of how to do this. In R this type of thing is vectorized, but from my understanding not all of Julia's operations are vectorized, so I need to loop over the rows. I know I can do this with indexing, but I believe there must be a better way. I want to be able to reference the column values by name. Here is what I have:
test_df = DataFrame( A = [1,2,3,4,5], B = [2,3,4,5,6])
test_df["C"] = [ test_df[i,"A"] * test_df[i,"B"] for i in 1:size(test_df,1)]
Is this the Julia/DataFrames way of doing this? Is there a more Julia-eque way of doing this? Thanks for any feedback.
You'd be better off doing test_df[!, "A"] .* test_df[!, "B"] over the whole columns. In general, Julia uses a dot prefix (as in .*) to indicate operations that are elementwise. All of these element-wise operations are vectorized.
You also don't want to use an Array comprehension since you probably want a DataArray as your output. There are no DataArray comprehensions for now since comprehensions are built into the Julia parser, which makes them hard to override in libraries like DataArrays.jl.
The better, and already vectorized, way to do what you want in your example would be
test_df[!, "C"] = test_df["A"] .* test_df["B"]
Now, if for some reason you can't vectorize your operations and you really want to loop over rows (unlikely...), then you can do it as follows:
for row in eachrow(test_df)
    # do something with row, which is of type DataFrameRow
end
If you need the row index, do
for (i, row) in enumerate(eachrow(test_df))
    # do something with row and i
end
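For completeness, here is a minimal self-contained sketch combining both styles (assuming a recent DataFrames.jl; the extra row-wise column D is just for illustration):

using DataFrames

test_df = DataFrame(A = [1, 2, 3, 4, 5], B = [2, 3, 4, 5, 6])

# Vectorized: broadcast * across whole columns
test_df[!, "C"] = test_df[!, "A"] .* test_df[!, "B"]

# Row-wise equivalent: fill a preallocated vector, referencing columns by name
D = Vector{Int}(undef, nrow(test_df))
for (i, row) in enumerate(eachrow(test_df))
    D[i] = row.A * row.B
end
test_df[!, "D"] = D

In practice the broadcast version is both shorter and faster; the loop is only worth it when the per-row work can't be expressed elementwise.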
I have a lookup table in the form of a 2d array and a list of indices (in the form of two 1d arrays xs, ys) at which I would like to evaluate the lookup table. How to accomplish this in a fast manner?
It looks like a standard problem, but I found nothing in the docs about looking up array values at a general list of indices (i.e., not a Cartesian product). I tried
result = zeros((10^6,))
for i in [1:10^6]
    x = xs[i]
    y = ys[i]
    result[i] = lookup[x, y]
end
Besides looking a bit cumbersome, this code is also 10 times slower than the equivalent numpy code.
So what would be a fast alternative to the above code?
You can try broadcast_getindex (see http://julia.readthedocs.org/en/latest/stdlib/arrays/#Base.broadcast_getindex).
Otherwise, it looks like your code should be pretty efficient if you just change [1:10^6] to 1:10^6 and make sure the loop runs inside a function rather than at global scope.
Update: here is the current link for Base.getindex: https://docs.julialang.org/en/v1/base/collections/#Base.getindex. The broadcasted implementation is now just getindex broadcast with dot syntax.
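To make that concrete, here is a sketch with made-up sizes (assuming Julia 1.x, where broadcast_getindex has been superseded by broadcasting getindex directly):

lookup = rand(100, 100)      # hypothetical 2-d lookup table
n = 10^6
xs = rand(1:100, n)          # hypothetical index vectors
ys = rand(1:100, n)

# Loop version: wrapping it in a function keeps the loop out of global scope
function lookup_all(lookup, xs, ys)
    result = zeros(length(xs))
    for i in eachindex(xs)
        result[i] = lookup[xs[i], ys[i]]
    end
    return result
end

result = lookup_all(lookup, xs, ys)

# Broadcast version: Ref keeps lookup itself from being iterated over
result2 = getindex.(Ref(lookup), xs, ys)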
I have a numpy array with shape (N, 2) and N > 10000. In the first column I have e.g. 6 class values (e.g. 0.0, 0.2, 0.4, 0.6, 0.8, 1.0); in the second column I have float values. Now I want to calculate the average of the second column for each class in the first column, resulting in 6 averages, one per class.
Is there a numpy way to do this, to avoid manual loops especially if N is very large?
In pure numpy you would do something like:
import numpy as np

unq, idx, cnt = np.unique(arr[:, 0], return_inverse=True,
                          return_counts=True)
avg = np.bincount(idx, weights=arr[:, 1]) / cnt
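As a quick self-contained check of this approach, with made-up toy data (six class values, as in the question):

import numpy as np

rng = np.random.default_rng(0)
classes = rng.choice([0.0, 0.2, 0.4, 0.6, 0.8, 1.0], size=20000)
values = rng.normal(size=20000)
arr = np.column_stack([classes, values])

# Per-class averages: per-class sums divided by per-class counts
unq, idx, cnt = np.unique(arr[:, 0], return_inverse=True, return_counts=True)
avg = np.bincount(idx, weights=arr[:, 1]) / cnt
print(dict(zip(unq, avg)))   # one average per class value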
I copied Warren's answer here, since it solves my problem best and I want to mark the question as solved:
This is a "groupby/aggregation" operation. The question is this close
to being a duplicate of
getting median of particular rows of array based on index.
... You could also use scipy.ndimage.labeled_comprehension as
suggested there, but you would have to convert the first column to
integers (e.g. idx = (5*data[:, 0]).astype(int)
I did exactly this.
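For reference, a sketch of that labeled_comprehension route (reusing arr from the sketch above; the 5* scaling assumes the classes are spaced 0.2 apart, as in the question):

import numpy as np
from scipy import ndimage

idx = (5 * arr[:, 0]).astype(int)   # map 0.0, 0.2, ..., 1.0 to 0..5
labels = np.unique(idx)
avg = ndimage.labeled_comprehension(arr[:, 1], idx, labels, np.mean, float, np.nan)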
I have a cell array. Each cell contains a vector of variable length. For example:
example_cell_array=cellfun(@(x)x.*rand([length(x),1]),cellfun(@(x)ones(x,1), num2cell(ceil(10.*rand([7,4]))), 'UniformOutput', false), 'UniformOutput', false)
I need to concatenate the contents of the cells down one dimension, then perform an operation on each concatenated vector, generating a scalar for each column in my cell array (like sum(), for example; the actual operation is complex, time-consuming, and not naturally vectorisable, especially for different-length vectors).
I can do this with loops easily (for my concatenated vector sum example) as follows:
[M N]=size(example_cell_array);
result=zeros(1,N);
cat_cell_array=cell(1,N);
for n=1:N
    cat_cell_array{n}=[];
    for m=1:M
        cat_cell_array{n}=[cat_cell_array{n};example_cell_array{m,n}];
    end
end
result=cell2mat(cellfun(@(x)sum(x), cat_cell_array, 'UniformOutput', false))
Unfortunately this is WAY too slow. (My cell array is 1Mx5 with vectors in each cell ranging in length from 100-200)
Is there a simple way to produce the concatenated cell array where the vectors contained in the cells have been concatenated down one dimension?
Something like:
dim=1;
cat_cell_array = ?concatcells?(dim, example_cell_array);
Edit:
Since so many people have been testing the solutions: Just FYI, the function I'm applying to each concatenated vector is circ_kappa(x) available from Circular Statistics Toolbox
Some approaches might suggest unpacking the numeric data from example_cell_array using {..}, concatenating, and then packing it back into bigger cells to form your cat_cell_array. Then you would again need to unpack the numeric data from that concatenated cell array to perform your operation on each cell.
Now, in my view, this repeated unpacking and packing won't be efficient if cat_cell_array isn't one of your intended outputs. So, considering all this, let me suggest two approaches here.
Loopy approach
The first one is a for-loop code -
data1 = vertcat(example_cell_array{:}); %// extract all numeric data for once
starts = [1 sum(cellfun('length',example_cell_array),1)]; %// intervals lengths
idx = cumsum(starts); %// get indices to work on intervals basis
result = zeros(1,size(example_cell_array,2));
%// replace this with "result(size(example_cell_array,2))=0;" for performance
for k1 = 1:numel(idx)-1
    result(k1) = sum(data1(idx(k1):idx(k1+1)-1));
end
So, you need to replace sum with your actual operation.
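For instance, with the circ_kappa function mentioned in the edit (assuming the Circular Statistics Toolbox is on your path), the loop body would simply become:

for k1 = 1:numel(idx)-1
    result(k1) = circ_kappa(data1(idx(k1):idx(k1+1)-1));
end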
Almost-vectorized approach
If example_cell_array has a lot of columns, my second suggestion would be an almost-vectorized approach, though it doesn't perform badly with a small number of columns either. This code uses cellfun at the first line to get the length of each column in the concatenated version. cellfun is basically a wrapper around a loop, but it is not very expensive in terms of runtime, which is why I categorized this approach as almost vectorized.
The code would be -
lens = sum(cellfun('length',example_cell_array),1); %// intervals lengths
maxlens = max(lens);
numlens = numel(lens);
array1(maxlens,numlens)=0;
array1(bsxfun(@ge,lens,[1:maxlens]')) = vertcat(example_cell_array{:});
result = sum(array1,1);
What you need to do now is make your operation run column-wise on array1, using the mask created by the bsxfun implementation. Thus, if array1 is an M x 5 sized array, you need to select the valid elements from each column using the mask and then perform the operation on those elements. Let me know if you need more info on the masking issue.
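In case it helps, here is a sketch of that masking step, reusing lens, maxlens, numlens and array1 from the code above (sum again stands in for the real operation):

mask = bsxfun(@ge, lens, (1:maxlens)');   %// true where array1 holds real data
result = zeros(1, numlens);
for k = 1:numlens
    col = array1(mask(:,k), k);           %// valid elements of column k only
    result(k) = sum(col);                 %// swap in the actual operation here
end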
Hope one of these approaches would work for you!
Quick Tests: Using a 250000x5 sized example_cell_array, quick tests show that both these approaches for the sum operation perform very well and give about 400x speedup over the code in the question at my end.
For the concatenation itself, it sounds like you might want the functional form of cat:
for n=1:N
    cat_cell_array{n} = cat(1, example_cell_array{:,n});
end
This will concatenate all the arrays in the cells in each column in the original input array.
You can define a function like this:
cellcat = @(C) arrayfun(@(k) cat(1, C{:, k}), 1:size(C,2), 'uni', 0);
And then just use
>> cellcat(example_cell_array)
ans =
[42x1 double] [53x1 double] [51x1 double] [47x1 double]
I think you are looking to generate cat_cell_array without using for loops. If so, you can do it as follows:
cat_cell_array=cellfun(@(x) cell2mat(x),num2cell(example_cell_array,1),'UniformOutput',false);
As far as I can tell, the above line can replace your entire for loop. Then you can calculate your complex function over this cat_cell_array.
If only result is important to you and you do not want to store cat_cell_array, then you can do everything in a single line (not recommended for readability):
result=cell2mat(cellfun(@(x)sum(x), cellfun(@(x) cell2mat(x),num2cell(example_cell_array,1),'Uni',false), 'Uni', false));
I know this is a basic question but for some strange reason I am unable to find an answer.
How should I apply basic statistical functions like mean, median, etc. over an entire array, matrix, or data frame to get a single answer, rather than a vector over rows or columns?
Since this comes up a fair bit, I'm going to treat this a little more comprehensively, to include the 'etc.' piece in addition to mean and median.
For a matrix, or array, as the others have stated, mean and median will return a single value. However, var will compute the covariances between the columns of a two dimensional matrix. Interestingly, for a multi-dimensional array, var goes back to returning a single value. sd on a 2-d matrix will work, but is deprecated, returning the standard deviation of the columns. Even better, mad returns a single value on a 2-d matrix and a multi-dimensional array. If you want a single value returned, the safest route is to coerce using as.vector() first. Having fun yet?
For a data.frame, mean is deprecated, but will again act on the columns separately. median requires that you coerce to a vector first, or unlist. As before, var will return the covariances, and sd is again deprecated but will return the standard deviation of the columns. mad requires that you coerce to a vector or unlist. In general for a data.frame if you want something to act on all values, you generally will just unlist it first.
Edit: Late-breaking news: in R 3.0.0, mean.data.frame is defunctified:

o mean() for data frames and sd() for data frames and matrices are
  defunct.
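To make the above concrete, a short sketch of the coercions (the deprecated cases are version-dependent; this assumes a reasonably recent R):

m <- matrix(runif(20), nrow = 4)
mean(m)               # single value
var(m)                # covariance matrix of the columns, not a single value
mad(as.vector(m))     # coerce first when you want one number for sure

df <- data.frame(a = 1:3, b = 4:6)
median(unlist(df))    # single value across all cells
sapply(df, mean)      # per-column means, in place of the defunct mean(df)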
By default, mean and median etc. work over an entire array or matrix.
E.g.:
# array:
m <- array(runif(100),dim=c(10,10))
mean(m) # returns *one* value.
# matrix:
mean(as.matrix(m)) # same as before
For data frames, you can coerce them to a matrix first (the reason this is by default over columns is that a data frame can have columns with strings in it, which you can't take the mean of):
# data frame
mdf <- as.data.frame(m)
# mean(mdf) returns column means
mean( as.matrix(mdf) ) # one value.
Just be careful that your dataframe has all numeric columns before coercing to matrix. Or exclude the non-numeric ones.
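For example, one way to drop the non-numeric columns first (mdf2 is a made-up mixed-type frame):

mdf2 <- data.frame(x = 1:5, y = runif(5), label = letters[1:5])
num_cols <- sapply(mdf2, is.numeric)   # TRUE for x and y, FALSE for label
mean(as.matrix(mdf2[, num_cols]))      # one value over all numeric cells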
You can install dplyr via install.packages('dplyr'), load it with library(dplyr), and then:
dataframe.mean <- dataframe %>%
  summarise_all(mean)   # replace mean with median as needed
Suppose I have an array of three dimensions:
set.seed(1)
foo <- array(rnorm(250),dim=c(5,10,5))
And I want to create a matrix of each row and layer summed over columns 4, 5 and 6. I can do this like this:
apply(foo[,4:6,],c(1,3),sum)
But this splits the array per row and layer and is pretty slow since it is not vectorized. I could also just add the slices:
foo[,4,]+foo[,5,]+foo[,6,]
Which is faster but gets a bit tedious to do manually for multiple slices. Is there a function that does the above expression without manually specifying each slice?
I think you are looking for rowSums / colSums (fast implementations of apply)
colSums(aperm(foo[,4:6,], c(2,1,3)))
> all.equal(colSums(aperm(foo[,4:6,], c(2,1,3))), foo[,4,]+foo[,5,]+foo[,6,])
[1] TRUE
How about this:
eval(parse(text=paste(sprintf('foo[,%i,]',4:6),collapse='+')))
I am aware that there are reasons to avoid parse, but I am not sure how to avoid it in this case.
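One way to avoid parse here is to put the slices in a list and fold them together with Reduce, which keeps the slice-addition idea but works for any set of columns:

# Same result as foo[,4,] + foo[,5,] + foo[,6,], without eval(parse(...))
Reduce(`+`, lapply(4:6, function(j) foo[, j, ]))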