I'm looking for a way to take an array in Ruby and two indices into that array, and return an enumerable object which will yield, in order, all the elements between and including the two indices. For performance reasons, I want to do this subject to the following two conditions:
The slice-to-enum must not create a copy of the subarray I want to return an enum over. This rules out array[i..j].to_enum, for example, because array[i..j] creates a new array.
It's not necessary to loop over the entire array to create the enum.
I'm wondering if there's a way to do this using the standard library's Enumerable or Array functionality without having to explicitly create my own custom enumerator.
What I'm looking for is a cleaner way to create the below enumerator:
def enum_slice(array, i, j)
  Enumerator.new do |y|
    while i <= j
      y << array[i] # this is confusing syntax for yield (see here: https://ruby-doc.org/core-2.6/Enumerator.html#method-c-new)
      i += 1
    end
  end
end
That seems pretty reasonable, and could even be turned into an extension to Array itself:
module EnumSlice
  def enum_slice(i, j)
    Enumerator.new do |y|
      while i <= j
        y << self[i]
        i += 1
      end
    end
  end
end
Within the Enumerator block, y is an Enumerator::Yielder you call whenever you have more data. If that block ends, it's presumed you're done enumerating. There's no requirement to ever terminate: an infinite Enumerator is allowed, and in that case it's up to the caller to stop iterating.
In other words, the y block argument can be called zero or more times, and each time it's called a value is "emitted" from the enumerator. When that block exits, the enumerator is considered done and is closed out; y is invalid at that point.
All y << x does is call the << method on Enumerator::Yielder, which is a bit of syntactic sugar to avoid having to write y.yield(x), which looks kind of ugly next to Ruby's yield keyword.
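As a quick illustration of the infinite case (a minimal sketch of my own, not part of the enum_slice code above):

naturals = Enumerator.new do |y|
  n = 0
  loop do
    y << n    # emit the next natural number forever
    n += 1
  end
end

p naturals.first(5)   # => [0, 1, 2, 3, 4]

Here the block never returns; it's first(5) that stops pulling values.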
Now you can add this to Array:
Array.include(EnumSlice)
Where now you can do stuff like this:
[ 1, 2, 3, 4, 5, 6 ].enum_slice(2, 4).each do |v|
  p v
end
Giving you the expected output: 3, 4, 5.
It's worth noting that despite having gone through all this work, this really doesn't save you any time. There are already built-in methods for this. Your enum_slice(a, i, j) method is roughly equivalent to:
a.drop(i).take(j)
Is that close in terms of performance? A quick benchmark can help test that theory:
require 'benchmark'

Benchmark.bm do |bm|
  count = 10000
  a = (0..100_000).to_a

  bm.report(:enum_slice) do
    count.times do
      a.enum_slice(50_000, 25_000).each do
      end
    end
  end

  bm.report(:drop_take) do
    count.times do
      a.drop(50_000).take(25_000).each do
      end
    end
  end
end
The results are:
user system total real
enum_slice 0.020536 0.000200 0.020736 ( 0.020751)
drop_take 7.682218 0.019815 7.702033 ( 7.720876)
So your approach is about 374x faster. Not bad!
When I compile my code with -fcheck=all I get a runtime error, since it seems I step outside the bounds of one of my array dimensions. It comes from the part of my code shown below. I think it is because my loops over i and j only run from -nx to nx and -ny to ny, but I use points at i+1, j+1, i-1 and j-1, which takes me out of bounds in my arrays. When the loop over j starts at -ny, it needs j-1, so it immediately goes out of bounds because I'm trying to access -ny-1. The same happens when j=ny, and likewise at i=-nx and i=nx.
My question is, how can I fix this problem efficiently using minimal code?
I need the array grad(1,i,j) correctly defined on the boundary, and it needs to be defined exactly as on the right-hand side of the equality below; I just don't know an efficient way of doing this. I could explicitly define grad(1,nx,j), grad(1,-nx,j), etc. separately and only loop over i=-nx+1,nx-1 and j=-ny+1,ny-1, but this causes lots of duplicated code, and I have many of these arrays, so I don't think this is the logical/efficient approach. If I do this, I just end up with hundreds of lines of duplicated code that make it very hard to debug. Thanks.
integer :: i,j
integer, parameter :: nx = 50, ny = 50
complex, dimension (3,-nx:nx,-ny:ny) :: grad,psi
real, parameter :: h = 0.1

do j = -ny,ny
   do i = -nx,nx
      psi(1,i,j) = sin(i*h)+sin(j*h)
      psi(2,i,j) = sin(i*h)+sin(j*h)
      psi(3,i,j) = sin(i*h)+sin(j*h)
   end do
end do

do j = -ny,ny
   do i = -nx,nx
      grad(1,i,j) = (psi(1,i+1,j)+psi(1,i-1,j)+psi(1,i,j+1)+psi(1,i,j-1)-4*psi(1,i,j))/h**2 &
                  - (psi(2,i+1,j)-psi(2,i,j))*psi(1,i,j)/h &
                  - (psi(3,i,j+1)-psi(3,i,j))*psi(1,i,j)/h &
                  - psi(2,i,j)*(psi(1,i+1,j)-psi(1,i,j))/h &
                  - psi(3,i,j)*(psi(1,i,j+1)-psi(1,i,j))/h
   end do
end do
If I were to do this explicitly for grad(1,nx,j) and grad(1,-nx,j), it would be given by
do j = -ny+1,ny-1
   grad(1,nx,j) = (psi(1,nx,j)+psi(1,nx-2,j)+psi(1,nx,j+1)+psi(1,nx,j-1)-2*psi(1,nx-1,j)-2*psi(1,nx,j))/h**2 &
                - (psi(2,nx,j)-psi(2,nx-1,j))*psi(1,nx,j)/h &
                - (psi(3,nx,j+1)-psi(3,nx,j))*psi(1,nx,j)/h &
                - psi(2,nx,j)*(psi(1,nx,j)-psi(1,nx-1,j))/h &
                - psi(3,nx,j)*(psi(1,nx,j+1)-psi(1,nx,j))/h
   grad(1,-nx,j) = (psi(1,-nx+2,j)+psi(1,-nx,j)+psi(1,-nx,j+1)+psi(1,-nx,j-1)-2*psi(1,-nx+1,j)-2*psi(1,-nx,j))/h**2 &
                 - (psi(2,-nx+1,j)-psi(2,-nx,j))*psi(1,-nx,j)/h &
                 - (psi(3,-nx,j+1)-psi(3,-nx,j))*psi(1,-nx,j)/h &
                 - psi(2,-nx,j)*(psi(1,-nx+1,j)-psi(1,-nx,j))/h &
                 - psi(3,-nx,j)*(psi(1,-nx,j+1)-psi(1,-nx,j))/h
end do
One possible way for you could be to use additional index variables for the boundaries, clamped from the original indices so they never go out of bounds. I mean something like this:
integer :: ii, jj   ! declare these alongside the existing i and j

do j = -ny,ny
   jj = max(min(j, ny-1), -ny+1)
   do i = -nx,nx
      ii = max(min(i, nx-1), -nx+1)
      grad(1,i,j) = (psi(1,ii+1,j)+psi(1,ii-1,j)+psi(1,i,jj+1)+psi(1,i,jj-1)-4*psi(1,i,j))/h**2 &
                  - (psi(2,ii+1,j)-psi(2,ii,j))*psi(1,i,j)/h &
                  - (psi(3,i,jj+1)-psi(3,i,jj))*psi(1,i,j)/h &
                  - psi(2,i,j)*(psi(1,ii+1,j)-psi(1,ii,j))/h &
                  - psi(3,i,j)*(psi(1,i,jj+1)-psi(1,i,jj))/h
   end do
end do
It's hard for me to write proper code because it seems you trimmed part of the original expression in the code you presented in the question, but I hope you understand the idea and can apply it correctly to your logic.
Opinions:
Even though this is what you are asking for (as far as I understand it), I would not recommend doing this before profiling and checking whether assigning the boundary conditions manually after a whole-array operation wouldn't be more efficient instead. The extra index calculations on each iteration could impact performance (arguably less than if conditionals or function calls would). Using "ghost cells", as suggested by @evets, could be even more performant. You should profile and compare.
I'd also recommend declaring your arrays as dimension(-nx:nx,-ny:ny,3) instead. Fortran stores arrays in column-major order, and since you access values in the neighborhood of "x" and "y" for a fixed first index, keeping that fixed index as the leftmost dimension puts those neighbors in non-contiguous memory locations, which can mean fewer cache hits.
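For illustration, a minimal sketch of that layout (my own example, not the poster's full equation; only the Laplacian term of grad is shown):

program layout_sketch
   implicit none
   integer, parameter :: nx = 50, ny = 50
   real, parameter :: h = 0.1
   complex, dimension(-nx:nx,-ny:ny,3) :: grad, psi
   integer :: i, j

   psi = (0.0, 0.0)
   grad = (0.0, 0.0)

   ! With the spatial indices leftmost, psi(i-1,j,1), psi(i,j,1) and psi(i+1,j,1)
   ! are adjacent in memory, which is friendlier to the cache.
   do j = -ny+1, ny-1
      do i = -nx+1, nx-1
         grad(i,j,1) = (psi(i+1,j,1) + psi(i-1,j,1) &
                      + psi(i,j+1,1) + psi(i,j-1,1) - 4*psi(i,j,1)) / h**2
      end do
   end do
end program layout_sketch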
In somewhat pseudo-code, you can do
do j = -ny, ny
   if (j == -ny) then
      p1jm1 = XXXXX   ! Some boundary condition
   else
      p1jm1 = psi(1,i,j-1)
   end if
   if (j == ny) then
      p1jp1 = YYYYY   ! Some other boundary condition
   else
      p1jp1 = psi(1,i,j+1)
   end if
   do i = -nx, nx
      grad(1,i,j) = ... term involving p1jm1 ... term involving p1jp1 ...
      ...
   end do
end do
The j-loop isn't bad in that you are adding 2*2*ny conditionals. The inner i-loop adds 2*2*nx conditionals for each j iteration (or 2*2*ny * 2*2*nx conditionals in total). Note that you need a temporary for each psi whose index triplet is unique, i.e., psi(1,i,j+1), psi(1,i,j-1), and psi(3,i,j+1).
I'm starting with Python and I have a basic question about the "for" loop.
I have two arrays which contain values of the same variables:
A = data_lac[:,0]
In the first array, I have values of area and in the second one, values of mean depth.
I would like to find a way to automate my calculation with different values of a parameter. The equation is the following one:
g= (np.sqrt(A/pi))/n
Here I can calculate my "g" for each row. Now I want to have a loop with different values of "n". I did this:
i=0
while i <= len(A)-1:
    for n in range(2,6):
        g[i] = (np.sqrt(A[i]/pi))/n
        i += 1
        break
In this case, I just get one column with the calculation for n = 2, but not the following ones. I tried to add a second dimension to my array, but I get an error message saying that I have too many indices for the array.
In other words, I would like this array:
g[len(A),5]
where g has 5 columns, each one calculated with a different "n".
Any tips would be very helpful,
Thanks
Update of the code:
data_lac=np.zeros((106,7))
data_lac[:,0:2]=np.loadtxt("/home...", delimiter=';', skiprows=1, usecols=(0,1))
data_lac[:,1]=data_lac[:,1]*0.001
#Initialisation
A = data_lac[:,0]
#example for A with 4 elements
A=[2.1, 32.0, 4.6, 25]
g = np.zeros((len(A),))
I believe you are sharing the index between both loops: you were increasing i (the index for the outer while loop) inside the inner for loop (which is indexed by n).
I guess you have A (a 1-dim array) and you want to produce g (a 2-dim array) of size (len(A), 4), one column per value of n.
I am not sure I fully understand your required output, but I believe you want something like:
i = 0
while i <= len(A)-1:
    for n in range(2, 6):
        g[i][n-2] = (np.sqrt(A[i]/pi))/n  # n-2 maps n = 2..5 to columns 0..3
    i += 1  # notice the increment of i belongs to the outer while loop
Important - remember that in Python indentation means a lot, so make sure the i += 1 is in the while scope and not indented to be inside the for loop (and drop the break, which would otherwise stop the loop early).
Notice - the definition of g should be:
g = np.zeros((len(A),4), dtype=float)
The way you defined it (without the 4) causes it to be a 1-dim array and not 2-dim.
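As a side note (my own sketch, not needed for the fix above): NumPy broadcasting can compute all the columns at once, with no explicit loops, assuming the same A and n = 2..5:

import numpy as np

A = np.array([2.1, 32.0, 4.6, 25.0])   # example areas from the question
n = np.arange(2, 6)                    # n = 2, 3, 4, 5
g = np.sqrt(A[:, None] / np.pi) / n    # broadcasting -> shape (len(A), 4)
print(g.shape)                         # (4, 4) for this example A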
Is there a neat function in Julia which will merge two sorted arrays and return the sorted result for me? I have written:
c = 1
p = 1
i = 1
n = length(tc) + length(tp)
t = Vector{Float64}(undef, n)
while c <= length(tc) && p <= length(tp)
    if tp[p] < tc[c]
        t[i] = tp[p]
        p = p + 1
        i = i + 1
    else
        t[i] = tc[c]
        c = c + 1
        i = i + 1
    end
end
while p <= length(tp)
    t[i] = tp[p]
    i = i + 1
    p = p + 1
end
while c <= length(tc)
    t[i] = tc[c]
    i = i + 1
    c = c + 1
end
but is there no native function in base Julia to do this?
Contrary to the other answers, there is in fact a method to do this in base Julia. BUT, it only works for arrays of integers, AND it will only work if the arrays are unique (in the sense that no integer is repeated in either array). Simply use the IntSet type (renamed BitSet in Julia 1.0) as follows:
a = [2, 3, 4, 8]
b = [1, 5]
union(IntSet(a), IntSet(b))
If you run the above code, you'll note that the union function removes duplicates from the output, which is why I stated initially that your arrays must be unique (or else you must be happy to have duplicates removed in the output). You'll also notice that the union operation on the IntSet works much faster than union on a sorted Vector{Int}, since the former exploits the fact that an IntSet is pre-sorted.
Of course, the above is not really in the spirit of the question, which more concerns a solution for any type for which the lt operator is defined, as well as allowing for duplicates.
Here is a function that efficiently finds the union of two pre-sorted unique vectors. I've never had a need for the non-unique case myself, so I'm afraid I have not written a function that covers that case:
"union <- Return the union of the inputs as a new sorted vector"
function union_vec(x::Vector{T}, y::Vector{T})::Vector{T} where {T}
(nx, ny) = (1, 1)
z = T[]
while nx <= length(x) && ny <= length(y)
if x[nx] < y[ny]
push!(z, x[nx])
nx += 1
elseif y[ny] < x[nx]
push!(z, y[ny])
ny += 1
else
push!(z, x[nx])
nx += 1
ny += 1
end
end
if nx <= length(x)
[ push!(z, x[n]) for n = nx:length(x) ]
elseif ny <= length(y)
[ push!(z, y[n]) for n = ny:length(y) ]
end
return z
end
Another option is to look at sorted dictionaries, available in the DataStructures.jl package. I haven't done it myself, but a method that just inserts all observations into a sorted dictionary (checking for key duplication as you go) and then iterates over (keys, values) should also be a fairly efficient way to attack this problem.
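A rough sketch of that idea (my own, so treat the details as an assumption; note that, like the IntSet approach, it drops duplicates because dictionary keys are unique):

using DataStructures

# Merge two sorted vectors by inserting everything as keys of a SortedDict;
# keys iterate in sorted order, and duplicate values collapse to a single key.
function union_sorteddict(x::Vector{T}, y::Vector{T}) where {T}
    d = SortedDict{T,Nothing}()
    for v in x
        d[v] = nothing
    end
    for v in y
        d[v] = nothing
    end
    return collect(keys(d))
end

union_sorteddict([2, 3, 4, 8], [1, 5])   # [1, 2, 3, 4, 5, 8]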
Although an explicit function to merge two sorted vectors seems to be missing, one can be constructed easily from the existing building blocks (the question actually demonstrated this, but it doesn't define a function).
The following method tries to leverage the existing sort code and still remain efficient.
In code:
mergesorted(a,b) = sort!(vcat(a,b))
The following is an example:
julia> a = [1:2:11...];
julia> b = [2:3:20...];
julia> show(a)
[1,3,5,7,9,11]
julia> show(b)
[2,5,8,11,14,17,20]
julia> show(mergesorted(a,b))
[1,2,3,5,5,7,8,9,11,11,14,17,20]
I didn't benchmark the function, but QuickSort (the default sorting algorithm) usually performs well on pre-sorted arrays, so it should be OK, and the allocation of a result vector is required in any implementation anyway.
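If you do want to check, something along these lines would do it (this assumes the BenchmarkTools.jl package; no numbers are claimed here):

using BenchmarkTools

mergesorted(a, b) = sort!(vcat(a, b))

a = sort!(rand(10_000));
b = sort!(rand(10_000));

@btime mergesorted($a, $b);   # interpolate with $ so globals aren't part of the timing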
I keep coming across this in different projects, so I made a package MergeSorted (https://github.com/vvjn/MergeSorted.jl). You can use it as follows.
using MergeSorted
a = sort!(rand(1000))
b = sort!(rand(1000))
c = mergesorted(a,b)
sort!(vcat(a,b)) == c
Or without allocating new memory.
mergesorted!(c, a, b)
You can also use all of the sort options.
a = sort!(rand(1000), order=Base.Reverse)
b = sort!(rand(1000), order=Base.Reverse)
c = mergesorted(a,b, order=Base.Reverse)
sort!(vcat(a,b), order=Base.Reverse) == c
It's around 4-6 times faster than sort!(vcat(a,b)), which uses QuickSort by default, and twice as fast as sort!(vcat(a,b), alg=MergeSort) but MergeSort uses more memory.
No, such a function does not exist. Actually, I have not seen a language which has such a function out of the box.
To do this, you have to maintain a pointer into each of the arrays, compare the values, and advance the one pointing at the smaller value (based on what I see, this is exactly what you do).
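For reference, a compact sketch of that two-pointer merge (my own generic version, which keeps duplicates):

# Merge two already-sorted vectors into one sorted vector, keeping duplicates.
function merge_two_pointer(a::Vector{T}, b::Vector{T}) where {T}
    out = Vector{T}(undef, length(a) + length(b))
    i = j = 1
    for k in eachindex(out)
        # Take from a while b is exhausted or a's current element is not larger.
        if j > length(b) || (i <= length(a) && a[i] <= b[j])
            out[k] = a[i]
            i += 1
        else
            out[k] = b[j]
            j += 1
        end
    end
    return out
end

merge_two_pointer([1, 3, 5], [2, 3, 4])   # [1, 2, 3, 3, 4, 5]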