Create a view that is a vector built from different columns?

For matrices A, B, and C, and some integer i, is there an easy way to make a view whose result is
vec([A[:,i];B[:,i];C[:,i]])
without creating any temporaries? My current try is:
A = rand(4,4); B = rand(4,4); C = rand(4,4)
[@view(A[:,1]); @view(B[:,1]); @view(C[:,1])]
which obviously creates the vector at the end instead of a view of the three columns stacked as a vector.

It looks like you're looking for a lazy version of cat. Here's one implementation:
lazy cat http://www.mrwallpaper.com/wallpapers/Lazy-Cat.jpg
But in all seriousness, consider the (still experimental) solution ahwillia has here: CatViews.jl. In your situation, CatView(@view(A[:,1]), @view(B[:,1]), @view(C[:,1])) would work.
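For concreteness, a minimal sketch of that suggestion, assuming the CatViews.jl package is installed; the indexing behaviour noted in the comments is what one would expect from a view, not something verified here:
using CatViews

A = rand(4, 4); B = rand(4, 4); C = rand(4, 4)

# Lazily stack the three columns as one vector; no copy should be made.
v = CatView(@view(A[:, 1]), @view(B[:, 1]), @view(C[:, 1]))

v[1] == A[1, 1]   # true: the first four indices map into A's first column
length(v)         # 12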

Related

Eiffel: How to wipe_out an ARRAY object without recreating it

Trying to do something like
a: ARRAY[STRING]
create a.make_empty
a.put("foo foo fool")
a.wipe_out
Do I have to do the following, or is there another way, since STRING doesn't seem to have a .has_default?
create a.make_empty
a.put("foo foo fool")
create a.make_empty
The most straightforward way is to use keep_head (n). It keeps only the first n items, so when n = 0, all items are removed:
a.keep_head (0)
Another way is to use a creation procedure, for example make_empty, as a regular procedure. It sets the array to the state of a newly created one:
a.make_empty
However, this approach looks a bit odd, and it can change the lower index of the array, so keep_head is preferable.
Note: ARRAYED_LIST is a good alternative to ARRAY. It has almost all the features of ARRAY, is more flexible, and provides additional features, wipe_out among them.
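A small, untested sketch putting the two options side by side (the class and feature names are illustrative):
class
    ARRAY_WIPE_DEMO

create
    make

feature

    make
        local
            a: ARRAY [STRING]
            l: ARRAYED_LIST [STRING]
        do
            create a.make_empty
            a.force ("foo foo fool", 1)
            a.keep_head (0)
                -- Same ARRAY object, now with `count' = 0.

            create l.make (0)
            l.extend ("foo foo fool")
            l.wipe_out
                -- ARRAYED_LIST offers `wipe_out' directly.
        end

end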

Using N-D interpolation with a generic rank?

I'm looking for an elegant way of using ndgrid and interpn in a more "general" way - basically for any given size of input, without treating each rank as a separate case.
Given N-D source data with a matching N-D mesh given as a cell array of 1-D vectors, one per coordinate, Mesh = {[x1]; [x2]; ...; [xn]}, and the query/output coordinates given in the same way (QueryMesh), how do I generate the ndgrid matrices and use them in interpn without writing a separate case for each dimension?
Also, if there is a better way to define the mesh, I am more than willing to change it.
Here's a fairly obvious, conceptual (and NOT WORKING) schematic of what I want, in case it wasn't clear:
Mesh={linspace(0,1,10); linspace(0,4,20); ... linspace(0,10,15)};
QueryMesh={linspace(0,1,20); linspace(0,4,40); ... linspace(0,10,30)};
Data=... (whatever)
NewData=InterpolateGeneric(Mesh,QueryMesh,Data);
function NewData=InterpolateGeneric(Mesh,QueryMesh,Data)
InGrid=ndgrid(Mesh{:});
OutGrid=ndgrid(QueryMesh{:});
NewData=interpn(InGrid{:},Data,OutGrid{:},'linear',0.0)
end
I think what you are looking for is how to get multiple outputs from this line:
OutGrid = ndgrid(QueryMesh{:});
Since ndgrid produces as many output arrays as it receives input arrays, first preallocate a cell array of matching size:
OutGrid = cell(size(QueryMesh));
Next, provide each of the elements of OutGrid as an output argument:
[OutGrid{:}] = ndgrid(QueryMesh{:});
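Putting this together, with the same treatment for the input grid, the original schematic could look like this (untested sketch):
function NewData = InterpolateGeneric(Mesh, QueryMesh, Data)
    % Capture every output of ndgrid in a cell array, for any number of dimensions.
    InGrid  = cell(size(Mesh));
    OutGrid = cell(size(QueryMesh));
    [InGrid{:}]  = ndgrid(Mesh{:});
    [OutGrid{:}] = ndgrid(QueryMesh{:});
    % Expand the cell arrays back into argument lists for interpn.
    NewData = interpn(InGrid{:}, Data, OutGrid{:}, 'linear', 0.0);
end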

Efficient/parallel way to apply an operation to each element in an array in Java <= 1.7?

I am interested in ways of applying an operation (independently) to each element in a Java array, for example clipping each element of a numeric array to be no more than a given value.
Example:
myArray.clip(5)
Or
Utils.clip(myArray, 5)
would set each element greater than 5 to 5.
The most straightforward method is to iterate over the elements, but I don't like it for two reasons:
It is iterative and doesn't exploit opportunities for parallelization.
The code doesn't look beautiful; a mapping or vectorized syntax would be nicer.
I need to do such a clipping operation about 5000 times, each time on a 70 x 10 2-D array.
If Java <= 1.7 doesn't provide a way to do that in the standard library, how can I achieve what I want (the two points above) using other libraries or Java 1.8 features?
The traditional Java 7 technique for operating on the data in a collection in parallel is the "fork/join" pattern. The notion is that one decomposes the work into smaller pieces, forks off tasks to complete those pieces, and then joins each task at the end.
I'm not sure your case warrants this, though. Depending on how complicated a "clip" operation is, it might be more straightforward to create a pair of threads to operate on the two axes simultaneously. Though, I may be misunderstanding your requirements.
Another solution might be a time/space trade-off. It may be more useful to maintain a clipped version of the data as you update the primary collection. Sort of a producer-consumer model, where there is something that looks for changes to the collection, and maintains a clipped version.
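For reference, a minimal sketch of that fork/join decomposition applied to the clipping use case; the class name, threshold, and bound are illustrative, not from the question:
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ClipTask extends RecursiveAction {
    private static final int THRESHOLD = 16;  // rows per leaf task (tuning knob)
    private final double[][] data;
    private final int from, to;
    private final double bound;

    ClipTask(double[][] data, int from, int to, double bound) {
        this.data = data; this.from = from; this.to = to; this.bound = bound;
    }

    @Override
    protected void compute() {
        if (to - from <= THRESHOLD) {
            // Small enough: clip this block of rows directly.
            for (int i = from; i < to; i++)
                for (int j = 0; j < data[i].length; j++)
                    if (data[i][j] > bound) data[i][j] = bound;
        } else {
            // Otherwise split the row range and process both halves in parallel.
            int mid = (from + to) / 2;
            invokeAll(new ClipTask(data, from, mid, bound),
                      new ClipTask(data, mid, to, bound));
        }
    }

    public static void main(String[] args) {
        double[][] m = new double[70][10];
        m[3][4] = 42.0;
        new ForkJoinPool().invoke(new ClipTask(m, 0, m.length, 5.0));
        System.out.println(m[3][4]);  // 5.0
    }
}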
I am not sure about arrays themselves, but if you have a Collection-compatible type you can use parallel aggregate operations in Java 8.
Refer to https://docs.oracle.com/javase/tutorial/collections/streams/index.html
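For a plain double[][] there is a closely related stream option: stream over the rows in parallel and clip each row in place. A hypothetical sketch (method and variable names are illustrative):
import java.util.Arrays;

public class ClipDemo {
    // Clip every element of a 2-D array in place: values above bound become bound.
    static void clipUpper(double[][] m, double bound) {
        Arrays.stream(m)
              .parallel()                 // rows are processed concurrently
              .forEach(row -> {
                  for (int j = 0; j < row.length; j++) {
                      if (row[j] > bound) {
                          row[j] = bound;
                      }
                  }
              });
    }

    public static void main(String[] args) {
        double[][] m = { {1, 7, 3}, {9, 2, 6} };
        clipUpper(m, 5);
        System.out.println(Arrays.deepToString(m));  // [[1.0, 5.0, 3.0], [5.0, 2.0, 5.0]]
    }
}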
Using Java 8 features, this is how I went about it: I iterate over the rows and apply a parallel operation to each one. I didn't find any standard method that takes a
double[][]
and uses it as a stream, or a
parallelSetAll(double[][] ...)
In my code below I use the Apache commons-math library for the matrix implementation, because it provides utility functions based on its RealMatrix type.
Can this code be optimized further without going too low-level, e.g. by taking things out of the for loop or doing the assignments differently?
public static RealMatrix clipLower(RealMatrix m, double lowerBound) {
    // TODO see if you really need to modify m or just return a modified copy of m
    // Process the matrix row by row; each row gets processed in parallel.
    // TODO see if you can directly parallelize over the entire matrix, not row by row
    // OPEN implement using the Arrays.parallelSetAll method?
    int nrOfRows = m.getRowDimension();
    for (int i = 0; i < nrOfRows; i++) {
        // TODO move the declaration outside the for loop
        double[] currRow = m.getRow(i);
        double[] newRow = Arrays.stream(currRow)
                .parallel()
                .map((number) -> (number < lowerBound) ? lowerBound : number)
                .toArray();
        m.setRow(i, newRow);
    }
    return m;
}

matlab: structural data and multi-level indexing

I have a simple problem with structures.
Let's create:
x(1).a(:, :) = magic(2);
x(2).a(:, :) = magic(2)*2;
x(3).a(:, :) = magic(2)*3;
How do I list a(1, 1) from all the x-es?
I wanted to do it like:
x(1, :).a(1,1)
but there is an error "Scalar index required for this type of multi-level indexing."
How should I approach it? I know I can do it with a loop, but that's probably the worst solution :)
Thanks!
This is not the best datastructure to use if this is the sort of query you'd like to make on it, precisely because this sort of indexing cannot be done directly.
However, here is one approach that works:
cellfun(@(X) X(1,1), {x.a})
The syntax {x.a} converts x from a 'struct array' into a cell array. Then we use cellfun to apply a function as a map over the cell array. The anonymous function @(X) X(1,1) takes one argument X and returns X(1,1).
You can also get your data in this way:
B = cat(3,x.a);
out = reshape(B(1,1,:),1,[]);
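For concreteness, an untested sketch applying both approaches to the question's example data:
x(1).a = magic(2);
x(2).a = magic(2)*2;
x(3).a = magic(2)*3;

v1 = cellfun(@(X) X(1,1), {x.a});   % map over the cell array of .a fields
B  = cat(3, x.a);                   % stack the matrices along the 3rd dimension
v2 = reshape(B(1,1,:), 1, []);      % pick the (1,1) entry of every slice
isequal(v1, v2)                     % true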
By the way, loops are not evil. Sometimes they are even faster than vectorized indexing. Try both ways (a loop sketch follows the list below) and see what suits you best in terms of:
Speed - use the profiler to check
Code clarity - depends on the context. Sometimes vectorized code looks better, sometimes the opposite.
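For completeness, the loop version alluded to above is short as well (untested sketch, using the question's x):
vals = zeros(1, numel(x));
for k = 1:numel(x)
    vals(k) = x(k).a(1,1);
end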

Array.isDefinedAt for n-dimensional arrays in scala

Is there an elegant way to express
val a = Array.fill(2,10) {1}
def do_to_elt(i: Int, j: Int) {
  if (a.isDefinedAt(i) && a(i).isDefinedAt(j)) f(a(i)(j))
}
in scala?
I recommend that you not use arrays of arrays for 2D arrays, for three main reasons. First, it allows inconsistency: not all columns (or rows, take your pick) need to be the same size. Second, it is inefficient--you have to follow two pointers instead of one. Third, very few library functions exist that work transparently and usefully on arrays of arrays as 2D arrays.
Given these things, you should either use a library that supports 2D arrays, like scalala, or you should write your own. If you do the latter, among other things, this problem magically goes away.
So in terms of elegance: no, there isn't a way. But beyond that, the path you're starting on contains lots of inelegance; you would probably do best to step off of it quickly.
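As a rough illustration of the "write your own" route that answer suggests (untested; Array2D, doToElt and the flat-array layout are just one possible design, not taken from the answer):
import scala.reflect.ClassTag

// A fixed-size 2D array backed by one flat array, with bounds-checked access.
class Array2D[A: ClassTag](val rows: Int, val cols: Int, init: A) {
  private val data = Array.fill(rows * cols)(init)

  def apply(i: Int, j: Int): A = data(i * cols + j)
  def update(i: Int, j: Int, v: A): Unit = data(i * cols + j) = v

  def isDefinedAt(i: Int, j: Int): Boolean =
    i >= 0 && i < rows && j >= 0 && j < cols

  // Apply f only when (i, j) is a valid position.
  def doToElt(i: Int, j: Int)(f: A => Unit): Unit =
    if (isDefinedAt(i, j)) f(this(i, j))
}

object Array2DDemo extends App {
  val a = new Array2D(2, 10, 1)
  a.doToElt(1, 3)(println)   // prints 1
  a.doToElt(5, 3)(println)   // out of bounds: does nothing
}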
You just need to check with isDefinedAt whether the array at index i exists:
def do_to_elt(i: Int, j: Int): Unit =
  if (a.isDefinedAt(i) && a(i).isDefinedAt(j)) f(a(i)(j))
EDIT: Missed that part about the elegant solution as I focused on the error in the code before your edit.
Concerning elegance: no, per se there is no way to express it more elegantly. Some might tell you to use the pimp-my-library pattern to make it look more elegant, but in this case it does not.
If your only use case is to execute a function on an element of a multidimensional array when the indices are valid, then this code does that and you should use it. You could generalize the method by changing its signature to take the function to apply to the element, and maybe a value to return if the indices are invalid, like this:
def do_to_elt[A](i: Int, j: Int)(f: Int => A, g: => A = ()) =
  if (a.isDefinedAt(i) && a(i).isDefinedAt(j)) f(a(i)(j)) else g
but I would not change anything beyond this. It does not look more elegant either, but it widens your use case.
(Also: if you are working with arrays, you are mostly doing so for performance reasons, and in that case it might even be better not to use isDefinedAt but to perform validity checks based on the lengths of the arrays.)
