Six-dimensional arrays in Eigen

I am new to coding. I am trying to write a computational code using the Eigen library. I want to know how I can create and initialize a six-dimensional array in Eigen, like:
FF(n0, n1, n2, n3, n4, n5)
Regards.

Related

Matlab: How to do `repelem` where each element is a matrix?

Suppose I have m-by-n matrices A, B, C arranged in an m-by-n-by-3 tensor P:
P = cat(3, A, B, C);
I now want to make a new tensor where each matrix is repeated K times, making the third dimension size 3K. That is, if K=2 then I want to build the tensor
Q = cat(3, A, A, B, B, C, C);
Is there a nice builtin way to achieve this, or do I need to write a loop for it? Preferably as fast or faster than the manual way.
If A, B, C were scalars I could have used repelem, but it does not work the way I want for matrices. repmat can be used to build
cat(3, A, B, C, A, B, C)
but that is not what I am after either.
As noted by @Cris Luengo, repelem(P, 1, 1, k) will actually do what you want (in spite of what the MATLAB documentation says), but I can think of two other ways to achieve this.
First, you could use repmat to duplicate the tensor k times in the second dimension and then reshape:
Q = reshape(repmat(P, 1, k, 1), m, n, []);
Second, you could use repelem to give you the indices of the third dimension to construct Q from:
Q = P(:, :, repelem(1:size(P,3), k));

MKL Matrix Transpose

I have very large rectangular and square matrices, in both float and complex types. I want to know whether there is an in-place MKL transpose routine. There is mkl_?imatcopy in MKL; please help me with an example.
I have tried this, but it did not transpose the matrix:
size_t nEle = noOfCols * noOfRows;
float *data = (float*)calloc(nEle,sizeof(float));
initalizeData(data,noOfCols,noOfRows);
printdata(data,noOfCols,noOfRows);
printf("After transpose \n\n");
mkl_simatcopy('R','T',noOfCols,noOfRows,1,data,noOfPix,noOfCols);
//writeDataFile((char *)data,"AfterTranspose.img",nEle*sizeof(float));
printdata(data,noOfCols,noOfRows);
You may look at the existing in-place transposition routines for the float, real, and complex datatypes. The MKL package ships such examples: cimatcopy.c, dimatcopy.c, simatcopy.c, and zimatcopy.c. Please refer to the MKLROOT/examples/transc/source directory.

Elementwise product between a vector and a matrix using GNU Blas subroutines

I am working in C, using the GNU Scientific Library (GSL). Essentially, I need to do the equivalent of the following MATLAB code:
x=x.*(A*x);
where x is a gsl_vector, and A is a gsl_matrix.
I managed to do (A*x) with the following command:
gsl_blas_dgemv(CblasNoTrans, 1.0, A, x, 1.0, res);
where res is another gsl_vector that stores the result. If the matrix A has size m-by-m and the vector x has size m-by-1, then res will have size m-by-1.
Now, what remains to be done is the elementwise product of the vectors x and res (the result should be a vector). Unfortunately, I am stuck on this and cannot find the function that does it.
If anyone can help me with that, I would be very grateful. In addition, does anyone know of better GSL documentation than https://www.gnu.org/software/gsl/manual/html_node/GSL-BLAS-Interface.html#GSL-BLAS-Interface, which I find confusing so far?
Finally, would I lose in time performance if I do this step by simply using a for loop (the size of the vector is around 11000 and this step will be repeated 500-5000 times)?
for (i = 0; i < m; i++)
    gsl_vector_set(res, i, gsl_vector_get(x, i) * gsl_vector_get(res, i));
Thanks!
The function you want is:
gsl_vector_mul(res, x)
I have used Intel's MKL, and I like the documentation on their website for these BLAS routines.
The for-loop is fine if GSL is well designed; for example, gsl_vector_set() and gsl_vector_get() can be inlined. You could compare its running time with gsl_blas_daxpy: if the timings are similar, the for-loop is well optimized.
On the other hand, you may want to try a much better matrix library, Eigen, with which you can implement your operation with code similar to this:
x = x.array() * (A * x).array();

Finding if two arrays have common numbers using recursion

The question goes like this:
write a recursive function that gets two positive int arrays and their sizes.
The function returns 0 if there is at least one common element and 1 if there isn't.
int disjoint(int a[], int n1, int b[], int n2)
The requirements are as follows:
no helper functions
no loops
no changing the array
only recursion
I am currently having problems in getting all the combinations for comparison, in other words how to translate nested loops to recursion.
update
This one works, but it makes unnecessary recursive calls.
int disjoint(int g1[], int n1, int g2[], int n2) {
    if (g1[0] == g2[0]) return 0;
    if (n2 - 1 > 0 && n1 - 1 >= 0) {
        return disjoint(g1 + 1, n1 - 1, g2, n2) * disjoint(g1, n1, g2 + 1, n2 - 1);
    }
    return 1;
}
I'm not solving it for you but...
Using pseudo-notation: the arrays a[0..n] and b[0..m] are disjoint if a[0] != b[0], and a[1..n] and b[0..m] are disjoint, and a[0..n] and b[1..m] are disjoint.
I think I got that right...
Ok, another hint: One of the recursive calls might look like disjoint(a + 1, n1 - 1, b, n2)
When doing recursion, it's (at least for me) better to look at the problem and formulate the solution in terms of the problem itself, rather than writing an iterative solution and then trying to "translate" it.

Solve a banded matrix system of equations

I need to solve a 2D Poisson equation, that is, a system of equations of the form AX=B, where A is an n-by-n matrix and B is an n-by-1 vector. Since A is a discretization matrix for the 2D Poisson problem, I know that only 5 of its diagonals are nonzero. LAPACK doesn't provide functions tailored to this particular structure, but it has functions for solving banded systems, namely DGBTRF (for the LU factorization) and DGBTRS. The 5 diagonals are: the main diagonal, the first diagonals above and below it, and the diagonals m positions above and below the main diagonal. After reading the LAPACK documentation about band storage, I learned that I have to create a (3*m+1)-by-n matrix to store A in band storage format; let's call this matrix AB. Now the questions:
1) What is the difference between dgbtrs and dgbtrs_? Intel MKL provides both, but I can't understand why.
2) dgbtrf requires the band storage matrix to be an array. Should I linearize AB by rows or by columns?
3) Is this the correct way to call the two functions?
int n, m;
double *AB;
/*... fill n, m, AB, with appropriate numbers */
int *pivots;
int nrows = 3 * m + 1, info, rhs = 1;
dgbtrf_(&n, &n, &m, &m, AB, &nrows, pivots, &info);
char trans = 'N';
dgbtrs_(&trans, &n, &m, &m, &rhs, AB, &nrows, pivots, B, &n, &info);
It also provides DGBTRS and DGBTRS_. Those are Fortran administrativa that you should not care about; just call dgbtrs. (The reason is that on some architectures Fortran routine names have an underscore appended and on others not, and names may be either upper or lower case -- Intel MKL #defines the right one to dgbtrs.)
LAPACK routines expect column-major matrices (i.e., Fortran style): store the columns one after another. The banded storage you must use is not hard: http://www.netlib.org/lapack/lug/node124.html.
Your calls look good to me, but please try them on small problems beforehand (always a good idea, by the way). Also make sure you handle a non-zero info (this is how errors are reported).
Better style is to use MKL_INT instead of plain int; it is a typedef to the right integer type, which may differ across architectures.
Also make sure to allocate memory for pivots before calling dgbtrf.
This might be off topic, but for the Poisson equation an FFT-based solution is much faster: take the 2D FFT of your potential field, divide by -(k^2 + lambda^2), then take the inverse FFT. Here lambda is a small number that avoids division by zero at k = 0. The 5-diagonal system is a banded approximation of the Poisson equation, in which the differential operator is replaced by finite differences.
http://en.wikipedia.org/wiki/Screened_Poisson_equation
