How do I implement the Laplace expansion algorithm in C?

I'm having trouble making this algorithm work, as I can't figure out how to do the middle part of the problem. Here's my code so far:
int det(int matrixSize, int matrix[][matrixSize]){
    int determinant = 0, matrixValues[matrixSize * matrixSize], matrixFirstRowValues[matrixSize * matrixSize];
    for(int i = matrixSize; i > 2; i--){
        for(int row = 0; row < matrixSize; row++){
            for(int col = 0; col < matrixSize; col++){
                matrixFirstRowValues[row + (matrixSize - i)] = matrix[1][col + (matrixSize - i)];
                //copies the first row values for an array until we have a 2x2 matrix
            }
        }
    }
    //multiply the values in matrix Values by their respective matrix without
    //the row and column of these values times -1^row+col
    determinant += (matrix[matrixSize-1][matrixSize-1] * matrix[matrixSize-2][matrixSize-2])
                 - (matrix[matrixSize-1][matrixSize-2] * matrix[matrixSize-2][matrixSize-1]);
    return determinant;
}
The matrix is a 2-dimensional array of size matrixSize. I iterate through it until I'm left with a 2x2 matrix, copying each value of the first row to a new array.
Those values have to be multiplied by the matrix that is left when I remove the row and column where that value is.
This is the principle of the Laplace expansion. The part that's giving me trouble is getting the matrices that are left after removing a row and a column, as I want this to work for an nxn matrix.
Then, in the end, that is summed together with the determinant of a 2x2 matrix.
How can I do the middle part (where the comments are) with my current setup?

Those values have to be multiplied by the matrix that is left when I remove the row and column where that value is.
You have to multiply each entry with its cofactor, i.e. the signed determinant of the matrix that is left over when removing the i-th row and j-th column.
Naturally, this is the setup for a recursive algorithm, since the determinant of the bigger matrix is expressed in terms of the determinants of smaller matrices: if A = (a_{ij}) is the matrix, then det(A) = sum over j = 1..n of (-1)^(i+j) * a_{ij} * det(M_{ij}), where M_{ij} is the minor matrix that arises from A when removing the i-th row and j-th column, and i is the fixed row along which you expand. The base case is the 2-by-2 matrix (maybe also the 3-by-3 matrix).
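For instance, expanding a generic 3-by-3 matrix along its first row (i = 1) gives:
det(A) = a_{11} * det(M_{11}) - a_{12} * det(M_{12}) + a_{13} * det(M_{13})
       = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})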
The problem that arises is that an n-by-n matrix produces n minor matrices M_{ij} of size (n-1)-by-(n-1), each of which produces n-1 matrices of size one less, and so on until the base case is reached, at which point you'll have to keep track of n!/2 matrices. (It becomes apparent at this point that Laplace expansion is a rather costly algorithm; any algorithm based on Gaussian elimination will be far more efficient. But that is just an aside, since we are discussing Laplace expansion.) If done in an iterative fashion, this bookkeeping has to be done manually; a recursive algorithm gets it implicitly via its stack frames.
Your approach
Let's have a look at the piece of code that you have provided. It eludes me what precisely you are trying to achieve. Take for instance the statement in the innermost loop which iterates over col:
matrixFirstRowValues[row + (matrixSize - i)] = matrix[1][col + (matrixSize - i)];
For as long as col changes in the innermost loop, both row and i are fixed, so you are assigning and reassigning from (apparently) the second row in matrix to the same entry in matrixFirstRowValues. Not only that, you assign from an index range (matrixSize-i) .. (2*matrixSize - (i+1)) which exceeds the range of the column unless i == matrixSize, which is only the case for the first value of i.
As I mentioned before, in the end you do not end up with just one 2-by-2 matrix but n!/2.
Copying except i-th row and j-th column
Looking at the matrix with the i-th row and j-th column removed, you end up with four submatrices (some of which may be empty). Restricting yourself to expansion along the first row, you are dealing with just two submatrices (again, some of which may be empty). You may use two loops, one for the part of the matrix to the left of the j-th column and one for the part to the right - or, as suggested in a previous answer, skip the j-th column with continue, cycling the loop without updating the target column index. If col marks the current column to remove (the current row is always 0), iterate r over all rows and c over all columns, and break the column loop into two pieces at c == col. Let's say the target matrix is called minor; then it would look like this:
// expand along first row
for(col = 0; col < matrix_size; col++) {
    // copy into minor matrix, left side then right side
    for(r = 1; r < matrix_size; r++) {
        for(c = 0; c < col; c++) {
            minor[r-1][c] = matrix[r][c];
        }
        for(c = col+1; c < matrix_size; c++) {
            minor[r-1][c-1] = matrix[r][c];
        }
    }
    // use "minor" matrix at this point to calculate
    // its determinant
}
The index shift r-1 is due to the removal of the first row.
A complete recursive Laplace expansion
As I mentioned before, the Laplace expansion of the determinant lends itself naturally to a recursive algorithm. I will make some changes to your setup: I will not use variable length arrays, which are stack allocated, but heap allocated memory instead. Since the expansion, if the space is not reused, has an exponential space requirement, the stack might quickly get exhausted even for matrices of moderate size. Consequently, I will need an additional parameter to report back memory allocation failures via an intent-out parameter which I call is_valid.
You will recognise the matrix copy procedure from above, with slightly different names and an additional pointer dereference, since I operate on n-by-n matrices on the heap. I hope this will not cause too much confusion.
#include <stdio.h>
#include <stdlib.h>

#define SQR(x) ((x)*(x))

int laplace_det(int matrix_size, const int (*mat)[][matrix_size], int *is_valid) {
    // base cases
    if(matrix_size == 1)
        return (*mat)[0][0];
    if(matrix_size == 2)
        return (*mat)[0][0] * (*mat)[1][1] - (*mat)[1][0] * (*mat)[0][1];

    // recursive case, matrix_size > 2
    // expansion indiscriminately along the first row
    //
    // minor   matrix with i-th row and j-th column
    //         removed, for the purpose of calculating
    //         the minor determinant
    // r, c    row and column index variables
    // col     current column in expansion
    // d       determinant accumulator
    //
    int r, c, col, d = 0;
    int (*minor)[matrix_size-1][matrix_size-1] = calloc(SQR(matrix_size-1), sizeof(int));
    if(!minor) {
        *is_valid = 0;
        return 0;
    }

    // expand along first row
    for(col = 0; col < matrix_size; col++) {
        // copy into minor matrix, left side then right side
        for(r = 1; r < matrix_size; r++) {
            for(c = 0; c < col; c++) {
                (*minor)[r-1][c] = (*mat)[r][c];
            }
            for(c = col+1; c < matrix_size; c++) {
                (*minor)[r-1][c-1] = (*mat)[r][c];
            }
        }
        // calculate the determinant of the minor
        int temp_d = laplace_det(matrix_size-1, minor, is_valid);
        if(!*is_valid) {
            free(minor);
            return 0;
        }
        d += (col & 1 ? -1 : 1) * (*mat)[0][col] * temp_d;
    }
    // free resources
    free(minor);
    return d;
}
Example driver program:
int main(void) {
    int is_valid = 1;
    int matrix[3][3] = {
        {1, 2, 3},
        {4, 5, 6},
        {7, 8, 9}
    };
    int det_m = laplace_det(3, &matrix, &is_valid);
    if(is_valid) {
        printf("determinant %d\n", det_m);
    }
}
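For this particular 3-by-3 example the rows are linearly dependent, so a correct implementation should print determinant 0.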
An iterative approach
If you want to do the same thing iteratively, you will need to provide space for the submatrices of each smaller size. As the recursive case shows, you can reuse the same space for all submatrices of the same size, but you cannot reuse it across sizes, because each matrix has to spawn all submatrices of size one smaller, one for each column.
Since the original size of the matrix is not known beforehand, traversing all these matrices in a general way is difficult to realise and requires a lot of bookkeeping that we get for free from the stack frames in the recursive case. But by keeping the current column selector for each level in an array of size matrixSize, together with an array of pointers to the per-level submatrices, it is possible to rewrite this iteratively, as sketched below.
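A rough sketch of that idea, not from the original answer: flat row-major buffers, one matrix, one column selector and one partial sum per level, and the recursion replaced by an explicit loop. The names (level, col, acc) and the flat layout are my own choices.
#include <stdlib.h>

// Iterative Laplace expansion over a flat row-major n x n array.
// Sets *is_valid to 0 on allocation failure.
int laplace_det_iter(int n, const int *a, int *is_valid) {
    if(n == 1)
        return a[0];

    int **mat = malloc(n * sizeof *mat);   // mat[k]: (n-k) x (n-k) matrix for level k
    int *col  = calloc(n, sizeof *col);    // current expansion column per level
    int *acc  = calloc(n, sizeof *acc);    // partial determinant per level
    int ok = mat && col && acc;
    if(mat)
        for(int k = 0; k < n; k++) {
            mat[k] = ok ? malloc((n-k)*(n-k) * sizeof **mat) : NULL;
            ok = ok && mat[k];
        }

    int value = 0;      // determinant handed back from the level below
    int have_value = 0;

    if(ok) {
        for(int i = 0; i < n*n; i++)
            mat[0][i] = a[i];              // level 0 holds a copy of the input

        int level = 0;
        while(level >= 0) {
            int s = n - level;             // size of the matrix at this level
            int *m = mat[level];
            if(s == 2) {                   // base case: 2x2 determinant, hand it back up
                value = m[0]*m[3] - m[1]*m[2];
                have_value = 1;
                level--;
                continue;
            }
            if(have_value) {               // a sub-determinant came back: accumulate it
                acc[level] += (col[level] & 1 ? -1 : 1) * m[col[level]] * value;
                col[level]++;
                have_value = 0;
            }
            if(col[level] == s) {          // all columns expanded at this level
                value = acc[level];
                have_value = 1;
                level--;
                continue;
            }
            // copy the minor (first row and column col[level] removed) one level down
            int c0 = col[level], *sub = mat[level+1];
            for(int r = 1; r < s; r++) {
                for(int c = 0; c < c0; c++)
                    sub[(r-1)*(s-1) + c] = m[r*s + c];
                for(int c = c0+1; c < s; c++)
                    sub[(r-1)*(s-1) + c-1] = m[r*s + c];
            }
            level++;
            col[level] = 0;
            acc[level] = 0;
        }
    } else {
        *is_valid = 0;
    }

    if(mat)
        for(int k = 0; k < n; k++)
            free(mat[k]);
    free(mat);
    free(col);
    free(acc);
    return value;
}
It can be called much like the recursive version, e.g. laplace_det_iter(3, &matrix[0][0], &is_valid) for a contiguous 3x3 int array, and uses the same sign convention for the first-row expansion.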

I solved the Laplace expansion using recursion; maybe this will help you.
Code:
#include <stdio.h>

int determinant(int size, int det[][4])  // size of the square matrix (up to 4x4 here)
{
    int temp[4][4], a = 0, b = 0, i, j, k;
    int sum = 0, sign;  /* sum will hold the value of the determinant of the current matrix */
    if(size == 2)
        return (det[0][0]*det[1][1] - det[1][0]*det[0][1]);
    sign = 1;
    for(i = 0; i < size; i++)  // note that 'i' indicates the column no.
    {
        a = 0;
        b = 0;
        // copy into submatrix and recurse
        for(j = 1; j < size; j++)  // start from the next row !!
        {
            for(k = 0; k < size; k++)
            {
                if(k == i) continue;
                temp[a][b++] = det[j][k];
            }
            a++;
            b = 0;
        }
        sum += sign*det[0][i]*determinant(size-1, temp);  // expand along row 0, recurse on the minor
        sign *= -1;
    }
    return sum;
}
// Main function
int main()
{
    int det[4][4] = {{1, 0, 2, -1},
                     {3, 0, 0, 5},
                     {2, 1, 4, -3},
                     {1, 0, 5, 0}};
    printf("%d\n", determinant(4, det));
}
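With this 4-by-4 example the program should print 30.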
Cheers!

Related

Obtaining a submatrix from a square matrix in C

I want to find a way to obtain a submatrix from an initial, bigger square matrix in C, more specifically the bottom-right submatrix. Then I want the for loop to give me all the submatrices that I can obtain from the original square matrix.
I found some code online:
int initial_matrix[3][3];
int submatrix[2][2];
for(int i = 0; i < R - r + 1; i++){
    for(int j = 0; j < C - c + 1; j++){
        submatrix[i][j] = initial_matrix[i][j];
    }
}
where:
R is the number of rows of the initial matrix (so in this case R=3)
C is the number of columns of the initial matrix (so in this case C=3)
r is the number of rows of the submatrix that i want to obtain (so in this case r=2)
c is the number of columns of the submatrix that i want to obtain (so in this case c=2)
But this loop only gives me the upper-left submatrix, while I want the bottom-right one, and I then want to expand it so that it gives me all the possible submatrices of the initial matrix.
First of all, your indices in the loops are not correct. You want to fill your target matrix rows from index 0 up to (but not including) r and columns from index 0 up to c, so your loops need to look like:
for(size_t i = 0; i < r; ++i)
{
    for(size_t j = 0; j < c; ++j)
    {
        // ...
    }
}
From here on you can simply assign to your target matrix at indices i and j:
submatrix[i][j] = ...;
The problem is that these are not the same indices as in your source matrix (unless you want the top-left corner submatrix), so you need to translate the indices to the appropriate positions within the source matrix. Luckily, this is not difficult: if the top-left corner of the submatrix within the source matrix is at row r0 and column c0, then you simply need to add these to the corresponding loop indices, thus you get:
... = initial_matrix[r0 + i][c0 + j];
In your case this would mean e.g. [1 + i][1 + j] to get the bottom-right submatrix, with both i and j counting up from 0 to 2 (exclusive), i.e. taking the values 0 and 1.
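Putting it together, a minimal sketch (the names r, c, r0, c0 follow the explanation above) that copies the bottom-right 2x2 block of a 3x3 matrix could look like this:
#include <stdio.h>

int main(void) {
    int initial_matrix[3][3] = {{1, 2, 3},
                                {4, 5, 6},
                                {7, 8, 9}};
    int submatrix[2][2];
    size_t r = 2, c = 2;     // size of the submatrix
    size_t r0 = 1, c0 = 1;   // top-left corner of the submatrix within the source

    for(size_t i = 0; i < r; ++i)
        for(size_t j = 0; j < c; ++j)
            submatrix[i][j] = initial_matrix[r0 + i][c0 + j];

    // prints the bottom-right block: 5 6 / 8 9
    for(size_t i = 0; i < r; ++i) {
        for(size_t j = 0; j < c; ++j)
            printf("%d ", submatrix[i][j]);
        printf("\n");
    }
}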

Is there a point in transforming Sparse Matrix Multiplication into block form?

In an assignment for a parallel computing class we have been assigned to program Sparse Binary Matrix-Matrix multiplication (SpGEMM) in C. Julia has a relatively easy to follow implementation based on Gustavson's algorithm that works great.
Thing is we also need to do the multiplication in block form, which I already did, but I don't really see any speedup in doing so. From what I understand you're supposed to use
the result of A(i,k)*B(k,j), where (i,j) are coordinates in the block matrix, as a mask/filter for the next block multiplication in the sum C(i,j) = Σ( A(i,k)*B(k,j) ).
Julia's implementation though, which I followed, already has a dense boolean array when computing each row that acts as a "flag" for when not to add something again in the resulting matrix.
My question is: is there any merit in turning this into block matrix multiplication, or is there something that I might be doing wrong myself?
Keep in mind my C code currently runs in half the time Matlab takes in multiplying a 5,000,000 x 5,000,000 sparse matrix. The blocked version, which I really tried to optimize and I'm also doing in the Gustavson order, gets slower and slower the smaller the block-size is set.
Here is my current code
#include <stdbool.h>
#include <stdlib.h>
// MAX() and quickSort() are defined elsewhere in the program

//C = D+(A*B) (basically OR)
bool SpGEMM_dor(int *Acol, int *Arow, int An,
                int *Bcol, int *Brow, int Bm,
                int **Ccol, int *Crow, int *Csize, //output
                int *Dcol, int *Drow)              //previous
{
    //printCSR(Arow, Acol, An, An, An);
    int nnzcum = 0;
    bool *xb = calloc(An, sizeof(bool)); //boolean flag
    for(int i = 0; i < An; i++){
        int nnzpv = nnzcum; //nnz of previous row
        Crow[i] = nnzcum;
        if(nnzcum + An > *Csize){ //make sure there's enough space
            *Csize += MAX(An, *Csize/4);
            *Ccol = realloc(*Ccol, *Csize*sizeof(int));
        }
        //---OR---
        //add previous row items in order to exist in the next block
        for(int jj = Drow[i]; jj < Drow[i+1]; jj++){
            int j = Dcol[jj];
            xb[j] = true;
            (*Ccol)[nnzcum] = j;
            nnzcum++;
        }
        //--------
        //add new row items
        for(int jj = Arow[i]; jj < Arow[i+1]; jj++){
            int j = Acol[jj];
            for(int kp = Brow[j]; kp < Brow[j+1]; kp++){
                int k = Bcol[kp];
                if(!xb[k]){
                    xb[k] = true;
                    (*Ccol)[nnzcum] = k;
                    nnzcum++;
                }
            }
        }
        if(nnzcum > nnzpv){
            quickSort(*Ccol, nnzpv, nnzcum-1);
            for(int p = nnzpv; p < nnzcum; p++){
                xb[(*Ccol)[p]] = false;
            }
        }
    }
    Crow[An] = nnzcum;
    free(xb);
    return Crow[An];
}
The part of the code inside the ---OR--- section only happens in the block version, in order to add the previous block to the one currently being calculated. It basically does C = D + (A*B). I've also tried calculating the next block and then merging the two sorted arrays of each row of the two CSR matrices, which seems to be slower. Also, all matrices are in CSR format.

Find the indices of the k smallest values in C

I'm implementing K Nearest Neighbor in C and I've gotten to the point where I've computed a distance matrix of every point in my to-be-labeled set of size m to every point in my already-labeled set of size n. The format of this matrix is
[[dist_0,0 ... dist_0,n-1]
.
.
.
[dist_m-1,0 ... dist_m-1,n-1]]
Next, I need to find the k smallest distances in each row so I can use the column indices to access the labels of those points and then compute the label for the point the row index is referring to. The latter part is trivial, but computing the indices of the k smallest distances has me stumped. Python has easy ways to do something like this, but the bare-bones nature of C has gotten me a bit frustrated. I'd appreciate some pointers (no pun intended) on how to go about this and any helpful functions C might have.
Without knowing k, and assuming that it can be variable, the simplest way to do this would be to:
Organize each element in a structure which holds the original column index.
Sort each row of the matrix in ascending order and take the first k elements of that row.
#include <stdlib.h>

struct item {
    unsigned value;
    size_t index;
};

int compare_items(const void *a, const void *b) {
    const struct item *item_a = a;
    const struct item *item_b = b;
    if (item_a->value < item_b->value)
        return -1;
    if (item_a->value > item_b->value)
        return 1;
    return 0;
}

// Your matrix, with M rows and N columns:
struct item matrix[M][N];

/* Populate the matrix... make sure that each index is set,
 * e.g. matrix[0][0] has index = 0.
 */

size_t i, j;
for (i = 0; i < M; i++) {
    qsort(matrix[i], N, sizeof(struct item), compare_items);
    /* Now the i-th row is sorted and you can take a look
     * at the first k elements of the row.
     */
    for (j = 0; j < k; j++) {
        // Do something with matrix[i][j].index ...
    }
}
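For instance, assuming the raw distances already sit in a plain array dist[M][N] (a hypothetical name, not from the question), the item matrix could be filled like this before sorting:
for (size_t i = 0; i < M; i++) {
    for (size_t j = 0; j < N; j++) {
        matrix[i][j].value = dist[i][j];
        matrix[i][j].index = j;   // remember the original column
    }
}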

Applying a function on sorted array

Taken from the google interview question here
Suppose that you have a sorted array of integers (positive or negative). You want to apply a function of the form f(x) = a * x^2 + b * x + c to each element x of the array such that the resulting array is still sorted. Implement this in Java or C++. The input are the initial sorted array and the function parameters (a, b and c).
Do you think we can do it in-place with less than O(n log(n)) time where n is the array size (e.g. apply a function to each element of an array, after that sort the array)?
I think this can be done in linear time. Because the function is quadratic it will form a parabola, i.e. the values decrease (assuming a positive value for 'a') down to some minimum point and then increase after it. So the algorithm should iterate over the sorted values until we reach/pass the minimum point of the function (which can be determined by a simple differentiation) and then, for each value after the minimum, it should just walk backward through the earlier values looking for the correct place to insert that value. Using a linked list would allow items to be moved around in-place.
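For example (my own illustration): applying f(x) = x^2 to the sorted input [-3, -1, 2, 4] gives [9, 1, 4, 16]. The values decrease until the vertex at x = 0 is passed and increase afterwards, so the two runs [9, 1] and [4, 16] have to be interleaved to obtain the sorted result [1, 4, 9, 16].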
The quadratic transform can cause part of the values to "fold" over the others. You will have to reverse their order, which can easily be done in-place, but then you will need to merge the two sequences.
In-place merge in linear time is possible, but this is a difficult process, normally out of the scope of an interview question (unless for a Teacher's position in Algorithmics).
Have a look at this solution: http://www.akira.ruc.dk/~keld/teaching/algoritmedesign_f04/Artikler/04/Huang88.pdf
I guess that the main idea is to reserve a part of the array where you allow swaps that scramble the data it contains. You use it to perform partial merges on the rest of the array and in the end you sort back the data. (The merging buffer must be small enough that it doesn't take more than O(N) to sort it.)
If a is > 0, then a minimum occurs at x = -b/(2a), and values will be copied to the output array in forward order from [0] to [n-1]. If a < 0, then a maximum occurs at x = -b/(2a) and values will be copied to the output array in reverse order from [n-1] to [0]. (If a == 0, then if b > 0, do a forward copy, if b < 0, do a reverse copy, If a == b == 0, nothing needs to be done). I think the sorted array can be binary searched for the closest value to -b/(2a) in O(log2(n)) (otherwise it's O(n)). Then this value is copied to the output array and the values before (decrementing index or pointer) and after (incrementing index or pointer) are merged into the output array, taking O(n) time.
#include <limits.h>
#include <stdlib.h>

static void sortArray(int arr[], int n, int A, int B, int C)
{
    // Apply the equation to all elements
    for (int i = 0; i < n; i++)
        arr[i] = A*arr[i]*arr[i] + B*arr[i] + C;

    // Find the maximum element in the resultant array
    int index = -1;
    int maximum = INT_MIN;
    for (int i = 0; i < n; i++)
    {
        if (maximum < arr[i])
        {
            index = i;
            maximum = arr[i];
        }
    }

    // Use the maximum element as a break point
    // and merge both subarrays using the simple
    // merge step of merge sort
    int i = 0, j = n-1;
    int *new_arr = malloc(n * sizeof(int));
    int k = 0;
    while (i < index && j > index)
    {
        if (arr[i] < arr[j])
            new_arr[k++] = arr[i++];
        else
            new_arr[k++] = arr[j--];
    }

    // Merge remaining elements
    while (i < index)
        new_arr[k++] = arr[i++];
    while (j > index)
        new_arr[k++] = arr[j--];
    new_arr[n-1] = maximum;

    // Modify the original array
    for (int p = 0; p < n; p++)
        arr[p] = new_arr[p];
    free(new_arr);
}
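A minimal driver I added to exercise the function above; note that, as written, it uses the maximum as the fold point, so this example uses a negative leading coefficient so that the transformed array rises and then falls:
#include <stdio.h>

int main(void)
{
    int arr[] = { -3, -1, 2, 4 };
    int n = sizeof arr / sizeof arr[0];

    // f(x) = -x^2: transformed values are [-9, -1, -4, -16]
    sortArray(arr, n, -1, 0, 0);

    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);   // prints: -16 -9 -4 -1
    printf("\n");
}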

How to find a subset of a 2d array by selecting rows and columns in c?

In an (m by n) array stored as double *d (column major), what is the fastest way of selecting a range of rows and/or columns:
double *filter(double *mat, int m, int n, int rows[], int cols[]);
invoked as:
double *B;
int rows[]= {1,3,5}; int cols[]={2,4};
B = filter(A, 5, 4, rows, cols);
which is expected to return a 3-by-2 subset of A consisting of elements (1,2), (1,4), (3,2)...
C provides no native support for this, so you'll have to find a library that supports it, or define it by hand.
pseudocode:
a=length of rows // no built in c functionality for this either,
b=length of cols // use a sentinel, or pass to the function
nmat = allocate (sizeof(double)*a*b) on the heap
for (i=0; i<a; ++i)
for (j=0; j<b; ++j)
// Note the column major storage convention here (bloody FORTRAN!)
nmat[i + j*b] = mat[ rows[i] + cols[j]*n ];
return nmat
// caller responsible for freeing the allocated memeory
I've commented on the two big problems you face.
I'm pretty sure that m and n should be the dimensions of the new array (i.e. m should be the number of items in rows and n should be the number of items in cols), not the dimensions of the source array as seems to be the case in your example. First, you don't need to know the dimensions of the source array if you already know which indices to pick from it (and are presumably allowed to assume that those will be valid). Second, if you don't know the sizes of rows and cols, they are useless, since you can't iterate over them.
So under that assumption the algorithm could look like this:
Allocate memory for an m x n array
For each i in 0..m, j in 0..n
B[i,j] = A[ rows[i], cols[j] ];
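A concrete C sketch of the same idea; the leading-dimension parameter ld and the explicit length parameters nrows/ncols are my additions, since the original prototype does not carry the lengths of the index arrays:
#include <stdlib.h>

// mat is column major with ld rows in the source; rows/nrows and cols/ncols
// select the submatrix. The caller frees the returned block.
double *filter(const double *mat, int ld,
               const int rows[], int nrows,
               const int cols[], int ncols)
{
    double *sub = malloc((size_t)nrows * ncols * sizeof *sub);
    if (!sub)
        return NULL;
    for (int j = 0; j < ncols; j++)
        for (int i = 0; i < nrows; i++)
            // column major: element (r, c) of an ld-row matrix lives at r + c*ld
            sub[i + (size_t)j*nrows] = mat[rows[i] + (size_t)cols[j]*ld];
    return sub;
}
Passing the lengths explicitly avoids the sentinel bookkeeping mentioned in the pseudocode, and the result is returned in column-major order as well.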
