I am new to using CUDA and the MAGMA libraries. I'm trying out some functions on a test problem, a 2D heat equation. The code I wrote seemed to work perfectly for grid sizes of 32, 64, and 128, but it produced wrong results for grid sizes of 256 or larger. I am only posting part of the code here, just enough to reproduce the error. Transferring the final matrix to the host and looking at it in MATLAB shows that the second call to magmablas_dgemm introduced errors into the solution.
Can anyone see why this code would break for larger grid sizes?
int main(int argc, char* argv[])
{
    // Get parameters for problem set up
    int side_width = atoi(argv[1]); // assuming square grid, N/32 integer
    double dx = 2.0 / (side_width-1);
    double dt = 0.25 * dx;
    //double Tend = dt*3;// 0.5;

    // create memory pointers for derivative operator matrices and solution matrix
    double* U;
    double* Dleft;
    double* Dright;
    double* dev_U;
    double* dev_Dleft;
    double* dev_Dright;

    // initialize the MAGMA system
    magma_init();
    magma_int_t N = side_width;

    // temp variables required by MAGMA functions
    magma_int_t *piv, info, err;
    piv = (magma_int_t*)malloc(N*sizeof(magma_int_t));

    // Allocate memory for matrices on host and device
    err = magma_dmalloc_cpu(&U, N*N);
    err += magma_dmalloc_cpu(&Dleft, N*N);
    err += magma_dmalloc_cpu(&Dright, N*N);
    err += magma_dmalloc(&dev_U, N*N);
    err += magma_dmalloc(&dev_Dleft, N*N);
    err += magma_dmalloc(&dev_Dright, N*N);
    if (err){
        printf("error in allocation. err number = %d\n", err);
        exit(1);
    }

    // initialize matrices (not efficient but correct)
    for (int k=0; k<N*N; ++k ){
        U[k] = 1.0;
        Dleft[k] = 0.0;
        Dright[k] = 0.0;
    }

    // create derivative operator matrices
    double a = dt/2.0/dx/dx;
    double b = dt/dx/dx;
    Dleft[0] = 1.0;
    Dleft[N*N-1] = 1.0;
    for (int k=1; k<N-1; ++k) {
        Dleft[k*N + k-1] = -a;
        Dleft[k*N + k] = 1+b;
        Dleft[k*N + k+1] = -a;
        Dright[k*N + k-1] = a;
        Dright[k*N + k] = 1-b;
        Dright[k*N + k+1] = a;
    }

    // Determine block and thread amounts
    int grid_dim = ((side_width + 31)/32);
    int block_dim = 32;
    dim3 gridDim(grid_dim, grid_dim);
    dim3 blockDim(block_dim, block_dim);

    // copy data from host to device
    magma_dsetmatrix(N, N, U, N, dev_U, N);
    magma_dsetmatrix(N, N, Dleft, N, dev_Dleft, N);
    magma_dsetmatrix(N, N, Dright, N, dev_Dright, N);

    // LU factorize the left hand operator matrix
    magma_dgetrf_gpu(N, N, dev_Dleft, N, piv, &info);

    double tn = 0; // time counter
    // needed to take first step outside while loop because of some tricky transpose nonsense happening
    tn += dt;
    // compute explicit step : Uhat=Dright*U^T
    magmablas_dgemm(MagmaTrans, MagmaNoTrans, N, N, N, 1.0, dev_Dright, N, dev_U, N, 0.0, dev_U, N);
    // implicit step solve : Dleft*U=Uhat
    magma_dgetrs_gpu(MagmaTrans, N, N, dev_Dleft, N, piv, dev_U, N, &info);
    // compute explicit step : Uhat=Dright*U^T
    magmablas_dgemm(MagmaTrans, MagmaTrans, N, N, N, 1.0, dev_Dright, N, dev_U, N, 0.0, dev_U, N);

    printf("GPU matrix U at time %3.3f \n ", tn);
    magma_dprint_gpu(16, 16, dev_U, N);

    // copy solution from device to host
    magma_dgetmatrix(N, N, dev_U, N, U, N);

    // write data to file
    char filename[256];
    char str_t[128];
    sprintf(str_t, "%d", N);
    sprintf(filename, "ADI_%s.bin", str_t);
    FILE* fileID = fopen(filename, "wb");
    for (int i=0; i<N*N; ++i){
        fwrite(&U[i], sizeof(double), 1, fileID);
    }
    fclose(fileID);

    free(U);
    free(Dleft);
    free(Dright);
    magma_free(dev_U);
    magma_free(dev_Dleft);
    magma_free(dev_Dright);
    free(piv);
    magma_finalize();
    return 0;
}
To the best of my knowledge, BLAS/LAPACK gemm has never supported in-place operation, i.e.
C := alpha*op( A )*op( B ) + beta*C
cannot be transformed into
A := alpha*op( A )*op( B ) + beta*A
or
B := alpha*op( A )*op( B ) + beta*B
with any guarantee of correctness, even for the canonical case with alpha = 1, beta = 0. If you can follow the Fortran, I would recommend having a look at the reference code from the Dongarra group. That implementation will break if the pointer for the matrix passed as C aliases either A or B.
This is particularly true of multi-threaded or massively parallel BLAS implementations. Most parallel execution environments don't support any sort of strong or fixed execution ordering, so operations which unintentionally work in serial versions of linear algebra routines can break in parallel, because of the missing execution-order guarantee. If a routine in a parallel BLAS or LAPACK implementation doesn't explicitly say it supports in-place operation, don't assume otherwise, because there be dragons and all of that...
Your MAGMA gemm calls only work at small sizes by accident, probably because very small matrix sizes don't expose enough parallelism to hit the correctness problems that arise from aliasing an input and an output pointer. If you change your code so that the inputs and the output are different memory allocations, I suspect the problem will disappear.
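For example, here is a minimal sketch of that change (dev_Uhat is a new allocation I'm introducing for illustration; the MAGMA calls otherwise mirror the ones in the question):
// separate output buffer so gemm's C argument never aliases A or B
double* dev_Uhat;
err = magma_dmalloc(&dev_Uhat, N*N);
// explicit step : Uhat = Dright^T * U, written to the distinct buffer
magmablas_dgemm(MagmaTrans, MagmaNoTrans, N, N, N, 1.0, dev_Dright, N, dev_U, N, 0.0, dev_Uhat, N);
// implicit step solve : Dleft * U = Uhat (dgetrs overwrites its right-hand side in place, which it does support)
magma_dgetrs_gpu(MagmaTrans, N, N, dev_Dleft, N, piv, dev_Uhat, N, &info);
// swap the pointer roles for the next half-step instead of copying
double* swap = dev_U; dev_U = dev_Uhat; dev_Uhat = swap;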
I am doing a project on locating a sound source. I am using three microphones to capture data and then doing cross correlation to find the time difference, which is then used to locate the sound source. The issue, I think, is with the cross correlation in my code. I am doing this in C, and from my understanding an FFT needs to be done, so I am using the FFTW library. The result of my cross correlation is always 0. The microphones are sampled into a circular buffer; after a loud sound, half a buffer of data is captured and written to a text file. After the trigger event occurs, the data is reordered and fast Fourier transformed, then multiplied, and then inverse transformed. I then determine where the peak occurs in the output of the IFFT. Most of the time it is 0, but I am not sure why. So I wrote the data to a text file and imported it into MATLAB, to test the xcorr function with my data. The results of this match my cross correlation code in C, so I am now not sure where the issue lies.
I have added a link to the data I acquired in a text file. For my cross correlation in C, I have the following code.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>
#include <complex.h> /* Standard Library of Complex Numbers */
#include <fftw3.h>

int i, size = 8; // size of the input buffers; the zero-padded FFT length is 2*size

int peak_detec(double a[], int len);
void reverse(double a[]);

// function which returns the index of the largest value in an array of length len
int peak_detec(double a[], int len)
{
    int b;
    double Largest = a[0];
    int Position = 0;
    for (b = 1; b < len; b++)
    {
        if (Largest < a[b])
        {
            Largest = a[b];
            Position = b;
        }
    }
    return Position;
}

// function which reverses the order of the array sent to the function
void reverse(double a[])
{
    int i;
    double temp[size];
    for (i = 1; i < size+1; i++)
    {
        temp[i-1] = a[size-i];
    }
    for (i = 0; i < size; i++)
    {
        a[i] = temp[i];
    }
}

int main(void)
{
    double array[]  = {0.7,0.1,0.1,0.1,0.1,0.1,0.1,0.1}; // my two arrays for FFT
    double array2[] = {0.1,0.1,0.1,0.1,0.1,0.1,0.7,0.1};
    double *out, *out2, *out3; // pointers for the real outputs
    double *double_array;      // zero-padded copies of the inputs
    double *double_array2;
    //****pointers for complex arrays****//
    double complex *out_cpx;
    double complex *out_cpx2;
    double complex *out_cpx3;
    //****defining all the plans for fft****//
    fftw_plan fft;
    fftw_plan ifft;
    fftw_plan fft2;
    fftw_plan ifft2;
    fftw_plan ifft3;
    //****allocating memory****//
    out_cpx  = (double complex*) fftw_malloc(sizeof(double complex)*(2*size));
    out      = (double *) malloc((2*size)*sizeof(double));
    out_cpx2 = (double complex*) fftw_malloc(sizeof(double complex)*(2*size));
    out2     = (double *) malloc((2*size)*sizeof(double));
    out_cpx3 = (double complex*) fftw_malloc(sizeof(double complex)*(2*size));
    out3     = (double *) malloc((2*size)*sizeof(double));
    double_array  = (double*) malloc(sizeof(double)*(2*size));
    double_array2 = (double*) malloc(sizeof(double)*(2*size));
    //****setting up fftw and ifft plans****//
    fft   = fftw_plan_dft_r2c_1d((2*size), double_array,  out_cpx,  FFTW_ESTIMATE); // fft of input 1
    ifft  = fftw_plan_dft_c2r_1d((2*size), out_cpx,  out,  FFTW_ESTIMATE);          // ifft of spectrum 1
    fft2  = fftw_plan_dft_r2c_1d((2*size), double_array2, out_cpx2, FFTW_ESTIMATE); // fft of input 2
    ifft2 = fftw_plan_dft_c2r_1d((2*size), out_cpx2, out2, FFTW_ESTIMATE);          // ifft of spectrum 2
    ifft3 = fftw_plan_dft_c2r_1d((2*size), out_cpx3, out3, FFTW_ESTIMATE);          // ifft of the cross spectrum
    // copy each input into the first half of its padded buffer and zero the
    // second half; the padding must start at index size (not size+1) and
    // cover size elements, otherwise element [size] is left uninitialized
    memcpy(double_array,  array,  sizeof(double)*size);
    memset(double_array  + size, 0, sizeof(double)*size);
    memcpy(double_array2, array2, sizeof(double)*size);
    memset(double_array2 + size, 0, sizeof(double)*size);
    //****executing fft****//
    fftw_execute(fft);
    fftw_execute(fft2);
    // cross correlation in the frequency domain: multiply one spectrum by the
    // CONJUGATE of the other (a plain product gives circular convolution).
    // An r2c transform of length 2*size only produces size+1 valid bins.
    double scale = 1.0/(2*size);
    for (i = 0; i < size+1; i++)
    {
        out_cpx3[i] = out_cpx[i]*conj(out_cpx2[i])*scale;
    }
    //****executing ifft****//
    // note: c2r transforms destroy their complex input, so these must run
    // only after the cross spectrum out_cpx3 has been formed
    fftw_execute(ifft);
    fftw_execute(ifft2);
    fftw_execute(ifft3);
    //****find the index of the largest value of the array****//
    int Position = peak_detec(out3, 2*size);
    printf("Position of peak value: %d\n", Position);
    //****free the memory****//
    fftw_destroy_plan(fft);
    fftw_destroy_plan(fft2);
    fftw_destroy_plan(ifft);
    fftw_destroy_plan(ifft2);
    fftw_destroy_plan(ifft3);
    fftw_free(out_cpx);
    fftw_free(out_cpx2);
    fftw_free(out_cpx3);
    free(double_array);
    free(double_array2);
    free(out);
    free(out2);
    free(out3);
    return 0;
}
Can anyone help with my issue? I am trying to do cross correlation with the data in the shared link, but it does not seem to work.
It is my first time asking a question on here. I think I have given enough information, but if not I can try to provide more if something does not make sense.
I have many small 2D arrays (e.g. M x 32 x 40) and fewer larger 2D arrays (e.g. N x 200 x 300).
I would like to 'put' the smaller matrices at indices n,i,j in the larger arrays (with the upper left corner of the small array landing at index i,j of large array n). These small arrays could overlap, and they should be aggregated by functions that are associative and commutative, say plus, multiply, etc.
I figure this is a pretty basic scenario that many people should have come across, right? Is there a CUDA implementation that supports this in an efficient way?
Typical values M = 10^6, N = 10^4
This is a reduction operation.
In addition to what is expressed in the comments, I'll make the assumption that the distribution of the M matrices, in terms of which of the N matrices they belong to, is relatively uniform, i.e. evenly distributed. For the dimensions given (M/N = 10^6/10^4), this means there will be approximately 100 of the M matrices intended to update N matrix 0, approximately 100 for N matrix 1, and so on. Furthermore, if we were to inspect the n array, we would observe a uniformly random pattern of indices (i.e. no clumping or grouping).
Given that, in what may be a first for me, I'll suggest a lock/critical section algorithm, using the plumbing from here. Each threadblock will take one of the M arrays, and attempt to acquire a lock so that it can update the appropriate N array. When finished, release the lock.
I considered other approaches as well, some of which are evident in the code. In any event, for the stated conditions, the lock based approach had a kernel runtime of about 40ms on my V100 GPU, which was the best I observed.
I would also note that the stated dimensions result in a data working set of ~8GB. Not that that is a problem, just be aware if running this code as-is on your laptop GPU.
Here's an example:
$ cat t34.cu
#include <iostream>
#include <cstdlib>

const int N = 10000;
const int M = 1000000;
const int Mx = 32;
const int My = 40;
const int Nx = 200;
const int Ny = 300;
const int nTPB = 256;

template <typename T>
__host__ __device__
T reduction_op(T &a, const T &b){ return a+b; }

// "naive" kernel k: one threadblock per N matrix, scanning all M matrices
template <typename T>
__global__ void k(const T * __restrict__ M, T * __restrict__ N, const int * __restrict__ n, const int * __restrict__ i, const int * __restrict__ j, const int num_M){
    for (int ii = 0; ii < num_M; ii++){
        if (n[ii] == blockIdx.x) {
            for (int jj = threadIdx.x; jj < Mx*My; jj += blockDim.x){
                int y = jj/Mx;
                int x = jj - (y*Mx);
                N[blockIdx.x*Nx*Ny + i[ii] + (j[ii]+y)*Nx + x] = reduction_op(
                    N[blockIdx.x*Nx*Ny + i[ii] + (j[ii]+y)*Nx + x], M[ii*Mx*My + y*Mx + x]);}
        }
        __syncthreads();}
}

// shared-memory chunked kernel ki
// assumes Ny is whole-number divisible by sl
template <typename T>
__global__ void ki(const T * __restrict__ M, T * __restrict__ N, const int * __restrict__ n, const int * __restrict__ i, const int * __restrict__ j, const int num_M, const int sl){
    extern __shared__ T s[];
    for (int c = 0; c < Ny; c+=sl){ // process per chunk of N array
        // load shared
        for (int t = threadIdx.x; t < sl*Nx; t += blockDim.x) s[t] = N[blockIdx.x*Nx*Ny + c*Nx + t];
        __syncthreads();
        // process chunk stack
        for (int ii = 0; ii < num_M; ii++){ // iterate through "stack"
            if ((n[ii] == blockIdx.x) && (j[ii] < (c+sl)) && ((j[ii]+My) > c)) {
                for (int jj = threadIdx.x; jj < sl*Mx; jj += blockDim.x){
                    int y = jj/Mx;
                    int x = jj - (y*Mx);
                    //y += c;
                    if ((y+c >= j[ii]) && (y+c < (j[ii]+My)))
                        s[y*Nx+x+i[ii]] = reduction_op(s[y*Nx+x+i[ii]], M[ii*Mx*My + (y+c-j[ii])*Mx + x]);}
            }
            __syncthreads();}
        // save shared
        for (int t = threadIdx.x; t < sl*Nx; t += blockDim.x) N[blockIdx.x*Nx*Ny + c*Nx + t] = s[t];
    }
}

// atomic kernel ka: one threadblock per M matrix
template <typename T>
__global__ void ka(const T * __restrict__ M, T * __restrict__ N, const int * __restrict__ n, const int * __restrict__ i, const int * __restrict__ j, const int num_M){
    int x = threadIdx.x;
    for (int y = threadIdx.y; y < My; y += blockDim.y)
        atomicAdd(N+n[blockIdx.x]*Nx*Ny+(j[blockIdx.x]+y)*Nx+i[blockIdx.x]+x, M[blockIdx.x*Mx*My+y*Mx+x]);
}

__device__ void acquire_semaphore(volatile int *lock){
    while (atomicCAS((int *)lock, 0, 1) != 0);
}

__device__ void release_semaphore(volatile int *lock){
    *lock = 0;
    __threadfence();
}

// lock-based kernel kl: one threadblock per M matrix, guarded by a per-N lock
template <typename T>
__global__ void kl(const T * __restrict__ M, T * __restrict__ N, const int * __restrict__ n, const int * __restrict__ i, const int * __restrict__ j, const int num_M, int * __restrict__ locks){
    if ((threadIdx.x == 0) && (threadIdx.y == 0))
        acquire_semaphore(locks+n[blockIdx.x]);
    __syncthreads();
    // begin critical section
    int x = threadIdx.x;
    for (int y = threadIdx.y; y < My; y += blockDim.y){
        N[n[blockIdx.x]*Nx*Ny + i[blockIdx.x] + (j[blockIdx.x]+y)*Nx + x] = reduction_op(
            N[n[blockIdx.x]*Nx*Ny + i[blockIdx.x] + (j[blockIdx.x]+y)*Nx + x], M[blockIdx.x*Mx*My + y*Mx + x]);}
    // end critical section
    __threadfence(); // not strictly necessary for the lock, but to make any global updates in the critical section visible to other threads in the grid
    __syncthreads();
    if ((threadIdx.x == 0) && (threadIdx.y == 0))
        release_semaphore(locks+n[blockIdx.x]);
}

typedef float mt;

int main(){
    mt *d_M, *h_M, *d_N, *h_N, *r1, *r2;
    int *d_n, *h_n, *d_i, *h_i, *d_j, *h_j;
    h_M = new mt[M*Mx*My];
    h_N = new mt[N*Nx*Ny];
    r1 = new mt[N*Nx*Ny];
    r2 = new mt[N*Nx*Ny];
    h_n = new int[M];
    h_i = new int[M];
    h_j = new int[M];
    cudaMalloc(&d_M, M*Mx*My*sizeof(mt));
    cudaMalloc(&d_N, N*Nx*Ny*sizeof(mt));
    cudaMalloc(&d_n, M*sizeof(int));
    cudaMalloc(&d_i, M*sizeof(int));
    cudaMalloc(&d_j, M*sizeof(int));
    for (int i = 0; i < M; i++){
        h_n[i] = rand()%N;
        h_i[i] = rand()%(Nx - Mx);
        h_j[i] = rand()%(Ny - My);}
    for (int i = 0; i < N*Nx*Ny; i++) h_N[i] = (mt)(i%3);
    for (int i = 0; i < M*Mx*My; i++) h_M[i] = (mt)((i%3)+1);
    cudaMemcpy(d_M, h_M, M*Mx*My*sizeof(mt), cudaMemcpyHostToDevice);
    cudaMemcpy(d_N, h_N, N*Nx*Ny*sizeof(mt), cudaMemcpyHostToDevice);
    cudaMemcpy(d_n, h_n, M*sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_i, h_i, M*sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_j, h_j, M*sizeof(int), cudaMemcpyHostToDevice);
#ifdef USE_SINGLE_N
    cudaMemset(d_n, 0, M*sizeof(int));
#endif
#if 0
    const int sl = 40;
    const int sb = sl * Nx * sizeof(mt);
    ki<<<N, nTPB, sb>>>(d_M, d_N, d_n, d_i, d_j, M, sl);
    cudaMemcpy(r2, d_N, N*Nx*Ny*sizeof(mt), cudaMemcpyDeviceToHost);
#endif
    dim3 block(Mx, 8);
#if 0
    ka<<<M, block>>>(d_M, d_N, d_n, d_i, d_j, M);
    cudaMemcpy(r2, d_N, N*Nx*Ny*sizeof(mt), cudaMemcpyDeviceToHost);
#endif
    int *d_locks;
    cudaMalloc(&d_locks, N*sizeof(int));
    cudaMemset(d_locks, 0, N*sizeof(int));
    kl<<<M, block>>>(d_M, d_N, d_n, d_i, d_j, M, d_locks);
    cudaMemcpy(r2, d_N, N*Nx*Ny*sizeof(mt), cudaMemcpyDeviceToHost);
    cudaMemcpy(d_N, h_N, N*Nx*Ny*sizeof(mt), cudaMemcpyHostToDevice);
    k<<<N, nTPB>>>(d_M, d_N, d_n, d_i, d_j, M);
    cudaMemcpy(r1, d_N, N*Nx*Ny*sizeof(mt), cudaMemcpyDeviceToHost);
    for (int i = 0; i < N*Nx*Ny; i++) if (r1[i] != r2[i]) {std::cout << "mismatch at: " << i << " was: " << r2[i] << " should be: " << r1[i] << std::endl; return 0;}
}
$ nvcc -o t34 t34.cu -O3 -lineinfo
$ nvprof ./t34
==17970== NVPROF is profiling process 17970, command: ./t34
==17970== Profiling application: ./t34
==17970== Profiling result:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 34.57% 3.09036s 2 1.54518s 1.54294s 1.54742s [CUDA memcpy DtoH]
33.18% 2.96615s 1 2.96615s 2.96615s 2.96615s void k<float>(float const *, float*, int const *, int const *, int const *, int)
31.81% 2.84401s 6 474.00ms 1.4255ms 1.27035s [CUDA memcpy HtoD]
0.45% 39.949ms 1 39.949ms 39.949ms 39.949ms void kl<float>(float const *, float*, int const *, int const *, int const *, int, int*)
0.00% 2.1120us 1 2.1120us 2.1120us 2.1120us [CUDA memset]
API calls: 96.13% 8.94558s 8 1.11820s 1.9203ms 4.51030s cudaMemcpy
3.60% 334.59ms 6 55.765ms 277.58us 330.37ms cudaMalloc
0.15% 13.752ms 8 1.7190ms 1.3268ms 2.2025ms cuDeviceTotalMem
0.11% 10.472ms 808 12.959us 172ns 728.50us cuDeviceGetAttribute
0.01% 997.81us 8 124.73us 100.93us 176.73us cuDeviceGetName
0.00% 69.047us 2 34.523us 32.349us 36.698us cudaLaunchKernel
0.00% 68.013us 1 68.013us 68.013us 68.013us cudaMemset
0.00% 46.172us 8 5.7710us 1.8940us 23.025us cuDeviceGetPCIBusId
0.00% 8.5060us 16 531ns 260ns 1.5030us cuDeviceGet
0.00% 3.7870us 8 473ns 229ns 881ns cuDeviceGetUuid
0.00% 3.3980us 3 1.1320us 610ns 2.0780us cuDeviceGetCount
$
Extended discussion:
On performance:
This is a memory bound algorithm. Therefore, we can estimate optimal kernel performance by determining the minimum number of memory reads and writes needed to perform the operation, then dividing by the available memory bandwidth, to arrive at an optimal (lower-bound) kernel duration. Unfortunately the minimum number of reads and writes depends on the positioning of the M matrices, so it cannot easily be determined in general without inspecting the n, i, and j arrays.
However we can look for another way to estimate. Another approach to estimation would be to observe that each M matrix update will require reading 2 values and writing one value. If we then use that as our estimate, we come up with M*Mx*My*3*sizeof(element_of_M)/GPU_memory_bandwidth. On my V100 (~700GB/s BW) this works out to about 20ms lower bound on kernel duration.
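Plugging in the numbers for this example (float elements, so 4 bytes each): 10^6 * 32 * 40 * 3 * 4 bytes ≈ 15.4 GB, and 15.4 GB / 700 GB/s ≈ 22 ms, which is where the ~20 ms figure comes from.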
On approaches considered:
"naive" approach, kernel k: Each threadblock is responsible for one of the N matrices, and iterates through all the M matrices, inspecting n to determine whether each M matrix updates the assigned N matrix. This gives a non-optimal run time of ~3s, but its performance seems mostly invariant to the distribution of n, and it can use an "arbitrary" reduction op.
attempt at "optimal" approach, kernel ki: Each threadblock is likewise responsible for one of the N matrices, but only loads a chunk of that matrix at a time. It then proceeds through the M matrices updating that chunk, similar to the k kernel. This necessitates more loops through the matrices, but should "almost" only load or save each global memory item the minimum number of times necessary. Nevertheless, the run time is really long, at ~40s.
atomic approach, kernel ka: Each threadblock is responsible for one of the M matrices, and atomically updates the relevant N matrix. Simplicity. And the runtime is "fast" at ~40ms. (The atomic approach may be even faster than this for non-uniform distributions of n. I witnessed kernel runtimes as low as 8ms!) However this is not readily generalizable to operations that don't have a native atomic equivalent, such as multiply (though see the atomicCAS sketch after this list).
lock based approach, kernel kl: Like the atomic approach, each threadblock will be responsible for one of the M matrices, and will first acquire a lock on the relevant N matrix. The lock means that atomics are not necessary. For the uniformly distributed n case presented, it has about the same performance as the atomic case. It has the benefit that it can handle other reduction ops, such as multiply, readily. A disadvantage is that in the presence of non-uniformly-random distribution in n the performance can suffer, with a worst case in the ballpark of the naive kernel (3-5s).
Overall, if the requirement for an arbitrary reduction operator can be dropped (if only addition is needed, for example), then the atomic method may be best.
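By the way, if an "atomic multiply" were really needed, one could be built from the standard atomicCAS retry-loop pattern (this sketch is mine, not part of the tested code above; the same pattern is shown in the CUDA programming guide for building atomics out of atomicCAS):
__device__ float atomicMul(float *addr, float val){
    unsigned int *addr_as_ui = (unsigned int *)addr;
    unsigned int old = *addr_as_ui, assumed;
    do {
        assumed = old;
        // attempt to swap in (current value * val); retry if another thread won
        old = atomicCAS(addr_as_ui, assumed, __float_as_uint(val * __uint_as_float(assumed)));
    } while (assumed != old);
    return __uint_as_float(old);
}
Colliding updates serialize through the retry loop, so expect it to be slower than a native atomicAdd, but it would let the ka-style kernel handle multiply without locks.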
Assume that the dimensions are very large (up to 1 billion elements in a matrix). How would I implement a cache-oblivious algorithm for matrix-vector product? Based on Wikipedia, I will need to recursively divide and conquer; however, I feel like there would be a lot of overhead. Would it be efficient to do so?
Follow up question and answer: OpenMP with matrices and vectors
So the answer to the question, "how do I make this basic linear algebra operation fast", is always and everywhere to find and link to a tuned BLAS library for your platform. E.g., GotoBLAS (whose work is being continued in OpenBLAS), or the slower autotuned ATLAS, or commercial packages like Intel's MKL. Linear algebra is so fundamental to so many other operations that enormous amounts of effort go into optimizing these packages for various platforms, and there's just no chance you're going to come up with something in a few afternoons' work that will compete. The particular subroutine call you're looking for, for general dense matrix-vector multiplication, is SGEMV/DGEMV/CGEMV/ZGEMV.
Cache-oblivious algorithms, or autotuning, are for when you can't be bothered tuning for the specific cache architecture of your system - which might be fine, normally, but since people are willing to do that tuning for BLAS routines and then make the results available, you're best off just using those routines.
The memory access pattern for GEMV is straightforward enough that you don't really need divide and conquer (the same goes for the standard case of matrix transpose) - you just find the cache blocking size and use it (see the blocked sketch after the code below). In GEMV (y = Ax), you still have to scan through the entire matrix once, so there's nothing to be done for reuse (and thus effective cache use) there, but you can try to reuse x as much as possible so you load it once instead of (number of rows) times - and you still want access to A to be cache friendly. So the obvious cache blocking thing to do is to break along blocks:
A x -> [ A11 | A12 ] [ x1 ]   [ A11 x1 + A12 x2 ]
       [ A21 | A22 ] [ x2 ] = [ A21 x1 + A22 x2 ]
And you can certainly do that recursively. But doing a naive implementation, it's slower than the simple double-loop, and way slower than a proper SGEMV library call:
$ ./gemv
Testing for N=4096
Double Loop: time = 0.024995, error = 0.000000
Divide and conquer: time = 0.299945, error = 0.000000
SGEMV: time = 0.013998, error = 0.000000
The code follows:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include "mkl.h"

float **alloc2d(int n, int m) {
    float *data = malloc(n*m*sizeof(float));
    float **array = malloc(n*sizeof(float *));
    for (int i=0; i<n; i++)
        array[i] = &(data[i*m]);
    return array;
}

void tick(struct timeval *t) {
    gettimeofday(t, NULL);
}

/* returns time in seconds from now to time described by t */
double tock(struct timeval *t) {
    struct timeval now;
    gettimeofday(&now, NULL);
    return (double)(now.tv_sec - t->tv_sec) + ((double)(now.tv_usec - t->tv_usec)/1000000.);
}

float checkans(float *y, int n) {
    float err = 0.;
    for (int i=0; i<n; i++)
        err += (y[i] - 1.*i)*(y[i] - 1.*i);
    return err;
}

/* assume square matrix; base case fixed so that y[row] accumulates
   a[row][col]*x[col] (as originally posted the indices were transposed,
   which happened to pass the symmetric identity-matrix test) */
void divConquerGEMV(float **a, float *x, float *y, int n,
                    int startr, int endr, int startc, int endc) {
    int nr = endr - startr + 1;
    int nc = endc - startc + 1;
    if (nr == 1 && nc == 1) {
        y[startr] += a[startr][startc] * x[startc];
    } else {
        int midr = (endr + startr + 1)/2;
        int midc = (endc + startc + 1)/2;
        divConquerGEMV(a, x, y, n, startr, midr-1, startc, midc-1);
        divConquerGEMV(a, x, y, n, midr, endr, startc, midc-1);
        divConquerGEMV(a, x, y, n, startr, midr-1, midc, endc);
        divConquerGEMV(a, x, y, n, midr, endr, midc, endc);
    }
}

int main(int argc, char **argv) {
    const int n=4096;
    float **a = alloc2d(n,n);
    float *x = malloc(n*sizeof(float));
    float *y = malloc(n*sizeof(float));
    struct timeval clock;
    double eltime;

    printf("Testing for N=%d\n", n);

    for (int i=0; i<n; i++) {
        x[i] = 1.*i;
        for (int j=0; j<n; j++)
            a[i][j] = 0.;
        a[i][i] = 1.;
    }

    /* naive double loop */
    tick(&clock);
    for (int i=0; i<n; i++) {
        y[i] = 0.;
        for (int j=0; j<n; j++) {
            y[i] += a[i][j]*x[j];
        }
    }
    eltime = tock(&clock);
    printf("Double Loop: time = %lf, error = %f\n", eltime, checkans(y,n));

    for (int i=0; i<n; i++) y[i] = 0.;

    /* naive divide and conquer */
    tick(&clock);
    divConquerGEMV(a, x, y, n, 0, n-1, 0, n-1);
    eltime = tock(&clock);
    printf("Divide and conquer: time = %lf, error = %f\n", eltime, checkans(y,n));

    /* decent GEMV implementation */
    tick(&clock);
    float alpha = 1.;
    float beta = 0.;
    int incrx = 1;
    int incry = 1;
    char trans = 'N';
    sgemv(&trans, &n, &n, &alpha, &(a[0][0]), &n, x, &incrx, &beta, y, &incry);
    eltime = tock(&clock);
    printf("SGEMV: time = %lf, error = %f\n", eltime, checkans(y,n));

    return 0;
}
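Following up on the "find the cache blocking size and use it" remark above, here is a minimal non-recursive blocked GEMV sketch I'll add for illustration (BS = 256 is an arbitrary, untuned block size; a real BLAS tunes this per architecture):
#define BS 256
/* loop-blocked y = A*x: each BS-element block of x is loaded once and
   reused against every row of A while it stays hot in cache */
void blockedGEMV(float **a, const float *x, float *y, int n) {
    for (int i=0; i<n; i++) y[i] = 0.;
    for (int jb=0; jb<n; jb+=BS) {
        int jend = (jb+BS < n) ? jb+BS : n;
        for (int i=0; i<n; i++) {
            float sum = 0.;
            for (int j=jb; j<jend; j++)
                sum += a[i][j]*x[j];
            y[i] += sum;
        }
    }
}
It drops into the benchmark above as a fourth case with the same calling pattern as the double loop; it won't beat the library call, but it shows the blocking idea without any recursion overhead.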
I'm generating two matrices using the following function (note some code is omitted):
srand(2007);
randomInit(h_A_data, size_A);
void randomInit(float* data, int size)
{
    int i;
    for (i = 0; i < size; ++i){
        data[i] = rand() / (float)RAND_MAX;
    }
}
This is called for matrix A and B. It populates the matrices with 0.something values, e.g. 0.748667. I then perform a matrix multiplication on the CPU and compare the result to a GPU implementation via OpenCL. The resulting matrix has values in the range 20.something, e.g. 23.472757. Both the CPU and the GPU give the same result. The CPU implementation is taken from the CUDA toolkit distribution by NVIDIA:
void computeGold(float* C, const float* A, const float* B, unsigned int hA, unsigned int wA, unsigned int wB)
{
    unsigned int i;
    unsigned int j;
    unsigned int k;
    for (i = 0; i < hA; ++i)
        for (j = 0; j < wB; ++j) {
            double sum = 0;
            for (k = 0; k < wA; ++k) {
                double a = A[i * wA + k];
                double b = B[k * wB + j];
                sum += a * b;
            }
            C[i * wB + j] = (float)sum;
        }
}
The weird thing is, all three matrices in memory are of the same size, i.e. sizeof(float)*size_A (or *size_B for matrix B, etc.). Yet when I dump them to disk, the file for the result stored in matrix C (the multiplied matrix) is bigger than the files for matrix A and B.
Even more critical, for my application I'm transferring these over a network via a socket. In terms of the raw number of bytes, all matrices are the same, and yet it takes longer to transfer matrix C over the network. The problem gets worse for larger matrix sizes. Why is this?
UPDATE/EDIT:
fprintf(matrix_c_file, "\n\nMatrix C\n");
for(i = 0; i < size_C; i++)
{
    fprintf(matrix_c_file, "%f ", h_C_data[i]);
}
fprintf(matrix_c_file, "\n");
When matrix A and B contain only zeros, all three (matrix A, B and C) are the same size on disk.
I think that lijie has the correct (albeit terse) answer in the comments. The %f format specifier can result in a string with variable width. Consider the following C code:
printf("%f\n", 0.0);
printf("%f\n", 3.1415926535897932384626433);
printf("%f\n", 20.53);
printf("%f\n", 20.5e38);
which produces:
0.000000
3.141593
20.530000
2050000000000000019963732141023730597888.000000
All of the output has the same number of digits after the decimal point (6 by default), but a variable number to the left of it. If you need the textual representation of your matrix to be a consistent size and you don't mind sacrificing some precision, you can use the %e format specifier instead to force an exponential representation like 2.345000e+12.
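For example, repeating the prints above with %e:
printf("%e\n", 0.0);                          /* 0.000000e+00 */
printf("%e\n", 3.1415926535897932384626433);  /* 3.141593e+00 */
printf("%e\n", 20.53);                        /* 2.053000e+01 */
printf("%e\n", 20.5e38);                      /* 2.050000e+39 */
Every non-negative value now occupies exactly the same number of characters, so the dumped files (and the socket transfers) for A, B, and C would all be the same size.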
I would like to be able to compute the inverse of a general NxN matrix in C/C++ using LAPACK.
My understanding is that the way to do an inversion in LAPACK is by using the dgetri function; however, I can't figure out what all of its arguments are supposed to be.
Here is the code I have:
void dgetri_(int* N, double* A, int* lda, int* IPIV, double* WORK, int* lwork, int* INFO);

int main(){
    double M[9] = {
        1,2,3,
        4,5,6,
        7,8,9
    };
    return 0;
}
How would you complete it to obtain the inverse of the 3x3 matrix M using dgetri_?
Here is the working code for computing the inverse of a matrix using LAPACK in C/C++:
#include <cstdio>

extern "C" {
    // LU decomposition of a general matrix
    void dgetrf_(int* M, int *N, double* A, int* lda, int* IPIV, int* INFO);
    // generate inverse of a matrix given its LU decomposition
    void dgetri_(int* N, double* A, int* lda, int* IPIV, double* WORK, int* lwork, int* INFO);
}

void inverse(double* A, int N)
{
    int *IPIV = new int[N];
    int LWORK = N*N;
    double *WORK = new double[LWORK];
    int INFO;
    dgetrf_(&N, &N, A, &N, IPIV, &INFO);
    dgetri_(&N, A, &N, IPIV, WORK, &LWORK, &INFO);
    delete[] IPIV;
    delete[] WORK;
}

int main(){
    double A[2*2] = {
        1,2,
        3,4
    };
    inverse(A, 2);
    printf("%f %f\n", A[0], A[1]);
    printf("%f %f\n", A[2], A[3]);
    return 0;
}
First, M has to be a two-dimensional array, like double M[3][3]. Your array is, mathematically speaking, a 1x9 vector, which is not invertible.
N is a pointer to an int for the order of the matrix - in this case, N=3.

A is a pointer to the LU factorization of the matrix, which you can get by running the LAPACK routine dgetrf.

LDA is an integer for the "leading element" of the matrix, which lets you pick out a subset of a bigger matrix if you want to just invert a little piece. If you want to invert the whole matrix, LDA should just be equal to N.

IPIV is the pivot indices of the matrix; in other words, it's a list of instructions of what rows to swap in order to invert the matrix. IPIV should be generated by the LAPACK routine dgetrf.

LWORK and WORK are the "workspaces" used by LAPACK. If you are inverting the whole matrix, LWORK should be an int equal to N^2, and WORK should be a double array with LWORK elements.

INFO is just a status variable to tell you whether the operation completed successfully. Since not all matrices are invertible, I would recommend that you send this to some sort of error-checking system. INFO=0 for a successful operation, INFO=-i if the i'th argument had an incorrect input value, and INFO > 0 if the matrix is not invertible.
So, for your code, I would do something like this:
int main(){
    int N = 3;
    int LWORK = 9;
    double M[3][3] = { {1, 2, 3},
                       {4, 5, 6},
                       {7, 8, 9} };
    int pivotArray[3]; // since our matrix has three rows
    int errorHandler;
    double lapackWorkspace[9];
    // dgetrf(M,N,A,LDA,IPIV,INFO) LU-factorizes an M by N matrix called A
    // (with leading dimension LDA), sending the pivot indices to IPIV and
    // spitting error information to INFO. Every argument, including the
    // integer sizes, is passed by pointer for Fortran's benefit.
    dgetrf_(&N, &N, M[0], &N, pivotArray, &errorHandler);
    // some sort of error check
    dgetri_(&N, M[0], &N, pivotArray, lapackWorkspace, &LWORK, &errorHandler);
    // another error check
}
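A side note on LWORK, beyond what the answers here show: rather than guessing a workspace size, LAPACK routines that take WORK/LWORK accept a workspace query. Pass LWORK = -1 and the routine performs no computation, instead writing the optimal workspace size into WORK[0]. A minimal sketch (optimal_lwork is a hypothetical helper of mine; it assumes the dgetri_ prototype declared above):
int optimal_lwork(double *A, int n)
{
    int lwork = -1;        /* -1 requests a workspace-size query */
    int info;
    int ipiv_unused[1];    /* not referenced during the query */
    double wkopt;
    dgetri_(&n, A, &n, ipiv_unused, &wkopt, &lwork, &info);
    return (int)wkopt;     /* optimal LWORK is reported in WORK[0] */
}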
Here is a working version of the above, using the OpenBLAS interface to LAPACKE.
Link with the openblas library (LAPACKE is already included):
#include <stdio.h>
#include "cblas.h"
#include "lapacke.h"

// in-place inverse of the n x n matrix A.
// matrix A is column major (i.e. stored column after column, *not* in C[][] row-major order)
// returns:
//   ret = 0 on success
//   ret < 0 illegal argument value
//   ret > 0 singular matrix
lapack_int matInv(double *A, unsigned n)
{
    int ipiv[n+1];
    lapack_int ret;
    ret = LAPACKE_dgetrf(LAPACK_COL_MAJOR, n, n, A, n, ipiv);
    if (ret != 0)
        return ret;
    ret = LAPACKE_dgetri(LAPACK_COL_MAJOR, n, A, n, ipiv);
    return ret;
}

int main()
{
    double A[] = {
        0.378589, 0.971711, 0.016087, 0.037668, 0.312398,
        0.756377, 0.345708, 0.922947, 0.846671, 0.856103,
        0.732510, 0.108942, 0.476969, 0.398254, 0.507045,
        0.162608, 0.227770, 0.533074, 0.807075, 0.180335,
        0.517006, 0.315992, 0.914848, 0.460825, 0.731980
    };

    for (int i=0; i<25; i++) {
        if ((i%5) == 0) putchar('\n');
        printf("%+12.8f ", A[i]);
    }
    putchar('\n');

    matInv(A, 5);

    for (int i=0; i<25; i++) {
        if ((i%5) == 0) putchar('\n');
        printf("%+12.8f ", A[i]);
    }
    putchar('\n');
    return 0;
}
Example:
% g++ -I [OpenBlas path]/include/ example.cpp [OpenBlas path]/lib/libopenblas.a
% a.out
+0.37858900 +0.97171100 +0.01608700 +0.03766800 +0.31239800
+0.75637700 +0.34570800 +0.92294700 +0.84667100 +0.85610300
+0.73251000 +0.10894200 +0.47696900 +0.39825400 +0.50704500
+0.16260800 +0.22777000 +0.53307400 +0.80707500 +0.18033500
+0.51700600 +0.31599200 +0.91484800 +0.46082500 +0.73198000
+0.24335255 -2.67946180 +3.57538817 +0.83711880 +0.34704217
+1.02790497 -1.05086895 -0.07468137 +0.71041070 +0.66708313
-0.21087237 -4.47765165 +1.73958308 +1.73999641 +3.69324020
-0.14100897 +2.34977565 -0.93725915 +0.47383541 -2.15554470
-0.26329660 +6.46315378 -4.07721533 -3.37094863 -2.42580445
Here is a working version of Spencer Nelson's example above. One mystery about it is that the input matrix is in row-major order, even though it appears to call the underlying Fortran routine dgetri. I am led to believe that all the underlying Fortran routines require column-major order, but I am no expert on LAPACK; in fact, I'm using this example to help me learn it. But, that one mystery aside:
The input matrix in the example is singular. LAPACK tries to tell you that by returning a 3 in the errorHandler. I changed the 9 in that matrix to a 19, getting an errorHandler of 0, signalling success, and compared the result to that from Mathematica. The comparison was successful and confirmed that the matrix in the example should be in row-major order, as presented.
Here is the working code:
#include <stdio.h>
#include <stddef.h>
#include <lapacke.h>

int main() {
    int N = 3;
    int NN = 9;
    double M[3][3] = { {1 , 2 , 3},
                       {4 , 5 , 6},
                       {7 , 8 , 9} };
    int pivotArray[3]; // since our matrix has three rows
    int errorHandler;
    double lapackWorkspace[9];

    // dgetrf(M,N,A,LDA,IPIV,INFO) LU-factorizes an M by N matrix called A
    // (with leading dimension LDA) in place, sending the pivot indices to
    // IPIV and error information to INFO. Every argument is passed by
    // pointer for Fortran's benefit.
    dgetrf_(&N, &N, M[0], &N, pivotArray, &errorHandler);
    printf("dgetrf eh, %d, should be zero\n", errorHandler);

    dgetri_(&N, M[0], &N, pivotArray, lapackWorkspace, &NN, &errorHandler);
    printf("dgetri eh, %d, should be zero\n", errorHandler);

    for (size_t row = 0; row < N; ++row) {
        for (size_t col = 0; col < N; ++col) {
            printf("%g", M[row][col]);
            if (N-1 != col) printf(", ");
        }
        if (N-1 != row) printf("\n");
    }
    return 0;
}
I built and ran it as follows on a Mac:
gcc main.c -llapacke -llapack
./a.out
I did an nm on the LAPACKE library and found the following:
liblapacke.a(lapacke_dgetri.o):
U _LAPACKE_dge_nancheck
0000000000000000 T _LAPACKE_dgetri
U _LAPACKE_dgetri_work
U _LAPACKE_xerbla
U _free
U _malloc
liblapacke.a(lapacke_dgetri_work.o):
U _LAPACKE_dge_trans
0000000000000000 T _LAPACKE_dgetri_work
U _LAPACKE_xerbla
U _dgetri_
U _free
U _malloc
and it looks like there is a LAPACKE [sic] wrapper that would presumably relieve us of having to take addresses everywhere for Fortran's convenience, but I am probably not going to get around to trying it because I have a way forward.
EDIT
Here is a working version that bypasses LAPACKE [sic], using the LAPACK Fortran routines directly. I do not understand why a row-major input produces correct results, but I confirmed it again in Mathematica.
#include <stdio.h>
#include <stddef.h>

int main() {
    int N = 3;
    int NN = 9;
    double M[3][3] = { {1 , 2 , 3},
                       {4 , 5 , 6},
                       {7 , 8 , 19} };
    int pivotArray[3]; // since our matrix has three rows
    int errorHandler;
    double lapackWorkspace[9];

    /* from http://www.netlib.no/netlib/lapack/double/dgetrf.f
       SUBROUTINE DGETRF( M, N, A, LDA, IPIV, INFO )
    *
    *  -- LAPACK routine (version 3.1) --
    *     Univ. of Tennessee, Univ. of California Berkeley and NAG Ltd..
    *     November 2006
    *
    *     .. Scalar Arguments ..
             INTEGER            INFO, LDA, M, N
    *     ..
    *     .. Array Arguments ..
             INTEGER            IPIV( * )
             DOUBLE PRECISION   A( LDA, * )
    */
    extern void dgetrf_ (int * m, int * n, double * A, int * LDA, int * IPIV,
                         int * INFO);

    /* from http://www.netlib.no/netlib/lapack/double/dgetri.f
       SUBROUTINE DGETRI( N, A, LDA, IPIV, WORK, LWORK, INFO )
    *
    *  -- LAPACK routine (version 3.1) --
    *     Univ. of Tennessee, Univ. of California Berkeley and NAG Ltd..
    *     November 2006
    *
    *     .. Scalar Arguments ..
             INTEGER            INFO, LDA, LWORK, N
    *     ..
    *     .. Array Arguments ..
             INTEGER            IPIV( * )
             DOUBLE PRECISION   A( LDA, * ), WORK( * )
    */
    extern void dgetri_ (int * n, double * A, int * LDA, int * IPIV,
                         double * WORK, int * LWORK, int * INFO);

    // dgetrf LU-factorizes the matrix in place and records the pivots
    dgetrf_(&N, &N, M[0], &N, pivotArray, &errorHandler);
    printf("dgetrf eh, %d, should be zero\n", errorHandler);

    // dgetri computes the inverse from the LU factorization and the pivots
    dgetri_(&N, M[0], &N, pivotArray, lapackWorkspace, &NN, &errorHandler);
    printf("dgetri eh, %d, should be zero\n", errorHandler);

    for (size_t row = 0; row < N; ++row) {
        for (size_t col = 0; col < N; ++col) {
            printf("%g", M[row][col]);
            if (N-1 != col) printf(", ");
        }
        if (N-1 != row) printf("\n");
    }
    return 0;
}
built and run like this:
$ gcc foo.c -llapack
$ ./a.out
dgetrf eh, 0, should be zero
dgetri eh, 0, should be zero
-1.56667, 0.466667, 0.1
1.13333, 0.0666667, -0.2
0.1, -0.2, 0.1
EDIT
The mystery no longer appears to be a mystery. The computations are being done in column-major order, as they must be, but I am both inputting and printing the matrices as if they were in row-major order. I have two bugs that cancel each other out: since the inverse of a transpose is the transpose of the inverse, (A^T)^-1 = (A^-1)^T, handing the routine the transpose of the matrix I intended and then reading the result back as if it were not transposed gives exactly the inverse I wanted. So things look row-ish even though they're column-ish.