C picture rotation optimization

This is for all you C experts out there...
The first function takes a two-dimensional matrix src[dim][dim] representing the pixels of an image and rotates it 90 degrees into a destination matrix dst[dim][dim]. The second function takes the same src[dim][dim] and smooths the image by replacing every pixel value with the average of all the pixels around it (in a window of at most 3 × 3 centered at that pixel).
I need to optimize the program for time and cycles. How else could I optimize the following?
void rotate(int dim, pixel *src, pixel *dst)
{
    int i, j, nj;
    nj = 0;
    /* below are the main computations for the implementation of rotate. */
    for (j = 0; j < dim; j++) {
        nj = dim-1-j; /* code motion: moved this operation outside the inner for loop */
        for (i = 0; i < dim; i++) {
            dst[RIDX(nj, i, dim)] = src[RIDX(i, j, dim)];
        }
    }
}
/* A struct used to compute averaged pixel value */
typedef struct {
    int red;
    int green;
    int blue;
    int num;
} pixel_sum;

/* Compute min and max of two integers, respectively */
static int minimum(int a, int b)
{ return (a < b ? a : b); }
static int maximum(int a, int b)
{ return (a > b ? a : b); }

/*
 * initialize_pixel_sum - Initializes all fields of sum to 0
 */
static void initialize_pixel_sum(pixel_sum *sum)
{
    sum->red = sum->green = sum->blue = 0;
    sum->num = 0;
    return;
}

/*
 * accumulate_sum - Accumulates field values of p in corresponding
 * fields of sum
 */
static void accumulate_sum(pixel_sum *sum, pixel p)
{
    sum->red += (int) p.red;
    sum->green += (int) p.green;
    sum->blue += (int) p.blue;
    sum->num++;
    return;
}

/*
 * assign_sum_to_pixel - Computes averaged pixel value in current_pixel
 */
static void assign_sum_to_pixel(pixel *current_pixel, pixel_sum sum)
{
    current_pixel->red = (unsigned short) (sum.red/sum.num);
    current_pixel->green = (unsigned short) (sum.green/sum.num);
    current_pixel->blue = (unsigned short) (sum.blue/sum.num);
    return;
}

/*
 * avg - Returns averaged pixel value at (i,j)
 */
static pixel avg(int dim, int i, int j, pixel *src)
{
    int ii, jj;
    pixel_sum sum;
    pixel current_pixel;
    initialize_pixel_sum(&sum);
    for (ii = maximum(i-1, 0); ii <= minimum(i+1, dim-1); ii++)
        for (jj = maximum(j-1, 0); jj <= minimum(j+1, dim-1); jj++)
            accumulate_sum(&sum, src[RIDX(ii, jj, dim)]);
    assign_sum_to_pixel(&current_pixel, sum);
    return current_pixel;
}

void smooth(int dim, pixel *src, pixel *dst)
{
    int i, j;
    /* below are the main computations for the implementation of the smooth function. */
    for (j = 0; j < dim; j++)
        for (i = 0; i < dim; i++)
            dst[RIDX(i, j, dim)] = avg(dim, i, j, src);
}
I moved dim-1-j outside the inner for loop of rotate, which reduces the time and cycles used, but is there anything else that could help either main function?
Thanks!

There are several optimizations you can do; some a compiler might do for you, but it's best to write them out yourself. For example: moving constant expressions out of the loop (you did that once; there are more places you can do it - don't forget that the loop condition is checked every iteration too, so optimize the loop condition in this manner as well) and, as Chris pointed out, using pointers that you increment instead of full array indexing (a sketch follows below). I also see some function calls that could be rewritten inline.
I also want to point to an article on Stack Overflow about matrix multiplication and optimizing it to use the processor cache. In essence it first rearranges the arrays into memory blocks that fit the cache, then performs the operation on those blocks, then moves to the next block, and so on. You may be able to reuse the ideas for your rotation.
See Optimizing assembly generated by Microsoft Visual Studio Compiler
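For illustration, a minimal sketch of the pointer version of rotate, assuming the usual row-major macro RIDX(i, j, n) = ((i)*(n)+(j)) (the macro isn't shown in the question, so treat that as an assumption):
void rotate_ptr(int dim, pixel *src, pixel *dst)
{
    int i, j;
    for (j = 0; j < dim; j++) {
        pixel *d = dst + RIDX(dim-1-j, 0, dim); /* start of row dim-1-j in dst */
        pixel *s = src + RIDX(0, j, dim);       /* top of column j in src */
        for (i = 0; i < dim; i++) {
            *d++ = *s;  /* dst advances one element... */
            s += dim;   /* ...src advances one whole row */
        }
    }
}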

For the rotation, you get better cache utilization by decomposing the image into smaller tiles.
For the smoothing:
1) expand the whole operation inside the main double loop; do not use these intermediate micro-functions;
2) completely unroll the accumulation and averaging (it's only a sum of 9 terms), hard-coding the indexes;
3) process the edges (where not all 9 pixels are available) and the middle in different loops; the middle deserves maximum optimization, especially (2) - see the sketch after this list;
4) try to avoid the divisions by 9 (you can think of replacing the division with a table lookup).
Top speed will be obtained by handcrafting vectorized code (SSE/AVX), but this requires a good deal of experience. Multicore parallelization is also an option.
To give you an idea, it is possible to apply a 3x3 average on a 1 MB grayscale image in less than 0.5 ms (single core, Core i7 @ 3.4 GHz). We can extrapolate to 2 ms or so for a 1 Mpixel RGB image.
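To make points (1)-(3) concrete, here is a minimal sketch for the interior pixels only, assuming the usual row-major macro RIDX(i, j, n) = ((i)*(n)+(j)) and the pixel fields from the question; the edges and corners would get their own, similar loops with fewer terms:
void smooth_interior(int dim, pixel *src, pixel *dst)
{
    int i, j;
    for (i = 1; i < dim-1; i++) {
        for (j = 1; j < dim-1; j++) {
            pixel *p = src + RIDX(i-1, j-1, dim); /* top-left of the 3x3 window */
            int r, g, b;
            /* all 9 terms hard-coded: no min/max tests, no function calls */
            r  = p[0].red   + p[1].red   + p[2].red;
            g  = p[0].green + p[1].green + p[2].green;
            b  = p[0].blue  + p[1].blue  + p[2].blue;
            p += dim;                             /* middle row of the window */
            r += p[0].red   + p[1].red   + p[2].red;
            g += p[0].green + p[1].green + p[2].green;
            b += p[0].blue  + p[1].blue  + p[2].blue;
            p += dim;                             /* bottom row of the window */
            r += p[0].red   + p[1].red   + p[2].red;
            g += p[0].green + p[1].green + p[2].green;
            b += p[0].blue  + p[1].blue  + p[2].blue;
            /* point (4): these /9 divisions are candidates for a lookup table */
            dst[RIDX(i, j, dim)].red   = (unsigned short)(r / 9);
            dst[RIDX(i, j, dim)].green = (unsigned short)(g / 9);
            dst[RIDX(i, j, dim)].blue  = (unsigned short)(b / 9);
        }
    }
}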

Since you can't provide a running program, these are just ideas that could help:
Assuming values in the range [0,256), use uint8_t for your red/green/blue values. This takes up 1/4 of the memory of the int version but may cost extra cycles; I can't know whether this would be faster without more knowledge. The idea is that with 1/4 of the memory you are more likely to keep more values in the L1-L3 caches.
Since a pixel's neighbors are the same whether the image is rotated or not, calculate the average before rotating. I suspect this would help with caching, but again I can't be sure; it depends on some code I can't see.
Parallelize the outer loop (a sketch follows below). Since you have easy grid dimensions and the inputs and outputs have no read/write conflicts, this is a trivial thing to do. It will certainly take more total cycles but will possibly be faster in wall-clock time.
Hard-code your edges; you are currently doing maximum and minimum operations on every call to avg, but for the inner points they are unneeded. Calculate the edges and the inner points separately.
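A hedged sketch of the parallelization point, assuming OpenMP is available (compile with -fopenmp) and reusing avg and RIDX from the question:
void smooth_omp(int dim, pixel *src, pixel *dst)
{
    int i, j;
    /* rows are independent and dst doesn't alias src, so no write conflicts */
    #pragma omp parallel for private(j) schedule(static)
    for (i = 0; i < dim; i++)
        for (j = 0; j < dim; j++)
            dst[RIDX(i, j, dim)] = avg(dim, i, j, src);
}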

Related

Optimization of 3D Direct Convolution Implementation in C

For my project, I've written a naive C implementation of direct 3D convolution with periodic padding on the input. Unfortunately, since I'm new to C, the performance isn't so good... here's the code:
int mod(int a, int b)
{
    // calculate mod to get the correct index with periodic padding
    int r = a % b;
    return r < 0 ? r + b : r;
}
void convolve3D(const double *image, const double *kernel,
                const int imageDimX, const int imageDimY, const int imageDimZ,
                const int kernelDimX, const int kernelDimY, const int kernelDimZ,
                double *result)
{
    int imageSize = imageDimX * imageDimY * imageDimZ;
    int kernelSize = kernelDimX * kernelDimY * kernelDimZ;
    int i, j, k, l, m, n;
    int kernelCenterX = (kernelDimX - 1) / 2;
    int kernelCenterY = (kernelDimY - 1) / 2;
    int kernelCenterZ = (kernelDimZ - 1) / 2;
    int xShift, yShift, zShift;
    int outIndex, outI, outJ, outK;
    int imageIndex = 0, kernelIndex = 0;
    // Loop through each voxel
    for (k = 0; k < imageDimZ; k++) {
        for (j = 0; j < imageDimY; j++) {
            for (i = 0; i < imageDimX; i++) {
                kernelIndex = 0;
                // for each voxel, loop through each kernel coefficient
                for (n = 0; n < kernelDimZ; n++) {
                    for (m = 0; m < kernelDimY; m++) {
                        for (l = 0; l < kernelDimX; l++) {
                            // find the index of the corresponding voxel in the output image
                            xShift = l - kernelCenterX;
                            yShift = m - kernelCenterY;
                            zShift = n - kernelCenterZ;
                            outI = mod(i - xShift, imageDimX);
                            outJ = mod(j - yShift, imageDimY);
                            outK = mod(k - zShift, imageDimZ);
                            outIndex = outK * imageDimX * imageDimY + outJ * imageDimX + outI;
                            // multiply and accumulate into the output (scatter)
                            result[outIndex] += kernel[kernelIndex] * image[imageIndex];
                            kernelIndex++;
                        }
                    }
                }
                imageIndex++;
            }
        }
    }
}
By convention, all the matrices (image, kernel, result) are stored in column-major fashion, and that's why I loop through them in this order, so consecutive accesses are closer in memory (I heard this would help).
I know the implementation is very naive, but since it's written in C I was hoping the performance would be good, but instead it's a little disappointing. I tested it with an image of size 100^3 and a kernel of size 10^3 (~1 GFLOP total if I only count the multiplications and additions), and it took ~7 s, which I believe is way below the capability of a typical CPU.
If possible, could you guys help me optimize this routine?
I'm open to anything that could help, with just a few things I'd ask you to consider:
The problem I'm working with can be big (e.g. an image of size 200 by 200 by 200 with a kernel of size 50 by 50 by 50 or even larger). I understand that one way of optimizing this is to convert the problem into a matrix multiplication and use the BLAS GEMM routine, but I'm afraid memory could not hold such a big matrix.
Due to the nature of the problem, I would prefer direct convolution over FFT convolution, since my model was developed with direct convolution in mind, and my impression is that FFT convolution gives slightly different results than direct convolution, especially for rapidly changing images - a discrepancy I'm trying to avoid.
That said, I'm in no way an expert, so if you have a great implementation based on FFT convolution, and/or my impression of FFT convolution is totally biased, I would really appreciate your help.
The input images are assumed to be periodic, so periodic padding is necessary.
I understand that utilizing BLAS/SIMD or other lower-level approaches would definitely help a lot here, but since I'm a newbie I don't really know where to start... I would really appreciate it if you could point me in the right direction if you have experience with these libraries.
Thanks a lot for your help, and please let me know if you need more info about the nature of the problem.
As a first step, replace your mod ((i - xShift), imageDimX) with something like this:
inline int clamp( int x, int size )
{
    if( x < 0 ) return x + size;
    if( x >= size ) return x - size;
    return x;
}
These branches are very predictable because they yield the same result for very long runs of consecutive elements, and integer modulo is relatively slow.
Now, the next step (ordered by cost/benefit) is parallelization. If you have any modern C++ compiler, just enable OpenMP somewhere in the project settings. After that you need 2 changes.
Decorate your very outer loop with something like this: #pragma omp parallel for schedule(guided)
Move your function-level variables within that loop. This also means you'll have to compute the initial imageIndex from your k, for each iteration.
Next option: rework your code so you only write each output value once. Compute the final value in your innermost 3 loops, reading from random locations in both image and kernel, and only write the result once (a sketch follows after the next paragraph). When you have that result[outIndex] += in the inner loop, the CPU stalls waiting for the data from memory. When you accumulate into a variable that's a register, not memory, there's no access latency.
SIMD is the most complicated optimization here. But in short, you'll need the maximum FMA width your hardware has (if you have AVX and need double precision, that width is 4), and you'll also need multiple independent accumulators in your 3 innermost loops, to avoid hitting latency as opposed to saturating throughput. Here's my answer to a much easier problem as an example of what I mean.
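To make the write-once idea concrete, here is a hedged, untested sketch (the function name is mine; it reuses the clamp helper from above, which stays in range as long as the kernel is no larger than the image). Because each output is written exactly once, the outer loop can also take the OpenMP pragma safely:
void convolve3D_gather(const double *image, const double *kernel,
                       int imageDimX, int imageDimY, int imageDimZ,
                       int kernelDimX, int kernelDimY, int kernelDimZ,
                       double *result)
{
    int cX = (kernelDimX - 1) / 2;
    int cY = (kernelDimY - 1) / 2;
    int cZ = (kernelDimZ - 1) / 2;
    #pragma omp parallel for schedule(guided)
    for (int k = 0; k < imageDimZ; k++) {
        for (int j = 0; j < imageDimY; j++) {
            for (int i = 0; i < imageDimX; i++) {
                double acc = 0.0;   /* accumulates in a register, not memory */
                int kernelIndex = 0;
                for (int n = 0; n < kernelDimZ; n++) {
                    int inK = clamp(k + n - cZ, imageDimZ);
                    for (int m = 0; m < kernelDimY; m++) {
                        int inJ = clamp(j + m - cY, imageDimY);
                        for (int l = 0; l < kernelDimX; l++) {
                            int inI = clamp(i + l - cX, imageDimX);
                            acc += kernel[kernelIndex++]
                                 * image[(inK * imageDimY + inJ) * imageDimX + inI];
                        }
                    }
                }
                /* the only write to result[] for this voxel */
                result[(k * imageDimY + j) * imageDimX + i] = acc;
            }
        }
    }
}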

Calculating potential energy in a molecular dynamics simulation

Background
Imagine that we have N particles inside a box of length L, which interact with each other (through a Lennard-Jones potential).
I want to compute the total potential energy of the system. I implemented the function POT which calculates all the contributions from all the particles and gives the correct results (this is tested and can be assumed true).
I also wrote a function POT_ONE which only calculates the potential energy of one particle with respect to all the others. This means that if I want to calculate the total potential energy I will have to call this function N times (making sure that the particle does not interact with itself) and then divide by 2 since I double count the interactions.
Goal
My goal is to make the second function yield the same results as the first one.
Problem
There is something really strange going on: if I put 4 particles, the two functions give the same results. If I put a fifth one, there is a deviation. Then for 6, 7, 8 particles they again agree, and for N=9 I get a different result. In the case N=1000 the result I get from POT_ONE is something like 113383820348202024.
My results for N=5 are:
-0.003911 with POT and
12.864234 with POT_ONE
In case someone tries to run the code and wants to check the N=4 case, they should change the number of particles (np), which is defined as a global variable, and then comment out the line pos[12]=1;pos[13]=1;pos[14]=1;.
Code
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* GENERAL PARAMETERS */
int dim = 3;             // number of dimensions
int np = 5;              // number of particles
double L = 36.413;       // box length (A)
double invL = 1/36.413;  // inverse of box length

/* ARGON CHARACTERISTICS */
double sig = 3.4;        // Angstroms (A)
double e = 0.001;        // eV

double distSQ(double array[]) {
    /* calculates the squared distance given the array x=[dx,dy,dz] */
    int i;
    double r2 = 0;
    for (i = 0; i < dim; i++) r2 += array[i]*array[i];
    return r2;
}//distSQ

void MIC(double dr[], double L, int dim) {
    /* MINIMUM IMAGE CONVENTION: dr[] is the array dr = [dx,dy,dz] describing relative
       positions of two particles, L is the box length, dim the number of dimensions */
    int i;
    for (i = 0; i < dim; i++) dr[i] -= round(dr[i]*invL)*L;
}//MIC

void POT(double x1[], double *potential) {
    /* given the positions of each particle in the form x=[x0,y0,z0;x1,y1,z1;...;xn-1,yn-1,zn-1],
       the number of dimensions dim and particles np, it calculates the potential energy
       of the configuration */
    // variables for potential calculation
    int i, j, k;
    double *x2;
    double r2inv;  // 1/r^2
    double foo, bar;
    double dr[dim];
    *potential = 0;  // set potential energy to zero
    // main part of POT
    for (i = 0; i < np-1; i++) {
        x2 = x1 + dim;
        for (j = i+1; j < np; j++) {
            // calculate relative distances between particles i & j,
            // apply periodic BCs and then calculate squared distance
            // and the potential energy between them.
            for (k = 0; k < dim; k++) dr[k] = x2[k] - x1[k];
            MIC(dr, L, dim);  // periodic boundary conditions
            r2inv = 1/distSQ(dr);
            // calculate potential energy
            foo = sig*sig*r2inv;
            bar = foo*foo*foo;
            *potential += bar*(bar-1);
        }//for j
        x1 += dim;
    }//for i
    *potential *= 4*e;  // scale and give energy units
}//POT

void POT_ONE(int particle, double pos[], double *potential) {
    *potential = 0;
    int i, k;
    double dr[dim];
    double r2inv, foo, bar;
    double par_pos[dim];
    int index = particle*dim;
    par_pos[0] = pos[index];
    par_pos[1] = pos[index+1];
    par_pos[2] = pos[index+2];
    for (i = 0; i < np; i++) {
        if (i != particle) {
            for (k = 0; k < dim; k++) dr[k] = pos[k] - par_pos[k];
            MIC(dr, L, dim);
            r2inv = 1/distSQ(dr);
            foo = sig*sig*r2inv;
            bar = foo*foo*foo;
            *potential += bar*(bar-1);
        }
        pos += dim;
    }
    *potential *= 4*e;  // scale and give energy units
}//POT_ONE

int main() {
    int D = np*dim;
    double *pos = malloc(D*sizeof(double));
    double potential = 0;  // calculated with POT
    double U = 0;          // calculated with POT_ONE
    double tempU = 0;
    pos[0]=0;pos[1]=0;pos[2]=0;
    pos[3]=4;pos[4]=0;pos[5]=0;
    pos[6]=0;pos[7]=4;pos[8]=0;
    pos[9]=0;pos[10]=0;pos[11]=4;
    pos[12]=1;pos[13]=1;pos[14]=1;
    POT(pos, &potential);
    printf("POT: %f\n", potential);
    int i, j;
    for (i = 0; i < np; i++) {
        POT_ONE(i, pos, &tempU);
        U += tempU;
    }
    U = U/2;
    printf("POT_ONE: %f\n\n", U);
    return 0;
}
Your error is in POT, where you forgot to update x2 at the end of the inner loop.
for (i = 0; i < np - 1; i++) {
    double *x2 = x1 + dim;
    for (j = i + 1; j < np; j++) {
        // ... calculate stuff ..
        x2 += dim;
    }
    x1 += dim;
}
An easier and arguably more readable variant is to forgo pointer arithmetic altogether and use boring old indices:
for (k = 0; k < dim; k++) {
    dr[k] = x[j * dim + k] - x[i * dim + k];
}
Further observations:
Please make your variables local to the scope where they are used. A large list of uninitialized variables at the top of a function makes it very hard to track them, even in a short function like yours.
Please consider returning single values from functions instead of passing in pointers; in my opinion, that makes functions like this more readable (a sketch follows below).
The structure of your code is hard to see, because everything is run together very tightly, even the comments.
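For instance, a sketch of what POT could look like with index-based access and a plain return value (same loop body as the question, just restructured; this also sidesteps the x2 bug entirely):
double POT(const double x[])
{
    double potential = 0;
    for (int i = 0; i < np - 1; i++) {
        for (int j = i + 1; j < np; j++) {
            double dr[3];
            for (int k = 0; k < dim; k++)
                dr[k] = x[j*dim + k] - x[i*dim + k];
            MIC(dr, L, dim);            /* periodic boundary conditions */
            double r2inv = 1/distSQ(dr);
            double foo = sig*sig*r2inv;
            double bar = foo*foo*foo;
            potential += bar*(bar-1);
        }
    }
    return 4*e*potential;               /* scale and give energy units */
}
/* call site: double potential = POT(pos); */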

How to avoid the error of AVX2 when the matrix dimension isn't multiples of 4?

I made a matrix-vector multiplication program using AVX2 and FMA in C. I compiled it using GCC 7 with -mfma and -mavx.
However, I got the error "incorrect checksum for freed object - object was probably modified after being freed."
I think the error occurs when the matrix dimension isn't a multiple of 4.
I know AVX2 uses ymm registers that hold 4 double-precision floating-point numbers, so I can use AVX2 without error when the matrix dimension is a multiple of 4.
But here is my question:
How can I use AVX2 efficiently if the matrix dimension isn't a multiple of 4?
Here is my code.
#include "stdio.h"
#include "math.h"
#include "stdlib.h"
#include "time.h"
#include "x86intrin.h"
void mv(double *a,double *b,double *c, int m, int n, int l)
{
__m256d va,vb,vc;
int k;
int i;
for (k = 0; k < l; k++) {
vb = _mm256_broadcast_sd(&b[k]);
for (i = 0; i < m; i+=4) {
va = _mm256_loadu_pd(&a[m*k+i]);
vc = _mm256_loadu_pd(&c[i]);
vc = _mm256_fmadd_pd(vc, va, vb);
_mm256_storeu_pd( &c[i], vc );
}
}
}
int main(int argc, char* argv[]) {
    // set variables
    int m;
    double* a;
    double* b;
    double* c;
    int i;
    int temp = 0;
    struct timespec startTime, endTime;
    m = 9;
    // main program
    // set vector or matrix
    a = (double *)malloc(sizeof(double) * m*m);
    b = (double *)malloc(sizeof(double) * m*1);
    c = (double *)malloc(sizeof(double) * m*1);
    for (i = 0; i < m; i++) {
        a[i] = 1;
        b[i] = 1;
        c[i] = 0.0;
    }
    for (i = m; i < m*m; i++) {
        a[i] = 1;
    }
    // check start time
    clock_gettime(CLOCK_REALTIME, &startTime);
    mv(a, b, c, m, 1, m);
    // check end time
    clock_gettime(CLOCK_REALTIME, &endTime);
    free(a);
    free(b);
    free(c);
    return 0;
}
You load and store vectors of 4 double, but your loop condition only checks that the first vector element is in-bounds, so you can write outside objects by up to 3x8 = 24 bytes when m is not a multiple of 4.
You need something like i < (m-3) in the main loop, and a cleanup strategy for handling the last partial vector of data. Vectorizing with SIMD is very much like unrolling: you have to check in the loop condition that it's OK to process multiple future elements.
A scalar cleanup loop works well, but we can do better. For example, do as many 128-bit vectors as possible after the last full 256-bit vector (i.e. up to 1), before going scalar.
In many cases (e.g. write-only destination) an unaligned vector load that ends at the end of your arrays is very good (when m>=4). It can overlap with your main loop if m%4 != 0, but that's fine because your output array doesn't overlap your inputs, so redoing an element as part of a single cleanup is cheaper than branching to avoid it.
But that doesn't work here, because your logic is c[i+0..3] += ..., so redoing an element would make it wrong.
// cleanup using a 128-bit FMA, then scalar if there's an odd element.
// untested
void mv(double *a, double *b, double *c, int m, int n, int l)
{
    /* the loop below should actually work for m=1..3, but a separate strategy might be good.
    if (m < 4) {
        // maybe check m >= 2 and use __m128 vectors?
        // or vectorize differently?
    }
    */
    for (int k = 0; k < l; k++) {
        __m256d vb = _mm256_broadcast_sd(&b[k]);
        int i;
        for (i = 0; i < (m-3); i += 4) {
            __m256d va = _mm256_loadu_pd(&a[m*k+i]);
            __m256d vc = _mm256_loadu_pd(&c[i]);
            vc = _mm256_fmadd_pd(va, vb, vc);
            _mm256_storeu_pd(&c[i], vc);
        }
        if (i < (m-1)) {
            __m128d lasta = _mm_loadu_pd(&a[m*k+i]);
            __m128d lastc = _mm_loadu_pd(&c[i]);
            lastc = _mm_fmadd_pd(lasta, _mm256_castpd256_pd128(vb), lastc);
            _mm_storeu_pd(&c[i], lastc);
            // i+=2;  // last element only checks m odd/even, doesn't use i
        }
        // if (i<m)
        if (m & 1) {
            // odd number of elements, do the last non-vector one
            c[m-1] += a[m*k + m-1] * _mm256_cvtsd_f64(vb);
        }
    }
}
I haven't looked at exactly how gcc/clang -O3 compile that. Sometimes compilers try to get too smart with cleanup code (e.g. trying to auto-vectorize scalar cleanup loops).
Other strategies could include doing the last up-to-4 elements with an AVX masked store: you need the same mask for the end of every matrix row, so generating it once and then using it at the end of every row could be good. See Vectorizing with unaligned buffers: using VMASKMOVPS: generating a mask from a misalignment count? Or not using that insn at all. (To simplify branching, you'd set it up so your main loop only goes to i < (m-4), then you always run the cleanup. In the m%4 == 0 case, the mask is all-ones so you do the final full vector.) If you can't safely read past the end of the matrix, you probably need a masked load as well as masked store.
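A hedged, untested sketch of that masked cleanup (the helper and its factoring are mine): the mask comes from a sliding window over a constant table, so the first m % 4 lanes are enabled.
#include <stdint.h>
#include <immintrin.h>

/* handle the final partial vector of one row; rem = m % 4, must be 1..3 here */
static void mv_row_tail(const double *arow, double *c, __m256d vb, int i, int rem)
{
    static const int64_t bits[8] = { -1, -1, -1, -1, 0, 0, 0, 0 };
    /* window starting at bits[4-rem]: first rem lanes -1 (enabled), rest 0 */
    __m256i mask = _mm256_loadu_si256((const __m256i *)(bits + 4 - rem));
    __m256d va = _mm256_maskload_pd(arow + i, mask);  /* disabled lanes read as 0.0 */
    __m256d vc = _mm256_maskload_pd(c + i, mask);
    vc = _mm256_fmadd_pd(va, vb, vc);
    _mm256_maskstore_pd(c + i, mask, vc);             /* disabled lanes left untouched */
}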
You could also look at aligning your rows for efficiency, or a row stride that's separate from the logical length of rows. (i.e. pad rows out to 32-byte boundaries). Leaving padding at the end of rows simplifies the cleanup, because you can always do whole vectors that write padding.
Special case m==2: instead of broadcasting one element from b[], you'd like to broadcast 2 elements into two 128-bit lanes of a __m256d, so one 256-bit FMA could do 2 rows at once.

How to generalize square matrix multiplication to handle arbitrary dimensions

I have written this program and I am having some trouble understanding how to use multiple blocks via the dim3 variable in the kernel call line. This code works fine when I am doing 1000*1000 matrix multiplication, but I am not getting the correct answer for lower dimensions like 100*100 or 200*200.
#include <stdio.h>
#include <cuda.h>
#define width 1000
__global__ void kernel(int *a, int *b, int *c)
{
    int tx = threadIdx.x + blockIdx.x*blockDim.x;
    int ty = threadIdx.y + blockIdx.y*blockDim.y;
    int sum = 0, k;
    for (k = 0; k < width; ++k)
    {
        sum += a[ty*width + k]*b[k*width + tx];
    }
    c[ty*width + tx] = sum;
}
int main()
{
    int a[width*width], c[width*width], b[width*width];
    int *dev_a, *dev_b, *dev_c;
    int i, count = 0;
    int size = (width*width)*sizeof(int);
    for (i = 0; i < (width*width); i++)
    {
        a[i] = 1;
        b[i] = 1;
    }
    cudaMalloc((void **)&dev_a, size);
    cudaMalloc((void **)&dev_b, size);
    cudaMalloc((void **)&dev_c, size);
    cudaMemcpy(dev_a, &a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, &b, size, cudaMemcpyHostToDevice);
    dim3 dimBlock(20,20);
    dim3 blockID(50,50);
    kernel<<<blockID,dimBlock>>>(dev_a, dev_b, dev_c);
    cudaMemcpy(&c, dev_c, size, cudaMemcpyDeviceToHost);
    for (i = 0; i < (width*width); i++)
    {
        count++;
        if (count == (width+1))
        {
            count = 1;
            printf("\n");
        }
        printf("%d ", c[i]);
    }
    printf("\n");
    return 0;
}
This code will work for very specific dimensions but not for others.
It will work for square matrix multiplication when width is exactly equal to the product of your block dimension (number of threads - 20 in the code you have shown) and your grid dimension (number of blocks - 50 in the code you have shown).
So when width is 20*50 (1000) it will work as shown. But if I change width to some other value (say 800) and make no other changes, your code won't work. In the case of 800, however, I could get your code working by changing the grid dimension from 50 to 40, so that width = 800 = 20*40.
But what if I need to multiply two matrices of width 799? I can't come up with a product of grid and block dimension that will match that width exactly.
This is a fairly standard problem in CUDA programming - I cannot come up with convenient block and grid dimensions to exactly match my work (i.e. data) size, and if I launch too many (threads/blocks) things don't seem to work.
To fix this problem we must do 2 things:
Be sure to launch at least enough, but maybe more than enough threads (blocks of threads) to cover the entire data set
Add conditional code in the kernel, so that only the threads corresponding to valid data do any real work.
To address item 1 above, we modify our grid dimension calculations to something like this:
dim3 dimBlock(16,16);
dim3 blockID((width+dimBlock.x-1)/dimBlock.x,(width+dimBlock.y-1)/dimBlock.y);
To address item 2 above we modify our kernel code to condition thread behavior on whether or not the thread corresponds to valid data:
__global__ void kernel(int *a, int *b, int *c, int mwidth)
{
    int tx = threadIdx.x + blockIdx.x*blockDim.x;
    int ty = threadIdx.y + blockIdx.y*blockDim.y;
    if ((tx < mwidth) && (ty < mwidth)) {
        int sum = 0, k;
        for (k = 0; k < mwidth; ++k)
        {
            sum += a[ty*mwidth + k]*b[k*mwidth + tx];
        }
        c[ty*mwidth + tx] = sum;
    }
}
And since we've modified the kernel with a new parameter, we have to pass that parameter on invocation:
kernel<<<blockID,dimBlock>>>(dev_a,dev_b,dev_c, width);
That should be what is needed to logically extend the code you have shown to handle "arbitrary" dimensions. I would also suggest adding proper CUDA error checking any time you are having trouble with a CUDA code; a common pattern is sketched below.
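For illustration only (one of several common variants; the macro name is mine): wrap every CUDA runtime call, and check separately for kernel-launch and kernel-execution errors.
#include <stdio.h>
#include <stdlib.h>

// print and abort on any CUDA runtime error
#define cudaCheck(call) do { \
    cudaError_t err = (call); \
    if (err != cudaSuccess) { \
        fprintf(stderr, "CUDA error: %s at %s:%d\n", \
                cudaGetErrorString(err), __FILE__, __LINE__); \
        exit(1); \
    } \
} while (0)

// usage:
// cudaCheck(cudaMalloc((void **)&dev_a, size));
// kernel<<<blockID,dimBlock>>>(dev_a, dev_b, dev_c, width);
// cudaCheck(cudaGetLastError());       // catches launch-configuration errors
// cudaCheck(cudaDeviceSynchronize());  // catches errors during kernel execution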

Optimizing array transposing function

I'm working on a homework assignment, and I've been stuck for hours on my solution. The problem we've been given is to optimize the following code, so that it runs faster, regardless of how messy it becomes. We're supposed to use stuff like exploiting cache blocks and loop unrolling.
Problem:
// transpose a dim x dim matrix into dst by swapping all i,j with j,i
void transpose(int *dst, int *src, int dim) {
    int i, j;
    for (i = 0; i < dim; i++) {
        for (j = 0; j < dim; j++) {
            dst[j*dim + i] = src[i*dim + j];
        }
    }
}
What I have so far:
// attempt 1
void transpose(int *dst, int *src, int dim) {
    int i, j, id, jd;
    id = 0;
    for (i = 0; i < dim; i++, id += dim) {
        jd = 0;
        for (j = 0; j < dim; j++, jd += dim) {
            dst[jd + i] = src[id + j];
        }
    }
}
// attempt 2
void transpose(int *dst, int *src, int dim) {
    int i, j, id;
    int *pd, *ps;
    id = 0;
    for (i = 0; i < dim; i++, id += dim) {
        pd = dst + i;
        ps = src + id;
        for (j = 0; j < dim; j++) {
            *pd = *ps++;
            pd += dim;
        }
    }
}
Some ideas; please correct me if I'm wrong:
I have thought about loop unrolling, but I don't think it would help, because we don't know whether the NxN matrix has prime dimensions. Checking for that would add extra calculations that would just slow the function down.
Cache blocking wouldn't be very useful, because no matter what, we will access one array linearly (1,2,3,4) while the other in jumps of N. While we can get the function to exploit the cache and access the src block faster, it will still take a long time to place those values into the dst matrix.
I have also tried using pointers instead of array subscripts, but I don't think that actually speeds up the program in any way.
Any help would be greatly appreciated.
Thanks
Cache blocking can be useful. For example, say we have a cache line size of 64 bytes (which is what x86 uses these days). For a matrix larger than the cache, if we transpose a 16x16 block (since sizeof(int) == 4, 16 ints fit in a cache line, assuming the matrix is aligned on a cache-line boundary), we need to load 32 cache lines from memory (16 from the source matrix, plus 16 from the destination matrix before we can dirty them) and store another 16 lines (even though the stores are not sequential). In contrast, without cache blocking, transposing the equivalent 16x16 elements requires us to load 16 cache lines from the source matrix, but 16*16 = 256 cache lines to be loaded and then stored for the destination matrix. A sketch follows below.
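For illustration, a minimal sketch of the blocking idea (B = 16 so one block row of ints spans a 64-byte cache line; the inner bounds handle dim % B != 0):
#define B 16
void transpose_blocked(int *dst, int *src, int dim) {
    int i, j, ib, jb;
    for (ib = 0; ib < dim; ib += B) {
        for (jb = 0; jb < dim; jb += B) {
            /* transpose one BxB tile; both tiles stay cache-resident */
            int imax = (ib + B < dim) ? ib + B : dim;
            int jmax = (jb + B < dim) ? jb + B : dim;
            for (i = ib; i < imax; i++)
                for (j = jb; j < jmax; j++)
                    dst[j*dim + i] = src[i*dim + j];
        }
    }
}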
Unrolling is useful for large matrices.
You'll need some code to deal with excess elements if the matrix size isn't a multiple of the times you unroll. But this will be outside the most critical loop, so for a large matrix it's worth it.
Regarding the direction of accesses - it may be better to read linearly and write in jumps of N, rather than vice versa. This is because read operations block the CPU, while write operations don't (up to a limit).
Other suggestions:
1. Can you use parallelization? OpenMP can help (though if you're expected to deliver single CPU performance, it's no good).
2. Disassemble the function and read it, focusing on the innermost loop. You may find things you wouldn't notice in C code.
3. Using decreasing counters (stopping at 0) might be slightly more efficient than increasing counters.
4. The compiler must assume that src and dst may alias (point to the same or overlapping memory), which limits its optimization options. If you could somehow tell the compiler that they can't overlap, it may be great help. However, I'm not sure how to do that (maybe use the restrict qualifier).
Messiness is not a problem, so: I would add a transposed flag to each matrix. This flag indicates whether the stored data array of a matrix is to be interpreted in normal or transposed order.
All matrix operations should receive these new flags in addition to each matrix parameter. Inside each operation, implement the code for all possible combinations of flags. Perhaps macros can save redundant writing here.
In this new implementation, matrix transposition just toggles the flag: the space and time needed for the transpose operation is constant. A sketch follows below.
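A minimal sketch of the idea (the type and function names are mine):
typedef struct {
    int *data;
    int dim;
    int transposed;   /* 0 = normal order, 1 = transposed order */
} matrix;

/* the accessor picks the index order based on the flag */
static inline int mat_get(const matrix *m, int i, int j) {
    return m->transposed ? m->data[j * m->dim + i]
                         : m->data[i * m->dim + j];
}

/* "transposing" is now O(1): just toggle the flag */
static void mat_transpose(matrix *m) {
    m->transposed = !m->transposed;
}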
Just an idea how to implement unrolling:
void transpose(int *dst, int *src, int dim) {
    int i, j;
    const int dim1 = (dim / 4) * 4;
    for (i = 0; i < dim; i++) {
        for (j = 0; j < dim1; j += 4) {
            dst[j*dim + i] = src[i*dim + j];
            dst[(j+1)*dim + i] = src[i*dim + (j+1)];
            dst[(j+2)*dim + i] = src[i*dim + (j+2)];
            dst[(j+3)*dim + i] = src[i*dim + (j+3)];
        }
        for ( ; j < dim; j++) {
            dst[j*dim + i] = src[i*dim + j];
        }
        __builtin_prefetch(&src[(i+1)*dim], 0, 1);
    }
}
Of course you should hoist computations like i*dim out of the inner loop, as you already did in your attempts.
Cache prefetching can be used for the source matrix, as shown above.
You probably know this, but register int tells the compiler it would be smart to keep the variable in a register. Making the ints unsigned may also make things go a little bit faster.
