I am trying to optimize my alpha blending code with SIMD. SSE2, specifically.
At first I was hoping for SSE2, but at this point I would settle for SSE4.2 if it's easier. The reason is that requiring SSE4.2 instead of SSE2 would cut out a significant number of older processors that could otherwise run this code, but at this point I'd accept that compromise.
I am blitting a sprite onto the screen. Everything is in full 32-bit color, ARGB or BGRA, depending on which direction you read it.
I have read every other seemingly related question on SO and everything I could find on the web but I still have not been able to completely wrap my brain around this one specific concept, and I would appreciate some help. I've been at this for days.
Below is my code. This code works, in that it produces the visual effect that I want. A bitmap is drawn onto the background buffer with alpha blending. Everything looks fine and as expected.
But you will see that even though it works, my code misses the point of SIMD entirely. It operates on each byte one at a time, as if it were completely serial, and therefore sees no performance benefit over my more traditional code that operates on just one pixel at a time. With SIMD, I obviously want to work on 4 pixels (or every channel of one pixel, 128 bits) at a time, in parallel. (I am profiling by measuring frames rendered per second.)
I want to just run the formula once for each channel, i.e., blend all of the red channel at once, all of the green channel at once, all of the blue channel at once, and all of the alpha channel at once. Or alternatively, every channel (RGBA) of one of the pixels at once.
Then I should start to see the full benefit of SIMD.
I feel like I probably need to do some things with masks, but nothing I have tried gets me there.
I would be very grateful for some help.
(This is the inner loop. It only handles 4 pixels. I put this inside a loop where I iterate over 4 pixels at a time with XPixel += 4.)
__m128i BitmapQuadPixel = _mm_load_si128((__m128i*)((uint32_t*)Bitmap->Memory + BitmapOffset));
__m128i BackgroundQuadPixel = _mm_load_si128((__m128i*)((uint32_t*)gRenderSurface.Memory + MemoryOffset));
__m128i BlendedQuadPixel;
// 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
// R G B A R G B A R G B A R G B A
// This is the red component of the first pixel.
BlendedQuadPixel.m128i_u8[0] = BitmapQuadPixel.m128i_u8[0] * BitmapQuadPixel.m128i_u8[3] / 255 + BackgroundQuadPixel.m128i_u8[0] * (255 - BitmapQuadPixel.m128i_u8[3]) / 255;
// This is the green component of the first pixel.
BlendedQuadPixel.m128i_u8[1] = BitmapQuadPixel.m128i_u8[1] * BitmapQuadPixel.m128i_u8[3] / 255 + BackgroundQuadPixel.m128i_u8[1] * (255 - BitmapQuadPixel.m128i_u8[3]) / 255;
// And so on...
BlendedQuadPixel.m128i_u8[2] = BitmapQuadPixel.m128i_u8[2] * BitmapQuadPixel.m128i_u8[3] / 255 + BackgroundQuadPixel.m128i_u8[2] * (255 - BitmapQuadPixel.m128i_u8[3]) / 255;
BlendedQuadPixel.m128i_u8[4] = BitmapQuadPixel.m128i_u8[4] * BitmapQuadPixel.m128i_u8[7] / 255 + BackgroundQuadPixel.m128i_u8[4] * (255 - BitmapQuadPixel.m128i_u8[7]) / 255;
BlendedQuadPixel.m128i_u8[5] = BitmapQuadPixel.m128i_u8[5] * BitmapQuadPixel.m128i_u8[7] / 255 + BackgroundQuadPixel.m128i_u8[5] * (255 - BitmapQuadPixel.m128i_u8[7]) / 255;
BlendedQuadPixel.m128i_u8[6] = BitmapQuadPixel.m128i_u8[6] * BitmapQuadPixel.m128i_u8[7] / 255 + BackgroundQuadPixel.m128i_u8[6] * (255 - BitmapQuadPixel.m128i_u8[7]) / 255;
BlendedQuadPixel.m128i_u8[8] = BitmapQuadPixel.m128i_u8[8] * BitmapQuadPixel.m128i_u8[11] / 255 + BackgroundQuadPixel.m128i_u8[8] * (255 - BitmapQuadPixel.m128i_u8[11]) / 255;
BlendedQuadPixel.m128i_u8[9] = BitmapQuadPixel.m128i_u8[9] * BitmapQuadPixel.m128i_u8[11] / 255 + BackgroundQuadPixel.m128i_u8[9] * (255 - BitmapQuadPixel.m128i_u8[11]) / 255;
BlendedQuadPixel.m128i_u8[10] = BitmapQuadPixel.m128i_u8[10] * BitmapQuadPixel.m128i_u8[11] / 255 + BackgroundQuadPixel.m128i_u8[10] * (255 - BitmapQuadPixel.m128i_u8[11]) / 255;
BlendedQuadPixel.m128i_u8[12] = BitmapQuadPixel.m128i_u8[12] * BitmapQuadPixel.m128i_u8[15] / 255 + BackgroundQuadPixel.m128i_u8[12] * (255 - BitmapQuadPixel.m128i_u8[15]) / 255;
BlendedQuadPixel.m128i_u8[13] = BitmapQuadPixel.m128i_u8[13] * BitmapQuadPixel.m128i_u8[15] / 255 + BackgroundQuadPixel.m128i_u8[13] * (255 - BitmapQuadPixel.m128i_u8[15]) / 255;
BlendedQuadPixel.m128i_u8[14] = BitmapQuadPixel.m128i_u8[14] * BitmapQuadPixel.m128i_u8[15] / 255 + BackgroundQuadPixel.m128i_u8[14] * (255 - BitmapQuadPixel.m128i_u8[15]) / 255;
_mm_store_si128((__m128i*)((uint32_t*)gRenderSurface.Memory + MemoryOffset), BlendedQuadPixel);
Seeing gRenderSurface, I wonder whether you should just blend the images on the GPU, e.g. using a GLSL shader; otherwise, reading memory back from the render surface can be very slow. Anyway, here's my cup of tea using SSE4.1, as I didn't find anything fully similar among the links in the comments.
This one shuffles the alpha bytes to all color channel positions using _aa and handles the "one minus source alpha" blending with the final masking step. With AVX2 it outperforms the scalar implementation by a factor of ~5.7x, while the SSE4.1 version with separate low- and high-quadword processing is ~3.14x faster than the scalar implementation (both measured with the Intel Compiler 19.0).
The division by 255 is done as in How to divide 16-bit integer by 255 with using SSE?
const __m128i _aa = _mm_set_epi8( 15,15,15,15, 11,11,11,11, 7,7,7,7, 3,3,3,3 );
const __m128i _mask1 = _mm_set_epi16(-1,0,0,0, -1,0,0,0);
const __m128i _mask2 = _mm_set_epi16(0,-1,-1,-1, 0,-1,-1,-1);
const __m128i _v255 = _mm_set1_epi8( -1 );
const __m128i _v1 = _mm_set1_epi16( 1 );
const int xmax = 4*source.cols-15;
for ( int y=0;y<source.rows;++y )
{
// OpenCV CV_8UC4 input
const unsigned char * pS = source.ptr<unsigned char>( y );
const unsigned char * pD = dest.ptr<unsigned char>( y );
unsigned char *pOut = out.ptr<unsigned char>( y );
for ( int x=0;x<xmax;x+=16 )
{
__m128i _src = _mm_loadu_si128( (__m128i*)( pS+x ) );
__m128i _src_a = _mm_shuffle_epi8( _src, _aa );
__m128i _dst = _mm_loadu_si128( (__m128i*)( pD+x ) );
__m128i _dst_a = _mm_shuffle_epi8( _dst, _aa );
__m128i _one_minus_src_a = _mm_subs_epu8( _v255, _src_a );
__m128i _s_a = _mm_cvtepu8_epi16( _src_a );
__m128i _s = _mm_cvtepu8_epi16( _src );
__m128i _d = _mm_cvtepu8_epi16( _dst );
__m128i _d_a = _mm_cvtepu8_epi16( _one_minus_src_a );
__m128i _out = _mm_adds_epu16( _mm_mullo_epi16( _s, _s_a ), _mm_mullo_epi16( _d, _d_a ) );
_out = _mm_srli_epi16( _mm_adds_epu16( _mm_adds_epu16( _v1, _out ), _mm_srli_epi16( _out, 8 ) ), 8 );
_out = _mm_or_si128( _mm_and_si128(_out,_mask2), _mm_and_si128( _mm_adds_epu16(_s_a, _mm_cvtepu8_epi16(_dst_a)),_mask1) );
__m128i _out2;
// compute _out2 using the high quadword of _src and _dst
//...
__m128i _ret = _mm_packus_epi16( _out, _out2 );
_mm_storeu_si128( (__m128i*)(pOut+x), _ret );
}
}
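To check the vector path against something simple, here is a scalar version of the same blend (my own sketch, not code from either post: bytes 0-2 are treated as colour, byte 3 as alpha, and v/255 is approximated as (v + 1 + (v >> 8)) >> 8, exactly like the _mm_adds_epu16/_mm_srli_epi16 sequence above):

#include <stdint.h>

/* Scalar reference for one pixel of the blend above (a sketch):
   colour = (src*srcA + dst*(255 - srcA)) / 255 with the shift trick,
   alpha  = saturated srcA + dstA, as the _mask1 path does. */
static inline uint8_t div255(uint16_t v)
{
    return (uint8_t)((v + 1 + (v >> 8)) >> 8);
}

static void blend_pixel(const uint8_t src[4], const uint8_t dst[4], uint8_t out[4])
{
    uint16_t sa = src[3];
    uint16_t one_minus_sa = 255 - sa;                 /* "one minus source alpha" */
    for (int c = 0; c < 3; c++)
        out[c] = div255((uint16_t)(src[c] * sa + dst[c] * one_minus_sa));
    uint16_t a = (uint16_t)(src[3] + dst[3]);
    out[3] = (uint8_t)(a > 255 ? 255 : a);            /* saturate the alpha sum */
}

Running both versions over the same image and diffing the outputs is a quick way to catch lane-ordering mistakes in the shuffles and masks.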
I am currently developing a tile-based game in C and I'm trying to implement zooming using the nearest neighbor algorithm.
This is how the algorithm looks right now:
u32 *DestPixel = (u32 *)DestBitmap.Pixels;
u32 *SourcePixel = (u32 *)SourceBitmap->Pixels;
f64 RatioX = (f64)SourceBitmap->Width / (f64)DestBitmap.Width;
f64 RatioY = (f64)SourceBitmap->Height / (f64)DestBitmap.Height;
for(s32 Y = 0;
Y < DestBitmap.Height;
++Y)
{
for(s32 X = 0;
X < DestBitmap.Width;
++X)
{
s32 ScaledX = (s32)(X * RatioX);
s32 ScaledY = (s32)(Y * RatioY);
s32 DestOffset = Y*DestBitmap.Width + X;
s32 SourceOffset = ScaledY*SourceBitmap->Width + ScaledX;
*(DestPixel + DestOffset) = *(SourcePixel + SourceOffset);
}
}
However, it is not producing the results I want. When trying to convert the source bitmap (64x64) to a bitmap whose size is not a power of 2 (in this case 30x30), the scaled bitmap looks weird on the right side. Left side is the original bitmap while the right side is the scaled bitmap:
What I've tried:
Rounding ScaledX and ScaledY (instead of truncating)
Flooring ScaledX and ScaledY (instead of truncating)
I actually solved it. The problem was in the rendering of the bitmaps, not the algorithm. I was drawing 4 pixels at a time in my rendering code, which only works when the width is a multiple of 4 (as with my power-of-two bitmaps), not with 30x30.
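For anyone hitting the same thing, the usual fix is to draw the bulk of each row 4 pixels at a time and finish the leftover pixels with a scalar tail. A hypothetical sketch (the names are mine, not from the actual renderer):

#include <stdint.h>

/* Sketch of a row copy that handles widths that are not a multiple of 4:
   a wide path for groups of 4 pixels, then a scalar tail for the rest. */
static void CopyRow(uint32_t *Dest, const uint32_t *Source, int32_t Width)
{
    int32_t X = 0;
    for (; X + 4 <= Width; X += 4) {   /* wide path, 4 pixels at a time */
        Dest[X + 0] = Source[X + 0];
        Dest[X + 1] = Source[X + 1];
        Dest[X + 2] = Source[X + 2];
        Dest[X + 3] = Source[X + 3];
    }
    for (; X < Width; X++)             /* scalar tail for the remaining 1-3 pixels */
        Dest[X] = Source[X];
}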
I'm learning how to optimize code for the GPU. I've read about the importance of memory locality, and I've also seen some tutorials and examples of GPU convolution. Based on that I wrote and tested several kernels of my own. Surprisingly, I found that the simplest naive kernel is the fastest!? And it is just <10x faster than the CPU. (Yes, I amortized upload/download time by running the kernel 64x.)
What am I doing wrong? I would expect convolution to be exactly the kind of operation GPUs are optimized for. If I can get a 100x speed-up on matrix multiplication, why is convolution so slow?
performance [CPU ticks/pixel] (lower is better):
CPU-naive 9.5
GPU-naive 1.64
GPU-local 2.56
GPU-local_async 15.10
GPU-scanline-private 7.35
GPU-scanline_async 15.37
EDIT: I added GPU-scanline_async later, after reading the advice about async_work_group_copy.
I wonder about 2 things:
Is the kernel speed limited by memory bandwidth or by computing power? From what I read I would expect memory, but the test results show the opposite:
Kernel GPU-local is slower than GPU-naive even though it does far fewer global memory reads
Modifying the kernel with Gaussian-filter coefficients (i.e. adding a multiplication per pixel) makes it >2x slower, although it does the same number of memory reads
But if it is limited by processing power, then why do I get 100x faster matrix multiplication on the GPU than on the CPU?
Why is the kernel GPU-scanline-private so slow? The memory locality is much better (just 3 instead of 9 reads from global memory per pixel) and the logic is minimal (no ifs/switches)
The test was done on my laptop with an Intel Core i7-6700HQ (Skylake) CPU and an NVIDIA 960M GPU, running the kernels 64x/frame on a floating-point array of 256x256 pixels. The full code can be seen here.
=========== Kernel codes ===========
kernel GPU-Naive 2D global=(256,256) local=(16,16)
__kernel void blur2D_naive(
__global float* I,
__global float* O
){
const int ix = get_global_id (0)+1;
const int iy = get_global_id (1)+1;
const int nx = get_global_size(0)+2;
int i = iy * nx + ix;
// 1.6 ticks/pixel
O[i] =( I[i-nx-1] + I[i-nx] + I[i-nx+1] +
I[i -1] + I[i ] + I[i +1] +
I[i+nx-1] + I[i+nx] + I[i+nx+1] ) * 0.11111111111;
// modified with gaussian mask 4.9 ticks/pixel
//O[i] =( 0.0625*I[i-nx-1] + 0.125*I[i-nx] + 0.0625*I[i-nx+1] +
// 0.125 *I[i -1] + 0.25 *I[i ] + 0.125 *I[i +1] +
// 0.0625*I[i+nx-1] + 0.125*I[i+nx] + 0.0625*I[i+nx+1] );
}
kernel GPU-local 2D global=(256,256) local=(16,16)
#define NBx 18 // tile size including borders [halo] 16+2
#define NBy 18
// seems to be slower than naive method
__kernel void blur2D_local(
__global float* I,
__global float* O
){
__local float L[NBx*NBy];
const int2 iG = (int2)(get_global_id (0)+1 , get_global_id (1)+1 );
const int2 nG = (int2)(get_global_size(0)+2 , get_global_size(1)+2 );
const int2 iL = (int2)(get_local_id (0)+1 , get_local_id (1)+1 );
const int2 nL = (int2)(get_local_size (0)+2 , get_local_size (1)+2 );
const int2 iGR = (int2)(get_group_id (0) , get_group_id (1) );
// copy boundary pixels to local memory
switch( get_local_id(1) ){ // some threads copy one more of boundary (halo) pixels
case 4:
switch( get_local_id(0) ){ // copy corner points
case 0: L[ 0 ] = I[ nG.x* get_group_id(1)*get_local_size(1) + get_group_id(0)*get_local_size(0) ]; break; // upper-left
case 1: L[ NBx-1 ] = I[ nG.x* get_group_id(1)*get_local_size(1) + get_group_id(0)*get_local_size(0)+(NBx-1) ]; break; // upper-right
case 2: L[ (NBy-1)*NBx ] = I[ nG.x*(get_group_id(1)*get_local_size(1)+(NBy-1)) + get_group_id(0)*get_local_size(0) ]; break; // lower-left
case 3: L[ NBy* NBx-1 ] = I[ nG.x*(get_group_id(1)*get_local_size(1)+(NBy-1)) + get_group_id(0)*get_local_size(0)+(NBx-1) ]; break; // lower-right
}
// copy border lines
case 0: L[ iL.x ] = I[ nG.x* get_group_id(1)*get_local_size(1) + iG.x ]; break; // top line
case 1: L[ NBx*(NBy-1) + iL.x ] = I[ nG.x*(get_group_id(1)*get_local_size(1)+(NBy-1) ) + iG.x ]; break; // bottom line
case 2: L[ NBx*iL.x ] = I[ nG.x*(get_group_id(1)*get_local_size(1)+get_local_id(0) ) + get_group_id(0)*get_local_size(0) ]; break; // left line
case 3: L[ NBx*iL.x + (NBx-1) ] = I[ nG.x*(get_group_id(1)*get_local_size(1)+get_local_id(0) ) + (get_group_id(0)*get_local_size(0)+(NBx-1)) ]; break; // right line
} // each thread copied at most 1 border pixel
int ig = iG.y*nG.x + iG.x;
int il = iL.y*nL.x + iL.x;
L[il] = I[ig]; // each thread copies its pixel to local memory
barrier(CLK_LOCAL_MEM_FENCE);
const float renorm = 1.0/9.0;
O[ig] =( L[il-NBx-1] + L[il-NBx] + L[il-NBx+1] +
L[il -1] + L[il ] + L[il +1] +
L[il+NBx-1] + L[il+NBx] + L[il+NBx+1] ) * renorm;
}
kernel GPU-local_async 2D global=(256,16) local=(16,16)
#define nTiles 16
#define NBx 18
#define NBy 18
#define copy_tile(event,ig0,I,L) { int ig_=ig0; int il_=0; for(int i=0; i<NBy; i++){ event = async_work_group_copy( L+il_, I+ig_, NBx, event ); ig_+=nx; il_+=NBx; } }
// https://streamcomputing.eu/blog/2014-06-19/using-async_work_group_copy-on-2d-data/
__kernel void blur2D_local_async(
__global float* I,
__global float* O
){
const int nx = get_global_size(0)+2;
__local float LI[NBx*NBy*2];
int iL0 = 0;
int iL1 = NBx*NBy;
event_t event = 0;
int ig0 = get_group_id(0)*get_local_size(0);
copy_tile(event,ig0,I,LI);
for( int it=0; it<nTiles; it++ ){
int ig = ig0 + (get_local_id(1)+1)*nx + get_local_id(0)+1;
int il = (get_local_id(1)+1)*NBx + get_local_id(0) + iL0;
ig0 += get_local_size(1)*nx;
event_t event_ = 0;
copy_tile(event_,ig0,I,LI+iL1);
wait_group_events(1, &event);
//barrier(CLK_LOCAL_MEM_FENCE);
O[ig] =( LI[il-NBx] + LI[il-NBx+1] + LI[il-NBx+2] +
LI[il ] + LI[il +1] + LI[il +2] +
LI[il+NBx] + LI[il+NBx+1] + LI[il+NBx+2] ) * 0.11111111111;
int iLtmp=iL0; iL0=iL1; iL1=iLtmp;
event = event_;
}
}
kernel GPU-scanline_private 1D global=(256) local=(32)
__kernel void blur2D_scanline_priv(
int nx, int ny,
__global float* I,
__global float* O
){
int ig = get_global_id(0)+1;
float3 Lm = (float3)( I[ig-1], I[ig], I[ig+1] ); ig += nx;
float3 L0 = (float3)( I[ig-1], I[ig], I[ig+1] );
for(int iy=1; iy<(ny-1); iy++ ){
ig += nx;
float3 Lp= (float3)( I[ig-1], I[ig], I[ig+1] );
O[ig-nx] =
( Lm.x + Lm.y + Lm.z +
L0.x + L0.y + L0.z +
Lp.x + Lp.y + Lp.z ) * 0.11111111111;
Lm=L0; L0=Lp;
}
}
kernel GPU-scanline_async 1D global=(256) local=(32)
#define NB 34
__kernel void blur2D_scanline_async(
int nx, int ny,
__global float* I,
__global float* O
){
__local float L[NB*4];
int i0=0;
int i1=NB;
int i2=NB*2;
int i3=NB*3;
event_t event = 0;
int ig0 = get_group_id(0)*get_local_size(0);
event = async_work_group_copy( L , I+ig0, NB, event ); ig0 += nx;
event = async_work_group_copy( L+NB , I+ig0, NB, event ); ig0 += nx;
event = async_work_group_copy( L+NB*2, I+ig0, NB, event ); ig0 += nx;
const int il = get_local_id(0);
int ig = get_global_id(0)+1;
for(int iy=1; iy<(ny-2); iy++ ){
wait_group_events(1, &event);
event = async_work_group_copy( L+i3, I+ig0, NB, event ); ig0 += nx;
ig += nx;
O[ig] =
( L[i0+il] + L[i0+il+1] + L[i0+il+2] +
L[i1+il] + L[i1+il+1] + L[i1+il+2] +
L[i2+il] + L[i2+il+1] + L[i2+il+2] ) * 0.11111111111;
__local float *Ltmp;
int itmp=i0; i0=i1; i1=i2; i2=i3; i3=itmp;
}
}
kernel CPU-naive
void blur(int nx, int ny, float * I, float * O ){
float renorm = 1.0/9.0;
for(int iy=1;iy<ny-1;iy++){ for(int ix=1;ix<nx-1;ix++){
int i = iy*nx+ix;
O[i] =( I[i-nx-1] + I[i-nx] + I[i-nx+1] +
I[i -1] + I[i ] + I[i +1] +
I[i+nx-1] + I[i+nx] + I[i+nx+1] ) * renorm;
} }
}
In matrix multiplication, each sub-matrix (patch) is used against all patches in all rows of the other matrix. If the patch is a 2x2 sub-matrix and the main matrix is 20x20, then each sub-matrix is used 10 times for multiplication. GPUs generally use 16x16 or 32x32 patches, which means that for a 2kx2k multiplication each 16x16 patch is re-used at least 128 times.
MM reuse = 128
Add the sub-matrix by sub-matrix multiplication re-use on top of that, and it is enough to push the GPU to its limits.
In a 3x3 convolution, the 3x3 patch is not re-used across a whole scanline or a whole picture; only its pixels are re-used.
3x3 stencil: each pixel is re-used by the 8 neighbouring stencils.
5x5 stencil: each pixel is re-used by the 24 neighbouring stencils.
To catch up with matrix multiplication, it would need an
11x11 stencil to have a re-use of 120,
which is also more local than matrix multiplication and should reach more GFLOPS, but it is not doing equal amounts of multiplications and additions.
It is doing 9 additions + 1 multiplication.
8 potential multiplications are lost, so nearly half of the GFLOPS limit is lost.
You should try async workgroup copies.
load top-left 18x18,
load top 18x18 and compute top-left async
load top-right 18x18 and compute top async and store top-left async
load right 18x18 and compute top-right async and store top async
load ... compute ... store ... all async, so both local memory and main memory can be used (the main memory accesses benefit from the same caching as the naive version, maybe L1)
Matrix multiplication (with 16x16 sub-matrices) vs convolution (17x17 brush size):
Matrix: the L2 re-use ratio increases with the main matrix size, while the L1 re-use ratio increases with the sub-matrix size
Convolution: the total re-use ratio is the same for all image sizes, but the L1 usage ratio increases with brush size (good)
Matrix: 16*16*16 multiplications + 16*16*16 additions per workgroup
Convolution: 17*17 additions + 1 multiplication per thread (bad)
Matrix: uniform thread usage, no if-else, all local memory is re-used
Convolution: needs to load at least 16 pixels beyond the borders (ghost walls of thickness 16), which are to be re-used by neighbouring workgroups, but those neighbouring workgroups may be on another compute unit and then only use L2, instead of sitting on the same compute unit and using L1 (ugly)
That is why I suggested async workgroup copies: to use those neighbours on the same compute unit (and its L1) and increase the re-use ratio.
Matrix: increasing the patch size also increases re-use at a cubic rate in the sub-matrix multiplications (but decreases L2 re-use because there are fewer patches per line, which makes the total re-use grow more like a square rate)
Convolution: increasing the patch size increases re-use at a square rate
Matrix: local memory must be at least 2x the tile area (for the sub mat-mat mul)
Convolution: local memory must be at least the tile area + the ghost-wall area
Matrix: can do 4x4 sub-sub-multiplications in private memory (which use each element 4 times), which means 4x4 memory = 64 add + 64 mul
Convolution: loading 4x4 into private memory doesn't do anything but a 4-pixel compute (for a 3x3 brush), which means 4x4 memory = 36 add + 4 mul
Having an addition-heavy kernel leaves room for another multiplication-heavy kernel to work concurrently, or asynchronously within the same kernel. If you are using this for image processing, maybe you can add some "blend" or "resize" kernels so they work together?
The scanline version loads 3 elements, does 9 adds + 1 mul, then repeats; loaded elements stay around for 3 iterations, which means they are re-used only 3 times, and their neighbours (in the x or y direction) may not fall in a neighbouring thread or even a neighbouring workgroup. Also, 3 loads versus 1 store is unbalanced: if the memory bandwidth is 100 GB/s, it would use 50 GB/s for loads and 15 GB/s for stores, unless they come from L1.
You can decrease the add/mul imbalance by using an accumulator.
store = (accumulator) * 0.1111111
accumulator += new vector // 3 adds
accumulator -= old vector // 3 adds
So now it is 6 adds + 1 mul, which is more balanced: a 1 Tflops GPU will have 500 Gflops for the adds and 90 Gflops for the muls.
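A scalar sketch of that accumulator idea for the 3x3 box blur (my own illustration, using the same row-major indexing as the CPU-naive kernel above; a GPU kernel would apply the same sliding-window update per work-item):

/* Sliding 3x3 window sum along each row: add the incoming right-hand
   column, subtract the outgoing left-hand column, so each output pixel
   costs roughly 6 adds + 1 multiply instead of 9 adds + 1 multiply. */
void blur_accum(int nx, int ny, const float *I, float *O)
{
    const float renorm = 1.0f / 9.0f;
    for (int iy = 1; iy < ny - 1; iy++) {
        const float *r0 = I + (iy - 1) * nx;
        const float *r1 = I +  iy      * nx;
        const float *r2 = I + (iy + 1) * nx;
        /* initial window sum, centred on ix = 1 (columns 0..2) */
        float acc = r0[0] + r0[1] + r0[2]
                  + r1[0] + r1[1] + r1[2]
                  + r2[0] + r2[1] + r2[2];
        for (int ix = 1; ix < nx - 1; ix++) {
            O[iy * nx + ix] = acc * renorm;
            if (ix + 2 < nx) {
                acc += r0[ix + 2] + r1[ix + 2] + r2[ix + 2]; /* new column */
                acc -= r0[ix - 1] + r1[ix - 1] + r2[ix - 1]; /* old column */
            }
        }
    }
}

Each output costs 3 adds for the incoming column, 3 for the outgoing one, plus the single multiply by renorm, which is exactly the balance described above.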
The naive version doesn't use local memory, leaving room for more wavefronts in flight. The local-memory version actually breaks the L1 access pattern and lets fewer wavefronts stay in flight, which reduces VALU occupation.
You can decrease local memory usage by doing the scanline at workgroup level instead of thread level. What I mean is something like:
load from memory: x x x x x x x x x x
do scanline for it: (left to right,1-D) a b c d e f g h i j
now use it for scanline on workgroup level: a c c u m u l a t o r (+new)
(top to bottom) z x z x z x z x z x (- old)
calculate the frontline 1-D scanline: 30 additions for each new row
calculate the wide-vector 2-D scanline: 30*30 additions
each pixel gets 1 value instead of adding 3 values
storing: 16x16 multiplications
much less local memory used, more balanced (~8 add, 1 mul)
This uses a 1-D scanline that is single-threaded for N cycles, or a multi-threaded reduce for log N cycles (given enough threads in a compute unit).
I have created a function which does 64-bit * 64-bit to 128-bit using SIMD. Currently I have implemented it using SSE2 (actually SSE4.1). This means it does two 64b*64b to 128b products at the same time. The same idea could be extended to AVX2 or AVX512, giving four or eight 64b*64b to 128b products at the same time.
I based my algorithm on http://www.hackersdelight.org/hdcodetxt/muldws.c.txt
That algorithm does one unsigned multiplication, one signed multiplication, and two signed * unsigned multiplications. The signed * signed and unsigned * unsigned operations are easy to do using _mm_mul_epi32 and _mm_mul_epu32. But the mixed signed and unsigned products caused me trouble.
Consider, for example:
int32_t x = 0x80000000;
uint32_t y = 0x7fffffff;
int64_t z = (int64_t)x*y;
The double word product should be 0xc000000080000000. But how can you get this if you assume your compiler does not know how to handle mixed types? This is what I came up with:
int64_t sign = x<0; sign*=-1; //get the sign and make it all ones
uint32_t t = abs(x); //if x<0 take two's complement again
uint64_t prod = (uint64_t)t*y; //unsigned product
int64_t z = (prod ^ sign) - sign; //take two's complement based on the sign
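A quick standalone check of that sequence against the example values (my own sketch; the absolute value is written with an unsigned negate only because abs(INT_MIN) is undefined for this particular input):

#include <stdint.h>
#include <stdio.h>

/* Verifies the "negate, multiply unsigned, re-apply sign" sequence on the
   example x = 0x80000000, y = 0x7fffffff from above. */
int main(void)
{
    int32_t  x = (int32_t)0x80000000;
    uint32_t y = 0x7fffffff;

    int64_t  sign = -(int64_t)(x < 0);                      /* all ones if x < 0 */
    uint32_t t    = x < 0 ? -(uint32_t)x : (uint32_t)x;     /* |x| without abs(INT_MIN) */
    uint64_t prod = (uint64_t)t * y;                        /* unsigned product */
    int64_t  z    = (int64_t)((prod ^ (uint64_t)sign) - (uint64_t)sign);

    printf("%016llx\n", (unsigned long long)z);             /* prints c000000080000000 */
    return 0;
}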
Using SSE this can be done like this
__m128i xh; //(xl2, xh2, xl1, xh1) high is signed, low unsigned
__m128i yl; //(yh2, yl2, yh1, yl1)
__m128i xs = _mm_cmpgt_epi32(_mm_setzero_si128(), xh); // get sign
xs = _mm_shuffle_epi32(xs, 0xA0); // extend sign
__m128i t = _mm_sign_epi32(xh,xh); // abs(xh)
__m128i prod = _mm_mul_epu32(t, yl); // unsigned (xh2*yl2,xh1*yl1)
__m128i inv = _mm_xor_si128(prod,xs); // invert bits if negative
__m128i z = _mm_sub_epi64(inv,xs); // add 1 if negative
This gives the correct result. But I have to do this twice (once when squaring) and it's now a significant fraction of my function. Is there a more efficient way of doing this with SSE4.2, AVX2 (four 128bit products), or even AVX512 (eight 128bit products)?
Maybe there are more efficient ways of doing this than with SIMD? It's a lot of calculations to get the upper word.
Edit: based on the comment by #ElderBug, it looks like the way to do this is not with SIMD but with the mul instruction. For what it's worth, in case anyone wants to see how complicated this is, here is the full working function (I just got it working, so I have not optimized it, but I don't think it's worth it).
void muldws1_sse(__m128i x, __m128i y, __m128i *lo, __m128i *hi) {
__m128i lomask = _mm_set1_epi64x(0xffffffff);
__m128i xh = _mm_shuffle_epi32(x, 0xB1); // x0l, x0h, x1l, x1h
__m128i yh = _mm_shuffle_epi32(y, 0xB1); // y0l, y0h, y1l, y1h
__m128i xs = _mm_cmpgt_epi32(_mm_setzero_si128(), xh);
__m128i ys = _mm_cmpgt_epi32(_mm_setzero_si128(), yh);
xs = _mm_shuffle_epi32(xs, 0xA0);
ys = _mm_shuffle_epi32(ys, 0xA0);
__m128i w0 = _mm_mul_epu32(x, y); // x0l*y0l, x1l*y1l
__m128i w3 = _mm_mul_epi32(xh, yh); // x0h*y0h, x1h*y1h
xh = _mm_sign_epi32(xh,xh);
yh = _mm_sign_epi32(yh,yh);
__m128i w1 = _mm_mul_epu32(x, yh); // x0l*y0h, x1l*y1h
__m128i w2 = _mm_mul_epu32(xh, y); // x0h*y0l, x1h*y1l
__m128i yinv = _mm_xor_si128(w1,ys); // invert bits if negative
w1 = _mm_sub_epi64(yinv,ys); // add 1
__m128i xinv = _mm_xor_si128(w2,xs); // invert bits if negative
w2 = _mm_sub_epi64(xinv,xs); // add 1
__m128i w0l = _mm_and_si128(w0, lomask);
__m128i w0h = _mm_srli_epi64(w0, 32);
__m128i s1 = _mm_add_epi64(w1, w0h); // xl*yh + w0h;
__m128i s1l = _mm_and_si128(s1, lomask); // lo(wl*yh + w0h);
__m128i s1h = _mm_srai_epi64(s1, 32);
__m128i s2 = _mm_add_epi64(w2, s1l); //xh*yl + s1l
__m128i s2l = _mm_slli_epi64(s2, 32);
__m128i s2h = _mm_srai_epi64(s2, 32); //arithmetic shift right
__m128i hi1 = _mm_add_epi64(w3, s1h);
hi1 = _mm_add_epi64(hi1, s2h);
__m128i lo1 = _mm_add_epi64(w0l, s2l);
*hi = hi1;
*lo = lo1;
}
It gets worse. There is no _mm_srai_epi64 intrinsic/instruction until AVX512, so I had to make my own:
static inline __m128i _mm_srai_epi64(__m128i a, int b) {
__m128i sra = _mm_srai_epi32(a,32); // shift counts hard-coded to 32, the only case used above
__m128i srl = _mm_srli_epi64(a,32);
__m128i mask = _mm_set_epi32(-1,0,-1,0);
__m128i out = _mm_blendv_epi8(srl, sra, mask);
return out;
}
My implementation of _mm_srai_epi64 above is incomplete. I think I was using Agner Fog's Vector Class Library. If you look in the file vectori128.h you find
static inline Vec2q operator >> (Vec2q const & a, int32_t b) {
// instruction does not exist. Split into 32-bit shifts
if (b <= 32) {
__m128i bb = _mm_cvtsi32_si128(b); // b
__m128i sra = _mm_sra_epi32(a,bb); // a >> b signed dwords
__m128i srl = _mm_srl_epi64(a,bb); // a >> b unsigned qwords
__m128i mask = _mm_setr_epi32(0,-1,0,-1); // mask for signed high part
return selectb(mask,sra,srl);
}
else { // b > 32
__m128i bm32 = _mm_cvtsi32_si128(b-32); // b - 32
__m128i sign = _mm_srai_epi32(a,31); // sign of a
__m128i sra2 = _mm_sra_epi32(a,bm32); // a >> (b-32) signed dwords
__m128i sra3 = _mm_srli_epi64(sra2,32); // a >> (b-32) >> 32 (second shift unsigned qword)
__m128i mask = _mm_setr_epi32(0,-1,0,-1); // mask for high part containing only sign
return selectb(mask,sign,sra3);
}
}
I found a SIMD solution which is much simpler and does not need signed*unsigned products. I'm no longer convinced that SIMD (at least with AVX2 and AVX512) can't compete with mulx. In some cases SIMD can compete with mulx; the only case I'm aware of is FFT-based multiplication of large numbers.
The trick is to do the unsigned multiplication first and then correct. I learned how to do this from this answer: 32-bit-signed-multiplication-without-using-64-bit-data-type. The correction is simple: for (hi,lo) = x*y, do the unsigned multiplication first and then correct hi like this:
hi -= ((x<0) ? y : 0) + ((y<0) ? x : 0)
This can be done with the SSE4.2 intrinsic _mm_cmpgt_epi64:
void muldws1_sse(__m128i x, __m128i y, __m128i *lo, __m128i *hi) {
muldwu1_sse(x,y,lo,hi);
//hi -= ((x<0) ? y : 0) + ((y<0) ? x : 0);
__m128i xs = _mm_cmpgt_epi64(_mm_setzero_si128(), x);
__m128i ys = _mm_cmpgt_epi64(_mm_setzero_si128(), y);
__m128i t1 = _mm_and_si128(y,xs);
__m128i t2 = _mm_and_si128(x,ys);
*hi = _mm_sub_epi64(*hi,t1);
*hi = _mm_sub_epi64(*hi,t2);
}
The code for the unsigned multiplication is simpler since it does not need mixed signed*unsigned products. Additionally, since it's unsigned, it does not need the arithmetic right shift, which only gets an instruction with AVX512. In fact, the following function only needs SSE2:
void muldwu1_sse(__m128i x, __m128i y, __m128i *lo, __m128i *hi) {
__m128i lomask = _mm_set1_epi64x(0xffffffff);
__m128i xh = _mm_shuffle_epi32(x, 0xB1); // x0l, x0h, x1l, x1h
__m128i yh = _mm_shuffle_epi32(y, 0xB1); // y0l, y0h, y1l, y1h
__m128i w0 = _mm_mul_epu32(x, y); // x0l*y0l, x1l*y1l
__m128i w1 = _mm_mul_epu32(x, yh); // x0l*y0h, x1l*y1h
__m128i w2 = _mm_mul_epu32(xh, y); // x0h*y0l, x1h*y1l
__m128i w3 = _mm_mul_epu32(xh, yh); // x0h*y0h, x1h*y1h
__m128i w0l = _mm_and_si128(w0, lomask); //(*)
__m128i w0h = _mm_srli_epi64(w0, 32);
__m128i s1 = _mm_add_epi64(w1, w0h);
__m128i s1l = _mm_and_si128(s1, lomask);
__m128i s1h = _mm_srli_epi64(s1, 32);
__m128i s2 = _mm_add_epi64(w2, s1l);
__m128i s2l = _mm_slli_epi64(s2, 32); //(*)
__m128i s2h = _mm_srli_epi64(s2, 32);
__m128i hi1 = _mm_add_epi64(w3, s1h);
hi1 = _mm_add_epi64(hi1, s2h);
__m128i lo1 = _mm_add_epi64(w0l, s2l); //(*)
//__m128i lo1 = _mm_mullo_epi64(x,y); //alternative
*hi = hi1;
*lo = lo1;
}
This uses
4x mul_epu32
5x add_epi64
2x shuffle_epi32
2x and
2x srli_epi64
1x slli_epi64
****************
16 instructions
AVX512 has the _mm_mullo_epi64 intrinsic which can calculate lo with one instruction. In this case the alternative can be used (comment the lines with the (*) comment and uncomment the alternative line):
5x mul_epu32
4x add_epi64
2x shuffle_epi32
1x and
2x srli_epi64
****************
14 instructions
To change the code for full-width AVX2, replace _mm with _mm256, si128 with si256, and __m128i with __m256i; for AVX512, replace them with _mm512, si512, and __m512i.
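For sanity-checking the vector versions, a scalar reference using the same correction is handy (a sketch of mine; it assumes a compiler with unsigned __int128, such as GCC or Clang, which is not part of the SSE code above):

#include <stdint.h>

/* Scalar reference for one signed 64x64 -> 128 product, using the same
   "multiply unsigned, then fix the high half" correction:
   hi -= ((x<0) ? y : 0) + ((y<0) ? x : 0). */
static void muldws1_ref(int64_t x, int64_t y, uint64_t *lo, uint64_t *hi)
{
    unsigned __int128 p = (unsigned __int128)(uint64_t)x * (uint64_t)y;
    uint64_t h = (uint64_t)(p >> 64);
    if (x < 0) h -= (uint64_t)y;
    if (y < 0) h -= (uint64_t)x;
    *lo = (uint64_t)p;
    *hi = h;
}

Comparing the two 64-bit lanes stored by muldws1_sse against this for random inputs catches lane mix-ups quickly.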
The right way to think about the throughput limits of integer multiplication using various instructions is in terms of how many "product bits" you can compute per cycle.
mulx produces one 64x64 -> 128 result every cycle; that's 64x64 = 4096 "product bits per cycle"
If you piece together a multiplier on SIMD out of instructions that do 32x32 -> 64 bit multiplies, you need to be able to get four results every cycle to match mulx (4x32x32 = 4096). If there was no arithmetic other than the multiplies, you'd just break even on AVX2. Unfortunately, as you've noticed, there's lots of arithmetic other than the multiplies, so this is a total non-starter on current generation hardware.
I'm writing a C program for a PIC micro-controller which needs to do a very specific exponential function. I need to calculate the following:
A = k * (1 - (p/p0)^0.19029)
k and p0 are constant, so it's all pretty simple apart from finding x^0.19029.
The (p/p0) ratio would always be in the range 0-1.
It works well if I add in math.h and use the power function, except that uses up all of the available 16 kB of program memory. Talk about bloatware! (Rest of program without power function = ~20% flash memory usage; add math.h and power function, =100%).
I'd like the program to do some other things as well. I was wondering if I can write a special case implementation for x^0.19029, maybe involving iteration and some kind of lookup table.
My idea is to generate a look-up table for the function x^0.19029, with perhaps 10-100 values of x in the range 0-1. The code would find a close match, then (somehow) iteratively refine it by re-scaling the lookup table values. However, this is where I get lost because my tiny brain can't visualise the maths involved.
Could this approach work?
Alternatively, I've looked at using Exp(x) and Ln(x), which can be implemented with a Taylor expansion. b^x can then be found with:
b^x = (e^(ln b))^x = e^(x.ln(b))
(See: Wikipedia - Powers via Logarithms)
This looks a bit tricky and complicated to me, though. Am I likely to get the implementation smaller than the compiler's math library, and can I simplify it for my special case (i.e. base in 0-1, exponent always 0.19029)?
Note that RAM usage is OK at the moment, but I've run low on Flash (used for code storage). Speed is not critical. Somebody has already suggested that I use a bigger micro with more flash memory, but that sounds like profligate wastefulness!
[EDIT] I was being lazy when I said "(p/p0) ratio would always be in the range 0-1". Actually it will never reach 0, and I did some calculations last night and decided that in fact a range of 0.3 - 1 would be quite adequate! This means that some of the simpler solutions below should be suitable. Also, the "k" in the above is 44330, and I'd like the error in the final result to be less than 0.1. I guess that means an error in (p/p0)^0.19029 needs to be less than 1/443300, or 2.256e-6.
Use splines. The relevant part of the function is shown in the figure below. It varies approximately like the 5th root, so the problematic zone is close to p/p0 = 0. There is mathematical theory on how to optimally place the knots of a spline to minimize the error (see Carl de Boor: A Practical Guide to Splines). Usually one constructs the spline in B-form ahead of time (using toolboxes such as Matlab's spline toolbox, also written by C. de Boor), then converts it to piecewise-polynomial representation for fast evaluation.
In C. de Boor, PGS, the function g(x) = sqrt(x + 1) is actually taken as an example (Chapter 12, Example II). This is exactly what you need here. The book comes back to this case a few times, since it is admittedly a hard problem for any interpolation scheme due to the infinite derivatives at x = -1. All software from PGS is available for free as PPPACK in netlib, and most of it is also part of SLATEC (also from netlib).
Edit (Removed)
(Multiplying by x once does not significantly help, since it only regularizes the first derivative, while all other derivatives at x = 0 are still infinite.)
Edit 2
My feeling is that optimally constructed splines (following de Boor) will be best (and fastest) for relatively low accuracy requirements. If the accuracy requirements are high (say 1e-8), one may be forced to get back to the algorithms that mathematicians have been researching for centuries. At this point, it may be best to simply download the sources of glibc and copy (provided GPL is acceptable) whatever is in
glibc-2.19/sysdeps/ieee754/dbl-64/e_pow.c
Since we don't have to include the whole math.h, there shouldn't be a problem with memory, but we will only marginally profit from having a fixed exponent.
Edit 3
Here is an adapted version of e_pow.c from netlib, as found by #Joni. This seems to be the grandfather of glibc's more modern implementation mentioned above. The old version has two advantages: (1) It is public domain, and (2) it uses a limited number of constants, which is beneficial if memory is a tight resource (glibc's version defines over 10000 lines of constants!). The following is completely standalone code, which calculates x^0.19029 for 0 <= x <= 1 to double precision (I tested it against Python's power function and found that at most 2 bits differed):
#define __LITTLE_ENDIAN
#ifdef __LITTLE_ENDIAN
#define __HI(x) *(1+(int*)&x)
#define __LO(x) *(int*)&x
#else
#define __HI(x) *(int*)&x
#define __LO(x) *(1+(int*)&x)
#endif
static const double
bp[] = {1.0, 1.5,},
dp_h[] = { 0.0, 5.84962487220764160156e-01,}, /* 0x3FE2B803, 0x40000000 */
dp_l[] = { 0.0, 1.35003920212974897128e-08,}, /* 0x3E4CFDEB, 0x43CFD006 */
zero = 0.0,
one = 1.0,
two = 2.0,
two53 = 9007199254740992.0, /* 0x43400000, 0x00000000 */
/* poly coefs for (3/2)*(log(x)-2s-2/3*s**3 */
L1 = 5.99999999999994648725e-01, /* 0x3FE33333, 0x33333303 */
L2 = 4.28571428578550184252e-01, /* 0x3FDB6DB6, 0xDB6FABFF */
L3 = 3.33333329818377432918e-01, /* 0x3FD55555, 0x518F264D */
L4 = 2.72728123808534006489e-01, /* 0x3FD17460, 0xA91D4101 */
L5 = 2.30660745775561754067e-01, /* 0x3FCD864A, 0x93C9DB65 */
L6 = 2.06975017800338417784e-01, /* 0x3FCA7E28, 0x4A454EEF */
P1 = 1.66666666666666019037e-01, /* 0x3FC55555, 0x5555553E */
P2 = -2.77777777770155933842e-03, /* 0xBF66C16C, 0x16BEBD93 */
P3 = 6.61375632143793436117e-05, /* 0x3F11566A, 0xAF25DE2C */
P4 = -1.65339022054652515390e-06, /* 0xBEBBBD41, 0xC5D26BF1 */
P5 = 4.13813679705723846039e-08, /* 0x3E663769, 0x72BEA4D0 */
lg2 = 6.93147180559945286227e-01, /* 0x3FE62E42, 0xFEFA39EF */
lg2_h = 6.93147182464599609375e-01, /* 0x3FE62E43, 0x00000000 */
lg2_l = -1.90465429995776804525e-09, /* 0xBE205C61, 0x0CA86C39 */
ovt = 8.0085662595372944372e-0017, /* -(1024-log2(ovfl+.5ulp)) */
cp = 9.61796693925975554329e-01, /* 0x3FEEC709, 0xDC3A03FD =2/(3ln2) */
cp_h = 9.61796700954437255859e-01, /* 0x3FEEC709, 0xE0000000 =(float)cp */
cp_l = -7.02846165095275826516e-09, /* 0xBE3E2FE0, 0x145B01F5 =tail of cp_h*/
ivln2 = 1.44269504088896338700e+00, /* 0x3FF71547, 0x652B82FE =1/ln2 */
ivln2_h = 1.44269502162933349609e+00, /* 0x3FF71547, 0x60000000 =24b 1/ln2*/
ivln2_l = 1.92596299112661746887e-08; /* 0x3E54AE0B, 0xF85DDF44 =1/ln2 tail*/
double pow0p19029(double x)
{
double y = 0.19029e+00;
double z,ax,z_h,z_l,p_h,p_l;
double y1,t1,t2,r,s,t,u,v,w;
int i,j,k,n;
int hx,hy,ix,iy;
unsigned lx,ly;
hx = __HI(x); lx = __LO(x);
hy = __HI(y); ly = __LO(y);
ix = hx&0x7fffffff; iy = hy&0x7fffffff;
ax = x;
/* special value of x */
if(lx==0) {
if(ix==0x7ff00000||ix==0||ix==0x3ff00000){
z = ax; /*x is +-0,+-inf,+-1*/
return z;
}
}
s = one; /* s (sign of result -ve**odd) = -1 else = 1 */
double ss,s2,s_h,s_l,t_h,t_l;
n = ((ix)>>20)-0x3ff;
j = ix&0x000fffff;
/* determine interval */
ix = j|0x3ff00000; /* normalize ix */
if(j<=0x3988E) k=0; /* |x|<sqrt(3/2) */
else if(j<0xBB67A) k=1; /* |x|<sqrt(3) */
else {k=0;n+=1;ix -= 0x00100000;}
__HI(ax) = ix;
/* compute ss = s_h+s_l = (x-1)/(x+1) or (x-1.5)/(x+1.5) */
u = ax-bp[k]; /* bp[0]=1.0, bp[1]=1.5 */
v = one/(ax+bp[k]);
ss = u*v;
s_h = ss;
__LO(s_h) = 0;
/* t_h=ax+bp[k] High */
t_h = zero;
__HI(t_h)=((ix>>1)|0x20000000)+0x00080000+(k<<18);
t_l = ax - (t_h-bp[k]);
s_l = v*((u-s_h*t_h)-s_h*t_l);
/* compute log(ax) */
s2 = ss*ss;
r = s2*s2*(L1+s2*(L2+s2*(L3+s2*(L4+s2*(L5+s2*L6)))));
r += s_l*(s_h+ss);
s2 = s_h*s_h;
t_h = 3.0+s2+r;
__LO(t_h) = 0;
t_l = r-((t_h-3.0)-s2);
/* u+v = ss*(1+...) */
u = s_h*t_h;
v = s_l*t_h+t_l*ss;
/* 2/(3log2)*(ss+...) */
p_h = u+v;
__LO(p_h) = 0;
p_l = v-(p_h-u);
z_h = cp_h*p_h; /* cp_h+cp_l = 2/(3*log2) */
z_l = cp_l*p_h+p_l*cp+dp_l[k];
/* log2(ax) = (ss+..)*2/(3*log2) = n + dp_h + z_h + z_l */
t = (double)n;
t1 = (((z_h+z_l)+dp_h[k])+t);
__LO(t1) = 0;
t2 = z_l-(((t1-t)-dp_h[k])-z_h);
/* split up y into y1+y2 and compute (y1+y2)*(t1+t2) */
y1 = y;
__LO(y1) = 0;
p_l = (y-y1)*t1+y*t2;
p_h = y1*t1;
z = p_l+p_h;
j = __HI(z);
i = __LO(z);
/*
* compute 2**(p_h+p_l)
*/
i = j&0x7fffffff;
k = (i>>20)-0x3ff;
n = 0;
if(i>0x3fe00000) { /* if |z| > 0.5, set n = [z+0.5] */
n = j+(0x00100000>>(k+1));
k = ((n&0x7fffffff)>>20)-0x3ff; /* new k for n */
t = zero;
__HI(t) = (n&~(0x000fffff>>k));
n = ((n&0x000fffff)|0x00100000)>>(20-k);
if(j<0) n = -n;
p_h -= t;
}
t = p_l+p_h;
__LO(t) = 0;
u = t*lg2_h;
v = (p_l-(t-p_h))*lg2+t*lg2_l;
z = u+v;
w = v-(z-u);
t = z*z;
t1 = z - t*(P1+t*(P2+t*(P3+t*(P4+t*P5))));
r = (z*t1)/(t1-two)-(w+z*w);
z = one-(r-z);
__HI(z) += (n<<20);
return s*z;
}
Clearly, 50+ years of research have gone into this, so it's probably very hard to do any better. (One has to appreciate that there are 0 loops, only 2 divisions, and only 6 if statements in the whole algorithm!) The reason for this is, again, the behavior at x = 0, where all derivatives diverge, which makes it extremely hard to keep the error under control: I once had a spline representation with 18 knots that was good up to x = 1e-4, with absolute and relative errors < 5e-4 everywhere, but going to x = 1e-5 ruined everything again.
So, unless the requirement to go arbitrarily close to zero is relaxed, I recommend using the adapted version of e_pow.c given above.
Edit 4
Now that we know that the domain 0.3 <= x <= 1 is sufficient, and that we have very low accuracy requirements, Edit 3 is clearly overkill. As #MvG has demonstrated, the function is so well behaved that a polynomial of degree 7 is sufficient to satisfy the accuracy requirements, which can be considered a single spline segment. #MvG's solution minimizes the integral error, which already looks very good.
The question arises: how much better can we still do? It would be interesting to find the polynomial of a given degree that minimizes the maximum error in the interval of interest. The answer is the minimax polynomial, which can be found using Remez' algorithm, as implemented in the Boost library. I like #MvG's idea to clamp the value at x = 1 to 1, which I will do as well. Here is minimax.cpp:
#include <iostream>
#include <iomanip>
#define TARG_PREC 64
#define WORK_PREC (TARG_PREC*2)
#include <boost/multiprecision/cpp_dec_float.hpp>
typedef boost::multiprecision::number<boost::multiprecision::cpp_dec_float<WORK_PREC> > dtype;
using boost::math::pow;
#include <boost/math/tools/remez.hpp>
boost::shared_ptr<boost::math::tools::remez_minimax<dtype> > p_remez;
dtype f(const dtype& x) {
static const dtype one(1), y(0.19029);
return one - pow(one - x, y);
}
void out(const char *descr, const dtype& x, const char *sep="") {
std::cout << descr << boost::math::tools::real_cast<double>(x) << sep << std::endl;
}
int main() {
dtype a(0), b(0.7); // range to optimise over
bool rel_error(false), pin(true);
int orderN(7), orderD(0), skew(0), brake(50);
int prec = 2 + (TARG_PREC * 3010LL)/10000;
std::cout << std::scientific << std::setprecision(prec);
p_remez.reset(new boost::math::tools::remez_minimax<dtype>(
&f, orderN, orderD, a, b, pin, rel_error, skew, WORK_PREC));
out("Max error in interpolated form: ", p_remez->max_error());
p_remez->set_brake(brake);
unsigned i, count(50);
for (i = 0; i < count; ++i) {
std::cout << "Stepping..." << std::endl;
dtype r = p_remez->iterate();
out("Maximum Deviation Found: ", p_remez->max_error());
out("Expected Error Term: ", p_remez->error_term());
out("Maximum Relative Change in Control Points: ", r);
}
boost::math::tools::polynomial<dtype> n = p_remez->numerator();
for(i = n.size(); i--; ) {
out("", n[i], ",");
}
}
Since all parts of boost that we use are header-only, simply build with:
c++ -O3 -I<path/to/boost/headers> minimax.cpp -o minimax
We finally get the coefficients which, after multiplication by 44330, are:
24538.3409, -42811.1497, 34300.7501, -11284.1276, 4564.5847, 3186.7541, 8442.5236, 0.
The following error plot demonstrates that this is really the best possible degree-7 polynomial approximation, since all extrema are of equal magnitude (0.06659):
Should the requirements ever change (while still keeping well away from 0!), the C++ program above can be simply adapted to spit out the new optimal polynomial approximation.
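For completeness, here is how I would evaluate those coefficients (a sketch based on my reading of the output order: highest degree first, in the variable (1 - x), with the pinned constant term last, mirroring #MvG's Horner form further down):

/* Sketch: Horner evaluation of the degree-7 minimax coefficients above;
   returns approximately 44330*(1 - x^0.19029) for x in roughly [0.3, 1].
   The coefficient order (degree 7 down to the pinned constant 0) is my
   assumption from the print loop in minimax.cpp. */
double A_minimax(double x)
{
    double u = 1.0 - x;
    double fx;
    fx =        24538.3409;
    fx = fx*u - 42811.1497;
    fx = fx*u + 34300.7501;
    fx = fx*u - 11284.1276;
    fx = fx*u +  4564.5847;
    fx = fx*u +  3186.7541;
    fx = fx*u +  8442.5236;
    fx = fx*u +     0.0;     /* pinned so the result is exactly 0 at x = 1 */
    return fx;
}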
Instead of a lookup table, I'd use a polynomial approximation:
1 - x^0.19029 ≈ - 1073365.91783 x^15 + 8354695.40833 x^14 - 29422576.6529 x^13 + 61993794.537 x^12 - 87079891.4988 x^11 + 86005723.842 x^10 - 61389954.7459 x^9 + 32053170.1149 x^8 - 12253383.4372 x^7 + 3399819.97536 x^6 - 672003.142815 x^5 + 91817.6782072 x^4 - 8299.75873768 x^3 + 469.530204564 x^2 - 16.6572179869 x + 0.722044145701
Or in code:
double f(double x) {
double fx;
fx = - 1073365.91783;
fx = fx*x + 8354695.40833;
fx = fx*x - 29422576.6529;
fx = fx*x + 61993794.537;
fx = fx*x - 87079891.4988;
fx = fx*x + 86005723.842;
fx = fx*x - 61389954.7459;
fx = fx*x + 32053170.1149;
fx = fx*x - 12253383.4372;
fx = fx*x + 3399819.97536;
fx = fx*x - 672003.142815;
fx = fx*x + 91817.6782072;
fx = fx*x - 8299.75873768;
fx = fx*x + 469.530204564;
fx = fx*x - 16.6572179869;
fx = fx*x + 0.722044145701;
return fx;
}
I computed this in sage using the least squares approach:
f(x) = 1-x^(19029/100000) # your function
d = 16 # number of terms, i.e. degree + 1
A = matrix(d, d, lambda r, c: integrate(x^r*x^c, (x, 0, 1)))
b = vector([integrate(x^r*f(x), (x, 0, 1)) for r in range(d)])
A.solve_right(b).change_ring(RDF)
Here is a plot of the error this will entail:
Blue is the error from my 16 term polynomial, while red is the error you'd get from piecewise linear interpolation with 16 equidistant values. As you can see, both errors are quite small for most parts of the range, but will become really huge close to x=0. I actually clipped the plot there. If you can somehow narrow the range of possible values, you could use that as the domain for the integration, and obtain an even better fit for the relevant range. At the cost of worse fit outside, of course. You could also increase the number of terms to obtain a closer fit, although that might also lead to higher oscillations.
I guess you can also combine this approach with the one Stefan posted: use his to split the domain into several parts, then use mine to find a close low degree polynomial for each part.
Update
Since you updated the specification of your question, with regard to both the domain and the error, here is a minimal solution to fit those requirements:
44330(1 - x^0.19029) ≈ 23024.9160933(1-x)^7 - 39408.6473636(1-x)^6 + 31379.9086193(1-x)^5 - 10098.7031260(1-x)^4 + 4339.44098317(1-x)^3 + 3202.85705860(1-x)^2 + 8442.42528906(1-x)
double f(double x) {
double fx, x1 = 1. - x;
fx = + 23024.9160933;
fx = fx*x1 - 39408.6473636;
fx = fx*x1 + 31379.9086193;
fx = fx*x1 - 10098.7031260;
fx = fx*x1 + 4339.44098317;
fx = fx*x1 + 3202.85705860;
fx = fx*x1 + 8442.42528906;
fx = fx*x1;
return fx;
}
I integrated x from 0.293 to 1 or equivalently 1 - x from 0 to 0.707 to keep the worst oscillations outside the relevant domain. I also omitted the constant term, to ensure an exact result at x=1. The maximal error for the range [0.3, 1] now occurs at x=0.3260 and amounts to 0.0972 < 0.1. Here is an error plot, which of course has bigger absolute errors than the one above due to the scale factor k=44330 which has been included here.
I can also state that the first three derivatives of the function will have constant sign over the range in question, so the function is monotonic, convex, and in general pretty well-behaved.
Not meant to answer the question, but it illustrates the Road Not To Go, and thus may be helpful:
This quick-and-dirty C code calculates pow(i, 0.19029) for i from 0.000 to 1.000 in steps of 0.01. The first half displays the error, in percent, when stored as 1/65536ths (as that theoretically provides slightly over 4 decimal digits of precision). The second half shows both interpolated and calculated values in steps of 0.001, and the difference between these two.
It kind of looks okay if you read from the bottom up, all 100s and 99.99s there, but about the first 20 values from 0.001 to 0.020 are worthless.
#include <stdio.h>
#include <math.h>
float powers[102];
int main (void)
{
int i, as_int;
double as_real, low, high, delta, approx, calcd, diff;
printf ("calculating and storing:\n");
for (i=0; i<=101; i++)
{
as_real = pow(i/100.0, 0.19029);
as_int = (int)round(65536*as_real);
powers[i] = as_real;
diff = 100*as_real/(as_int/65536.0);
printf ("%.5f %.5f %.5f ~ %.3f\n", i/100.0, as_real, as_int/65536.0, diff);
}
printf ("\n");
printf ("-- interpolating in 1/10ths:\n");
for (i=0; i<1000; i++)
{
as_real = i/1000.0;
low = powers[i/10];
high = powers[1+i/10];
delta = (high-low)/10.0;
approx = low + (i%10)*delta;
calcd = pow(as_real, 0.19029);
diff = 100.0*approx/calcd;
printf ("%.5f ~ %.5f = %.5f +/- %.5f%%\n", as_real, approx, calcd, diff);
}
return 0;
}
You can find a complete, correct standalone implementation of pow in fdlibm. It's about 200 lines of code, about half of which deal with special cases. If you remove the code that deals with special cases you're not interested in I doubt you'll have problems including it in your program.
LutzL's answer is a really good one: calculate your power as (x^1.52232)^(1/8), computing the inner power by spline interpolation or another method. The eighth root deals with the pathological non-differentiable behavior near zero. I took the liberty of mocking up an implementation this way. The code below, however, only does a linear interpolation to do x^1.52232, and you'd need to get the full coefficients using your favorite numerical mathematics tools. You'll be adding scarcely 40 lines of code to get your needed power, plus however many knots you choose to use for your spline, as dictated by your required accuracy.
Don't be scared by the #include <math.h>; it's just for benchmarking the code.
#include <stdio.h>
#include <math.h>
double my_sqrt(double x) {
/* Newton's method for a square root. */
int i = 0;
double res = 1.0;
if (x > 0) {
for (i = 0; i < 10; i++) {
res = 0.5 * (res + x / res);
}
} else {
res = 0.0;
}
return res;
}
double my_152232(double x) {
/* Cubic spline interpolation for x ** 1.52232. */
int i = 0;
double res = 0.0;
/* coefs[i] will give the cubic polynomial coefficients between x =
i and x = i+1. Out of laziness, the below numbers give only a
linear interpolation. You'll need to do some work and research
to get the spline coefficients. */
double coefs[3][4] = {{0.0, 1.0, 0.0, 0.0},
{-0.872526, 1.872526, 0.0, 0.0},
{-2.032706, 2.452616, 0.0, 0.0}};
if ((x >= 0) && (x < 3.0)) {
i = (int) x;
/* Horner's method cubic. */
res = ((coefs[i][3] * x + coefs[i][2]) * x + coefs[i][1]) * x
+ coefs[i][0];
} else if (x >= 3.0) {
/* Scaled x ** 1.5 once you go off the spline. */
res = 1.024824 * my_sqrt(x * x * x);
}
return res;
}
double my_019029(double x) {
return my_sqrt(my_sqrt(my_sqrt(my_152232(x))));
}
int main() {
int i;
double x = 0.0;
for (i = 0; i < 1000; i++) {
x = 1e-2 * i;
printf("%f %f %f \n", x, my_019029(x), pow(x, 0.19029));
}
return 0;
}
EDIT: If you're just interested in a small region like [0,1], even simpler is to peel off one sqrt(x) and compute x^1.02232, which is quite well behaved, using a Taylor series:
double my_152232(double x) {
double part_050000 = my_sqrt(x);
double part_102232 = 1.02232 * x + 0.0114091 * x * x - 3.718147e-3 * x * x * x;
return part_102232 * part_050000;
}
This gets you within 1% of the exact power for approximately [0.1,6], though getting the singularity exactly right is always a challenge. Even so, this three-term Taylor series gets you within 2.3% for x = 0.001.
Consider the following two pieces of code; the first is the C version:
void __attribute__((no_inline)) proj(uint8_t * line, uint16_t length)
{
uint16_t i;
int16_t tmp;
for(i=HPHD_MARGIN; i<length-HPHD_MARGIN; i++) {
tmp = line[i-3] - 4*line[i-2] + 5*line[i-1] - 5*line[i+1] + 4*line[i+2] - line[i+3];
hphd_temp[i]=ABS(tmp);
}
}
The second is the same function (except for the border handling) using NEON intrinsics:
void __attribute__((no_inline)) proj_neon(uint8_t * line, uint16_t length)
{
int i;
uint8x8_t b0b7, b8b15, p1p8,p2p9,p4p11,p5p12,p6p13, m4, m5;
uint16x8_t result;
m4 = vdup_n_u8(4);
m5 = vdup_n_u8(5);
b0b7 = vld1_u8(line);
for(i = 0; i < length - 16; i+=8) {
b8b15 = vld1_u8(line + i + 8);
p1p8 = vext_u8(b0b7,b8b15, 1);
p2p9 = vext_u8(b0b7,b8b15, 2);
p4p11 = vext_u8(b0b7,b8b15, 4);
p5p12 = vext_u8(b0b7,b8b15, 5);
p6p13 = vext_u8(b0b7,b8b15, 6);
result = vsubl_u8(b0b7, p6p13); // p[-3] - p[3]
result = vmlal_u8(result, p2p9, m5); // +5 * p[-1];
result = vmlal_u8(result, p5p12, m4);// +4 * p[2];
result = vmlsl_u8(result, p1p8, m4); //-4 * p[-2];
result = vmlsl_u8(result, p4p11, m5);// -5 * p[1];
vst1q_s16(hphd_temp + i + 3, vabsq_s16(vreinterpretq_s16_u16(result)));
b0b7 = b8b15;
}
/* todo : remaining pixel */
}
I am disappointed by the performance gain: it is around 10-15%. If I look at the generated assembly:
The C version is compiled into a 108-instruction loop
The NEON version is compiled into a 72-instruction loop
But one iteration of the NEON loop computes 8 times as much data as one iteration of the C loop, so a dramatic improvement should be seen.
Do you have any explanation for the small difference between the two versions?
Additional details:
Test data is a 10 Mpix image; computation time is around 2 seconds for the C version.
CPU: ARM Cortex-A8
I'm going to take a wild guess and say that data caching is the reason you don't see the big performance gain you are expecting. I don't know whether your chipset has a data cache or at what level, but if the data spans cache lines, is poorly aligned, or the CPU is doing other things at the same time (interrupts, threads, etc.), that could also muddy your results.
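If it does turn out to be memory-bound, one cheap experiment is a software prefetch a little ahead of the current load (a sketch of mine; __builtin_prefetch is a GCC/Clang builtin that emits pld on ARM, and the 64-byte look-ahead distance is a guess that would need tuning):

#include <stdint.h>

/* Minimal, standalone illustration of the prefetch pattern; in the real
   kernel the hint would go inside the NEON loop, e.g.
   __builtin_prefetch(line + i + 64); just before the vld1_u8 of b8b15. */
void copy_with_prefetch(const uint8_t *src, uint8_t *dst, int n)
{
    for (int i = 0; i < n; i++) {
        if (i + 64 < n)
            __builtin_prefetch(src + i + 64);   /* read hint, roughly one cache line ahead */
        dst[i] = src[i];
    }
}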