Getting started with Intel x86 SSE SIMD instructions (C)

I want to learn more about using the SSE.
What ways are there to learn, besides the obvious reading the Intel® 64 and IA-32 Architectures Software Developer's Manuals?
Mainly I'm interested in working with the GCC X86 Built-in Functions.

First, I don't recommend using the built-in functions - they are not portable (across compilers for the same arch).
Use intrinsics: GCC does a wonderful job optimizing SSE intrinsics into even more optimized code. You can always have a peek at the assembly and see how to use SSE to its full potential.
Intrinsics are easy - just like normal function calls:
#include <immintrin.h> // portable to all x86 compilers
int main()
{
    __m128 vector1 = _mm_set_ps(4.0, 3.0, 2.0, 1.0); // high element first, opposite of C array order. Use _mm_setr_ps if you want "little endian" element order in the source.
    __m128 vector2 = _mm_set_ps(7.0, 8.0, 9.0, 0.0);
    __m128 sum = _mm_add_ps(vector1, vector2); // result = vector1 + vector2
    vector1 = _mm_shuffle_ps(vector1, vector1, _MM_SHUFFLE(0,1,2,3));
    // vector1 is now (1, 2, 3, 4) (above shuffle reversed it)
    return 0;
}
Use _mm_load_ps or _mm_loadu_ps to load data from arrays.
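For example, a minimal sketch of loading two plain C arrays, adding them, and storing the result (the array and function names here are just for illustration):
#include <immintrin.h>

float in0[4] = {1.0f, 2.0f, 3.0f, 4.0f};
float in1[4] = {5.0f, 6.0f, 7.0f, 8.0f};
float out[4];

void add_arrays(void)
{
    /* _mm_loadu_ps works for any alignment;
     * _mm_load_ps requires a 16-byte aligned pointer. */
    __m128 a = _mm_loadu_ps(in0);
    __m128 b = _mm_loadu_ps(in1);
    _mm_storeu_ps(out, _mm_add_ps(a, b)); // out = {6, 8, 10, 12}
}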
Of course there are way more options; SSE is really powerful and, in my opinion, relatively easy to learn.
See also https://stackoverflow.com/tags/sse/info for some links to guides.

Since you asked for resources:
A practical guide to using SSE with C++: Good conceptual overview on how to use SSE effectively, with examples.
MSDN Listing of Compiler Intrinsics: Comprehensive reference for all your intrinsic needs. It's MSDN, but pretty much all the intrinsics listed here are supported by GCC and ICC as well.
Christopher Wright's SSE Page: Quick reference on the meanings of the SSE opcodes. I guess the Intel Manuals can serve the same function, but this is faster.
It's probably best to write most of your code in intrinsics, but do check the objdump of your compiler's output to make sure that it's producing efficient code. SIMD code generation is still a fairly new technology and it's very possible that the compiler might get it wrong in some cases.

I find Dr. Agner Fog's research & optimization guides very valuable! He also has some libraries & testing tools that I have not tried yet.
http://www.agner.org/optimize/

Step 1: write some assembly manually
I recommend that you first try to write your own assembly manually to see and control exactly what is happening when you start learning.
Then the question becomes how to observe what is happening in the program, and the answers are:
GDB
use the C standard library to print and assert things
Using the C standard library yourself requires a little bit of work, but nothing much. I have for example done this work nicely for you on Linux in the following files of my test setup:
lkmc.h
lkmc.c
lkmc/x86_64.h
Using those helpers, I then start playing around with the basics, such as:
load and store data between memory and SSE registers
add integers and floating point numbers of different sizes
assert that the results are what I expect
addpd.S
#include <lkmc.h>
LKMC_PROLOGUE
.data
.align 16
addps_input0: .float 1.5, 2.5, 3.5, 4.5
addps_input1: .float 5.5, 6.5, 7.5, 8.5
addps_expect: .float 7.0, 9.0, 11.0, 13.0
addpd_input0: .double 1.5, 2.5
addpd_input1: .double 5.5, 6.5
addpd_expect: .double 7.0, 9.0
.bss
.align 16
output: .skip 16
.text
/* 4x 32-bit */
movaps addps_input0, %xmm0
movaps addps_input1, %xmm1
addps %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, addps_expect, $0x10)
/* 2x 64-bit */
movaps addpd_input0, %xmm0
movaps addpd_input1, %xmm1
addpd %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, addpd_expect, $0x10)
LKMC_EPILOGUE
GitHub upstream.
paddq.S
#include <lkmc.h>
LKMC_PROLOGUE
.data
.align 16
input0: .long 0xF1F1F1F1, 0xF2F2F2F2, 0xF3F3F3F3, 0xF4F4F4F4
input1: .long 0x12121212, 0x13131313, 0x14141414, 0x15151515
paddb_expect: .long 0x03030303, 0x05050505, 0x07070707, 0x09090909
paddw_expect: .long 0x04030403, 0x06050605, 0x08070807, 0x0A090A09
paddd_expect: .long 0x04040403, 0x06060605, 0x08080807, 0x0A0A0A09
paddq_expect: .long 0x04040403, 0x06060606, 0x08080807, 0x0A0A0A0A
.bss
.align 16
output: .skip 16
.text
movaps input1, %xmm1
/* 16x 8bit */
movaps input0, %xmm0
paddb %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, paddb_expect, $0x10)
/* 8x 16-bit */
movaps input0, %xmm0
paddw %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, paddw_expect, $0x10)
/* 4x 32-bit */
movaps input0, %xmm0
paddd %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, paddd_expect, $0x10)
/* 2x 64-bit */
movaps input0, %xmm0
paddq %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, paddq_expect, $0x10)
LKMC_EPILOGUE
GitHub upstream.
Step 2: write some intrinsics
For production code, however, you will likely want to use the pre-existing intrinsics instead of raw assembly, as mentioned at: https://stackoverflow.com/a/1390802/895245
So now I try to convert the previous examples into more or less equivalent C code with intrinsics.
addpq.c
#include <assert.h>
#include <string.h>
#include <x86intrin.h>
float global_input0[] __attribute__((aligned(16))) = {1.5f, 2.5f, 3.5f, 4.5f};
float global_input1[] __attribute__((aligned(16))) = {5.5f, 6.5f, 7.5f, 8.5f};
float global_output[4] __attribute__((aligned(16)));
float global_expected[] __attribute__((aligned(16))) = {7.0f, 9.0f, 11.0f, 13.0f};
int main(void) {
    /* 32-bit add (addps). */
    {
        __m128 input0 = _mm_set_ps(1.5f, 2.5f, 3.5f, 4.5f);
        __m128 input1 = _mm_set_ps(5.5f, 6.5f, 7.5f, 8.5f);
        __m128 output = _mm_add_ps(input0, input1);
        /* _mm_extract_ps returns int instead of float:
         * * https://stackoverflow.com/questions/5526658/intel-sse-why-does-mm-extract-ps-return-int-instead-of-float
         * * https://stackoverflow.com/questions/3130169/how-to-convert-a-hex-float-to-a-float-in-c-c-using-mm-extract-ps-sse-gcc-inst
         * so we must use _MM_EXTRACT_FLOAT instead.
         */
        float f;
        _MM_EXTRACT_FLOAT(f, output, 3);
        assert(f == 7.0f);
        _MM_EXTRACT_FLOAT(f, output, 2);
        assert(f == 9.0f);
        _MM_EXTRACT_FLOAT(f, output, 1);
        assert(f == 11.0f);
        _MM_EXTRACT_FLOAT(f, output, 0);
        assert(f == 13.0f);
        /* And we also have _mm_cvtss_f32 + _mm_shuffle_ps. */
        assert(_mm_cvtss_f32(output) == 13.0f);
        assert(_mm_cvtss_f32(_mm_shuffle_ps(output, output, 1)) == 11.0f);
        assert(_mm_cvtss_f32(_mm_shuffle_ps(output, output, 2)) == 9.0f);
        assert(_mm_cvtss_f32(_mm_shuffle_ps(output, output, 3)) == 7.0f);
    }
    /* Now from memory. */
    {
        __m128 *input0 = (__m128 *)global_input0;
        __m128 *input1 = (__m128 *)global_input1;
        _mm_store_ps(global_output, _mm_add_ps(*input0, *input1));
        assert(!memcmp(global_output, global_expected, sizeof(global_output)));
    }
    /* 64-bit add (addpd). */
    {
        __m128d input0 = _mm_set_pd(1.5, 2.5);
        __m128d input1 = _mm_set_pd(5.5, 6.5);
        __m128d output = _mm_add_pd(input0, input1);
        /* OK, and this is how we get the doubles out:
         * with _mm_cvtsd_f64 + _mm_unpackhi_pd
         * https://stackoverflow.com/questions/19359372/mm-cvtsd-f64-analogon-for-higher-order-floating-point
         */
        assert(_mm_cvtsd_f64(output) == 9.0);
        assert(_mm_cvtsd_f64(_mm_unpackhi_pd(output, output)) == 7.0);
    }
    return 0;
}
GitHub upstream.
paddq.c
#include <assert.h>
#include <inttypes.h>
#include <string.h>
#include <x86intrin.h>
uint32_t global_input0[] __attribute__((aligned(16))) = {1, 2, 3, 4};
uint32_t global_input1[] __attribute__((aligned(16))) = {5, 6, 7, 8};
uint32_t global_output[4] __attribute__((aligned(16)));
uint32_t global_expected[] __attribute__((aligned(16))) = {6, 8, 10, 12};
int main(void) {
    /* 32-bit add hello world. */
    {
        __m128i input0 = _mm_set_epi32(1, 2, 3, 4);
        __m128i input1 = _mm_set_epi32(5, 6, 7, 8);
        __m128i output = _mm_add_epi32(input0, input1);
        /* _mm_extract_epi32 mentioned at:
         * https://stackoverflow.com/questions/12495467/how-to-store-the-contents-of-a-m128d-simd-vector-as-doubles-without-accessing/56404421#56404421 */
        assert(_mm_extract_epi32(output, 3) == 6);
        assert(_mm_extract_epi32(output, 2) == 8);
        assert(_mm_extract_epi32(output, 1) == 10);
        assert(_mm_extract_epi32(output, 0) == 12);
    }
    /* Now from memory. */
    {
        __m128i *input0 = (__m128i *)global_input0;
        __m128i *input1 = (__m128i *)global_input1;
        _mm_store_si128((__m128i *)global_output, _mm_add_epi32(*input0, *input1));
        assert(!memcmp(global_output, global_expected, sizeof(global_output)));
    }
    /* Now a bunch of other sizes. */
    {
        __m128i input0 = _mm_set_epi32(0xF1F1F1F1, 0xF2F2F2F2, 0xF3F3F3F3, 0xF4F4F4F4);
        __m128i input1 = _mm_set_epi32(0x12121212, 0x13131313, 0x14141414, 0x15151515);
        __m128i output;
        /* 8-bit integers (paddb) */
        output = _mm_add_epi8(input0, input1);
        assert(_mm_extract_epi32(output, 3) == 0x03030303);
        assert(_mm_extract_epi32(output, 2) == 0x05050505);
        assert(_mm_extract_epi32(output, 1) == 0x07070707);
        assert(_mm_extract_epi32(output, 0) == 0x09090909);
        /* 16-bit integers (paddw) */
        output = _mm_add_epi16(input0, input1);
        assert(_mm_extract_epi32(output, 3) == 0x04030403);
        assert(_mm_extract_epi32(output, 2) == 0x06050605);
        assert(_mm_extract_epi32(output, 1) == 0x08070807);
        assert(_mm_extract_epi32(output, 0) == 0x0A090A09);
        /* 32-bit integers (paddd) */
        output = _mm_add_epi32(input0, input1);
        assert(_mm_extract_epi32(output, 3) == 0x04040403);
        assert(_mm_extract_epi32(output, 2) == 0x06060605);
        assert(_mm_extract_epi32(output, 1) == 0x08080807);
        assert(_mm_extract_epi32(output, 0) == 0x0A0A0A09);
        /* 64-bit integers (paddq) */
        output = _mm_add_epi64(input0, input1);
        assert(_mm_extract_epi32(output, 3) == 0x04040404);
        assert(_mm_extract_epi32(output, 2) == 0x06060605);
        assert(_mm_extract_epi32(output, 1) == 0x08080808);
        assert(_mm_extract_epi32(output, 0) == 0x0A0A0A09);
    }
    return 0;
}
GitHub upstream.
Step 3: go and optimize some code and benchmark it
The final step, and the most important and hardest one, is of course to actually use the intrinsics to make your code fast, and then to benchmark your improvement.
Doing so will likely require you to learn a bit about the x86 microarchitecture, which I don't know myself. CPU vs IO bound will likely be one of the things that comes up: What do the terms "CPU bound" and "I/O bound" mean?
As mentioned at: https://stackoverflow.com/a/12172046/895245 this will almost inevitably involve reading Agner Fog's documentation, which appears to be better than anything Intel itself has published.
Hopefully however steps 1 and 2 will serve as a basis to at least experiment with functional non-performance aspects and quickly see what instructions are doing.
TODO: produce a minimal interesting example of such optimization here.
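Until then, one possible minimal sketch of the kind of comparison meant: a scalar array add timed against an SSE intrinsics version with clock(). The names and sizes are arbitrary, and with -O3 GCC may auto-vectorize the scalar loop as well, so treat this only as a template for your own measurements:
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>

#define N (1 << 16)
#define REPS 10000

static float a[N] __attribute__((aligned(16)));
static float b[N] __attribute__((aligned(16)));
static float c[N] __attribute__((aligned(16)));

static void add_scalar(void) {
    for (int i = 0; i < N; ++i)
        c[i] = a[i] + b[i];
}

static void add_sse(void) {
    for (int i = 0; i < N; i += 4)
        _mm_store_ps(c + i, _mm_add_ps(_mm_load_ps(a + i), _mm_load_ps(b + i)));
}

int main(void) {
    clock_t t = clock();
    for (int r = 0; r < REPS; ++r)
        add_scalar();
    printf("scalar: %g s\n", (double)(clock() - t) / CLOCKS_PER_SEC);
    t = clock();
    for (int r = 0; r < REPS; ++r)
        add_sse();
    printf("sse:    %g s\n", (double)(clock() - t) / CLOCKS_PER_SEC);
    return 0;
}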

You can use the SIMD-Visualiser to graphically visualize and animate the operations. It greatly helps with understanding how the data lanes are processed.

Related

movsd from memory to xmm0 in c x86-64 jit

I'm attempting to write a small x86-64 JIT, and I'm a little over my head in a few places.
I'm trying to JIT a simple function that assigns the value of a float into the xmm0 register and then returns it, but I am unsure of how I should go about encoding the arguments to the movsd call.
Any help would be greatly appreciated.
/* main.c */
#include <stdio.h>
#include <sys/mman.h>
#define xmm(n) (n)
typedef double(*fn)();
fn jit(){
    char* memory = mmap(NULL,
                        4096,
                        PROT_READ|PROT_WRITE|PROT_EXEC,
                        MAP_PRIVATE|MAP_ANONYMOUS,
                        -1, 0);
    int i=0;
    float myfloat = 3.1f;
    memory[i++] = 0x48;               /* REX.W */
    memory[i++] = 0xf2;               /*******************/
    memory[i++] = 0x0f;               /* MOVSD xmm0, m64  */
    memory[i++] = 0x10;               /*******************/
    memory[i++] = 0x47 | xmm(0) << 3; /* Not 100% sure this is correct */
    memory[i++] = 0;                  /* what goes here to load myfloat into xmm0? */
    memory[i++] = 0xc3;               /* RET */
    return (fn) memory;
}
int main(){
    fn f = jit();
    printf("result: %f\n", (*f)());
    return 0;
}
SSE instructions generally don't support immediates except for some rare instructions with a one-byte immediate to control their operation. Thus you need to:
store myfloat to some nearby memory area
generate a memory operand that references this area
Both steps are easy. For the first step, I'd simply use the beginning of memory and let the code start right afterwards. Note that in this case, you need to make sure to return a pointer to the beginning of the function, not the beginning of memory. Other solutions are possible. Just make sure that myfloat is stored within ±2 GiB of the code.
To generate the operand, revisit the Intel manuals. The addressing mode you want is a 32-bit RIP-relative operand. This is generated with mod = 0, r/m = 5. The displacement is a signed 32-bit number that is added to the value of RIP right at the end of the instruction (this is where the +4 comes from, as we have to factor in the length of the displacement).
Thus we have something like:
memory[i++] = 0xf2; /*******************/
memory[i++] = 0x0f; /* MOVSD xmm0, m64 */
memory[i++] = 0x10; /*******************/
memory[i++] = 0005 | xmm(0) << 3; /* mod = 0, r/m = 5: [rip + disp32] */
*(int *)(memory + i) = addr_of_myfloat - (memory + i + 4); /* disp32 = target - end of instruction */
i += 4;
memory[i++] = 0xc3; /* RET */
Note that the REX prefix is not needed here.
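Putting the pieces together, a sketch of a complete program along these lines might look like the following. It stores the double at the start of the mapping, starts the code 8 bytes in, and returns a pointer to the code rather than to the mapping; like the question, it omits error checking:
#include <stdio.h>
#include <sys/mman.h>

#define xmm(n) (n)
typedef double (*fn)(void);

fn jit(void)
{
    char *memory = mmap(NULL, 4096,
                        PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    double myfloat = 3.1;

    /* Step 1: store the constant at the beginning of the mapping. */
    *(double *)memory = myfloat;
    char *addr_of_myfloat = memory;

    /* Step 2: the code starts right after the 8-byte constant. */
    char *code = memory + 8;
    int i = 0;
    code[i++] = 0xf2;               /*******************/
    code[i++] = 0x0f;               /* MOVSD xmm0, m64  */
    code[i++] = 0x10;               /*******************/
    code[i++] = 0005 | xmm(0) << 3; /* mod = 0, r/m = 5: [rip + disp32] */
    *(int *)(code + i) = addr_of_myfloat - (code + i + 4); /* disp32 = target - end of instruction */
    i += 4;
    code[i++] = 0xc3;               /* RET */

    return (fn)code;                /* pointer to the code, not to the mapping */
}

int main(void)
{
    fn f = jit();
    printf("result: %f\n", f());    /* should print 3.100000 */
    return 0;
}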

Testing equality between two __m128i variables

If I want to do a bitwise equality test between two __m128i variables, am I required to use an SSE instruction or can I use ==? If not, which SSE instruction should I use?
Although using _mm_movemask_epi8 is one solution, if you have a processor with SSE4.1 I think a better solution is to use an instruction which sets the zero or carry flag in the FLAGS register. This saves a test or cmp instruction.
To do this you could write:
if (_mm_test_all_ones(_mm_cmpeq_epi8(v1, v2))) {
    // v1 == v2
}
Edit: as Paul R pointed out _mm_test_all_ones generates two instructions: pcmpeqd and ptest. With _mm_cmpeq_epi8 that's three instructions total. Here's a better solution which only uses two instructions in total:
__m128i neq = _mm_xor_si128(v1, v2);
if (_mm_test_all_zeros(neq, neq)) {
    // v1 == v2
}
This generates
pxor %xmm1, %xmm0
ptest %xmm0, %xmm0
You can use a compare and then extract a mask from the comparison result:
__m128i vcmp = _mm_cmpeq_epi8(v0, v1); // PCMPEQB
uint16_t vmask = _mm_movemask_epi8(vcmp); // PMOVMSKB
if (vmask == 0xffff)
{
    // v0 == v1
}
This works with SSE2 and later.
As noted by @Zboson, if you have SSE 4.1 then you can do it like this, which may be slightly more efficient, as it's two SSE instructions and then a test on a flag (ZF):
__m128i vcmp = _mm_xor_si128(v0, v1); // PXOR
if (_mm_testz_si128(vcmp, vcmp)) // PTEST (requires SSE 4.1)
{
    // v0 == v1
}
FWIW I just benchmarked both of these implementations on a Haswell Core i7 using clang to compile the test harness and the timing results were very similar - the SSE4 implementation appears to be very slightly faster but it's hard to measure the difference.
Consider using the SSE4.1 instruction ptest:
if (_mm_testc_si128(v0, v1)) { /* equal */ }
else { /* not equal */ }
Note, however, what this intrinsic actually does: _mm_testc_si128 computes the bitwise AND-NOT (~v0 & v1) and returns 1 if the result is all zeros, i.e. it only checks that every bit set in v1 is also set in v0. For a true equality test, combine ptest with an XOR as in the _mm_testz_si128 version shown above.

My attempt to optimize memset on a 64bit machine takes more time than standard implementation. Can someone please explain why?

(machine is x86 64 bit running SL6)
I was trying to see if I can optimize memset on my 64 bit machine. As per my understanding, memset goes byte by byte and sets the value. I assumed that if I do it in units of 64 bits, it would be faster. But somehow it takes more time. Can someone take a look at my code and suggest why?
/* Code */
#include <stdio.h>
#include <time.h>
#include <stdint.h>
#include <string.h>
void memset8(unsigned char *dest, unsigned char val, uint32_t count)
{
    while (count--)
        *dest++ = val;
}
void memset32(uint32_t *dest, uint32_t val, uint32_t count)
{
    while (count--)
        *dest++ = val;
}
void
memset64(uint64_t *dest, uint64_t val, uint32_t count)
{
    while (count--)
        *dest++ = val;
}
#define CYCLES 1000000000
int main()
{
    clock_t start, end;
    double total;
    uint64_t loop;
    uint64_t val;
    /* memset 32 */
    start = clock();
    for (loop = 0; loop < CYCLES; loop++) {
        val = 0xDEADBEEFDEADBEEF;
        memset32((uint32_t*)&val, 0, 2);
    }
    end = clock();
    total = (double)(end-start)/CLOCKS_PER_SEC;
    printf("Timetaken memset32 %g\n", total);
    /* memset 64 */
    start = clock();
    for (loop = 0; loop < CYCLES; loop++) {
        val = 0xDEADBEEFDEADBEEF;
        memset64(&val, 0, 1);
    }
    end = clock();
    total = (double)(end-start)/CLOCKS_PER_SEC;
    printf("Timetaken memset64 %g\n", total);
    /* memset 8 */
    start = clock();
    for (loop = 0; loop < CYCLES; loop++) {
        val = 0xDEADBEEFDEADBEEF;
        memset8((unsigned char*)&val, 0, 8);
    }
    end = clock();
    total = (double)(end-start)/CLOCKS_PER_SEC;
    printf("Timetaken memset8 %g\n", total);
    /* memset */
    start = clock();
    for (loop = 0; loop < CYCLES; loop++) {
        val = 0xDEADBEEFDEADBEEF;
        memset(&val, 0, 8);
    }
    end = clock();
    total = (double)(end-start)/CLOCKS_PER_SEC;
    printf("Timetaken memset %g\n", total);
    printf("-----------------------------------------\n");
}
/*Result*/
Timetaken memset32 12.46
Timetaken memset64 7.57
Timetaken memset8 37.12
Timetaken memset 6.03
-----------------------------------------
Looks like the standard memset is more optimized than my implementation.
I tried looking into the code, and everywhere I see that the implementation of memset is the same as what I did for memset8. When I use memset8, the results are more like what I expect and very different from memset.
Can someone suggest what I am doing wrong?
Actual memset implementations are typically hand-optimized in assembly, and use the widest aligned writes available on the targeted hardware. On x86_64 that will be at least 16B stores (movaps, for example). It may also take advantage of prefetching (this is less common recently, as most architectures have good automatic streaming prefetchers for regular access patterns), streaming stores or dedicated instructions (historically rep stos was unusably slow on x86, but it is quite fast on recent microarchitectures). Your implementation does none of these things. It should not be terribly surprising that the system implementation is faster.
As an example, consider the implementation used in OS X 10.8 (which has been superseded in 10.9). Here’s the core loop for modest-sized buffers:
.align 4,0x90
1: movdqa %xmm0, (%rdi,%rcx)
movdqa %xmm0, 16(%rdi,%rcx)
movdqa %xmm0, 32(%rdi,%rcx)
movdqa %xmm0, 48(%rdi,%rcx)
addq $64, %rcx
jne 1b
This loop will saturate the LSU when hitting cache on pre-Haswell microarchitectures at 16B/cycle. An implementation based on 64-bit stores like your memset64 cannot exceed 8B/cycle (and may not even achieve that, depending on the microarchitecture in question and whether or not the compiler unrolls your loop). On Haswell, an implementation that uses AVX stores or rep stos can go even faster and achieve 32B/cycle.
As per my understanding memset goes byte by byte and sets the value.
The details of what the memset facility does are implementation dependent. Relying on this is usually a good thing, because I'm sure the implementors have extensive knowledge of the system and know all kinds of techniques to make things as fast as possible.
To elaborate a little more, let's look at:
memset(&val, 0, 8);
When the compiler sees this it can notice a few things like:
The fill value is 0
The number of bytes to fill is 8
and then choose the right instructions to use depending on where val or &val is (in a register, in memory, ...). But if memset is stuck needing to be a function call (like your implementations), none of those optimizations are possible. Even if it can't make compile time decisions like:
memset(&val, x, y); // no way to tell at compile time what x and y will be...
you can be assured that there's a function call written in assembler that will be as fast as possible for your platform.
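To see the difference, compare these two cases (a small illustration added here; compile with -O2 and look at the assembly):
#include <stdint.h>
#include <string.h>

void zero_known(uint64_t *p)
{
    memset(p, 0, 8);   /* constant value and size: typically becomes a single 64-bit store */
}

void zero_unknown(void *p, int x, size_t y)
{
    memset(p, x, y);   /* nothing known at compile time: stays a real call to memset */
}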
I think it's worth exploring how to write a faster memset, particularly with GCC (which I assume you are using with Scientific Linux 6) in C/C++. Many people assume the standard implementation is optimized. This is not necessarily true. If you look at table 2.1 of Agner Fog's Optimizing Software in C++ manual, he compares memcpy for several different compilers and platforms to his own assembly-optimized version of memcpy. Memcpy in GCC at the time really underperformed (but the Mac version was good). He claims the built-in functions are even worse and recommends using -fno-builtin. GCC in my experience is very good at optimizing code, but its library functions (and built-in functions) are not very optimized (with ICC it's the other way around).
It would be interesting to see how well you could do using intrinsics. If you look at his asmlib you can see how he implements memset with SSE and AVX (it would be interesting to compare this to the Apple-optimized version Stephen Canon posted).
With AVX you can see he writes 32 bytes at a time.
K100: ; Loop through 32-bytes blocks. Register use is swapped
; Rcount = end of 32-bytes blocks part
; Rdest = negative index from the end, counting up to zero
vmovaps [Rcount+Rdest], ymm0
add Rdest, 20H
jnz K100
vmovaps in this case is the same as the intrinsic _mm256_store_ps. Maybe GCC has improved since then but you might be able to beat GCC's implementation of memset using intrinsics. If you don't have AVX you certainly have SSE (all x86 64bit do) so you could look at the SSE version of his code to see what you could do.
Here is a start for your memset32 function, assuming the array fits in the L1 cache. If the array does not fit in the cache, you want to do a non-temporal store with _mm256_stream_ps (a sketch of that follows after the code below). For a general function you need several more cases, including the case where the memory is not 32-byte aligned.
#include <immintrin.h>
int main() {
    int count = (1<<14)/sizeof(int);
    int* dest = (int*)_mm_malloc(sizeof(int)*count, 32); // 32 byte aligned
    int val = 0xDEADBEEF;
    __m256 val8 = _mm256_castsi256_ps(_mm256_set1_epi32(val));
    for(int i=0; i<count; i+=8) {
        _mm256_store_ps((float*)(dest+i), val8);
    }
    _mm_free(dest);
}
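For buffers that do not fit in cache, the non-temporal variant mentioned above could look roughly like this (a sketch: _mm256_stream_ps bypasses the cache, and the _mm_sfence makes the streaming stores globally visible before returning):
#include <immintrin.h>

/* Assumes dest is 32-byte aligned and count is a multiple of 8. */
void memset32_stream(int *dest, int val, int count)
{
    __m256 val8 = _mm256_castsi256_ps(_mm256_set1_epi32(val));
    for (int i = 0; i < count; i += 8)
        _mm256_stream_ps((float*)(dest + i), val8);
    _mm_sfence();
}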

Efficient complex arithmetic in x86 assembly for a Mandelbrot loop

Consider the following program:
for i=1 to 10000000 do
z <- z*z + c
where z and c are complex numbers.
What are efficient x86 assembler implementations of this program using x87 vs SSE and single vs double precision arithmetic?
EDIT: I know I can write this in another language and trust the compiler to generate optimal machine code for me, but I am doing this to learn how to write optimal x86 assembler myself. I have already looked at the code generated by gcc -O2, and my guess is that there is a lot of room for improvement, but I am not adept enough to write optimal x86 assembler by hand myself, so I am asking for help here.
You don't need to do this in assembler per se - you can use SSE via intrinsics for an efficient implementation, particularly if you can use single precision.
temp.re = z.re * z.re - z.im * z.im;
temp.im = 2.0 * z.re * z.im;
z.re = temp.re + c.re;
z.im = temp.im + c.im;
If you shuffle your input vectors appropriately then you can get all the multiplies in one instruction (_mm_mul_ps) and the adds in a second instruction (_mm_hadd_ps).
If you need double precision then the same general principle applies but you'll need two multiplies and two horizontal adds.
Note that most modern x86 CPUs have two scalar FPUs so the benefit for double precision in SSE may not be worthwhile - single precision however should definitely be a win.
Here's an initial working implementation using SSE - I think it is more or less debugged now - performance is not much better than scalar code compiled with gcc -O3 though, as gcc does a pretty good job of generating SSE code for this:
static Complex loop_simd(const Complex z0, const Complex c, const int n)
{
    __m128 vz = _mm_set_ps(z0.im, z0.re, z0.im, z0.re);
    const __m128 vc = _mm_set_ps(0.0f, 0.0f, c.im, c.re);
    const __m128 vs = _mm_set_ps(0.0f, 0.0f, -0.0f, 0.0f);
    Complex z[2];
    int i;
    for (i = 0; i < n; ++i)
    {
        __m128 vtemp;
        vtemp = _mm_shuffle_ps(vz, vz, 0x16); // temp = { z.re, z.im, z.im, z.re }
        vtemp = _mm_xor_ps(vtemp, vs);        // temp = { z.re, -z.im, z.im, z.re }
        vtemp = _mm_mul_ps(vtemp, vz);        // temp = { z.re * z.re, - z.im * z.im, z.re * z.im, z.im * z.re }
        vtemp = _mm_hadd_ps(vtemp, vtemp);    // temp = { z.re * z.re - z.im * z.im, 2 * z.re * z.im, ... }
        vz = _mm_add_ps(vtemp, vc);           // temp = { z.re * z.re - z.im * z.im + c.re, 2 * z.re * z.im + c.im, ... }
    }
    _mm_storeu_ps(&z[0].re, vz);
    return z[0];
}
Note that the inner loop is just 6 SSE instructions (it really ought to be 5) + a little housekeeping for the loop itself:
L4:
movaps %xmm0, %xmm1
shufps $22, %xmm0, %xmm1
xorps %xmm3, %xmm1
mulps %xmm1, %xmm0
haddps %xmm0, %xmm0
addps %xmm2, %xmm0
incl %eax
cmpl %edi, %eax
jne L4
L2:
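For the double-precision case mentioned above, one possible sketch is the following. It assumes a hypothetical DComplex struct (the Complex above holds floats), and here a single _mm_hadd_pd happens to produce both sums at once:
#include <pmmintrin.h> // SSE3 for _mm_hadd_pd

typedef struct { double re, im; } DComplex; // hypothetical double-precision complex

static DComplex loop_simd_pd(const DComplex z0, const DComplex c, const int n)
{
    __m128d vz = _mm_set_pd(z0.im, z0.re);    // (z.im, z.re)
    const __m128d vc = _mm_set_pd(c.im, c.re);
    const __m128d vs = _mm_set_pd(-0.0, 0.0); // flips the sign of the high lane
    DComplex z;
    int i;
    for (i = 0; i < n; ++i)
    {
        __m128d vswap = _mm_shuffle_pd(vz, vz, 1);        // (z.re, z.im)
        __m128d vre = _mm_mul_pd(vz, _mm_xor_pd(vz, vs)); // (-z.im * z.im, z.re * z.re)
        __m128d vim = _mm_mul_pd(vz, vswap);              // (z.im * z.re, z.re * z.im)
        // hadd gives (2 * z.re * z.im, z.re * z.re - z.im * z.im)
        vz = _mm_add_pd(_mm_hadd_pd(vre, vim), vc);
    }
    _mm_storeu_pd(&z.re, vz);
    return z;
}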
Look at the disassembly from your favorite compiler. If you're looking to perform this computation for several values of z and c (like calculating a Mandelbrot image), I suggest you work on four values at once and put these in SSE registers. If you look at the code in Paul R's answer, you could do all these calculations for four values at once:
__m128 z_im, z_re, c_im, c_re; //Four z and c values packed
__m128 re = _mm_sub_ps(_mm_mul_ps(z_re, z_re), _mm_mul_ps(z_im, z_im));
__m128 im = _mm_mul_ps(z_re, z_im);
im = _mm_add_ps(im, im); // Multiply by two
z_re = _mm_add_ps(re, c_re);
z_im = _mm_add_ps(im, c_im);
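Wrapped in a loop, a sketch of iterating four points at once might look like this (variable names as in the snippet above; a real Mandelbrot loop would also need a per-lane escape test, which is omitted here):
#include <xmmintrin.h>

/* Iterate z = z*z + c n times for four points in parallel. */
void mandel4(__m128 *z_re, __m128 *z_im, __m128 c_re, __m128 c_im, int n)
{
    __m128 zr = *z_re, zi = *z_im;
    for (int i = 0; i < n; ++i)
    {
        __m128 re = _mm_sub_ps(_mm_mul_ps(zr, zr), _mm_mul_ps(zi, zi));
        __m128 im = _mm_mul_ps(zr, zi);
        im = _mm_add_ps(im, im);       // multiply by two
        zr = _mm_add_ps(re, c_re);
        zi = _mm_add_ps(im, c_im);
    }
    *z_re = zr;
    *z_im = zi;
}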
Z = Z*Z + C
That is the Mandelbrot fractal iteration.
I'm sure you'll find highly optimized code for this all over the net. I would start with the source code of Xaos and Fractint.
Xaos: http://wmi.math.u-szeged.hu/xaos
fractint: http://www.fractint.org/

Fastest way to compute distance squared

My code relies heavily on computing distances between two points in 3D space.
To avoid the expensive square root I use the squared distance throughout.
But still it takes up a major fraction of the computing time and I would like to replace my simple function with something even faster.
I now have:
double distance_squared(double *a, double *b)
{
    double dx = a[0] - b[0];
    double dy = a[1] - b[1];
    double dz = a[2] - b[2];
    return dx*dx + dy*dy + dz*dz;
}
I also tried using a macro to avoid the function call but it doesn't help much.
#define DISTANCE_SQUARED(a, b) ((a)[0]-(b)[0])*((a)[0]-(b)[0]) + ((a)[1]-(b)[1])*((a)[1]-(b)[1]) + ((a)[2]-(b)[2])*((a)[2]-(b)[2])
I thought about using SIMD instructions but could not find a good example or complete list of instructions (ideally some multiply+add on two vectors).
GPU's are not an option since only one set of points is known at each function call.
What would be the fastest way to compute the distance squared?
A good compiler will optimize that about as well as you will ever manage. A good compiler will use SIMD instructions if it deems that they are going to be beneficial. Make sure that you turn on all such possible optimizations for your compiler. Unfortunately, vectors of dimension 3 don't tend to sit well with SIMD units.
I suspect that you will simply have to accept that the code produced by the compiler is probably pretty close to optimal and that no significant gains can be made.
The first obvious thing would be to use the restrict keyword.
As it is now, a and b are aliasable (and thus, from the compiler's point of view, which must assume the worst possible case, aliased). No compiler will auto-vectorize this, as it is wrong to do so.
Worse, not only can the compiler not vectorize such a loop, but in cases where you also store (luckily not in your example), it must also re-load values each time. Always be clear about aliasing, as it greatly impacts the compiler.
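With restrict, the original function would look like this (C99; it promises the compiler that a and b never alias):
double distance_squared(const double *restrict a, const double *restrict b)
{
    double dx = a[0] - b[0];
    double dy = a[1] - b[1];
    double dz = a[2] - b[2];
    return dx*dx + dy*dy + dz*dz;
}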
Next, if you can live with that, use float instead of double and pad to 4 floats even if one is unused; this is a more "natural" data layout for the majority of CPUs (this is somewhat platform specific, but 4 floats is a good guess for most platforms -- 3 doubles, a.k.a. 1.5 SIMD registers on "typical" CPUs, is not optimal anywhere).
(For a hand-written SIMD implementation (which is harder than you think), first and before all be sure to have aligned data. Next, look into what latencies your instructions have on the target machine and do the longest ones first. For example on pre-Prescott Intel it makes sense to first shuffle each component into a register and then multiply with itself, even though that uses 3 multiplies instead of one, because shuffles have a long latency. On the later models, a shuffle takes a single cycle, so that would be a total anti-optimization.
Which again shows that leaving it to the compiler is not such a bad idea.)
The SIMD code to do this (using SSE3):
movaps xmm0,a
movaps xmm1,b
subps xmm0,xmm1
mulps xmm0,xmm0
haddps xmm0,xmm0
haddps xmm0,xmm0
but you need four value vectors (x,y,z,0) for this to work. If you've only got three values then you'd need to do a bit of fiddling about to get the required format which would cancel out any benefit of the above.
In general though, due to the superscalar pipelined architecture of the CPU, the best way to get performance is to do the same operation on lots of data; that way you can interleave the various steps and do a bit of loop unrolling to avoid pipeline stalls. The above code will definitely stall on the last three instructions, based on the "can't use a value directly after it's modified" principle - each instruction has to wait for the result of the previous one to complete, which isn't good in a pipelined system.
Doing the calculation on two or more different sets of points at the same time can remove the above bottleneck - whilst waiting for the result of one computation, you can start the computation of the next point:
movaps xmm0,a1
movaps xmm2,a2
movaps xmm1,b1
movaps xmm3,b2
subps xmm0,xmm1
subps xmm2,xmm3
mulps xmm0,xmm0
mulps xmm2,xmm2
haddps xmm0,xmm0
haddps xmm2,xmm2
haddps xmm0,xmm0
haddps xmm2,xmm2
If you would like to optimize something, first profile the code and inspect the assembler output.
After compiling it with gcc -O3 (4.6.1) we get nice disassembled output using SSE instructions:
movsd (%rdi), %xmm0
movsd 8(%rdi), %xmm2
subsd (%rsi), %xmm0
movsd 16(%rdi), %xmm1
subsd 8(%rsi), %xmm2
subsd 16(%rsi), %xmm1
mulsd %xmm0, %xmm0
mulsd %xmm2, %xmm2
mulsd %xmm1, %xmm1
addsd %xmm2, %xmm0
addsd %xmm1, %xmm0
This type of problem often occurs in MD simulations. Usually the amount of work is reduced by cutoffs and neighbor lists, so the number of distance calculations is reduced. The actual calculation of the squared distance, however, is done exactly as given in your question (with compiler optimizations and a fixed type such as float[3]).
So if you want to reduce the number of squared-distance calculations you should tell us more about the problem.
Perhaps passing the 6 doubles directly as arguments could make it faster (because it avoids the array dereferences):
inline double distsquare_coord(double xa, double ya, double za,
                               double xb, double yb, double zb)
{
    double dx = xa-xb; double dy = ya-yb; double dz = za-zb;
    return dx*dx + dy*dy + dz*dz;
}
Or perhaps, if you have many points in the vicinity, you might compute a distance (to the same fixed other point) by linear approximation of the distances of other near points.
If you can rearrange your data to process two pairs of input vectors at once, you may use this code (SSE2 only):
// @brief Computes two squared distances between two pairs of 3D vectors
// @param a
//   Pointer to the first pair of 3D vectors.
//   The two vectors must be stored with stride 24, i.e. (a + 3) should point to the first component of the second vector in the pair.
//   Must be aligned by 16 (2 doubles).
// @param b
//   Pointer to the second pair of 3D vectors.
//   The two vectors must be stored with stride 24, i.e. (b + 3) should point to the first component of the second vector in the pair.
//   Must be aligned by 16 (2 doubles).
// @param c
//   Pointer to the output 2-element array.
//   Must be aligned by 16 (2 doubles).
//   The two distances between the a and b vectors will be written to c[0] and c[1] respectively.
void two_distances_squared(const double * __restrict__ a, const double * __restrict__ b, double * __restrict__ c) {
    // diff0 = ( a0.y - b0.y, a0.x - b0.x ) = ( d0.y, d0.x )
    __m128d diff0 = _mm_sub_pd(_mm_load_pd(a), _mm_load_pd(b));
    // diff1 = ( a1.x - b1.x, a0.z - b0.z ) = ( d1.x, d0.z )
    __m128d diff1 = _mm_sub_pd(_mm_load_pd(a + 2), _mm_load_pd(b + 2));
    // diff2 = ( a1.z - b1.z, a1.y - b1.y ) = ( d1.z, d1.y )
    __m128d diff2 = _mm_sub_pd(_mm_load_pd(a + 4), _mm_load_pd(b + 4));
    // prod0 = ( d0.y * d0.y, d0.x * d0.x )
    __m128d prod0 = _mm_mul_pd(diff0, diff0);
    // prod1 = ( d1.x * d1.x, d0.z * d0.z )
    __m128d prod1 = _mm_mul_pd(diff1, diff1);
    // prod2 = ( d1.z * d1.z, d1.y * d1.y )
    __m128d prod2 = _mm_mul_pd(diff2, diff2);
    // _mm_unpacklo_pd(prod0, prod2) = ( d1.y * d1.y, d0.x * d0.x )
    // psum = ( d1.x * d1.x + d1.y * d1.y, d0.x * d0.x + d0.z * d0.z )
    __m128d psum = _mm_add_pd(_mm_unpacklo_pd(prod0, prod2), prod1);
    // _mm_unpackhi_pd(prod0, prod2) = ( d1.z * d1.z, d0.y * d0.y )
    // dotprod = ( d1.x * d1.x + d1.y * d1.y + d1.z * d1.z, d0.x * d0.x + d0.y * d0.y + d0.z * d0.z )
    __m128d dotprod = _mm_add_pd(_mm_unpackhi_pd(prod0, prod2), psum);
    _mm_store_pd(c, dotprod);
}
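A small usage sketch for the layout described above (the function name is a placeholder added here, since the original snippet left it unnamed):
double a[6] __attribute__((aligned(16))) = { 1, 2, 3,   4, 5, 6 }; // two 3D vectors, back to back
double b[6] __attribute__((aligned(16))) = { 0, 0, 0,   1, 1, 1 };
double c[2] __attribute__((aligned(16)));

int main(void)
{
    two_distances_squared(a, b, c);
    // c[0] = 1 + 4 + 9 = 14, c[1] = 9 + 16 + 25 = 50
    return 0;
}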

Resources