Broadcasting each element of a SIMD register in a loop - c

I need to fill a SIMD register with one element of another SIMD register. i.e. "broadcast" or "splat" a single element to every position.
My current code for doing it is (it's simplified, my real functions are declared inline):
__m128
f4_broadcast_1(__m128 a, int i) {
    return _mm_set1_ps(a[i]);
}
This seems to generate efficient code on clang and gcc, but MSVC forbids indexed accesses. Therefore, I instead write:
__m128
f4_broadcast_2(__m128 a, int i) {
    union { __m128 reg; float f[4]; } r = { .reg = a };
    return _mm_set1_ps(r.f[i]);
}
It generates the same code on clang and gcc but bad code on msvc. Godbolt link: https://godbolt.org/z/IlOqZl
Is there a better way to do it? I know there are similar questions on SO already, but my use case involves both extracting a float32 from a register and putting it back into another one, which is a slightly different problem. It would be cool if you could do this without having to touch the main memory at all.
Is the index variable or constant? Apparently it matters a lot to SIMD performance whether it is. In my case, the index is a loop variable:
for (int i = 0; i < M; i++) {
... broadcast element i of some reg
}
where M is either 4, 8 or 16. Maybe I should manually unroll the loops to make it a constant? It's a lot of code in the for-loop so the amount of code would grow considerably.
I also wonder how to do the same thing but for the __m256 and __m512 registers found on modern CPUs.

Some of the shuffles in Get an arbitrary float from a simd register at runtime? can be adapted to broadcast an element instead of just getting one copy of it in the low element. It discusses tradeoffs of shuffle vs. store/reload strategies in more detail.
x86 doesn't have a 32-bit-element variable-control shuffle until AVX vpermilps and AVX2 lane-crossing vpermps / vpermd. e.g.
// for runtime-variable i. Otherwise use something more efficient.
_mm_permutevar_ps(v, _mm_set1_epi32(i));
Or broadcast the low element with vbroadcastss (the vector-source version requires AVX2)
Broadcast loads are very efficient with AVX1: _mm_broadcast_ss(float*) (or the _mm256/_mm512 versions of the same), or simply a 128/256/512-bit _mm_set1_ps(float) of a float that happened to come from memory, and let your compiler use a broadcast load if compiling with AVX1 enabled.
With a compile-time-constant control, you can broadcast any single element with SSE1
_mm_shuffle_ps(same,same, _MM_SHUFFLE(i,i,i,i));
Or for integer, with SSE2 pshufd: _mm_shuffle_epi32(v, _MM_SHUFFLE(i,i,i,i)).
Depending on your compiler, it may have to be a macro for i to be a compile-time constant with optimization disabled. The shuffle-control constant has to compile into an immediate byte (with 4x 2-bit fields) embedded in the machine code, not loaded as data or from a register.
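For example, a minimal macro sketch (the macro name is hypothetical):
// i must expand to a compile-time constant 0..3 so _MM_SHUFFLE becomes an immediate byte
#define F4_BROADCAST(v, i)  _mm_shuffle_ps((v), (v), _MM_SHUFFLE((i),(i),(i),(i)))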
Iterating over elements in a loop.
I'm using AVX2 in this section; this easily adapts to AVX512. Without AVX2 the store/reload strategy is your only good option for 256-bit vectors, or vpermilps for 128-bit vectors.
Possibly incrementing counters (by 4) for SSSE3 pshufb (with casting between __m128i and __m128) could be a good idea without AVX where you don't have an efficient broadcast load.
the index is a loop variable
Compilers will often fully unroll loops for you, turning the loop variable into a compile-time constant for each iteration. But only with optimization enabled. In C++ you could maybe use template recursion to iterate with a constexpr.
MSVC doesn't optimize intrinsics, so if you write _mm_permutevar_ps(v, _mm_set1_epi32(i)); you're actually going to get that in each iteration, not 4x vshufps. But gcc and especially clang do optimize shuffles, so they should do well with optimization enabled.
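A hedged sketch of the manual unrolling for the 4-element case (do_stuff is a placeholder for your loop body):
do_stuff(_mm_shuffle_ps(a, a, _MM_SHUFFLE(0,0,0,0)));   // broadcast element 0
do_stuff(_mm_shuffle_ps(a, a, _MM_SHUFFLE(1,1,1,1)));   // broadcast element 1
do_stuff(_mm_shuffle_ps(a, a, _MM_SHUFFLE(2,2,2,2)));   // broadcast element 2
do_stuff(_mm_shuffle_ps(a, a, _MM_SHUFFLE(3,3,3,3)));   // broadcast element 3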
It's a lot of code in the for-loop
If it's going to need a lot of registers / spend a lot of time, a store/reload might be a good choice especially with AVX available for broadcast reloads. Shuffle throughput is more limited (1/clock) than load throughput (2/clock) on current Intel CPUs.
Compiling your code with AVX512 will even allow broadcast memory-source operands, not a separate load instruction, so the compiler can fold a broadcast-load into a source operand if it's only needed once.
/********* Store/reload strategy ****************/
#include <stdalign.h>
#include <immintrin.h>

void foo(__m256 v) {
    alignas(32) float tmp[8];
    _mm256_store_ps(tmp, v);

    // with only AVX1, maybe don't peel first iteration, or broadcast manually in 2 steps
    __m256 bcast = _mm256_broadcastss_ps(_mm256_castps256_ps128(v)); // AVX2 vbroadcastss ymm, xmm
    // ... do stuff with bcast ...

    for (int i=1; i<8 ; i++) {
        bcast = _mm256_broadcast_ss(&tmp[i]);   // broadcast load; note it takes a float*
        // ... do stuff with bcast ...
    }
}
I peeled the first iteration manually to just broadcast the low element with an ALU operation (lower latency) so it can get started right away. Later iterations then reload with a broadcast load.
Another option would be to use a SIMD increment for a vector shuffle-control (aka mask), if you have AVX2.
// Also AVX2
void foo(__m256 v) {
    __m256i shufmask = _mm256_setzero_si256();
    for (int i=0; i<8 ; i++) {
        __m256 bcast = _mm256_permutevar8x32_ps(v, shufmask); // AVX2 vpermps
        // prep for next iteration by incrementing the element selectors
        shufmask = _mm256_add_epi32(shufmask, _mm256_set1_epi32(1));
        // ... do stuff with bcast ...
    }
}
This does one redundant vpaddd on shufmask (in the last iteration), but that's probably fine and better than peeling the first or last iteration. And obviously better than starting with -1 and doing an add before the shuffle in the first iteration.
Lane-crossing shuffles have 3-cycle latency on Intel so putting it right after the shuffle is probably good scheduling unless there's other per-iteration work that doesn't depend on bcast; out-of-order exec makes this a minor issue anyway. In the first iteration, vpermps with a mask that was just xor-zeroed is basically just as good as vbroadcastss on Intel, for out-of-order exec to get started quickly.
But on AMD CPUs (at least before Zen2), lane-crossing vpermps is pretty slow; lane-crossing shuffles with granularity <128-bit are extra expensive because it has to decode into 128-bit uops. So this strategy isn't wonderful on AMD. If store/reload performs equally for your surrounding code on Intel, then it might be a better choice to make your code AMD-friendly as well.
vpermps also has a new intrinsic introduced with AVX512 intrinsics: _mm256_permutexvar_ps(__m256i idx, __m256 a) which has the operands in the order that matches asm. Use whichever one you like, if your compiler supports the new one.

Broadcasting can be achieved by using the AVX2 instruction VBROADCASTSS, but moving the value to the input position (first position) depends on your instruction set:
VBROADCASTSS (128 bit version VEX and legacy)
This instruction broadcasts the source value on position [0] of the source XMM register to all four FLOATS of the destination XMM register. Its intrinsic is __m128 _mm_broadcastss_ps(__m128 a);.
If the position of your value is constant, you can use the instruction PSHUFD to move the value from its current position to the first position. Its intrinsic is __m128i _mm_shuffle_epi32(__m128i a, int n). To move the value that should be broadcasted to the first position of the input XMM vector, use the following values for int n:
1st position: 0h
2nd position: 1h
3rd position: 2h
4th position: 3h
This moves the value from the corresponding position to the first position.
So, for example, use the following to move the fourth position of the input vector to the first one:
__m128i newInput = _mm_shuffle_epi32(_mm_castps_si128(input), 3);
Then apply the following intrinsic:
__m128 result = _mm_broadcastss_ps(_mm_castsi128_ps(newInput));
Now the value from the fourth position of your input XMM vector should be on all positions of your result vector.

SSE interleave/merge/combine 2 vectors using a mask, per-element conditional move?

Essentially I am trying to implement a ternary-like operation on 2 SSE (__m128) vectors.
The mask is another __m128 vector obtained from _mm_cmplt_ps.
What I want to achieve is to select the element of vector a when the corresponding element of the mask is 0xffffffff, and the element of b when the mask's element is 0.
Example of the desired operation (in semi-pseudocode):
const __m128i mask = {0xffffffff, 0, 0xffffffff, 0}; // e.g. a compare result
const __m128 a = {1.0, 1.1, 1.2, 1.3};
const __m128 b = {2.0, 2.1, 2.2, 2.3};
const __m128 c = interleave(a, b, mask); // c contains {1.0, 2.1, 1.2, 2.3}
I am having trouble implementing this operation in SIMD (SSE) intrinsics.
My original idea was to mix a and b using moves and then shuffle the elements using the mask, however _mm_shuffle_ps takes an int mask consisting of 4-bit indices, not an __m128 mask.
Another idea was to use something akin to a conditional move, but there does not seem to be a conditional move in SSE (or at least I did not manage to find it in Intel's guide).
How is this normally done in SSE?
That's called a "blend".
Intel's intrinsics guide groups blend instructions under the "swizzle" category, along with shuffles.
You're looking for SSE4.1 blendvps (intrinsic _mm_blendv_ps). The other element sizes are _mm_blendv_pd and _mm_blendv_epi8. These use the high bit of the corresponding element as the control, so you can use a float directly (without _mm_cmp_ps) if its sign bit is interesting.
__m128i mask = _mm_castps_si128(_mm_cmplt_ps(x, y)); // integer 0 / -1 bit patterns
__m128 c = _mm_blendv_ps(b, a, mask); // copy element from 2nd op where the mask is set
Note that I reversed a, b to b, a because SSE blends take the element from the 2nd operand in positions where the mask was set. Like a conditional-move which copies when the condition is true. If you name your constants / variables accordingly, you can write blend(a,b, mask) instead of having them backwards. Or give them meaningful names like ones and twos.
In other cases where your control operand is a constant, there's also _mm_blend_ps / pd / _mm_blend_epi16 (an 8-bit immediate operand can only control 8 separate elements, so 8x 2-byte.)
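For instance, a minimal sketch with an immediate control (variable names are assumptions):
// imm8 bit n selects element n from the second operand (b); 0xA = 0b1010
// takes elements 1 and 3 from b and elements 0 and 2 from a.  SSE4.1.
__m128 c = _mm_blend_ps(a, b, 0xA);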
Performance
blendps xmm, xmm, imm8 is a single-uop instruction for any vector ALU port on Intel CPUs, as cheap as andps. (https://uops.info/). pblendw is also single-uop, but only runs on port 5 on Intel, competing with shuffles. AVX2 vpblendd blends with dword granularity, an integer version of vblendps, and with the same very good efficiency. (It's an integer-SIMD instruction; unlike shuffles, blends have extra bypass latency on Intel CPUs if you mix integer and FP SIMD.)
But variable blendvps is 2 uops on Intel before Skylake (and only for port 5). And the AVX version (vblendvps) is unfortunately still 2 uops on Intel (3 on Alder Lake-P, 4 on Alder Lake-E). Although the uops can at least run on any of 3 vector ALU ports.
The vblendvps version is funky in asm because it has 4 operands, not overwriting any of the input registers. (The non-AVX version overwrites one input, and uses XMM0 implicitly as the mask input.) Intel uops apparently can't handle 4 separate registers, only 3 for stuff like FMA, adc, and cmov. (And AVX-512 vpternlogd which can do a bitwise blend as a single uop)
AMD has fully efficient handling of vblendvps, single uop (except for YMM on Zen1) with 2/clock throughput.
Without SSE4.1, you can emulate with ANDN/AND/OR
(x&~mask) | (y&mask) is equivalent to _mm_blendv_ps(x,y,mask), except it's pure bitwise so all the bits of each mask element should match the top bit. (e.g. a compare result, or broadcast the top bit with _mm_srai_epi32(mask, 31).)
Compilers know this trick and will use it when auto-vectorizing scalar code if you compile without any arch options like -march=haswell or whatever. (SSE4.1 was new in 2nd-gen Core 2, so it's increasingly widespread but not universal.)
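A minimal sketch of that fallback in intrinsics, assuming mask elements are all-ones or all-zero:
// (x & ~mask) | (y & mask): take y where mask is set, x elsewhere.  SSE1.
__m128 blended = _mm_or_ps(_mm_andnot_ps(mask, x), _mm_and_ps(mask, y));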
For constant / loop-invariant a^b without SSE4.1
x ^ ((x ^ y) & mask) saves one operation if you can reuse x ^ y. (Suggested in comments by Aki). Otherwise this is worse: longer critical-path latency and no instruction-level parallelism.
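A sketch of the XOR form, assuming x ^ y can be hoisted out of a loop:
__m128 xor_xy  = _mm_xor_ps(x, y);                        // loop-invariant if x and y are
__m128 blended = _mm_xor_ps(x, _mm_and_ps(xor_xy, mask)); // same result as blendv(x, y, mask) for all-ones/all-zero masks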
Without AVX non-destructive 3-operand instructions, this way would need a movaps xmm,xmm register-copy to save b, but it can choose to destroy the mask instead of a. The AND/ANDN/OR way would normally destroy its 2nd operand, the one you use with y&mask, and destroy the mask with ANDN (~mask & x).
With AVX, vblendvps is guaranteed available. Although if you're targeting Intel (especially Haswell) and don't care about AMD, you might still choose an AND/XOR if a^b can be pre-computed.
Blending with 0: just AND[N]
(Applies to integer and FP; the bit-pattern for 0.0f and 0.0 is all-zeros, same as integer 0.)
You don't need to copy a zero from anywhere, just x & mask, or x & ~mask.
(The (x & ~mask) | (y & mask) expression reduces to this for x=0 or y=0; that term becomes zero, and z|=0 is a no-op.)
For example, to implement x = mask ? x+y : x, which would put the latency of an add and blend on the critical path, you simplify to x += select y or zero according to mask, i.e. to x += y & mask; Or to do the opposite, x += ~mask & y using _mm_andn_ps(mask, vy).
This has an ADD and an AND operation (so already cheaper than blend on some CPUs, and you don't need a 0.0 source operand in another register). Also, the dependency chain through x now only includes the += operation, if you were doing this in a loop with loop-carried x but independent y & mask. e.g. summing only matching elements of an array, sum += A[i]>=thresh ? A[i] : 0.0f;
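A hedged sketch of that last example (the array A, its length n, and thresh are assumptions; n is taken as a multiple of 4):
__m128 vsum    = _mm_setzero_ps();
__m128 vthresh = _mm_set1_ps(thresh);
for (int i = 0; i < n; i += 4) {
    __m128 va   = _mm_loadu_ps(&A[i]);
    __m128 keep = _mm_cmpge_ps(va, vthresh);        // all-ones where A[i] >= thresh
    vsum = _mm_add_ps(vsum, _mm_and_ps(va, keep));  // add A[i] or 0.0f
}
// a horizontal sum of vsum gives the final result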
For an example of an extra slowdown due to lengthening the critical path unnecessarily, see gcc optimization flag -O3 makes code slower than -O2 where GCC's scalar asm using cmov has that flaw, doing cmov as part of the loop-carried dependency chain instead of to prepare a 0 or arr[i] input for it.
Clamping to a MIN or MAX
If you want something like a < upper ? a : upper, you can do that clamping in one instruction with _mm_min_ps instead of cmpps / blendvps. (Similarly _mm_max_ps, and _mm_min_pd / _mm_max_pd.)
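For example, a minimal clamp sketch (lower and upper are assumed scalars):
// clamp each element of a into [lower, upper] with two cheap instructions
__m128 clamped = _mm_min_ps(_mm_max_ps(a, _mm_set1_ps(lower)), _mm_set1_ps(upper));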
See What is the instruction that gives branchless FP min and max on x86? for details on their exact semantics, including a longstanding (but recently fixed) GCC bug where the FP intrinsics didn't provide the expected strict-FP semantics of which operand would be the one to keep if one was NaN.
Or for integer, SSE2 is highly non-orthogonal (signed min/max for int16_t, unsigned min/max for uint8_t). Similar for saturating pack instructions. SSE4.1 fills in the missing operand-size and signedness combinations.
Signed: SSE2 _mm_max_epi16 (and corresponding mins for all of these)
SSE4.1 _mm_max_epi32 / _mm_max_epi8; AVX-512 _mm_max_epi64
Unsigned: SSE2 _mm_max_epu8
SSE4.1 _mm_max_epu16 / _mm_max_epu32; AVX-512 _mm_max_epu64
AVX-512 makes masking/blending a first-class operation
AVX-512 compares into a mask register, k0..k7 (intrinsic types __mmask16 and so on). Merge-masking or zero-masking can be part of most ALU instructions. There is also a dedicated blend instruction that blends according to a mask.
I won't go into the details here, suffice it to say if you have a lot of conditional stuff to do, AVX-512 is great (even if you only use 256-bit vectors to avoid the turbo clock speed penalties and so on.) And you'll want to read up on the details for AVX-512 specifically.
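A minimal AVX-512VL sketch of compare-into-mask plus masked operations (variable names are assumptions):
__mmask8 k = _mm256_cmp_ps_mask(x, y, _CMP_LT_OQ);  // k bit set where x < y
__m256   c = _mm256_mask_blend_ps(k, b, a);         // take a where k is set, else b
__m256   z = _mm256_maskz_add_ps(k, x, y);          // x+y where k is set, 0.0 elsewhere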
As suggested by #Peter Cordes in the comments to the question, the blendvps instruction (_mm_blendv_* intrinsics) is used to perform the interleave/conditional move operation.
It should be noted that _mm_blendv_* family select the left-hand elements if the mask contains 0 instead of 0xffffffff, thus a and b should be passed in reverse order.
The implementation then would look like this
const __m128i mask = _mm_setr_epi32(-1, 0, -1, 0); // e.g. a compare result
const __m128 m_ps = _mm_castsi128_ps(mask);
const __m128 a = _mm_setr_ps(1.0f, 1.1f, 1.2f, 1.3f);
const __m128 b = _mm_setr_ps(2.0f, 2.1f, 2.2f, 2.3f);
#ifdef __SSE4_1__ // _mm_blendv_ps requires SSE4.1
const __m128 c = _mm_blendv_ps(b, a, m_ps);
#else
const __m128 c = _mm_or_ps(_mm_and_ps(m_ps, a), _mm_andnot_ps(m_ps, b));
#endif
// c contains {1.0, 2.1, 1.2, 2.3}

Optimizing horizontal boolean reduction in ARM NEON

I'm experimenting with a cross-platform SIMD library ala ecmascript_simd aka SIMD.js, and part of this is providing a few "horizontal" SIMD operations. In particular, the API that library offers includes any(<boolN x M>) -> bool and all(<boolN x M>) -> bool functions, where <T x K> is a vector of K elements of type T and boolN is an N-bit boolean, i.e. all ones or all zeros, as SSE and NEON return for their comparison operations.
For example, let v be a <bool32 x 4> (a 128-bit vector), it could be the result of VCLT.S32 or something. I'd like to compute all(v) = v[0] && v[1] && v[2] && v[3] and any(v) = v[0] || v[1] || v[2] || v[3].
This is easy with SSE, e.g. movmskps will extract the high bit of each element, so all for the type above becomes (with C intrinsics):
#include <xmmintrin.h>
int all(__m128 x) {
    return _mm_movemask_ps(x) == 8 + 4 + 2 + 1;
}
and similarly for any.
I'm struggling to find obvious/nice/efficient ways to implement this with NEON, which doesn't support an instruction like movmskps. There's the approach of simply extracting each element and computing with scalars. E.g. there's the naive method but there's also the approach of using the "horizontal" operations NEON supports, like VPMAX and VPMIN.
#include <arm_neon.h>
int all_naive(uint32x4_t v) {
    return v[0] && v[1] && v[2] && v[3];
}
int all_horiz(uint32x4_t v) {
    uint32x2_t x = vpmin_u32(vget_low_u32(v),
                             vget_high_u32(v));
    uint32x2_t y = vpmin_u32(x, x);
    return y[0] != 0;
}
(One can do a similar thing for the latter with VPADD, which may be faster, but it's fundamentally the same idea.)
Are there are other tricks one can use to implement this?
Yes, I know that horizontal operations are not great with SIMD vector units. But sometimes it is useful, e.g. many SIMD implementations of Mandelbrot will operate on 4 points at once, and bail out of the inner loop when all of them are out of range... which requires doing a comparison and then a horizontal AND.
This is my current solution that is implemented in eve library.
If your backend has C++20 support, you can just use the library: it has implementations for arm-v7, arm-v8 (only little endian at the moment) and all x86 from sse2 to avx-512. It's open source and MIT licensed. In beta at the moment. Feel free to reach out (for example with an issue) if you are trying out the library.
Take everything with a grain of salt - I don't yet have the arm benchmarks set up.
NOTE: On top of basic all and any we also have a movemask equivalent to do more complex operations like first_true. That wasn't part of the question and it's not amazing but the code can be found here
ARM-V7, 8 bytes register
Now, arm-v7 is a 32-bit architecture, so we try to get to 32-bit elements where we can.
any
Use pairwise 32 bit max. If any element is true, the max is true.
// cast to dwords
dwords = vpmax_u32(dwords, dwords);
return vget_lane_u32(dwords, 0);
all
Pairwise min instead of max. Also what you test against changes.
If you have 4-byte elements, just test for true. For shorts or chars, you need to test for -1.
// cast to dwords
dwords = vpmin_u32(dwords, dwords);
std::uint32_t combined = vget_lane_u32(dwords, 0);
// Assuming T is your scalar type
if constexpr ( sizeof(T) >= 4 ) return combined;
// I decided that !~ is better than -1, compiler will figure it out.
return !~combined;
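A complete hedged sketch for 32-bit bool elements in a 64-bit register (function names are mine):
#include <arm_neon.h>

int any_u32x2(uint32x2_t v) {
    uint32x2_t m = vpmax_u32(v, v);      // lane 0 = max of both lanes
    return vget_lane_u32(m, 0) != 0;
}

int all_u32x2(uint32x2_t v) {
    uint32x2_t m = vpmin_u32(v, v);      // lane 0 = min of both lanes
    return vget_lane_u32(m, 0) != 0;     // elements are 0 or all-ones, so min != 0 means all true
}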
ARM-V7, 16 bytes register
For anything bigger than chars, just do a conversion to a 64 bit one. Here is the list of vector narrow integer conversions.
For chars, the best I found is to reinterpret as uint32 and do an extra check.
So compare for == -1 for all and > 0 for any.
That seemed to give nicer asm than splitting into two 8-byte registers.
Then just do all/any on that dword register.
ARM-v8, 8 byte
ARM-v8 has 64 bit support, so you can just get a 64 bit lane. That one is trivially testable.
ARM-v8, 16 byte
We use vmaxvq_u32 since there is not a 64 bit one for any and vminvq_u32, vminvq_u16 or vminvq_u8 for all depending on the element size.
(Which is similar to glibc strlen)
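A minimal AArch64 sketch using those reductions (assuming 32-bit bool elements; function names are mine):
#include <arm_neon.h>

int any_u32x4(uint32x4_t v) { return vmaxvq_u32(v) != 0; }   // max across all 4 lanes
int all_u32x4(uint32x4_t v) { return vminvq_u32(v) != 0; }   // min across all 4 lanes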
Conclusion
Lack of benchmarks definitely makes me worried, some instructions are problematic sometimes and I don't know about it.
Regardless, that's the best I've got, so far at least.
NOTE: first time looking at arm today, I might be wrong about things.
UPD: Removed ARM-V7 and will write up what we ended up doing in a separate answer
ARM-V8.
For ARM-V8, have a look at this strlen implementation from glibc:
https://code.woboq.org/userspace/glibc/sysdeps/aarch64/multiarch/strlen_asimd.S.html
ARM-V8 introduced reductions across registers. Here they use min to compare with 0
uminv datab2, datav.16b
mov tmp1, datav2.d[0]
cbnz tmp1, L(main_loop)
Find the smallest char, compare with 0 - take the next 16 bytes.
There are a few other reductions in ARM-V8 like vaddvq_u8.
I'm pretty sure you can do most of the things you'd want from movemask and alike with this.
Another interesting thing here is how they find the first_true
/* Set the NULL byte as 0xff and the rest as 0x00, move the data into a
pair of scalars and then compute the length from the earliest NULL
byte. */
cmeq datav.16b, datav.16b, #0
mov data1, datav.d[0]
mov data2, datav.d[1]
cmp data1, 0
csel data1, data1, data2, ne
sub len, src, srcin
rev data1, data1
add tmp2, len, 8
clz tmp1, data1
csel len, len, tmp2, ne
add len, len, tmp1, lsr 3
Looks a bit intimidating, but my understanding is:
they narrow it down to a 64 bit number just by doing if/else (if the first half doesn't have the zero, the second half does).
use count leading zeroes to find the position (didn't quite understand all of the endianness stuff here but it's libc - so this is the correct one).
So - if you only need V8 - there is a solution.

Quickly find whether a value is present in a C array?

I have an embedded application with a time-critical ISR that needs to iterate through an array of size 256 (preferably 1024, but 256 is the minimum) and check if a value matches the array's contents. A bool will be set to true if this is the case.
The microcontroller is an NXP LPC4357, ARM Cortex M4 core, and the compiler is GCC. I have already combined optimisation level 2 (3 is slower) with placing the function in RAM instead of flash. I also use pointer arithmetic and a for loop, which does down-counting instead of up (checking if i!=0 is faster than checking if i<256). All in all, I end up with a duration of 12.5 µs which has to be reduced drastically to be feasible. This is the (pseudo) code I use now:
uint32_t i;
uint32_t *array_ptr = &theArray[0];
uint32_t compareVal = 0x1234ABCD;
bool validFlag = false;

for (i=256; i!=0; i--)
{
    if (compareVal == *array_ptr++)
    {
        validFlag = true;
        break;
    }
}
What would be the absolute fastest way to do this? Using inline assembly is allowed. Other 'less elegant' tricks are also allowed.
In situations where performance is of utmost importance, the C compiler will most likely not produce the fastest code compared to what you can do with hand tuned assembly language. I tend to take the path of least resistance - for small routines like this, I just write asm code and have a good idea how many cycles it will take to execute. You may be able to fiddle with the C code and get the compiler to generate good output, but you may end up wasting lots of time tuning the output that way. Compilers (especially from Microsoft) have come a long way in the last few years, but they are still not as smart as the compiler between your ears because you're working on your specific situation and not just a general case. The compiler may not make use of certain instructions (e.g. LDM) that can speed this up, and it's unlikely to be smart enough to unroll the loop.

Here's a way to do it which incorporates the 3 ideas I mentioned in my comment: loop unrolling, cache prefetch and making use of the multiple load (ldm) instruction. The instruction cycle count comes out to about 3 clocks per array element, but this doesn't take into account memory delays.
Theory of operation: ARM's CPU design executes most instructions in one clock cycle, but the instructions are executed in a pipeline. C compilers will try to eliminate the pipeline delays by interleaving other instructions in between. When presented with a tight loop like the original C code, the compiler will have a hard time hiding the delays because the value read from memory must be immediately compared. My code below alternates between 2 sets of 4 registers to significantly reduce the delays of the memory itself and the pipeline fetching the data. In general, when working with large data sets and your code doesn't make use of most or all of the available registers, then you're not getting maximum performance.
; r0 = count, r1 = source ptr, r2 = comparison value
stmfd sp!,{r4-r11} ; save non-volatile registers
mov r3,r0,LSR #3 ; loop count = total count / 8
pld [r1,#128]
ldmia r1!,{r4-r7} ; pre load first set
loop_top:
pld [r1,#128]
ldmia r1!,{r8-r11} ; pre load second set
cmp r4,r2 ; search for match
cmpne r5,r2 ; use conditional execution to avoid extra branch instructions
cmpne r6,r2
cmpne r7,r2
beq found_it
ldmia r1!,{r4-r7} ; use 2 sets of registers to hide load delays
cmp r8,r2
cmpne r9,r2
cmpne r10,r2
cmpne r11,r2
beq found_it
subs r3,r3,#1 ; decrement loop count
bne loop_top
mov r0,#0 ; return value = false (not found)
ldmia sp!,{r4-r11} ; restore non-volatile registers
bx lr ; return
found_it:
mov r0,#1 ; return true
ldmia sp!,{r4-r11}
bx lr
Update:
There are a lot of skeptics in the comments who think that my experience is anecdotal/worthless and require proof. I used GCC 4.8 (from the Android NDK 9C) to generate the following output with optimization -O2 (all optimizations turned on including loop unrolling). I compiled the original C code presented in the question above. Here's what GCC produced:
.L9: cmp r3, r0
beq .L8
.L3: ldr r2, [r3, #4]!
cmp r2, r1
bne .L9
mov r0, #1
.L2: add sp, sp, #1024
bx lr
.L8: mov r0, #0
b .L2
GCC's output not only doesn't unroll the loop, but also wastes a clock on a stall after the LDR. It requires at least 8 clocks per array element. It does a good job of using the address to know when to exit the loop, but all of the magical things compilers are capable of doing are nowhere to be found in this code. I haven't run the code on the target platform (I don't own one), but anyone experienced in ARM code performance can see that my code is faster.
Update 2:
I gave Microsoft's Visual Studio 2013 SP2 a chance to do better with the code. It was able to use NEON instructions to vectorize my array initialization, but the linear value search as written by the OP came out similar to what GCC generated (I renamed the labels to make it more readable):
loop_top:
ldr r3,[r1],#4
cmp r3,r2
beq true_exit
subs r0,r0,#1
bne loop_top
false_exit: xxx
bx lr
true_exit: xxx
bx lr
As I said, I don't own the OP's exact hardware, but I will be testing the performance on an nVidia Tegra 3 and Tegra 4 of the 3 different versions and post the results here soon.
Update 3:
I ran my code and Microsoft's compiled ARM code on a Tegra 3 and Tegra 4 (Surface RT, Surface RT 2). I ran 1000000 iterations of a loop which fails to find a match so that everything is in cache and it's easy to measure.
              My Code    MS Code
Surface RT    297 ns     562 ns
Surface RT 2  172 ns     296 ns
In both cases my code runs almost twice as fast. Most modern ARM CPUs will probably give similar results.
There's a trick for optimizing it (I was asked this on a job-interview once):
If the last entry in the array holds the value that you're looking for, then return true
Write the value that you're looking for into the last entry in the array
Iterate the array until you encounter the value that you're looking for
If you've encountered it before the last entry in the array, then return true
Return false
bool check(uint32_t theArray[], uint32_t compareVal)
{
    uint32_t i;
    uint32_t x = theArray[SIZE-1];
    if (x == compareVal)
        return true;
    theArray[SIZE-1] = compareVal;
    for (i = 0; theArray[i] != compareVal; i++);
    theArray[SIZE-1] = x;
    return i != SIZE-1;
}
This yields one branch per iteration instead of two branches per iteration.
UPDATE:
If you're allowed to allocate the array to SIZE+1, then you can get rid of the "last entry swapping" part:
bool check(uint32_t theArray[], uint32_t compareVal)
{
    uint32_t i;
    theArray[SIZE] = compareVal;
    for (i = 0; theArray[i] != compareVal; i++);
    return i != SIZE;
}
You can also get rid of the additional arithmetic embedded in theArray[i], using the following instead:
bool check(uint32_t theArray[], uint32_t compareVal)
{
    uint32_t *arrayPtr;
    theArray[SIZE] = compareVal;
    for (arrayPtr = theArray; *arrayPtr != compareVal; arrayPtr++);
    return arrayPtr != theArray+SIZE;
}
If the compiler doesn't already apply it, then this function will do so for sure. On the other hand, it might make it harder on the optimizer to unroll the loop, so you will have to verify that in the generated assembly code...
Keep the table in sorted order, and use Bentley's unrolled binary search:
i = 0;
if (key >= a[i+512]) i += 512;
if (key >= a[i+256]) i += 256;
if (key >= a[i+128]) i += 128;
if (key >= a[i+ 64]) i += 64;
if (key >= a[i+ 32]) i += 32;
if (key >= a[i+ 16]) i += 16;
if (key >= a[i+ 8]) i += 8;
if (key >= a[i+ 4]) i += 4;
if (key >= a[i+ 2]) i += 2;
if (key >= a[i+ 1]) i += 1;
return (key == a[i]);
The point is,
if you know how big the table is, then you know how many iterations there will be, so you can fully unroll it.
Then, there's no point testing for the == case on each iteration because, except on the last iteration, the probability of that case is too low to justify spending time testing for it.**
Finally, by expanding the table to a power of 2, you add at most one comparison, and at most a factor of two storage.
** If you're not used to thinking in terms of probabilities, every decision point has an entropy, which is the average information you learn by executing it.
For the >= tests, the probability of each branch is about 0.5, and -log2(0.5) is 1, so that means if you take one branch you learn 1 bit, and if you take the other branch you learn one bit, and the average is just the sum of what you learn on each branch times the probability of that branch.
So 1*0.5 + 1*0.5 = 1, so the entropy of the >= test is 1. Since you have 10 bits to learn, it takes 10 branches.
That's why it's fast!
On the other hand, what if your first test is if (key == a[i+512])? The probability of being true is 1/1024, while the probability of false is 1023/1024. So if it's true you learn all 10 bits!
But if it's false you learn -log2(1023/1024) = .00141 bits, practically nothing!
So the average amount you learn from that test is 10/1024 + .00141*1023/1024 = .0098 + .00141 = .0112 bits. About one hundredth of a bit.
That test is not carrying its weight!
You're asking for help with optimising your algorithm, which may push you to assembler. But your algorithm (a linear search) is not so clever, so you should consider changing your algorithm. E.g.:
perfect hash function
binary search
Perfect hash function
If your 256 "valid" values are static and known at compile time, then you can use a perfect hash function. You need to find a hash function that maps your input value to a value in the range 0..n, where there are no collisions for all the valid values you care about. That is, no two "valid" values hash to the same output value. When searching for a good hash function, you aim to:
Keep the hash function reasonably fast.
Minimise n. The smallest you can get is 256 (minimal perfect hash function), but that's probably hard to achieve, depending on the data.
Note for efficient hash functions, n is often a power of 2, which is equivalent to a bitwise mask of low bits (AND operation). Example hash functions:
CRC of input bytes, modulo n.
((x << i) ^ (x >> j) ^ (x << k) ^ ...) % n (picking as many i, j, k, ... as needed, with left or right shifts)
Then you make a fixed table of n entries, where the hash maps the input values to an index i into the table. For valid values, table entry i contains the valid value. For all other table entries, ensure that each entry of index i contains some other invalid value which doesn't hash to i.
Then in your interrupt routine, with input x:
Hash x to index i (which is in the range 0..n)
Look up entry i in the table and see if it contains the value x.
This will be much faster than a linear search of 256 or 1024 values.
I've written some Python code to find reasonable hash functions.
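A hedged sketch of the ISR-side lookup, assuming the table and hash function were generated offline (HASH_N, hash() and table[] are hypothetical names):
#include <stdint.h>
#include <stdbool.h>

#define HASH_N 1024                       // power of 2, >= number of valid values
extern const uint32_t table[HASH_N];      // built offline so table[hash(v)] == v for every valid v
extern uint32_t hash(uint32_t x);         // the perfect hash chosen offline; keep it cheap

static inline bool is_valid(uint32_t x)
{
    return table[hash(x)] == x;           // single table probe, no collisions by construction
}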
Binary search
If you sort your array of 256 "valid" values, then you can do a binary search, rather than a linear search. That means you should be able to search a 256-entry table in only 8 steps (log2(256)), or a 1024-entry table in 10 steps. Again, this will be much faster than a linear search of 256 or 1024 values.
If the set of constants in your table is known in advance, you can use perfect hashing to ensure that only one access is made to the table. Perfect hashing determines a hash function
that maps every interesting key to a unique slot (that table isn't always dense, but you can decide how un-dense a table you can afford, with less dense tables typically leading to simpler hashing functions).
Usually, the perfect hash function for the specific set of keys is relatively easy to compute; you don't want that to be long and complicated because that competes for time perhaps better spent doing multiple probes.
Perfect hashing is a "1-probe max" scheme. One can generalize the idea, with the thought that one should trade simplicity of computing the hash code with the time it takes to make k probes. After all, the goal is "least total time to look up", not fewest probes or simplest hash function. However, I've never seen anybody build a k-probes-max hashing algorithm. I suspect one can do it, but that's likely research.
One other thought: if your processor is extremely fast, the one probe to memory from a perfect hash probably dominates the execution time. If the processor is not very fast, then k>1 probes might be practical.
Use a hash set. It will give O(1) lookup time.
The following code assumes that you can reserve value 0 as an 'empty' value, i.e. not occurring in actual data.
The solution can be expanded for a situation where this is not the case.
#define HASH(x) ((((x) >> 16) ^ (x)) & 1023)
#define HASH_LEN 1024
uint32_t my_hash[HASH_LEN];

int lookup(uint32_t value)
{
    int i = HASH(value);
    while (my_hash[i] != 0 && my_hash[i] != value)
        i = (i + 1) % HASH_LEN;
    return i;
}

void store(uint32_t value)
{
    int i = lookup(value);
    if (my_hash[i] == 0)
        my_hash[i] = value;
}

bool contains(uint32_t value)
{
    return (my_hash[lookup(value)] == value);
}
In this example implementation, the lookup time will typically be very low, but at the worst case can be up to the number of entries stored. For a realtime application, you can consider also an implementation using binary trees, which will have a more predictable lookup time.
In this case, it might be worthwhile investigating Bloom filters. They're capable of quickly establishing that a value is not present, which is a good thing since most of the 2^32 possible values are not in that 1024 element array. However, there are some false positives that will need an extra check.
Since your table is apparently static, you can determine which false positives exist for your Bloom filter and put those in a perfect hash.
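A hedged sketch of a tiny one-word Bloom-style prefilter (h1 and h2 are hypothetical hash functions mapping a value to a bit position 0..31):
#include <stdint.h>
#include <stdbool.h>

extern unsigned h1(uint32_t x), h2(uint32_t x);   // hypothetical hashes -> bit index 0..31
static uint32_t bloom;                            // OR of (1u << h1(v)) | (1u << h2(v)) over all table values

static inline bool maybe_present(uint32_t x)
{
    uint32_t bits = (1u << h1(x)) | (1u << h2(x));
    return (bloom & bits) == bits;                // false => definitely not in the table
}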
Assuming your processor runs at 204 MHz which seems to be the maximum for the LPC4357, and also assuming your timing result reflects the average case (half of the array traversed), we get:
CPU frequency: 204 MHz
Cycle period: 4.9 ns
Duration in cycles: 12.5 µs / 4.9 ns = 2551 cycles
Cycles per iteration: 2551 / 128 = 19.9
So, your search loop spends around 20 cycles per iteration. That doesn't sound awful, but I guess that in order to make it faster you need to look at the assembly.
I would recommend dropping the index and using a pointer comparison instead, and making all the pointers const.
bool arrayContains(const uint32_t *array, size_t length)
{
    const uint32_t * const end = array + length;
    while (array != end)
    {
        if (*array++ == 0x1234ABCD)
            return true;
    }
    return false;
}
That's at least worth testing.
Other people have suggested reorganizing your table, adding a sentinel value at the end, or sorting it in order to provide a binary search.
You state "I also use pointer arithmetic and a for loop, which does down-counting instead of up (checking if i != 0 is faster than checking if i < 256)."
My first advice is: get rid of the pointer arithmetic and the downcounting. Stuff like
for (i=0; i<256; i++)
{
    if (compareVal == the_array[i])
    {
        [...]
    }
}
tends to be idiomatic to the compiler. The loop is idiomatic, and the indexing of an array over a loop variable is idiomatic. Juggling with pointer arithmetic and pointers will tend to obfuscate the idioms to the compiler and make it generate code related to what you wrote rather than what the compiler writer decided to be the best course for the general task.
For example, the above code might be compiled into a loop running from -256 or -255 to zero, indexing off &the_array[256]. Possibly stuff that is not even expressible in valid C but matches the architecture of the machine you are generating for.
So don't microoptimize. You are just throwing spanners into the works of your optimizer. If you want to be clever, work on the data structures and algorithms but don't microoptimize their expression. It will just come back to bite you, if not on the current compiler/architecture, then on the next.
In particular using pointer arithmetic instead of arrays and indexes is poison for the compiler being fully aware of alignments, storage locations, aliasing considerations and other stuff, and for doing optimizations like strength reduction in the way best suited to the machine architecture.
Vectorization can be used here, as it is often is in implementations of memchr. You use the following algorithm:
Create a mask of your query repeated, equal in length to your OS's word size (64-bit, 32-bit, etc.). On a 64-bit system you would repeat the 32-bit query twice.
Process the list as a list of multiple pieces of data at once, simply by casting the list to a list of a larger data type and pulling values out. For each chunk, XOR it with the mask, then XOR with 0b0111...1, then add 1, then & with a mask of 0b1000...0 repeating. If the result is 0, there is definitely not a match. Otherwise, there may (usually with very high probability) be a match, so search the chunk normally.
Example implementation: https://sourceware.org/cgi-bin/cvsweb.cgi/src/newlib/libc/string/memchr.c?rev=1.3&content-type=text/x-cvsweb-markup&cvsroot=src
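A hedged C sketch of the word-at-a-time prefilter on a 64-bit machine; it uses the classic zero-lane test rather than the exact steps above, and a hit still needs a normal check of the two elements in the chunk:
#include <stdint.h>
#include <stdbool.h>

static bool chunk_may_match(uint64_t chunk, uint32_t key)
{
    uint64_t pattern = ((uint64_t)key << 32) | key;   // the key repeated twice
    uint64_t x = chunk ^ pattern;                      // a 32-bit half becomes zero where it matches
    // flag a half whose value is zero: subtracting 1 borrows into the top bit while the original top bit is clear
    return ((x - 0x0000000100000001ULL) & ~x & 0x8000000080000000ULL) != 0;
}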
If you can accommodate the domain of your values with the amount of memory that's available to your application, then, the fastest solution would be to represent your array as an array of bits:
bool theArray[MAX_VALUE]; // of which 1024 values are true, the rest false
uint32_t compareVal = 0x1234ABCD;
bool validFlag = theArray[compareVal];
EDIT
I'm astounded by the number of critics. The title of this thread is "How do I quickly find whether a value is present in a C array?" for which I will stand by my answer because it answers precisely that. I could argue that this has the most speed efficient hash function (since address === value). I've read the comments and I'm aware of the obvious caveats. Undoubtedly those caveats limit the range of problems this can be used to solve, but, for those problems that it does solve, it solves very efficiently.
Rather than reject this answer outright, consider it as the optimal starting point for which you can evolve by using hash functions to achieve a better balance between speed and performance.
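A hedged sketch of a true bit array, which shrinks the table to MAX_VALUE/8 bytes at the cost of a shift and a mask per lookup (names are mine):
#include <stdint.h>
#include <stdbool.h>

static uint8_t present[MAX_VALUE / 8];        // one bit per possible value, set at init time

static inline bool is_present(uint32_t v)
{
    return (present[v >> 3] >> (v & 7)) & 1;
}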
I'm sorry if my answer duplicates an existing one - I'm just a lazy reader. Feel free to downvote then ))
1) You could remove the counter 'i' altogether - just compare pointers, i.e.
for (ptr = &the_array[0]; ptr < the_array+1024; ptr++)
{
    if (compareVal == *ptr)
    {
        break;
    }
}
... then compare ptr and the_array+1024 here - you do not need validFlag at all.
All that won't give any significant improvement though; such optimization probably could be achieved by the compiler itself.
2) As already mentioned in other answers, almost all modern CPUs are RISC-based, for example ARM. Even modern Intel x86 CPUs use RISC cores inside, as far as I know (compiling from x86 on the fly). A major optimization for RISC is pipeline optimization (and for Intel and other CPUs as well), minimizing code jumps. One type of such optimization (probably a major one) is "cycle rollback". It's incredibly stupid, and efficient; even the Intel compiler can do that AFAIK. It looks like:
if (compareVal == the_array[0]) { validFlag = true; goto end_of_compare; }
if (compareVal == the_array[1]) { validFlag = true; goto end_of_compare; }
...and so on...
end_of_compare:
This way the optimization is that the pipeline is not broken for the worst case (if compareVal is absent in the array), so it is as fast as possible (of course not counting algorithmic optimizations such as hash tables, sorted arrays and so on, mentioned in other answers, which may give better results depending on array size; the cycle rollback approach can be applied there as well, by the way - I'm writing here about what I think I didn't see covered in other answers).
The second part of this optimization is that the array item is taken by direct address (calculated at compile time; make sure you use a static array), so no additional ADD op is needed to calculate the pointer from the array's base address. This optimization may not have a significant effect, since AFAIK the ARM architecture has special features to speed up array addressing. But anyway it's always better to know that you did all the best just in C code directly, right?
Cycle rollback may look awkward due to the waste of ROM (yep, you did right to place it in the fast part of RAM, if your board supports this feature), but actually it's a fair price for speed, being based on the RISC concept. This is just a general point of calculation optimization - you sacrifice space for the sake of speed, and vice versa, depending on your requirements.
If you think that a rollback for an array of 1024 elements is too large a sacrifice for your case, you can consider a 'partial rollback', for example dividing the array into 2 parts of 512 items each, or 4x256, and so on.
3) Modern CPUs often support SIMD ops, for example the ARM NEON instruction set - it allows executing the same op on multiple data elements in parallel. Frankly speaking I do not remember if it is suitable for comparison ops, but I feel it may be; you should check that. Googling shows that there may be some tricks as well, to get max speed, see https://stackoverflow.com/a/5734019/1028256
I hope it can give you some new ideas.
This is more like an addendum than an answer.
I've had a similar case in the past, but my array was constant over a considerable number of searches.
In half of them, the searched value was NOT present in array. Then I realized I could apply a "filter" before doing any search.
This "filter" is just a simple integer number, calculated ONCE and used in each search.
It's in Java, but it's pretty simple:
binaryfilter = 0;
for (int i = 0; i < array.length; i++)
{
// just apply "Binary OR Operator" over values.
binaryfilter = binaryfilter | array[i];
}
So, before doing a binary search, I check binaryfilter:
// Check binaryfilter vs value with a "Binary AND Operator"
if ((binaryfilter & valuetosearch) != valuetosearch)
{
// valuetosearch is not in the array!
return false;
}
else
{
// valuetosearch MAYBE in the array, so let's check it out
// ... do binary search stuff ...
}
You can use a 'better' hash algorithm, but this can be very fast, especially for large numbers.
Maybe this could save you even more cycles.
Make sure the instructions ("the pseudo code") and the data ("theArray") are in separate (RAM) memories so CM4 Harvard architecture is utilized to its full potential. From the user manual:
To optimize the CPU performance, the ARM Cortex-M4 has three buses for Instruction (code) (I) access, Data (D) access, and System (S) access. When instructions and data are kept in separate memories, then code and data accesses can be done in parallel in one cycle. When code and data are kept in the same memory, then instructions that load or store data may take two cycles.
Following this guideline I observed ~30% speed increase (FFT calculation in my case).
I'm a great fan of hashing. The problem of course is to find an efficient algorithm that is both fast and uses a minimum amount of memory (especially on an embedded processor).
If you know beforehand the values that may occur you can create a program that runs through a multitude of algorithms to find the best one - or, rather, the best parameters for your data.
I created such a program that you can read about in this post and achieved some very fast results. 16000 entries translates roughly to 2^14 or an average of 14 comparisons to find the value using a binary search. I explicitly aimed for very fast lookups - on average finding the value in <=1.5 lookups - which resulted in greater RAM requirements. I believe that with a more conservative average value (say <=3) a lot of memory could be saved. By comparison the average case for a binary search on your 256 or 1024 entries would result in an average number of comparisons of 8 and 10, respectively.
My average lookup required around 60 cycles (on a laptop with an intel i5) with a generic algorithm (utilizing one division by a variable) and 40-45 cycles with a specialized (probably utilizing a multiplication). This should translate into sub-microsecond lookup times on your MCU, depending of course on the clock frequency it executes at.
It can be real-life-tweaked further if the entry array keeps track of how many times an entry was accessed. If the entry array is sorted from most to least accessed before the indices are computed, then it'll find the most commonly occurring values with a single comparison.

Finding the instances of the number in a vector array in KNC (Xeon Phi)

I am trying to exploit the SIMD 512 offered by knc (Xeon Phi) to improve performance of the below C code using intel intrinsics. However, my intrinsic embedded code runs slower than auto-vectorized code
C Code
int64_t match=0;
int *myArray __attribute__((align(64)));
myArray = (int*) malloc (sizeof(int)*SIZE); //SIZE is array size taken from user
radomize(myArray); //to fill some random data
int searchVal=24;
#pragma vector always
for(int i=0;i<SIZE;i++) {
    if (myArray[i]==searchVal) match++;
}
return match;
Intrinsic embedded code:
In the below code I am first loading the array and comparing it with the search key. The intrinsics return a 16-bit mask value that is reduced using _mm512_mask_reduce_add_epi32().
register int64_t match=0;
int *myArray __attribute__((align(64)));
myArray = (int*) malloc (sizeof(int)*SIZE); //SIZE is array size taken from user
const int values[16] = { 1,1,1,1,
                         1,1,1,1,
                         1,1,1,1,
                         1,1,1,1 };
__m512i const flag = _mm512_load_epi32((void*) values);
__mmask16 countMask;
__m512i searchVal = _mm512_set1_epi32(16);
__m512i kV = _mm512_setzero_epi32();
for (int i=0;i<SIZE;i+=16)
{
    // kV = _mm512_setzero_epi32();
    kV = _mm512_loadunpacklo_epi32(kV,(void* )(&myArray[i]));
    kV = _mm512_loadunpackhi_epi32(kV,(void* )(&myArray[i + 16]));
    countMask = _mm512_cmpeq_epi32_mask(kV, searchVal);
    match += _mm512_mask_reduce_add_epi32(countMask,flag);
}
return match;
I believe I have somehow introduced extra cycles in this code and hence it is running slower compared to the auto-vectorized code. Unlike SIMD128, which directly returns the result of the compare in a 128-bit register, SIMD512 returns the values in a mask register, which adds more complexity to my code. Am I missing something here? There must be a way to directly compare and keep a count of successful searches rather than going through masks and extra ops such as XOR.
Finally, please suggest ways to increase the performance of this code using intrinsics. I believe I can juice out more performance using intrinsics. This was at least true for SIMD128, where using intrinsics allowed me to gain 25% performance.
I suggest the following optimizations:
Use prefetching. Your code performs very little computations, and almost surely bandwidth-bound. Xeon Phi has hardware prefetching only for L2 cache, so for optimal performance you need to insert prefetching instructions manually.
Use aligned read _mm512_load_epi32 as hinted by #PaulR. Use memalign function instead of malloc to guarantee that the array is really aligned on 64 bytes. And in case you will ever need misaligned instructions, use _mm512_undefined_epi32() as the source for the first misaligned load, as it breaks dependency on kV (in your current code) and lets the compiler do additional optimizations.
Unroll the loop by 2 or use at least two threads to hide instruction latency.
Avoid using int variable as an index. unsigned int, size_t or ssize_t are better options.
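A hedged sketch pulling those suggestions together (the prefetch distance, the _mm_prefetch call and the reduce step are assumptions, not tested on KNC):
#include <stdint.h>
#include <malloc.h>
#include <immintrin.h>

int *myArray = (int *) memalign(64, sizeof(int) * SIZE);        // really 64-byte aligned
// ... fill myArray ...
int64_t match = 0;
__m512i searchVal = _mm512_set1_epi32(24);
__m512i ones      = _mm512_set1_epi32(1);
for (size_t i = 0; i < SIZE; i += 16)
{
    _mm_prefetch((const char *)&myArray[i + 256], _MM_HINT_T0); // prefetch ahead into L1
    __m512i kV = _mm512_load_epi32(&myArray[i]);                // aligned load, no unpack pair
    __mmask16 m = _mm512_cmpeq_epi32_mask(kV, searchVal);
    match += _mm512_mask_reduce_add_epi32(m, ones);             // count matching lanes
}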

some mandelbrot drawing routine from c to sse2

I want to rewrite this simple routine as SSE2 code (preferably
in nasm) and I am not totally sure how to do it. Two things are
not clear: how to express the calculations (the inner loop and those from
the outer loop too), and how to call the C function "SetPixelInDibInt(i, j, palette[n]);"
from statically linked asm code.
void DrawMandelbrotD(double ox, double oy, double lx, int N_ITER)
{
    double ly = lx * double(CLIENT_Y)/double(CLIENT_X);
    double dx = lx / CLIENT_X;
    double dy = ly / CLIENT_Y;
    double ax = ox - lx * 0.5 + dx * 0.5;
    double ay = oy - ly * 0.5 + dy * 0.5;
    static double re, im, re_n, im_n, c_re, c_im, rere, imim;
    int n;
    for(int j=0; j<CLIENT_Y; j+=1)
    {
        for(int i=0; i<CLIENT_X; i+=1)
        {
            c_re = ax + i * dx;
            c_im = ay + j * dy;
            re = c_re;
            im = c_im;
            rere=re*re;
            imim=im*im;
            n=1;
            for(int k=0;k<N_ITER;k++)
            {
                im = (re+re)*im + c_im;
                re = rere - imim + c_re;
                rere=re*re;
                imim=im*im;
                if ( (rere + imim) > 4.0 ) break;
                n++;
            }
            SetPixelInDibInt(i ,j, palette[n]);
        }
    }
}
Could someone help with that? I would prefer not to see other code
implementations, but just a nasm-sse translation of the code above
- that would be most helpful in my case.
Intel has a complete implementation as an AVX example. See below.
What makes Mandelbrot tricky is that the early-out condition for each point in the set (i.e. pixel) is different. You could keep a pair or quad of pixels iterating until the magnitude of both exceeds 2.0 (or you hit max iterations). To do otherwise would require tracking which pixel's points were in which vector element.
Anyway, a simplistic implementation to operate on a vector of 2 (or 4 with AVX) doubles at a time would have its throughput limited by the latency of the dependency chains. You'd need to do multiple dependency chains in parallel to keep both of Haswell's FMA units fed. So you'd duplicate your variables, and interleave operations for two iterations of the outer loop inside the inner loop.
Keeping track of which pixels are being calculated would be a little tricky. I think it might take less overhead to use one set of registers for one row of pixels, and another set of registers for another row. (So you can always just move 4 pixels to the right, rather than checking whether the other dep chain is already processing that vector.)
I suspect that only checking the loop exit condition every 4 iterations or so might be a win. Getting code to branch based on a packed vector comparison is slightly more expensive than in the scalar case. The extra FP add required is also expensive. (Haswell can do two FMAs per cycle (latency = 5). The lone FP add unit is on the same port as one of the FMA units. The two FP mul units are on the same ports that can run FMA.)
The loop condition can be checked with a packed-compare to generate a mask of zeros and ones, and a (V)PTEST of that register with itself to see if it's all zero. (edit: movmskps then test+jcc is fewer uops, but maybe higher latency.) Then obviously je or jne as appropriate, depending on whether you did a FP compare that leaves zeros when you should exit, or zeros when you shouldn't. NAN shouldn't be possible, but there's no reason not to choose your comparison op such that a NAN will result in the exit condition being true.
const __m256d const_four = _mm256_set1_pd(4.0); // outside the loop
__m256d cmp_result = _mm256_cmp_pd(mag_squared, const_four, _CMP_LE_OQ); // vcmppd. element is all-ones if it's still <= 4.0
if (_mm256_testz_si256(_mm256_castpd_si256(cmp_result), _mm256_castpd_si256(cmp_result)))
    break;
There MIGHT be some way to use PTEST directly on a packed-double, with some bit-hack AND-mask that will pick out bits that will be set iff the FP value is > 4.0. Like maybe some bits in the exponent? Maybe worth considering. I found a forum post about it, but didn't try it out.
Hmm, oh crap, this doesn't record WHEN the loop condition failed, for each vector element separately, for the purpose of coloring the points outside the Mandelbrot set. Maybe test for any element hitting the condition (instead of all), record the result, and then set that element (and c for that element) to 0.0 so it won't trigger the exit condition again. Or maybe scheduling pixels into vector elements is the way to go after all. This code might do fairly well on a hyperthreaded CPU, since there will be a lot of branch mispredicts with every element separately triggering the early-out condition.
That might waste a lot of your throughput, and given that 4 uops per cycle is doable, but only 2 of them can be FP mul/add/FMA, there's room for a significant amount of integer code to schedule points into vector elements. (On Sandybridge/Ivybridge, without FMA, FP throughput is lower. But there are only 3 ports that can handle integer ops, and 2 of those are the ports for the FP mul and FP add units.)
Since you don't have to read any source data, there's only 1 memory access stream for each dep chain, and it's a write stream. (And it's low bandwidth, since most points take a lot of iterations before you're ready to write a single pixel value.) So the number of hardware prefetch streams isn't a limiting factor for the number of dep chains to run in parallel. Cache misses latency should be hidden by write buffers.
I can write some code if anyone's still interested in this (just post a comment). I stopped at the high-level design stage since this is an old question, though.
==============
I also found that Intel already used the Mandelbrot set as an example for one of their AVX tutorials. They use the mask-off-vector-elements method for the loop condition. (using the mask generated directly by vcmpps to AND). Their results indicate that AVX (single-precision) gave a 7x speedup over scalar float, so apparently it's not common for neighbouring pixels to hit the early-out condition at very different numbers of iterations. (at least for the zoom / pan they tested with.)
They just let the FP results keep accumulating for elements that fail the early-out condition. They just stop incrementing the counter for that element. Hopefully most systems default to having the control word set to zero out denormals, if denormals still take extra cycles.
Their code is silly in one way, though: They track the iteration count for each vector element with a floating-point vector, and then convert it to int at the end before use. It'd be faster, and not occupy an FP execution unit, to use packed-integers for that. Oh, I know why they do that: AVX (without AVX2) doesn't support 256bit integer vector ops. They could have used packed 16bit int loop counters, but that could overflow. (And they'd have to compress the mask down from 256b to 128b).
They also test for all elements being > 4.0 with movmskps and then test that, instead of using ptest. I guess the test / jcc can macro-fuse, and run on a different execution unit than FP vector ops, so it's maybe not even slower. Oh, and of course AVX (without AVX2) doesn't have 256bit PTEST. Also, PTEST is 2 uops, so actually movmskps + test / jcc is fewer uops than ptest + jcc. (PTEST is 1 fused-domain uop on SnB, but still 2 unfused uops for the execution ports. On IvB/HSW, 2 uops even in the fused domain.) So it looks like movmskps is the optimal way, unless you can take advantage of the bitwise AND that's part of PTEST, or need to test more than just the high bit of each element. If a branch is unpredictable, ptest might be lower latency, and thus be worth it by catching mispredicts a cycle sooner.
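A hedged sketch of the movmskps-style exit test in intrinsics form (single precision, as in Intel's example; variable names are assumptions):
__m256 still_active = _mm256_cmp_ps(mag_squared, _mm256_set1_ps(4.0f), _CMP_LE_OQ);
if (_mm256_movemask_ps(still_active) == 0)   // no element is still within the escape radius
    break;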
