Low pass shape curve with low cycle count - c

Hi, I want to achieve the above curve in software without using a DSP function. I was hoping to use a fast, low-cycle ARM instruction such as multiply-accumulate.
Is there any fast way of doing this in C on an embedded ARM processor?

The curve shown is that of the simplest possible first-order filter, characterised by a 3dB cut-off frequency fc and a 6dB/octave or 20dB/decade roll-off. As an analogue filter it could be implemented as a simple passive RC filter.
In the digital domain such a filter would be implemented by:
y[n] = a0 * x[n] + b1 * y[n-1]
where x are the input samples and y the output samples. Or in code:
void lowPassFilter( const tSample* x, tSample* y, size_t sample_count )
{
    static tSample y_1 = 0 ;

    for( size_t i = 0; i < sample_count; ++i )
    {
        y[i] = a0 * x[i] + b1 * y_1 ;
        y_1 = y[i] ;
    }
}
The filter is characterised by the coefficients:
a0 = 1 - x
b1 = x
where x is a value between 0 and 1 (I'll address the eradication of the implied floating point operations in due course):
x = e^(-2πfc)
Where fc is the desired -3dB cut-off frequency expressed as a fraction of the sample rate. So for a sample rate 32Ksps and a cut-off frequency of 1KHz, fc = 1000/32000 = 0.03125, so:
b1 = x = e^(-2πfc) = 0.821725
a0 = 1 - x = 0.178275
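If the coefficients need to be computed at run time rather than hard-coded (for a variable cut-off, say), the expressions translate directly to C. A minimal sketch, with illustrative names, using double precision for the one-off calculation:

#include <math.h>

static double g_a0, g_b1 ;                         // filter coefficients (illustrative names)

void lowPassInit( double cutoff_hz, double sample_rate_hz )
{
    const double fc = cutoff_hz / sample_rate_hz ;  // cut-off as a fraction of the sample rate

    g_b1 = exp( -6.283185307179586 * fc ) ;         // b1 = e^(-2.pi.fc)
    g_a0 = 1.0 - g_b1 ;                             // a0 = 1 - b1
}

Calling lowPassInit( 1000.0, 32000.0 ) gives b1 ≈ 0.821725 and a0 ≈ 0.178275, matching the values above.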
Now naïvely plugging those constants into lowPassFilter() will result in floating-point code being generated; on an MCU without an FPU that might be prohibitive, and even with an FPU it might be sub-optimal. So in this case we might use a fixed-point representation. Since all the real values are less than one and the machine is 32-bit, a UQ0.16 representation is appropriate, as intermediate multiplication results will not then overflow a 32-bit machine word. This does require the sample width to be 16 bits or less (or scaled accordingly). Using fixed-point, the code might look like:
#include <stdint.h>

typedef uint16_t tSample ;

#define b1 53852                // 0.821725 * 65536
#define a0 (65536 - b1)         // (1 - 0.821725) * 65536
#define FIXED_MUL( x, y ) (((uint32_t)(x) * (y)) >> 16)

void lowPassFilter( const tSample* x, tSample* y, size_t sample_count )
{
    static tSample y_1 = 0 ;

    for( size_t i = 0; i < sample_count; ++i )
    {
        y[i] = FIXED_MUL( a0, x[i] ) + FIXED_MUL( b1, y_1 ) ;
        y_1 = y[i] ;
    }
}
Now that is not a significant amount of processing for most ARM processors at the 32ksps suggested in this example. Obviously it depends what other demands are on the processor, but on its own this would not be a significant load, even without applying compiler optimisation. As with any optimisation, you should implement it, measure it and improve it if necessary.
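On Cortex-M3/M4/M7 parts one convenient way to do that measurement is the DWT cycle counter (not present on Cortex-M0). A minimal sketch using the CMSIS-Core register definitions; the buffer names and count below are placeholders:

CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk ;   // enable the DWT unit
DWT->CYCCNT = 0 ;
DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk ;              // start the cycle counter

uint32_t start = DWT->CYCCNT ;
lowPassFilter( input_samples, output_samples, sample_count ) ;
uint32_t cycles = DWT->CYCCNT - start ;            // cycles spent in the filter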
As a first stab I'd trust the compiler optimiser to generate code that in most cases will meet requirements, or at least be as good as you might achieve with hand-written assembler. Whether or not it chooses to use a multiply-accumulate instruction is out of your hands, but if it doesn't, the chances are that there is no advantage.
Bear in mind that the ARM Cortex-M4 and M7, for example, include DSP instructions not supported on Cortex-M0 or M3 parts. The compiler may or may not utilise these, but the simplest way to guarantee their use without resorting to assembler would be to use the CMSIS-DSP library; whether that provides greater performance or better fidelity than the above, you would have to test.
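For illustration, here is a minimal, untested sketch of how the same first-order filter might be expressed with the CMSIS-DSP direct-form-I biquad in Q15. The function and buffer names are placeholders, the coefficients are the a0/b1 values above scaled to Q15, and the q15 coefficient ordering { b0, 0, b1, b2, a1, a2 } and the postShift argument should be checked against the CMSIS-DSP documentation for the version you are using:

#include "arm_math.h"   // CMSIS-DSP

#define BLOCK_SIZE 256  // placeholder block length

// Single biquad stage: b0 = 0.178275, b1 = b2 = 0, a1 = 0.821725 (CMSIS sign convention), a2 = 0.
static q15_t lpCoeffs[6] = { 5842, 0, 0, 0, 26926, 0 } ;
static q15_t lpState[4] ;                     // 4 state words per stage
static arm_biquad_casd_df1_inst_q15 lpInstance ;

void filterInit( void )
{
    arm_biquad_cascade_df1_init_q15( &lpInstance, 1, lpCoeffs, lpState, 0 ) ;
}

void filterBlock( const q15_t* in, q15_t* out )
{
    arm_biquad_cascade_df1_q15( &lpInstance, in, out, BLOCK_SIZE ) ;
}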
It is worth noting that the function lowPassFilter() retains its state statically, so it can be called iteratively for "blocks" of samples (from an ADC DMA transfer, for example). So you might have:
int dma_buffer_n = 0 ;

for(;;)
{
    waitEvent( DMA_BUFFER_READY ) ;
    lowPassFilter( dma_buffer[dma_buffer_n], output_buffer, DMA_BLOCK_SIZE ) ;
    dma_buffer_n = dma_buffer_n == 0 ? 1 : 0 ;  // Flip buffers
}
The use of DMA double-buffering is likely to be far more important to performance than the filter implementation. I have worked on a DSP application sampling two channels at 48ksps on a 72MHz Cortex-M3 with far more complex DSP requirements than this (each channel had a high-pass IIR, an 18-coefficient FIR and a Viterbi decoder), so I really do think the assumption that this simple filter will not be fast enough is somewhat premature.

Related

Inverse FFT with CMSIS is wrong

I am attempting to perform an FFT on a signal and use the resulting data to retrieve the original samples via an IFFT. I am using the CMSIS DSP library on an STM32 with an M3.
My issue is understanding the scaling that occurs with the FFT, and also how to get a correct IFFT. Currently the IFFT results in a wave similar to the input, but the points are scaled to anywhere between 120x and 140x of the original. Is this simply the result of precision errors of q15? Am I to scale the IFFT results by 7 bits? My code is below.
The documentation also mentions "For the RIFFT, the source buffer must at least have length fftLenReal + 2. The last two elements must be equal to what would be generated by the RFFT: (pSrc[0] - pSrc[1]) >> 1 and 0". What is this for? Applying these operations to elements FFT_SIZE*2 - 2 and FFT_SIZE*2 - 1 respectively did not change the results of the IFFT at all.
//128 point FFT
#define FFT_SIZE 128
arm_rfft_instance_q15 fft_instance;
arm_rfft_instance_q15 ifft_instance;
//time domain signal buffers
float32_t sinetbl_in[FFT_SIZE];
float32_t sinetbl_out[FFT_SIZE];
//a copy for comparison after RFFT since function modifies input buffer
volatile q15_t fft_in_buf_cpy[FFT_SIZE];
q15_t fft_in_buf[FFT_SIZE];
//output for FFT, RFFT provides real and complex data organized as re[0], im[0], re[1], im[1]
q15_t fft_out_buf[FFT_SIZE*2];
q15_t fft_out_buf_mag[FFT_SIZE*2];
//inverse fft buffer result
q15_t ifft_out_buf[FFT_SIZE];
//generate 1kHz sinewave with a sample frequency of 8kHz for 128 samples, amplitude is 1
for(int i = 0; i < FFT_SIZE; ++i){
sinetbl_in[i] = arm_sin_f32(2*3.14*1000 *i/8000);
sinetbl_out[i] = 0;
}
//convert buffer to q15 (not enough flash to use f32 fft functions)
arm_float_to_q15(sinetbl_in, fft_in_buf, FFT_SIZE);
memcpy(fft_in_buf_cpy, fft_in_buf, FFT_SIZE*2);
//perform RFFT
arm_rfft_init_q15(&fft_instance, FFT_SIZE, 0, 1);
arm_rfft_q15(&fft_instance, fft_in_buf, fft_out_buf);
//calculate magnitude, skip 1st real and img numbers as they are DC and both real
arm_cmplx_mag_q15(fft_out_buf + 2, fft_out_buf_mag + 1, FFT_SIZE/2-1);
//weird operations described by documentation, does not change results
//fft_out_buf[FFT_SIZE*2 - 2] = (fft_out_buf[0] - fft_out_buf[1]) >> 1;
//fft_out_buf[FFT_SIZE*2 - 1] = 0;
//perform inverse FFT
arm_rfft_init_q15(&ifft_instance, FFT_SIZE, 1, 1);
arm_rfft_q15(&ifft_instance, fft_out_buf, ifft_out_buf);
//closest approximation to get to original scaling
//arm_shift_q15(ifft_out_buf, 7, ifft_out_buf, FFT_SIZE);
//convert back to float for comparison with input
arm_q15_to_float(ifft_out_buf, sinetbl_out, FFT_SIZE);
I feel like I answered my own question with the precision comment, but I'd like to be sure. Am I doing this FFT stuff right?
Thanks in advance
As Cris pointed out, some libraries skip the normalization process. CMSIS-DSP is one of those libraries, as it is intended to be fast. For CMSIS, depending on the FFT size, you must left-shift your data by a certain amount to get back to the original range. In my case, with an FFT size of 128 and also the magnitude calculation, it was 7, as I originally surmised.
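For reference, a minimal sketch of that restore step, using the same arm_shift_q15 call that is commented out in the code above (the shift of 7 applies to this 128-point case; check the arm_rfft_q15 documentation for the scaling at other lengths):

// Undo the internal down-scaling of the q15 RFFT/RIFFT pair for FFT_SIZE == 128;
// the required left-shift depends on the FFT length.
arm_shift_q15(ifft_out_buf, 7, ifft_out_buf, FFT_SIZE);
arm_q15_to_float(ifft_out_buf, sinetbl_out, FFT_SIZE);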

How can I effectively time the execution of a function that's only a few cycles long?

I'm trying to do some comparisons on different methods for calculating dot products using SSE Intrinsics, but since the methods are only a few cycles long, I have to run the instructions trillions of times for it to take more than a tiny fraction of a second. The only problem with that is that gcc with the -O3 flag is "optimizing" my main method into an infinite loop.
My code is
#include <immintrin.h>
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <inttypes.h>
#define NORMAL 0
struct _Vec3 {
float x;
float y;
float z;
float w;
};
typedef struct _Vec3 Vec3;
__m128 singleDot(__m128 a, __m128 b) {
return _mm_dp_ps(a, b, 0b00001111);
}
int main(int argc, char** argv) {
for (uint16_t j = 0; j < (1L << 16); j++) {
for (uint64_t i = 0; i < (1L << 62); i++) {
Vec3 a = {i, i + 0.5, i + 1, 0.0};
Vec3 b = {i, i - 0.5, i - 1, 0.0};
#if NORMAL
float ans = normalDot(a, b); // naive implementation
#else
// float _c[4] = {a.x, a.y, a.z, 0.0};
// float _d[4] = {b.x, b.y, b.z, 0.0};
__m128 c = _mm_load_ps((float*)&a);
__m128 d = _mm_load_ps((float*)&b);
__m128 ans = singleDot(c, d);
#endif
}
}
}
but when I compile with gcc -std=c11 -march=native -O3 main.c and run objdump -d, it turns main into
0000000000400400 <main>:
400400: eb fe jmp 400400 <main>
is there an alternative for timing different approaches?
That's because this:
for (uint16_t j = 0; j < (1L << 16); j++) {
is an infinite loop -- the maximum value for a uint16_t is 65535 (2^16 - 1), after which it will wrap back to 0, so the test will always be true.
Even after fixing the uint16_t instead of uint64_t typo that makes your loop infinite, the actual work would still be optimized away because nothing uses the result.
You can use Google Benchmark's DoNotOptimize to stop your unused ans result from being optimized away, e.g. the "Escape" and "Clobber" functions that this Q&A is asking about. That works in GCC, and that question links to a relevant YouTube video from a clang developer's CppCon talk.
A worse alternative is to assign the result to a volatile variable. But keep in mind that common-subexpression elimination can still optimize away earlier parts of the calculation, whether you use volatile or an inline-asm macro to make sure the compiler materializes the actual final result somewhere. Micro-benchmarking is hard. You need the compiler to do exactly the amount of work that would happen in the real use-case, but not more.
See Idiomatic way of performance evaluation? for that and more.
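Putting the "Escape" idea into practice, here is a minimal sketch of the question's benchmark with the inline-asm idiom from that talk. The escape() helper is illustrative, not a library function, and _mm_set_ps is used instead of _mm_load_ps to sidestep the Vec3 alignment question; compile with -O3 -march=native as in the question:

#include <immintrin.h>
#include <stdint.h>

// Empty asm with a "memory" clobber: the compiler must materialize *p and
// assume it may be read, so the work producing it cannot be discarded.
static inline void escape(void *p)
{
    __asm__ volatile("" : : "g"(p) : "memory");
}

__m128 singleDot(__m128 a, __m128 b)
{
    return _mm_dp_ps(a, b, 0b00001111);
}

int main(void)
{
    for (uint64_t i = 0; i < (1ULL << 30); i++)
    {
        __m128 c = _mm_set_ps(0.0f, i + 1.0f, i + 0.5f, (float)i);
        __m128 d = _mm_set_ps(0.0f, i - 1.0f, i - 0.5f, (float)i);
        __m128 ans = singleDot(c, d);
        escape(&ans);   // 'ans' (and the work producing it) must be kept
    }
    return 0;
}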
Keep in mind exactly what you're measuring here.
Probably a bunch of loop overhead, and probably store-forwarding stalls, depending on whether the compiler vectorizes those initializers or not. But even if it does, the integer-to-FP conversion plus 2x SIMD FP additions are comparable in throughput cost to a dpps. (Which is what you're measuring, not latency; the difference matters a lot on CPUs with out-of-order execution, depending on the context of your real use case.)
Performance is not 1-dimensional at the scale of a couple instructions. Slapping a repeat loop around some work can measure the throughput or latency, depending on whether you make the input dependent on the previous output (a loop-carried dependency chain). But if your work ends up bound on front-end throughput, then loop overhead is an important part. Plus you might end up with effects due to how the machine code for your loop lines up with 32-byte boundaries for the uop cache.
For something this short and simple, static analysis is usually good. Count uops for the front-end and ports in the back-end, and analyze latency. See What considerations go into predicting latency for operations on modern superscalar processors and how can I calculate them by hand? LLVM-MCA can do this for you, and so can IACA. You can also measure as part of your real loop that uses dot products.
See also RDTSCP in NASM always returns the same value for some discussion of what you can measure about a single instruction.
I have to run the instructions trillions of times for it to take more than a tiny fraction of a second
Current x86 CPUs can loop at best one iteration per clock cycle for a tiny loop. It's impossible to write a loop that runs faster than that. 4 billion iterations (in asm) will take at least a whole second on a 4GHz CPU.
Of course an optimizing C compiler could unroll your loop and be doing as many source iterations as it wants per asm jump.

How to extract 8 integers from a 256 vector using intel intrinsics?

I'm trying to enhance the performance of my code by using the 256bit vector (Intel intrinsics - AVX).
I have an I7 Gen.4 (Haswell architecture) processor supporting SSE1 to SSE4.2 and AVX/AVX2 Extensions.
This is the code snippet that I'm trying to enhance:
/* code snipet */
kfac1 = kfac + factor; /* 7 cycles for 7 additions */
kfac2 = kfac1 + factor;
kfac3 = kfac2 + factor;
kfac4 = kfac3 + factor;
kfac5 = kfac4 + factor;
kfac6 = kfac5 + factor;
kfac7 = kfac6 + factor;
k1fac1 = k1fac + factor1; /* 7 cycles for 7 additions */
k1fac2 = k1fac1 + factor1;
k1fac3 = k1fac2 + factor1;
k1fac4 = k1fac3 + factor1;
k1fac5 = k1fac4 + factor1;
k1fac6 = k1fac5 + factor1;
k1fac7 = k1fac6 + factor1;
k2fac1 = k2fac + factor2; /* 7 cycles for 7 additions */
k2fac2 = k2fac1 + factor2;
k2fac3 = k2fac2 + factor2;
k2fac4 = k2fac3 + factor2;
k2fac5 = k2fac4 + factor2;
k2fac6 = k2fac5 + factor2;
k2fac7 = k2fac6 + factor2;
/* code snipet */
From the Intel manuals, I found this:
an integer addition ADD takes 1 cycle (latency).
an addition of a vector of 8 integers (32-bit) also takes 1 cycle.
So I've tried to make it this way:
fac = _mm256_set1_epi32 (factor )
fac1 = _mm256_set1_epi32 (factor1)
fac2 = _mm256_set1_epi32 (factor2)
v1 = _mm256_set_epi32 (0,kfac6,kfac5,kfac4,kfac3,kfac2,kfac1,kfac)
v2 = _mm256_set_epi32 (0,k1fac6,k1fac5,k1fac4,k1fac3,k1fac2,k1fac1,k1fac)
v3 = _mm256_set_epi32 (0,k2fac6,k2fac5,k2fac4,k2fac3,k2fac2,k2fac1,k2fac)
res1 = _mm256_add_epi32 (v1,fac) ////////////////////
res2 = _mm256_add_epi32 (v2,fac1) // just 3 cycles //
res3 = _mm256_add_epi32 (v3,fac2) ////////////////////
But the problem is that these factors are going to be used as table indices ( table[kfac] ... ). So I have to extract the factors as separate integers again.
I wonder if there is any possible way to do it?
A smart compiler could get table+factor into a register and use indexed addressing modes to get table+factor+k1fac6 as an address. Check the asm, and if the compiler doesn't do this for you, try changing the source to hand-hold the compiler:
const int *tf = table + factor;
const int *tf2 = table + factor2; // could be lea rdx, [rax+rcx*4] or something.
...
foo = tf[kfac2];
bar = tf2[k2fac6]; // could be mov r12, [rdx + rdi*4]
But to answer the question you asked:
Latency isn't a big deal when you have that many independent adds happening. The throughput of 4 scalar add instructions per clock on Haswell is much more relevant.
If k1fac2 and so on are already in contiguous memory, then using SIMD is possibly worth it. Otherwise all the shuffling and data transfer to get them in/out of vector regs makes it definitely not worth it (i.e. the stuff the compiler emits to implement _mm256_set_epi32(0, kfac6, kfac5, kfac4, kfac3, kfac2, kfac1, kfac)).
You could avoid needing to get the indices back into integer registers by using an AVX2 gather for the table loads. But gather is slow on Haswell, so probably not worth it. Maybe worth it on Broadwell.
On Skylake, gather is fast so it could be good if you can SIMD whatever you do with the LUT results. If you need to extract all the gather results back to separate integer registers, it's probably not worth it.
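For reference, a minimal sketch of that AVX2 gather alternative; 'indices' would be, for example, res1 from the question's code, and 'table' the question's lookup table of int:

#include <immintrin.h>

// Fetch table[idx0..idx7] in one gather instruction (AVX2).
static inline __m256i gather_lut(const int *table, __m256i indices)
{
    return _mm256_i32gather_epi32(table, indices, 4);   // scale 4 = sizeof(int)
}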
If you did need to extract 8x 32-bit integers from a __m256i into integer registers, you have three main choices of strategy:
Vector store to a tmp array and scalar loads
ALU shuffle instructions like pextrd (_mm_extract_epi32). Use _mm256_extracti128_si256 to get the high lane into a separate __m128i.
A mix of both strategies (e.g. store the high 128 to memory while using ALU stuff on the low half).
Depending on the surrounding code, any of these three could be optimal on Haswell.
pextrd r32, xmm, imm8 is 2 uops on Haswell, with one of them needing the shuffle unit on port5. That's a lot of shuffle uops, so a pure ALU strategy is only going to be good if your code is bottlenecked on L1d cache throughput. (Not the same thing as memory bandwidth). movd r32, xmm is only 1 uop, and compilers do know to use that when compiling _mm_extract_epi32(vec, 0), but you can also write int foo = _mm_cvtsi128_si32(vec) to make it explicit and remind yourself that the bottom element can be accessed more efficiently.
Store/reload has good throughput. Intel SnB-family CPUs including Haswell can run two loads per clock, and IIRC store-forwarding works from an aligned 32-byte store to any 4-byte element of it. But make sure it's an aligned store, e.g. into _Alignas(32) int tmp[8], or into a union between an __m256i and an int array. You could still store into the int array instead of the __m256i member to avoid union type-punning while still having the array aligned, but it's easiest to just use C++11 alignas or C11 _Alignas.
_Alignas(32) int tmp[8];
_mm256_store_si256((__m256i*)tmp, vec);
...
foo2 = tmp[2];
However, the problem with store/reload is latency. Even the first result won't be ready for 6 cycles after the store-data is ready.
A mixed strategy gives you the best of both worlds: ALU to extract the first 2 or 3 elements lets execution get started on whatever code uses them, hiding the store-forwarding latency of the store/reload.
_Alignas(32) int tmp[8];
_mm256_store_si256((__m256i*)tmp, vec);
__m128i lo = _mm256_castsi256_si128(vec); // This is free, no instructions
int foo0 = _mm_cvtsi128_si32(lo);
int foo1 = _mm_extract_epi32(lo, 1);
foo2 = tmp[2];
// rest of foo3..foo7 also loaded from tmp[]
// Then use foo0..foo7
You might find that it's optimal to do the first 4 elements with pextrd, in which case you only need to store/reload the upper lane. Use vextracti128 [mem], ymm, 1:
_Alignas(16) int tmp[4];
_mm_store_si128((__m128i*)tmp, _mm256_extracti128_si256(vec, 1));
// movd / pextrd for foo0..foo3
int foo4 = tmp[0];
...
With fewer larger elements (e.g. 64-bit integers), a pure ALU strategy is more attractive. 6-cycle vector-store / integer-reload latency is longer than it would take to get all of the results with ALU ops, but store/reload could still be good if there's a lot of instruction-level parallelism and you bottleneck on ALU throughput instead of latency.
With more smaller elements (8 or 16-bit), store/reload is definitely attractive. Extracting the first 2 to 4 elements with ALU instructions is still good. And maybe even vmovd r32, xmm and then picking that apart with integer shift/mask instructions is good.
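As an illustration of that last point, a sketch of picking apart the low four bytes after a movd; the function name is illustrative only:

#include <immintrin.h>
#include <stdint.h>

// Extract the low four 8-bit elements of a __m128i via one movd plus integer
// shifts/masks (a sketch of the idea above, not a tuned implementation).
static inline void extract_low4_u8(__m128i vec, uint8_t out[4])
{
    uint32_t low = (uint32_t)_mm_cvtsi128_si32(vec);  // movd: bytes 0..3
    out[0] = (uint8_t)(low & 0xff);
    out[1] = (uint8_t)((low >> 8) & 0xff);
    out[2] = (uint8_t)((low >> 16) & 0xff);
    out[3] = (uint8_t)(low >> 24);
}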
Your cycle-counting for the vector version is also bogus. The three _mm256_add_epi32 operations are independent, and Haswell can run two vpaddd instructions in parallel. (Skylake can run all three in a single cycle, each with 1 cycle latency.)
Superscalar pipelined out-of-order execution means there's a big difference between latency and throughput, and keeping track of dependency chains matters a lot. See http://agner.org/optimize/, and other links in the x86 tag wiki for more optimization guides.

converting SSE code to AVX - cost of _mm256_and_ps

I'm converting SSE2 sine and cosine functions (from Julien Pommier's sse_mathfun.h; based on the CEPHES sinf function) to use AVX in order to accept 8 float vectors or 4 doubles.
So, Julien's function sin_ps becomes sin_ps8 (for 8 floats) and sin_pd4 for 4 doubles. (The "advanced" editor here fails to accept my code, so please visit http://arstechnica.com/civis/viewtopic.php?f=20&t=1227375 to see it.)
Testing with clang 3.3 under Mac OS X 10.6.8 running on a 2011 Core i7 @ 2.7GHz, benchmarking results look like this:
sinf    .. -> 27.7 millions of vector evaluations/second over 5.56e+07 iters (standard, scalar sinf() function)
sin_ps  .. -> 41.0 millions of vector evaluations/second over 8.22e+07 iters
sin_pd4 .. -> 40.2 millions of vector evaluations/second over 8.06e+07 iters
sin_ps8 .. ->  2.5 millions of vector evaluations/second over 5.1e+06 iters
The cost of sin_ps8 is downright frightening, and it seems it is due to the use of _mm256_castsi256_ps. In fact, commenting out the line "poly_mask = _mm256_castsi256_ps(emmm2);" results in more normal performance.
sin_pd4 uses _mm_castsi128_pd, but it appears that it is not (just) the mix of SSE and AVX instructions that is biting me in sin_ps8: when I emulate the _mm256_castsi256_ps calls with 2 calls to _mm_castsi128_ps, performance doesn't improve. emm2 and emm0 are pointers to emmm2 and emmm0, both v8si instances and thus (a priori) correctly aligned to 32-byte boundaries.
See sse_mathfun.h and sse_mathfun_test.c for compilable code.
Is there a(n easy) way to avoid the penalty I'm seeing?
Transferring stuff out of registers into memory isn't usually a good idea. You are doing this every time you store into a pointer.
Instead of this:
{ ALIGN32_BEG v4sf *yy ALIGN32_END = (v4sf*) &y;
emm2[0] = _mm_and_si128(_mm_add_epi32( _mm_cvttps_epi32( yy[0] ), _v4si_pi32_1), _v4si_pi32_inv1),
emm2[1] = _mm_and_si128(_mm_add_epi32( _mm_cvttps_epi32( yy[1] ), _v4si_pi32_1), _v4si_pi32_inv1);
yy[0] = _mm_cvtepi32_ps(emm2[0]),
yy[1] = _mm_cvtepi32_ps(emm2[1]);
}
/* get the swap sign flag */
emm0[0] = _mm_slli_epi32(_mm_and_si128(emm2[0], _v4si_pi32_4), 29),
emm0[1] = _mm_slli_epi32(_mm_and_si128(emm2[1], _v4si_pi32_4), 29);
/* get the polynom selection mask
there is one polynom for 0 <= x <= Pi/4
and another one for Pi/4<x<=Pi/2
Both branches will be computed.
*/
emm2[0] = _mm_cmpeq_epi32(_mm_and_si128(emm2[0], _v4si_pi32_2), _mm_setzero_si128()),
emm2[1] = _mm_cmpeq_epi32(_mm_and_si128(emm2[1], _v4si_pi32_2), _mm_setzero_si128());
((v4sf*)&poly_mask)[0] = _mm_castsi128_ps(emm2[0]);
((v4sf*)&poly_mask)[1] = _mm_castsi128_ps(emm2[1]);
swap_sign_bit = _mm256_castsi256_ps(emmm0);
Try something like this:
__m128i emm2a = _mm_and_si128(_mm_add_epi32(_mm_cvttps_epi32(_mm256_castps256_ps128(y)), _v4si_pi32_1), _v4si_pi32_inv1);
__m128i emm2b = _mm_and_si128(_mm_add_epi32(_mm_cvttps_epi32(_mm256_extractf128_ps(y, 1)), _v4si_pi32_1), _v4si_pi32_inv1);
y = _mm256_insertf128_ps(_mm256_castps128_ps256(_mm_cvtepi32_ps(emm2a)), _mm_cvtepi32_ps(emm2b), 1);
/* get the swap sign flag */
__m128i emm0a = _mm_slli_epi32(_mm_and_si128(emm2a, _v4si_pi32_4), 29);
__m128i emm0b = _mm_slli_epi32(_mm_and_si128(emm2b, _v4si_pi32_4), 29);
swap_sign_bit = _mm256_castsi256_ps(_mm256_insertf128_si256(_mm256_castsi128_si256(emm0a), emm0b, 1));
/* get the polynom selection mask
there is one polynom for 0 <= x <= Pi/4
and another one for Pi/4<x<=Pi/2
Both branches will be computed.
*/
emm2a = _mm_cmpeq_epi32(_mm_and_si128(emm2a, _v4si_pi32_2), _mm_setzero_si128()),
emm2b = _mm_cmpeq_epi32(_mm_and_si128(emm2b, _v4si_pi32_2), _mm_setzero_si128());
poly_mask = _mm256_castsi256_ps(_mm256_insertf128_si256(_mm256_castsi128_si256(emm2a), emm2b, 1));
As mentioned in comments, cast intrinsics are purely compile-time and emit no instructions.
Maybe you could compare your code to the already working AVX extension of Julien Pommier's SSE math functions?
http://software-lisc.fbk.eu/avx_mathfun/
This code works in GCC but not MSVC and only supports floats (float8) but I think you could easily extend it to use doubles (double4) as well. A quick comparison of your sin function shows that they are quite similar except for the SSE2 integer part.

How to efficiently combine comparisons in SSE?

I am trying to convert the following code to SSE/AVX:
float x1, x2, x3;
float a1[], a2[], a3[], b1[], b2[], b3[];
for (i=0; i < N; i++)
{
if (x1 > a1[i] && x2 > a2[i] && x3 > a3[i] && x1 < b1[i] && x2 < b2[i] && x3 < b3[i])
{
// do something with i
}
}
Here N is a small constant, let's say 8. The if(...) statement evaluates to false most of the time.
First attempt:
__m128 x; // x1, x2, x3, 0
__m128 a[N]; // packed a1[i], a2[i], a3[i], 0
__m128 b[N]; // packed b1[i], b2[i], b3[i], 0
for (int i = 0; i < N; i++)
{
__m128 gt_mask = _mm_cmpgt_ps(x, a[i]);
__m128 lt_mask = _mm_cmplt_ps(x, b[i]);
__m128 mask = _mm_and_ps(gt_mask, lt_mask);
if (_mm_movemask_epi8 (_mm_castps_si128(mask)) == 0xfff0)
{
// do something with i
}
}
This works, and is reasonably fast. The question is, is there a more efficient way of doing this? In particular, if there is a register with results from SSE or AVX comparisons on floats (which put 0xffffffff or 0x00000000 in that slot), how can the results of all the comparisons be (for example) and-ed or or-ed together, in general? Is PMOVMSKB (or the corresponding _mm_movemask intrinsic) the standard way to do this?
Also, how can AVX 256-bit registers be used instead of SSE in the code above?
EDIT:
Tested and benchmarked a version using VPTEST (from _mm_test* intrinsic) as suggested below.
__m128 x; // x1, x2, x3, 0
__m128 a[N]; // packed a1[i], a2[i], a3[i], 0
__m128 b[N]; // packed b1[i], b2[i], b3[i], 0
__m128i ref_mask = _mm_set_epi32(0xffff, 0xffff, 0xffff, 0x0000);
for (int i = 0; i < N; i++)
{
__m128 gt_mask = _mm_cmpgt_ps(x, a[i]);
__m128 lt_mask = _mm_cmplt_ps(x, b[i]);
__m128 mask = _mm_and_ps(gt_mask, lt_mask);
if (_mm_testc_si128(_mm_castps_si128(mask), ref_mask))
{
// do stuff with i
}
}
This also works, and is fast. Benchmarking this (Intel i7-2630QM, Windows 7, cygwin 1.7, cygwin gcc 4.5.3 or mingw x86_64 gcc 4.5.3, N=8) shows this to be identical speed to the code above (within less than 0.1%) on 64bit. Either version of the inner loop runs in about 6.8 clocks average on data which is all in cache and for which the comparison returns always false.
Interestingly, on 32bit, the _mm_test version runs about 10% slower. It turns out that the compiler spills the masks after loop unrolling and has to re-read them back; this is probably unnecessary and can be avoided in hand-coded assembly.
Which method to choose? It seems that there is no compelling reason to prefer VPTEST over VMOVMSKPS. Actually, there is a slight reason to prefer VMOVMSKPS, namely it frees up a xmm register which would otherwise be taken up by the mask.
If you're working with floats, you generally want to use MOVMSKPS (and the corresponding AVX instruction VMOVMSKPS) instead of PMOVMSKB.
That aside, yes, this is one standard way of doing this; you can also use PTEST (VPTEST) to directly update the condition flags based on the result of an SSE or AVX AND or ANDNOT.
To address your editted version:
If you're going to directly branch on the result of PTEST, it's faster to use it than to MOVMSKPS to a GP reg, and then do a TEST on that to set the flags for a branch instruction. On AMD CPUs, moving data between vector and integer domains is very slow (5 to 10 cycle latency, depending on the CPU model).
As far as needing an extra register for PTEST, you often don't. You can use the same value as both args, like with the regular non-vector TEST instruction. (Testing foo & foo is the same as testing foo).
In your case, you do need to check that all the vector elements are set. If you reversed the comparison, and then OR the result together (so you're testing !(x1 < a1[i]) || !(x2 < a2[i]) || ...) you'd have vectors you needed to test for all zero, rather than for all ones. But dealing with the low element is still problematic. If you needed to save a register to avoid needing a vector mask for PTEST / VTESTPS, you could right-shift the vector by 4 bytes before doing a PTEST and branching on it being all-zero.
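A minimal sketch of that reversed-comparison idea, combining the same-register PTEST trick with the 4-byte shift to discard the padding element (assuming the question's packing, with the padding in the low element; illustrative only, and NaN handling differs slightly from the original compares):

__m128 not_gt = _mm_cmple_ps(x, a[i]);                 // x <= a[i], per element
__m128 not_lt = _mm_cmpge_ps(x, b[i]);                 // x >= b[i], per element
__m128i fail  = _mm_castps_si128(_mm_or_ps(not_gt, not_lt));
fail = _mm_srli_si128(fail, 4);                        // drop the padding element
if (_mm_testz_si128(fail, fail))                       // PTEST fail,fail: ZF set if all-zero
{
    // all three real elements are inside (a[i], b[i])
}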
AVX introduced VTESTPS, which I guess avoids the possible float -> int bypass delay. If you used any int-domain instructions to generate inputs for a test, though, you might as well use (V)PTEST. (I know you were using intrinsics, but they're a pain to type and look at compared to mnemonics.)
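And on the question's other point about 256-bit registers: a minimal sketch that processes two entries per iteration with VCMPPS / VANDPS / VMOVMSKPS, assuming the same packing as the SSE version (padding in the low element of each entry) and the x / a / b / N names from the question; unaligned loads are used because a[i] is only guaranteed 16-byte alignment:

__m256 x2 = _mm256_broadcast_ps(&x);                     // x duplicated into both 128-bit lanes
for (int i = 0; i < N; i += 2)
{
    __m256 av = _mm256_loadu_ps((const float*)&a[i]);    // entries i and i+1
    __m256 bv = _mm256_loadu_ps((const float*)&b[i]);
    __m256 gt = _mm256_cmp_ps(x2, av, _CMP_GT_OQ);
    __m256 lt = _mm256_cmp_ps(x2, bv, _CMP_LT_OQ);
    int bits  = _mm256_movemask_ps(_mm256_and_ps(gt, lt));  // one bit per float element
    if ((bits & 0x0e) == 0x0e) { /* do something with i   */ }
    if ((bits & 0xe0) == 0xe0) { /* do something with i+1 */ }
}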
