Are there ARM Neon instructions for the round function? - c

I am trying to implement a round function using ARM Neon intrinsics.
The function looks like this:
float roundf(float x) {
return signbit(x) ? ceil(x - 0.5) : floor(x + 0.5);
}
Is there a way to do this using Neon intrinsics? If not, how can I implement this function with Neon intrinsics?
Edit:
After multiplying two floats, I call roundf (on armv7 and armv8).
My compiler is clang.
On armv8 this can be done with vrndaq_f32: https://developer.arm.com/architectures/instruction-sets/intrinsics/#f:#navigationhierarchiessimdisa=[Neon]&q=vrndaq_f32
How can I do this on armv7?
Edit:
My implementation:
// input: float32x4_t arg
float32x4_t vector_zero = vdupq_n_f32(0.f);
float32x4_t neg_half = vdupq_n_f32(-0.5f);
float32x4_t pos_half = vdupq_n_f32(0.5f);
uint32x4_t mask = vcgeq_f32(arg, vector_zero);                            // arg >= 0 ?
uint32x4_t mask_pos = vandq_u32(mask, vreinterpretq_u32_f32(pos_half));   // +0.5 where arg >= 0
uint32x4_t mask_neg = vbicq_u32(vreinterpretq_u32_f32(neg_half), mask);   // -0.5 where arg <  0
arg = vaddq_f32(arg, vreinterpretq_f32_u32(mask_pos));
arg = vaddq_f32(arg, vreinterpretq_f32_u32(mask_neg));
int32x4_t arg_int32 = vcvtq_s32_f32(arg);   // truncate toward zero
arg = vcvtq_f32_s32(arg_int32);
Is there a better way to implement this?

It's important that you define which form of rounding you really want. See Wikipedia for a sense of how many rounding choices there are.
From your code snippet, you are asking for commercial or symmetric rounding, i.e. round half away from zero. For ARMv8 / ARM64, vrndaq_f32 does exactly that.
SSE4's _mm_round_ps and ARMv8 NEON's vrndnq_f32 do banker's rounding, i.e. round half to even.
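For reference, here is a minimal sketch (the wrapper names are mine) showing the two ARMv8 behaviours side by side, assuming an AArch64 / ARMv8 target where these intrinsics are available:
#include <arm_neon.h>

// Round to nearest, ties away from zero (FRINTA) - matches the roundf() in the question.
float32x4_t round_ties_away(float32x4_t v)
{
    return vrndaq_f32(v);   // e.g. { -0.5, 0.5, 1.5, 2.5 } -> { -1, 1, 2, 3 }
}

// Round to nearest, ties to even (FRINTN) - banker's rounding.
float32x4_t round_ties_even(float32x4_t v)
{
    return vrndnq_f32(v);   // e.g. { -0.5, 0.5, 1.5, 2.5 } -> { -0, 0, 2, 2 }
}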

Your solution is VERY expensive, both in cycle counts and register utilization.
Provided -(2^30) <= arg < 2^30, you can do the following:
int32x4_t argi = vcvtq_n_s32_f32(arg, 1);   // Q1 fixed point: arg * 2, truncated toward zero
argi = vsraq_n_s32(argi, argi, 31);         // argi += argi >> 31: subtract 1 for negative values
argi = vrshrq_n_s32(argi, 1);               // rounding shift right by 1
arg = vcvtq_f32_s32(argi);                  // back to float
It doesn't require any register other than arg itself, and it is done in four inexpensive instructions. It works on both aarch32 and aarch64.
Godbolt link
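Putting the two approaches together, a hedged sketch of a single helper (the name is mine) that uses vrndaq_f32 where it is available and falls back to the fixed-point trick otherwise; the fallback assumes -(2^30) <= x < 2^30 as stated above:
#include <arm_neon.h>

static inline float32x4_t vroundq_away_f32(float32x4_t x)
{
#if defined(__aarch64__)
    return vrndaq_f32(x);                    // single instruction on AArch64
#else
    int32x4_t xi = vcvtq_n_s32_f32(x, 1);    // x * 2, truncated toward zero
    xi = vsraq_n_s32(xi, xi, 31);            // subtract 1 for negative inputs
    xi = vrshrq_n_s32(xi, 1);                // rounding shift right by 1
    return vcvtq_f32_s32(xi);                // back to float
#endif
}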

Related

Low pass shape curve with low cycle count

Hi, I want to achieve the above curve in software without using a DSP function. I was hoping to use a fast, low-cycle ARM instruction like multiply-accumulate.
Is there any fast way of doing this in C on an embedded ARM processor?
The curve shown is that of the simplest possible first-order filter, characterised by a 3dB cut-off frequency fc and a 6dB/octave (20dB/decade) roll-off. As an analogue filter it could be implemented as a simple passive RC filter.
In the digital domain such a filter would be implemented by:
y[n] = a0 * x[n] + b1 * y[n-1]
where x are the input samples and y the output samples. Or in code:
void lowPassFilter( const tSample* x, tSample* y, size_t sample_count )
{
    static tSample y_1 = 0 ;
    for( size_t i = 0; i < sample_count; ++i )
    {
        y[i] = a0 * x[i] + b1 * y_1 ;
        y_1 = y[i] ;
    }
}
The filter is characterised by the coefficients:
a0 = 1 - x
b1 = x
where x is a value between 0 and 1 (I'll address the eradication of the implied floating point operations in due course):
x = e^(-2πfc)
where fc is the desired -3dB cut-off frequency expressed as a fraction of the sample rate. So for a sample rate of 32Ksps and a cut-off frequency of 1KHz, fc = 1000/32000 = 0.03125, so:
b1 = x = e^(-2πfc) = 0.821725
a0 = 1 - x = 0.178275
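For illustration, a small sketch (the helper name is mine) of computing those coefficients at run time with standard C math; it assumes float initialisation is acceptable even if the filter itself runs in fixed point:
#include <math.h>

/* fc is the cut-off as a fraction of the sample rate, e.g. 1000.0f/32000.0f for 1KHz at 32Ksps */
void lowPassCoefficients( float fc, float* a0, float* b1 )
{
    *b1 = expf( -2.0f * 3.14159265358979f * fc ) ;  /* 0.821725 for fc = 0.03125 */
    *a0 = 1.0f - *b1 ;                              /* 0.178275 */
}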
Now naïvely plugging those constants into lowPassFilter() will result in the generation of floating-point code, and on an MCU without an FPU that might be prohibitive; even with an FPU it might be sub-optimal. So in this case we might use a fixed-point representation. Since all the real values are less than one and the machine is 32-bit, a UQ0.16 representation would be appropriate, as intermediate multiplication results will not then overflow a 32-bit machine word. This does require the sample width to be 16 bits or less (or scaled accordingly). So using fixed point the code might look like:
typedef uint16_t tSample ;

#define b1 53852                        // 0.821725 * 65535
#define a0 (65535 - b1)                 // 0.178275 * 65535
#define FIXED_MUL( x, y ) (((uint32_t)(x) * (y)) >> 16)

void lowPassFilter( const tSample* x, tSample* y, size_t sample_count )
{
    static tSample y_1 = 0 ;
    for( size_t i = 0; i < sample_count; ++i )
    {
        y[i] = FIXED_MUL( a0, x[i] ) + FIXED_MUL( b1, y_1 ) ;
        y_1 = y[i] ;
    }
}
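To make the UQ0.16 scaling concrete, here is a quick, purely illustrative sanity check of the arithmetic FIXED_MUL performs, using the b1 value above:
#include <stdio.h>
#include <stdint.h>

int main( void )
{
    uint32_t half_q16 = 32768 ;                               /* 0.5 in UQ0.16 */
    uint32_t product  = ((uint32_t)53852 * half_q16) >> 16 ;  /* b1 * 0.5 */
    printf( "%u\n", product ) ;  /* prints 26926, i.e. ~0.411 = 0.821725 * 0.5 */
    return 0 ;
}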
Now that is not a significant amount of processing for most ARM processors at the 32Ksps suggested in this example. Obviously it depends what other demands are on the processor, but on its own this would not be a significant load, even without applying compiler optimisation. As with any optimisation, you should implement it, measure it and improve it if necessary.
As a first stab I'd trust the compiler optimiser to generate code that in most cases will meet requirements, or at least be as good as you might achieve with handwritten assembler. Whether or not it chooses to use a multiply-accumulate instruction is out of your hands, but if it doesn't, the chances are that it is because there is no advantage.
Bear in mind that the ARM Cortex-M4 and M7, for example, include DSP instructions not supported on M0 or M3 parts. The compiler may or may not utilise these, but the simplest way to guarantee that without resorting to assembler would be to use the CMSIS-DSP library. Whether or not that provides greater performance or better fidelity than the above, you would have to test.
Worth noting that the function lowPassFilter() retains its state statically, so it can be called iteratively for "blocks" of samples (from an ADC DMA transfer, for example), so you might have:
int dma_buffer_n = 0 ;
for(;;)
{
    waitEvent( DMA_BUFFER_READY ) ;
    lowPassFilter( dma_buffer[dma_buffer_n], output_buffer, DMA_BLOCK_SIZE ) ;
    dma_buffer_n = dma_buffer_n == 0 ? 1 : 0 ; // Flip buffers
}
The use of DMA double-buffering is likely to be far more important to performance than the filter function implementation. I have worked on a DSP application sampling two channels at 48Ksps on a 72MHz Cortex-M3 with far more complex DSP requirements than this: each channel had a high-pass IIR, an 18-coefficient FIR and a Viterbi decoder. So I really do think that your assumption that this simple filter will not be fast enough is somewhat premature.

Neon equivalent of _mm_madd_epi16 and _mm_maddubs_epi16

I am trying to port code written with SSE to Neon.
I could not find equivalent intrinsics for _mm_maddubs_epi16 and _mm_madd_epi16.
Any insights on these intrinsics for Neon?
You might want to look at the implementations in SIMDe for _mm_madd_epi16 and _mm_maddubs_epi16 (note to future readers: you might want to check the latest version of those files, since the implementations in SIMDe get improved sometimes and it's very unlikely I'll remember to update this answer). These implementations are just copied from there.
If you're on AArch64, for _mm_madd_epi16 you probably want to use a vmull_s16 + vget_low_s16 for the low half and a vmull_high_s16 for the high half, then use vpaddq_s32 to add them together into a 128-bit result. Without AArch64 you'll need two vmull_s16 calls (one with vget_low_s16 and one with vget_high_s16), and since vpaddq_s32 isn't supported you'll need two vpadd_s32 calls with a vcombine_s32:
#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
int32x4_t pl = vmull_s16(vget_low_s16(a_.neon_i16), vget_low_s16(b_.neon_i16));
int32x4_t ph = vmull_high_s16(a_.neon_i16, b_.neon_i16);
r_.neon_i32 = vpaddq_s32(pl, ph);
#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
int32x4_t pl = vmull_s16(vget_low_s16(a_.neon_i16), vget_low_s16(b_.neon_i16));
int32x4_t ph = vmull_s16(vget_high_s16(a_.neon_i16), vget_high_s16(b_.neon_i16));
int32x2_t rl = vpadd_s32(vget_low_s32(pl), vget_high_s32(pl));
int32x2_t rh = vpadd_s32(vget_low_s32(ph), vget_high_s32(ph));
r_.neon_i32 = vcombine_s32(rl, rh);
#endif
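For readers who don't want the SIMDe wrapper types, the same idea as a standalone function (the name is mine) operating on plain NEON types, which should behave like _mm_madd_epi16:
#include <arm_neon.h>

int32x4_t madd_s16(int16x8_t a, int16x8_t b)
{
#if defined(__aarch64__)
    int32x4_t lo = vmull_s16(vget_low_s16(a), vget_low_s16(b));
    int32x4_t hi = vmull_high_s16(a, b);
    return vpaddq_s32(lo, hi);                 /* add adjacent products -> 4 x int32 */
#else
    int32x4_t lo = vmull_s16(vget_low_s16(a), vget_low_s16(b));
    int32x4_t hi = vmull_s16(vget_high_s16(a), vget_high_s16(b));
    return vcombine_s32(vpadd_s32(vget_low_s32(lo), vget_high_s32(lo)),
                        vpadd_s32(vget_low_s32(hi), vget_high_s32(hi)));
#endif
}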
For _mm_maddubs_epi16 it's a little more complicated, but I don't think an AArch64-specific version will do much good:
/* Zero extend a */
int16x8_t a_odd = vreinterpretq_s16_u16(vshrq_n_u16(a_.neon_u16, 8));
int16x8_t a_even = vreinterpretq_s16_u16(vbicq_u16(a_.neon_u16, vdupq_n_u16(0xff00)));
/* Sign extend by shifting left then shifting right. */
int16x8_t b_even = vshrq_n_s16(vshlq_n_s16(b_.neon_i16, 8), 8);
int16x8_t b_odd = vshrq_n_s16(b_.neon_i16, 8);
/* multiply */
int16x8_t prod1 = vmulq_s16(a_even, b_even);
int16x8_t prod2 = vmulq_s16(a_odd, b_odd);
/* saturated add */
r_.neon_i16 = vqaddq_s16(prod1, prod2);

pairwise addition in neon

I want to add the values at indices 0 and 1 of an int64x2_t vector in NEON.
I am not able to find any pairwise-add instruction which provides this functionality.
int64x2_t sum_64_2;
//I am expecting result should be..
//int64_t result = sum_64_2[0] + sum_64_2[1];
Is there any instruction in NEON to do this?
You can write it in two ways. This one explicitly uses the NEON VADD.I64 instruction:
int64x1_t f(int64x2_t v)
{
return vadd_s64 (vget_high_s64 (v), vget_low_s64 (v));
}
and the following one relies on the compiler to correctly select between using the NEON and general integer instruction sets. GCC 4.9 does the right thing in this case, but other compilers may not.
int64x1_t g(int64x2_t v)
{
int64x1_t r = vdup_n_s64(0);   // initialised to avoid reading an indeterminate value
r = vset_lane_s64(vgetq_lane_s64(v, 0) + vgetq_lane_s64(v, 1), r, 0);
return r;
}
When targeting ARM, the code generation is efficient. For AArch64, extra instructions are used, but the compiler could do better.
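On AArch64 there is also an across-vector reduction that returns the sum as a scalar directly; a brief sketch, assuming an AArch64-only build since vaddvq_s64 is not available on 32-bit ARM:
#include <arm_neon.h>

#if defined(__aarch64__)
int64_t h(int64x2_t v)
{
    return vaddvq_s64(v);   /* v[0] + v[1] as a scalar int64_t */
}
#endif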

converting SSE code to AVX - cost of _mm256_and_ps

I'm converting SSE2 sine and cosine functions (from Julien Pommier's sse_mathfun.h; based on the CEPHES sinf function) to use AVX in order to accept 8 float vectors or 4 doubles.
So, Julien's function sin_ps becomes sin_ps8 (for 8 floats) and sin_pd4 for 4 doubles. (The "advanced" editor here fails to accept my code, so please visit http://arstechnica.com/civis/viewtopic.php?f=20&t=1227375 to see it.)
Testing with clang 3.3 under Mac OS X 10.6.8 running on a 2011 Core2 i7 @ 2.7GHz, benchmarking results look like this:
sinf    -> 27.7 million vector evaluations/second over 5.56e+07 iters (standard, scalar sinf() function)
sin_ps  -> 41.0 million vector evaluations/second over 8.22e+07 iters
sin_pd4 -> 40.2 million vector evaluations/second over 8.06e+07 iters
sin_ps8 ->  2.5 million vector evaluations/second over 5.1e+06 iters
The cost of sin_ps8 is downright frightening, and it seems to be due to the use of _mm256_castsi256_ps. In fact, commenting out the line "poly_mask = _mm256_castsi256_ps(emmm2);" results in much more normal performance.
sin_pd4 uses _mm_castsi128_pd, but it appears it is not (just) the mix of SSE and AVX instructions that is biting me in sin_ps8: when I emulate the _mm256_castsi256_ps calls with 2 calls to _mm_castsi128_ps, performance doesn't improve. emm2 and emm0 are pointers to emmm2 and emmm0, both v8si instances and thus (a priori) correctly aligned to 32-byte boundaries.
See sse_mathfun.h and sse_mathfun_test.c for compilable code.
Is there a(n easy) way to avoid the penalty I'm seeing?
Transferring stuff out of registers into memory isn't usually a good idea. You are doing this every time you store into a pointer.
Instead of this:
{ ALIGN32_BEG v4sf *yy ALIGN32_END = (v4sf*) &y;
emm2[0] = _mm_and_si128(_mm_add_epi32( _mm_cvttps_epi32( yy[0] ), _v4si_pi32_1), _v4si_pi32_inv1),
emm2[1] = _mm_and_si128(_mm_add_epi32( _mm_cvttps_epi32( yy[1] ), _v4si_pi32_1), _v4si_pi32_inv1);
yy[0] = _mm_cvtepi32_ps(emm2[0]),
yy[1] = _mm_cvtepi32_ps(emm2[1]);
}
/* get the swap sign flag */
emm0[0] = _mm_slli_epi32(_mm_and_si128(emm2[0], _v4si_pi32_4), 29),
emm0[1] = _mm_slli_epi32(_mm_and_si128(emm2[1], _v4si_pi32_4), 29);
/* get the polynom selection mask
there is one polynom for 0 <= x <= Pi/4
and another one for Pi/4<x<=Pi/2
Both branches will be computed.
*/
emm2[0] = _mm_cmpeq_epi32(_mm_and_si128(emm2[0], _v4si_pi32_2), _mm_setzero_si128()),
emm2[1] = _mm_cmpeq_epi32(_mm_and_si128(emm2[1], _v4si_pi32_2), _mm_setzero_si128());
((v4sf*)&poly_mask)[0] = _mm_castsi128_ps(emm2[0]);
((v4sf*)&poly_mask)[1] = _mm_castsi128_ps(emm2[1]);
swap_sign_bit = _mm256_castsi256_ps(emmm0);
Try something like this:
__m128i emm2a = _mm_and_si128(_mm_add_epi32(_mm_cvttps_epi32(_mm256_castps256_ps128(y)), _v4si_pi32_1), _v4si_pi32_inv1);
__m128i emm2b = _mm_and_si128(_mm_add_epi32(_mm_cvttps_epi32(_mm256_extractf128_ps(y, 1)), _v4si_pi32_1), _v4si_pi32_inv1);
y = _mm256_insertf128_ps(_mm256_castps128_ps256(_mm_cvtepi32_ps(emm2a)), _mm_cvtepi32_ps(emm2b), 1);
/* get the swap sign flag */
__m128i emm0a = _mm_slli_epi32(_mm_and_si128(emm2a, _v4si_pi32_4), 29);
__m128i emm0b = _mm_slli_epi32(_mm_and_si128(emm2b, _v4si_pi32_4), 29);
swap_sign_bit = _mm256_castsi256_ps(_mm256_insertf128_si256(_mm256_castsi128_si256(emm0a), emm0b, 1));
/* get the polynom selection mask
there is one polynom for 0 <= x <= Pi/4
and another one for Pi/4<x<=Pi/2
Both branches will be computed.
*/
emm2a = _mm_cmpeq_epi32(_mm_and_si128(emm2a, _v4si_pi32_2), _mm_setzero_si128()),
emm2b = _mm_cmpeq_epi32(_mm_and_si128(emm2b, _v4si_pi32_2), _mm_setzero_si128());
poly_mask = _mm256_castsi256_ps(_mm256_insertf128_si256(_mm256_castsi128_si256(emm2a), emm2b, 1));
As mentioned in comments, cast intrinsics are purely compile-time and emit no instructions.
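Here is the same extract/insert pattern boiled down to a self-contained helper (the name and the +1 operation are just for demonstration), showing how 128-bit integer work can be done on an AVX float vector without any trips through memory:
#include <immintrin.h>

static inline __m256 add_one_to_truncated(__m256 y)
{
    __m128i lo = _mm_add_epi32(_mm_cvttps_epi32(_mm256_castps256_ps128(y)),
                               _mm_set1_epi32(1));
    __m128i hi = _mm_add_epi32(_mm_cvttps_epi32(_mm256_extractf128_ps(y, 1)),
                               _mm_set1_epi32(1));
    return _mm256_insertf128_ps(_mm256_castps128_ps256(_mm_cvtepi32_ps(lo)),
                                _mm_cvtepi32_ps(hi), 1);
}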
Maybe you could compare your code to the already working AVX extension of Julien Pommier SSE math functions?
http://software-lisc.fbk.eu/avx_mathfun/
This code works in GCC but not MSVC and only supports floats (float8), but I think you could easily extend it to use doubles (double4) as well. A quick comparison with your sin function shows that they are quite similar except for the SSE2 integer part.

128-bit rotation using ARM Neon intrinsics

I'm trying to optimize my code using Neon intrinsics. I need a 24-bit rotation over a 128-bit array (eight uint16_t values).
Here is my C code:
uint16_t rotated[8];
uint16_t temp[8];
uint16_t j;
for(j = 0; j < 8; j++)
{
    //Rotation <<< 24 over 128 bits: (x << shift) | (x >> (16 - shift))
    rotated[j] = ((temp[(j+1) % 8] << 8) & 0xffff) | ((temp[(j+2) % 8] >> 8) & 0x00ff);
}
I've checked the GCC documentation on Neon intrinsics and it doesn't have an instruction for vector rotations. Moreover, I've tried to do this using vshlq_n_u16(temp, 8), but all the bits shifted outside each uint16_t word are lost.
How can I achieve this using Neon intrinsics? By the way, is there better documentation for the GCC Neon intrinsics?
After some reading on the Arm Community Blogs, I've found this:
VEXT: Extract
VEXT extracts a new vector of bytes from a pair of existing vectors. The bytes in the new vector are from the top of the first operand, and the bottom of the second operand. This allows you to produce a new vector containing elements that straddle a pair of existing vectors. VEXT can be used to implement a moving window on data from two vectors, useful in FIR filters. For permutation, it can also be used to simulate a byte-wise rotate operation, when using the same vector for both input operands.
The following Neon GCC Intrinsic does the same as the assembly provided in the picture :
uint16x8_t vextq_u16 (uint16x8_t, uint16x8_t, const int)
So the 24-bit rotation over the full 128-bit vector (not over each element) could be done by the following:
uint16x8_t input;
uint16x8_t t0;
uint16x8_t t1;
uint16x8_t rotated;
t0 = vextq_u16(input, input, 1);
t0 = vshlq_n_u16(t0, 8);
t1 = vextq_u16(input, input, 2);
t1 = vshrq_n_u16(t1, 8);
rotated = vorrq_u16(t0, t1);
Use vext.8 to concat a vector with itself and give you the 16-byte window that you want (in this case offset by 3 bytes).
Doing this with intrinsics requires casting to keep the compiler happy, but it's still a single instruction:
#include <arm_neon.h>
uint16x8_t byterotate3(uint16x8_t input) {
uint8x16_t tmp = vreinterpretq_u8_u16(input);
uint8x16_t rotated = vextq_u8(tmp, tmp, 16-3);
return vreinterpretq_u16_u8(rotated);
}
g++5.4 -O3 -march=armv7-a -mfloat-abi=hard -mfpu=neon (on Godbolt) compiles it to this:
byterotate3(__simd128_uint16_t):
vext.8 q0, q0, q0, #13
bx lr
A count of 16-3 means we left-rotate by 3 bytes. (It means we take 13 bytes from the left vector and 3 bytes from the right vector, so it's also a right-rotate by 13).
Related: x86 also has an instruction that takes a sliding window into the concatenation of two registers: palignr (added in SSSE3).
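For comparison, a hedged sketch of the SSSE3 equivalent (the function name is mine), using _mm_alignr_epi8 with the vector concatenated with itself; it should match the vextq_u8 version above:
#include <tmmintrin.h>

__m128i byterotate3_sse(__m128i input)
{
    /* byte i of the result is byte (i+13) mod 16 of the input:
       a left-rotate of the whole 128-bit register by 3 bytes */
    return _mm_alignr_epi8(input, input, 13);
}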
Maybe I'm missing something about NEON, but I don't understand why the OP's self-answer is using vext.16 (vextq_u16), which has 16-bit granularity. It's not even a different instruction, just an alias for vext.8 which makes it impossible to use an odd-numbered count, requiring extra instructions. The manual for vext.8 says:
VEXT pseudo-instruction
You can specify a datatype of 16, 32, or 64 instead of 8. In this
case, #imm refers to halfwords, words, or doublewords instead of
referring to bytes, and the permitted ranges are correspondingly
reduced.
I'm not 100% sure but I don't think NEON has rotate instructions.
You can compose the rotation operation you require with a left shift, a right shift and an OR, e.g.:
uint8_t ror(uint8_t in, int rotation)
{
return (in >> rotation) | (in << (8-rotation));
}
Just do the same with the Neon intrinsics for left shift, right shift and OR. Note that the _n_ shift intrinsics require the shift count to be a compile-time constant:
uint16x8_t temp;
#define ROT 3   // per-lane rotate count; must be a compile-time constant
uint16x8_t rotated = vorrq_u16( vshlq_n_u16(temp, ROT), vshrq_n_u16(temp, 16 - ROT) );
See http://en.wikipedia.org/wiki/Circular_shift "Implementing circular shifts."
This will rotate the values inside the lanes. If you want to rotate the lanes themselves use VEXT as described in the other answer.
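If the per-lane rotate count is only known at run time, the _n_ intrinsics above can't be used because they require immediate counts; one possible sketch (the function name is mine) relies on vshlq_u16 taking a per-lane signed shift count, where negative counts shift right:
#include <arm_neon.h>

uint16x8_t rotate_lanes_u16(uint16x8_t x, int count)
{
    int16_t c = (int16_t)(count & 15);
    int16x8_t left  = vdupq_n_s16(c);        /* shift left by c                 */
    int16x8_t right = vdupq_n_s16(c - 16);   /* negative: shift right by 16 - c */
    return vorrq_u16(vshlq_u16(x, left), vshlq_u16(x, right));
}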
