How to combine two __m128 values to __m256? - c

I would like to combine two __m128 values to one __m256.
Something like this:
__m128 a = _mm_set_ps(1, 2, 3, 4);
__m128 b = _mm_set_ps(5, 6, 7, 8);
to something like:
__m256 c = { 1, 2, 3, 4, 5, 6, 7, 8 };
are there any intrinsics that I can use to do this?

This should do what you want:
__m128 a = _mm_set_ps(1,2,3,4);
__m128 b = _mm_set_ps(5,6,7,8);
__m256 c = _mm256_castps128_ps256(a);
c = _mm256_insertf128_ps(c,b,1);
If the order is reversed from what you want, then just switch a and b.
The intrinsic of interest is _mm256_insertf128_ps which will let you insert a 128-bit register into either lower or upper half of a 256-bit AVX register:
http://software.intel.com/sites/products/documentation/studio/composer/en-us/2011/compiler_c/intref_cls/common/intref_avx_insertf128_ps.htm
The complete family of them is here:
_mm256_insertf128_pd()
_mm256_insertf128_ps()
_mm256_insertf128_si256()
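For illustration, the same cast-then-insert pattern with the integer variant would look roughly like this (a sketch; the helper name combine_si128 is made up):
#include <immintrin.h>

/* Sketch: build a __m256i with a in the low 128 bits and b in the high 128 bits. */
static inline __m256i combine_si128(__m128i a, __m128i b)
{
    __m256i c = _mm256_castsi128_si256(a);    /* low half = a, upper half undefined */
    return _mm256_insertf128_si256(c, b, 1);  /* overwrite the upper half with b */
}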

Intel documents __m256 _mm256_set_m128(__m128 hi, __m128 lo) and _mm256_setr_m128(lo, hi) as intrinsics for the vinsertf128 instruction, which is what you want (footnote 1). (Of course there are also __m256d and __m256i versions, which use the same instruction. The __m256i version may use vinserti128 if AVX2 is available, otherwise it'll use f128 as well.)
These days, those intrinsics are supported by current versions of all 4 major x86 compilers (gcc, clang, MSVC, and ICC). But not by older versions; like some other helper intrinsics that Intel documents, widespread implementation has been slow. (Often GCC or clang are the last hold-out to not have something you wish you could use portably.)
Use it if you don't need portability to old GCC versions: it's the most readable way to express what you want, following the well known _mm_set and _mm_setr patterns.
Performance-wise, it's of course just as efficient as manual cast + vinsertf128 intrinsics (Mysticial's answer), and for gcc at least that's literally how the internal .h actually implements _mm256_set_m128.
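If you do have to support older GCC, a thin fallback is easy to write; the name my_mm256_set_m128 below is a placeholder, chosen to avoid clashing with the real intrinsic:
/* Minimal sketch of a fallback for compilers missing _mm256_set_m128. */
static inline __m256 my_mm256_set_m128(__m128 hi, __m128 lo)
{
    return _mm256_insertf128_ps(_mm256_castps128_ps256(lo), hi, 1);
}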
Compiler version support for _mm256_set_m128 / _mm256_setr_m128:
clang: 3.6 and newer. (Mainline, IDK about Apple)
GCC: 8.x and newer, not present as recently as GCC7!
ICC: since at least ICC13, the earliest on Godbolt.
MSVC: since at least 19.14, and 19.10 (WINE), the earliest on Godbolt.
https://godbolt.org/z/1na1qr has test cases for all 4 compilers.
__m256 combine_testcase(__m128 hi, __m128 lo) {
    return _mm256_set_m128(hi, lo);
}
They all compile this function to one vinsertf128, except MSVC where even the latest version wastes a vmovups xmm2, xmm1 copying a register. (I used -O2 -Gv -arch:AVX to use the vectorcall convention so args would be in registers to make an efficient non-inlined function definition possible for MSVC.) Presumably MSVC would be ok inlining into a larger function if it could write the result to a 3rd register, instead of the calling convention forcing it to read xmm0 and write ymm0.
Footnote 1:
vinsertf128 is very efficient on Zen1, and as efficient as vperm2f128 on other CPUs with 256-bit-wide shuffle units. It can also take the high half from memory in case the compiler spilled it or is folding a _mm_loadu_ps into it, instead of needing to separately do a 128-bit load into a register; vperm2f128's memory operand would be a 256-bit load which you don't want.
https://uops.info/ / https://agner.org/optimize/

Even this one will work:
__m128 a = _mm_set_ps(1,2,3,4);
__m128 b = _mm_set_ps(5,6,7,8);
__m256 c = _mm256_insertf128_ps(c,a,0);
c = _mm256_insertf128_ps(c,b,1);
You will get a warning because c is not initialized, but you can ignore it; and if you're looking for performance, this solution will use fewer clock cycles than the other one.
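A variant of the same idea (not from the original answer) avoids the warning by starting from _mm256_undefined_ps(), which tells the compiler explicitly that the initial contents are don't-care:
__m256 c = _mm256_undefined_ps();   // contents explicitly unspecified, so no warning
c = _mm256_insertf128_ps(c, a, 0);
c = _mm256_insertf128_ps(c, b, 1);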

You can also use a permute intrinsic:
__m128 a = _mm_set_ps(1,2,3,4);
__m128 b = _mm_set_ps(5,6,7,8);
__m256 c = _mm256_permute2f128_ps(_mm256_castps128_ps256(a), _mm256_castps128_ps256(b), 0x20);
I don't know which way is faster.

I believe this is the simplest:
#define _mm256_set_m128(/* __m128 */ hi, /* __m128 */ lo) \
    _mm256_insertf128_ps(_mm256_castps128_ps256(lo), (hi), 0x1)
__m256 c = _mm256_set_m128(a, b);
Do note that _mm256_set_m128 is already defined in MSVC 2019 if you #include "immintrin.h".
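If you also need the double and integer flavours, the analogous macros follow the same pattern (a sketch; the my_ prefix is mine, to avoid colliding with the real intrinsics on newer compilers):
#define my_mm256_set_m128d(/* __m128d */ hi, /* __m128d */ lo) \
    _mm256_insertf128_pd(_mm256_castpd128_pd256(lo), (hi), 0x1)
#define my_mm256_set_m128i(/* __m128i */ hi, /* __m128i */ lo) \
    _mm256_insertf128_si256(_mm256_castsi128_si256(lo), (hi), 0x1)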

Related

GNU C inline asm input constraint for AVX512 mask registers (k1...k7)?

AVX512 introduced an opmask feature for its arithmetic instructions. A simple example: godbolt.org.
#include <immintrin.h>
__m512i add(__m512i a, __m512i b) {
    __m512i sum;
    asm(
        "mov ebx, 0xAAAAAAAA; \n\t"
        "kmovw k1, ebx; \n\t"
        "vpaddd %[SUM] %{k1%}%{z%}, %[A], %[B]; # conditional add "
        : [SUM] "=v"(sum)
        : [A] "v" (a),
          [B] "v" (b)
        : "ebx", "k1" // clobbers
    );
    return sum;
}
-march=skylake-avx512 -masm=intel -O3
mov ebx,0xaaaaaaaa
kmovw k1,ebx
vpaddd zmm0{k1}{z},zmm0,zmm1
The problem is that k1 has to be specified.
Is there an input constraint like "r" for integers except that it picks a k register instead of a general-purpose register?
__mmask16 is literally a typedef for unsigned short (and other mask types for other plain integer types), so we just need a constraint for passing it in a k register.
We have to go digging in the gcc sources config/i386/constraints.md to find it:
The constraint for any mask register is "k". Or use "Yk" for k1..k7 (which can be used as a predicate, unlike k0). You'd use an "=k" operand as the destination for a compare-into-mask, for example.
Obviously you can use "=Yk"(tmp) with a __mmask16 tmp to get the compiler to do register allocation for you, instead of just declaring clobbers on whichever "k" registers you decide to use.
Prefer intrinsics like _mm512_maskz_add_epi32
First of all, https://gcc.gnu.org/wiki/DontUseInlineAsm if you can avoid it. Understanding asm is great, but use that to read compiler output and/or figure out what would be optimal, then write intrinsics that can compile the way you want. Performance tuning info like https://agner.org/optimize/ and https://uops.info/ list things by asm mnemonic, and they're shorter / easier to remember than intrinsics, but you can search by mnemonic to find intrinsics on https://software.intel.com/sites/landingpage/IntrinsicsGuide/
Intrinsics will also let the compiler fold loads into memory source operands for other instructions; with AVX512 those can even be broadcast loads! Your inline asm forces the compiler to use a separate load instruction. Even a "vm" input won't let the compiler pick a broadcast-load as the memory source, because it wouldn't know the broadcast element width of the instruction(s) you were using it with.
Use _mm512_mask_add_epi32 or _mm512_maskz_add_epi32 especially if you're already using __m512i types from <immintrin.h>.
Also, your asm has a bug: you're using {k1} merge-masking not {k1}{z} zero-masking, but you used uninitialized __m512i sum; with an output-only "=v" constraint as the merge destination! As a stand-alone function, it happens to merge into a because the calling convention has ZMM0 = first input = return value register. But when inlining into other functions, you definitely can't assume that sum will pick the same register as a. Your best bet is to use a read/write operand with "+v"(a) and use it as the destination and first source.
Merge-masking only makes sense with a "+v" read/write operand. (Or in an asm statement with multiple instructions where you've already written an output once, and want to merge another result into it.)
Intrinsics would stop you from making this mistake; the merge-masking version has an extra input for the merge-target. (The asm destination operand).
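For comparison, a sketch of the same zero-masked add written with the intrinsic instead of inline asm (function name mine):
#include <immintrin.h>

/* Zero-masking: elements whose mask bit is 0 are set to 0, the rest get a+b. */
__m512i add_zmask_intrin(__m512i a, __m512i b)
{
    return _mm512_maskz_add_epi32((__mmask16)0xAAAA, a, b);
}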
Example using "Yk"
// works with -march=skylake-avx512 or -march=knl
// or just -mavx512f but don't do that.
// also needed: -masm=intel
#include <immintrin.h>
__m512i add_zmask(__m512i a, __m512i b) {
    __m512i sum;
    asm(
        "vpaddd %[SUM] %{%[mask]%}%{z%}, %[A], %[B]; # conditional add "
        : [SUM] "=v"(sum)
        : [A] "v" (a),
          [B] "v" (b),
          [mask] "Yk" ((__mmask16)0xAAAA)
        // no clobbers needed, unlike your question which I fixed with an edit
    );
    return sum;
}
Note that all the { and } are escaped with % (https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html#Special-format-strings), so they're not parsed as dialect-alternatives {AT&T | Intel-syntax}.
This compiles with gcc as early as 4.9, but don't actually do that because it doesn't understand -march=skylake-avx512, or even have tuning settings for Skylake or KNL. Use a more recent GCC that knows about your CPU for best results.
Godbolt compiler explorer:
# gcc8.3 -O3 -march=skylake-avx512 or -march=knl (and -masm=intel)
add(long long __vector, long long __vector):
mov eax, -21846
kmovw k1, eax # compiler-generated
# inline asm starts
vpaddd zmm0 {k1}{z}, zmm0, zmm1; # conditional add
# inline asm ends
ret
-mavx512bw (implied by -march=skylake-avx512 but not knl) is required for "Yk" to work on an int. If you're compiling with -march=knl, integer literals need a cast to __mmask16 or __mmask8, because unsigned int = __mmask32 isn't available for masks.
[mask] "Yk" (0xAAAA) requires AVX512BW even though the constant does fit in 16 bits, just because bare integer literals always have type int. (vpaddd zmm has 16 elements per vector, so I shortened your constant to 16-bit.) With AVX512BW, you can pass wider constants or leave out the cast for narrow ones.
gcc6 and later support -march=skylake-avx512. Use that to set tuning as well as enabling everything. Preferably gcc8 or at least gcc7. Newer compilers generate less clunky code with new ISA extensions like AVX512 if you're ever using it outside of inline asm.
gcc5 supports -mavx512f -mavx512bw but doesn't know about Skylake.
gcc4.9 doesn't support -mavx512bw.
"Yk" is unfortunately not yet documented in https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html.
I knew where to look in the GCC source thanks to Ross's answer on In GNU C inline asm, what are the size-override modifiers for xmm/ymm/zmm for a single operand?
While it is undocumented, looking here we see:
(define_register_constraint "Yk" "TARGET_AVX512F ? MASK_REGS : NO_REGS"
  "#internal Any mask register that can be used as predicate, i.e. k1-k7.")
Editing your godbolt to this:
asm(
"vpaddd %[SUM] %{%[k]}, %[A], %[B]"
: [SUM] "=v"(sum)
: [A] "v" (a), [B] "v" (b), [k] "Yk" (0xaaaaaaaa) );
seems to produce the correct output.
That said, I usually try to discourage people from using inline asm (and undocumented features). Can you use _mm512_mask_add_epi32?
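For reference, a sketch of what that looks like (mine, not part of the original answer); the merge-masking intrinsic takes an explicit merge source, which is exactly the operand the inline asm in the question was missing:
/* Merge-masking: elements whose mask bit is 0 keep the value from src. */
__m512i add_merge_intrin(__m512i src, __m512i a, __m512i b)
{
    return _mm512_mask_add_epi32(src, (__mmask16)0xAAAA, a, b);
}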

How to merge a scalar into a vector without the compiler wasting an instruction zeroing upper elements? Design limitation in Intel's intrinsics?

I don't have a particular use-case in mind; I'm asking if this is really a design flaw / limitation in Intel's intrinsics or if I'm just missing something.
If you want to combine a scalar float with an existing vector, there doesn't seem to be a way to do it without high-element-zeroing or broadcasting the scalar into a vector, using Intel intrinsics. I haven't investigated GNU C native vector extensions and the associated builtins.
This wouldn't be too bad if the extra intrinsic optimized away, but it doesn't with gcc (5.4 or 6.2). There's also no nice way to use pmovzx or insertps as loads, for the related reason that their intrinsics only take vector args. (And gcc doesn't fold a scalar->vector load into the asm instruction.)
__m128 replace_lower_two_elements(__m128 v, float x) {
    __m128 xv = _mm_set_ss(x);       // WANTED: something else for this step, some compilers actually compile this to a separate insn
    return _mm_shuffle_ps(v, xv, 0); // lower 2 elements are both x, and the garbage is gone
}
gcc 5.3 -march=nehalem -O3 output, to enable SSE4.1 and tune for that Intel CPU: (It's even worse without SSE4.1; multiple instructions to zero the upper elements).
insertps xmm1, xmm1, 0xe # pointless zeroing of upper elements. shufps only reads the low element of xmm1
shufps xmm0, xmm1, 0 # The function *should* just compile to this.
ret
TL:DR: the rest of this question is just asking if you can actually do this efficiently, and if not why not.
clang's shuffle-optimizer gets this right, and doesn't waste instructions on zeroing high elements (_mm_set_ss(x)), or duplicating the scalar into them (_mm_set1_ps(x)). Instead of writing something the compiler has to optimize away, shouldn't there be a way to write it "efficiently" in C in the first place? Even very recent gcc doesn't optimize it away, so this is a real (but minor) problem.
This would be possible if there was a scalar->128b equivalent of __m256 _mm256_castps128_ps256 (__m128 a). i.e. produce a __m128 with undefined garbage in upper elements, and the float in the low element, compiling to zero asm instructions if the scalar float/double was already in an xmm register.
None of the following intrinsics exist, but they should.
a scalar->__m128 equivalent of _mm256_castps128_ps256 as described above. The most general solution for the scalar-already-in-register case.
__m128 _mm_move_ss_scalar (__m128 a, float s): replace low element of vector a with scalar s. This isn't actually necessary if there's a general-purpose scalar->__m128 (previous bullet point). (The reg-reg form of movss merges, unlike the load form which zeros, and unlike movd which zeros upper elements in both cases. To copy a register holding a scalar float without false dependencies, use movaps).
__m128i _mm_loadzxbd (const uint8_t *four_bytes) and other sizes of PMOVZX / PMOVSX: AFAICT, there's no good safe way to use the PMOVZX intrinsics as a load, because the inconvenient safe way doesn't optimize away with gcc.
__m128 _mm_insertload_ps (__m128 a, float *s, const int imm8). INSERTPS behaves differently as a load: the upper 2 bits of the imm8 are ignored, and it always takes the scalar at the effective address (instead of an element from a vector in memory). This lets it work with addresses that aren't 16B-aligned, and work even without faulting if the float is right before an unmapped page.
Like with PMOVZX, gcc fails to fold an upper-element-zeroing _mm_load_ss() into a memory operand for INSERTPS. (Note that if the upper 2 bits of the imm8 aren't both zero, then _mm_insert_ps(xmm0, _mm_load_ss(), imm8) can compile to insertps xmm0,xmm0,foo, with a different imm8 that zeros elements in vec as-if the src element was actually a zero produced by MOVSS from memory. Clang actually uses XORPS/BLENDPS in that case)
Are there any viable workarounds to emulate any of those that are both safe (don't break at -O0 by e.g. loading 16B that might touch the next page and segfault), and efficient (no wasted instructions at -O3 with current gcc and clang at least, preferably also other major compilers)? Preferably also in a readable way, but if necessary it could be put behind an inline wrapper function like __m128 float_to_vec(float a){ something(a); }.
Is there any good reason for Intel not to introduce intrinsics like that? They could have added a float->__m128 with undefined upper elements at the same time as adding _mm256_castps128_ps256. Is this a matter of compiler internals making it hard to implement? Perhaps specifically ICC internals?
The major calling conventions on x86-64 (SysV or MS __vectorcall) take the first FP arg in xmm0 and return scalar FP results in xmm0, with upper elements undefined. (See the x86 tag wiki for ABI docs). This means it's not uncommon for the compiler to have a scalar float/double in a register with unknown upper elements. This will be rare in a vectorized inner loop, so I think avoiding these useless instructions will mostly just save a bit of code size.
The pmovzx case is more serious: that is something you might use in an inner loop (e.g. for a LUT of VPERMD shuffle masks, saving a factor of 4 in cache footprint vs. storing each index padded to 32 bits in memory).
The pmovzx-as-a-load issue has been bothering me for a while now, and the original version of this question got me thinking about the related issue of using a scalar float in an xmm register. There are probably more use-cases for pmovzx as a load than for scalar->__m128.
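For the pmovzx bullet above, the commonly suggested workaround is sketched below (an illustration, not part of the original question): load exactly 4 bytes through memcpy, then zero-extend. Whether the compiler folds the movd load into a pmovzxbd memory operand varies by compiler and version, which is the complaint above.
#include <stdint.h>
#include <string.h>
#include <immintrin.h>

/* Loads exactly 4 bytes (so it can't read past the end of the data),
   then zero-extends each byte to a 32-bit element (PMOVZXBD). */
static inline __m128i loadzx_bd(const uint8_t *four_bytes)
{
    uint32_t tmp;
    memcpy(&tmp, four_bytes, sizeof(tmp));   // strict-aliasing-safe 4-byte load
    return _mm_cvtepu8_epi32(_mm_cvtsi32_si128((int)tmp));
}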
It's doable with GNU C inline asm, but this is ugly and defeats many optimizations, including constant-propagation (https://gcc.gnu.org/wiki/DontUseInlineAsm). This will not be the accepted answer. I'm adding this as an answer instead of part of the question so the question stays short rather than huge.
// don't use this: defeating optimizations is probably worse than an extra instruction
#ifdef __GNUC__
__m128 float_to_vec_inlineasm(float x) {
    __m128 retval;
    asm ("" : "=x"(retval) : "0"(x)); // matching constraint: provide x in the same xmm reg as retval
    return retval;
}
#endif
This does compile to a single ret, as desired, and will inline to let you shufps a scalar into a vector:
gcc5.3
float_to_vec_and_shuffle_asm(float __vector(4), float):
shufps xmm0, xmm1, 0 # tmp93, xv,
ret
See this code on the Godbolt compiler explorer.
This is obviously trivial in pure assembly language, where you don't have to fight with a compiler to get it not to emit instructions you don't want or need.
I haven't found any real way to write a __m128 float_to_vec(float a){ something(a); } that compiles to just a ret instruction. An attempt for double using _mm_undefined_pd() and _mm_move_sd() actually makes worse code with gcc (see the Godbolt link above). None of the existing float->__m128 intrinsics help.
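For concreteness, a reconstruction of the kind of attempt described (my sketch; as noted, gcc compiles it to worse code rather than optimizing it away):
/* Attempted double version: declare the upper element as don't-care and merge
   the scalar into the low element. gcc does not reduce this to zero instructions. */
__m128d double_to_vec_attempt(double x)
{
    return _mm_move_sd(_mm_undefined_pd(), _mm_set_sd(x));
}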
Off-topic: actual _mm_set_ss() code-gen strategies: When you do write code that has to zero upper elements, compilers pick from an interesting range of strategies. Some good, some weird. The strategies also differ between double and float on the same compiler (gcc or clang), as you can see on the Godbolt link above.
One example: __m128 float_to_vec(float x){ return _mm_set_ss(x); } compiles to:
# gcc5.3 -march=core2
movd eax, xmm0 # movd xmm0,xmm0 would work; IDK why gcc doesn't do that
movd xmm0, eax
ret
# gcc5.3 -march=nehalem
insertps xmm0, xmm0, 0xe
ret
# clang3.8 -march=nehalem
xorps xmm1, xmm1
blendps xmm0, xmm1, 14 # xmm0 = xmm0[0],xmm1[1,2,3]
ret

Intrinsics for 128 multiplication and division

In x86_64 I know that the mul and div opcodes support 128-bit integers by putting the lower 64 bits in the rax register and the upper 64 bits in the rdx register. I was looking for some sort of intrinsic to do this in the Intel intrinsics guide and I could not find one. I am writing a big-number library where the word size is 64 bits. Right now I am doing division by a single word like this:
int ubi_div_i64(ubigint_t* a, ubi_i64_t b, ubi_i64_t* rem)
{
    if(b == 0)
        return UBI_MATH_ERR;
    ubi_i64_t r = 0;
    for(size_t i = a->used; i-- > 0;)
    {
        ubi_i64_t out;
        __asm__("\t"
                "div %[d] \n\t"
                : "=a"(out), "=d"(r)
                : "a"(a->data[i]), "d"(r), [d]"r"(b)
                : "cc");
        a->data[i] = out;
        //ubi_i128_t top = (r << 64) + a->data[i];
        //r = top % b;
        //a->data[i] = top / b;
    }
    if(rem)
        *rem = r;
    return ubi_strip_leading_zeros(a);
}
It would be nice if I could use something in the x86intrinsics.h header instead of inline asm.
gcc has __int128 and unsigned __int128 types (also exposed as the __int128_t / __uint128_t typedefs).
Arithmetic with them should use the right assembly instructions where they exist; I've used them in the past to get the upper 64 bits of a product, although I've never used them for division. If it's not using the right ones, submit a bug report / feature request as appropriate.
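For example, getting the full 128-bit product of two 64-bit limbs looks like this (a sketch using the GCC/Clang extension; function name mine):
#include <stdint.h>

/* Full 64x64 -> 128-bit multiply; the compiler emits a single mul/mulx
   and the high half falls out of the shift. */
static inline uint64_t mul_u64(uint64_t a, uint64_t b, uint64_t *hi)
{
    unsigned __int128 p = (unsigned __int128)a * b;
    *hi = (uint64_t)(p >> 64);
    return (uint64_t)p;   /* low 64 bits */
}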
Last I looked into it, the intrinsics were in a state of flux. The main reason for the intrinsics in this case appears to be that MSVC in 64-bit mode does not allow inline assembly.
With MSVC (and I think ICC) you can use _umul128 for mul and _mulx_u64 for mulx. These don't work in GCC, at least not GCC 4.9 (_umul128 is much older than GCC 4.9). I don't know if GCC plans to support these, since you can get mul and mulx indirectly through __int128 (depending on your compile options) or directly through inline assembly.
__int128 works fine until you need a larger type and a 128-bit carry. Then you need adc, adcx, or adox, and these are even more of a problem with intrinsics. Intel's documentation disagrees with MSVC, and the compilers don't seem to produce adox yet with these intrinsics. See this question: _addcarry_u64 and _addcarryx_u64 with MSVC and ICC.
Inline assembly is probably the best solution with GCC (and probably even ICC).

How to clear the upper 128 bits of __m256 value?

How can I clear the upper 128 bits of m2:
__m256i m2 = _mm256_set1_epi32(2);
__m128i m1 = _mm_set1_epi32(1);
m2 = _mm256_castsi128_si256(_mm256_castsi256_si128(m2));
m2 = _mm256_castsi128_si256(m1);
Neither works -- Intel’s documentation for the _mm256_castsi128_si256 intrinsic says that “the upper bits of the resulting vector are undefined”.
At the same time I can easily do it in assembly:
VMOVDQA xmm2, xmm2 //zeros upper ymm2
VMOVDQA xmm2, xmm1
Of course I'd not like to use "and" or _mm256_insertf128_si256() and such.
A new intrinsic function has been added for solving this problem:
m2 = _mm256_zextsi128_si256(m1);
https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm256_zextsi128_si256&expand=6177,6177
This function doesn't produce any code if the upper half is already known to be zero, it just makes sure the upper half is not treated as undefined.
Update: there's now a __m256i _mm256_zextsi128_si256(__m128i) intrinsic; see Agner Fog's answer. The rest of the answer below is only relevant for old compilers that don't support this intrinsic, and where there's no efficient, portable solution.
Unfortunately, the ideal solution will depend on which compiler you are using, and on some of them, there is no ideal solution.
There are several basic ways that we could write this:
Version A:
ymm = _mm256_set_m128i(_mm_setzero_si128(), _mm256_castsi256_si128(ymm));
Version B:
ymm = _mm256_blend_epi32(_mm256_setzero_si256(),
                         ymm,
                         _MM_SHUFFLE(0, 0, 3, 3));
Version C:
ymm = _mm256_inserti128_si256(_mm256_setzero_si256(),
                              _mm256_castsi256_si128(ymm),
                              0);
Each of these does precisely what we want, clearing the upper 128 bits of a 256-bit YMM register, so any of them could safely be used. But which is optimal? Well, that depends on which compiler you are using...
GCC:
Version A: Not supported at all because GCC lacks the _mm256_set_m128i intrinsic. (Could be simulated, of course, but that would be done using one of the forms in "B" or "C".)
Version B: Compiled to inefficient code. Idiom is not recognized and intrinsics are translated very literally to machine-code instructions. A temporary YMM register is zeroed using VPXOR, and then that is blended with the input YMM register using VPBLENDD.
Version C: Ideal. Although the code looks kind of scary and inefficient, all versions of GCC that support AVX2 code generation recognize this idiom. You get the expected VMOVDQA xmm?, xmm? instruction, which implicitly clears the upper bits.
Prefer Version C!
Clang:
Version A: Compiled to inefficient code. A temporary YMM register is zeroed using VPXOR, and then that is inserted into the temporary YMM register using VINSERTI128 (or the floating-point forms, depending on version and options).
Version B & C: Also compiled to inefficient code. A temporary YMM register is again zeroed, but here, it is blended with the input YMM register using VPBLENDD.
Nothing ideal!
ICC:
Version A: Ideal. Produces the expected VMOVDQA xmm?, xmm? instruction.
Version B: Compiled to inefficient code. Zeros a temporary YMM register, and then blends zeros with the input YMM register (VPBLENDD).
Version C: Also compiled to inefficient code. Zeros a temporary YMM register, and then uses VINSERTI128 to insert zeros into the temporary YMM register.
Prefer Version A!
MSVC:
Version A and C: Compiled to inefficient code. Zeroes a temporary YMM register, and then uses VINSERTI128 (A) or VINSERTF128 (C) to insert zeros into the temporary YMM register.
Version B: Also compiled to inefficient code. Zeros a temporary YMM register, and then blends this with the input YMM register using VPBLENDD.
Nothing ideal!
In conclusion, then, it is possible to get GCC and ICC to emit the ideal VMOVDQA instruction, if you use the right code sequence. But, I can't see any way to get either Clang or MSVC to safely emit a VMOVDQA instruction. These compilers are missing the optimization opportunity.
So, on Clang and MSVC, we have the choice between XOR+blend and XOR+insert. Which is better? We turn to Agner Fog's instruction tables (spreadsheet version also available):
On AMD's Ryzen architecture: (Bulldozer-family is similar for the AVX __m256 equivalents of these, and for AVX2 on Excavator):
Instruction | Ops | Latency | Reciprocal Throughput | Execution Ports
---------------|-----|---------|-----------------------|---------------------
VMOVDQA | 1 | 0 | 0.25 | 0 (renamed)
VPBLENDD | 2 | 1 | 0.67 | 3
VINSERTI128 | 2 | 1 | 0.67 | 3
Agner Fog seems to have missed some AVX2 instructions in the Ryzen section of his tables. See this AIDA64 InstLatX64 result for confirmation that VPBLENDD ymm performs the same as VPBLENDW ymm on Ryzen, rather than being the same as VBLENDPS ymm (1c throughput from 2 uops that can run on 2 ports).
See also an Excavator / Carrizo InstLatX64 showing that VPBLENDD and VINSERTI128 have equal performance there (2 cycle latency, 1 per cycle throughput). Same for VBLENDPS/VINSERTF128.
On Intel architectures (Haswell, Broadwell, and Skylake):
Instruction | Ops | Latency | Reciprocal Throughput | Execution Ports
---------------|-----|---------|-----------------------|---------------------
VMOVDQA | 1 | 0-1 | 0.33 | 3 (may be renamed)
VPBLENDD | 1 | 1 | 0.33 | 3
VINSERTI128 | 1 | 3 | 1.00 | 1
Obviously, VMOVDQA is optimal on both AMD and Intel, but we already knew that, and it doesn't seem to be an option on either Clang or MSVC until their code generators are improved to recognize one of the above idioms or an additional intrinsic is added for this precise purpose.
Luckily, VPBLENDD is at least as good as VINSERTI128 on both AMD and Intel CPUs. On Intel processors, VPBLENDD is a significant improvement over VINSERTI128. (In fact, it's nearly as good as VMOVDQA in the rare case where the latter cannot be renamed, except for needing an all-zero vector constant.) Prefer the sequence of intrinsics that results in a VPBLENDD instruction if you can't coax your compiler to use VMOVDQA.
If you need a floating-point __m256 or __m256d version of this, the choice is more difficult. On Ryzen, VBLENDPS has 1c throughput, but VINSERTF128 has 0.67c. On all other CPUs (including AMD Bulldozer-family), VBLENDPS is equal or better. It's much better on Intel (same as for integer). If you're optimizing specifically for AMD, you may need to do more tests to see which variant is fastest in your particular sequence of code, otherwise blend. It's only a tiny bit worse on Ryzen.
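For the record, a sketch of the blend idiom for the float case (function name mine), mirroring Version B above:
/* Sketch: keep the low 4 floats, zero the high 4; compiles to VBLENDPS. */
static inline __m256 clear_upper128_ps(__m256 v)
{
    return _mm256_blend_ps(_mm256_setzero_ps(), v, 0x0F);
}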
In summary, then, targeting generic x86 and supporting as many different compilers as possible, we can do:
#if (defined _MSC_VER)
    ymm = _mm256_blend_epi32(_mm256_setzero_si256(),
                             ymm,
                             _MM_SHUFFLE(0, 0, 3, 3));
#elif (defined __INTEL_COMPILER)
    ymm = _mm256_set_m128i(_mm_setzero_si128(), _mm256_castsi256_si128(ymm));
#elif (defined __GNUC__)
    // Intended to cover GCC and Clang.
    ymm = _mm256_inserti128_si256(_mm256_setzero_si256(),
                                  _mm256_castsi256_si128(ymm),
                                  0);
#else
    #error "Unsupported compiler: need to figure out optimal sequence for this compiler."
#endif
See this and versions A,B, and C separately on the Godbolt compiler explorer.
Perhaps you could build on this to define your own macro-based intrinsic until something better comes down the pike.
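For example, the selection above can be wrapped in an inline helper so the compiler-specific choice lives in one place (a sketch; the function name is my own):
static inline __m256i clear_upper128_si256(__m256i ymm)
{
#if defined(_MSC_VER)
    return _mm256_blend_epi32(_mm256_setzero_si256(), ymm,
                              _MM_SHUFFLE(0, 0, 3, 3));
#elif defined(__INTEL_COMPILER)
    return _mm256_set_m128i(_mm_setzero_si128(), _mm256_castsi256_si128(ymm));
#elif defined(__GNUC__)   /* covers GCC and Clang */
    return _mm256_inserti128_si256(_mm256_setzero_si256(),
                                   _mm256_castsi256_si128(ymm), 0);
#else
#error "Unsupported compiler: need to figure out the optimal sequence."
#endif
}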
See what your compiler generates for this:
__m128i m1 = _mm_set1_epi32(1);
__m256i m2 = _mm256_set_m128i(_mm_setzero_si128(), m1);
or alternatively this:
__m128i m1 = _mm_set1_epi32(1);
__m256i m2 = _mm256_setzero_si256();
m2 = _mm256_inserti128_si256 (m2, m1, 0);
The version of clang I have here seems to generate the same code for either (vxorps + vinsertf128), but YMMV.

How to use Fused Multiply-Add (FMA) instructions with SSE/AVX

I have learned that some Intel/AMD CPUs can do simultaneous multiply and add with SSE/AVX: FLOPS per cycle for sandy-bridge and haswell SSE2/AVX/AVX2.
I'd like to know how best to do this in code, and I also want to know how it's done internally in the CPU, i.e. with the super-scalar architecture. Let's say I want to do a long sum such as the following in SSE:
//sum = a1*b1 + a2*b2 + a3*b3 +... where a is a scalar and b is a SIMD vector (e.g. from matrix multiplication)
sum = _mm_set1_ps(0.0f);
a1 = _mm_set1_ps(a[0]);
b1 = _mm_load_ps(&b[0]);
sum = _mm_add_ps(sum, _mm_mul_ps(a1, b1));
a2 = _mm_set1_ps(a[1]);
b2 = _mm_load_ps(&b[4]);
sum = _mm_add_ps(sum, _mm_mul_ps(a2, b2));
a3 = _mm_set1_ps(a[2]);
b3 = _mm_load_ps(&b[8]);
sum = _mm_add_ps(sum, _mm_mul_ps(a3, b3));
...
My question is how does this get converted to simultaneous multiply and add? Can the data be dependent? I mean can the CPU do _mm_add_ps(sum, _mm_mul_ps(a1, b1)) simultaneously or do the registers used in the multiplication and add have to be independent?
Lastly how does this apply to FMA (with Haswell)? Is _mm_add_ps(sum, _mm_mul_ps(a1, b1)) automatically converted to a single FMA instruction or micro-operation?
The compiler is allowed to fuse a separated add and multiply, even though this changes the final result (by making it more accurate).
An FMA has only one rounding (it effectively keeps infinite precision for the internal temporary multiply result), while an ADD + MUL has two.
The IEEE and C standards allow this when #pragma STDC FP_CONTRACT ON is in effect, and compilers are allowed to have it ON by default (but not all do). Gcc contracts into FMA by default (with the default -std=gnu*, but not -std=c*, e.g. -std=c++14). For Clang, it's only enabled with -ffp-contract=fast. (With just the #pragma enabled, only within a single expression like a+b*c, not across separate C++ statements.).
This is different from strict vs. relaxed floating point (or in gcc terms, -ffast-math vs. -fno-fast-math) that would allow other kinds of optimizations that could increase the rounding error depending on the input values. This one is special because of the infinite precision of the FMA internal temporary; if there was any rounding at all in the internal temporary, this wouldn't be allowed in strict FP.
Even if you enable relaxed floating-point, the compiler might still choose not to fuse since it might expect you to know what you're doing if you're already using intrinsics.
So the best way to make sure you actually get the FMA instructions you want is to actually use the provided intrinsics for them (a sketch follows the lists below):
FMA3 Intrinsics: (AVX2 - Intel Haswell)
_mm_fmadd_pd(), _mm256_fmadd_pd()
_mm_fmadd_ps(), _mm256_fmadd_ps()
and about a gazillion other variations...
FMA4 Intrinsics: (XOP - AMD Bulldozer)
_mm_macc_pd(), _mm256_macc_pd()
_mm_macc_ps(), _mm256_macc_ps()
and about a gazillion other variations...
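For example, the accumulation from the question rewritten with FMA3 intrinsics would be (a sketch, assuming the same a/b layout as above):
// same sum as in the question, but each multiply+add is one FMA with a single rounding
sum = _mm_setzero_ps();
sum = _mm_fmadd_ps(_mm_set1_ps(a[0]), _mm_load_ps(&b[0]), sum);
sum = _mm_fmadd_ps(_mm_set1_ps(a[1]), _mm_load_ps(&b[4]), sum);
sum = _mm_fmadd_ps(_mm_set1_ps(a[2]), _mm_load_ps(&b[8]), sum);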
I tested the following code in GCC 5.3, Clang 3.7, ICC 13.0.1 and MSVC 2015 (compiler version 19.00).
float mul_add(float a, float b, float c) {
    return a*b + c;
}
__m256 mul_addv(__m256 a, __m256 b, __m256 c) {
    return _mm256_add_ps(_mm256_mul_ps(a, b), c);
}
With the right compiler options (see below) every compiler will generate a vfmadd instruction (e.g. vfmadd213ss) from mul_add. However, only MSVC fails to contract mul_addv to a single vfmadd instruction (e.g. vfmadd213ps).
The following compiler options are sufficient to generate vfmadd instructions (except with mul_addv with MSVC).
GCC: -O2 -mavx2 -mfma
Clang: -O1 -mavx2 -mfma -ffp-contract=fast
ICC: -O1 -march=core-avx2
MSVC: /O1 /arch:AVX2 /fp:fast
GCC 4.9 will not contract mul_addv to a single fma instruction but since at least GCC 5.1 it does. I don't know when the other compilers started doing this.

Resources