Why does gcc -march=znver1 restrict uint64_t vectorization? - c

I'm trying to make sure gcc vectorizes my loops. It turns out that by using -march=znver1 (or -march=native) gcc skips some loops even though they can be vectorized. Why does this happen?
In this code, the second loop, which multiplies each element by a scalar, is not vectorized:
#include <stdio.h>
#include <inttypes.h>

int main() {
    const size_t N = 1000;
    uint64_t arr[N];

    for (size_t i = 0; i < N; ++i)
        arr[i] = 1;

    for (size_t i = 0; i < N; ++i)
        arr[i] *= 5;

    for (size_t i = 0; i < N; ++i)
        printf("%lu\n", arr[i]); // use the array so that it is not optimized away
}
gcc -O3 -fopt-info-vec-all -mavx2 main.c:
main.cpp:13:26: missed: couldn't vectorize loop
main.cpp:14:15: missed: statement clobbers memory: printf ("%lu\n", _3);
main.cpp:10:26: optimized: loop vectorized using 32 byte vectors
main.cpp:7:26: optimized: loop vectorized using 32 byte vectors
main.cpp:4:5: note: vectorized 2 loops in function.
main.cpp:14:15: missed: statement clobbers memory: printf ("%lu\n", _3);
main.cpp:15:1: note: ***** Analysis failed with vector mode V4DI
main.cpp:15:1: note: ***** Skipping vector mode V32QI, which would repeat the analysis for V4DI
gcc -O3 -fopt-info-vec-all -march=znver1 main.c:
main.cpp:13:26: missed: couldn't vectorize loop
main.cpp:14:15: missed: statement clobbers memory: printf ("%lu\n", _3);
main.cpp:10:26: missed: couldn't vectorize loop
main.cpp:10:26: missed: not vectorized: unsupported data-type
main.cpp:7:26: optimized: loop vectorized using 16 byte vectors
main.cpp:4:5: note: vectorized 1 loops in function.
main.cpp:14:15: missed: statement clobbers memory: printf ("%lu\n", _3);
main.cpp:15:1: note: ***** Analysis failed with vector mode V2DI
main.cpp:15:1: note: ***** Skipping vector mode V16QI, which would repeat the analysis for V2DI
-march=znver1 includes -mavx2, so I think gcc chooses not to vectorize it for some reason:
~ $ gcc -march=znver1 -Q --help=target
The following options are target specific:
-m128bit-long-double [enabled]
-m16 [disabled]
-m32 [disabled]
-m3dnow [disabled]
-m3dnowa [disabled]
-m64 [enabled]
-m80387 [enabled]
-m8bit-idiv [disabled]
-m96bit-long-double [disabled]
-mabi= sysv
-mabm [enabled]
-maccumulate-outgoing-args [disabled]
-maddress-mode= long
-madx [enabled]
-maes [enabled]
-malign-data= compat
-malign-double [disabled]
-malign-functions= 0
-malign-jumps= 0
-malign-loops= 0
-malign-stringops [enabled]
-mamx-bf16 [disabled]
-mamx-int8 [disabled]
-mamx-tile [disabled]
-mandroid [disabled]
-march= znver1
-masm= att
-mavx [enabled]
-mavx2 [enabled]
-mavx256-split-unaligned-load [disabled]
-mavx256-split-unaligned-store [enabled]
-mavx5124fmaps [disabled]
-mavx5124vnniw [disabled]
-mavx512bf16 [disabled]
-mavx512bitalg [disabled]
-mavx512bw [disabled]
-mavx512cd [disabled]
-mavx512dq [disabled]
-mavx512er [disabled]
-mavx512f [disabled]
-mavx512ifma [disabled]
-mavx512pf [disabled]
-mavx512vbmi [disabled]
-mavx512vbmi2 [disabled]
-mavx512vl [disabled]
-mavx512vnni [disabled]
-mavx512vp2intersect [disabled]
-mavx512vpopcntdq [disabled]
-mavxvnni [disabled]
-mbionic [disabled]
-mbmi [enabled]
-mbmi2 [enabled]
-mbranch-cost=<0,5> 3
-mcall-ms2sysv-xlogues [disabled]
-mcet-switch [disabled]
-mcld [disabled]
-mcldemote [disabled]
-mclflushopt [enabled]
-mclwb [disabled]
-mclzero [enabled]
-mcmodel= [default]
-mcpu=
-mcrc32 [disabled]
-mcx16 [enabled]
-mdispatch-scheduler [disabled]
-mdump-tune-features [disabled]
-menqcmd [disabled]
-mf16c [enabled]
-mfancy-math-387 [enabled]
-mfentry [disabled]
-mfentry-name=
-mfentry-section=
-mfma [enabled]
-mfma4 [disabled]
-mforce-drap [disabled]
-mforce-indirect-call [disabled]
-mfp-ret-in-387 [enabled]
-mfpmath= sse
-mfsgsbase [enabled]
-mfunction-return= keep
-mfused-madd -ffp-contract=fast
-mfxsr [enabled]
-mgeneral-regs-only [disabled]
-mgfni [disabled]
-mglibc [enabled]
-mhard-float [enabled]
-mhle [disabled]
-mhreset [disabled]
-miamcu [disabled]
-mieee-fp [enabled]
-mincoming-stack-boundary= 0
-mindirect-branch-register [disabled]
-mindirect-branch= keep
-minline-all-stringops [disabled]
-minline-stringops-dynamically [disabled]
-minstrument-return= none
-mintel-syntax -masm=intel
-mkl [disabled]
-mlarge-data-threshold=<number> 65536
-mlong-double-128 [disabled]
-mlong-double-64 [disabled]
-mlong-double-80 [enabled]
-mlwp [disabled]
-mlzcnt [enabled]
-mmanual-endbr [disabled]
-mmemcpy-strategy=
-mmemset-strategy=
-mmitigate-rop [disabled]
-mmmx [enabled]
-mmovbe [enabled]
-mmovdir64b [disabled]
-mmovdiri [disabled]
-mmpx [disabled]
-mms-bitfields [disabled]
-mmusl [disabled]
-mmwaitx [enabled]
-mneeded [disabled]
-mno-align-stringops [disabled]
-mno-default [disabled]
-mno-fancy-math-387 [disabled]
-mno-push-args [disabled]
-mno-red-zone [disabled]
-mno-sse4 [disabled]
-mnop-mcount [disabled]
-momit-leaf-frame-pointer [disabled]
-mpc32 [disabled]
-mpc64 [disabled]
-mpc80 [disabled]
-mpclmul [enabled]
-mpcommit [disabled]
-mpconfig [disabled]
-mpku [disabled]
-mpopcnt [enabled]
-mprefer-avx128 -mprefer-vector-width=128
-mprefer-vector-width= 128
-mpreferred-stack-boundary= 0
-mprefetchwt1 [disabled]
-mprfchw [enabled]
-mptwrite [disabled]
-mpush-args [enabled]
-mrdpid [disabled]
-mrdrnd [enabled]
-mrdseed [enabled]
-mrecip [disabled]
-mrecip=
-mrecord-mcount [disabled]
-mrecord-return [disabled]
-mred-zone [enabled]
-mregparm= 6
-mrtd [disabled]
-mrtm [disabled]
-msahf [enabled]
-mserialize [disabled]
-msgx [disabled]
-msha [enabled]
-mshstk [disabled]
-mskip-rax-setup [disabled]
-msoft-float [disabled]
-msse [enabled]
-msse2 [enabled]
-msse2avx [disabled]
-msse3 [enabled]
-msse4 [enabled]
-msse4.1 [enabled]
-msse4.2 [enabled]
-msse4a [enabled]
-msse5 -mavx
-msseregparm [disabled]
-mssse3 [enabled]
-mstack-arg-probe [disabled]
-mstack-protector-guard-offset=
-mstack-protector-guard-reg=
-mstack-protector-guard-symbol=
-mstack-protector-guard= tls
-mstackrealign [disabled]
-mstringop-strategy= [default]
-mstv [enabled]
-mtbm [disabled]
-mtls-dialect= gnu
-mtls-direct-seg-refs [enabled]
-mtsxldtrk [disabled]
-mtune-ctrl=
-mtune= znver1
-muclibc [disabled]
-muintr [disabled]
-mvaes [disabled]
-mveclibabi= [default]
-mvect8-ret-in-mem [disabled]
-mvpclmulqdq [disabled]
-mvzeroupper [enabled]
-mwaitpkg [disabled]
-mwbnoinvd [disabled]
-mwidekl [disabled]
-mx32 [disabled]
-mxop [disabled]
-mxsave [enabled]
-mxsavec [enabled]
-mxsaveopt [enabled]
-mxsaves [enabled]
Known assembler dialects (for use with the -masm= option):
att intel
Known ABIs (for use with the -mabi= option):
ms sysv
Known code models (for use with the -mcmodel= option):
32 kernel large medium small
Valid arguments to -mfpmath=:
387 387+sse 387,sse both sse sse+387 sse,387
Known indirect branch choices (for use with the -mindirect-branch=/-mfunction-return= options):
keep thunk thunk-extern thunk-inline
Known choices for return instrumentation with -minstrument-return=:
call none nop5
Known data alignment choices (for use with the -malign-data= option):
abi cacheline compat
Known vectorization library ABIs (for use with the -mveclibabi= option):
acml svml
Known address mode (for use with the -maddress-mode= option):
long short
Known preferred register vector length (to use with the -mprefer-vector-width= option):
128 256 512 none
Known stack protector guard (for use with the -mstack-protector-guard= option):
global tls
Valid arguments to -mstringop-strategy=:
byte_loop libcall loop rep_4byte rep_8byte rep_byte unrolled_loop vector_loop
Known TLS dialects (for use with the -mtls-dialect= option):
gnu gnu2
Known valid arguments for -march= option:
i386 i486 i586 pentium lakemont pentium-mmx winchip-c6 winchip2 c3 samuel-2 c3-2 nehemiah c7 esther i686 pentiumpro pentium2 pentium3 pentium3m pentium-m pentium4 pentium4m prescott nocona core2 nehalem corei7 westmere sandybridge corei7-avx ivybridge core-avx-i haswell core-avx2 broadwell skylake skylake-avx512 cannonlake icelake-client rocketlake icelake-server cascadelake tigerlake cooperlake sapphirerapids alderlake bonnell atom silvermont slm goldmont goldmont-plus tremont knl knm intel geode k6 k6-2 k6-3 athlon athlon-tbird athlon-4 athlon-xp athlon-mp x86-64 x86-64-v2 x86-64-v3 x86-64-v4 eden-x2 nano nano-1000 nano-2000 nano-3000 nano-x2 eden-x4 nano-x4 k8 k8-sse3 opteron opteron-sse3 athlon64 athlon64-sse3 athlon-fx amdfam10 barcelona bdver1 bdver2 bdver3 bdver4 znver1 znver2 znver3 btver1 btver2 generic native
Known valid arguments for -mtune= option:
generic i386 i486 pentium lakemont pentiumpro pentium4 nocona core2 nehalem sandybridge haswell bonnell silvermont goldmont goldmont-plus tremont knl knm skylake skylake-avx512 cannonlake icelake-client icelake-server cascadelake tigerlake cooperlake sapphirerapids alderlake rocketlake intel geode k6 athlon k8 amdfam10 bdver1 bdver2 bdver3 bdver4 btver1 btver2 znver1 znver2 znver3
I also tried clang, and in both cases the loops are vectorized with, I believe, 32-byte vectors:
remark: vectorized loop (vectorization width: 4, interleaved count: 4)
I'm using gcc 11.2.0
Edit:
As requested by Peter Cordes
I realised I was actually benchmarking with a multiplication by 4 for some time.
Makefile:
all:
	gcc -O3 -mavx2 main.c -o 3
	gcc -O3 -march=znver2 main.c -o 32
	gcc -O3 -march=znver2 main.c -mprefer-vector-width=128 -o 32128
	gcc -O3 -march=znver1 main.c -o 31
	gcc -O2 -mavx2 main.c -o 2
	gcc -O2 -march=znver2 main.c -o 22
	gcc -O2 -march=znver2 main.c -mprefer-vector-width=128 -o 22128
	gcc -O2 -march=znver1 main.c -o 21
	hyperfine -r5 ./3 ./32 ./32128 ./31 ./2 ./22 ./22128 ./21

clean:
	rm ./3 ./32 ./32128 ./31 ./2 ./22 ./22128 ./21
Code:
#include <stdio.h>
#include <inttypes.h>
#include <stdlib.h>
#include <time.h>
int main() {
    const size_t N = 500;
    uint64_t arr[N];

    for (size_t i = 0; i < N; ++i)
        arr[i] = 1;

    for (int j = 0; j < 20000000; ++j)
        for (size_t i = 0; i < N; ++i)
            arr[i] *= 4;

    srand(time(0));
    printf("%lu\n", arr[rand() % N]); // use the array so that it is not optimized away
}
N = 500, arr[i] *= 4:
Benchmark 1: ./3
Time (mean ± σ): 1.780 s ± 0.011 s [User: 1.778 s, System: 0.000 s]
Range (min … max): 1.763 s … 1.791 s 5 runs
Benchmark 2: ./32
Time (mean ± σ): 1.785 s ± 0.016 s [User: 1.783 s, System: 0.000 s]
Range (min … max): 1.773 s … 1.810 s 5 runs
Benchmark 3: ./32128
Time (mean ± σ): 1.740 s ± 0.026 s [User: 1.737 s, System: 0.000 s]
Range (min … max): 1.724 s … 1.785 s 5 runs
Benchmark 4: ./31
Time (mean ± σ): 1.757 s ± 0.022 s [User: 1.754 s, System: 0.000 s]
Range (min … max): 1.727 s … 1.785 s 5 runs
Benchmark 5: ./2
Time (mean ± σ): 3.467 s ± 0.031 s [User: 3.462 s, System: 0.000 s]
Range (min … max): 3.443 s … 3.519 s 5 runs
Benchmark 6: ./22
Time (mean ± σ): 3.475 s ± 0.028 s [User: 3.469 s, System: 0.001 s]
Range (min … max): 3.447 s … 3.512 s 5 runs
Benchmark 7: ./22128
Time (mean ± σ): 3.464 s ± 0.034 s [User: 3.459 s, System: 0.001 s]
Range (min … max): 3.431 s … 3.509 s 5 runs
Benchmark 8: ./21
Time (mean ± σ): 3.465 s ± 0.013 s [User: 3.460 s, System: 0.001 s]
Range (min … max): 3.443 s … 3.475 s 5 runs
N = 500, arr[i] *= 5:
Benchmark 1: ./3
Time (mean ± σ): 1.789 s ± 0.004 s [User: 1.786 s, System: 0.001 s]
Range (min … max): 1.783 s … 1.793 s 5 runs
Benchmark 2: ./32
Time (mean ± σ): 1.772 s ± 0.017 s [User: 1.769 s, System: 0.000 s]
Range (min … max): 1.755 s … 1.800 s 5 runs
Benchmark 3: ./32128
Time (mean ± σ): 2.911 s ± 0.023 s [User: 2.907 s, System: 0.001 s]
Range (min … max): 2.880 s … 2.943 s 5 runs
Benchmark 4: ./31
Time (mean ± σ): 2.924 s ± 0.013 s [User: 2.921 s, System: 0.000 s]
Range (min … max): 2.906 s … 2.934 s 5 runs
Benchmark 5: ./2
Time (mean ± σ): 3.850 s ± 0.029 s [User: 3.846 s, System: 0.000 s]
Range (min … max): 3.823 s … 3.896 s 5 runs
Benchmark 6: ./22
Time (mean ± σ): 3.816 s ± 0.036 s [User: 3.812 s, System: 0.000 s]
Range (min … max): 3.777 s … 3.855 s 5 runs
Benchmark 7: ./22128
Time (mean ± σ): 3.813 s ± 0.026 s [User: 3.809 s, System: 0.000 s]
Range (min … max): 3.780 s … 3.834 s 5 runs
Benchmark 8: ./21
Time (mean ± σ): 3.783 s ± 0.010 s [User: 3.779 s, System: 0.000 s]
Range (min … max): 3.773 s … 3.798 s 5 runs
N = 512, arr[i] *= 4:
Benchmark 1: ./3
Time (mean ± σ): 1.849 s ± 0.015 s [User: 1.847 s, System: 0.000 s]
Range (min … max): 1.831 s … 1.873 s 5 runs
Benchmark 2: ./32
Time (mean ± σ): 1.846 s ± 0.013 s [User: 1.844 s, System: 0.001 s]
Range (min … max): 1.832 s … 1.860 s 5 runs
Benchmark 3: ./32128
Time (mean ± σ): 1.756 s ± 0.012 s [User: 1.754 s, System: 0.000 s]
Range (min … max): 1.744 s … 1.771 s 5 runs
Benchmark 4: ./31
Time (mean ± σ): 1.788 s ± 0.012 s [User: 1.785 s, System: 0.001 s]
Range (min … max): 1.774 s … 1.801 s 5 runs
Benchmark 5: ./2
Time (mean ± σ): 3.476 s ± 0.015 s [User: 3.472 s, System: 0.001 s]
Range (min … max): 3.458 s … 3.494 s 5 runs
Benchmark 6: ./22
Time (mean ± σ): 3.449 s ± 0.002 s [User: 3.446 s, System: 0.000 s]
Range (min … max): 3.446 s … 3.452 s 5 runs
Benchmark 7: ./22128
Time (mean ± σ): 3.456 s ± 0.007 s [User: 3.453 s, System: 0.000 s]
Range (min … max): 3.446 s … 3.462 s 5 runs
Benchmark 8: ./21
Time (mean ± σ): 3.547 s ± 0.044 s [User: 3.542 s, System: 0.001 s]
Range (min … max): 3.482 s … 3.600 s 5 runs
N = 512, arr[i] *= 5:
Benchmark 1: ./3
Time (mean ± σ): 1.847 s ± 0.013 s [User: 1.845 s, System: 0.000 s]
Range (min … max): 1.836 s … 1.863 s 5 runs
Benchmark 2: ./32
Time (mean ± σ): 1.830 s ± 0.007 s [User: 1.827 s, System: 0.001 s]
Range (min … max): 1.820 s … 1.837 s 5 runs
Benchmark 3: ./32128
Time (mean ± σ): 2.983 s ± 0.017 s [User: 2.980 s, System: 0.000 s]
Range (min … max): 2.966 s … 3.012 s 5 runs
Benchmark 4: ./31
Time (mean ± σ): 3.026 s ± 0.039 s [User: 3.021 s, System: 0.001 s]
Range (min … max): 2.989 s … 3.089 s 5 runs
Benchmark 5: ./2
Time (mean ± σ): 4.000 s ± 0.021 s [User: 3.994 s, System: 0.001 s]
Range (min … max): 3.982 s … 4.035 s 5 runs
Benchmark 6: ./22
Time (mean ± σ): 3.940 s ± 0.041 s [User: 3.934 s, System: 0.001 s]
Range (min … max): 3.890 s … 3.981 s 5 runs
Benchmark 7: ./22128
Time (mean ± σ): 3.928 s ± 0.032 s [User: 3.922 s, System: 0.001 s]
Range (min … max): 3.898 s … 3.979 s 5 runs
Benchmark 8: ./21
Time (mean ± σ): 3.908 s ± 0.029 s [User: 3.904 s, System: 0.000 s]
Range (min … max): 3.879 s … 3.954 s 5 runs
I think the run where -O2 -march=znver1 was the same speed as -O3 -march=znver1 was a mistake on my part with the naming of the files; I had not created the Makefile back then and was reusing commands from my shell's history.

The default -mtune=generic has -mprefer-vector-width=256, and -mavx2 doesn't change that.
znver1 implies -mprefer-vector-width=128, because 128 bits is the native width of the hardware's SIMD units. An instruction using 32-byte YMM vectors decodes to at least 2 uops, more if it's a lane-crossing shuffle. For simple vertical SIMD like this, 32-byte vectors would be ok; the pipeline handles 2-uop instructions efficiently. (And I think Zen 1's pipeline is 6 uops wide but only 5 instructions wide, so max front-end throughput isn't available using only 1-uop instructions.) But when vectorization would require shuffling, e.g. with arrays of different element widths, GCC code-gen can get messier with 256-bit or wider.
And vmovdqa ymm0, ymm1 mov-elimination only works on the low 128-bit half on Zen1. Also, normally using 256-bit vectors would imply one should use vzeroupper afterwards, to avoid performance problems on other CPUs (but not Zen1).
I don't know how Zen1 handles misaligned 32-byte loads/stores where each 16-byte half is aligned but in separate cache lines. If that performs well, GCC might want to consider increasing the znver1 -mprefer-vector-width to 256. But wider vectors means more cleanup code if the size isn't known to be a multiple of the vector width.
Ideally GCC would be able to detect easy cases like this and use 256-bit vectors there. (Pure vertical, no mixing of element widths, constant size that's a multiple of 32 bytes.) At least on CPUs where that's fine: znver1, but not e.g. bdver2, where 256-bit stores are always slow due to a CPU design bug.
You can see the result of this choice in the way it vectorizes your first loop, the memset-like loop, with a vmovdqu [rdx], xmm0. https://godbolt.org/z/E5Tq7Gfzc
So given that GCC has decided to only use 128-bit vectors, which can hold only two uint64_t elements, it (rightly or wrongly) decides it wouldn't be worth using vpsllq / vpaddq to implement qword *5 as (v<<2) + v, vs. doing it in scalar integer code with one LEA instruction.
Almost certainly wrongly in this case, since it still requires a separate load and store for every element or pair of elements. (And loop overhead since GCC's default is not to unroll except with PGO, -fprofile-use. SIMD is like loop unrolling, especially on a CPU that handles 256-bit vectors as 2 separate uops.)
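For reference, the scalar per-element work it's weighing this against is roughly the following (a sketch of the pattern, not GCC's exact output):
    mov rax, QWORD PTR [rdi]     # load one uint64_t
    lea rax, [rax + rax*4]       # x*5 = x + x*4, one ALU instruction
    mov QWORD PTR [rdi], rax     # store
    add rdi, 8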
I'm not sure exactly what GCC means by "not vectorized: unsupported data-type". x86 doesn't have a SIMD uint64_t multiply instruction until AVX-512, so perhaps GCC assigns it a cost based on the general case of having to emulate it with multiple 32x32 => 64-bit pmuludq instructions and a bunch of shuffles. And it's only after it gets over that hump that it realizes that it's actually quite cheap for a constant like 5 with only 2 set bits?
That would explain GCC's decision-making process here, but I'm not sure it's exactly the right explanation. Still, these kinds of factors are at work in a complex piece of machinery like a compiler. A skilled human can easily make smarter choices, but compilers just do sequences of optimization passes that don't always consider the big picture and all the details at the same time.
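For concreteness, that general-case emulation looks something like this with AVX2 intrinsics. This is my own sketch of the standard vpmuludq trick, not GCC's actual code-gen:

#include <immintrin.h>

// Full 64x64 => low-64 multiply per element, built from 32x32 => 64 vpmuludq:
// a*b mod 2^64 = lo(a)*lo(b) + ((lo(a)*hi(b) + hi(a)*lo(b)) << 32)
static __m256i mul64_avx2(__m256i a, __m256i b) {
    __m256i lo_hi = _mm256_mul_epu32(a, _mm256_srli_epi64(b, 32)); // lo(a)*hi(b)
    __m256i hi_lo = _mm256_mul_epu32(_mm256_srli_epi64(a, 32), b); // hi(a)*lo(b)
    __m256i cross = _mm256_add_epi64(lo_hi, hi_lo);
    __m256i lo_lo = _mm256_mul_epu32(a, b);                        // lo(a)*lo(b)
    return _mm256_add_epi64(lo_lo, _mm256_slli_epi64(cross, 32));
}

That's five vector ALU operations per vector, vs. one shift plus one add for a constant like 5, so a cost model that prices the general case first has a big hump to get over.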
-mprefer-vector-width=256 doesn't help: not vectorizing uint64_t *= 5 seems to be a GCC9 regression.
(The benchmarks in the question confirm that an actual Zen1 CPU gets a nearly 2x speedup, as expected from doing 2x uint64 in 6 uops vs. 1x in 5 uops with scalar. Or 4x uint64_t in 10 uops with 256-bit vectors, including two 128-bit stores which will be the throughput bottleneck along with the front-end.)
Even with -march=znver1 -O3 -mprefer-vector-width=256, we don't get the *= 5 loop vectorized with GCC9, 10, or 11, or current trunk. As you say, we do with -march=znver2. https://godbolt.org/z/dMTh7Wxcq
We do get vectorization with those options for uint32_t (even leaving the vector width at 128-bit). Scalar would cost 4 operations per vector uop (not instruction), regardless of 128 or 256-bit vectorization on Zen1, so this doesn't tell us whether *= is what makes the cost-model decide not to vectorize, or just the 2 vs. 4 elements per 128-bit internal uop.
With uint64_t, changing to arr[i] += arr[i]<<2; still doesn't vectorize, but arr[i] <<= 1; does. (https://godbolt.org/z/6PMn93Y5G). Even arr[i] <<= 2; and arr[i] += 123 in the same loop vectorize, to the same instructions that GCC thinks aren't worth it for vectorizing *= 5, just with different operands: a constant instead of the original vector again. (Scalar could still use one LEA.) So clearly the cost-model isn't looking as far as the final x86 asm machine instructions, but I don't know why arr[i] += arr[i]; would be considered more expensive than arr[i] <<= 1;, which is exactly the same thing.
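Spelled out as loop source, the variants compared above are:

for (size_t i = 0; i < N; ++i) arr[i] += arr[i] << 2;            // not vectorized
for (size_t i = 0; i < N; ++i) arr[i] <<= 1;                     // vectorized
for (size_t i = 0; i < N; ++i) { arr[i] <<= 2; arr[i] += 123; }  // both ops vectorized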
GCC8 does vectorize your loop, even with 128-bit vector width: https://godbolt.org/z/5o6qjc7f6
# GCC8.5 -march=znver1 -O3  (-mprefer-vector-width=128)
.L12:                                     # do {
    vmovups xmm1, XMMWORD PTR [rsi]       #   16-byte load
    add     rsi, 16                       #   ptr += 2 elements
    vpsllq  xmm0, xmm1, 2                 #   v << 2
    vpaddq  xmm0, xmm0, xmm1              #   tmp += v
    vmovups XMMWORD PTR [rsi-16], xmm0    #   store
    cmp     rax, rsi
    jne     .L12                          # } while(p != endp)
With -march=znver1 -mprefer-vector-width=256, GCC does the store as two 16-byte halves with vmovups xmm / vextracti128 (see Why doesn't gcc resolve _mm256_loadu_pd as single vmovupd?): znver1 implies -mavx256-split-unaligned-store, which affects every store when GCC doesn't know for sure that it is aligned. So it costs extra instructions even when the data does happen to be aligned.
znver1 doesn't imply -mavx256-split-unaligned-load, though, so GCC is willing to fold loads as memory source operands into ALU operations in code where that's useful.
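In asm, the split-store pattern looks something like this (a sketch of the pattern, not exact compiler output):

    vmovups      XMMWORD PTR [rdx], xmm0        # low 16-byte half
    vextracti128 XMMWORD PTR [rdx+16], ymm0, 1  # high half stored straight from ymm

versus a single vmovdqu YMMWORD PTR [rdx], ymm0 when the store isn't split.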

Related

Simple array operation in parallel using julia

I am working on a project which includes some simple array operations on a huge array, for example:
function singleoperation!(A::Array, B::Array, C::Array)
    @simd for k in eachindex(A)
        @inbounds C[k] = A[k] * B[k] / (A[k] + B[k])
    end
end
I tried to parallelize it to get more speed. To do so, I am using Distributed and SharedArrays, with the function modified just a bit:
@everywhere function paralleloperation!(A::SharedArray, B::SharedArray, C::SharedArray)
    @sync @distributed for k in eachindex(A)
        @inbounds C[k] = A[k] * B[k] / (A[k] + B[k])
    end
end
However, there is no time difference between the two functions even though I am using 4 workers (tried on R7-5800X and i7-9750H CPUs). Is there anything I can improve in this code? Thanks a lot! I will post the full testing code below:
using Distributed
addprocs(4)

@everywhere begin
    using SharedArrays
    using BenchmarkTools
end

@everywhere function paralleloperation!(A::SharedArray, B::SharedArray, C::SharedArray)
    @sync @distributed for k in eachindex(A)
        @inbounds C[k] = A[k] * B[k] / (A[k] + B[k])
    end
end

function singleoperation!(A::Array, B::Array, C::Array)
    @simd for k in eachindex(A)
        @inbounds C[k] = A[k] * B[k] / (A[k] + B[k])
    end
end

N = 128;
A, B, C = fill(0, N, N, N), fill(.2, N, N, N), fill(.3, N, N, N);
AN, BN, CN = SharedArray(fill(0, N, N, N)), SharedArray(fill(.2, N, N, N)), SharedArray(fill(.3, N, N, N));

@benchmark singleoperation!(A, B, C);
BenchmarkTools.Trial: 1612 samples with 1 evaluation.
Range (min … max): 2.582 ms … 9.358 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 2.796 ms ┊ GC (median): 0.00%
Time (mean ± σ): 3.086 ms ± 790.997 μs ┊ GC (mean ± σ): 0.00% ± 0.00%
@benchmark paralleloperation!(AN, BN, CN);
BenchmarkTools.Trial: 1404 samples with 1 evaluation.
Range (min … max): 2.538 ms … 17.651 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 3.154 ms ┊ GC (median): 0.00%
Time (mean ± σ): 3.548 ms ± 1.238 ms ┊ GC (mean ± σ): 0.08% ± 1.65%
As the comments note, this looks like perhaps more of a job for multithreading than multiprocessing. The best approach in detail will generally depend on whether you are CPU-bound or memory-bandwidth-bound. With so simple a calculation as in the example, it may well be the latter, in which case you will reach a point of diminishing returns from adding additional threads, and may want to turn to something featuring explicit memory modelling, and/or to GPUs.
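For comparison, here is a minimal sketch of the same kernel using Base.Threads instead of Distributed/SharedArrays (start Julia with e.g. julia -t 4); threads share memory, so plain Arrays work directly:

using Base.Threads

function threadedoperation!(A, B, C)
    # Each thread handles a chunk of the index range; no SharedArray needed.
    @threads for k in eachindex(A)
        @inbounds C[k] = A[k] * B[k] / (A[k] + B[k])
    end
end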
However, one very easy general-purpose approach would be to use the multithreading built into LoopVectorization.jl:
A = rand(10000, 10000)
B = rand(10000, 10000)
C = zeros(10000, 10000)

# Base
function singleoperation!(A, B, C)
    @inbounds @simd for k in eachindex(A)
        C[k] = A[k] * B[k] / (A[k] + B[k])
    end
end

using LoopVectorization

function singleoperation_lv!(A, B, C)
    @turbo for k in eachindex(A)
        C[k] = A[k] * B[k] / (A[k] + B[k])
    end
end

# Multithreaded (make sure you've started Julia with multiple threads)
function threadedoperation_lv!(A, B, C)
    @tturbo for k in eachindex(A)
        C[k] = A[k] * B[k] / (A[k] + B[k])
    end
end
which gives us
julia> @benchmark singleoperation!(A,B,C)
BenchmarkTools.Trial: 31 samples with 1 evaluation.
Range (min … max): 163.484 ms … 164.022 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 163.664 ms ┊ GC (median): 0.00%
Time (mean ± σ): 163.701 ms ± 118.397 μs ┊ GC (mean ± σ): 0.00% ± 0.00%
█ ▄ ▄▄ ▁ ▁ ▁
▆▁▁▁▁▁▁▁▁▁▁▁▁▆█▆▆█▆██▁█▆▁▆█▁▁▁▆▁▁▁▁▆▁▁▁▁▁▁▁▁█▁▁▁▆▁▁▁▁▁▁▆▁▁▁▁▆ ▁
163 ms Histogram: frequency by time 164 ms <
Memory estimate: 0 bytes, allocs estimate: 0.
julia> @benchmark singleoperation_lv!(A,B,C)
BenchmarkTools.Trial: 31 samples with 1 evaluation.
Range (min … max): 163.252 ms … 163.754 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 163.408 ms ┊ GC (median): 0.00%
Time (mean ± σ): 163.453 ms ± 130.212 μs ┊ GC (mean ± σ): 0.00% ± 0.00%
▃▃ ▃█▃ █ ▃ ▃
▇▁▁▁▁▁▇▁▁▁▇██▇▁▇███▇▁▁█▇█▁▁▁▁▁▁▁▁█▁▇▁▁▁▁▁▁▁▁▇▁▁▁▁▁▁▁▁▁▁▇▇▁▇▁▇ ▁
163 ms Histogram: frequency by time 164 ms <
Memory estimate: 0 bytes, allocs estimate: 0.
julia> @benchmark threadedoperation_lv!(A,B,C)
BenchmarkTools.Trial: 57 samples with 1 evaluation.
Range (min … max): 86.976 ms … 88.595 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 87.642 ms ┊ GC (median): 0.00%
Time (mean ± σ): 87.727 ms ± 439.427 μs ┊ GC (mean ± σ): 0.00% ± 0.00%
▅ █ ▂ ▂
▅▁▁▁▁██▁▅█▁█▁▁▅▅█▅█▁█▅██▅█▁██▁▅▁▁▅▅▁▅▁▅▁▅▁▅▁▅▁▁▁▅▁█▁▁█▅▁▅▁▅█ ▁
87 ms Histogram: frequency by time 88.5 ms <
Memory estimate: 0 bytes, allocs estimate: 0.
Now, the fact that the singlethreaded LoopVectorization @turbo version is almost perfectly tied with the singlethreaded @inbounds @simd version is to me a hint that we are probably memory-bandwidth-bound here (usually @turbo is notably faster than @inbounds @simd, so the tie suggests that the actual calculation is not the bottleneck) -- in which case the multithreaded version is only helping us by getting us access to a bit more memory bandwidth (though with diminishing returns, assuming there is some main memory bus that can only go so fast regardless of how many cores it can talk to).
To get a bit more insight, let's try making the arithmetic a bit harder:
function singlemoremath!(A, B, C)
    @inbounds @simd for k in eachindex(A)
        C[k] = cos(log(sqrt(A[k] * B[k] / (A[k] + B[k]))))
    end
end

using LoopVectorization

function singlemoremath_lv!(A, B, C)
    @turbo for k in eachindex(A)
        C[k] = cos(log(sqrt(A[k] * B[k] / (A[k] + B[k]))))
    end
end

function threadedmoremath_lv!(A, B, C)
    @tturbo for k in eachindex(A)
        C[k] = cos(log(sqrt(A[k] * B[k] / (A[k] + B[k]))))
    end
end
then sure enough
julia> @benchmark singlemoremath!(A,B,C)
BenchmarkTools.Trial: 2 samples with 1 evaluation.
Range (min … max): 2.651 s … 2.652 s ┊ GC (min … max): 0.00% … 0.00%
Time (median): 2.651 s ┊ GC (median): 0.00%
Time (mean ± σ): 2.651 s ± 792.423 μs ┊ GC (mean ± σ): 0.00% ± 0.00%
█ █
█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█ ▁
2.65 s Histogram: frequency by time 2.65 s <
Memory estimate: 0 bytes, allocs estimate: 0.
julia> @benchmark singlemoremath_lv!(A,B,C)
BenchmarkTools.Trial: 19 samples with 1 evaluation.
Range (min … max): 268.101 ms … 270.072 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 269.016 ms ┊ GC (median): 0.00%
Time (mean ± σ): 269.058 ms ± 467.744 μs ┊ GC (mean ± σ): 0.00% ± 0.00%
▁ █ ▁ ▁ ▁▁█ ▁ ██ ▁ ▁ ▁ ▁ ▁
█▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁█▁▁█▁▁███▁█▁▁██▁▁▁█▁▁▁▁█▁▁▁▁▁█▁▁▁▁▁█▁▁▁▁▁▁█ ▁
268 ms Histogram: frequency by time 270 ms <
Memory estimate: 0 bytes, allocs estimate: 0.
julia> @benchmark threadedmoremath_lv!(A,B,C)
BenchmarkTools.Trial: 56 samples with 1 evaluation.
Range (min … max): 88.247 ms … 93.590 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 89.325 ms ┊ GC (median): 0.00%
Time (mean ± σ): 89.707 ms ± 1.200 ms ┊ GC (mean ± σ): 0.00% ± 0.00%
▄ ▁ ▄█ ▄▄▄ ▁ ▄ ▁ ▁ ▁ ▁ ▁▄
█▁█▆██▆▆▆███▆▁█▁█▁▆▁█▆▁█▆▆▁▁▆█▁▁▁▁▁▁█▆██▁▁▆▆▁▁▆▁▁▁▁▁▁▁▁▁▁▆▆ ▁
88.2 ms Histogram: frequency by time 92.4 ms <
Memory estimate: 0 bytes, allocs estimate: 0.
now we're closer to CPU-bound, and now threading and SIMD-vectorization is the difference between 2.6 seconds and 90 ms!
If your real problem is going to be as memory-bound as the example problem, you may consider working on GPU, on a server optimized for memory bandwidth, and/or using a package that puts a lot of effort into memory modelling.
Some other packages you might check out could include Octavian.jl (CPU), Tullio.jl (CPU or GPU), and GemmKernels.jl (GPU).

How to turn an array of arrays into a matrix in Julia?

I want to turn an array of arrays into a matrix. To illustrate, let the array of arrays be:
[ [1,2,3], [4,5,6], [7,8,9]]
I would like to turn this into the 3x3 matrix:
[1 2 3
4 5 6
7 8 9]
How would you do this in Julia?
There are several ways of doing this. For instance, something along the lines of vcat(transpose.(a)...) will work as a one-liner
julia> a = [[1,2,3], [4,5,6], [7,8,9]]
3-element Vector{Vector{Int64}}:
[1, 2, 3]
[4, 5, 6]
[7, 8, 9]
julia> vcat(transpose.(a)...)
3×3 Matrix{Int64}:
1 2 3
4 5 6
7 8 9
though note that
Since your inner arrays are column-vectors as written, you need to transpose them all before you can vertically concatenate (aka vcat) them (either that or horizontally concatenate and then transpose the whole result after, i.e., transpose(hcat(a...))), and
The splatting operator ... which makes this one-liner work will not be very efficient when applied to Arrays in general, and especially not when applied to larger arrays-of-arrays.
Performance-wise for larger arrays-of-arrays, it will likely actually be hard to beat preallocating a result of the right size and then simply filling with a loop, e.g.
result = similar(first(a), length(a), length(first(a)))
for i = 1:length(a)
    result[i, :] = a[i] # Aside: `=` is actually slightly faster than `.=` here, though either will have the same practical result in this case
end
Some quick benchmarks for reference:
julia> using BenchmarkTools
julia> @benchmark vcat(transpose.($a)...)
BenchmarkTools.Trial: 10000 samples with 405 evaluations.
Range (min … max): 241.289 ns … 3.994 μs ┊ GC (min … max): 0.00% … 92.59%
Time (median): 262.836 ns ┊ GC (median): 0.00%
Time (mean ± σ): 289.105 ns ± 125.940 ns ┊ GC (mean ± σ): 2.06% ± 4.61%
▁▆▇█▇▆▅▅▅▄▄▄▄▃▂▂▂▃▃▂▂▁▁▁▂▄▃▁▁ ▁ ▁ ▂
████████████████████████████████▇▆▅▆▆▄▆▆▆▄▄▃▅▅▃▄▆▄▁▃▃▃▅▄▁▃▅██ █
241 ns Histogram: log(frequency) by time 534 ns <
Memory estimate: 320 bytes, allocs estimate: 5.
julia> @benchmark for i=1:length($a)
           $result[i,:] = $a[i]
       end
BenchmarkTools.Trial: 10000 samples with 993 evaluations.
Range (min … max): 33.966 ns … 124.918 ns ┊ GC (min … max): 0.00% … 0.00%
Time (median): 36.710 ns ┊ GC (median): 0.00%
Time (mean ± σ): 39.795 ns ± 7.566 ns ┊ GC (mean ± σ): 0.00% ± 0.00%
▄▄██▄▅▃ ▅▃ ▄▁▂ ▂▁▂▅▂▁ ▄▂▁ ▂
██████████████▇██████▆█▇▆███▆▇███▇▆▆▅▆▅▅▄▄▅▄▆▆▆▄▁▃▄▁▃▄▅▅▃▁▄█ █
34 ns Histogram: log(frequency) by time 77.7 ns <
Memory estimate: 0 bytes, allocs estimate: 0.
In general, filling column-by-column (if possible) will be faster than filling row-by-row as we have done here, since Julia is column-major.
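A quick sketch of that column-major variant: store each inner array as a column, then take a lazy transpose at the end if you need the original row orientation:

result = similar(first(a), length(first(a)), length(a))
for j in eachindex(a)
    result[:, j] = a[j]    # contiguous writes down a column
end
rowmat = transpose(result) # lazy view, no copy; rows are back in the original order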
Expanding on @cbk's answer, another (slightly more efficient) one-liner is
julia> transpose(reduce(hcat, a))
3×3 transpose(::Matrix{Int64}) with eltype Int64:
1 2 3
4 5 6
7 8 9
[1 2 3; 4 5 6; 7 8 9]
# or
reshape(1:9, 3, 3)' # remember that ' makes the transpose of a Matrix

Get max and std over array-fields in dataframe column pandas

(Pandas version 1.1.1.)
I have arrays as entries in the cells of a Dataframe column.
a = np.array([1,8])
b = np.array([5,14])
df = pd.DataFrame({'float':[1,2], 'array': [a,b]})
>    float   array
> 0      1  [1, 8]
> 1      2  [5, 14]
Now I need some statistics over each array position.
It works perfectly with the mean:
df['array'].mean()
> array([ 3., 11.])
But if I try to do it with the maximum or the standard deviation, errors occur:
df['array'].std()
> setting an array element with a sequence.
df['array'].max()
> The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
It seems like .mean(), .std() and .max() are constructed differently. Anyhow, does someone have an idea how to calculate the std and max (and min etc.) without splitting the array into several columns?
(The DataFrame has arrays of different shapes, but I only want to calculate statistics within a .groupby() over rows where the arrays have the same shape.)
You can convert the column to a 2d array and use numpy for the computation:
a = np.array([1,8])
b = np.array([5,14])
df = pd.DataFrame({'float':[1,2], 'array': [a,b]})
#2k for test
df = pd.concat([df] * 1000, ignore_index=True)
In [150]: %timeit (pd.DataFrame(df['array'].tolist(), index=df.index).std())
4.25 ms ± 305 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [151]: %timeit (np.std(np.array(df['array'].tolist()), ddof=1, axis=0))
944 µs ± 1.59 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [152]: %timeit (pd.DataFrame(df['array'].tolist(), index=df.index).max())
4.31 ms ± 646 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [153]: %timeit (np.max(np.array(df['array'].tolist()), axis=0))
836 µs ± 1.47 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
For 20k rows:
df = pd.concat([df] * 10000, ignore_index=True)
In [155]: %timeit (pd.DataFrame(df['array'].tolist(), index=df.index).std())
35.3 ms ± 87.6 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [156]: %timeit (np.std(np.array(df['array'].tolist()), ddof=1, axis=0))
9.13 ms ± 170 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [157]: %timeit (pd.DataFrame(df['array'].tolist(), index=df.index).max())
35.3 ms ± 127 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [158]: %timeit (np.max(np.array(df['array'].tolist()), axis=0))
8.21 ms ± 27.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
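Pulling the essential pattern out of the timings above: stack the equal-length arrays into one 2d array, then reduce along axis 0:

import numpy as np

arr2d = np.array(df['array'].tolist())       # shape (n_rows, arr_len)
std_per_pos = np.std(arr2d, ddof=1, axis=0)  # ddof=1 matches pandas' .std() default
max_per_pos = arr2d.max(axis=0)
min_per_pos = arr2d.min(axis=0)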

How to find the index of the min/max object in a numpy array of objects?

In a numpy array of objects (where each object has a numeric attribute y that can be retrieved by the method get_y()), how do I obtain the index of the object with the maximum (or minimum) y attribute (without explicit looping, to save time)? If myarray were a python list, I could use the following, but ndarray does not seem to support .index(). Also, numpy argmin does not seem to allow a provision for supplying the key.
minindex = myarray.index(min(myarray, key = lambda x: x.get_y()))
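One straightforward workaround, for reference (a sketch; the comprehension still visits every object at Python level, which is exactly the kind of cost the timings below explore):

import numpy as np

ys = np.array([obj.get_y() for obj in myarray])  # one Python-level pass over the objects
minindex = int(ys.argmin())
maxindex = int(ys.argmax())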
Some timings, comparing a numeric dtype, object dtype, and lists. Draw your own conclusions:
In [117]: x = np.arange(1000)
In [118]: xo=x.astype(object)
In [119]: np.sum(x)
Out[119]: 499500
In [120]: np.sum(xo)
Out[120]: 499500
In [121]: timeit np.sum(x)
10.8 µs ± 242 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [122]: timeit np.sum(xo)
39.2 µs ± 673 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [123]: sum(x)
Out[123]: 499500
In [124]: timeit sum(x)
214 µs ± 6.58 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [125]: timeit sum(xo)
25.3 µs ± 4.54 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [126]: timeit sum(x.tolist())
29.1 µs ± 26.7 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [127]: timeit sum(xo.tolist())
14.4 µs ± 120 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [129]: %%timeit temp=x.tolist()
...: sum(temp)
6.27 µs ± 18.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Absence of Consing in SBCL loop

* (defparameter lst (make-list 1000))
LST
* (time (loop for x in lst
              for i from 0
              unless (= i 500)
              collect x))
Evaluation took:
0.000 seconds of real time
0.000000 seconds of total run time (0.000000 user, 0.000000 system)
100.00% CPU
47,292 processor cycles
0 bytes consed
How does SBCL build the return list with 0 bytes consed?
Your test case is too small for time to measure. Try (defparameter lst (make-list 100000)):
Evaluation took:
0.003 seconds of real time
0.003150 seconds of total run time (0.002126 user, 0.001024 system)
100.00% CPU
8,518,420 processor cycles
1,579,472 bytes consed
