Why does the `memmove` function show such a significant performance difference on two different computers? - c

I ran the following C code on two different computers.
#include <string.h>
int a[100000];
int main(){
    for(int sz = 100000; sz > 1; sz--){
        memmove(a, a + 1, 4 * (sz - 1));
    }
}
Computer A takes about 800 ms, while computer B takes about 6200 ms. B's running time is consistently far greater than A's.
Environment
Shell is bash. Compile and run commands (adding gcc's -O optimization flag does not change the runtime):
gcc myfile.c -o mybin
time ./mybin
Computer A
gcc 9.3.0
glibc: ldd (Ubuntu GLIBC 2.31-0ubuntu9) 2.31
uname_result(system='Linux', release='5.4.0-100-generic', machine='x86_64')
CPU: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
Computer B
gcc 9.3.0
glibc: ldd (Ubuntu GLIBC 2.31-0ubuntu9.2) 2.31
uname_result(system='Linux', release='4.4.0-210-generic', machine='x86_64')
CPU: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz
Question
I then ran the following program on identical virtual machines with different kernel versions (4.4.210-0404210-generic x86_64 and 5.4.0-113-generic x86_64), both with gcc 9.3.0. Both runs took less than 500 ms.
#include <string.h>
#include <time.h>
#include <stdio.h>
#define TICK(X) clock_t X = clock()
#define TOCK(X) printf("time %s: %g sec.\n", (#X), (double)(clock() - (X)) / CLOCKS_PER_SEC)
int a[100000];
int main(){
    TICK(timer);
    for(int sz = 100000; sz > 100; sz--){
        memmove(a, a + 1, 4 * (sz - 1));
    }
    TOCK(timer);
}
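Note that clock() in the TICK/TOCK macros measures CPU time, not wall-clock time, while the original comparison used time ./mybin, which reports wall time. A minimal wall-clock sketch, assuming a POSIX system with clock_gettime, could look like this:
#include <stdio.h>
#include <string.h>
#include <time.h>

int a[100000];

int main(void){
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int sz = 100000; sz > 100; sz--){
        memmove(a, a + 1, 4 * (sz - 1));
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    /* elapsed wall time in seconds */
    printf("wall time: %g sec.\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}
For a single-threaded, CPU-bound loop like this the two measurements should be close, but the wall-clock number is the one directly comparable to the time output above.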
How can I find the cause?

Related

Very slow speed of gcc compiled C-program under Linux

I have two OSes on my PC with an i7-3770 @ 3.40 GHz. One OS is the latest Kubuntu 18.04 Linux; the other is Windows 10 Pro running on the same HDD.
I tested a simple toy program written in C that does some arithmetic calculations from number theory. On Kubuntu it was compiled with gcc 7.3.0; on Windows with gcc 5.2.0 built by the MinGW-W64 project.
The result is surprising: the program ran 4 times slower on Linux than on Windows.
On Windows the elapsed time is just 6 seconds; on Linux it is 24 seconds, on the same hardware.
On Kubuntu I tried compiling with some CPU-specific options like "gcc -corei7" etc., but nothing helped. The program uses the "math.h" library, so it is compiled with "-lm" on both systems. The source code is the same.
Is there a reason for this slow speed under Linux?
Furthermore, I compiled the same code on an older 32-bit machine with a Core Duo T2250 @ 1.73 GHz under Linux Mint 19 with gcc 7.3.0. The elapsed time was 28 seconds! Not much different from the 64-bit machine running at double the frequency under Linux.
The source code is below; you can compile and test it.
/* Program for playing with sigma(n) and tau(n) functions */
/* Compilation of code: "gcc name.c -o name -lm" */
/* Identifier glossary (Czech): zacatek = start, konec = end, soucet = sum,
   delitel = divisor, celkem = total, cas = time */
#include <stdio.h>
#include <math.h>
#include <time.h>

int main(void)
{
    double i, nq, x, zacatek, konec, p;
    double odx, soucet, delitel, celkem, ZM;
    unsigned long cas1, cas2;

    i = (double)0; soucet = (double)0; celkem = (double)0; nq = (double)0;
    zacatek = (double)1; konec = (double)1000000; x = zacatek;
    ZM = (double)16 / (double)10;

    printf("\n Program for playing with sigma(n) and tau(n) functions \n");
    printf("---------------------------------------------------------\n");
    printf("Calculation is running in range from %.0lf to %.0lf\n\n\n", zacatek, konec);
    printf("Finding numbers which have sigma(n)/n = %.3lf\n\n", ZM);

    cas1 = time(NULL);
    while (x <= konec) {
        i = 1; celkem = 0; nq = 0;
        odx = sqrt(x) + 1;
        while (i <= odx) {
            if (fmod(x, i) == 0) {
                nq++;
                celkem = celkem + x/i + i;
            }
            i++;
        }
        nq = 2*nq - 1;
        if ((odx - floor(odx)) == 0) { celkem = celkem - odx; }
        if (fabs(celkem - (ZM*x)) < 0.001) {
            printf("%.0lf has sum of all divisors = %.3lf times the number itself (%.0lf, %.0lf)\n", x, ZM, celkem, nq+1);
        }
        x++;
    }
    cas2 = time(NULL);

    printf("\n\nProgram ended.\n\n");
    printf("Elapsed time %lu seconds.\n\n", cas2 - cas1);
    return (0);
}

-std=c++11 flag increases executable size

Having C style code in C++ like:
#include <stdio.h>
int main()
{
    printf("hi\n");
    return 0;
}
The executable size is normally 8.50 KB (optimized for size with -Os -s).
When -std=c++11 is added, the size grows to 32.50 KB.
Does it have anything to do with stdio.h or why does the size change?
Tested with g++ version 6.3.0 on code::blocks. OS: Windows 8.1 pro

gcc's __builtin_memcpy performance with certain numbers of bytes is terrible compared to clang's

I thought I'd share this here first to get your opinions before doing anything else. While designing an algorithm, I found that the performance of gcc-compiled code was catastrophic compared to clang's for some simple code.
How to reproduce
Create a test.c file containing this code:
#include <sys/stat.h>
#include <sys/types.h>
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>
#include <string.h>
int main(int argc, char *argv[]) {
    const uint64_t size = 1000000000;
    const size_t alloc_mem = size * sizeof(uint8_t);
    uint8_t *mem = (uint8_t*)malloc(alloc_mem);
    for (uint_fast64_t i = 0; i < size; i++)
        mem[i] = (uint8_t)(i >> 7);

    uint8_t block = 0;
    uint_fast64_t counter = 0;
    uint64_t total = 0x123456789abcdefllu;
    uint64_t receiver = 0;

    for (block = 1; block <= 8; block++) {
        printf("%u ...\n", block);
        counter = 0;
        while (counter < size - 8) {
            __builtin_memcpy(&receiver, &mem[counter], block);
            receiver &= (0xffffffffffffffffllu >> (64 - ((block) << 3)));
            total += ((receiver * 0x321654987cbafedllu) >> 48);
            counter += block;
        }
    }
    printf("=> %llu\n", total);
    return EXIT_SUCCESS;
}
gcc
Compile and run:
gcc-7 -O3 test.c
time ./a.out
1 ...
2 ...
3 ...
4 ...
5 ...
6 ...
7 ...
8 ...
=> 82075168519762377
real 0m23.367s
user 0m22.634s
sys 0m0.495s
info:
gcc-7 -v
Using built-in specs.
COLLECT_GCC=gcc-7
COLLECT_LTO_WRAPPER=/usr/local/Cellar/gcc/7.3.0/libexec/gcc/x86_64-apple-darwin17.4.0/7.3.0/lto-wrapper
Target: x86_64-apple-darwin17.4.0
Configured with: ../configure --build=x86_64-apple-darwin17.4.0 --prefix=/usr/local/Cellar/gcc/7.3.0 --libdir=/usr/local/Cellar/gcc/7.3.0/lib/gcc/7 --enable-languages=c,c++,objc,obj-c++,fortran --program-suffix=-7 --with-gmp=/usr/local/opt/gmp --with-mpfr=/usr/local/opt/mpfr --with-mpc=/usr/local/opt/libmpc --with-isl=/usr/local/opt/isl --with-system-zlib --enable-checking=release --with-pkgversion='Homebrew GCC 7.3.0' --with-bugurl=https://github.com/Homebrew/homebrew-core/issues --disable-nls
Thread model: posix
gcc version 7.3.0 (Homebrew GCC 7.3.0)
So we get about 23s of user time. Now let's do the same with cc (clang on macOS):
clang
cc -O3 test.c
time ./a.out
1 ...
2 ...
3 ...
4 ...
5 ...
6 ...
7 ...
8 ...
=> 82075168519762377
real 0m9.832s
user 0m9.310s
sys 0m0.442s
info:
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin17.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
That's more than 2.5x faster! Any thoughts?
I replaced the __builtin_memcpy function with memcpy to test things out, and this time the compiled code runs in about 34s on both sides: consistent, and slower as expected.
It would appear that the combination of __builtin_memcpy and bitmasking is interpreted very differently by the two compilers.
I had a look at the assembly code but, not being an asm expert, couldn't see anything that stood out to explain such a drop in performance.
Edit 03-05-2018:
Posted this bug : https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84719.
I find it suspicious that you get different code for memcpy vs __builtin_memcpy. I don't think that's supposed to happen, and indeed I cannot reproduce it on my (linux) system.
If you add #pragma GCC unroll 16 (implemented in gcc-8+) before the for loop, gcc gets the same perf as clang (making block a constant is essential to optimize the code), so essentially llvm's unrolling is more aggressive than gcc's, which can be good or bad depending on cases. Still, feel free to report it to gcc, maybe they'll tweak the unrolling heuristics some day and an extra testcase could help.
Once unrolling is taken care of, gcc does ok for some values (block equals 4 or 8 in particular), but much worse for some others, in particular 3. But that's better analyzed with a smaller testcase without the loop on block. Gcc seems to have trouble with memcpy(,,3), it works much better if you always read 8 bytes (the next line already takes care of the extra bytes IIUC). Another thing that could be reported to gcc.
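As a concrete sketch of the "always read 8 bytes and mask" idea above (my illustration, not code from the bug report; it assumes a little-endian target and, as in the original loop condition, at least 8 readable bytes at src, with block in 1..8):
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: copy a fixed 8 bytes so the compiler sees a constant
   size, then mask off everything past the first `block` bytes. */
static inline uint64_t load_block(const uint8_t *src, unsigned block)
{
    uint64_t v;
    memcpy(&v, src, 8);                                         /* constant-size copy */
    return v & (0xffffffffffffffffULL >> (64 - (block << 3)));  /* keep the low `block` bytes */
}
The fixed copy size lets the compiler emit a single 8-byte load instead of a variable-length copy, and the mask reproduces the truncation that the variable-size copy would have performed.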

Units of `ru_maxrss` on BSD?

getrusage returns a struct containing
long ru_maxrss; /* max resident set size */
What are the units of ru_maxrss, on BSD operating systems? Is this value in bytes, or in kilobytes? By "BSD", I mean FreeBSD, OpenBSD, but not Mac OS X / Darwin.
The FreeBSD man page and OpenBSD man page say the units are in kilobytes:
ru_maxrss the maximum resident set size utilized (in kilobytes).
However, I know that on Darwin / Mac OS X, the units are in bytes (even though online man pages claim otherwise), and I have also found some statements online that claim on BSD ru_maxrss is also returned in bytes (see e.g. here). Which is correct?
On FreeBSD it's in kilobytes, as evidenced by the source in kern_clock.c:
/* Update resource usage integrals and maximums. */
MPASS(p->p_vmspace != NULL);
vm = p->p_vmspace;
ru = &td->td_ru;
ru->ru_ixrss += pgtok(vm->vm_tsize) * cnt;
ru->ru_idrss += pgtok(vm->vm_dsize) * cnt;
ru->ru_isrss += pgtok(vm->vm_ssize) * cnt;
rss = pgtok(vmspace_resident_count(vm));
if (ru->ru_maxrss < rss)
ru->ru_maxrss = rss;
pgtok() converts a page count to kilobytes.
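Conceptually (a sketch, not the exact kernel definition), the conversion is just pages times the page size expressed in kilobytes:
/* Sketch only: convert a page count to kilobytes, given the page size in bytes. */
static unsigned long pgtok_sketch(unsigned long pages, unsigned long page_size)
{
    return pages * (page_size / 1024);
}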
But who would believe the kernel source? Let's run a program:
$ cat x.c
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
int main(void)
{
    struct rusage r;
    if (getrusage(RUSAGE_SELF, &r) == 0) {
        printf("ru_maxrss=%lu\n", (unsigned long)r.ru_maxrss);
    }
    return 0;
}
$ ./a.out
ru_maxrss=5824
$ ./a.out
ru_maxrss=0
$ ./a.out
ru_maxrss=216
$ ./a.out
ru_maxrss=1880
These numbers aren't multiples of 1024, so they can't be bytes.
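A more direct check, my own sketch rather than anything from the original post: touch a known amount of memory and look at the order of magnitude of ru_maxrss. With roughly 64 MB resident (plus a bit of runtime overhead), a kilobyte unit should report a value around 65536, while a byte unit would report a value around 67 million:
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t len = 64u << 20;            /* 64 MB */
    char *p = malloc(len);
    if (p == NULL)
        return 1;
    memset(p, 1, len);                 /* force the pages to become resident */

    struct rusage r;
    if (getrusage(RUSAGE_SELF, &r) == 0)
        printf("ru_maxrss=%lu\n", (unsigned long)r.ru_maxrss);

    free(p);
    return 0;
}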

learning sample of likely() and unlikely() compiler hints

How can I demonstrate to students the usefulness of the likely and unlikely compiler hints (__builtin_expect)?
Can you write some sample code that is several times faster with these hints than without them?
Here is the one I use, a really inefficient implementation of the Fibonacci numbers:
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <time.h>
#include <assert.h>

#define likely(x)   __builtin_expect((x), 1)
#define unlikely(x) __builtin_expect((x), 0)

uint64_t fib(uint64_t n)
{
    if (opt(n == 0 || n == 1)) {
        return n;
    } else {
        return fib(n - 2) + fib(n - 1);
    }
}

int main(int argc, char **argv)
{
    int i, max = 45;
    clock_t tm;

    if (argc == 2) {
        max = atoi(argv[1]);   /* needs <stdlib.h> */
        assert(max > 0);
    } else {
        assert(argc == 1);
    }

    tm = -clock();
    for (i = 0; i <= max; ++i)
        printf("fib(%d) = %" PRIu64 "\n", i, fib(i));
    tm += clock();

    printf("Time elapsed: %.3fs\n", (double)tm / CLOCKS_PER_SEC);
    return 0;
}
To demonstrate, using GCC:
~% gcc -O2 -Dopt= -o test-nrm test.c
~% ./test-nrm
...
fib(45) = 1134903170
Time elapsed: 34.290s
~% gcc -O2 -Dopt=unlikely -o test-opt test.c
~% ./test-opt
...
fib(45) = 1134903170
Time elapsed: 33.530s
A few hundred milliseconds less. This gain is due to the programmer-aided branch prediction.
But now, here is what the programmer should really be doing instead:
~% gcc -O2 -Dopt= -fprofile-generate -o test.prof test.c
~% ./test.prof
...
fib(45) = 1134903170
Time elapsed: 77.530s (this run is slowed down by profile generation)
~% gcc -O2 -Dopt= -fprofile-use -o test.good test.c
~% ./test.good
fib(45) = 1134903170
Time elapsed: 17.760s
With compiler-aided runtime profiling, we reduced the runtime from the original 34.290s to 17.760s. Much better than with programmer-aided branch prediction!
From this blog post: I think likely and unlikely are mostly obsolete. Even very cheap CPUs (the ARM Cortex A20 in the example) have branch predictors, and there is no penalty regardless of whether the jump is taken or not. When you introduce likely/unlikely, the results will be either the same or worse (because the compiler has generated more instructions).

Resources