-std=c++11 flag increases executable size - c

I have C-style code being compiled as C++, like this:
#include <stdio.h>

int main()
{
    printf("hi\n");
    return 0;
}
The executable size is normally 8.50 KB (optimized for size with -Os -s).
When -std=c++11 is added, the size grows to 32.50 KB.
Does it have anything to do with stdio.h, or why does the size change?
Tested with g++ 6.3.0 in Code::Blocks. OS: Windows 8.1 Pro.

Related

Why does the `memmove` function show a significant performance difference on two different computers?

I've run the following C code on two different computers.
#include <string.h>

int a[100000];

int main(){
    for(int sz = 100000; sz > 1; sz --){
        memmove(a, a+1, 4*(sz - 1));
    }
}
Computer A takes 800 ms, while computer B takes 6200 ms. B's running time is always far higher than A's.
Environment
Compile command (the shell is bash; adding the -O optimization flag to gcc does not change the runtime):
gcc myfile.c -o mybin
time ./mybin
Computer A
gcc 9.3.0
glibc: ldd (Ubuntu GLIBC 2.31-0ubuntu9) 2.31
uname_result(system='Linux', release='5.4.0-100-generic', machine='x86_64')
CPU: Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
Computer B
gcc 9.3.0
glibc: ldd (Ubuntu GLIBC 2.31-0ubuntu9.2) 2.31
uname_result(system='Linux', release='4.4.0-210-generic', machine='x86_64')
CPU: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz
Question
I then ran the following file on the same virtual machines, with different kernel versions (4.4.210-0404210-generic x86_64 and 5.4.0-113-generic x86_64), both with gcc 9.3.0. Both tests took less than 500 ms.
#include <string.h>
#include <time.h>
#include <stdio.h>

#define TICK(X) clock_t X = clock()
#define TOCK(X) printf("time %s: %g sec.\n", (#X), (double)(clock() - (X)) / CLOCKS_PER_SEC)

int a[100000];

int main(){
    TICK(timer);
    for(int sz = 100000; sz > 100; sz --){
        memmove(a, a+1, 4*(sz - 1));
    }
    TOCK(timer);
}
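To rule out any difference in CPU-time accounting between the two kernels, the same loop can also be timed with a wall-clock timer; a minimal sketch, assuming POSIX clock_gettime is available:

#include <string.h>
#include <stdio.h>
#include <time.h>

int a[100000];

/* Wall-clock seconds via CLOCK_MONOTONIC (POSIX). */
static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    double t0 = now_sec();
    for (int sz = 100000; sz > 1; sz--) {
        memmove(a, a + 1, 4 * (sz - 1));
    }
    double t1 = now_sec();
    printf("memmove loop: %.3f sec.\n", t1 - t0);
    return 0;
}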
How can I find the cause?

Maximum size of size_t on Windows 10/ mingw-w64 compiler

I am using the mingw-w64 compiler (gcc 8.1.0) on Windows 10 (64-bit installation). My expectation is that the maximum value of size_t equals (2^64 - 1), but my programs always report SIZE_MAX as (2^32 - 1) only. What could be the reason? How can I get a maximum value of (2^64 - 1) in my C programs?
I am learning the C language and wanted to check the statement made in the book that on modern computers the maximum value of size_t can be (2^64 - 1). The piece of code I tried for checking this is below:
#include <stdio.h>
#include <stdint.h>

int main(){
    printf("%zu\n", sizeof(size_t));  /* sizeof yields a size_t, so %zu rather than %d */
    printf("%zu\n", SIZE_MAX);
    return 0;
}
Output:
4
4294967295
I am using only one flag, -std=c11, when compiling, and gcc --version returns:
gcc.exe (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 8.1.0
When checking gcc --version you state that you get
gcc.exe (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 8.1.0
The important part of the output is this: i686-posix-dwarf-rev0, more specifically the i686 part.
This tells us that GCC is built as a 32-bit application (i686 is a 32-bit architecture) and will default to creating 32-bit executables.
To build a 64-bit executable you need to add the -m64 flag.
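To confirm which target you are actually getting, a quick check is to compare the pointer and size_t widths; a minimal sketch (build it with and without -m64, assuming your toolchain supports a 64-bit target):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* On a 64-bit target both values are 8; with the i686 default both are 4. */
    printf("sizeof(void *) = %zu\n", sizeof(void *));
    printf("sizeof(size_t) = %zu\n", sizeof(size_t));
#if SIZE_MAX == UINT64_MAX
    puts("size_t is 64-bit");
#else
    puts("size_t is not 64-bit");
#endif
    return 0;
}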

Very slow speed of gcc compiled C-program under Linux

I have two OSes on my PC with an i7-3770 @ 3.40 GHz. One OS is the latest Kubuntu 18.04 Linux, the other is Windows 10 Pro, running on the same HDD.
I have tested a simple fun program written in C that does some arithmetic calculations from number theory. On Kubuntu it was compiled with gcc 7.3.0, on Windows with gcc 5.2.0 built by the MinGW-W64 project.
The result is surprising: the program ran 4 times slower on Linux than on Windows. On Windows the elapsed time is just 6 seconds; on Linux the elapsed time is 24 seconds, on the same hardware.
On Kubuntu I tried compiling with some CPU-specific options like "-march=corei7" etc., but nothing helped. The program uses the "math.h" library, so compilation is done with "-lm" on both systems. The source code is the same.
Is there a reason for this slow speed under Linux?
Furthermore, I compiled the same code on an older 32-bit machine with a Core Duo T2250 @ 1.73 GHz under Linux Mint 19 with gcc 7.3.0. The elapsed time was 28 seconds! Not much difference from the 64-bit machine running at double the frequency under Linux.
The source code is below; you can compile it and test it.
/* Program for playing with sigma(n) and tau(n) functions */
/* Compilation of code: "gcc name.c -o name -lm" */
#include <stdio.h>
#include <math.h>
#include <time.h>

int main(void)
{
    double i, nq, x, zacatek, konec, p;
    double odx, soucet, delitel, celkem, ZM;
    unsigned long cas1, cas2;

    i=(double)0; soucet=(double)0; celkem=(double)0; nq=(double)0;
    zacatek=(double)1; konec=(double)1000000; x=zacatek;
    ZM=(double)16 / (double)10;

    printf("\n Program for playing with sigma(n) and tau(n) functions \n");
    printf("---------------------------------------------------------\n");
    printf("Calculation is running in range from %.0lf to %.0lf\n\n\n", zacatek, konec);
    printf("Finding numbers which have sigma(n)/n = %.3lf\n\n", ZM);

    cas1=time(NULL);
    while (x <= konec) {
        i=1; celkem=0; nq=0;
        odx=sqrt(x)+1;
        while (i <= odx) {
            if (fmod(x, i)==0) {
                nq++;
                celkem=celkem+x/i+i;
            }
            i++;
        }
        nq=2*nq-1;
        if ((odx-floor(odx))==0) {celkem=celkem-odx;}
        if (fabs(celkem - (ZM*x)) < 0.001) {
            printf("%.0lf has sum of all divisors = %.3lf times the number itself (%.0lf, %.0lf)\n", x, ZM, celkem, nq+1);
        }
        x++;
    }
    cas2=time(NULL);

    printf("\n\nProgram ended.\n\n");
    printf("Elapsed time %lu seconds.\n\n", cas2-cas1);
    return (0);
}
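Since most of the inner loop's time goes into the fmod() and sqrt() library calls, one way to narrow it down is to time fmod() in isolation on both systems; a small sketch (the loop count and divisor are arbitrary), compiled the same way with "-lm":

/* Times only fmod(), to see whether the libm call itself differs between systems. */
#include <stdio.h>
#include <math.h>
#include <time.h>

int main(void)
{
    volatile double sink = 0;  /* prevents the calls from being optimized away */
    clock_t t0 = clock();
    for (double x = 1; x <= 10000000; x++) {
        sink += fmod(x, 97);
    }
    clock_t t1 = clock();
    printf("fmod loop: %.2f seconds (checksum %g)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, sink);
    return 0;
}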

glib.h negative array size error in 64 bit but not 32 bit build

I am working in a dev environment where we produce both 32 and 64 bit
executables. I have one application that is failing to build in 64 bit mode.
It uses inotify and includes glib.h to get the definitions for that.
I decided to see if a minimal program could reproduce the problem, and here it is.
The source for the test, glibtest.c:
#include <stdio.h>
#include <glib.h>

int
main (int argc, char ** argv)
{
    printf( "hello, I am glib test.\n\n");
}
Building in 32 bit mode...
[svn/glibtest] : gcc glibtest.c -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -m32
[svn/glibtest] : a.out
hello, I am glib test.
[svn/glibtest] :
Things compile in 32 bit mode and a.out prints what one expects.
Now if one compiles in 64 bit mode the error occurs.
[svn/glibtest] : gcc -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include glibtest.c
In file included from /usr/include/glib-2.0/glib/gasyncqueue.h:34,
from /usr/include/glib-2.0/glib.h:34,
from glibtest.c:7:
/usr/include/glib-2.0/glib/gthread.h: In function ‘g_once_init_enter’:
/usr/include/glib-2.0/glib/gthread.h:347: error: size of array ‘type name’ is negative
[svn/glibtest] :
In 64 bit mode the error points into gthread.h here...
#if defined (G_CAN_INLINE) || defined (__G_THREAD_C__)
G_INLINE_FUNC gboolean
g_once_init_enter (volatile gsize *value_location)
{
error>>>  if G_LIKELY ((gpointer) g_atomic_pointer_get (value_location) != NULL)
    return FALSE;
  else
    return g_once_init_enter_impl (value_location);
}
#endif /* G_CAN_INLINE || __G_THREAD_C__ */
Am I missing a needed header? Has anyone seen this before and found the solution? (yes, there is a similar post from a year ago that no one has answered.)
Centos 6.5, 'Linux tushar 2.6.32-431.17.1.el6.x86_64 #1 SMP Wed May 7 23:32:49 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux'
glib package is 1:1.2.10-33.el6
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)
Thanks.
/usr/lib/glib-2.0/include is, generally, platform-specific. It probably contains 32 bit-specific definitions. e.g., I have the following in /usr/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h:
#define GLIB_SIZEOF_SIZE_T 8
That would probably be 4 in your version, since it seems to be the 32-bit one.
Check that you have the correct glibconfig.h in your include path, with the proper settings for your build target (64-bit). Different targets (32-bit and 64-bit) need different glibconfig.h files.
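For what it's worth, the "size of array ... is negative" message is the usual compile-time assertion idiom: a header declares an array whose size is 1 when a configuration check holds and -1 when it does not, so a stale 32-bit glibconfig.h used in a 64-bit build surfaces as exactly this error. A generic sketch of the idiom (not glib's actual code; CONFIGURED_SIZEOF_SIZE_T is a hypothetical stand-in for a value recorded at configure time):

/* Compile-time size check, similar in spirit to what the glib headers rely on:
   if the condition is false, the array size is -1 and gcc reports
   "size of array ... is negative". */
#include <stddef.h>

#define STATIC_SIZE_CHECK(cond) typedef char static_size_check[(cond) ? 1 : -1]

/* Hypothetical stand-in for a value recorded in glibconfig.h at configure time.
   If the header was generated for a 32-bit target, the recorded value (4) no
   longer matches sizeof(size_t) in a 64-bit build, and compilation fails. */
#define CONFIGURED_SIZEOF_SIZE_T 8

STATIC_SIZE_CHECK(CONFIGURED_SIZEOF_SIZE_T == sizeof(size_t));

int main(void)
{
    return 0;
}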

valgrind/memcheck fails to release "large" memory chunks

Consider this small program:
#include <stdio.h>
#include <stdlib.h>

// Change 60000 to 70000 and valgrind (memcheck) eats my memory
#define L (60000)
#define M (100*(1<<20))

int main(void) {
    int i;
    for (i = 0; i < M; ++i) {
        unsigned char *a = malloc(L);
        a[i % L] = i % 128; // Touch something; a[0] is not enough
        free(a);
        if (i % (1<<16) == 0)
            fprintf(stderr, "i = %d\n", i);
    }
    return 0;
}
Compiling with gcc -o vg and running valgrind --leak-check=full ./vg works fine, with memcheck using roughly 1.5% of my memory. However, after changing L to 70000 (I suppose the magic limit is 1<<16), memcheck uses an ever-increasing amount of memory until the kernel finally kills it.
Is there anything one can do about this? There is obviously no leak, but there appears to be one in valgrind itself (!?), making it difficult to use for checking programs with lots of large and short-lived allocations.
Some background, not sure which is relevant:
$ valgrind --version
valgrind-3.7.0
$ gcc --version
gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3)
$ /lib/libc.so.6
GNU C Library stable release version 2.12, by Roland McGrath et al.
$ uname -rms
Linux 2.6.32-220.2.1.el6.x86_64 x86_64
This is very likely caused by a gcc 4.4 bug, which is bypassed in valgrind 3.8.0 (not yet released).
An extract from the Valgrind 3.8.0 NEWS file:
n-i-bz Bypass gcc4.4/4.5 wrong code generation causing out of memory or asserts
Set the resource limit of your process to unlimited using setrlimit, so that the kernel won't kill your process if you exceed the memory limit and will let you keep extending into the virtual address space.
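A sketch of that setrlimit call, assuming a Linux/POSIX system (the soft limit can be raised up to the hard limit without extra privileges):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Raise the address-space limit as far as allowed. */
    if (getrlimit(RLIMIT_AS, &rl) == 0) {
        rl.rlim_cur = rl.rlim_max;      /* soft limit up to the hard limit */
        if (setrlimit(RLIMIT_AS, &rl) != 0)
            perror("setrlimit");
    }

    /* ... rest of the program ... */
    return 0;
}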
Hope this helps.
