I have the following tagged union in my code: https://github.com/EarlGray/SECD/blob/f2e364f84d194aea5cef9257630bf931e9f88cab/secd.h#L217
When I compile it on 64-bit Linux or OS X using gcc or clang, the size of cell_t is always 32 bytes (4 * sizeof(long), as expected).
When I compile it on Linux (Ubuntu 14.04, gcc 4.8) using -m32 switch, the size is 16 bytes (as expected).
When I compile the same code on OS X (10.9.5) using either gcc (4.8) or clang (Apple 6.0) with -m32, the size is 20 bytes. I tried debugging the program to see whether any union member might use a fifth word, but haven't found any. The result does not depend on the optimization level or the presence of debug information.
Any ideas why sizeof(cell_t) is 20 bytes?
On OS X:
sizeof(off_t) == 8
On Linux:
sizeof(off_t) == 4
You use that type in string_t. There may be other occurrences of this, but that was the first one I came across.
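A quick way to confirm this is to print sizeof(off_t) under the same flags you build with. The struct below is only a hypothetical stand-in for a string_t-like member, not the real definition from secd.h; the point is that any member embedding an off_t grows when off_t is 8 bytes instead of 4, and the enclosing union grows with it:

#include <stdio.h>
#include <sys/types.h>

/* Hypothetical stand-in for a string_t-like member; not the real secd.h layout. */
struct fake_string {
    const char *data;
    off_t offset;          /* 4 bytes on 32-bit Linux, 8 bytes on 32-bit OS X */
    size_t size;
};

int main(void)
{
    printf("sizeof(off_t)              = %zu\n", sizeof(off_t));
    printf("sizeof(struct fake_string) = %zu\n", sizeof(struct fake_string));
    return 0;
}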
I am using the mingw-w64 compiler (gcc 8.1.0) on Windows 10 (64-bit installation). My expectation is to get the maximum value of size_t equal to (2^64 - 1), but my programs always report the value of SIZE_MAX as (2^32 - 1) only. What could be the reason? How can I achieve a maximum value of (2^64 - 1) in my C programs?
I am learning the C language and wanted to check the statement made in the book that on modern computers the maximum value of size_t can be (2^64 - 1). The piece of code I tried for checking this is as below:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("%zu\n", sizeof(size_t));
    printf("%zu\n", SIZE_MAX);
    return 0;
}
Output:
4
4294967295
I am using only one flag, -std=c11, during compilation, and gcc --version returns:
gcc.exe (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 8.1.0
When checking gcc --version you state that you get
gcc.exe (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 8.1.0
The important part of the output is this: i686-posix-dwarf-rev0, more specifically the i686 part.
This tells us that GCC is built as a 32-bit application (i686 is a 32-bit system), and will default to creating 32-bit executables.
To build a 64-bit executable you need to add the -m64 flag.
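A minimal sketch of the same check, assuming a 64-bit-capable MinGW-w64 toolchain (for example the x86_64-w64-mingw32 build, or gcc -m64 where the toolchain supports it). The preprocessor test fails on a 32-bit target, so a successful build by itself already proves that size_t is 64 bits wide:

#include <stdio.h>
#include <stdint.h>

#if SIZE_MAX <= UINT32_MAX
#error "size_t is not 64 bits wide on this target"
#endif

int main(void)
{
    printf("%zu\n", sizeof(size_t));   /* 8 */
    printf("%zu\n", SIZE_MAX);         /* 18446744073709551615, i.e. 2^64 - 1 */
    return 0;
}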
Sample Code
#include "stdio.h"
#include <stdint.h>
int main()
{
double d1 = 210.01;
uint32_t m = 1000;
uint32_t v1 = (uint32_t) (d1 * m);
printf("%d",v1);
return 0;
}
Output
1. When compiling with the -m32 option (i.e. gcc -g3 -m32 test.c)
/test 174 # ./a.out
210009
2. When compiling with the -m64 option (i.e. gcc -g3 -m64 test.c)
test 176 # ./a.out
210010
Why do I get a difference?
My understanding "was", m would be promoted to double and multiplication would be cast downward to unit32_t. Moreover, since we are using stdint type integer, we would be further removing ambiguity related to architecture etc etc.
I know something is fishy here, but not able to pin it down.
Update:
Just to clarify (for one of the comments): the above behavior is seen with both gcc and g++.
I can confirm the results on my gcc (Ubuntu 5.2.1-22ubuntu2). What seems to happen is that the unoptimized 32-bit code uses the 387 FPU with the FMUL instruction, whereas the 64-bit code uses the SSE MULSD instruction (just run gcc -S test.c with the different flags and look at the assembler output). And as is well known, the 387 FPU registers that FMUL works on are 80 bits wide, with more precision than a 64-bit double, so it rounds differently here. The reason, of course, is that the exact value of the 64-bit IEEE double 210.01 is not 210.01, but
210.009999999999990905052982270717620849609375
and when you multiply by 1000, you're not actually just shifting the decimal point; after all, there is no decimal point, only a binary point, in the floating-point value, so the result must be rounded. In 64-bit doubles it rounds up to exactly 210010; in the 80-bit 387 FPU registers the calculation is more precise, the product stays just below 210010, and the conversion to uint32_t then truncates it down to 210009.
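To see that stored value for yourself, it is enough to print the literal with plenty of digits; on any system with IEEE-754 64-bit doubles this reproduces the number above:

#include <stdio.h>

int main(void)
{
    /* The nearest 64-bit double to 210.01 is slightly below 210.01,
       which is why 210.01 * 1000 has to be rounded one way or the other. */
    printf("%.45f\n", 210.01);
    return 0;
}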
After reading about this a bit more, I believe the result generated by gcc on a 32-bit arch is not standard conforming. Thus if you force the standard to C99 or C11 with -std=c99 or -std=c11, you will get the correct result:
% gcc -m32 -std=c11 test.c; ./a.out
210010
If you do not want to force C99 or C11 standard, you could also use the -fexcess-precision=standard switch.
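For instance, with the same test.c the following should print 210010 as well, since -std=c99/-std=c11 imply standard excess-precision handling anyway:

% gcc -m32 -fexcess-precision=standard test.c; ./a.out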
However, the fun does not stop here.
% gcc -m32 test.c; ./a.out
210009
% gcc -m32 -O3 test.c; ./a.out
210010
So you get the "correct" result if you compile with -O3; this is, of course, because the compiler constant-folds the calculation at compile time, and it does so in 64-bit double precision.
To confirm that extra precision affects it, you can use a long double:
#include "stdio.h"
#include <stdint.h>
int main()
{
long double d1 = 210.01; // double constant to long double!
uint32_t m = 1000;
uint32_t v1 = (uint32_t) (d1 * m);
printf("%d",v1);
return 0;
}
Now even -m64 rounds it down to 210009:
% gcc -m64 test.c; ./a.out
210009
I am working in a dev environment where we produce both 32 and 64 bit
executables. I have one application that is failing to build in 64 bit mode.
It uses inotify and includes glib.h to get the definitions for that.
I decided to see if a minimal program can cause the problem to happen and here it is.
The source for the test, glibtest.c:
#include <stdio.h>
#include <glib.h>

int
main (int argc, char ** argv)
{
    printf("hello, I am glib test.\n\n");
}
Building in 32 bit mode...
[svn/glibtest] : gcc glibtest.c -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -m32
[svn/glibtest] : a.out
hello, I am glib test.
[svn/glibtest] :
Things compile in 32 bit mode and a.out prints what one expects.
Now if one compiles in 64 bit mode the error occurs.
[svn/glibtest] : gcc -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include glibtest.c
In file included from /usr/include/glib-2.0/glib/gasyncqueue.h:34,
from /usr/include/glib-2.0/glib.h:34,
from glibtest.c:7:
/usr/include/glib-2.0/glib/gthread.h: In function ‘g_once_init_enter’:
/usr/include/glib-2.0/glib/gthread.h:347: error: size of array ‘type name’ is negative
[svn/glibtest] :
In 64 bit mode the error points into gthread.h here...
#if defined (G_CAN_INLINE) || defined (__G_THREAD_C__)
G_INLINE_FUNC gboolean
g_once_init_enter (volatile gsize *value_location)
{
error>>> if G_LIKELY ((gpointer) g_atomic_pointer_get (value_location) != NULL)
return FALSE;
else
return g_once_init_enter_impl (value_location);
}
#endif /* G_CAN_INLINE || __G_THREAD_C__ */
Am I missing a needed header? Has anyone seen this before and found the solution? (yes, there is a similar post from a year ago that no one has answered.)
Centos 6.5, 'Linux tushar 2.6.32-431.17.1.el6.x86_64 #1 SMP Wed May 7 23:32:49 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux'
glib package is 1:1.2.10-33.el6
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4)
Thanks.
/usr/lib/glib-2.0/include is, generally, platform-specific. It probably contains 32 bit-specific definitions. e.g., I have the following in /usr/lib/x86_64-linux-gnu/glib-2.0/include/glibconfig.h:
#define GLIB_SIZEOF_SIZE_T 8
That would probably be 4 in your version, since it seems to be the 32-bit one.
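The "size of array 'type name' is negative" message is the usual symptom of a compile-time assertion tripping: roughly speaking, glib's atomic macros check that the size of the value being accessed matches the size of a pointer, using the classic negative-array-size trick. A hedged illustration of the idiom (not glib's exact macro):

#include <stddef.h>
#include <stdio.h>

/* Compiles only when cond is true; when it is false, gcc reports a
   negative array size, much like the error quoted above. */
#define STATIC_ASSERT(cond) ((void) sizeof(char[(cond) ? 1 : -1]))

int main(void)
{
    /* This check holds on both 32-bit and 64-bit Linux, so it compiles quietly.
       In the failing build, the corresponding glib check ends up comparing a
       4-byte gsize (from a 32-bit glibconfig.h) with an 8-byte gpointer,
       takes the -1 branch, and gcc rejects the array type. */
    STATIC_ASSERT(sizeof(size_t) == sizeof(void *));
    puts("sizes match");
    return 0;
}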
Check that you have the correct glibconfig.h in your include path, with the proper settings for your build target (64-bit). Different targets (32-bit and 64-bit) need different glibconfig.h files.
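A hedged suggestion for getting the right one automatically: let pkg-config supply the platform-specific include directories (it is what adds the directory containing the matching glibconfig.h), instead of hard-coding /usr/lib/glib-2.0/include:

gcc glibtest.c $(pkg-config --cflags glib-2.0)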
I am running RHEL 6.4 64 bit, and I was given a program to compile and execute. The program has:
cpu_set_t cpuset;
CPU_ZERO(&cpuset);
CPU_SET(data->num, &cpuset); // data is a structure; don't think it's relevant to my question
int ret = sched_setaffinity(0, sizeof(cpuset), &cpuset);
// print ret
When compiling for 32-bit or 64-bit with gcc or icc, there are no compilation errors. The call returns 0 and correctly produces the result when compiled with -m32 (as a 32-bit binary), but when compiled with a 64-bit compiler it has returned 1, 2, and 128, seemingly at random across runs and without any recompilation. Could someone help me troubleshoot and identify what is going wrong when I compile this for 64-bit and execute it? Thanks for any help.
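A stripped-down, self-contained sketch of the same call with error reporting (the CPU index 1 is just a placeholder for data->num); sched_setaffinity() returns 0 on success and -1 with errno set on failure, so printing strerror(errno) should narrow down what differs between the 32-bit and 64-bit builds:

#define _GNU_SOURCE
#include <sched.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(1, &cpuset);    /* placeholder for data->num */

    if (sched_setaffinity(0, sizeof(cpuset), &cpuset) == -1)
        printf("sched_setaffinity failed: %s\n", strerror(errno));
    else
        printf("affinity set\n");
    return 0;
}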
I'm playing with an open-source project that contains the following code:
uint32_t addr = htonl(* (uint32_t *) RTA_DATA(rth));

if (htonl(13) == 13) {
    // running on big endian system
} else {
    // running on little endian system
    addr = __builtin_bswap32(addr);
}
It looks like it checks whether the system is big endian or little endian with if (htonl(13) == 13). Is that correct? Could you please explain why the check is done this way, and why 13 is used?
Also, the addr = __builtin_bswap32(addr); line causes a compilation problem: "undefined reference". Is there a solution for that? It looks like that function no longer exists in newer versions of the gcc libraries. Is that correct?
EDIT:
The toolchain I use is toolchain-i386_gcc-4.1.2_uClibc-0.9.30.1
These are the options I used in the compilation.
For the C-to-object compilation:
-DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -I. -I/opt/lampp/htdocs/backfire/staging_dir/target-i386_uClibc-0.9.30.1/usr/include -O2 -pipe -march=i486 -funit-at-a-time -fhonour-copts -D_GNU_SOURCE -MT
For the object-to-binary (linker) step:
-O2 -pipe -march=i486 -funit-at-a-time -fhonour-copts -D_GNU_SOURCE -L/opt/lampp/htdocs/backfire/staging_dir/target-i386_uClibc-0.9.30.1/usr/lib -L/opt/lampp/htdocs/backfire/staging_dir/target-i386_uClibc-0.9.30.1/lib -L/opt/lampp/htdocs/backfire/staging_dir/toolchain-i386_gcc-4.1.2_uClibc-0.9.30.1/lib -Wl,-rpath-link=/opt/lampp/htdocs/backfire/staging_dir/target-i386_uClibc-0.9.30.1/usr/lib
htonl converts a "host-order" number to network byte order. Host order is whatever endianness you have on the system running the code. Network byte order is big-endian. If host-to-network is big-to-big, that means no change -- which is what 13 -> 13 would verify. On the other hand, if host-to-network is small-to-big, that means you'll get swapping, so the least-significant byte 13 (least because changing it by 1 changes the overall number only by 1) would become most-significant-byte 13 (most because changing that byte by one changes the overall number by the largest amount), and 13 -> (13 << 24).
13 specifically is unimportant. You could use any number, so long as its little-endian representation isn't the same as its big-endian representation. (0 would be bad, because 0 byte-swapped is still 0. The same goes for 65536 + 256, because its 32-bit representation is 00 01 01 00 in both big-endian and little-endian.)
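A standalone sketch of the same detection, in case you want to see which branch your system takes (htonl is declared in <arpa/inet.h> on POSIX systems):

#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    if (htonl(13) == 13)
        puts("big endian: host order already equals network order");
    else
        puts("little endian: htonl swapped the bytes");
    return 0;
}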
Note that there are also mixed-endian systems where for the 32-bit number 0x12345678, you'd have bytes not in the order 12 34 56 78 (big-endian) or 78 56 34 12 (little-endian): 34 12 78 56 for one, I believe. These systems aren't common, but they do still exist, and the code here wouldn't handle them correctly.
http://gcc.gnu.org/onlinedocs/gcc-4.2.0/gcc/Other-Builtins.html and http://gcc.gnu.org/onlinedocs/gcc-4.3.0/gcc/Other-Builtins.html suggest __builtin_bswap32 was added in gcc 4.3, so your gcc 4.1.2 toolchain wouldn't have it.
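If upgrading the toolchain is not an option, a plain C byte swap is a reasonable drop-in replacement that any compiler can build (the name my_bswap32 is just a placeholder):

#include <stdint.h>

/* Portable 32-bit byte swap for toolchains without __builtin_bswap32. */
static uint32_t my_bswap32(uint32_t x)
{
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}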