Trying to build and run this simple C program in MinGW produces strange results:
#include <stdio.h>
void main() {
    printf("%03d", 7);
}
If I build it with any of the standard-C compliance flags (-std=c89/99/11), the padding is ignored:
C:\>gcc -std=c11 a.c
C:\>a
7
Whereas in regular GNU C mode it works fine:
C:\>gcc a.c
C:\>a
007
Is this a bug in MinGW? Have I missed something? Or is the padding specifier really not a standard C feature?
For reference, here's the output of gcc -v on my system.
As suggested by 2501, the best workaround is to instead use MinGW-W64, which is actually a separate project from MinGW. It can still produce 32-bit binaries, despite the "W64" label.
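For what it's worth, a hedged sketch of building the same file as a 32-bit binary with a MinGW-W64 toolchain (the i686-w64-mingw32-gcc driver name is an assumption about how the 32-bit toolchain is installed, e.g. under MSYS2; a plain MinGW-W64 install may simply expose it as gcc):
i686-w64-mingw32-gcc -std=c11 -o a.exe a.c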
I'm trying to hunt down a problem using complex literals when compiling with GCC. Consider the following:
#include <stdio.h>
#include <complex.h>

int main(void)
{
    double complex z = CMPLX(0.0, -1.0);
    printf("z = %.1f%+.1fi\n", creal(z), cimag(z));
    return 0;
}
(slightly modified from the reference page). If I compile with Clang, it works as expected. However, if I use GCC, I get an undefined reference error:
gcc -std=c11 mwe.c
mwe.c:6:24: warning: implicit declaration of function 'CMPLX' ...
mwe.c:(...) undefined reference to `CMPLX'
I have tried this with GCC 4.7 and 7.2 on Linux and GCC 9 on macOS. The error messages change, but the net result remains the same. Reviewing the reference for CMPLX, this should be valid C11. Based on this answer and this post, it appears that GCC accepted this construct before.
My bottom line question is: Why can't I use CMPLX with GCC?
It appears that this is caused by a header/library disconnect on the systems I have. Compiling with the -save-temps flag shows that GCC uses the system header for complex.h. That means the selected Xcode SDK's usr/include/complex.h on macOS and /usr/include/complex.h on Linux. On macOS, the CMPLX macro is only defined when using Clang. The Linux box I have is RHEL 6, meaning the header is aimed at GCC 3, which did not have CMPLX. Based on the discussion on this bug report, it looks like making sure the macro is defined is not up to GCC.
The short answer is: The compiler/platform combination doesn't support it. Use the native compiler or update the system.
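For completeness, a minimal sketch of a fallback when the system <complex.h> does not define CMPLX (my own workaround, not part of the answer above; it assumes GCC 4.7+ for __builtin_complex, and the plain-arithmetic branch is only adequate for ordinary finite values, since a real CMPLX must also handle infinities, NaNs and signed zeros):
#include <stdio.h>
#include <complex.h>

#ifndef CMPLX
# if defined(__GNUC__)
   /* GCC 4.7+ (and recent Clang) provide __builtin_complex, which is how glibc defines CMPLX */
#  define CMPLX(x, y) __builtin_complex((double)(x), (double)(y))
# else
   /* last resort: fine for ordinary finite values only */
#  define CMPLX(x, y) ((double)(x) + I * (double)(y))
# endif
#endif

int main(void)
{
    double complex z = CMPLX(0.0, -1.0);
    printf("z = %.1f%+.1fi\n", creal(z), cimag(z));
    return 0;
}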
I have tried to compile this C code:
#define MAX_INT 2147483647

int main()
{
    int vector[MAX_INT];
    return 0;
}
I'm using the C compilers provided by both the MinGW and MSYS projects. The MinGW compiler is "gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)", which is the most recent version and has the win32 thread model, and the MSYS compiler is "gcc version 3.4.4 (msys special)" with the posix thread model.
That MAX_INT value is the same as the "__INT_MAX__" constant provided by the "limits.h" header.
How can I avoid this problem and get my simplest code compiled?
The main problem is that your stack will not be large enough to contain that array.
Try setting the stack size while compiling, using the following line, as suggested in Increase stack size when compiling with mingw?:
gcc -Wl,--stack,N
where N is the stack size in bytes, e.g. gcc -Wl,--stack,4194304
Also, as mentioned in the comments, you might have to compile for 64 bits, and you will need that much RAM or possibly a large page file (an array of INT_MAX ints is roughly 8 GiB).
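Alternatively, a minimal sketch (my own, not from the answer above) that puts the array on the heap instead of the stack, so the linker's stack setting no longer matters; you still need a 64-bit build and roughly 8 GiB of memory for INT_MAX ints:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* heap allocation instead of a stack array */
    int *vector = malloc((size_t)INT_MAX * sizeof *vector);
    if (vector == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    /* ... use vector[0] .. vector[INT_MAX - 1] ... */
    free(vector);
    return 0;
}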
I am using Code::Blocks 13.12 and it uses MinGW (GCC 4.7 & 4.8 series).
It supports call by reference (func1(int &a)) even though I selected a C project and not a C++ project. If I am not mistaken, there is no concept of call by reference in C; everything is call by value, even when pointers are used.
My question is: how do I use C-only features? Is there a setting for this? I saw that the toolchain uses mingw32-gcc.exe for C compilation.
How do I know which language standard (C11, C99, etc.) it is really using?
Name your files with a .c extension, and definitely not .cc or .cpp.
Compile with gcc on the command line, not g++.
And if in doubt, use the -std= command-line parameter to force the flavor of C you want (e.g. -std=c90, -std=c99, or even -std=c11). There's also -ansi.
Also, a quick and dirty way to verify that your code is being compiled as C and not C++ is to add this block of code to your source. If it's being compiled as C++, the compiler will generate an error:
#ifdef __cplusplus
int compile_time_assert[-1];   /* negative array size is an error; this declaration only exists under C++ */
#endif
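As for knowing which C standard is actually in effect, a minimal sketch (my own addition) that prints the standard version macros the compiler defines:
#include <stdio.h>

int main(void)
{
#if defined(__cplusplus)
    printf("compiled as C++ (__cplusplus = %ld)\n", (long)__cplusplus);
#elif defined(__STDC_VERSION__)
    /* 199901L means C99, 201112L means C11, 201710L means C17 */
    printf("compiled as C, __STDC_VERSION__ = %ld\n", (long)__STDC_VERSION__);
#else
    printf("compiled as C89/C90 (no __STDC_VERSION__)\n");
#endif
    return 0;
}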
#include <sys/syscall.h>
#define BUFSIZE 1024
main()
{
char buf[BUFSIZE];
int n;
while((n=read(0,buf,BUFSIZE))>0)
write(1,buf,n);
return 0;
}
When I compile this with gcc, it is fine.
But with g++ I get:
inandout.c:7:32: error: ‘read’ was not declared in this scope
while((n=read(0,buf,BUFSIZE))>0)
^
inandout.c:8:22: error: ‘write’ was not declared in this scope
write(1,buf,n);
^
Why is that?
This is because gcc is a C compiler, g++ is a C++ compiler, and C and C++ are different languages.
If you want to compile that source code as a C++ program, you must change it to be valid C++. For example, there are no implicit function declarations in C++, so you must include unistd.h for the read() and write() declarations. You also don't need the syscall.h header.
Also, it is only that simple because you have a small code snippet. Porting C code to C++ can be a nightmare, as there are roughly 50 differences, and in some cases code compiles fine in both languages but behaves differently.
P.S.: And instead of defining your own BUFSIZE, consider using the standard BUFSIZ from stdio.h :)
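For illustration, a minimal sketch of the snippet adjusted along those lines so that it compiles as both C and C++ (the switch to BUFSIZ and ssize_t is my choice, not part of the original code):
#include <stdio.h>    /* BUFSIZ */
#include <unistd.h>   /* read(), write(), ssize_t */

int main(void)
{
    char buf[BUFSIZ];
    ssize_t n;

    while ((n = read(0, buf, BUFSIZ)) > 0)
        write(1, buf, (size_t)n);
    return 0;
}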
You just need to add #include <unistd.h>.
C defaults functions that do not have a prototype to functions returning int, but you should have got warnings for that (did you use -Wall?).
C++ doesn't allow that; you need to include the correct header file, unistd.h, which you should also do in C.
I upgraded to gcc 4.8.5. In version 4.7 the compiler's standard library headers stopped including unistd.h in a number of places. This is why older gcc versions worked without including unistd.h explicitly.
https://gcc.gnu.org/gcc-4.7/porting_to.html
"C++ language issues
Header dependency changes
Many of the standard C++ library include files have been edited to no longer include unistd.h to remove namespace pollution. "
In my case I got "::write has not been declared" when I included stdio.h, but my previous gcc version 4.4 compiled it fine. This is a useful command to see which header files the preprocessor actually pulls in: g++ -H test.cpp
I have not made much effort to discover the cause, but gcc 4.8.1 is giving me a lot of trouble compiling old sources that combine C and C++ plus some new C++11 code.
I've managed to isolate the problem to this piece of code:
# include <argp.h>
# include <algorithm>
which compiles fine with g++ -std=c++0x -c -o test-temp.o test-temp.C on version 4.6.3, Ubuntu 12.04.
By contrast, with version 4.8.1, the same command line throws a lot of errors:
In file included from /home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/x86intrin.h:30:0,
from /home/lrleon/GCC/include/c++/4.8.1/bits/opt_random.h:33,
from /home/lrleon/GCC/include/c++/4.8.1/random:51,
from /home/lrleon/GCC/include/c++/4.8.1/bits/stl_algo.h:65,
from /home/lrleon/GCC/include/c++/4.8.1/algorithm:62,
from test-temp.C:4:
/home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/mmintrin.h: In function ‘__m64 _mm_cvtsi32_si64(int)’:
/home/lrleon/GCC/lib/gcc/x86_64-unknown-linux-gnu/4.8.1/include/mmintrin.h:61:54: error: can’t convert between vector values of different size
return (__m64) __builtin_ia32_vec_init_v2si (__i, 0);
^
... and much more.
The same happens if I execute
g++ -std=c++11 -c -o test-temp.o test-temp.C (again, version 4.8.1).
But, if I swap the header lines, that is
# include <algorithm>
# include <argp.h>
then everything compiles fine.
Can someone enlighten me as to what is happening?
I ran into the same problem. As it is really annoying, I tracked it down to <argp.h>.
This is the code (in the system header argp.h) that triggers the error on Ubuntu 14.04 / gcc 4.8.2:
/* This feature is available in gcc versions 2.5 and later. */
# if __GNUC__ < 2 || (__GNUC__ == 2 && __GNUC_MINOR__ < 5) || __STRICT_ANSI__
# define __attribute__(Spec) /* empty */
# endif
This is probably there to keep the header compatible with old gcc versions AND with strict ANSI mode. The problem is that --std=c++11 sets the __STRICT_ANSI__ macro.
I commented out the #define __attribute__(Spec) line and the compilation worked fine!
As it is not practical to edit a system header, a workaround is to use g++ --std=gnu++11 instead of g++ --std=c++11, since it does not define __STRICT_ANSI__. That worked in my case.
It seems to be a bug in gcc.
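If the headers really do have to stay in that order under --std=c++11, another possible workaround (my own sketch, not from the answer above, and it does touch a reserved identifier) is to undo the stub macro right after including argp.h, so that later system headers see the real __attribute__ again:
#include <argp.h>

/* argp.h may have defined __attribute__(...) away under __STRICT_ANSI__;
   remove the stub so the intrinsics headers pulled in by <algorithm>
   still see the compiler's real attribute syntax. */
#ifdef __attribute__
# undef __attribute__
#endif

#include <algorithm>
// ... rest of the translation unit ...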
This is a known bug, apparently some headers are missing extern "C" declarations at the right places:
I also just came across this issue with GCC 4.7.2 on Windows. It appears that all the intrin.h headers are missing the extern "C" part. Since the functions are always inline and thus the symbols never show up anywhere this has not been a problem before. But now that another header declares those functions a second time something must be done.
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56038
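For context, this is the kind of guard the report says is missing; a C-callable header normally wraps its declarations like this so that C++ translation units see them with C linkage (the function name below is just a hypothetical placeholder):
#ifdef __cplusplus
extern "C" {
#endif

void some_c_function(int x);   /* hypothetical C declaration */

#ifdef __cplusplus
}
#endif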
Two things come to mind:
1) It's a missing extern "C" in headers, which is not that rare.
2) Something is wrong with data alignment. Probably you are using an STL container to store SSE types, which doesn't guarantee alignment for those. In that case you should implement a custom allocator that uses an aligned malloc. But I think it would have compiled fine in that case and instead given you a segfault at runtime. But who knows what compilers can detect now :)
Here's something you might want to read on that topic: About memory alignment; About custom allocators
P.S.: a piece of your code would be nice.