I have tried to compile this C code:
#define MAX_INT 2147483647
int main()
{
int vector[MAX_INT];
return 0;
}
I'm using the C compilers provided by both the MinGW and MSYS projects. The MinGW compiler is "gcc version 6.3.0 (MinGW.org GCC-6.3.0-1)", which is the most recent version and uses the win32 thread model, and the MSYS compiler is "gcc version 3.4.4 (msys special)" with the posix thread model.
That MAX_INT value is the same as the "__INT_MAX__" constant used by the "limits.h" header (i.e., INT_MAX).
How can I avoid this problem and get my simplest code compiled?
The main problem is that your stack is not large enough to contain that array.
Try setting the stack size while compiling, using the following option, as suggested in "Increase stack size when compiling with mingw?":
gcc -Wl,--stack,N
where N is the stack size in bytes, e.g. gcc -Wl,--stack,4194304.
Also, as mentioned in the comments, you might have to compile for 64 bits, and you will need that much RAM or possibly a large page file.
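If raising the stack limit is not enough (an array of MAX_INT ints is roughly 8 GiB), an alternative that is not part of the original answer is to allocate the array on the heap instead. A minimal sketch, assuming a 64-bit build with enough memory:
#include <stdio.h>
#include <stdlib.h>
#define MAX_INT 2147483647
int main(void)
{
    /* Heap allocation avoids the stack-size limit entirely.  Note that
       MAX_INT ints is roughly 8 GiB, so this only succeeds on a 64-bit
       build with enough RAM/page file available. */
    int *vector = malloc((size_t)MAX_INT * sizeof *vector);
    if (vector == NULL) {
        perror("malloc");
        return 1;
    }
    /* ... use vector[0] .. vector[MAX_INT - 1] ... */
    free(vector);
    return 0;
}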
I am trying to copy a string larger than the "string" variable. I know the reason for this warning: I am trying to fit a 21-byte string into a 6-byte region. What confuses me is why I am not getting the warning from the compiler on Windows.
On Windows I am using MinGW with Visual Studio Code, and the program runs the loop with no warning of any kind, while on Linux it shows this warning:
rtos_test.c: In function 'main':
rtos_test.c:18:5: warning: '__builtin_memcpy' writing 21 bytes into a region of size 6 overflows the destination [-Wstringop-overflow=]
18 | strcpy(string, "Too long to fit ahan");
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#include <stdio.h>
#include <stdint.h>
#include <pthread.h>
#include <string.h>
uint8_t test = 0;
char string[] = "Short";
int main()
{
while (test < 12)
{
printf("\nA sample C program\n\n");
test++;
}
strcpy(string, "Too long to fit ahan");
return 0;
}
I don't have enough reputation points to comment on your post.
I think the -Wall flag is enabled for gcc on Linux; try adding -Wall to the compiler flags in your IDE on Windows.
Additionally, I checked with a few compilers and saw that
char string[] = "Short";
only allocates 6 bytes for string.
Your code uses string incorrectly; if you write more than the allocated space, the program may crash. You can verify the allocation size via the generated assembly on Windows:
└─[0] <> gcc test.c -S
test.c: In function ‘main’:
test.c:18:5: warning: ‘__builtin_memcpy’ writing 21 bytes into a region of size 6 overflows the destination [-Wstringop-overflow=]
18 | strcpy(stringssss, "Too long to fit ahan");
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
┌─[longkl#VN-MF10-NC1011M] - [~/tmp] - [2021-12-22 07:00:36]
└─[0] <> grep stringsss test.s
.globl stringssss
.type stringssss, #object
.size stringssss, 6
This warning on Linux implies that GCC replaced the strcpy() call with a builtin (__builtin_memcpy) and that GCC can detect, and is configured to detect, such an error. That may not be the case on Windows, depending on compiler options, version, mood, etc.
You are also comparing Windows and Linux, which are very different platforms; don't expect the same behavior on both. GCC is not very Windows-oriented either (MinGW = Minimalist GNU for Windows). Even between Linux distros, GCC differs; there is a huge number of variables to consider, especially when optimizations are involved.
To sum up, different environments produce different results, warnings and errors. You can't do much about that beyond fixing your code when it relies on environment-specific behavior (often without knowing it) or tweaking compiler options. Most often the answer is to fix your source; it is the source of your problems ~100% of the time.
As a side note, setting up CI across different environments is a great bug-catching system, since behavior that looks fine on one system may not on another, as in your case, where the memory corruption would actually happen on both Linux and Windows.
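Whichever compiler warns, the actual fix belongs in the source: make the destination large enough for the longest string, or use a bounded copy. A minimal sketch of that fix, assuming a 32-byte buffer is acceptable (the size is an arbitrary example):
#include <stdio.h>
#include <string.h>
/* Large enough for the longest string it ever has to hold. */
char string[32] = "Short";
int main(void)
{
    /* snprintf() never writes past sizeof string and always
       NUL-terminates the result within the buffer. */
    snprintf(string, sizeof string, "%s", "Too long to fit ahan");
    printf("%s\n", string);
    return 0;
}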
Trying to build+run this simple C program in MinGW produces strange results
#include <stdio.h>
void main() {
printf("%03d", 7);
};
If I build it with any of the standard-C compliance flags (-std=c89/99/11), the padding is ignored:
C:\>gcc -std=c11 a.c
C:\>a
7
Whereas in regular GNU C mode it works fine:
C:\>gcc a.c
C:\>a
007
Is this a bug in MinGW? Have I missed something? Or is the padding specifier really not a standard C feature?
For reference, here's the output of gcc -v on my system.
As suggested by 2501, the best workaround is to instead use MinGW-W64, which is actually a separate project from MinGW. It can still produce 32-bit binaries, despite the "W64" label.
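As a quick, hedged self-check (not from the original answer), the toolchain can be asked to format into a buffer and the result compared against the expected text, which makes it easy to try each -std= flag combination:
#include <stdio.h>
#include <string.h>
int main(void)
{
    char buf[8];
    /* A conforming printf family must render "%03d" with 7 as "007". */
    sprintf(buf, "%03d", 7);
    if (strcmp(buf, "007") == 0)
        puts("zero-padding honoured");
    else
        printf("zero-padding ignored: got \"%s\"\n", buf);
    return 0;
}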
I have installed Cygwin (CYGWIN_NT-6.1 AlexReynolds-PC 1.7.27(0.271/5/3) 2013-12-09 11:54 x86_64 Cygwin) and GNU gcc/g++ 4.8.1.
I am compiling some tools that use POSIX C I/O routines, such as fseeko(), and I get a fatal error of the following sort:
error: ‘fseeko’ was not declared in this scope
int retValue = fseeko(stream, offset, whence);
Is fseeko() available in GNU gcc/g++ 4.8.1 on Cygwin? Are alternatives available which reliably honor a 64-bit offset, if not?
fseeko() is available on my install of Cygwin (CYGWIN_NT-6.1-WOW64 1.7.25(0.270/5/3) 2013-08-31 20:39 i686 Cygwin) with GCC 4.7.3. But if your install doesn't have it for some reason, you have a couple of alternatives:
fseek(), with the caveat that the offset is likely limited to 32 bits instead of 64 (depending on sizeof(long))
fsetpos(), which takes an fpos_t for the offset. However, fpos_t may be an opaque structure, so the only reliable way to use it is to call fgetpos() to get the current position and then later call fsetpos() to restore the offset to that earlier position; you can't use it to seek to an arbitrary offset otherwise.
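A minimal sketch of that fgetpos()/fsetpos() save-and-restore pattern (the helper name is illustrative, not from the original answer):
#include <stdio.h>
/* Peek at the next character without moving the logical position:
   save it with fgetpos(), read, then restore with fsetpos().
   fpos_t is treated as fully opaque throughout. */
int peek_next_char(FILE *stream)
{
    fpos_t saved;
    int c;
    if (fgetpos(stream, &saved) != 0)
        return EOF;
    c = fgetc(stream);
    if (fsetpos(stream, &saved) != 0)
        return EOF;
    return c;
}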
In my case the error was because I compiled the program with --std=c++11. Changing it to --std=gnu++11 fixed the problem with compilation, but now I wonder if I should be using fseeko at all.
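A hedged sketch of how one might keep a strict -std= flag and still get fseeko(): on Cygwin the declaration appears to be guarded by POSIX feature-test macros in newlib's headers, so requesting them explicitly before any include should expose it (the file name below is just a placeholder):
/* Ask for the POSIX declarations even under a strict -std= flag. */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <sys/types.h>   /* off_t */
int main(void)
{
    FILE *f = fopen("data.bin", "rb");
    if (f == NULL)
        return 1;
    /* off_t is 8 bytes on Cygwin, so offsets beyond 2 GiB are fine. */
    if (fseeko(f, (off_t)1 << 32, SEEK_SET) != 0)
        perror("fseeko");
    fclose(f);
    return 0;
}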
Then I peeked at Cygwin's /usr/include/stdio.h and discovered the same thing as in this discussion. There is an interesting reply:
On Oct 29 14:32, Hongliang Wang wrote:
Hello all,
My platform is WindowsXP+SP2, Cygwin DLL release version is 1.5.24-2
I am trying to make my program support large files, so in stdio.h I found
356 #ifdef __LARGE64_FILES
357 #if !defined(__CYGWIN__) || defined(_COMPILING_NEWLIB)
However, when I tried to compile with _COMPILING_NEWLIB, it fails
Never do that. It should only be set when compiling newlib itself.
$ gcc -Wall -D_COMPILING_NEWLIB test.c -o test
/cygdrive/c/DOCUME~1/wan/LOCALS~1/Temp/ccUmErSH.o:test.c:(.text+0x3a):
undefined reference to `_fopen64'
collect2: ld returned 1 exit status
It seems as if fopen64 is mapped to _fopen64, while the latter is missing.
Could anybody tell me how to compile with _COMPILING_NEWLIB flag or how does Cygwin support large files?
Don't compile with _COMPILING_NEWLIB. 64 bit file access is the
natural file access type for Cygwin. off_t is 8 bytes. There are no
foo64 functions for that reason. Just use fopen and friends and you
get 64 bit file access for free.
Corinna
So there you have it.
Just use fseek. As long as your longs are 64 bits, there's no difference.
I wrote a program where the size of an array is taken as input from the user.
#include <stdio.h>
main()
{
int x;
scanf("%d", &x);
int y[x];
/* some stuff */
}
This program failed to compile on my school's compiler Turbo C (an antique compiler).
But when I tried this on my PC with GNU CC, it compiled successfully.
So my question is, is this a valid C program? Can I set the size of the array using a user's input?
It is a valid C program now, but it wasn't 15 years ago.
Either way, it's a buggy C program because x is used without any knowledge of how large it might be. The user can input a malicious value for x and cause the program to crash or worse.
C99 gives C programmers the ability to use variable length arrays, which are arrays whose sizes are not known until run time. -- C: A Reference Manual
C90 does not support variable length arrays; you can see this using the following command line:
gcc -std=c90 -pedantic code.c
you will see a warning like this:
warning: ISO C90 forbids variable length array ‘y’ [-Wvla]
but with C99 this is perfectly valid:
gcc -std=c99 -pedantic code.c
Instead of asking whether this is strictly valid C code, it may be better to ask whether it is good C code. Although it is valid, as you have seen, a number of compilers do not support variable length arrays.
Variable length arrays are not supported by a number of modern compilers, including Microsoft Visual Studio and some versions of the IBM XL compilers. As you have found, variable length arrays are not entirely portable. That's fine if the code will only be used on systems that support the feature, but not if it has to run on other systems. Instead, it may be better to allocate the array with a constant size using a reasonable limit, or to use malloc and free to create the array in a portable manner.
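A minimal sketch of the malloc()/free() alternative mentioned above, also folding in the earlier point about validating the user-supplied size (the upper limit is an arbitrary example):
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    int x;
    /* Reject garbage, non-positive sizes, and absurdly large requests. */
    if (scanf("%d", &x) != 1 || x <= 0 || x > 1000000) {
        fprintf(stderr, "invalid size\n");
        return 1;
    }
    /* Works even on compilers without VLA support, and allocation
       failure can be detected instead of silently smashing the stack. */
    int *y = malloc((size_t)x * sizeof *y);
    if (y == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    /* some stuff */
    free(y);
    return 0;
}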
Suppose I have the following program:
#include <stdio.h>
int main()
{
printf("This is a sample C program.\n");
return 0;
}
If I compile it with the Microsoft compiler (cl.exe /O1 sample.c) on a Windows 7 32-bit machine, then it outputs an executable file that is 44 KB.
If I compile it with the GNU compiler (gcc sample.c) on a CentOS 64-bit machine, then it outputs an executable file that is 6 KB.
Generally speaking, why is there such a big difference in file size for this small program? Why does it take Windows 44 KB just to print a line and exit?
If you use the /MD switch with cl.exe, it will link dynamically against msvcrt (the Microsoft C runtime library) and use msvcrt.dll, and you will get a comparable executable size of about 6 KB; otherwise the code from the C library is statically linked into your executable, increasing its size.
Your gcc compiler on CentOS is setup to dynamically link against the C library by default.
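As a hedged illustration (the exact sizes will differ from build to build), the two CRT link modes can be compared side by side; /MT, the static-CRT default for the command-line compiler, is assumed here:
C:\>cl /O1 /MD sample.c   (CRT linked dynamically via its DLL: small .exe)
C:\>cl /O1 /MT sample.c   (CRT linked statically into the .exe: much larger)
On the CentOS side, ldd ./a.out will list the shared libc the gcc-built binary depends on, confirming that it is dynamically linked.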
Apart from the links provided above, I feel this will also help you understand what happens when we compile using gcc!