Erroneous Code + GCC 5.4 Optimisation Causes Infinite Loop

Disclaimer:
The code I'm sharing here is an isolated version of the real code; nevertheless, it reproduces the same behaviour.
The following code, compiled using gcc 5.4.0 with optimisation enabled on Ubuntu 16.04, generates an infinite loop when executed:
#include <stdio.h>

void *loop(char *filename) {
    int counter = 10;
    int level = 0;
    char *filenames[10];

    filenames[0] = filename;
    while (counter-- > 0) {
        level++;
        if (level > 10) {
            break;
        }
        printf("Level %d - MAX_LEVELS %d\n", level, 10);
        filenames[level] = filename;
    }
    return NULL;
}

int main(int argc, char *argv[]) {
    loop(argv[0]);
}
int main(int argc, char *argv[]) {
loop(argv[0]);
}
The compiler version:
gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
The compilation command used:
gcc infinite.c -O2 -o infinite
I know that it is caused by the optimisation flag -O2, because it doesn't happen without it. I also know that adding volatile to the variable level fixes the error, but I can't add this keyword to all my variables.
My question is: why does this happen, and what can I do to avoid it in the future?
Is there any gcc flag that still optimises the code at a level similar to -O2 without this kind of problem?

You've found an example of undefined behaviour causing the optimizer to do something unexpected! GCC can see that if the loop runs 10 times, then there is undefined behaviour.
This code writes filenames[1] through filenames[10], i.e. the 2nd through 11th elements of filenames. However, filenames is only 10 elements long, so the last write is undefined behaviour.
Because you aren't allowed to invoke undefined behaviour, the compiler can assume the loop will stop some other way before it gets to 10 (perhaps you have a modified version of printf that calls exit?).
And since the loop is going to stop anyway before it gets to 10, there is no point in having code to make it run only 10 times. So it removes that code.

To prevent this optimization, you can use "-fno-aggressive-loop-optimizations".
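That flag only masks the symptom, though; the real fix is to repair the bounds check so the out-of-bounds write never happens. A minimal sketch of the corrected loop (my adjustment, not from the original post): breaking on level >= 10 means filenames[10] is never touched:

while (counter-- > 0) {
    level++;
    if (level >= 10) {   /* filenames has 10 elements: valid indices are 0..9 */
        break;
    }
    printf("Level %d - MAX_LEVELS %d\n", level, 10);
    filenames[level] = filename;
}

With the undefined behaviour removed, -O2 no longer has licence to assume the loop exits early, and the counter logic survives intact.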

Related

Visual Studio C program run fails with VS compiler but works with Clang

Configuration: Win10 64-bit, VS2019 with all updates and Clang v12.0 installed, and a breakpoint set on the null statement.
If compiled with the VS compiler, the first version of the code below hits the breakpoint after Foo is output to the console the first time, but loops as expected when compiled with Clang. The second version loops as expected with both the VS compiler and Clang. If I remove the breakpoint, both versions loop as expected with both compilers. Why does a breakpoint cause the first version to fail with the VS compiler?
Version 1:
#include <stdio.h>

int main(void)
{
    for (;;)
        if (fwrite("Foo", 3, 1, stdout) != 1)
            ;
}
Version 2:
#include <stdio.h>

int main(void)
{
    for (;;)
    {
        if (fwrite("Foo", 3, 1, stdout) != 1)
            ;
    }
}
So the actual issue is hitting a breakpoint on an empty statement when the if condition evaluates to false.
It is not a "fail": the observed program behaviour is the same, and the compiler may emit whatever instructions it likes as long as the observable behaviour matches. The more optimizations are enabled, the more the compiler will transform the program.
There's a workaround: insert the __nop() intrinsic. The compiler will emit a one-byte, purposeless instruction, but it will not omit it, and it keeps the control flow faithful to the source. This is the most lightweight way to have something that the compiler won't optimize away.
#include <stdio.h>
#include <intrin.h>

int main(void)
{
    for (;;)
        if (fwrite("Foo", 3, 1, stdout) != 1)
            __nop();
}
Note that adding this intrinsic may make the code slightly less optimal, mostly not because of the extra instruction but because it limits how the compiler can transform the program. In your case, the compiler is very likely to throw away the whole != 1 comparison, but it won't do that with __nop() present.
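Not from the original answer, but a portable alternative is to give the empty statement a real side effect, such as incrementing a volatile counter; a minimal sketch:

#include <stdio.h>

int main(void)
{
    /* The volatile increment cannot be optimized away, and it runs
       only when the branch is taken, so a breakpoint on that line
       fires only on a genuine short write. */
    volatile unsigned failures = 0;
    for (;;)
        if (fwrite("Foo", 3, 1, stdout) != 1)
            failures++;
}

Unlike __nop(), this needs no <intrin.h>, at the cost of one memory access on the failure path.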

Why does GCC 9.1.0 sometimes complain about this use of strncpy()?

This is a 40-line MCVE (Minimal, Complete, Verifiable Example) — or something close to minimal — cut down from a 1675-line source file that originally included 32 headers (and most of those included multiple other headers — compiling it with gcc -H lists 464 headers from the project and the system, many of them several times). That file is working code that previously compiled without warnings (GCC 8.3.0), but not with GCC 9.1.0. All structure, function, type, and variable names have been changed.
pf31.c
#include <string.h>

enum { SERVERNAME_LEN = 128 };

typedef struct ServerQueue
{
    char server_name[SERVERNAME_LEN + 1];
    struct ServerQueue *next;
} ServerQueue;

extern int function_under_test(char *servername);

#ifdef SUPPRESS_BUG
extern int function_using_name(char *name);
#endif /* SUPPRESS_BUG */

extern int GetServerQueue(const char *servername, ServerQueue *queue);

int
function_under_test(char *servername)
{
    ServerQueue queue;
    char name[SERVERNAME_LEN + 1];

    if (GetServerQueue(servername, &queue) != 0)
        return -1;
    char *name_in_queue = queue.server_name;
    if (name_in_queue)
        strncpy(name, name_in_queue, SERVERNAME_LEN);
    else
        strncpy(name, servername, SERVERNAME_LEN);
    name[SERVERNAME_LEN] = '\0';
#ifdef SUPPRESS_BUG
    return function_using_name(name);
#else
    return 0;
#endif /* SUPPRESS_BUG */
}
Compilation
When compiled using GCC 9.1.0 (on a Mac running macOS 10.14.5 Mojave, or on a Linux VM running RedHat 5.x — don't ask!), with the option -DSUPPRESS_BUG I get no error, but with the option -USUPPRESS_BUG, I get an error:
$ gcc -std=c11 -O3 -g -Wall -Wextra -Werror -DSUPPRESS_BUG -c pf31.c
$ gcc -std=c11 -O3 -g -Wall -Wextra -Werror -USUPPRESS_BUG -c pf31.c
In file included from /usr/include/string.h:417,
from pf31.c:1:
pf31.c: In function ‘function_under_test’:
pf31.c:30:9: error: ‘__builtin_strncpy’ output may be truncated copying 128 bytes from a string of length 128 [-Werror=stringop-truncation]
30 | strncpy(name, name_in_queue, SERVERNAME_LEN);
| ^~~~~~~
cc1: all warnings being treated as errors
$
When I compile using GCC 8.3.0, I get no errors reported.
Question
Two sides of one question:
Why does GCC 9.1.0 complain about the use of strncpy() when the code is compiled with -USUPPRESS_BUG?
Why doesn't it complain when the code is compiled with -DSUPPRESS_BUG?
Corollary: is there a way to work around this unwanted warning that works with older GCC versions as well as 9.1.0? I've not yet found one. There's also a strong element of "I don't think this should be necessary, because the code is using strncpy() to limit the amount of data copied, which is what it is designed for".
Another variant
I have another non-erroring variant, changing the signature of the function_under_test() — here's a set of diffs:
11c11
< extern int function_under_test(char *servername);
---
> extern int function_under_test(char *servername, ServerQueue *queue);
20c20
< function_under_test(char *servername)
---
> function_under_test(char *servername, ServerQueue *queue)
22d21
< ServerQueue queue;
25c24
< if (GetServerQueue(servername, &queue) != 0)
---
> if (GetServerQueue(servername, queue) != 0)
27c26
< char *name_in_queue = queue.server_name;
---
> char *name_in_queue = queue->server_name;
This compiles cleanly regardless of whether SUPPRESS_BUG is defined or not.
As you can guess from the SUPPRESS_BUG terminology, I'm tending towards the view that this is a bug in GCC, but I'm kinda cautious about claiming it is one just yet.
More about the original code: the function itself was 540 lines long; the strncpy() block occurs about 170 lines into the function; and the variable corresponding to name was used further down the function in a number of function calls, some of which take name as an argument and supply a return value for the function. This corresponds more to the -DSUPPRESS_BUG code, except that in the 'real code', the bug is not suppressed.
This is a GCC bug tracked as PR88780. According to Martin's comment, this warning did not exist prior to GCC 8.
GCC is shipped with this known bug, as it is not deemed release-critical.
To be honest, I am not 100% sure it is that exact bug. The point is, there are known false positives. If you feel like helping the GCC project, you can find the most appropriate bug among the strncpy / -Wstringop-truncation bugs and post your example there. It would be even more helpful if you minimized it further (say, with creduce); minimizing the compile command is also appreciated (that would be rather trivial, I guess).
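If you just want the warning gone across GCC versions, one workaround (my sketch, not taken from the bug report) is to replace the strncpy()-plus-terminate idiom with an explicit bounded copy, which sidesteps the -Wstringop-truncation heuristic entirely:

#include <string.h>

/* Copy at most cap-1 bytes from src into dst and always
   NUL-terminate; dst must have room for cap bytes. */
static void copy_name(char *dst, const char *src, size_t cap)
{
    size_t len = strnlen(src, cap - 1);
    memcpy(dst, src, len);
    dst[len] = '\0';
}

In function_under_test() the two strncpy() calls and the manual termination would then collapse into copy_name(name, name_in_queue, sizeof(name)) and copy_name(name, servername, sizeof(name)).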
Several compilation warnings related to strncpy were found in GCC 9.0 and reported here and here.
One of them is the error mentioned in the question which seems to occur in the file string_fortified.h:
/usr/include/bits/string_fortified.h:106:10: warning: ‘__builtin_strncpy’ output may be truncated copying 16 bytes from a string of length 16 [-Wstringop-truncation]
106 | return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The response, given on April 15, 2019, was:
Thank you for the report, however as GCC 9 still under development. We do not see the above errors in the current stable GCC 7.4 or GCC 8.3. We appreciate the advanced notice, and will accept PRs to fix issues against GCC 9, but for now our target compiler is gcc stable.
So I believe the errors are probably a result of versions 9 and 9.1 not being stable releases. Hopefully they will be eliminated when these versions become stable.

What is causing Segmentation Fault in this code?

I am currently studying C implementations of Linux terminal commands in class, but I can't seem to get the following example to run on my machine. I'm using the latest Ubuntu distro. It compiles, but with a warning, "assignment makes pointer from integer without a cast", which refers to the line with a = ctime(&n->ut_time);. I found this code online. It is supposed to replicate the terminal command "who", which displays system users. I'm simply trying to study how it works, but I can't seem to get it to run. Any help or explanation would be appreciated.
#include <stdio.h>
#include <sys/utsname.h>
#include <utmp.h>

int main(void)
{
    struct utmp *n;
    char *a;
    int i;

    setutent();
    n = getutent();
    while (n != NULL)
    {
        if (n->ut_type == 7)
        {
            printf("%-9s", n->ut_user);
            printf("%-12s", n->ut_line);
            a = ctime(&n->ut_time);
            printf(" ");
            for (i = 4; i < 16; i++)
            {
                printf("%c", a[i]);
            }
            printf(" (");
            printf("%s", n->ut_host);
            printf(")\n");
        }
        n = getutent();
    }
}
Transferring comment to answer
The compiler is telling you that you haven't got #include <time.h>, so ctime() is assumed to return an int and not a char *. All hell breaks loose (segmentation faults, etc.) because you are not paying attention to the compiler warnings.
Remember, while you're learning C, the compiler knows a lot more about C than you do. Its warnings should be heeded. (When you know C reasonably well, you still pay attention to the compiler warnings - and make the code compile cleanly without warnings. I use gcc -Wall -Wextra -Werror and some extra options — usually -Wmissing-prototypes -Wstrict-prototypes -Wold-style-definition -Wold-style-declaration; sometimes -Wshadow, -pedantic; and occasionally a few others.)
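For completeness, a minimal sketch of the fix to the program above: adding the missing header gives ctime() its proper prototype, so its char * return value is no longer squeezed through an implicit int:

#include <stdio.h>
#include <time.h>             /* declares char *ctime(const time_t *) */
#include <sys/utsname.h>
#include <utmp.h>

With <time.h> in place the warning disappears, a receives a valid pointer, and the loop over a[i] no longer faults.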

Why is clang optimizing out my array even when using -O0 flag?

I am trying to debug the following C Program using GDB:
// Program to generate a user specified number of
// fibonacci numbers using variable length arrays
// Chapter 7 Program 8 2013-07-14
#include <stdio.h>

int main(void)
{
    int i, numFibs;

    printf("How many fibonacci numbers do you want (between 1 and 75)?\n");
    scanf("%i", &numFibs);
    if (numFibs < 1 || numFibs > 75)
    {
        printf("Between 1 and 75 remember?\n");
        return 1;
    }

    unsigned long long int fibonacci[numFibs];
    fibonacci[0] = 0;   // by definition
    fibonacci[1] = 1;   // by definition
    for (i = 2; i < numFibs; i++)
        fibonacci[i] = fibonacci[i-2] + fibonacci[i-1];

    for (i = 0; i < numFibs; i++)
        printf("%llu ", fibonacci[i]);
    printf("\n");
    return 0;
}
The issue I am having is when trying to compile the code using:
clang -ggdb3 -O0 -Wall -Werror 7_8_FibonacciVarLengthArrays.c
When I run gdb on the resulting a.out and step through the program, any time after the fibonacci[] array is declared, typing:
info locals
shows fibonacci as <value optimized out> (until after the first iteration of my for loop), after which fibonacci holds the address 0xbffff128 for the rest of the program (though dereferencing that address does not appear to yield any meaningful data).
I am just confused about why clang appears to be optimizing out this array when the -O0 flag is used.
I can use gcc to compile this code and the value displays as expected when using GDB....
Any thoughts?
Thank you.
You don't mention which version of clang you are using. I tried it with both 3.2 and a recent SVN install (3.4).
The code generated by the two versions looks pretty similar to me, but the debugging information is different. clang 3.2 (which comes with a default Ubuntu 13.04 install) produces an error when I try to examine fibonacci in gdb:
fibonacci = <error reading variable fibonacci (DWARF-2 expression error: DW_OP_reg operations must be used either alone or in conjunction with DW_OP_piece or DW_OP_bit_piece.)>
In the code compiled with clang 3.4, it all works fine. In neither case is the array "optimized out"; it's clearly allocated on the stack.
So I suspect the oddity that you're seeing has more to do with the emission of debugging information than with the actual code.
gdb does not yet support debugging stack allocated variable-length arrays. See https://sourceware.org/gdb/wiki/VariableLengthArray
Use a compile time constant or malloc to allocate fibonacci so that it will be visible to gdb.
See also GDB reports "no symbol in current context" upon array initialization
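Following that suggestion, here is a sketch of the malloc() route (my adaptation of the question's program, not from the original answer), which gives the array a plain pointer that gdb can always resolve:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i, numFibs;

    printf("How many fibonacci numbers do you want (between 1 and 75)?\n");
    if (scanf("%i", &numFibs) != 1 || numFibs < 1 || numFibs > 75)
    {
        printf("Between 1 and 75 remember?\n");
        return 1;
    }

    /* Heap allocation instead of a VLA. */
    unsigned long long *fibonacci = malloc(numFibs * sizeof *fibonacci);
    if (fibonacci == NULL)
        return 1;

    fibonacci[0] = 0;          // by definition
    if (numFibs > 1)
        fibonacci[1] = 1;      // by definition (guarded for numFibs == 1)
    for (i = 2; i < numFibs; i++)
        fibonacci[i] = fibonacci[i-2] + fibonacci[i-1];
    for (i = 0; i < numFibs; i++)
        printf("%llu ", fibonacci[i]);
    printf("\n");
    free(fibonacci);
    return 0;
}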
clang is not "optimizing out" the array at all! The array is declared as a variable-length array on the stack, so it has to be explicitly allocated (using techniques similar to those used by alloca()) when its declaration is reached. The starting address of the array is unknown until that process is complete.

Error with -mno-sse flag and gettimeofday() in C

A simple C program which uses gettimeofday() works fine when compiled without any flags (gcc 4.5.1) but doesn't give output when compiled with the flag -mno-sse.
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>   /* needed for gettimeofday() and struct timeval */

int main()
{
    struct timeval s, e;
    float time;
    int i;

    gettimeofday(&s, NULL);
    for (i = 0; i < 10000; i++)
        ;
    gettimeofday(&e, NULL);
    time = e.tv_sec - s.tv_sec + e.tv_usec - s.tv_usec;
    printf("%f\n", time);
    return 0;
}
I have CFLAGS=-march=native -mtune=native
Could someone explain why this happens?
The program returns a correct value normally, but prints "0" when compiled with -mno-sse.
The flag -mno-sse causes floating point arguments to be passed on the stack, whereas the usual x86_64 ABI specifies that they should be passed via SSE registers.
Since printf() in your C library was compiled without -mno-sse, it is expecting floating point arguments to be passed in accordance with the ABI. This is why your code fails. It has nothing to do with gettimeofday().
If you wish to use printf() from your code compiled with -mno-sse and pass it floating point arguments, you will need to recompile your C library with that option and link against that version.
It appears that you are using a loop which does nothing in order to observe a time difference. The problem is that the compiler may optimize this loop away entirely. The issue may not be with -mno-sse itself; it may be that it allows an optimization that removes the loop, thus giving you the same time each time you run it.
I would recommend trying to put something in that loop which can't be optimized out (such as incrementing a number which you print out at the end). See if you still get the same behaviour. If not, I'd recommend looking at the generated assembly (gcc -S) and seeing what the code difference is.
The struct timeval fields tv_sec and tv_usec are usually longs.
Redeclaring the variable time as a long integer solved the issue.
The following link discusses the issue:
http://gcc.gnu.org/ml/gcc-patches/2006-10/msg00525.html
Working code:
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>   /* needed for gettimeofday() and struct timeval */

int main()
{
    struct timeval s, e;
    long time;
    int i;

    gettimeofday(&s, NULL);
    for (i = 0; i < 10000; i++)
        ;
    gettimeofday(&e, NULL);
    time = e.tv_sec - s.tv_sec + e.tv_usec - s.tv_usec;
    printf("%ld\n", time);
    return 0;
}
Thanks for the prompt replies. Hope this helps.
What do you mean by "doesn't give output"?
0 (zero) is a perfectly reasonable output to expect.
Edit: Try compiling to assembly (gcc -S ...) and see the differences between the normal and the no-sse versions.
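As an aside that none of the answers raise directly: the elapsed-time expression mixes seconds and microseconds, and the empty delay loop can be removed entirely by the optimizer, as noted above. A sketch that addresses both (my adaptation of the question's program), with the usual microsecond conversion and a volatile loop counter:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval s, e;

    gettimeofday(&s, NULL);
    for (volatile int i = 0; i < 10000; i++)
        ;   /* volatile keeps the busy loop from being optimized away */
    gettimeofday(&e, NULL);

    /* Convert both components to microseconds before combining. */
    long elapsed_us = (e.tv_sec - s.tv_sec) * 1000000L
                    + (e.tv_usec - s.tv_usec);
    printf("%ld us\n", elapsed_us);
    return 0;
}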
