int climbStairs(int n) {
    if (n == 1) {
        return 1;
    }
    if (n >= 2) {
        return (2 + climbStairs(n - 2) + climbStairs(n - 1));
    }
}
How do I fix the compiler error?
Your compiler is not as clever as you. You may well know that the function is never called with n less than 1, but the compiler doesn't. (In fact, for typical inputs it is called with n equal to 0: climbStairs(2) calls climbStairs(0).)
Therefore the compiler thinks that program control can reach the function's closing brace } without an explicit return. That is formally undefined behaviour if you make use of the function's return value, which, to repeat, I think you do.
You have that warning elevated to an error (e.g. via -Werror), so compilation halts.
Some fixes (the first is sketched just after this list):
Block the recursion with the stronger if (n <= 1){.
Run-time assert before the closing brace } using assert(false) or similar.
Switch off the elevation of that warning to an error, but do deal with the warning nonetheless.
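A minimal sketch of that first fix, keeping the recurrence exactly as in the question and changing only the guard:

int climbStairs(int n)
{
    if (n <= 1) {   /* stronger guard: also covers n == 0 and negative n */
        return 1;
    }
    /* n >= 2 here, so the remaining path always returns a value */
    return 2 + climbStairs(n - 2) + climbStairs(n - 1);
}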
Some advice from @JonathanLeffler:
Don't switch off -Werror; it is too valuable. Deal with the warning: add assert(n >= 0); at the top, and if (n <= 1) { return 1; } as well. Assertions fire in debug builds; the erroneous parameter value is still handled reasonably safely and sanely even if assertions are not enabled.
In addition to the other answers, here is another good trick.
If you are absolutely certain that you know more than the compiler, and there's nothing to return, then put an abort(); as the last line of your function. The compiler is smart enough to know that abort() will never return because it causes a program crash. That will silence the warning.
Note that the program crash is a good thing. Because as in this case, when the "impossible thing" does indeed happen, you get the crash and it will pop open in your debugger if you are using one.
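A minimal sketch of that pattern applied to a function shaped like the one in the question (note that this particular recursion really does reach n == 0, so the abort() would fire, which is exactly the kind of loud failure described above):

#include <stdlib.h>

int climbStairs(int n)
{
    if (n == 1) {
        return 1;
    }
    if (n >= 2) {
        return 2 + climbStairs(n - 2) + climbStairs(n - 1);
    }
    abort();   /* the "impossible" case: crash loudly instead of falling off the end */
}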
Related
Take a look at the following code:
int foo(void) {
    for (int i = 0; i != 1; i = 1) {
        return i;
    }
}
GCC 10.2 with -Wall -Wextra -pedantic -std=c99 emits the following warning:
warning: control reaches end of non-void function [-Wreturn-type]
Unless I'm missing something, foo obviously terminates -- it will always return 0.
How to fix this warning? I can't use GCC diagnostic pragmas because the above for-loop is actually generated by a macro which must be a statement prefix, so I can't just _Pragma("GCC diagnostic pop") after a user-provided statement ({ return i; } in the above example).
Godbolt
You are correct in your assumption that the code in the function you have shown can never reach beyond the for loop and, thus, the "invalid return" cannot be executed. However, many (most) compilers won't accept this as a reason not to emit the warning. (Similar situations can arise when you have if and else if blocks that cover all possibilities, yet still omit a 'terminal' else block.)
How to fix this warning?
Simply add a return 42; statement (the value of the integer literal can be anything you like) at the end of the function.
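Applied to the function in the question, that could look like this:

int foo(void) {
    for (int i = 0; i != 1; i = 1) {
        return i;
    }
    return 42;   /* never reached in practice, but it satisfies -Wreturn-type */
}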
As a point of interest, the "clang-cl" code analysis tool gives two warnings: one that the for loop never runs past its first iteration, and one about control reaching the end of a non-void function (like your GCC warning) 😊:
warning: non-void function does not return a value in all control paths [-Wreturn-type]
warning: loop will run at most once (loop increment never executed) [-Wunreachable-code-loop-increment]
These are things you have to learn to live with: compilers can't always reliably predict all possible outcomes, even though a human reader can see what's really happening.
For a class, I wanted to demonstrate undefined behavior with goto to the students. I came up with the following program:
#include <stdio.h>

int main()
{
    goto x;
    for (int i = 0; i < 10; i++)
        x: printf("%d\n", i);
    return 0;
}
I would expect the compiler (gcc version 4.9.2) to warn me about the access to i being undefined behavior, but there is no warning, not even with:
gcc -std=c99 -Wall -Wextra -pedantic -O0 test.c
When running the program, i is apparently initialized to zero. To understand what is happening, I extended the code with a second variable j:
#include <stdio.h>

int main()
{
    goto x;
    for (int i = 0, j = 1; i < 10; i++)
        x: printf("%d %d\n", i, j);
    return 0;
}
Now the compiler warns me that I am accessing j without it being initialized. I understand that, but why is i not uninitialized as well?
Undefined behavior is a run-time phenomenon, so it is quite rare that the compiler will be able to detect it for you; most cases of undefined behavior arise from things beyond what the compiler can see.
To make things even more complicated, the compiler might optimize the code. Suppose it decided to put i in a CPU register but j on the stack, or vice versa. And then suppose that in a debug build it sets all stack contents to zero.
If you need to reliably detect undefined behavior, you need a static analysis tool which does checks beyond what is done by a compiler.
Now the compiler warns me that I am accessing j without it being initialized. I understand that, but why is i not uninitialized as well?
That's the point of undefined behavior: it sometimes works, or doesn't, or partially works, or prints garbage. The problem is that you can't know exactly what your compiler is doing under the hood to produce this, and it's not the compiler's fault for producing inconsistent results since, as you admit yourself, the behavior is undefined.
At that point the only thing that's guaranteed is that nothing is guaranteed as to how this will play out. Different compilers may give different results, and so may different optimization levels.
A compiler is also not required to check for this, and it's not required to handle it, so consequently compilers don't. You can't use a compiler to reliably check for undefined behavior anyway; that's what unit tests, lots of test cases, or static analysis are for.
Using "goto" to skip a variable initialization would, per the C Standard, allow a compiler to do anything it wants even on platforms where it would normally yield an Indeterminate Value which may not behave consistently but wouldn't have any other side-effects. The behavior of gcc in this case doesn't seem to have devolved as much as its behavior in case of e.g. integer overflow, but its optimizations may be somewhat interesting though benign. Given:
int test(int x)
{
    int y;
    if (x) goto SKIP;
    y = x + 1;
SKIP:
    return y * 2;
}

int test2(unsigned short y)
{
    int q = 0;
    int i;
    for (i = 0; i <= y; i++)
        q += test(i);
    return q;
}
The compiler will observe that in all defined cases, test will return 2, and can thus eliminate the loop by generating code for test2 equivalent to:
int test2(unsigned short y)
{
    return (int)y << 1;
}
Such an example, however, may give the impression that compilers treat UB in a benign fashion. Unfortunately, in the case of gcc, that is no longer true in general. It used to be that on machines without hardware traps, compilers would treat uses of Indeterminate Value as simply yielding arbitrary values that may or may not behave in any consistent fashion, but without any other side-effects. I'm not sure of any cases where using goto to skip variable initialization would yet cause side-effects other than having a meaningless value in the variable, but that doesn't mean the authors of gcc won't decide to exploit that freedom in future.
What is the usefulness of assert(), when we can also use printf and if-else statements to inform the user that b is 0?
#include <stdio.h>
#include <assert.h>

int main() {
    int a, b;
    printf("Input two integers to divide\n");
    scanf("%d%d", &a, &b);
    assert(b != 0);
    printf("%d/%d = %.2f\n", a, b, a / (float)b);
    return 0;
}
There are two things here that people conflate: validating run-time errors and catching design-time bugs.
Avoid a runtime error with if
Say the user is supposed to enter only numbers, but the input encountered is alphanumeric: that's a run-time error; there's nothing that you, the programmer, can do to prevent it. You have to raise it to the user in a user-friendly, graceful way; this involves bubbling the error up to the right layer, etc. There are multiple ways of doing this: returning an error value, throwing an exception, etc. Not all options are available in all languages.
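For the division program in the question, one possible sketch of that kind of graceful handling (checking the scanf return value is just one way to detect bad input):

#include <stdio.h>

int main(void)
{
    int a, b;

    printf("Input two integers to divide\n");
    if (scanf("%d%d", &a, &b) != 2) {        /* non-numeric input: a run-time error */
        fprintf(stderr, "Please enter two integers.\n");
        return 1;                            /* report it gracefully; don't assert */
    }
    if (b == 0) {
        fprintf(stderr, "Can't divide by 0\n");
        return 1;
    }
    printf("%d/%d = %.2f\n", a, b, a / (float)b);
    return 0;
}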
Catch a bug with an assert
On the other hand, there are design-time assumptions and invariants a programmer depends on when writing new code. Say you're authoring a function make_unit_vec that divides a vector's components by its length. Passing the 0 vector breaks this function, since it leads to a divide-by-zero; an invalid operation. However, you don't want to check for this every time inside the function, since you're sure none of the callers will pass the 0 vector -- an assumption not about external user input but about internal code within the programmer's control.
So the caller of this function should never pass a vector with length 0. What if some other programmer doesn't know of this assumption and breaks make_unit_vec by passing a 0 vector? To be sure this bug is caught, you put in an assert, i.e. you assert (validate) the design-time assumption you made.
When a debug build is run under the debugger, this assert will "fire" and you can check the call stack to find the erring function! However, beware that assertions don’t fire when running a release build. In our example, in a release build, the div-by-zero will happen, if the erroneous caller isn’t corrected.
This is the rationale behind assert being enabled only for debug builds i.e. when NDEBUG is not set.
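A sketch of what such a function might look like; the Vec3 type and the exact signature are only illustrative, not from the question:

#include <assert.h>
#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* Design-time assumption (documented precondition): v is not the zero vector. */
Vec3 make_unit_vec(Vec3 v)
{
    double len = sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    assert(len > 0.0);   /* fires in a debug build if a caller breaks the contract */
    return (Vec3){ v.x / len, v.y / len, v.z / len };
}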
Asserts are not used to inform users about anything. Assertions are used to enforce invariants in your program.
In the above program, the assert is misused. The b != 0 is not an invariant. It is a condition to be checked at runtime and the proper way to do it would be something like
if (b == 0) {
    fprintf(stderr, "Can't divide by 0\n");
    return -1;
}
This means that the program will check user input and then abort if it's incorrect. The assert would be inappropriate for two reasons: one, it can be compiled out using the NDEBUG macro, which in your case would alter program logic (with NDEBUG defined, an input of 0 lets the program continue on to the printf); and two, the purpose of an assert is to assert a condition at a point in the program.
The example in the wikipedia article I've linked to gives you a place where an assert is the right thing to use. It helps to ensure program correctness and improves readability when someone is trying to read and understand the algorithm.
In your program, if the condition b != 0 holds true then execution continues; otherwise the program is terminated and an error message is displayed, which is not possible with printf alone. Using if-else would work too, but that is not a macro.
assert will stop the program, it marks a condition that should never occur.
From Wikipedia,
When executed, if the expression is false (that is, compares equal to 0), assert() writes information about the call that failed on stderr and then calls abort(). The information it writes to stderr includes:
the source filename (the predefined macro __FILE__)
the source line number (the predefined macro __LINE__)
the source function (the predefined identifier __func__) (added in C99)
the text of expression that evaluated to 0 [1]
Example output of a program compiled with gcc on Linux:
program: program.c:5: main: Assertion `a != 1' failed.
Abort (core dumped)
It stops the program and prevents it from doing anything wrong (besides abruptly stopping).
It's like a baked-in breakpoint; the debugger will stop on it without the extra manual work of looking up the line number of the printf and setting a breakpoint. (assert will still helpfully print this information though.)
By default it's defined to be conditional on the NDEBUG macro, which is usually set up to reflect the debug/release build switch in the development environment. This prevents it from decreasing performance or causing bloat for end users.
Note that assert isn't designed to inform the user of anything; it's only for debugging output to the programmer.
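A small sketch of that debug/release switch in action (the file name demo.c is just for illustration):

/* debug build:    cc demo.c          -> assert(b != 0) aborts when b is 0
 * release build:  cc -DNDEBUG demo.c -> the assert compiles to nothing     */
#include <assert.h>
#include <stdio.h>

int main(void)
{
    int a = 1, b = 0;
    assert(b != 0);                 /* only checked when NDEBUG is not defined */
    printf("%.2f\n", a / (float)b); /* reached in the NDEBUG build; prints inf on typical IEEE platforms */
    return 0;
}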
That is not a good use of assert; validating user input with assert is too brutal for anything other than your own private use.
This is a better example — it is a convenient one I happen to have kicking around:
int Random_Integer(int lo, int hi)
{
    int range = hi - lo + 1;
    assert(range < RAND_MAX);
    int max_r = RAND_MAX - (RAND_MAX % range);
    int r;
    while ((r = rand()) > max_r)
        ;
    return (r % range + lo);
}
If the value of RAND_MAX is smaller than the range of integers requested, this code is not going to work, so the assertion enforces that critical constraint.
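A possible usage sketch, assuming the function above is in the same program (the seeding and the 1..6 range are just for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int Random_Integer(int lo, int hi);   /* the function above */

int main(void)
{
    srand((unsigned)time(NULL));      /* seed once per run */
    for (int i = 0; i < 5; i++)
        printf("%d\n", Random_Integer(1, 6));   /* e.g. simulated dice rolls */
    return 0;
}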
Using GCC (4.0 for me), is this legal:
if (__builtin_expect(setjmp(buf) != 0, 1))
{
    // handle error
}
else
{
    // do action
}
I found a discussion saying it caused a problem for GCC back in 2003, but I would imagine that they would have fixed it by now. The C standard says that it's illegal to use setjmp except in one of four contexts, the relevant one here being this:
one operand of a relational or equality operator with the other operand an integer constant expression, with the resulting expression being the entire controlling expression of a selection or iteration statement;
But if this is a GCC extension, can I guarantee that it will work under GCC, since it's already nonstandard functionality? I tested it and it seemed to work, though I don't know how much testing I'd have to do to actually break it. (I'm hiding the call to __builtin_expect behind a macro, which is defined as a no-op for non-GCC, so it would be perfectly legal for other compilers.)
I think that what the standard was talking about was to account for doing something like this:
int x = printf("howdy");
if (setjmp(buf) != x) {
    function_that_might_call_longjmp_with_x(buf, x);
} else {
    do_something_about_them_errors();
}
In this case you could not rely on x still having the value that was assigned to it on the previous line. The compiler may have moved the place where x was kept (reusing the register it had been in, or something), so the code that did the comparison would be looking in the wrong spot. (You could save x to another variable, and then reassign x to something else before calling the function, which might make the problem more obvious.)
In your code you could have written it as:
int conditional;
conditional = setjmp(buf) != 0;

if (__builtin_expect(conditional, 1)) {
    // handle error
} else {
    // do action
}
And I think that we can satisfy ourselves that the line of code that assigns the variable conditional meets that requirement.
But if this is a GCC extension, can I guarantee that it will work under GCC, since it's already nonstandard functionality? I tested it and it seemed to work, though I don't know how much testing I'd have to do to actually break it. (I'm hiding the call to __builtin_expect behind a macro, which is defined as a no-op for non-GCC, so it would be perfectly legal for other compilers.)
You are correct, __builtin_expect should be a macro no-op for other compilers so the result is still defined.
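A sketch of the kind of wrapper the question describes; the EXPECT_TRUE name is mine, not from the question:

#include <setjmp.h>
#include <stdio.h>

#if defined(__GNUC__)
#define EXPECT_TRUE(cond) __builtin_expect(!!(cond), 1)   /* GCC: hint that the branch is likely */
#else
#define EXPECT_TRUE(cond) (cond)                          /* plain no-op elsewhere */
#endif

static jmp_buf buf;

void run(void)
{
    if (EXPECT_TRUE(setjmp(buf) != 0)) {
        /* handle error: a longjmp(buf, 1) somewhere below lands here */
        fprintf(stderr, "recovered from error\n");
    } else {
        /* do action */
    }
}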
I just ran into some code that overuses semicolons, or uses semicolons for some purpose that I am not aware of.
I found semicolons at the end of if-statements and at the end of functions. For instance:
int main (int argc, char * argv[]) {
    // some code
    if (x == NULL) {
        // some code
    }; <-----
    // more code
    return 0;
}; <---
It is compiling with cc, not gcc.
What do those semicolons do? I'm assuming that there is no difference because the compiler would just treat them as empty statements.
They do nothing. They're a sign of someone who doesn't understand the language terribly well, I suspect.
If this is source code you notionally "own", I would remove the code and try to have a gentle chat with the person who wrote it.
That's a dummy (empty) statement. Your sample is identical to:
if (x == NULL) {
    // some code
    do_something_here();
}
/* empty (dummy) statement here */ ;
// more code
some_other_code_here();
You are right, the compiler considers them empty statements. They are not needed, I guess the programmer somehow thought they were.
The first semicolon (after the if block) is just an empty statement which does nothing. I fail to see any point in having it there.
The second semicolon (after the function body) is not valid at file scope in standard C; the compiler should give a warning (gcc does with -pedantic).
These semicolons are not needed (as you said, they are empty statements). Your code compiles with gcc, provided that 'x' is defined (check http://www.codepad.org). There's no reason why a C compiler would refuse to compile your code.
I think that the author may have been going for something like:
if (condition for tbd block)
    ;
else {
    // Some code here
}
which you might do if you were scaffolding code and still wanted it to compile. There's a good chance that it's just an error as Jon suggests though.
These semicolons are useless, as others have pointed out already. The only thing I want to add is that, IMO, they are optimized out anyway, i.e. the compiler doesn't generate any real code for them.