Given this snippet
#include <inttypes.h>
uint8_t test(uint32_t foo)
{
    if (foo > 0xFFFFFFFF)
    {
        return 0;
    }
    if (foo < 0)
    {
        return 0;
    }
    return 1;
}
compiling with -Wall -Wextra using gcc 12.2 gives only the following warning:
warning: comparison of unsigned expression in '< 0' is always false [-Wtype-limits]
9 | if (foo < 0)
| ^
I don't understand why the first line if (foo > 0xFFFFFFFF) does not trigger the same warning.
Trying if (foo > UINT32_MAX) does not trigger any compiler warning.
Is it a bug or a feature?
These warnings are not mandated by the language rules; it's up to the compiler vendors to add them, or not. The fact that gcc warns you about the second if can be considered a courtesy. For this reason I don't believe we can consider this a bug, unless you can spot this particular feature in the gcc documentation, which would change my view.
I checked clang for these warnings and sure enough it doesn't warn you at all, at least not with those flags:
https://godbolt.org/z/c7hbM3K5d
It's only when you try -Weverything that it warns about both cases:
<source>:5:13: warning: result of comparison 'uint32_t' (aka 'unsigned int') > 4294967295 is always false [-Wtautological-type-limit-compare]
if (foo > 0xFFFFFFFF)
~~~ ^ ~~~~~~~~~~
<source>:9:13: warning: result of comparison of unsigned expression < 0 is always false [-Wtautological-unsigned-zero-compare]
if (foo < 0)
~~~ ^ ~
https://godbolt.org/z/9b9q13rTe
+1 for clang in my book.
If you want to see these warnings in your build, be sure to use the specific flags (-Wtautological-unsigned-zero-compare and -Wtautological-type-limit-compare); using -Weverything would be too verbose and would flag many situations you don't care about, as mentioned in the comments.
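For example, an invocation along these lines (the file name test.c is just a placeholder) enables exactly those two checks:
clang -Wall -Wextra -Wtautological-type-limit-compare -Wtautological-unsigned-zero-compare test.c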
Back to gcc: if you compare against a value that doesn't fit in 32 bits, for example if (foo > 0xFFFFFFFF0), then you get your warning:
<source>:5:13: warning: comparison is always false due to limited range of data type [-Wtype-limits]
5 | if (foo > 0xFFFFFFFF0)
| ^
https://godbolt.org/z/Edvz7aq3b
We can assume they forgot to include the maximum value of uint32_t (an off-by-one mistake), or it may be that the warning's implementation focuses on types rather than values; one can only speculate. You can file a report if you're adamant about having this particular warning work for UINT32_MAX, or at least get to the bottom of it and find out the reason for this behavior.
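For what it's worth, the same pattern with a narrower type does trigger -Wtype-limits, presumably because after integer promotion to int the operand's range really is limited. A minimal sketch (behavior observed with recent gcc; treat the exact diagnostic as something to verify):
#include <stdint.h>
uint8_t test16(uint16_t foo)
{
    /* foo promotes to int; (int)foo can never exceed 0xFFFF, and gcc
       reports "comparison is always false due to limited range of data type" */
    if (foo > 0xFFFF)
    {
        return 0;
    }
    return 1;
}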
Related
I came across this issue while practicing assignment in C. When I try to initialize a variable with its own name (identifier), which doesn't even exist yet, no error is thrown.
int x = x;
As far as I know, the associativity of the assignment operator is right to left, so this code should throw an error since I'm initializing a variable with an rvalue that doesn't even exist yet. Instead, it assigns some kind of garbage value. Why is this happening?
Initializing a variable with itself is undefined behavior per the C standard. After adding the compiler option -Winit-self, a warning occurs.
GCC 11.2 does diagnose this if you use -Wuninitialized -Winit-self.
I suspect int x = x; may have been used as an idiom1 for “I know what I am doing; do not warn me about x being uninitialized,” and so it was excluded from -Wuninitialized, but a separate warning was provided with -Winit-self.
Note that while the behavior of int x = x; is not defined by the C standard (if it is inside a function and the address of x is not taken), neither does it violate any constraints of the C standard. This means a compiler is not required to diagnose the problem. Choosing to issue a warning message is a matter of choices and quality of the C implementation rather than rules of the C standard.
Apple Clang 11.0 does not warn for int x = x; even with -Wuninitialized -Winit-self. I suspect this is a bug (a deviation from what the authors would have wanted, if not from the rules of the C standard), but perhaps there was some reason for it.
Consider code such as:
int FirstIteration = 1;
int x;
for (int i = 0; i < N; ++i)
{
    if (FirstIteration)
        x = 4;
    x = foo(x);
    FirstIteration = 0;
}
A compiler might observe that x = 4; is inside an if, and therefore reason that x might not be initialized in foo(x). The compiler might be designed to issue a warning message in such cases and be unable to reason that the use of FirstIteration guarantees that x is always initialized before being used in foo(x). Taking int x = x; as an assertion from the author that they have deliberately designed the code this way gives them a way to suppress the spurious warning.
Footnote
1 There are several idioms which are used to tell a compiler the author is deliberately using some construction which is often an error. For example, if (x = 3) … is completely defined C code that sets x to 3 and always evaluates as true for the if, but it is usually a mistake as x == 3 was intended. The code if ((x = 3)) … is identical in C semantics, but the presence of the extra parentheses is an idiom used to say “I used assignment deliberately,” and so a compiler may warn for if (x = 3) … but not for if ((x = 3)) ….
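As a quick illustration of that footnote, both gcc and clang implement this under -Wparentheses (enabled by -Wall); a minimal sketch:
#include <stdio.h>
int main(void)
{
    int x = 0;
    if (x = 3)      /* warning: suggest parentheses around assignment used as a truth value */
        printf("warned\n");
    if ((x = 3))    /* the extra parentheses are read as "assignment intended": no warning */
        printf("quiet\n");
    return 0;
}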
The code below is written to make the condition (x == x+2) evaluate as true in C.
#include <stdio.h>
#define x 2|0
int main()
{
    printf("%d", x==x+2);
    return 0;
}
In the above code, why does printf() print 2? (If I write x+3 I get 3, and so on.)
Can someone explain how the given macro works?
What does the | operator do in C, and what does the macro
#define x 2|0
mean? I read about macros in other questions, but none of them explained an example like this.
TL;DR: Read about operator precedence.
+ binds higher than == which binds higher than |.
After preprocessing, your printf() statement looks like
printf("%d",2|0==2|0+2);
which is the same as
printf("%d",2|(0==2)|(0+2));
which is
printf("%d",2|0|2);
Word of advice: do not write this type of code in a real scenario. With a minimal level of compiler warnings enabled, your code produces:
source_file.c: In function ‘main’:
source_file.c:4:12: warning: suggest parentheses around comparison in operand of ‘|’ [-Wparentheses]
#define x 2|0
^
source_file.c:8:21: note: in expansion of macro ‘x’
printf("%d\n\n",x==x+2);
^
source_file.c:4:12: warning: suggest parentheses around arithmetic in operand of ‘|’ [-Wparentheses]
#define x 2|0
^
source_file.c:8:24: note: in expansion of macro ‘x’
printf("%d\n\n",x==x+2);
So, the moment you change the macro definition to something sane, like
#define x (2|0)
the result will also change, as the precedence is then made explicit by the parentheses.
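For a concrete check of that claim, with the parenthesized macro the expression expands to (2|0) == (2|0)+2, i.e. 2 == 4, so the program now prints 0 instead of 2:
#include <stdio.h>
#define x (2|0)
int main(void)
{
    printf("%d", x == x+2); /* (2|0) == (2|0)+2  ->  2 == 4  ->  0 */
    return 0;
}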
After running the preprocessor (gcc -E main.c) you will get:
int main()
{
    printf("%d",2|0==2|0 +2);
    return 0;
}
Since (0==2) is 0, the expression reduces to 2|0|2, which is 2.
I have the following program
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
int main(void) {
    uint16_t o = 100;
    uint32_t i1 = 30;
    uint32_t i2 = 20;
    o = (uint16_t) (o - (i1 - i2)); /* Case A */
    o -= (uint16_t) (i1 - i2);      /* Case B */
    (void)o;
    return 0;
}
Case A compiles with no errors.
Case B causes the following error
[error: conversion to ‘uint16_t’ from ‘int’ may alter its value [-Werror=conversion]]
The warning options I'm using are:
-Werror -Werror=strict-prototypes -pedantic-errors -Wconversion -pedantic -Wall -Wextra -Wno-unused-function
I'm using GCC 4.9.2 on Ubuntu 15.04 64-bits.
Why do I get this error in Case B but not in Case A?
PS:
I ran the same example with the clang compiler and both cases compiled fine.
Integer promotion is a strange thing. Basically, all integer values of types narrower than int are promoted to int so they can be operated on efficiently, and are then converted back to the narrower type when stored. This is mandated by the C standard.
So, Case A really looks like this:
o = (uint16_t) ((int)o - ((uint32_t)i1 - (uint32_t)i2));
(Note that uint32_t does not fit in int, so it needs no promotion.)
And, Case B really looks like this:
o = (int)o - (int)(uint16_t) ((uint32_t)i1 - (uint32_t)i2);
The main difference is that Case A has an explicit cast, whereas Case B has an implicit conversion.
From the GCC manual:
-Wconversion
Warn for implicit conversions that may alter a value. ....
So, only Case B gets a warning.
Your case B is equivalent to:
o = o - (uint16_t) (i1 - i2); /*Case B*/
The result is an int which may not fit in uint16_t, so, per your extreme warning options, it produces a warning (and thus an error since you're treating warnings as errors).
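If you want Case B's brevity without tripping -Wconversion, one option (a sketch using the o, i1, i2 from the program above) is to make the final narrowing explicit, exactly as Case A already does:
o = (uint16_t) (o - (uint16_t) (i1 - i2)); /* explicit cast back to uint16_t keeps -Wconversion quiet */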
So there's this gcc warning that bothers me:
warning: assuming signed overflow does not occur when simplifying multiplication
The code it points at looks like this:
/* Move the memory block of entries after the removed one - if any. */
if (database->entries + database->entries_size - 1 != database_entry) {
    memmove(
        database_entry,
        database_entry + 1,
        sizeof(spm_database_entry_t)
            * (
                (database->entries + database->entries_size)
                - database_entry - 1
            )
    );
}
As you can probably guess, it moves part of the container's memory after an element's removal, to allow a later shrinking reallocation.
database_entry is a pointer of type spm_database_entry_t* to the removed element
database->entries is a pointer to an array of spm_database_entry_t
database->entries_size is a size_t holding the number of database->entries elements before the removal
How do I get rid of the warning? Can I prevent the multiplication from being simplified, or is there a better way to calculate how much memory needs moving?
edit
Are you sure that database_entry < database->entries + database->entries_size?
Positive.
What are the compiler flags you're using?
-Wall -Wextra -Wshadow -Wpointer-arith -Wcast-qual -Wstrict-prototypes
-Wmissing-prototypes -Wdeclaration-after-statement -Wwrite-strings
-Winit-self -Wcast-align -Wstrict-aliasing=2 -Wformat=2
-Wmissing-declarations -Wmissing-include-dirs -Wno-unused-parameter
-Wuninitialized -Wold-style-definition -Wno-missing-braces
-Wno-missing-field-initializers -Wswitch-default -Wswitch-enum
-Wbad-function-cast -Wstrict-overflow=5 -Winline -Wundef -Wnested-externs
-Wunreachable-code -Wfloat-equal -Wredundant-decls
-pedantic -ansi
-fno-omit-frame-pointer -ffloat-store -fno-common -fstrict-aliasing
edit2
Casting to unsigned int before the multiplication seems to do the trick, but casting to size_t doesn't. I don't get it - the standard says size_t is always unsigned...
edit3
If context can be of any use: https://github.com/msiedlarek/libspm/blob/master/libspm/database.c#L116
edit4
Solution based on steveha's answer:
/* Calculate how many entries need moving after the removal. */
size_t entries_to_move = (
    (database->entries + database->entries_size)
    - database_entry - 1
);
/* Move the memory block of entries after the removed one - if any. */
memmove(
    database_entry,
    database_entry + 1,
    sizeof(spm_database_entry_t) * entries_to_move
);
Personally, I favor additional intermediate temporary variables. The compiler will see that they are used for only the one calculation, and will optimize the variables away; but in a debug build, you can single-step, examine the variables, and make sure it really is doing what you expect.
/* Move the memory block of entries after the removed one - if any. */
assert(database_entry >= database->entries &&
       database_entry < database->entries + database->entries_size);
size_t i_entry = database_entry - database->entries;
size_t count_to_move = (database->entries_size - 1) - i_entry;
size_t bytes_to_move = count_to_move * sizeof(spm_database_entry_t);
memmove(database_entry, database_entry + 1, bytes_to_move);
Most of the time, bytes_to_move will not be 0, but if it is 0 then memmove() will simply move 0 bytes and no harm done. So we can remove that if statement, unless you had something else inside it that needs doing only when the move happens.
Also, if you do it this way, and you are still getting the warning, you will get a line number that will point you right at what the compiler is worried about.
I suspect the issue relates to the fact that size_t, the type yielded by sizeof(spm_database_entry_t), is always an unsigned type (usually just a type synonym for unsigned int or unsigned long int, if I remember correctly). There's a theoretical possibility, however, that if the value of database_entry exceeds database->entries + database->entries_size, you'll end up multiplying a signed quantity by an unsigned type, raising the possibility of bugs or integer overflow. Normally, when signed and unsigned types get mixed like this, the smaller type is coerced to the larger type, or, if they're equally ranked, the signed type is coerced to the unsigned one. I don't know what the rest of your code looks like, so it's difficult to suggest an improvement.
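To make the signedness concrete: a pointer difference has the signed type ptrdiff_t, and multiplying it by a size_t converts it to unsigned. A minimal sketch (int stands in for spm_database_entry_t):
#include <stddef.h>
#include <stdio.h>
int main(void)
{
    int arr[8];
    int *removed = &arr[2];
    ptrdiff_t count = (arr + 8) - removed - 1;   /* signed pointer difference */
    size_t bytes = sizeof(int) * (size_t)count;  /* convert first, then multiply unsigned */
    printf("%zu bytes to move\n", bytes);
    return 0;
}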
As an exercise, I'd like to write a macro which tells me if an integer variable is signed. This is what I have so far and I get the results I expect if I try this on a char variable with gcc -fsigned-char or -funsigned-char.
#define ISVARSIGNED(V) (V = -1, (V < 0) ? 1 : 0)
Is this portable? Is there a way to do this without destroying the value of the variable?
#define ISVARSIGNED(V) ((V) < 0 || (-(V)) < 0 || ((V) - 1) < 0)
doesn't change the value of V. The third test handles the case where V == 0.
On my compiler (gcc/cygwin) this works for int and long but not for char or short.
#define ISVARSIGNED(V) ((V)-1<0 || -(V)-1<0)
also does the job in two tests.
If you're using GCC you can use the typeof keyword to not overwrite the value:
#define ISVARSIGNED(V) ({ typeof (V) _V = -1; _V < 0 ? 1 : 0; })
This creates a temporary variable, _V, that has the same type as V.
As for portability, I don't know. It will work on a two's complement machine (a.k.a. everything your code will ever run on in all probability), and I believe it will work on one's complement and sign-and-magnitude machines as well. As a side note, if you use typeof, you may want to cast -1 to typeof (V) to make it safer (i.e. less likely to trigger warnings).
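With that cast applied, the macro might read (still GCC-specific, relying on typeof and statement expressions):
#define ISVARSIGNED(V) ({ typeof (V) _V = (typeof (V))-1; _V < 0 ? 1 : 0; })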
#define ISVARSIGNED(V) ((-(V) < 0) != ((V) < 0))
Without destroying the variable's value. But doesn't work for 0 values.
What about:
#define ISVARSIGNED(V) (((V)-(V)-1) < 0)
This simple solution has no side effects and refers to v only once (which is important in a macro). We use the gcc extension typeof to get the type of v, and then cast -1 to this type:
#define IS_SIGNED_TYPE(v) ((typeof(v))-1 <= 0)
It's <= rather than just < to avoid compiler warnings for some cases (when enabled).
A different approach to all the "make it negative" answers:
#define ISVARSIGNED(V) (~(V^V)<0)
That way there's no need to have special cases for different values of V, since ∀ V ∈ ℤ, V^V = 0. (As noted for the other approaches, integer promotion makes this report signed for types narrower than int.)
A distinguishing characteristic of signed/unsigned math is that when you right shift a signed number, the most significant bit is copied. When you shift an unsigned number, the new bits are 0.
#define HIGH_BIT(n) ((n) & (1 << (sizeof(n) * CHAR_BIT - 1)))
#define IS_SIGNED(n) (HIGH_BIT(n) ? HIGH_BIT((n) >> 1) != 0 : HIGH_BIT(~(n) >> 1) != 0)
So basically, this macro uses a conditional expression to determine whether the high bit of a number is set. If it's not, the macro sets it by bitwise negating the number. We can't do an arithmetic negation because -0 == 0. We then shift right by 1 bit and test whether sign extension occurred.
This assumes 2's complement arithmetic, but that's usually a safe assumption.
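A quick self-contained test of these macros (repeated here so it compiles on its own; formally, shifting 1 into the sign bit is not well defined, but on the two's complement targets this answer assumes, it behaves as intended):
#include <limits.h>
#include <stdio.h>

#define HIGH_BIT(n) ((n) & (1 << (sizeof(n) * CHAR_BIT - 1)))
#define IS_SIGNED(n) (HIGH_BIT(n) ? HIGH_BIT((n) >> 1) != 0 : HIGH_BIT(~(n) >> 1) != 0)

int main(void)
{
    int s = 5;
    unsigned int u = 5;
    printf("%d %d\n", IS_SIGNED(s), IS_SIGNED(u)); /* prints 1 0 on typical targets */
    return 0;
}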
Why on earth do you need it to be a macro? Templates are great for this:
#include <limits>

template <typename T>
bool is_signed(T) {
    static_assert(std::numeric_limits<T>::is_specialized, "Specialize std::numeric_limits<T>");
    return std::numeric_limits<T>::is_signed;
}
Which will work out-of-the-box for all fundamental integral types. It will also fail at compile-time on pointers, which the version using only subtraction and comparison probably won't.
EDIT: Oops, the question requires C. Still, templates are the nice way :P
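That said, if C11 is available, _Generic offers a comparable compile-time dispatch in plain C. A sketch covering the common integer types (plain char is omitted because its signedness is implementation-defined; extend the list as needed):
#include <stdio.h>

#define IS_SIGNED_TYPE(v) _Generic((v), \
    signed char: 1, short: 1, int: 1, long: 1, long long: 1, \
    unsigned char: 0, unsigned short: 0, unsigned int: 0, \
    unsigned long: 0, unsigned long long: 0)

int main(void)
{
    int s = 0;
    unsigned int u = 0;
    printf("%d %d\n", IS_SIGNED_TYPE(s), IS_SIGNED_TYPE(u)); /* prints 1 0 */
    return 0;
}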