I have the following program:
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

int main(void) {
    uint16_t o = 100;
    uint32_t i1 = 30;
    uint32_t i2 = 20;
    o = (uint16_t) (o - (i1 - i2)); /* Case A */
    o -= (uint16_t) (i1 - i2);      /* Case B */
    (void)o;
    return 0;
}
Case A compiles with no errors.
Case B causes the following error:
error: conversion to ‘uint16_t’ from ‘int’ may alter its value [-Werror=conversion]
The warning options I'm using are:
-Werror -Werror=strict-prototypes -pedantic-errors -Wconversion -pedantic -Wall -Wextra -Wno-unused-function
I'm using GCC 4.9.2 on Ubuntu 15.04 64-bits.
Why do I get this error in Case B but not in Case A?
PS:
I ran the same example with the clang compiler and both cases compiled fine.
Integer promotion is a strange thing. Basically, all integer values of a type smaller than int are promoted to int so they can be operated on efficiently, and then converted back to the smaller type when stored. This is mandated by the C standard.
So, Case A really looks like this:
o = (uint16_t) ((int)o - ((uint32_t)i1 - (uint32_t)i2));
(Note that uint32_t does not fit in int, so it needs no promotion.)
And, Case B really looks like this:
o = (int)o - (int)(uint16_t) ((uint32_t)i1 - (uint32_t)i2);
The main difference is that Case A has an explicit cast, whereas Case B has an implicit conversion.
From the GCC manual:
-Wconversion
Warn for implicit conversions that may alter a value. ....
So, only Case B gets a warning.
Your case B is equivalent to:
o = o - (uint16_t) (i1 - i2); /*Case B*/
The result is an int, which may not fit in uint16_t, so, per your extreme warning options, it produces a warning (and thus an error, since you're treating warnings as errors).
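If you want to keep the compound-assignment idea but satisfy -Wconversion, one option (a minimal sketch, assuming the wrap-around result of Case A is what you want) is to make the final narrowing explicit:
#include <stdint.h>

void example(void) {
    uint16_t o = 100;
    uint32_t i1 = 30;
    uint32_t i2 = 20;
    /* The whole result is cast explicitly, so no implicit int -> uint16_t
       conversion remains and -Wconversion stays quiet. */
    o = (uint16_t)(o - (uint16_t)(i1 - i2));
    (void)o;
}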
Related
Given this snippet:
#include <inttypes.h>

uint8_t test(uint32_t foo)
{
    if (foo > 0xFFFFFFFF)
    {
        return 0;
    }
    if (foo < 0)
    {
        return 0;
    }
    return 1;
}
compiling with -Wall -Wextra using gcc 12.2 gives only the following warning:
warning: comparison of unsigned expression in '< 0' is always false [-Wtype-limits]
9 | if (foo < 0)
| ^
I don't understand why the first line if (foo > 0xFFFFFFFF) does not trigger the same warning.
Trying if (foo > UINT32_MAX) does not trigger any compiler warning.
Is it a bug or a feature?
These warnings are not mandated by the language rules; it's up to the compiler vendors to add them, or not. The fact that gcc warns you about the second if can be considered a courtesy. For this reason I believe we can't consider this a bug, unless you can spot this particular feature in the gcc documentation; that would change my view.
I checked clang for these warnings and sure enough it doesn't warn you at all, at least not with those flags:
https://godbolt.org/z/c7hbM3K5d
It's only when you try -Weverything that it warns you about both cases:
<source>:5:13: warning: result of comparison 'uint32_t' (aka 'unsigned int') > 4294967295 is always false [-Wtautological-type-limit-compare]
if (foo > 0xFFFFFFFF)
~~~ ^ ~~~~~~~~~~
<source>:9:13: warning: result of comparison of unsigned expression < 0 is always false [-Wtautological-unsigned-zero-compare]
if (foo < 0)
~~~ ^ ~
https://godbolt.org/z/9b9q13rTe
+1 for clang in my book.
If you want to see these warnings in your build, be sure to use the specific flags (-Wtautological-unsigned-zero-compare and -Wtautological-type-limit-compare); using -Weverything would be too verbose and would flag many situations you don't need, as mentioned in the comments.
Back to gcc: if you use a wider constant in the comparison, for example if (foo > 0xFFFFFFFF0), then you get your warning:
<source>:5:13: warning: comparison is always false due to limited range of data type [-Wtype-limits]
5 | if (foo > 0xFFFFFFFF0)
| ^
https://godbolt.org/z/Edvz7aq3b
We can assume they forgot to include the max value of uint32_t; it could be an off-by-one mistake, or the warning's implementation may focus on types rather than values; one can only speculate. You can file a report if you're adamant about having this particular warning work for UINT32_MAX, or at least get to the bottom of it and find out the reason for this behavior.
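If the motivation for the check is future-proofing against wider types, a sketch (using a hypothetical 64-bit variant of the function, test64) shows that doing the comparison in uint64_t makes the check against UINT32_MAX meaningful rather than tautological:
#include <stdint.h>

uint8_t test64(uint64_t foo)
{
    /* With a 64-bit operand this comparison can actually be true,
       so it is a real range check rather than a tautology. */
    if (foo > UINT32_MAX)
    {
        return 0;
    }
    return 1;
}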
I have written code like the following:
#include <stdio.h>

int main(void)
{
    int a = -1;
    unsigned int b = 0xffffffff;
    if (a == b)
        printf("a == b\n");
    else
        printf("a != b\n");
    printf("a = %x b = %x\n", a, b);
    return 0;
}
The result shows that a and b are equal. So I want to know: how does the computer make this judgement?
In any arithmetic operation with a signed int a and an unsigned int b as operands, a will be implicitly converted to unsigned. Since -1 converted to unsigned is 0xffffffff in this case, a and b compare equal.
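A sketch of how you might compare the two values by their mathematical meaning instead of relying on the implicit conversion (the guard on the sign of a is my addition, not from the question):
#include <stdio.h>

int main(void)
{
    int a = -1;
    unsigned int b = 0xffffffff;
    /* Check the sign before converting, so the comparison reflects the
       mathematical values rather than the bit patterns. */
    if (a >= 0 && (unsigned int)a == b)
        printf("a == b\n");
    else
        printf("a != b\n"); /* this branch is taken: -1 != 4294967295 */
    return 0;
}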
The machine representation of your two values a and b is the same bit pattern (on your particular computer and implementation), so the a == b test is true.
BTW, you should enable all warnings and debug info when compiling (e.g. compile with gcc -Wall -Wextra -g if using GCC...). You'll probably get some warnings, because you have probably hit some undefined behavior (printing an int with the %x conversion, for instance). And you could run your code step by step in your debugger (e.g. gdb) and query the values (and their machine representations).
When I declare a variable as float and subtract two hexadecimal numbers, I keep getting a different answer every time I compile and run the program, whereas if I declare an integer variable the result stays the same every time. I don't understand why storing the result in a float changes each time I compile, given that it is the difference of the same two numbers (0xFF0000 - 0xFF7FF).
#include <stdio.h>

int main(void)
{
    float BlocksLeft = 0xFF0000 - 0xFF7FF;
    int BLeft = 0xFF0000 - 0xFF7FF;
    printf("%08x\n", BlocksLeft);
    printf("%08x\n", BLeft);
}
The following line is incorrect:
printf("%08x\n", BlocksLeft);
The %x format tells the compiler that the argument you give is an unsigned int, while the float argument is actually passed as a double (variadic arguments undergo the default promotions). This leads to undefined behavior. I tried to compile your code and I got:
>gcc -Wall -Wextra -Werror -std=gnu99 -o stackoverflow.exe stackoverflow.c
stackoverflow.c: In function 'main':
stackoverflow.c:15:4: error: format '%x' expects argument of type 'unsigned int', but argument 2 has type 'double' [-Werror=format=]
printf("%08x\n", BlocksLeft);
^
Please try to compile with a stronger warning level, at least -Wall.
You can correct your program this way, for instance:
#include <stdio.h>

int main(void)
{
    float BlocksLeft = 0xFF0000 - 0xFF7FF;
    int BLeft = 0xFF0000 - 0xFF7FF;
    printf("%08x\n", (int) BlocksLeft); // Works because BlocksLeft's value is non-negative
    // or
    printf("%08x\n", (unsigned int) BlocksLeft);
    // or
    printf("%.8e\n", BlocksLeft);
    printf("%08x\n", BLeft);
}
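If the intent was to inspect the bit pattern of the float itself, a possible approach (a sketch, assuming IEEE-754 single precision) is to copy the bytes into a uint32_t instead of misdeclaring the argument to printf:
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <inttypes.h>

int main(void)
{
    float BlocksLeft = 0xFF0000 - 0xFF7FF;
    uint32_t bits;
    /* memcpy is the well-defined way to reinterpret the bytes */
    memcpy(&bits, &BlocksLeft, sizeof bits);
    printf("%08" PRIx32 "\n", bits); /* prints the IEEE-754 encoding */
    return 0;
}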
This warning should not appear for this code, should it?
#include <stdio.h>

int main(void) {
    unsigned char x = 5;
    unsigned char y = 4;
    unsigned int z = 3;
    puts((z >= x - y) ? "A" : "B");
    return 0;
}
z is a different size, but it has the same signedness. Is there something about integer conversions that I'm not aware of? Here's the gcc output:
$ gcc -o test test.c -Wsign-compare
test.c: In function ‘main’:
test.c:10:10: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
puts((z >= x - y) ? "A" : "B");
^
$ gcc --version
gcc (Debian 4.9.1-15) 4.9.1
If z is an unsigned char I do not get the error.
The issue is that the additive operators perform the usual arithmetic conversions on arithmetic types. In this case that results in the integer promotions being performed on the operands, which converts unsigned char to int, since signed int can represent all the values of unsigned char.
A related thread Why must a short be converted to an int before arithmetic operations in C and C++? explains the rationale for promotions.
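A tiny sketch of the promotion at work: subtracting two unsigned chars is carried out in int, so the result can be negative:
#include <stdio.h>

int main(void)
{
    unsigned char x = 4;
    unsigned char y = 5;
    /* Both operands are promoted to int, so the result is the int -1,
       not a large unsigned value. */
    int d = x - y;
    printf("%d\n", d); /* prints -1 */
    return 0;
}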
C has this concept called "Integer Promotion".
Basically it means that all maths is done in signed int unless you really insist otherwise, or it doesn't fit.
If I put in the implicit conversions, your example actually reads like this:
puts((z >= (int)x - (int)y) ? "A" : "B");
So, now you see the signed/unsigned mismatch.
Unfortunately, you can't safely correct this problem using casts alone. There are a few options:
puts((z >= (unsigned int)(x - y)) ? "A" : "B");
or
puts((z >= (unsigned int)x - (unsigned int)y) ? "A" : "B");
or
puts(((int)z >= x - y) ? "A" : "B");
But they all suffer from the same problem: what if y is larger than x, and what if z is larger than INT_MAX (not that it can happen in this example)?
A properly correct solution might look like this:
puts((y > x || z >= (unsigned)(x - y)) ? "A" : "B");
In the end, unless you really need the extra bit, it is usually best to avoid unsigned integers.
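A sketch illustrating the y-larger-than-x pitfall mentioned above, contrasting the naive cast with the guarded version:
#include <stdio.h>

int main(void)
{
    unsigned char x = 4;   /* note: y is larger than x here */
    unsigned char y = 5;
    unsigned int z = 3;
    /* x - y is the int -1; casting it to unsigned wraps to UINT_MAX,
       so the naive comparison gives the surprising answer. */
    puts((z >= (unsigned int)(x - y)) ? "A" : "B");          /* prints B */
    /* The guarded version short-circuits when the difference is negative. */
    puts((y > x || z >= (unsigned int)(x - y)) ? "A" : "B"); /* prints A */
    return 0;
}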
I'm trying to convince gcc (4.8.1) or clang (3.4) to vectorize the following code on an Ivy Bridge processor:
#include "stdlib.h"
#include "math.h"
float sumsqr(float *v, float mean, size_t n) {
float ret = 0;
for(size_t i = 0; i < n; i++) {
ret += pow((v[i] - mean), 2);
}
return ret;
}
and compiling it without success
$ gcc -std=c99 -O3 -march=native -mtune=native -ffast-math -S foo.c
Is there a way to modify the code (without using intrinsics) or modify the gcc invocation in order to obtain vectorized code?
The pow function is very general, and it may not be visible to the compiler what it does (remember that it can compute things like pow(1.8, -3.19)). So it might help to use only built-in operations and not make function calls:
for (size_t i = 0; i < n; i++)
{
    float const x = v[i] - mean;
    ret += x * x;
}
First, don't use pow if you don't have to; plain multiplication lets gcc vectorize. Now, to explain why you are getting this behavior: notice that if you replace pow with powf, gcc manages to vectorize. gcc knows that pow(x, 2) is x*x, but the issue here is that pow is a function for double. So the compiler must convert the number v[i] - mean to double, compute the square as a double, add it to ret as a double, and only then convert to float. If at least ret were a double, the compiler could vectorize, but as it is, all those conversions make the loop too complicated and not worth vectorizing.
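Based on that reasoning, a sketch of the two variants that should let gcc vectorize under the same flags (not verified here; the function names are mine):
#include <stdlib.h>
#include <math.h>

/* All-float version: powf keeps the arithmetic in float,
   with no round trips through double. */
float sumsqr_powf(float *v, float mean, size_t n) {
    float ret = 0;
    for (size_t i = 0; i < n; i++) {
        ret += powf(v[i] - mean, 2);
    }
    return ret;
}

/* Double accumulator: the double result of pow is added
   without narrowing back to float on every iteration. */
double sumsqr_d(float *v, float mean, size_t n) {
    double ret = 0;
    for (size_t i = 0; i < n; i++) {
        ret += pow(v[i] - mean, 2);
    }
    return ret;
}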