I'm using shifts to optimize some math I have to do in a program written in C:
int h;
h = 104;
h = (h<<8)+(h<<6);
printf("offset: %d",h);
I get this result:
offset: -32256
When what I expect is
(104* (2^8) ) + (104 * (2^6)) = 33280
Can anyone explain why I'm getting the negative result, and what can I do to get the result I expect?
I'm using the Borland C compiler with DOSBox to run my program, if that's useful.
On the Borland C compiler, the size of int is 16 bits. So the statement
h = (h<<8)+(h<<6);
overflows a signed integer. The individual shift operations are fine as the results are within the limit of what a 16-bit signed integer can hold. But the addition results in overflow. Signed integer overflow is undefined behaviour in C.
It seems that, in the environment where the program was run, type int has a width of 2 bytes and its maximum value is +32767.
So when a positive value exceeds this limit, the sign bit gets set and you get a negative value.
You can check the maximum positive value that can be stored in an object of type int the following way:
#include <stdio.h>
#include <limits.h>
int main( void )
{
printf( "The maximum is %d\n", INT_MAX );
}
Instead of the type int you could use type unsigned int, or type long or unsigned long.
For example, the maximum value of type unsigned int provided by your compiler can be equal to 65535. That is enough to store the result of your expression.
I am now learning the C language and I have two tasks: define variables and print them. Here is my code:
#include <stdio.h>
int main()
{
printf("Excersise 1\n\n");
int cVarOne = -250;
int cVarTwo = 250;
int cVarThree = 4589498;
double cVarFour = 10000000000000000000;
double cVarFive = -9000000000000000000;
printf("%d\n%d\n%d\n%.0lf\n%.0lf\n\n", cVarOne,cVarTwo,cVarThree,cVarFour,cVarFive);
printf("Excersise Two\n\n");
unsigned short cVarSix = 43112;
int cVarSeven = -1357674;
int cVarEight = 1357674;
int cVarNine = -1357674000;
unsigned int cVarTen = 3657895000;
printf("%u\n%d\n%d\n%d\n%lu\n", cVarSeven, cVarEight, cVarNine, cVarTen);
return 0;
}
The problem is that for exercise 1 the following message appears:
Excersise 1
and for exercise two the same numbers that are set in the variables are not printed properly. Here is the result:
Excersise 2
I tried to use different types, but without result. Where am I wrong? What data types and specifiers should be used and why do these errors occur?
I am using this online compiler: https://www.onlinegdb.com/online_c_compiler
Thanks.
The warning in Exercise 2 is self-explanatory; simply use the correct conversion specifier for unsigned int, as taught in chapter 1 of your favourite C book. The rest of this answer is regarding Exercise 1:
Why you get the warning:
Integer constants, these things: 123, have a type just like variables.
10000000000000000000 is not a floating point type since it doesn't end with . (double) or .f (float), but a fixed point signed integer. The compiler will try to find the smallest signed integer type where the value will fit, but it fails to find one at all.
The reason is because 10,000,000,000,000,000,000 = 10^19 is larger than the largest signed 64 bit integer, which can only hold values up to 2^63 - 1 = 9.22*10^18.
The compiler notes however that the value might fit in a 64 bit unsigned integer since 2^64 -1 = 18.44*10^18, large enough. Hence it tries that, but gives you the warning for writing such smelly code.
Solution:
Use 10000000000000000000.0 which is of type double.
Or better yet, use 10e18 which is also double but readable by humans.
General best practices:
Never mix floating point and fixed point in the same expression.
Never mix float and double in the same expression. (Stay clear of float entirely on most mainstream systems like PC.)
When size and portability matters, never use the default integer types of C like int, long etc. Use the types from stdint.h instead.
Each integer type has a fixed size in bytes for the internal representation of values. As a result there are limits on the maximum and minimum values that can be stored in an integer type.
For example, for the type uintmax_t the compiler can report the following characteristics:
#include <stdio.h>
#include <inttypes.h>
int main( void )
{
uintmax_t x = UINTMAX_MAX;
printf( "sizeof( uintmax_t ) = %zu and the maximum value is %"
PRIuMAX "\n", sizeof( x ), x );
}
The program output is
sizeof( uintmax_t ) = 8 and the maximum value is 18446744073709551615
When you use an integer constant without a suffix, like 10000000000000000000, the compiler selects as the type of the constant the first signed integer type that can represent it. However, no signed integer type, including long long int, can represent this constant. So the compiler issues a message saying that such a constant can be represented only in an unsigned integer type.
You could use a constant with a suffix like
double cVarFour = 10000000000000000000llu;
The suffix determines that the constant has the type unsigned long long int.
As for the second message, you have to use conversion specifiers that match their arguments.
The conversion specifier lu is designed to output values of objects of the type unsigned long int. However the corresponding argument of the call of printf
printf("%u\n%d\n%d\n%d\n%lu\n", cVarSeven, cVarEight, cVarNine, cVarTen);
has the type unsigned int
unsigned int cVarTen = 3657895000;
So either use the conversion specifier u like
printf("%u\n%d\n%d\n%d\n%u\n", cVarSeven, cVarEight, cVarNine, cVarTen);
or cast the argument to the type unsigned long like
printf("%u\n%d\n%d\n%d\n%lu\n", cVarSeven, cVarEight, cVarNine, ( unsigned long )cVarTen);
Also the length modifier l in this format specification %.0lf has no effect. You could just write %.0f
I have a strange problem. I have a variable whose actual value is only ever negative (only negative integers are generated for this variable). But in the legacy code, an old colleague of mine used uint16 instead of a signed int to store the values of that variable. Now, if I want to print the actual negative value of that variable, how can I do that (what format specifier to use)? For example, if the actual value is -75, when I print it using %d it gives me a 5-digit positive value (I think it's because of two's complement). I want to print it as 75 or -75.
If a 16-bit negative integer was stored in a uint16_t called x, then the original value may be calculated as x-65536.
This can be printed with any of the following1:
printf("%ld", x-65536L);
printf("%d", (int) (x-65536));
int y = x-65536;
printf("%d", y);
Subtracting 65536 works because:
Per C 2018 6.5.16.1 2, the value of the right operand (a negative 16-bit integer) is converted to the type of the assignment expression (which is essentially the type of the left operand).
Per 6.3.1.3 2, the conversion to an unsigned integer operates by adding or subtracting “one more than the maximum value that can be represented in the new type”. For uint16_t, one more than its maximum is 65536.
With a 16-bit negative integer, adding 65536 once brings it into the range of a uint16_t.
Therefore, subtracting 65536 restores the original value.
Footnote
1 65536 will be long or int according to whether int is 16 bits or more, so these statements are careful to handle the type correctly. The first uses 65536L to ensure the type is long. The rest convert the value to int. This is safe because, although the type of x-65536 could be long, its value fits in an int—unless you are executing in a C implementation that limits int to −32767 to +32767, and the original value may be −32768, in which case you should stick to the first option.
If an int is 32 bits on your system, then the correct format for printing a 16-bit int is %hu for unsigned and %hd for signed.
Examine the output of:
uint16_t n = (uint16_t)-75;
printf("%d\n", (int)n); // Output: 65461
printf("%hu\n", n); // Output: 65461
printf("%hd\n", n); // Output: -75
#include <inttypes.h>
uint16_t foo = -75;
printf("==> %" PRId16 " <==\n", foo); // type mismatch, undefined behavior
Assuming your friend has somehow correctly put the bit representation of the signed integer into the unsigned integer, the only standard-compliant way to extract it back would be to use a union, as:
typedef union {
uint16_t input;
int16_t output;
} aliaser;
Now, in your code you can do -
aliaser x;
x.input = foo;
printf("%d", x.output);
Go ahead with the union, as explained by another answer. Just wanted to say that if you are using GCC, there's a very cool feature that allows you to do that sort of "bitwise" casting without writing much code:
printf("%d", ((union {uint16_t in; int16_t out;}) foo).out);
See https://gcc.gnu.org/onlinedocs/gcc-4.4.1/gcc/Cast-to-Union.html.
If u is an object of unsigned integer type and a negative number whose magnitude is within range of u's type is stored into it, storing -u to an object of u's type will leave it holding the magnitude of that negative number. This behavior does not depend upon how u is represented. For example, if u and v are 16-bit unsigned short, but int is 32 bits, then storing -60000 into u will leave it holding 5536 [the implementation will behave as though it adds 65536 to the value stored until it's within range of unsigned short]. Evaluating -u will yield -5536, and storing -5536 into v will leave it holding 60000.
I have tried to multiply two numbers, i.e. 100000 and 100000 + 1, in a C program. But I am not getting the correct output.
printf("%lld",(100000)*(100001));
I have tried the above code on different compilers, but I am getting the same 1410165408 instead of 10000100000.
Well, let's multiply
int64_t a = 100000;
int64_t b = 100001;
int64_t c = a * b;
And we'll get (binary)
1001010100000011010110101010100000 /* 10000100000 decimal */
but if you convert it to int32_t
int32_t d = (int32_t) c;
you'll get the last 32 bits only (throwing away the top two bits, 10):
01010100000011010110101010100000 /* 1410165408 decimal */
The simplest way out, probably, is to declare both constants as 64-bit values (the LL suffix stands for long long):
printf("%lld",(100000LL)*(100001LL));
In C, the type used for a calculation is determined by the types of the operands, not by the type you store the result in.
Plain integer constants such as 100000 are of type int, because they fit inside one. The result of 100000 * 100001 will however not fit, so you get integer overflow and undefined behavior. Switching to long won't necessarily solve anything, because it might be 32 bits too.
In addition, printing an int with the %lld format specifier is also undefined behavior on most systems.
The root of all evil here is the crappy default types in C (called "primitive data types" for a reason). Simply get rid of them and all their uncertainties, and all your bugs will go away with them:
#include <stdio.h>
#include <inttypes.h>
int main(void)
{
printf("%"PRIu64, (uint64_t)100000 * (uint64_t)100001);
return 0;
}
Or equivalent: UINT64_C(100000) * UINT64_C(100001).
Your two integers are int, which makes the result int too. That the printf() format specifier says %lld, which needs long long int, doesn't matter.
You can cast or use suffixes:
printf("%lld", 100000LL * 100001LL);
This prints 10000100000. Of course there's still a limit, since the number of bits in a long long int is still constant.
You can do it like this
long long int a = 100000;
long long int b = 100001;
printf("%lld",(a)*(b));
This will give the correct answer.
What you are doing is (100000)*(100001), i.e. by default the compiler treats 100000 as an int, multiplies it by 100001, and stores the result in an int.
But in the printf you then print that int as a long long int.
I am not very good at C language and just met a problem I don't understand. The code is:
int main()
{
unsigned int a = 100;
unsigned int b = 200;
float c = 2;
int result_i;
unsigned int result_u;
float result_f;
result_i = (a - b)*2;
result_u = (a - b);
result_f = (a-b)*c;
printf("%d\n", result_i);
printf("%d\n", result_u);
printf("%f\n", result_f);
return 0;
}
And the output is:
-200
-100
8589934592.000000
Program ended with exit code: 0
Since (a - b) is negative and a and b are of unsigned int type, I thought the value of (a - b) would be trivial. And after multiplying by the float number c, the result is 8589934592.000000. I have two questions:
First, why is the result non-trivial after multiplying by the int 2 and assigning to a variable of int type?
Second, why is result_u non-trivial even though (a - b) is negative and result_u is of unsigned int type?
I am using Xcode to test this code, and the compiler is the default APPLE LLVM 6.0.
Thanks!
Your assumption that a - b is negative is completely incorrect.
Since a and b have unsigned int type all arithmetic operations with these two variables are performed in the domain of unsigned int type. The same applies to mixed "unsigned int with int" arithmetic as well. Such operations implement modulo arithmetic, with the modulo being equal to UINT_MAX + 1.
This means that expression a - b produces a result of type unsigned int. It is a large positive value equal to UINT_MAX + 1 - 100. On a typical platform with 32-bit int it is 4294967296 - 100 = 4294967196.
Expression (a - b) * 2 also produces a result of type unsigned int. It is also a large positive value (UINT_MAX + 1 - 100 multiplied by 2 and taken modulo UINT_MAX + 1). On a typical platform it is 4294967096.
This latter value is too large for type int, which means that when you force it into the variable result_i, an out-of-range conversion to a signed type occurs. The result of that conversion is implementation-defined. In your case result_i ended up being -200. It looks "correct", but this is not guaranteed by the language. (Albeit it might be guaranteed by your implementation.)
Variable result_u receives the correct unsigned result - a positive value UINT_MAX + 1 - 100. But you print that result using %d format specifier in printf, instead of the proper %u. It is illegal to print unsigned int values that do not fit into the range of int using %d specifier. The behavior of your code is undefined for that reason. The -100 value you see in the output is just a manifestation of that undefined behavior. This output is formally meaningless, even though it appears "correct" at the first sight.
Finally, variable result_f receives the "proper" result of (a-b)*c expression, calculated without overflows, since the multiplication is performed in the float domain. What you see is that large positive value I mentioned above, multiplied by 2. It is likely rounded to the precision of float type though, which is implementation-defined. The exact value would be 4294967196 * 2 = 8589934392.
One can argue that the last value you printed is the only one that properly reflects the properties of unsigned arithmetic, i.e. it is "naturally" derived from the actual result of a - b.
You get negative numbers in the printf because you've asked it to print a signed integer with %d. Use %u if you want to see the actual value you ended up with. That will also show you how you ended up with the output for the float multiplication.
I was going through the existing code and when debugging the UTC time which is declared as
unsigned int utc_time;
I could get some positive integer every time, by which I would be sure that I got the time. But suddenly in the code I got a negative value for the variable, which is declared as an unsigned integer.
Please help me to understand what might be the reason.
Unsigned integers, by their very nature, can never be negative.
You may end up with a negative value if you cast it to a signed integer, or simply assign the value to a signed integer, or even incorrectly treat it as signed, such as with:
#include <stdio.h>
int main (void) {
unsigned int u = 3333333333u;
printf ("unsigned = %u, signed = %d\n", u, u);
return 0;
}
which outputs:
unsigned = 3333333333, signed = -961633963
on my 32-bit integer system.
When it's cast or treated as a signed type. You probably printed your unsigned int as an int, and the bit sequence of the unsigned would have corresponded to a negative signed value.
ie. Perhaps you did:
unsigned int utc_time;
...
printf("%d", utc_time);
Where %d is for signed integers, compared to %u which is used for unsigned. Anyway if you show us the code we'll be able to tell you for certain.
There's no notion of positive or negative in an unsigned variable.
Make sure you are using
printf("%u", utc_time);
to display it
In response to the comment: %u displays the variable as an unsigned int, whereas %i or %d will display the variable as a signed int.
Negative numbers in most (all?) C programs are represented in two's complement: the bitwise complement of the magnitude, plus one. It's possible that your debugger or a program listing the values doesn't show it as an unsigned type, so you see its two's complement.