C - unsigned int going negative (-ve)

What I know is that an unsigned int cannot take negative values.
If I take the maximum value of an unsigned int and increment it, I should get zero (the minimum value), and if I take the minimum value and decrement it, I should get the maximum value.
Then why is this happening?
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
int main()
{
    unsigned int ui;
    ui = UINT_MAX;
    ui++;
    printf("ui = %d", ui);
    ui = 0;
    ui--;
    printf("\n");
    printf("ui = %d", ui);
    return EXIT_SUCCESS;
}
Output:
ui = 0
ui = -1

From 'man 3 printf':
d, i The int argument is converted to signed decimal notation
So, although the type of ui is unsigned int, printf is interpreting it as a signed int and showing it as such.

That is because you are using the %d format specifier, which tells printf to treat your number as a signed integer.
Try using %u to output the unsigned value and you will get the desired result.
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
int main()
{
    unsigned int ui;
    ui = UINT_MAX;
    ui++;
    printf("ui = %u", ui);
    ui = 0;
    ui--;
    printf("\n");
    printf("ui = %u", ui);
    return EXIT_SUCCESS;
}
output:
ui = 0
ui = 4294967295
Check out the reference on possible format specifiers.
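For a quick, hedged reference, here is a small sketch covering the specifiers that come up in this thread (the values assume a 32-bit int and a 16-bit short, which is typical but not guaranteed):
#include <stdio.h>

int main(void)
{
    int            i  = -1;
    unsigned int   u  = 4294967295u;   /* UINT_MAX on a 32-bit unsigned int */
    short          h  = -1;
    unsigned short hu = 65535;         /* USHRT_MAX on a 16-bit unsigned short */

    printf("%d\n", i);    /* signed int          : -1 */
    printf("%u\n", u);    /* unsigned int        : 4294967295 */
    printf("%hd\n", h);   /* short               : -1 */
    printf("%hu\n", hu);  /* unsigned short      : 65535 */
    printf("%x\n", u);    /* unsigned int in hex : ffffffff */
    return 0;
}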

printf("%u") should be used for unsigned ints.

You pass the value to a variadic function (printf), so nothing about its signedness travels with the argument.
The %d in the format string controls how the value is displayed: because you selected %d, printf reinterprets the argument as a signed int. That's why you see the signed value that corresponds to the bit pattern FFFFFFFF¹.
¹ Assuming a 32-bit int.
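To make that concrete, here is a minimal sketch (assuming a 32-bit int) that prints the same bit pattern both ways; converting a value larger than INT_MAX to int is implementation-defined, so the -1 shown is the typical two's complement result, not a guarantee:
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int ui = UINT_MAX;   /* bit pattern 0xFFFFFFFF with a 32-bit int */

    printf("%u\n", ui);           /* 4294967295 */
    printf("%d\n", (int)ui);      /* typically -1 (implementation-defined conversion) */
    return 0;
}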


How do I sign extend and then find the value of the unsigned binary number in C?

I have a variable declared as an int
int a = -3;
I want the two's complement value sign-extended to 16 bits, so it becomes 1111 1111 1111 1101, and then to find the unsigned value of this number, which would be 65533 (I believe).
In other words, I want to go from -3 to 65533.
Following this answer: Sign extension from 16 to 32 bits in C I'm stuck on the first step. Here's a shortened version of my code:
#include <stdio.h>
#include <stdint.h>   /* for int16_t */
int main () {
    int s = -3;
    printf("%x\n", s);
    s = (int16_t)s;
    printf("%x\n", s);
    int16_t i = s;
    printf("%x\n", i);
    return 0;
}
I compile with gcc test.c and all three printf statements give "fffffffd"
Do you know why the cast isn't working and perhaps any better solutions to the original problem?
Where you appear to be struggling is in understanding exactly what bits you are dealing with when you interpret them as signed, unsigned, or exact-width types.
How do I sign extend and then find the value of the unsigned binary
number in C?
Answer: you don't -- the bits never change....
When you declare int a = -3;, the bits are set in memory. Thereafter, unless you change them, they are the exact same 32-bits (or whatever sizeof a is on your hardware).
The key to understanding sign extension is that it only applies when a negative value under a two's complement system is cast or assigned to a larger-size type. Otherwise, you are not extending anything; you are just interpreting the bits as a different type.
A short example for your problem will illustrate. Here int a = -3; is declared (defined) and the bits of a never change thereafter. However, you can interpret those bits (or, by loose analogy, look at them from a different viewpoint) and use them as short, unsigned short, int16_t or uint16_t, and in the unsigned cases also view them as a hexadecimal value, e.g.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>   /* PRId16, PRIu16, PRIx16 */
int main (void) {
    int a = -3;
    printf ("a (as int) : %d\n"
            "a (as short) : %hd\n"
            "a (as unsigned short) : %hu\n"
            "a (as unsigned short hex) : 0x%0x\n"
            "a (as PRId16) : %" PRId16 "\n"
            "a (as PRIu16) : %" PRIu16 "\n"
            "a (as PRIx16) : 0x%" PRIx16 "\n",
            a, (short)a, (unsigned short)a, (unsigned short)a,
            (int16_t)a, (uint16_t)a, (uint16_t)a);
    return 0;
}
Example Use/Output
$ ./bin/exact3
a (as int) : -3
a (as short) : -3
a (as unsigned short) : 65533
a (as unsigned short hex) : 0xfffd
a (as PRId16) : -3
a (as PRIu16) : 65533
a (as PRIx16) : 0xfffd
Look things over, and look at all the answers, and let us know if you have any further questions.
Your code causes undefined behaviour by using the wrong format specifier for the argument type in printf.
Here is some correct code:
#include <stdio.h>
#include <stdint.h>   /* for uint16_t */
int main(void)
{
    int s = -3;
    uint16_t u = s; // u now has the value 65533
    printf("%u\n", (unsigned int)u);
    printf("%x\n", (unsigned int)u);
}
The code to printf a uint16_t directly is slightly complicated, so it's simpler to cast to unsigned int and use %u or %x, which are the specifiers for unsigned int.
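For reference, here is a sketch of the less simple route via inttypes.h, which avoids the cast by letting the PRIu16/PRIx16 macros supply the right specifier:
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int s = -3;
    uint16_t u = (uint16_t)s;     /* conversion is modulo 2^16, so u is 65533 */

    printf("%" PRIu16 "\n", u);   /* 65533 */
    printf("%" PRIx16 "\n", u);   /* fffd */
    return 0;
}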
fffffffd is the correct two's complement bit pattern; it's just being printed in hex.
Use %d in your printf statements to see the signed value, and cast to uint16_t (printed with %hu) to see 65533:
#include <stdio.h>
#include <stdint.h>   /* int16_t, uint16_t */
int main () {
    int s = -3;
    printf("%d\n", s);              /* -3 */
    int16_t i = (int16_t)s;
    printf("%d\n", i);              /* still -3: sign-extended back to int */
    printf("%hu\n", (uint16_t)i);   /* 65533 */
    return 0;
}
and the last printf shows that 65533 value.
In C, casting a signed value to a wider type automatically extends its sign.
To get the unsigned equivalent you need to cast the value to the appropriate unsigned type:
#include <stdio.h>
#include <stdint.h>
int main ()
{
    int s = -3;
    printf("%x\n", s);
    int16_t s16 = (int16_t)s;
    printf("%hd\n", s16);
    uint16_t c16 = (uint16_t)s16;
    printf("%hu\n", c16);
}
The output I get:
fffffffd
-3
65533

Unexpected unsigned integer behavior

I encountered this unexpected output with the following code in which I was verifying the maximum values (represented in decimal form) of the unsigned forms of short and int types when all their bits were set to 1.
#include <stdio.h>
int main()
{
    unsigned int n1 = 0xFFFFFFFF;
    unsigned short n2 = 0xFFFF;
    printf("\nMax int = %+d", n1);
    printf("\nMax short = %+d", n2);
    return 0;
}
The output I get is (compiled using the Visual Studio 2017 C/C++ Compiler):
Max int = -1
Max short = +65535
Along the lines of unsigned short, I was expecting the maximum value of the unsigned int to be +4294967295. Why isn't it so?
You need to use %u for the format specifier for an unsigned type.
Using printf(), your conversions in the format string must match the type of the arguments, otherwise the behavior is undefined. %d is for int.
Try this for the maximum values:
#include <stdio.h>
int main()
{
    printf("Max unsigned int = %u\n", (unsigned)-1);
    printf("Max unsigned short = %hu\n", (unsigned short)-1);
    return 0;
}
Side notes:
The maximum value of any unsigned type is -1 cast to that type (see the sketch below).
Put a newline at the end of your output lines. Among other reasons, this flushes stdout's buffer under the default line-buffered setting.
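A small sketch that checks that first side note against the limits.h constants:
#include <stdio.h>
#include <limits.h>
#include <assert.h>

int main(void)
{
    /* -1 converted to any unsigned type yields that type's maximum value */
    assert((unsigned char)-1  == UCHAR_MAX);
    assert((unsigned short)-1 == USHRT_MAX);
    assert((unsigned int)-1   == UINT_MAX);
    assert((unsigned long)-1  == ULONG_MAX);

    printf("all checks passed\n");
    return 0;
}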

Using unsigned data types in C

How do I use unsigned int properly? My function unsigned int sub(int num1, int num2); doesn't work when the user input a is less than b. If I simply use int, I can expect a negative answer, and then I need to determine which is larger before subtracting. I know that's easy, but maybe there is a way to use unsigned int to avoid that?
For instance, when a = 7 and b = 4, the answer is 3.000000. But when I flip them, the code gives me 4294967296.000000, which I think is a memory address.
#include <stdio.h>
unsigned int sub(int num1, int num2) {
    unsigned int diff = 0;
    diff = num1 - num2;
    return diff;
}
int main() {
    printf("sub(7,4) = %u\n", sub(7,4));
    printf("sub(4,7) = %u\n", sub(4,7));
}
output:
sub(7,4) = 3
sub(4,7) = 4294967293
Unsigned numbers are unsigned... This means that they cannot be negative.
Instead they wrap (underflow or overflow).
If you do an experiment with an 8-bit unsigned, then you can see the effect of subtracting 1 from 0:
#include <stdio.h>
#include <stdint.h>
int main(void) {
uint8_t i;
i = 1;
printf("i: %hhu\n", i);
i -= 1;
printf("i: %hhu\n", i);
i -= 1;
printf("i: %hhu\n", i);
return 0;
}
i: 1
i: 0
i: 255
255 is the largest value that an 8-bit unsigned can hold (2^8 - 1).
We can then do the same experiment with a 32-bit unsigned, using your 4 - 7:
#include <stdio.h>
#include <stdint.h>
int main(void) {
    uint32_t i;
    i = 4;
    printf("i: %u\n", i);
    i -= 7;
    printf("i: %u\n", i);
    return 0;
}
i: 4
i: 4294967293
4294967293 is effectively 4 - 7 = -3, which wraps around to 2^32 - 3.
You also need to be careful of assigning an integer value (the return type of your sub() function) to a float... Generally this is something to avoid.
See below. x() returns 4294967293 as an unsigned int, but it is stored in a float... This float is then printed as 4294967296???
#include <stdio.h>
#include <stdint.h>
unsigned int x(void) {
    return 4294967293U;
}
int main(void) {
    float y;
    y = x();
    printf("y: %f\n", y);
    return 0;
}
y: 4294967296.000000
This is actually to do with the precision of float... it is impossible to store the exact value 4294967293 in a float, so it gets rounded to the nearest representable value, 4294967296.
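A minimal sketch of that precision limit (assuming the usual IEEE 754 single-precision float, which has a 24-bit significand, so integers above 2^24 get rounded):
#include <stdio.h>

int main(void)
{
    unsigned int u = 4294967293U;   /* 2^32 - 3 */
    float y = (float)u;             /* rounded to the nearest float, which is 2^32 */

    printf("u: %u\n", u);           /* 4294967293 */
    printf("y: %f\n", y);           /* 4294967296.000000 */
    return 0;
}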
What you see is called unsigned integer overflow. As you noted, unsigned integers can't hold negative numbers, so when you try to do something that would result in a negative number, seemingly weird things can occur.
If you work with 32-bit integers:
int (int32_t) can hold numbers between -2^31 and +2^31-1, i.e. INT_MIN and INT_MAX
unsigned int (uint32_t) can hold numbers between 0 and 2^32-1, i.e. 0 and UINT_MAX
When an operation on an int would produce a value outside that range, the behaviour is undefined (signed overflow); an unsigned int simply wraps around modulo 2^32, as sketched below.
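A sketch of those ranges and of the unsigned wrap-around (the values shown assume the common 32-bit int):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("INT_MIN  = %d\n", INT_MIN);    /* -2147483648 */
    printf("INT_MAX  = %d\n", INT_MAX);    /*  2147483647 */
    printf("UINT_MAX = %u\n", UINT_MAX);   /*  4294967295 */

    unsigned int u = UINT_MAX;
    u += 1;                                /* well-defined: wraps around to 0 */
    printf("UINT_MAX + 1 -> %u\n", u);
    return 0;
}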
unsigned int cannot be used to represent a negative value. I believe what you wanted is to find the absolute value of the difference between a and b. If so, you can use the abs() function from stdlib.h. The abs() function takes an int i and returns the absolute value of i.
The reason why unsigned int returns a huge number in your case is due to the way integers are represented. The subtraction num1 - num2 is performed in int arithmetic and yields -3, but when that value is converted to the unsigned int diff, the same sequence of bits that represents -3 is interpreted as 4294967293 instead.
The result would be a negative number, but unsigned data types don't hold negative numbers, so instead it becomes a large number.
"absolute value of the difference between a and b" does not work for many combinations of a,b. Any a-b that overflows is undefined behavior. Obviously abs(INT_MAX - INT_MIN) will not generate the correct answer.
Also, using abs() invokes undefined behavior with a select value of int. abs(INT_MIN) is undefined behavior when -INT_MIN is not representable as an int.
To calculate the absolute value difference of 2 int, subtract them as unsigned.
unsigned int abssub(int num1, int num2) {
    return (num1 > num2) ? (unsigned) num1 - num2 : (unsigned) num2 - num1;
}
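A quick usage sketch of that approach (abssub as defined above; the last value assumes a 32-bit int):
#include <stdio.h>
#include <limits.h>

unsigned int abssub(int num1, int num2) {
    return (num1 > num2) ? (unsigned) num1 - num2 : (unsigned) num2 - num1;
}

int main(void)
{
    printf("%u\n", abssub(7, 4));              /* 3 */
    printf("%u\n", abssub(4, 7));              /* 3 */
    printf("%u\n", abssub(INT_MAX, INT_MIN));  /* 4294967295: no overflow in unsigned arithmetic */
    return 0;
}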

Convert negative number from String to unsigned Long

I am trying to convert a negative number from a string to an unsigned long; however, I am getting an unexpected result due to the "-" sign.
Below is the code:
#include <stdio.h>
int main(void)
{
    char * pValue = "-1234657890000";
    unsigned long ValueLong;
    sscanf( pValue, "%llu", &ValueLong );
    printf ("%llu", ValueLong);
}
Instead of "-1234657890000" as the printf output, the code is displaying the value "18446742839051661616".
Any advice please?
Thank you
The value you provide is not guaranteed to fit in an unsigned long, and in any case -1234657890000 is not an unsigned value at all. Use long long instead and read it with the %lld conversion:
long long ValueLong;
sscanf( pValue, "%lld", &ValueLong );
You said it yourself: "convert a negative number to unsigned long". This is exactly what happens when you convert a negative value to an unsigned type: the compiler does what you told it to do.
You operate on the same internal (binary) representation of a value, the same sequence of zeroes and ones. Whether the type is signed or unsigned tells the compiler how to interpret the memory that holds the value referred to by a variable.
Try
#include <stdio.h>
int main(void)
{
    char * pValue = "-1234657890000";
    long ValueLong;
    sscanf( pValue, "%ld", &ValueLong );
    printf ("%ld\n", ValueLong);
    unsigned long v = (unsigned long)ValueLong;
    printf ("%lu\n", v);
}
You'll get
-1234657890000
18446742839051661616
Here is probably a more interesting example:
#include <stdio.h>
#include <limits.h>
int main(void)
{
    char * pValue = "-1";
    long ValueLong;
    sscanf( pValue, "%ld", &ValueLong );
    printf ("%ld\n", ValueLong);
    unsigned long v = (unsigned long)ValueLong;
    printf ("%x\n", (unsigned int)v);   /* cast: %x expects an unsigned int */
    printf ("%x\n", UINT_MAX);
}
And the output is
-1
ffffffff
ffffffff
Bear in mind that the output depends on the target platform.
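For example, here is a sketch of the same idea written with the matching l length modifier, so the output is well-defined (the values shown assume a 64-bit unsigned long):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned long v = (unsigned long)-1L;       /* all bits set: ULONG_MAX */

    printf("%lu\n", v);                         /* 18446744073709551615 with a 64-bit unsigned long */
    printf("%lx\n", v);                         /* ffffffffffffffff */
    printf("%lx\n", (unsigned long)ULONG_MAX);  /* same value */
    return 0;
}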
Strictly speaking, the range of an integer type depends upon the compiler. For a 16-bit compiler like Turbo C or Turbo C++ the range is -32768 to 32767; for a 32-bit compiler the range is greater. Either way, you are exceeding that range.
Thanks all, it is solved as per your suggestions, as below; the problem was the unsigned datatype, as you mentioned.
#include <stdio.h>
int main(void)
{
    char * pValue = "-1234657890000";
    long long ValueLong;
    sscanf( pValue, "%lld", &ValueLong );
    printf ("%lld", ValueLong);
}
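As an alternative sketch, strtoll from stdlib.h does the same conversion and also lets you check how much of the string was consumed:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char * pValue = "-1234657890000";
    char * end = NULL;

    long long ValueLong = strtoll(pValue, &end, 10);   /* base 10; end points past the parsed digits */
    if (end != pValue)
        printf("%lld\n", ValueLong);                   /* -1234657890000 */
    return 0;
}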

Function Returning Value of Type I don't Want it to Return

For a problem at school, I need to convert an ASCII string of character digits to a decimal value. I wrote a function to do this and specified the return type to be an unsigned short, as you can see in the code below.
#include <stdio.h>
unsigned short str_2_dec(char* input_string, int numel);
int main()
{
    short input;
    char input_string[6] = "65535";
    input = str_2_dec(input_string, 5);
    printf("Final Value: %d", input);
    return 0;
}
unsigned short str_2_dec(char* input_string, int numel)
{
    int factor = 1;
    unsigned short value = 0;
    int index;
    for (index = 0; index < (numel-1); index++)
    {
        factor *= 10;
    }
    for (index = numel; index > 0; index--)
    {
        printf("Digit: %d; Factor: %d; ", *(input_string+(numel-index))-48, factor);
        value += factor * ((*(input_string+(numel - index))-48));
        printf("value: %d\n\n", value);
        factor /= 10;
    }
    return value;
}
When running this code, the program prints -1 as the final value instead of 65535. It seems it's displaying the corresponding signed value anyway. Seems like something very simple, but I can't find an answer. A response would be greatly appreciated.
The return type of str_2_dec() is unsigned short, but you are storing the value in a (signed) short variable. You should declare your variables with the appropriate type, otherwise you will have problems like the one you have observed.
In this case, you converted "65535" to an unsigned short, which has the bit pattern 0xFFFF. That bit pattern was then reinterpreted as a (signed) short, which is the decimal value -1.
You should change your main() to something like this:
int main()
{
unsigned short input; /* to match the type the function is returning */
char input_string[6]= "65535";
input = str_2_dec(input_string, 5);
printf("Final Value: %hf", input);
return 0;
}
The problem is that you are taking the unsigned short return value of the function and storing it in a (signed) short variable, input. Since the value is outside the range representable in short, and since short is signed, this results in either an implementation-defined result or an implementation-defined signal being raised.
Change the type of input to unsigned short and everything will be fine.
You mean that it is printing input as if it were a (signed) short here?
short input;
...
printf("Final Value: %d", input);
Update: Since the hint doesn't seem to be catching, I will be more direct: Your declaration of input should be unsigned short input;.
You are using the wrong format specifier in printf. Try using %u instead of %d.
The problem isn't with the function but with how you are printing the return value.
printf("Final Value: %d", input);
The %d is a placeholder for the int type, not short.
Use %hu instead.
You didn't use the correct format specifier for the variable declared as
short input;
in
printf("final value=%d\n", input);
That is what makes the difference in your output.
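Putting the answers together, a minimal sketch of the corrected main (link it against str_2_dec exactly as defined in the question):
#include <stdio.h>

unsigned short str_2_dec(char* input_string, int numel);   /* defined in the question */

int main(void)
{
    unsigned short input;                  /* matches the function's return type */
    char input_string[6] = "65535";

    input = str_2_dec(input_string, 5);
    printf("Final Value: %hu\n", input);   /* %hu: unsigned short, prints 65535 */
    return 0;
}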
