This question already has answers here:
void main() { if(sizeof(int) > -1) printf("true"); else printf("false"); ; [duplicate]
(3 answers)
Why sizeof(int) is not greater than -1? [duplicate]
(2 answers)
Closed 7 years ago.
Working on this little piece of code in VS2013, but for some reason it doesn't print. It seems that -1 > strlen(str).
Anyone got an idea what I'm doing wrong?
char *str = "abcd";
if (-1 < strlen(str))
    printf("The size of the string is %d", strlen(str));
return 0;
Anyone got an idea what I'm doing wrong?
strlen() returns a size_t, which is an unsigned integer type. -1 interpreted as an unsigned integer is a large value, so it ends up being greater than the length of your string. You can use the -Wsign-compare flag in gcc and some other compilers to warn you when you try to compare signed and unsigned values.
Also, it doesn't make much sense to compare the length of a string to -1. A length can never be negative; it's always going to be 0 or greater. So you'll probably want to rewrite your code to test against 0, or otherwise properly implement whatever condition you're trying to test for.
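For illustration, here is a minimal compilable sketch of that pitfall and the simple fix of comparing against 0 instead (the string literal and messages are just example values):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *str = "abcd";

    /* -1 is implicitly converted to size_t and becomes a huge value,
       so this test is false; -Wsign-compare flags exactly this */
    if (-1 < strlen(str))
        printf("never reached\n");

    /* comparing against 0 (or dropping the test entirely) avoids the
       mixed signed/unsigned comparison */
    if (strlen(str) > 0)
        printf("The size of the string is %zu\n", strlen(str));

    return 0;
}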
if(-1<strlen(str))
printf("The size of the string is %d", strlen(str));
In this code, you might reasonably expect the test to always succeed and the printf() to execute, since the length is always 0 or more. But you're finding that the test actually fails and the printf() never happens, because -1 is converted to an unsigned type so that it can be compared with a size_t. The easy solution is to remove the condition altogether: you know the test will always succeed, so there's no need for it. Just remove the if*:
printf("The size of the string is %zu", strlen(str));
*Also, change the print format specifier from %d to %zu since, as Matt McNabb pointed out in a comment, you're trying to print a size_t.
strlen(str) returns an unsigned integer. When comparing a signed integer with an unsigned integer, the compiler converts the signed value to unsigned. When converted to unsigned, -1 becomes 2^32 - 1 (assuming that strlen returns a 32-bit integer), which is greater than the length of the string you are comparing against.
strlen returns a value of type size_t. This is an unsigned integral type.
The rule in C when comparing a signed value with a value of the corresponding unsigned type is that the signed value is converted to the unsigned type.
If the values are of different sized types, for example if your system has 4-byte int and 8-byte size_t, then the rule is that the value of smaller type is converted to the value of the larger type.
Either way this means that -1 is converted to size_t, resulting in SIZE_MAX, which is a large positive value (unsigned types cannot hold negative values).
This large positive value is greater than the length of your string, so the less-than comparison returns false.
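A tiny sketch of that conversion, assuming a C99-or-later toolchain so that <stdint.h> and %zu are available:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    size_t converted = -1;   /* -1 wraps around to SIZE_MAX on conversion */
    printf("(size_t)-1 == SIZE_MAX? %s\n", converted == SIZE_MAX ? "yes" : "no");
    printf("SIZE_MAX = %zu\n", (size_t)SIZE_MAX);
    return 0;
}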
This question already has answers here:
How can (-1 >= sizeof(buffer)) ever be true? Program fail to get right results of comparison
(1 answer)
void main() { if(sizeof(int) > -1) printf("true"); else printf("false"); ; [duplicate]
(3 answers)
sizeof() operator in if-statement
(5 answers)
Why is sizeof(int) less than -1? [duplicate]
(3 answers)
Closed 2 years ago.
#include <stdio.h>

int main()
{
    if (sizeof(double) > -1)
        printf("M");
    else
        printf("m");
    return 0;
}
Size of double is greater than -1, so why is the output m?
This is because sizeof(double) yields a value of type size_t, which is an implementation-defined unsigned integer type of at least 16 bits. See the documentation on the sizeof operator and on size_t for more detail.
The -1 gets converted to an unsigned type for the comparison (0xffff...), which will always be bigger than sizeof(double).
To compare to a negative number, you can cast it like this: (int)sizeof(double) > -1. Depending on what your objective is, it may also be better to compare to 0 instead.
Your code has implementation defined behavior: sizeof(double) has type size_t, which is an unsigned type with at least 16 bits.
If this type is smaller than int, for example if size_t has 16 bits and int 32 bits, the comparison would be true, because size_t would be promoted to int and the comparison would use signed int arithmetic.
Yet on most current platforms size_t is the same size as unsigned int or larger. The rules for evaluating the expression specify that the type with the lesser rank is converted to the type with the higher rank, which means that the int is converted to the type of size_t and the comparison is performed using unsigned arithmetic. Converting -1 to an unsigned type produces the largest value of that type, which is guaranteed to be larger than the size of a double, so the comparison is false on these architectures, which is alas very confusing.
I have never seen an architecture with a size_t type smaller than int, but the C Standard allows it, so your code can behave both ways, depending on the characteristics of the target system.
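A small sketch that shows both directions of the rule, assuming a typical platform where unsigned short is narrower than int and size_t is at least as wide as unsigned int:

#include <stdio.h>

int main(void)
{
    unsigned short small = 8;

    /* unsigned short has lower rank than int, so it is promoted to (signed) int
       and the comparison uses signed arithmetic: 8 > -1 is true */
    printf("small > -1          : %s\n", small > -1 ? "true" : "false");

    /* sizeof yields a size_t; -1 is converted to size_t (a huge value),
       so the comparison is false */
    printf("sizeof(double) > -1 : %s\n", sizeof(double) > -1 ? "true" : "false");

    return 0;
}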
This question already has answers here:
Why does printf not print out just one byte when printing hex?
(5 answers)
Printing hexadecimal characters in C
(7 answers)
How to print 1 byte with printf?
(4 answers)
Closed 5 years ago.
Code
char a;
a = 0xf1;
printf("%x\n", a);
Output
fffffff1
printf() shows 4 bytes, even though a holds exactly one byte.
What is the reason for this behavior?
How can I correct it?
What is the reason for this behavior?
This question looks strangely similar to another I have answered; it even contains a similar value (0xfffffff1). In that answer, I provide some information required to understand what conversion happens when you pass a small value (such as a char) to a variadic function such as printf. There's no point repeating that information here.
If you inspect CHAR_MIN and CHAR_MAX from <limits.h>, you're likely to find that your char type is signed, and so 0xf1 does not fit as an integer value inside of a char.
Instead, it ends up being converted in an implementation-defined manner, which for the majority of us means it's likely to end up with one of the high-order bits becoming the sign bit. When these values are promoted to int (in order to be passed to printf), sign extension occurs to preserve the value: a char holding -1 is converted to an int holding -1, and likewise the underlying representation in your example is likely to be transformed from 0xf1 to 0xfffffff1.
printf("CHAR_MIN .. CHAR_MAX: %d .. %d\n", CHAR_MIN, CHAR_MAX);
printf("Does %d fit? %s\n", '\xFF', '\xFF' >= CHAR_MIN && '\xFF' <= CHAR_MAX ? "Yes!"
: "No!");
printf("%d %X\n", (char) -1, (char) -1); // Both of these get converted to int
printf("%d %X\n", -1, -1); // ... and so are equivalent to these
How can I correct it?
Declare a with a type that can fit the value 0xf1, for example int or unsigned char.
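A short sketch of that fix; either declaration below prints f1 as expected:

#include <stdio.h>

int main(void)
{
    unsigned char a = 0xf1;   /* 0xf1 (241) fits in an unsigned char */
    printf("%x\n", a);        /* promoted to int 241; prints "f1" */

    int b = 0xf1;             /* or simply use int */
    printf("%x\n", b);        /* prints "f1" */
    return 0;
}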
printf is a variable argument function, so the compiler does its best but cannot check strict compliance between format specifier and argument type.
Here you're passing a char with a %x (integer, hex) format specifier.
So the value gets promoted to a signed int. Because 0xf1 is greater than 127 and char is signed on most systems (and certainly on yours), a holds a negative value, and the sign is preserved during the promotion.
Either:
change a to int (simplest)
change a to unsigned char (as suggested by BLUEPIXY), which takes care of the sign in the promotion
change the format to %hhx as stated in the various docs (note that on my gcc 6.2.1 compiler hhx is not recognized, even though hx is); see the sketch after the warning below
Note that the compiler warns you, before printf is even reached, that you have a problem:
gcc -Wall -Wpedantic test.c
test.c: In function 'main':
test.c:6:5: warning: overflow in implicit constant conversion [-Woverflow]
a = 0xf1;
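Here is a rough sketch of the last two fixes from the list above (note that %hhx requires a C99-conforming printf):

#include <stdio.h>

int main(void)
{
    char a = (char)0xf1;   /* the explicit cast typically silences gcc's overflow warning;
                              the stored value is implementation-defined where char is signed */

    printf("%hhx\n", a);                /* C99: the promoted int is converted back to unsigned char -> "f1" */
    printf("%x\n", (unsigned char)a);   /* portable alternative: cast before printing -> "f1" */
    printf("%x\n", a & 0xff);           /* or mask off the low byte -> "f1" */
    return 0;
}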
You should use int a (or unsigned char a) instead of char a, because a plain char is only 1 byte and, on your system, it is signed, so it can only hold values from -128 to 127 and cannot store 0xf1 (241).
An int has 2 or 4 bytes of storage, so it can hold the hex value 0xf1 directly, which is why int works here.
This question already has answers here:
Unsigned int in C behaves negative
(8 answers)
Closed 8 years ago.
I want to ask what the difference is between these two cases.
Case 1:
unsigned int i;
for (i = 10; i >= 0; i--)
    printf("%d", i);
It will result in an infinite loop!
Case 2:
unsigned int a = -5;
printf("%d", a);
It will print -5 on the screen.
Now the reason for case 1 is that i is declared as unsigned int, so it cannot take negative values and will always be greater than or equal to 0.
But in case 2, if a cannot take negative values, why is -5 being printed?
What is the difference between these two cases?
The difference is that you are printing a as a signed integer.
printf("%d",a);
So while a may be unsigned, %d asks printf to interpret the underlying bits as a signed value. If you want to print it as an unsigned value, then use
printf("%u",a);
Most compilers will warn you about incompatible use of parameters to printf, so you could probably catch this by looking at all the warnings and fixing it.
When a negative value is assigned to an unsigned variable, it cannot hold that value as-is; instead, UINT_MAX + 1 is added to the value (repeatedly, if necessary) until it is in range, and you end up with a positive value.
Note that using wrong specifier to print a data type invokes undefined behavior.
See C11: 7.21.6 p(6):
If a conversion specification is invalid, the behavior is undefined.282)
unsigned int a=-5;
printf("%u",a); // use %u to print unsigned
will print the value UINT_MAX + 1 - 5, that is, UINT_MAX - 4.
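A quick sketch to confirm the arithmetic, assuming <limits.h> is available:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int a = -5;   /* reduced modulo UINT_MAX + 1 */
    printf("%u\n", a);     /* 4294967291 with a 32-bit unsigned int */
    printf("%s\n", a == UINT_MAX - 4 ? "a == UINT_MAX - 4" : "unexpected");
    return 0;
}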
i is declared unsigned so it can not take negative values
That is correct. Most optimizing compilers will notice that, drop the condition from the loop, and issue a warning.
In case 2, if a cannot take negative values, why is -5 being printed?
Because on your platform int and unsigned int have such a representation that assigning -5 to unsigned int and then passing it to printf preserves the representation of -5 in the form that lets printf produce the "correct" result.
This is true on your platform, but other platforms may be different. That is why the standard considers this behavior undefined (i.e. printf could produce any output at all). If you print using the unsigned format specifier, you will see a large positive number.
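If the intent in case 1 was to count from 10 down to 0, one common idiom that stays within unsigned arithmetic is sketched below:

#include <stdio.h>

int main(void)
{
    unsigned int i;

    /* "i-- > 0" tests before decrementing, so the body sees 10 down to 0
       and the loop terminates even though i can never go negative */
    for (i = 11; i-- > 0; )
        printf("%u ", i);
    printf("\n");
    return 0;
}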
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned int i;
    i = -12;
    printf("%d\n", i);
    system("pause");
    return 0;
}
I ran the above code in Visual Studio 2012. Because I know unsigned refers to nonnegative numbers, I expected the program to report an error. Why does it still run smoothly and print the output?
As 200_success alluded to, there are two things going on here that are combining to produce the correct output, despite the obvious problems of mixing unsigned and signed integer values.
First, the line i = -12 is implicitly converting the (signed) int value -12 to an unsigned int. The bits being stored in memory don't change. It's still 0xfffffff4, which is the two's-complement representation of -12. An unsigned int, however, ignores the sign bit (the uppermost bit) and instead treats it as part of the value, so as an unsigned int, this value (0xfffffff4) is interpreted as the number 4294967284. The bottom line here is that C has very loose rules about implicit conversion between signed and unsigned values, especially between integers of the same size. You can verify this by doing:
printf("%u\n", i);
This will print 4294967284.
The second thing that's going on here is that printf doesn't know anything about the arguments you've passed it other than what you tell it via the format string. This is essentially true for all functions in C that are defined with variable argument lists (e.g., int printf(const char *fmt, ...); ) This is because it is impossible for the compiler to know exactly what types of arguments might get passed into this function, so when the compiler generates the assembly code for calling such a function, it can't do type-checking. All it can do is determine the size of each argument, and push the appropriate number of bytes onto the stack. So when you do printf("%d\n", i);, the compiler is just pushing sizeof(unsigned int) bytes onto the stack. It can't do type checking because the function prototype for printf doesn't have any information about the types of any of the arguments, except for the first argument (fmt), which it knows is a const char *. Any subsequent arguments are just copied as generic blobs of a certain number of bytes.
Then, when printf gets called, it just looks at the first sizeof(unsigned int) bytes on the stack, and interprets them how you told it to. Namely, as a signed int value. And since the value stored in those bytes is still just 0xfffffff4, it prints -12.
Edit: Note that by stating that the value in memory is 0xfffffff4, I'm assuming that sizeof(unsigned int) on your machine is 4 bytes. It's possible that unsigned int is defined to be some other size on your machine. However, the same principles still apply, whether the value is 0xfff4 or 0xfffffffffffffff4, or whatever it may be.
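To see that it really is one bit pattern read two ways, here is a small, well-defined sketch; it sidesteps the format-specifier mismatch by copying the bytes into a signed int first, and the printed values assume a 32-bit, two's-complement int:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int u = -12;     /* stored as 0xfffffff4 on a 32-bit two's-complement machine */
    int s;

    memcpy(&s, &u, sizeof s); /* copy the same bytes into a signed int */
    printf("%u\n", u);        /* interpreted as unsigned: 4294967284 */
    printf("%d\n", s);        /* same bits interpreted as signed: -12 */
    return 0;
}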
This question is similar to this Objective C question. The short answer is, two wrongs make a right.
i = -12 is wrong, in that you are trying to store a negative number in an unsigned int.
printf("%d\n", i) is wrong, in that you are asking printf to interpret an unsigned int as a signed int.
Both of those statements should have resulted in compiler warnings. However, C will happily let you abuse the unsigned int as just a place to store some bits, which is what you've done.
i = -12; is well-defined. When you assign an out-of-range value to an unsigned int, the value is adjusted modulo UINT_MAX + 1 until it comes within range of the unsigned int.
For example if UINT_MAX is 65535, then i = -12 results in i having the value of 65536 - 12 which is 65524.
It is undefined behaviour to mismatch the argument types to printf. When you say %d you must supply an int (or a smaller type that promotes to int under the default argument promotions).
In practice what will usually happen is that the system interprets the bits used to represent the unsigned int as if they were bits used to represent a signed int; of course since it is UB this is not guaranteed to work or even be attempted.
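The 16-bit example above can be reproduced directly with unsigned short, which is 16 bits wide on most platforms (a hedged sketch, not guaranteed by the standard):

#include <stdio.h>

int main(void)
{
    unsigned short i = -12;        /* with a 16-bit unsigned short: 65536 - 12 */
    printf("%u\n", (unsigned)i);   /* prints 65524 on such a platform */
    return 0;
}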
You are indeed saving the -12 as an integer and telling printf (by using %d) that it is a normal int, so it interprets the contents of said variable as an int and prints a -12.
If you used %u in printf, you would see what you're really storing when the contents are interpreted as an unsigned int.
Printing may work (as explained by many above), but avoid using i in your code for further calculation: it will not retain the sign.
Here is a code snippet:
unsigned short a = -1;
unsigned char b = -1;
char c = -1;
unsigned int x = -1;
printf("%d %d %d %d", a, b, c, x);
Why is the output the following?
65535 255 -1 -1
Can anybody please explain this?
You are printing the values using %d which is for signed numbers. The value is "converted" to a signed number (it actually stays the same bitwise, but the first bit is interpreted differently).
As for unsigned char and short: they are also converted to a 32-bit int, so their values fit in it.
Had you cast x to long long and used %lld (passing it without the cast would be undefined behavior), it would have been printed as its large positive value.
Anyway, use %u for unsigned numbers.
How does it work?
The bit pattern of 255 is 11111111. If treated as an unsigned number, it is 255. If treated as a signed number, it is -1 (the first bit usually determines the sign).
When you pass the value to %d in printf, it is converted to a 32-bit integer, which looks like this: 00000000000000000000000011111111. Since the first bit is 0, the value is printed simply as 255. It works similarly for your short.
The situation is different for the 32-bit integer. It is immediately assigned the value 11111111111111111111111111111111, which stands for -1 in signed notation. And since you have used %d in your printf, it is interpreted as -1.
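For completeness, here is a sketch that prints each variable with a matching format specifier instead; the output assumes a 16-bit short, a 32-bit int, and a signed plain char, and %hhu requires C99:

#include <stdio.h>

int main(void)
{
    unsigned short a = -1;   /* 65535 with a 16-bit unsigned short */
    unsigned char  b = -1;   /* 255 */
    char           c = -1;   /* -1 where plain char is signed */
    unsigned int   x = -1;   /* UINT_MAX, 4294967295 with a 32-bit unsigned int */

    printf("%hu %hhu %d %u\n", a, b, c, x);   /* 65535 255 -1 4294967295 */
    return 0;
}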
Basically, you should not assign negative values to "unsigned" variables. You are trying to play tricks on the compiler, and who knows what it will do. Now, "char n = -1;" is OK because "n" can legitimately take on negative values and the compiler knows how to treat it.