#include <stdio.h>
int main(void)
{
short i = 1;
printf("%hd\n", i);
printf("%hd\n", i * i);
return 0;
}
Compiling this gives the warning "warning: format specifies type 'short' but the argument has type 'int' [-Wformat]" for the line printf("%hd\n", i * i);.
Removing the printf("%hd\n", i * i); statement produces no warning.
Have I done something wrong? It is my understanding that, since i has type short int, the expression i * i also has type short int, but that printf's argument undergoes integer promotion from short int to int, which printf then converts back to short again (due to the conversion specifier).
I don't see what is wrong with the code, though. Is the warning just telling me to be careful that an expression more complicated than a plain short int variable, passed as the argument for the %hd conversion specifier, might overflow? Thanks!
In i * i the arguments to the multiplication undergo "integer promotions" and "usual arithmetic conversions" and become int before the multiplication is done. The final result of the multiplication is an int.
The type of i * i is int (when type of i is int, short, char, or _Bool).
When you call printf with "%hd" the corresponding argument should be a short. What happens is that it is automatically converted to int and it's an int that printf sees. The internal code of printf will convert that int to a short to do its thing.
The problem arises if you start with an int value outside the range of short.
short i = SHRT_MAX;
int j = i * i;
short k = i * i; // overflow while assigning
printf("%hd %hd", j, k);
// ^ (kinda) overflow inside the innards of printf
I'm a student learning C and C++. I have a problem with the specifier %d: I don't understand the warning written in the console. It says:
The format %d expects argument of type 'int', but argument 2 has type 'long long unsigned int' [-Wformat]
Here is the code :
#include<stdio.h>
#include<stdlib.h>
int main()
{
short int u=1;
int v=2;
long int w=3;
char x='x';
float y=4;
double z=5;
long double a=6;
long b=7;
printf("short int:%d\n",sizeof(u));
printf("int:%d octets\n",sizeof(v));
printf("long int:%d octets\n",sizeof(w));
printf("char:%d octets\n",sizeof(x));
printf("float:%d octets\n",sizeof(y));
printf("double:%d octets\n",sizeof(z));
printf("long double:%d octets\n",sizeof(a));
printf("long:%d octets\n",sizeof(b));
return 0;
}
The type of the value returned by the operator sizeof is the unsigned integer type size_t. The conversion specifier d is used to output values of type int. You have to use the conversion specifier zu; otherwise you get undefined behavior.
For example
printf("short int:%zu\n",sizeof(u));
From the C Standard (7.21.6.1 The fprintf function)
9 If a conversion specification is invalid, the behavior is
undefined.275) If any argument is not the correct type for the
corresponding conversion specification, the behavior is undefined.
The sizeof operator returns a size_t type. This is always unsigned and, on your platform, is a long long unsigned int; on other platforms, it may be just unsigned long or, indeed, some other (unsigned) integer type.
Use the %zu format specifier for arguments of this type; this will work whatever the actual size_t type definition happens to be.
Below is code written in C using Code::Blocks:
#include <stdio.h>
void main() {
char c;
int i = 65;
c = i;
printf("%d\n", sizeof(c));
printf("%d\n", sizeof(i));
printf("%c", c);
}
Why, when variable c is assigned an int value (c = i), is there no need for a cast?
A cast is a way to explicitly force a conversion. You only need casts when no implicit conversions take place, or when you wish the result to have another type than what implicit conversion would yield.
In this case, the C standard requires an implicit conversion through the rule for the assignment operator (C11 6.5.16.1/2):
In simple assignment (=), the value of the right operand is converted to the type of the
assignment expression and replaces the value stored in the object designated by the left
operand.
char and int are both integer types. Which in turn means that in this case, the rules for converting integers are implicitly invoked:
6.3.1.3 Signed and unsigned integers
When a value with integer type is converted to another integer type other than _Bool, if
the value can be represented by the new type, it is unchanged.
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.
In your case, the new type char can be either signed or unsigned depending on the compiler. The value 65 can be represented by char regardless of signedness, so the first paragraph above applies and the value remains unchanged. For larger values, you might instead have ended up in the wraparound case (unsigned char) or the "value cannot be represented" case (signed char).
This is a valid conversion between integer types, so no cast is necessary.
Please note that strictly speaking, the result of sizeof(c) etc is type size_t and to print that one correctly with printf, you must use the %zu specifier.
This assignment is performed on compatible types, because char is essentially a single-byte integer, whereas int is usually a 4-byte integer type (machine-dependent). This (implicit) conversion does not require a cast, but you may lose some information in it (the higher bytes get truncated).
Let's examine your program:
char c; int i = 65; c = i; There is no need for a cast in this assignment because the integer type int of variable i is implicitly converted to the integer type of the destination. The value 65 can be represented by type char, so this assignment is fully defined.
printf("%d\n", sizeof(c)); the conversion specifier %d expects an int value. sizeof(c) has value 1 by definition, but with type size_t, which may be larger than int. You must use a cast here: printf("%d\n", (int)sizeof(c)); or possibly use a C99 specific conversion specifier: printf("%zu\n", sizeof(c));
printf("%d\n", sizeof(i)); Same remark as above. The output will be implementation defined but most current systems have 32-bit int and 8-bit bytes, so a size of 4.
printf("%c", c); Passing a char as a variable argument to printf first causes the char value to be promoted to int (or unsigned int) and passed as such. The promotion to unsigned int happens on extremely rare platforms where char is unsigned and has the same size as int. On other platforms, the cast is not needed as %c expects an int argument.
Note also that main must be defined with a return type int.
Here is a modified version:
#include <stdio.h>
#include <limits.h>
int main() {
char c;
int i = 65;
c = i;
printf("%d\n", (int)sizeof(c));
printf("%d\n", (int)sizeof(i));
#if CHAR_MAX == UINT_MAX
/* cast only needed on pathological platforms */
printf("%c\n", (int)c);
#else
printf("%c\n", c);
#endif
return 0;
}
I have the following program.
#include<stdio.h>
int main()
{
char a='b';
int b=11299;
char d[4]="abc";
printf("value of a is %d\n",a);
printf("value of b is %c\n",b);
printf("value of c is %d\n",*d);
char *c=d;
c=c+1;
printf("c is %d\n",*c);
}
I am a little confused by the %d format specifier. I was thinking that it would print 4 bytes of data. But from the above program (first and last printf) it is evident that it prints only one byte when a char argument is used. Why does %d print only one byte? How does it know how many bytes to print?
It doesn't print bytes; it prints values -- the value you pass as the corresponding argument -- provided the value passed has the right type.
In your example 1, the argument is the value 'b'. It initially has type char (because the expression a has type char) but variadic arguments are subject to default promotions, which promote any integer type with lower rank than int up to int. Thus, as an argument, the type is int.
In your example 3, the argument is the value 'a'. Likewise it initially has type char (because the expression *d has type char) but it gets promoted to int.
If promotions didn't happen and the types were wrong, though, printf still wouldn't "print fewer bytes". Your program would just have undefined behavior (so anything could happen). For example:
int a = 42;
printf("%lld\n", a); // undefined behavior because int does not
// get promoted implicitly to long long.
In your example 2, the %c format specifier expects an argument of type int; printf converts it to unsigned char and prints the corresponding character. Any value of int is acceptable; it doesn't have to already be a value in the range of unsigned char.
First, you're probably compiling for ISO C90, which forbids mixing declarations and code, as the warning says.
The declaration of the char pointer, char *c = d;, must be placed with the other declarations (for example, right after char d[4] = "abc";) and not in the middle of the code.
Second, main is declared as int main() but returns nothing; it should return a value to avoid a warning.
The revised program would be:
#include<stdio.h>
int main()
{
char a='b';
int b=11299;
char d[4]="abc";
char *c=d;
printf("value of a is %d\n",a);
printf("value of b is %c\n",b);
printf("value of c is %d\n",*d);
c=c+1;
printf("c is %d\n",*c);
return 0;
}
Why can't I use "long long int" with "int" in my C code?
int main(void) {
long long int a;
int b;
a = 1000000000;
b = 3200;
printf("long long int a = %d\n int b = %d", a, b);
return 0;
}
long long int a = 1000000000
int b = 0
You have to use the correct format specifier in the printf function
printf("long long int a = %lld\n int b = %d", a, b);
Otherwise the function behaviour is undefined.
It seems that in this situation the function reads the long long int value pushed onto the stack as if it were two separate objects of type int.
You can try this:
printf("long long int a = %lld\n int b = %d", a, b);
%d is used for int. If you want to print a long long int, you have to use %lld.
Using the wrong format specifier leads to undefined behavior. The right format specifier to print a long long int is %lld.
C99 standard says:
If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined
The type of a is long long int but you use %d to print it.
There are two possible reasons.
Firstly and most importantly, I see no evidence of your compiler and/or standard library supporting C99, and hence no evidence of your compiler and/or standard library supporting long long int and the corresponding format specifier %lld.
Secondly, you're lying to printf about the type of a; %d tells printf that the argument is of type int, even though it isn't (it's long long int). You should be using %lld...
Technically both of these would invoke undefined behaviour; using %d when your argument isn't an int is UB, and even if you were to correctly use %lld you'd be using an invalid format specifier in C89's printf (%lld is a C99 format specifier) and invoking UB as a result.
Have you considered using Clang in Visual Studio?
Is there a good way to get rid of the following warning? I know it's a type issue in that I'm passing an unsigned long pointer and not an unsigned long, but does printf somehow support pointers as arguments? The pedant in me would like to get rid of this warning. If not, how do you deal with printing dereferenced pointer values with printf?
#include <stdio.h>
int main (void) {
unsigned long *test = 1;
printf("%lu\n", (unsigned long*)test);
return 0;
}
warning: format specifies type 'unsigned long' but the argument has type 'unsigned long *' [-Wformat]
unsigned long *test = 1;
is not valid C. If you want to have a pointer to an object of value 1, you can do:
unsigned long a = 1;
unsigned long *test = &a;
or using a C99 compound literal:
unsigned long *test = &(unsigned long){1UL};
Now also:
printf("%lu\n", (unsigned long*)test);
is incorrect. You actually want:
printf("%lu\n", *test);
to print the value of the unsigned long object *test.
To print the test pointer value (in an implementation-defined way), you need:
printf("%p\n", (void *) test);
Strictly speaking, %p expects a void *, so the cast is what makes the call fully defined; in practice, on common platforms where all object pointers share one representation, passing any pointer type works. Using the %p conversion specifier is the safest (and, surprisingly, most portable) mechanism to print a pointer via printf.
#include <stdio.h>
int main (void) {
unsigned long ul = 1;
unsigned long *test = &ul;
printf("%p\n", test);
return 0;
}
EDIT: Oh yeah, and also be careful relying on the sizes of types like int, long, etc. These are implementation-dependent and can (and do) change across platforms and compilers. The C standard just says that long has to be at least as large as int. The POSIX standard's printf "length modifiers" don't specify size in bits, but rather by C type. Thus, you can't presume that sizeof(long) == sizeof(void*) (well, they make that assumption in the Linux kernel, but it's married to gcc on platforms where that is always true).