Why do different format specifiers give different outputs with printf? [closed] - c

I want to know why the output of this code is 0.
y.beta and p->beta are the same variable, so shouldn't the output be 3?
#include <stdio.h>

int main() {
    struct MyType {
        int alpha;
        double beta;
    };
    static struct MyType y;
    struct MyType *p = &y;
    y.alpha = 1;
    p->alpha = 2;
    p->beta = 2.5;
    y.beta = 1.5;
    printf("%d", y.beta + p->beta);
    return 0;
}

You invoke undefined behavior by passing the wrong type of argument to printf:
printf("%d", y.beta + p->beta);
%d expects an int to be passed, but y.beta + p->beta is a double value.
Change the code to:
printf("%f\n", y.beta + p->beta);
Undefined behavior means anything can happen; it is futile to try to understand why it prints 0. Something else may happen on a different machine, at a different time, or even depending on whether it is raining or whether the boss is coming ;-)
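For completeness, the corrected program would look like this (a minimal sketch; the only change is the format specifier, plus a newline for readability):

#include <stdio.h>

int main(void) {
    struct MyType {
        int alpha;
        double beta;
    };
    static struct MyType y;
    struct MyType *p = &y;

    y.alpha = 1;
    p->alpha = 2;
    p->beta = 2.5;
    y.beta = 1.5;

    /* y and *p are the same object, so the assignment y.beta = 1.5
       overwrote the earlier 2.5; the sum is 1.5 + 1.5 = 3.0 */
    printf("%f\n", y.beta + p->beta);   /* prints 3.000000 */
    return 0;
}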

As chqrlie correctly pointed out, printf("%d", somevariable) expects that the variable is passed as an int, whereas your result is a double type value.
%d is called a format specifier. Format specifiers are required because different data types are passed and interpreted in different ways by the processor, possibly placed in different locations in memory or in different registers (as is the case on the x86_64 platform). So even the same memory location, with the same bit pattern, may be interpreted very differently depending on the data type. That is what is happening in this other example:
#include <stdio.h>

int main() {
    int a = 1048577;
    printf("%d\n", a);
    printf("%hd", a); // %hd is the format specifier for short int
    return 0;
}
output:
1048577
1
As you see, the same variable a is interpreted differently based on what data type you specify.
Why is it so?
This is the binary representation of 1048577 (assuming 32-bit int):
00000000 00010000 00000000 00000001
If the format specifier is %hd (short int) then, since shorts are 16 bits wide, only the 16 low-order bits of the value are used, hence the output is 1.
Something similar may be happening in your case on some architectures (32-bit x86), or something worse on others where the double value is passed in a different way: printf gets the int from a location where the caller wrote nothing specific, and any random value could happen to be there, such as 0.
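Coming back to the %hd example, the truncation can also be reproduced explicitly; a minimal sketch (the cast's result is implementation-defined for values that do not fit in a short, but on a typical two's-complement platform it keeps the low 16 bits):

#include <stdio.h>

int main(void) {
    int a = 1048577;            /* binary: 00000000 00010000 00000000 00000001 */

    short low = (short)a;       /* implementation-defined, but typically keeps the 16 low-order bits */
    int masked = a & 0xFFFF;    /* the same truncation written as an explicit mask */

    printf("%hd\n", low);       /* typically prints 1 */
    printf("%d\n", masked);     /* prints 1 */
    return 0;
}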
The solution is to modify your code to
printf("%f", y.beta + p->beta);
as pointed out by chqrlie.

Related

Signed long to float and then to signed short [closed]

Here is my code:
#include <stdio.h>
typedef signed long int32_t;
typedef signed short int16_t;
int main() {
    int32_t var1 = 1200;
    int16_t var2;
    var2 = (int16_t)((float)var1 / 1000.0);
    printf("Hello World: %d", var2); // prints 1 should print 1.2
    return 0;
}
I am typecasting between data types in C. I am trying to get the value of var2 as 1.2 in the signed short, but I get the value 1. I have to use a 16-bit register and I cannot use a 32-bit float.
printf("Hello World: %d", var2); // prints 1 should print 1.2
No, it should not.
(float)var1 converts to float.
(float)var1 / 1000.0 - the result is 1.2.
(int16_t)1.2 - converts to an integer, and the result is 1.
BTW you can't print 1.2 using the %d format. To be 100% correct you should use the %hd format to print a short integer.
Casting does not do a binary copy; it only converts between the types.
var2 is a signed short type and can only contain integer values. If you assign a decimal number to it, the decimal part (0.2) is truncated and only the integer part (1) is retained. I hope I was helpful. Have a nice day!
You already have the value as an integer in var1 (1200 fits in 16 bits); this representation is called a "scaled integer". Just do the conversion when you need to print the value.
printf("%f\n", (float)(var1/1000.0));

Why does my program repeat forever instead of giving the maximum integer value? [closed]

I am currently learning C, and I made a program that I thought would give me the maximum integer value, but it just loops forever. I can't find an answer as to why it won't end.
#include <stdio.h>

int main()
{
    unsigned int i = 0;
    signed int j = i;
    for (; i + 1 == ++j; i++);
    printf(i);
    return 0;
}
Any help would be appreciated, thank you!
Your code has undefined behavior. And that's not because of unsigned integer overflow (it wraps when the value is too big). It is the signed integer overflow that is undefined behavior.
Also note that if your intention is to know the maximum value that an int can hold, use the macro INT_MAX.
maximum value for an object of type int
INT_MAX  +32767  // 2^15 - 1
You should write the printf call properly (pass the format string first, then the other arguments as specified by the format specifiers).
I thought would give me the maximum integer value
The maximum signed int value cannot be portably found experimentally without risking undefined behavior (UB). In OP's code, ++j eventually overflows (UB). UB includes looping forever.
As #coderredoc well answered, instead use printf("%d\n", INT_MAX);
To find the maximum unsigned value:
printf("%u\n", UINT_MAX);
// or
unsigned maxu = -1u;
printf("%u\n", maxu);

a is a double, printf("%d", a); works differently on IA-32 and x86-64 [closed]

Why does the following code work totally differently on IA-32 and x86-64?
#include <stdio.h>
int main() {
    double a = 10;
    printf("a = %d\n", a);
    return 0;
}
On IA-32, the result is always 0.
However, on x86-64 the result can be anything between INT_MIN and INT_MAX.
%d actually is used for printing int. Historically the d stood for "decimal", to contrast with o for octal and x for hexadecimal.
For printing double you should use %e, %f or %g.
Using the wrong format specifier causes undefined behaviour which means anything may happen, including unexpected results.
Passing an argument to printf() that doesn't match the format specifiers in the format string is undefined behaviour... and with undefined behaviour, anything could happen and the results aren't necessarily consistent from one instance to another -- so it should be avoided.
As for why you see a difference between x86 (32-bit) and x86-64, it's likely because of differences in the way parameters are passed in each case.
In the x86 case, the arguments to printf() are likely being passed on the stack, aligned on 4-byte boundaries -- so when printf() processes the %d specifier it reads a 4-byte int from the stack, which is actually the lower 4 bytes of a. Since a was 10, those bytes have no bits set, so they're interpreted as an int value of 0.
In the x86-64 case, the arguments to printf() are all passed in registers (though some would be on the stack if there were enough of them)... but double arguments are passed in different registers than int arguments (such as %xmm0 instead of %rsi). So when printf() tries to process an int argument to match the %d specifier, it takes it from a different register than the one a was passed in, uses whatever garbage value was left in that register instead of the lower bytes of a, and interprets it as some garbage int value.
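To illustrate both points, here is a sketch (it assumes double is a 64-bit IEEE-754 type): it prints the value correctly, converts explicitly when an int is really wanted, and shows that the low 32 bits of 10.0 happen to be all zero, which is why the IA-32 version prints 0.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double a = 10;

    printf("a = %f\n", a);        /* correct: prints a = 10.000000 */
    printf("a = %d\n", (int)a);   /* explicit conversion if an int is really wanted */

    /* 10.0 is 0x4024000000000000 in IEEE-754 binary64; its low 32 bits are zero,
       so reading 4 bytes of it as an int (as effectively happens on IA-32 here) gives 0 */
    uint64_t bits;
    memcpy(&bits, &a, sizeof bits);
    printf("bits = 0x%016llx\n", (unsigned long long)bits);
    return 0;
}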

C - Why this strange output in printf() [closed]

I am confused by the strange output of the following C program.
I am using the Turbo C and Dev-C++ compilers.
I would be really pleased if someone could help me out with this.
Program
#include <stdio.h>
#include <conio.h>

int main()
{
    clrscr();
    printf("%d", "hb");
    printf("%d", "abcde" - "abcde");
    // Output is -6 why?
    return 0;
}
Outputs
For TurboC
printf("%d","hb");
//Output is 173 Why ?
// No matter what I write in place of "hb" the output is always 173
printf("%d","abcde"-"abcde");
//Output is -6 why ?
For Dev C
printf("%d","hb");
//Output is 4210688 Why ?
// No matter what I write in place of "hb" the output is always 4210688
printf("%d","abcde"-"abcde");
//Output is 0 why ?
Here, you are passing the memory address of a string literal (a char*):
printf("%d","hb");
However, the specifier which should be used is %p (standing for pointer):
printf("%p\n", "hb"); // The output is in hexadecimal
This will ensure that printf uses the same representation size when displaying the value as was used when it was passed to printf. Using %d (the int specifier) results in undefined behaviour when sizeof(int) is not the same as sizeof(char *), and even if the sizes were equal, using %d may print negative values (whenever the most significant bit, which is the sign bit of an int, happens to be set).
As for any memory address, you can't expect it to be the same after the program was recompiled, and even less when using different toolchains.
When the output stayed the same after replacing the "hb" literal with another one, it means the new literal was allocated at the same address.
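Note that %p formally expects a void *, so for strict correctness the pointer should be cast; a minimal sketch:

#include <stdio.h>

int main(void) {
    /* a string literal decays to char *; %p expects void *, hence the cast */
    printf("%p\n", (void *)"hb");   /* the printed address varies between builds and toolchains */
    return 0;
}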
Here, two pointers to string literals are subtracted:
printf("%d","abcde"-"abcde");
The result of subtracting two pointers is the number of elements of that type between the addresses pointed by them. But please note, the behaviour is only defined when the pointers point to elements from the same array, or to the first element just after the end of the array.
Again, %d may not be the right specifier to be used. An integer type with its size at least equal to the pointer type may be used, maybe long long int (this should be checked against the specific platform). A subtraction overflow may still happen, or the result may not fit into the cast type, and then the behaviour is again undefined.
char *p1, *p2; // These should be initialized and NOT point to different arrays
printf("%lld\n", (long long int)(p1 - p2));
Also note that the C standard library provides stddef.h, which defines the ptrdiff_t type for storing a pointer difference. See this: C: Which character should be used for ptrdiff_t in printf?
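A minimal sketch of a well-defined pointer subtraction, using ptrdiff_t and its %td conversion specifier (the array here is only illustrative):

#include <stddef.h>
#include <stdio.h>

int main(void) {
    const char text[] = "abcde";
    const char *start = &text[0];
    const char *end = &text[5];     /* points at the terminating '\0', still within the array */

    ptrdiff_t diff = end - start;   /* well defined: both pointers are into the same array */
    printf("%td\n", diff);          /* prints 5 */
    return 0;
}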
Note: As there are two different char arrays, the pointer subtraction is undefined, and therefore information below is only based on assumptions, and presented only because the OP mentioned that this was an exam question.
In our case, as sizeof(char) is 1, the difference is exactly the difference in bytes. The difference of -6 tells us that the two identical "abcde" literals were placed next to each other in memory, with the first one 6 bytes before the second. The literal includes the string terminator, so its size is 6.
The other thing that can be deduced from this is that the compiler used by Dev-C++ was "smarter" (or was given different optimization options) and created a single copy in memory for the "abcde" literal, hence the difference of 0.
A string literal is usually placed in the read-only memory, and the program should not try to modify it, so if the compiler (or the linker in some cases) can "detect" a duplicate, it may reuse the previous literal instead.
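Whether two identical literals share one copy is unspecified, so it can only be checked empirically; a small sketch of such a check:

#include <stdio.h>

int main(void) {
    /* comparing the addresses of two identical literals shows whether this
       particular compiler pooled them into a single copy (the result is unspecified) */
    if ("abcde" == "abcde")
        puts("literals share one copy (pooled)");
    else
        puts("literals are two separate copies");
    return 0;
}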

Floating point to string representation [closed]

Consider the following code snippet:
char str[1000];
float b;
b = 0.0615;
sprintf(&(str[0]), "%1.0e", b);
After the execution of the last statement, I expected str to contain 6.15e-2. However, I am getting the value 5e-315.
Where am I going wrong? How do I get the expected value?
You cannot get two digits of precision with that format string: the .0 after the 1 asks for zero digits after the decimal point, so only a single significant digit is printed.
What works for me is
#include <stdio.h>
#include <string.h>

int main() {
    char str[1000];
    float b;
    b = 0.0615;
    sprintf(&(str[0]), "%.2e", b);
    puts(str);
}
prints 6.15e-02
The almighty C/C++ documentation says:
.number:
For a, A, e, E, f and F specifiers: this is the number of digits to be
printed after the decimal point (by default, this is 6).
My bet is you forgot to include stdio.h.
There seems to be a mismatch between what type the compiler passes to sprintf and what sprintf actually reads (as described in cmaster's answer).
Apparently your compiler does not realize that sprintf() takes all floating-point arguments as doubles. Consequently, it passes only the 32 bits of the float to the function, which then erroneously interprets 64 bits as a double. It should work fine if you cast the float to a double before passing it to sprintf().
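Putting the suggestions together, a minimal sketch (the explicit cast is redundant on a conforming compiler, because a float argument to a variadic function is promoted to double anyway, but it makes the intent clear):

#include <stdio.h>   /* declares sprintf, so the compiler knows its calling convention */

int main(void) {
    char str[1000];
    float b = 0.0615f;

    sprintf(str, "%.2e", (double)b);   /* "%.2e" asks for two digits after the decimal point */
    puts(str);                         /* prints 6.15e-02 */
    return 0;
}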
