Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 14 days ago.
Here is my code:
#include <stdio.h>
typedef signed long int32_t;
typedef signed short int16_t;
int main() {
int32_t var1 = 1200;
int16_t var2;
var2 = (int16_t)((float)var1/1000.0);
printf("Hello World: %d", var2); // prints 1 should print 1.2
return 0;
}
I am experimenting with typecasting between data types in C. I am trying to get the value of 'var2' as 1.2 in the signed short, but I got the value 1. I have to use a 16-bit register and I cannot use a 32-bit float.
printf("Hello World: %d", var2); // prints 1 should print 1.2
No it should not.
(float)var1 converts to float.
(float)var1 / 1000.0 - result 1.2
(int16_t)1.2 - converts to integer and the result is 1
BTW, you can't print 1.2 using the %d format. To be 100% correct, you should use the %hd format to print a short integer.
Casting does not do a binary copy; it only converts between the types.
var2 is a "signed short" type, and it can only contain integer values. If you assign a decimal number to it, the fractional part (0.2) is truncated and only the integer part (1) is retained. I hope I was helpful. Have a nice day!
You already have the value in a form that fits a 16-bit int: var1 holds it as a "scaled integer" (the real value times 1000). Just do the conversion when you need to print the value.
printf("%f\n", (float)(var1/1000.0));
Closed 1 year ago.
For example
printf("%x",16);
This printf prints 10 instead of 16. Why doesn't it take the value as hexadecimal? Please, can someone explain this?
16 is a decimal integer constant, so the hexadecimal representation of that constant is 10. If you want to specify a hexadecimal integer constant, then write 0x16u. Note that the conversion specifier x expects an argument of type unsigned int; that is why the suffix u is used in the hexadecimal integer constant.
So the call of printf can look like
printf( "%x\n", 0x16u );
and the output will be 16.
Or
printf( "%#x\n", 0x16u );
and the output will be 0x16.
When you pass a number to a function, you are not passing decimal or hexadecimal. Conceptually, you are passing a value, an abstract mathematical number.
That number is represented in some type, and C uses binary to represent values in integer types (along with some supplement of binary for signed numbers, usually two’s complement).
Whether you write printf("%x\n", 16u) or printf("%x\n", 0x10u);, the compiler converts that numeral in source code, 16u or 0x10u, to an unsigned int value, represented in binary. It is that resulting value that is passed to printf, not a decimal “16” or a hexadecimal “10”.
(I use 16u and 0x10u rather than 16 or 0x10 because printf expects an unsigned int, not an int.)
The %x directive tells printf to expect an unsigned int value and to convert it to a hexadecimal numeral. So the original input form is irrelevant; %x means to produce hexadecimal regardless of the original form.
Closed 5 years ago.
I am currently learning C, and I made a program that I thought would give me the maximum integer value, but it just loops forever. I can't find an answer as to why it won't end.
#include <stdio.h>
int main()
{
unsigned int i = 0;
signed int j = i;
for(; i+1 == ++j; i++);
printf(i);
return 0;
}
Any help would be appreciated, thank you!
Your code has undefined behavior, and that is not because of unsigned integer overflow (unsigned arithmetic wraps around when the value gets too big). It is the signed integer overflow that is undefined behavior.
Also note that if your intention is to know the maximum value that an int can hold, use the macro INT_MAX from <limits.h>.
maximum value for an object of type int
INT_MAX  +32767  // 2^15 - 1
You should also write the printf call properly: pass the format string first, then the other arguments as specified by the format string.
I thought would give me the maximum integer value
The maximum signed int value cannot be portably found experimentally without risking undefined behavior (UB). In OP's code, ++j eventually overflows (UB). UB includes looping forever.
As @coderredoc well answered, instead use printf("%d\n", INT_MAX);
To find the maximum unsigned value:
printf("%u\n", UINT_MAX);
// or
unsigned maxu = -1u;
printf("%u\n", maxu);
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I wanted to know why the output of this code is 0?
y.beta and p->beta are the same variable, so shouldn't the output be 3?
#include <stdio.h>
int main() {
struct MyType {
int alpha;
double beta;
};
static struct MyType y;
struct MyType *p = &y;
y.alpha = 1;
p->alpha = 2;
p->beta = 2.5;
y.beta = 1.5;
printf("%d", y.beta + p->beta);
return 0;
}
You invoke undefined behavior by passing the wrong type of argument to printf:
printf("%d", y.beta + p->beta);
%d expects an int to be passed, y.beta + p->beta is a double value.
Change the code to:
printf("%f\n", y.beta + p->beta);
Undefined behavior means anything can happen; it is pointless to try to understand why it prints 0. Something else may happen on a different machine, at a different time, or even if it is raining or the boss is coming ;-)
As chqrlie correctly pointed out, printf("%d", somevariable) expects that the variable is passed as an int, whereas your result is a double type value.
%d is called a format specifier. Format specifiers are required because different data types are passed and interpreted in different ways by the processor, possibly at a different place in memory or in different registers (as is the case on the x86_64 platform). So even the same memory location, with the same bit pattern, may be interpreted very differently depending on the data type. That's what is happening in this other example:
int main() {
int a = 1048577;
printf("%d\n", a);
printf("%hd", a); //hd is the format specifier for short int
return 0;
}
output:
1048577
1
As you see, the same variable a is interpreted differently based on what data type you specify.
Why is it so?
This is the binary representation of 1048577 (assuming 32-bit int):
00000000 00010000 00000000 00000001
If the format specifier is short int (%hd), then, since shorts are 16 bits wide, only the 16 low-order bits of the value are used, hence the output is 1.
Something similar may be happening in your case on some architectures (32-bit x86), or something worse on others, where the double value is passed in a different way and printf reads the int from a location where the caller wrote nothing specific, so any random value could happen to be there, such as 0.
The solution to this would be to modify your code to
printf("%f", y.beta + p->beta); as pointed out by chqrlie
Closed 8 years ago.
Consider the following code snippet:
char str[1000];
float b ;
b= 0.0615;
sprintf( &(str[0]), "%1.0e", b);
After the execution of the last statement, I expected str to contain 6.15e-2. However, I am getting the value 5e-315.
Where am I going wrong? How do I get the expected value?
You cannot get two digits of precision with that format string: you specified zero digits after the decimal point (that is the .0 part after the 1).
What works for me is
#include <stdio.h>
#include <string.h>
int main(void) {
char str[1000];
float b ;
b= 0.0615;
sprintf( &(str[0]), "%.2e", b);
puts(str);
}
prints 6.15e-02
The almighty C/C++ documentation says:
.number:
For a, A, e, E, f and F specifiers: this is the number of digits to be
printed after the decimal point (by default, this is 6).
My bet is you forgot to include stdio.h.
There seems to be a mismatch between the type the compiler passes to sprintf and what sprintf actually reads (as described in cmaster's answer).
Apparently your compiler does not realize that sprintf() takes all floating-point arguments as doubles. Consequently, it passes only the 32 bits of the float to the function, which erroneously interprets 64 bits as a double. It should work fine if you cast the float to a double before passing it to sprintf().
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 9 years ago.
I am facing an unsigned integer overflow issue:
unsigned int x = < max value * max value >
When I print x, it gives me a negative value even though it is an unsigned integer.
I am eager to understand how the compiler produces that negative value.
How do I overcome this problem?
Thank you in advance.
The compiler itself is not treating it as a signed value; that happens almost certainly because you're using the %d format string to output it.
Use %u for unsigned decimal values and you will see the "right" value (right in terms of signedness, not in terms of magnitude, which will be wrong because you've performed an operation that overflowed).
How are you printing it? Probably using printf. printf prints your unsigned int as if it were signed (at least if you use %d). But this doesn't change the fact that the number is unsigned and hence positive.
Here's how you can check it: compare it to 0 and see what happens. So add this right after your printf:
if (x>=0) printf("positive\n");
else printf("negative\n");
and see what happens.