Floating point to string representation [closed] - c

Consider the following code snippet:
char str[1000];
float b ;
b= 0.0615;
sprintf( &(str[0]), "%1.0e", b);
After the execution of the last statement, I expected str to contain 6.15e-2. However, I am getting the value 5e-315.
Where am I going wrong? How do I get the expected value?

You cannot get two digits of precision with that format string, as you specified zero digits after the decimal point (that is the .0 part after the 1).
What works for me is
#include <stdio.h>

int main(void) {
    char str[1000];
    float b = 0.0615f;

    sprintf(str, "%.2e", b);
    puts(str);
    return 0;
}
prints 6.15e-02
The almighty C/C++ documentation says:
.number:
For a, A, e, E, f and F specifiers: this is the number of digits to be
printed after the decimal point (by default, this is 6).

My bet is that you forgot to include stdio.h.
There seems to be a mismatch between what type the compiler passes to sprintf and what sprintf actually reads (as described in cmaster's answer).

Apparently your compiler does not realize that sprintf() takes all floating-point arguments as doubles. Consequently, it passes only the 32 bits of the float to the function, which then erroneously interprets 64 bits as a double. It should work fine if you cast the float to a double before passing it to sprintf().
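As a minimal sketch of that workaround (same snippet as above, assuming <stdio.h> is included; the explicit cast is then redundant, since a float passed as a variadic argument is promoted to double anyway):
#include <stdio.h>

int main(void) {
    char str[1000];
    float b = 0.0615f;

    /* Explicit conversion to double before the variadic call; with the
       sprintf() prototype in scope this promotion happens implicitly. */
    sprintf(str, "%.2e", (double)b);
    puts(str);  /* 6.15e-02 */
    return 0;
}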

Related

Sqrt of a floating point number [closed]

The question is: write a C program that asks the user to enter a floating point number from the keyboard and then prints out the square root of that number. What am I doing wrong?
#include <stdio.h>
#include <math.h>
int main(int argc, char *argv[])
{
    double x, result;

    printf("Enter a positive number.\n");
    scanf("&f", &x);
    result = sqrt(x);
    printf("The square root of %f is %f.\n", x, result);
    return 0;
}
The unary '&' operator yields a reference to its operand (the variable's memory address), while '%' in the context of scanf or printf, together with a conversion letter for the variable's type, such as 'lf' for double, forms a format specifier. For printf, a precision can be given after a dot, as in '%.2lf'; for scanf, a plain number such as '%2lf' limits the field width that is read. %f specifies a float, which has less precision than a double. See the docs too. By the way, in C++, precision is specified differently.
So:
double x, result;
printf("Enter a positive number.\n");
scanf("%f", &x); //<--- use %lf (for 'long float' ) instead of &f
Use this
scanf("%lf",&x);
instead of
scanf("&f", &x);

a is a double, printf("%d", a); works differently in IA32 and IA32-64 [closed]

Why does the following code work totally differently on IA-32 and x86-64?
#include <stdio.h>
int main() {
    double a = 10;

    printf("a = %d\n", a);
    return 0;
}
On IA-32, the result is always 0.
However, on x86-64 the result can be anything between INT_MIN and INT_MAX.
%d actually is used for printing int. Historically the d stood for "decimal", to contrast with o for octal and x for hexadecimal.
For printing double you should use %e, %f or %g.
Using the wrong format specifier causes undefined behaviour which means anything may happen, including unexpected results.
Passing an argument to printf() that doesn't match the format specifiers in the format string is undefined behaviour... and with undefined behaviour, anything could happen and the results aren't necessarily consistent from one instance to another -- so it should be avoided.
As for why you see a difference between x86 (32-bit) and x86-64, it's likely because of differences in the way parameters are passed in each case.
In the x86 case, the arguments to printf() are likely being passed on the stack, aligned on 4-byte boundaries -- so when printf() processes the %d specifier it reads a 4-byte int from the stack, which is actually the lower 4 bytes of a. Since a was 10, those bytes have no bits set, so they're interpreted as an int value of 0.
In the x86-64 case, the arguments to printf() are all passed in registers (though some would be on the stack if there were enough of them)... but double arguments are passed in different registers than int arguments (such as %xmm0 instead of %rsi). So when printf() tries to process an int argument to match the %d specifier, it takes it from a different register than the one a was passed in, uses whatever garbage value was left in that register instead of the lower bytes of a, and interprets that as some garbage int value.
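For reference, a sketch of the program with matching format specifiers, which behaves the same on both architectures:
#include <stdio.h>

int main(void) {
    double a = 10;

    printf("a = %f\n", a);      /* a = 10.000000 */
    printf("a = %g\n", a);      /* a = 10 */
    printf("a = %d\n", (int)a); /* a = 10, via an explicit conversion to int */
    return 0;
}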

Store a long decimal in C [closed]

I am trying to do a physics problem and need to store a value around 5 * 10^-11.
After trying float, long double and a few others, none of them seem to be long enough. Is there a data type that will allow me to do so?
Thanks
long double I = 0;
I = 0.01902*pow(0.00318,3)/12;
printf("%Lf\n",I);
Output is 0.000000
long double I = 0;
I = 0.01902*pow(0.00318,3)/12;
At this moment, I's value is approximately 5.096953e-11. Then...
printf("%Lf\n", I);
The sole format specifier in this printf() call is %Lf. This indicates that the argument is a long double (L), and that it should be printed as a floating-point number (f). Finally, as the precision (number of digits printed after the period) is not explicitly given, it is assumed to be 6. This means that up to 6 digits will be printed after the period.
There are several ways to fix this. Two of them would be...
printf(".15Lf\n", I);
This will set the precision to be 15. As such, 15 digits will be printed after the period. And...
printf("%Le\n", I);
This will print the number in scientific notation, that is, 5.096953e-11. It too can be configured to print more digits if you want them.
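As a complete sketch combining both fixes (pow() needs <math.h>, and on some systems linking with -lm):
#include <stdio.h>
#include <math.h>

int main(void) {
    long double I = 0.01902 * pow(0.00318, 3) / 12;

    printf("%Lf\n", I);     /* 0.000000: only 6 digits after the point */
    printf("%.15Lf\n", I);  /* 0.000000000050970 (approximately) */
    printf("%Le\n", I);     /* about 5.096953e-11 */
    return 0;
}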

Why different format specifiers give different outputs with printf? [closed]

I wanted to know why the output of this code is 0.
y.beta and p->beta are the same variable, so shouldn't the output be 3?
int main() {
    struct MyType {
        int alpha;
        double beta;
    };
    static struct MyType y;
    struct MyType *p = &y;

    y.alpha = 1;
    p->alpha = 2;
    p->beta = 2.5;
    y.beta = 1.5;

    printf("%d", y.beta + p->beta);
    return 0;
}
You invoke undefined behavior by passing the wrong type of argument to printf:
printf("%d", y.beta + p->beta);
%d expects an int to be passed, y.beta + p->beta is a double value.
Change the code to:
printf("%f\n", y.beta + p->beta);
Undefined behavior means anything can happen; it is pointless to try to understand why it prints 0. Something else may happen on a different machine, or at a different time, or even if it is raining or if the boss is coming ;-)
As chqrlie correctly pointed out, printf("%d", somevariable) expects that the variable is passed as an int, whereas your result is a double type value.
%d is called a format specifier. These are required because different data types are passed and interpreted in different ways by the processor, possibly at a different place in memory or in different registers (as is the case for the x86_64 platform). So even the same memory location, with the same bit pattern may be interpreted very differently based on the data type. That's what is happening in this other example:
#include <stdio.h>

int main(void) {
    int a = 1048577;

    printf("%d\n", a);
    printf("%hd", a);  /* %hd is the format specifier for short int */
    return 0;
}
output:
1048577
1
As you see, the same variable a is interpreted differently based on what data type you specify.
Why is it so?
This is the binary representation of 1048577 (assuming 32-bit int):
00000000 00010000 00000000 00000001
If the format specifier is for short int (%hd) then, since shorts are 16 bits wide, only the 16 low-order bits of the value are used, hence the output is 1.
Something similar may be happening in your case on some architectures (32-bit x86), or something worse on others, where the double value is passed in a different way and printf reads the int from a location the caller never wrote to, so any random value could happen to be there, such as 0.
The solution to this would be to modify your code to
printf("%f", y.beta + p->beta); as pointed out by chqrlie

Unsigned int overflow [closed]

I am facing an unsigned integer overflow issue,
i.e. unsigned int x = < max value * max value >
When I print x it gives me a negative value even though it is an unsigned integer.
I am eager to understand how the compiler turns that into a negative value, and how do I overcome this problem?
Thank you in advance
The compiler itself is not treating it as a signed value; that's almost certainly happening because you're using the %d format string for outputting it.
Use %u for unsigned decimal values and you will see it has the "right" value (right in terms of signedness, not in terms of magnitude, which will be wrong because you've performed an operation leading to overflow).
How are you printing it? Probably using printf. printf prints your unsigned ints as if they were signed (at least if you use %d). But this doesn't change the fact that the number is unsigned and hence positive.
Here's how you can check it: compare it to 0. Add this right after your printf:
if (x>=0) printf("positive\n");
else printf("negative\n");
and see what happens.
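A short sketch illustrating the point; the exact numbers assume the common case of a 32-bit unsigned int and two's-complement int:
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Unsigned arithmetic wraps modulo UINT_MAX + 1, so this is
       well defined: UINT_MAX * 2u wraps around to UINT_MAX - 1. */
    unsigned int x = UINT_MAX * 2u;

    printf("%u\n", x); /* correct specifier: prints 4294967294 */
    printf("%d\n", x); /* wrong specifier: typically prints -2, because the
                          same bit pattern is read back as a signed int */
    return 0;
}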
