printf incorrect output for doubles [closed] - c

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 8 years ago.
I compute two numbers using a CPU and a GPU. They are both double precision floating point numbers.
When I print them using printf I get:
CPU=11220648352037704534577864266910298593595406193008158076631751727790009102214012225556499863999825039525639767460911091800702910643896210872459798230329601182926117099298535084878987264.00000 GPU=-4.65287
using:
void print(const double *data1, const double *data2) {
    ...
    printf("CPU=%.5f\tGPU=%.5f\n", data1[k], data2[k]);
}
That is far more digits than I would expect. Why do I get this? Am I overflowing, underflowing, or corrupting memory? Please help.
Thanks.

You used the printf format %.5f. That means "print plain decimal digits all the way down to five places after the decimal point." If you want scientific notation instead, which is more common with such large numbers, you should use %.5g, which means "pick a format automatically, showing five significant digits" (which works out to four places after the decimal point when scientific notation is chosen).
Note that such huge numbers as you have are definitely within range for a double. There is nothing unusual about the value of the number in the code you have posted.
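For example (my own minimal sketch, using a placeholder value of roughly the same magnitude as the CPU number above, not the asker's actual data):

#include <stdio.h>

int main(void) {
    /* placeholder value of roughly the same magnitude as the CPU result above */
    double huge = 1.1220648352037705e186;

    printf("%%.5f gives: %.5f\n", huge);  /* hundreds of digits before the decimal point */
    printf("%%.5g gives: %.5g\n", huge);  /* scientific notation: 1.1221e+186 */
    return 0;
}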

Related

How to print huge decimal number in C? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I have a variable that stores a value with many digits, like 0.11111121323532534, but I don't know how to print it in full. I found printf("%.30g", variable), but I don't know how it works. Is it the only way to output this variable, and why does it work?
The default floating point precision for printing purposes in C is six digits. That's the equivalent of:
printf("%.6f", variable)
In order to have more than 6 digits displayed after the decimal point you have to format the number as you have done. However, you used the "%g" format type, which prints whichever of the fixed (%f) or scientific (%e) representations is shorter.
On top of that, a double can only represent about 15-16 significant decimal digits in total, regardless of where the decimal point falls. It is 8 bytes in size (1 bit for the sign, 11 for the exponent and 52 for the mantissa), which means, for example, that only integers up to 2^53 are represented exactly.
printf("%.15f", variable)
The above will work as well, and will allow up to 15 digits to be displayed after the decimal point.
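A small self-contained sketch (my own, using the asker's value) showing the difference in practice:

#include <stdio.h>

int main(void) {
    double v = 0.11111121323532534;

    printf("%f\n",    v);  /* default precision of 6: 0.111111 */
    printf("%.15f\n", v);  /* 15 digits after the decimal point */
    printf("%.17g\n", v);  /* 17 significant digits are enough to round-trip a double */
    return 0;
}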

Unlimited integer precision [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
From Python Numeric Types:
Integers have unlimited precision.
This test
#include <stdio.h>

int main(void) {
    printf("%zu bytes", sizeof(long long));
    return 0;
}
gives me 8 bytes or 64 bits under Linux.
How is this implemented in CPython? (This was answered in the comment section.)
What happens when the integer exceeds long long in the implementation?
How big is the speed difference in arithmetic between integers that fit in 8 bytes and ones that don't?
What if I have a bigger number in Python (one that does not fit into 8 bytes)?
Python will adapt and always store the exact number, with no approximation, even for very big integers (this is not true for other numeric types).
How is it stored in the system, literally? Like a few smaller integers?
That is the Python implementation. You can find it in the source code here: svn.python.org/projects/python/trunk/Objects/longobject.c (thanks @samgak)
Does it hugely slow down the arithmetics on this number?
Yes. As in other languages, when the number becomes bigger than e.g. 2^32 on a 32-bit system, the arithmetic becomes slower. How much slower is implementation dependent.
What does Python do, when it encounters such number?
Huge integers are stored in a different way and all arithmetic is adapted to fit.
Is there any difference between Python 2's and Python 3's behaviour, other than Python 2 storing the value as a long and appending L to its string representation?
Python 2 and 3 should have the same high level behaviour.
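CPython's longobject.c stores a big integer as an array of small "digits" (30 bits each in current versions). As a loose sketch of that idea only, and not CPython's actual code, here is schoolbook addition over 16-bit digits in C:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a big integer stored as an array of small "digits",
 * least significant first, each digit holding 16 bits of the value. */
#define DIGIT_BITS 16
#define DIGIT_MASK 0xFFFFu

typedef struct {
    uint32_t digit[8];  /* fixed capacity keeps the sketch short */
    int ndigits;
} BigInt;

/* r = a + b, carrying between 16-bit digits the way schoolbook addition does */
static void big_add(BigInt *r, const BigInt *a, const BigInt *b) {
    uint32_t carry = 0;
    int n = a->ndigits > b->ndigits ? a->ndigits : b->ndigits;
    for (int i = 0; i < n; i++) {
        uint32_t da = i < a->ndigits ? a->digit[i] : 0;
        uint32_t db = i < b->ndigits ? b->digit[i] : 0;
        uint32_t s = da + db + carry;
        r->digit[i] = s & DIGIT_MASK;
        carry = s >> DIGIT_BITS;
    }
    r->ndigits = n;
    if (carry) r->digit[r->ndigits++] = carry;
}

int main(void) {
    /* 0xFFFFFFFF + 1 = 0x100000000, which no longer fits in one 32-bit word */
    BigInt a = { { 0xFFFF, 0xFFFF }, 2 };
    BigInt b = { { 0x0001 }, 1 };
    BigInt r = { { 0 }, 0 };
    big_add(&r, &a, &b);
    printf("result digits (high to low):");
    for (int i = r.ndigits - 1; i >= 0; i--) printf(" %04X", r.digit[i]);
    printf("\n");  /* prints: 0001 0000 0000 */
    return 0;
}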

Are there ways to specify the conversion character for double and float in C without having to know the number of its decimal? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
In C I have learned that to print a float or double I have to use a conversion specifier like %.xf. Is it possible to display a double or float without specifying the precision, i.e. without having to count the number of decimal digits?
EDIT: Sorry if I am not understanding this; I am just a beginner at programming in general, and C is my first language.
double reallyBigPi = 3.1415914159141591415914159;
printf("Big Pi = %f\n\n", reallyBigPi);
My goal is to print out this value, but using the suggested %f I only get Big Pi = 3.141591. So in the end, if I want all of the digits, do I have to count the decimal places myself?
Sadly, neither the C nor the C++ standard library exposes an interface to Dragon4 (*) (or an improved algorithm) that would determine the number of fractional decimal digits needed to represent a binary floating point value exactly. The libraries are required (by IEEE 754) to format the values correctly, though.
Obviously, determining the number of fractional digits is not equivalent to omitting the precision. Simply omitting the precision behaves the same as specifying the default precision (which, I think, is 6).
You may be able to use Double Conversion as that does expose an interface determining the number of decimals. I found Double Conversion relatively hard to use, though.
(*) Dragon4 is an algorithm described in "How to print floating-point numbers accurately". Note that this link is to an ACM site which asks for payment for the article. I don't have a link to a [legal] free source of the paper.
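As a practical sketch (my addition, not part of the original answer): <float.h> provides DBL_DECIMAL_DIG (since C11; 17 on IEEE 754 systems), the number of significant digits needed to round-trip any double, so no manual digit counting is required:

#include <float.h>
#include <stdio.h>

int main(void) {
    double reallyBigPi = 3.1415914159141591415914159;

    /* DBL_DECIMAL_DIG significant digits reproduce the stored double exactly;
     * any further digits of the literal were never stored to begin with. */
    printf("Big Pi = %.*g\n", DBL_DECIMAL_DIG, reallyBigPi);
    return 0;
}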

How to convert 1692.75 to 1.69e+03 in C? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I'm just beginning and I don't know how to change floating point form to exponential form.
A float is a float and has no representation other than its binary representation in memory. But you can change the way you print it to the console.
This can be done by specifying it in the printf function.
see printf
What you need is printf("%.2e",myfloat)
Those two "numbers" are simply the result of formatting the same floating point number in two different ways. No number conversion or casting is involved.
If you are concerned about internal representation - don't worry, it's all the same under the hood.
If you are going to print x = 1692.75 in desired form, use printf("%2.2e\n", x);
They are one and the same thing.
If you wish to print it see the manual page for printf
You don't have to change the internal representation of the float, which is binary and has nothing to do with what you see when you print the value.
If you just want to print your float you can use the printf family of functions:
printf("%.2e", 1692.75);
should do the trick.
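Putting it together as a complete program (my own minimal example):

#include <stdio.h>

int main(void) {
    float x = 1692.75f;

    printf("%f\n", x);    /* 1692.750000 */
    printf("%.2e\n", x);  /* 1.69e+03 */
    return 0;
}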

Looking for a tool that would tell me which integer-widths I need for a calculation in C to not overflow [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 8 years ago.
I have a lengthy calculation (polynomial of 4th degree with fixed decimals) that I have to carry out on a microcontroller (TI/LuminaryMicro lm3s9l97 [CortexM3] if somebody is interested).
When I use 32-bit integers, some calculations overflow. When I use 64-bit integers, the compiler emits an ungodly amount of code to simulate 64-bit multiplication on the 32-bit processor.
I am looking for a program into which I could input (just for example):
int a, b, c;
c = a * b; // Do the multiplication
c >>= 10; // Correct for fixed decimal point
c *= a*b;
where I could specify that a and b lie in the ranges [15000..30000] and [40000..100000] respectively, and it would tell me what sizes the integers need to be to not overflow (and/or underflow; I would possibly get a false positive there for the >> 10) in the specified domain, so that I could use 32-bit integers where possible.
Does something like this exists already or do I have to roll my own?
Thanks!
I think you have to roll your own. Implementing an extended sequence of muls and divs in fixed-point can be tricky. If fixed-point is applied without careful thought, overflow can happen quite easily. When implementing such a formula, I use a spreadsheet to experiment with the following:
Ordering of operations: muls require twice the number of bits on the left-hand side, i.e. multiplying two 22.10 numbers can yield a 44-bit result. Div operations reduce the number needed on the LHS. Strategically re-ordering the equation's evaluation, or even rewriting it (expanding, factoring, etc) can provide opportunities to improve precision.
Pre-computed scalars: along the same lines, pre-computing values may help. These scalars need not be constant, since look-up tables may be used to store a collection of pre-computed values.
Loss of precision: is 10 bits of fractional precision really needed at every step in the evaluation of the equation? Perhaps some steps need lower precision, leaving more bits on the LHS to avoid overflow.
Given these concerns (all of which are application-specific), optimal use of fixed-point math remains very much a manual exercise. There are good resources on the web. I've found this one useful on occasion.
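As a rough illustration of what such a check could look like (a hand-rolled sketch of interval analysis over the asker's stated ranges, not an existing tool):

#include <stdint.h>
#include <stdio.h>

/* Track the [min, max] range of each intermediate and report how many
 * bits a signed integer needs to hold it. Illustration only. */
typedef struct { int64_t lo, hi; } Range;

static int bits_needed(Range r) {
    int bits = 1;                               /* sign bit */
    int64_t m = r.hi > -r.lo ? r.hi : -r.lo;
    while (m > 0) { bits++; m >>= 1; }
    return bits;
}

static Range mul(Range a, Range b) {
    /* for positive ranges lo*lo..hi*hi is enough; the general case would
     * have to consider all four corner products */
    Range r = { a.lo * b.lo, a.hi * b.hi };
    return r;
}

static Range shr(Range a, int n) {
    Range r = { a.lo >> n, a.hi >> n };
    return r;
}

int main(void) {
    Range a = { 15000, 30000 };
    Range b = { 40000, 100000 };

    Range c = mul(a, b);                        /* c = a * b   */
    printf("a*b        needs %d bits\n", bits_needed(c));

    c = shr(c, 10);                             /* c >>= 10    */
    printf("(a*b)>>10  needs %d bits\n", bits_needed(c));

    c = mul(c, mul(a, b));                      /* c *= a * b  */
    printf("final      needs %d bits\n", bits_needed(c));
    return 0;
}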
Ada might be able to do that using range types.
