How to print a float on stdout without printf()? - c

I'm working in an environment that has no printf() or any equivalent, so I'm writing it myself. But I have no idea how to perform such a conversion of float types. I tried to see how gcc does it, but it's really hard to understand.

Floating-point formatting is very easy to get wrong. Writing a simplistic implementation that works for "most" numbers is deceptively easy, but it is likely to break on very large numbers, very small numbers, and numbers close to zero, not to mention IEEE 754 subnormals, infinities and NaN. It might also get the trailing decimals wrong, failing to produce a string representation that lets the float be reproduced bit-for-bit.
Fortunately, there are libraries out there that implement correct floating-point formatting, whether for education, for embedded systems, or to improve on some aspect of standard-library formatting. If possible, I recommend that you incorporate David Gay's dtoa library, which has been extensively tested in Python and elsewhere.

You can take a look at musl libc implementation. musl is a lightweight libc.
In its fmt_fp function, defined in src/stdio/vfprintf.c, musl converts a floating-point value to a string for fprintf conversion specifiers such as f.
If you search the internet for the keyword ftoa, you will find other implementations of functions that convert a float to a string.
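For reference, here is a minimal sketch of such an ftoa, written from scratch purely for illustration; it deliberately handles only non-negative, moderately sized values and ignores NaN, infinities and correct rounding of the last digit, which is exactly why the first answer recommends a well-tested library instead.

#include <stdint.h>
#include <stdio.h>

/* Minimal sketch: prints a non-negative double with a fixed number of
   decimals using putchar(). Ignores NaN/infinity, very large or very
   small values, and correct rounding of the last digit -- the pitfalls
   described above. */
static void naive_ftoa(double x, int decimals)
{
    /* integer part, emitted most significant digit first */
    uint64_t ip = (uint64_t)x;
    char buf[32];
    int n = 0;
    do {
        buf[n++] = (char)('0' + ip % 10);
        ip /= 10;
    } while (ip > 0);
    while (n > 0)
        putchar(buf[--n]);

    /* fractional part, one digit at a time */
    putchar('.');
    double frac = x - (double)(uint64_t)x;
    for (int i = 0; i < decimals; i++) {
        frac *= 10.0;
        int digit = (int)frac;
        putchar('0' + digit);
        frac -= digit;
    }
}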

Related

Check for precision loss when converting string to float

I have a string representing a rational number.
I want to convert the string to a float with strtof(nptr, &endptr).
The problem is that e.g. a string "1.0000000000000000000001" will be converted to 1. without raising any flags (iirc).
Therefore my question: How does one catch this precision loss?
How does one catch this precision loss?
One doesn't, at least not with anything in the standard library; none of the strto* conversion functions will tell you if the value cannot be represented exactly.
Edit
I know that's not terribly helpful, but it means you'll have to go outside anything in the standard library. You'll either have to write your own conversion routines that somehow keep track of precision loss (I have no idea how you would implement this), or you'll have to go with some arbitrary-precision library like GMP, or you'll have to implement your own version of binary-coded decimal and hand-hack your own API to assign, compare, and manipulate BCD values.
C just doesn't give you the tools needed to do that kind of analysis.
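If pulling in GMP is acceptable, one way to approximate such a check is to round-trip the value through an arbitrary-precision float and compare. The sketch below is only an illustration: the function name is made up, the 256-bit working precision is an arbitrary margin, and mpf_t is itself binary floating point, so a fully rigorous check would need an exact rational representation of the decimal.

#include <gmp.h>
#include <stdlib.h>

/* Sketch: returns 1 if converting s with strtof loses precision,
   0 if the float matches the decimal (to 256 bits), -1 on a parse
   error. Assumes s is a plain decimal numeral that mpf_set_str
   understands. */
static int strtof_is_inexact(const char *s, float *out)
{
    mpf_set_default_prec(256);

    mpf_t exact, rounded;
    mpf_init(exact);
    mpf_init(rounded);

    if (mpf_set_str(exact, s, 10) != 0) {
        mpf_clear(exact);
        mpf_clear(rounded);
        return -1;
    }

    *out = strtof(s, NULL);
    mpf_set_d(rounded, (double)*out);   /* float -> double is exact */

    int inexact = (mpf_cmp(exact, rounded) != 0);

    mpf_clear(exact);
    mpf_clear(rounded);
    return inexact;
}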

Integer Based Sensor Fusion/Kalman Filter

Is anyone aware of a sensor fusion implementation that uses only integer operations instead of all the floating point accumulates/divides/multiplies in most open source implementations?
On my processor, performing repeated floating-point calculations is expensive, and I want to reduce them as much as possible. I might lose some precision, but my application does not require a highly precise output.
Is there any issue turning all the variables to ints and just taking the hit in precision? Any advice would be great, thanks all.
The use of fixed-point is the best solution for flexible maths operations on a device with no FPU.
Anthony Williams' fixed point maths library would suit; it uses a 64-bit integer type to provide a 34Q28 (34 integer bits, 28 fractional bits) fixed-point type with extensive maths, operator and conversion functions. It is written in C++, implementing the fixed type as a class with extensive operator overloading and standard maths functions so that it is largely interchangeable with float or double in existing code.
I realise that the question is tagged C, but you need not use C++ syntax extensively: just compile your C code as C++, include the fixed.hpp header, replace float or double with fixed, and compile/link the fixed.cpp file with your project.
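If you would rather stay in plain C, the core idea is small enough to sketch. The Q16.16 format below (16 integer bits, 16 fractional bits in an int32_t) is a made-up example, not the library mentioned above; it assumes arithmetic right shift for negative products, as virtually all current compilers provide.

#include <stdint.h>

/* Illustrative Q16.16 fixed-point type. */
typedef int32_t q16_16;

#define Q16_16_ONE 65536   /* 2^16, the scaling factor */

static inline q16_16 q_from_int(int x)     { return (q16_16)(x * Q16_16_ONE); }
static inline q16_16 q_from_float(float x) { return (q16_16)(x * Q16_16_ONE); }
static inline float  q_to_float(q16_16 x)  { return (float)x / Q16_16_ONE; }

/* Addition and subtraction are just the integer operations. */
static inline q16_16 q_add(q16_16 a, q16_16 b) { return a + b; }

/* Multiply in 64 bits, then shift back down so the binary point
   stays in the same place. */
static inline q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}

Division, square roots and trigonometry take considerably more care, which is exactly what a library like the one above packages up for you.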

Looking for Ansi C89 arbitrary precision math library

I wrote an ANSI C compiler for a friend's custom 16-bit stack-based CPU several years ago, but I never got around to implementing all the data types. Now I would like to finish the job, so I'm wondering if there are any math libraries out there that I can use to fill the gaps. I can handle the 16-bit integer data types since they are native to the CPU, and therefore I have all the math routines (i.e. +, -, *, /, %) done for them. However, since his CPU does not handle floating point, I have to implement floats/doubles myself, and I also have to implement the 8-bit and 32-bit data types (both integer and floats/doubles). I'm pretty sure this has been done and redone many times, and since I'm not particularly looking forward to reinventing the wheel, I would appreciate it if someone would point me at a library that can help me out.
Now I was looking at GMP, but it seems to be overkill (the library must be absolutely huge; I'm not sure my custom compiler would be able to handle it) and it takes numbers in the form of strings, which would be wasteful for obvious reasons. For example:
mpz_set_str(x, "7612058254738945", 10);
mpz_set_str(y, "9263591128439081", 10);
mpz_mul(result, x, y);
This seems simple enough, and I like the API... but I would rather pass in an array than a string. For example, if I wanted to multiply two 32-bit longs together, I would like to be able to pass it two arrays of size two, where each array contains two 16-bit values that together represent a 32-bit long, and have the library place the output into an output array. If I needed floating point, then I should be able to specify the precision as well.
This may seem like asking for too much but I'm asking in the hopes that someone has seen something like this.
Many thanks in advance!
Let's divide the answer.
8-bit arithmetic
This one is very easy. In fact, C already covers this under the term "integer promotion": if you have 8-bit data and you want to do an operation on it, you simply pad it with zeros (or ones, if signed and negative) to make it 16-bit, then proceed with the normal 16-bit operation.
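As a concrete illustration of what the compiler already does through integer promotion:

#include <stdint.h>

/* Adding two 8-bit values: both operands are promoted to int (at
   least 16 bits), the addition happens at that width, and the result
   is truncated back to 8 bits by the conversion on return. */
uint8_t add8(uint8_t a, uint8_t b)
{
    return (uint8_t)(a + b);
}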
32-bit arithmetic
Note: as far as the standard is concerned, you don't really need to have 32-bit integers.
This could be a bit tricky, but it is still not worth using a library for. For each operation, take a look at how you learned to do it in elementary school in base 10, and then do the same in base 2^16 for two-digit numbers (each digit being one 16-bit integer). Once you understand the analogy with simple base-10 math (and hence the algorithms), you would need to implement them in the assembly of your CPU.
This basically means loading the most significant 16 bits into one register and the least significant 16 bits into another, then following the algorithm for each operation and performing it. You would most likely need to make use of the carry, overflow, and other flags.
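As an example, here is 32-bit addition written in C in terms of two 16-bit digits; the two-element-array layout is just an assumption for illustration, and on the real CPU the same pattern would use the add-with-carry instruction.

#include <stdint.h>

/* A 32-bit value as two base-2^16 digits: v[0] holds the low 16 bits,
   v[1] the high 16 bits. */
typedef uint16_t u32_digits[2];

/* result = a + b, schoolbook style: add the low digits, then
   propagate the carry into the high digits. */
void add32(u32_digits result, const u32_digits a, const u32_digits b)
{
    uint32_t low = (uint32_t)a[0] + b[0];      /* may exceed 16 bits */
    result[0] = (uint16_t)low;
    uint16_t carry = (uint16_t)(low >> 16);    /* 0 or 1 */
    result[1] = (uint16_t)(a[1] + b[1] + carry);
}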
Floating point arithmetic
Note: as far as the standard is concerned, you don't really need to conform to IEEE 754.
There are various libraries already written for software emulated floating points. You may find this gcc wiki page interesting:
GNU libc has a third implementation, soft-fp. (Variants of this are also used for Linux kernel math emulation on some targets.) soft-fp is used in glibc on PowerPC --without-fp to provide the same soft-float functions as in libgcc. It is also used on Alpha, SPARC and PowerPC to provide some ABI-specified floating-point functions (which in turn may get used by GCC); on PowerPC these are IEEE quad functions, not IBM long double ones.
Performance measurements with EEMBC indicate that soft-fp (as speeded up somewhat using ideas from ieeelib) is about 10-15% faster than fp-bit and ieeelib about 1% faster than soft-fp, testing on IBM PowerPC 405 and 440. These are geometric mean measurements across EEMBC; some tests are several times faster with soft-fp than with fp-bit if they make heavy use of floating point, while others don't make significant use of floating point. Depending on the particular test, either soft-fp or ieeelib may be faster; for example, soft-fp is somewhat faster on Whetstone.
One answer could be to take a look at the source code for glibc and see if you could salvage what you need.

Floating-point conversion without strtod/sprintf

Since I have decided to use UTF-16 internally in a program that should run on Windows and Linux (WINE did the same), I need replacements for some string handling functions, since I do not want to convert to and from the native char representation for user-mode code. However, if float conversion is slow compared to running iconv, I can use a wrapper around strtod/sprintf.
These conversions to and from decimal are difficult to make both fast and correct. The naïve (but correct) versions require multi-precision integers, an implementation of which you were perhaps not planning to depend on. In short, wrap your existing strtod/sprintf and do not worry about the overhead; it will be less than the loss of performance from using a naïve implementation of these functions.
In the “naïve incorrect” category, there is an implementation of strtod() floating around that all interpreters use when the host is lacking one. This implementation is terrible (it may return a result off by several ULPs), but if you do not mind, you could adapt this code to manipulate UTF-16 characters.
NOTE: there is a swprintf() in C99 I think, but it is for strings of wchar_t, which does not have to be UTF-16, so that may not work for you.
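A wrapper along those lines might look like the sketch below. The function name and the 64-character cap are made up; the approach relies on the fact that every character strtod accepts is in the ASCII range, so each relevant UTF-16 code unit maps to exactly one narrow char.

#include <stdint.h>
#include <stdlib.h>

/* Sketch of a strtod wrapper for UTF-16 input. Copies ASCII-range
   code units into a narrow buffer, lets strtod do the real parsing,
   and maps the end pointer back into the UTF-16 string. Numerals
   longer than 63 characters are silently truncated in this sketch. */
double strtod_utf16(const uint16_t *s, const uint16_t **endptr)
{
    char buf[64];
    size_t i = 0;

    while (i < sizeof buf - 1 && s[i] != 0 && s[i] < 128) {
        buf[i] = (char)s[i];
        i++;
    }
    buf[i] = '\0';

    char *narrow_end;
    double value = strtod(buf, &narrow_end);

    if (endptr)
        *endptr = s + (narrow_end - buf);   /* one code unit per char */
    return value;
}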

Writing my own float parser

I am trying to write a parser in C, and part of its job is to convert a series of characters into a double. Up to now I have been using strtod, but I find it to be quite dangerous, and it won't handle cases where the number is at the end of the buffer, which is not null-terminated.
I thought I'd write my own. If I have a string representation of a number of the form a.b, would I be naive to think that I can just calculate (double)a + ((double)b / (double)10^n), where n is the number of digits in b?
For example, 23.4563:
a = 23
b = 4563
final answer: 23 + (4563/10000)
Or would that produce inaccurate results with regard to the IEEE format of floats?
It is hard to read floating-point numerals accurately, in the sense that there are various problems that must be carefully addressed, and many people fail to do so. However, it is a solved problem. To start, see How to read floating point numbers accurately, June 1990, by William D. Clinger.
I agree with Roddy, you are likely better off copying the data into a buffer and using existing library functions. (However, you should check that your C implementation provides correctly rounded conversion of floating-point numerals. The C standard does not require it, and some implementations do not provide it.)
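In practice that means something like the following sketch; the fixed-size buffer and the 63-character cap are placeholders, and the real parsing, rounding and exponent handling are still done by strtod.

#include <stdlib.h>
#include <string.h>

/* Parse a number from a buffer that is NOT null-terminated: copy at
   most len bytes into a local, terminated buffer and let strtod do
   the work (correct rounding, exponents, inf/nan). */
double parse_double(const char *p, size_t len, int *ok)
{
    char buf[64];
    if (len >= sizeof buf)
        len = sizeof buf - 1;   /* crude cap for the sketch */
    memcpy(buf, p, len);
    buf[len] = '\0';

    char *end;
    double v = strtod(buf, &end);
    if (ok)
        *ok = (end != buf);     /* at least one character consumed */
    return v;
}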
You may be interested in this answer of mine to a somewhat related question.
The parser in that answer converts decimal floating point numbers (represented as strings) into IEEE-754 floats and doubles with proper rounding.
As far as I remember, about the only issue in the code is that it may not handle cases where the exponent part is too big (doesn't fit into an integer), which should result in either an error or INF being returned.
Otherwise, it should give you a good idea of what to do (if you have any idea at all of what you're doing:).
As already said, it's difficult, you need extra precision, etc...
But if you have restricted inputs, and want to know if you can still correctly convert these restricted decimal to binary with semi naive algorithm and standard IEEE 754 ops, you might be interested in my answer to
How to manually parse a floating point number from a string
