Handling Decimals on Embedded C - c

I have my code below and I want to ask what's the best way of handling numbers (division, multiplication, logarithm, exponents) up to 4 decimal places? I'm using a PIC16F1789 as my device.
float sensorValue = 0;
float sensorAverage;

void main(){
    // Get an average by sampling 100 times
    for(int x = 0; x < 100; x++){
        // Accumulate the sum of all 100 readings
        sensorValue = sensorValue + ADC_GetConversion(SENSOR);
    }
    // Get the average
    sensorAverage = sensorValue/100.0;
}

In general, on MCUs, floating-point types are more costly (in clock cycles and code size) to process than integer types. While this is often true even for devices with a hardware floating-point unit, it becomes vital information on devices without one, like the PIC16/18 controllers. These have to emulate all floating-point operations in software. This can easily cost more than 100 clock cycles per addition (much more for multiplication) and bloats the code.
So it is best to avoid float (not to speak of double) on such systems.
For your example, the ADC returns an integer type anyway, so the summation can be done purely with integer types. You just have to make sure the sum variable does not overflow, so it has to hold roughly 100 times the maximum ADC reading for your code.
Finally, to calculate the average, you can either divide the integer by the number of iterations (round to zero), or - better - apply a simple "round to nearest" by:
#define NUMBER_OF_ITERATIONS 100
sensorAverage = (sensorValue + NUMBER_OF_ITERATIONS / 2) / NUMBER_OF_ITERATIONS;
If you really want to speed up your code, set NUMBER_OF_ITERATIONS to a power of two (64 or 128 here), if your code can tolerate this.
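For illustration, a minimal integer-only version of the loop could look like this (a sketch: ADC_GetConversion and SENSOR are the names from the question; the 128-sample count and the uint32_t sum width are assumptions, chosen so the divide becomes a shift):

#include <stdint.h>

#define NUMBER_OF_ITERATIONS 128u   // power of two: the divide becomes a shift

uint32_t sensorSum = 0;             // wide enough for 128 * max ADC reading
uint16_t sensorAverage;
uint8_t  x;

for (x = 0; x < NUMBER_OF_ITERATIONS; x++) {
    sensorSum += ADC_GetConversion(SENSOR);   // integer ADC result, no float
}
// round to nearest, then divide by 128 with a shift
sensorAverage = (uint16_t)((sensorSum + NUMBER_OF_ITERATIONS / 2u) >> 7);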
Finally: To get not only the integer part of the division, you can treat the sum (sensorValue) as a fractional value. For the given 100 iterations, you can treat it as a decimal fraction: when converting to a string, just print a decimal point left of the lower two digits. As you divide by 100, there will be no more than two significant digits of decimal fraction. If you really need 4 digits, e.g. for other operations, you can multiply the sum by 100 (the overall scale factor is then 10000, but the loop has already multiplied by 100).
This is called decimal fixed point. Faster for processing (replaces multiplication by shifts) would be to use binary fixed point, as I stated above.
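For illustration, the decimal fixed-point printing described above could look like this (the values are made up, and printf is used only for demonstration; printf itself is fairly heavy on a PIC16):

#include <stdint.h>
#include <stdio.h>

uint32_t sensorSum  = 76667;                  // sum of 100 ADC readings
uint32_t wholePart  = sensorSum / 100u;       // 766
uint32_t fracPart   = sensorSum % 100u;       // 67 -> average is 766.67
printf("average = %lu.%02lu\n", (unsigned long)wholePart, (unsigned long)fracPart);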
On PIC16, I would strongly suggest thinking about binary fractions, as multiplication and division are very costly on this platform. In general, these parts are not well suited for signal processing. If you need to sustain some performance, an ARM Cortex-M0 or M4 would be the far better choice - at similar prices.

In your example it is trivial to avoid non-integer representations altogether. However, to answer your question more generally, an ISO-compliant compiler will support floating-point arithmetic and the maths library, but for performance and code-size reasons you may want to avoid it.
Fixed-point arithmetic is probably what you need. For simple calculations an ad-hoc approach to fixed point can be used, whereby, for example, you treat the units of sensorAverage in your example as hundredths (1/100) and avoid the expensive division altogether. However, if you want to perform full maths-library operations, a better approach is to use a fixed-point library. One such library is presented in Optimizing Applications with Fixed-Point Arithmetic by Anthony Williams. The code is C++ and PIC16 may lack a decent C++ compiler, but the methods can be ported somewhat less elegantly to C. It also uses a huge 64-bit 36Q28 fixed-point format, which would be expensive and slow on PIC16; you might want to adapt it to use 16Q16 perhaps.
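To give a rough idea of how the approach ports to C, here is a minimal 16Q16 sketch (the type and function names are illustrative, not taken from the library; note the 64-bit intermediate, which is itself slow on a PIC16):

#include <stdint.h>

typedef int32_t q16_16_t;                 // 16 integer bits, 16 fraction bits

#define Q16_FROM_INT(x)  ((q16_16_t)(x) << 16)

// multiply two 16Q16 values; the product needs a 64-bit intermediate
static q16_16_t q16_mul(q16_16_t a, q16_16_t b)
{
    return (q16_16_t)(((int64_t)a * (int64_t)b) >> 16);
}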

If you are really concerned about performance, stick to integer arithmetic and try to make the number of samples to average a power of two so the division can be done with bit shifts. However, if it is not a power of two, let's say 100 (as Olaf pointed out for fixed point), you can also use bit shifts and additions: How can I multiply and divide using only bit shifting and adding?
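For example, a multiplication by the constant 100 can be decomposed into shifts and additions (a small sketch, not tailored to any particular compiler):

unsigned int x = 37;
// 100 = 64 + 32 + 4, so:
unsigned int y = (x << 6) + (x << 5) + (x << 2);   // y == x * 100 == 3700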
If you are not concerned about performance and still want to work with floats (you have already been warned this may not be very fast on a PIC16 and may use a lot of flash), math.h has the following functions: http://en.cppreference.com/w/c/numeric/math including exponentiation: pow(base, exp), and logarithms in base 2, base 10 and base e only; for an arbitrary base, use the change-of-base logarithmic property.
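For instance, a base-7 logarithm via the change-of-base property would look like this (the values are placeholders):

#include <math.h>

double p    = pow(2.5, 1.3);              // 2.5 raised to the power 1.3
double log7 = log(42.0) / log(7.0);       // log base 7 of 42, via change of base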

Related

Can you replace floating point divisions by integer operations?

The Cortex MCU I'm using doesn't have support for floating point divisions in hardware. The GCC compiler solves this by doing them in software, but warns that it can be very slow.
Now I was wondering how I could avoid them altogether. For example, I could scale the value up by a factor of 10000 (integer multiplication) and divide by another large factor (integer division), and get the exact same result.
But would these two operations be actually faster in general than a single floating point operation? For example, does it make sense to replace:
int result = 100 * 0.95f;
by
int result = (100 * 9500) / 10000;
to get 95%?
It's better to get rid of division altogether if you can. This is relatively easy if the divisor is a compile-time constant. Typically you arrange things so that any division operation can be replaced by a bitwise shift. So for your example:
unsigned int x = 100;
unsigned int y = (x * (unsigned int)(0.95 * 1024)) >> 10; // y = x * 0.95
Obviously you need to be very aware of the range of x so that you can avoid overflow in the intermediate result.
And as always, remember that premature optimisation is evil - only use fixed point optimisations such as this if you have identified a performance bottleneck.
Yes, the integer expression will be faster - the integer divide and multiply instructions are single machine instructions, whereas the floating point operations will be either function calls (for direct software floating point) or exception handlers (for FPU instruction emulation). Either way, each operation will comprise multiple instructions.
However, while for simple operations integer expressions and ad-hoc fixed-point (scaled-integer) expressions may be adequate, for maths-intensive applications involving trigonometry functions, logarithms etc. it can become complex. For that you might employ a common fixed-point representation and library. This is easiest in C++ rather than C, as exemplified by Anthony Williams' fixed point library, where, due to extensive operator and function overloading, in most cases you can simply replace the float or double keywords with fixed, and existing expressions and algorithms will work with performance comparable to an FPU-equipped ARM for many operations. If you are not comfortable using C++, the rest of your code need not use any C++ specific features and can essentially be C code compiled as C++.

Computing float to 2 digits only (not formatting/rounding off 6 digits after decimal to 2)

I am running a computation on an embedded processor which involves a float operation like this:
a = (float)23 / (float)3; // a = 7.666....7
This is a long computation, and my app is OK with a certain amount of rounding error; I mean 7.67 or 7.66 doesn't matter. Is there a way to cut down the amount of computation time spent computing the float, or to tell math.h to use only 2 digits?
Any idea how to do this ?
PS:
Now I know many will suggest using fixed point, but I have specific requirements.
Precision is determined by the data type; common types like float and double have a fixed precision that can't change, but there are libraries like libfixmath that let you perform fast non-integer maths (in this case using int32_t).
There is nothing you can 'tell' math.h (in this regard); float does not have precision options.
If you want less computational intensive calculations you must use other types or methods.
You already mention the best candidate: fixed-point. But, for obscure reasons, you say that you can't use it.
Another idea is to scale up your calculations by, for example, a factor of 10,000 and keep everything as int (or a longer integer type). Then only scale down to float when needed. But of course, it depends on your problem whether you can do this.
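A minimal sketch of that scaling idea, assuming the values of interest still fit in 32 bits after multiplying by 10,000:

#include <stdint.h>

int32_t a_scaled   = 76667;                  // represents 7.6667
int32_t b_scaled   = 12500;                  // represents 1.2500
int32_t sum_scaled = a_scaled + b_scaled;    // represents 8.9167, all-integer
float   result     = sum_scaled / 10000.0f;  // scale down to float only when needed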

Binary, Floats, and Modern Computers

I have been reading a lot about floats and computer-processed floating-point operations. The biggest question I see when reading about them is why are they so inaccurate? I understand this is because binary cannot accurately represent all real numbers, so the numbers are rounded to the 'best' approximation.
My question is, knowing this, why do we still use binary as the base for computer operations? Surely using a larger base number than 2 would increase the accuracy of floating-point operations exponentially, would it not?
What are the advantages of using a binary number system for computers as opposed to another base, and has another base ever been tried? Or is it even possible?
First of all: you cannot represent all real numbers even when using, say, base 100. But you already know this. Anyway, this means: inaccuracy will always arise from not being able to represent all real numbers.
Now let's talk about what higher bases can bring you when doing math: higher bases bring exactly nothing in terms of precision. Why?
If you want to use base 4, then a 16-digit base-4 number provides exactly 4^16 different values.
But you can get the same number of different values from a 32-digit base-2 number (2^32 = 4^16).
As another answer already said: transistors can either be on or off. So your newly designed base-4 registers need to be an abstraction over (base 2) ON/OFF 'bits'. This means: use two 'bits' to represent a base-4 digit. But you'll still get exactly 2^N levels by spending N 'bits' (or N/2 base-4 digits). You can only get better accuracy by spending more bits, not by increasing the base. Which base you 'imagine/abstract' your numbers to be in (e.g. like how printf can print these base-2 numbers in base 10) is really just a matter of abstraction and not precision.
Computers are built on transistors, which have a "switched on" state, and a "switched off" state. This corresponds to high and low voltage. Pretty much all digital integrated circuits work in this binary fashion.
Ignoring the fact that transistors just simply work this way, using a different base (e.g. base 3) would require these circuits to operate at an intermediate voltage state (or several) as well as 0V and their highest operating voltage. This is more complicated, and can result in problems at high frequencies - how can you tell whether a signal is just transitioning between 2V and 0V, or actually at 1V?
When we get down to the floating point level, we are (as nhahtdh mentioned in their answer) mapping an infinite space of numbers down to a finite storage space. It's an absolute guarantee that we'll lose some precision. One advantage of IEEE floats, though, is that the precision is relative to the magnitude of the value.
Update: You should also check out Tunguska, a ternary computer emulator. It uses base-3 instead of base-2, which makes for some interesting (albeit mind-bending) concepts.
We are essentially mapping a finite space to an infinite set of real numbers. So it is not even a problem of base anyway.
Base 2 is chosen, like Polynomial said, for implementation reasons, as it is easier to differentiate two levels of energy.
We either throw more space to represent more numbers/increase precision, or limit the range that we want to encode, or a mix of them.
It boils down to getting the most from the available chip area.
If you use on/off switches to represent numbers, you can't get more precision per switch than with a base-2 representation. This is simply because N switches can represent 2^N quantities no matter what you choose these values to be. There were early machines that used base 16 floating point digits, but each of these needed 4 binary bits, so the overall precision per bit was the same as base 2 (actually somewhat less due to edge cases).
If you choose a base that's not a power of 2, precision is obviously lost. For example, you need 4 bits to represent one decimal digit, but 6 of the 16 available values of those 4 bits are never used. This system is called binary-coded decimal and it's still used occasionally, usually when doing computations with money.
Multi-level logic could efficiently implement other bases, but at least with current chip technologies it turns out to be very expensive to implement more than 2 levels. Even quantum computers are being designed assuming two quantum levels: quantum bits, or qubits.
The nature of the world and of math is what makes the floating point situation hopeless. There is a hierarchy of real numbers: Integer -> Rational -> Algebraic -> Transcendental. There's a wonderful mathematical proof, Cantor diagonalization, that most numbers, i.e. a "bigger infinity" than the other sets, are transcendental. Yet no matter what floating point system you choose, there will still be lowly rational numbers with no perfect representation (e.g. 1/3 in base 10). This is our universe. No amount of clever hardware design will change it.
Software can and does use rational representations, storing a numerator and denominator as integers. However with these your programmer's hands are tied. For example, square root is not "closed." Sqrt(2) has no rational representation.
There has been research with algebraic number reps and "lazy" reps of arbitrary reals that produce more digits as needed. Most work of this type seems to be in computational geometry.
Your first paragraph makes sense but the second is a non sequitur. A larger base would not make a difference to the precision.
The precision of a number depends on the amount of storage that is used for it - for example a 16-bit binary number has the same precision as a 2-digit base-256 number: both take up the same amount of information.
See the usual floating point reference for more detail - it generalises to all bases.
Yes, computers have been built using other bases - I know of ones that used base 10 (decimal); cf. Wikipedia.
Yes, there are/have been computers that use other than binary (i.e., other than base-2 representations and arithmetic):
Decimal computers.
Designers of computing systems have looked into many alternatives. But it's hard to find a model that's as simple to implement in a physical device as one using two discrete states. So start with a binary circuit that's very easy and cheap to build and work up to a computer with complex operations. That's the history of binary in a nutshell.
I am not an EE, so everything I say below may be totally wrong. But...
The advantage of binary is that it maps very cleanly to distinguishing between on/off (or, more accurately, high/low voltage) states in real circuits. Trying to distinguish between multiple voltages would, I think, present a bit more of a challenge.
This may go completely out the window if quantum computers make it out of the lab.
There are 2 issues arising from the use of binary floating-point numbers to represent mathematical real numbers -- well, there are probably a lot more issues, but 2 is enough for the time being.
All computer numbers are finite, so any number which requires an infinite number of digits cannot be accurately represented on a computer, whatever number base is chosen. That deals with pi, e, and most of the other real numbers.
Whatever base is chosen will have difficulties representing some fractions finitely. Base 2 can only approximate any fraction with a factor of 3 in the denominator, and base 5 or base 7 have the same problem.
Over the years computers with circuitry based on devices with more than 2 states have been built. The old Soviet Union developed a series of computers with 3-state devices and at least one US computer manufacturer at one time offered computers using 10-state devices for arithmetic.
I suspect that binary representation has won out (so far) because it is simple, both to reason about and to implement with current electronic devices.
I vote that we move to a rational number storage system: two 32-bit integers that evaluate as p/q. Multiplication and division would be really cheap operations. Yes, there will be redundant representations (1/2 = 2/4), but who really uses the full dynamic range of a 64-bit double anyway?
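A minimal sketch of such a p/q representation (illustrative only; no normalisation or overflow handling shown):

#include <stdint.h>

typedef struct { int32_t p; int32_t q; } rational_t;   // value = p / q

// multiplication and division reduce to a pair of integer multiplies
static rational_t rat_mul(rational_t a, rational_t b)
{
    rational_t r = { a.p * b.p, a.q * b.q };
    return r;
}

static rational_t rat_div(rational_t a, rational_t b)
{
    rational_t r = { a.p * b.q, a.q * b.p };
    return r;
}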
I'm neither an electrical engineer nor a mathematician, so take that into consideration when I make the following statement:
All floating point numbers can be represented as integers.

How can floating point calculations be made deterministic?

Floating point calculation is neither associative nor distributive on processors. So,
(a + b) + c is not equal to a + (b + c)
and a * (b + c) is not equal to a * b + a * c
Is there any way to perform deterministic floating point calculations that do not give different results? It would be deterministic on a uniprocessor of course, but it would not be deterministic in multithreaded programs if threads add to a sum, for example, as there might be different interleavings of the threads.
So my question is, how can one achieve deterministic results for floating point calculations in multithreaded programs?
Floating-point is deterministic. The same floating-point operations, run on the same hardware, always produces the same result. There is no black magic, noise, randomness, fuzzing, or any of the other things that people commonly attribute to floating-point. The tooth fairy does not show up, take the low bits of your result, and leave a quarter under your pillow.
Now, that said, certain blocked algorithms that are commonly used for large-scale parallel computations are non-deterministic in terms of the order in which floating-point computations are performed, which can result in non-bit-exact results across runs.
What can you do about it?
First, make sure that you actually can't live with the situation. Many things that you might try to enforce ordering in a parallel computation will hurt performance. That's just how it is.
I would also note that although blocked algorithms may introduce some amount of non-determinism, they frequently deliver results with smaller rounding errors than do naive unblocked serial algorithms (surprising but true!). If you can live with the errors produced by a naive serial algorithm, you can probably live with the errors of a parallel blocked algorithm.
Now, if you really, truly, need exact reproducibility across runs, here are a few suggestions that tend not to adversely affect performance too much:
Don't use multithreaded algorithms that can reorder floating-point computations. Problem solved. This doesn't mean you can't use multithreaded algorithms at all, merely that you need to ensure that each individual result is only touched by a single thread between synchronization points. Note that this can actually improve performance on some architectures if done properly, by reducing D$ contention between cores.
In reduction operations, you can have each thread store its result to an indexed location in an array, wait for all threads to finish, then accumulate the elements of the array in order (a sketch follows after these suggestions). This adds a small amount of memory overhead, but is generally pretty tolerable, especially when the number of threads is "small".
Find ways to hoist the parallelism. Instead of computing 24 matrix multiplications, each one of which uses parallel algorithms, compute 24 matrix products in parallel, each one of which uses a serial algorithm. This, too, can be beneficial for performance (sometimes enormously so).
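Here is a minimal sketch of the per-thread accumulator idea from the second suggestion, using POSIX threads (the data-summing step is elided; the point is the fixed accumulation order at the end):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static double partial[NUM_THREADS];          // one result slot per thread

static void *worker(void *arg)
{
    int id = *(int *)arg;
    double sum = 0.0;
    // ... thread 'id' sums its own share of the data into 'sum' ...
    partial[id] = sum;                       // each thread writes only its own slot
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    int ids[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);

    double total = 0.0;
    for (int i = 0; i < NUM_THREADS; i++)    // accumulate in a fixed order
        total += partial[i];                 // -> the same result on every run
    printf("total = %f\n", total);
    return 0;
}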
There are lots of other ways to handle this. They all require thought and care. Parallel programming usually does.
Edit: I've removed my old answer since I seem to have misunderstood OP's question. If you want to see it you can read the edit history.
I think the ideal solution would be to switch to having a separate accumulator for each thread. This avoids all locking, which should make a drastic difference to performance. You can simply sum the accumulators at the end of the whole operation.
Alternatively, if you insist on using a single accumulator, one solution is to use "fixed-point" rather than floating point. This can be done with floating-point types by including a giant "bias" term in your accumulator to lock the exponent at a fixed value. For example if you know the accumulator will never exceed 2^32, you can start the accumulator at 0x1p32. This will lock you at 32 bits of precision to the left of the radix point, and 20 bits of fractional precision (assuming double). If that's not enough precision, you could use a smaller bias (assuming the accumulator will not grow too large) or switch to long double. If long double is 80-bit extended format, a bias of 2^32 would give 31 bits of fractional precision.
Then, whenever you want to actually "use" the value of the accumulator, simply subtract out the bias term.
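A small sketch of the bias trick (the values are made up and chosen so everything fits in the locked precision; in real code the bias must be chosen from the known range of the accumulator):

double bias = 0x1p32;          // 2^32 locks the exponent of the accumulator
double acc  = bias;

// every addend is now effectively rounded at the same fixed position of the
// radix point (2^-20 with double), so the accumulation behaves like fixed point
acc += 1.5;
acc += 2.25;
acc += 0.125;

double sum = acc - bias;       // subtract the bias to recover the value: 3.875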
Even using a high-precision fixed point datatype would not solve the problem of making the results for said equations deterministic (except in certain cases). As Keith Thompson pointed out in a comment, 1/3 is a trivial counter-example of a value that cannot be stored correctly in either a standard base-10 or base-2 floating point representation (regardless of precision or memory used).
One solution that, depending upon particular needs, may address this issue (it still has limits) is to use a Rational number data-type (one that stores both a numerator and denominator). Keith suggested GMP as one such library:
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. There is no practical limit to the precision...
Whether it is suitable (or adequate) for this task is another story...
Happy coding.
Use a decimal type or library supporting such a type.
Try storing each intermediate result in a volatile object:
volatile double a_plus_b = a + b;
volatile double a_plus_b_plus_c = a_plus_b + c;
This is likely to have nasty effects on performance. I suggest measuring both versions.
EDIT: The purpose of volatile is to inhibit optimizations that might affect the results even in a single-threaded environment, such as changing the order of operations or storing intermediate results in wider registers. It doesn't address multi-threading issues.
EDIT2: Something else to consider is that
A floating expression may be contracted, that is, evaluated as though it were an atomic operation, thereby omitting rounding errors implied by the source code and the expression evaluation method.
This can be inhibited by using
#include <math.h>
...
#pragma STDC FP_CONTRACT OFF
Reference: C99 standard (large PDF), sections 7.12.2 and 6.5 paragraph 8. This is C99-specific; some compilers might not support it.
Use packed decimal.

How can I introduce a small number with a lot of significant figures into a C program?

I'm not particularly knowledgeable about programming and I'm trying to figure out how to get a precise value calculated in a C program. I need a constant to the power of negative 7, with 5 significant figures. Any suggestions (keeping in mind I know very little, have never programmed in anything but C, and only during required courses I took years ago at school)?
Thanks!
You can get high-precision math from specialized libraries, but if all you need is 5 significant digits then the built-in float and double types will do fine. Let's go with double for maximum precision.
The negative 7th power is just 1 over your number to the 7th power, so...
double k = 1.2345678; // your constant, whatever it is
double ktominus7 = 1.0 / (k * k * k * k * k * k * k);
...and that's it!
If you want to print out the value, you can do something like
printf("My number is: %9.5g\n", ktominus7);
For a constant value, the required calculation is going to be constant too. So, I recommend you calculate the value using your [desktop calculator / MATLAB / other] then hard-code it in your C code.
In the realm of computer floating-point formats, five significant digits is not a lot. The 32-bit IEEE-754 floating-point type used for float in most implementations of C has 24 bits of precision, which is about 7.2 decimal digits. So you can just use floating-point with no fear. double usually has 53 bits of precision (almost 16 decimal digits). Carl Smotricz's answer is fine, but there's also a pow function in C that you can pass -7.0 to.
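For example, using pow from math.h (the constant here is a placeholder):

#include <math.h>

double k = 1.2345678;               // your constant, whatever it is
double ktominus7 = pow(k, -7.0);    // k raised to the power of -7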
There are times when you have to be careful about numerical analysis of your algorithm to ensure you aren't losing precision with intermediate results, but this doesn't sound like one of them.
long double offers the best precision in most cases and can be statically allocated and re-used to keep waste to a minimum. See also quadruple precision; both vary from platform to platform. In quadruple precision the leftmost bit still dictates the sign, while the next 15 bits dictate the exponent. That is IEEE 754 binary128 - if the links provided aren't enough, they all lead back to long double :)
Simple shifting should take care of the rest, if I understand you correctly?
You can use log to transform small numbers into larger numbers and do your math on the log-transformed version. It's kind of tricky but it will work most of the time. You can also switch to Python, which does not have this problem as much.
