The Cortex MCU I'm using doesn't have hardware support for floating-point division. The GCC compiler solves this by doing it in software, but warns that it can be very slow.
Now I was wondering how I could avoid it altogether. For example, I could scale the value up by a factor of 10000 (integer multiplication), divide by another large factor (integer division), and get the exact same result.
But would these two operations be actually faster in general than a single floating point operation? For example, does it make sense to replace:
int result = 100 * 0.95f
by
int result = (100 * 9500) / 10000
to get 95% ?
It's better to get rid of division altogether if you can. This is relatively easy if the divisor is a compile-time constant. Typically you arrange things so that any division operation can be replaced by a bitwise shift. So for your example:
unsigned int x = 100;
unsigned int y = (x * (unsigned int)(0.95 * 1024)) >> 10; // y = x * 0.95
Obviously you need to be very aware of the range of x so that you can avoid overflow in the intermediate result.
And as always, remember that premature optimisation is evil - only use fixed point optimisations such as this if you have identified a performance bottleneck.
Yes, the integer expression will be faster - the integer divide and multiply instructions are single machine instructions, whereas the floating-point operations will be either function calls (for direct software floating point) or exception handlers (for FPU instruction emulation). Either way, each operation will comprise multiple instructions.
However, while integer expressions and ad-hoc fixed-point (scaled integer) expressions may be adequate for simple operations, for maths-intensive applications involving trigonometric functions, logarithms, etc. it can become complex. For that you might employ a common fixed-point representation and library. This is easiest in C++ rather than C, as exemplified by Anthony Williams' fixed-point library, where, due to extensive operator and function overloading, in most cases you can simply replace the float or double keywords with fixed, and existing expressions and algorithms will work with performance comparable to an FPU-equipped ARM for many operations. If you are not comfortable using C++, the rest of your code need not use any C++-specific features, and can essentially be C code compiled as C++.
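To give a flavour of what a common fixed-point representation looks like in plain C (this is an illustrative minimal sketch, not Anthony Williams' library), a Q16.16 format might be wrapped like this:

#include <stdint.h>

/* Minimal Q16.16 fixed-point sketch: 16 integer bits and 16 fraction bits
   packed into an int32_t. Names and format are illustrative only. */
typedef int32_t fix16_t;

#define FIX16_ONE  (1 << 16)
#define FIX16(x)   ((fix16_t)((x) * FIX16_ONE))        /* from a constant */

static inline fix16_t fix16_mul(fix16_t a, fix16_t b)
{
    /* note: >> on a negative product relies on arithmetic right shift */
    return (fix16_t)(((int64_t)a * b) >> 16);           /* rescale the product */
}

static inline fix16_t fix16_div(fix16_t a, fix16_t b)
{
    return (fix16_t)(((int64_t)a * FIX16_ONE) / b);     /* pre-scale the dividend */
}

For example, fix16_mul(FIX16(100), FIX16(0.95)) yields approximately FIX16(95), using only integer multiplies and shifts.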
Related
In my program the operation n % 10 occurs very frequently. I know that the modulo operation can be done much faster when we have n % m where m is a power of 2, since it can be replaced by n & (m - 1). However, is there any faster way to calculate the modulus if the operand is 10?
In my case n is a uint8_t in some cases and in other cases n is an uint32_t.
Because most modern processors can do multiplication much, much faster than division, it is often possible to speed up division and modulus operations where the divisor is a known small constant by replacing the division with one or two multiplications and a few other fast operations (such as shifts and additions).
To do so requires computing at compile time some magic numbers that depend on the divisor; fortunately most modern compilers know how to do this, so you don't need to do anything to take advantage of it. Just let your compiler do the heavy lifting for you, as @chux suggests in an excellent answer.
You can help the compiler by using unsigned types; for some dividends, signed division and modulus are harder to replace.
The basic outline of the optimisation of modulus looks like this:
If you had exact arithmetic, you could replace x % p with p * ((x * (1/p)) % 1). For constant p, 1/p can be precomputed at compile time. The % 1 operation simply consists of discarding the integer part and keeping only the fraction bits, which is just a bitwise mask in a fixed-point representation. So that replaces a division with two multiplies, and if p only has a few bits set, the multiply by p might be further optimised into a few left-shifts.
We can do that computation with fixed-point arithmetic, taking advantage of the fact that most processors produce a double-sized result for integer multiplication. Since we don't care about the integer part of the inner multiplication and we know that the result of the outer multiplication must be less than p, we only need to reserve ceil(log2 p) bits for the integer part of the computation, leaving the rest of the bits for the fraction. And that might give us enough precision to correctly handle the possible range of values of x, particularly if x has a limited range (e.g. uint8_t or even uint16_t). The key is finding a position of the fixed point which minimises the error in the representation of 1/p.
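As a concrete sketch of that fraction-bit method for the uint8_t case (the constant 205/2048 is an 11-bit fixed-point approximation of 1/10, which happens to be accurate enough for inputs up to 255; it is worth verifying the chosen constant exhaustively for your input range):

#include <stdint.h>

/* x % 10 for x in [0, 255], computed as 10 * frac(x / 10) in fixed point. */
static inline uint8_t mod10_u8(uint8_t x)
{
    uint16_t frac = (uint16_t)((x * 205u) & 0x7FFu); /* fraction bits of x * (205/2048) */
    return (uint8_t)((frac * 10u) >> 11);            /* scale the fraction back up by 10 */
}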
For many small values of p, that works. For others, there is an alternative (but slower) solution which involves estimating q = x/p using multiplication by the inverse, and then computing x - q * p. If the estimate of q can be guaranteed to be either correct or off by one in a known direction, we only need to correct the final computation by conditionally adding or subtracting p; that can be accomplished without a branch on many modern CPUs. (The direction of the error is known because it will depend only on whether the approximation we chose for the inverse of the divisor was too small or too big.)
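For the full uint32_t range, here is a sketch of that quotient-first approach using the well-known 0xCCCCCCCD reciprocal for division by 10 (this is essentially what optimising compilers emit for x / 10; for this particular constant the estimate is exact, so no correction step is needed):

#include <stdint.h>

/* x % 10 for any uint32_t x: compute q = x / 10 via a reciprocal multiply,
   then take the remainder as x - q * 10. */
static inline uint32_t mod10_u32(uint32_t x)
{
    uint32_t q = (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35); /* q = x / 10 */
    return x - q * 10u;
}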
In the very specific case of x % 10 where x is a uint8_t, you might be able to do better than the above using a 256-byte lookup table. That would only be worthwhile if you were doing the modulus operation in a tight loop over a large number of values, and even then you'd want to profile carefully to verify that it is an improvement.
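A minimal sketch of that table approach (the table and helper names are illustrative; the table could equally be spelled out as a static initializer):

#include <stdint.h>

static uint8_t mod10_table[256];

static void init_mod10_table(void)
{
    for (unsigned i = 0; i < 256; i++)
        mod10_table[i] = (uint8_t)(i % 10);   /* pay for the divides once, up front */
}

static inline uint8_t mod10_lookup(uint8_t x)
{
    return mod10_table[x];                    /* one table load per modulus */
}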
I doubt whether that's the best expenditure of your time; there are probably much more fruitful optimisation opportunities in your application.
However, is there any faster way to calculate the modulus if the operand is 10?
With a good compiler, no. The compiler would have already emitted good code. You can explore different optimization settings with the compiler.
OTOH, if you know of some restriction that the compiler cannot assume about n % 10, such as the values always being positive or within a sub-range, you might be able to out-optimize the compiler.
Such micro-optimisation is usually not efficient use of programmer's time.
I have my code below and I want to ask: what's the best way of performing calculations (division, multiplication, logarithms, exponents) to 4 decimal places? I'm using a PIC16F1789 as my device.
float sensorValue;
float sensorAverage;
void main(){
    //Get an average data by testing 100 times
    for(int x = 0; x < 100; x++){
        // Get the total sum of all 100 data
        sensorValue = (sensorValue + ADC_GetConversion(SENSOR));
    }
    // Get the average
    sensorAverage = sensorValue/100.0;
}
In general, on MCUs, floating-point types are more costly (clocks, code) to process than integer types. While this is often true even for devices which have a hardware floating-point unit, it becomes vital information on devices without one, like the PIC16/18 controllers. These have to emulate all floating-point operations in software. This can easily cost >100 clock cycles per addition (much more for multiplication) and bloats the code.
So it is best to avoid float (not to speak of double) on such systems.
For your example, the ADC returns an integer type anyway, so the summation can be done purely with integer types. You just have to make sure the sum does not overflow, so the accumulator has to be able to hold roughly 100 times the maximum ADC reading for your code.
Finally, to calculate the average, you can either divide the integer by the number of iterations (round to zero), or - better - apply a simple "round to nearest" by:
#define NUMBER_OF_ITERATIONS 100
sensorAverage = (sensorValue + NUMBER_OF_ITERATIONS / 2) / NUMBER_OF_ITERATIONS;
If you really want to speed up your code, set NUMBER_OF_ITERATIONS to a power of two (64 or 128 here), if your code can tolerate this.
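A sketch of the loop from the question rewritten that way (reusing the ADC_GetConversion(SENSOR) call from the question's code; with 128 samples the division becomes a shift):

#include <stdint.h>

#define NUMBER_OF_ITERATIONS 128u   /* power of two, so the divide is a shift */

uint16_t read_sensor_average(void)
{
    uint32_t sum = 0;
    for (uint16_t i = 0; i < NUMBER_OF_ITERATIONS; i++) {
        sum += ADC_GetConversion(SENSOR);   /* integer accumulation only */
    }
    /* add half the divisor for round-to-nearest, then shift instead of dividing */
    return (uint16_t)((sum + NUMBER_OF_ITERATIONS / 2) >> 7);
}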
Finally: to get more than just the integer part of the division, you can treat the sum (sensorValue) as a fractional value. For the given 100 iterations, you can treat it as a decimal fraction: when converting to a string, just print a decimal point to the left of the lower 2 digits. As you divide by 100, there will be no more than two significant digits of decimal fraction. If you really need 4 digits, e.g. for other operations, you can multiply the sum by 100 (the scale factor is actually 10000, but you have already multiplied by 100 via the loop).
This is called decimal fixed point. Binary fixed point, as stated above, would be faster to process (scaling can then be done with shifts instead of multiplications and divisions).
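A sketch of the decimal fixed-point idea for printing (assuming sum is the total of 100 samples, so sum / 100 is the integer part of the average and sum % 100 its two-digit decimal fraction):

#include <stdio.h>
#include <stdint.h>

static void print_average(uint32_t sum)
{
    /* print the average to two decimal places without any float arithmetic */
    printf("%lu.%02lu\n", (unsigned long)(sum / 100u), (unsigned long)(sum % 100u));
}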
On PIC16, I would strongly suggest considering a binary fraction, as multiplication and division are very costly on this platform. In general, these devices are not well suited to signal processing. If you need to sustain some performance, an ARM Cortex-M0 or M4 would be a far better choice - at a similar price.
In your example it is trivial to avoid non-integer representations altogether; however, to answer your question more generally, an ISO-compliant compiler will support floating-point arithmetic and the standard library, but for performance and code-size reasons you may want to avoid it.
Fixed-point arithmetic is what you probably need. For simple calculations an ad-hoc approach to fixed point can be used whereby, for example, you treat the units of sensorAverage in your example as hundredths (1/100), and avoid the expensive division altogether. However, if you want to perform full maths-library operations, a better approach is to use a fixed-point library. One such library is presented in Optimizing Applications with Fixed-Point Arithmetic by Anthony Williams. The code is C++ and PIC16 may lack a decent C++ compiler, but the methods can be ported somewhat less elegantly to C. It also uses a large 64-bit 36Q28 fixed-point format, which would be expensive and slow on PIC16; you might want to adapt it to use 16Q16 perhaps.
If you are really concerned about performance, stick to integer arithmetic and try to make the number of samples to average a power of two so the division can be done with a bit shift. However, if it is not a power of two, say 100 (as Olaf points out for fixed point), you can also use bit shifts and additions: How can I multiply and divide using only bit shifting and adding?
If you are not concerned about performance and still want to work with floats (you have already been warned this may not be very fast on a PIC16 and may use a lot of flash), math.h has the following functions: http://en.cppreference.com/w/c/numeric/math including exponentiation, pow(base, exp), and logarithms (only base 2, base 10 and base e; for an arbitrary base use the change-of-base property of logarithms).
Regarding minimising the error in floating-point operations, if I have an operation such as the following in C:
float a = 123.456;
float b = 456.789;
float r = 0.12345;
a = a - (r * b);
Will the result of the calculation change if I split the multiplication and subtraction steps out, i.e.:
float c = r * b;
a = a - c;
I am wondering whether a CPU would then treat these calculations differently and thereby the error may be smaller in one case?
If not, which I presume anyway, are there any good rules-of-thumb to mitigate against floating-point error? Can I massage data in a way that will help?
Please don't just say "use higher precision" - that's not what I'm after.
EDIT
For information about the data, in the general sense errors seem to be worse when the operation results in a very large number like 123456789. Small numbers, such as 1.23456789, seem to yield more accurate results after operations. Am I imagining this, or would scaling larger numbers help accuracy?
Note: this answer starts with a lengthy discussion of the distinction between a = a - (r * b); and float c = r * b; a = a - c; with a C99-compliant compiler. The part of the question about the goal of improving accuracy while avoiding extended precision is covered at the end.
Extended floating-point precision for intermediate results
If your C99 compiler defines FLT_EVAL_METHOD as 0, then the two computations can be expected to produce exactly the same result. If the compiler defines FLT_EVAL_METHOD to 1 or 2, then a = a - (r * b); will be more precise for some values of a, r and b, because all intermediate computations will be done at an extended precision (double for the value 1 and long double for the value 2).
The program cannot set FLT_EVAL_METHOD, but you can use commandline options to change the way your compiler computes with floating-point, and that will make it change its definition accordingly.
Contraction of some intermediate results
Depending on whether you use #pragma STDC FP_CONTRACT in your program and on your compiler's default value for this pragma, some compound floating-point expressions can be contracted into single instructions that behave as if the intermediate result was computed with infinite precision. This happens to be a possibility for your example when targeting a modern processor, as the fused multiply-add instruction will compute a directly and as accurately as allowed by the floating-point type.
However, you should bear in mind that the contraction only takes place at the compiler's option, without any guarantees. The compiler uses the FMA instruction to optimize speed, not accuracy, so the transformation may not take place at lower optimization levels. Sometimes several transformations are possible (e.g. a * b + c * d can be computed either as fmaf(c, d, a*b) or as fmaf(a, b, c*d)) and the compiler may choose one or the other.
In short, the contraction of floating-point computations is not intended to help you achieve accuracy. You might as well make sure it is disabled if you like reproducible results.
However, in the particular case of the fused-multiply-add compound operation, you can use the C99 standard function fmaf() to tell the compiler to compute the multiplication and addition in a single step with a single rounding. If you do this, then the compiler will not be allowed to produce anything else than the best result for a.
float fmaf(float x, float y, float z);
DESCRIPTION
The fma() functions compute (x*y)+z, rounded as one ternary operation:
they compute the value (as if) to infinite precision and round once to
the result format, according to the current rounding mode.
Note that if the FMA instruction is not available, your compiler's implementation of the function fmaf() will at best just use higher precision, and if this happens on your compilation platform, you might just as well use the type double for the accumulator: it will be faster and more accurate than using fmaf(). In the worst case, a flawed implementation of fmaf() will be provided.
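For the expression in the question, the single-rounding version would look something like this (a sketch using the standard C99 fmaf() from <math.h>; negating r turns the fused multiply-add into the required subtraction):

#include <math.h>

/* Compute a - (r * b) with a single rounding: fmaf(x, y, z) = x*y + z,
   rounded once, so fmaf(-r, b, a) = a - r*b. */
float update(float a, float r, float b)
{
    return fmaf(-r, b, a);
}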
Improving accuracy while only using single-precision
Use Kahan summation if your computation involves a long chain of additions. Some accuracy can be gained by simply summing the r*b terms computed as single-precision products, assuming there are many of them. If you wish to gain more accuracy, you might want to compute r*b itself exactly as the sum of two single-precision numbers, but if you do this you might as well switch to double-single arithmetic entirely. Double-single arithmetic would be the same as the double-double technique succinctly described here, but with single-precision numbers instead.
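A minimal sketch of Kahan (compensated) summation in single precision, assuming many r[i]*b[i] terms are being accumulated (compile without aggressive floating-point optimisations such as -ffast-math, which can legally delete the compensation):

float kahan_dot(const float *r, const float *b, int n)
{
    float sum  = 0.0f;
    float comp = 0.0f;                    /* running compensation for lost low-order bits */
    for (int i = 0; i < n; i++) {
        float term = r[i] * b[i] - comp;
        float t = sum + term;             /* low-order bits of term are lost here... */
        comp = (t - sum) - term;          /* ...and recovered here for the next iteration */
        sum = t;
    }
    return sum;
}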
Suppose I have a very small float a (for instance a=0.5) that enters the following expression:
6000.f * a * a;
Does the order of the operands make any difference? Is it better to write
6000.f * (a*a);
Or even
float result = a*a;
result *= 6000.f;
I've checked the classic What Every Computer Scientist Should Know About Floating-Point Arithmetic but couldn't find anything.
Is there an optimal way to order operands in a floating point operation?
It really depends on the values and your goals. For instance, if a is very small, a*a might underflow to zero, whereas 6000.0*a*a (which means (6000.0*a)*a) could still be nonzero. For avoiding overflow and underflow, the general rule is to apply the associative law so that you first perform multiplications where the operands' logarithms have opposite signs, which means squaring first is generally the worst strategy. On the other hand, for performance reasons, squaring first might be a very good strategy if you can reuse the value of the square.

You may encounter yet another issue, which could matter more for correctness than overflow/underflow if your numbers will never be very close to zero or infinity: certain multiplications may be guaranteed to have exact answers, while others involve rounding. In general you'll get the most accurate results by minimizing the number of rounding steps that happen.
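A small demonstration of the underflow point, assuming IEEE-754 single precision with default rounding and no flush-to-zero (the value 1e-23f is simply chosen to make the effect visible):

#include <stdio.h>

int main(void)
{
    float a = 1e-23f;
    printf("%g\n", (double)(a * a));           /* 0: ~1e-46 underflows to zero */
    printf("%g\n", (double)(6000.f * a * a));  /* nonzero: (6000*a)*a is a subnormal ~6e-43 */
    return 0;
}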
The optimal way depends on the purpose, really.
First of all, multiplication is faster than division.
So if you have to write a = a / 2;, it is better to write a = a * 0.5f;.
Your compiler is usually smart enough to replace a division by a constant with a multiplication if the result is the same, but of course it will not do that with variables.
Sometimes, you can optimize a bit by replacing divisions with multiplications, but there may be problems with precision.
Some other operations may be faster but less precise.
Let's take an example.
float f = (a * 100000) / (b * 10);
float g = (a / b) * (100000 / 10);
These are mathematically equivalent but the result can be a little different.
The first uses two multiplications and one division, the second uses one division and one multiplication. In both cases there may be a loss of precision; it depends on the magnitudes of a and b: if they are small values the first works better, if they are large values the second works better.
Then... if you have several constants and you want speed, group the constants together. Instead of
a = 6.3f * a * 2.0f * 3.1f;
just write
a = a * (6.3f * 2.0f * 3.1f);
Some compilers optimize well, others less, but in either case there is no risk in keeping all the constants together.
Having said that, we could talk for hours about how processors work.
Even processors in the same family, like Intel's, work differently between generations!
Some compilers use SSE instructions, others don't.
Some processors support SSE2, some SSE, some only MMX... and some systems don't have an FPU at all!
Each system does some calculations better than others; finding a common rule is hard.
You should just write readable code, clean and simple, without worrying too much about these unpredictable, very low-level optimizations.
If your expression looks complicated, do some algebra and/or go to the WolframAlpha search engine and ask it to optimize it for you :)
That said, you don't really need to declare one variable and overwrite its contents over and over; the compiler can usually optimize less in that situation.
a = 5 + b;
a /= 2 * c;
a += 2 - c;
a *= 7;
just write your expression avoiding this mess :)
a = ((5 + b) / (2 * c) + 2 - c) * 7;
About your specific example, 6000.f * a * a: just write it as you have it, no need to change it; it is fine as it is.
Not typically, no.
That being said, if you're doing multiple operations with large values, it may make sense to order them in a way that avoids overflows or reduces precision errors, based on their precedence and associativity, if the algorithm provides a way to make that obvious. This would, however, require advance knowledge of the values involved, and not just be based on the syntax.
There are indeed algorithms to minimize cumulative error in a sequence of floating-point operations. One such is http://en.wikipedia.org/wiki/Kahan_summation_algorithm. Others exist for other operations: http://www.cs.cmu.edu/~quake-papers/related/Priest.ps.
Floating point calculation is neither associative nor distributive on processors. So,
(a + b) + c is not equal to a + (b + c)
and a * (b + c) is not equal to a * b + a * c
Is there any way to perform deterministic floating-point calculations that do not give different results? They would be deterministic on a uniprocessor, of course, but they would not be deterministic in multithreaded programs if threads add to a sum, for example, as there might be different interleavings of the threads.
So my question is, how can one achieve deterministic results for floating point calculations in multithreaded programs?
Floating-point is deterministic. The same floating-point operations, run on the same hardware, always produce the same result. There is no black magic, noise, randomness, fuzzing, or any of the other things that people commonly attribute to floating-point. The tooth fairy does not show up, take the low bits of your result, and leave a quarter under your pillow.
Now, that said, certain blocked algorithms that are commonly used for large-scale parallel computations are non-deterministic in terms of the order in which floating-point computations are performed, which can result in non-bit-exact results across runs.
What can you do about it?
First, make sure that you actually can't live with the situation. Many things that you might try to enforce ordering in a parallel computation will hurt performance. That's just how it is.
I would also note that although blocked algorithms may introduce some amount of non-determinism, they frequently deliver results with smaller rounding errors than do naive unblocked serial algorithms (surprising but true!). If you can live with the errors produced by a naive serial algorithm, you can probably live with the errors of a parallel blocked algorithm.
Now, if you really, truly, need exact reproducibility across runs, here are a few suggestions that tend not to adversely affect performance too much:
Don't use multithreaded algorithms that can reorder floating-point computations. Problem solved. This doesn't mean you can't use multithreaded algorithms at all, merely that you need to ensure that each individual result is only touched by a single thread between synchronization points. Note that this can actually improve performance on some architectures if done properly, by reducing D$ contention between cores.
In reduction operations, you can have each thread store its result to an indexed location in an array, wait for all threads to finish, and then accumulate the elements of the array in order (see the sketch after these suggestions). This adds a small amount of memory overhead, but is generally pretty tolerable, especially when the number of threads is "small".
Find ways to hoist the parallelism. Instead of computing 24 matrix multiplications, each one of which uses parallel algorithms, compute 24 matrix products in parallel, each one of which uses a serial algorithm. This, too, can be beneficial for performance (sometimes enormously so).
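A sketch of the indexed-partial-results reduction using POSIX threads (the names NTHREADS, CHUNK and data[] are illustrative; the point is that each slot has a single writer and the final accumulation always happens in index order, so the rounding sequence is identical on every run):

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define CHUNK    1000

static float  data[NTHREADS * CHUNK];   /* the values being reduced */
static double partial[NTHREADS];        /* one slot per thread, no locking needed */

static void *partial_sum(void *arg)
{
    int i = (int)(long)arg;
    double s = 0.0;
    for (int k = 0; k < CHUNK; k++)
        s += data[i * CHUNK + k];
    partial[i] = s;
    return NULL;
}

int main(void)
{
    pthread_t th[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&th[i], NULL, partial_sum, (void *)(long)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(th[i], NULL);

    double total = 0.0;
    for (int i = 0; i < NTHREADS; i++)   /* accumulate in a fixed order */
        total += partial[i];
    printf("%.17g\n", total);
    return 0;
}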
There are lots of other ways to handle this. They all require thought and care. Parallel programming usually does.
I think the ideal solution would be to switch to having a separate accumulator for each thread. This avoids all locking, which should make a drastic difference to performance. You can simply sum the accumulators at the end of the whole operation.
Alternatively, if you insist on using a single accumulator, one solution is to use "fixed-point" rather than floating point. This can be done with floating-point types by including a giant "bias" term in your accumulator to lock the exponent at a fixed value. For example, if you know the accumulator will never exceed 2^32, you can start the accumulator at 0x1p32. This will lock you at 32 bits of precision to the left of the radix point, and 20 bits of fractional precision (assuming double). If that's not enough precision, you could use a smaller bias (assuming the accumulator will not grow too large) or switch to long double. If long double is 80-bit extended format, a bias of 2^32 would give 31 bits of fractional precision.
Then, whenever you want to actually "use" the value of the accumulator, simply subtract out the bias term.
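A minimal single-threaded sketch of the bias mechanism (assuming the running total stays well below 2^32, so the accumulator's exponent stays pinned):

double biased_sum(const double *x, int n)
{
    double acc = 0x1p32;          /* bias term locks the exponent, so every add
                                     rounds at the same fixed bit position */
    for (int i = 0; i < n; i++)
        acc += x[i];
    return acc - 0x1p32;          /* subtract the bias back out when "using" the value */
}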
Even using a high-precision fixed-point datatype would not solve the problem of making the results for such equations deterministic (except in certain cases). As Keith Thompson pointed out in a comment, 1/3 is a trivial counter-example of a value that cannot be stored exactly in either a standard base-10 or base-2 floating-point representation (regardless of precision or memory used).
One solution that, depending upon particular needs, may address this issue (it still has limits) is to use a Rational number data-type (one that stores both a numerator and denominator). Keith suggested GMP as one such library:
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. There is no practical limit to the precision...
Whether it is suitable (or adequate) for this task is another story...
Happy coding.
Use a decimal type or library supporting such a type.
Try storing each intermediate result in a volatile object:
volatile double a_plus_b = a + b;
volatile double a_plus_b_plus_c = a_plus_b + c;
This is likely to have nasty effects on performance. I suggest measuring both versions.
EDIT: The purpose of volatile is to inhibit optimizations that might affect the results even in a single-threaded environment, such as changing the order of operations or storing intermediate results in wider registers. It doesn't address multi-threading issues.
EDIT2: Something else to consider is that
A floating expression may be contracted, that is, evaluated as though
it were an atomic operation, thereby omitting rounding errors implied
by the source code and the expression evaluation method.
This can be inhibited by using
#include <math.h>
...
#pragma STDC FP_CONTRACT OFF
Reference: C99 standard (large PDF), sections 7.12.2 and 6.5 paragraph 8. This is C99-specific; some compilers might not support it.
Use packed decimal.