EXIF fraction calculation - C

I am modifying JPEG EXIF data. Some values must be stored as fractions.
Here, I have two problems:
1.) Which is the correct fraction "format"? For instance, when I have an exposure time ("ExposureTime") of 30000µs and store it as 30000/1000000, the EXIF viewer shows the wrong exposure. Storing it as "1/30" returns the correct result. Do all fractions have to be "1/x"?
2.) How can I calculate the fraction fast? The method I am using now (similar to "Dec2Frac" in [1]) is very slow.
Regards,
[1] Calculating EXIF exposure time as a fraction (Delphi)

This is the code I have used in C# for calculating a fraction for EXIF GPS data. The method returns an array of two integers - one being the numerator and one the denominator.
public static int[] GetFraction(Decimal value)
{
    int denominator = 1;
    int numeratorMultiplier = 1;
    Decimal numerator = value * numeratorMultiplier;
    int failSafe = 0;
    // Scale by powers of ten until the numerator is a whole number,
    // the loop limit is reached, or the numerator would overflow Int32.
    while (Decimal.Remainder(numerator, 1m) != 0m && failSafe < 20 && ((long)numerator * 10) < Int32.MaxValue)
    {
        denominator *= 10;
        numeratorMultiplier *= 10;
        numerator = value * numeratorMultiplier;
        failSafe++;
    }
    return new int[] { Decimal.ToInt32(numerator), denominator };
}


Rounding function in C

So I am trying to write code that can round UP any number to 3 decimal places. My code for rounding up a number was like this:
for (rowIndex = 0; rowIndex < MAX_ROWS; rowIndex++)
{
    for (columnIndex = 0; columnIndex < MAX_COLUMNS; columnIndex++)
    {
        printf("%.3f ", ceil(rawData[rowIndex][columnIndex] * 1000.0) / 1000.0);
    }
}
But yesterday my teacher told us to use a code which has a structure like this:
float roundValue(float value, int decimalPlaces)
{
    // Place rounding code here
    return value;
}
I am not quite sure how to write the code in this format! I am a beginner at coding, so this might be a silly question.
UPDATE:
So I just read all the comments below and tried to write the code, but it still has a problem. My code is:
double roundValue(double value, int decimalPlaces)
{
    value = roundf(value * pow(10, decimalPlaces)) / pow(10, decimalPlaces);
    return value;
}

int main(void)
{
    int rowIndex = 0;
    int columnIndex = 0;
    double rawData[MAX_ROWS][MAX_COLUMNS]; // 2-dimensional array to store our raw data
    double value = rawData[MAX_ROWS][MAX_COLUMNS];
    int decimalPlaces = 3;
    // Print out the roundup data array
    printf(" --- ROUNDED DATA ---\n");
    for (rowIndex = 0; rowIndex < MAX_ROWS; rowIndex++)
    {
        for (columnIndex = 0; columnIndex < MAX_COLUMNS; columnIndex++)
        {
            printf("%.3f ", roundValue(value, 3));
        }
        printf("\n");
    }
    return 0;
}
It gives me only 0 for all the numbers.
Based on this answer, you could use the roundf function found in math.h:
#include <stdio.h>
#include <math.h>

/* function that rounds a float to the specified number of decimals */
float roundValue(float value, int decimalPlaces)
{
    value = roundf(value * pow(10, decimalPlaces)) / pow(10, decimalPlaces);
    return value;
}

/* to see the results: */
int main()
{
    float value = 12.34567;
    printf("%f", roundValue(value, 3));
    return 0;
}
Compilation/run:
$ gcc main.c -lm
$ ./a.out
12.346000
He just told you to put your code in a function that you can call from main().
So instead of rewriting the code every time you need a rounded value, you call the function with the number you want rounded and it gives you the result, so your code won't be repetitive.
Essentially it can't be done. The problem is that 0.1, or 0.001, cannot be represented exactly in floating-point format. So you can only round to the nearest representation: rounded = floor(x * 1000 + 0.5) / 1000.0. It's best to use the full accuracy of the double, then round at the last moment, for display.
printf("%.3f", x);
will achieve this for you. In combination with strtod it's also another technique for rounding.
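A small sketch of that last idea: keep full precision internally, format only for display, and, if you really need the rounded value back as a double, parse the formatted string with strtod:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double x = 4.19999981;
    char buf[32];

    printf("%.3f\n", x);                /* display rounded to 3 decimals */

    snprintf(buf, sizeof buf, "%.3f", x);
    double rounded = strtod(buf, NULL); /* nearest double to 4.200 */
    printf("%.10f\n", rounded);
    return 0;
}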
.. to round UP any number to 3 decimal places.
my teacher told us to use a code ... like float roundValue(float value, int decimalPlaces)
Without going to higher precision, it is very difficult to meet OP's goal with the best answer for all values.
Rounding a floating-point value a) up or b) to the nearest representable 0.001 (or 10^-n) is usually done in steps:
1) Multiply by 10^n
2) Round a) up or b) to nearest
3) Divide by 10^n
float roundValue(float value, int decimalPlaces) {
    // form the power of 10
    assert(decimalPlaces >= 0 && decimalPlaces <= 9);
    int power_of_10 = 1;
    while (decimalPlaces-- > 0) power_of_10 *= 10;
    double fpower_of_10 = power_of_10; // or just use pow(10, decimalPlaces)
Scaling by a power of 10 introduces imprecision. This slight error is magnified in the rounding step. A simple work-around is to use higher-precision math. Fortunately the coding goal started with a float value, and double often has higher precision.
Scaling by a power of 10 can cause overflow, yet that is not likely when value is float and the product is double, which has a wider range.
    double y = value * fpower_of_10;
    // round
    double rounded_y = ceil(y);  // round up
    // or
    double rounded_y = round(y); // round to nearest
The quotient will rarely be an exact multiple of 0.001 (or whatever power of 10), but rather a floating-point value near a multiple of 0.001.
    y = rounded_y / fpower_of_10;
    return y;
}
Usage follows. Recall that unless your floating-point type uses FLT_RADIX == 10 (very rare these days; it is usually 2), the result will only be near the desired "number to n decimal places". If done well, the result will be the nearest possible float/double.
printf("%f\n", roundValue(123.456789, 3));
printf("%.10f\n", roundValue(123.456789, 3)); // to see more
More: an easy way to avoid overflow issues, if higher precision is not available or used, is to recognize that large C floating-point values have no fractional part and need no rounding.
float roundValue(float value, int decimalPlaces) {
    float int_part;
    float frac_part = modff(value, &int_part);
    if (frac_part == 0) return value;
    double pow10 = pow(10, decimalPlaces);
    return round(value * pow10) / pow10; // or ceil()
}
There are other small, subtle issues not noted here: double rounding, NaN, rounding mode, round() vs. rint() vs. nearbyint().
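Assembled into a single compilable unit (the round-to-nearest variant; choosing ceil() instead gives the round-up variant), the sketch above looks like this:

#include <assert.h>
#include <math.h>
#include <stdio.h>

float roundValue(float value, int decimalPlaces) {
    // form the power of 10
    assert(decimalPlaces >= 0 && decimalPlaces <= 9);
    int power_of_10 = 1;
    while (decimalPlaces-- > 0) power_of_10 *= 10;
    double fpower_of_10 = power_of_10;

    // scale, round to nearest (use ceil() to round up), then scale back
    double y = value * fpower_of_10;
    double rounded_y = round(y);
    return rounded_y / fpower_of_10;
}

int main(void) {
    printf("%f\n", roundValue(123.456789f, 3));
    printf("%.10f\n", roundValue(123.456789f, 3)); // to see more digits
    return 0;
}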

a slight confusion with floating point values in C. (cs50 pset1 greedy.c)

I can't understand why the code below doesn't work properly with the value 4.2. I learnt by using a debugger that 4.2 isn't actually the number four point two; rather, as a floating-point value, 4.2 becomes 4.19999981.
To make up for this, I just added change = change + 0.00001; there on line 18.
Why do I have to do that? Why do floating-point numbers work this way?
#include <stdio.h>
#include <cs50.h>

float change;
int coinTotal;

int main(void)
{
    do {
        // Prompting the user to give the change amount
        printf("Enter change: ");
        // Getting a float from the user
        change = get_float();
    }
    while (change < 0);

    change = change + 0.00001;

    // Subtracting quarters from the change given
    for (int i = 0; change >= 0.25; i++)
    {
        change = change - 0.25;
        coinTotal++;
    }
    // Subtracting dimes from the remaining change
    for (int i = 0; change >= 0.1; i++)
    {
        change = change - 0.1;
        coinTotal++;
    }
    // Subtracting nickels from the remaining change
    for (int i = 0; change >= 0.05; i++)
    {
        change = change - 0.05;
        coinTotal++;
    }
    // Subtracting pennies from the remaining change
    for (int i = 0; change >= 0.01; i++)
    {
        change = change - 0.01;
        coinTotal++;
    }
    // Printing total coins used
    printf("%i\n", coinTotal);
}
Typically float can represent about 2^32 different values exactly. With float, 4.2 is not one of them. Instead the value is about 4.19999981, as OP has reported.
Working with fractional money is tricky. Rarely is float an acceptable type for money; alternatives include base-10 FP, double, integers and custom types.
If the code stays with some FP type, change >= 0.1 and the other compares need to be altered to change >= (0.01 - 0.005) or the like: the compare must be tolerant of values just less than or greater than a multiple of 0.01.
As you have discovered, it's impossible to represent most rational numbers exactly as floating-point values on computers, because the machine stores them in a fixed-size amount of bits.
The most common standard is IEEE 754.
Most commonly you will work with floats in single precision (32 bits in total). The number is represented as 1 bit for sign, 8 bits for exponent, and 23 bits for mantissa.
The representation is x = S * M * B^E, where:
S - sign (-1 or 1)
M - mantissa (a normalized fraction)
B - base (here, 2)
E - exponent (8 bits -> about -126..127, or 0..255 with a bias, depending on the definition in the standard)
This fraction (M) is what causes the problems with accurately representing values. You have to express an approximation within a limited amount of bits (you can only represent exactly those values that can be built by summing 1/2, 1/4, 1/8, ...).
Commonly, 32 bits gives you around 6-7 significant decimal digits of precision.
You can use 64 bits (double) for a greater range and considerably better precision (around 15-16 significant digits).
Make every number in your program 100 times bigger (i.e. work in whole cents), use the roundf function from math.h, and divide the result by 100 when you are about to print the value to the screen.
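A minimal sketch of that idea (hypothetical values; roundf converts the scaled input to exact integer cents, after which the greedy loop is exact integer math):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float change = 4.2f;                   /* e.g. the value from get_float() */
    int cents = (int)roundf(change * 100); /* 4.2 becomes exactly 420 cents */
    int coinTotal = 0;
    const int coins[] = { 25, 10, 5, 1 };  /* quarter, dime, nickel, penny */

    for (int i = 0; i < 4; i++) {
        coinTotal += cents / coins[i];     /* how many of this coin fit */
        cents %= coins[i];                 /* remainder still owed */
    }
    printf("%i\n", coinTotal);             /* 420 -> 16 quarters + 2 dimes = 18 */
    return 0;
}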

Round-off error when calculating a geometric mean [duplicate]

I need to compute the geometric mean of a large set of numbers, whose values are not a priori limited. The naive way would be
double geometric_mean(std::vector<double> const& data) // failure
{
    auto product = 1.0;
    for (auto x : data) product *= x;
    return std::pow(product, 1.0 / data.size());
}
However, this may well fail because of underflow or overflow in the accumulated product (note: long double doesn't really avoid this problem). So, the next option is to sum-up the logarithms:
double geometric_mean(std::vector<double> const& data)
{
    auto sum_log = 0.0;
    for (auto x : data) sum_log += std::log(x);
    return std::exp(sum_log / data.size());
}
This works, but calls std::log() for every element, which is potentially slow. Can I avoid that? For example by keeping track of (the equivalent of) the exponent and the mantissa of the accumulated product separately?
The "split exponent and mantissa" solution:
double geometric_mean(std::vector<double> const& data)
{
    double m = 1.0;
    long long ex = 0;
    double invN = 1.0 / data.size();
    for (double x : data)
    {
        int i;
        double f1 = std::frexp(x, &i); // x = f1 * 2^i with 0.5 <= |f1| < 1
        m *= f1;
        ex += i;
    }
    return std::pow(std::numeric_limits<double>::radix, ex * invN) * std::pow(m, invN);
}
If you are concerned that ex might overflow you can define it as a double instead of a long long, and multiply by invN at every step, but you might lose a lot of precision with this approach.
EDIT: For large inputs, we can split the computation into several buckets:
double geometric_mean(std::vector<double> const& data)
{
    long long ex = 0;
    auto do_bucket = [&data, &ex](int first, int last) -> double
    {
        double ans = 1.0;
        for (; first != last; ++first)
        {
            int i;
            ans *= std::frexp(data[first], &i);
            ex += i;
        }
        return ans;
    };

    const int bucket_size = -std::log2(std::numeric_limits<double>::min());
    std::size_t buckets = data.size() / bucket_size;
    double invN = 1.0 / data.size();
    double m = 1.0;

    for (std::size_t i = 0; i < buckets; ++i)
        m *= std::pow(do_bucket(i * bucket_size, (i + 1) * bucket_size), invN);
    m *= std::pow(do_bucket(buckets * bucket_size, data.size()), invN);

    return std::pow(std::numeric_limits<double>::radix, ex * invN) * m;
}
I think I figured out a way to do it; it combines the two routines in the question, similar to Peter's idea. Here is an example code.
double geometric_mean(std::vector<double> const& data)
{
    const double too_large = 1.e64;
    const double too_small = 1.e-64;
    double sum_log = 0.0;
    double product = 1.0;
    for (auto x : data) {
        product *= x;
        if (product > too_large || product < too_small) {
            sum_log += std::log(product); // flush the accumulated product into the log sum
            product = 1;
        }
    }
    return std::exp((sum_log + std::log(product)) / data.size());
}
The bad news is: this comes with a branch. The good news: the branch predictor is likely to get this almost always right (the branch should only rarely be triggered).
The branch could be avoided using Peter's idea of a constant number of terms in the product. The problem with that is that overflow/underflow may still occur within only a few terms, depending on the values.
You may be able to accelerate this by multiplying numbers as in your original solution and only converting to logarithms every certain number of multiplications (depending on the size of your initial numbers).
A different approach which would give better accuracy and performance than the logarithm method would be to compensate out-of-range exponents by a fixed amount, maintaining an exact logarithm of the cancelled excess. Like so:
const int EXP = 64;                // maximal/minimal exponent
const double BIG = pow(2, EXP);    // overflow threshold
const double SMALL = pow(2, -EXP); // underflow threshold
double product = 1;
int excess = 0; // number of times BIG has been divided out of product

for (int i = 0; i < n; i++)
{
    product *= A[i];
    while (product > BIG)
    {
        product *= SMALL;
        excess++;
    }
    while (product < SMALL)
    {
        product *= BIG;
        excess--;
    }
}

double mean = pow(product, 1.0 / n) * pow(BIG, double(excess) / n);
All multiplications by BIG and SMALL are exact, and there are no calls to log (a transcendental, and therefore particularly imprecise, function).
There is a simple idea to reduce computation and also to prevent overflow: group the numbers, say at least two at a time, take the log of each group, and sum those - the total is unchanged. For example, with K the geometric mean of the five numbers a..e:
log(abcde) = 5*log(K)
log(ab) + log(cde) = 5*log(K)
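A minimal C-style sketch of that grouping (data and n are assumed inputs), which halves the number of log calls:

#include <math.h>
#include <stddef.h>

/* geometric mean with one log() per pair of elements */
double geometric_mean_paired(const double *data, size_t n)
{
    double sum_log = 0.0;
    size_t i;
    for (i = 0; i + 1 < n; i += 2)
        sum_log += log(data[i] * data[i + 1]); /* one log per product of two */
    if (i < n)
        sum_log += log(data[i]);               /* odd element left over */
    return exp(sum_log / n);
}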
Summing logs to compute products stably is perfectly fine, and rather efficient (if this is not enough: there are ways to get vectorized logarithms with a few SSE operations -- there are also Intel MKL's vector operations).
To avoid overflow, a common technique is to divide every number by the maximum or minimum magnitude entry beforehand (or sum log differences to the log max or log min). You can also use buckets if the numbers vary a lot (e.g. sum the logs of small numbers and large numbers separately). Note that typically neither of these is needed except for very large sets, since the log of a double is never huge (between say -700 and 700).
Also, you need to keep track of the signs separately.
Computing log x typically keeps the same number of significant digits as x, except when x is close to 1: you want to use std::log1p if you need to compute prod(1 + x_n) with small x_n.
Finally, if you have roundoff error problems when summing, you can use Kahan summation or variants.
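For reference, a minimal sketch of Kahan (compensated) summation applied to the log sum (compile without aggressive FP optimizations such as -ffast-math, which may elide the compensation):

#include <math.h>
#include <stddef.h>

/* Kahan summation: carry the rounding error of each addition forward */
double sum_logs_kahan(const double *data, size_t n)
{
    double sum = 0.0, c = 0.0; /* c accumulates the lost low-order bits */
    for (size_t i = 0; i < n; i++) {
        double y = log(data[i]) - c;
        double t = sum + y;    /* big + small: low bits of y are lost... */
        c = (t - sum) - y;     /* ...and recovered here */
        sum = t;
    }
    return sum;
}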
Instead of using logarithms, which are very expensive, you can directly scale the results by powers of two.
double geometric_mean(std::vector<double> const& data) {
    double huge = scalbn(1, 512);
    double tiny = scalbn(1, -512);
    int scale = 0;
    double product = 1.0;
    for (auto x : data) {
        if (x >= huge) {
            x = scalbn(x, -512);
            scale++;
        } else if (x <= tiny) {
            x = scalbn(x, 512);
            scale--;
        }
        product *= x;
        if (product >= huge) {
            product = scalbn(product, -512);
            scale++;
        } else if (product <= tiny) {
            product = scalbn(product, 512);
            scale--;
        }
    }
    return exp2((512.0 * scale + log2(product)) / data.size());
}

Calculate maclaurin series for sin using C

I wrote code for calculating sin using its Maclaurin series, and it works, but when I try to calculate it for large x values and try to offset that by giving a large order N (the length of the sum), it eventually overflows and doesn't give me correct results. I would like to know if there is an additional way to optimize it so it works for large x values too (it already works great for small x values and really big N values).
Here is the code:
long double calcMaclaurinPolynom(double x, int N){
    long double result = 0;
    long double atzeretCounter = 2;
    int sign = 1;
    long double fraction = x;
    for (int i = 0; i <= N; i++)
    {
        result += sign * fraction;
        sign = sign * (-1);
        fraction = fraction * ((x * x) / ((atzeretCounter) * (atzeretCounter + 1)));
        atzeretCounter += 2;
    }
    return result;
}
The major issue is using the series outside the range where it converges well.
As OP said they "converted x to radX = (x*PI)/180", the OP is starting with degrees rather than radians, and the OP is in luck. The first step in finding my_sin(x) is range reduction. When starting with degrees, the reduction is exact. So reduce the range before converting to radians.
long double calcMaclaurinPolynom(double x /* degrees */, int N){
    // Reduce to range -360 to 360
    // This reduction is exact, no round-off error
    x = fmod(x, 360);

    // Reduce to range -180 to 180
    if (x >= 180) {
        x -= 180;
        x = -x;
    } else if (x <= -180) {
        x += 180;
        x = -x;
    }

    // Reduce to range -90 to 90
    if (x >= 90) {
        x = 180 - x;
    } else if (x <= -90) {
        x = -180 - x;
    }

    // now convert to radians.
    x = x * PI / 180;

    // continue with regular code
Alternatively, if using C99 or later, use remquo() - a sketch follows. Search SO for more sample code.
As #user3386109 commented above, no need to "convert back to degrees".
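For illustration, a hedged sketch of that approach (my own arrangement, not OP's code): remquo() reduces the angle to [-45, 45] degrees exactly, and the low bits of the quotient select the quadrant.

#include <math.h>

/* sin() of an angle in degrees via exact reduction to [-45, 45] */
double sin_degrees(double x_degrees)
{
    int q;
    double r = remquo(x_degrees, 90.0, &q); /* x = 90*q + r, |r| <= 45 */
    r = r * 3.1415926535897932 / 180.0;     /* now convert to radians */
    switch (q & 3) {                        /* quadrant picks sin/cos and sign */
        case 0:  return  sin(r);
        case 1:  return  cos(r);
        case 2:  return -sin(r);
        default: return -cos(r);
    }
}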
[Edit]
With typical summation series, summing the least significant terms first improves the precision of the answer. With OP's code this can be done with
for (int i = N; i >= 0; i--)
Alternatively, rather than iterating a fixed number of times, loop until the term has no significance to the sum. The following uses recursion to sum the least significant terms first. With range reduction in the -90 to 90 range, the number of iterations is not excessive.
static double sin_d_helper(double term, double xx, unsigned i) {
    if (1.0 + term == 1.0)
        return term;
    return term - sin_d_helper(term * xx / ((i + 1) * (i + 2)), xx, i + 2);
}

#include <math.h>
double sin_d(double x_degrees) {
    // range reduction and d --> r conversion from above
    double x_radians = ...
    return x_radians * sin_d_helper(1.0, x_radians * x_radians, 1);
}
You can avoid the sign variable by incorporating the sign into the fraction update, as in (-x*x); see the sketch just below.
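A minimal sketch of that change to the loop (variable names taken from the question):

long double atzeretCounter = 2;
long double fraction = x;
long double result = 0;
for (int i = 0; i <= N; i++)
{
    result += fraction; // sign is already folded into fraction
    fraction = fraction * ((-x * x) / ((atzeretCounter) * (atzeretCounter + 1)));
    atzeretCounter += 2;
}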
With your algorithm you do not have problems with integer overflow in the factorials.
As soon as x*x < (2*k)*(2*k+1), the error - assuming exact evaluation - is bounded by abs(fraction), i.e., the size of the next term in the series.
For large x the biggest source of error is truncation resp. floating-point error that is magnified via cancellation of the terms of the alternating series. For k about x/2, the terms around the k-th term have the biggest size and have to be offset by other big terms.
Halving-and-Squaring
One easy method to deal with large x without using the value of pi is to employ the trigonometric identities
sin(2*x) = 2*sin(x)*cos(x)
cos(2*x) = 2*cos(x)^2 - 1 = cos(x)^2 - sin(x)^2
and first reduce x by halving, simultaneously evaluating the Maclaurin series for sin(x/2^n) and cos(x/2^n), and then employ trigonometric squaring (literal squaring as complex numbers cos(x) + i*sin(x)) to recover the values for the original argument.
cos(x/2^(n-1)) = cos(x/2^n)^2-sin(x/2^n)^2
sin(x/2^(n-1)) = 2*sin(x/2^n)*cos(x/2^n)
then
cos(x/2^(n-2)) = cos(x/2^(n-1))^2-sin(x/2^(n-1))^2
sin(x/2^(n-2)) = 2*sin(x/2^(n-1))*cos(x/2^(n-1))
etc.
See https://stackoverflow.com/a/22791396/3088138 for the simultaneous computation of sin and cos values, then encapsulate it with
def CosSinForLargerX(x, n):
    k = 0
    while abs(x) > 1:
        k += 1
        x /= 2
    c, s = getCosSin(x, n)
    r2 = 1  # squared radius; stays 1 if no halving was needed
    for i in range(k):
        s2 = s * s
        c2 = c * c
        r2 = s2 + c2
        s = 2 * c * s
        c = c2 - s2
    return c / r2, s / r2

Efficient implementation of natural logarithm (ln) and exponentiation

I'm looking for implementations of the log() and exp() functions provided in the C library <math.h>. I'm working with 8 bit microcontrollers (OKI 411 and 431). I need to calculate the Mean Kinetic Temperature. The requirement is that we should be able to calculate MKT as fast as possible and with as little code memory as possible. The compiler comes with log() and exp() functions in <math.h>. But calling either function and linking with the library causes the code size to increase by 5 kilobytes, which will not fit in one of the micros we work with (OKI 411), because our code has already consumed ~12K of the available ~15K code memory.
The implementation I'm looking for should not use any other C library functions (like pow(), sqrt() etc). This is because all library functions are packed in one library and even if one function is called, the linker will bring whole 5K library to code memory.
EDIT
The algorithm should be correct up to 3 decimal places.
Using a Taylor series is neither the simplest nor the fastest way of doing this. Most professional implementations use approximating polynomials. I'll show you how to generate one in Maple (a computer algebra program), using the Remez algorithm.
For 3 digits of accuracy execute the following commands in Maple:
with(numapprox):
Digits := 8
minimax(ln(x), x = 1 .. 2, 4, 1, 'maxerror')
maxerror
Its response is the following polynomial:
-1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x
With the maximal error of: 0.000061011436
We generated a polynomial which approximates ln(x), but only inside the [1..2] interval. Increasing the interval is not wise, because that would increase the maximal error even more. Instead of that, do the following decomposition:
So first find the highest power of 2 which is still smaller than the number (see: What is the fastest/most efficient way to find the highest set bit (msb) in an integer in C?). The exponent of that power is the integer part of the base-2 logarithm. Divide by that power; the result then falls into the [1..2] interval. At the end we will have to add n*ln(2) to get the final result.
An example implementation for numbers >= 1:
float ln(float y) {
    int log2;
    float divisor, x, result;

    log2 = msb((int)y); // See: https://stackoverflow.com/a/4970859/6630230
    divisor = (float)(1 << log2);
    x = y / divisor;    // normalized value between [1.0, 2.0]

    result = -1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x;
    result += ((float)log2) * 0.69314718; // ln(2) = 0.69314718

    return result;
}
Although if you plan to use it only in the [1.0, 2.0] interval, then the function is like:
float ln(float x) {
    return -1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x;
}
The Taylor series for e^x converges extremely quickly, and you can tune your implementation to the precision that you need. (http://en.wikipedia.org/wiki/Taylor_series)
The Taylor series for log is not as nice...
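As an illustration (not from the answer), a minimal Taylor exp() with each term built incrementally from the previous one; fine for moderate |x|, while large arguments should be range-reduced first to avoid cancellation and slow convergence:

/* e^x = 1 + x + x^2/2! + x^3/3! + ...; each term is the previous * x/n */
double exp_taylor(double x)
{
    double term = 1.0, sum = 1.0;
    for (int n = 1; n <= 60; n++) {
        term *= x / n;         /* x^n / n! built incrementally */
        sum += term;
        if (term < 1e-9 && term > -1e-9)
            break;             /* further terms no longer matter at ~3 decimals */
    }
    return sum;
}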
If you don't need floating-point math for anything else, you may compute an approximate fractional base-2 log pretty easily. Start by shifting your value left until it's 32768 or higher and store the number of times you did that in count. Then, repeat some number of times (depending upon your desired scale factor):
n = (mult(n,n) + 32768u) >> 16; // If a function is available for 16x16->32 multiply
count<<=1;
if (n < 32768) n*=2; else count+=1;
If the above loop is repeated 8 times, then the log base 2 of the number will be count/256. If ten times, count/1024. If eleven, count/2048. Effectively, this function works by computing the integer power-of-two logarithm of n**(2^reps), but with intermediate values scaled to avoid overflow.
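A worked-out variant of this idea, under my own conventions (input treated as a 16-bit integer >= 1, eight squarings, result as log2(x) in 8.8 fixed point):

#include <stdint.h>

int32_t log2_fix88(uint16_t x)
{
    if (x == 0) return INT32_MIN;   /* log of zero is undefined */

    uint32_t n = x;
    int32_t count = 15;             /* becomes the integer part of log2(x) */
    while (n < 32768u) {            /* normalize: 32768 <= n < 65536, n = m * 2^15 */
        n <<= 1;
        count--;
    }
    /* invariant: log2(x) = count + log2(m), with mantissa m in [1, 2) */
    for (int rep = 0; rep < 8; rep++) {
        n = (n * n + 32768u) >> 16; /* square the mantissa, with rounding */
        count <<= 1;                /* squaring doubles the logarithm */
        if (n < 32768u)
            n <<= 1;                /* m^2 < 2: renormalize the scale */
        else
            count += 1;             /* m^2 >= 2: absorb one factor of 2 */
    }
    return count;                   /* log2(x) ~= count / 256 */
}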
Would a basic table with interpolation between values work? If the range of values is limited (which is likely for your case - I doubt temperature readings have a huge range) and high precision is not required, it may work, as in the sketch below. It should be easy to test on a normal machine.
Here is one of many topics on table representation of functions: Calculating vs. lookup tables for sine value performance?
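A minimal sketch of that approach for ln on [1, 2] (the 17 entries are ln(1 + i/16), precomputed offline; the table size is my assumption). With 16 intervals the worst-case interpolation error is about 5e-4, which meets a 3-decimal requirement:

/* ln(x) for 1 <= x <= 2 by linear interpolation in a 17-entry table */
static const float ln_tab[17] = {
    0.000000f, 0.060625f, 0.117783f, 0.171850f,
    0.223144f, 0.271934f, 0.318454f, 0.362905f,
    0.405465f, 0.446287f, 0.485508f, 0.523248f,
    0.559616f, 0.594707f, 0.628609f, 0.661398f,
    0.693147f
};

float ln_lookup(float x)          /* assumes 1.0f <= x <= 2.0f */
{
    float t = (x - 1.0f) * 16.0f; /* position in table units */
    int i = (int)t;               /* lower table index */
    if (i >= 16) return ln_tab[16];
    float frac = t - i;           /* distance between the two entries */
    return ln_tab[i] + frac * (ln_tab[i + 1] - ln_tab[i]);
}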
Necromancing.
I had to implement logarithms on rational numbers.
This is how I did it:
According to Wikipedia, there is the Halley-Newton approximation method,
which can be used for very high precision.
Using Newton's method, the iteration simplifies to the implementation below, which has cubic convergence to ln(x) - way better than what the Taylor series offers.
// Using Newton's method, the iteration simplifies to the following,
// which has cubic convergence to ln(x).
public static double ln(double x, double epsilon)
{
    double yn = x - 1.0d; // using the first term of the taylor series as initial-value
    double yn1 = yn;
    do
    {
        yn = yn1;
        yn1 = yn + 2 * (x - System.Math.Exp(yn)) / (x + System.Math.Exp(yn));
    } while (System.Math.Abs(yn - yn1) > epsilon);
    return yn1;
}
This is not C but C#, but I'm sure anybody capable of programming in C will be able to deduce the C code from it.
Furthermore, since
logN(x) = ln(x) / ln(N)
you have therefore just implemented logN as well.
public static double log(double x, double n, double epsilon)
{
    return ln(x, epsilon) / ln(n, epsilon);
}
where epsilon (error) is the minimum precision.
Now as to speed: you're probably better off using the ln implemented in hardware, but as I said, I used this as a base to implement logarithms on a rational-numbers class working with arbitrary precision.
Arbitrary precision might be more important than speed, under certain circumstances.
Then, use the logarithmic identity for rational numbers:
logB(x/y) = logB(x) - logB(y)
In addition to Crouching Kitten's answer, which gave me inspiration, you can build a pseudo-recursive (at most 1 self-call) logarithm to avoid using polynomials. In pseudocode:
ln(x) :=
    If (x <= 0)
        return NaN
    Else if (!(1 <= x < 2))
        return LN2 * b + ln(a)
    Else
        return taylor_expansion(x - 1)
This is pretty efficient and precise, since on [1; 2) the Taylor series converges A LOT faster, and we get such a number 1 <= a < 2 with the first call to ln if our input is positive but not in this range.
You can find 'b' as your unbiased exponent from the data held in the float x, and 'a' from the mantissa of the float x (a is exactly the same float as x, but with exponent biased_0 rather than exponent biased_b). LN2 should be kept as a macro in hexadecimal floating-point notation, IMO. You can also use frexp() for this: http://man7.org/linux/man-pages/man3/frexp.3.html
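A concrete C sketch of this scheme, using frexp() to obtain a and b (my own arrangement; the 200-term cap and constants are assumptions, and the plain Taylor series still converges slowly as x approaches 2):

#include <math.h> /* frexp(), NAN */

static const double LN2 = 0.69314718055994531;

double ln_recursive(double x)
{
    if (x <= 0)
        return NAN;                      /* domain error */
    if (x < 1.0 || x >= 2.0) {
        int b;
        double a = frexp(x, &b);         /* x = a * 2^b, 0.5 <= a < 1 */
        return (b - 1) * LN2 + ln_recursive(2.0 * a); /* 2a is in [1, 2) */
    }
    /* Taylor expansion of ln(1 + t), t = x - 1 in [0, 1) */
    double t = x - 1.0, term = t, sum = 0.0;
    for (int n = 1; n <= 200 && term != 0.0; n++) {
        sum += term / n;                 /* t^n / n with alternating sign */
        term *= -t;
    }
    return sum;
}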
Also, the trick
unsigned long tmp = *(ulong*)(&d);
for "memory-casting" a double to an unsigned long, rather than "value-casting", is useful to know when dealing with floats memory-wise, since bitwise operators cannot be applied to a double directly. (Note that this pointer cast formally violates strict aliasing; a well-defined alternative follows.)
Possible computation of ln(x) and expo(x) in C without <math.h>:
static double expo(double n) {
    int a = 0, b = n > 0;
    double c = 1, d = 1, e = 1;
    for (b || (n = -n); e + .00001 < (e += (d *= n) / (c *= ++a)););
    // approximately 15 iterations
    return b ? e : 1 / e;
}

static double native_log_computation(const double n) {
    // Basic logarithm computation.
    static const double euler = 2.7182818284590452354;
    unsigned a = 0, d;
    double b, c, e, f;
    if (n > 0) {
        for (c = n < 1 ? 1 / n : n; (c /= euler) > 1; ++a);
        c = 1 / (c * euler - 1), c = c + c + 1, f = c * c, b = 0;
        for (d = 1, c /= 2; e = b, b += 1 / (d * c), b - e /* > 0.0000001 */;)
            d += 2, c *= f;
    } else b = (n == 0) / 0.;
    return n < 1 ? -(a + b) : a + b;
}

static inline double native_ln(const double n) {
    // Returns the natural logarithm (base e) of N.
    return native_log_computation(n);
}

static inline double native_log_base(const double n, const double base) {
    // Returns the logarithm (base b) of N.
    return native_log_computation(n) / native_log_computation(base);
}
Building off Crouching Kitten's great natural-log answer above: if you need it to be accurate for inputs < 1, you can add a simple scaling factor. Below is an example in C++ that I've used on microcontrollers. It has a scaling factor of 256, and it's accurate for inputs down to 1/256 = ~0.004 and up to 2^32/256 = 16777215 (due to overflow of a uint32 variable).
It's interesting to note that even on an STM32F103 Arm M3 with no FPU, the float implementation below is significantly faster (e.g. 3x or better) than the 16-bit fixed-point implementation in libfixmath (that being said, this float implementation still takes a few thousand cycles, so it's still not ~fast~).
#include <float.h>

float TempSensor::Ln(float y)
{
    // Algo from: https://stackoverflow.com/a/18454010
    // Accurate between (1 / scaling factor) < y < (2^32 / scaling factor).
    // Read the comments below for more info on how to extend this range.
    float divisor, x, result;
    const float LN_2 = 0.69314718; // pre-calculated constant used in calculations
    uint32_t log2 = 0;

    // handle if input is less than or equal to zero
    if (y <= 0)
    {
        return -FLT_MAX;
    }

    // Scaling factor. The polynomial below is accurate when the input y > 1;
    // therefore using a scaling factor of 256 (aka 2^8) extends this down to
    // 1/256 or ~0.004. Given the use of uint32_t, the input y must stay below
    // 2^24 or 16777216 (aka 2^(32-8)), otherwise uint_y used below will
    // overflow. Increasing the scaling factor will reduce the lower accuracy
    // bound and also reduce the upper overflow bound. If you need the range
    // to be wider, consider changing uint_y to a uint64_t.
    const uint32_t SCALING_FACTOR = 256;
    const float LN_SCALING_FACTOR = 5.545177444; // the natural log of the scaling factor, precalculated

    y = y * SCALING_FACTOR;
    uint32_t uint_y = (uint32_t)y;

    // Convert the number to an integer and then find the location of the MSB.
    // This is the integer portion of Log2(y). See: https://stackoverflow.com/a/4970859/6630230
    while (uint_y >>= 1)
    {
        log2++;
    }

    divisor = (float)(1 << log2);
    x = y / divisor; // Find the remainder value between [1.0, 2.0], then calculate the natural log of this remainder using a polynomial approximation
    result = -1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x; // This polynomial approximates ln(x) between [1,2]

    // Using the log product rule Log(A) + Log(B) = Log(AB) and the log base
    // change rule log_x(A) = log_y(A)/log_y(x), calculate all the components
    // in base e and then sum them:
    //   Ln(x_remainder) + (log_2(x_integer) * ln(2)) - ln(SCALING_FACTOR)
    result = result + ((float)log2) * LN_2 - LN_SCALING_FACTOR;

    return result;
}
