I have a number of time series each containing a sequence of 400 numbers that are close to each other. I have thousands of time series; each has its own series of close numbers.
TimeSeries1 = 184.56, 184.675, 184.55, 184.77, ...
TimeSeries2 = 145.73, 145.384, 145.96, 145.33, ...
TimeSeries3 = -126.48, -126.78, -126.55, ...
I can store the first number of each time series as an 8-byte double; then, for most of the time series, I can compress each subsequent double to a single byte by multiplying by 100 and taking the delta between the current value and the previous value.
Here is my compress/decompress code:
struct {
    double firstValue;
    double nums[400];
    char compressedNums[400];
    int compressionOK;
} timeSeries;

void compress(void) {
    timeSeries.firstValue = timeSeries.nums[0];
    double lastValue = timeSeries.firstValue;
    for (int i = 1; i < 400; ++i) {
        int delta = (int) ((timeSeries.nums[i] * 100) - (lastValue * 100));
        timeSeries.compressionOK = 1;
        if (delta > CHAR_MAX || delta < -CHAR_MAX) {
            timeSeries.compressionOK = 0;
            return;
        }
        else {
            timeSeries.compressedNums[i] = (char) delta;
            lastValue = timeSeries.nums[i];
        }
    }
}

double decompressedNums[400];

void decompress(void) {
    if (timeSeries.compressionOK) {
        double lastValue = timeSeries.firstValue;
        for (int i = 1; i < 400; ++i) {
            decompressedNums[i] = lastValue + timeSeries.compressedNums[i] / 100.0;
            lastValue = decompressedNums[i];
        }
    }
}
I can tolerate some lossiness, on the order of .005 per number. However, I am getting more loss than I can tolerate, especially since a precision loss in one of the compressed series carries forward and causes an increasing amount of loss.
So my questions are:
Is there something I can change to reduce the lossiness?
Is there an altogether different compression method with a compression ratio comparable to, or better than, this 8-to-1 ratio?
You can avoid the slow drift in precision by working out the delta not from the precise value of the previous element, but rather from the computed approximation of the previous element (i.e. the sum of the deltas). That way, you will always get the closest approximation to the next value.
Personally, I'd use integer arithmetic for this purpose, but it will probably be fine with floating point arithmetic too, since floating point is reproducible even if not precise.
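To make that concrete, here is a minimal sketch of the question's compress() with this change, assuming the same global timeSeries struct from the question (rounding via round() is discussed further down; needs <math.h> and <limits.h>). Each delta is taken against the running sum of deltas, i.e. the value the decompressor will actually rebuild, so per-step rounding errors cannot carry forward:
void compress_tracked(void) {
    timeSeries.firstValue = timeSeries.nums[0];
    double reconstructed = timeSeries.firstValue; /* what decompress() will compute */
    timeSeries.compressionOK = 1;
    for (int i = 1; i < 400; ++i) {
        int delta = (int) round((timeSeries.nums[i] - reconstructed) * 100.0);
        if (delta > CHAR_MAX || delta < -CHAR_MAX) {
            timeSeries.compressionOK = 0;
            return;
        }
        timeSeries.compressedNums[i] = (char) delta;
        reconstructed += delta / 100.0; /* track the approximation, not nums[i] */
    }
}
With this change the error of any single element is bounded by half a step (0.005), regardless of the length of the series.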
Look at the values as stored in memory:
184. == 0x4067000000000000ull
184.56 == 0x406711eb851eb852ull
The first two bytes are the same but the last six bytes are different.
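If you want to inspect those representations yourself, here is a small sketch (assuming a 64-bit IEEE 754 double, and using memcpy as the well-defined way to view the bits):
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static void print_bits(double d) {
    uint64_t u;
    memcpy(&u, &d, sizeof u); /* copy the object representation */
    printf("%g == 0x%016llxull\n", d, (unsigned long long) u);
}

int main(void) {
    print_bits(184.0);  /* 0x4067000000000000 */
    print_bits(184.56); /* 0x406711eb851eb852 */
    return 0;
}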
For integer deltas, multiply by 128 instead of 100; this will get you 7 bits of the fractional part. If the delta is too large for one byte, use a three-byte sequence {0x80, hi_delta, lo_delta}, so 0x80 is used as a special indicator. If the delta happened to be -128, that would be encoded as {0x80, 0xff, 0x80}.
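A sketch of how that escape encoding might be written and read back (hypothetical helpers, not from the question; assumes two's-complement signed char and deltas that fit in 16 bits):
#include <stddef.h>

enum { ESCAPE = 0x80 }; /* marker byte for a three-byte delta */

/* Append one delta (already scaled by 128) to the byte stream; returns the new length. */
static size_t put_delta(unsigned char *out, size_t len, int delta) {
    if (delta >= -127 && delta <= 127) {
        out[len++] = (unsigned char) delta; /* common one-byte case */
    }
    else {
        out[len++] = ESCAPE; /* 0x80 itself never encodes a one-byte delta */
        out[len++] = (unsigned char) (((unsigned) delta >> 8) & 0xFFu); /* hi_delta */
        out[len++] = (unsigned char) (delta & 0xFF);                    /* lo_delta */
    }
    return len;
}

/* Read one delta back, advancing *pos past what was consumed. */
static int get_delta(const unsigned char *in, size_t *pos) {
    unsigned char b = in[(*pos)++];
    if (b != ESCAPE)
        return (signed char) b; /* one-byte delta */
    int hi = (signed char) in[(*pos)++]; /* the sign lives in the high byte */
    int lo = in[(*pos)++];
    return hi * 256 + lo; /* so -128 round-trips as {0x80, 0xff, 0x80} */
}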
You should round the values before converting to an int to avoid truncation problems, as in this code.
#include <limits.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
enum { TS_SIZE = 400 };
typedef struct
{
    double firstValue;
    double nums[TS_SIZE];
    signed char compressedNums[TS_SIZE];
    int compressionOK;
} timeSeries;

static void compress(timeSeries *t1)
{
    t1->firstValue = t1->nums[0];
    double lastValue = t1->firstValue;
    for (int i = 1; i < TS_SIZE; ++i)
    {
        int delta = (int) round((t1->nums[i] - lastValue) * 100.0);
        t1->compressionOK = 1;
        if (delta > CHAR_MAX || delta < -CHAR_MAX)
        {
            printf("Delta too big: %d (%.3f) vs %d (%.3f) = delta %.3f\n",
                   i - 1, t1->nums[i - 1], i, t1->nums[i], t1->nums[i] - t1->nums[i - 1]);
            t1->compressionOK = 0;
            return;
        }
        else
        {
            t1->compressedNums[i] = (signed char) delta;
            lastValue = t1->nums[i];
        }
    }
}

static void decompress(timeSeries *t1)
{
    if (t1->compressionOK)
    {
        double lastValue = t1->firstValue;
        for (int i = 1; i < TS_SIZE; ++i)
        {
            t1->nums[i] = lastValue + t1->compressedNums[i] / 100.0;
            lastValue = t1->nums[i];
        }
    }
}

static void compare(const timeSeries *t0, const timeSeries *t1)
{
    for (int i = 0; i < TS_SIZE; i++)
    {
        char c = (fabs(t0->nums[i] - t1->nums[i]) > 0.005) ? '!' : ' ';
        printf("%c %03d: %.3f vs %.3f = %+.3f\n", c, i, t0->nums[i], t1->nums[i], t0->nums[i] - t1->nums[i]);
    }
}

int main(void)
{
    timeSeries t1;
    timeSeries t0;
    int i;
    for (i = 0; i < TS_SIZE; i++)
    {
        if (scanf("%lf", &t0.nums[i]) != 1)
            break;
    }
    if (i != TS_SIZE)
    {
        printf("Reading problems\n");
        return 1;
    }
    t1 = t0;
    for (i = 0; i < 10; i++)
    {
        printf("Cycle %d:\n", i + 1);
        compress(&t1);
        decompress(&t1);
        compare(&t0, &t1);
    }
    return 0;
}
With the following data, generated from integers in the range 18456..18855 divided by 100 and randomly perturbed by a small amount (about 0.3%, to keep the values close enough together), I got the same data over and over again for the full 10 cycles of compression and decompression.
184.60 184.80 184.25 184.62 184.49 184.94 184.95 184.39 184.50 184.96
184.54 184.72 184.84 185.02 184.83 185.01 184.43 185.00 184.74 184.88
185.04 184.79 184.55 184.94 185.07 184.60 184.55 184.57 184.95 185.07
184.61 184.57 184.57 184.98 185.24 185.11 184.89 184.72 184.77 185.29
184.98 184.91 184.76 184.89 185.26 184.94 185.09 184.68 184.69 185.04
185.39 185.05 185.41 185.41 184.74 184.77 185.16 184.84 185.31 184.90
185.18 185.15 185.03 185.41 185.18 185.25 185.01 185.31 185.36 185.29
185.62 185.48 185.40 185.15 185.29 185.19 185.32 185.60 185.39 185.22
185.66 185.48 185.53 185.59 185.27 185.69 185.29 185.70 185.77 185.40
185.41 185.23 185.84 185.30 185.70 185.18 185.68 185.43 185.45 185.71
185.60 185.82 185.92 185.40 185.85 185.65 185.92 185.80 185.60 185.57
185.64 185.39 185.48 185.36 185.69 185.76 185.45 185.72 185.47 186.04
185.81 185.80 185.94 185.64 186.09 185.95 186.03 185.55 185.65 185.75
186.03 186.02 186.24 186.19 185.62 186.13 185.98 185.84 185.83 186.19
186.17 185.80 186.15 186.10 186.32 186.25 186.09 186.20 186.06 185.80
186.02 186.40 186.26 186.15 186.35 185.90 185.98 186.19 186.15 185.84
186.34 186.20 186.41 185.93 185.97 186.46 185.92 186.19 186.15 186.32
186.06 186.25 186.47 186.56 186.47 186.33 186.55 185.98 186.36 186.35
186.65 186.60 186.52 186.13 186.39 186.55 186.50 186.45 186.29 186.24
186.81 186.61 186.80 186.60 186.75 186.83 186.86 186.35 186.34 186.53
186.60 186.69 186.32 186.23 186.39 186.71 186.65 186.37 186.37 186.54
186.81 186.84 186.78 186.50 186.47 186.44 186.36 186.59 186.87 186.70
186.90 186.47 186.50 186.74 186.80 186.86 186.72 186.63 186.78 186.52
187.22 186.71 186.56 186.90 186.95 186.67 186.79 186.99 186.85 187.03
187.04 186.89 187.19 187.33 187.09 186.92 187.35 187.29 187.04 187.00
186.79 187.32 186.94 187.07 186.92 187.06 187.39 187.20 187.35 186.78
187.47 187.54 187.33 187.07 187.39 186.97 187.48 187.10 187.52 187.55
187.06 187.24 187.28 186.92 187.60 187.05 186.95 187.26 187.08 187.35
187.24 187.66 187.57 187.75 187.15 187.08 187.55 187.30 187.17 187.17
187.13 187.14 187.40 187.71 187.64 187.32 187.42 187.19 187.40 187.66
187.93 187.27 187.44 187.35 187.34 187.54 187.70 187.62 187.99 187.97
187.51 187.36 187.82 187.75 187.56 187.53 187.38 187.91 187.63 187.51
187.39 187.54 187.69 187.84 188.16 187.61 188.03 188.06 187.53 187.51
187.93 188.04 187.77 187.69 188.03 187.81 188.04 187.82 188.14 187.96
188.05 187.63 188.35 187.65 188.00 188.27 188.20 188.21 187.81 188.04
187.87 187.96 188.18 187.98 188.46 187.89 187.77 188.18 187.83 188.03
188.48 188.09 187.82 187.90 188.40 188.32 188.33 188.29 188.58 188.53
187.88 188.32 188.57 188.14 188.02 188.25 188.62 188.43 188.19 188.54
188.20 188.06 188.31 188.19 188.48 188.44 188.69 188.63 188.34 188.76
188.32 188.82 188.45 188.34 188.44 188.25 188.39 188.83 188.49 188.18
Until I put the rounding in, the values would rapidly drift apart.
If you don't have round() — which was added to Standard C in the C99 standard — then you can use these lines in place of round():
int delta;
if (t1->nums[i] > lastValue)
    delta = (int) (((t1->nums[i] - lastValue) * 100.0) + 0.5);
else
    delta = (int) (((t1->nums[i] - lastValue) * 100.0) - 0.5);
This rounds correctly for positive and negative values. You could also factor that into a function; in C99, you could make it an inline function, but if that worked, you would have the round() function in the library, too. I used this code at first before switching to the round() function.
Related
I am trying to slowly decelerate based on a percentage.
Basically: if percentage is 0 the speed should be speed_max, if the percentage hits 85 the speed should be speed_min, continuing with speed_min until the percentage hits 100%. At percentages between 0% and 85%, the speed should be calculated with the percentage.
I started writing the code already, though I am not sure how to continue:
// Target
int degrees = 90;

// Making sure we're at 0
resetGyro(0);

int speed_max = 450;
int speed_min = 150;

float currentDeg = 0;
float percentage = 0;

while (percentage < 100)
{
    //??
    getGyroDeg(&currentDeg);
    percentage = (degrees / 100) * currentDeg;
}
killMotors(1);
Someone in the comments asked why I am doing this.
Unfortunately, I am working with very limited hardware and a pretty bad gyroscope, all while trying to guarantee +- 1 degree precision.
To do this, I am starting at speed_max, slowly decreasing to speed_min (this is to have better control over the motors) when nearing the target value (90).
Why does it stop decelerating at 85%? This is to really be precise and hit the target value successfully.
Assuming speed is linearly calculated from the percentage between 0 and 85 (and stays at speed_min when the percentage is greater than 85), this is your formula for calculating speed:
if (percentage >= 85)
{
    speed = speed_min;
}
else
{
    speed = speed_max - (((speed_max - speed_min) * percentage) / 85);
}
Linear interpolation is fairly straightforward.
At percentage 0, the speed should be speed_max.
At percentage 85, the speed should be speed_min.
At percentage values greater than 85, the speed should still be speed_min.
Between 0 and 85, the speed should be linearly interpolated between speed_max and speed_min, so the percentage acts as an 'amount of drop from maximum speed'.
Assuming percentage is of type float:
float speed_from_percentage(float percent)
{
    if (percent <= 0.0)
        return speed_max;
    if (percent >= 85.0)
        return speed_min;
    return speed_min + (speed_max - speed_min) * (85.0 - percent) / 85.0;
}
You can also replace the final return with the equivalent:
return speed_max - (speed_max - speed_min) * percent / 85.0;
If you're truly pedantic, all the constants should be suffixed with F to indicate float and hence use float arithmetic instead of double arithmetic. And hence you should probably also use float for speed_min and speed_max. If everything is meant to be integer arithmetic, you can change float to int and drop the .0 from the expressions.
Assuming getGyroDeg is input from the controller, what you are describing is proportional control. A fixed response curve (0 to 85 maps to an output of 450 down to 150, then 150 after that) is an ad hoc approach, based on experience. However, a properly tuned PID controller generally attains a faster time to set-point and greater stability.
#include <stdio.h>
#include <time.h>
#include <assert.h>
#include <stdlib.h>

static float sim_current = 0.0f;
static float sim_dt = 0.01f;
static float sim_speed = 0.0f /* 150.0f */;

static void getGyroDeg(float *const current) {
    assert(current);
    sim_current += sim_speed * sim_dt;
    /* Simulate measurement error. */
    *current = sim_current + 3.0 * ((2.0 * rand() / RAND_MAX) - 1.0);
}

static void setGyroSpeed(const float speed) {
    assert(speed >= /*150.0f*/ -450.0f && speed <= 450.0f);
    sim_speed = speed;
}

int main(void) {
    /* https://en.wikipedia.org/wiki/PID_controller
       u(t) = K_p e(t) + K_i \int_0^t e(\theta)d\theta + K_d de(t)/dt */
    const float setpoint = 90.0f;
    const float max = 450.0f;
    const float min = -450.0f /* 150.0f */;
    /* Random value; actually get this number. */
    const float dt = 1.0f;
    /* Tune these. */
    const float kp = 30.0f, ki = 4.0f, kd = 2.0f;
    float current, last = 0.0f, integral = 0.0f;
    float t = 0.0f;
    float e, p, i, d, pid;
    size_t count;
    for (count = 0; count < 40; count++) {
        getGyroDeg(&current);
        e = setpoint - current;
        p = kp * e;
        i = ki * integral * dt;
        d = kd * (e - last) / dt;
        last = e;
        pid = p + i + d;
        if (pid > max) {
            pid = max;
        } else if (pid < min) {
            pid = min;
        } else {
            integral += e;
        }
        setGyroSpeed(pid);
        printf("%f\t%f\t%f\n", t, sim_current, pid);
        t += dt;
    }
    return EXIT_SUCCESS;
}
Here, instead of the speed linearly decreasing, it calculates the speed in a control loop. However, if the minimum is 150, then it's not going to achieve greater stability; if you go over 90, then you have no way of getting back.
If the controls are [-450, 450], it goes through zero and it is much nicer; I think this might be what you are looking for. It actively corrects for errors.
How can you generate a floating point given an integer and the decimal position?
For example:
int decimal = 1000;
int decimal_position = 3;
float value = 1.000;
I have accomplished this by using powers but that is not efficient
decimal/pow(10, decimal_position)
You can do this with a few integer multiplications and one floating point division:
int decimal = 1000;
int decimal_position = 3;
int offset = 1, i;
for (i = 0; i < decimal_position; i++) {
    offset *= 10;
}
float value = (float)decimal / offset;
Note that this works assuming decimal_position is non-negative and that 10^decimal_position fits in an int.
How can you generate a floating point given an integer and the decimal position?
I have accomplished this by using powers but that is not efficient
float value = decimal/pow(10, decimal_position);
It depends on the range of decimal_position.
With 0 <= decimal_position < 8, code could use a table look-up.
const float tens[8] = { 1.0f, 0.1f, ..., 1.0e-7f };
float value = decimal*tens[decimal_position];
Yet to handle all int decimal and int decimal_position that result in a finite value, using float powf(float ), rather than double pow(double), should be the first choice.
// float power function
float value = decimal/powf(10.0f, decimal_position);
If the best value is not needed, code could multiply instead of divide. This is slightly less precise, as 0.1f is not exactly the mathematical 0.1, yet multiplication is usually faster than division.
float value = decimal*powf(0.1f, decimal_position);
Looping to avoid powf() could be done for small values of decimal_position
float value;
if (decimal_position < 0) {
    if (decimal_position > -N) {
        float ten = 1.0f;
        while (decimal_position++ < 0) ten *= 10.0f;
        value = decimal * ten;
    } else {
        value = decimal * powf(10.0f, -decimal_position);
    }
} else {
    if (decimal_position < N) {
        float ten = 1.0f;
        while (decimal_position-- > 0) ten *= 10.0f;
        value = decimal / ten; // or build powers of 0.1f and multiply
    } else {
        value = decimal / powf(10.0f, decimal_position); // alternate: *powf(0.1f, ...)
    }
}
Select processors may benefit from using pow() vs. powf(), yet I find powf() more commonly faster.
Of course if int decimal and int decimal_position are such that an integer answer is possible:
// example, assume 32-bit `int`
if (decimal_position <= 0 && decimal_position >= -9) {
    const long long i_ten[10] = {1, 10, 100, 1000, 10000, 100000,
                                 1000000, 10000000, 100000000, 1000000000};
    value = decimal * i_ten[-decimal_position];
} else {
    // value = ... use the code above ...
}
Or, if abs(decimal_position) <= 19 and FP math is expensive, consider:
unsigned long long ipow10(unsigned expo) {
    unsigned long long ten = 10;
    unsigned long long y = 1;
    while (expo > 0) {
        if (expo % 2u) {
            y = ten * y;
        }
        expo /= 2u;
        ten *= ten; // square the base each step
    }
    return y;
}
if (decimal_position <= 0) {
    value = 1.0f * decimal * ipow10(-decimal_position);
} else {
    value = 1.0f * decimal / ipow10(decimal_position);
}
Or if abs(decimal_position) <= 27 ...
if (decimal_position <= 0) {
    value = scalbnf(decimal, -decimal_position) * ipow5(-decimal_position);
} else {
    value = scalbnf(decimal, -decimal_position) / ipow5(decimal_position);
}
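ipow5() is not defined in the answer; here is a minimal sketch in the same square-and-multiply style as ipow10() above, assuming expo <= 27 so that 5^expo fits in an unsigned long long:
unsigned long long ipow5(unsigned expo) {
    unsigned long long five = 5;
    unsigned long long y = 1;
    while (expo > 0) {
        if (expo % 2u) {
            y *= five; /* multiply in the current odd power */
        }
        expo /= 2u;
        five *= five; /* square the base each round */
    }
    return y;
}
This works because 10^n = 2^n * 5^n: scalbnf() supplies the power of two exactly, and ipow5() supplies the power of five.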
I have a problem that, after much head scratching, I think is to do with very small numbers in a long-double.
I am trying to implement Planck's law equation to generate a normalised blackbody curve at 1nm intervals between a given wavelength range and for a given temperature. Ultimately this will be a function accepting inputs; for now it is main() with the variables fixed, printing output with printf().
I see examples in matlab and python, and they are implementing the same equation as me in a similar loop with no trouble at all.
This is the equation (Planck's law for spectral radiance): B(lambda, T) = (2 * h * c^2 / lambda^5) * 1 / (exp(h * c / (lambda * k * T)) - 1)
My code generates an incorrect blackbody curve.
I have tested key parts of the code independently. After trying to test the equation by breaking it into blocks in Excel, I noticed that it does result in very small numbers, and I wonder if my handling of large and small numbers could be causing the issue. Does anyone have any insight into using C to implement equations? This is a new area to me, and I have found the maths much harder to implement and debug than normal code.
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
//global variables
const double H = 6.626070040e-34; //Planck's constant (Joule-seconds)
const double C = 299800000; //Speed of light in vacuum (meters per second)
const double K = 1.3806488e-23; //Boltzmann's constant (Joules per Kelvin)
const double nm_to_m = 1e-6; //conversion between nm and m
const int interval = 1; //wavelength interval to calculate at (nm)
//typedef structure to hold results
typedef struct {
    int *wavelength;
    long double *radiance;
    long double *normalised;
} results;

int main() {
    int min = 100, max = 3000; //wavelength bounds to calculate between, later to be swapped for function inputs
    double temprature = 200;   //temperature in kelvin, later to be swapped for a function input
    double new_valu, old_valu = 0;
    static results SPD_data, *SPD; //set up a static results structure and a pointer to it
    SPD = &SPD_data;
    SPD->wavelength = malloc(sizeof(int) * (max - min)); //allocate memory based on wavelength bounds
    SPD->radiance = malloc(sizeof(long double) * (max - min));
    SPD->normalised = malloc(sizeof(long double) * (max - min));
    for (int i = 0; i <= (max - min); i++) {
        //Fill wavelength vector
        SPD->wavelength[i] = min + (interval * i);
        //Compute radiance for every wavelength of a blackbody at the given temperature
        SPD->radiance[i] = ((2 * H * pow(C, 2)) / (pow((SPD->wavelength[i] / nm_to_m), 5))) * (1 / (exp((H * C) / ((SPD->wavelength[i] / nm_to_m) * K * temprature)) - 1));
        //Copy SPD->radiance to SPD->normalised
        SPD->normalised[i] = SPD->radiance[i];
        //Find largest value
        if (i <= 0) {
            old_valu = SPD->normalised[0];
        } else if (i > 0) {
            new_valu = SPD->normalised[i];
            if (new_valu > old_valu) {
                old_valu = new_valu;
            }
        }
    }
    //for debug purposes
    printf("wavelength(nm) radiance(Watts per steradian per meter squared) normalised radiance\n");
    for (int i = 0; i <= (max - min); i++) {
        //Normalise SPD
        SPD->normalised[i] = SPD->normalised[i] / old_valu;
        //for debug purposes
        printf("%d %Le %Lf\n", SPD->wavelength[i], SPD->radiance[i], SPD->normalised[i]);
    }
    return 0; //later to be swapped for 'return SPD';
}
/*********************UPDATE Friday 24th Mar 2017 23:42*************************/
Thank you for the suggestions so far; lots of useful pointers, especially about the way numbers are stored in C (IEEE 754), though I don't think that is the issue here, as it only applies to significant digits. I implemented most of the suggestions but made no progress on the problem. I suspect Alexander in the comments is probably right: changing the units and the order of operations is likely what I need to do to make the equation work like the matlab or python examples, but my knowledge of maths is not good enough to do this. I broke the equation down into chunks to take a closer look at what it was doing.
//global variables
const double H = 6.6260700e-34; //Planck's constant (Joule-seconds) 6.626070040e-34
const double C = 299792458; //Speed of light in vacuum (meters per second)
const double K = 1.3806488e-23; //Boltzmann's constant (Joules per Kelvin) 1.3806488e-23
const double nm_to_m = 1e-9; //conversion between nm and m
const int interval = 1; //wavelength interval to calculate at (nm)
const int min = 100, max = 3000; //max and min wavelengths to calculate between (nm)
const double temprature = 200; //temperature (K)
//typedef structure to hold results
typedef struct {
    int *wavelength;
    long double *radiance;
    long double *normalised;
} results;

//main program
int main()
{
    //set up a static results structure and a pointer to it
    static results SPD_data, *SPD;
    SPD = &SPD_data;

    //allocate memory based on wavelength bounds
    SPD->wavelength = malloc(sizeof(int) * (max - min));
    SPD->radiance = malloc(sizeof(long double) * (max - min));
    SPD->normalised = malloc(sizeof(long double) * (max - min));

    //break equation into visible parts for debugging
    long double aa, bb, cc, dd, ee, ff, gg, hh, ii, jj, kk, ll, mm, nn, oo;
    for (int i = 0; i < (max - min); i++) {
        //Compute radiance at every wavelength interval for a blackbody at the given temperature
        SPD->wavelength[i] = min + (interval * i);
        aa = 2 * H;
        bb = pow(C, 2);
        cc = aa * bb;
        dd = pow((SPD->wavelength[i] / nm_to_m), 5);
        ee = cc / dd;
        ff = 1;
        gg = H * C;
        hh = SPD->wavelength[i] / nm_to_m;
        ii = K * temprature;
        jj = hh * ii;
        kk = gg / jj;
        ll = exp(kk);
        mm = ll - 1;
        nn = ff / mm;
        oo = ee * nn;
        SPD->radiance[i] = oo;
    }
    //for debug purposes
    printf("wavelength(nm) | radiance(Watts per steradian per meter squared)\n");
    for (int i = 0; i < (max - min); i++) {
        printf("%d %Le\n", SPD->wavelength[i], SPD->radiance[i]);
    }
    return 0;
}
Equation variable values during runtime in xcode:
I notice a couple of things that are wrong and/or suspicious about the current state of your program:
You have defined nm_to_m as 10^-9, yet you divide by it. If your wavelength is measured in nanometers, you should multiply it by 10^-9 to get it in meters. To wit, if hh is supposed to be your wavelength in meters, it is on the order of several light-hours.
The same is obviously true for dd as well.
mm, being the exponential expression minus 1, is zero, which gives you infinity in the results deriving from it. This is apparently because you don't have enough digits in a double to represent the significant part of the exponential. Instead of using exp(...) - 1 here, try using the expm1() function instead, which implements a well-defined algorithm for calculating exponentials minus 1 without cancellation errors.
Since interval is 1, it doesn't currently matter, but you can probably see that your results wouldn't match the meaning of the code if you set interval to something else.
Unless you plan to change something about this in the future, there shouldn't be a need for this program to "save" the values of all calculations. You could just print them out as you run them.
On the other hand, you don't seem to be in any danger of underflow or overflow. The largest and smallest numbers you use don't seem to stray far from 10^±60, which is well within what ordinary doubles can deal with, let alone long doubles. That being said, it might not hurt to use more normalized units, but at the magnitudes you currently display, I wouldn't worry about it.
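Returning to the expm1() point: here is a small demonstration of the cancellation, with a hypothetical tiny exponent standing in for kk:
#include <math.h>
#include <stdio.h>

int main(void) {
    double kk = 1e-20; /* like hc/(lambda*k*T) when the wavelength is inflated */
    printf("exp(kk) - 1 = %.17e\n", exp(kk) - 1.0); /* prints 0: exp(kk) rounds to exactly 1 */
    printf("expm1(kk)   = %.17e\n", expm1(kk));     /* prints ~1e-20, as expected */
    return 0;
}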
Thanks for all the pointers in the comments. For anyone else running into a similar problem with implementing equations in C, I had a few silly errors in the code:
writing a 6 not a 9
dividing when I should be multiplying
an off by one error with the size of my array vs the iterations of for() loop
200 when I meant 2000 in the temperature variable
As a result of the last one in particular, I was not getting the results I expected (my wavelength range was not right for plotting the temperature I was calculating). This led me to assume that something was wrong in the implementation of the equation; specifically, I was thinking about big/small numbers in C because I did not understand them. That was not the case.
In summary, I should have made sure I knew exactly what my equation should be outputting for given test conditions before implementing it in code. I will work on getting more comfortable with maths, particularly algebra and dimensional analysis.
Below is the working code, implemented as a function, feel free to use it for anything but obviously no warranty of any kind etc.
blackbody.c
//
// Computes radiance for every wavelength of a blackbody at a given temperature
//
// INPUTS: int min wavelength to begin calculation from (nm), int max wavelength to end calculation at (nm), double temperature (kelvin)
// OUTPUTS: pointer to structure containing:
// - spectral radiance (Watts per steradian per meter squared per wavelength at 1nm intervals)
// - normalised radiance
//
//include & define
#include "blackbody.h"
//global variables
const double H = 6.626070040e-34; //Planck's constant (Joule-seconds) 6.626070040e-34
const double C = 299792458; //Speed of light in vacuum (meters per second)
const double K = 1.3806488e-23; //Boltzmann's constant (Joules per Kelvin) 1.3806488e-23
const double nm_to_m = 1e-9; //conversion between nm and m
const int interval = 1; //wavelength interval to calculate at (nm); to change this, line 45 also needs to be changed
bbresults* blackbody(int min, int max, double temperature) {
    double new_valu, old_valu = 0; //variables for normalising the result
    bbresults *SPD;
    SPD = malloc(sizeof(bbresults));

    //allocate memory based on wavelength bounds
    SPD->wavelength = malloc(sizeof(int) * (max - min));
    SPD->radiance = malloc(sizeof(long double) * (max - min));
    SPD->normalised = malloc(sizeof(long double) * (max - min));

    for (int i = 0; i < (max - min); i++) {
        //Compute radiance for every wavelength of a blackbody at the given temperature
        SPD->wavelength[i] = min + (interval * i);
        SPD->radiance[i] = ((2 * H * pow(C, 2)) / (pow((SPD->wavelength[i] * nm_to_m), 5))) * (1 / (expm1((H * C) / ((SPD->wavelength[i] * nm_to_m) * K * temperature))));
        //Copy SPD->radiance to SPD->normalised
        SPD->normalised[i] = SPD->radiance[i];
        //Find largest value
        if (i <= 0) {
            old_valu = SPD->normalised[0];
        } else if (i > 0) {
            new_valu = SPD->normalised[i];
            if (new_valu > old_valu) {
                old_valu = new_valu;
            }
        }
    }
    for (int i = 0; i < (max - min); i++) {
        //Normalise SPD
        SPD->normalised[i] = SPD->normalised[i] / old_valu;
    }
    return SPD;
}
blackbody.h
#ifndef blackbody_h
#define blackbody_h

#include <stdio.h>
#include <math.h>
#include <stdlib.h>

//typedef structure to hold results
typedef struct {
    int *wavelength;
    long double *radiance;
    long double *normalised;
} bbresults;

//function declarations
bbresults* blackbody(int, int, double);

#endif /* blackbody_h */
main.c
#include <stdio.h>
#include "blackbody.h"

int main() {
    bbresults *TEST;
    int min = 100, max = 3000, temp = 5000;
    TEST = blackbody(min, max, temp);
    printf("wavelength | normalised radiance | radiance |\n");
    printf(" (nm) | - | (W per meter squared per steradian) |\n");
    for (int i = 0; i < (max - min); i++) {
        printf("%4d %Lf %Le\n", TEST->wavelength[i], TEST->normalised[i], TEST->radiance[i]);
    }
    //free the members before the struct that owns them
    free(TEST->wavelength);
    free(TEST->radiance);
    free(TEST->normalised);
    free(TEST);
    return 0;
}
Plot of output:
I've been following the guide my prof gave us, but I just can't find where I went wrong. I've also been going through some other questions about implementing the Taylor Series in C.
Just assume that RaiseTo (raises a number to the power of x) is there.
double factorial (int n)
{
    int fact = 1,
        flag;
    for (flag = 1; flag <= n; flag++)
    {
        fact *= flag;
    }
    return flag;
}

double sine (double rad)
{
    int flag_2,
        plusOrMinus2 = 0; //1 for plus, 0 for minus
    double sin,
        val2 = rad,
        radRaisedToX2,
        terms;
    terms = NUMBER_OF_TERMS; //10 terms
    for (flag_2 = 1; flag_2 <= 2 * terms; flag_2 += 2)
    {
        radRaisedToX2 = RaiseTo(rad, flag_2);
        if (plusOrMinus2 == 0)
        {
            val2 -= radRaisedToX2 / factorial(flag_2);
            plusOrMinus2++; //Add the next number
        }
        else
        {
            val2 += radRaisedToX2 / factorial(flag_2);
            plusOrMinus2--; //Subtract the next number
        }
    }
    sin = val2;
    return sin;
}

int main()
{
    int degree;
    scanf("%d", &degree);
    double rad, cosx, sinx;
    rad = degree * PI / 180.00;
    //cosx = cosine (rad);
    sinx = sine (rad);
    printf("%lf \n%lf", rad, sinx);
}
So during the loop, I get rad^x, divide it by the factorial of the odd-number series starting from 1, then add or subtract it depending on what's needed. But when I run the program, I get outputs way above one, and we all know that sin(x) is bounded by -1 and 1. I'd really like to know where I went wrong so I can improve; sorry if it's a pretty bad question.
Anything over 12! is larger than can fit into a 32-bit int, so such values will overflow and therefore won't return what you expect.
Instead of computing the full factorial each time, take a look at each term in the sequence relative to the previous one. For any given term, the next one is -(x*x)/(flag_2*(flag_2-1)) times the previous one. So start with a term of x, then multiply by that factor for each successive term.
There's also a trick to calculating the result to the precision of a double without knowing how many terms you need. I'll leave that as an exercise to the reader.
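A sketch of that running-term idea, reusing the question's NUMBER_OF_TERMS (assumed here to be 10); no factorial or power function is needed:
double sine_terms(double rad)
{
    double term = rad; /* first term of the series: x */
    double sum = rad;
    for (int flag_2 = 3; flag_2 < 2 * NUMBER_OF_TERMS; flag_2 += 2)
    {
        /* next term = previous term * -(x*x) / (flag_2 * (flag_2 - 1)) */
        term *= -(rad * rad) / (flag_2 * (flag_2 - 1));
        sum += term;
    }
    return sum;
}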
In the function factorial you are doing an int multiply before assigning to the double return value of the function. Factorials easily exceed the int range; for example, 20! = 2432902008176640000.
You also returned the wrong variable - the loop counter!
Please change the local variable to double, as shown:
double factorial (int n)
{
    double fact = 1;
    int flag;
    for (flag = 1; flag <= n; flag++)
    {
        fact *= flag;
    }
    return fact; // it was the wrong variable, and the wrong type
}
Also, there is not even any need for a factorial calculation. Note that each term of the series is the previous term multiplied by rad twice and divided by the next two term numbers, with a change of sign.
Another fairly naive, 5-minute approach involves computing a look-up table that contains the first 20 or so factorials, i.e. 1! .. 20!. This requires very little memory and can increase speed over the each-time computation method. A further optimization can easily be realized in the function that pre-computes the factorials, taking advantage of the relationship each factorial has to the previous one.
An approach that efficiently eliminates branching (if X do Y else do Z) in the loops of the two trig functions would provide yet more speed again.
C code
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
enum { nMaxTerms = 20 }; /* file-scope array sizes must be integer constants in C */
double factorials[nMaxTerms];

double factorial(int n)
{
    if (n == 1)
        return 1;
    else
        return (double)n * factorial(n - 1);
}

void precalcFactorials()
{
    for (int i = 1; i < nMaxTerms + 1; i++)
    {
        factorials[i - 1] = factorial(i);
    }
}

/*
sin(x) = x - (x^3)/3! + (x^5)/5! - (x^7)/7! .......
*/
double taylorSine(double rads)
{
    double result = rads;
    for (int curTerm = 1; curTerm <= (nMaxTerms / 2) - 1; curTerm++)
    {
        double curTermValue = pow(rads, (curTerm * 2) + 1);
        curTermValue /= factorials[curTerm * 2];
        if (curTerm & 0x01)
            result -= curTermValue;
        else
            result += curTermValue;
    }
    return result;
}

/*
cos(x) = 1 - (x^2)/2! + (x^4)/4! - (x^6)/6! .......
*/
double taylorCos(double rads)
{
    double result = 1.0;
    for (int curTerm = 1; curTerm <= (nMaxTerms / 2) - 1; curTerm++)
    {
        double curTermValue = pow(rads, (curTerm * 2));
        curTermValue /= factorials[(curTerm * 2) - 1];
        if (curTerm & 0x01)
            result -= curTermValue;
        else
            result += curTermValue;
    }
    return result;
}

int main()
{
    precalcFactorials();
    printf("Math sin(0.5) = %f\n", sin(0.5));
    printf("taylorSin(0.5) = %f\n", taylorSine(0.5));
    printf("Math cos(0.5) = %f\n", cos(0.5));
    printf("taylorCos(0.5) = %f\n", taylorCos(0.5));
    return 0;
}
output
Math sin(0.5) = 0.479426
taylorSin(0.5) = 0.479426
Math cos(0.5) = 0.877583
taylorCos(0.5) = 0.877583
JavaScript
Implemented in JavaScript, the code produces seemingly identical results (I didn't test very much) to the built-in Math library when summing just 7 terms in the sin/cos functions.
window.addEventListener('load', onDocLoaded, false);

function onDocLoaded(evt)
{
    console.log('starting');
    for (var i = 1; i < 21; i++)
        factorials[i - 1] = factorial(i);
    console.log('calculated');
    console.log(" Math.cos(0.5) = " + Math.cos(0.5));
    console.log("taylorCos(0.5) = " + taylorCos(0.5));
    console.log('-');
    console.log(" Math.sin(0.5) = " + Math.sin(0.5));
    console.log("taylorSine(0.5) = " + taylorSine(0.5));
}

var factorials = [];

function factorial(n)
{
    if (n == 1)
        return 1;
    else
        return n * factorial(n - 1);
}

/*
sin(x) = x - (x^3)/3! + (x^5)/5! - (x^7)/7! .......
*/
function taylorSine(x)
{
    var result = x;
    for (var curTerm = 1; curTerm <= 7; curTerm++)
    {
        var curTermValue = Math.pow(x, (curTerm * 2) + 1);
        curTermValue /= factorials[curTerm * 2];
        if (curTerm & 0x01)
            result -= curTermValue;
        else
            result += curTermValue;
    }
    return result;
}

/*
cos(x) = 1 - (x^2)/2! + (x^4)/4! - (x^6)/6! .......
*/
function taylorCos(x)
{
    var result = 1.0;
    for (var curTerm = 1; curTerm <= 7; curTerm++)
    {
        var curTermValue = Math.pow(x, (curTerm * 2));
        curTermValue /= factorials[(curTerm * 2) - 1];
        if (curTerm & 0x01)
            result -= curTermValue;
        else
            result += curTermValue;
    }
    return result;
}
I have a logical problem in my code; maybe it is caused by overflow, but I can't solve this on my own, so I would be thankful if anyone can help me.
In the following piece of code, I have implemented the function taylor_log(), which computes n iterations of the Taylor polynomial. In the void function I am looking for the number of iterations (*limit) that is enough to compute a logarithm with the desired accuracy compared to the log function from <math.h>.
The thing is that sometimes UINT_MAX iterations are not enough to get the desired accuracy, and at this point I want to let the user know that the number of needed iterations is higher than UINT_MAX. But my code doesn't work: for example, for x = 1e+280, eps = 623, it just counts and counts and never gives a result.
Taylor polynomial: log(x) = sum for i = 1 to n of ((x - 1) / x)^i / i
double taylor_log(double x, unsigned int n){
    double f_sum = 1.0;
    double sum = 0.0;
    for (unsigned int i = 1; i <= n; i++)
    {
        f_sum *= (x - 1) / x;
        sum += f_sum / i;
    }
    return sum;
}

void guessIt(double x, double eps, unsigned int *limit){
    *limit = 10;
    double real_log = log(x);
    double t_log = taylor_log(x, *limit);
    while (myabs(real_log - t_log) > eps)
    {
        if (*limit == UINT_MAX)
        {
            *limit = 0;
            break;
        }
        if (*limit >= UINT_MAX / 2)
        {
            *limit = UINT_MAX;
            t_log = taylor_log(x, *limit);
        }
        else
        {
            *limit = (*limit) * 2;
            t_log = taylor_log(x, *limit);
        }
    }
}
EDIT: Ok guys, thanks for your reactions so far. I have changed my code to this:
if (*limit == UINT_MAX - 1)
{
    *limit = 0;
    break;
}
if (*limit >= UINT_MAX / 2)
{
    *limit = UINT_MAX - 1;
    t_log = taylor_log(x, *limit);
}
but it still doesn't work correctly. I have put a printf at the beginning of the taylor_log() function to see the value of n, and it prints (..., 671088640, 1342177280, 2684354560, 5, 4, 3, 2, 2, 1, 2013265920, ...). I don't understand it.
The code below assigns UINT_MAX to the limit:
if (*limit >= UINT_MAX/2)
{
    *limit = UINT_MAX;
    t_log = taylor_log(x, *limit);
}
And your for loop is defined like this:
for (unsigned int i = 1; i <= n; i++)
i will ALWAYS be less than or equal to UINT_MAX, because there is never going to be a value of i greater than UINT_MAX: that's the largest value i could ever hold. So there is certainly overflow, and your loop exit condition is never met; i rolls over to zero and the process repeats indefinitely.
You should change your loop condition to i < n or change your limit to UINT_MAX - 1.
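One safe variant (a sketch, not the only possible fix): run exactly n iterations and test before the increment, so even n == UINT_MAX terminates:
double taylor_log_fixed(double x, unsigned int n) {
    double f_sum = 1.0;
    double sum = 0.0;
    if (n == 0)
        return sum;
    for (unsigned int i = 1; ; i++) {
        f_sum *= (x - 1) / x;
        sum += f_sum / i;
        if (i == n) /* compared before i can wrap past UINT_MAX */
            break;
    }
    return sum;
}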
[Edit]
OP coded correctly but must ensure a limited range (0.5 < x < 2.0?).
Below is a code version that determines by itself when to stop. The iteration count gets high for x near 0.5 and 2.0, going into the millions; hence the alternative coded further below.
double taylor_logA(double x) {
    double f_sum = 1.0;
    double sum = 0.0;
    for (unsigned int i = 1; ; i++) {
        f_sum *= (x - 1) / x;
        double sum_before = sum;
        sum += f_sum / i;
        if (sum_before == sum) {
            printf("%u\n", i); // report how many terms were needed
            break;
        }
    }
    return sum;
}
Alternative implementation of the series: Ref
Sample alternative - it converges faster.
double taylor_log2(double x, unsigned int n) {
    double f_sum = 1.0;
    double sum = 0.0;
    for (unsigned int i = 1; i <= n; i++) {
        f_sum *= (x - 1) / 1; // / 1 (or remove)
        if (i & 1) sum += f_sum / i;
        else sum -= f_sum / i; // subtract even terms
    }
    return sum;
}
A reasonable number of terms will converge as needed.
Alternatively, continue until terms are too small (maybe 50 or so)
double taylor_log3(double x) {
    double f_sum = 1.0;
    double sum = 0.0;
    for (unsigned int i = 1; ; i++) {
        double sum_before = sum;
        f_sum *= x - 1;
        if (i & 1) sum += f_sum / i;
        else sum -= f_sum / i;
        if (sum_before == sum) {
            printf("%u\n", i);
            break;
        }
    }
    return sum;
}
Other improvements are possible; for example, see More efficient series.
First, using std::numeric_limits<unsigned int>::max() will make your code more C++-ish than C-ish. Second, you can use the integral type unsigned long long and std::numeric_limits<unsigned long long>::max() for the limit, which is pretty much the limit for an integral type. If you want a higher limit, you may use long double. Floating point also allows you to use infinity with std::numeric_limits<double>::infinity(); note that infinity works with double, float and long double.
If neither of these types provide you the precision you need, look at boost::multiprecision
First of all, the Taylor series for the logarithm function only converges for values of 0 < x < 2, so it's quite possible that the eps precision is never hit.
Secondly, are you sure that it loops forever, instead of hitting the *limit >= UINT_MAX/2 after a very long time?
OP is using the series well outside its usable range of 0.5 < x < 2.0, with calls like taylor_log(1e280, n).
Even within the range, x values near the limits of 0.5 and 2.0 converge very slowly, needing millions of iterations or more; a precise log() will not result. Best to use the 2x range about 1.0.
Create a wrapper function to call the original function in its sweet range of sqrt(2)/2 < x < sqrt(2). Converges, worst case, with about 40 iterations.
#define SQRT_0_5 0.70710678118654752440084436210485
#define LN2      0.69314718055994530941723212145818

// Valid over the range (0...DBL_MAX]
double taylor_logB(double x, unsigned int n) {
    int expo;
    double signif = frexp(x, &expo);
    if (signif < SQRT_0_5) {
        signif *= 2;
        expo--;
    }
    double y = taylor_log(signif, n);
    y += expo * LN2;
    return y;
}