Golden Section Method in C

I am pretty new to coding and I have been having an impossible time trying to find online help writing a C code that will use the golden section method (which apparently the GNU Scientific Library has, although I haven't had any luck finding it) to find the minimum of functions that Newton's method of minimization fails for.
Specifically, I want to input an x-value as a starting point and have the code output the function's minimum value and the x-coordinate of the point where the minimum occurs. My function is f(x) = x^20. I am also allowed some error (< 10^-3).
I don't even know where to begin with this, I have been ALL over the internet and haven't found anything helpful. I would seriously appreciate some help as to where I might find more information, or how I might implement this method.
Edit:
This is my code as of now:
#include <gsl/gsl_errno.h> /* Defines GSL_SUCCESS, etc. */
#include <gsl/gsl_math.h>
#include <gsl/gsl_min.h>
int minimize_convex(gsl_function *F,double a, double b, double *x_min, double tol)
{
int status;
double h = (b - a) * .0000001; /* Used to test slope at boundaries */
/* First deal with the special cases */
if (b - a < tol)
{
*x_min = b;
status = GSL_SUCCESS;
}
/* If the min is at a, then the derivative at a is >= 0. Test for
* this case. */
else if (GSL_FN_EVAL(F, a + h) - GSL_FN_EVAL(F, a) >= 0)
{
*x_min = a;
status = GSL_SUCCESS;
}
/* If the min is at b, then the derivative at b is <= 0. Test for
* this case. */
else if (GSL_FN_EVAL(F, b - h) - GSL_FN_EVAL(F, b) >= 0)
{
*x_min = b;
status = GSL_SUCCESS;
}
else
{
/* Choose x_guess so that its value is less than either of the two
* endpoint values. Since we've got this far, we know that at least
* one of F(a + h) and F(b - h) has this property. */
double x_guess;
x_guess = (GSL_FN_EVAL(F, a + h) < GSL_FN_EVAL(F, b - h)) ?
a + h : b - h;
int iter = 0, max_iter = 200;
const gsl_min_fminimizer_type *T;
gsl_min_fminimizer *s;
T = gsl_min_fminimizer_goldensection;
s = gsl_min_fminimizer_alloc(T);
gsl_min_fminimizer_set(s, F, x_guess, a, b);
do
{
iter++;
status = gsl_min_fminimizer_iterate(s); /* perform iteration */
status =
gsl_min_test_interval(a, b, tol, 0.0); /* |a - b| < tol? */
a = gsl_min_fminimizer_x_lower(s);
b = gsl_min_fminimizer_x_upper(s);
if (status == GSL_SUCCESS)
{
*x_min = gsl_min_fminimizer_x_minimum(s); /* current est */
}
}
while (status == GSL_CONTINUE && iter < max_iter);
gsl_min_fminimizer_free(s);
}
return status;
}
double f(double x, void *params)
{
double *p = (double *) params;
return (x^(50)) + *p;
}
double C = 0.0;
int main (void)
{
double m = 0.0, result;
double a = -1.0, b = 1.0;
double epsilon = 0.001;
int exit_val;
gsl_function F;
F.function = &f;
F.params = &C;
exit_val = minimize_convex(&F, a, b, m, &result, epsilon);
printf("Minimizer: %g\n", result);
printf("Function value: %g\n", f(result, &C));
printf("%d\n", exit_val);
return 0;
}
I am getting the following errors:
try.c:69:14: error: invalid operands to binary
expression ('double' and 'double')
return (x^(50)) + *p;
try.c:81:54: error: too many arguments to function
call, expected 5, have 6
exit_val = minimize_convex(&F, a, b, m, &result, epsilon);
Any thoughts?

gsl has a generic minimizer that can use multiple methods to achieve the minimization. The description of how to use the minimizer can be found in the documentation. You can set it to the golden section method by declaring the method as gsl_min_fminimizer_goldensection.
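For reference, here is a minimal, self-contained sketch of driving that minimizer (the bracket [-1, 1], the initial guess, and the tolerance below are illustrative choices, not anything mandated by GSL):
#include <math.h>
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_min.h>
/* f(x) = x^20; note pow(), not ^, which is bitwise XOR in C */
static double f(double x, void *params)
{
(void) params;
return pow(x, 20);
}
int main(void)
{
int status, iter = 0, max_iter = 200;
double a = -1.0, b = 1.0;
double x_guess = 0.1; /* f(x_guess) must be below both f(a) and f(b) */
gsl_function F = { &f, NULL };
gsl_min_fminimizer *s =
    gsl_min_fminimizer_alloc(gsl_min_fminimizer_goldensection);
gsl_min_fminimizer_set(s, &F, x_guess, a, b);
do
{
    iter++;
    status = gsl_min_fminimizer_iterate(s); /* shrink the bracket */
    a = gsl_min_fminimizer_x_lower(s);
    b = gsl_min_fminimizer_x_upper(s);
    status = gsl_min_test_interval(a, b, 0.001, 0.0); /* |b - a| < 10^-3? */
}
while (status == GSL_CONTINUE && iter < max_iter);
printf("x_min = %g, f(x_min) = %g\n",
    gsl_min_fminimizer_x_minimum(s),
    gsl_min_fminimizer_f_minimum(s));
gsl_min_fminimizer_free(s);
return status;
}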

Related

Fixing parameters of a fitting function in Nonlinear Least-Square GSL

I'm working on some code that I'm writing which uses the [GNU Scientific Library (GSL)][1]'s Nonlinear least-squares algorithm for curve fitting.
I have been successful in obtaining working code that estimates the right parameters from the fitting analysis, using a C++ wrapper from https://github.com/Eleobert/gsl-curve-fit/blob/master/example.cpp.
Now, I would like to fix some of the parameters of the function to be fit. And I would like to modify the function in such a way that I can already input the value of the parameter to be fixed.
Any idea on how to do this?
I'm showing here the full code.
This is the code for performing nonlinear least-squares fitting:
#include <gsl/gsl_vector.h>
#include <gsl/gsl_multifit_nlinear.h>
#include <iostream>
#include <random>
#include <vector>
#include <cassert>
#include <functional>
template <typename F, size_t... Is>
auto gen_tuple_impl(F func, std::index_sequence<Is...> )
{
return std::make_tuple(func(Is)...);
}
template <size_t N, typename F>
auto gen_tuple(F func)
{
return gen_tuple_impl(func, std::make_index_sequence<N>{} );
}
auto internal_solve_system(gsl_vector* initial_params, gsl_multifit_nlinear_fdf *fdf,
gsl_multifit_nlinear_parameters *params) -> std::vector<double>
{
// This specifies a trust region method
const gsl_multifit_nlinear_type *T = gsl_multifit_nlinear_trust;
const size_t max_iter = 200;
const double xtol = 1.0e-8;
const double gtol = 1.0e-8;
const double ftol = 1.0e-8;
auto *work = gsl_multifit_nlinear_alloc(T, params, fdf->n, fdf->p);
int info;
// initialize solver
gsl_multifit_nlinear_init(initial_params, fdf, work);
//iterate until convergence
gsl_multifit_nlinear_driver(max_iter, xtol, gtol, ftol, nullptr, nullptr, &info, work);
// result will be stored here
gsl_vector * y = gsl_multifit_nlinear_position(work);
auto result = std::vector<double>(initial_params->size);
for(int i = 0; i < result.size(); i++)
{
result[i] = gsl_vector_get(y, i);
}
auto niter = gsl_multifit_nlinear_niter(work);
auto nfev = fdf->nevalf;
auto njev = fdf->nevaldf;
auto naev = fdf->nevalfvv;
// nfev - number of function evaluations
// njev - number of Jacobian evaluations
// naev - number of f_vv evaluations
//logger::debug("curve fitted after ", niter, " iterations {nfev = ", nfev, "} {njev = ", njev, "} {naev = ", naev, "}");
gsl_multifit_nlinear_free(work);
gsl_vector_free(initial_params);
return result;
}
auto internal_make_gsl_vector_ptr(const std::vector<double>& vec) -> gsl_vector*
{
auto* result = gsl_vector_alloc(vec.size());
int i = 0;
for(const auto e: vec)
{
gsl_vector_set(result, i, e);
i++;
}
return result;
}
template<typename C1>
struct fit_data
{
const std::vector<double>& t;
const std::vector<double>& y;
// the actual function to be fitted
C1 f;
};
template<typename FitData, int n_params>
int internal_f(const gsl_vector* x, void* params, gsl_vector *f)
{
auto* d = static_cast<FitData*>(params);
// Convert the parameter values from gsl_vector (in x) into std::tuple
auto init_args = [x](int index)
{
return gsl_vector_get(x, index);
};
auto parameters = gen_tuple<n_params>(init_args);
// Calculate the error for each...
for (size_t i = 0; i < d->t.size(); ++i)
{
double ti = d->t[i];
double yi = d->y[i];
auto func = [ti, &d](auto ...xs)
{
// call the actual function to be fitted
return d->f(ti, xs...);
};
auto y = std::apply(func, parameters);
gsl_vector_set(f, i, yi - y);
}
return GSL_SUCCESS;
}
using func_f_type = int (*) (const gsl_vector*, void*, gsl_vector*);
using func_df_type = int (*) (const gsl_vector*, void*, gsl_matrix*);
using func_fvv_type = int (*) (const gsl_vector*, const gsl_vector *, void *, gsl_vector *);
auto internal_make_gsl_vector_ptr(const std::vector<double>& vec) -> gsl_vector*;
auto internal_solve_system(gsl_vector* initial_params, gsl_multifit_nlinear_fdf *fdf,
gsl_multifit_nlinear_parameters *params) -> std::vector<double>;
template<typename C1>
auto curve_fit_impl(func_f_type f, func_df_type df, func_fvv_type fvv, gsl_vector* initial_params, fit_data<C1>& fd) -> std::vector<double>
{
assert(fd.t.size() == fd.y.size());
auto fdf = gsl_multifit_nlinear_fdf();
auto fdf_params = gsl_multifit_nlinear_default_parameters();
fdf.f = f;
fdf.df = df;
fdf.fvv = fvv;
fdf.n = fd.t.size();
fdf.p = initial_params->size;
fdf.params = &fd;
// "This selects the Levenberg-Marquardt algorithm with geodesic acceleration."
fdf_params.trs = gsl_multifit_nlinear_trs_lmaccel;
return internal_solve_system(initial_params, &fdf, &fdf_params);
}
template<typename Callable>
auto curve_fit(Callable f, const std::vector<double>& initial_params, const std::vector<double>& x, const std::vector<double>& y) -> std::vector<double>
{
// We can't pass lambdas without converting them to std::function.
constexpr auto n = 3;
assert(initial_params.size() == n);
auto params = internal_make_gsl_vector_ptr(initial_params);
auto fd = fit_data<Callable>{x, y, f};
return curve_fit_impl(internal_f<decltype(fd), n>, nullptr, nullptr, params, fd);
}
// linspace from https://github.com/Eleobert/meth/blob/master/interpolators.hpp
template <typename Container>
auto linspace(typename Container::value_type a, typename Container::value_type b, size_t n)
{
assert(b > a);
assert(n > 1);
Container res(n);
const auto step = (b - a) / (n - 1);
auto val = a;
for(auto& e: res)
{
e = val;
val += step;
}
return res;
}
This is the function I use for fitting:
double gaussian(double x, double a, double b, double c)
{
const double z = (x - b) / c;
return a * std::exp(-0.5 * z * z);
}
And these last lines create a fake dataset of observed data (with some noise which is normally distributed) and test the fitting curve function.
int main()
{
auto device = std::random_device();
auto gen = std::mt19937(device());
auto xs = linspace<std::vector<double>>(0.0, 1.0, 300);
auto ys = std::vector<double>(xs.size());
double a = 5.0, b = 0.4, c = 0.15;
for(size_t i = 0; i < xs.size(); i++)
{
auto y = gaussian(xs[i], a, b, c);
auto dist = std::normal_distribution(0.0, 0.1 * y);
ys[i] = y + dist(gen);
}
auto r = curve_fit(gaussian, {1.0, 0.0, 1.0}, xs, ys);
std::cout << "result: " << r[0] << ' ' << r[1] << ' ' << r[2] << '\n';
std::cout << "error : " << r[0] - a << ' ' << r[1] - b << ' ' << r[2] - c << '\n';
}
In this case, I would like to fix one of the a, b, c parameters and estimate the remaining two. For example, fix a and estimate b and c. But I would like to find a solution such that I can input any value to the fixed parameter a, without needing to modify the gaussian function every time.
Ok. Here's the answer based on the code linked in http://github.com/Eleobert/gsl-curve-fit/blob/master/example.cpp. However, this is not the code posted in the question: you should update your question accordingly so that others may benefit from both the question and the answer.
So, basically, the main problem is that GSL is a library written in pure C, whereas you use a high-level wrapper written in C++, published at the aforementioned link. While the wrapper is written pretty well in modern C++, it has one basic problem: it is "stiff" - it can be used only for a subclass of problems it was designed for, and this subclass is a rather narrow subset of the capabilities offered by the original C code.
Let's try to improve it a bit and start from how the wrapper is supposed to be used:
double gaussian(double x, double a, double b, double c)
{
const double z = (x - b) / c;
return a * std::exp(-0.5 * z * z);
}
int main()
{
auto device = std::random_device();
auto gen = std::mt19937(device());
auto xs = linspace<std::vector<double>>(0.0, 1.0, 300);
auto ys = std::vector<double>(xs.size());
double a = 5.0, b = 0.4, c = 0.15;
for (size_t i = 0; i < xs.size(); i++)
{
auto y = gaussian(xs[i], a, b, c);
auto dist = std::normal_distribution(0.0, 0.1 * y);
ys[i] = y + dist(gen);
}
auto result = curve_fit(gaussian, {1.0, 0.0, 1.0}, xs, ys);
// use result
}
This code is amazingly simple compared to its original, C-language counterpart. One initializes the x-y pairs of values, here stored as vectors xs and ys, and executes a single function that takes 4 easy-to-understand parameters: the function to be fitted to the data, the initial values of the fitting parameters the function depends on, the x values, and the corresponding y values of the data to which the function must be fitted.
Your problem is how to keep this high-level interface, but use it for fitting functions where only some parameters are "free", that is, can be changed during the fitting procedure, while the values of others must be fixed. This could be easily achieved using, e.g., global variables that the function has access to, but we hate global variables and never use them without a real cause.
I suggest using a well-known C++ alternative: functors. Look:
struct gaussian_fixed_a
{
double a;
gaussian_fixed_a(double a) : a{a} {}
double operator()(double x, double b, double c) const { return gaussian(x, a, b, c); }
};
This struct/class introduces a new type of function object. In the constructor, parameter a is passed and stored in the object. Then, there's a function call operator that takes only 3 parameters rather than 4, substituting a with its stored value. This object can pretend to be a Gaussian with a fixed a, where only the other 3 arguments, x, b, and c, can vary.
We would like to use it like this:
gaussian_fixed_a g(a);
auto r2 = curve_fit(g, std::array{0.444, 0.11}, xs, ys);
This is almost the same code you'd use for the original wrapper save for 2 differences:
You now use an object name (here: g) rather than a function name
You have to pass the number of arguments to curve_fit as a compile-time constant, as it is then used internally to call a template parametrized by this number. I achieve it by using std::array, for which the compiler can deduce its size at compile time, just as needed. An alternative would be a nasty template syntax, curve_fit<2>(...
For this to work, you need to change the interface of curve_fit, from
template <typename Callable>
auto curve_fit(Callable f, const std::vector<double>& initial_params, const std::vector<double>& x,
const std::vector<double>& y) -> std::vector<double>
to
template <typename Callable, auto n>
auto curve_fit(Callable f, const std::array<double, n>& initial_params, const std::vector<double>& x,
const std::vector<double>& y) -> std::vector<double>
(btw: this -> syntax with a well-known type on its right-hand side is not the best one, IMHO, but let it be). The idea is to force the compiler to read the number of fitting parameters at compile time from the size of the array.
Then you need to make a similar adjustment in the argument list of curve_fit_impl - and this is almost it.
Here I spent quite a lot of time trying to figure out why this code does not work. It turned out it had worked all the time; the secret is that if you fit a function to some data, you'd better provide initial values reasonably close to the solution. That's why I used the initializer std::array{0.444, 0.11} rather than the original {0, 1}, as the latter does not converge to anything close to the correct answer.
Do we really need to use explicit function objects? Perhaps lambdas will do? Yes, they will - this compiles and runs as expected:
auto r3 = curve_fit([a](double x, double b, double c) { return gaussian(x, a, b, c); }, std::array{0.444, 0.11}, xs, ys);
Here's the full diff between the original and modified code (without lambda):
7a8
> #include <array>
72c73,74
< auto internal_make_gsl_vector_ptr(const std::vector<double>& vec) -> gsl_vector*
---
> template<auto n>
> auto internal_make_gsl_vector_ptr(const std::array<double, n>& vec) -> gsl_vector*
158,159c160,161
< template <typename Callable>
< auto curve_fit(Callable f, const std::vector<double>& initial_params, const std::vector<double>& x,
---
> template <typename Callable, auto n>
> auto curve_fit(Callable f, const std::array<double, n>& initial_params, const std::vector<double>& x,
163,164c165,166
< constexpr auto n = 3;
< assert(initial_params.size() == n);
---
> // constexpr auto n = 2;
> // assert(initial_params.size() == n);
194a197,204
>
> struct gaussian_fixed_a
> {
> double a;
> gaussian_fixed_a(double a) : a{a} {}
> double operator()(double x, double b, double c) const { return gaussian(x, a, b, c); }
> };
>
212c222,224
< auto r = curve_fit(gaussian, {1.0, 0.0, 1.0}, xs, ys);
---
> auto r = curve_fit(gaussian, std::array{1.0, 0.0, 1.0}, xs, ys);
> gaussian_fixed_a g(a);
> auto r2 = curve_fit(g, std::array{0.444, 0.11}, xs, ys);
215a228,230
> std::cout << "\n";
> std::cout << "result: " << r2[0] << ' ' << r2[1] << '\n';
> std::cout << "error : " << r2[0] - b << ' ' << r2[1] - c << '\n';

How to use GSL to fit an arbitrary function (i.e. 1/x + 1/x^2) to some data?

I have some data and I need to fit a second order "polynomial" in 1/x to it using C and GSL, but I don't really understand how to do it.
The documentation for GSL is, unfortunately, not very helpful, I have read it for a few hours now, but I don't seem to be getting closer to the solution.
Google doesn't turn up anything useful either, and I really don't know what to do anymore.
Could you maybe give me some hints on how to accomplish this, or where even to look?
Thanks
Edit 1: The main problem basically is that
Sum_n : a_n * x^(-n)
is not a polynomial, so basic fitting or solving algorithms won't work correctly. That's what I tried, using the code for quadratic fitting from this link, also substituting x->1/x, but it didn't work.
Maybe it's a bit too late for you to read this. However, I post my answer anyway for other people looking for enlightenment.
I suppose that this basic example can help you. First of all, you have to read about this method of non-linear fitting, since you have to adapt the code to your own problem.
Second, it's not really clear to me from your post what function you use.
For the sake of clarity, let's consider
a1/x + a2/x**2
where a1 and a2 are your parameters.
Using the slightly modified code from the link above (I replaced 1/x with 1/(x + 0.1) to avoid singularities, but it doesn't really change the picture):
#include <stdlib.h>
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_multifit_nlinear.h>
/* number of data points to fit */
#define N 40
#define FIT(i) gsl_vector_get(w->x, i)
#define ERR(i) sqrt(gsl_matrix_get(covar,i,i))
struct data
{
size_t n;
double * y;
};
int expb_f (const gsl_vector * x, void *data, gsl_vector * f)
{
size_t n = ((struct data *)data)->n;
double *y = ((struct data *)data)->y;
double A_1 = gsl_vector_get (x, 0);
double A_2 = gsl_vector_get (x, 1);
size_t i;
for (i = 0; i < n; i++)
{
/* Model Yi = A_1 / x + A_2 / x**2 */
double t = i;
double Yi = A_1 / (t + 0.1) +A_2 / (t*t + 0.2*t + 0.01) ;
gsl_vector_set (f, i, Yi - y[i]);
}
return GSL_SUCCESS;
}
int expb_df (const gsl_vector * x, void *data, gsl_matrix * J)
{
size_t n = ((struct data *)data)->n;
double A_1 = gsl_vector_get (x, 0);
double A_2 = gsl_vector_get (x, 1);
size_t i;
for (i = 0; i < n; i++)
{
/* Jacobian matrix J(i,j) = dfi / dxj, */
/* where fi = (Yi - yi)/sigma[i], */
/* Yi = A_1 / (t + 0.1) +A_2 / (t*t + 0.2*t + 0.01) */
/* and the xj are the parameters (A_1,A_2) */
double t = i;
double e = 1 / (t + 0.1);
double e1 = 1 / (t*t + 0.2*t + 0.01);
gsl_matrix_set (J, i, 0, e);
gsl_matrix_set (J, i, 1, e1);
}
return GSL_SUCCESS;
}
void callback(const size_t iter, void *params, const gsl_multifit_nlinear_workspace *w)
{
gsl_vector *f = gsl_multifit_nlinear_residual(w);
gsl_vector *x = gsl_multifit_nlinear_position(w);
double rcond;
/* compute reciprocal condition number of J(x) */
gsl_multifit_nlinear_rcond(&rcond, w);
fprintf(stderr, "iter %2zu: A_1 = % e A_2 = % e cond(J) = % e, |f(x)| = % e \n", iter, gsl_vector_get(x, 0), gsl_vector_get(x, 1), 1.0 / rcond, gsl_blas_dnrm2(f));
}
int main (void)
{
const gsl_multifit_nlinear_type *T = gsl_multifit_nlinear_trust;
gsl_multifit_nlinear_workspace *w;
gsl_multifit_nlinear_fdf fdf;
gsl_multifit_nlinear_parameters fdf_params = gsl_multifit_nlinear_default_parameters();
const size_t n = N;
const size_t p = 2;
gsl_vector *f;
gsl_matrix *J;
gsl_matrix *covar = gsl_matrix_alloc (p, p);
double y[N], weights[N];
struct data d = { n, y };
double x_init[2] = { 1.0, 1.0 }; /* starting values */
gsl_vector_view x = gsl_vector_view_array (x_init, p);
gsl_vector_view wts = gsl_vector_view_array(weights, n);
gsl_rng * r;
double chisq, chisq0;
int status, info;
size_t i;
const double xtol = 1e-8;
const double gtol = 1e-8;
const double ftol = 0.0;
gsl_rng_env_setup();
r = gsl_rng_alloc(gsl_rng_default);
/* define the function to be minimized */
fdf.f = expb_f;
fdf.df = expb_df; /* set to NULL for finite-difference Jacobian */
fdf.fvv = NULL; /* not using geodesic acceleration */
fdf.n = n;
fdf.p = p;
fdf.params = &d;
/* this is the data to be fitted */
for (i = 0; i < n; i++)
{
double t = i;
double yi = (0.1 + 3.2/(t + 0.1))/(t + 0.1);
double si = 0.1 * yi;
double dy = gsl_ran_gaussian(r, si);
weights[i] = 1.0 / (si * si);
y[i] = yi + dy;
printf ("% e % e \n",t + 0.1, y[i]);
};
/* allocate workspace with default parameters */
w = gsl_multifit_nlinear_alloc (T, &fdf_params, n, p);
/* initialize solver with starting point and weights */
gsl_multifit_nlinear_winit (&x.vector, &wts.vector, &fdf, w);
/* compute initial cost function */
f = gsl_multifit_nlinear_residual(w);
gsl_blas_ddot(f, f, &chisq0);
/* solve the system with a maximum of 20 iterations */
status = gsl_multifit_nlinear_driver(20, xtol, gtol, ftol, callback, NULL, &info, w);
/* compute covariance of best fit parameters */
J = gsl_multifit_nlinear_jac(w);
gsl_multifit_nlinear_covar (J, 0.0, covar);
/* compute final cost */
gsl_blas_ddot(f, f, &chisq);
fprintf(stderr, "summary from method '%s/%s'\n", gsl_multifit_nlinear_name(w), gsl_multifit_nlinear_trs_name(w));
fprintf(stderr, "number of iterations: %zu \n", gsl_multifit_nlinear_niter(w));
fprintf(stderr, "function evaluations: %zu \n", fdf.nevalf);
fprintf(stderr, "Jacobian evaluations: %zu \n", fdf.nevaldf);
fprintf(stderr, "reason for stopping: %s \n", (info == 1) ? "small step size" : "small gradient");
fprintf(stderr, "initial |f(x)| = % e \n", sqrt(chisq0));
fprintf(stderr, "final |f(x)| = % e \n", sqrt(chisq));
{
double dof = n - p;
double c = GSL_MAX_DBL(1, sqrt(chisq / dof));
fprintf(stderr, "chisq/dof = % e \n", chisq / dof);
fprintf (stderr, "A_1 = % f +/- % f \n", FIT(0), c*ERR(0));
fprintf (stderr, "A_2 = % f +/- % f \n", FIT(1), c*ERR(1));
}
fprintf (stderr, "status = %s \n", gsl_strerror (status));
gsl_multifit_nlinear_free (w);
gsl_matrix_free (covar);
gsl_rng_free (r);
return 0;
}
Results of simulations
Unfortunately, Gnuplot doesn't want to fit this data for some reason. Usually it gives the same function up to a certain number of decimal places and helps to verify your code.

How to implement natural logarithm with continued fraction in C?

Here I have a little problem. Create something from this formula (the continued-fraction expansion of the natural logarithm):
ln((1+z)/(1-z)) = 2z / (1 - z^2/(3 - 4z^2/(5 - 9z^2/(7 - ...))))
This is what I have, but it doesn't work. Frankly, I really don't understand how it should work. I tried to code it with some bad instructions. n is the number of iterations and the number of parts of the fraction. I think it leads somehow to recursion but I don't know how.
Thanks for any help.
double contFragLog(double z, int n)
{
double cf = 2 * z;
double a, b;
for(int i = n; i >= 1; i--)
{
a = sq(i - 2) * sq(z);
b = i + i - 2;
cf = a / (b - cf);
}
return (1 + cf) / (1 - cf);
}
The central loop is messed up. Reworked below. Recursion is not needed either: just compute the deepest term first and work your way out.
double contFragLog(double z, int n) {
double zz = z*z;
double cf = 1.0; // Important this is not 0
for (int i = n; i >= 1; i--) {
cf = (2*i -1) - i*i*zz/cf;
}
return 2*z/cf;
}
void testln(double z) {
double y = log((1+z)/(1-z));
double y2 = contFragLog(z, 8);
printf("%e %e %e\n", z, y, y2);
}
int main() {
testln(0.2);
testln(0.5);
testln(0.8);
return 0;
}
Output
2.000000e-01 4.054651e-01 4.054651e-01
5.000000e-01 1.098612e+00 1.098612e+00
8.000000e-01 2.197225e+00 2.196987e+00
[Edit]
As prompted by @MicroVirus, I found double cf = 1.88*n - 0.95; to work better than double cf = 1.0;. As more terms are used, the value used makes less difference, yet a good initial cf requires fewer terms for a good answer, especially for |z| near 0.5. More work could be done here, as I only studied 0 < z <= 0.5. @MicroVirus's suggestion of 2*n+1 may be close to my suggestion due to an off-by-one in what n is.
This is based on reverse computing and noting the value of CF[n] as n increased. I was surprised the "seed" value did not appear to be some nice integer equation.
Here's a solution to the problem that does use recursion (if anyone is interested):
#include <math.h>
#include <stdio.h>
/* `i` is the iteration of the recursion and `n` is
just for testing when we should end. 'zz' is z^2 */
double recursion (double zz, int i, int n) {
if (!n)
return 1;
return 2 * i - 1 - i * i * zz / recursion (zz, i + 1, --n);
}
double contFragLog (double z, int n) {
return 2 * z / recursion (z * z, 1, n);
}
void testln(double z) {
double y = log((1+z)/(1-z));
double y2 = contFragLog(z, 8);
printf("%e %e %e\n", z, y, y2);
}
int main() {
testln(0.2);
testln(0.5);
testln(0.8);
return 0;
}
The output is identical to the solution above:
2.000000e-01 4.054651e-01 4.054651e-01
5.000000e-01 1.098612e+00 1.098612e+00
8.000000e-01 2.197225e+00 2.196987e+00

R freezes when I call a C code

I wrote a small C code to do random walk Metropolis, which I call in R. When I run it, R freezes. I am not sure which part of the code is incorrect. I am following this Peng and Leeuw tutorial (on page 6). As a disclaimer: I don't have much experience with C, and have only some basic knowledge of C++.
#----C code --------
#include <R.h>
#include <Rmath.h>
void mcmc(int *niter, double *mean, double *sd, double *lo_bound,
double *hi_bound, double *normal)
{
int i, j;
double x, x1, h, p;
x = runif(-5, 5);
for(i=0; i < *niter; i++) {
x1 = runif(*lo_bound, *hi_bound);
while((x1 + x) > 5 || (x1 + x) < -5)
x1 = runif(*lo_bound, *hi_bound);
h = dnorm(x+x1, *mean, *sd, 0)/dnorm(x, *mean, *sd, 0);
if(h >= 1)
h = 1;
p = runif(0, 1);
if(p < h)
x += x1;
normal[i] = x;
}
}
#-----R code ---------
foo_C<-function(mean, sd, lo_bound, hi_bound, niter)
{
result <- .C("mcmc", as.integer(niter), as.double(mean), as.double(sd),
as.double(lo_bound), as.double(hi_bound), normal=double(niter))
result[["normal"]]
}
After compiling it:
dyn.load("foo_C.so")
foo_C(0, 1, -0.5, 0.5, 100)
FOLLOW UP:
The while loop is where the problem lies. But the root of the problem seems to have to do with the function runif, which is supposed to generate a random variable between a lower bound and an upper bound. But it seems that what the function actually does is to randomly pick either the upper bound value (5) or the lower bound value (-5).
You need to follow the instructions in Writing R Extensions, section 6.3 Random number generation and call GetRNGstate(); before you call R's random number generation routines. You also need to call PutRNGstate(); when you're finished.
The reason your code started working is likely because you called set.seed in the R session before you called your mcmc C function.
So your C code should look like this:
#include <R.h>
#include <Rmath.h>
void mcmc(int *niter, double *mean, double *sd, double *lo_bound,
double *hi_bound, double *normal)
{
int i;
double x, x1, h, p;
GetRNGstate();
x = runif(-5.0, 5.0);
for(i=0; i < *niter; i++) {
x1 = runif(*lo_bound, *hi_bound);
while((x1 + x) > 5.0 || (x1 + x) < -5.0) {
x1 = runif(*lo_bound, *hi_bound);
//R_CheckUserInterrupt();
}
h = dnorm(x+x1, *mean, *sd, 0)/dnorm(x, *mean, *sd, 0);
if(h >= 1)
h = 1;
p = runif(0, 1);
if(p < h)
x += x1;
normal[i] = x;
}
PutRNGstate();
}

fast & efficient least squares fit algorithm in C?

I am trying to implement a linear least squares fit onto 2 arrays of data: time vs amplitude. The only technique I know so far is to test all of the possible m and b points in (y = m*x+b) and then find out which combination fits my data best so that it has the least error. However, I think iterating so many combinations is sometimes useless because it tests out everything. Are there any techniques to speed up the process that I don't know about? Thanks.
Try this code. It fits y = mx + b to your (x,y) data.
The arguments to linreg are
linreg(int n, REAL x[], REAL y[], REAL* m, REAL* b, REAL* r)
n = number of data points
x,y = arrays of data
*m = output slope
*b = output intercept
*r = output correlation coefficient (can be NULL if you don't want it)
The return value is 0 on success, !=0 on failure.
Here's the code
#include "linreg.h"
#include <stdlib.h>
#include <math.h> /* math functions */
//#define REAL float
#define REAL double
inline static REAL sqr(REAL x) {
return x*x;
}
int linreg(int n, const REAL x[], const REAL y[], REAL* m, REAL* b, REAL* r){
REAL sumx = 0.0; /* sum of x */
REAL sumx2 = 0.0; /* sum of x**2 */
REAL sumxy = 0.0; /* sum of x * y */
REAL sumy = 0.0; /* sum of y */
REAL sumy2 = 0.0; /* sum of y**2 */
for (int i=0;i<n;i++){
sumx += x[i];
sumx2 += sqr(x[i]);
sumxy += x[i] * y[i];
sumy += y[i];
sumy2 += sqr(y[i]);
}
REAL denom = (n * sumx2 - sqr(sumx));
if (denom == 0) {
// singular matrix. can't solve the problem.
*m = 0;
*b = 0;
if (r) *r = 0;
return 1;
}
*m = (n * sumxy - sumx * sumy) / denom;
*b = (sumy * sumx2 - sumx * sumxy) / denom;
if (r!=NULL) {
*r = (sumxy - sumx * sumy / n) / /* compute correlation coeff */
sqrt((sumx2 - sqr(sumx)/n) *
(sumy2 - sqr(sumy)/n));
}
return 0;
}
Example
You can run this example online.
int main()
{
int n = 6;
REAL x[6]= {1, 2, 4, 5, 10, 20};
REAL y[6]= {4, 6, 12, 15, 34, 68};
REAL m,b,r;
linreg(n,x,y,&m,&b,&r);
printf("m=%g b=%g r=%g\n",m,b,r);
return 0;
}
Here is the output
m=3.43651 b=-0.888889 r=0.999192
Here is the Excel plot and linear fit (for verification).
All values agree exactly with the C code above (note C code returns r while Excel returns R**2).
There are efficient algorithms for least-squares fitting; see Wikipedia for details. There are also libraries that implement the algorithms for you, likely more efficiently than a naive implementation would do; the GNU Scientific Library is one example, but there are others under more lenient licenses as well.
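For the straight-line case in particular, GSL also has a closed-form routine, gsl_fit_linear, so no iteration is required. A minimal sketch, reusing the sample data from the answer above:
#include <stdio.h>
#include <gsl/gsl_fit.h>
int main(void)
{
double x[6] = {1, 2, 4, 5, 10, 20};
double y[6] = {4, 6, 12, 15, 34, 68};
double c0, c1, cov00, cov01, cov11, sumsq;
/* fits y = c0 + c1*x; the 1s are the strides through x[] and y[] */
gsl_fit_linear(x, 1, y, 1, 6, &c0, &c1, &cov00, &cov01, &cov11, &sumsq);
printf("b = %g m = %g sumsq = %g\n", c0, c1, sumsq);
return 0;
}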
From Numerical Recipes: The Art of Scientific Computing, section 15.2, Fitting Data to a Straight Line:
Linear Regression:
Consider the problem of fitting a set of N data points (x_i, y_i) to a straight-line model:
y(x; a, b) = a + b*x
Assume that the uncertainty sigma_i associated with each y_i is known, and that the x_i's (values of the independent variable) are known exactly. To measure how well the model agrees with the data, we use the chi-square function, which in this case is:
chi^2(a, b) = sum over i of ((y_i - a - b*x_i) / sigma_i)^2
The above equation is minimized to determine a and b. This is done by taking the derivatives of the above equation with respect to a and b, equating them to zero, and solving for a and b. Then we estimate the probable uncertainties in the estimates of a and b, since obviously the measurement errors in the data must introduce some uncertainty in the determination of those parameters. Additionally, we must estimate the goodness-of-fit of the data to the
model. Absent this estimate, we have not the slightest indication that the parameters a and b in the model have any meaning at all.
The below struct performs the mentioned calculations:
struct Fitab {
// Object for fitting a straight line y = a + b*x to a set of
// points (xi, yi), with or without available
// errors sigma_i. Call one of the two constructors to calculate the fit.
// The answers are then available as the variables:
// a, b, siga, sigb, chi2, and either q or sigdat.
int ndata;
double a, b, siga, sigb, chi2, q, sigdat; // Answers.
vector<double> &x, &y, &sig;
// Constructor.
Fitab(vector<double> &xx, vector<double> &yy, vector<double> &ssig)
: ndata(xx.size()), x(xx), y(yy), sig(ssig), chi2(0.), q(1.), sigdat(0.)
{
// Given a set of data points x[0..ndata-1], y[0..ndata-1]
// with individual standard deviations sig[0..ndata-1],
// sets a,b and their respective probable uncertainties
// siga and sigb, the chi-square: chi2, and the goodness-of-fit
// probability: q
Gamma gam;
int i;
double ss=0., sx=0., sy=0., st2=0., t, wt, sxoss; b=0.0;
for (i=0;i < ndata; i++) { // Accumulate sums ...
wt = 1.0 / SQR(sig[i]); //...with weights
ss += wt;
sx += x[i]*wt;
sy += y[i]*wt;
}
sxoss = sx/ss;
for (i=0; i < ndata; i++) {
t = (x[i]-sxoss) / sig[i];
st2 += t*t;
b += t*y[i]/sig[i];
}
b /= st2; // Solve for a, b, sigma-a, and sigma-b.
a = (sy-sx*b) / ss;
siga = sqrt((1.0+sx*sx/(ss*st2))/ss);
sigb = sqrt(1.0/st2); // Calculate chi2.
for (i=0;i<ndata;i++) chi2 += SQR((y[i]-a-b*x[i])/sig[i]);
if (ndata>2) q=gam.gammq(0.5*(ndata-2),0.5*chi2); // goodness of fit
}
// Constructor.
Fitab(vector<double> &xx, vector<double> &yy)
: ndata(xx.size()), x(xx), y(yy), sig(xx), chi2(0.), q(1.), sigdat(0.)
{
// As above, but without known errors (sig is not used).
// The uncertainties siga and sigb are estimated by assuming
// equal errors for all points, and that a straight line is
// a good fit. q is returned as 1.0, the normalization of chi2
// is to unit standard deviation on all points, and sigdat
// is set to the estimated error of each point.
int i;
double ss,sx=0.,sy=0.,st2=0.,t,sxoss;
b=0.0; // Accumulate sums ...
for (i=0; i < ndata; i++) {
sx += x[i]; // ...without weights.
sy += y[i];
}
ss = ndata;
sxoss = sx/ss;
for (i=0;i < ndata; i++) {
t = x[i]-sxoss;
st2 += t*t;
b += t*y[i];
}
b /= st2; // Solve for a, b, sigma-a, and sigma-b.
a = (sy-sx*b)/ss;
siga=sqrt((1.0+sx*sx/(ss*st2))/ss);
sigb=sqrt(1.0/st2); // Calculate chi2.
for (i=0;i<ndata;i++) chi2 += SQR(y[i]-a-b*x[i]);
if (ndata > 2) sigdat=sqrt(chi2/(ndata-2));
// For unweighted data evaluate typical
// sig using chi2, and adjust
// the standard deviations.
siga *= sigdat;
sigb *= sigdat;
}
};
where struct Gamma:
struct Gamma : Gauleg18 {
// Object for incomplete gamma function.
// Gauleg18 provides coefficients for Gauss-Legendre quadrature.
static const Int ASWITCH=100; // When to switch to quadrature method.
static const double EPS; // See end of struct for initializations.
static const double FPMIN;
double gln;
double gammp(const double a, const double x) {
// Returns the incomplete gamma function P(a,x)
if (x < 0.0 || a <= 0.0) throw("bad args in gammp");
if (x == 0.0) return 0.0;
else if ((Int)a >= ASWITCH) return gammpapprox(a,x,1); // Quadrature.
else if (x < a+1.0) return gser(a,x); // Use the series representation.
else return 1.0-gcf(a,x); // Use the continued fraction representation.
}
double gammq(const double a, const double x) {
// Returns the incomplete gamma function Q(a,x) = 1 - P(a,x)
if (x < 0.0 || a <= 0.0) throw("bad args in gammq");
if (x == 0.0) return 1.0;
else if ((Int)a >= ASWITCH) return gammpapprox(a,x,0); // Quadrature.
else if (x < a+1.0) return 1.0-gser(a,x); // Use the series representation.
else return gcf(a,x); // Use the continued fraction representation.
}
double gser(const Doub a, const Doub x) {
// Returns the incomplete gamma function P(a,x) evaluated by its series representation.
// Also sets ln (gamma) as gln. User should not call directly.
double sum,del,ap;
gln=gammln(a);
ap=a;
del=sum=1.0/a;
for (;;) {
++ap;
del *= x/ap;
sum += del;
if (fabs(del) < fabs(sum)*EPS) {
return sum*exp(-x+a*log(x)-gln);
}
}
}
double gcf(const Doub a, const Doub x) {
// Returns the incomplete gamma function Q(a, x) evaluated
// by its continued fraction representation.
// Also sets ln (gamma) as gln. User should not call directly.
int i;
double an,b,c,d,del,h;
gln=gammln(a);
b=x+1.0-a; // Set up for evaluating continued fraction
// by modified Lentz’s method with b0 = 0.
c=1.0/FPMIN;
d=1.0/b;
h=d;
for (i=1;;i++) {
// Iterate to convergence.
an = -i*(i-a);
b += 2.0;
d=an*d+b;
if (fabs(d) < FPMIN) d=FPMIN;
c=b+an/c;
if (fabs(c) < FPMIN) c=FPMIN;
d=1.0/d;
del=d*c;
h *= del;
if (fabs(del-1.0) <= EPS) break;
}
return exp(-x+a*log(x)-gln)*h; // Put factors in front.
}
double gammpapprox(double a, double x, int psig) {
// Incomplete gamma by quadrature. Returns P(a,x) or Q(a, x),
// when psig is 1 or 0, respectively. User should not call directly.
int j;
double xu,t,sum,ans;
double a1 = a-1.0, lna1 = log(a1), sqrta1 = sqrt(a1);
gln = gammln(a);
// Set how far to integrate into the tail:
if (x > a1) xu = MAX(a1 + 11.5*sqrta1, x + 6.0*sqrta1);
else xu = MAX(0.,MIN(a1 - 7.5*sqrta1, x - 5.0*sqrta1));
sum = 0;
for (j=0;j<ngau;j++) { // Gauss-Legendre.
t = x + (xu-x)*y[j];
sum += w[j]*exp(-(t-a1)+a1*(log(t)-lna1));
}
ans = sum*(xu-x)*exp(a1*(lna1-1.)-gln);
return (psig?(ans>0.0? 1.0-ans:-ans):(ans>=0.0? ans:1.0+ans));
}
double invgammp(Doub p, Doub a);
// Inverse function on x of P(a,x) .
};
const Doub Gamma::EPS = numeric_limits<Doub>::epsilon();
const Doub Gamma::FPMIN = numeric_limits<Doub>::min()/EPS;
and struct Gauleg18:
struct Gauleg18 {
// Abscissas and weights for Gauss-Legendre quadrature.
static const Int ngau = 18;
static const Doub y[18];
static const Doub w[18];
};
const Doub Gauleg18::y[18] = {0.0021695375159141994,
0.011413521097787704,0.027972308950302116,0.051727015600492421,
0.082502225484340941, 0.12007019910960293,0.16415283300752470,
0.21442376986779355, 0.27051082840644336, 0.33199876341447887,
0.39843234186401943, 0.46931971407375483, 0.54413605556657973,
0.62232745288031077, 0.70331500465597174, 0.78649910768313447,
0.87126389619061517, 0.95698180152629142};
const Doub Gauleg18::w[18] = {0.0055657196642445571,
0.012915947284065419,0.020181515297735382,0.027298621498568734,
0.034213810770299537,0.040875750923643261,0.047235083490265582,
0.053244713977759692,0.058860144245324798,0.064039797355015485,
0.068745323835736408,0.072941885005653087,0.076598410645870640,
0.079687828912071670,0.082187266704339706,0.084078218979661945,
0.085346685739338721,0.085983275670394821};
and, finally, function Gamma::invgammp():
double Gamma::invgammp(double p, double a) {
// Returns x such that P(a,x) = p for an argument p between 0 and 1.
int j;
double x,err,t,u,pp,lna1,afac,a1=a-1;
const double EPS=1.e-8; // Accuracy is the square of EPS.
gln=gammln(a);
if (a <= 0.) throw("a must be pos in invgammap");
if (p >= 1.) return MAX(100.,a + 100.*sqrt(a));
if (p <= 0.) return 0.0;
if (a > 1.) {
lna1=log(a1);
afac = exp(a1*(lna1-1.)-gln);
pp = (p < 0.5)? p : 1. - p;
t = sqrt(-2.*log(pp));
x = (2.30753+t*0.27061)/(1.+t*(0.99229+t*0.04481)) - t;
if (p < 0.5) x = -x;
x = MAX(1.e-3,a*pow(1.-1./(9.*a)-x/(3.*sqrt(a)),3));
} else {
t = 1.0 - a*(0.253+a*0.12); // See (6.2.9).
if (p < t) x = pow(p/t,1./a);
else x = 1.-log(1.-(p-t)/(1.-t));
}
for (j=0;j<12;j++) {
if (x <= 0.0) return 0.0; // x too small to compute accurately.
err = gammp(a,x) - p;
if (a > 1.) t = afac*exp(-(x-a1)+a1*(log(x)-lna1));
else t = exp(-x+a1*log(x)-gln);
u = err/t;
// Halley’s method.
x -= (t = u/(1.-0.5*MIN(1.,u*((a-1.)/x - 1))));
// Halve old value if x tries to go negative.
if (x <= 0.) x = 0.5*(x + t);
if (fabs(t) < EPS*x ) break;
}
return x;
}
Here is my version of a C/C++ function that does simple linear regression. The calculations follow the Wikipedia article on simple linear regression. This is published as a single-header MIT-licensed library on GitHub: simple_linear_regression. The library (.h file) is tested to work on Linux and Windows, and from C and C++ using -Wall -Werror and all -std versions supported by clang/gcc.
#define SIMPLE_LINEAR_REGRESSION_ERROR_INPUT_VALUE -2
#define SIMPLE_LINEAR_REGRESSION_ERROR_NUMERIC -3
int simple_linear_regression(const double * x, const double * y, const int n, double * slope_out, double * intercept_out, double * r2_out) {
double sum_x = 0.0;
double sum_xx = 0.0;
double sum_xy = 0.0;
double sum_y = 0.0;
double sum_yy = 0.0;
double n_real = (double)(n);
int i = 0;
double slope = 0.0;
double denominator = 0.0;
if (x == NULL || y == NULL || n < 2) {
return SIMPLE_LINEAR_REGRESSION_ERROR_INPUT_VALUE;
}
for (i = 0; i < n; ++i) {
sum_x += x[i];
sum_xx += x[i] * x[i];
sum_xy += x[i] * y[i];
sum_y += y[i];
sum_yy += y[i] * y[i];
}
denominator = n_real * sum_xx - sum_x * sum_x;
if (denominator == 0.0) {
return SIMPLE_LINEAR_REGRESSION_ERROR_NUMERIC;
}
slope = (n_real * sum_xy - sum_x * sum_y) / denominator;
if (slope_out != NULL) {
*slope_out = slope;
}
if (intercept_out != NULL) {
*intercept_out = (sum_y - slope * sum_x) / n_real;
}
if (r2_out != NULL) {
denominator = ((n_real * sum_xx) - (sum_x * sum_x)) * ((n_real * sum_yy) - (sum_y * sum_y));
if (denominator == 0.0) {
return SIMPLE_LINEAR_REGRESSION_ERROR_NUMERIC;
}
*r2_out = ((n_real * sum_xy) - (sum_x * sum_y)) * ((n_real * sum_xy) - (sum_x * sum_y)) / denominator;
}
return 0;
}
Usage example:
#define SIMPLE_LINEAR_REGRESSION_IMPLEMENTATION
#include "simple_linear_regression.h"
#include <stdio.h>
/* Some data that we want to find the slope, intercept and r2 for */
static const double x[] = { 1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83 };
static const double y[] = { 52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46 };
int main() {
double slope = 0.0;
double intercept = 0.0;
double r2 = 0.0;
int res = 0;
res = simple_linear_regression(x, y, sizeof(x) / sizeof(x[0]), &slope, &intercept, &r2);
if (res < 0) {
printf("Error: %s\n", simple_linear_regression_error_string(res));
return res;
}
printf("slope: %f\n", slope);
printf("intercept: %f\n", intercept);
printf("r2: %f\n", r2);
return 0;
}
The original example above worked well for me with slope and offset, but I had a hard time with the correlation coefficient. Maybe I don't have my parentheses working the same as the assumed precedence? Anyway, with some help from other web pages I finally got values that match the linear trend-line in Excel. Thought I would share my code using Mark Lakata's variable names. Hope this helps.
double slope = ((n * sumxy) - (sumx * sumy )) / denom;
double intercept = ((sumy * sumx2) - (sumx * sumxy)) / denom;
double term1 = ((n * sumxy) - (sumx * sumy));
double term2 = ((n * sumx2) - (sumx * sumx));
double term3 = ((n * sumy2) - (sumy * sumy));
double term23 = (term2 * term3);
double r2 = 1.0;
if (fabs(term23) > MIN_DOUBLE) // Define MIN_DOUBLE somewhere as 1e-9 or similar
r2 = (term1 * term1) / term23;
As an assignment I had to code in C a simple linear regression using the RMSE loss function. The program is dynamic, and you can enter your own values and choose your own loss function, which is for now limited to Root Mean Square Error. But first, here are the algorithms I used (they match the compute_* functions in the code below):
teta_1 (slope) = sum((x_i - mean_x) * (y_i - mean_y)) / sum((x_i - mean_x)^2)
teta_0 (intercept) = mean_y - teta_1 * mean_x
RMSE = sqrt(sum((predict_i - y_i)^2) / n)
Now the code. You need gnuplot to display the chart: sudo apt install gnuplot
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <sys/types.h>
#define BUFFSIZE 64
#define MAXSIZE 100
static double vector_x[MAXSIZE] = {0};
static double vector_y[MAXSIZE] = {0};
static double vector_predict[MAXSIZE] = {0};
static double max_x;
static double max_y;
static double mean_x;
static double mean_y;
static double teta_0_intercept;
static double teta_1_grad;
static double RMSE;
static double r_square;
static double prediction;
static char intercept[BUFFSIZE];
static char grad[BUFFSIZE];
static char xrange[BUFFSIZE];
static char yrange[BUFFSIZE];
static char lossname_RMSE[BUFFSIZE] = "Simple Linear Regression using RMSE'";
static char cmd_gnu_0[BUFFSIZE] = "set title '";
static char cmd_gnu_1[BUFFSIZE] = "intercept = ";
static char cmd_gnu_2[BUFFSIZE] = "grad = ";
static char cmd_gnu_3[BUFFSIZE] = "set xrange [0:";
static char cmd_gnu_4[BUFFSIZE] = "set yrange [0:";
static char cmd_gnu_5[BUFFSIZE] = "f(x) = (grad * x) + intercept";
static char cmd_gnu_6[BUFFSIZE] = "plot f(x), 'data.temp' with points pointtype 7";
static char const *commands_gnuplot[] = {
cmd_gnu_0,
cmd_gnu_1,
cmd_gnu_2,
cmd_gnu_3,
cmd_gnu_4,
cmd_gnu_5,
cmd_gnu_6,
};
static size_t size;
static void user_input()
{
printf("Enter x,y vector size, MAX = 100\n");
scanf("%lu", &size);
if (size > MAXSIZE) {
printf("Wrong input, size is too big\n");
user_input();
return;
}
printf("vector's size is %lu\n", size);
size_t i;
for (i = 0; i < size; i++) {
printf("Enter vector_x[%ld] values\n", i);
scanf("%lf", &vector_x[i]);
}
for (i = 0; i < size; i++) {
printf("Enter vector_y[%ld] values\n", i);
scanf("%lf", &vector_y[i]);
}
}
static void display_vector()
{
size_t i;
for (i = 0; i < size; i++){
printf("vector_x[%lu] = %lf\t", i, vector_x[i]);
printf("vector_y[%lu] = %lf\n", i, vector_y[i]);
}
}
static void concatenate(char p[], char q[]) {
int c;
int d;
c = 0;
while (p[c] != '\0') {
c++;
}
d = 0;
while (q[d] != '\0') {
p[c] = q[d];
d++;
c++;
}
p[c] = '\0';
}
static void compute_mean_x_y()
{
size_t i;
double tmp_x = 0.0;
double tmp_y = 0.0;
for (i = 0; i < size; i++) {
tmp_x += vector_x[i];
tmp_y += vector_y[i];
}
mean_x = tmp_x / size;
mean_y = tmp_y / size;
printf("mean_x = %lf\n", mean_x);
printf("mean_y = %lf\n", mean_y);
}
static void compute_teta_1_grad()
{
double numerator = 0.0;
double denominator = 0.0;
double tmp1 = 0.0;
double tmp2 = 0.0;
size_t i;
for (i = 0; i < size; i++) {
numerator += (vector_x[i] - mean_x) * (vector_y[i] - mean_y);
}
for (i = 0; i < size; i++) {
tmp1 = vector_x[i] - mean_x;
tmp2 = tmp1 * tmp1;
denominator += tmp2;
}
teta_1_grad = numerator / denominator;
printf("teta_1_grad = %lf\n", teta_1_grad);
}
static void compute_teta_0_intercept()
{
teta_0_intercept = mean_y - (teta_1_grad * mean_x);
printf("teta_0_intercept = %lf\n", teta_0_intercept);
}
static void compute_prediction()
{
size_t i;
for (i = 0; i < size; i++) {
vector_predict[i] = teta_0_intercept + (teta_1_grad * vector_x[i]);
printf("y^[%ld] = %lf\n", i, vector_predict[i]);
}
printf("\n");
}
static void compute_RMSE()
{
compute_prediction();
double error = 0;
size_t i;
for (i = 0; i < size; i++) {
error = (vector_predict[i] - vector_y[i]) * (vector_predict[i] - vector_y[i]);
printf("error y^[%ld] = %lf\n", i, error);
RMSE += error;
}
/* mean */
RMSE = RMSE / size;
/* square root mean */
RMSE = sqrt(RMSE);
printf("\nRMSE = %lf\n", RMSE);
}
static void compute_loss_function()
{
int input = 0;
printf("Which loss function do you want to use?\n");
printf(" 1 - RMSE\n");
scanf("%d", &input);
switch(input) {
case 1:
concatenate(cmd_gnu_0, lossname_RMSE);
compute_RMSE();
printf("\n");
break;
default:
printf("Wrong input try again\n");
compute_loss_function();
}
}
static void compute_r_square(size_t size)
{
double num_err = 0.0;
double den_err = 0.0;
size_t i;
for (i = 0; i < size; i++) {
num_err += (vector_y[i] - vector_predict[i]) * (vector_y[i] - vector_predict[i]);
den_err += (vector_y[i] - mean_y) * (vector_y[i] - mean_y);
}
r_square = 1 - (num_err/den_err);
printf("R_square = %lf\n", r_square);
}
static void compute_predict_for_x()
{
double x = 0.0;
printf("Please enter x value\n");
scanf("%lf", &x);
prediction = teta_0_intercept + (teta_1_grad * x);
printf("y^ if x = %lf -> %lf\n",x, prediction);
}
static void compute_max_x_y()
{
size_t i;
double tmp1= 0.0;
double tmp2= 0.0;
for (i = 0; i < size; i++) {
if (vector_x[i] > tmp1) {
tmp1 = vector_x[i];
max_x = vector_x[i];
}
if (vector_y[i] > tmp2) {
tmp2 = vector_y[i];
max_y = vector_y[i];
}
}
printf("vector_x max value %lf\n", max_x);
printf("vector_y max value %lf\n", max_y);
}
static void display_model_line()
{
sprintf(intercept, "%0.7lf", teta_0_intercept);
sprintf(grad, "%0.7lf", teta_1_grad);
sprintf(xrange, "%0.7lf", max_x + 1);
sprintf(yrange, "%0.7lf", max_y + 1);
concatenate(cmd_gnu_1, intercept);
concatenate(cmd_gnu_2, grad);
concatenate(cmd_gnu_3, xrange);
concatenate(cmd_gnu_3, "]");
concatenate(cmd_gnu_4, yrange);
concatenate(cmd_gnu_4, "]");
printf("grad = %s\n", grad);
printf("intercept = %s\n", intercept);
printf("xrange = %s\n", xrange);
printf("yrange = %s\n", yrange);
printf("cmd_gnu_0: %s\n", cmd_gnu_0);
printf("cmd_gnu_1: %s\n", cmd_gnu_1);
printf("cmd_gnu_2: %s\n", cmd_gnu_2);
printf("cmd_gnu_3: %s\n", cmd_gnu_3);
printf("cmd_gnu_4: %s\n", cmd_gnu_4);
printf("cmd_gnu_5: %s\n", cmd_gnu_5);
printf("cmd_gnu_6: %s\n", cmd_gnu_6);
/* print plot */
FILE *gnuplot_pipe = (FILE*)popen("gnuplot -persistent", "w");
FILE *temp = (FILE*)fopen("data.temp", "w");
/* create data.temp */
size_t i;
for (i = 0; i < size; i++)
{
fprintf(temp, "%f %f \n", vector_x[i], vector_y[i]);
}
fclose(temp); /* flush data.temp so gnuplot can read the complete file */
/* display gnuplot */
for (i = 0; i < 7; i++)
{
fprintf(gnuplot_pipe, "%s \n", commands_gnuplot[i]);
}
fflush(gnuplot_pipe);
}
int main(void)
{
printf("===========================================\n");
printf("INPUT DATA\n");
printf("===========================================\n");
user_input();
display_vector();
printf("\n");
printf("===========================================\n");
printf("COMPUTE MEAN X:Y, TETA_1 TETA_0\n");
printf("===========================================\n");
compute_mean_x_y();
compute_max_x_y();
compute_teta_1_grad();
compute_teta_0_intercept();
printf("\n");
printf("===========================================\n");
printf("COMPUTE LOSS FUNCTION\n");
printf("===========================================\n");
compute_loss_function();
printf("===========================================\n");
printf("COMPUTE R_square\n");
printf("===========================================\n");
compute_r_square(size);
printf("\n");
printf("===========================================\n");
printf("COMPUTE y^ according to x\n");
printf("===========================================\n");
compute_predict_for_x();
printf("\n");
printf("===========================================\n");
printf("DISPLAY LINEAR REGRESSION\n");
printf("===========================================\n");
display_model_line();
printf("\n");
return 0;
}
Look at Section 1 of this paper. This section expresses a 2D linear regression as a matrix multiplication exercise. As long as your data is well-behaved, this technique should permit you to develop a quick least squares fit.
Depending on the size of your data, it might be worthwhile to algebraically reduce the matrix multiplication to a simple set of equations, thereby avoiding the need to write a matmult() function. (Be forewarned, this is completely impractical for more than 4 or 5 data points!)
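Concretely, for the straight-line model y = m*x + b the matrix form reduces to the 2x2 "normal equations" system below (a sketch; solving it gives the same closed form used in the code answers above):
[ sum(x_i^2)  sum(x_i) ] [m]   [ sum(x_i*y_i) ]
[ sum(x_i)    n        ] [b] = [ sum(y_i)     ]
m = (n*sum(x_i*y_i) - sum(x_i)*sum(y_i)) / (n*sum(x_i^2) - sum(x_i)^2)
b = (sum(y_i) - m*sum(x_i)) / n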
The fastest, most efficient way to solve least squares, as far as I am aware, is to subtract (the gradient)/(the 2nd order gradient) from your parameter vector. (By "2nd order gradient" I mean the diagonal of the Hessian.)
Here is the intuition:
Let's say you want to optimize least squares over a single parameter. This is equivalent to finding the vertex of a parabola. Then, for any random initial parameter, x0, the vertex of the loss function is located at x0 - f'(x0) / f''(x0). That's because adding -f'/f'' to x will always zero out the derivative, f'.
Side note: Implementing this in Tensorflow, the solution appeared at w0 - f'(w0) / f''(w0) / (number of weights), but I'm not sure if that's due to Tensorflow or if it's due to something else.
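To make the parabola intuition concrete, here is a tiny illustrative sketch (not the poster's code) for the one-parameter model y = m*x: the loss L(m) = sum (m*x_i - y_i)^2 is exactly quadratic in m, so a single Newton step from any starting point lands on the minimizer.
#include <stdio.h>
/* One Newton step m0 - L'(m0)/L''(m0) for L(m) = sum (m*x[i] - y[i])^2.
   Because L is a parabola in m, one step reaches the exact minimum,
   m = sum(x*y) / sum(x*x), regardless of the starting point m0. */
static double newton_step_slope(const double *x, const double *y, int n, double m0)
{
double g = 0.0, h = 0.0; /* first and second derivatives of L at m0 */
for (int i = 0; i < n; i++)
{
    g += 2.0 * x[i] * (m0 * x[i] - y[i]);
    h += 2.0 * x[i] * x[i];
}
return m0 - g / h;
}
int main(void)
{
double x[4] = {1, 2, 3, 4};
double y[4] = {2.1, 3.9, 6.2, 7.8};
printf("m = %f\n", newton_step_slope(x, y, 4, 0.0)); /* 1.99 from any m0 */
return 0;
}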
