I am asked to normalize a probability distribution P = A·x²·e^(−x) on [0, ∞) by finding the value of A. I know algorithms for computing integrals numerically, but how do I deal with one of the limits being infinity?
The only way I have been able to solve this problem with good accuracy (full accuracy, in fact) is by doing some math first, in order to obtain the Taylor series that represents the integral.
I have been looking for my sample code, but I can't find it. I'll edit my post if I get a working solution.
The basic idea is to compute the derivatives of the integrand and use the coefficients to build the Taylor series of its antiderivative (divide each coefficient by one more than the corresponding exponent of x). I recommend working with the unnormalized integrand so the coefficients stay simple, then multiplying the result by the proper constants at the end. The resulting Taylor series converges well and gives precise values to full precision. Direct quadrature, by contrast, requires a lot of subdivision, and you cannot divide an unbounded interval into a finite number of finite subintervals.
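Separately from the series approach, a standard way to handle the infinite limit numerically is a change of variables such as x = t/(1−t), which maps [0, ∞) onto [0, 1). Here is a minimal Python sketch (purely illustrative, not the answerer's lost code) applied to the asker's integrand x²·e^(−x), whose exact integral is Γ(3) = 2, so A = 1/2:

```python
import numpy as np

def normalization_constant(n=200):
    """Compute A so that A * x^2 * exp(-x) integrates to 1 on [0, inf)."""
    # Substitute x = t/(1-t), dx = dt/(1-t)^2, turning [0, inf) into [0, 1).
    nodes, weights = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (nodes + 1.0)   # shift Gauss-Legendre nodes from [-1, 1] to (0, 1)
    w = 0.5 * weights
    x = t / (1.0 - t)
    integrand = x**2 * np.exp(-x) / (1.0 - t)**2
    return 1.0 / np.sum(w * integrand)
```

The transformed integrand decays very fast as t approaches 1, so ordinary Gauss-Legendre quadrature converges quickly even though the original interval was unbounded.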
I am now interested in bundle adjustment in SLAM, where Rodrigues vectors $R$ of dimension 3 are used as part of the variables. Assume, without loss of generality, that we use the Gauss-Newton method to solve it; then in each step we need to solve the following linear least-squares problem:
$$J(x_k)\Delta x = -F(x_k),$$
where $J$ is the Jacobian of $F$.
Here I am wondering how to calculate the derivative $\frac{\partial F}{\partial R}$. Is it just the ordinary Jacobian from mathematical analysis? I ask because when I look through papers, I find many other concepts such as the exponential map, quaternions, Lie groups and Lie algebras, so I suspect I may be misunderstanding something.
This is not an answer, but is too long for a comment.
I think you need to give more information about how the Rodrigues vector appears in your F.
First off, is the vector assumed to be of unit length? If so, that presents some difficulties, since it then doesn't have 3 independent components. If you know that the vector will lie in some region (e.g. that its z component will always be positive), you can work around this.
If instead the vector is normalised before use, you could still compute the derivatives, but the resulting Jacobian will be singular.
Another approach is to use the length of the vector as the angle through which you rotate. However, this means you need a special case for a rotation through 0, and the resulting function is not differentiable at 0. Of course, if that case can never occur, you may be OK.
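To make the last point concrete, here is a minimal sketch (in Python, purely illustrative) of the angle-equals-length convention, including the special case at 0 mentioned above:

```python
import numpy as np

def rodrigues_to_matrix(r, eps=1e-12):
    """Rotation matrix from a Rodrigues vector r, with angle = |r|, axis = r/|r|."""
    theta = np.linalg.norm(r)
    if theta < eps:
        return np.eye(3)  # the limiting rotation as theta -> 0
    k = np.asarray(r, dtype=float) / theta
    # Cross-product (skew-symmetric) matrix of the unit axis k
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```

In practice, near θ = 0 implementations often switch to series expansions of sin θ/θ and (1 − cos θ)/θ² instead of dividing by θ, which is exactly where the special-case issue lives.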
To find the roots of a function, we can generally use the bisection method or Newton's method. For a function f(x), this is possible only when we have an analytical expression for the x-dependence of f(x).
I am trying to find the roots of such a function where I don't know the exact form of the function; instead I have tabulated data giving the value of f(x) for each value of x in a particular range. I am writing my program in C, and I use a for-loop to calculate f(x) at each x by solving a non-linear equation with the bisection method, tabulating the results. Now I need to find the roots of this tabulated function f(x).
Can anyone help me with any suitable method or algorithm for the problem?
Thanks in advance!
You know from where the sign changes that a root has to lie between two points.
Take several nearby points, fit a polynomial through them, and then solve for the root of that polynomial using Newton's method.
From your description it looks like you should be able to evaluate your function at this new point. If so, I would suggest that you compute the value there, take the two nearest neighbours as well, fit a parabola through the three points, and solve for its root. If your function is smooth and has a non-zero derivative at the root, this step will make your estimate of the root several orders of magnitude more accurate.
(You can repeat again for even more accuracy. But the increased accuracy at this point may be on par with the numerical errors in your estimate of the value of the function.)
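Putting the suggestions above together, here is a small sketch (in Python for brevity; the asker's program is in C, but the logic transfers directly): bracket the root by the sign change in the table, fit a parabola through three surrounding points, and take its root inside the bracket.

```python
import numpy as np

def refine_root(xs, ys):
    """Estimate a root of tabulated data (xs, ys) via local parabola fitting."""
    # Find the first bracketing interval where the sign changes
    i = next(j for j in range(len(ys) - 1) if ys[j] * ys[j + 1] < 0)
    # Use three points surrounding the bracket (clamped to the table ends)
    j0 = min(max(i - 1, 0), len(xs) - 3)
    coeffs = np.polyfit(xs[j0:j0 + 3], ys[j0:j0 + 3], 2)
    # Keep the real root of the parabola that lies inside the bracket
    for r in np.roots(coeffs):
        if abs(r.imag) < 1e-12 and xs[i] <= r.real <= xs[i + 1]:
            return r.real
    return 0.5 * (xs[i] + xs[i + 1])  # fallback: bisection midpoint
```

With a smooth function this quadratic step typically improves on the raw table spacing by several orders of magnitude, exactly as the answer claims.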
I have been working with R's isoreg function and have run into a problem: the function generates more knots than unique fitted values.
From the R help,
iKnots [is an] integer vector giving indices where the fitted curve jumps, i.e., where the convex minorant has kinks
I believe I have an idea about the cause of the problem, and I have a reproducible example:
# Demonstrating the problem
set.seed(100)
x<-runif(88000,0,1)
x<-x[order(x)]
y<- c(rep(c(0.1000001,0.1000000),11000),rep(c(0.1000002,0.1000003),11000),rep(c(0.2000002,0.2000003),11000),rep(1,22000))
plot(test<-isoreg(x,y))
length(unique(test$yf))
length(test$iKnots)
# Evidence of a floating point arithmetic problem
unique(test$yf)
print(c(unique(test$yf)[1],unique(test$yf)[2]),digits=18)
unique(test$yf)[1]==unique(test$yf)[2]
print(c(unique(test$yf)[4],unique(test$yf)[5]),digits=18)
unique(test$yf)[4]==unique(test$yf)[5]
Here is the plot produced by this example:
You can see that R's isoreg function identifies many more knots than it should (where there are many red Xs in the plot). At other places, however, it correctly uses only 2 knots (the black lines).
It is clear that the problem is connected to floating-point arithmetic. I also note that isoreg uses .Call to invoke a C routine that actually performs the isotonic regression, so perhaps the problem stems from differences in how the C code and R compare nearly equal doubles.
I am using isoreg to calibrate model probabilities, and I would like to be as precise as possible. Therefore, I have 2 questions:
1) Is there some way I could alter the x and y variables used in the isoreg function to avoid this problem while maintaining as high precision as possible?
2) I can manually find the unique fitted values and the respective knots. However, is this ok? Can I assume that the algorithm found the best fit or could this problem invalidate that assumption?
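Regarding question 2: the fitted values produced by pool-adjacent-violators are still the correct least-squares monotone fit; it appears to be only the knot bookkeeping (exact equality of doubles) that goes wrong. A minimal Python sketch of PAVA plus tolerance-based level counting (illustrative only; the function names are mine, not R's):

```python
def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    blocks = []  # each block is [sum, count]; its level is sum/count
    for v in y:
        blocks.append([v, 1])
        # Merge backwards while the monotonicity constraint is violated
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)
    return fit

def count_levels(fit, tol=0.0):
    """Number of distinct fitted levels, merging jumps no larger than tol."""
    return 1 + sum(1 for a, b in zip(fit, fit[1:]) if b - a > tol)
```

Counting levels with a small tolerance merges the spurious nearly-equal knots while leaving the genuine jumps intact, which is essentially the manual workaround described in question 2.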
I'm implementing a library which makes use of the GSL for matrix operations. I am now at a point where I need to raise things to any power, including imaginary ones. I already have the code in place to handle negative powers of a matrix, but now I need to handle imaginary powers, since all numbers in my library are complex.
Does the GSL have facilities for doing this already, or am I about to enter loop hell trying to create an algorithm for it? I need to be able to raise matrices not only to imaginary but also to general complex powers, such as 3+2i. Having limited experience with matrices as a whole, I'm not even certain of the process for doing this by hand, much less with a computer.
Hmm, I never thought the electrical engineering classes I went through would help me here, but what do you know. The process for raising something to a complex power is not that complex, and I believe you could write it fairly easily (I am not too familiar with the library you're using, but this should work with any library that has basic complex-number functions).
First, you're going to need to convert the number to polar coordinates: e.g. 3 + 3i becomes (3² + 3²)^(1/2) at an angle of 45 degrees. Pardon the awkward notation. If the conversion is unfamiliar, a quick search for converting from Cartesian to polar coordinates will cover it.
So now that you have converted to polar coordinates, you have some radius r at an angle a, i.e. r·e^(ja). Raising it to the nth power then gives r^n · e^(jan).
If you need more examples on this, research the "general power rule for complex numbers." Best of luck. I hope this helps!
Just reread the question, and I see you need to raise to complex as well as imaginary powers. Complex and imaginary work the same way, with one extra step using the exponent rules. This link quickly explains how to raise something to a complex power: http://boards.straightdope.com/sdmb/showthread.php?t=261399
One approach would be to compute (if possible) the logarithm of your matrix, multiply that by your (complex) exponent, and then exponentiate.
That is, you could have
mat_pow( M, z) = mat_exp( z * mat_log( M));
However, mat_log and even mat_exp are tricky to compute robustly.
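For a diagonalizable matrix, the mat_exp(z * mat_log(M)) recipe collapses to an eigendecomposition: M^z = V·diag(λᵢ^z)·V⁻¹. A NumPy sketch of that idea (illustrative only; a GSL version would use its eigensystem and complex routines, and this assumes M is diagonalizable with eigenvalues away from 0 and the branch cut):

```python
import numpy as np

def mat_pow(M, z):
    """M**z for a diagonalizable matrix M, via M^z = V diag(lam**z) V^-1."""
    # Uses the principal branch of lam**z; breaks down for defective
    # matrices or eigenvalues at 0 / on the negative real axis.
    lam, V = np.linalg.eig(np.asarray(M, dtype=complex))
    return V @ np.diag(lam ** z) @ np.linalg.inv(V)
```

Two easy sanity checks: an integer power should agree with repeated multiplication, and for a scalar matrix c·I the result is c^z·I.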
In case it is still relevant to you: I have extended the capabilities of my package, so that now you can raise any diagonalizable matrix to any power (including, in particular, complex powers). The function is called 'Matpow' and can be found in the package 'powerplus'. This package is for the R language, not C, but perhaps you could do your calculations in R if needed.
Edit: Version 3.0 extends capabilities to (some) non-diagonalizable matrices too.
I hope it helps!
I have to use an algorithm that expects a matrix of integers as input. The input I have is real-valued, so I want to convert it to integers before passing it to the algorithm.
I thought of scaling the input by a large constant and then rounding to integers. This looks like a good solution, but how does one choose a good constant, especially since the range of the float input can vary from case to case? Any other ideas are also welcome.
Probably the best general answer to this question is to find out the maximum integer value that your algorithm can accept as a matrix element without causing overflow inside the algorithm itself. Once you have this maximum value, find the maximum absolute floating-point value in your input data, then scale your inputs by the ratio of these two maxima and round to the nearest integer (avoid truncation).
In practice you probably cannot do this because you probably cannot determine what is the maximum integer value that the algorithm can accept without overflowing. Perhaps you don't know the details of the algorithm, or it depends in a complicated way on all of the input values. If this is the case, you'll just have to pick an arbitrary maximum input value that seems to work well enough.
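A minimal Python sketch of the max-ratio scaling described above (the 32-bit limit here is just an assumed stand-in for whatever maximum your algorithm tolerates):

```python
def to_int_matrix(data, int_max=2**31 - 1):
    """Scale a matrix of floats so the largest |value| maps to int_max, then round."""
    peak = max(abs(v) for row in data for v in row)
    scale = int_max / peak if peak else 1.0
    # Round to nearest rather than truncate, to halve the worst-case error
    ints = [[int(round(v * scale)) for v in row] for row in data]
    return ints, scale
```

Returning the scale factor lets you map results back to the original units afterwards, if the algorithm's output is linear in its inputs.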
First normalize your input to the [0, 1) range, then scale it with the usual affine map:
f(x) = (range_max_exclusive - range_min_inclusive) * x + range_min_inclusive
After that, cast f(x) to integer (or round it, if you prefer). This way you can handle situations where the real values lie in a range such as [0, 1) or [0, n) with n > 1.
In general, your favourite matrix library already contains these operations, so you can implement this technique easily and with better performance than a hand-rolled implementation.
EDIT: Scaling down and then scaling back up is bound to lose some precision. I favour it because a normalization operation generally comes with the library. You can also avoid the downscaling step by using:
f(x) = (range_max_exclusive - range_min_inclusive) / max_element * x + range_min_inclusive