What is the advantage of linspace over the colon ":" operator?

Is there some advantage of writing
t = linspace(0,20,21)
over
t = 0:1:20
?
I understand that the former produces a vector, just as the latter does.
Can anyone describe a situation where linspace is more useful than t = 0:1:20?

It's not just about usability. Though the documentation says:
The linspace function generates linearly spaced vectors. It is
similar to the colon operator :, but gives direct control over the
number of points.
the main difference and advantage of linspace is that it generates a vector of integers with the desired length (or the default 100) and scales it afterwards to the desired range. The colon operator : creates the vector directly by repeated increments.
Imagine you need to define bin edges for a histogram, and in particular you need the bin edge 0.35 to sit exactly in its right place:
edges = [0.05:0.10:.55];
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 0 0 0
does not define the right bin edge, but:
edges = linspace(0.05,0.55,6); %// 6 = (0.55-0.05)/0.1+1
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 1 0 0
does.
Well, it's basically a floating point issue, which can be avoided by linspace: a single division of an integer is not as delicate as a cumulative sum of floating point numbers. But as Mark Dickinson pointed out in the comments:
You shouldn't rely on any of the computed values being exactly what you expect. That is not what linspace is for.
In my opinion it's a matter of how likely you are to run into floating point issues, how much you can reduce their probability, and how small you can set the tolerances. Using linspace can reduce the probability of these issues occurring; it is not a guarantee.
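To make that concrete, here is a small illustration (my own addition, not from the original answer): rather than testing with ==, compare against a tolerance of a few ulps, which is safe with either construction. The factor 10 is an arbitrary safety margin:
edges = 0.05:0.10:0.55;
tol = 10*eps(0.35);          % a few ulps around the target value
X = abs(edges - 0.35) < tol  % finds the 0.35 edge despite rounding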
That's the code of linspace:
n1 = n-1;
c = (d2 - d1).*(n1-1); % opposite signs may cause overflow
if isinf(c)
    y = d1 + (d2/n1).*(0:n1) - (d1/n1).*(0:n1);
else
    y = d1 + (0:n1).*(d2 - d1)/n1;
end
To sum up: linspace and colon are reliable at doing different tasks. linspace tries to ensure (as the name suggests) linear spacing, whereas colon tries to ensure symmetry.
In your special case, as you create a vector of integers, there is no advantage to linspace (apart from usability), but when it comes to floating point delicate tasks, there may be.
The answer of Sam Roberts provides some additional information and clarifies further things, including some statements of MathWorks regarding the colon operator.

linspace and the colon operator do different things.
linspace creates a vector of integers of the specified length, and then scales it down to the specified interval with a division. In this way it ensures that the output vector is as linearly spaced as possible.
The colon operator adds increments to the starting point, and subtracts decrements from the end point to reach a middle point. In this way, it ensures that the output vector is as symmetric as possible.
The two methods thus have different aims, and will often give very slightly different answers, e.g.
>> a = 0:pi/1000:10*pi;
>> b = linspace(0,10*pi,10001);
>> all(a==b)
ans =
0
>> max(a-b)
ans =
3.5527e-15
In practice, however, the differences will often have little impact unless you are interested in tiny numerical details. I find linspace more convenient when the number of gaps is easy to express, whereas I find the colon operator more convenient when the increment is easy to express.
See this MathWorks technical note for more detail on the algorithm behind the colon operator. For more detail on linspace, you can just type edit linspace to see exactly what it does.

linspace is useful where you know the number of elements you want, rather than the size of the "step" between them. So if I said "make a vector with 360 elements between 0 and 2*pi" as a contrived example, it's either going to be
linspace(0, 2*pi, 360)
or if you just had the colon operator you would have to manually calculate the step size:
0:(2*pi - 0)/(360-1):2*pi
linspace is just more convenient.
For a simple real-world application, see this answer where linspace is helpful in creating a custom colour map.

Related

Creating a logarithmic spaced array in IDL

I was looking for a way to generate a logarithmically spaced array in IDL.
On the L3 Harris Geospatial website I came across arrgen and was trying to use it for this purpose. However,
arrgen(1,215,/log)
returns the error: Variable is undefined: ARRGEN.
What would be the correct way to do it?
Thanks in advance for your help
Start by defining your lower and upper bounds in whichever log base you prefer. I will use base e for brevity's sake.
lowe = ALOG(low[0])
uppe = ALOG(upp[0])
where low and upp are scalar, numerical values you, the user, define (e.g., 1 and 215 in your example). Then construct an evenly spaced array of n elements, such as:
dinde = DINDGEN(n[0])*(uppe[0] - lowe[0])/(n[0] - 1L) + lowe[0]
where n is a scalar integer. Now convert back to linear space to get:
dind = EXP(dinde)
This will be a logarithmically spaced array. If you want to use a base-10 log, then substitute ALOG10 for ALOG. If you need another base, then you can use the logarithmic change-of-base rule, given by:
log_b(x) = log_c(x) / log_c(b)
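For comparison, the same recipe (take the log of the bounds, space linearly, exponentiate) expressed as a minimal MATLAB sketch (my addition, not part of the IDL answer; the bounds 1 and 215 come from the question, n = 50 is an arbitrary choice):
low = 1; upp = 215; n = 50;
dind = exp(linspace(log(low), log(upp), n));    % base-e version
dind10 = logspace(log10(low), log10(upp), n);   % equivalent, via base-10 logspace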

Matlab: average each element in 2D array based on neighbors [duplicate]

I've written code to smooth an image using a 3x3 averaging filter. However, the output is strange: it is almost all black. Here's my code.
function [filtered_img] = average_filter(noisy_img)
[m, n] = size(noisy_img);
filtered_img = zeros(m, n);
for i = 1:m-2
    for j = 1:n-2
        sum = 0;
        for k = i:i+2
            for l = j:j+2
                sum = sum + noisy_img(k, l);
            end
        end
        filtered_img(i+1, j+1) = sum/9.0;
    end
end
end
I call the function as follows:
img=imread('img.bmp');
filtered = average_filter(img);
imshow(uint8(filtered));
I can't see anything wrong in the code logic so far; I'd appreciate it if someone could spot the problem.
Assuming you're working with grayscale images, you should replace the inner two for loops with:
filtered_img(i+1,j+1) = mean2(noisy_img(i:i+2,j:j+2));
Does it change anything?
EDIT: don't forget to reconvert it to uint8!!
filtered_img = uint8(filtered_img);
Edit 2: the reason it's not working in your code is that sum saturates at 255, the upper limit of uint8 (adding a uint8 value to a double keeps the result uint8). mean2 works in double precision, which prevents that from happening.
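In other words, a minimal fix for the original loop version (a sketch of my own, based on that diagnosis) is to cast the image to double before accumulating:
img = imread('img.bmp');
filtered = average_filter(double(img)); % the running sum can now exceed 255
imshow(uint8(filtered));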
Another option:
f = @(x) mean(x(:));
filtered_img = nlfilter(noisy_img,[3 3],f);
img = imread('img.bmp');
filtered = imfilter(double(img), ones(3) / 9, 'replicate');
imshow(uint8(filtered));
Implement a neighbourhood operation (a sum-of-products operation between an image and a 3x3 filter), where the filter is an averaging filter.
Then use the same function/code to compute the Laplacian (a second order derivative) and the Prewitt and Sobel operators (first order derivatives).
Use a simple 10x10 matrix to perform these operations.
I need MATLAB code.
Tangentially to the question:
Especially for a 5x5 or larger window, you can consider averaging first in one direction and then in the other, saving some operations. So the point at index 3 would be (P1+P2+P3+P4+P5) and the point at index 4 would be (P2+P3+P4+P5+P6), each divided by 5 in the end. The point at index 4 can therefore be calculated as P3new + P6 - P1, and so on for point 5 and onwards. Repeat the same procedure in the other direction. A sketch of this separable approach appears below.
Make sure to divide first, then sum.
I would need to time this, but I believe it could work a bit faster for larger windows. It is sequential per line, which might not seem ideal, but you have many lines that can be worked on in parallel, so it shouldn't be a problem.
Dividing first and then summing also prevents saturation if you have integers, so you might use the approach even in the 3x3 case, as it is less wrong (though slower) to divide twice by 3 than once by 9. But note that you will always underestimate the final value that way, so you might as well add a bit of bias (say, +1 to all values between the steps).
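Here is a minimal MATLAB sketch of the separable running-sum idea (my own illustration with made-up names, not tested for speed; it works in double to sidestep the saturation discussed above, and uses padarray from the Image Processing Toolbox):
function out = box_filter_separable(img, w)
    % w is an odd window width; filter along rows, then along columns
    img = double(img);  % avoid integer saturation
    out = running_mean(running_mean(img, w).', w).';
end
function out = running_mean(img, w)
    % cumulative sums give each horizontal window sum in O(1) per pixel
    k = (w - 1)/2;
    padded = padarray(img, [0 k], 'replicate');
    c = cumsum(padded, 2);
    out = (c(:, w:end) - [zeros(size(c, 1), 1), c(:, 1:end-w)]) / w;
end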
img = imread('cameraman.tif');
nsy_img = imnoise(img, 'salt & pepper', 0.2);
imshow(nsy_img);
h = ones(3,3)/9;
avg = conv2(double(nsy_img), h, 'same');
imshow(uint8(avg));

How to efficiently evaluate or approximate a road Clothoid?

I'm facing the problem of computing values of a clothoid in C in real-time.
First I tried using the Matlab Coder to obtain auto-generated C code for the quadgk integrator for the Fresnel formulas. This essentially works great in my test scenarios. The only issue is that it runs incredibly slowly (in Matlab as well as in the auto-generated code).
Another option was interpolating a data-table of the unit clothoid connecting the sample points via straight lines (linear interpolation). I gave up after I found out that for only small changes in curvature (tiny steps along the clothoid) the results were obviously degrading to lines. What a surprise...
I know that circles may be plotted using a different formula, but low changes in curvature are often encountered in real-world scenarios, and 30k sampling points between the headings 0° and 360° didn't provide enough angular resolution for my problems.
Then I tried a Taylor approximation around the R = inf point, hoping that there would be significant curvature everywhere I wanted it. I soon realized I couldn't use more than 4 terms (up to the power of 15), as the polynomial otherwise quickly becomes unstable (probably due to numerical inaccuracies in double precision fp computation). Thus accuracy obviously degrades quickly for large t values, and by "large t values" I'm talking about every point on the clothoid that represents a curve of more than 90° w.r.t. the zero curvature point.
For instance when evaluating a road that goes from R=150m to R=125m while making a 90° turn I'm way outside the region of valid approximation. Instead I'm in the range of 204.5° - 294.5° whereas my Taylor limit would be at around 90° of the unit clothoid.
I'm kinda done randomly trying out things now. I mean I could just try to spend time on the dozens of papers one finds on that topic. Or I could try to improve or combine some of the methods described above. Maybe there even exists an integrate function in Matlab that is compatible with the Coder and fast enough.
This problem is so fundamental that it feels like I shouldn't have this much trouble solving it. Any suggestions?
About the 4 terms in the Taylor series: you should be able to use many more. A total theta of 2*pi is certainly doable with doubles.
You're probably calculating each term in isolation, according to the full formula, computing full factorial and power values. That is the reason for losing precision extremely fast.
Instead, calculate the terms progressively, each one from the previous one: find the formula for the ratio of the next term over the previous one in the series, and use it (a sketch of this appears below).
For increased precision, do not calculate in theta but rather in the distance s (so as not to lose precision on scaling).
Your example is an extremely flat clothoid. If I made no mistake, it goes from (25/22) pi =~ 204.545° to (36/22) pi =~ 294.545° (why not include these details in your question?). Nevertheless it should be OK. Even 2 pi = 360°, the full circle (and twice that), should pose no problem.
Given: r = 150 -> 125, a 90 degree turn:
r*s = A^2 : 150*s = 125*(s+x)
=> 1 + x/s = 150/125 => x/s = 1/5
theta = s^2/(2*A^2) = s^2/(300*s) = s/300 = (pi/2)*(25/11) = 204.545°
theta2 = (s+x)^2/(2*A^2) = (6/5)^2 * s/300 = (pi/2)*(36/11) = 294.545°
theta2 - theta = (36/25 - 1)*s/300 = pi/2
=> s = 300*(pi/2)*(25/11) = 1070.99749554 ; x = s/5 = 214.1994991
A^2 = 150*s = 150*300*(pi/2)*(25/11)
a = sqrt(2*A^2) = 300*sqrt((pi/2)*(25/11)) = 566.83264608
The reference point is at r = Infinity, where theta = 0.
We have x = a * INT[u=0..(s/a)] cos(u^2) du, where a = sqrt(2*r*s) and theta = (s/a)^2. Write out the Taylor series for cos and integrate it term by term to get your Taylor approximation for x as a function of the distance s along the curve, from the 0-point. That's all.
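A minimal MATLAB sketch of that recipe (my own illustration, not the answerer's code; it sums the integrated cosine series progressively, each term obtained from the previous one via the term ratio, so no large powers or factorials are ever formed):
function x = clothoid_x(s, a)
    % x(s) = a * integral from 0 to s/a of cos(u^2) du, via Taylor terms
    t = s/a;   % upper integration limit
    term = t;  % k = 0 term of sum_k (-1)^k t^(4k+1) / ((4k+1)(2k)!)
    x = term;
    k = 0;
    while abs(term) > eps && k < 100
        % ratio of term k+1 to term k of the integrated cosine series
        term = term * (-t^4) * (4*k + 1) / ((4*k + 5)*(2*k + 1)*(2*k + 2));
        x = x + term;
        k = k + 1;
    end
    x = a*x;
end
The y coordinate follows the same pattern with the sine series, whose term ratio is derived the same way.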
Next you have to decide at what density to calculate your points along the clothoid. You can find it from a desired tolerance value above the chord, for your minimal radius of 125. These points will then define the approximation of the curve by line segments drawn between consecutive points.
I am doing my thesis in the same area right now.
My approach is the following.
At each point on your clothoid, calculate (change in heading) / (distance traveled along the clothoid); this simple formula gives you the curvature at each point.
Then plot the curvature values: the x-axis is the distance along the clothoid, the y-axis is the curvature. By plotting this and applying a very simple linear regression algorithm (search for a Peuker algorithm implementation in your language of choice),
you can easily identify the curve sections with a value of zero (a line has no curvature), with linearly increasing or decreasing values (a CCW/CW Euler spiral), or with a constant value != 0 (an arc has constant curvature across all its points).
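As a rough MATLAB sketch of that procedure (my own illustration; x and y are assumed to be vectors of sampled point coordinates along the path):
theta = atan2(diff(y), diff(x));           % heading of each segment
ds = hypot(diff(x), diff(y));              % distance along each segment
kappa = diff(unwrap(theta)) ./ ds(2:end);  % curvature ~ dtheta/ds
s = cumsum(ds);
plot(s(2:end), kappa); xlabel('distance along path'); ylabel('curvature');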
I hope this will help you a little bit.
You can find my code on GitHub. I implemented some algorithms for such problems, like the Peuker algorithm.

Draw imaginary numbers in matlab

I am trying to learn MATLAB.
I am trying to make a program that draws these imaginary numbers (the formula was given as an image, with "," as the decimal mark; judging from the answer below, it is (1 + 0.1i)^n)
and determines which of the 500 numbers is closest to the real axis.
I need a little guidance: what do I have to do to solve this task?
I was thinking about making a loop where all the values get stored in an array:
n = 1;
while n < 500
    value = 1 + 0.1^n;
    disp(value)
    n = n + 1;
end
(It seems like value is printing the wrong values? And how do I store them in an array?)
And then somehow determine which number is nearest the real axis, and display that value.
I would be really grateful if someone could help me.
Thanks in advance.
MATLAB creates imaginary numbers by appending an i or j term to the number. For example, if you wanted to create an imaginary number such that the real component was 1 and the imaginary component was 1, you would simply do:
>> A = 1 + i
A =
1.0000 + 1.0000i
You can see that there is a distinct real component as well as an imaginary component and is stored in A. Similarly, if you want to make the imaginary component have anything other than 1, you would need to add a constant in front of the i (or j). Something like:
>> A = 3 + 6i
A =
3.0000 + 6.0000i
Therefore, for your task, you simply need to create a vector of n from 1 to 500, input this into the above equation, then plot the resulting imaginary numbers. In this case, you would plot the real component on the x axis and the imaginary component on the y axis. Something like:
>> n = 1 : 500;
>> A = (1 + 0.1i).^n;
>> plot(real(A), imag(A));
real and imag are functions in MATLAB that access the real and imaginary components of complex numbers stored in arrays, matrices or single values. As noted by knedlsepp, you can simply plot the array itself as plot can handle complex-valued arrays:
>> plot(A);
Nice picture, by the way! Be mindful of the . prepended to the ^ operator: the . means an element-wise operation, i.e. we wish to apply the power operation for each value of n from 1 to 500, with 1 + 0.1i as the base. The result is a 500-element array with the resulting calculations. If we used ^ by itself, we would be asking for a matrix power operation, which is not the case here.
The values that you want to analyze for each value of n being applied to the equation in your post are stored in A. We then plot the real and imaginary components on the graph. Now if you want to find which numbers are closest to the real axis, you simply need to find the smallest absolute imaginary component of the numbers stored in A, then search for all of those numbers that share this number.
>> min_dist = min(abs(imag(A)));
>> vals = A(abs(imag(A)) == min_dist)
vals =
1.3681 - 0.0056i
This means that the value of 1.3681 - 0.0056i is the closest to the real axis.

Calculate a series up to five decimal places

I want to write a C program that will calculate a series:
1/x + 1/2*x^2 + 1/3*x^3 + 1/4*x^4 + ...
up to five decimal places.
The program will take x as input and print the f(x) (value of series) up to five decimal places. Can you help me?
For evaluating a polynomial, Horner form generally has better numerical stability than the expanded form. See http://reference.wolfram.com/legacy/v5/Add-onsLinks/StandardPackages/Algebra/Horner.html
If the first term was a typo for x, then try ((((1.0/4)*x + 1.0/3)*x + 1.0/2)*x + 1)*x
Else, if the first term is really 1/x: (((1.0/4)*x + 1.0/3)*x + 1.0/2)*x*x + 1.0/x
Of course, you still have to analyze convergence and numerical stability, as developed in Eric Postpischil's answer.
Last thing: does the series you submitted as an example really converge to a finite value for some x?
In order to know that the sum you have calculated is within a desired distance to the limit of the series, you need to demonstrate that the sources of error are less than the desired distance.
When evaluating a series numerically, there are two sources of error. One is the limitations of numerical calculation, such as floating-point rounding. The other is the sum of the remaining terms, which have not been added into the partial sum.
The numerical error depends on the calculations done. For each series you want to evaluate, a custom analysis of the error must be performed. For the sample series you show, a crude but sufficient bound on the numerical error could likely be calculated without too much effort. Is this the series you are primarily interested in, or are there others?
The sum of the remaining terms also requires a custom analysis. Often, given a series, we can find an expression that can be proven to be at least as large as the sum of all remaining terms but that is more easily calculated.
After you have established bounds on these two errors, you could sum terms of the series until the sum of the two bounds is less than the desired distance.
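As a minimal sketch of that procedure (my own, in MATLAB for consistency with the rest of this page; the same loop ports directly to C): assume the first term was meant to be x, so the series is the sum of x^n/n for n >= 1, which converges for |x| < 1. The tail after N terms is then bounded by the geometric estimate |x|^(N+1)/((N+1)*(1-|x|)). Note this bounds only the truncation error, not the rounding error discussed above:
function f = series_sum(x, tol)
    % sum x^n/n until the geometric tail bound drops below tol
    if nargin < 2, tol = 0.5e-5; end  % five decimal places
    assert(abs(x) < 1, 'the series diverges for |x| >= 1');
    f = 0; term = 1; n = 0;
    while true
        n = n + 1;
        term = term*x;  % term = x^n, built progressively
        f = f + term/n;
        tail_bound = abs(term*x) / ((n + 1)*(1 - abs(x)));
        if tail_bound < tol, break; end
    end
end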
