Using each element of a vector in a series of calculations - arrays

I am trying to write MATLAB code that uses each value of a vector in a calculation: run the same formula for every element of the vector and store each result in a new vector.
The goal is to calculate and plot the cost of constructing a water tank for a range of radius sizes. The tank consists of a cylindrical body with a hemispherical top, and its volume must be exactly 500 m^3. The cost is $400/m^2 of surface area for the hemispherical top and $300/m^2 of surface area for the cylindrical body. I know I need to use element-wise operators, but I am getting strange, unrealistic results, which leads me to believe I am using them incorrectly.
rTank = 2:0.5:10;
h = ((250./(pi.*rTank(:)))-((rTank(:).^2)./3));
cost = ((2*pi*400.*(rTank(:).^2))+(2*pi*h(:).*300.*rTank(:)));
plot(rTank,cost)
I am expecting a curve of all positive values between radii 2m and 10m, with positive values for cost. For some reason I am getting negative values for results, and according to the resulting plot, the cost of the water tank is free when the radius is 8m, which makes no sense.

Filter out h < 0. For radii above roughly 6.2 m the computed height h becomes negative, which makes the cylinder term of the cost negative and eventually drives the total cost through zero. Keep only the radii with a positive height before computing the cost:
rTank = 2:0.5:10;
h = 250./(pi*rTank)-1/3*rTank.^2;
good_h=h(h>0);
good_rTank=rTank(h>0);
cost = 2*pi*400*good_rTank.^2 + 2*pi*good_h*300.*good_rTank;
plot(good_rTank, cost)

Related

Interpolate 2D Array to single point in MATLAB

I have 3 graphs of an IV curve (a monotonically increasing function; think of a positive quadratic in the first quadrant, photo attached) at 3 different temperatures that are not spaced linearly: one is obtained at 25 C, one at 125 C and one at 150 C.
What I want to make is an interpolated 2D array to fill in the other temperatures. My current method to build a meshgrid-type array is as follows:
H = 5;
W = 6;
[Wmat,Hmat] = meshgrid(1:W,1:H);
X = [1:W; 1:W];
Y = [ones(1,W); H*ones(1,W)];
Z = [vecsatIE25; vecsatIE125];
img = griddata(X,Y,Z,Wmat,Hmat,'linear')
This works to build a 6x6 array, which I can then index one row from, then interpolate from that 1D array.
This is really not what I want to do.
For example, the rows are the temperatures 25 C, 50 C, 75 C, 100 C, 125 C and 150 C. So I must select a temperature of, say, 50 C when my temperature is actually 57.5 C. Then I can interpolate my I to get my V output. So again for example, my I is 113.2 A, and I can actually interpolate a value and get a V for 113.2 A.
When I take the attached photo and digitize the plot information, I get an array of points. So my goal is to input any Temperature and any current to get a voltage by interpolation. The type of interpolation is not as important, so long as it produces reasonable values - I do not want nearest neighbor interpolation, linear or something similar is preferred. If it is an option, I will try different kinds of interpolation later (cubic, linear).
I am not sure how I can accomplish this, ideally. The meshgrid array does not need to exist. I simply need the 1 value.
Thank you.
If I understand the question properly, I think what you're looking for is interp2:
Vq = interp2(X,Y,V,Xq,Yq) where Vq is the V you want, Xq and Yq are the temperature and current, and X, Y, and V are the input arrays for temperature, current, and voltage.
As an option, you can change the interpolation method between 'linear', 'nearest', 'cubic', 'makima', and 'spline'.
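For instance, here is a minimal sketch of that call; the variable names and the toy voltage data are my own placeholders, not the asker's digitized curves:
% Toy stand-in for the digitized IV curves: one row per temperature
Tvec = [25 125 150];                      % temperatures of the three measured curves
Ivec = linspace(0, 200, 50);              % common current grid the curves are resampled onto
Vgrid = (1 + Tvec(:)/500) * sqrt(Ivec);   % 3x50 placeholder voltage matrix
[Imat, Tmat] = meshgrid(Ivec, Tvec);      % grids matching Vgrid's layout
Vq = interp2(Imat, Tmat, Vgrid, 113.2, 57.5, 'linear')   % voltage at 113.2 A and 57.5 C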

MATLAB - get angle between arrays of points

For the sake of illumination analysis, based on this document, I am trying to determine three things for an array of lights and a series of points on a solid surface:
(Image key: big blue points are lights with illumination direction shown, small points are the points on my surface)
1) The distances between each of the lights and each of the points,
2) the angles between the direction each light is facing and the normal vectors of all of the points:
Note in this image I have replicated the normal vector and moved it to more clearly show the angle.
3) the angles between the direction each light is facing, and the vector from that light to all of the points on the solid:
Originally I had nested for loops iterating through all of the lights and points on the solid, but am now doing my best to do it in true MATLAB style with matrices:
I have found the distances between all the points with the pdist2 function, but have not managed to find a similar method to find the angles between the lights and all the points, nor the lights and the normal vectors of the points. I would prefer to do this with matrix methods rather than with iteration as I have been using.
My data is set out so that the columns of Lmat hold the x, y, z positions of my lights (one light per row) and Dmat gives the x, y, z direction of each light, so the combination of each row from both matrices fully defines a light and the direction it is facing. Similarly, Omega and nmat do the same for the points on the surface and their normals.
I am fairly sure that to get angles I want to do something along the lines of:
distMatrix = pdist2(Omega, Lmat);
LmatNew = zeros(numPoints, numLights, 3);
DmatNew = zeros(numPoints, numLights, 3);
OmegaNew = zeros(numPoints, numLights, 3);
nmatNew = zeros(numPoints, numLights, 3);
for i = 1:numLights
    LmatNew(:,i,1) = Lmat(i,1);
    LmatNew(:,i,2) = Lmat(i,2);
    LmatNew(:,i,3) = Lmat(i,3);
    DmatNew(:,i,1) = Dmat(i,1);
    DmatNew(:,i,2) = Dmat(i,2);
    DmatNew(:,i,3) = Dmat(i,3);
end
for j = 1:numPoints
    OmegaNew(j,:,1) = Omega(j,1);
    OmegaNew(j,:,2) = Omega(j,2);
    OmegaNew(j,:,3) = Omega(j,3);
    nmatNew(j,:,1) = nmat(j,1);
    nmatNew(j,:,2) = nmat(j,2);
    nmatNew(j,:,3) = nmat(j,3);
end
angleMatrix = -dot(LmatNew-OmegaNew, DmatNew, 3);
angleMatrix = atand(angleMatrix);
angleMatrix = angleMatrix.*(angleMatrix > 0);
But I am getting conceptually stuck trying to get my head around what to do after my dot product.
Am I on the right track? Is there an inbuilt angle equivalent of pdist2 that I am overlooking?
Thanks all for your help, and sorry for the paint images!
Context: This image shows my lights (big blue points), the directions the lights are facing (little black traces), and my model.
According to MathWorks, there is no built-in function to calculate the angle between vectors. However, you can use trigonometry to calculate the angles.
Inputs
Since you unfortunately didn't explain your input data in great detail, I'm going to assume that you have a matrix Lmat containing a location vector of a light source in each row and a matrix Dmat containing the directional vectors for the light sources, both of size n×3, where n is the number of light sources in your scene.
The matrices Omega and Nmat supposedly are of size m×3 and contain the location vectors and normal vectors of all m surface points. The desired results are the angles between all light direction vectors and surface normal vectors, of which there are n⋅m, and the angles between the light direction vectors and the vectors connecting each light to each point on the surface, of which there are n⋅m as well.
To get results for all combinations of light sources and surface points, the input matrices have to be replicated vertically so that every light row gets paired with every surface-point row (capture the original sizes before overwriting the matrices):
nLights = size(Lmat, 1);
nPoints = size(Omega, 1);
Lmat = repelem(Lmat, nPoints, 1);    % each light row repeated once per surface point
Dmat = repelem(Dmat, nPoints, 1);
Omega = repmat(Omega, nLights, 1);   % full point list repeated once per light
Nmat = repmat(Nmat, nLights, 1);
Using the inner product / dot product
The definition of the inner product of two vectors a and b is
a · b = |a| · |b| · cos(θ)
where θ is the angle between the two vectors. Reordering the equation yields
θ = acos( (a · b) / (|a| · |b|) )
You can therefore calculate the angles between your directional vectors Dmat and your normal vectors Nmat like this:
normProd = sqrt(sum(Dmat.^2,2)).*sqrt(sum(Nmat.^2,2));
anglesInDegrees = acos(dot(Dmat.',Nmat.')' ./ normProd) * 180 / pi;
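Note that dot(Dmat.',Nmat.')' takes the dot product of corresponding rows of Dmat and Nmat, so the result is one angle per light/point pair, i.e. an (n⋅m)-by-1 vector.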
To calculate the angles between the light-to-point vectors and the directional vectors, just replace Nmat with Omega - Lmat.
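Spelled out, that substitution would look something like the following sketch (same assumptions about the expanded matrices as above; the variable names are mine):
toPoint = Omega - Lmat;                                       % vectors from each light to each surface point
normProd = sqrt(sum(Dmat.^2,2)) .* sqrt(sum(toPoint.^2,2));
anglesToPoints = acos(dot(Dmat.',toPoint.')' ./ normProd) * 180 / pi;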
Using the vector product / cross product
It has been mentioned that the above method will have problems with accuracy for very small (θ ≈ 0°) or very large (θ ≈ 180°) angles. The suggested solution is calculating the angles using the cross product and the inner product.
The norm of the vector product of two vectors is
|a × b| = |a| · |b| · sin(θ)
You can combine this with the above definition of the inner product to get
|a × b| / (a · b) = sin(θ) / cos(θ) = tan(θ)
which can obviously be reordered to this:
θ = atan2(|a × b|, a · b)
The corresponding MATLAB code looks like this:
normCross = sqrt(sum(cross(Dmat,Nmat,2).^2,2));
anglesInDegrees = atan2(normCross,dot(Dmat.',Nmat.')') * 180/pi;
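As a quick sanity check (my own example, not part of the original answer), the angle between [1 0 0] and [1 1 0] should come out as 45 degrees:
Dmat = [1 0 0];
Nmat = [1 1 0];
normCross = sqrt(sum(cross(Dmat,Nmat,2).^2,2));
atan2(normCross, dot(Dmat.',Nmat.')') * 180/pi    % returns 45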

How to efficiently evaluate or approximate a road Clothoid?

I'm facing the problem of computing values of a clothoid in C in real-time.
First I tried using the MATLAB Coder to obtain auto-generated C code for the quadgk integrator applied to the Fresnel formulas. This essentially works great in my test scenarios. The only issue is that it runs incredibly slowly (in MATLAB as well as in the auto-generated code).
Another option was interpolating a data-table of the unit clothoid connecting the sample points via straight lines (linear interpolation). I gave up after I found out that for only small changes in curvature (tiny steps along the clothoid) the results were obviously degrading to lines. What a surprise...
I know that circles may be plotted using a different formula, but small changes in curvature are often encountered in real-world scenarios, and 30k sampling points between the headings 0° and 360° didn't provide enough angular resolution for my problems.
Then I tried a Taylor approximation around the R = inf point, hoping that there would be significant curvature everywhere I wanted it to be. I soon realized I couldn't use more than 4 terms (power of 15), as the polynomial otherwise quickly becomes unstable (probably due to numerical inaccuracies in double-precision floating-point computation). Thus accuracy obviously degrades quickly for large t values. And by "large t values" I mean every point on the clothoid that represents a turn of more than 90° w.r.t. the zero-curvature point.
For instance when evaluating a road that goes from R=150m to R=125m while making a 90° turn I'm way outside the region of valid approximation. Instead I'm in the range of 204.5° - 294.5° whereas my Taylor limit would be at around 90° of the unit clothoid.
I'm kinda done randomly trying out things now. I mean I could just try to spend time on the dozens of papers one finds on that topic. Or I could try to improve or combine some of the methods described above. Maybe there even exists an integrate function in Matlab that is compatible with the Coder and fast enough.
This problem is so fundamental that it feels like I shouldn't have this much trouble solving it. Any suggestions?
About the 4 terms in the Taylor series: you should be able to use many more. A total theta of 2*pi is certainly doable with doubles.
You are probably calculating each term in isolation, according to the full formula, evaluating the full factorial and power values. That is the reason for losing precision so fast.
Instead, calculate the terms progressively, each one from the previous one: find the formula for the ratio of the next term over the previous one in the series, and use it (see the sketch below).
For increased precision, do not calculate in theta but rather in the distance s (so as not to lose precision on the scaling).
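A minimal sketch of that progressive-term idea (my own illustration in MATLAB, not the answerer's code). It evaluates C(t) = INT[u=0..t] cos(u^2) du, so that x = a * C(s/a); each Taylor term is obtained from the previous one via the ratio -t^4 * (4k+1) / ((2k+1)(2k+2)(4k+5)):
function C = fresnelC_taylor(t)
    term = t;      % k = 0 term of the series: t^1 / (0! * 1)
    C = term;
    for k = 0:30
        % next term from the previous one, no explicit factorials or powers
        term = term * (-t^4) * (4*k + 1) / ((2*k + 1) * (2*k + 2) * (4*k + 5));
        C = C + term;
        if abs(term) < 1e-16 * abs(C)   % converged to double precision
            break
        end
    end
end
The sine series for the y coordinate can be handled the same way.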
Your example is an extremely flat clothoid. If I made no mistake, it goes from (25/22)*pi ≈ 204.545° to (36/22)*pi ≈ 294.545° (why not include these details in your question?). Nevertheless it should be OK. Even 2*pi = 360°, the full circle (and twice that), should pose no problem.
given: r = 150 -> 125, 90 degree turn:
r * s = A^2, so 150 * s = 125 * (s + x)
=> 1 + (x/s) = 150/125 = 1 + 25/125 => x/s = 1/5
theta  = s^2 / (2 A^2) = s^2 / (300 s) = s / 300 ; = (pi/2) * (25/11) = 204.545°
theta2 = (s + x)^2 / (2 A^2) = (6/5)^2 * s / 300 ; = (pi/2) * (36/11) = 294.545°
theta2 - theta = (36/25 - 1) * s / 300 == pi/2
=> s = 300 * (pi/2) * (25/11) = 1070.99749554,   x = s/5 = 214.1994991
A^2 = 150 * s = 150 * 300 * (pi/2) * (25/11)
a = sqrt(2 A^2) = 300 * sqrt((pi/2) * (25/11)) = 566.83264608
The reference point is at r = Infinity, where theta = 0.
We have x = a * INT[u=0..(s/a)] cos(u^2) du, where a = sqrt(2 r s) and theta = (s/a)^2. Write out the Taylor series for cos, and integrate it term-by-term, to get your Taylor approximation for x as a function of the distance s along the curve from the zero-curvature point. That's all.
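For reference, integrating the cosine series term-by-term gives, with t = s/a,
x = a * ( t - t^5/(2!*5) + t^9/(4!*9) - t^13/(6!*13) + ... )
and the analogous sine series gives y.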
Next you have to decide with what density to calculate your points along the clothoid. You can derive it from a desired tolerance above the chord (the sagitta) for your minimal radius of 125. These points then define the approximation of the curve by line segments drawn between consecutive points.
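As a rough illustration of that step (my own sketch; tol stands for the allowed chord-to-arc deviation), the sagitta of a chord of length L on a circle of radius R is about L^2/(8R), so:
tol  = 0.01;                  % allowed deviation of the chord from the arc, e.g. 1 cm
Rmin = 125;                   % tightest radius on this clothoid
ds   = sqrt(8 * Rmin * tol);  % spacing of the sample points along the curve, ~3.2 m here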
I am doing my thesis in the same area right now.
My approach is the following.
At each point on your clothoid, calculate the change in heading divided by the distance traveled along the clothoid; with this simple formula you can calculate the curvature at each point.
Then plot each curvature value: your x-axis is the distance along the clothoid, the y-axis is the curvature. By plotting this and applying a very simple linear regression / line-simplification algorithm (search for a Peuker algorithm implementation in your language of choice)
you can easily identify where the curve sections have a value of zero (a line has no curvature), are linearly increasing or decreasing (an Euler spiral, CCW/CW), or have a constant value != 0 (an arc has constant curvature across all of its points).
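A minimal sketch of that curvature-profile idea (my own illustration, not the answerer's code; the sampled path here is a synthetic stand-in):
t = linspace(0, 3, 200);                     % placeholder path: a crude Euler-spiral-like curve
x = cumsum(cos(t.^2)) * (t(2) - t(1));
y = cumsum(sin(t.^2)) * (t(2) - t(1));
heading = unwrap(atan2(diff(y), diff(x)));   % heading of each segment
ds      = hypot(diff(x), diff(y));           % distance traveled per segment
kappa   = diff(heading) ./ ds(2:end);        % curvature = d(heading) / d(distance)
sdist   = cumsum(ds);
plot(sdist(2:end), kappa)                    % line: ~0, arc: constant, spiral: linear ramp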
I hope this will help you a little bit.
You can find my code on GitHub. I implemented some algorithms for problems like this, such as the Peuker algorithm.

Why allowing diagonal movement would make the A* and Manhattan Distance inadmissible?

I'm slightly confused about diagonal movement in a grid when using A* with the Manhattan distance metric. Can someone explain why allowing diagonal movement makes it inadmissible? Wouldn't diagonal movement find a better optimal solution, i.e. take fewer steps to reach the goal state than up/down/left/right alone, or am I missing something?
As beaker's comment notes, Manhattan distance will overestimate the distance between a state and the states diagonally accessible to it. By definition, a heuristic that overestimates distances is not admissible.
Now, why exactly is this so?
Lets assume your Manhattan Distance procedure looks something like this:
function manhattan_dist(state):
    y_dist = abs(state.y - goal.y)
    x_dist = abs(state.x - goal.x)
    return (y_dist + x_dist)
Now, consider applying that procedure to the state (1,1), and assume the goal is at (3,3). The procedure returns 4, which overestimates the actual distance of 2 (two diagonal steps: (1,1) -> (2,2) -> (3,3)). Therefore, Manhattan distance will not work as an admissible heuristic in this situation.
On game boards that allow for diagonal movement Chebyshev Distance is typically used instead. Why?
Consider this new procedure:
function chebyshev_dist(state):
    y_dist = abs(state.y - goal.y)
    x_dist = abs(state.x - goal.x)
    return max(y_dist, x_dist)
Returning to the previous example of (1,1) and (3,3), this procedure will return the value of 2, which is indeed not an overestimation of the actual distance.
While this topic is older, I would like to add a different solution, one that uses the length of the actual shortest unobstructed path to the goal when diagonal movement is allowed.
function heuristic(state):
    delta_x = abs(state.x - goal.x)
    delta_y = abs(state.y - goal.y)
    return min(delta_x, delta_y) * sqrt(2) + abs(delta_x - delta_y)
This heuristic moves the maximum possible amount diagonally and covers the remainder in a straight line to the goal, giving the largest possible estimate that does not overestimate the movement cost to the goal.
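For the earlier (1,1)-to-(3,3) example, delta_x = delta_y = 2, so this heuristic returns 2*sqrt(2) ≈ 2.83, which is exactly the cost of the optimal path of two diagonal moves when a diagonal step costs sqrt(2).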

Uniformly sampling on hyperplanes

Given the vector size N, I want to generate a vector <s1, s2, ..., sn> such that s1 + s2 + ... + sn = S.
It is known that 0 < S < 1 and each si < S. The generated vectors should also be uniformly distributed.
Any code in C that helps explain would be great!
The code here seems to do the trick, though it's rather complex.
I would probably settle for a simpler rejection-based algorithm: pick an orthonormal basis of the n-dimensional space whose first vector is the hyperplane's normal. Transform each of the points (S,0,0,0..0), (0,S,0,0..0), ... into that basis and store the minimum and maximum along each of the basis vectors. Sample each component uniformly in the new basis, except for the first one (along the normal vector), which is always S, then transform back to the original space and check whether the constraints are satisfied. If they are not, sample again.
P.S. I think this is more of a maths question, actually, could be a good idea to ask at http://maths.stackexchange.com or http://stats.stackexchange.com
[I'll skip the "hyper-" prefix for simplicity]
One possible idea: generate many uniformly distributed points in some enclosing volume and project them onto the target part of the plane.
To get a uniform distribution, the volume must be shaped like that part of the plane, but with added margins along the plane normal.
To generate points uniformly in such a volume, we can enclose it in a cube and reject everything outside of the volume:
1) Select a margin M; let's take M = S for simplicity (as long as the margin is positive it only affects performance).
2) Generate a point in the cube [-M, S+M] x [-M, S+M] x [-M, S+M].
3) If the distance from the point to the plane is more than M, reject the point and go to #2.
4) Project the point onto the plane.
5) Check that the projection falls into [0,S] x [0,S] x [0,S]; if not, reject it and go to #2.
6) Add this point to the resulting set and go to #2 if you need more points.
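A minimal MATLAB/Octave sketch of this procedure for the 3-D case (the variable names and the value of S are my own placeholders):
S = 0.8;            % target sum (placeholder value)
M = S;              % margin along the plane normal
nwant = 1000;       % number of samples to keep
pts = zeros(3, nwant);
k = 0;
while k < nwant
    q = -M + (S + 2*M) * rand(3, 1);            % step 2: point in the cube [-M, S+M]^3
    d = (sum(q) - S) / sqrt(3);                 % signed distance to the plane x1+x2+x3 = S
    if abs(d) > M, continue, end                % step 3: too far from the plane, reject
    p = q - d / sqrt(3) * ones(3, 1);           % step 4: project onto the plane
    if any(p < 0) || any(p > S), continue, end  % step 5: outside [0,S]^3, reject
    k = k + 1;
    pts(:, k) = p;                              % step 6: keep the sample
end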
The problem can be mapped to that of sampling on linear polytopes, for which the common approaches are Monte Carlo methods, random walks, and hit-and-run methods (see https://www.jmlr.org/papers/volume19/18-158/18-158.pdf for examples and a short comparison). It is related to linear programming and can be extended to manifolds.
There is also the analysis of polytopes in compositional data analysis, e.g. https://link.springer.com/content/pdf/10.1023/A:1023818214614.pdf, which provides an invertible transformation between the plane and the polytope that can be used for sampling.
If you are working in low dimensions, you can also use rejection sampling: first sample on the plane containing the polytope (defined by your inequalities), then discard the samples that violate the constraints. This latter method is easy to implement (and wasteful, of course); the GNU Octave code below is an example (I'll let the author of the question re-implement it in C).
The first requirement is a vector orthogonal to the hyperplane. For a sum of N variables this is n = (1, ..., 1). The second requirement is a point on the plane; for your example that could be p = (S, ..., S)/N.
Now any point x on the plane satisfies n^T * (x - p) = 0.
We also assume that x_i >= 0.
With these given, you compute an orthonormal basis of the plane (the null space of the vector n) and then create random combinations of that basis. Finally you map back to the original space and apply your constraints to the generated samples.
# Example in 3D
dim = 3;
S = 1;
n = ones(dim, 1); # perpendicular vector
p = S * ones(dim, 1) / dim;
# null-space of the perpendicular vector (transposed, i.e. row vector)
# this generates a basis in the plane
V = null (n.');
# These steps are just to reduce the amount of samples that are rejected
# we build a tight bounding box
bb = S * eye(dim); # each column is a corner of the constrained region
# project on the null-space
w_bb = V \ (bb - repmat(p, 1, dim));
wmin = min (w_bb(:));
wmax = max (w_bb(:));
# random combinations and map back
nsamples = 1e3;
w = wmin + (wmax - wmin) * rand(dim - 1, nsamples);
x = V * w + p;
# mask the points inside the polytope
msk = true(1, nsamples);
for i = 1:dim
msk &= (x(i,:) >= 0);
endfor
x_in = x(:, msk); # inside the polytope (your samples)
x_out = x(:, !msk); # outside the polytope
# plot the results
scatter3 (x(1,:), x(2,:), x(3,:), 8, double(msk), 'filled');
hold on
plot3(bb(1,:), bb(2,:), bb(3,:), 'xr')
axis image
