Using AI to predict parameter in a function - artificial-intelligence

Say I have a dataset of intensity versus wavelength for each of the curves in the picture, and I performed curve fitting with a Gaussian function on each curve to extract its parameters: peak amplitude, center, and FWHM.
I want to apply an AI method to predict Imax (maximum intensity), center, and FWHM for another curve at a different temperature. Say I want to predict Imax, center, and FWHM at 1 °C: is it possible to do so? Which method should I try, and how should I format the input features and their labels? Thank you.
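One possible formulation (a sketch, not from the original thread): use temperature as the single input feature and treat each fitted Gaussian parameter (Imax, center, FWHM) as its own label, then fit one regression per parameter and evaluate it at 1 °C. Below is a minimal C sketch using an ordinary least-squares line fit; the data values, names, and the choice of a linear model are all illustrative assumptions.

#include <stdio.h>

/* Hypothetical training data: temperature (deg C) as the input feature
   and one fitted Gaussian parameter (here Imax) as the label. */
static const double temp_c[] = { 10.0, 20.0, 30.0, 40.0, 50.0 };
static const double imax[]   = { 1.90, 1.75, 1.62, 1.50, 1.41 };
static const int n = 5;

/* Ordinary least-squares fit of y = a + b*x. */
static void fit_line(const double *x, const double *y, int n, double *a, double *b)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += x[i];        sy  += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    *b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    *a = (sy - *b * sx) / n;
}

int main(void)
{
    double a, b;
    fit_line(temp_c, imax, n, &a, &b);
    printf("predicted Imax at 1 C: %f\n", a + b * 1.0);  /* extrapolation to 1 deg C */
    return 0;
}

The same fit would be repeated for the center and FWHM labels; with more training temperatures or a clearly non-linear trend, a polynomial fit, Gaussian-process regression, or a small neural network would be natural alternatives.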

Related

Plotting a frequency response from biquad filter

This is a hard one. Although I can think of a few kludge methods of doing it, I have a feeling there is a clean mathematical method, even if I am having difficulty inventing it myself.
I have a number of parameters which control (software) biquad filters for audio. Essentially there are just 3 parameters: frequency, gain and Q (or bandwidth). In audio terms, the frequency represents the center frequency of the filter. The gain represents whether this frequency is boosted or cut (a gain of 0 results in no change to the audio passing through the filter). Q represents the width of the filter - i.e. a very wide filter might affect frequencies far away from the center frequency, whereas a narrow (low Q) filter will only affect frequencies close to the center frequency. The filters take the form of a bell curve, or at least that's an approximation; whether it's mathematically accurate I am not sure.
I want to display the characteristics of these filters graphically - display a graph of gain against frequency. There are several of these filters applied to the audio channel, and I want to be able to add the different result graphs to produce an overall graph (i.e. a graph summing all the components of the combined filters). But I also want to be able to access the individual filters' graphs.
I can handle adding the component graphs into a single 'total' graph, but how to produce the original x-y graph from the filter parameters escapes me. I will draw bitmaps, so all I need is to be able to create arrays of the form frequency[x]=y. I'm doing this in C, so I don't have the mathematical tools of MATLAB etc. So I might have a filter with a center frequency of say 1000 (Hz), a gain of say 20 (dB or linear, I understand how to convert that), and a Q of say 3. The Q factor is relative and does not have to be exactly mathematically correct if that causes any complication.
It seems like quite a simple mathematical function, but maths is not my strong point and I don't know enough - I have been messing around with sine functions etc. but it's not working, and I suspect it is probably wasting processing power by over-complicating the maths (although I might be wrong there).
TIA, Pete
I have my doubts about the relationships between biquad filters, Q values, and bell curves. But I'll put those aside and just tell you how to draw a bell curve, since that's what you asked.
From this Wikipedia article, the equation for a bell curve is
f(x) = a * exp( -(x - b)^2 / (2*c^2) ) + d
where for your application
a corresponds to the gain
b determines the center frequency
2c^2 is related to Q (larger values will make the curve wider)
d sets the baseline (the y value the curve settles to far away from the peak)
The C code below computes a sample bell curve. For this example, the numbers were chosen based on drawing into a window that is 250 pixels wide by 200 pixels high, with a coordinate system where the origin {0,0} is at the bottom left corner.
#include <math.h>  // for exp()

// Inside your drawing function (bellCurve is a C99 variable-length array):
int width = 250;
int height = 200;
int bellCurve[width];  // the output array that holds the f(x) values
double gain = 180;     // the 'a' value, determines how high the peak is from the baseline
double offset = 10;    // the 'd' value, determines the y coordinate of the baseline
double qFactor = 1000; // the '2c^2' value, determines how fat the curve is
double center = 100;   // the 'b' value, determines the x coordinate of the peak
double dx;
for ( int x = 0; x < width; x++ )
{
    dx = x - center;
    bellCurve[x] = (int)( gain * exp( -( dx * dx ) / qFactor ) + offset );
}
Plotting the curve results in an image like this where the peak is at x=100, y=10+180=190
You could input a unit impulse (an array of all zeros, except one element=1.0) into your digital filters, treating them as black boxes. Then FFT the impulse response output array to get the frequency response of the filter. Plotting the magnitude of the complex frequency samples will give you a pretty picture. Python+numpy+matplotlib would probably be an easier way to go about it. You will need to know the sampling period to get meaningful plots.
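As a rough sketch of that black-box idea in C: the direct form I difference equation below is standard, but the coefficient values are placeholders, and the FFT step itself would come from a library such as FFTW.

#include <stdio.h>

#define N 512  /* number of impulse-response samples to collect */

int main(void)
{
    /* Placeholder biquad coefficients - substitute the ones your filters use. */
    double b0 = 1.0, b1 = 0.0, b2 = 0.0, a1 = 0.0, a2 = 0.0;

    double xn1 = 0, xn2 = 0, yn1 = 0, yn2 = 0;  /* filter state */
    double h[N];                                /* impulse response */

    for (int n = 0; n < N; n++) {
        double x = (n == 0) ? 1.0 : 0.0;        /* unit impulse */
        double y = b0*x + b1*xn1 + b2*xn2 - a1*yn1 - a2*yn2;  /* direct form I */
        xn2 = xn1; xn1 = x;
        yn2 = yn1; yn1 = y;
        h[n] = y;  /* FFT this array and plot the magnitudes */
    }

    printf("h[0] = %f\n", h[0]);
    return 0;
}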
What you really want is the Bode plot of the filter. This is non-trivial to calculate yourself, and a cursory search for a library to do it for you in C yielded nothing. If accuracy is not important, you can approximate the shape once and stretch it based on the parameters of the particular filter. For example, you might have a normalized array of relative values and construct a new curve (array) based on the parameters of the filter and the base curve you generated earlier.
The base curve could be generated in MATLAB if you have it, or Wolfram Alpha, or something like that.
Here is one in JavaScript:
http://www.earlevel.com/main/2013/10/13/biquad-calculator-v2/
The filter you describe is the 'peak' filter. Use the log scale to display frequencies.
—Tom

converting distance into coordinates [duplicate]

How do I convert a given distance (in metres) into geographic coordinates?
What I need is to draw a polygon on the map (a regular polygon, approximating a circle); however, its radius is given in meters, so how do I calculate the radius in degrees or radians?
I'll probably code it in C.
Oh man, geographic coordinates can be a pain in the behind. First of all, I'm assuming that by geographic coordinates, you're talking about geodetic coordinates (lat/lon).
Second of all, you can't find a "radius" in radians or degrees. Why, you ask? Well, one degree of longitude at the equator is WAY longer than one degree of longitude close to the north or south pole. The arc of one degree latitude also changes based on your location on the earth since the earth is not a perfect sphere. It's usually modeled as an ellipsoid.
That being said, here are two ways to map the coordinates of a polygon onto lat-lon coordinates:
1) If you're feeling like a complete badass, you can do the math in lat-lon. Lots of trig, easy to make mistakes... DON'T DO IT. I'm just including this option here to let you know that it is possible.
2) Convert your geodetic coordinates to UTM. Then, you can do whatever you need to do in meters (i.e. find the vertices of a polygon), and then convert the resulting UTM back to geodetic. Personally, I think this is the way to go.
Well, consider that at the equator (0 degrees latitude) one degree of longitude is equal to approximately 60 nautical miles. At either pole (90 degrees latitude) a single degree of longitude equals 0 nautical miles. As I remember, the cosine of the latitude times 60 will give you the approximate distance in nautical miles, at that latitude, of a single degree of longitude.
However, how accurate you would be depends on the map projection you're using. For aeronautical maps, they use the Lambert Conformal Conic projection, which means distances are only exactly accurate along the two latitudes where the cone cuts the sphere of the earth. But if an approximation is good enough, you may not need the accuracy.
For conversion, one nautical mile equals 1.852 km. If I did the arithmetic properly (no guarantee, I'm in my 70s), that means that a meter equals (except as you get really close to the poles) about 0.000009 degrees of latitude. It also equals about 0.000009 degrees of longitude on the equator. If you're not at the equator, divide the 0.000009 by the cosine of the latitude to get the degrees of longitude.
So, a 1000 meter radius circle at 45 degrees latitude would mean a radius of 0.009 degrees latitude and 0.009/0.707 degrees longitude. Approximately, of course.
All this is from memory, so take it with a grain of salt. If you really want to get involved, Google geographic equations or some such.
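A minimal C sketch of that approximation (the constant and the function name are mine; a degree of latitude is taken as 60 nautical miles = 111,120 m):

#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846
#define METERS_PER_DEGREE 111120.0  /* 60 nautical miles * 1852 m */

/* Approximate a radius given in meters as degrees of latitude and
   longitude at a given latitude (in degrees). */
static void radius_to_degrees(double meters, double lat_deg, double *dlat, double *dlon)
{
    *dlat = meters / METERS_PER_DEGREE;
    *dlon = *dlat / cos(lat_deg * PI / 180.0);
}

int main(void)
{
    double dlat, dlon;
    radius_to_degrees(1000.0, 45.0, &dlat, &dlon);
    printf("1000 m at 45 deg: %.6f deg lat, %.6f deg lon\n", dlat, dlon);
    return 0;
}

As the answers here note, this breaks down near the poles and ignores the map projection; for accurate work, converting to UTM or using a geodesic library is the safer route.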
Check out http://trac.osgeo.org/proj/wiki/GeodesicCalculations. Depending on the accuracy you need, this can get pretty complicated, so you're probably best off starting from some existing code.

Compact representation of OpenGL modelview matrix 4x4

What is the most user friendly way to store only the rotation part of an OGL modelview (4x4) matrix?
For example, in a level editor, to set the rotation for an object it would be easy to use the XYZ Euler angles. However, this seems a very tricky system to use with matrices.
I need to be able to get AND set the rotation from this new representation.
(The alternative is to store the rotation part (4*3 numbers) but it is hard for a user to manipulate these)
I found some code here http://www.google.com/codesearch/p?hl=en#HQY9Wd_snmY/mesh/matrix3.h&q=matrix3&sa=N&cd=1&ct=rc that allows me to set and get rotation from angles (3 floats). This is ideal.
Although they're used regularly, I disregard the use of Euler angles. They're problematic as they only preserve the pointing direction of the object, but not the bitangent to that direction. More importantly, they're prone to gimbal lock http://en.wikipedia.org/wiki/Gimbal_lock
A far superior method for storing rotations is quaternions. In layman's terms, a quaternion consists of the rotational axis and the angle of rotation around this axis. It is thus a tuple of 4 scalars a,b,c,d. The quaternion is then Q = a + i*b + j*c + k*d, |Q| = 1, with the special properties of i,j,k that i² = j² = k² = i·j·k = -1 and i·j = k, j·k = i, k·i = j, which implies j·i = -k, k·j = -i, i·k = -j.
Quaternions are thus extensions of complex numbers. If you recall complex number theory, you'll remember that multiplying by a unit complex number (|b| = 1) rotates points in the complex plane. It is thus natural to expect that rotations in 3D can be described by a higher-dimensional extension of complex numbers. This is what quaternions are.
See this article on the details.
http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
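As a concrete illustration of those product rules, here is a minimal C sketch of quaternion multiplication in the a + i*b + j*c + k*d layout used above (the struct and function names are mine):

#include <stdio.h>

/* Quaternion Q = a + i*b + j*c + k*d */
typedef struct { double a, b, c, d; } quat;

/* Hamilton product, expanded from i*i = j*j = k*k = -1,
   i*j = k, j*k = i, k*i = j (and the sign-flipped reversals). */
static quat quat_mul(quat p, quat q)
{
    quat r;
    r.a = p.a*q.a - p.b*q.b - p.c*q.c - p.d*q.d;
    r.b = p.a*q.b + p.b*q.a + p.c*q.d - p.d*q.c;
    r.c = p.a*q.c - p.b*q.d + p.c*q.a + p.d*q.b;
    r.d = p.a*q.d + p.b*q.c - p.c*q.b + p.d*q.a;
    return r;
}

int main(void)
{
    quat i = { 0, 1, 0, 0 }, j = { 0, 0, 1, 0 };
    quat k = quat_mul(i, j);  /* i*j should give k = (0, 0, 0, 1) */
    printf("%g %g %g %g\n", k.a, k.b, k.c, k.d);
    return 0;
}

Composing two rotations is then just one quat_mul call, which is part of why quaternions are convenient to store and interpolate.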
In a standard 3D matrix you only need the top left 3x3 values to give the rotation. To apply the matrix as a 4x4 later on, you need to make the other values 0 apart from on the diagonal.
Here's a rotation only matrix where the values vXY give the rotations.
[v00 v01 v02 0]
[v10 v11 v12 0]
[v20 v21 v22 0]
[ 0 0 0 1]
Interestingly, the values form the basis vectors of the coordinate system you have rotated the object into, so in the new system the x-axis is along [v00 v01 v02], the y-axis is along [v10 v11 v12] and the z-axis obviously [v20 v21 v22].
You could show these axes beside the object and let the user drag them around to change the rotation, perhaps.
I would say this depends on the user, but to me the most "user friendly" way is to store "roll", "pitch" and "yaw". These are very non-technical terms that an average user can understand and adjust, and it should be easy for you to take these values and compute the matrix.
IMO, the most 'user friendly' format for rotation is storing Euler XYZ angles; this is generally how rotations are exposed in any 3D content creation software.
Euler angles are easy to transform to matrices; see here for the matrix product involved.
But you should not confuse the format given to the GUI/user with the storage format of the data: Euler XYZ angles have problems of their own when doing animation; gimbal lock can introduce unwanted behaviour.
Another candidate for storing/computing rotations is quaternions. They offer mathematical advantages over XYZ angles, essentially when interpolating between two rotations. Of course, you don't want to expose the quaternion values directly to any human user; you'll need to convert them to XYZ angles. You'll find plenty of code for that on the Web.
I would not recommend storing the rotation directly in matrix format. Extracting user-friendly values from it is difficult, it does not offer any interesting behaviour for animation/interpolation, and it takes more space for storage. IMO, matrices are to be created when needed to transform the geometry.
To conclude, there are a few options; you should select what suits you most. Do you plan on having animation or not? Etc.
EDIT
Also, you should not conflate the model and view matrices. They are semantically very different, and are combined in OpenGL only for performance reasons. What I had in mind above is the 'model matrix'. The view matrix is generally given by your view system/camera manager, and is combined with your model matrix.
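For reference, a minimal C sketch of the Euler-XYZ-to-matrix conversion mentioned above; the rotation order R = Rz * Ry * Rx and the row-major layout are assumptions, so adjust them to your own convention:

#include <math.h>
#include <stdio.h>

/* Build a 3x3 rotation matrix from Euler angles (radians) about the
   X, Y and Z axes, composed as R = Rz * Ry * Rx. */
static void euler_xyz_to_matrix(double rx, double ry, double rz, double m[3][3])
{
    double cx = cos(rx), sx = sin(rx);
    double cy = cos(ry), sy = sin(ry);
    double cz = cos(rz), sz = sin(rz);

    m[0][0] = cz*cy;  m[0][1] = cz*sy*sx - sz*cx;  m[0][2] = cz*sy*cx + sz*sx;
    m[1][0] = sz*cy;  m[1][1] = sz*sy*sx + cz*cx;  m[1][2] = sz*sy*cx - cz*sx;
    m[2][0] = -sy;    m[2][1] = cy*sx;             m[2][2] = cy*cx;
}

int main(void)
{
    double m[3][3];
    euler_xyz_to_matrix(0.0, 0.0, 1.5707963, m);  /* 90 degrees about Z */
    printf("%f %f\n", m[0][0], m[1][0]);          /* ~0 and ~1 */
    return 0;
}

Going the other way (matrix back to Euler angles) is also possible, which is what makes this representation workable for both getting and setting the rotation in an editor.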
A quaternion is, although the math is 'obscure and unintelligible', surprisingly user friendly, as it represents a rotation around an axis by a given angle.
The vector part is simply a unit vector pointing along the rotation axis, multiplied by the sine of half the rotation angle, and the 'obscure' fourth component equals the cosine of half the rotation angle.
It feels kind of "unnatural" at first sight, but once you grasp it... can it be any easier?
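A minimal C sketch of that construction (the names are mine, and the axis is assumed to already be a unit vector):

#include <math.h>
#include <stdio.h>

/* Quaternion with vector part (x, y, z) and scalar part w. */
typedef struct { double x, y, z, w; } quat;

/* Build a quaternion from a unit rotation axis and an angle in radians:
   vector part = axis * sin(angle/2), scalar part = cos(angle/2). */
static quat quat_from_axis_angle(double ax, double ay, double az, double angle)
{
    double s = sin(angle * 0.5);
    quat q = { ax * s, ay * s, az * s, cos(angle * 0.5) };
    return q;
}

int main(void)
{
    quat q = quat_from_axis_angle(0.0, 0.0, 1.0, 3.14159265358979 / 2.0);
    printf("%f %f %f %f\n", q.x, q.y, q.z, q.w);  /* 90 degrees about Z */
    return 0;
}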

Maps: Does calculating distance between 2 points factor in altitude?

Does Postgres' spatial plugin, or any spatial package for that matter, factor in the altitude when calculating the distance between 2 points?
I know the spatial packages factor in the approximate curvature of the earth, but if one location is at the top of a mountain and the other location is close to the sea, it seems like the calculated distance between those two points would vary greatly if the difference in altitude was not taken into account.
Also keep in mind that if I have 2 points at the same sea-level altitude but a mountain exists between the 2 points, the distance package should account for this.
Those factors are not being counted at all. Why? The software only knows about the two features (the two points you are getting the distance between), the sphere/spheroid, and a datum/projection factor.
For that to happen you would probably need to use a developed linestring, in which you connect your points with n vertices, each of them being Z aware.
Imagine this (loose WKT): LINESTRING((0,1,2),(0,2,3),(0,3,4),(0,10,15),(0,11,-1)).
Asking the software to calculate the distance between each pair of consecutive vertices and summing it up will account for the variations of the terrain. But without something like that, it is impossible to account for irregularities in the terrain.
No GIS software can tell, by itself, what those irregularities in the terrain are, and therefore cannot take them into account.
You can create such linestrings (automatically) with software like ArcGIS (and others), using a line (between two points) and a surface file, such as the ones provided freely by NASA (SRTM project). These files come in a raster format, and each pixel has an X, Y and Z value, in meters. Traversing the line you want, coupled with that terrain profile, you can achieve the calculation you want. If you need super extra precise calculations, you need a precise surface, and precise Z values in each vertex of this profile line.
That cleared up?
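A rough C sketch of that segment-by-segment summing (the vertices are made up, standing in for a Z-aware profile; real Z values would come from an SRTM-style surface):

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } vertex;

int main(void)
{
    /* Made-up profile vertices, loosely following the linestring above. */
    vertex v[] = { {0,1,2}, {0,2,3}, {0,3,4}, {0,10,15}, {0,11,-1} };
    int n = sizeof v / sizeof v[0];

    double total = 0.0;
    for (int i = 1; i < n; i++) {
        double dx = v[i].x - v[i-1].x;
        double dy = v[i].y - v[i-1].y;
        double dz = v[i].z - v[i-1].z;
        total += sqrt(dx*dx + dy*dy + dz*dz);  /* 3D length of each segment */
    }
    printf("terrain-following length: %f\n", total);
    return 0;
}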
If the distance formula you're using does not take the altitudes of the two points as parameters (in addition to their latitudes and longitudes), then it does not factor altitude into the distance calculation. In any event, altitude difference does not have a very significant effect on the calculated distance.
As usual with GPS, the difference in distance calculations that altitude would make is probably smaller than the error in most commercial GPS devices anyway, so in most applications altitude can be safely dispensed with (altitude measurements themselves are pretty inaccurate with commercial GPS devices, although survey data on altitudes is quite accurate).
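For a rough sense of scale (illustrative numbers): with 10 km of horizontal distance and a full 1 km of altitude difference, the straight-line 3D distance is sqrt(10^2 + 1^2) ≈ 10.05 km, only about 0.5% longer than the 2D figure.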
PostgreSQL does not factor in altitude when calculating distances. It is all done on a planar surface.
Most database spatial packages will not take this into account, although, if your point is 3D, i.e. has a Z coordinate, that might happen.
I don't have PostgreSQL on this machine, but try this:
SELECT ST_DISTANCE(ST_POINT(0,0,10),ST_POINT(0,0,0));
It's fairly easy to know if it is taking your Z value into account, since the return should be > 0. If that turns out to be true, just create Z-aware features and you will be successful.
What SQL SERVER 2008, for example, takes into account when calculating distances is the position of a Geography feature on a sphere. Geometry features in SQL SERVER will always use planar calculations.
EDIT: checked this in PostGIS manual
For Z aware points you must use the ST_MakePoint function. It takes up to 4 arguments (X Y Z and M). St_POINT only takes two (X Y)
http://postgis.refractions.net/documentation/manual-1.4/ST_Distance.html
ST_DISTANCE = 2D calculations
ST_DISTANCE_SPHERE documentation (takes into account a fixed sphere for calculations - aka not planar)
http://postgis.refractions.net/documentation/manual-1.4/ST_Distance_Sphere.html
ST_DISTANCE_SPHEROID documentation (takes into account a chosen spheroid for your calculations)
http://postgis.refractions.net/documentation/manual-1.4/ST_Distance_Spheroid.html
ST_POINT documentation
http://postgis.refractions.net/documentation/manual-1.4/ST_Point.html

Angle at corner of two lines

I am searching for the fastest or simplest method to compute the outer angle at any vertex of a convex polygon. That means always the bigger angle, where the two angles in question add up to 360 degrees.
Here is an illustration:
Now I know that I may compute the angles between the two vectors A-B and C-B which involves the dot product, normalization and cosine. Then I would still have to determine which of the two resulting angles (second one is 180 degrees minus first one) I want to take two times added to the other one.
However, I thought there might be a much simpler, less tricky solution, probably using the mighty atan2() function. I got stuck, so I am asking you about this :-)
UPDATE:
I was asked what I need the angle for. I need to calculate the area of this specific circle around B, but only the part that lies outside of the polygon that is described by A, B, C, ...
So to calculate the area, I need the angle to use the sector formula 0.5*angle*r*r (with the angle in radians).
Use the inner-product (dot product) of the vectors describing the lines to get the inner angle and subtract from 360 degrees?
Works best if you already have the lines in point-vector form, but you can get vectors from point-to-point form pretty easily (i.e. by subtraction).
Taking . to be the dot product we have
v . w = |v| * |w| * cos(theta)
where v and w are vectors and theta is the angle between the lines. And the dot product can be computed from the components of the vectors by
v . w = SUM(v_i * w_i : i=0..2) // three components for three dimensions. Use more or fewer as needed
here the subscripts indicate components.
Having actually read the question:
The angle returned from inverting the dot-product will always be less than 180 degrees, so it is always the inner angle.
Use this formula:
beta = 360° - arccos( <BA,BC> / (|BA| |BC|) )
Where <,> is the scalar product and BA (BC) are the vectors from B to A (B to C).
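A minimal C sketch of that formula in 2D (the point and function names are mine):

#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

typedef struct { double x, y; } point;

/* Outer angle at B in degrees: 360 minus the inner angle between BA and BC. */
static double outer_angle(point a, point b, point c)
{
    double bax = a.x - b.x, bay = a.y - b.y;  /* vector BA */
    double bcx = c.x - b.x, bcy = c.y - b.y;  /* vector BC */
    double dot = bax*bcx + bay*bcy;
    double len = sqrt(bax*bax + bay*bay) * sqrt(bcx*bcx + bcy*bcy);
    double inner = acos(dot / len) * 180.0 / PI;
    return 360.0 - inner;
}

int main(void)
{
    point a = { 0, 1 }, b = { 0, 0 }, c = { 1, 0 };  /* right angle at B */
    printf("outer angle: %f degrees\n", outer_angle(a, b, c));  /* 270 */
    return 0;
}

For the asker's area formula, convert the result back to radians before using 0.5*angle*r*r.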
I need to calculate the area of the circle outside of the polygon that is described by A, B, C, ...
It sounds like you're taking the wrong approach, then. You need to calculate the area of the circumcircle, then subtract the area of the polygon.
If you need the angle there is no way around normalizing the vectors and doing a dot or cross product. You often have a choice of whether to calculate the angle via asin, acos or atan, but in the end that does not make much difference to execution speed.
However, it would be nice if you could tell us what you're trying to achieve. If we have a better picture of what you're doing, we might be able to give you some hints on how to solve the problem without calculating the angle in the first place.
Lots of geometric algorithms can be rewritten to work with cross and dot products only. Euler angles are rarely needed.

Resources