I have vectorized raster data using gdal_polygonize, which produced rough-looking polygons like this:
I would like to smooth out the vertices to produce neat-looking polygons. So far I have had moderate progress with a combination of ST_Simplify and ST_ChaikinSmoothing, but the results could be better. The best approach might be to convert the jagged vertices to straight diagonal lines and apply Chaikin smoothing after that. Any ideas on how to achieve this?
I'm posting here because I'm at a bit of a loss.
I'm trying to implement a solution to Maxwell's equations (p. 47, 2-2), which is given in spherical coordinates, in C++ so that it can be used in a larger modelling project. I'm using Eigen3 as a base for linear algebra, which as far as I can tell doesn't explicitly support spherical coordinates (I'm open to alternatives).
To implement the solution I need (or at least I think I need) to define the spherical unit vectors in terms of the spherical coordinates; however, unlike the Cartesian basis vectors they are not constant, and I don't understand how to do this.
I'm hesitant to convert the solution to Cartesian coordinates, as I don't think I understand the implications of doing so (is it even valid?).
Any and all input and advice is appreciated.
The solution, which seems obvious now that I have found it, is to implement the spherical unit vector identities as three functions (one for each unit vector) that take r, theta, and phi as arguments and return a vector.
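In Eigen3 that might look like the following sketch (the function names are mine; I use the physics convention where theta is the polar angle and phi the azimuth - note the unit vectors depend only on theta and phi, so r is kept only to match the signature described above):

#include <Eigen/Dense>
#include <cmath>

// Spherical unit vectors expressed in Cartesian components.
Eigen::Vector3d rHat(double r, double theta, double phi)
{
    return Eigen::Vector3d(std::sin(theta) * std::cos(phi),
                           std::sin(theta) * std::sin(phi),
                           std::cos(theta));
}

Eigen::Vector3d thetaHat(double r, double theta, double phi)
{
    return Eigen::Vector3d(std::cos(theta) * std::cos(phi),
                           std::cos(theta) * std::sin(phi),
                           -std::sin(theta));
}

Eigen::Vector3d phiHat(double r, double theta, double phi)
{
    return Eigen::Vector3d(-std::sin(phi), std::cos(phi), 0.0);
}

A field given in spherical components (F_r, F_theta, F_phi) at a point (r, theta, phi) can then be assembled as F_r * rHat(r, theta, phi) + F_theta * thetaHat(r, theta, phi) + F_phi * phiHat(r, theta, phi); since each unit vector is returned in Cartesian components, this also addresses the conversion worry - the result is the same vector, just expressed in a Cartesian basis.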
Assume a workflow for 2D image feature extraction that uses SIFT, SURF, or MSER methods, followed by bag-of-words/bag-of-features encoding, which is subsequently used to train classifiers.
I was wondering whether there is an analogous approach for 3D datasets, for example a 3D volume of MRI data. When dealing with 2D images, each image represents an entity with features to be detected and indexed. In a 3D dataset, however, is it possible to extract features from the three-dimensional entity directly? Does this have to be done slice by slice, by decomposing the 3D image into multiple 2D images (slices)? Or is there a way of reducing the 3D dimensionality to 2D while retaining the 3D information?
Any pointers would be greatly appreciated.
You can perform feature extraction by passing your 3D volumes through a pre-trained 3D convolutional neural network. Because pre-trained 3D CNNs are hard to find, you could consider training your own on a similar, but distinct, dataset.
Here is a link to code for a 3D CNN in Lasagne. The authors use 3D CNN versions of VGG and ResNet.
Alternatively, you can perform 2D feature extraction on each slice of the volume and then combine the per-slice features, using PCA to reduce the dimensionality to something reasonable. For this, I recommend an ImageNet pre-trained ResNet-50 or VGG.
In Keras, these can be found here.
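For the slice-by-slice route, here is a rough sketch of the combine-and-reduce step (shown in C++ with Eigen purely for illustration; it assumes the per-slice feature vectors have already been extracted by whatever 2D network you choose, and the names are mine):

#include <Eigen/Dense>

// Reduce a stack of per-slice feature vectors (one row per slice) to k
// dimensions with PCA. X is numSlices x featureDim; the result is numSlices x k.
Eigen::MatrixXd pcaReduce(const Eigen::MatrixXd& X, int k)
{
    Eigen::MatrixXd centered = X.rowwise() - X.colwise().mean();
    Eigen::MatrixXd cov = (centered.transpose() * centered) / double(X.rows() - 1);

    // Eigenvectors come back sorted by ascending eigenvalue,
    // so the k principal components are the last k columns.
    Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> solver(cov);
    return centered * solver.eigenvectors().rightCols(k);
}

The reduced rows can then be pooled or concatenated into a single descriptor per volume before training the classifier.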
Assume a grey-scale 2D image, which can be described mathematically as a matrix. Generalizing the concept of a matrix leads to the theory of tensors (informally, you can think of a multidimensional array). For example, an RGB 2D image is represented as a tensor of size [width, height, 3], and an RGB 3D image as a tensor of size [width, height, depth, 3]. Moreover, just as with matrices, you can perform tensor-tensor multiplications.
For instance, consider a typical neural network with 2D images as input. Such a network does essentially nothing but matrix multiplications (apart from the elementwise non-linear operations at the nodes). In the same way, a neural network can operate on tensors by performing tensor-tensor multiplications.
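To make the tensor-multiplication point concrete, here is a small illustration using Eigen's (unsupported) Tensor module - the grey-scale weights are just an arbitrary example:

#include <unsupported/Eigen/CXX11/Tensor>

int main()
{
    // An RGB 2D image as a tensor of size [width, height, 3].
    Eigen::Tensor<float, 3> image(640, 480, 3);
    image.setRandom();

    // Contract the colour axis with a 3x1 weight tensor (a tensor-tensor
    // multiplication), e.g. to collapse RGB down to a single channel.
    Eigen::Tensor<float, 2> weights(3, 1);
    weights.setValues({{0.299f}, {0.587f}, {0.114f}});

    Eigen::array<Eigen::IndexPair<int>, 1> dims = { Eigen::IndexPair<int>(2, 0) };
    Eigen::Tensor<float, 3> grey = image.contract(weights, dims);  // [640, 480, 1]
    return 0;
}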
Now back to your question of feature extraction: indeed, the problem with tensors is their high dimensionality. Hence, a good deal of current research concerns the efficient decomposition of tensors while retaining the initial (most meaningful) information. To extract features from tensors, a tensor decomposition approach might be a good start, as it reduces the rank of the tensor. A few papers on tensors in machine learning are:
Tensor Decompositions for Learning Latent Variable Models
Supervised Learning With Quantum-Inspired Tensor Networks
Optimal Feature Extraction and Classification of Tensors via Matrix Product State Decomposition
Hope this helps, even though the math behind it is not easy.
Kalman filters and quaternions are both new to me.
I have a sensor whose output voltage changes as a function of its inclination about the x-, y-, and/or z-axis, i.e. an angle sensor.
My questions:
Is it possible to apply a Kalman filter to smooth the results and avoid any noise on the measurements?
I will then have only a single 3D vector. What kind of quaternion operations could I apply to this 3D vector to learn more about quaternions?
You can apply a Kalman filter to accelerometer data; it's a powerful technique, though, and there are lots of ways to do it wrong. If your goal is to learn about the filter, then go for it - the discussion here might be helpful.
If you just want to smooth the data and get on with the next problem then you might want to start with a moving average filter, or traditional lowpass/bandpass filters.
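For reference, a causal moving average is only a few lines. Here is a sketch (the types and names are mine; the window size is the knob that trades smoothness against lag):

#include <algorithm>
#include <cstddef>
#include <vector>

// Running average of the last 'window' samples of one coordinate sequence.
// Apply it to the x, y and z sequences independently.
std::vector<double> movingAverage(const std::vector<double>& x, std::size_t window)
{
    std::vector<double> out(x.size(), 0.0);
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        sum += x[i];
        if (i >= window)
            sum -= x[i - window];                        // drop the oldest sample
        out[i] = sum / double(std::min(i + 1, window));  // shorter average at start-up
    }
    return out;
}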
After applying a Kalman filter you will still have a sequence of data - it won't reduce it to a single vector. If this is your goal you might as well take the mean of each coordinate sequence.
As for quaternions, you could probably come up with a way of performing quaternion operations on your accelerometer data but the challenge would be to make it meaningful. For the purposes of learning about the concept you really need it to make some sense, so that you can visualise the results and interpret them.
I would be tempted to write some functions to implement quaternion operations instead - multiplication is the strange one. This will be a good introduction to the way they work, and then when you find an application that calls for them you can use your functions and you'll already know how the mechanics work.
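To give an idea of what that involves, here is a minimal sketch of the Hamilton product (the struct and function names are mine; libraries such as Eigen also ship a ready-made Quaterniond if you'd rather not roll your own):

struct Quat { double w, x, y, z; };  // w + x*i + y*j + z*k

// Hamilton product - the "strange" multiplication: it is not commutative,
// so in general multiply(a, b) != multiply(b, a).
Quat multiply(const Quat& a, const Quat& b)
{
    return Quat{
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

Quat conjugate(const Quat& q) { return Quat{ q.w, -q.x, -q.y, -q.z }; }

// Rotating a vector v (stored as the pure quaternion {0, vx, vy, vz}) by a
// unit quaternion q is then multiply(multiply(q, v), conjugate(q)).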
If you want to read about the most famous use of quaternions, have a look at Maxwell's equations in their original quaternion form, before Heaviside dramatically simplified them and put them into the vector notation we use today.
Also, a lot of work is done using tensors these days, and if you're interested in more complex mathematical datatypes, tensors would be a worthwhile one to look into.
I'm trying to use pathfinding on a series of convex polygons, rather than waypoints. To complicate this further, the polygons are made by users and may have inconsistent vertices. For example:
We know the object is X wide by Y deep and that the polygons have vertices at certain locations. Is there a specific algorithm to find the fastest route to the goal while keeping the entire object inside the polygons (if I understand correctly, A* only works on waypoints)? How do you handle vertices that are not the same object but are at the same location?
EDIT: The polygons are convex; it's two separate polygons with their edges on the line.
Also, how do you implement A* pathfinding, as a node-based system wouldn't work in an 'infinite'-resolution polygon?
In general, all shortest-path segments will have, as end-points, either polygon vertices or the start and goal points. If you build a graph that includes all those segments (from the start to each "visible" polygon vertex, from the goal to each "visible" polygon vertex, and from each polygon vertex to each other polygon vertex) and run A* on that, you have your optimal path. The cost of building the graph for A* is:
For each vertex, a visibility test to find the vertices it can see: the simple algorithm (for each pair of vertices, check whether the segment from one to the other lies inside the polygon) is O(n^3); see the sketch after this list. Building convex polygons and processing them independently, or using a smarter "radial sweep" algorithm, can lower this greatly, but I suspect it is still around O(n^2).
For each query (from a start-point to a goal-point), O(n) for the visibility-test to find all vertices that it can see.
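Here is a sketch of the per-pair visibility check mentioned in the first bullet (the point type and edge representation are mine; collinear and touching cases are deliberately glossed over):

#include <utility>
#include <vector>

struct Pt { double x, y; };

// Sign of the cross product (b - a) x (c - a): > 0 left turn, < 0 right turn.
double cross(Pt a, Pt b, Pt c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if segments pq and rs properly cross (shared endpoints don't count).
bool properCross(Pt p, Pt q, Pt r, Pt s)
{
    double d1 = cross(p, q, r), d2 = cross(p, q, s);
    double d3 = cross(r, s, p), d4 = cross(r, s, q);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

// Naive per-pair visibility: a and b see each other if the segment between
// them crosses no boundary edge. A complete version must also verify that the
// segment stays inside the walkable region, e.g. by testing its midpoint.
bool visible(Pt a, Pt b, const std::vector<std::pair<Pt, Pt>>& edges)
{
    for (const std::pair<Pt, Pt>& e : edges)
        if (properCross(a, b, e.first, e.second))
            return false;
    return true;
}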
If you are only going to apply A* once, the price of building the fixed part of the A* graph for a single traversal may be somewhat steep. An alternative is to build the graph incrementally as you use it.
Java code implementing the above approach can be found here.
The polygons in your drawing are not convex. For convex polygons, you can place a waypoint in the middle of each edge and then apply A*. And, of course, you need to fix the inconsistent vertices.
Given two polygons, how can one determine whether one polygon is inside, outside, or intersecting the other?
The polygons can be concave or convex.
You want to use the separating axis theorem for convex polygons.
Basically, for each edge (face) of each polygon, you project both polygons onto the normal of that edge and check whether those projections overlap.
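A sketch of that test for two convex polygons (the point type, winding assumptions, and names are mine; here touching polygons count as overlapping):

#include <cstddef>
#include <limits>
#include <vector>

struct Pt { double x, y; };

// Project every vertex of poly onto axis (via dot product), tracking the extent.
void project(const std::vector<Pt>& poly, Pt axis, double& lo, double& hi)
{
    lo = std::numeric_limits<double>::max();
    hi = std::numeric_limits<double>::lowest();
    for (const Pt& p : poly)
    {
        double d = p.x * axis.x + p.y * axis.y;
        if (d < lo) lo = d;
        if (d > hi) hi = d;
    }
}

// Separating axis theorem: two convex polygons are disjoint if and only if
// their projections onto the normal of some edge of either polygon fail to overlap.
bool convexOverlap(const std::vector<Pt>& A, const std::vector<Pt>& B)
{
    for (const std::vector<Pt>* poly : { &A, &B })
        for (std::size_t i = 0; i < poly->size(); ++i)
        {
            Pt a = (*poly)[i];
            Pt b = (*poly)[(i + 1) % poly->size()];
            Pt normal{ a.y - b.y, b.x - a.x };  // perpendicular to edge a->b
            double loA, hiA, loB, hiB;
            project(A, normal, loA, hiA);
            project(B, normal, loB, hiB);
            if (hiA < loB || hiB < loA)
                return false;  // found a separating axis: no overlap
        }
    return true;  // no separating axis exists, so the polygons overlap
}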
You can perform various tricks to reduce the number of these computations - for example, you can draw a rectangle around each object and assume that if two objects' rectangles do not intersect, the objects themselves do not intersect. (This works well because checking the intersection of such boxes is computationally cheap, and it is generally quite intuitive.)
Concave polygons are more difficult. I think you could decompose the polygon into a set of convex polygons and check each combination for intersection, but I wouldn't consider myself skilled enough in this area to try it.
Generally, problems like this are solved easily by a sweep-line algorithm. However, the primary benefit of the sweep-line approach is that it solves the problem efficiently when the input consists of two relatively large sets of polygons. Once a sweep-line solution is implemented, it can also be applied efficiently to a single pair of polygons if the need arises. Maybe you should consider moving in that direction, in case you need to solve a massive problem in the future.
However, if you are sure that you need a solution for two and only two polygons, then it can be solved by sequential point-vs-polygon and segment-vs-polygon tests.
There is an easyish method to check whether a point lies in a polygon. According to this Wikipedia article it is called the ray casting algorithm.
The basic idea of the algorithm is that you cast a ray in some arbitrary direction from the point you are testing and count how many of the polygon's edges it intersects. If this number is even, the point lies outside the polygon; if it is odd, the point lies inside.
There are a number of issues with this algorithm that I won't delve into (they're discussed in the Wikipedia article I linked earlier), but they are the reason I call this algorithm easyish. To give you an idea, you have to handle corner cases involving the ray passing through vertices, the ray running parallel to and overlapping an edge, and numeric stability issues when the point lies close to an edge.
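A compact version of the even-odd test looks like this (the classic crossing-count formulation; it sidesteps some, but not all, of the corner cases above):

#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Even-odd rule: cast a horizontal ray to the right of p and count how many
// polygon edges it crosses. An odd count means p is inside. The half-open
// comparison on y avoids double-counting shared vertices; points lying exactly
// on an edge remain a numeric judgement call, as noted above.
bool pointInPolygon(Pt p, const std::vector<Pt>& poly)
{
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++)
    {
        if ((poly[i].y > p.y) != (poly[j].y > p.y))
        {
            double xAtY = poly[j].x + (p.y - poly[j].y)
                          * (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
            if (p.x < xAtY)
                inside = !inside;
        }
    }
    return inside;
}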
You can then use this method in the way Thomas described in his answer to test whether two polygons intersect. This should give you an O(NM) algorithm where the two polygons have N and M vertices respectively.
Here is a simple algorithm (a winding-angle test) to determine whether a given point is inside or outside a given polygon:
double signedAngle(point p, point a, point q)
{
    // Angle from ray a->p to ray a->q, wrapped into (-pi, pi].
    double d = atan2(q.y - a.y, q.x - a.x) - atan2(p.y - a.y, p.x - a.x);
    if (d > pi)   d -= 2 * pi;
    if (d <= -pi) d += 2 * pi;
    return d;
}

bool isInside(point a, polygon B)
{
    // Total winding angle is +/-2*pi when a is inside, ~0 when outside.
    double total = 0;
    for (int i = 0; i < B.nbVertices(); i++)
        total += signedAngle(B[i], a, B[(i + 1) % B.nbVertices()]); // wrap to close the ring
    return abs(total) > pi;
}
If a line segment of A intersects a line segment of B then the two polygons intersect each other.
Else if all points of polygon A are inside polygon B, then A is inside B.
Else if all points of polygon B are inside polygon A, then B is inside A.
Else if all points of polygon A are outside polygon B, then A is outside B.
Else the two polygons intersect each other.
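Put together, the decision chain looks roughly like this (segmentsIntersect and the edge(i) accessor are hypothetical helpers; isInside is the function above). Note that once no edges cross, a single vertex test per polygon is enough for simple polygons:

enum Relation { INTERSECTING, A_INSIDE_B, B_INSIDE_A, DISJOINT };

Relation classify(polygon A, polygon B)
{
    // Step 1: any crossing pair of edges means the boundaries intersect.
    for (int i = 0; i < A.nbVertices(); i++)
        for (int j = 0; j < B.nbVertices(); j++)
            if (segmentsIntersect(A.edge(i), B.edge(j)))
                return INTERSECTING;

    // Step 2: no edges cross, so each simple polygon lies entirely inside
    // or entirely outside the other - one vertex test per polygon suffices.
    if (isInside(A[0], B)) return A_INSIDE_B;
    if (isInside(B[0], A)) return B_INSIDE_A;
    return DISJOINT;
}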