I need to sort a point array (a point is a struct with two floats, one for x and one for y) in a special fashion.
The points have to be sorted so when they are traversed, they form a zig-zag pattern starting at the top leftmost point, moving to the top rightmost point, then down to the second leftmost point, to the second rightmost point and so on.
I need this to be able to convert arbitrary polygons to triangle strip arrays which I can then draw using OpenGL ES. What would be the most efficient way of sorting those points: using pointers (i.e. passing and rearranging pointers to the point structures), or copying and moving the data in the structures directly?
I'd use qsort() with a custom compare() function that, as #stefan noted, sorts descending by y and then alternates (max/min) on x.
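For illustration, a minimal sketch of such a comparator, assuming the point struct from the question; the alternating min/max pass over x would still need a second step after the sort:

#include <stdlib.h>

typedef struct { float x, y; } Point;

/* Sort descending by y (topmost points first); ties fall back to ascending x. */
static int cmp_point(const void *a, const void *b)
{
    const Point *p = a, *q = b;
    if (p->y != q->y)
        return (p->y < q->y) ? 1 : -1;
    if (p->x != q->x)
        return (p->x < q->x) ? -1 : 1;
    return 0;
}

/* usage: qsort(points, count, sizeof(Point), cmp_point); */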
I would highly recommend you use Delaunay Triangulation. OpenCV (it's available in C) has a nice implementation.
You seem to be presenting us with an already reduced version of your original problem, believing that you are on the right path to the solution. I might be wrong, but it doesn't look like you are.
It seems (judging by your other questions) that you are ultimately looking for a triangulation. And, quite possibly, a triangulation of a polygon or polygons (as opposed to a set of independent points). If so, I'd suggest you take a look at some basic triangulation algorithms, like the one based on monotone decomposition. The problem you present here actually looks like a [possibly misguided] attempt to do something similar to monotone decomposition.
I don't think you've given a well-defined order. For example, what order should the points be connected if they look like this:
(diagram: six points scattered at varying horizontal and vertical positions)
I would recommend moving data from the structures directly.
The size of the point struct is only 8 to 16 bytes (16 bytes if float is 8 bytes). If you sort the array through pointers you are copying almost the same amount of data (or exactly the same amount if float is 4 bytes and pointers are 8 bytes on a 64-bit system).
I would recommend sorting through pointers only if the struct is large.
It seems you are trying to reinvent some kind of monotone polygonal chain. Some polygon triangulation methods are briefly described on the wiki and here, with links to code.
You should first find the median (middle value) of the points, based on their horizontal values. This will split the set of points into left and right halves. Next, sort the two sets by their vertical value. Then you can just iterate from the top of each set: take the top element from the left set, then the top element from the right set, and so on.
To find the median there is a short algorithm based on quicksort, but faster than quicksort: just recurse into the part that contains the median (not into both parts, as quicksort does).
You should be able to do it the other way around: first sort by the vertical value and then split by the horizontal (maybe this is better when you have an odd number of points).
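A rough sketch of that idea in C, assuming the point struct from the question; it uses qsort for the x split instead of the faster median-selection described above, which keeps the code short but is not the optimal version:

#include <stdlib.h>

typedef struct { float x, y; } Point;

static int cmp_x_asc(const void *a, const void *b)
{
    float d = ((const Point *)a)->x - ((const Point *)b)->x;
    return (d > 0) - (d < 0);
}

static int cmp_y_desc(const void *a, const void *b)
{
    float d = ((const Point *)b)->y - ((const Point *)a)->y;
    return (d > 0) - (d < 0);
}

/* Fill out[] with points in zig-zag order: top of left half, top of right half, ... */
void zigzag_order(Point *pts, size_t n, Point *out)
{
    size_t mid = n / 2, l = 0, r = mid, i;

    qsort(pts, n, sizeof *pts, cmp_x_asc);              /* split into left/right halves */
    qsort(pts, mid, sizeof *pts, cmp_y_desc);           /* left half, top to bottom */
    qsort(pts + mid, n - mid, sizeof *pts, cmp_y_desc); /* right half, top to bottom */

    for (i = 0; i < n; ++i) {
        int take_left = (i % 2 == 0) ? (l < mid) : (r >= n);
        out[i] = take_left ? pts[l++] : pts[r++];
    }
}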
I've found answers to similar problems, but none of them exactly described my problem.
So at the risk of being down-voted to hell, I was wondering if there is a standard method to solve my problem. Further, there's a chance that I'm asking the wrong question; maybe the problem can be solved more efficiently another way.
So here's some background:
I'm looping through a list of particles. Each particle has a list of its neighboring particles. Now I need to create a list of unique particle pairs of mutual neighbours.
Each particle can be identified by an integer number.
Should I just build a list of all the pairs, including duplicates, and use some kind of sort and comparator to eliminate duplicates, or should I try to avoid adding duplicates to my list in the first place?
Performance is really important to me. I guess most of the loops may be vectorized and threaded. On average each particle has around 15 neighbours and I expect that there will be 1e6 particles at most.
I do have some ideas, but I'm not an experienced coder and I don't want to waste a week benchmarking every single method in different situations just to find out that there's already a standard method for my problem.
Any suggestions?
BTW: I'm using C.
Some pseudo-code
for i in nparticles
    particle = particles[i];   // just an array containing the "index" of each particle
    // each particle has a neighbor list
    for k in neighlist[i]      // looping through all the neighbors
        // k represents the index of the neighbor of particle "i"
        if the pair (i,k) or (k,i) is not already in the pair list, add it; otherwise don't
Sorting the elements each iteration is not a good idea, since a comparison sort has O(n log n) complexity.
The next best thing would be to store the items in a search tree, ideally a self-balancing binary search tree; you can find implementations on GitHub.
An even better solution would give O(1) access time. You can achieve this in two different ways. One is a simple identity array, where each slot holds, say, a pointer to the item with that id, or some flag marking that id as empty. This is very fast but wasteful: you'll need O(N) memory.
The best solution in my opinion would be to use a set or a hash map, which are basically the same thing, since a set can be implemented using a hash map.
Here is a GitHub project with a C hash-map implementation.
And a Stack Overflow answer to a similar question.
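As an illustration of the hash-set idea, here is a hand-rolled sketch (not taken from the linked project; the names, sizing and hash constant are my own assumptions). Each pair is normalized so the smaller index comes first and then packed into one 64-bit key, so (i,k) and (k,i) end up identical:

#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t *slots;   /* 0 marks an empty slot */
    size_t    cap;     /* must be a power of two */
} PairSet;

static void pairset_init(PairSet *s, size_t cap_pow2)
{
    s->cap = cap_pow2;
    s->slots = calloc(cap_pow2, sizeof *s->slots);
}

static uint64_t pair_key(uint32_t i, uint32_t k)
{
    uint32_t lo = i < k ? i : k, hi = i < k ? k : i;
    return (((uint64_t)lo << 32) | hi) + 1;   /* +1 keeps 0 free as the "empty" marker */
}

/* Returns 1 if the pair was newly added, 0 if it was already present. */
static int pairset_add(PairSet *s, uint32_t i, uint32_t k)
{
    uint64_t key = pair_key(i, k);
    size_t idx = (size_t)(key * 0x9E3779B97F4A7C15ull) & (s->cap - 1);

    while (s->slots[idx] != 0) {
        if (s->slots[idx] == key)
            return 0;
        idx = (idx + 1) & (s->cap - 1);   /* linear probing */
    }
    s->slots[idx] = key;
    return 1;
}

With roughly 1e6 particles and about 15 neighbours each you expect around 7.5e6 unique pairs, so a capacity of, say, 16 million slots (about 128 MB) keeps the load factor low; in the loop you would call pairset_add(&set, i, k) and record the pair only when it returns 1.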
I would like to fill a plane with randomly placed points, check whether any of them overlap (and if they do, move one of them to an empty place) and then calculate the average distance between them. Later I plan on extending that to 3D, so that it is something like having particles in a box.
I know there must be better ways of doing it but here's what I came up with. For placing random points in a plane:
int pos[NUMBER][2]; /* Creates an array of NUMBER amount of points with x and y coordinate */
int a, b;
srand( time(NULL) );
for(a=0;a<NUMBER;a++)
    for(b=0;b<2;b++)
        pos[a][b]=rand()%11; /* Using modulus is random enough for now */
The next stage is finding points that overlap:
for(a=0;a<NUMBER-1;a++)
    for(b=a+1;b<NUMBER;b++)
        if( pos[a][0] == pos[b][0] && pos[a][1] == pos[b][1])
            printf("These points overlap: (%d, %d)\n", pos[a][0], pos[a][1]);
Now when I identify which points overlap I have to move one of them, but when I do, the point in its new position might overlap with one of the earlier ones. Is there any accepted way of solving this problem? One way is an infinite while(true) loop with a breaking condition, but that seems very inefficient, especially when the system gets dense.
Thank you!
Here's a sketch of a solution that I think could work:
Your point generation algorithm is good and can be left as is.
The correct time to check for overlap is when the point is generated: we simply generate new points until we get one that doesn't overlap with any previous one.
To quickly find overlaps, use a hash table such as the one from glib. The key could be two int32_t values packed into an int64_t via a union:
#include <stdint.h>

typedef union _Point {
    struct {
        int32_t x;
        int32_t y;
    };
    int64_t hashkey;
} Point;
Use the "iterate over all keys" functionality of your hash table to build the output array.
I haven't been able to test this but it should work. This assumes that the plane is large in relation to the number of points, so that overlaps are less likely. If the opposite is true, you can invert the logic: start with a full plane and add holes randomly.
Average complexity of this algorithm is O(n).
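An untested sketch of the lookup step with GLib (this assumes GLib >= 2.32 for g_hash_table_contains and reuses the Point union above; the function name and the range parameter are just illustrative):

#include <glib.h>
#include <stdlib.h>

/* Keep drawing random coordinates until one is not yet in the table,
 * then remember it. The table owns a heap copy of each key. */
static Point draw_unique_point(GHashTable *seen, int range)
{
    Point p;
    do {
        p.x = rand() % range;
        p.y = rand() % range;
    } while (g_hash_table_contains(seen, &p.hashkey));

    gint64 *key = g_new(gint64, 1);
    *key = p.hashkey;
    g_hash_table_insert(seen, key, NULL);
    return p;
}

/* seen is created once with:
 * GHashTable *seen = g_hash_table_new_full(g_int64_hash, g_int64_equal, g_free, NULL); */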
As you hinted that it should work for high densities as well, the best course of action is to create a 2D array of booleans (or bit vectors if you want to save space), where all elements are set to false initially. Then you loop NUMBER times, generating a random coordinate, and check whether the value in the array is true or not. If true, you generate another random coordinate. If false, you add the coordinate to the list, and set the corresponding element in the array to true.
The above assumes you want exactly NUMBER points, and a completely uniform chance of placing them. If either of those constraints are not necessary, there are other algorithms possible that use much less memory.
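A small self-contained sketch of that approach, assuming the same 11x11 integer grid implied by the question's rand() % 11; SIZE and NUMBER are placeholder values, and NUMBER must not exceed the number of cells or the loop can never finish:

#include <stdbool.h>
#include <stdlib.h>
#include <time.h>

#define SIZE   11   /* plane is SIZE x SIZE cells */
#define NUMBER 20   /* must be <= SIZE * SIZE */

int main(void)
{
    bool occupied[SIZE][SIZE] = { false };
    int pos[NUMBER][2];
    int placed = 0;

    srand(time(NULL));
    while (placed < NUMBER) {
        int x = rand() % SIZE, y = rand() % SIZE;
        if (!occupied[x][y]) {          /* only accept coordinates not used yet */
            occupied[x][y] = true;
            pos[placed][0] = x;
            pos[placed][1] = y;
            placed++;
        }
    }
    return 0;
}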
One solution is to place points at random, see if they overlap, and re-try on overlap. To avoid testing every point, you need to set up a spatial index - if you have a 100*100 plane and a cut-off of 3-4, you could use 10*10 grid squares. Then you only have to search four grid squares to check you don't have a hit.
But there are other ways of doing it. Uniformly placing points on a grid will create a Poisson distribution. So for each slot, you can draw a random number from the Poisson distribution. What happens when you get 2 or more? This method forces you to answer that question. Maybe you artificially clamp it to one, maybe you move into the neighbouring slot. This method won't create exactly N points, so if you must have N, you can put in a fudge (randomly add/remove the last few points).
Edited...
Thanks to everyone who is trying to help me!
I am trying to do a Finite Element Analysis in Mathematica. We can obtain all the local stiffness matrices, each of 8x8 dimensions; there are 2000 of them, similar but not the same. Every local stiffness matrix is given as a function whose name is KK. For example, KK[1] is the first element's local stiffness matrix.
I am trying to assemble all the local matrices to make the global stiffness matrix. To make it easy:
Do[K[e][i][j] = KK[[e]][[i]][[j]], {e, 2000}, {i, 8}, {j, 8}]   (edited)
Here is my question: can this assignment affect the analysis time? If yes, what can I do to improve this?
In matlab this is called a 3D array, but I don't know what it is called in Mathematica.
What are the advantages and disadvantages of this kind of representation in Mathematica? Is it faster, or just the easy way?
Thanks for your help...
It is difficult to understand what your question is, so you might want to reformulate it.
As others have mentioned, there is no advantage to be expected from a switch from a 3D array to DownValues or SubValues. In fact you would then move from accessing data structures to pattern matching, which is powerful and the real strength of Mathematica, but not very efficient for what you plan to do, so I would strongly suggest staying in the realm of ordinary arrays.
There is another thing that might not be clear for someone more familiar with matlab than with Mathematica: in Mathematica the "default" arrays behave a lot like cell arrays in matlab: each entry can contain arbitrary content and they don't need to be rectangular (as High Performance Mark has mentioned, they are just expressions with head List and can roughly be compared to matlab cell arrays).

But if such a nested list is a rectangular array and every element of it is of the same type, it can be converted to a so-called PackedArray. PackedArrays are much more memory efficient and will also speed up many calculations; they behave in many respects like regular ("not-cell") arrays in matlab. This conversion is often done implicitly by functions like Table, which will often return a packed array automatically. But if you are interested in efficiency it is a good idea to check with Developer`PackedArrayQ and convert explicitly with Developer`ToPackedArray if necessary.

If you are working with PackedArrays, the speed and memory efficiency of many operations are much better and usually comparable to vectorized operations on normal matlab arrays. Unfortunately it can happen that packed arrays get "unpacked" by some operations, so if calculations become slow it is usually a good idea to check whether that has happened.
Neither "normal" arrays nor PackedArrays are restricted in the rank (called Depth in Mathematica) they can have, so you can of course create and use "3D arrays" just as you can in matlab. I have never experienced or would know of any efficiency penalties when doing so.
It probably is of interest that newer versions of Mathematica (>= 10) include the finite element method as one of the solver methods for NDSolve, so if you are not doing this as an exercise you might want to have a look at what is already available; there is quite extensive documentation about it.
A final remark: instead of kk[[e]][[i]][[j]] you can use the much more readable form kk[[e,i,j]], which is also easier and less error-prone to type.
Extended comment, I guess, but:
KK[e][[i]][[j]]
is not the (e,i,j) element of a "3d array". Note the single brackets on the e. When you use the single brackets you are not denoting an array or list element but a DownValue, which is quite different from a list element.
If you do for example,
f[1]=0
f[2]=2
...
the resulting f appears similar to an array, but is actually more akin to an overloaded function in some other language. It is convenient because the indices need not be contiguous or even integers, but there is a significant performance drawback if you ever want to operate on the structure as a list.
Your 'do' loop example would almost certainly be better written as:
kk = Table[k[e][i][j], {e, 2000}, {i, 8}, {j, 8}]
( Your loop won't even work as-is unless you previously "initialized" each of the kk[e] as an 8x8 array. )
Note now the list elements are all double bracketed, ie kk[[e]][[i]][[j]] or kk[[e,i,j]]
I'm writing a program for a numerical simulation in C. Part of the simulation is a set of spatially fixed nodes that have some float value with respect to each other node. It is like a directed graph. However, if two nodes are too far apart (farther than some cut-off length a), this value is 0.
To represent all these "correlations" or float values, I tried to use a 2D array, but since I have 100,000 or more nodes, that would correspond to 40 GB of memory or so.
Now, I am trying to think of different solutions to that problem. I don't want to save all these values on the hard disk. I also don't want to calculate them on the fly. One idea was some sort of sparse matrix, like the kind one can use in Matlab.
Do you have any other ideas, how to store these values?
I am new to C, so please don't expect too much experience.
Thanks and best regards,
Jan Oliver
The number of nodes, on average, within the cutoff distance of a given node determines your memory requirement and tells you whether you need to page to disk. The solution taking the least memory is probably a hash table that maps a pair of nodes to a distance. Since the distance is the same each way, you only need to enter it into the hash table once per pair: put the two node numbers in numerical order and then combine them to form a hash key. You could use the POSIX hsearch/hcreate/hdestroy functions for the hash table, although they are less than ideal.
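A rough sketch of that approach with hsearch; the key format ("lo:hi" as a string) and the function name are just illustrative choices, and error handling is minimal:

#include <search.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Store the value for an unordered node pair, keyed by "lo:hi".
 * Call hcreate(expected_pair_count) once before using this. */
static int store_pair_value(int a, int b, float value)
{
    int lo = a < b ? a : b, hi = a < b ? b : a;
    char keybuf[32];
    ENTRY item;
    float *v;

    snprintf(keybuf, sizeof keybuf, "%d:%d", lo, hi);
    v = malloc(sizeof *v);
    *v = value;
    item.key  = strdup(keybuf);   /* the table keeps the pointer, so copy the key */
    item.data = v;
    return hsearch(item, ENTER) != NULL;   /* NULL means the table is full */
}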
A sparse matrix approach sounds ideal for this. The Wikipedia article on sparse matrices discusses several approaches to implementation.
A sparse adjacency matrix is one idea, or you could use an adjacency list, allowing your to only store the edges which are closer than your cutoff value.
You could also hold a list for each node, which contains the other nodes this node is related to. You would then have an overall number of list entries of 2*k, where k is the number of non-zero values in the virtual matrix.
Implementing the whole system as a combination of hashes/sets/maps is still expected to be acceptable with regard to speed/performance compared to a "real" matrix allowing random access.
edit: This solution is one possible form of an implementation of a sparse matrix. (See also Jim Balter's note below. Thank you, Jim.)
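For concreteness, a minimal per-node adjacency list sketch in C (type names and the growth policy are just assumptions; error handling is omitted):

#include <stdlib.h>

/* One outgoing edge: the target node and the stored float value. */
typedef struct {
    int   target;
    float value;
} Edge;

/* Per-node list: only the pairs closer than the cut-off are stored. */
typedef struct {
    Edge  *edges;
    size_t count;
    size_t capacity;
} NodeAdj;

static void adj_add(NodeAdj *n, int target, float value)
{
    if (n->count == n->capacity) {
        n->capacity = n->capacity ? n->capacity * 2 : 8;
        n->edges = realloc(n->edges, n->capacity * sizeof *n->edges);
    }
    n->edges[n->count].target = target;
    n->edges[n->count].value  = value;
    n->count++;
}

/* usage: NodeAdj *adj = calloc(node_count, sizeof *adj);
 *        adj_add(&adj[i], j, value); for every pair within the cut-off */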
You should indeed use sparse matrices if possible. In scipy, we have support for sparse matrices, so you can play in Python, although to be honest the sparse support still has rough edges.
If you have access to matlab, it will definitely be better ATM.
Without using sparse matrices, you could think about using memmap-based arrays so that you don't need 40 GB of RAM, but it will still be slow, and it only really makes sense if you have a low degree of sparsity (say, if 10-20% of your 100000x100000 matrix has items in it, then full arrays will actually be faster and maybe even take less space than sparse matrices).
I have two arrays, say A and B with |A|=8 and |B|=4. I want to calculate the set difference A-B. How do I proceed? Please note that there are no repeated elements in either of the sets.
Edit: Thank you so much everybody for a myriad of elegant solutions. Since I am in the prototyping stage of my project, for now I implemented the simplest solution, told by Brian and Owen. But I do appreciate the clever use of data structures suggested here by the rest of you, even though I am not a computer scientist but an engineer and never studied data structures as a course. Looks like it's about time I really start reading CLRS, which I have been procrastinating on for quite a while :) Thanks again!
Sort arrays A and B; the result will be in C.
Let a = the first elem of A and b = the first elem of B.
Then:
1) while a < b: insert a into C and a = next elem of A
2) while a > b: b = next elem of B
3) if a = b: a = next elem of A and b = next elem of B
4) if B reaches its end: insert the rest of A into C and stop
5) if A reaches its end: stop
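A C version of that merge, as a sketch; it assumes int elements, arrays already sorted (e.g. with qsort), no duplicates, and that C has room for at least |A| elements:

#include <stddef.h>

/* Computes C = A - B for sorted arrays without duplicates.
 * Returns the number of elements written to C. */
size_t set_difference(const int *A, size_t na, const int *B, size_t nb, int *C)
{
    size_t i = 0, j = 0, nc = 0;

    while (i < na && j < nb) {
        if (A[i] < B[j])
            C[nc++] = A[i++];   /* only in A: keep it */
        else if (A[i] > B[j])
            j++;                /* only in B: skip it */
        else {
            i++;                /* in both: drop it from the result */
            j++;
        }
    }
    while (i < na)              /* B exhausted: the rest of A goes to C */
        C[nc++] = A[i++];
    return nc;
}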
Iterate over each element of A; if it is not in B, add it to a new set C.
It depends on how you want to represent your sets, but if they are just packed bits then you can use bitwise operators, e.g. D = A & ~B; would give you the set difference A-B if the sets fit into an integer type. For larger sets you might use arrays of integer types and iterate, e.g.
for (i = 0; i < N; ++i)
{
    D[i] = A[i] & ~B[i];
}
The following assumes the sets are stored as a sorted container (as std::set does).
There's a common algorithm for merging two ordered lists to produce a third. The idea is that when you look at the heads of the two lists, you can determine which is the lower, extract that, and add it to the tail of the output, then repeat.
There are variants which detect the case where the two heads are equal, and treat this specially. Set intersections and unions are examples of this.
With a set asymmetric difference, the key point is that for A-B, when you extract the head of B, you discard it. When you extract the head of A, you add it to the output unless the head of B is equal, in which case you extract that too and discard both.
Although this approach is designed for sequential-access data structures (and tape storage etc), it's sometimes very useful to do the same thing for a random-access data structure so long as it's reasonably efficient to access it sequentially anyway. And you don't necessarily have to extract things for real - you can do copying and step instead.
The key point is that you step through the inputs sequentially, always looking at the lowest remaining value next, so that (if the inputs have no duplicates) you will spot the matched items. You therefore always know whether your next lowest value to handle is an item from A with no match in B, an item in B with no match in A, or an item that appears in both A and B.
More generally, the algorithm for the set difference depends on the representation of the set. For example, if the set is represented as a bit-vector, the above would be overcomplex and slow - you'd just loop through the vectors doing bitwise operations. If the set is represented as a hashtable (as in the tr1 unordered_set) the above is wrong as it requires ordered inputs.
If you have your own binary tree code that you're using for the sets, one good option is to convert both trees into linked lists, work on the lists, then convert the resulting list to a perfectly balanced tree. The linked-list set-difference is very simple, and the two conversions are re-usable for other similar operations.
EDIT
On the complexity - using these ordered merge-like algorithms is O(n) provided you can do the in-order traversals in O(n). Converting to a list and back is also O(n) as each of the three steps is O(n) - tree-to-list, set-difference and list-to-tree.
Tree-to-list basically does a depth-first traversal, deconstructing the tree as it goes. There's a trick for making this iterative, storing the "stack" in part-handled nodes - changing a left-child pointer into a parent pointer just before you step to the left child. This is a good idea if the tree may be large and unbalanced.
Converting a list to a tree basically involves a depth-first traversal of an imaginary tree (based on the size, known from the start) building it for real as you go. If a tree has 5 nodes, for instance, you can say that the root will be node 3. You recurse to build a two-node left subtree, then grab the next item from the list for that root, then recurse to build a two-node right subtree.
The list-to-tree conversion shouldn't need to be implemented iteratively - recursive is fine as the result is always perfectly balanced. If you can't handle the log n recursion depth, you almost certainly can't handle the full tree anyway.
Implement a set object in C. You can do it using a hash table for the underlying storage. This is obviously a non-trivial exercise, but a few Open Source solutions exist. Then you simply need to add all the elements of A, then iterate over B and remove any that are elements of your set.
The key point is to use the right data structure for the job.
For larger sets I'd suggest sorting the numbers and iterating through them by emulating the code at http://www.cplusplus.com/reference/algorithm/set_difference/, which would be O(N log N); but since the set sizes are so small, the solution given by Brian seems fine even though it's theoretically slower at O(N^2).