I'm dealing with OBJ files in OpenGL 2.1. This is the OBJ spec: http://www.martinreddy.net/gfx/3d/OBJ.spec
According to the spec, a face is specified as vertex/texture/normal, where the three values can differ; that is, the indices into the vertex, texture coordinate, and normal buffers are independent of each other.
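For illustration (made-up values), a face line combines independent 1-based indices into the three arrays, so the slots of a triple need not match:

v 1.0 0.0 0.0
v 0.0 1.0 0.0
v 0.0 0.0 1.0
vt 0.0 0.0
vt 1.0 1.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/1/1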
Question: How can I draw an object that supplies a different index buffer for each of the vertex, texture, and normal arrays?
The sad answer: this is not possible in OpenGL. In OpenGL (and many other such systems, especially Direct3D), a vertex is conceptually the union of all the vertex attributes, like position, normal, or texture coordinates (it's unfortunate that the term vertex is often used for the position attribute alone). So two vertices with the same position but different normal vectors are conceptually two different vertices.
That's the reason why you have to use the same index for all attributes of a vertex. So to draw an .OBJ file with OpenGL or similar libraries, you won't get around processing the data after loading.
The easiest way is to drop indices completely and store the full vertex data for each triangle corner one after the other, gathered using the indices read from the face specifications, giving you 3*F vertices and no index buffer. This, however, is rather inefficient, since you will probably still have many duplicate vertices, even when considering all attributes as a whole.
Another option is to insert each index triple (comprising the vertex, texcoord, and normal indices) into a hash table and map it to a combined index. Whenever an index triple already exists in the table, you reuse its combined index; when it doesn't, you append a new vertex (position, normal, and texcoords, gathered from the file's attribute arrays using the triple) to your final rendering vertex array and insert its position in that array as the new combined index into the hash table.
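A minimal sketch of that hash-table pass (C++; names such as positions, texCoords, and normals are illustrative, and the face indices are assumed to be already converted from the file's 1-based values to 0-based):

#include <cstdint>
#include <unordered_map>
#include <vector>

struct IndexTriple {
    int v, vt, vn;
    bool operator==(const IndexTriple& o) const {
        return v == o.v && vt == o.vt && vn == o.vn;
    }
};

struct TripleHash {
    size_t operator()(const IndexTriple& t) const {
        size_t h = std::hash<int>()(t.v);
        h = h * 31 + std::hash<int>()(t.vt);
        h = h * 31 + std::hash<int>()(t.vn);
        return h;
    }
};

struct Vertex { float px, py, pz, u, v, nx, ny, nz; };

// Returns the combined index for a triple, appending a new combined
// vertex the first time the triple is seen.
uint32_t combinedIndex(const IndexTriple& t,
                       std::unordered_map<IndexTriple, uint32_t, TripleHash>& seen,
                       std::vector<Vertex>& outVertices,
                       const std::vector<float>& positions,  // 3 floats per entry
                       const std::vector<float>& texCoords,  // 2 floats per entry
                       const std::vector<float>& normals)    // 3 floats per entry
{
    auto it = seen.find(t);
    if (it != seen.end()) return it->second;
    Vertex vert = { positions[3*t.v], positions[3*t.v+1], positions[3*t.v+2],
                    texCoords[2*t.vt], texCoords[2*t.vt+1],
                    normals[3*t.vn], normals[3*t.vn+1], normals[3*t.vn+2] };
    uint32_t idx = (uint32_t)outVertices.size();
    outVertices.push_back(vert);
    seen.emplace(t, idx);
    return idx;
}

Calling this once per face corner yields a deduplicated vertex array plus an index list you can hand to glDrawElements.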
I'm assuming the best answer to this question requires using VoronoiDelaunay.jl, but I'm also open to other packages/approaches.
If I have a set of points in 2D (or 3D? Though I'm not sure this is possible with the package VoronoiDelaunay.jl), what is the fastest way to get each of their nearest neighbors in a Voronoi-tessellation sense (e.g. the neighbors within the 'first Voronoi shell')? I am also not really confident about the mathematics behind this or how it relates to Delaunay triangulation.
The data structure doesn't matter too much to me, but let's assume the data is stored in a 2D array of type Array{Float64,2} called my_points, whose size is (nDims, nPoints), where nDims is 2 or 3 and nPoints is the number of points. Let's say I want the output to be an edge list of some kind, e.g. an array of arrays called edge_list (Array{Array{Int64,1}}), where each element i of edge_list gives me the indices of the points that are Voronoi neighbors of the focal point i (whose coordinates are stored in my_points[:,i]).
For every point (the point in the red cell), I want the coordinates/identities of the points that are its Voronoi neighbors (the points in the orange cells). This image is taken from Figure 1b of this paper: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002678
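For context on the mathematics: in 2D, two points are Voronoi neighbors exactly when an edge of the Delaunay triangulation connects them, so given the triangles from any Delaunay package, an edge_list of the kind described above could be assembled along these lines (an illustrative C++ sketch; in 3D, tetrahedra with four indices play the same role):

#include <array>
#include <set>
#include <vector>

// triangles holds index triples into the point set, as produced by a
// Delaunay triangulation; each shared edge marks a Voronoi neighbor pair.
std::vector<std::vector<int>> voronoiNeighbors(
    int nPoints, const std::vector<std::array<int, 3>>& triangles)
{
    std::vector<std::set<int>> nbr(nPoints);  // sets deduplicate repeated edges
    for (const auto& t : triangles)
        for (int a = 0; a < 3; ++a)
            for (int b = 0; b < 3; ++b)
                if (a != b) nbr[t[a]].insert(t[b]);
    std::vector<std::vector<int>> edge_list(nPoints);
    for (int i = 0; i < nPoints; ++i)
        edge_list[i].assign(nbr[i].begin(), nbr[i].end());
    return edge_list;
}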
I'm using HDF5 to store massive sparse arrays in coordinate (COO) format (basically, an M x 3 array which stores the value, x index, and y index of each non-zero element).
This is great for processing the whole dataset in an iterative manner, but I am struggling with random lookups based on index values.
E.g., given a 100x100 matrix, I might store the non-zero elements like so:
[[1,2,3,4,5], // Data values
[13, 14, 55, 67, 80], // X-indices
[45, 12, 43, 55, 12]] // Y-indices
I then wish to get all the data values between 10<x<32 and 10<y<32, for example. With the current format, all I can do is iterate through the x and y index arrays looking for matching indices. This is very, very slow, with multiple reads from disk (my real data typically has a size of 200000x200000, with perhaps 10000000 non-zero elements).
Is there a better way to store large (larger than RAM) sparse matrices and support rapid index-based lookups?
I'm using HDF5, but I'm happy to be pointed in other directions.
First, let's suppose that, as your example hints but you don't state conclusively, you store the elements in order sorted by x first and by y second.
One easy technique for more rapid lookup would be to store an x-index-index, a vector of tuples (following your example this might be [(10,1),(20,null),(30,null),(40,null),(50,3),...]) pointing to locations in the x-index vector at which runs of elements start. If this index-index fits comfortably in RAM you could get away with reading it from disk only once at the start of your computation.
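A sketch of that lookup (C++; names are illustrative, and this variant stores a run start only for x values actually present rather than for fixed buckets):

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

struct RunStart { int64_t x; int64_t offset; };  // offset into the x-index array

// Returns the half-open range [begin, end) of element positions whose
// x value lies in [xLo, xHi), given run starts sorted by x.
std::pair<int64_t, int64_t> xRange(const std::vector<RunStart>& runs,
                                   int64_t xLo, int64_t xHi, int64_t nElements)
{
    auto cmp = [](const RunStart& r, int64_t x) { return r.x < x; };
    auto lo = std::lower_bound(runs.begin(), runs.end(), xLo, cmp);
    auto hi = std::lower_bound(runs.begin(), runs.end(), xHi, cmp);
    int64_t begin = (lo == runs.end()) ? nElements : lo->offset;
    int64_t end   = (hi == runs.end()) ? nElements : hi->offset;
    return {begin, end};  // read only this slice from the file, then filter on y
}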
Of course, this only supports rapid location of x indices, and then a scan for the y. If you need to support rapid location of both you're into the realm of spatial indexing, and HDF5 might not be the best on-disk storage you could choose.
One thought that does occur, though, would be to define a z-order curve across your array and to store the elements in your HDF5 file in that order. To supplement that, you'd want to define a z-index which identifies the location of the start of the elements in each 'tile' of the array. This all begins to get a bit hairy; I suggest you look at the Wikipedia article on z-order curves and do some head scratching.
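The key computation itself is just bit interleaving; a sketch for 16-bit indices (a 200000-wide array would need the analogous 64-bit version):

#include <cstdint>

// Spreads the low 16 bits of v so that bit i moves to bit 2*i.
static uint32_t spreadBits16(uint32_t v)
{
    v &= 0xFFFF;
    v = (v | (v << 8)) & 0x00FF00FF;
    v = (v | (v << 4)) & 0x0F0F0F0F;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

// Morton (z-order) key: x and y bits interleaved. Sorting elements by
// this key keeps each square tile of the array contiguous in the file.
uint32_t mortonKey(uint32_t x, uint32_t y)
{
    return spreadBits16(x) | (spreadBits16(y) << 1);
}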
Finally, in case it's not crystal clear, I've looked at this only from the point of view of reading values out of the file. All the suggestions I've made make creating and updating the file more difficult.
Finally, finally, you're not the first person to think about effective and efficient indexing for sparse arrays and your favourite search engine will throw up some useful resources for your study.
I am tracking particles through a 3D lattice. Each lattice element is labeled with an index corresponding to an unrolled 3D array:
S = x + WIDTH * (y + DEPTH * z)
I am interested in the transition from cell S1 to cell S2. The resulting transition matrix M(S1,S2) is sparsely populated, because particles can only reach nearby cells. Unfortunately, with the indexing of an unrolled 3D array, cells that are geometrically near can have very different indices. For instance, cells that are sitting on top of each other (say at z and z+1) have their indices shifted by WIDTH*DEPTH. Therefore, if I try accumulating the resulting 2D matrix M(S1,S2), S1 and S2 will be very different even though the cells are adjacent. This is a significant problem, because I can't use the usual sparse matrix storage.
At the beginning I tried storing the matrix in coordinate format:
I, J, VALUE
Unfortunately, I then need to loop over the entire index set to find the proper (S1,S2) pair and store the accumulated M(S1,S2).
Usually sparse matrices have some underlying structure, and therefore the indexing is quite straightforward. In this case, however, I am having trouble figuring out how to index my cells.
I would appreciate your help
Thank you in advance,
There are several approaches. Which is best depends on the operations that need to be performed on the matrix.
A good general purpose one is to use a hash table where the key is the index tuple, in your case (i,j).
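A sketch of that accumulation (C++; assumes the cell indices fit in 32 bits, so the pair can be packed into one 64-bit key):

#include <cstdint>
#include <unordered_map>

using Key = uint64_t;
inline Key makeKey(uint32_t s1, uint32_t s2) { return ((uint64_t)s1 << 32) | s2; }

std::unordered_map<Key, double> M;  // sparse transition matrix

void recordTransition(uint32_t s1, uint32_t s2, double weight)
{
    M[makeKey(s1, s2)] += weight;  // expected O(1), no scan over the index set
}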
If neighboring (in the Euclidean sense) matrix elements must be discoverable, then an alternate strategy is a balanced tree with a Morton Order key. The Morton order value of a key (i,j) is just the integers i and j with their bits interleaved. You should quickly see that index tuples close to each other in the index 2-space are also close in linear Morton order.
Of course if you are building the matrix all at once, after which it's immutable, then you can build the key-value pairs in an array rather than a hash table or balanced tree, sort them (lexicographically for (i,j) pairs and linearly for Morton keys) and then do reads with simple binary search.
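A sketch of that immutable layout (C++; shown with lexicographic (i,j) keys, for which Morton keys could be substituted):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Entry { uint64_t key; double value; };  // key packs (i, j)

inline uint64_t packIJ(uint32_t i, uint32_t j) { return ((uint64_t)i << 32) | j; }

// Sort once after the matrix is fully built...
void finalize(std::vector<Entry>& entries)
{
    std::sort(entries.begin(), entries.end(),
              [](const Entry& a, const Entry& b) { return a.key < b.key; });
}

// ...then reads are a simple binary search.
double lookup(const std::vector<Entry>& entries, uint32_t i, uint32_t j)
{
    uint64_t k = packIJ(i, j);
    auto it = std::lower_bound(entries.begin(), entries.end(), k,
        [](const Entry& e, uint64_t key) { return e.key < key; });
    return (it != entries.end() && it->key == k) ? it->value : 0.0;
}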
I'm having some trouble building a graph structure. I know how to build singly and doubly linked lists, but I want to construct a graph structure like the one pictured on this site: http://www.cs.sunysb.edu/~algorith/files/graph-data-structures.shtml
You have three common solutions:
an adjacency matrix, in which you store an N*N matrix (where N is the number of vertices) and matrix[x][y] holds a value if x has an edge to y, 0 otherwise
an edge list, in which you just keep a long list of edges, so that if the pair (x,y) is in the list, then there is an edge from x to y
an adjacency list, in which you have a list of vertices, and every vertex x has a list of the vertices to which x has an edge (a minimal sketch follows at the end of this answer)
Each approach is good or bad according to:
the space required
the computational complexity of specific operations, which varies between representations
So, according to what you need to do with the graph, you can choose any of those. If you want to know the specific characteristics of the above implementations, take a look at my answer to another SO question.
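For the third option, a minimal adjacency-list sketch (C++; vertices are numbered 0..n-1 and the names are illustrative):

#include <vector>

struct Graph {
    std::vector<std::vector<int>> adj;  // adj[x] holds the vertices x points to
    explicit Graph(int n) : adj(n) {}
    void addEdge(int x, int y)
    {
        adj[x].push_back(y);            // directed edge x -> y
        // for an undirected graph, also: adj[y].push_back(x);
    }
};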
I am working on a project which will be using large datasets (both 2D and 3D) which I will be turning into triangles, or tetrahedra, in order to render them.
I will also be performing calculations on these tris/tets. Which tris/tets to use for each calculation depends on the greatest and smallest values of their vertices.
So I need to sort the tris/tets in order of their greatest valued vertex.
--
I have tried quicksort, and binary insertion sort. Quicksort so far offers the quickest solution but it is still quite slow due to the size of the data sets.
I was thinking along the lines of a bucket/map sort performed while creating the tris/tets in the first place: a bucket for each distinct greatest-vertex value encountered, holding pointers to all the triangles whose greatest-valued vertex has that value.
This approach should be linear in time, but obviously requires more memory. That is not an issue, but my programming language of choice is C, and I'm not entirely sure how I would go about coding such a thing.
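A rough sketch of that idea (in C++ for brevity; in C the map would become a hash table or sorted array of bucket lists, and all names here are illustrative):

#include <algorithm>
#include <map>
#include <vector>

struct Tri { double v[3]; };  // the value at each of the three vertices

double greatestVertex(const Tri& t) { return std::max({t.v[0], t.v[1], t.v[2]}); }

int main()
{
    std::vector<Tri> tris;  // filled in while building the tris/tets
    std::map<double, std::vector<Tri*>> buckets;  // keys kept in sorted order
    for (Tri& t : tris)
        buckets[greatestVertex(t)].push_back(&t);
    // Iterate from greatest to smallest greatest-vertex value:
    for (auto it = buckets.rbegin(); it != buckets.rend(); ++it)
        for (Tri* t : it->second) { /* process *t */ }
}

Note that std::map inserts in O(log n) rather than O(1); a truly linear bucket pass would need keys that can be quantized to small integers for direct array indexing.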
So my question to you is: how would you go about storing the triangles/tets so that you can iterate through them, from the triangle whose greatest vertex value is the largest in the entire data set all the way down to the triangle with the smallest greatest vertex value? :)
Can't you just store them in a binary search tree as you generate them? That would keep them in order and easily searchable (O(log(n)) for both insertion and lookup).
You could use a priority queue based on a heap data structure. This should also get you O(log(n)) insertion and extraction.
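A tiny sketch of that (C++; illustrative names):

#include <queue>
#include <vector>

struct Tri { double maxV; /* greatest of the three vertex values */ };

struct ByMax {
    bool operator()(const Tri& a, const Tri& b) const { return a.maxV < b.maxV; }
};

// top() yields the triangle with the greatest maxV; push/pop are O(log n).
std::priority_queue<Tri, std::vector<Tri>, ByMax> pq;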