What does glDrawElements() expect the normals array to contain?

I'm writing a decoder for MilkShape 3D models.
I load the contents into a vertex array and a face index array (as std::vector), then use glDrawElements() to render it; so far so good.
But the problem is with the normals array. In what order does OpenGL expect the normals? The MilkShape 3D file contains, following each face's (triangle's) vertex indices, three float[3] normals, which are all the same. But if I simply push_back() what I read into the normals array, OpenGL won't apply lighting correctly.
So I think I'm messing up the order. How do I do it right?
Thanks for reading.

OpenGL assumes the normals are indexed exactly like the vertex positions. One must understand that a vertex itself is a vector of attribute vectors, or in other words, a vector of all the attribute values (position, normal, colour, texture coordinate(s), etc.). The glDrawElements index array addresses the array of vertices, where each vertex is such a higher-dimensional vector.
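To make that concrete, here is a minimal fixed-function sketch (the data is made up): a single index array addresses both attribute arrays in lockstep, so normals[i] must belong to positions[i].

/* Minimal sketch: one triangle, one normal per vertex. The one index
 * array picks positions[i] AND normals[i] at the same time. */
GLfloat positions[] = { 0,0,0,  1,0,0,  0,1,0 };  /* x,y,z per vertex   */
GLfloat normals[]   = { 0,0,1,  0,0,1,  0,0,1 };  /* same order/indices */
GLushort indices[]  = { 0, 1, 2 };

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, positions);
glNormalPointer(GL_FLOAT, 0, normals);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, indices);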
Now what could happen is that MilkShape mixes up the face winding and gives you normals, some of which have been flipped into the opposite direction (pointing inwards instead of outwards). I don't know how it's done in MilkShape, but in Blender there is a "Recalculate Normals" function (accessed by the CTRL+N hotkey) that fixes this.
If you don't want to fix the normals, you must enable double-sided lighting (this has a performance impact):
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

What you describe sounds like MilkShape3D gives you per-face normals. OpenGL expects per-vertex normals.
So you need to process your data to generate per-vertex normals from per-face normals. There's a lot of literature on the web on how to do this; one common approach is sketched below.
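A minimal sketch of that approach (the function and parameter names are made up, and plain float arrays are assumed): accumulate each face's normal into all three of its vertices, then normalize the sums.

#include <math.h>

/* Hypothetical layout: numFaces triangles, each with three vertex indices
 * and one face normal; output is one normal per vertex. */
void faceToVertexNormals(const unsigned short *faceIndices, /* 3 per face  */
                         const float *faceNormals,          /* 3 per face  */
                         float *vertexNormals,              /* out: 3 per vertex */
                         int numFaces, int numVertices)
{
    int i, k;
    for (i = 0; i < 3 * numVertices; ++i)
        vertexNormals[i] = 0.0f;

    /* accumulate: every vertex sums the normals of all faces using it */
    for (i = 0; i < numFaces; ++i)
        for (k = 0; k < 3; ++k) {
            int v = faceIndices[3 * i + k];
            vertexNormals[3 * v + 0] += faceNormals[3 * i + 0];
            vertexNormals[3 * v + 1] += faceNormals[3 * i + 1];
            vertexNormals[3 * v + 2] += faceNormals[3 * i + 2];
        }

    /* normalize the sums */
    for (i = 0; i < numVertices; ++i) {
        float *n = &vertexNormals[3 * i];
        float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
    }
}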

I was indeed messing up the order of the normals.
A MilkShape 3D model file's structure can be approximated as:
a vertex list, one float[3] per vertex
a triangle list: three vertex indices as short[3], plus three normals as float[3] each
You need to store each normal at normals[currentFace.index123]. In plain English: the normals' indices in the normals array must correspond to the vertex indices in the vertices array.
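In code, that reordering might look like this (a sketch; the struct and its field names are hypothetical stand-ins for the triangle record described above):

/* Hypothetical triangle record, mirroring the layout sketched above. */
typedef struct {
    unsigned short vertexIndices[3]; /* indices into the vertex array        */
    float          normals[3][3];    /* one normal per corner, as in the file */
} Triangle;

/* Scatter the file's per-corner normals into a per-vertex array so that
 * normals[i] belongs to vertices[i], as glDrawElements expects. */
void reorderNormals(const Triangle *tris, int numTris,
                    float *normals /* out: 3 floats per vertex */)
{
    int i, k;
    for (i = 0; i < numTris; ++i)
        for (k = 0; k < 3; ++k) {
            int v = tris[i].vertexIndices[k];
            normals[3 * v + 0] = tris[i].normals[k][0];
            normals[3 * v + 1] = tris[i].normals[k][1];
            normals[3 * v + 2] = tris[i].normals[k][2];
        }
}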

How to pass variable length float array to GPUImageFilter Shader?

I want to pass my touch points to GPUImage (iOS)
The points can be translated to a float array, but the length of the array is variable.
However, I must declare the length of the array in the shader.
Disclaimer: not a glsl expert
AFAIK you can't have variable-length arrays like what you want. This is a GLSL limitation, not a GPUImage one, so it's not a quick fix: the work you'll be doing will be with textures or GLSL, not GPUImage.
Here's another stack overflow post about glsl: GLSL indexing into uniform array with variable length
There are two solutions that could work:
1) Limit the number of points. It's reasonable to limit touches, but in practice it may be hard to narrow them down if there are too many. You could pass these points in a fixed-length array or as individual constants (one for each point). If you really care about scalability with the number of points, this isn't a great method, because in your shader you'll have to check each of these points and perform the relevant computation, which could be expensive when performed for the entire image (again, depending on your use case). If for each pixel you're checking the distance to a point, this could be too expensive.
2) Input your points in a texture. You can either have two 1D textures with the x and y coordinates and then treat them like an array (then go to option 1), or you can create a 2D texture, all 0, and set parts to 1 where there are touches. The 2D texture can have a lower resolution than the actual screen. This method could be a lot less work for the shader if you're doing something simple like turning finger touches black.
Your choice depends largely on what you're doing with the points in the shader.
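For option 2, the upload could look roughly like this (a sketch with invented names; the GPUImage integration itself is omitted): mark touches in a small byte grid and upload it as a single-channel texture the shader can sample.

#define GRID_W 64
#define GRID_H 64
static unsigned char grid[GRID_W * GRID_H] = { 0 };

/* Mark a touch given (x, y) in [0,1] normalized screen coordinates. */
void markTouch(float x, float y)
{
    int gx = (int)(x * (GRID_W - 1));
    int gy = (int)(y * (GRID_H - 1));
    grid[gy * GRID_W + gx] = 255;   /* sampled as 1.0 in the shader */
}

/* Upload the grid as a luminance texture (valid in OpenGL ES 2.0). */
void uploadTouchTexture(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, GRID_W, GRID_H, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, grid);
}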

Passing varying array from vertex to geometry shader on Mac

I'd like to be able to pass an arbitrary number of varying values per vertex from the vertex shader to the geometry shader. I know that OpenGL has no dynamic arrays, so the number should be specified at compile time. The whole thing should run on an Apple MacBook with a NVIDIA GeForce 9400M graphics card and a driver that only offers OpenGL 2.1, along with some extensions.
The problem here seems to be that the geometry shader takes its input in the form of an array with one element per vertex. As far as I can tell, there are no arrays of arrays available in my setup, and no arrays of interface blocks containing arrays either. So far, the best solution I could come up with is specifying a number of variables to pass this information, extracted from an array in the vertex shader and turned back into an array with a certain stride length in the geometry shader. That way, access to the values can still be performed using computed indices.
Is there a better, more elegant way?
From EXT_geometry_shader4 specification:
User-defined varying variables can be declared as arrays in the vertex shader. This means that those, on input to the geometry shader, must be declared as two-dimensional arrays. See sections 4.3.6 and 7.6 of the OpenGL Shading Language Specification for more information.
For example, in the vertex shader, you may specify
varying vec2 value[2];
and in the geometry shader, this becomes a two-dimensional array, e.g. with triangles as input primitives
varying in vec2 value[3][2];
Note the counterintuitive order of array indices! Also beware that the array dimensions must be specified explicitly, using an integer constant. Using a non-constant integer variable or gl_VerticesIn yields a compiler error. Both remarks have been tested on the very MacBook Pro model mentioned in the question.
There are reasons why core OpenGL's geometry shaders don't work the way EXT_geometry_shader4 does. This is one of them. EXT_geometry_shader4 doesn't allow arrays of inputs because that would mean allowing arrays of arrays of values. And GLSL can't handle that (well, until recently, but that's only 2 months old).
Interface blocks can have arrays in them. Your problem is that GLSL 1.20 doesn't have interface blocks.
There's not much you can do besides use different variables and manually unroll all your loops. You could write a function that takes an integer value and conditionally returns one of the different values that correspond to that index, but that's about the best you're going to get with old-school GLSL.
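A sketch of that selector-function workaround, with the GLSL embedded as a C string literal (all names are invented, and triangles are assumed as the input primitive):

/* Hypothetical geometry shader fragment (GLSL 1.20 + EXT_geometry_shader4):
 * separate varyings stand in for the array, and a helper picks one by index. */
static const char *selectorGLSL =
    "varying in vec2 value0[3];\n"           /* what would have been value[i][0] */
    "varying in vec2 value1[3];\n"           /* what would have been value[i][1] */
    "vec2 getValue(int vertex, int i) {\n"
    "    if (i == 0) return value0[vertex];\n"
    "    return value1[vertex];\n"
    "}\n";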

glVertexAttribPointer, interleaved elements and performance / cache friendliness

So, in the course of writing a model loader for a 3D scene I'm working on, I've decided to pack the vertex, texture and normal data like so:
VVVVTTTNNN
for each vertex, where V = vertex coordinate, T = UV coordinate, and N = normal coordinate. When I pass this data on to the vertex shader for my scene, I make three glVertexAttribPointer calls, like so:
/* note: the stride argument is in bytes, i.e. 10 floats per vertex */
glVertexAttribPointer(ATTRIB_VERTEX, 4, GL_FLOAT, GL_FALSE, 10 * sizeof(GLfloat), group->vertices.data);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, 10 * sizeof(GLfloat), group->normals.data);
glEnableVertexAttribArray(ATTRIB_NORMAL);
glVertexAttribPointer(ATTRIB_UV_COORDINATES, 3, GL_FLOAT, GL_FALSE, 10 * sizeof(GLfloat), group->uvcoordinates.data);
glEnableVertexAttribArray(ATTRIB_UV_COORDINATES);
Each of the group pointers being passed refer to the beginning position in the shared vertex data block where that vertex type starts:
group->vertices.data == data
group->uvcoordinates.data == &data[4]
group->normals.data == &data[7]
Part of the reason for me interleaving this data was to program for cache friendliness and minimize data being sent to the card. (Note: this is not for a realistic performance bottleneck. I'm investigating the optimization because I want to learn more about programming to address these sorts of concerns.)
However, for the life of me, I can't imagine how GL would be able to infer that the 3 different pointers refer to offset positions within the same larger data block, and thereby make the necessary optimization to avoid copying the data once it has already been copied. Furthermore, since I'm only ensuring data locality in system memory (and don't really have any guarantees on how that data is going to be organized on the GPU), I'm only really optimizing for the case where I access any of these vertices outside of GL. Is that right? Are these optimizations mostly useless, or will providing data in this manner help minimize the data transfer to the GPU / prevent cache misses when iterating over vertex data in the vertex shader?
OpenGL is just an API; the intelligence lies in the driver. Anyway, the problem is actually rather simple to implement: for every vertex attribute you have a starting memory address, and when glDrawArrays or glDrawElements is called, the driver looks for the largest index used. That defines the upper bound of the range.
Then you sort the vertex attributes' starting addresses, and for each address check whether its range overlaps with any other vertex attribute's range. You find the contiguous regions and copy those.
In the case of Vertex Buffer Objects it's even simpler, since the data has already been copied to OpenGL, ready for processing.
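For illustration, the questioner's 10-float VVVVTTTNNN layout as a single VBO might be set up like this (a sketch; numVertices and data are assumed to exist in the surrounding code, and note that stride and offsets are given in bytes):

GLsizei stride = 10 * sizeof(GLfloat);  /* 4 position + 3 UV + 3 normal floats */
GLuint vbo;

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, numVertices * stride, data, GL_STATIC_DRAW);

/* offsets into the bound buffer, in bytes */
glVertexAttribPointer(ATTRIB_VERTEX,         4, GL_FLOAT, GL_FALSE, stride,
                      (const GLvoid *)0);
glVertexAttribPointer(ATTRIB_UV_COORDINATES, 3, GL_FLOAT, GL_FALSE, stride,
                      (const GLvoid *)(4 * sizeof(GLfloat)));
glVertexAttribPointer(ATTRIB_NORMAL,         3, GL_FLOAT, GL_FALSE, stride,
                      (const GLvoid *)(7 * sizeof(GLfloat)));
glEnableVertexAttribArray(ATTRIB_VERTEX);
glEnableVertexAttribArray(ATTRIB_UV_COORDINATES);
glEnableVertexAttribArray(ATTRIB_NORMAL);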

Passing int array to fragment shader

I'm attempting to plot an iterative function in OpenGL ES. An array of ints is being updated with how often a given pixel is hit by the iterative function. I'd like to pass this density array to a fragment shader and use it to plot the result on a simple quad covering the whole screen.
My question is: can I pass this array directly to the shader as a uniform and generate pixels by using gl_FragCoord to look up the density for the given position
or
should I rather use the array to create a texture with one channel using GL_LUMINANCE and pass that to the shader?
You have a limited number of uniforms available, and the indexing might be troublesome, since not all GPUs support non-constant indexing. An Nx1 2D texture doesn't have any of these issues, but it will return values in the [0, 1] range. You can scale these values back to obtain the original integer and use it.
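A sketch of the texture route (function and parameter names are invented): pack the counts into bytes, upload them as a luminance texture, and let the shader multiply the sampled [0, 1] value by the maximum density to recover the count.

#include <stdlib.h>

void uploadDensity(const int *density, int w, int h, GLuint tex)
{
    int i, maxDensity = 1;
    unsigned char *bytes = malloc(w * h);

    /* scale the counts into [0, 255] */
    for (i = 0; i < w * h; ++i)
        if (density[i] > maxDensity) maxDensity = density[i];
    for (i = 0; i < w * h; ++i)
        bytes[i] = (unsigned char)(255 * density[i] / maxDensity);

    /* the shader multiplies the sampled value by maxDensity to undo this */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, bytes);
    free(bytes);
}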

Recognizing tetris pieces in C

I have to make an application that recognizes, inside a black and white image, a Tetris piece given by the user. I read the image to be analyzed into an array.
How can I do something like this using C?
Assuming that you already loaded the images into arrays, what about using regular expressions?
You don't need exact shape matching, only approximate matching, so why not give it a try!
Edit: I downloaded your doc file. You must identify a random pattern among random figures in a 2D array, so regex isn't suitable for this problem; let's say that's the bad news. The good news is that your homework is not exactly image processing, and it's much easier.
It's your homework so I won't create the code for you but I can give you directions.
You need a routine that can create a new piece by rotating the original pattern/piece. (Note: by "piece" I mean the whole 4x4 square, all of its cells.)
You need a routine that checks whether a piece matches an area of the 2D image at position (x, y); the matching area would have corners (x-2, y-2) and (x+1, y+1). (Both routines are sketched after this list.)
You search by checking every image position (x,y) for a match.
Since you must use parallelism, you can create 4 threads and assign each thread a different rotation to search for.
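Here is a minimal sketch of the two routines, assuming the image is an h-by-w array of 0/1 chars and a piece is a 4x4 pattern of 0/1 chars (all names are invented):

/* rotate a 4x4 piece 90 degrees clockwise into 'out' */
void rotatePiece(const char in[4][4], char out[4][4])
{
    int r, c;
    for (r = 0; r < 4; ++r)
        for (c = 0; c < 4; ++c)
            out[c][3 - r] = in[r][c];
}

/* does the piece match the 4x4 area with corners (x-2, y-2), (x+1, y+1)? */
int matchAt(const char *image, int w, int h, int x, int y,
            const char piece[4][4])
{
    int r, c;
    if (x - 2 < 0 || y - 2 < 0 || x + 1 >= w || y + 1 >= h)
        return 0;
    for (r = 0; r < 4; ++r)
        for (c = 0; c < 4; ++c)
            if (image[(y - 2 + r) * w + (x - 2 + c)] != piece[r][c])
                return 0;
    return 1;
}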
You might not want to implement that from scratch (unless required, of course) ... I'd recommend looking for a suitable library. I've heard that OpenCV is good, but never done any work with machine vision myself so I haven't tested it.
Search for connected components (e.g. using depth-first search; you might want to avoid recursion if efficiency is an issue and use your own stack instead). The largest connected component should be your Tetris piece. You can then analyze it further (using the shape, the size, or some kind of border description).
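A sketch of that idea with an explicit stack instead of recursion (4-connectivity; names are invented; the caller zeroes 'labels' and passes a seed pixel that is set):

#include <stdlib.h>

void floodFill(const char *image, int *labels, int w, int h,
               int sy, int sx, int label)
{
    static const int dy[4] = { -1, 1, 0, 0 };
    static const int dx[4] = { 0, 0, -1, 1 };
    int *stack = malloc(w * h * sizeof(int)); /* each pixel pushed at most once */
    int top = 0;

    labels[sy * w + sx] = label;
    stack[top++] = sy * w + sx;

    while (top > 0) {
        int p = stack[--top], y = p / w, x = p % w, i;
        for (i = 0; i < 4; ++i) {
            int ny = y + dy[i], nx = x + dx[i];
            if (ny >= 0 && ny < h && nx >= 0 && nx < w) {
                int q = ny * w + nx;
                if (image[q] && !labels[q]) {   /* set pixel, not yet visited */
                    labels[q] = label;
                    stack[top++] = q;
                }
            }
        }
    }
    free(stack);
}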
Looking at the shapes given for tetris pieces in Wikipedia, called "I,J,L,O,S,T,Z", it seems that the ratios of the sides of the bounding box (easy to find given a binary image and C) reveal whether you have I (4:1) or O (1:1); the other shapes are 2:3.
To detect which of the remaining shapes you have (J, L, S, T, or Z), it looks like you could collect the lengths and positions of the shape's edges that fall on the bounding box's edges. Thus, T would show 3 and 1 along the 3-long sides, and 1 and 1 along the 2-long sides. Keeping track of the positions helps distinguish J from L, and S from Z.
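A sketch of the bounding-box test (assuming a single piece of set pixels in an h-by-w 0/1 image, with square cells of uniform scale; names are invented):

/* Find the bounding box of the set pixels; the side ratio picks
 * I (4:1), O (1:1), or "one of J,L,S,T,Z" (2:3 box). */
const char *classifyByBox(const char *image, int w, int h)
{
    int x, y, minX = w, minY = h, maxX = -1, maxY = -1;
    int bw, bh, longSide, shortSide;

    for (y = 0; y < h; ++y)
        for (x = 0; x < w; ++x)
            if (image[y * w + x]) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
    if (maxX < 0) return "empty";

    bw = maxX - minX + 1;
    bh = maxY - minY + 1;
    longSide  = bw > bh ? bw : bh;
    shortSide = bw > bh ? bh : bw;

    if (longSide == 4 * shortSide) return "I";  /* 4:1 box */
    if (longSide == shortSide)     return "O";  /* 1:1 box */
    return "J/L/S/T/Z";                         /* 2:3 box: inspect edges */
}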
