I have a problem. I'm rotating an object on screen with OpenGL ES 2.0 on a Raspberry Pi. Part of the rotation seems to work fine, but the other part completely flattens the object out. I have tried two rotation functions so far with the exact same result. The depth buffer is also enabled and set up. I'm starting to think my projection matrix might be the problem, but I'm not sure. There's too much code to post right now; I will update this question with code once someone can narrow down where this behavior could come from.
Here's a video of the aforementioned problem:
https://www.youtube.com/watch?v=3mDMG7Eypj4
Thanks in advance.
So I finally figured out my issue... I had written the matrix multiplication function myself. The problem is that I was assigning products back into one of the original matrices, which warped the results further down the rows.
void matrix_multiply(GLfloat *matrix1, GLfloat *matrix2) {
    matrix1[0] = matrix1[0] * matrix2[0] + matrix1[4] * matrix2[1] ... // etc.
    [...]
    // Bug: this line reads matrix1[0], which was already overwritten above.
    matrix1[4] = matrix1[0] * matrix2[4] + ... // etc.
}
As you may have noticed, by the time matrix1[4] is computed, the value of matrix1[0] has already changed and been reassigned.
Rookie mistake.
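For anyone hitting the same thing, the fix is to accumulate the product into a temporary and copy it back, so no entry of matrix1 is read after being overwritten. A sketch of a corrected version (assuming 4x4 column-major matrices, as OpenGL uses; the function name matches the snippet above but this is my own implementation):

```c
#include <string.h>

/* Multiply two 4x4 column-major matrices in place: matrix1 = matrix1 * matrix2.
 * The result goes into a temporary first, so matrix1 is never read after
 * being overwritten -- the bug in the original version. */
void matrix_multiply(float *matrix1, const float *matrix2) {
    float tmp[16];
    for (int col = 0; col < 4; col++) {
        for (int row = 0; row < 4; row++) {
            float sum = 0.0f;
            for (int k = 0; k < 4; k++)
                sum += matrix1[k * 4 + row] * matrix2[col * 4 + k];
            tmp[col * 4 + row] = sum;
        }
    }
    memcpy(matrix1, tmp, sizeof tmp);
}
```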
I want to create my own C function to interpolate some data points and find an accurate minimum. (The overall project is audio frequency tuning, and I'm using the YIN algorithm, which is working well.) I am implementing this on a K22F ARM DSP chip, so I want to minimize floating-point multiplies as much as possible so the interpolation can run inside the interrupt while the main function pushes to a display and the sharp/flat (#/b) indicators.
I have implemented the algorithm and found the integer minimum, and now need to interpolate. Currently I am trying to interpolate parabolically using the 3 points I have, and I have found a method that works within a small margin of error. Most interpolation functions, however, seem to be defined only between two points.
It seems as though the secant method on this page would work well for my application. However, I am at a loss for how to combine the 3 points with this 2-point method. Maybe I am going about this the wrong way?
Can someone help me implement the secant method of interpolation?
I found some example code that gets the exact same answer as my code.
Example code:
betterTau = minTau + (fc - fa) / (2 * (2 * fb - fc - fa));
My code:
newpoint = b + ((fa - fb)*(c-b)^2 - (fc - fb)*(b-a)^2) / ...
           (2*((fa-fb)*(c-b) + (fc-fb)*(b-a)))
where the x values of the points are a, b, and c, and the function values at each point are fa, fb, and fc, respectively.
Currently I am just simulating in MATLAB before I put it on the board, which is why the syntax isn't C. Mathematically, I do not see how these two equations are equivalent.
Can someone explain to me how these two functions are equivalent?
Thank you in advance.
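For what it's worth, the two formulas coincide precisely because YIN's tau values are one sample apart: substituting b - a = c - b = 1 into the general parabola-vertex formula, the squared terms drop out and it collapses algebraically to the example code's form. A quick numerical check in C (the sample values are arbitrary, chosen only for illustration):

```c
/* Example code's form: assumes the three points are one sample apart. */
double vertex_unit_spacing(double b, double fa, double fb, double fc) {
    return b + (fc - fa) / (2.0 * (2.0 * fb - fc - fa));
}

/* General parabola-vertex form, as in the MATLAB code. */
double vertex_general(double a, double b, double c,
                      double fa, double fb, double fc) {
    return b + ((fa - fb) * (c - b) * (c - b) - (fc - fb) * (b - a) * (b - a))
             / (2.0 * ((fa - fb) * (c - b) + (fc - fb) * (b - a)));
}
```

With unit spacing the numerator of the general form reduces to (fa - fb) - (fc - fb) = fa - fc and the denominator to 2*(fa - 2*fb + fc), which is the same expression as the example code up to a sign flip in both numerator and denominator.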
I would like to produce a realistic 3D demonstration of a ball rolling down a Conical Helix path. The reference that has helped me get close to a solution can be found here. [I am creating my solution in Actionscript 3, using Stage3D, but would be happy to have any suggested coding solutions in other languages, using other 3D frameworks, with which you may be more familiar.]
As I entered the title for my posting, the system pointed me to a wealth of "Questions that may already have your answer", and that was helpful, and I did check each of them out. Without wanting to hijack an existing thread, I should say that this one includes a good deal of very helpful commentary on the general subject, but does not get to the specific challenges I have been unable to resolve.
Using the cited reference, I am happy with this code snippet that traces the path I would like the ball to follow. [N.B. My reference, and most other math-based references, treat Z as being up-down; my usage, however, is the more usual 3D graphics of Y for up-down.]
This code is executed for each frame.
ft += 0.01; // Where ft is a global Number.
var n:Number = Math.pow(0.5, (0.15 * ft));
// Where s is a constant used to scale the overall path.
obj.moveTo(
    (s * n * Math.cos(2.0 * ft)), // x
    (s * n),                      // y (up-down)
    (s * n * Math.sin(2.0 * ft))  // z
);
The ball follows a nice path, and owing to the lighting and other shader code, a very decent effect is viewed in the scene.
What is not good about my current implementation is that the ball does not appear to be rolling along that path as it moves from point to point. I am not using any physics engine, and am not seeking any solution dealing with collisions, but I would like the ball to correctly demonstrate what would be happening if the movement were due to the ball rolling down a track.
So, to make the challenge a little clearer, let's say that the ball is a billiard ball with the stripe and label for #15. In that case, the visual result should be that the number 15 turns head over heels, but, as you can probably surmise from the name of my obj.moveTo() function, that only changes the position of the 3D object, not its orientation.
That, finally, brings me to the specific question/request. I have been unable to discover what rotation changes must be synchronized with each positional change in order to correctly demonstrate the way the billiard ball would appear if it rolled from point 1 to point 2 along the path.
Part of the solution appears to be:
obj.setRotation ((Math.atan2 (Math.sin (ft), Math.cos (ft))), Vector3D.Y_AXIS);
but that is still not correct. I hope there is some well-known formula that I can add to my render code.
I am implementing Catmull-Clark subdivision on a mesh using OpenGL. I can draw my mesh just fine, and I do so using a vertex array.
The array that I draw is called extraVert1[].
In order to implement this subdivision, I have to do operations on certain points besides just the vertices used to draw. I have implemented the standard half-edge data structure in order to iterate through the edges of the mesh and generate the edge-points needed to subdivide.
The issue is this: when I calculate edge points, I store them into a vertex array and make the corresponding face point to each edge-point vertex (each face points to 4 of them).
The code snippet is as follows (edgeAry1[] is the array of half-edges):
edgePoint1[j].x = (edgeAry1[i].end->x
                 + edgeAry1[i].next->next->next->end->x
                 + edgeAry1[i].heFace->center.x
                 + edgeAry1[i].opp->heFace->center.x) / 4.0;
edgePoint1[j].y = (edgeAry1[i].end->y
                 + edgeAry1[i].next->next->next->end->y
                 + edgeAry1[i].heFace->center.y
                 + edgeAry1[i].opp->heFace->center.y) / 4.0;
edgePoint1[j].z = (edgeAry1[i].end->z
                 + edgeAry1[i].next->next->next->end->z
                 + edgeAry1[i].heFace->center.z
                 + edgeAry1[i].opp->heFace->center.z) / 4.0;
faceAry1[i].e = &edgePoint1[j];
j++;
When this code executes (it loops through for each face in faceAry1[]), I get random edges and triangles around the center of my mesh, even though I never make any changes to extraVert1[], the array I draw from.
I thought this had something to do with my pointers, so I individually commented out each operand, and none of them changed anything. I then set every line equal to just 4.0. This gave me a single extra triangle with points of approximately (0,0,0), (4,0,0), and (4,4,4).
When debugging, I stepped through the extraVert1[] array both before and after this section of code. It remained unchanged. My draw code is (extraVert1 has size 408):
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, extraVert1);
glDrawArrays(GL_QUADS, 0, 408);
glDisableClientState(GL_VERTEX_ARRAY);
Again, I'm not modifying the drawing array extraVert1[] in any way, so I'm completely stumped as to why this is occurring. I'm sure I'll need to provide more information if anyone is interested in answering, so feel free to ask for it. I'm going to keep working at it for now until then.
UPDATE
It seems that using a different array large enough to store these values (in this case, extraVert2[]) avoids the problem. The problem seems to be one of overwriting memory, but I'm not sure exactly how. When my arrays are declared like so:
face faceAry1[34];
float extraVert1[408];
halfEdge edgeAry1[136];
vertex edgePoint1[136];
vertex extraVert2[1632];
I can store the information in extraVert2[] with no issues. If I flip the order of extraVert2[] and edgePoint1[], I get the same issue as before. Anyone know what causes this?
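The symptom described above (reordering adjacent arrays moves the corruption around) is the classic signature of an out-of-bounds write clobbering whatever happens to sit next in memory. A minimal way to catch it, assuming the declared sizes above (the loop shape and function name here are hypothetical, for illustration only), is to check the index before every store:

```c
#include <stdio.h>

#define NUM_HALFEDGES 136

typedef struct { float x, y, z; } vertex;

static vertex edgePoint1[NUM_HALFEDGES];

/* Guarded store into edgePoint1[]. If j is incremented more times than
 * expected (e.g. once per half-edge of every face rather than once per
 * edge), the write would silently spill into the neighbouring array;
 * this check turns that silent corruption into a visible error. */
int store_edge_point(int j, float x, float y, float z) {
    if (j < 0 || j >= NUM_HALFEDGES) {
        fprintf(stderr, "edgePoint1 index %d out of bounds\n", j);
        return -1;  /* caught instead of corrupting memory */
    }
    edgePoint1[j].x = x;
    edgePoint1[j].y = y;
    edgePoint1[j].z = z;
    return 0;
}
```

Running the mesh loop with a guard like this (or under a tool such as Valgrind) should point at exactly which index overruns.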
While I don't know how 3D rendering works, random edges and triangles usually come from uninitialized variables or dangling pointers. Those two are the usual cause of unexpected random behaviour in my programs. I too think this has something to do with your pointers, but as I have no knowledge of 3D rendering it could be related to some 3D-specific context as well.
I found this link, which explains a little about PCF shadow mapping. I looked through the code sample provided and I cannot work out what the offset array is. I'm assuming it is an array of float2, and I know that it will offset the pixel to sample the neighbouring ones. I just can't figure out what the offsets should be set to.
Link: http://www.gamerendering.com/2008/11/15/percentage-closer-filtering-for-shadow-mapping/
Here is the code
float result;
result = shadow2DProj(shadowMap,texCoord+offset[0]);
result += shadow2DProj(shadowMap,texCoord+offset[1]);
result += shadow2DProj(shadowMap,texCoord+offset[2]);
result += shadow2DProj(shadowMap,texCoord+offset[3]);
result /= 4.0; // now result will hold the average shading
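In case it helps: a common choice (my assumption here, not something stated in the linked article) is one-texel diagonal offsets, i.e. each offset is plus or minus one texel in u and v, where a texel is 1/width by 1/height of the shadow map. Sketched in C with a hypothetical float2 struct:

```c
typedef struct { float x, y; } float2;

/* Fill offset[] with four one-texel diagonal offsets for a shadow map of
 * the given size. These are typical 2x2 PCF sample offsets; larger kernels
 * just use more taps at multiples of the texel size. */
void make_pcf_offsets(int width, int height, float2 offset[4]) {
    float dx = 1.0f / (float)width;   /* size of one texel in u */
    float dy = 1.0f / (float)height;  /* size of one texel in v */
    offset[0] = (float2){ -dx, -dy };
    offset[1] = (float2){  dx, -dy };
    offset[2] = (float2){ -dx,  dy };
    offset[3] = (float2){  dx,  dy };
}
```

In the shader you would bake these into the offset array (or pass the texel size as a uniform), so for a 512x512 shadow map each offset component is 1/512.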
I must just be missing something simple
Any help is appreciated
Thank you,
Mark
I notice you are using shadow2DProj; as far as I am aware this is a GLSL function, and the equivalent in HLSL/Cg is tex2Dproj. If you are getting a blank screen then this may lead you closer, as you should be able to temporarily remove the offset values.
Good luck mate I am new at this too so I know how this is :)
Based on the documents
http://www.gnu.org/software/gsl/manual/html_node/Householder-Transformations.html
and
http://en.wikipedia.org/wiki/Householder_transformation
I figured the following code would successfully produce the matrix for reflection in the plane orthogonal to the unit vector normal_vector.
gsl_matrix * reflection = gsl_matrix_alloc(3, 3);
gsl_matrix_set_identity(reflection);
gsl_linalg_householder_hm(2, normal_vector, reflection);
However, the result is not a reflection matrix as far as I can tell. In particular in my case it has the real eigenvalue -(2 + 1/3), which is impossible for a reflection matrix.
So my questions are:
(1) What am I doing wrong? It seems like that should work to me.
(2) If that approach doesn't work, does anyone know how to go about building such a matrix using gsl?
[As a final note, I realize gsl provides functions for applying Householder transformations without actually finding the matrices. I actually need the matrices in my case for other work.]
The reflection matrix P is never formed explicitly by GSL. Instead you work with the Householder vector v, as in P = I - tau v v^T. gsl_linalg_householder_hm applies the transformation PA to a matrix in place; you are expected to generate v (and the scalar tau) first with gsl_linalg_householder_transform. Note also that, as far as I can tell, gsl_linalg_householder_hm follows the LAPACK convention of implicitly treating the first element of v as 1, which is another reason passing your unit normal with tau = 2 does not produce I - 2 v v^T.
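Since the question needs the explicit matrix anyway, it can be built directly from the definition P = I - 2 v v^T (for unit v), without GSL's Householder helpers at all. A plain-C sketch (using a raw 3x3 array rather than gsl_matrix, purely for illustration; copying the entries into a gsl_matrix is straightforward):

```c
/* Build the 3x3 reflection matrix P = I - 2 v v^T for a unit normal v.
 * P reflects through the plane orthogonal to v: P v = -v, and P w = w
 * for any w perpendicular to v. Its eigenvalues are {-1, 1, 1}. */
void reflection_matrix(const double v[3], double P[3][3]) {
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            P[i][j] = (i == j ? 1.0 : 0.0) - 2.0 * v[i] * v[j];
}
```

If normal_vector is not already normalized, divide by its length first (or use tau = 2 / (v^T v) in the formula); otherwise the result is not a reflection, which would also explain an eigenvalue like -(2 + 1/3).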