Polygons are being drawn when I modify an unrelated, undrawn array (C)

I am implementing Catmull-Clark subdivision on a mesh using OpenGL. I can draw my mesh just fine, and I do so using a vertex array.
The array that I draw is called extraVert1[].
In order to implement this subdivision, I have to do operations on certain points besides just the vertices used to draw. I have implemented the standard half-edge data structure in order to iterate through the edges of the mesh and generate the edge-points needed to subdivide.
The issue is here: when I calculate edge-points, I store them into a vertex array and make the corresponding face point to this edge-point vertex (each face points to 4 of them).
The code snippet is as follows (edgeAry1[] is the array of half-edges):
edgePoint1[j].x = (edgeAry1[i].end->x + edgeAry1[i].next->next->next->end->x + edgeAry1[i].heFace->center.x + edgeAry1[i].opp->heFace->center.x) / 4.0;
edgePoint1[j].y = (edgeAry1[i].end->y + edgeAry1[i].next->next->next->end->y + edgeAry1[i].heFace->center.y + edgeAry1[i].opp->heFace->center.y) / 4.0;
edgePoint1[j].z = (edgeAry1[i].end->z + edgeAry1[i].next->next->next->end->z + edgeAry1[i].heFace->center.z + edgeAry1[i].opp->heFace->center.z) / 4.0;
faceAry1[i].e = &edgePoint1[j];
j++;
When this code executes (it loops once for each face in faceAry1[]), I get random edges and triangles around the center of my mesh, even though I never make any changes to extraVert1[], the array I draw from.
I thought this had something to do with my pointers, so I individually commented out each operand and none of them changed anything. I then set every line equal to just 4.0. This gave me a single extra triangle, with points [approximately] (0,0,0), (4,0,0), (4,4,4).
When debugging, I went through the extraVert1[] array both before and after this section of code. It remained unchanged. My draw code is as follows (extraVert1 has size 408):
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, extraVert1);
glDrawArrays(GL_QUADS, 0, 408);
glDisableClientState(GL_VERTEX_ARRAY);
Again, I'm not modifying the drawing array extraVert1[] in any way, so I'm completely stumped as to why this is occurring. I'm sure I'll need to provide more information if anyone is interested in answering, so feel free to ask for it. I'm going to keep working at it for now until then.
UPDATE
It seems that using a different array large enough to store these values (in this case, extraVert2[]) avoids the problem. The problem seems to be one of overwriting memory, but I'm not sure exactly how. When my arrays are declared like so:
face faceAry1[34];
float extraVert1[408];
halfEdge edgeAry1[136];
vertex edgePoint1[136];
vertex extraVert2[1632];
I can store the information in extraVert2[] with no issues. If I flip the order of extraVert2[] and edgePoint1[], I get the same issue as before. Anyone know what causes this?

While I don't know how 3D rendering works, random edges and triangles usually come from uninitialized variables or dangling pointers. These two are usually the cause of unexpected random behaviour in my programs. I too think this has something to do with your pointers, but as I have no knowledge of 3D rendering it could be related to some 3D-specific context as well.
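If it really is an out-of-bounds write spilling into a neighbouring array (which the declaration-order observation in the update strongly suggests), a cheap guard on the write index makes that easy to confirm. This is only a sketch; the vertex type and the array size just mirror the declarations in the question, and store_edge_point is a made-up helper name:

#include <assert.h>
#include <stddef.h>

typedef struct { float x, y, z; } vertex;   /* stand-in for the question's vertex type */

vertex edgePoint1[136];                      /* same size as in the update */

/* 34 faces x 4 edge-points fills edgePoint1[136] exactly, so a single extra
   increment of j is already out of bounds and overwrites whatever object the
   compiler laid out next to the array, which is exactly what silently
   corrupted draw data looks like. */
static void store_edge_point(size_t j, float x, float y, float z)
{
    assert(j < sizeof(edgePoint1) / sizeof(edgePoint1[0]));  /* trips on the overrun */
    edgePoint1[j].x = x;
    edgePoint1[j].y = y;
    edgePoint1[j].z = z;
}

If the assert never fires, the corruption is coming from somewhere else, e.g. a dangling pointer into one of the arrays.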

Related

glDrawArrays with multiple triangles in a single array with a persistent first and count

I have a bunch of polygon data that I want to draw. I pulled out that drawing code so that it currently looks like this:
for (int Index = 0; Index < Count; Index++) {
    glDrawArrays(GL_TRIANGLE_FAN, Index * 4, 4);
}
I have a giant one-dimensional array filled with multiple triangle fans, a new fan every 4 vertices. Are there any OpenGL functions that I can replace this entire loop with so that I don't have the overhead of each glDrawArrays call? glMultiDrawArrays is sort of what I want, except I don't need a different first (well, I need a different first, but multiplied by a constant) and count each time.
I thought something like glVertexAttribDivisor would work, but nothing was drawing after I added it, and I'm not completely sure it works without instancing either.
EDIT: I am using a triangle fan, and I am buffering the vertices every frame.
glDrawElements() + GL_PRIMITIVE_RESTART & glPrimitiveRestartIndex().
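A rough sketch of what that looks like (untested; it assumes GL 3.1+ for primitive restart and 16-bit indices, and Count is the variable from the question):

#include <vector>

// One index buffer holding every fan, separated by a reserved restart value,
// drawn with a single glDrawElements call.
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(0xFFFF);                  // reserved "start a new fan" index

std::vector<GLushort> indices;
for (int Index = 0; Index < Count; Index++) {
    for (int v = 0; v < 4; v++)
        indices.push_back(static_cast<GLushort>(Index * 4 + v));
    indices.push_back(0xFFFF);                    // restart: the next indices begin a new fan
}

glDrawElements(GL_TRIANGLE_FAN,
               static_cast<GLsizei>(indices.size()),
               GL_UNSIGNED_SHORT,
               indices.data());                   // or an offset into a bound element buffer

Since you re-buffer the vertices every frame anyway, the index buffer only needs to be rebuilt when the number of fans changes.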

glPolygonOffset() not work for object outline

I've recently been playing with glPolygonOffset(factor, units) and found something interesting.
I used GL_POLYGON_OFFSET_FILL and set factor and units to negative values so the filled object is pulled toward the viewer. This pulled object is supposed to cover the wireframe which is drawn right after it.
This works correctly for pixels inside of the object. However, for pixels on the object outline, it seems the filled object is not pulled and there are still lines there.
Before pulling the filled object: [screenshot omitted]
After pulling the filled object: [screenshot omitted]
glEnable(GL_POLYGON_OFFSET_FILL);
float line_offset_slope = -1.f;
float line_offset_unit = 0.f;
// I also tried slope = 0.f and unit = -1.f, no changes
glPolygonOffset( line_offset_slope, line_offset_unit );
DrawGeo();
glDisable( GL_POLYGON_OFFSET_FILL );
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
DrawGeo();
I read THIS POST about the meaning and usage of glPolygonOffset(). But I still don't understand why the pulling doesn't happen to the pixels on the border.
To do this properly, you definitely do not want a unit of 0.0f. You absolutely want the pass that is supposed to be drawn overtop the wireframe to have a depth value that is at least 1 unit closer than the wireframe no matter the slope of the primitive being drawn. There is a far simpler approach that I will discuss below though.
One other thing to note is that line primitives have different coverage rules during rasterization than polygons. Lines use a diamond pattern for coverage testing and triangles use a square. You will sometimes see software apply a sub-pixel offset like (0.375, 0.375) to everything drawn; this is done as a hack to make the coverage tests for triangle edges and lines consistent. However, the depth value generated by line primitives is also different from planar polygons, so lines and triangles do not often play nicely together in multi-pass rendering.
glPolygonMode (...) does not change the actual primitive type (it only changes how polygons are filled), so that will not be an issue if this is your actual code. However, if you try doing this with GL_LINES in one pass and GL_TRIANGLES in another you might get different results if you do not consider pixel coverage.
As for doing this more simply, you should be able to use a depth test of GL_LEQUAL (the default is GL_LESS) and avoid a depth offset altogether, assuming you draw the same sphere on both passes. You will want to swap the order in which you draw your wireframe and filled sphere, however: the thing that should be on top needs to be drawn last.
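In code, that simpler approach looks roughly like this (a sketch using the DrawGeo() call from the question):

glDepthFunc(GL_LEQUAL);                      // lets the later pass win depth ties

// Pass 1: wireframe first.
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
DrawGeo();

// Pass 2: the filled object is drawn last, so at equal depth it ends up on
// top; no glPolygonOffset is needed at all.
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
DrawGeo();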

How to pass variable length float array to GPUImageFilter Shader?

I want to pass my touch points to GPUImage (iOS)
The points can be translated into a float array, but the length of the array varies.
However, I must declare the length of the array in the shader.
Disclaimer: not a GLSL expert.
AFAIK you can't have variable-length arrays like what you want. This is a GLSL limitation, not a GPUImage one, so it's not a quick fix: the work you'll be doing will be with textures or GLSL, not GPUImage.
Here's another stack overflow post about glsl: GLSL indexing into uniform array with variable length
There are two solutions that could work:
1) Limit the number of points. It's reasonable to limit touches, but in practice it may be hard to narrow them down if there are too many. You could pass these points in a fixed-length array or as individual constants (one for each point). If you really care about scalability with the number of points this isn't a great method, because in your shader you'll have to check each of these points and perform the relevant computation, which could be expensive when performed for the entire image (again, depending on your use case). If for each pixel you're checking a distance to each point, this could be too expensive.
2) Input your points in a texture. You can either have two 1D textures with the x and y coordinates and then treat them like an array (then go to option 1), or you can create a 2D texture, all 0, and set parts to 1 where there are touches. The 2D texture can have a lower resolution than the actual screen. This method could be a lot less work for the shader if you're doing something simple like turning finger touches black.
Your choice depends largely on what you're doing with the points in the shader.
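For option 1, the underlying GL side looks roughly like this (plain OpenGL ES calls rather than GPUImage's own uniform-setting wrappers; MAX_TOUCHES, touchPoints, touchCount and program are made-up names for the illustration, and the fragment shader would declare a matching fixed-size uniform array):

// Shader side, for reference: uniform vec2 touchPoints[MAX_TOUCHES];
//                             uniform int  touchCount;
#define MAX_TOUCHES 10

GLfloat points[MAX_TOUCHES * 2] = {0};   // x,y pairs; unused slots stay zero
GLint   touchCount = 3;                  // however many real touches there are
// ... copy the real touch coordinates into points[0 .. touchCount*2 - 1] ...

glUniform2fv(glGetUniformLocation(program, "touchPoints"), MAX_TOUCHES, points);
glUniform1i (glGetUniformLocation(program, "touchCount"),  touchCount);

The shader then loops from 0 to touchCount and ignores the rest of the array.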

glsl multi light, best practice of passing data (array of structs?)

Working my way forward step by step, I am now trying to figure out more about multiple lights in GLSL. I have read some tutorials so far, but none of them seems to have THE answer for this.
Let's say I have a struct like this for my lighting:
struct LightInfo
{
    vec4  LightLocation;
    vec3  DiffuseLightColor;
    vec3  AmbientLightColor;
    vec3  SpecularLightColor;
    vec3  spotDirection;
    float AmbientLightIntensity;
    float SpecularLightIntensity;
    float constantAttenuation;
    float linearAttenuation;
    float quadraticAttenuation;
    float spotCutoff;
    float spotExponent;
};
uniform LightInfo gLight;
My first idea would be to make it something like:
uniform LightInfo gLight[NumLights];
but then I read in a lot of places that passing data to the shader that way wouldn't work, since you can't get the location of it. Now I have to admit that I haven't tried it myself yet, but I found a couple of pages mentioning this, so it's probably not completely wrong. Or is this maybe just outdated information?
The other idea would be to make it just:
uniform[NumOfArgs]
and split it up in the shader again. But if I take my example struct above, I end up with an immensely large array very quickly, and taking the information out of it with a for loop will probably be quite expensive too, and that is only if I use a number of lights similar to the maximum of 8 available with gl_LightSource, which I wanted to avoid because of the advantage of having my own struct with all the information needed in one place.
Of course not every light in question would require that many parameters, but any light COULD require them (and even if I strip the struct down quite a bit, it will grow again very soon).
Then again, I could sort the lights first to determine the closest ones and limit the maximum number of lights to something like 3 (something which is also suggested in many places), but here I have to say that I expect a bit more from today's GLSL and modern hardware, although there is no contradiction in using this as well, independent of the chosen solution.
So my question now: what's best practice here, and what's really fast? Or should I stay with gl_LightSource and pass the additional information via some uniform array? Although that doesn't seem to make much sense to me either.
The idea of a light struct is just fine. For forward rendering (passing all lights into the one shader that processes your actual geometry), an array works well.
You may have an array of structs as uniforms (uniform LightInfo gLight[NumLights], where NumLights is a compile-time constant), but arrays are not so different from just declaring uniform LightInfo gLight0, gLight1....
You get the uniform location via the full name, e.g.:
glGetUniformLocation(program, "gLight[3].spotExponent")
Note that glGetActiveUniform will return just the name of element zero, but the reported size will give the number of array elements.
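Setting a whole array of lights is then just a loop over those names. A sketch (program, NumLights and the CPU-side lights array are assumed to exist in your code; only two of the struct members are shown):

#include <cstdio>

// CPU-side mirror of the shader struct (only two fields shown for brevity).
struct LightCPU { float position[4]; float diffuse[3]; };

for (int i = 0; i < NumLights; ++i) {
    char name[64];

    std::snprintf(name, sizeof(name), "gLight[%d].LightLocation", i);
    glUniform4fv(glGetUniformLocation(program, name), 1, lights[i].position);

    std::snprintf(name, sizeof(name), "gLight[%d].DiffuseLightColor", i);
    glUniform3fv(glGetUniformLocation(program, name), 1, lights[i].diffuse);

    // ...same pattern for the remaining LightInfo members. In real code you
    // would query the locations once after linking and cache them.
}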
Uniform buffers will be important with lots of lights and attributes. You can store all the data for the structs on the GPU, so it doesn't get sent every time with individual calls to glUniform*. You can use glMapBuffer to modify parts of the buffer if the rest doesn't need changing.
Be very aware of how the structs and arrays get packed (it's not always intuitive)! Related issues occur in non-uniform/uniform block cases too.
See: Sub-section 2.15.3.1.2 - Standard Uniform Block Layout
To get the byte offset from the beginning of the block, use the GL_UNIFORM_OFFSET enum.
See: Uniform Buffer Object
Elements are aligned to whopping big 16-byte boundaries (the size of a vec4). To make that struct more efficient, you should pair the vec3s with the floats.
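As a concrete illustration of that packing advice, here is a sketch of a CPU-side mirror under std140 rules (the member names follow the struct from the question; NumLights and lights are assumed to exist on your side):

// Under std140 a vec3 is padded out to 16 bytes anyway, so tuck a float into
// the otherwise wasted fourth component of each vec3 slot.
struct LightInfoStd140 {
    float LightLocation[4];                                     // vec4
    float DiffuseLightColor[3];  float AmbientLightIntensity;   // vec3 + float share 16 bytes
    float AmbientLightColor[3];  float SpecularLightIntensity;
    float SpecularLightColor[3]; float constantAttenuation;
    float spotDirection[3];      float linearAttenuation;
    float quadraticAttenuation;  float spotCutoff;
    float spotExponent;          float pad;                     // round the struct up to 16 bytes
};

// Upload every light in one go instead of many glUniform* calls; the shader
// accesses them through a uniform block such as
//   layout(std140) uniform Lights { LightInfo gLight[NumLights]; };
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(LightInfoStd140) * NumLights, lights, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   // binding point 0; tie the block to it
                                               // with glUniformBlockBinding()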
You're right, if you have more lights than there are in the shader you'll have to chop and choose. Lights that are close are important, but you might also want to prioritize lights in the direction you're facing (those whose area of influence touches the viewing volume formed by the projection matrix) and bigger/brighter lights (eg. sun/directional).
Ultimately if you have too many lights this method ceases to work. Your next step is to swap to deferred shading, which brings with it a few more issues (eg. blending/transparency).

Image-processing basics

I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white color and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have made a rough algorithm in my mind on how to approach my problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the maximum number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate their length any way you want, based on whatever other constraints you have.
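A minimal OpenCV sketch of that idea (C++ API; the file name and the Hough parameters are placeholders you would tune for your images):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // White fiber edges on a black background, loaded as a single-channel image.
    cv::Mat img = cv::imread("fiber.png", cv::IMREAD_GRAYSCALE);

    // Probabilistic Hough transform: returns finite line segments (x1,y1,x2,y2).
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(img, lines, 1, CV_PI / 180,
                    80,    // accumulator threshold
                    30,    // minimum segment length, in pixels
                    10);   // maximum gap allowed within one segment

    for (const cv::Vec4i& l : lines) {
        double len = std::hypot(l[2] - l[0], l[3] - l[1]);
        std::printf("segment length: %.1f px\n", len);
    }
    return 0;
}

Segments belonging to the two white edges of the same fiber will come out roughly parallel and close together, so you may want to merge those before reporting a single length per fiber.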
The problem is two-fold as I see it:
1) locate the start and end points from your starting position;
2) determine the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value per pixel representing its "whiteness".
In order to find the end points I would use some kind of walker that tries to walk in different directions, knowing its original position and the last traversed direction, and continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points you can feed them into an A* path-finding algorithm, giving black pixels a low cost and white pixels a very high one. Then find the shortest distance between the start and end points; that is the length of the fiber.
It's kind of hard to give more detail since I have no idea what techniques you are going to use, and I don't have example input data.
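As a very rough sketch of the path-length idea, here is a plain breadth-first search over the black pixels instead of full A* (so it returns a 4-connected step count in pixels rather than a weighted geometric length; the binary row-major image layout is an assumption):

#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// img is a row-major binary image, 0 = black (inside the fiber), 1 = white.
// Returns the number of steps from (sx,sy) to (ex,ey) walking only on black
// pixels, or -1 if no such path exists.
int fiberPathLength(const std::vector<std::uint8_t>& img, int w, int h,
                    int sx, int sy, int ex, int ey)
{
    std::vector<int> dist(w * h, -1);
    std::queue<std::pair<int, int>> q;
    dist[sy * w + sx] = 0;
    q.push(std::make_pair(sx, sy));

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!q.empty()) {
        int x = q.front().first, y = q.front().second;
        q.pop();
        if (x == ex && y == ey)
            return dist[y * w + x];
        for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            int idx = ny * w + nx;
            if (img[idx] != 0 || dist[idx] != -1) continue;   // skip white or visited pixels
            dist[idx] = dist[y * w + x] + 1;
            q.push(std::make_pair(nx, ny));
        }
    }
    return -1;   // start and end are not connected through black pixels
}

Swapping the queue for a priority queue keyed on accumulated cost, and giving white pixels a large cost, turns this into the weighted Dijkstra/A* version described above.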
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight and their starting and ending points are on the borders.
- We can come up with a limit for the fiber thickness (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start from wherever you want, in whichever direction you want... just be consistent) until you encounter the first white pixel. At this point your program will understand that this is definitely a starting point. Knowing this, you gather all the white pixels until you reach a certain limit (or a threshold). The idea here is that if there is a fiber, you will get the angle between the fiber and the border the starting point is on; of course the more pixels you get (the further in you go), the surer you will be in the end. This is the trickiest part. After somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image and the angle (or its cos/sin), you will have the exact coordinates of the end point. Be advised: the exactness here is not quite what you might expect, because we may (in fact, will) have calculation errors in the cos/sin values. So you need to hold the threshold as long as possible. So your end point will not actually be a point, but rather an area indicating the possibility that the ending point is somewhere inside it. The rest is just simple maths.
Obviously you can put much more detail into this method, like checking both of the white lines that make up the fiber and deciding which one is longer, or you can allow some margin of error since those lines will not be perfectly straight... this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff that is easy for you to use... I'll put some code here:
newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        //things go here...
    }
}
You'll get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop this code scans the image left to right; however, you can change this...
Since you know C++ and C, I would recommend OpenCV. It is open source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# like @VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas >.
Choose your anatomical tree with 2 different labels: the black one, which is what you are after, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.
