I've recently been playing with glPolygonOffset( factor, units ) and found something interesting.
I used GL_POLYGON_OFFSET_FILL, and set factor and units to negative values so the filled object is pulled toward the viewer. This pulled object is supposed to cover the wireframe, which is drawn right after it.
This works correctly for pixels inside the object. However, for those on the object's outline, it seems the filled object is not pulled, and there are still lines there.
Before pulling the filled object:
After pulling the filled object:
glEnable(GL_POLYGON_OFFSET_FILL);
float line_offset_slope = -1.f;
float line_offset_unit = 0.f;
// I also tried slope = 0.f and unit = -1.f, no changes
glPolygonOffset( line_offset_slope, line_offset_unit );
DrawGeo();
glDisable( GL_POLYGON_OFFSET_FILL );
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
DrawGeo();
I read THIS POST about the meaning and usage of glPolygonOffset(), but I still don't understand why the pulling doesn't happen to the pixels on the border.
To do this properly, you definitely do not want a units value of 0.0f. You absolutely want the pass that is supposed to be drawn on top of the wireframe to have a depth value that is at least 1 unit closer than the wireframe, no matter the slope of the primitive being drawn. There is a far simpler approach that I will discuss below, though.
One other thing to note is that line primitives have different coverage rules during rasterization than polygons. Lines use a diamond pattern for coverage testing and triangles use a square. You will sometimes see software apply a sub-pixel offset like (0.375, 0.375) to everything drawn; this is done as a hack to make the coverage tests for triangle edges and lines consistent. However, the depth value generated by line primitives also differs from that of planar polygons, so lines and triangles often do not mesh well in multi-pass rendering.
glPolygonMode (...) does not change the actual primitive type (it only changes how polygons are filled), so that will not be an issue if this is your actual code. However, if you try doing this with GL_LINES in one pass and GL_TRIANGLES in another you might get different results if you do not consider pixel coverage.
As for doing this more simply: you should be able to use a depth test of GL_LEQUAL (the default is GL_LESS) and avoid a depth offset altogether, assuming you draw the same sphere in both passes. You will want to swap the order you draw your wireframe and filled sphere, however -- the thing that should be on top needs to be drawn last.
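A minimal sketch of that simpler approach, assuming DrawGeo() draws the same geometry in both passes and a depth buffer is in use:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);               // equal depths now pass the test
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
DrawGeo();                            // wireframe first
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
DrawGeo();                            // filled pass last; it wins depth ties and lands on top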
Related
I am using GL_LINE_LOOP to draw a circle in C and OpenGL. Is it possible for me to fill the circle with colors?
If needed, this is the code I'm using:
const int circle_points = 100;
const float cx = 50.f, cy = 50.f, r = 50.f;
const float pi = 3.14159f;
int i;
glColor3f(1, 1, 1);
glBegin(GL_LINE_LOOP);
for(i=0;i<circle_points;i++)
{
const float theta=(2*pi*i)/circle_points;
glVertex2f(cx+r*cos(theta),cy+r*sin(theta));
}
glEnd();
Look up polygon triangulation!
I hope something here is somehow useful to someone, even though this question was asked in February. There are many answers, even though a lot of people would give none. I could witter on forever, but I'll try to finish before then.
Some would even say, "You never would," or, "That's not appropriate for OpenGL." I'd like to say more than they do about why. Converting polygons into the triangles that OpenGL likes so much is outside OpenGL's job spec, and is probably better done on the processor side anyway. Calculate that stage in advance, as few times as possible, rather than repeatedly sending such a chunky problem on every draw call.
Perhaps the original questioner drifted away from OpenGL since February, or perhaps they've become an expert. Perhaps I'll re-inspire them to look at it again, to hack away at some original 'imposters'. Or maybe they'll say it's not the tool for them after all, but that would be disappointing. Whatever graphics code you're writing, you know that OpenGL can speed it up!
Triangles for convex polygons are easy
Do you just want a circle? Make a triangle fan with the shared point at the circle's origin. GL_POLYGON was, for better or worse, deprecated and then removed entirely from the core profile; it will not work with current or future core implementations of OpenGL.
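In the same legacy immediate-mode style as the question's code, a minimal fan for this circle (reusing cx, cy, r, pi and circle_points from the question) might be:
glBegin(GL_TRIANGLE_FAN);
glVertex2f(cx, cy);                   /* shared centre of the fan */
for (i = 0; i <= circle_points; i++)  /* <= closes the fan at the start angle */
{
    const float theta = (2 * pi * i) / circle_points;
    glVertex2f(cx + r * cos(theta), cy + r * sin(theta));
}
glEnd();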
Triangles for concave polygons are hard
You'll want more general polygons later? Well, there are some tricks you could play with, for all manner of convex polygons, but concave ones will soon get difficult. It would be easy to start five different solutions without finishing a single one. Then it would be difficult, on finishing one, to make it quick, and nearly impossible to be sure that it's the quickest.
To achieve it in a future-proofed way you really want to stick with triangles -- so "polygon triangulation" is the subject you want to search for. OpenGL will always be great for drawing triangles. Triangle strips are popular because they reuse many vertices, and a whole mesh can be covered with only triangle strips, (perhaps including the odd lone triangle or pair of triangles). Drawing with only one primitive usually means the entire mesh can be rendered with a single draw call, which could improve performance. (Number of draw calls is one performance consideration, but not always the most important.)
Polygon triangulation gets more complex when you allow concave polygons or polygons with holes. (Finding algorithms for triangulating a general polygon, robustly yet quickly, is actually an area of ongoing research. Nonetheless, you can find some pretty good solutions out there that are probably fit for purpose.)
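To give a feel for how easy the convex case is by comparison, here is a minimal, hypothetical sketch of fan triangulation, which only works when the polygon is convex. The n vertices are assumed to be stored in order, and the output is index triples suitable for GL_TRIANGLES:
/* Fan-triangulate a convex polygon with n ordered vertices. Writes
   (n - 2) index triples into 'out' and returns the triangle count.
   This does NOT work for concave polygons -- that is the hard case. */
int fan_triangulate(int n, unsigned int *out)
{
    int i, k = 0;
    for (i = 1; i < n - 1; i++)
    {
        out[k++] = 0;      /* every triangle shares vertex 0 */
        out[k++] = i;
        out[k++] = i + 1;
    }
    return k / 3;
}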
But is this what you want?
Is a filled polygon crucial to your final goals in OpenGL? Or did you simply choose what felt like it would be a simple early lesson?
Frustratingly, although drawing a filled polygon seems like a simple thing to do -- and indeed is one of the simplest things to do in many languages -- the solution in OpenGL is likely to be quite complicated. Of course, it can be done if we're clever enough -- but that could be a lot of effort, without being the best route to take towards your later goals.
Even in languages that implement filled polygons in a way that is simple to program with, you don't always know how much strain it puts on the CPU or GPU. If you send a sequence of vertices, to be linked and filled, once every animation frame, will it be slow? If a polygon doesn't change shape, perhaps you should do the difficult part of the calculation just once? You will be doing just that, if you triangulate a polygon once using the CPU, then repeatedly send those triangles to OpenGL for rendering.
OpenGL is very, very good at doing certain things, very quickly, taking advantage of hardware acceleration. It is worth appreciating what it is and is not optimal for, to decide your best route towards your final goals with OpenGL.
If you're looking for a simple early lesson, rotating brightly coloured tetrahedrons is ideal, and happens early in most tutorials.
If on the other hand, you're planning a project that you currently envision using filled polygons a great deal -- say, a stylized cartoon rendering engine for instance -- I still advise going to the tutorials, and even more so! Find a good one; stick with it to the end; you can then think better about OpenGL functions that are and aren't available to you. What can you take advantage of? What do you need or want to redo in software? And is it worth writing your own code for apparently simple things -- like drawing filled polygons -- that are 'missing from' (or at least inappropriate to) OpenGL?
Is there a higher-level graphics library, free to use -- perhaps relying on OpenGL for rasterisation -- that can already do what you want? If so, how much freedom does it give you to mess with the nuts and bolts of OpenGL itself?
OpenGL is very good at drawing points, lines, and triangles, and hardware accelerating certain common operations such as clipping, face culling, perspective divides, perspective texture accesses (very useful for lighting) and so on. It offers you a chance to write special programs called shaders, which operate at various stages of the rendering pipeline, maximising your chance to insert your own unique cleverness while still taking advantage of hardware acceleration.
A good tutorial is one that explains the rendering pipeline and puts you in a much better position to assess what the tool of OpenGL is best used for.
Here is one such tutorial that I found recently: Learning Modern 3D Graphics Programming, by Jason L. McKesson. It doesn't appear to be complete, but if you get far enough for that to annoy you, you'll be well placed to search for the rest.
Using imposters to fill polygons
Everything in computer graphics is an imposter, but the term often has a specialised meaning. Imposters display very different geometry from what they actually have -- only more so than usual! Of course, a 3D world is very different from the pixels representing it, but with imposters, the deception goes deeper than usual.
For instance, a rectangle that OpenGL actually constructs out of two triangles can appear to be a sphere if, in its fragment shader, you write a customised depth value to the depth coordinate, calculate your own normals for lighting and so on, and discard those fragments of the square that would fall outside the outline of the sphere. (Calculating the depth on those fragments would involve a square root of a negative number, which can be used to discard the fragment.) Imposters like that are sometimes called flat cards or billboards.
(The tutorial above includes a chapter on imposters, with examples doing just what I've described here. In fact, the rectangle itself is constructed only part way through the pipeline, from a single point. I warn that the scaling of their rectangle, to account for the way that perspective distorts a sphere into an ellipse in a wide FOV, is a non-robust fudge. The correct and robust answer is tricky to work out, using mathematics that would be slightly beyond the scope of the book. I'd say it is beyond the author's algebra skills to work it out, but I could be wrong; he'd certainly understand a worked example. However, when you have the correct solution, it is computationally inexpensive: it involves only linear operations plus two square roots, to find the four limits of a horizontally- or vertically-translated sphere. To generalise that technique for other displacements requires one more square root, for a vector normalisation to find the correct rotation, and one application of that rotation matrix when you render the rectangle.)
So just to suggest an original solution that others aren't likely to provide, you could use an inequality (like x * x + y * y <= 1 for a circle or x * x - y * y <= 1 for a hyperbola) or a system of inequalities (like three straight line forms to bound a triangle) to decide how to discard a fragment. Note that if inequalities have more than linear order, they can encode perfect curves, and render them just as smoothly as your pixelated screen will allow -- with no limitation on the 'geometric detail' of the curve. You can also combine straight and curved edges in a single polygon, in this way.
For instance, a fragment shader (which would be written in GLSL) for a semi-circle might have something like this:
if (y < 0) discard;
float rSq = x * x + y * y;
if (1 < rSq) discard;
// We're inside the semi-circle; put further shader computations here
However, the polygons that are easy to draw, in this way, are very different from the ones that you're used to being easy. Converting a sequence of connected nodes into inequalities means yet more code to write, and deciding on the Boolean logic, to deal with combining those inequalities, could then get quite complex -- especially for concave polygons. Performing inequalities in a sensible order, so that some can be culled based on the results of others, is another ill-posed headache of a problem, if it needs to be general, even though it is easy to hard-code an optimal solution for a single case like a square.
I suggest using imposters mainly for its contrast with the triangulation method. Something like either one could be a route to pursue, depending on what you're hoping to achieve in the end, and the nature of your polygons.
Have fun...
P.S. Here's a related topic... Polygon triangulation into triangle strips for OpenGL ES
As long as the link lasts, it's a more detailed explanation of 'polygon triangulation' than mine. Those are the two words to search for if the link ever dies.
A line loop is just an outline.
To fill the middle as well, you want to use GL_POLYGON (note that GL_POLYGON only exists in legacy OpenGL; in the core profile, a GL_TRIANGLE_FAN centred on the circle's middle gives the same result for a convex shape).
I'm trying to combine several UIBezierPath drawings.
I have different types of drawings I can make (line, cubic bezier, quadratic beziers), and each of these can be filled or unfilled. I'm selecting the drawing type randomly, and my goal is to make 3 different drawings which are connected at a point.
So where the first, say, line drawing ends, the second path -- maybe a cubic bezier -- begins. Where that ends, a third, maybe a filled line drawing, begins.
I've got a square UIView that I'm trying to draw this in, and each path should have its own part of the UIView: the first third, the second third, and the last third.
Would I be able to create this with one UIBezierPath object, or do I need to create 3 different ones? How do I make them end and start at the same point? Is there a way to do this with subpaths?
UIBezierPath has instance methods for this (see the documentation), such as:
-addLineToPoint:
-addArcWithCenter:radius:startAngle:endAngle:clockwise:
-addCurveToPoint:controlPoint1:controlPoint2:
-addQuadCurveToPoint:controlPoint:
-appendPath:
You can combine paths one by one. When you're done, use -closePath to close the path.
Feel free to take a look at my open-source lib called UIBezierPath-Symbol. ;)
And if you want more customised path drawing, I recommend CGMutablePath. You can make each path as complex as you want (you can combine simple paths with the CGPathAdd... functions). Finally, use CGPathAddPath() to combine them together.
void CGPathAddPath (
CGMutablePathRef path1, // The mutable path to change.
const CGAffineTransform *m, // A pointer to an affine transformation matrix, or NULL if no transformation is needed. If specified, Quartz applies the transformation to path2 before it is added to path1.
CGPathRef path2 // The path to add.
);
You can combine paths like this:
UIBezierPath *endPath = [UIBezierPath bezierPath];
[endPath appendPath:leftLine];
[endPath appendPath:rightLine];
[endPath appendPath:midLine];
A UIBezierPath is just a wrapper for a CGPath, which itself is just a set of instructions for drawing (by stroke or fill, or both). That drawing can take place anywhere. In other words, a UIBezierPath is just a tool for drawing; the important thing is the drawing itself. Given a graphics context (which might be a UIView, a UIImage, a CALayer, whatever), you can do as much drawing as you like in succession - say, a line, then a cubic bezier, then a filled line drawing. But how you perform those drawing bits is totally up to you. You shouldn't really care whether you do it with three UIBezierPaths, one UIBezierPath, multiple paths, one path, subpaths, whatever (or even by copying other drawings into this one) - the final effect is all that matters, i.e. the accumulated drawing ultimately done in this graphics context.
Your question is like asking, "Should I draw this circle with my right hand or my left hand, and should I draw it counter-clockwise or clockwise?" It doesn't matter. Once it's done, what will have been drawn is a circle; that is what's important.
I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white color and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have made a rough algorithm in my mind on how to approach my problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the max number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough transform to estimate the line parameters; there is a good implementation of the algorithm in OpenCV. Once you have the lines, you can estimate their lengths any way you want, based on whatever other constraints you have.
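As a rough illustration of that idea (a sketch, not a drop-in solution): OpenCV's probabilistic Hough transform returns finite segments directly, and each segment's length follows from its end points. The file name and all parameter values below are placeholders you would tune for your own images:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>

int main()
{
    // Load the fiber image as grayscale and binarize it.
    cv::Mat img = cv::imread("fibers.png", cv::IMREAD_GRAYSCALE);
    cv::Mat bw;
    cv::threshold(img, bw, 128, 255, cv::THRESH_BINARY);

    // Probabilistic Hough transform: returns line segments, not infinite lines.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(bw, lines, 1, CV_PI / 180, 80 /*votes*/,
                    30 /*min length*/, 10 /*max gap*/);

    // Each segment's pixel length follows from its two end points.
    for (const cv::Vec4i &l : lines)
        std::printf("segment length: %.1f px\n",
                    std::hypot(l[2] - l[0], l[3] - l[1]));
    return 0;
}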
The problem is two-fold as I see it:
1) Locate the start and end points from your starting position.
2) Determine the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
In order to find the end points I would write some kind of walker/AI that tries to walk in different directions, knowing the original position and the last traversed direction, then continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you've got the start and end points, you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high one. Then find the shortest path between the start and end points; its length is the length of the fiber.
It's kinda hard to give more detail since I have no idea what techniques you're going to use, and I have no example input data.
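As a rough illustration of the path step only, here is a minimal sketch that swaps A* for plain breadth-first search, which is equivalent here if every black pixel costs the same. The grid layout, the names, and the end points are all assumptions, not anything from the question:
#include <queue>
#include <vector>

// Minimal BFS sketch: length of the shortest pixel path from (sx, sy) to
// (ex, ey), treating white pixels (value 1) as walls. 'grid' is assumed to
// be a row-major W x H array of 0s and 1s. Returns -1 if unreachable.
int fiberPathLength(const std::vector<int> &grid, int W, int H,
                    int sx, int sy, int ex, int ey)
{
    std::vector<int> dist(W * H, -1);   // -1 marks unvisited pixels
    std::queue<int> q;
    dist[sy * W + sx] = 0;
    q.push(sy * W + sx);
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty())
    {
        int cur = q.front(); q.pop();
        int x = cur % W, y = cur / W;
        if (x == ex && y == ey) return dist[cur];
        for (int d = 0; d < 4; d++)
        {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            int idx = ny * W + nx;
            if (grid[idx] == 0 && dist[idx] < 0)  // black and unvisited
            {
                dist[idx] = dist[cur] + 1;
                q.push(idx);
            }
        }
    }
    return -1; // the end point is not reachable through black pixels
}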
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight and their starting and ending points are on the borders.
- We can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want, just be consistent) until you encounter the first white pixel. At this point your program knows this is definitely a starting point. Knowing this, you gather all the white pixels until you reach a certain limit (or threshold). The idea here is that if there is a fiber, you will get the angle between the fiber and the border the starting point is on; of course, the more pixels you get (the further in you go), the surer you will be in the end. This is the trickiest part. After somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you will have the exact coordinates of the end point. Be advised: the exactness here is not quite what you might expect, because we may (in fact, will) have calculation errors in the cos/sin values. So you need to hold the threshold as long as possible. Your end point will therefore not be a single point but rather an area indicating the possibility that the ending point is somewhere inside it. The rest is just simple maths.
Obviously you can put much more detail into this method, like checking both white lines that make up the fiber and deciding which one is longer, or you can allow some margin of error since those lines will not be perfectly straight; this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff for this and is easy to use... I'll put some code here...
Bitmap newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
for (int y = 0; y < newBitmap.Height; y++)
{
Color originalColor = newBitmap.GetPixel(x, y);//gets the pixel value...
//things go here...
}
}
You'll get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop, this code scans the image left to right; however, you can change this...
Since you know C++ and C, I would recommend OpenCV. It is open-source, so if you don't trust anyone (like me!), you won't have a problem ;). Also, if you want to use C# as VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with 2 different labels: the black one, which is your target, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.
I have a custom control: a chart sized, for example, 300x300 pixels, with more than one million points (maybe fewer) in it. It's clear that it currently works very slowly. I am searching for an algorithm that will show only a few points with minimal visual difference.
I have a link to a component which has exactly the functionality I need
(2 million points demo):
I will be grateful for any materials, links, or thoughts on how to implement such functionality.
If I understand your question correctly, you are looking to plot a graph of a dataset where you have ~1M points, but the chart's horizontal resolution is much smaller? If so, you can down-sample your dataset to roughly the number of available x values. If your data is sorted at equal intervals, you can extract every Nth point and plot it. Choose N such that the number of points is, say, double the resolution (in this case, N = 2000 will give you 500 points to display).
If the intervals are very different from each other (not regularly spaced), you can approximate your graph with a polynomial or spline or any other method that fits, and then interpolate 300-600 points from that approximation.
EDIT:
Depending on the nature of the data, you may end up with aliasing artifacts when you simply sample every Nth point. There are probably better methods for coping with this problem, but again, it depends on what exactly you want to plot.
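One simple way to tame those artifacts is min/max bucketing: split the data into one bucket per output column and keep each bucket's extremes, so spikes survive the decimation. A minimal sketch, assuming the data is already sorted by x (the names here are placeholders, not from any charting library):
#include <algorithm>
#include <utility>
#include <vector>

// Reduce 'data' to roughly two points per bucket: each bucket's minimum
// and maximum y. Unlike plain every-Nth sampling, this preserves spikes.
std::vector<std::pair<float, float>> decimateMinMax(
    const std::vector<std::pair<float, float>> &data, // (x, y), sorted by x
    std::size_t buckets)
{
    std::vector<std::pair<float, float>> out;
    std::size_t per = std::max<std::size_t>(1, data.size() / buckets);
    for (std::size_t i = 0; i < data.size(); i += per)
    {
        std::size_t end = std::min(i + per, data.size());
        std::pair<float, float> lo = data[i], hi = data[i];
        for (std::size_t j = i; j < end; j++)
        {
            if (data[j].second < lo.second) lo = data[j];
            if (data[j].second > hi.second) hi = data[j];
        }
        if (hi.first < lo.first) std::swap(lo, hi); // keep x order in output
        out.push_back(lo);
        if (hi != lo) out.push_back(hi);
    }
    return out;
}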
You could always buy the control - it is for sale!
John-Daniel Trask (Co-founder of Mindscape ;-)
I have to make an application that recognizes a Tetris piece, given by the user, inside a black-and-white image. I read the image to analyze into an array.
How can I do something like this using C?
Assuming that you already loaded the images into arrays, what about using regular expressions?
You don't need exact shape matching, only approximate matching, so why not give it a try!
Edit: I downloaded your doc file. You must identify a random pattern among random figures in a 2D array, so regex isn't suitable for this problem; let's say that's the bad news. The good news is that your homework is not exactly image processing, and it's much easier.
It's your homework so I won't create the code for you but I can give you directions.
You need a routine that can create a new piece from the original pattern/piece, rotated. (Note: by piece I mean the 4x4 square, all of its cells.) A rotation sketch follows this list.
You need a routine that checks if a piece matches an area from the 2D image at position x,y - the matching area would have corners (x-2, y-2, x+1, y+1).
You search by checking every image position (x,y) for a match.
Since you must use parallelism you can create 4 threads and assign to each thread a different rotation to search.
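A minimal sketch of the rotation routine from the first step, assuming a piece is stored as a 4x4 array of 0/1 cells; applying it 0-3 times gives the four orientations, one per thread:
// Rotate a 4x4 piece 90 degrees clockwise: the cell at (row r, column c)
// of the source lands at (row c, column 3 - r) of the destination.
void rotate90(const int src[4][4], int dst[4][4])
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            dst[c][3 - r] = src[r][c];
}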
You might not want to implement that from scratch (unless required, of course)... I'd recommend looking for a suitable library. I've heard that OpenCV is good, but I've never done any work with machine vision myself, so I haven't tested it.
Search for connected components (e.g. using depth-first search; you might want to avoid recursion if efficiency is an issue and use your own stack instead). The largest connected component should be your Tetris piece. You can then analyze it further (using the shape, the size, or some kind of border description).
Looking at the shapes given for Tetris pieces on Wikipedia, called "I, J, L, O, S, T, Z", it seems that the ratios of the sides of the bounding box (easy to find given a binary image, and easy in C) reveal whether you have I (4:1) or O (1:1); the other shapes are 2:3.
To detect which of the remaining shapes you have (J, L, S, T, or Z), it looks like you could collect the length and position of the shape's edges that fall on the bounding box's edges. Thus, T would show 3 and 1 along the 3-sides, and 1 and 1 along the 2-sides. Keeping track of the positions helps distinguish J from L, and S from Z.
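A minimal sketch of that bounding-box step, assuming the binary image is a row-major W x H array of 0s and 1s with at least one set pixel; comparing the two results gives the side ratio that separates I and O from the rest:
// Find the bounding box of all 1-pixels and report its width and height.
// Compare their ratio afterwards: ~4:1 means I, ~1:1 means O, 2:3 the rest.
void boundingBox(const int *img, int W, int H, int *w, int *h)
{
    int minX = W, minY = H, maxX = -1, maxY = -1;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (img[y * W + x])
            {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
    *w = maxX - minX + 1; // box width in pixels
    *h = maxY - minY + 1; // box height in pixels
}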