How can I smooth the seam from merged geometry in Maya?

I'm not sure if this is a geometry problem or a normals problem, but I am having a hard time combining meshes without leaving a "seam" or visible discontinuity. In the example below I have two polygon spheres with matching divisions. I have performed a union and merged nearby vertices. I then tried a little manual adjustment to smooth the join, but as you can see the result is not good.
I know that I can use the Smooth tool to fix it by adding new geometry, but given that the vertices match perfectly here, I feel I should be able to fix this through some other means. I've played with "soften edge" normals but I don't see any effect from that. I've tried averaging vertices, but that doesn't seem to do much. I've gone to the sculpting tools and tried the relax and smooth brushes there. No matter how correct the geometry looks to me, it still appears discontinuous unless I use the add-geometry smoothing tool.
What is the correct way to merge geometry like this?
thanks.
UPDATE:
I'm going to mark the answer below correct even though it is basically one of the procedures I tried before. I think the real answer is that I just wasn't performing the merge very well (cutting the geometry and merging the edges in the cleanest way possible) prior to softening the edges.

Here is how to merge geometry and get rid of the unpleasant seam from scratch:
a) Delete the history on each object (Edit -> Delete by Type -> History)
b) Combine the meshes (Mesh -> Combine)
c) Merge the edges, controlling the tolerance (Edit Mesh -> Merge)
d) Soften the edges, controlling the angle (Mesh Display -> Soften Edge)
Remember, the Angle parameter is what makes an edge hard or soft.
Here are MEL equivalents:
// Delete construction history
DeleteHistory;
// Combine the meshes into one
polyUnite;
// Merge border edges within a given tolerance
polySewEdge;
// Soften the edges (angle = 0...180)
polySoftEdge;
MEL example for softening the edge:
select -r pSphere2.e[35:54];
polySoftEdge -a 180;

Related

opencascade BRepOffsetAPI_Sewing is slow

I have fairly large files with 3D scan points (around 200,000) and I try to make a TopoDS_Shape with BRepOffsetAPI_Sewing:
gp_Pnt p1(0,0,100);
...
TopoDS_Edge e1 = BRepBuilderAPI_MakeEdge( p4, p1);
...
TopoDS_Wire w1 = BRepBuilderAPI_MakeWire(e1, e2, e3, e4);
...
TopoDS_Face f1 = BRepBuilderAPI_MakeFace(w1);
...
BRepOffsetAPI_Sewing sew(0.1);
sew.Add(f1);
sew.Perform();
TopoDS_Shape sewedShape = sew.SewedShape();
Of course, with all the points in loops etc., the above code is just a sample of how I try to create things.
With 200,000 points it takes 20-30 seconds to produce the face.
My next approach was to save the produced shape once generated and load it later as a workaround:
BRepTools::Write(sewedShape, sFile);
But even that is slow.
I did similar things in Java3D and it was way faster, so I must be doing something wrong.
Only showing the points with
Handle_Graphic3d_ArrayOfPoints points3d = new Graphic3d_ArrayOfPoints(totPoints, true, false);
gp_Pnt pnt(x, y, z);
points3d->AddVertex(pnt, aColor); // adding 200,000 points
Handle(AIS_PointCloud) m_points = new AIS_PointCloud();
m_points->SetPoints(points3d);
m_occView->getContext()->Display(m_points, true);
is almost instant (less than a second).
My goal is to build two of those faces and find the intersection with BRepAlgoAPI_Section.
Thanks for your help in advance!
As far as I understand, your current approach is:
Create a TopoDS_Face per quad in Point Cloud.
To avoid unnecessary overhead in the sewing operation, you would need to reconsider your workflow and create shared shapes from scratch. E.g., instead of creating a TopoDS_Vertex for the same point in the point cloud multiple times, you should create a single one and reuse it in the construction of connected edges / quads; the same applies to TopoDS_Edge construction. What the sewing operation does for you is find and repair shared information between geometrically connected faces, which is plenty of work that could be entirely avoided; see the sketch below.
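For illustration, here is a minimal sketch of the shared-shapes idea, assuming a regular grid of scan points; getPoint, nu and nv are hypothetical stand-ins for your own data access:
#include <BRepBuilderAPI_MakeEdge.hxx>
#include <BRepBuilderAPI_MakeVertex.hxx>
#include <TopoDS_Edge.hxx>
#include <TopoDS_Vertex.hxx>
#include <gp_Pnt.hxx>
#include <vector>

// Create each TopoDS_Vertex exactly once, then reuse it for every
// edge / quad that touches the corresponding grid point.
std::vector<TopoDS_Vertex> verts(nu * nv);
for (int j = 0; j < nv; ++j)
    for (int i = 0; i < nu; ++i)
        verts[j * nu + i] = BRepBuilderAPI_MakeVertex(getPoint(i, j));

// Likewise, build each interior edge once from the shared vertices and
// hand the same TopoDS_Edge to both neighbouring quads; the resulting
// shell is then connected by construction, and no sewing step is needed.
TopoDS_Edge e01 = BRepBuilderAPI_MakeEdge(verts[0], verts[1]);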
But as you have already found out (by trying to dump the produced shape into a file), mapping a tessellation to B-Rep is a counter-efficient approach in general. Just take a look at all these TopoDS_Vertex, TopoDS_Edge, TopoDS_Wire, TopoDS_Face to figure out how many more data structures are needed in B-Rep for mapping a single triangle or quad. This structure is heavy not only from the memory utilization point of view, but also for the algorithms you might want to run on it, like Boolean operations.
Possible alternatives:
Create a Poly_Triangulation from your point cloud and a single TopoDS_Face from it (see the sketch after this list). You would be able to efficiently visualize it in the 3D Viewer and perform some operations like computing surface area. Unfortunately, such a geometry definition is not yet supported by all OCCT algorithms, so you wouldn't be able to perform Boolean operations.
Create an approximated B-Spline surface from your point clouds. This could be done with the help of the GeomPlate or SSP (Surface from Scattered Points) algorithms. An approximated surface would be a better fit to the B-Rep geometry definition, but it might lose some details of the original surface and might be tricky to apply (you might need to split a complex surface into several pieces).
Use the OMF product (Mesh Framework) to perform Boolean operations on meshes. If Boolean operations on meshes are all you need, OMF could be helpful.
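As a rough sketch of the first alternative; nbNodes and nbTriangles are placeholders, and the node/triangle filling is elided since it depends on your scan layout:
#include <BRep_Builder.hxx>
#include <Poly_Triangulation.hxx>
#include <TopoDS_Face.hxx>

// Wrap the scanned triangles in a single mesh-only face: the face gets no
// analytic surface, only the triangulation attached to it.
Handle(Poly_Triangulation) mesh =
    new Poly_Triangulation(nbNodes, nbTriangles, Standard_False);
// ... fill the mesh nodes and triangles from the point cloud ...

TopoDS_Face face;
BRep_Builder builder;
builder.MakeFace(face);         // empty face, no geometric surface
builder.UpdateFace(face, mesh); // attach the triangulation only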

OpenGL -- GL_LINE_LOOP

I am using GL_LINE_LOOP to draw a circle in C and OpenGL. Is it possible for me to fill the circle with color?
If needed, this is the code I'm using:
const int circle_points = 100;
const float pi = 3.14159f;
const float cx = 50.0f, cy = 50.0f, r = 50.0f; /* circle centre and radius */
int i;
glColor3f(1, 1, 1);
glBegin(GL_LINE_LOOP);
for (i = 0; i < circle_points; i++)
{
    const float theta = (2 * pi * i) / circle_points;
    glVertex2f(cx + r * cos(theta), cy + r * sin(theta));
}
glEnd();
Look up polygon triangulation!
I hope something here is somehow useful to someone, even though this question was asked in February. There are many answers, even though a lot of people would give none. I could witter on forever, but I'll try to finish before then.
Some would even say, "You never would," or, "That's not appropriate for OpenGL"; I'd like to say more than they do about why. Converting polygons into the triangles that OpenGL likes so much is outside OpenGL's job spec, and is probably better done on the processor side anyway. Calculate that stage in advance, as few times as possible, rather than repeatedly sending such a chunky problem on every draw call.
Perhaps the original questioner has drifted away from OpenGL since February, or perhaps they've become an expert. Perhaps I'll re-inspire them to look at it again, to hack away at some original 'imposters'. Or maybe they'll say it's not the tool for them after all, but that would be disappointing. Whatever graphics code you're writing, you know that OpenGL can speed it up!
Triangles for convex polygons are easy
Do you just want a circle? Make a triangle fan with the shared point at the circle's origin. GL_POLYGON was, for better or worse, deprecated and then removed from the core profile entirely; do not rely on it in current or future implementations of OpenGL.
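In the same immediate-mode style as the question's code (cx, cy, r, pi and circle_points as defined there), the fan might look like this:
glBegin(GL_TRIANGLE_FAN);
glVertex2f(cx, cy); /* the shared centre point */
for (i = 0; i <= circle_points; i++) /* <= repeats the first rim vertex to close the fan */
{
    const float theta = (2 * pi * i) / circle_points;
    glVertex2f(cx + r * cos(theta), cy + r * sin(theta));
}
glEnd();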
Triangles for concave polygons are hard
You'll want more general polygons later? Well, there are some tricks you could play with, for all manner of convex polygons, but concave ones will soon get difficult. It would be easy to start five different solutions without finishing a single one. Then it would be difficult, on finishing one, to make it quick, and nearly impossible to be sure that it's the quickest.
To achieve it in a future-proofed way you really want to stick with triangles -- so "polygon triangulation" is the subject you want to search for. OpenGL will always be great at drawing triangles. Triangle strips are popular because they reuse many vertices, and a whole mesh can be covered with only triangle strips (perhaps including the odd lone triangle or pair of triangles). Drawing with only one primitive usually means the entire mesh can be rendered with a single draw call, which could improve performance. (The number of draw calls is one performance consideration, but not always the most important.)
Polygon triangulation gets more complex when you allow concave polygons or polygons with holes. (Finding algorithms for triangulating a general polygon, robustly yet quickly, is actually an area of ongoing research. Nonetheless, you can find some pretty good solutions out there that are probably fit for purpose.)
But is this what you want?
Is a filled polygon crucial to your final goals in OpenGL? Or did you simply choose what felt like it would be a simple early lesson?
Frustratingly, although drawing a filled polygon seems like a simple thing to do -- and indeed is one of the simplest things to do in many languages -- the solution in OpenGL is likely to be quite complicated. Of course, it can be done if we're clever enough -- but that could be a lot of effort, without being the best route to take towards your later goals.
Even in languages that implement filled polygons in a way that is simple to program with, you don't always know how much strain it puts on the CPU or GPU. If you send a sequence of vertices, to be linked and filled, once every animation frame, will it be slow? If a polygon doesn't change shape, perhaps you should do the difficult part of the calculation just once? You will be doing just that, if you triangulate a polygon once using the CPU, then repeatedly send those triangles to OpenGL for rendering.
OpenGL is very, very good at doing certain things, very quickly, taking advantage of hardware acceleration. It is worth appreciating what it is and is not optimal for, to decide your best route towards your final goals with OpenGL.
If you're looking for a simple early lesson, rotating brightly coloured tetrahedrons is ideal, and happens early in most tutorials.
If on the other hand, you're planning a project that you currently envision using filled polygons a great deal -- say, a stylized cartoon rendering engine for instance -- I still advise going to the tutorials, and even more so! Find a good one; stick with it to the end; you can then think better about OpenGL functions that are and aren't available to you. What can you take advantage of? What do you need or want to redo in software? And is it worth writing your own code for apparently simple things -- like drawing filled polygons -- that are 'missing from' (or at least inappropriate to) OpenGL?
Is there a higher-level graphics library, free to use -- perhaps relying on OpenGL for rasterisation -- that can already do what you want? If so, how much freedom does it give you to mess with the nuts and bolts of OpenGL itself?
OpenGL is very good at drawing points, lines, and triangles, and hardware accelerating certain common operations such as clipping, face culling, perspective divides, perspective texture accesses (very useful for lighting) and so on. It offers you a chance to write special programs called shaders, which operate at various stages of the rendering pipeline, maximising your chance to insert your own unique cleverness while still taking advantage of hardware acceleration.
A good tutorial is one that explains the rendering pipeline and puts you in a much better position to assess what the tool of OpenGL is best used for.
Here is one such tutorial that I found recently: Learning Modern 3D Graphics Programming
by Jason L. McKesson. It doesn't appear to be complete, but if you get far enough for that to annoy you, you'll be well placed to search for the rest.
Using imposters to fill polygons
Everything in computer graphics is an imposter, but the term often has a specialised meaning. Imposters display very different geometry from what they actually have -- only more so than usual! Of course, a 3D world is very different from the pixels representing it, but with imposters, the deception goes deeper than usual.
For instance, a rectangle that OpenGL actually constructs out of two triangles can appear to be a sphere if, in its fragment shader, you write a customised depth value to the depth coordinate, calculate your own normals for lighting and so on, and discard those fragments of the square that would fall outside the outline of the sphere. (Calculating the depth on those fragments would involve a square root of a negative number, which can be used to discard the fragment.) Imposters like that are sometimes called flat cards or billboards.
(The tutorial above includes a chapter on imposters, and examples doing just what I've described here. In fact, the rectangle itself is constructed only part way through the pipeline, from a single point. I warn that the scaling of their rectangle, to account for the way that perspective distorts a sphere into an ellipse in a wide FOV, is a non-robust fudge. The correct and robust answer is tricky to work out, using mathematics that would be slightly beyond the scope of the book. I'd say it is beyond the author's algebra skills to work it out, but I could be wrong; he'd certainly understand a worked example. However, when you have the correct solution, it is computationally inexpensive: it involves only linear operations plus two square roots, to find the four limits of a horizontally- or vertically-translated sphere. To generalise that technique for other displacements requires one more square root, for a vector normalisation to find the correct rotation, and one application of that rotation matrix when you render the rectangle.)
So just to suggest an original solution that others aren't likely to provide, you could use an inequality (like x * x + y * y <= 1 for a circle or x * x - y * y <= 1 for a hyperbola) or a system of inequalities (like three straight line forms to bound a triangle) to decide how to discard a fragment. Note that if inequalities have more than linear order, they can encode perfect curves, and render them just as smoothly as your pixelated screen will allow -- with no limitation on the 'geometric detail' of the curve. You can also combine straight and curved edges in a single polygon, in this way.
For instance, a fragment shader (which would be written in GLSL) for a semi-circle might have something like this:
// x and y are coordinates interpolated across the flat card, with the
// circle's centre at the origin and a radius of 1
if (y < 0.0) discard;
float rSq = x * x + y * y;
if (rSq > 1.0) discard;
// We're inside the semi-circle; put further shader computations here
However, the polygons that are easy to draw in this way are very different from the ones that you're used to being easy. Converting a sequence of connected nodes into inequalities means yet more code to write, and deciding on the Boolean logic for combining those inequalities could then get quite complex -- especially for concave polygons. Evaluating the inequalities in a sensible order, so that some can be culled based on the results of others, is another ill-posed headache of a problem if it needs to be general, even though it is easy to hard-code an optimal solution for a single case like a square.
I suggest using imposters mainly for its contrast with the triangulation method. Something like either one could be a route to pursue, depending on what you're hoping to achieve in the end, and the nature of your polygons.
Have fun...
P.S. Here is a related topic: Polygon triangulation into triangle strips for OpenGL ES
As long as the link lasts, it's a more detailed explanation of 'polygon triangulation' than mine. Those are the two words to search for if the link ever dies.
A line loop is just an outline.
To fill the middle as well, you want to use GL_POLYGON.

Image-processing basics

I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white color and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the max number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is a Hough transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate the lines' lengths any way you want, based on whatever other constraints you have.
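For instance, with OpenCV's probabilistic Hough transform each detected segment comes with its endpoints, so the length falls out directly; the file name and the tuning parameters here are placeholders:
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

int main()
{
    // The fiber image is already edge-like (white edges on black),
    // so it can be fed to the Hough transform directly.
    cv::Mat img = cv::imread("fibers.png", cv::IMREAD_GRAYSCALE);
    std::vector<cv::Vec4i> segments;
    // rho = 1 px, theta = 1 degree; the threshold, minimum length and
    // maximum gap would need tuning to the actual image.
    cv::HoughLinesP(img, segments, 1, CV_PI / 180, 80, 30, 5);
    for (const cv::Vec4i& s : segments)
    {
        double dx = s[2] - s[0], dy = s[3] - s[1];
        double length = std::sqrt(dx * dx + dy * dy); // length in pixels
    }
    return 0;
}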
The problem is two-fold as I see it:
1) Locate the start and end points from your starting position.
2) Determine the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
In order to find the end points I would write some kind of WALKER/AI that tries to walk in different directions, knowing the original position and the last traversed direction, then continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high one. Then find the shortest path between the start and end points; its length is the length of the fiber.
It's kinda hard to give more detail since I have no idea what techniques you're going to use, and I have no example input data.
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight and their starting and ending points are on the borders.
- We can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want, just be consistent) until you encounter the first white pixel. At this point your program knows that this is definitely a starting point. Knowing this, you gather all the white pixels until you reach a certain limit (or threshold). The idea here is that if there is a fiber, you will get the angle between the fiber and the border the starting point is on; of course, the more pixels you gather (the further in you get), the surer you will be at the end. This is the trickiest part. After somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you will have the exact coordinate of the end point. Be advised: the exactness here is not quite what you might expect, because we may (in fact, will) have calculation errors in the cos/sin values. So you need to hold the threshold as long as possible. Your end point will therefore not be a point, but rather an area indicating that the ending point is probably somewhere inside it. The rest is just simple maths.
Obviously you can add more detail to this method, like checking both of the white lines that make up the fiber and deciding which one is longer, or allowing some margin for error since those lines will not be perfectly straight; this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff for this and it's easy to use... I'll put some code here:
Bitmap newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        // things go here...
    }
}
You get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop this code scans the image left to right; you can change this, of course.
Since you know C++ and C, I would recommend OpenCV. It is open source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# like @VictorS mentioned, I would use Emgu CV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for Emgu CV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with 2 different labels: the black one, which is your target, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.

Detect signs on roads

I have a video which has turn left, turn right, etc. marks on the roads.
I have to detect those signs. I am going ahead with template matching, in which I match the edge-detected outputs, but I am not getting satisfactory results. Is there any other way to detect them? Please help.
If you want a solution that is not too complicated but more robust than template matching, I suggest you go for Hough voting on SIFT descriptors. This method provides some degree of robustness to various problems, including partial occlusion of the sign, illumination variations, and deformations of the sign. In particular, the method is completely invariant to rotation and uniform scaling of the template object.
The basic idea of the algorithm is as follows:
a) extract SIFT features from the template and query images.
b) set an arbitrary reference point in the template image and calculate, for each keypoint in the template image, the vector from the keypoint to the reference point.
c) match keypoints from the template image to the query image.
d) cast a vote for each matched keypoint for all object locations in the query image that this keypoint agrees with. You do that using the vectors calculated in step (b) and the location, scale and orientation of the matched keypoints in the query image.
e) If the object is indeed located in the image, the votes map should have a strong local maximum at its location.
f) Optionally, you can verify the detection by using template matching.
You can read more about that method on Wikipedia here or in the original paper (by D. Lowe) here.
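As a minimal sketch of steps (a) and (c) using OpenCV 4.x (where SIFT lives in the main features2d module), with placeholder file names and the usual ratio test; the voting of steps (b), (d) and (e) is left out:
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat tmpl  = cv::imread("sign_template.png", cv::IMREAD_GRAYSCALE);
    cv::Mat query = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);

    // (a) extract SIFT keypoints and descriptors from both images
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kTmpl, kQuery;
    cv::Mat dTmpl, dQuery;
    sift->detectAndCompute(tmpl,  cv::noArray(), kTmpl,  dTmpl);
    sift->detectAndCompute(query, cv::noArray(), kQuery, dQuery);

    // (c) match template descriptors to the query image, keeping only
    // matches that pass Lowe's ratio test
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(dTmpl, dQuery, knn, 2);
    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)
            good.push_back(m[0]);

    // Each surviving match carries the query keypoint's position, scale and
    // orientation, which is exactly what the vote casting in (b)/(d) needs.
    return 0;
}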
Use SIFT or SURF. You can get an invariant descriptor; with training, you can determine whether the vectors that represent the road marks (turn left, right, or stop) match the new ones in the video.
You might try extracting features and training a classifier (linear discriminant, neural network, naive Bayes, etc.). There are many candidate features you might try, but I'd think that you wouldn't need anything too complicated, even if the edge detection is poor, assuming that isolation of the sign is good. Some features to consider are: horizontal and vertical projections (row and column totals) and simple statistics of edge pixels (mean, standard deviation, skewness, etc.); a small sketch of the projection features follows the book list. For more feature ideas, see any of these books:
"Shape Classification and Analysis: Theory and Practice", by Costa and Cesar
"Algorithms for Image Processing and Computer Vision", by J. R. Parker
"Digital Image Processing", by Gonzalez and Woods

Recognizing tetris pieces in C

I have to make an application that recognizes, inside a black and white image, a Tetris piece given by the user. I read the image to be analyzed into an array.
How can I do something like this using C?
Assuming that you have already loaded the images into arrays, what about using regular expressions?
You don't need exact shape matching, only approximate, so why not give it a try!
Edit: I downloaded your doc file. You must identify a random pattern among random figures on a 2D array, so regex isn't suitable for this problem; let's say that's the bad news. The good news is that your homework is not exactly image processing, and it's much easier.
It's your homework so I won't create the code for you but I can give you directions.
You need a routine that can create a new piece from the original pattern/piece, rotated. (Note: by piece I mean the 4x4 square - all of its cells.)
You need a routine that checks whether a piece matches an area of the 2D image at position x,y - the matching area would have corners (x-2, y-2, x+1, y+1).
You search by checking every image position (x, y) for a match.
Since you must use parallelism, you can create 4 threads and assign each thread a different rotation to search for. A sketch of the first two routines follows.
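As a rough sketch of those two routines, in C-compatible code; the Piece struct and the flat row-major image layout are assumptions for illustration:
typedef struct { int cell[4][4]; } Piece; /* 0/1 cells */

/* Create a new piece: the original rotated 90 degrees clockwise. */
Piece rotate(const Piece* p)
{
    Piece r;
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            r.cell[x][3 - y] = p->cell[y][x];
    return r;
}

/* Check whether piece p matches the w-by-h image at top-left corner (x, y);
   pixels outside the image are treated as empty. */
int matchesAt(const int* image, int w, int h, const Piece* p, int x, int y)
{
    for (int j = 0; j < 4; ++j)
        for (int i = 0; i < 4; ++i)
        {
            int ix = x + i, iy = y + j;
            int pixel = (ix >= 0 && iy >= 0 && ix < w && iy < h)
                        ? image[iy * w + ix] : 0;
            if (pixel != p->cell[j][i]) return 0;
        }
    return 1;
}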
You might not want to implement that from scratch (unless required, of course) ... I'd recommend looking for a suitable library. I've heard that OpenCV is good, but never done any work with machine vision myself so I haven't tested it.
Search for connected components (e.g. using depth-first search; you might want to avoid recursion if efficiency is an issue - use your own stack instead). The largest connected component should be your Tetris piece. You can then analyze it further (using the shape, the size, or some kind of border description).
Looking at the shapes given for tetris pieces in Wikipedia, called "I,J,L,O,S,T,Z", it seems that the ratios of the sides of the bounding box (easy to find given a binary image and C) reveal whether you have I (4:1) or O (1:1); the other shapes are 2:3.
To detect which of the remaining shapes you have (J, L, S, T, or Z), it looks like you could collect the length and position of the shape's edges that fall on the bounding box's edges. Thus, T would show 3 and 1 along the 3-sides, and 1 and 1 along the 2-sides. Keeping track of the positions helps distinguish J from L, and S from Z.
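A minimal sketch of that initial ratio test; minX..maxY are assumed to come from a prior connected-component scan, measured in cell units:
/* Classify by the bounding box's side ratio, as described above. */
int w = maxX - minX + 1;
int h = maxY - minY + 1;
int longSide  = (w > h) ? w : h;
int shortSide = (w > h) ? h : w;
char piece;
if (longSide == 4 * shortSide)
    piece = 'I';  /* 4:1 box */
else if (longSide == shortSide)
    piece = 'O';  /* 1:1 box */
else
    piece = '?';  /* 2:3 box: J, L, S, T or Z - inspect the edges next */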
