Classifying TopoDS_Edge objects in OpenCASCADE

I'm having trouble with some IGES/STEP models where my code is not able to classify faces based on the underlying classification of edges, i.e. whether an edge is a straight line (non-rational B-spline curve) or an arc (rational B-spline curve). I have been using the code below (which works for some models):
edgex.setIsRational(BRepAdaptor_Curve(edge).IsRational());
where edge is a TopoDS_Edge and edgex is a custom Edge object.
I also tried the following code but it crashes the program on the second line:
BRepAdaptor_Curve curve(edge);
Handle_Geom_BSplineCurve spline = curve.BSpline();
edgex.setIsRational(spline->IsRational());
Could you please suggest a better method, or a fix for my approach? Thank you in advance.

You can use the BRepAdaptor_Curve::GetType() method to determine the type of the underlying curve. The crash on the second line apparently occurs because the edge is not a B-spline curve: the BRepAdaptor_Curve::BSpline() method creates a copy of the underlying B-spline, and there is nothing to build it from.
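For example, a minimal sketch along those lines (IsEdgeRational is a hypothetical helper; extend the switch with whatever other GeomAbs_* types your models contain):

#include <BRepAdaptor_Curve.hxx>
#include <Geom_BSplineCurve.hxx>
#include <Geom_BezierCurve.hxx>
#include <GeomAbs_CurveType.hxx>
#include <TopoDS_Edge.hxx>

// Classify an edge, guarding the BSpline() call with GetType().
bool IsEdgeRational(const TopoDS_Edge& edge)
{
    BRepAdaptor_Curve curve(edge);
    switch (curve.GetType())
    {
    case GeomAbs_Line:
        return false;                          // straight line
    case GeomAbs_Circle:
    case GeomAbs_Ellipse:
        return true;                           // conic arcs are rational in B-spline form
    case GeomAbs_BSplineCurve:
        return curve.BSpline()->IsRational();  // safe: type checked first
    case GeomAbs_BezierCurve:
        return curve.Bezier()->IsRational();
    default:
        return false;                          // handle the remaining types as needed
    }
}

Note that GetType() reports the type of the underlying geometry, so a B-spline that happens to be geometrically straight (common in IGES/STEP imports) will still be reported as GeomAbs_BSplineCurve, not GeomAbs_Line.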

Related

What is the purpose of `BRepLib::BuildCurves3d` calls in the OpenCASCADE tutorial?

As an OpenCASCADE newbie, I am reading the OpenCASCADE tutorial:
https://www.opencascade.com/doc/occt-7.4.0/overview/html/occt__tutorial.html
There are the following two curious calls:
BRepLib::BuildCurves3d(threadingWire1);
BRepLib::BuildCurves3d(threadingWire2);
The tutorial explains the need for these two calls in this way:
Remember that these wires were built out of a surface and 2D curves. One important data item is missing as far as these wires are concerned: there is no information on the 3D curves. Fortunately, you do not need to compute this yourself, which can be a difficult task since the mathematics can be quite complex. When a shape contains all the necessary information except 3D curves, Open CASCADE Technology provides a tool to build them automatically. In the BRepLib tool package, you can use the BuildCurves3d method to compute 3D curves for all the edges of a shape.
which I did not find entirely clear.
Imagine that I have constructed some TopoDS_Shape object.
How can I, in general, figure out whether BRepLib::BuildCurves3d call is necessary or not?
With this code you can get the 3D curve of an edge (taken from BRepExtrema_DistanceSS.cxx):
Standard_Real aFirst, aLast;
Handle(Geom_Curve) pCurv = BRep_Tool::Curve(E, aFirst, aLast);
If you have not created the 3D curves, pCurv will be a null handle. Using it will result in segmentation faults.
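If you want to check and repair a whole shape programmatically, a minimal sketch along these lines (EnsureCurves3d is a hypothetical helper name) walks the edges and builds any missing 3D curves:

#include <BRep_Tool.hxx>
#include <BRepLib.hxx>
#include <Geom_Curve.hxx>
#include <TopExp_Explorer.hxx>
#include <TopoDS.hxx>
#include <TopoDS_Edge.hxx>
#include <TopoDS_Shape.hxx>

// Returns true if any edge was missing (and received) a 3D curve.
bool EnsureCurves3d(const TopoDS_Shape& shape)
{
    bool built = false;
    for (TopExp_Explorer ex(shape, TopAbs_EDGE); ex.More(); ex.Next())
    {
        const TopoDS_Edge& edge = TopoDS::Edge(ex.Current());
        Standard_Real first, last;
        if (BRep_Tool::Curve(edge, first, last).IsNull())
        {
            BRepLib::BuildCurves3d(edge);  // compute the missing 3D curve
            built = true;
        }
    }
    return built;
}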
I was curious where the 3D curves are actually used, so I tried several algorithms. These are the algorithms I tried where the 3D curves are not used:
Visualizing
Export to BREP
Export to STEP
Length Measurement
Checking Whether a Wire Is Closed or Ordered
The only algorithm I have found where the 3D curves are used is extrema/distance computation with BRepExtrema_DistShapeShape. You will not be able to use this class if you have not created the 3D curves.
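For instance, a minimal distance query like this (a sketch; minimumDistance is a hypothetical wrapper) only works once both shapes have their 3D curves:

#include <BRepExtrema_DistShapeShape.hxx>
#include <TopoDS_Shape.hxx>

// Returns the minimum distance, or -1.0 if the algorithm failed.
Standard_Real minimumDistance(const TopoDS_Shape& shape1,
                              const TopoDS_Shape& shape2)
{
    BRepExtrema_DistShapeShape dist(shape1, shape2);
    return dist.IsDone() ? dist.Value() : -1.0;
}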

ARKit: Reproducing the Project Point function

I'm attempting to reproduce the ARCamera's project point function, but for some reason the values are not matching up properly. I am taking the ARCamera's projection matrix and view matrix and applying basic CG perspective transform math, (PV) * p, but the NDC values do not match the pixel values returned by the ARCamera's project point function. Any ideas? Am I forgetting something?
Some more detail:
Basically, I'm trying to take an ARFrame at the click of a button, and then trying to replicate the functionality of https://developer.apple.com/documentation/arkit/arcamera/2923538-projectpoint. I'm attempting to do this with https://developer.apple.com/documentation/arkit/arcamera/2887458-projectionmatrix and https://developer.apple.com/documentation/arkit/arcamera/2921672-viewmatrix, making sure all of the inputs match for both parts. A CGSize is used to transform the coordinates from NDC space to image space.
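For reference, the generic pipeline, written as a self-contained C++ sketch (this is not ARKit API; it assumes column-major matrices, as simd stores them, and a top-left image origin):

#include <array>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<Vec4, 4>;  // four columns of four rows

// Column-major matrix * vector.
Vec4 mul(const Mat4& m, const Vec4& v)
{
    Vec4 r{0.0f, 0.0f, 0.0f, 0.0f};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            r[row] += m[col][row] * v[col];
    return r;
}

// World-space point -> pixel coordinates: clip = P * V * p, then a
// perspective divide to NDC, then the viewport mapping (with a Y flip,
// since NDC points up while image coordinates point down).
std::array<float, 2> projectPoint(const Mat4& P, const Mat4& V,
                                  const Vec4& p, float width, float height)
{
    const Vec4 clip = mul(P, mul(V, p));
    const float ndcX = clip[0] / clip[3];
    const float ndcY = clip[1] / clip[3];
    return { (ndcX + 1.0f) * 0.5f * width,
             (1.0f - ndcY) * 0.5f * height };
}

If the NDC values are correct but the pixel values are off, the viewport mapping and the Y flip are common suspects.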
EDIT: Solution found, check comments below.
The problem turned out to be that projectionMatrix sometimes does not correctly account for the device orientation. The correct approach is to use projectionMatrix(for:viewportSize:zNear:zFar:), which takes the interface orientation and viewport size explicitly.

NURBS straight line between first two control points

I have been working on a piece of code that takes in a curve (a cloud of points with x, y coordinates only, for now) and parameterises it to approximate the given shape with NURBS. The issue I have is that the resulting parameterised curve is linear (!) between the first two control points and only approximates the input curve between the other ones. Any idea why that would happen (i.e. the linear segment between the first two control points)?
Also, the system wouldn't let me post a picture. I hope the problem is clear enough, though.
Your software system most probably uses multiple (repeated) start and end control points. This leads to visually straight segments near those control points. They are in fact not truly linear; they only look that way.
Thanks for replying and looking at my problem, but I have found the bug in my code. I used the number of points from the input curve rather than the desired number of control points (the two have similar variable names in my code) to compute the knot vector, and the problem propagated from that point onwards.
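For reference, the knot vector size must come from the number of control points, not the number of input points. A minimal sketch of a clamped uniform knot vector computation (clampedUniformKnots is an illustrative name) for degree p and nCtrl control points:

#include <vector>

// A degree-p B-spline with nCtrl control points needs nCtrl + p + 1
// knots; the first and last p + 1 knots are repeated (clamped).
std::vector<double> clampedUniformKnots(int nCtrl, int p)
{
    const int nKnots = nCtrl + p + 1;
    const int nSpans = nCtrl - p;  // requires nCtrl > p
    std::vector<double> U(nKnots);
    for (int i = 0; i < nKnots; ++i)
    {
        if (i <= p)          U[i] = 0.0;
        else if (i >= nCtrl) U[i] = 1.0;
        else                 U[i] = double(i - p) / nSpans;
    }
    return U;
}

Passing the number of input points instead of nCtrl here produces a knot vector of the wrong size, which is exactly the kind of bug described above.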

Representing images as graphs based on pixels using OpenCV's CvGraph

I need to use C for a project, and I saw this screenshot in a PDF which gave me the idea:
http://i983.photobucket.com/albums/ae313/edmoney777/Screenshotfrom2013-11-10015540_zps3f09b5aa.png
It says you can treat each pixel of an image as a graph node (or vertex, I guess), so I was wondering how I would do this using OpenCV and the CvGraph set of functions. I'm trying to do this to learn about graphs and how to use them in computer vision, and I think this would be a good starting point.
I know I can add a vertex to a graph with
int cvGraphAddVtx(CvGraph* graph, const CvGraphVtx* vtx=NULL, CvGraphVtx** inserted_vtx=NULL )
and the documentation says of the above function's vtx parameter:
"Optional input argument used to initialize the added vertex (only user-defined fields beyond sizeof(CvGraphVtx) are copied)"
Is this how I would represent a pixel as a graph vertex, or am I barking up the wrong tree? I would love to learn more about graphs, so if someone could help me by posting code, links, or good ol' fashioned advice, I'd be grateful. =)
http://vision.csd.uwo.ca/code has an implementation of multi-label optimization. The GCoptimization.cpp file has a GCoptimizationGridGraph class, which I guess is what you need. I am not a C++ expert, so I still can't figure out how it works. I am also looking for a simpler solution.
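For the CvGraph part of the question, here is a minimal sketch using the legacy OpenCV C API (buildPixelGraph is an illustrative name; 4-connectivity and a grayscale IplImage are assumptions):

#include <opencv2/core/core_c.h>

// Custom vertex: the CvGraphVtx header must come first; the fields after
// it ("user-defined fields beyond sizeof(CvGraphVtx)") are what
// cvGraphAddVtx copies from the template vertex you pass in.
typedef struct PixelVtx {
    CV_GRAPH_VERTEX_FIELDS()
    int x, y;         /* pixel position */
    uchar intensity;  /* pixel value    */
} PixelVtx;

CvGraph* buildPixelGraph(const IplImage* gray, CvMemStorage* storage)
{
    CvGraph* g = cvCreateGraph(CV_GRAPH, sizeof(CvGraph),
                               sizeof(PixelVtx), sizeof(CvGraphEdge),
                               storage);
    /* One vertex per pixel; indices run row by row. */
    for (int y = 0; y < gray->height; ++y)
        for (int x = 0; x < gray->width; ++x) {
            PixelVtx v;
            v.x = x;
            v.y = y;
            v.intensity = CV_IMAGE_ELEM(gray, uchar, y, x);
            cvGraphAddVtx(g, (CvGraphVtx*)&v, NULL);
        }
    /* 4-connectivity: link each pixel to its right and lower neighbour. */
    for (int y = 0; y < gray->height; ++y)
        for (int x = 0; x < gray->width; ++x) {
            int idx = y * gray->width + x;
            if (x + 1 < gray->width)
                cvGraphAddEdge(g, idx, idx + 1, NULL, NULL);
            if (y + 1 < gray->height)
                cvGraphAddEdge(g, idx, idx + gray->width, NULL, NULL);
        }
    return g;
}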

Best way to take input for a graph Data Structure in C?

I am working on a basic graph implementation (adjacency-list based) in C so that I can re-use the basic structure to solve all graph-related problems.
To map a graph I draw on paper, I want the best and easiest way.
I'm talking about the way I take the input, rather than how I should go about implementing it! :)
Should I make an input routine which asks for all the node labels first, and then asks which edges are to be connected based on pairs of labels?
What could be a good and quick way? I want an easy approach that lets me spend less energy on the input handling.
Best is to go for input as an edge list, that is, triplets of
Source, Destination, Cost
This routine can be used to fill an adjacency list or an adjacency matrix.
With the latter, you would need to properly initialize the matrix, though, and set up a convention to mark non-existent edges.
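A minimal sketch of such an input routine (readGraph and addEdge are illustrative names; the assumed format is a header line "nVertices nEdges" followed by one triplet per edge):

#include <stdio.h>
#include <stdlib.h>

typedef struct AdjNode {
    int dest, cost;
    struct AdjNode* next;
} AdjNode;

typedef struct Graph {
    int nVertices;
    AdjNode** heads;  /* one list head per vertex */
} Graph;

/* Prepend an edge to the source vertex's adjacency list. */
void addEdge(Graph* g, int src, int dest, int cost)
{
    AdjNode* node = (AdjNode*)malloc(sizeof(AdjNode));
    node->dest = dest;
    node->cost = cost;
    node->next = g->heads[src];
    g->heads[src] = node;
}

/* Read "nVertices nEdges", then one "src dest cost" triplet per edge. */
Graph* readGraph(void)
{
    int n, m;
    if (scanf("%d %d", &n, &m) != 2)
        return NULL;
    Graph* g = (Graph*)malloc(sizeof(Graph));
    g->nVertices = n;
    g->heads = (AdjNode**)calloc(n, sizeof(AdjNode*));
    for (int i = 0; i < m; ++i) {
        int s, d, c;
        if (scanf("%d %d %d", &s, &d, &c) != 3)
            break;
        addEdge(g, s, d, c);
        /* For an undirected graph, also: addEdge(g, d, s, c); */
    }
    return g;
}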
Here you can find details about the representation of graphs:
Graph-internal-representation
Some code in C++ and Java is also given there, which you can easily convert to C.
