Add SCNText to SCNScene with ARKit - scenekit

I just started studying ARKit examples and SceneKit. I read a few SceneKit tutorials and found out that in order to add text, I need to use SCNText.
I tried writing it like this, but the text doesn't show:
guard let pointOfView = sceneView.pointOfView else { return }
let text = SCNText(string: "Hello", extrusionDepth: 4)
let textNode = SCNNode(geometry: text)
textNode.geometry = text
textNode.position = SCNVector3Make(pointOfView.position.x, pointOfView.position.y, pointOfView.position.z)
sceneView.scene.rootNode.addChildNode(textNode)
I just want to add some text (like "Hello World") to the SCNScene when the user presses a button.
Edit
I can see the text now, but since I haven't set up a plane (or anchor), I can't view it as if I were standing in front of it. How can I do that?

You have at least two problems here.
If you set a node's position to match that of the camera, you probably won't see any of that node's content. You want to position things in front of the camera for them to be seen. A camera always looks in the -z direction of its local space. There's a ton of ways to do the requisite math, but here's one that might be handy (coded on phone, so YMMV):
textNode.simdPosition = pointOfView.simdPosition + pointOfView.simdWorldFront * 0.5
This should put your object half a meter in front of the camera (or rather, half a meter in front of where the camera is at that moment; it won't follow the camera). It works because simdWorldFront takes the node's local front vector (0, 0, -1), the direction a camera points in its own space, and converts it from local space to world space.
The default font size for SCNText is something like 16. But that's in scene units, and scene units map to meters in ARKit. Also, the "text box" is anchored at its lower left. So quite likely your text isn't visible because it's sixteen meters tall and off to your right.
An easy way to handle this is by setting a scale or pivot on the node that makes its contents much smaller.
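For example, a minimal Swift sketch (the 0.01 factor and the pivot centering are illustrative values, not taken from the question):
// Shrink the node to 1% of its size, so ~16-unit glyphs become ~16 cm in ARKit:
textNode.scale = SCNVector3Make(0.01, 0.01, 0.01)
// Or re-anchor the text around its center instead of its lower left,
// using the geometry's bounding box to build a pivot:
let (minBound, maxBound) = textNode.boundingBox
textNode.pivot = SCNMatrix4MakeTranslation((minBound.x + maxBound.x) / 2,
                                           (minBound.y + maxBound.y) / 2,
                                           0)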

Related

Unity3D: TPS shooting without mouse aiming

I'm currently developing a TPS game. I have my player model with the camera snapped to its shoulder, and an empty GameObject in front of the player at some distance for calculating the direction vector for bullets (the yellow diamond in the screenshot).
I'm developing for mobile platforms, so there is no mouse; just that empty GameObject that indicates the direction of the gun.
So when a fire event occurs, I want to apply a force to the bullet so it flies in the right direction. Here is my code:
b.transform.position = transform.position;
b.transform.position += transform.forward;
b.SetActive(true);
var rb = b.GetComponent<Rigidbody>();
print((Aim.position - transform.position).normalized);
rb.AddForce((Aim.position - transform.position).normalized * Thrust);
Aim is my empty GameObject that indicates the direction, transform is the GunEnd GameObject's transform, and b is my bullet instance. If I shoot from the default player position, the bullet flies correctly from GunEnd toward the Aim object.
But if I rotate the character, for example more than 90 degrees to the left, bullets start to fly along a weird trajectory.
So, can anybody help me send the bullets in the correct direction?
When you move its position with b.transform.position += transform.forward; you might be placing it in an odd spot if the transform does not rotate when you aim (and from what I can see in the screenshot, it is not rotating, since its y component in transform.rotation stays the same). Try moving it with the vector you derive from the Aim instead, like this:
b.transform.position += (Aim.position - transform.position).normalized;

CATransform3DScale expecting CATransform3D struct not SCNMatrix4

I am trying to put some 3D text in my app, but I need to scale it. Here is the code I'm trying to use:
SCNText *text = [SCNText textWithString:@"Some Text" extrusionDepth:4.f];
SCNNode *textNode = [SCNNode nodeWithGeometry:text];
textNode.position = SCNVector3Make(-1, 5, 0);
textNode.transform = CATransform3DScale(textNode.transform, .1f, .1f, .1f);
[root addChildNode:textNode];
and I get a
CATransform3DScale expecting CATransform3D struct not SCNMatrix4
or something of the sort.
If I don't transform, the text takes up most of the screen.
Any ideas?
Thanks
On OS X, SCNMatrix4 is a typedef of CATransform3D (and thus you can use the Core Animation utilities), but that's not true on iOS. Have a look at SceneKitTypes.h; it exposes functions that match the Core Animation ones, such as SCNMatrix4Scale.
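For example, the failing line could use the SceneKit counterpart instead (a minimal sketch, shown here in Swift; the same function is available in Objective-C):
textNode.transform = SCNMatrix4Scale(textNode.transform, 0.1, 0.1, 0.1) // scale the node to 10%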
Also, it's strange that the text appears too big on your screen. The default font size is rather small, and you must almost always change the geometry's font size for it to fit well in your scene (changing the font size is better than scaling because the discretization of the glyphs changes and leads to smoother curves). Is that text the only thing in your scene?
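For instance (a sketch; the size value is only an illustration, shown in Swift on iOS):
text.font = UIFont.systemFont(ofSize: 2) // glyphs about 2 scene units tall; tune to fit your scene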

How can I combine UIBezierPath drawings?

I'm trying to combine several UIBezierPath drawings.
I have different types of drawings I can make (line, cubic bezier, quadratic beziers), and each of these can be filled or unfilled. I'm selecting the drawing type randomly, and my goal is to make 3 different drawings which are connected at a point.
So where the first, say, line drawing ends, the second path (maybe a cubic bezier) begins. Where that ends, a third, maybe a filled line drawing, begins.
I've got a square UIView that I'm trying to draw this in, and each path should have its own third of the UIView: the first, the second, and the third.
Would I be able to create this with one UIBezierPath object, or do I need to create 3 different ones? How do I make them end and start at the same point? Is there a way to do this with subpaths?
UIBezierPath has instance methods for this (see the docs), such as:
-addLineToPoint:
-addArcWithCenter:radius:startAngle:endAngle:clockwise:
-addCurveToPoint:controlPoint1:controlPoint2:
-addQuadCurveToPoint:controlPoint:
-appendPath:
You can combine paths one by one. When you're done, use -closePath to close the path.
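For instance, a minimal Swift sketch chaining three segment types so each starts where the previous one ended (all points here are made up for illustration):
let path = UIBezierPath()
path.move(to: CGPoint(x: 0, y: 50))
path.addLine(to: CGPoint(x: 100, y: 50))                 // first third: a line
path.addCurve(to: CGPoint(x: 200, y: 50),                // second third: a cubic bezier
              controlPoint1: CGPoint(x: 133, y: 0),
              controlPoint2: CGPoint(x: 166, y: 100))
path.addQuadCurve(to: CGPoint(x: 300, y: 50),            // last third: a quadratic bezier
                  controlPoint: CGPoint(x: 250, y: 0))
path.stroke() // call from within a graphics context, e.g. in draw(_:)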
Feel free to take a look at my open-source lib called UIBezierPath-Symbol. ;)
And if you want more customized path drawing, I recommend CGMutablePath. You can create each path as complex as you want (you can combine simple paths with the CGPathAdd... functions). Finally, use CGPathAddPath() to combine them together.
void CGPathAddPath (
    CGMutablePathRef path1,       // The mutable path to change.
    const CGAffineTransform *m,   // A pointer to an affine transformation matrix, or NULL if no transformation is needed. If specified, Quartz applies the transformation to path2 before it is added to path1.
    CGPathRef path2               // The path to add.
);
You can combine paths like this:
UIBezierPath *endPath = [UIBezierPath bezierPath];
[endPath appendPath:leftLine];
[endPath appendPath:rightLine];
[endPath appendPath:midLine];
A UIBezierPath is just a wrapper for a CGPath, which itself is just a set of instructions for drawing (by stroke or fill, or both). That drawing can take place anywhere. In other words, a UIBezierPath is just a tool for drawing; the important thing is the drawing itself.
Given a graphics context (which might be a UIView, a UIImage, a CALayer, whatever), you can do as much drawing as you like in succession - say, a line, then a cubic bezier, then a filled line drawing. But how you perform those drawing bits is totally up to you. You shouldn't really care whether you do it with three UIBezierPaths, one UIBezierPath, multiple paths, one path, subpaths, whatever (or even by copying other drawings into this one) - the final effect is all that matters, i.e. the accumulated drawing ultimately done in this graphics context.
Your question is like asking, "Should I draw this circle with my right hand or my left hand, and should I draw it counter-clockwise or clockwise?" It doesn't matter. Once it's done, what will have been drawn is a circle; that is what's important.
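To make that concrete, here is a minimal sketch of a hypothetical UIView subclass where two completely separate paths simply accumulate in the same graphics context:
import UIKit

class ScribbleView: UIView {
    override func draw(_ rect: CGRect) {
        UIColor.black.setStroke()
        let line = UIBezierPath()                    // first drawing: a stroked line
        line.move(to: CGPoint(x: 0, y: rect.midY))
        line.addLine(to: CGPoint(x: rect.width / 3, y: rect.midY))
        line.stroke()

        UIColor.black.setFill()                      // second drawing: a filled oval,
        let blob = UIBezierPath(ovalIn:              // a completely separate path object
            CGRect(x: rect.width / 3, y: rect.midY - 10, width: 20, height: 20))
        blob.fill()
        // The view shows the accumulated result; how many path objects
        // produced it doesn't matter.
    }
}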

Image-processing basics

I have to do some image processing, but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB, since I found it much easier. I applied the Hough transform and then the houghpeaks function with the max number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate their length any way you want, based on whatever other constraints you have.
The problem is two-fold as I see it:
1) Locate the start and end points from your starting position.
2) Measure the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a 0..1 value on each pixel representing its "whiteness".
To find the end points, I would write some kind of walker/AI that tries to walk in different directions, knowing the original position and the last traversed direction, then continues along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you've got the start and end points, you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high one. Then find the shortest path between the start and end points; its length is the length of the fiber (a rough sketch follows below).
It's hard to give more detail since I have no idea what techniques you're going to use, and I have no example input data.
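To make the cost idea concrete, here is a rough Swift sketch (hypothetical names and arbitrary costs; a plain Dijkstra over the pixel grid stands in for A* to keep it short):
func fiberPathLength(image: [[Int]],            // 0 = black, 1 = white
                     start: (x: Int, y: Int),
                     end: (x: Int, y: Int)) -> Double? {
    let h = image.count, w = image[0].count
    var dist = Array(repeating: Array(repeating: Double.infinity, count: w), count: h)
    var visited = Array(repeating: Array(repeating: false, count: w), count: h)
    dist[start.y][start.x] = 0
    while true {
        // Pick the unvisited pixel with the smallest tentative cost
        // (a priority queue would be faster; this keeps the sketch short).
        var best: (x: Int, y: Int)? = nil
        var bestCost = Double.infinity
        for y in 0..<h {
            for x in 0..<w where !visited[y][x] && dist[y][x] < bestCost {
                best = (x: x, y: y)
                bestCost = dist[y][x]
            }
        }
        guard let cur = best else { return nil }  // end point unreachable
        if cur == end { return bestCost }         // cost ~ path length in pixels on black
        visited[cur.y][cur.x] = true
        for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
            let nx = cur.x + dx, ny = cur.y + dy
            guard nx >= 0, nx < w, ny >= 0, ny < h else { continue }
            let step = image[ny][nx] == 0 ? 1.0 : 1000.0 // black cheap, white expensive
            dist[ny][nx] = min(dist[ny][nx], bestCost + step)
        }
    }
}
Since only 4-neighbour steps are used, the returned cost is a Manhattan-style length; allowing diagonal steps would get closer to the Euclidean length.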
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight, and their start and end points are on the borders.
- We can come up with a limit for the fiber thickness (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start wherever you want, in whichever direction you want; just be consistent) until you encounter the first white pixel. At this point your program knows this is definitely a starting point. From there, gather white pixels until you reach a certain limit (or threshold). The idea is that, if there is a fiber, you will get the angle between the fiber and the border the starting point is on; the more pixels you gather (the deeper you go), the more confident you can be in the end.
This is the trickiest part: after somehow ending up with a line, you need to calculate the angle (basic trigonometry). Since you know the starting point, the width/height of the image, and the angle (or its cos/sin), you will have the coordinate of the end point. Be advised: this is not truly exact, because there will be calculation errors in the cos/sin values, so you should hold the threshold as long as possible. Your end point will thus not be a single point but rather an area indicating that the ending point is probably somewhere inside it. The rest is just simple maths.
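The "simple maths" might look something like this Swift sketch (hypothetical names; it intersects the ray from the start point with the image borders, using image coordinates with y pointing down):
import Foundation

// Given a start point on the border and an estimated direction angle,
// find where the ray leaves a width x height image; the fiber length is
// then the distance from the start to this exit point.
func exitPoint(startX: Double, startY: Double, angle: Double,
               width: Double, height: Double) -> (x: Double, y: Double) {
    let dx = cos(angle), dy = sin(angle)
    var t = Double.infinity
    if dx > 0 { t = min(t, (width - startX) / dx) }  // hits the right border
    if dx < 0 { t = min(t, -startX / dx) }           // hits the left border
    if dy > 0 { t = min(t, (height - startY) / dy) } // hits the bottom border
    if dy < 0 { t = min(t, -startY / dy) }           // hits the top border
    return (startX + t * dx, startY + t * dy)
}

let end = exitPoint(startX: 40, startY: 0, angle: .pi / 3, width: 640, height: 480)
let length = hypot(end.x - 40, end.y - 0) // Euclidean distance = estimated fiber length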
Obviously you can add more detail to this method, like checking both of the white lines that make up the fiber and deciding which one is longer, or allowing some margin for error, since those lines will not be perfectly straight... this is where a conceptual thickness comes into play, etc.
Programming:
C# has nice stuff for this and is easy to use... I'll put some code here:
Bitmap newBitmap = new Bitmap(openFileDialog1.FileName); // System.Drawing
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        // things go here...
    }
}
You'll get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop, this code scans the image left to right; however, you can change this...
Since you know C++ and C, I would recommend OpenCV. It is open-source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# like @VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials are included with OpenCV, and those for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with 2 different labels: the black one, which is your target, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.

OpenCV: How to merge two static images into one and emboss text on it?

I have completed an image processing algorithm where I extract certain features from two similar images.
I'm using OpenCV 2.1, and I wish to showcase a comparison between these two similar images. I wish to combine both images into one, where the final image will have both images next to one another, like in the figure below.
Also, the black dots are the similarities my algorithm has found; now I want to mark them with digits, where point 1 on the right is the corresponding matching point on the left.
What OpenCV functions are useful for this work?
If you really want them in the same window, and assuming they have the same width and height (if they are similar, they should), you could create an image whose final width is twice the width of your 2 similar images, and then use ROIs to copy them into it.
You can wrap these (useful) calls in a single function in order to have nicer code.
Mat img1, img2; // previously declared, same width & height
Mat imgResult(img1.rows, 2*img1.cols, img1.type()); // your final image
Mat roiImgResult_Left = imgResult(Rect(0, 0, img1.cols, img1.rows)); // img1 will be on the left part
Mat roiImgResult_Right = imgResult(Rect(img1.cols, 0, img2.cols, img2.rows)); // img2 will be on the right part; the ROI is shifted by img1.cols to the right
Mat roiImg1 = img1(Rect(0, 0, img1.cols, img1.rows));
Mat roiImg2 = img2(Rect(0, 0, img2.cols, img2.rows));
roiImg1.copyTo(roiImgResult_Left);  // img1 will be on the left of imgResult
roiImg2.copyTo(roiImgResult_Right); // img2 will be on the right of imgResult
Julien,
The easiest way I can think of right now would be to create two windows instead of one. You can do it using cvNamedWindow(), and then position them side by side with cvMoveWindow().
After that, if you know the positions of the similarities on the images, you can draw your text near them. Take a look at cvInitFont() and cvPutText().
