Moving an SCNNode relative to the root node - SceneKit

I’m learning SceneKit by writing a game where you’re flying through an asteroid field dodging objects. Initially, I did this by moving/rotating the camera, but I realized that at some point I’d run out of coordinate space and it’s probably better to move all of the objects toward the camera (and dispose of them when I’ve “passed” them).
But I can’t seem to get them to move. My original code that moved the camera looked like this:
[cameraNode setTransform:CATransform3DTranslate(cameraNode.transform, 0.f, 0.f, -2.f)];
I thought I could do something similar with each asteroid node:
[asteroidNode setTransform:CATransform3DTranslate(cameraNode.transform, 0.f, 0.f, 2.f)];
but they don’t move. If I add a basic animation:
CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"position.z"];
anim.byValue = @10;
anim.duration = 1.0;
[asteroidNode addAnimation:anim forKey:@"move forward"];
the asteroids move but predictably snap back to their original location when it’s done.
This feels like a rookie mistake but I can’t find anything addressing this problem online. Am I going about this the wrong way?
Thanks,
Jeff

Moving the cameraNode the way you do should work, but make sure "cameraNode" is your current pointOfView or it will have no effect (check that scnView.pointOfView == cameraNode).
If you want to move the nodes instead, you should translate "node.transform" (not "cameraNode.transform"). Actually, it's simpler to just do:
node.position = SCNVector3Make(node.position.x, node.position.y, node.position.z+2.0);
Also make sure there is no animation or physics running on these nodes that could override your changes.

Related

SceneKit SCNBillboardConstraint and SCNLookAtConstraint conflict

I'm trying to point a circle with an arrow at the top at an SCNNode in SceneKit, but keep it constrained, or flat, toward the point of view at all times. It seems that with a look-at constraint, the billboard constraint gets ignored.
Here is the setup for constraints:
let billboardConstraint = SCNBillboardConstraint()
billboardConstraint.freeAxes = SCNBillboardAxis.Y
let lookAtConstraint = SCNLookAtConstraint(target: targetNode)
lookAtConstraint.localFront = SCNVector3Make(0, 1, 0)
lookAtConstraint.worldUp = SCNVector3Make(0, 1, 0)
arrowNode.constraints = [billboardConstraint, lookAtConstraint]
self.sceneView.pointOfView?.addChildNode(arrowNode)
Here are example images of the issue. The first image is how I want to keep it, the second image shows the flattening that I can't seem to control:
No matter what I try, it doesn't stay facing the camera. I've tried equating the z distance for both the arrow node and the target node (still rotates the arrow node undesirably), tried setting a gimbal lock (flips the arrow node out), tried to adjust the angles for the arrow node to keep it "facing" the point of view, and tried an SCNCone as a pointer.
I also tried just moving this arrow node into an image view in the overlay, with an invisible arrow node inside SceneKit, but couldn't get the math right when trying to CGAffineTransform the 2D UIImageView. I tried getting the rotation vector of the SCNNode that has the SCNLookAtConstraint, tried projectPoint, etc. Not quite getting it. Maybe I should've paid more attention in high school :-(
Anyone have any ideas?
This post: 59251351 - it shows a 3D box with a tube (a tank with a gun) that targets a specific node and "aims" at it. That's your arrow, so first make sure you have set it up with the correct rotation from the beginning, otherwise it will never work. The arrow should be a subnode, so that your main node and all its subnodes rotate together when you set the constraints on the main node.
The example uses a fixed -z for the related nodes and the point of view doesn't change. If you want to move the camera forward, then you need a way to maintain perspective - that is, keeping the z of every node in question at the same distance from the camera.
So, if your target node and indicator can stay on the same z-distance plane (as you indicated in your example that it can work that way), I "think" this should help. You might set your target node slightly off on z compared to your arrow node, something insignificant such as 0.001, just so the look-at math doesn't return a NaN through some happenstance.
If it's more complex than that, then someone with more math skills is required.
Hope that helps

OpenGL: make a spotlight act like a flashlight

I'm trying to mimic a flashlight in OpenGL. Basically I want the spotlight to be in the same position as the camera and point in the same direction that the camera is pointing in.
Here is my code:
gluLookAt(xAt, yAt, zAt, xLookAt, yLookAt, zLookAt, 0, 1, 0);

GLfloat light_pos[4] = {xAt, yAt, zAt, 1.0f};
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);

GLfloat spotDir[3] = {xLookAt - xAt, yLookAt - yAt, zLookAt - zAt};
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, spotDir);
I've made calls to initialize the light and I've calculated the surface normals of all my objects.
Now the above code kind of works: when the camera is moved, the spotlight follows. However, when I move the camera closer to an object, the object gets less light shone on it, and when I move the camera further away the object gets more light.
I want the opposite to happen - the further away the camera is from an object, the less light should be shone on it. How is this done? Or is this not the behaviour of an OpenGL spotlight?
So I looked into this, and modifying the attenuation of the light yields the correct result: the fixed-function light has GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION and GL_QUADRATIC_ATTENUATION factors that you set with glLightf (for example glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.1f)), so the light's contribution falls off with distance from its position. Hope this helps anyone else who stumbles across this.

Image-processing basics

I have to do some image processing but I don't know where to start. My problem is as follows:
I have a 2D fiber image (attached with this post), in which the fiber edges are denoted by white and the inside of the fiber is black. I want to choose any black pixel inside the fiber and travel from it along the length of the fiber. This will involve comparing the contrast with the surrounding pixels and then travelling in the desired direction. My main aim is to find the length of the fiber.
So can someone please tell me at least where to start? I have a rough algorithm in my mind for how to approach the problem, but I don't even know which software/library to use.
Regards
Adi
EDIT1 - Instead of OpenCV, I started using MATLAB since I found it much easier. I applied the Hough transform and then the houghpeaks function with the maximum number of peaks set to 100 so that all fibers are included. After that I got the following image. How do I find the length now?
EDIT2 - I found a research article on how to calculate the length using the Hough transform, but I'm not able to implement it in MATLAB. Someone please help.
If your images are all as clean as the one you posted, it's quite an easy problem.
The very first technique I'd try is using a Hough Transform to estimate the line parameters, and there is a good implementation of the algorithm in OpenCV. After you have them, you can estimate their length any way you want, based on whatever other constraints you have.
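For example, a rough sketch with OpenCV's Python bindings (the file name and the Hough parameters are placeholders you would tune for your images):

import cv2
import numpy as np

# Hypothetical file name; a binary image with white fiber edges on black.
edges = cv2.imread("fiber.png", cv2.IMREAD_GRAYSCALE)
_, edges = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)

# The probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=30, maxLineGap=5)

if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print(np.hypot(x2 - x1, y2 - y1))  # length of each detected segment, in pixels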
The problem is two-fold as I see it:
1) locate the start and end point from your starting position.
2) decide the length between the start and end points.
Since I don't know your input data, I assume it's pixel data with a value in 0..1 on each pixel representing its "whiteness".
In order to find the end points, I would write some kind of walker/AI that tries to step in different directions, knowing its original position and the last traversed direction, and continuing along that route until the "forward arc" is all white. This assumes the fiber is somewhat straight (is it?).
Once you have the start and end points, you can feed them into an A* pathfinding algorithm, giving black pixels a low cost and white pixels a very high one. Then find the shortest path between the start and end points; its length is the length of the fiber.
Kinda hard to give more detail since I have no idea what techniques you're going to use, and without some example input data.
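If it helps, here is a small sketch of that second step in Python, using scikit-image's minimum-cost-path routine rather than a hand-rolled A* (the image values and the start/end points are assumed to come from the walker described above):

import numpy as np
from skimage.graph import route_through_array

def fiber_length(image, start, end):
    # image: 2D array in 0..1, where 1 = white edge and 0 = black fiber interior
    # start, end: (row, col) end points found by the walker above (hypothetical)
    costs = np.where(image > 0.5, 1000.0, 1.0)  # black = cheap, white = very expensive
    path, _ = route_through_array(costs, start, end, fully_connected=True)
    path = np.asarray(path)
    steps = np.hypot(np.diff(path[:, 0]), np.diff(path[:, 1]))
    return steps.sum()  # length of the cheapest path, i.e. the fiber, in pixels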
Assumptions:
- This image can be considered a binary image where there are only 0s (black) and 1s (white).
- All the fibers are straight and their starting and ending points are on the image borders.
- We can come up with a limit for the thickness of a fiber (the thickness of the white lines).
Under these assumptions:
Start scanning the image border (start from wherever you want, in whichever direction you want... just be consistent) until you encounter the first white pixel. At this point your program knows that this is definitely a starting point. Knowing this, gather white pixels until you reach a certain limit (or threshold). The idea here is that, if there is a fiber, you will get the angle between the fiber and the border the starting point is on; of course, the more pixels you gather (the further in you go), the surer you will be in the end. This is the trickiest part: after somehow ending up with a line, you need to calculate the angle (basic trigonometry).
Since you know the starting point, the width/height of the image and the angle (or its cos/sin), you will have the exact coordinates of the end point. Be advised: "exact" is not quite the right word, because we may (in fact, we will) have calculation errors in the cos/sin values, so you need to hold the threshold as long as possible. Your end point will therefore not be a single point but rather an area indicating that the ending point is probably somewhere inside it. The rest is just simple maths.
Obviously you can put more detail into this method, like checking both of the white lines that make up the fiber and deciding which one is longer, or allowing some margin for error since those lines will not be perfectly straight; this is where a conceptual thickness comes into play, and so on.
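A rough Python/OpenCV sketch of the trickiest part above - finding a starting point on a border and estimating the fiber's angle - assuming a binary image and an arbitrary 40-pixel gathering window:

import cv2
import numpy as np

# Hypothetical file name; white fiber edges on a black background.
img = cv2.imread("fiber.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape

# Scan one border (here the top row, left to right) for the first white pixel.
start = next(((x, 0) for x in range(w) if img[0, x] > 128), None)

if start is not None:
    # Gather the white pixels in a window around the starting point (the "threshold").
    x_lo, x_hi = max(0, start[0] - 40), min(w, start[0] + 40)
    ys, xs = np.nonzero(img[0:40, x_lo:x_hi] > 128)
    pts = np.column_stack([xs + x_lo, ys]).astype(np.float32)

    # Fit a line through those pixels; (vx, vy) is the unit direction vector.
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    angle = np.degrees(np.arctan2(vy, vx))
    print("fiber enters at", start, "at an angle of", angle, "degrees to the border")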
Programming:
C# has nice stuff and is easy for you to use... I'll put some code here...
Bitmap newBitmap = new Bitmap(openFileDialog1.FileName);
for (int x = 0; x < newBitmap.Width; x++)
{
    for (int y = 0; y < newBitmap.Height; y++)
    {
        Color originalColor = newBitmap.GetPixel(x, y); // gets the pixel value...
        // things go here...
    }
}
You'll get the image from an OpenFileDialog and load it into a Bitmap. Inside the nested for loop this code scans the image column by column, from left to right; you can change this, however...
Since you know C++ and C, I would recommend OpenCV. It is open source, so if you don't trust anyone like me, you won't have a problem ;). Also, if you want to use C# like @VictorS mentioned, I would use EmguCV, which is the C# equivalent of OpenCV. Tutorials for OpenCV are included, and tutorials for EmguCV can be found on their website. Hope this helps!
Download and install the latest version of 3D Slicer.
Load your data and go to the package > EM Segmenter without Atlas.
Define your anatomical tree with 2 different labels: the black one, which is your target, and the white edges.
Then choose the whole 2D image as your ROI and click on Segment.
Here is the result; I labeled the edges in green and the black area in white.
You can modify your tree and change the structures you define.
You can give more samples to your segmentation to make it more accurate.

About finding pupil in a video

I am now working on an eye tracking project. In this project I am tracking eyes in a webcam video (resolution of 640x480).
I can locate and track the eye in every frame, but I need to locate the pupil. I have read a lot of papers and most of them refer to Alan Yuille's deformable template method to extract and track the eye features. Can anyone help me with code for this method in any language (MATLAB/OpenCV)?
I have tried different thresholds, but due to the low resolution of the eye regions, it does not work very well. I would really appreciate any kind of help with finding the pupil, or even the iris, in the video.
What you need to do is convert your webcam to a near-infrared (NIR) cam. There are plenty of tutorials online for that. Try this.
An image taken from an NIR cam will look something like this:
You can then use OpenCV to threshold the image.
Then use the erode function.
After this, fill the image with some color, taking a corner as the seed point.
Eliminate the holes and invert the image.
Apply the distance transform (the distance of each pixel to the nearest zero pixel).
Find the coordinate of the maximum value and draw a circle there.
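A rough Python/OpenCV sketch of those steps (the file name and the threshold of 40 are placeholders to tune for your camera):

import cv2
import numpy as np

# Hypothetical file name; an NIR eye image where the pupil is a dark blob.
gray = cv2.imread("eye_nir.png", cv2.IMREAD_GRAYSCALE)

# 1. Threshold so the dark pupil becomes white (inverse binary threshold).
_, binary = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)

# 2. Erode to remove small specks such as eyelash shadows.
binary = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=2)

# 3. Flood-fill from a corner, then combine to close the holes inside the blob.
mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
filled = binary.copy()
cv2.floodFill(filled, mask, (0, 0), 255)
closed = binary | cv2.bitwise_not(filled)   # holes eliminated

# 4. Distance transform: each pixel's distance to the nearest zero pixel.
dist = cv2.distanceTransform(closed, cv2.DIST_L2, 5)

# 5. The maximum of the distance transform sits at the pupil centre,
#    and its value approximates the pupil radius.
_, radius, _, center = cv2.minMaxLoc(dist)
out = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
cv2.circle(out, center, int(radius), (0, 0, 255), 2)
cv2.imwrite("pupil.png", out)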
If you're still working on this, check out my OptimEyes project: https://github.com/LukeAllen/optimeyes
It uses Python with OpenCV, and works fairly well with images from a 640x480 webcam. You can check out the "Theory Paper" and demo video on that page also. (It was a class project at Stanford earlier this year; it's not very polished but we made some attempts to comment the code.)
Depending on the application for tracking the pupil I would find a bounding box for the eyes and then find the darkest pixel within that box.
Some pseudocode:
box left_location = findlefteye()
box right_location = findrighteye()
image_matrix left = image[left_location]
image_matrix right = image[right_location]
pixel left_min = argmin(left)    // position of the darkest pixel in the left box
pixel right_min = argmin(right)  // position of the darkest pixel in the right box
pixel left_pupil = left_location.corner + left_min
pixel right_pupil = right_location.corner + right_min
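A minimal Python/OpenCV version of the same idea, assuming you already have a grayscale frame and an (x, y, w, h) bounding box per eye from your eye tracker (the helper names above are hypothetical):

import cv2

def darkest_point(gray, box):
    # box is a hypothetical (x, y, w, h) eye bounding box in the grayscale frame.
    x, y, w, h = box
    roi = cv2.GaussianBlur(gray[y:y + h, x:x + w], (5, 5), 0)  # smooth out noise
    _, _, min_loc, _ = cv2.minMaxLoc(roi)        # location of the darkest pixel
    return (x + min_loc[0], y + min_loc[1])      # back in full-frame coordinates

# left_pupil = darkest_point(gray_frame, left_box)
# right_pupil = darkest_point(gray_frame, right_box)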
In the first answer, suggested by Anirudth...
Just apply the HoughCircles function after the thresholding function (2nd step).
Then you can directly draw the circle around the pupil; the circle's centre (x, y) gives you the centre of the eye and r its radius.
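Roughly, in Python/OpenCV (the file name and all the Hough parameters are guesses you'd need to tune):

import cv2
import numpy as np

# Hypothetical file name; the same kind of NIR eye image as in the answer above.
gray = cv2.imread("eye_nir.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)  # thresholding step

circles = cv2.HoughCircles(cv2.medianBlur(binary, 5), cv2.HOUGH_GRADIENT,
                           dp=1, minDist=50, param1=50, param2=15,
                           minRadius=5, maxRadius=60)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)         # strongest circle = pupil
    output = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.circle(output, (int(x), int(y)), int(r), (0, 255, 0), 2)  # pupil outline
    cv2.circle(output, (int(x), int(y)), 2, (0, 0, 255), 3)       # centre of the eye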

WPF: Collision Detection with Rotated Squares

With reference to this programming game I am currently building.
Thanks to the answers from this post, I am now able to find the x-y coordinates of all the points of the rectangles (even when rotated), and Collision-Detection with Walls is almost working perfectly now.
Now I need to implement collision detection with the bots themselves (cause obviously, there will be more than one bot in the Arena).
Square-Square Collision Detection (Non-rotated) is not valid in this case because the bots will be turned at an angle (just like I described here).
So what is the best way to implement this form of Rotated Rectangles Collision Detection in WPF?
I guess there must be some math involved, but it usually turns out that there are functions in WPF that do the math for you (just like in this case).
Solution
By using the method I posted as a solution to this previous question and a WPF method called IntersectsWith (from Rect), I was able to solve this issue of rotated rectangles collision detection like so:
public Rect GetBounds(FrameworkElement of, FrameworkElement from)
{
    // Might throw an exception if "of" and "from" are not in the same visual tree
    GeneralTransform transform = of.TransformToVisual(from);
    return transform.TransformBounds(new Rect(0, 0, of.ActualWidth, of.ActualHeight));
}

Vehicle IsBotCollided(IEnumerable<Vehicle> blist)
{
    // currentBounds is of type Rect, which contains the 4 points of the rectangle (even when rotated)
    var currentBounds = GetBounds(BotBody, BattleArena);

    // Then I check whether the current bounds intersect with each of the other bots' bounds that are also in the Arena
    foreach (Vehicle vehicle in blist)
    {
        if (GetBounds(vehicle.BotBody, BattleArena).IntersectsWith(currentBounds))
        {
            return vehicle;
        }
    }

    return null;
}
I would check each edge for intersection (so you'd have at most 4*4 segment-intersection checks; if any two edges intersect, the bots do too, and you can stop), although I'm sure there are better/faster ways to do this. If the rectangles can have different sizes, you should also check whether the smaller one lies entirely inside the other.
The performance could be slightly improved if you first compare the rotated x/y min/max values of the rectangles (or even compute a bounding circle around each bot and check those, which is faster still), so you don't have to check the edges when the bots are far away from each other.
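The geometry is the same in any language; here is a small sketch in Python of the bounding-circle pre-check followed by the 4*4 edge-intersection test, assuming each bot is given as a list of its four rotated (x, y) corner points:

from itertools import product
import math

def segments_intersect(p1, p2, p3, p4):
    # Orientation-based test for segment p1-p2 vs segment p3-p4
    # (collinear/touching edge cases are ignored for brevity).
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def bots_collide(corners_a, corners_b):
    # corners_a / corners_b: lists of the four rotated corner points of each bot.
    # Broad phase: bounding circles around the centroids.
    def circle(corners):
        cx = sum(x for x, _ in corners) / 4.0
        cy = sum(y for _, y in corners) / 4.0
        r = max(math.hypot(x - cx, y - cy) for x, y in corners)
        return (cx, cy), r
    (ax, ay), ra = circle(corners_a)
    (bx, by), rb = circle(corners_b)
    if math.hypot(ax - bx, ay - by) > ra + rb:
        return False                      # far apart, skip the edge checks
    # Narrow phase: the 4 x 4 edge-intersection checks.
    edges_a = list(zip(corners_a, corners_a[1:] + corners_a[:1]))
    edges_b = list(zip(corners_b, corners_b[1:] + corners_b[:1]))
    if any(segments_intersect(p1, p2, p3, p4)
           for (p1, p2), (p3, p4) in product(edges_a, edges_b)):
        return True
    # Note: full containment (one rectangle entirely inside the other),
    # mentioned above for differently sized bots, is not handled here.
    return False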
