Hi! I can only move the last virtual object placed in the scene. When I add another object, the previously added object remains static and I cannot perform any action on it. Can you please give me a solution to move, scale, and rotate whichever object I touch?
Seeing as you haven't posted any code, it's hard to provide an exact answer.
However, in order to keep track of the current SCNNode you wish to use, when you load a model you could store it as a variable, e.g:
var currentNode: SCNNode!
You could also use the name property of the node when you are loading your model to keep a more specific reference to it.
Here is an example of doing this:
/// Example Of Loading An SCNScene & Setting It As The Current Node
func loadModel(){

    let modelPath = "Character.scn"

    //1. Get The Reference To Our SCNScene & Get The Model Root Node
    guard let model = SCNScene(named: modelPath),
          let modelObject = model.rootNode.childNode(withName: "Root", recursively: false) else { return }

    //2. Scale It
    modelObject.scale = SCNVector3(0.4, 0.4, 0.4)

    //3. Add It To The Scene & Position It 1.5m Away From The Camera
    augmentedRealityView.scene.rootNode.addChildNode(modelObject)
    modelObject.position = SCNVector3(0, 0, -1.5)

    //4. Set It As The Current Node & Assign A Name
    currentNode = modelObject
    currentNode.name = "Character"
}
You can also make use of an SCNHitTest which:
Looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry's surface.
This can be used to differentiate between which node(s) have been hit e.g:
/// Detects A Tap On An SCNNode
///
/// - Parameter gesture: UITapGestureRecognizer
@objc func detectTap(_ gesture: UITapGestureRecognizer){

    //1. Get The Current Touch Point
    let currentTouchPoint = gesture.location(in: self.augmentedRealityView)

    //2a. Perform An SCNHitTest To Detect If An SCNNode Has Been Touched
    guard let nodeHitTest = self.augmentedRealityView.hitTest(currentTouchPoint, options: nil).first else { return }

    if nodeHitTest.node == currentNode{
        print("The Current Node Has Been Touched")
    }

    //Or To See Which Node Has Been Touched
    if nodeHitTest.node.name == "The Model"{
        print("The Model Has Been Touched")
    }
}
Another way you can handle this is to add a UITapGestureRecognizer to your view and enable interaction based on the result, e.g. setting the currentNode that way:
/// Detects A Tap On An SCNNode & Sets It As The Current Node
///
/// - Parameter gesture: UITapGestureRecognizer
@objc func detectTap(_ gesture: UITapGestureRecognizer){

    //1. Get The Current Touch Point
    let currentTouchPoint = gesture.location(in: self.augmentedRealityView)

    //2a. Perform An SCNHitTest To Detect If An SCNNode Has Been Touched
    guard let nodeHitTest = self.augmentedRealityView.hitTest(currentTouchPoint, options: nil).first else { return }

    //2b. Set It As The Current Node
    currentNode = nodeHitTest.node
}
Then you can apply logic to ensure that only the currentNode moves:
/// Moves An SCNNode
///
/// - Parameter gesture: UIPanGestureRecognizer
@objc func moveNode(_ gesture: UIPanGestureRecognizer) {

    //1. Get The Current Touch Point
    let currentTouchPoint = gesture.location(in: self.augmentedRealityView)

    //2. If The Gesture Has Begun, Perform An SCNHitTest To Check That The Current Node Was Touched
    if gesture.state == .began{
        guard let nodeHitTest = self.augmentedRealityView.hitTest(currentTouchPoint, options: nil).first,
              nodeHitTest.node == currentNode else { return }
    }

    //3a. If The Gesture State Has Changed, Perform An ARSCNHitTest To Detect Any Existing Planes
    if gesture.state == .changed{

        //3b. Get The Next Hit Test Result
        guard let hitTest = self.augmentedRealityView.hitTest(currentTouchPoint, types: .existingPlane).first else { return }

        //3c. Convert To World Coordinates
        let worldTransform = hitTest.worldTransform

        //3d. Apply The New Position To The Node
        currentNode.simdPosition = float3(worldTransform.columns.3.x, worldTransform.columns.3.y, worldTransform.columns.3.z)
    }
}
This should be more than enough to point you in the right direction...
So I am using session(_ session: ARSession, didAdd anchors: [ARAnchor]) to get all detected planes (I cannot use renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) because I am using RealityKit instead of SceneKit).
So my question is: how can I get the rotation of the detected anchor from this method? I want to create a plane entity that has exactly the same position, size, orientation, and rotation as the detected anchor. So far I'm using the extent for the size and the center for the position, but I can't figure out how to get the rotation. Right now the plane is detected correctly (right position and size) but with a different rotation than the real plane.
You can get the position and orientation of the ARPlaneAnchor from its transform property, then use that to render the bounding plane in RealityKit.
extension float4x4 {

    /// Returns the translation components of the matrix
    func toTranslation() -> SIMD3<Float> {
        return [self[3,0], self[3,1], self[3,2]]
    }

    /// Returns a quaternion representing the
    /// rotation component of the matrix
    func toQuaternion() -> simd_quatf {
        return simd_quatf(self)
    }
}
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    guard let planeAnchor = anchors[0] as? ARPlaneAnchor else {
        return
    }

    // NOTE: When the ARPlaneAnchor is first created its transform
    // contains the position and plane.center == [0,0,0]. On updates
    // the plane.center will change as the extents of the plane change.
    let position = planeAnchor.transform.toTranslation()
    let orientation = planeAnchor.transform.toQuaternion()

    // The center is a position before the orientation is taken into
    // account, so we need to rotate it to get the true position before
    // we add it to the anchor's position
    let rotatedCenter = orientation.act(planeAnchor.center)

    // You have a ModelEntity that you created earlier, e.g. modelEntity.
    // Assuming you added the entity to an anchor that is just fixed at
    // 0,0,0 in the world, or you created a custom entity with HasAnchor
    // set to 0,0,0
    modelEntity.transform.translation = position + rotatedCenter
    modelEntity.transform.rotation = orientation

    // There doesn't seem to be a way to update meshes in RealityKit, so
    // just create a new plane mesh for the updated dimensions
    modelEntity.model?.mesh = MeshResource.generatePlane(
        width: planeAnchor.extent.x,
        depth: planeAnchor.extent.z
    )
}
We have two objects in the scene. One follows the mouse position on the screen, and object 2 in turn follows the route object 1 took. We are storing the positions covered by object 1 and having object 2 play them back.
When you run the game, one object follows the other smoothly, reproducing the stored positions... but when object 1's speed is changed (a mouse click increases the velocity), object 2 cannot keep up, as it is still following the positions already cached in the array (including the velocity calculations). Please watch the short video below:
YouTube: https://youtu.be/_HbP09A3cFA
public class Play : MonoBehaviour
{
    public Transform obj;

    private List<Recorder> recordList;
    private float velocity = 10.0f;
    private Transform clone;

    void Start()
    {
        recordList = new List<Recorder>();
        clone = obj;
    }

    void Update()
    {
        // Speed up while the mouse button is held down
        if (Input.GetMouseButton(0))
        {
            velocity = 20.0f;
        }
        else
        {
            velocity = 10.0f;
        }

        // Rotate object 1 towards the mouse and move it forward
        var dir = Input.mousePosition - Camera.main.WorldToScreenPoint(transform.position);
        var angle = Mathf.Atan2(dir.y, dir.x) * Mathf.Rad2Deg;
        transform.rotation = Quaternion.RotateTowards(transform.rotation, Quaternion.AngleAxis(angle, Vector3.forward), 180 * Time.deltaTime);
        transform.position += transform.right * Time.deltaTime * velocity;

        Camera.main.transform.position = new Vector3(transform.position.x, transform.position.y, Camera.main.transform.position.z);

        // Record the current state at the front of the list
        recordList.Insert(0, new Recorder
        {
            Position = transform.position,
            Rotation = transform.rotation,
            Velocity = velocity
        });

        // Object 2 always replays the state recorded 8 frames ago
        var x = 8;
        if (x < recordList.Count)
        {
            clone.position = recordList[x].Position;
            clone.rotation = recordList[x].Rotation;
            clone.position += clone.right * Time.deltaTime * velocity;
        }

        if (recordList.Count > x)
            recordList.RemoveRange(x, recordList.Count - x);
    }
}

public class Recorder
{
    public Vector3 Position { get; set; }
    public Quaternion Rotation { get; set; }
    public float Velocity { get; set; }
}
How can we play back the stored positions always at the speed of object 1?
Summary:
If object 1 is moving slowly, object 2 does so as well;
If object 1 is running, object 2 should cover the route at a faster speed so it always keeps up with object 1;
Thanks in advance.
If I understood correctly, you might want to consider using Queue<T> instead of List<T>. I think it would be a better suited datatype, as it represents a FIFO collection (first in, first out), which is how you use the List anyway. You can add elements to the end of the queue with Enqueue(T) and always take the first item with Dequeue() (which also removes it). As with Stack<T> (its opposite), there is also a Peek() function which lets you "preview" the next element.
Another thing: it depends on the distance and speed, but I have the feeling that storing the position every frame could become a bit excessive (maybe I'm just overly concerned, though).
I also think the immediate issue with your code is that you always read the element at index 8 of the List; dequeuing instead, as in the sketch below, avoids that.
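Here is a minimal sketch of the Queue-based approach, reusing the Recorder class from your code; QueuedPlay, DelayFrames, and the elided movement code are placeholder names, not a drop-in replacement. Because each recorded state is consumed exactly once per frame, object 2 replays the path at whatever speed object 1 actually had, staying a fixed 8 frames behind:
using System.Collections.Generic;
using UnityEngine;

public class QueuedPlay : MonoBehaviour
{
    public Transform obj;                 // object 2, the follower
    private Queue<Recorder> recordQueue;  // FIFO history of object 1's states
    private float velocity = 10.0f;
    private const int DelayFrames = 8;    // same 8-frame gap as recordList[8]

    void Start()
    {
        recordQueue = new Queue<Recorder>();
    }

    void Update()
    {
        velocity = Input.GetMouseButton(0) ? 20.0f : 10.0f;

        // ...rotate and move object 1 here, exactly as in the original Update()...

        // Record object 1's state at the back of the queue
        recordQueue.Enqueue(new Recorder
        {
            Position = transform.position,
            Rotation = transform.rotation,
            Velocity = velocity
        });

        // Once enough frames are buffered, replay the oldest state.
        // Dequeue() removes it, so every state is consumed exactly once and
        // object 2 automatically moves at the speed object 1 had when the
        // state was recorded - no extra velocity math or RemoveRange needed.
        if (recordQueue.Count > DelayFrames)
        {
            Recorder state = recordQueue.Dequeue();
            obj.position = state.Position;
            obj.rotation = state.Rotation;
        }
    }
}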
The app I'm working on has multiple nodes which are added repeatedly to the game. What I have been doing is creating new SKSpriteNodes using an array of textures, and once a node has been added, I assign that node a specific physics body. But what I would like to do, instead of having an array of textures, is have an array of SKSpriteNodes with the physics body pre-assigned.
I believe I have figured out how to do this; the problem I have is when I try to add that SKSpriteNode more than once to the scene.
For example, below is what I've been playing around with, which works, but if I add another line of addChild(arraySegment_Type1[0] as SKSpriteNode), I get an error and the app crashes.
override func didMoveToView(view: SKView) {
    anchorPoint = CGPointMake(0.5, 0.5)

    var arraySegment_Type1:[SKSpriteNode] = []

    var sprite = SKSpriteNode(imageNamed: "myimage")
    sprite.physicsBody = SKPhysicsBody(rectangleOfSize: CGSizeMake(sprite.size.width, sprite.size.height))
    sprite.physicsBody?.dynamic = false
    sprite.name = "mysprite"

    arraySegment_Type1.append(sprite)

    addChild(arraySegment_Type1[0] as SKSpriteNode)
}
Does anybody know how I would be able to load SKSpriteNodes multiple times from an array?
What your code does at the moment is the following:
Creates an empty array of the type [SKSpriteNode]
Creates a single SKSpriteNode instance
Adds a physicsBody to the sprite
Adds this single spriteNode to an array
Picks out the first object in the array (index 0) and adds it to the parentNode.
Your app crashes because you are trying to add the same SKSpriteNode instance to the scene several times over; a node can only have one parent.
Here is a reworked version of your code that actually adds several sprites to the scene; it technically works but does not make a lot of sense just yet:
override func didMoveToView(view: SKView) {

    // your array is not an array-segment, it is a collection of sprites
    var sprites:[SKSpriteNode] = []
    let numberOfSprites = 16 // or any other number you'd like

    // Creates 16 instances of a sprite and adds them to the sprites array
    for _ in 0..<numberOfSprites {
        var sprite = SKSpriteNode(imageNamed: "myimage")
        sprite.physicsBody = SKPhysicsBody(rectangleOfSize: sprite.size)
        sprite.physicsBody?.dynamic = false
        sprite.name = "mysprite" // not sure what your plans are here...
        sprites.append(sprite)
    }

    // picks out each sprite in the sprites array
    for sprite in sprites {
        addChild(sprite)
    }
}
Now as I said, this doesn't make all that much sense. As you've probably noticed, the sprites all appear on top of each other, for one thing. More importantly, doing this all in didMoveToView means there is very little point in going through the whole array-dance; you could simply do the following instead:
override func didMoveToView(view: SKView) {
    let numberOfSprites = 16 // or any other number you'd like

    for _ in 0..<numberOfSprites {
        var sprite = SKSpriteNode(imageNamed: "myimage")
        // skipping the physicsBody stuff for now as it is not part of the question
        addChild(sprite)
    }
}
An improvement obviously, but all sprites are still covering each other, and didMoveToView might not be the best place to handle all this anyway, so perhaps something like this to illustrate the point:
override func didMoveToView(view: SKView) {
    let sprites = spritesCollection(count: 16)

    for sprite in sprites {
        addChild(sprite)
    }
}

private func spritesCollection(#count: Int) -> [SKSpriteNode] {
    var sprites = [SKSpriteNode]()

    for _ in 0..<count {
        var sprite = SKSpriteNode(imageNamed: "myimage")
        // skipping the physicsBody stuff for now as it is not part of the question

        // giving the sprites a random position
        let x = CGFloat(arc4random() % UInt32(size.width))
        let y = CGFloat(arc4random() % UInt32(size.height))
        sprite.position = CGPointMake(x, y)

        sprites.append(sprite)
    }
    return sprites
}
Now, long-winded as this is, it is still just a starting point towards a working and more sensible code structure...
I am using DataVisualization.Charting.Chart (WinForms), and I need to get the data point index when the user clicks on a line graph in the MouseDown event.
I know there is a HitTest function accepting x & y, but for a line graph we only need to match x; scanning every y (0 to the height of the graph) would work, but the performance is too bad.
One way to do this is to enable the cursor
chartArea1.CursorX.IsUserEnabled = true;
chartArea1.CursorX.IsUserSelectionEnabled = true;
// set selection color to transparent so that range selection is not drawn
chartArea1.CursorX.SelectionColor = System.Drawing.Color.Transparent;
and handle the CursorPositionChanged event.
private void chart1_CursorPositionChanged(object sender, CursorEventArgs e)
{
    // find a point (this series only has Y values, so using position as index works;
    // for a series with actual X values, you'd need to find the closest point)
    DataPoint pt = chart1.Series[0].Points[(int)Math.Max(e.ChartArea.CursorX.Position - 1, 0)];

    // do what is needed with the data point
    pt.MarkerStyle = MarkerStyle.Square;
}
This obviously assumes a single Series in your ChartArea.
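If you wire the handler up in code rather than in the designer, a one-line subscription is enough (chart1 is the chart instance assumed above):
// e.g. in the form's constructor, after InitializeComponent()
chart1.CursorPositionChanged += chart1_CursorPositionChanged;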
Alternatively, you can use the HitTestResult's ChartElementType directly in the MouseDown handler:
HitTestResult result = chart.HitTest(e.X, e.Y);
if (result.ChartElementType == ChartElementType.DataPoint)
{
    int index = result.PointIndex;
    // todo something...
}
I'm writing a WPF application that displays terrain in 3D.
When I perform hit testing, the wrong 3D point is returned (not the point I clicked on).
I tried highlighting the triangle that was hit (by creating a new mesh, taking the coordinates from the RayMeshGeometry3DHitTestResult object). I see that the wrong triangle gets hit (a triangle is highlighted, but it is not under the cursor).
I'm using a perspective camera with a field of view of 60, and near and far planes of 3 and 35000 respectively.
Any idea why it might happen and what I can do to solve it?
Let me know if you need any more data.
Edit: This is the code I use to perform the hit testing:
private void m_viewport3d_MouseDown(object sender, MouseButtonEventArgs e)
{
    Point mousePos = e.GetPosition(m_viewport3d);
    PointHitTestParameters hitParams = new PointHitTestParameters(mousePos);
    HitTestResult result = VisualTreeHelper.HitTest(m_viewport3d, mousePos);
    RayMeshGeometry3DHitTestResult rayMeshResult = result as RayMeshGeometry3DHitTestResult;
    if (rayMeshResult != null)
    {
        MeshGeometry3D mesh = new MeshGeometry3D();
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex1]);
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex2]);
        mesh.Positions.Add(rayMeshResult.MeshHit.Positions[rayMeshResult.VertexIndex3]);
        mesh.TriangleIndices.Add(0);
        mesh.TriangleIndices.Add(1);
        mesh.TriangleIndices.Add(2);
        GeometryModel3D marker = new GeometryModel3D(mesh, new DiffuseMaterial(Brushes.Blue));
        //...add marker to the scene...
    }
}
Something that caught me out was that the points were in model coordinates; I had to transform them to world coordinates. Here is my code that does the hit test (this will return all hits under the cursor, not just the first):
// This will cast a ray from the point (on _viewport) along the direction the camera is looking, and return hits
private List<RayMeshGeometry3DHitTestResult> CastRay(Point clickPoint, IEnumerable<Visual3D> ignoreVisuals)
{
    List<RayMeshGeometry3DHitTestResult> retVal = new List<RayMeshGeometry3DHitTestResult>();

    // This gets called every time there is a hit
    HitTestResultCallback resultCallback = delegate(HitTestResult result)
    {
        if (result is RayMeshGeometry3DHitTestResult) // It could also be a RayHitTestResult, which isn't as exact as RayMeshGeometry3DHitTestResult
        {
            RayMeshGeometry3DHitTestResult resultCast = (RayMeshGeometry3DHitTestResult)result;
            if (ignoreVisuals == null || !ignoreVisuals.Any(o => o == resultCast.VisualHit))
            {
                retVal.Add(resultCast);
            }
        }
        return HitTestResultBehavior.Continue;
    };

    // Get hits against existing models
    VisualTreeHelper.HitTest(grdViewPort, null, resultCallback, new PointHitTestParameters(clickPoint));

    return retVal;
}
And some logic that consumes a hit:
if (hit.VisualHit.Transform != null)
{
    return hit.VisualHit.Transform.Transform(hit.PointHit);
}
else
{
    return hit.PointHit;
}
You need to provide the ray to hit-test along in order for this to work in 3D. Use the overload of VisualTreeHelper.HitTest that takes a Visual3D and a RayHitTestParameters: http://msdn.microsoft.com/en-us/library/ms608751.aspx
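As a rough sketch of that overload (the origin and direction values here are placeholders; in practice you would derive them from your camera's Position and LookDirection):
using System.Windows.Media;
using System.Windows.Media.Media3D;

private void HitTestAlongRay(Visual3D sceneRoot)
{
    // Hypothetical ray: starts at the camera position and points down -Z
    Point3D origin = new Point3D(0, 0, 10);
    Vector3D direction = new Vector3D(0, 0, -1);
    RayHitTestParameters rayParams = new RayHitTestParameters(origin, direction);

    VisualTreeHelper.HitTest(
        sceneRoot,   // the Visual3D to test against
        null,        // no filter callback
        result =>
        {
            RayMeshGeometry3DHitTestResult meshHit = result as RayMeshGeometry3DHitTestResult;
            if (meshHit != null)
            {
                // meshHit.PointHit is in model coordinates; transform it to
                // world coordinates as shown in the answer above
            }
            return HitTestResultBehavior.Continue;
        },
        rayParams);
}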
Turns out it was a Normalize issue. I shouldn't have normalized the camera's look and up vectors. At the scales I'm using, the distortion is too big for the hit test to work correctly.