Deriving a ray from cursor coordinates using Ogre3D

I am creating a spaceship game in which the player will be able to fire a laser from their ship. Basically, I want to create a ray from the player's ship to the cursor position. The player can move around, but the camera is static. So far I've tried using:
Ray laser = mCamera->getCameraToViewportRay(mMouse->getMouseState().X.abs, mMouse->getMouseState().Y.abs);
and setting:
laser.setOrigin(mPlayer->getPosition());
However, every time I execute the ray scene query, the ray fires towards the top-left corner of my screen. I am using the code here as a reference for deriving the screen coordinates: http://www.ogre3d.org/forums/viewtopic.php?f=5&t=49132
A quick side question for extra credit:
Is there a way of drawing a ManualObject for only a short amount of time, to simulate a shot from a laser gun? I've already tried to draw a small portion of the ray using the following snippet:
Ogre::ManualObject* lazor = mSceneMgr->createManualObject("lazor");
lazor->begin("HiliteYellow", Ogre::RenderOperation::OT_LINE_LIST);
// define start and end point
for (int i = 0; i < 20000; i++)
{
    lazor->position(laser.getPoint(30 + i));
    lazor->position(laser.getPoint(300 + i));
}
lazor->end();
mSceneMgr->getRootSceneNode()->attachObject(lazor);
Thanks!

If you've installed from source, or have the SDK, I'd recommend checking out SdkTrays.h - specifically, screenToScene, sceneToScreen, and getCursorRay.
HTH

The camera to viewport ray starts at your camera's position and goes through the location you clicked in your world.
If one of the three axis coordinates is the same for all your objects (they all lie on the same plane, i.e. the gameplay is effectively 2D), you can use the camera-to-viewport ray to determine the point where that ray intersects the plane. Then you can draw the laser from your ship to that point.
You could also use the ray to get the intersection point with the object you targeted with your cursor. That works for both a 2D and a 3D representation. Again, you would draw the laser from your ship to that point.
How to use such a ray query is explained in detail here: http://www.ogre3d.org/tikiwiki/tiki-index.php?page=Intermediate+Tutorial+3
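For the planar case described above, a minimal sketch might look like the following. It reuses the mCamera, mMouse and mPlayer members from the question, assumes the mouse state's width/height have been set to the window size, and assumes the ships move on the Z = 0 plane; note that getCameraToViewportRay expects screen coordinates normalised to the 0..1 range, not absolute pixel values:
// Sketch only: mCamera, mMouse, mPlayer come from the question; the Z = 0 game plane is an assumption.
const OIS::MouseState &ms = mMouse->getMouseState();
// getCameraToViewportRay takes normalised (0..1) screen coordinates.
Ogre::Ray mouseRay = mCamera->getCameraToViewportRay(
    ms.X.abs / (Ogre::Real)ms.width,
    ms.Y.abs / (Ogre::Real)ms.height);
// Intersect the ray with the plane the ships move on.
Ogre::Plane gamePlane(Ogre::Vector3::UNIT_Z, 0);
std::pair<bool, Ogre::Real> hit = mouseRay.intersects(gamePlane);
if (hit.first)
{
    Ogre::Vector3 target = mouseRay.getPoint(hit.second);
    // Fire the laser from the ship towards the point under the cursor.
    Ogre::Vector3 origin = mPlayer->getPosition();
    Ogre::Ray laser(origin, (target - origin).normalisedCopy());
}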

Related

Draw curves to make arrows in the Codename One GlassPane

In a Codename One app, I need to draw curved arrows in the GlassPane. The use of the GlassPane is not mandatory; however, I've already used some layers in the ContentPane and some layers in the LayeredPane, so I suppose that the GlassPane is the best option to be sure that the arrows are "over" the app.
The arrows should be like the following ones:
I suppose that I can create an algorithm that decides the absolute X and Y coordinates of the "Start" and "End" points, plus a few other points (P0, P1, P2, etc.) that describe the curves. For example:
My problem is that I don't know how to do it. Usually I don't need low-level drawing in a Codename One app, unlike in this case. Could you please show me correct and complete code to do this drawing (assuming the coordinates of Start, End, P0, P1, etc. are known)? Thank you.
This is a bit hard to do by hand. I would suggest drawing an arrow like this as SVG, using a tool such as Sketch or a similar vector graphics tool, and then using the Flamingo SVG transcoder to convert it to an image: https://www.codenameone.com/blog/flamingo-svg-transcoder.html
Alternatively, you can hand-code it with a GeneralPath, e.g.:
GeneralPath gp = new GeneralPath();
// move to the start of the path
gp.moveTo(x, y);
// draw the curve of the arrow; we use a single control point around which
// the curve is drawn and curve to the destination of the line
gp.quadTo(controlX, controlY, destX, destY);
// Stroke defines how the shape is drawn; it accepts the line width,
// cap style, join style and miter limit
Stroke st = new Stroke(2, Stroke.CAP_SQUARE, Stroke.JOIN_MITER, 1);
// red
graphics.setColor(0xff0000);
// now we can draw the shape
graphics.drawShape(gp, st);

SDL Relative Position

I have a theoretical question about SDL surface "cursors" (position rectangles).
If I want to display surface_A on my screen, I'll use a cursor created with SDL_Rect cursor; and pass it to SDL_BlitSurface();.
The cursor will contain a position relative to the top-left corner of my window.
But if I want to display surface_B inside surface_A, do I have to give a cursor relative to the top-left corner of my window, or to the top-left corner of surface_A?
You may be making some wrong assumptions about the relative positions of your cursors. There is a very good and detailed set of tutorials at the linked location that should clear things up for you...
From HERE...
Using the first tutorial as our base, we'll delve more into the world
of SDL surfaces. As I attempted to explain in the last lesson, SDL
Surfaces are basically images stored in memory. Imagine we have a
blank 320x240 pixel surface. Illustrating the SDL coordinate system,
we have something like this:
This coordinate system is quite different than the normal one you are
familiar with. Notice how the Y coordinate increases going down, and
the X coordinate increases going right. Understanding the SDL
coordinate system is important in order to properly draw images on the
screen.
Some additional terms that may help clarify:
SDL Window : You can think of this as physical pixels, or your monitor.
SDL Renderer : Controls the properties/settings of what is created in that window.
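To make the positions concrete, here is a minimal sketch (SDL 1.2-style API; surface_A and surface_B come from the question, while screen is assumed to be the display surface returned by SDL_SetVideoMode). SDL_BlitSurface interprets the destination rectangle relative to the top-left corner of the destination surface:
// Place surface_B inside surface_A: the position is relative to surface_A's top-left corner.
SDL_Rect posInA = { 10, 20, 0, 0 };
SDL_BlitSurface(surface_B, NULL, surface_A, &posInA);

// Place surface_A on the screen: the position is relative to the window's top-left corner.
SDL_Rect posInWindow = { 100, 50, 0, 0 };
SDL_BlitSurface(surface_A, NULL, screen, &posInWindow);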

ARKit: project a feature point found in the ARPointCloud to image space and check to see if it's contained in a CGRect on screen?

So, I am using ARKit to display feature points in the session. I am able to get the current frame, then its rawFeaturePoints, and place geometries in world space so the user can see them on screen. That is working great.
In the app I then have a quadrant on screen. My objective is to find, in screen coordinates, the feature points whose projections fall inside that 2D quadrant. To do that, I tried this:
get feature points as an array of vector_float3
for each of those points I then get an SCNVector3, setting the Z component to 0 (near plane)
I then call on the ARSCNView:
public func projectPoint(_ point: SCNVector3) -> SCNVector3
This approach does give me 2D points back but, depending on where the camera is, they seem to be way off.
So then, since in ARKit the camera keeps moving around, do I need to take that into account to achieve what I explained?
EDIT:
About flipping the Y of the CGPoint retrieved from the projectPoint call on the camera:
/**
Project a 3D point in world coordinate system into 2D viewport space.
@param point 3D point in world coordinate system.
@param orientation Viewport orientation.
@param viewportSize Viewport (or image) size.
@return 2D point in viewport coordinate system with origin at top-left.
*/
open func projectPoint(_ point: vector_float3, orientation: UIInterfaceOrientation, viewportSize: CGSize) -> CGPoint
Remy San mentioned flipping the Y. I tried that, and it does seem to work. One difference between what he's doing and what I am doing is that I am not using an SKScene; I am using an SCNScene. Looking at the docs, it says:
...The projection of the specified point into a 2D pixel coordinate space
whose origin is in the upper left corner...
So what throws me off is that if I don't flip the Y, it doesn't seem to work properly. (I'll try to post images to show what I mean.) But if flipping the Y makes things look better, doesn't that go against the docs?
I gather you are using the intrinsics matrix for your projection. ARKit may also give you some extra information: the cameraPoseARFrame, the projectionMatrix and the transformToWorldMap matrices. Are you taking them into consideration when transforming from world coordinates to pixel coordinates?
If anyone has a methodology for applying these matrices to the point cloud coordinates to convert them into screen coordinates, could you please contribute it to my answer? I think they may provide more precision and accuracy in the final result.
Thank you!

In SceneKit, how do you make camera look at a specific face of a node's geometry?

In SceneKit, you can add a lookAtConstraint constraint to your SceneView's Point Of View, to make Camera look at a certain node.
Is there a standard way of doing the same but for a specific face of a geometry?
So that, if I touch a specific face of a cube, the camera would move so that the Z axis of the camera node lines up with the normal of the touched face? The cube would then look like a plane from the new perspective.
No.
That would require movement of the camera, in addition to re-aiming it.
Imagine I'm in front of my house. I have a great view of the front and can just barely see the side to my left. In my Scene I tap the side of the house. A LookAt constraint would merely change the angle of the camera. It would not be aligned with the normal of that barely visible side.
To align with the normal, I'd have to walk around the house until I can stare at the house and be perpendicular to the side I tapped. At what radius? What path? You have to figure that out yourself.
Depending on what effect you're trying for, you might want to rotate the model instead of moving the camera. Rotate the tapped node locally (or as a child of an invisible parent) so that its minus-Z axis points out the tapped face, and keep a lookAtConstraint on the node, not the camera. This approach will change the look of the object, though: you will see it rotating, and the shading changing appropriately.
So that, if I touch a specific face of a cube, camera would move so that the Z axis of the camera node gets in line with the normal of the touched face?
Supposing you are using hit-testing to determine which object got touched, an SCNHitTestResult will give you both localCoordinates and localNormal, from which it should be fairly easy to derive a camera transform.
One easy way would be to make the camera a child node of the box, compute a position of the form localCoordinates + distance * localNormal, and finally build a transform using GLKMatrix4MakeLookAt and SCNMatrix4FromGLKMatrix4.
Note that you can also use worldCoordinates, worldNormal, as well as conversion utilities such as SCNNode.convertTransform(_:from:).
Building on mnuages' answer: use a hit test or ray trace to find where the user tapped on the mesh, then add a node at that location, and constrain the camera to look at that node.

Converting mouse position to world position OpenGL

Hey, I'm working on a map editor for my game, and I'm trying to convert the mouse position to a position in the game world. The view is set up using gluPerspective.
A good place to start would be the function gluUnProject, which takes mouse coordinates and calculates object space coordinates. Take a look at http://nehe.gamedev.net/data/articles/article.asp?article=13 for a basic tutorial.
UPDATE:
You must enable depth buffering for the code in that article to work. The Z value for the mouse coordinates is read from the depth buffer at that point.
In your initialization code, make sure you do the following:
glEnable(GL_DEPTH_TEST);
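Putting that together, a minimal sketch of the article's approach might look like this (the function name is made up, and mouseX/mouseY are assumed to be window coordinates with a top-left origin):
void mouseToWorld(int mouseX, int mouseY, GLdouble &wx, GLdouble &wy, GLdouble &wz)
{
    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    // OpenGL's window origin is bottom-left, so flip Y.
    GLdouble winX = (GLdouble)mouseX;
    GLdouble winY = (GLdouble)(viewport[3] - mouseY);

    // Read the depth value under the cursor (this is why depth buffering must be enabled).
    GLfloat winZ;
    glReadPixels((GLint)winX, (GLint)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    gluUnProject(winX, winY, (GLdouble)winZ, modelview, projection, viewport, &wx, &wy, &wz);
}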
A point on the screen represents an entire line (an infinite set of points) in 3D space.
Most people with questions similar to yours are really trying to select an object by clicking on it. If that's what you're after, OpenGL offers a selection mode that's generally more effective than trying to convert the screen coordinate into real-world coordinates.
Using selection mode is (usually) pretty simple: you start with gluPickMatrix, which you use to specify a small box around the click point. You then draw your scene in selection mode. When you're done, instead of actually drawing anything, it gives you back hit records describing what would have been drawn inside the box you specified. Each record includes a depth range, so you can pick the record with the smallest minimum depth, i.e. the object that would have displayed front-most (usually the one you want); see the sketch below.
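A rough sketch of that flow (legacy fixed-function OpenGL; drawScene() and the gluPerspective parameters stand in for your own setup, and each pickable object is assumed to call glLoadName with its own id while drawing):
GLuint pickObject(int mouseX, int mouseY)
{
    GLuint buffer[256];
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glSelectBuffer(256, buffer);
    glRenderMode(GL_SELECT);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    // 1x1 pixel pick box around the click point (Y flipped to OpenGL's bottom-left origin).
    gluPickMatrix((GLdouble)mouseX, (GLdouble)(viewport[3] - mouseY), 1.0, 1.0, viewport);
    gluPerspective(45.0, (GLdouble)viewport[2] / (GLdouble)viewport[3], 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);

    glInitNames();
    glPushName(0);
    drawScene();                  // nothing is rasterised in GL_SELECT mode

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    // Each hit record is { name count, min depth, max depth, names... };
    // keep the name with the smallest minimum depth, i.e. the front-most object.
    GLint hits = glRenderMode(GL_RENDER);
    GLuint picked = 0, minDepth = 0xFFFFFFFF;
    GLuint *ptr = buffer;
    for (GLint i = 0; i < hits; ++i)
    {
        GLuint names = ptr[0];
        if (ptr[1] < minDepth) { minDepth = ptr[1]; picked = ptr[3]; }
        ptr += 3 + names;
    }
    return picked;
}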
