Converting mouse position to world position in OpenGL (C)

Hey, I'm working on a map editor for my game, and I'm trying to convert the mouse position to a position in the game world. The view is set up using gluPerspective.

A good place to start would be the function gluUnProject, which takes mouse coordinates and calculates object space coordinates. Take a look at http://nehe.gamedev.net/data/articles/article.asp?article=13 for a basic tutorial.
UPDATE:
You must enable depth buffering for the code in that article to work. The Z value for the mouse coordinates is read from the depth buffer at that point.
In your initialization code, make sure you do the following:
glEnable(GL_DEPTH_TEST);
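Putting that together, here is a minimal fixed-pipeline sketch. The Y flip assumes the mouse position is reported with Y measured from the top of the window, as most windowing toolkits do:

#include <GL/glu.h>

void mouse_to_world(int mouse_x, int mouse_y,
                    GLdouble *wx, GLdouble *wy, GLdouble *wz)
{
    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    GLfloat depth;

    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    GLdouble win_x = (GLdouble)mouse_x;
    GLdouble win_y = (GLdouble)(viewport[3] - mouse_y);

    /* Read the depth value under the cursor -- this is why depth
       buffering must be enabled. */
    glReadPixels(mouse_x, viewport[3] - mouse_y, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    gluUnProject(win_x, win_y, (GLdouble)depth,
                 modelview, projection, viewport, wx, wy, wz);
}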

A point on the screen represents an entire line (an infinite set of points) in 3D space.
Most people with questions similar to yours are really trying to select an object by clicking on it. If that's what you're after, OpenGL offers a selection mode that's generally more effective than trying to convert the screen coordinate into real-world coordinates.
Using selection mode is (usually) pretty simple: you start with gluPickMatrix, which you use to specify a small box around the click point. You then draw your scene in selection mode. When you're done, instead of actually drawing anything, it gives you back records of what would have been drawn inside the box you specified. Each record includes minimum and maximum depth values, so you can sort through them to find what would have displayed front-most (i.e., the one you usually want).
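For illustration, a minimal picking sketch under legacy (fixed-function) OpenGL. The gluPerspective parameters and draw_scene() are placeholders for your own code; draw_scene() is assumed to bracket each pickable object with glPushName()/glPopName():

#include <GL/glu.h>

#define PICK_BUFFER_SIZE 512

void draw_scene(void);   /* your drawing code, using glPushName()/glPopName() */

/* Returns the front-most hit name under (x, y), or 0 if nothing was hit. */
GLuint pick(int x, int y)
{
    GLuint buffer[PICK_BUFFER_SIZE];
    GLint viewport[4];

    glGetIntegerv(GL_VIEWPORT, viewport);
    glSelectBuffer(PICK_BUFFER_SIZE, buffer);
    glRenderMode(GL_SELECT);
    glInitNames();

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    /* 5x5-pixel pick box around the click; note the Y flip. */
    gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3] - y), 5.0, 5.0, viewport);
    gluPerspective(45.0, (GLdouble)viewport[2] / viewport[3], 0.1, 100.0);

    draw_scene();   /* nothing is rasterized in GL_SELECT mode */

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    GLint hits = glRenderMode(GL_RENDER);

    /* Each hit record is: name count, zmin, zmax, then the names.
       Records arrive in draw order, so scan for the smallest zmin. */
    GLuint front = 0, best = 0xffffffffu;
    GLuint *p = buffer;
    for (GLint i = 0; i < hits; i++) {
        GLuint count = *p++;
        GLuint zmin = *p++;
        p++;   /* skip zmax */
        if (count > 0 && zmin < best) {
            best = zmin;
            front = *p;   /* bottom-most name on the stack for this hit */
        }
        p += count;
    }
    return front;
}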


How to find out if a surface detected by ARKit is no longer available?

I am working on an application with the ARKit and SceneKit frameworks. In my application I have enabled surface detection (I followed the placing-objects sample provided by Apple). How do I find out that a detected surface is no longer available? That is, I only allow the user to place a 3D object once a surface has been detected in the ARSession.
But if the user moves rapidly or points the camera elsewhere, the detected surface gets lost. In that case, if the user tries to place another object, I shouldn't allow it until they scan the floor again and the surface is re-detected.
Is there a delegate method that lets us know when a detected surface is no longer available?
There are delegate functions that you can use. The delegate is ARSCNViewDelegate.
It has a function, renderer(_:didRemove:for:), that fires when an ARAnchor has been removed. You can use this function to perform some operation when a surface gets removed.
ARSCNViewDelegate Link
There are two ways to “lose” a surface, so there’s more than one approach to dealing with such a problem.
As noted in the other answer, there’s an ARSCNViewDelegate method that ARKit calls when an anchor is removed from the AR session. However, ARKit doesn’t remove plane anchors during a running session — once it’s detected a plane, it assumes the plane is always there. So that method gets called only if:
You remove the plane anchor directly by passing it to session.remove(anchor:), or
You reset the session by running it again with the .removeExistingAnchors option.
I’m not sure the former is a good idea, but the latter is important to handle, so you probably want your delegate to handle it well.
You can also “lose” a surface by having it pass out of view — for example, ARKit detects a table, and then the user turns around so the camera isn’t pointed at or near the table anymore.
ARKit itself doesn’t offer you any help for dealing with this problem. It gives you all the info you need to do the math yourself, though. You get the plane anchor’s position, orientation, and size, so you can calculate its four corner points. And you get the camera’s projection matrix, so you can check for whether any point is in the viewing frustum.
Since you’re already using SceneKit, though, there are also ways to get SceneKit to do the math for you... Working backwards:
SceneKit gives you an isNode(_:insideFrustumOf:) test, so if you have a SCNNode whose bounding box matches the extent of your plane anchor, you can pass that along with the camera (view.pointOfView) to find out if the node is visible.
To get a node whose bounding box matches a plane anchor, implement the ARSCNViewDelegate didAdd and didUpdate callbacks to create/update an SCNPlane whose position and dimensions match the ARPlaneAnchor’s center and extent. (Don’t forget to flip the plane sideways, since SCNPlane is vertically oriented by default.)
If you don’t want that plane visible in the AR view, set its materials to be transparent.

OpenGL: How to drag an image and move it onto a line using the mouse

I want to drag an image to a line by using the mouse, and when the image is close to the line, the image should automatically move onto the line, like in some "floor planner" programs: you create a wall, drag a door toward that wall, and when the door is close to the wall, the door automatically snaps onto it.
Can OpenGL do this?
If it can, can anyone tell me how? If it can't, can anyone tell me how else I can do it?
Show me an example.
OpenGL is a rendering API; its purpose is to generate rasterized images based on descriptions provided to it by an application.
It knows nothing about user input, and even less about the application's "domain objects" such as doors, walls, and so on. All it deals with is abstract coordinates and matrices that describe the transforms and projections to take those 3D coordinates into 2D for rasterization, as well as shading for surfaces and so on.
So it's up to you to implement that yourself, so that the coordinates you eventually pass to OpenGL end up being what you want them to be.
Snapping is typically a combination of measuring the distance from the input coordinates to some guide object and, when that distance falls below a threshold, quantizing the coordinates to lie on the guide.
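As a concrete (hypothetical) sketch in C, snapping a dragged door position onto a wall segment might look like this:

#include <math.h>

typedef struct { float x, y; } point;

/* Snap a dragged position p onto the wall segment (a, b) whenever it
   comes within snap_radius of the wall; otherwise leave it alone. */
point snap_to_segment(point p, point a, point b, float snap_radius)
{
    float abx = b.x - a.x, aby = b.y - a.y;
    float len2 = abx*abx + aby*aby;
    if (len2 == 0.0f)
        return p;                     /* degenerate wall */

    /* Project p onto the segment and clamp to its endpoints. */
    float t = ((p.x - a.x)*abx + (p.y - a.y)*aby) / len2;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;

    point q = { a.x + t*abx, a.y + t*aby };   /* closest point on the wall */

    float dx = p.x - q.x, dy = p.y - q.y;
    if (sqrtf(dx*dx + dy*dy) <= snap_radius)
        return q;                     /* close enough: quantize onto the guide */
    return p;
}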

Pixel stack for opacity (passing an object vs passing a string and parsing)

I've got a paint program I've been working on, and I've started to tackle opacity. I'm at a point now where I compare the background color to the brush color, average the two based on their alphas, and then set a new pixel. I just need to cache a portion of what I'm drawing off to the side so that it doesn't continuously sample what is continuously changing while the mouse is down. I figured I would fix this by throwing 50 pixels into a stack or queue that starts changing pixels on screen once it's full and completely empties all its contents onto the screen on mouse up. What I'd like to know is which would be more efficient: two stacks (one of coordinates and one of colors), or one stack of strings that I parse into coordinates and colors.
TLDR: Which is more efficient, two stacks of different data types, or one string stack that I parse into two data types?
Your question seems longer and more confusing than it needs to be, but I think what you're asking is:
I'm designing a paint program. If the user is painting 50%-opaque black pixels on a white background, I want them to show up as gray. My problem is that if the user draws a curve that crosses itself, or just leaves the mouse cursor in the same place for a while, the repeated pixels become darker and darker: 50% black, then 75%, then 87.5%... I don't want this. As long as the mouse button is down, no pixel should be "painted" twice, no matter how many times the curve crosses itself.
This question answers itself. The only way to keep pixels from being painted twice is to keep track of which pixels have been painted since the last time the mouse button was up. Replace
image[mouse.x][mouse.y] = alpha_average(image[mouse.x][mouse.y], current_color, current_alpha);
with
if (not already_painted[mouse.x][mouse.y]) {
    image[mouse.x][mouse.y] = alpha_average(image[mouse.x][mouse.y], current_color, current_alpha);
    already_painted[mouse.x][mouse.y] = true;
}

handle(mouse_up) {
    already_painted[*][*] = false;
}
Problem solved, right?
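In C terms, a sketch of the same idea. The canvas size, color type, and alpha_average() here are stand-ins for whatever your program already has:

#include <string.h>
#include <stdbool.h>

#define W 800
#define H 600

typedef struct { float r, g, b, a; } color;

static color image[W][H];
static bool  already_painted[W][H];
static color current_color = { 0.0f, 0.0f, 0.0f, 1.0f };
static float current_alpha = 0.5f;

/* Simple source-over blend of the brush color onto the background. */
static color alpha_average(color bg, color brush, float alpha)
{
    color out;
    out.r = brush.r * alpha + bg.r * (1.0f - alpha);
    out.g = brush.g * alpha + bg.g * (1.0f - alpha);
    out.b = brush.b * alpha + bg.b * (1.0f - alpha);
    out.a = 1.0f;
    return out;
}

/* Mouse-move handler while the button is down: each pixel blends once. */
void paint_at(int x, int y)
{
    if (!already_painted[x][y]) {
        image[x][y] = alpha_average(image[x][y], current_color, current_alpha);
        already_painted[x][y] = true;
    }
}

/* Mouse-up handler: one memset re-arms every pixel for the next stroke. */
void on_mouse_up(void)
{
    memset(already_painted, 0, sizeof already_painted);
}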
But to answer another implied question: if you're trying to choose between a bunch of parallel arrays and a bunch of data stuffed into strings, you're probably doing it wrong. All of Unity3D's scripting languages (C#, UnityScript, Boo) support struct/class/dict types and tuples, either of which would be a better idea than parallel arrays or everything-is-a-string-ism.

Nested scissor boxes with OpenGL

I am working on an OpenGL-based UI written in C. I have a hierarchy of controls -- small square 2D boxes inside a scrollable region of a bigger control. Inside the small boxes 3D scenes are drawn. The 3D scene must be scissored within the small 2D boxes and the small 2D boxes must be scissored within the larger region.
glScissor() is an obvious candidate for making sure drawing takes place within the desired region. The problem is that glScissor() calls don't nest: calling glScissor() for a subregion replaces the bounds set for the larger region rather than intersecting with them.
Example code to demonstrate what I want:
void drawBigBox() {
    glScissor( ... ); //set the bounds of the larger region
    for each little box {
        //these might be anywhere, but they should only be visible if inside the glScissor bounds
        drawLittleBoxes();
    }
}

void drawLittleBoxes() {
    //draw only inside little box -- Whoops! This replaces the big box bounds!
    //How can I meet this constraint while retaining the outer scissor region?
    glScissor( ... );
    draw3dScene();
}
Is there a simple way of doing what I want? Obviously I could calculate the intersection of the small and large scissor boxes myself, or more carefully clip/project my 3D scene, but both are more complicated than an OpenGL call. Perhaps there is another approach I'm unaware of.
Thanks!
For this it is probably easier to use the stencil buffer. You can use GL_INCR to increase the stencil buffer's value wherever you write a quad for a clipping region, and when drawing content you set the stencil test to GL_EQUAL (or GL_GEQUAL) against the nesting depth, so that fragments are written only in the region constrained by all current clipping regions. When unwinding the hierarchy, use GL_DECR to remove each region again.
This way you can also have non-rectangular clipping regions.
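A sketch of what that might look like; draw_region_shape() and draw_content() are placeholders for your own drawing code, and the window is assumed to have been created with a stencil buffer:

#include <GL/gl.h>

void draw_region_shape(void);   /* draws this level's clip shape */
void draw_content(void);        /* draws the actual content */

/* depth is the nesting level, starting at 1 for the outermost region. */
void draw_clipped(int depth)
{
    glEnable(GL_STENCIL_TEST);

    /* Carve this level's region into the stencil buffer (color writes off). */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
    draw_region_shape();

    /* Draw content only where every enclosing region overlaps. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, depth, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    draw_content();

    /* ...recurse into child regions here with depth + 1... */

    /* Undo this level's increment on the way back up the hierarchy. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR);
    draw_region_shape();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}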
You could try to rely on glPushAttrib()/glPopAttrib() calls to save and restore the scissor state. However, those were removed in OpenGL 3.1, and the attribute stack may not be as deep as you need, so you should use an explicit stack.
Just make an object that you use to set the current scissor state. Each layer, as you go down the call stack, pushes its scissor rectangle onto an actual stack of scissor rectangles. I have a generalized version of this that also pushes transform matrices, blend parameters, and so forth.
The object either has to be global GUI state, or bundled with some object that you pass to each of the functions.
You can get the coordinates of the current scissor box (either to save them, or to calculate their intersection with another region) using glGetIntegerv() with the GL_SCISSOR_BOX parameter.
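A minimal sketch of such an explicit stack, intersecting on push so that nested calls behave like nested clip regions:

#include <GL/gl.h>

#define MAX_SCISSORS 32

typedef struct { GLint x, y, w, h; } rect;

static rect scissor_stack[MAX_SCISSORS];
static int  scissor_top = -1;

static GLint max_i(GLint a, GLint b) { return a > b ? a : b; }
static GLint min_i(GLint a, GLint b) { return a < b ? a : b; }

void push_scissor(rect r)
{
    if (scissor_top >= 0) {          /* intersect with the enclosing region */
        rect *c = &scissor_stack[scissor_top];
        GLint x0 = max_i(r.x, c->x);
        GLint y0 = max_i(r.y, c->y);
        GLint x1 = min_i(r.x + r.w, c->x + c->w);
        GLint y1 = min_i(r.y + r.h, c->y + c->h);
        r.x = x0; r.y = y0;
        r.w = max_i(0, x1 - x0);
        r.h = max_i(0, y1 - y0);
    }
    scissor_stack[++scissor_top] = r;
    glScissor(r.x, r.y, r.w, r.h);
}

void pop_scissor(void)
{
    if (--scissor_top >= 0) {        /* restore the enclosing region */
        rect *c = &scissor_stack[scissor_top];
        glScissor(c->x, c->y, c->w, c->h);
    }
    /* Outermost pop: the caller can glDisable(GL_SCISSOR_TEST) here. */
}

With this, drawBigBox() pushes its region once, each drawLittleBoxes() call pushes and pops its own box, and the intersection keeps the little boxes clipped to the big one automatically.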

WPF 3D Billboards

In a 3D scene we often need to apply labels (little text elements or icons) next to a 3D object that is moving around (rotation, translation) in the scene. These labels should always face the camera but still move with the object. I believe this technique is called billboarding.
An additional cool feature would be if the label always stayed the same size, no matter how far away the associated object is, so that the label seems to live in 2D screen space rather than in the 3D scene graph.
Has anyone figured out a clever way to do this in WPF?
For billboarding you need to make sure that the face normal is pointing towards the camera. The algorithm is that the dot product between the face normal and the view direction should be -1 (minus one).
I have some old C code that does this, but it's probably not particularly useful.
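The basic construction is small enough to sketch, though. In C (not WPF-specific; in WPF you would load the resulting basis into a MatrixTransform3D), and assuming the up vector is not parallel to the view direction:

#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 v3_sub(vec3 a, vec3 b) { vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static vec3 v3_cross(vec3 a, vec3 b) {
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static vec3 v3_norm(vec3 a) {
    float l = sqrtf(a.x*a.x + a.y*a.y + a.z*a.z);
    vec3 r = { a.x/l, a.y/l, a.z/l };
    return r;
}

/* Fills a column-major 3x3 rotation that turns a +Z-facing quad at
   label_pos toward camera_pos, keeping `up` roughly upward. */
void billboard_basis(vec3 label_pos, vec3 camera_pos, vec3 up, float m[9])
{
    vec3 z = v3_norm(v3_sub(camera_pos, label_pos)); /* face the camera */
    vec3 x = v3_norm(v3_cross(up, z));               /* right vector    */
    vec3 y = v3_cross(z, x);                         /* corrected up    */
    m[0] = x.x; m[1] = x.y; m[2] = x.z;
    m[3] = y.x; m[4] = y.y; m[5] = y.z;
    m[6] = z.x; m[7] = z.y; m[8] = z.z;
}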
For keeping the object the same size you'd need to work out the screen size and then apply a transform to keep it the constant size you desired.
However, if you want the object to appear as though it's in 2D space, why not draw it in a 2D overlay? This solves both the billboarding and scaling problems at the same time: you work out the screen location of your label and then use the 2D drawing functions.
