Creating a stable physics plane in AR - SceneKit

For a project I am working on, I am creating a tower, placing it on the floor, and enabling gravity.
First I create the floor with a static physics body, then generate a tower of many blocks and place them on the floor. I place them into the scene's root node by getting the position of the anchor node, and enable gravity.
The problem is that as the plane moves slightly, the tower becomes very unstable and sometimes falls. I understand this is an issue in ARKit, as it constantly refines the scene's positioning; however, I just need a static floor so the blocks have a stable platform (it doesn't even have to be seen).
Is there a way to solve this? Or any hacky solution (making the blocks stick to the floor, turning off gravity before the scene moves, setting gravity on the platform, etc.)?

OpenGL rendering quality vs. number of vertices

I am coding a modern OpenGL application to visualize 3D atomic models (molecules, periodic systems, ...) for chemistry and condensed matter physics.
I started working on this a few years ago; the first version of my program used old OpenGL, and now I am updating it to modern OpenGL.
I have a question regarding the rendering quality of the OpenGL window. In the following examples I draw 3D cylinders and 3D spheres using instanced drawing: to render the bonds I draw only one cylinder, then translate/scale/rotate it appropriately in the vertex shader to render all bonds, and the same goes for the sphere to render the atoms.
As you can see it works just fine, the efficiency of the method is amazing, and I can render models with hundreds of thousands of atoms smoothly.
However, I noticed something odd: the quality of the rendering seems to depend on the number of vertices (objects, atoms and bonds) in the scene. Obviously the number of triangles is the most important parameter, but it is not the only one, since the quality decreases when a lot of vertices are rendered. Please see the attached snapshots:
To render the spheres in the scene I am using 50x50 vertices, and 2x50 for the cylinders (GL_TRIANGLE_STRIP in both cases).
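As a sketch of the technique being described (the buffer names and attribute slots below are illustrative, not taken from the poster's code), per-instance model matrices can be uploaded once and bound as an instanced vertex attribute, so that one base mesh serves every atom or bond:

```cpp
// Sketch: feed one model matrix per atom/bond as an instanced vertex
// attribute, so a single sphere or cylinder mesh is drawn N times.
// instanceVBO, sphereVAO, and the slot numbers 3..6 are illustrative.
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <vector>

void drawInstanced(GLuint sphereVAO, GLsizei vertexCount,
                   const std::vector<glm::mat4>& models)
{
    GLuint instanceVBO;
    glGenBuffers(1, &instanceVBO);
    glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
    glBufferData(GL_ARRAY_BUFFER, models.size() * sizeof(glm::mat4),
                 models.data(), GL_STATIC_DRAW);

    glBindVertexArray(sphereVAO);
    // A mat4 attribute occupies four consecutive attribute slots (3..6 here).
    for (int i = 0; i < 4; ++i) {
        glEnableVertexAttribArray(3 + i);
        glVertexAttribPointer(3 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                              (void*)(sizeof(glm::vec4) * i));
        glVertexAttribDivisor(3 + i, 1);  // advance once per instance
    }
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, vertexCount,
                          (GLsizei)models.size());
}
```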
1) In this test model I load 96 atoms and 512 half-bonds: ~291,200 vertices.
2) I zoom in to focus on one selected atom and its surroundings; at this scale the result is impeccable.
3) I reset the view and use the builder in my program to increase the number of boxes (I simply make replicas in the three directions of space). Here I chose 20x20x20 replicas; see the result below, with the original box highlighted.
That scene contains 768,000 atoms and 4,096,000 half-bonds, and thus 291,200 x 20 x 20 x 20 = 2,329,600,000 vertices.
Quite a lot, yet it works, but something weird appears ...
4) I zoom in again on the particular area of the model I picked before, and there is a decrease in quality, in particular in the areas where the 3D objects (spheres/cylinders) superimpose/overlap ...
Can somebody explain to me what I see?
Note 1: in the same window I can decrease the number of replicas back to the original box, zoom in again, and see that the result is back to impeccable.
Note 2: the older version of my program still works fine (old OpenGL, using display lists with glutsphere and glutcylinders). I can do the same things; the rendering takes much, much longer, but at the end of the process, when I zoom in on the 20x20x20 box model, the result remains perfect, just as for the single-box model, and obviously I use the same graphics card, driver, and so on.
You're seeing the limited precision of the depth buffer. There are only so many bits to work with, and in a perspective projection a nonlinear mapping from Z distance to depth-buffer value is applied.
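For reference, this mapping can be made explicit (a standard derivation, added here as context rather than part of the original answer). With near plane $n$, far plane $f$, and eye-space depth $z_e$ (negative in front of the camera), the standard perspective projection produces the normalized device depth

$$ z_{\mathrm{ndc}} = \frac{f+n}{f-n} + \frac{2fn}{(f-n)\,z_e} $$

which is hyperbolic in $z_e$: most of the depth-buffer resolution is concentrated near the near plane, so distant overlapping surfaces (exactly where spheres and cylinders intersect) map to nearly identical depth values and z-fighting appears.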
The best course of action is to limit the near/far range of the perspective projection matrix to what is actually going to be visible on screen, to make better use of the depth buffer. It is also possible to linearize the depth buffer (but that comes with a performance hit). Alternatively, you could cleanly intersect the geometry where sticks and spheres meet, i.e. constrain the sphere's vertices to the cylinder surface where the sticks meet it, and similarly constrain the sticks' end vertices to the sphere. That way you avoid overlap and hence these artifacts.
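As a rough sketch of the first suggestion, assuming a glm-based setup (camPos, sceneCenter, sceneRadius, fovY, and aspect are placeholder inputs, not names from the question): fit the near/far planes to the model's bounding sphere each frame.

```cpp
// Sketch: fit the near/far planes to the visible bounding sphere so the
// depth buffer covers only what can actually appear on screen.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <algorithm>

glm::mat4 tightProjection(const glm::vec3& camPos, const glm::vec3& sceneCenter,
                          float sceneRadius, float fovY /* radians */, float aspect)
{
    float dist  = glm::length(sceneCenter - camPos);
    float zFar  = dist + sceneRadius;
    // Push zNear out as far as possible; most depth precision sits near it.
    float zNear = std::max(zFar / 1000.0f, dist - sceneRadius);
    return glm::perspective(fovY, aspect, zNear, zFar);
}
```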

Create Go-like AI - maybe with the minimax algorithm?

So I'm trying to make a game that's similar to Go. Essentially I've got a grid of faces, and when you click on a face it turns your respective color (red or blue). You and your opponent take turns clicking on faces to color them. When any group of faces is surrounded by faces that are all of the same color, all the surrounded faces are deleted from the board, and the number deleted is added to the score of the player whose color surrounded them. And if you tap a face twice with the same color (so a red face is tapped by red again), it bursts, leaving residue on that face's surrounding faces, so those residued faces can't have their color changed to the color that isn't the residue color.

Now my hope is that I can get a slightly working AI. It doesn't have to be amazing or anything, just good enough to make decently intelligent moves and possibly win. After doing some research it seems that using a minimax algorithm would be my best bet, but I have no clue how to create such a thing in Unity. I was hoping someone might have some insight on how to accomplish this. Or does anyone have a better idea of an algorithm that would be better at determining moves?
Thanks for the help!
This is a horribly broad question; however, one of the most successful approaches to board-game AI (especially Go) is the UCT-based approach. It is a Monte Carlo driven heuristic approximation of the minimax algorithm. Minimax requires the game's state space to be very small in order to fit in both memory and time constraints. UCT, on the other hand, can make reasonable moves in any given amount of time (it is fully iterative).
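To make the idea concrete, here is a minimal sketch of UCT's child-selection step (the UCB1 rule). The Node layout is hypothetical, and the game-specific parts (move generation, random playouts, backpropagation) are omitted:

```cpp
// UCT selection sketch: descend the tree by repeatedly picking the child
// that maximizes UCB1 = winRate + c * sqrt(ln(parentVisits) / childVisits).
#include <cmath>
#include <limits>
#include <vector>

struct Node {
    double wins   = 0;          // total reward from playouts through this node
    int    visits = 0;          // number of playouts through this node
    std::vector<Node*> children;
};

Node* selectChild(const Node& parent, double c = 1.41421356 /* ~sqrt(2) */)
{
    Node*  best      = nullptr;
    double bestScore = -std::numeric_limits<double>::infinity();
    for (Node* child : parent.children) {
        if (child->visits == 0) return child;  // expand unvisited children first
        double exploit = child->wins / child->visits;
        double explore = c * std::sqrt(std::log((double)parent.visits) / child->visits);
        if (exploit + explore > bestScore) {
            bestScore = exploit + explore;
            best = child;
        }
    }
    return best;
}
```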

Detect road surface in a traffic scene point cloud

I want to analyze a traffic scene. My source data is a point cloud like this one (see images at the bottom of that post). I want to be able to detect objects that are on the road (cars, cyclists, etc.). So first of all I need to know where the road surface is, so that I can remove or ignore those points, or simply run detection only above the surface level.
What are the ways to detect such a road surface? The easiest scenario is a straight and flat road: I guess I could try to fit a simple plane at the approximate position of the surface (I know fairly surely that it begins just in front of the car), and because the road surface is not a perfect plane I would have to allow some tolerance around the plane.
A more difficult scenario would be a curvy and wavy (undulating?) road surface that forms some kind of 3D curve... I will appreciate any input.
A relatively simple starting point:
If you can assume that the road surface starts directly in front of the camera, then you can use a region-growing algorithm to find a region within which the curvature does not change too much (thereby using sharp edges to delineate the region). This would involve calculating the curvature first. It can give a first approximation; there will be issues with occluding objects and other artefacts, I am sure.
http://pointclouds.org/documentation/tutorials/region_growing_segmentation.php#region-growing-segmentation
http://pointclouds.org/documentation/tutorials/normal_estimation.php
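A minimal sketch along the lines of those two tutorials; the thresholds are guesses that would need tuning for real traffic data:

```cpp
// Sketch following the linked PCL tutorials: estimate normals, then grow
// regions of low curvature change. The largest smooth region starting in
// front of the car is a candidate road surface.
#include <pcl/point_types.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/region_growing.h>
#include <cmath>
#include <vector>

std::vector<pcl::PointIndices>
segmentSmoothRegions(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud)
{
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

    // Surface normals and curvature, needed by the region-growing step.
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(50);
    ne.compute(*normals);

    pcl::RegionGrowing<pcl::PointXYZ, pcl::Normal> rg;
    rg.setInputCloud(cloud);
    rg.setInputNormals(normals);
    rg.setSearchMethod(tree);
    rg.setNumberOfNeighbours(30);
    rg.setSmoothnessThreshold(3.0f / 180.0f * M_PI);  // ~3 degrees, to tune
    rg.setCurvatureThreshold(1.0f);                   // guess, to tune

    std::vector<pcl::PointIndices> clusters;
    rg.extract(clusters);
    return clusters;
}
```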

Path finding in a highly dynamic world

I am working on a simple soccer simulation, using potential fields for collision avoidance, more specifically the following technique:
http://www.ibm.com/developerworks/java/library/j-antigrav/
The only obstacles on the field are the other players, and they are constantly moving. The problem is that this only works if I assign a really big push force to the characters: since the characters move at speed, it takes some time for them to change direction. But such a high gravity has a few drawbacks; I can never position an NPC to grab the ball, because there is always some force pushing it around.
I thought I could solve this by assigning a pulling force to the ball, but that actually made it worse: the NPC goes to the ball, the ball starts pulling, which makes the NPC push the ball, and it goes into a loop until the NPC crashes into a wall.
The way I've implemented this: I have a vector that steers the NPC towards its target, then I add all the gravitational forces acting on the NPC and steer in the resulting direction.
Basically I am wondering what kind of improvements I can make. My current problem is not hitting other players; it is precisely getting behind the ball without being affected by the other players.
I'm not sure that employing potential fields is the way to go here. If all players are directly between you and the ball, how do you get to it?
I'd be tempted to plot a straight line, then iteratively adjust the route for the position of other players, adjusting for their trajectories and — if you're really clever — anticipating changes in the same.
You could put some kind of limit on the area of effect of another player's gravity well. Limit it to a small radius around that player, so that when you're clear of other players there are no forces acting on your character. You could also diminish the collision-avoidance force when you're near the ball, reasoning that players care less about not hitting each other when it's time to kick the ball.
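A minimal sketch combining both suggestions, with an illustrative Vec2 type and made-up tuning constants:

```cpp
// Sketch: repulsion only inside a cutoff radius around each player, scaled
// down as the NPC approaches the ball. All constants are placeholders.
#include <cmath>
#include <vector>

struct Vec2 { float x = 0, y = 0; };

Vec2 steer(const Vec2& toTarget, const Vec2& npcPos,
           const std::vector<Vec2>& otherPlayers, float distToBall)
{
    const float cutoff    = 3.0f;  // metres: ignore players farther than this
    const float repulsion = 2.0f;  // base push strength, to tune
    // Fade collision avoidance out within 1 m of the ball.
    float ballFactor = std::min(distToBall, 1.0f);

    Vec2 force = toTarget;
    for (const Vec2& p : otherPlayers) {
        float dx = npcPos.x - p.x, dy = npcPos.y - p.y;
        float d  = std::sqrt(dx * dx + dy * dy);
        if (d < 1e-4f || d > cutoff) continue;  // outside the area of effect
        float mag = repulsion * (1.0f - d / cutoff) * ballFactor / d;
        force.x += dx * mag;
        force.y += dy * mag;
    }
    return force;  // normalize/clamp before applying to the NPC
}
```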

Determining if a polygon is inside the viewing frustum

Here are my questions. I heard that OpenGL ignores vertices which are outside the viewing frustum and doesn't consider them in the rendering pipeline. Recently I ran into a post that said you should check this yourself: if a point is not inside, it is your duty to find out, not OpenGL's! Now:
Is this true about OpenGL? Does it detect that a point is not inside and avoid rendering it?
I am developing a grass scene which has about 4000 grass patches on rectangles. I have awful FPS, and the only solution I came up with was to determine which patches are inside the frustum and render only those. My question is: what is the best way to find out which rectangles are inside the frustum and which are not?
Please note that my question is not mainly about points but about rectangles. Also, I need to sort the grass patches by distance, so it would be better if the data stays natively in client-side memory.
Please let me know if there are any effective and real-time ways to find out whether any given mesh is inside or outside the frustum. Thanks.
Even if it is true that OpenGL does not show polygons outside the frustum (like any other 3D engine), it still has to consider them in order to check whether they are inside or not, and the FPS slows down. Usually some smart optimization algorithm is needed to avoid flooding the scene with invisible objects. Check, for example, BSP trees + PVS, or portals, as a starting point.
To check whether there is some bottleneck in the application, you can try gDebugger. If nothing is noticeably wrong, optimizing in order to draw just the PVS (potentially visible set) is the way to go.
OpenGL won't render pixels ("fragments") outside your screen, so it has to clip somehow...
More precisely:
You submit your geometry.
You make a draw call (glDrawArrays or glDrawElements).
Each vertex goes through the vertex shader, which computes the final position of the vertex in clip space. If you didn't write a vertex shader (= old OpenGL), the driver creates one for you.
The perspective division transforms these coordinates into Normalized Device Coordinates. Roughly, it means that the frustum of your camera is deformed to fit into a [-1,1]x[-1,1]x[-1,1] box.
Everything outside this box is clipped. This can mean completely discarding a triangle, or subdividing it if it crosses a clipping plane.
Each remaining triangle is rasterized into fragments.
Each fragment goes through the fragment shader.
So basically, OpenGL knows how to clip, but each vertex still has to go through the vertex shader. Submitting your entire world will work, of course, but if you can find a way not to submit everything, your GPU will be happier.
This is a tradeoff, of course. If you spend 10ms checking each and every patch of grass on the CPU so that the GPU has only the minimal amount of data to draw, it's not a good solution either.
If you want to optimize grass, I suggest culling big patches (5m x 5m or so). It's standard AABB-frustum testing.
If you want to optimize a more generic model, you can investigate quadtrees for "flat" models, and octrees or BSP trees for more complex objects.
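A sketch of that standard test, assuming glm and the usual Gribb/Hartmann extraction of the six planes from the combined view-projection matrix:

```cpp
// Sketch: extract the six frustum planes from the view-projection matrix
// and test each patch's AABB against them (positive-vertex test).
#include <glm/glm.hpp>
#include <glm/gtc/matrix_access.hpp>  // glm::row
#include <array>

struct AABB { glm::vec3 min, max; };

std::array<glm::vec4, 6> extractPlanes(const glm::mat4& vp)
{
    std::array<glm::vec4, 6> planes;
    for (int i = 0; i < 3; ++i) {
        planes[2 * i]     = glm::row(vp, 3) + glm::row(vp, i);  // left, bottom, near
        planes[2 * i + 1] = glm::row(vp, 3) - glm::row(vp, i);  // right, top, far
    }
    return planes;  // leaving them unnormalized is fine for a sign test
}

bool aabbVisible(const AABB& box, const std::array<glm::vec4, 6>& planes)
{
    for (const glm::vec4& p : planes) {
        // Corner of the box farthest along the plane normal ("positive vertex").
        glm::vec3 v(p.x > 0 ? box.max.x : box.min.x,
                    p.y > 0 ? box.max.y : box.min.y,
                    p.z > 0 ? box.max.z : box.min.z);
        if (glm::dot(glm::vec3(p), v) + p.w < 0)
            return false;  // box entirely outside this plane: cull it
    }
    return true;  // inside or intersecting the frustum
}
```

One box per 5m x 5m patch keeps the CPU cost tiny: a patch that fails the test is skipped entirely, so the GPU never sees its vertices.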
Yes, OpenGL does not rasterize triangles outside the viewing frustum. But this doesn't mean it is optimal for applications: the OpenGL implementation has to transform the vertex coordinates (using the fixed pipeline or vertex shaders); only once it has the normalized coordinates does it finally know whether the triangle lies inside the viewing frustum.
This means that no pixel is rasterized in those cases, but the vertex data is processed all the same; it simply doesn't produce fragments for a non-visible triangle!
The OpenGL extension ARB_occlusion_query may help you, but its discussion section makes it clear:
Do occlusion queries make other visibility algorithms obsolete?
No.
Occlusion queries are helpful, but they are not a cure-all. They
should be only one of many items in your bag of tricks to decide
whether objects are visible or invisible. They are not an excuse
to skip frustum culling, or precomputing visibility using portals
for static environments, or other standard visibility techniques.
For the question regarding sorting meshes by depth, you should use the depth buffer: essentially, a mesh fragment is rendered only if its distance from the viewpoint is less than that of the fragment previously written at the same position. This relieves you of sorting the meshes yourself. The buffer is essentially free, and it improves performance since it discards the farther fragments.
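In other words, for opaque geometry the depth test replaces manual sorting; a typical setup (assuming a context created with a depth buffer) is just:

```cpp
// Enable the depth test once; clear the depth buffer at the start of each frame.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                                // keep the nearer fragment
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // reset depth every frame
```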
Yes. As others have pointed out, OpenGL has to perform a lot of per-vertex operations to determine whether each vertex is in the frustum, and it must do this for every vertex you send it. In addition to that processing overhead, keep in mind that there is also overhead in transmitting those vertices from the CPU to the GPU. You want to avoid sending the GPU information that it isn't going to use. Though the bandwidth between the CPU and GPU is quite good on modern hardware, there is still a limit.
What you want is a scene graph. Scene graphs are frequently implemented with some kind of spatial partitioning scheme, e.g. quadtrees, octrees, BSP trees, etc. Spatial partitioning allows you to intelligently determine which geometry is visible. Instead of doing this on a per-vertex basis (as OpenGL is forced to do), it can eliminate huge spatial subsets of geometry at a time. When rendering a complex scene, the performance savings can be enormous.
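A minimal sketch of why the hierarchy pays off, reusing the aabbVisible() helper sketched above (OctreeNode and meshIds are illustrative):

```cpp
// Sketch of hierarchical culling over an octree: one failed frustum test
// skips an entire subtree. Reuses AABB and aabbVisible() from the earlier
// frustum-culling sketch.
#include <array>
#include <vector>

struct OctreeNode {
    AABB bounds;                       // box enclosing everything below
    std::vector<OctreeNode> children;  // empty for a leaf
    std::vector<int> meshIds;          // geometry stored at this node
};

void collectVisible(const OctreeNode& node,
                    const std::array<glm::vec4, 6>& planes,
                    std::vector<int>& out)
{
    if (!aabbVisible(node.bounds, planes))
        return;  // the whole subtree is culled by a single test
    out.insert(out.end(), node.meshIds.begin(), node.meshIds.end());
    for (const OctreeNode& child : node.children)
        collectVisible(child, planes, out);
}
```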
