How to efficiently find co-planar point in array of 3D points (Terrain)

I am representing a terrain using a two-dimensional array height map (500x500) where points are 1 unit apart (1 unit = 1 meter in this case).
I am also representing a player as a 3D point (x: float, y: float, z: float). Since the player's position allows fine-grained placement while the height map is lower resolution, I need a method to approximate the height (z-axis) of the character from the x/y coordinates at which he falls within the terrain height map (the co-planar point).
Therefore my question comes in two parts:
How do I find the co-planar point when its x/y portion is already solved for?
Is there a recommended library that can handle this in C#?
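For the first part, offered as a sketch rather than an answer from the thread: if the terrain is rendered as two triangles per grid cell, the exact co-planar point comes from barycentric interpolation over the triangle containing (x, y); a simpler and very common approximation is bilinear interpolation over the four surrounding grid points. A minimal sketch, assuming the map is stored as a float[,] indexed as heights[x, y] (all names are placeholders):
using System;

// Approximate the height at a fractional (x, y) by bilinear interpolation
// over the grid cell it falls in (1-unit grid spacing assumed).
static float SampleHeight(float[,] heights, float x, float y)
{
    int maxX = heights.GetLength(0) - 1;
    int maxY = heights.GetLength(1) - 1;
    int x0 = Math.Max(0, Math.Min((int)Math.Floor(x), maxX));
    int y0 = Math.Max(0, Math.Min((int)Math.Floor(y), maxY));
    int x1 = Math.Min(x0 + 1, maxX);
    int y1 = Math.Min(y0 + 1, maxY);

    // Fractional offsets inside the cell, clamped to 0..1.
    float tx = Math.Min(1f, Math.Max(0f, x - x0));
    float ty = Math.Min(1f, Math.Max(0f, y - y0));

    // Interpolate along x on the two bounding rows, then along y between them.
    float h0 = heights[x0, y0] + (heights[x1, y0] - heights[x0, y0]) * tx;
    float h1 = heights[x0, y1] + (heights[x1, y1] - heights[x0, y1]) * tx;
    return h0 + (h1 - h0) * ty;
}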

Related

Swift 3 - Function to create n number of sprites with random x/y coordinates

I am trying to create multiple SKSpriteNodes, each with its own independent variables that I can change/modify. I would like to run a function when the app starts, for example createSprites(5), which would create 5 sprites with the texture "shape.png" at random x and y coordinates and add all 5 sprites to an array, so that I can later access and edit each sprite's position by its index. I would then like another function, addSprite(), which, each time it is called, creates a new sprite with the same "shape.png" texture, places it at another random x and y coordinate, and adds it to the same array of all sprites so it too can be accessed later and have its coordinates changed, etc.
I have looked through many other Stack Overflow pages and cannot seem to find a solution. My ideal solution is simply the two functions stated above: one to create n sprites, and another that creates and appends one more sprite to the array each time it is called.
Hope that makes sense; I'm fairly new to Swift and all this sprite stuff, so simple, informative answers would be much appreciated.
You're not going to find a ready-made solution from the past, because nobody has likely had exactly the same requirement with both Swift and SpriteKit. Having said that, there are likely partial answers you can blend together to get the result you want or, at least, an understanding of how to do it.
Sprite Positioning in SK is probably the first thing to read up on:
https://developer.apple.com/library/content/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Sprites/Sprites.html
Having gotten that figured out, you can move on to random positions.
Random positioning of Sprites:
Duplicate Sprite in Random Positions with SpriteKit
Sprite Kit random positions
Both use earlier randomisation APIs that aren't as powerful as what's available now in GameplayKit. So... generating random numbers in Swift with GameplayKit:
https://www.hackingwithswift.com/read/35/overview
It's hard to overstate the importance of understanding the game-design implications of the various types of randomisation, so it's probably wise to read this, from Apple:
https://developer.apple.com/library/content/documentation/General/Conceptual/GameplayKit_Guide/RandomSources.html
After that, it's a case of deciding what constitutes the time or event at which to create more sprites at more random positions, and how fussy you want to be about proximity to other sprites and overlaps. A sketch pulling the pieces together follows below.
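A minimal sketch of the two requested functions, assuming a scene-sized play area and the "shape.png" texture from the question (illustrative, not from the linked material):
import SpriteKit
import GameplayKit

class GameScene: SKScene {
    // Keep every sprite so positions can be edited later by index.
    var sprites: [SKSpriteNode] = []
    let random = GKRandomSource.sharedRandom()

    // Create one sprite at a random position and remember it.
    func addSprite() {
        let sprite = SKSpriteNode(imageNamed: "shape.png")
        // nextUniform() returns a Float in 0..<1; scale it to the scene size.
        sprite.position = CGPoint(x: CGFloat(random.nextUniform()) * size.width,
                                  y: CGFloat(random.nextUniform()) * size.height)
        addChild(sprite)
        sprites.append(sprite)
    }

    // Create n sprites, e.g. createSprites(5) on startup.
    func createSprites(_ n: Int) {
        for _ in 0..<n { addSprite() }
    }
}
Then, for example, sprites[2].position = CGPoint(x: 100, y: 50) repositions the third sprite later on.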

Leaflet JS: Custom 2D projection that uses meters instead of lat,long

I am working on a custom game map. This map is basically a raster image, overlaid with some paths and markers. I want to use Leaflet to display the map.
What I am struggling with, is that Leaflet uses Latitude and Longitude to calculate positions, while it uses meters for distances (path lengths, radii of circles, etc).
This is very understandable when dealing with a spherical world like our Earth, but it complicates things a lot for the custom map, which is flat.
I would like to be able to specify the positions in the same unit as the distances.
Now, by default Leaflet uses a Spherical Mercator projection. According to the Docs, it is possible to define your own projections and coordinate reference systems, but I have been unable to do this thus far.
How would this be possible? Or is there a simpler way?
You should take a look at the simple coordinate reference system (L.CRS.Simple) included with Leaflet:
A simple CRS that maps longitude and latitude into x and y directly. May be used for maps of flat surfaces (e.g. game maps).
You can define the CRS of your L.Map instance upon initialization, like so:
new L.Map('myDiv', {
    crs: L.CRS.Simple
});
Some further elaboration: as @ghybs pointed out in the comment below and in the comment on your question, the default spherical Mercator projection (L.CRS.EPSG3857) already works in meters. When you calculate the distance between two coordinates, Leaflet returns meters. Example:
var startCoordinate = new L.LatLng(0, -1);
var endCoordinate = new L.LatLng(0, 1);
var distance = startCoordinate.distanceTo(endCoordinate);
console.log(distance);
The above prints 222638.98158654713 to your console, which is the distance between those two coordinates in meters. The problem is that with the spherical projection, the distance between two coordinates becomes smaller the further you get from the equator, which is problematic when creating a flat game world. That's why you should use L.CRS.Simple; with it you won't have that problem. A fuller setup is sketched below.
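For completeness, a minimal sketch of a flat game map with L.CRS.Simple (the div id, image file, and dimensions are placeholders):
var map = L.map('myDiv', {
    crs: L.CRS.Simple,
    minZoom: -2 // CRS.Simple has no tile pyramid, so allow zooming out
});

// With CRS.Simple, coordinates are plain [y, x] numbers, so positions and
// distances (e.g. circle radii) share the same unit.
var bounds = [[0, 0], [1000, 1000]];
L.imageOverlay('map.png', bounds).addTo(map);
map.fitBounds(bounds);
With this setup, map.distance([0, 0], [0, 100]) returns 100, in the same unit as the coordinates.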

How to pass variable length float array to GPUImageFilter Shader?

I want to pass my touch points to a GPUImage (iOS) filter.
The points can be translated to a float array, but the length of that array varies.
However, I must declare a fixed array length in the shader.
Disclaimer: not a GLSL expert.
AFAIK you can't have variable-length arrays like what you want. This is a GLSL limitation, not a GPUImage one, so there's no quick fix: the work you'll be doing will be with textures or GLSL, not GPUImage.
Here's another Stack Overflow post about GLSL: GLSL indexing into uniform array with variable length
There are two solutions that could work:
1) Limit the number of points. It's reasonable to limit touches, but in practice it may be hard to narrow them down if there are too many. You could pass these points in a fixed-length array or as individual uniforms (one for each point). If you really care about scalability in the number of points, this isn't a great method, because the shader has to check each of these points and perform the relevant computation, which can be expensive when performed for the entire image (again, depending on your use case). If for each pixel you're checking a distance to every point, this could be too expensive.
2) Input your points via a texture. You can either use two 1D textures holding the x and y coordinates and treat them like an array (then proceed as in option 1), or create a 2D texture, all zeros, and set the texels to 1 where there are touches. The 2D texture can have a lower resolution than the actual screen. This method can be much less work for the shader if you're doing something simple like turning finger touches black.
Your choice depends largely on what you're doing with the points in the shader.
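To illustrate option 1, a minimal fragment-shader sketch in the OpenGL ES style GPUImage uses (the touchPoints/pointCount uniforms and the MAX_POINTS limit are assumptions for this sketch, not GPUImage API):
precision highp float;

#define MAX_POINTS 10

varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform vec2 touchPoints[MAX_POINTS]; // padded; only pointCount entries are valid
uniform int pointCount;

void main() {
    vec4 color = texture2D(inputImageTexture, textureCoordinate);
    // GLSL ES loops need compile-time bounds, so loop to MAX_POINTS and
    // break once the valid entries are exhausted.
    for (int i = 0; i < MAX_POINTS; i++) {
        if (i >= pointCount) break;
        if (distance(textureCoordinate, touchPoints[i]) < 0.05) {
            color = vec4(0.0, 0.0, 0.0, 1.0); // blacken near a touch
        }
    }
    gl_FragColor = color;
}
On the CPU side you would pad the array to MAX_POINTS and update the two uniforms whenever the touches change.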

similarity between an image and its rotated version using SIFT

I have implemented SIFT in OpenCV for comparing images... I have not yet written the comparison program itself; I am thinking of using FLANN for that. My problem is that, looking at the 128 elements of the descriptor, I cannot really understand the similarity between an image and its rotated version.
From reading Lowe's paper, I understand that the descriptor coordinates are all rotated relative to the keypoint orientation... but how exactly is the similarity obtained? Can we understand the similarity just by viewing the 128 values?
Please help me... this is for my project presentation.
You can first use Lowe's metric to compute some putative matches between the two images. The metric: for any given descriptor de in image 1, find the distances to all descriptors de' in image 2. If the ratio of the closest distance to the second-closest distance is below a threshold, accept the match.
After this, you can do RANSAC or other form of robust estimation or Hough Transform to check geometric consistency in terms of position, orientation, and scale of the keypoints that you accepted as putative matches.
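A minimal sketch of both steps with OpenCV's Python bindings (the file names are placeholders; cv2.SIFT_create requires a reasonably recent OpenCV build):
import cv2
import numpy as np

img1 = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('scene_rotated.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN with a KD-tree index; k=2 returns the two nearest neighbours
# needed for Lowe's ratio test.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Geometric consistency: fit a homography with RANSAC and count inliers
# (findHomography needs at least 4 putative matches).
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(len(good), 'putative matches,', int(mask.sum()), 'RANSAC inliers')
A high inlier ratio means the rotated image is geometrically consistent with the original; the 128 raw values of a single descriptor are not meant to be compared by eye.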
If I recall correctly, SIFT will give you a set of 128-value descriptors, one describing each interest point. You also have the location of each point in each image, as well as its "direction" (the paper calls it the keypoint orientation) and its scale in each image.
Once you've found two points that have matching descriptors, you can calculate the transformation from the interest point in one image to the same point in the other image by comparing coordinates and directions.
If you have enough matches, you check whether all (or a majority of) the interest points agree on the same transformation. If they do, the images are similar; if they don't, the images are different.
Hope this helps...
What you are looking for is basically ASIFT.
You can find the code here, along with an overview.

Recognizing tetris pieces in C

I have to make an application that recognizes, inside a black-and-white image, a Tetris piece given by the user. I read the image to be analyzed into an array.
How can I do something like this using C?
Assuming that you have already loaded the images into arrays, what about using regular expressions?
You don't need exact shape matching, just approximate matching, so why not give it a try!
Edit: I downloaded your doc file. You must identify a random pattern among random figures in a 2D array, so regex isn't suitable for this problem; let's say that's the bad news. The good news is that your homework is not exactly image processing, and it's much easier.
It's your homework so I won't create the code for you but I can give you directions.
You need a routine that can create a new piece by rotating the original pattern/piece (note: by "piece" I mean the whole 4x4 square, all of its cells).
You need a routine that checks whether a piece matches an area of the 2D image at position x,y - the matching area would have corners (x-2, y-2, x+1, y+1).
You search by checking every image position (x,y) for a match.
Since you must use parallelism, you can create four threads and assign each thread a different rotation to search for. A sketch of the first two routines follows below.
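A minimal sketch of those two routines, under assumed representations (a piece as a 4x4 grid of 0/1 cells, the image as a flat array of width w, and matching anchored at the piece's top-left cell; all names are placeholders):
#include <stdbool.h>

typedef struct { unsigned char cell[4][4]; } Piece;

/* Create a new piece that is the original rotated 90 degrees clockwise. */
void rotate_cw(const Piece *in, Piece *out)
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            out->cell[c][3 - r] = in->cell[r][c];
}

/* Check whether the piece matches the image with its top-left cell at (x, y).
 * Assumes (x, y) stays at least 4 cells away from the right/bottom edges. */
bool match_at(const unsigned char *img, int w, const Piece *p, int x, int y)
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            if (img[(y + r) * w + (x + c)] != p->cell[r][c])
                return false;
    return true;
}
Each of the four search threads would then call match_at with its own pre-rotated copy of the piece.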
You might not want to implement that from scratch (unless required, of course)... I'd recommend looking for a suitable library. I've heard that OpenCV is good, but I've never done any machine-vision work myself, so I haven't tested it.
Search for connected components (e.g. using depth-first search; you might want to avoid recursion if efficiency is an issue and use your own stack instead). The largest connected component should be your Tetris piece. You can then analyze it further (using the shape, the size, or some kind of border description).
Looking at the shapes given for Tetris pieces on Wikipedia, called I, J, L, O, S, T, and Z, it seems that the ratios of the sides of the bounding box (easy to find given a binary image and C) reveal whether you have an I (4:1) or an O (1:1); the other shapes are 2:3.
To detect which of the remaining shapes you have (J, L, S, T, or Z), it looks like you could collect the length and position of the shape's edges that fall on the bounding box's edges. Thus, T would show 3 and 1 along the 3-unit sides, and 1 and 1 along the 2-unit sides. Keeping track of the positions helps distinguish J from L, and S from Z. A sketch of the bounding-box test follows below.
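As an illustration of the bounding-box test (again a sketch with assumed names, reusing the flat-image layout from the earlier snippet and assuming the piece fills whole grid cells):
typedef enum { SHAPE_I, SHAPE_O, SHAPE_OTHER } ShapeClass;

/* Find the bounding box of the nonzero pixels in a w*h binary image and
 * classify the piece by the ratio of the box's sides. */
ShapeClass classify(const unsigned char *img, int w, int h)
{
    int minx = w, miny = h, maxx = -1, maxy = -1;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (img[y * w + x]) {
                if (x < minx) minx = x;
                if (x > maxx) maxx = x;
                if (y < miny) miny = y;
                if (y > maxy) maxy = y;
            }
    int bw = maxx - minx + 1, bh = maxy - miny + 1;
    int longside  = bw > bh ? bw : bh;
    int shortside = bw > bh ? bh : bw;
    if (longside == 4 * shortside) return SHAPE_I;  /* 4:1 */
    if (longside == shortside)     return SHAPE_O;  /* 1:1 */
    return SHAPE_OTHER;                             /* 3:2 - J, L, S, T, or Z */
}
The edge-length test described above would then separate the remaining five shapes.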
