Check if a pixel is inside a polygon - Silverlight

I want to know a method to tell whether a pixel is inside a 4-point polygon (a quadrilateral, not necessarily a rectangle), given the four coordinates of that polygon.
I tried several methods, but none of them worked really well.
Thanks and regards,
Uday Gupta

A simple method is to use areas: first decompose your polygon ABCD into the two triangles ABC and CDA, and check whether the point lies in either triangle.
To test whether a point M lies inside triangle ABC, check whether the area of ABC equals the sum of the areas of ABM, BCM and CAM.
The area of a triangle is half the magnitude of the cross product of two of its edge vectors.
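A minimal C sketch of this area test (the Point type and the function names are illustrative; a small relative tolerance absorbs floating-point error):

```c
#include <math.h>
#include <stdbool.h>

typedef struct { double x, y; } Point;

/* Unsigned area of triangle (a, b, c): half the magnitude of the
   2D cross product of two edge vectors. */
static double tri_area(Point a, Point b, Point c) {
    return fabs((b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x)) / 2.0;
}

/* M is inside (or on) triangle ABC iff area(ABM)+area(BCM)+area(CAM)
   equals area(ABC), up to floating-point tolerance. */
static bool point_in_triangle(Point m, Point a, Point b, Point c) {
    double whole = tri_area(a, b, c);
    double parts = tri_area(a, b, m) + tri_area(b, c, m) + tri_area(c, a, m);
    return fabs(whole - parts) <= 1e-9 * (1.0 + whole);
}

/* Quadrilateral ABCD split along the diagonal AC into ABC and CDA. */
static bool point_in_quad(Point m, Point a, Point b, Point c, Point d) {
    return point_in_triangle(m, a, b, c) || point_in_triangle(m, c, d, a);
}
```

For a concave quadrilateral the ABC/CDA split still covers the whole shape as long as the diagonal AC stays inside it.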
Another solution that directly uses cross products can be found here:
http://www.blackpawn.com/texts/pointinpoly/default.html

Related

Rubik's cube Thistlethwaite algorithm, check for good edges

I'm trying to build a Rubik's cube solver in C using the Thistlethwaite algorithm.
I'm storing a cube as an array of 6 uint64_t integers (faces).
Each of these faces stores 8 colors, one byte per color.
This structure lets me rotate faces easily using bit manipulation, but I wonder if I should use something else that would be more appropriate for the Thistlethwaite algorithm.
The issue I'm having is checking whether a cube is contained in the subgroup G1 = <L, R, F, B, U2, D2>.
From what I understand, a cube whose edges are all correctly oriented is contained in this subgroup.
(see https://www.jaapsch.net/puzzles/thistle.htm)
The paper at the end of the page clearly states how to check whether an edge is good, but I could not find a way to implement it.
My question is: how do I check in code whether an edge is correctly oriented, given a scrambled cube?
According to the article, page 1:
Getting into G1
An edge piece is BAD if in taking it home an odd number of quarter-turns of U and D faces is needed; otherwise it is GOOD.
A different way of putting this: if you can bring an edge home without ever using a U or D quarter turn (only turning the L, F, R and B faces), then the edge is good; otherwise it is bad.
So let's say you have a scrambled cube and are looking at one particular edge piece. Identify the position where it should end up (based on the centre pieces, obviously). Say one of the two colours of this edge is red. Then identify where that red facelet currently sits in the following image:
Do the same for the place where that red side should end up.
If both places have the same colour (yellow or blue) in the above image, then the edge is good. If they have different colours in the above image, then the edge is bad.
You can easily see that if you had taken the other colour side of the edge in question (the not-red one), you would arrive at the same conclusion with this method.
Up to you to translate this to your data structure.
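To make that concrete, here is a hedged C sketch of the recipe (not tied to your uint64_t layout: a sticker position is identified by the face it sits on plus the neighbouring face across the edge, and the two classes play the role of the yellow/blue positions in the image):

```c
#include <stdbool.h>

/* Hypothetical face encoding - adapt to your own layout. */
enum Face { U, D, F, B, L, R };

/* Parity class of an edge-facelet position, identified by the face the
   sticker sits on plus the neighbouring face across the edge. This
   2-colouring is preserved by L, R, F, B, U2 and D2 and swapped by
   quarter turns of U and D. */
static int edge_class(enum Face on, enum Face adj) {
    if (on == F || on == B) return 0;
    if (on == L || on == R) return 1;
    /* on U or D: class depends on which neighbour the slot touches */
    return (adj == F || adj == B) ? 1 : 0;
}

/* An edge is GOOD iff a sticker's current position has the same class
   as its home position. c1 and c2 are the edge's two sticker colours
   (a colour names its home face); (on, adj) locates the c1 sticker now. */
static bool edge_is_good(enum Face c1, enum Face c2,
                         enum Face on, enum Face adj) {
    return edge_class(on, adj) == edge_class(c1, c2);
}
```

For example, after a single U turn the U-coloured sticker of the UF edge sits at (U, R): edge_class(U, R) is 0 while its home (U, F) gives 1, so the edge is bad, matching the quoted definition (it took one U quarter turn).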

ARKit: Reproducing the Project Point function

I'm attempting to reproduce ARCamera's projectPoint function, but for some reason the values do not match up. I take ARCamera's projection matrix and view matrix and apply the basic CG perspective transform, (P·V)·p, but the resulting NDC values do not match the pixel values given by ARCamera's projectPoint. Any ideas? Am I forgetting something?
Some more detail:
Basically, I'm trying to grab an ARFrame at the click of a button and then replicate the functionality of https://developer.apple.com/documentation/arkit/arcamera/2923538-projectpoint. I'm attempting to do this with https://developer.apple.com/documentation/arkit/arcamera/2887458-projectionmatrix and https://developer.apple.com/documentation/arkit/arcamera/2921672-viewmatrix, making sure all of the inputs match for both paths. A CGSize is used to transform the coordinates from NDC space to image space.
EDIT: Solution found, check comments below.
The problem turned out to be that the plain projectionMatrix property does not account for the device orientation. The correct approach is to use projectionMatrix(for:viewportSize:zNear:zFar:).
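For reference, a small C sketch of the math being replicated (plain column-major 4x4 float arrays stand in for the simd types; the proj matrix must be the orientation-aware one from projectionMatrix(for:viewportSize:zNear:zFar:) for the output to match projectPoint):

```c
/* Column-major 4x4 matrix times 4-vector (simd_float4x4 convention). */
static void mat4_mul_vec4(const float m[16], const float v[4], float out[4]) {
    for (int row = 0; row < 4; row++) {
        out[row] = m[row]      * v[0] + m[4 + row]  * v[1]
                 + m[8 + row]  * v[2] + m[12 + row] * v[3];
    }
}

/* World point -> pixel: clip = P * V * p, NDC = clip / clip.w, then the
   viewport transform. Image y grows downwards while NDC y grows upwards,
   hence the flip. width/height is the viewport size in pixels. */
static void project_point(const float proj[16], const float view[16],
                          const float world[4], float width, float height,
                          float *px, float *py) {
    float eye[4], clip[4];
    mat4_mul_vec4(view, world, eye);
    mat4_mul_vec4(proj, eye, clip);
    float ndc_x = clip[0] / clip[3];
    float ndc_y = clip[1] / clip[3];
    *px = (ndc_x * 0.5f + 0.5f) * width;
    *py = (1.0f - (ndc_y * 0.5f + 0.5f)) * height;
}
```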

Chart optimization: more than a million points

I have a custom control: a chart of, say, 300x300 pixels containing more than a million points (maybe fewer), and unsurprisingly it is now very slow. I am searching for an algorithm that shows only a few points with minimal visual difference.
I have a link to a component that has exactly the functionality I need
(2 million points demo):
I will be grateful for any materials, links or thoughts on how to implement such functionality.
If I understand your question correctly, you are looking to plot a dataset of ~1M points on a chart whose horizontal resolution is much smaller? If so, you can down-sample your dataset to roughly the number of available x values. If your data is sampled at equal intervals, you can extract every Nth point and plot it. Choose N so that the number of points is, say, double the resolution (here, N = 2000 leaves you 500 points to display).
If the intervals vary a lot (the data is not regularly spaced), you can approximate your graph with a polynomial, a spline or any other method that fits, and then interpolate 300-600 points from that approximation.
EDIT:
Depending on the nature of the data, you may get aliasing artifacts when you simply take every Nth point. There are better methods for coping with this, but again, it depends on what exactly you want to plot.
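One common way to keep spikes visible while down-sampling is min/max decimation: per horizontal bucket, keep the smallest and largest sample. A C sketch, assuming regularly spaced samples and illustrative function names:

```c
#include <float.h>

/* Min/max decimation: for each horizontal bucket keep the minimum and
   maximum sample, so spikes survive the down-sampling. Writes up to
   2*buckets points (sample index, value) and returns how many were
   written. y[] holds n regularly spaced samples. */
static int downsample_minmax(const double *y, int n, int buckets,
                             double *out_x, double *out_y) {
    int out = 0;
    for (int b = 0; b < buckets; b++) {
        int start = (int)((long long)b * n / buckets);
        int end   = (int)((long long)(b + 1) * n / buckets);
        if (start >= end) continue;
        double lo = DBL_MAX, hi = -DBL_MAX;
        int lo_i = start, hi_i = start;
        for (int i = start; i < end; i++) {
            if (y[i] < lo) { lo = y[i]; lo_i = i; }
            if (y[i] > hi) { hi = y[i]; hi_i = i; }
        }
        /* Emit min and max in their original order. */
        int first  = lo_i < hi_i ? lo_i : hi_i;
        int second = lo_i < hi_i ? hi_i : lo_i;
        out_x[out] = first;  out_y[out++] = y[first];
        if (second != first) { out_x[out] = second; out_y[out++] = y[second]; }
    }
    return out;
}
```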
You could always buy the control - it is for sale!
John-Daniel Trask (Co-founder of Mindscape ;-)

Similarity between an image and its rotated version using SIFT

I have implemented SIFT in OpenCV for comparing images... I have not yet written the comparison program; I am thinking of using FLANN for that. But my problem is that, looking at the 128 elements of a descriptor, I cannot really see the similarity between an image and its rotated version.
From reading Lowe's paper, I do understand that the descriptor coordinates are all rotated relative to the keypoint orientation... but how exactly is the similarity obtained? Can we see the similarity just by viewing the 128 values?
Please help me... this is for my project presentation.
You can first use Lowe's ratio test to compute putative matches between the two images: for any given descriptor d in image 1, find the distances to all descriptors d' in image 2. If the ratio of the closest distance to the second-closest distance is below a threshold (0.8 in Lowe's paper), accept the closest descriptor as a match.
After this, you can run RANSAC (or another robust estimator, or a Hough transform) to check the geometric consistency of the accepted putative matches in terms of keypoint position, orientation and scale.
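A brute-force C sketch of the ratio test over raw 128-element descriptors (FLANN replaces the inner loop with an approximate nearest-neighbour index; function names are illustrative):

```c
#include <math.h>

#define DESC_LEN 128

/* Squared Euclidean distance between two 128-D SIFT descriptors. */
static float desc_dist2(const float *a, const float *b) {
    float d2 = 0.0f;
    for (int k = 0; k < DESC_LEN; k++) {
        float d = a[k] - b[k];
        d2 += d * d;
    }
    return d2;
}

/* Lowe's ratio test: returns the index of the best match for `query` in
   train[n], or -1 if best/second-best distance ratio >= `ratio`
   (0.8 in Lowe's paper). */
static int ratio_test_match(const float *query,
                            const float (*train)[DESC_LEN],
                            int n, float ratio) {
    float best = INFINITY, second = INFINITY;
    int best_i = -1;
    for (int i = 0; i < n; i++) {
        float d2 = desc_dist2(query, train[i]);
        if (d2 < best) { second = best; best = d2; best_i = i; }
        else if (d2 < second) { second = d2; }
    }
    /* We compare squared distances, so square the ratio threshold too. */
    if (best_i >= 0 && best < ratio * ratio * second) return best_i;
    return -1;
}
```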
If I recall correctly, SIFT gives you a set of 128-value descriptors, one for each interest point. You also have the location of each point in each image, as well as its orientation (its "direction") and its scale in each image.
Once you've found two points with matching descriptors, you can calculate the transformation from the interest point in one image to the corresponding point in the other by comparing their coordinates and orientations.
If you have enough matches, check whether all (or a majority of) the interest points agree on the same transformation. If they do, the images are similar; if they don't, they are different.
Hope this helps...
What you are looking for is basically ASIFT (affine-invariant SIFT).
You can find the code here, along with an overview.

Recognizing tetris pieces in C

I have to make an application that recognizes a given Tetris piece inside a black-and-white image. I read the image to be analyzed into an array.
How can I do something like this using C?
Assuming you have already loaded the images into arrays, what about using regular expressions?
You don't need exact shape matching, only approximate matching, so why not give it a try!
Edit: I downloaded your doc file. You must identify a pattern among random figures in a 2D array, so regexes aren't suitable for this problem; let's say that's the bad news. The good news is that your homework is not really image processing, and it's much easier than that.
It's your homework so I won't create the code for you but I can give you directions.
You need a routine that creates a new piece by rotating the original pattern/piece (note: by piece I mean the whole 4x4 square, all of its cells).
You need a routine that checks if a piece matches an area from the 2D image at position x,y - the matching area would have corners (x-2, y-2, x+1, y+1).
You search by checking every image position (x,y) for a match.
Since you must use parallelism, you can create 4 threads and assign each thread a different rotation to search for; a sketch of the matching routines follows below.
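A C sketch of those routines, assuming the image is a row-major 0/1 int array and the piece is a 4x4 pattern (names are illustrative):

```c
#include <stdbool.h>

#define P 4  /* the piece is a 4x4 pattern */

/* Rotate a 4x4 piece 90 degrees clockwise. */
static void rotate_piece(const int src[P][P], int dst[P][P]) {
    for (int r = 0; r < P; r++)
        for (int c = 0; c < P; c++)
            dst[c][P - 1 - r] = src[r][c];
}

/* Does `piece` match the image area whose top-left corner is (x, y)? */
static bool match_at(const int *img, int w, int h,
                     const int piece[P][P], int x, int y) {
    if (x + P > w || y + P > h) return false;
    for (int r = 0; r < P; r++)
        for (int c = 0; c < P; c++)
            if (img[(y + r) * w + (x + c)] != piece[r][c])
                return false;
    return true;
}

/* Scan every position; fills *fx/*fy with the first hit. Each thread
   can run this with a different rotation of the piece. */
static bool find_piece(const int *img, int w, int h,
                       const int piece[P][P], int *fx, int *fy) {
    for (int y = 0; y + P <= h; y++)
        for (int x = 0; x + P <= w; x++)
            if (match_at(img, w, h, piece, x, y)) {
                *fx = x; *fy = y;
                return true;
            }
    return false;
}
```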
You might not want to implement this from scratch (unless required, of course)... I'd recommend looking for a suitable library. I've heard that OpenCV is good, but I've never done machine-vision work myself, so I haven't tested it.
Search for connected components (e.g. using depth-first search; you may want to avoid recursion if efficiency is an issue and use your own stack instead; see the sketch below). The largest connected component should be your Tetris piece. You can then analyze it further (using its shape, its size, or some kind of boundary description).
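A sketch of that idea in C, using an explicit stack instead of recursion (assumes a row-major 0/1 image and a zero-initialized label array):

```c
#include <stdlib.h>

/* Iterative flood fill: labels the 4-connected component containing
   (sx, sy) and returns its size. img is 0/1, w*h, row-major; labels
   must start zero-initialized. */
static int flood_component(const unsigned char *img, int *labels,
                           int w, int h, int sx, int sy, int label) {
    if (!img[sy * w + sx] || labels[sy * w + sx]) return 0;
    int *stack = malloc(sizeof(int) * w * h);  /* explicit DFS stack */
    int top = 0, size = 0;
    labels[sy * w + sx] = label;
    stack[top++] = sy * w + sx;
    while (top > 0) {
        int p = stack[--top];
        int x = p % w, y = p / w;
        size++;
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int k = 0; k < 4; k++) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                int q = ny * w + nx;
                if (img[q] && !labels[q]) {
                    labels[q] = label;   /* mark on push: visit once */
                    stack[top++] = q;
                }
            }
        }
    }
    free(stack);
    return size;
}
```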
Looking at the shapes of the Tetris pieces on Wikipedia, named I, J, L, O, S, T and Z, the side ratio of the bounding box (easy to find given a binary image in C) reveals whether you have an I (4:1) or an O (1:1); the other shapes all have 2:3 boxes.
To tell the remaining shapes (J, L, S, T and Z) apart, you can count the shape's cells that fall on each edge of the bounding box, and where they sit. T, for instance, shows 3 and 1 along the two long sides, and 1 and 1 along the two short sides. Keeping track of the positions distinguishes J from L and S from Z.
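A C sketch of both steps, assuming the piece has already been cropped to its bounding box as a row-major 0/1 grid (names are illustrative):

```c
/* Classify by bounding-box dimensions in cells, per the heuristic above:
   4:1 -> I, 1:1 -> O, 2:3 -> one of J, L, S, T, Z (inspect edges next). */
static char classify_by_bbox(int bw, int bh) {
    int longer  = bw > bh ? bw : bh;
    int shorter = bw > bh ? bh : bw;
    if (longer == 4 && shorter == 1) return 'I';
    if (longer == 2 && shorter == 2) return 'O';
    if (longer == 3 && shorter == 2) return '?';  /* J, L, S, T or Z */
    return 0;  /* not a tetromino bounding box */
}

/* Count the occupied cells on each bounding-box edge. For T (3 wide,
   2 tall) this gives top=3, bottom=1, left=1, right=1, as described
   above; the positions of those cells then separate J/L and S/Z. */
static void edge_counts(const int *grid, int bw, int bh,
                        int *top, int *bottom, int *left, int *right) {
    *top = *bottom = *left = *right = 0;
    for (int x = 0; x < bw; x++) {
        *top    += grid[x];                  /* first row  */
        *bottom += grid[(bh - 1) * bw + x];  /* last row   */
    }
    for (int y = 0; y < bh; y++) {
        *left  += grid[y * bw];              /* first column */
        *right += grid[y * bw + (bw - 1)];   /* last column  */
    }
}
```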
