I am looking for a line drawing algorithm that plots only one pixel per scanline/row and uses only integers.
Bresenham's line drawing algorithm works in most cases, but it plots two pixels per scanline when I have a line from P1(40, 0) to P2(0, 20), where points are given as Pn(x, y).
For the record, I'm coding in C for a low end micro controller with no floating point unit.
Thanks!
To simulate floating point you can scale the integers, so instead of "1" you work with "1000", where you imagine the radix point three digits from the right. Then you perform Bresenham's algorithm in integers on these scaled values. Once calculated, you de-scale by integer division by the scale factor; you may want to round. You must determine the maximum x value and choose your scale accordingly so the scaled values do not overflow. If you take a power of 2 as the scale, you can use shifts instead of divisions.
To ensure there is only one pixel per line, you can use a cut-off value: below it the pixel is not drawn, above it it is. The pixel is drawn fully black (or white); there are no gradients, since anti-aliasing would require two pixels per row.
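Here is a minimal sketch in C of that fixed-point idea, stepping exactly one pixel per row. It assumes y2 > y1 and non-negative screen coordinates, and plot_pixel() is a placeholder for your own routine:

#include <stdint.h>

extern void plot_pixel(int x, int y);

/* Draw a line with exactly one pixel per row, integers only.
   Assumes y2 > y1; swap the endpoints first if necessary. */
void line_one_per_row(int x1, int y1, int x2, int y2)
{
    int32_t step = (int32_t)(x2 - x1) * 65536 / (y2 - y1); /* x increment per row, 16.16 fixed point */
    int32_t xf   = (int32_t)x1 * 65536 + 32768;            /* current x, pre-biased so >>16 rounds */

    for (int y = y1; y <= y2; ++y) {
        plot_pixel((int)(xf >> 16), y);                     /* exactly one pixel on this row */
        xf += step;
    }
}

The single division happens once per line, and the shift replaces the per-row division, which matters on a part with no hardware divide.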
See http://en.wikipedia.org/wiki/Bresenham's_line_algorithm:
"...This results in an algorithm that uses only integer arithmetic."
I am developing a SLAM algorithm in C, and I have implemented the FAST corner finding method, which gives me some strong keypoints in the image. The next step is to get the center of each keypoint with sub-pixel accuracy, so I extract a 3x3 patch around it and do a least-squares fit of a two-dimensional quadratic:

f(x, y) ≈ a1*x^2 + a2*y^2 + a3*x*y + a4*x + a5*y + a6

where f(x, y) is the corner saliency measure of each pixel, similar to the FAST score proposed in the original paper, but modified to also provide a saliency measure for non-corner pixels.

The least squares then minimizes the sum of squared residuals over the nine patch pixels:

a_hat = argmin_a sum over (x, y) in the patch of ( f(x, y) - (a1*x^2 + a2*y^2 + a3*x*y + a4*x + a5*y + a6) )^2

with a_hat = (a1, ..., a6) being the estimated parameters.
I can now calculate the location of the peak of the fitted quadratic by setting its gradient to zero, achieving my original goal.
The issue arises in some cases where the local peak is close to the edge of the window, resulting in a fit with low residuals but a quadratic whose peak lies far outside the window.
An example:
The corner saliency and a contour of the fitted quadratic:
The saliency (blue) and fit (red) as 3D meshes:
Numeric values of this example are (row-major ordering):
[336, 522, 483, 423, 539, 153, 221, 412, 234]
The resulting sub-pixel center is (2.6, -17.1), which is clearly wrong.
How can I constrain the fit so the center is within the window?
I'm open to alternative methods for finding the sub pixel peak.
The obvious answer is to reject 3x3 (or 5x5, whatever you use) boxes whose discrete maximum is not at the center. In other words, to use a quadratic approximation only to refine the location of a maximum that must be located inside the box.
More generally, in such cases the first question to ask is not "How do I constrain my model-fitting procedure to shoehorn a solution for this edge case?", but rather
"Does my model apply to this edge case?" and "Is this edge case even worth spending time on, or can I just ignore it?"
I tried my own code to fit a 2D quadratic function to the 3x3 values, using a stable least-squares solving algorithm, and also found a maximum outside of the domain. The 3x3 patch of data does not match a quadratic function, and therefore the fit is not useful.
Fitting a 2D quadratic to a 3x3 neighborhood requires a degree of smoothness in the data that you don't seem to have in your FAST output.
There are many other methods to find the sub-pixel location of the maximum. One that I like because it is more stable and less computationally intensive is the fitting of a "separable" quadratic function. In short, you fit a 1D quadratic to the three values around the local maximum along one dimension, and then another along the other dimension. Instead of solving for 6 parameters with 9 values, this solves for 3 parameters with 3 values, twice. The solution is guaranteed to be stable as long as the center pixel is larger than or equal to all pixels in its 4-connected neighborhood.
z1 = [f(-1,0), f(0,0), f(1,0)]^T

    [ 1  -1   1 ]
X = [ 0   0   1 ]
    [ 1   1   1 ]

solve: X b1 = z1

and

z2 = [f(0,-1), f(0,0), f(0,1)]^T

    [ 1  -1   1 ]
X = [ 0   0   1 ]
    [ 1   1   1 ]

solve: X b2 = z2

(each row of X is [t^2, t, 1] for t = -1, 0, +1, so b = (a, b, c) are the coefficients of the 1D quadratic a*t^2 + b*t + c)
Now you get the x-offset of the peak from b1 and the y-offset from b2: a 1D quadratic a*t^2 + b*t + c peaks at t = -b/(2a), so the offsets are -b1[1]/(2*b1[0]) and -b2[1]/(2*b2[0]), both relative to the center pixel.
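Because X is the same tiny 3x3 system in both directions, you do not even need a matrix solver: substituting the solution into t = -b/(2a) gives a closed form. A minimal C sketch of the separable refinement (the function names and the clamping to ±0.5 are my own choices, not part of the answer above):

/* 3-point parabolic peak: fit a*t^2 + b*t + c through (-1, fm), (0, f0), (+1, fp)
   and return the peak location -b/(2a), clamped to [-0.5, +0.5]. */
static double parabolic_offset(double fm, double f0, double fp)
{
    double denom = fm - 2.0 * f0 + fp;           /* equals 2a */
    if (denom >= 0.0)
        return 0.0;                              /* center is not a strict local maximum */
    double off = 0.5 * (fm - fp) / denom;
    if (off >  0.5) off =  0.5;                  /* keep the refined peak inside the window */
    if (off < -0.5) off = -0.5;
    return off;
}

/* patch is the 3x3 saliency window in row-major order, patch[row][col],
   with the detected maximum at patch[1][1].  Outputs the sub-pixel offsets
   of the peak relative to that center pixel. */
void subpixel_peak(const double patch[3][3], double *dx, double *dy)
{
    *dx = parabolic_offset(patch[1][0], patch[1][1], patch[1][2]); /* along columns (x) */
    *dy = parabolic_offset(patch[0][1], patch[1][1], patch[2][1]); /* along rows (y) */
}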
The scenario is: I want to produce a shape as output when the number of edges, the number of vertices, and the interior angle are given as input, and I am trying to do this using Genetic Algorithms.
My problem is that I'm having trouble getting started. How would I create the initial population randomly in this case? And how could I define the chromosomes in a bitwise representation?
I was referring to some presentation slides.
But in my case, I think I can't represent the chromosome as bits, because what I would be giving are numeric values, aren't they? Any clues to help me move forward?
Genetic Algorithm chromosomes don't have to be represented as bits, although I prefer to do it this way. The simplest approach is probably to just convert the numbers from binary to whatever form you need to represent your shapes, and back again.
You can either scale the binary values or clip them at the edges to make them fit whatever boundary you need.
In terms of initialisation, all you need to do is work out how many bits you need to represent all your input, and generate that many bits randomly. For example, if you wanted 3 whole numbers between 0 and 255 you would need 24 bits (8 * 3). Just generate such a random bit string for each chromosome in the population. When creating the shape, you split the chromosome into three 8-bit fields, convert them into your three whole numbers, and use them.
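A small C sketch of that initialisation and decoding, assuming the 24-bit, three-gene layout from the example (the use of rand() and the packing into a uint32_t are my own choices):

#include <stdint.h>
#include <stdlib.h>

/* One chromosome: 24 bits packed into a 32-bit integer, three 8-bit genes. */
typedef uint32_t chromosome;

/* Randomly initialise one chromosome (rand() may supply fewer than 24
   random bits, so three separate draws are combined). */
chromosome random_chromosome(void)
{
    return ((uint32_t)(rand() & 0xFF) << 16) |
           ((uint32_t)(rand() & 0xFF) <<  8) |
            (uint32_t)(rand() & 0xFF);
}

/* Decode the three 0-255 genes that are then used to build the shape. */
void decode(chromosome c, unsigned genes[3])
{
    genes[0] = (c >> 16) & 0xFF;
    genes[1] = (c >>  8) & 0xFF;
    genes[2] =  c        & 0xFF;
}

Crossover and mutation then operate directly on the 24 bits; only the fitness evaluation needs the decoded numbers.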
EDIT:
I have stripped the program down to its basics: https://github.com/aidan-aidan/temp/blob/master/source/vpp.c
I have posted this question over at the gba-dev forums but they seem pretty dead and I have not gotten a response after many days.
Here is a video showing the problem: http://youtu.be/8gweFiSobwc (different from the code above, you can run the rom if you want to see it)
As you can see, the background rotation is unaffected, even though the background and the sprite use the same LUT and the same data types in their structures.
I have redone the LUT a few times to no avail, and this problem only surfaced after switching to 2048 circle divisions rather than 256.
Looking at the memory viewer in VBA, I can see that pb is not behaving the same as pa. pa ranges from 0x0100 to 0 as I would expect (i.e. from 1.0 to 0), but pb ranges from 0 to 0x0096 while the gun is to the right of the center line, then jumps up to 0xFFFF as soon as it goes to the left of it. The only thing I can figure is that it is going negative (which makes sense, as the cosine of that angle should be negative), but I don't fully understand two's complement so I can't be sure. 0 "degrees" is horizontal on the right for the gun; 512 would be ninety degrees.
I have included everything I can think of, any help is appreciated.
Thanks!
As near as I can tell, the program is technically working correctly; the visual artifacts stem from the limited 8-bit fractional precision of the fixed-point affine parameters used by the hardware.
Essentially, the difference between a 0 and a 1 in the sine table makes a rather large and visible jump in the rotation at the right angles of your rectangular sprite, while the 0°/90°/180°/270° positions are not that special for the round earth background.
The slope of the (co)sine function at its zero crossings is at its maximum, so if you look at your tables there is only a single entry where the value is exactly 0. If the rotation doesn't fall precisely on that index, the result looks visibly skewed.
What you may try, however, is to cheat by fiddling with the rounding to produce more consecutive zeroes. At the moment your table generator scales by 256 and rounds to the nearest integer:
sine[i] = floor(sin(i * M_PI / 1024.0) * 256.0 + 0.5);
As an alternative, we may skew the table a little by rounding towards zero (which is what the float-to-int conversion in C does by default) and increasing the scale to ensure that the table still reaches 256, now at more than one entry:
int value = sin(i * M_PI / 1024.0) * 257.0;
if(value < -256) value = -256;
if(value > +256) value = +256;
sine[i] = value;
The table can then be flattened even further by increasing the scaling factor some more.
Part of the problem is also that for small sprites near right angles there is simply nowhere for the rotation to go.
Picture a 64x1 straight-line sprite pointing almost exactly vertically, rotated by only the minimum 1/256th fractional step from the sine table. Keep in mind that the rotation is always centered, so you get one jagged horizontal step precisely at the center.
For each of the 32 pixels from the center outwards you then add another 1/256th fraction to the texture coordinate. However, all of these together only amount to 32/256 = 1/8th of a pixel, so no further horizontal steps are taken. In fact, for this sprite shape you will see no visible change until the per-pixel step reaches a full 8/256ths, at which point the accumulated offset at the outermost pixel finally reaches a whole pixel.
On the other hand, your large earth object is big enough to always show multiple visible integer "steps". This means that an increased rotation angle will always at least shift the position of the non-centered steps, even if it is insufficient to add up to another full pixel along the entire side. The result is a smoother overall effect due to the continual animation.
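A tiny arithmetic sketch of that accumulation in the GBA's 8.8 fixed-point format (the variable names are mine):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 8.8 fixed point: 0x0100 == 1.0 texel */
    int16_t pb  = 1;       /* smallest nonzero sine entry: 1/256 texel per pixel */
    int32_t acc = 0;

    for (int i = 0; i < 32; ++i)
        acc += pb;         /* step accumulated over the 32 pixels from the center out */

    /* prints 32 (0x0020), i.e. 32/256 = 1/8 texel: the integer part of the
       texture coordinate never changes, so no extra horizontal step appears */
    printf("accumulated offset = %ld/256 texel\n", (long)acc);
    return 0;
}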
I'm looking for an algorithm to do a best fit of an arbitrary rectangle to an unordered set of points. Specifically, I'm looking for a rectangle where the sum of the distances of the points to any one of the rectangle edges is minimised. I've found plenty of best fit line, circle and ellipse algorithms, but none for a rectangle. Ideally, I'd like something in C, C++ or Java, but not really that fussy on the language.
The input data will typically be comprised of most points lying on or close to the rectangle, with a few outliers. The distribution of data will be uneven, and unlikely to include all four corners.
Here are some ideas that might help you.
We can estimate if a point is on an edge or on a corner as follows:
Collect the point's n nearest neighbours
Calculate the centroid of those neighbours
Calculate their covariance matrix as follows:
Start with Covariance = ((0, 0), (0, 0))
For each point calculate d = point - centroid
Covariance += outer_product(d, d)
Calculate the covariance's eigenvalues. (e.g. with SVD)
Classify point:
if one eigenvalue is large and the other very small, we are probably on an edge
otherwise we should be on a corner
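A C sketch of this edge/corner test for a single point, given its n nearest neighbours (the neighbour search is not shown, and the 0.05 eigenvalue ratio is an arbitrary threshold for illustration):

#include <math.h>
#include <stddef.h>

typedef struct { double x, y; } point2d;

/* Returns nonzero if the neighbourhood looks like a straight edge,
   zero if it looks more like a corner. */
int looks_like_edge(const point2d *nbr, size_t n)
{
    /* centroid of the neighbours */
    double cx = 0.0, cy = 0.0;
    for (size_t i = 0; i < n; ++i) { cx += nbr[i].x; cy += nbr[i].y; }
    cx /= (double)n;  cy /= (double)n;

    /* 2x2 covariance: sum of outer products of (point - centroid) */
    double sxx = 0.0, sxy = 0.0, syy = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double dx = nbr[i].x - cx, dy = nbr[i].y - cy;
        sxx += dx * dx;  sxy += dx * dy;  syy += dy * dy;
    }

    /* eigenvalues of a symmetric 2x2 matrix in closed form (no SVD needed) */
    double mean = 0.5 * (sxx + syy);
    double dev  = sqrt(0.25 * (sxx - syy) * (sxx - syy) + sxy * sxy);
    double lmax = mean + dev, lmin = mean - dev;

    /* one large and one very small eigenvalue: neighbours lie on a line */
    return lmin < 0.05 * lmax;
}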
Extract all corner points and do a segmentation. Choose the four segments with the most entries. The centroids of those segments are candidates for the rectangle's corners.
Calculate the normalized direction vectors of two opposite sides and take their mean. Do the same for the other two opposite sides. These are the direction vectors of a parallelogram. If you want a rectangle, calculate a vector perpendicular to one of those directions and average it with the other direction vector; the rectangle's directions are then that mean vector and a vector perpendicular to it.
In order to calculate the corners, you can project the corner candidates onto these directions and move them so that they form the corners of a rectangle.
The idea of a line of best fit is to compute the vertical distances between your points and the line y = a*x + b. Then you can use calculus to find the values of a and b that minimize the sum of the squares of those distances. The reason squaring is chosen over the absolute value is that the former is differentiable at 0.
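For reference, that straight-line case has the well-known closed form; a C sketch of it (the function name is mine):

#include <stddef.h>

/* Ordinary least-squares fit of y = a*x + b, minimizing vertical distances.
   Returns 0 on success, -1 if all x values are equal (vertical line). */
int fit_line(const double *x, const double *y, size_t n, double *a, double *b)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; ++i) {
        sx  += x[i];        sy  += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    double denom = (double)n * sxx - sx * sx;
    if (denom == 0.0) return -1;
    *a = ((double)n * sxy - sx * sy) / denom;
    *b = (sy - *a * sx) / (double)n;
    return 0;
}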
If you were to try the same approach with a rectangle, you would run into the problem that the square of the distance to the side of a rectangle is a piecewise defined function with 8 different pieces and is not differentiable when the pieces meet up inside the rectangle.
In order to proceed, you'll need to decide on a function that measures how far a point is from a rectangle that is everywhere differentiable.
Here's a general idea. Make a grid with smallish cells and calculate a best-fit line for each not-too-empty cell (the calculation is immediate [1], there's no search involved). Join adjacent cells while making sure the standard deviation is improving, or at least not worsening much. This way we detect the four sides and the four corners, and divide our points into four groups, each belonging to one of the four sides.
Next, we throw away the corner cells, put the true rectangle in place of the four approximate lines, and do a bit of hill climbing (or whatever). The best-fit-line calculation may be augmented for this case, since the two lines on opposite sides are parallel and we've already separated our points into the four groups: for a given rectangle we know the delta-y between the two opposing sides (taking the horizontal-ish sides for a moment), so we just add this delta-y to the ys of the lower group of points and do one combined calculation.
The initial rectangular grid may be replaced by working with stripes (say, vertical ones). Then at least half of the stripes will have two pronounced groupings of points (find them by dividing each stripe into cells with horizontal division lines).
[1] For a line Y = a*X + b, minimize the sum of squares of perpendicular distances of the data points {(xi, yi)} to that line. This is directly solvable for a and b. For more vertical lines, flip the Xs and the Ys.
P.S. I interpret the problem as minimizing the sum of squares of perpendicular distances of each point to its nearest side of the rectangle, not to all the rectangle's sides.
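The perpendicular-distance fit from footnote [1] also has a direct solution via the covariance of the points. A C sketch returning a centroid plus unit direction vector, which covers near-vertical lines without flipping the axes (the function name is mine):

#include <math.h>
#include <stddef.h>

/* Fit a line minimizing the sum of squared perpendicular distances.
   Output: a point on the line (the centroid) and a unit direction vector. */
void fit_line_orthogonal(const double *x, const double *y, size_t n,
                         double *cx, double *cy, double *dirx, double *diry)
{
    double sx = 0, sy = 0;
    for (size_t i = 0; i < n; ++i) { sx += x[i]; sy += y[i]; }
    *cx = sx / (double)n;
    *cy = sy / (double)n;

    double sxx = 0, syy = 0, sxy = 0;
    for (size_t i = 0; i < n; ++i) {
        double dx = x[i] - *cx, dy = y[i] - *cy;
        sxx += dx * dx;  syy += dy * dy;  sxy += dx * dy;
    }

    /* direction of largest variance = major eigenvector of the 2x2 covariance */
    double theta = 0.5 * atan2(2.0 * sxy, sxx - syy);
    *dirx = cos(theta);
    *diry = sin(theta);
}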
I am not completely sure, but you might play around with the first 2 (3?) dimensions of a PCA of your points. It will work reasonably fast in most cases.
If I have a polyline that describes a road and I know the road width at all parts, is there an algorithm I can use to determine if a point is on the road? I'm not entirely sure how to do this since the line itself has a width of 1px.
thanks,
Jeff
Find the minimum distance from the point to the line (it lies along a vector perpendicular to the line). The actual calculation is below, where P0 is the first point of the road segment, v is the road segment vector, and w is the vector from P0 to the point in question. You will have to iterate over each edge in the polyline. If the distance is less than half the road width for that segment (assuming the polyline runs down the center of the road), then the point is "on" the road.
d = |v x w| / |v|
The corners might be tricky depending on if you treat them as rounded (constant radius) or angular.
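A C sketch of that test for one segment. Here the projection onto the segment is clamped to its endpoints, which effectively gives the rounded-corner treatment at the joints (type and function names are mine):

#include <math.h>

typedef struct { double x, y; } vec2;

/* Distance from point p to segment a-b, clamped to the endpoints. */
double dist_point_segment(vec2 p, vec2 a, vec2 b)
{
    vec2 v = { b.x - a.x, b.y - a.y };   /* road segment vector */
    vec2 w = { p.x - a.x, p.y - a.y };   /* vector from the segment start to the point */

    double len2 = v.x * v.x + v.y * v.y;
    double t = (len2 > 0.0) ? (w.x * v.x + w.y * v.y) / len2 : 0.0;
    if (t < 0.0) t = 0.0;                /* clamp to the segment */
    if (t > 1.0) t = 1.0;

    double dx = w.x - t * v.x, dy = w.y - t * v.y;
    return sqrt(dx * dx + dy * dy);      /* for 0 < t < 1 this equals |v x w| / |v| */
}

The point is on the road if this distance is at most half that segment's width for at least one segment of the polyline.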
Perhaps you could take each line segment, build the rectangle formed by the line segment and its width, and use rectangle/point collision algorithms to determine whether the rectangle contains the point. A good algorithm will also account for the width = 1 scenario, which simply amounts to building the inverse function of the line segment and checking whether y^-1(point.y) is an x between line_segment.x1 and line_segment.x2.