I have a 64x8 grid of pixels. The aim is to activate the pixels on this grid in a random manner until the whole grid is activated.
Logically I can generate random numbers in the 0-63 and 0-7 ranges and then activate that pixel. Assuming I run this for long enough, the grid should eventually be completely activated.
However, I am wondering if there is an algorithm that can minimize or avoid collisions altogether (returning the coordinates of an already-activated pixel) and guarantee complete grid activation in a finite amount of time?
Fill an array of length 512 with numbers increasing from 0 to 511 (64 x 8 = 512), so the array will contain {0, 1, 2, 3, ..., 511}.
Then shuffle that array, for example as explained here: Shuffle array in C.
Then define a function that maps a number to a coordinate; that would be:
x = n / 8
y = n % 8
n being one of the numbers of the array, so that x covers 0-63 and y covers 0-7, matching the ranges in the question.
If the array is well shuffled, this guarantees that all pixels will be activated in a random order.
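A minimal sketch of the whole approach in C (the printf stands in for whatever routine actually activates a pixel; note that rand() % (i + 1) has a slight modulo bias, which hardly matters here):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define W 64
#define H 8
#define N (W * H)

int main(void)
{
    int order[N];
    for (int i = 0; i < N; i++)          /* fill with 0..511 */
        order[i] = i;

    srand((unsigned)time(NULL));
    for (int i = N - 1; i > 0; i--) {    /* Fisher-Yates shuffle */
        int j = rand() % (i + 1);
        int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
    }

    for (int i = 0; i < N; i++) {        /* every pixel exactly once */
        int x = order[i] / H;            /* 0..63 */
        int y = order[i] % H;            /* 0..7  */
        printf("activate (%d, %d)\n", x, y);
    }
    return 0;
}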
You could implement a pseudo-random number generator (PRNG) with a period of 64 * 8 = 512. Use 3 bits of its output for the axis of length 8 and the remaining 6 bits for the axis of length 64.
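One concrete way to get a full period of exactly 512 is a linear congruential generator whose parameters satisfy the Hull-Dobell conditions; a sketch (the constants a = 5, c = 1 are my own picks; for a power-of-two modulus, any odd c together with a % 4 == 1 gives the full period):

#include <stdio.h>

int main(void)
{
    unsigned n = 0;                     /* any seed in 0..511 works */
    for (int i = 0; i < 512; i++) {     /* visits each state exactly once */
        unsigned x = n >> 3;            /* upper 6 bits -> 0..63 */
        unsigned y = n & 7;             /* lower 3 bits -> 0..7  */
        printf("activate (%u, %u)\n", x, y);
        n = (5 * n + 1) & 511;          /* a = 5, c = 1, m = 512 */
    }
    return 0;
}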
I need to make an algorithm (formula, function) using AND, OR, XOR, NEG, SHIFT, NOT, etc. which calculates an element of an array from its index;
the size of the element is one byte,
e.g. element = index ^ constant, where the constant is array[index] ^ index (previously calculated).
This will work only if the array size is less than 256.
How do I make a byte from an index when the index is bigger than 1 byte?
The same way; however, there will be duplicates, as you have only 256 possible numbers in a BYTE, so if your array is bigger than 256 there must be duplicates.
To avoid obvious mirroring, you cannot use monotonic functions. For example
value[ix] = ix
is monotonic, so it will produce a sawtooth-like shape, mirroring the content of the array every 256 bytes. To avoid this you need to combine more stuff together. It's similar to building your own pseudo-random generator. The usual approaches are:
modular arithmetic
something like:
value[ix] = ((c0*ix + c1*ix*ix + c2*ix*ix*ix) % prime) & 255
If the constants c0, c1, c2 and prime are big enough, the output looks random, so there will be far fewer repeated patterns visible in the output ... But you need to use arithmetic of a bit width that can hold the prime ...
In case you are hitting the upper bound of your arithmetic's bit width, you need to use modmul/modpow to avoid overflows. See:
Modular arithmetics and NTT (finite field DFT) optimizations
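A quick illustration in C (the constants are arbitrary picks of mine, not from the original; reducing mod prime after each multiplication keeps every intermediate within 64 bits, which sidesteps the overflow issue just mentioned):

#include <stdint.h>
#include <stdio.h>

/* value[ix] = ((c0*ix + c1*ix^2 + c2*ix^3) % prime) & 255,
   computed with stepwise reduction so nothing overflows 64 bits */
static uint8_t value_at(uint32_t ix)
{
    const uint64_t c0 = 12345, c1 = 67891, c2 = 2345;
    const uint64_t prime = 16777213;    /* an arbitrary prime below 2^24 */
    uint64_t x  = ix % prime;
    uint64_t x2 = (x * x) % prime;      /* fits: prime^2 < 2^48 */
    uint64_t x3 = (x2 * x) % prime;
    return (uint8_t)(((c0 * x + c1 * x2 + c2 * x3) % prime) & 255);
}

int main(void)
{
    for (uint32_t ix = 0; ix < 16; ix++)
        printf("%u -> %u\n", (unsigned)ix, (unsigned)value_at(ix));
    return 0;
}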
swapping bits
Simply do some math on your ix where you also use ix with its bits swapped around. That changes the monotonic properties a lot... This approach works best on a cumulative sub-result, however, which is not the case here. I would try:
value[ix] = (ix + ((ix<<3)*5) - ((ix>>2)*7) + ((3*ix) ^ ((ix<<4) | (ix>>4)))) & 255
Playing with the constants and operators achieves different results. However, with this approach you need to check validity (which I did not!). So render a graph of the first few values (say 1024), where the x axis is ix and the y axis is value[ix]. There you should see whether the output repeats or saturates towards some value; if it does, change the equation.
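A throwaway harness for that check could look like this (it dumps ix/value pairs you can feed to gnuplot or a spreadsheet):

#include <stdio.h>

int main(void)
{
    for (unsigned ix = 0; ix < 1024; ix++) {
        unsigned v = (ix + ((ix << 3) * 5) - ((ix >> 2) * 7)
                      + ((3 * ix) ^ ((ix << 4) | (ix >> 4)))) & 255;
        printf("%u %u\n", ix, v);   /* x axis: ix, y axis: value[ix] */
    }
    return 0;
}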
for more info see How to seed to generate random numbers?
Of course, after all this, it's not possible to get ix back from value[ix] ...
I have n rectangles, each of height 1 and various (integer) widths. So the rectangles are equivalent to an n-length vector of positive integers.
I have c containers, each of (integer) width w, but whose heights vary. Each container is equivalent to w rectangles of width 1 and height some non-negative integer.
So each container is equivalent to a w-length vector of non-negative integers and all c containers are equivalent to a c x w matrix M of non-negative integers.
I value the packing of each rectangle in proportion to its width. I may only pack rectangles horizontally.
So I need, for each rectangle, a position for its left end in some container, i.e. I need (i,j) such that, when summing over all the packed rectangles, the total height in container i at each position is no greater than M(i,j).
I tried using Solver in Excel but it only gave a local optimum.
I am thinking of something like trying to place the rectangles in descending order of width. If there is ever more than one possible position, pick the one that leaves the most options for the next size down, as in the sketch below.
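A sketch of that greedy idea in C (first-fit decreasing, without the look-ahead tie-breaking; the widths and the uniform capacity of 2 are made-up demo data standing in for the real M matrix):

#include <stdio.h>
#include <stdlib.h>

#define C  3    /* containers */
#define W  10   /* container width */
#define NR 5    /* rectangles */

static int cap[C][W];   /* remaining height capacity at each cell */

static int cmp_desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;
}

int main(void)
{
    int widths[NR] = {7, 5, 4, 4, 2};          /* example rectangle widths */

    for (int i = 0; i < C; i++)                /* stand-in for M(i,j) */
        for (int j = 0; j < W; j++)
            cap[i][j] = 2;

    qsort(widths, NR, sizeof widths[0], cmp_desc);

    for (int r = 0; r < NR; r++) {
        int L = widths[r], placed = 0;
        for (int i = 0; i < C && !placed; i++)
            for (int j = 0; j + L <= W && !placed; j++) {
                int ok = 1;                    /* room at columns j..j+L-1? */
                for (int k = j; k < j + L; k++)
                    if (cap[i][k] < 1) { ok = 0; break; }
                if (ok) {
                    for (int k = j; k < j + L; k++)
                        cap[i][k]--;           /* commit the placement */
                    printf("rect %d (w=%d) -> container %d, column %d\n",
                           r, L, i, j);
                    placed = 1;
                }
            }
        if (!placed)
            printf("rect %d (w=%d) could not be placed\n", r, L);
    }
    return 0;
}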
I have the following homework assignment in C. I basically need an approach rather than a solution.
We have a 13 x 13 array. In the array, we have a diamond shape that we need to consider. Everything outside this diamond is initialized to -1 (unimportant). An example 5 x 5 array is below:
x x 1 x x
x 2 2 2 x
3 3 3 3 3
x 4 4 4 x
x x 5 x x
x=-1
Now, each entry we have in the diamond contains 11 bits. The 5 LSBs contain one piece of data (hue), and the other 6 bits contain another (diameter). We need to sort the data row-wise, monotonically, by hue, and then column-wise, monotonically, by diameter.
What would be the most efficient and memory-conserving way of doing this? Since we need to conserve memory, it's best if the entries are swapped around rather than creating another array. In the end, we will end up with a sorted diamond array (still with the -1s). Thanks a lot in advance, guys!
I didn't understand how exactly you want to reorder the elements
row-wise, monotonically for the hue, and then column-wise for the diameter, monotonically
but here are some ideas you might be able to use.
Your array is 13x13 (169 elements); out of that, almost exactly half (84) are empty, so you can use them as temporary storage (e.g. for a radix sort).
Your values have 11 meaningful bits; numbers in real computers have either 16 or 32 bits, so you can use the 5 (or 21, depending on your system) most significant bits as temporary storage.
One possibly good way to use the upper 5 bits is to put a copy of the 5 LSBs (hue) there. This will reverse the significance of the two parts when doing a normal integer comparison (making hue more significant than diameter).
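In code, that trick might look like this (a sketch assuming the values are held in 16-bit integers with the 11 data bits at the bottom):

#include <stdint.h>

/* Copy the 5-bit hue (bits 0-4) into the unused top bits so that a
   plain integer comparison orders by hue first, then by diameter. */
static uint16_t make_key(uint16_t v)
{
    return (uint16_t)(((v & 0x1F) << 11) | v);
}

/* After sorting, mask the helper bits back off. */
static uint16_t strip_key(uint16_t k)
{
    return (uint16_t)(k & 0x7FF);
}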
I see.
I suppose such a diamond shape could be directly represented by an array.
Ignore all the -1 entries.
{ row-0 row-1 row-2 row-3 row-4 }
{ 1 2 2 2 3 3 3 3 3 4 4 4 5 }
(shown for the 5x5 example; the 13x13 case flattens the same way, rows 0 through 12).
You can now sort the array as you like.
Sort it twice, once for hue, once for diameter; or figure out how to sort an array by two criteria.
You can also work in place, if you just write a function for converting an array index to diamond coordinates. With that done, you can work on the diamond structure as if it were an array.
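Such a conversion function could look like the following sketch (row r of the 13x13 diamond holds 13 - 2*|r - 6| entries, the first of which sits in column |r - 6|):

#include <stdlib.h>

#define DIM 13
#define CENTER (DIM / 2)

/* Map a linear index (0..84) over the diamond entries, taken row by
   row, to (row, col) coordinates in the 13x13 array. */
static void diamond_coords(int idx, int *row, int *col)
{
    for (int r = 0; r < DIM; r++) {
        int width = DIM - 2 * abs(r - CENTER);  /* entries in this row */
        if (idx < width) {
            *row = r;
            *col = abs(r - CENTER) + idx;       /* row starts at |r-6| */
            return;
        }
        idx -= width;
    }
    *row = *col = -1;                           /* index out of range */
}

Any in-place sort can then read and write entries through diamond_coords, treating the diamond as a flat 85-element array.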
Write a sorting routine with this prototype:
void sort(int startx, int starty, int dx, int dy, int count, int (*compare)(int, int));
or
void sort(int *start, int stride, int count, int (*compare)(int,int));
Write a couple of comparison functions, and call sort in two for loops, one for rows and another for columns.
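A sketch of the second variant (a plain insertion sort; cmp_hue is a made-up comparison on the 5 low bits):

/* Sort count ints starting at *start, stepping by stride elements,
   in place, using the given comparison function. */
void sort(int *start, int stride, int count, int (*compare)(int, int))
{
    for (int i = 1; i < count; i++) {
        int key = start[i * stride];
        int j = i - 1;
        while (j >= 0 && compare(start[j * stride], key) > 0) {
            start[(j + 1) * stride] = start[j * stride];
            j--;
        }
        start[(j + 1) * stride] = key;
    }
}

/* example comparison: order by hue (5 least significant bits) */
int cmp_hue(int a, int b)
{
    return (a & 0x1F) - (b & 0x1F);
}

Rows are then sorted with stride 1 and columns with stride 13 (the row width of the array); choosing start and count per row/column so the -1 border entries are skipped is left to the caller.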
So I am trying to code the Hough Transform in C. I have a binary image and have extracted the binary values from it. Now, to do the Hough transform, I have to convert the [X,Y] values from the image into [rho,theta] to do a parametric transform of the form
rho = x*cos(theta) + y*sin(theta)
I don't quite understand how it's actually transformed, looking at other online codes. Any help explaining the algorithm and how the accumulator for the [rho,theta] values should be built from the [X,Y] values would be appreciated. Thanks in advance. :)
Your question hints at the fact that you think that you need to map each (X,Y) point of interest in the image to ONE (rho, theta) vector in the Hough space.
The fact of the matter is that each point in the image is mapped to a curve, i.e. SEVERAL vectors in the Hough space. The number of vectors for each input point depends on some "arbitrary" resolution that you decide upon. For example, for 1 degree resolution, you'd get 360 vectors in Hough space.
There are two possible conventions for the (rho, theta) vectors: either you use the [0, 359] degree range for theta, in which case rho is always positive, or you use [0, 179] degrees for theta and allow rho to be either positive or negative. The latter is used in many implementations.
Once you understand this, the accumulator is little more than a two-dimensional array which covers the range of the (rho, theta) space, and where each cell is initialized to 0. It is used to count the number of vectors that are common to the curves of different points in the input.
The algorithm therefore computes all the vectors (180 per point, with the signed-rho convention and 1 degree resolution) for each point of interest in the input image. For each of these vectors, after rounding rho to the nearest integral value (which depends on the precision in the rho dimension, e.g. 0.5 if we have 2 points per unit), it finds the corresponding cell in the accumulator and increments the value in that cell.
When this has been done for all points of interest, the algorithm searches for all cells in the accumulator whose value is above a chosen threshold. The (rho, theta) "addresses" of these cells are the polar-coordinate parameters of the lines (in the input image) that the Hough algorithm has identified.
Now, note that this gives you line equations; one is typically left with figuring out which segments of these lines effectively belong in the input image.
A very rough pseudo-code "implementation" of the above:

Accumulator_rho_size = Sqrt(2) * max(width_of_image, height_of_image)
                       * precision_factor  // e.g. 2 if we want 0.5 precision
Accumulator_theta_size = 180               // using the signed-rho convention

Accumulator = newly allocated array of integers
              with dimensions [Accumulator_rho_size, Accumulator_theta_size]
Fill all cells of Accumulator with 0.

For each (x, y) point of interest in the input image
    For theta = 0 to 179
        rho = round(x * cos(theta) + y * sin(theta),
                    value_based_on_precision_factor)
        Accumulator[rho, theta]++   // in practice, offset rho so that
                                    // negative values index correctly

Search in Accumulator for the cells with the biggest counter values
(or with values above a given threshold).  // picking the threshold can be tricky

The corresponding (rho, theta) "addresses" of these high-valued cells are
the polar coordinates of the lines discovered in the original image, defined
by their angle relative to the x axis and their distance to the origin.
Simple math can be used to compute various points on each such line, in
particular the axis intercepts, to produce a y = ax + b equation if so desired.
Overall this is a rather simple algorithm. The complexity lies mostly in being consistent with the units, e.g. in the conversion between degrees and radians (most math libraries' trig functions are radian-based), and with the coordinate system used for the input image.
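In C, the accumulator-filling core might look like this sketch (it assumes the signed-rho convention, a precision factor of 2, and a hypothetical edge_at(x, y) predicate standing in for your binary image):

#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define WIDTH  320
#define HEIGHT 240
#define PRECISION 2                 /* 2 accumulator cells per unit of rho */
#define THETAS 180

extern int edge_at(int x, int y);   /* hypothetical: 1 if the pixel is set */

int *hough_accumulate(void)
{
    /* rho ranges over [-diag, +diag]; offset it so indices are >= 0 */
    int diag = (int)ceil(sqrt((double)WIDTH * WIDTH + HEIGHT * HEIGHT));
    int rho_cells = 2 * diag * PRECISION + 1;
    int *acc = calloc((size_t)rho_cells * THETAS, sizeof *acc);

    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            if (!edge_at(x, y))
                continue;
            for (int t = 0; t < THETAS; t++) {
                double rad = t * M_PI / 180.0;    /* degrees -> radians */
                double rho = x * cos(rad) + y * sin(rad);
                int r = (int)lround(rho * PRECISION) + diag * PRECISION;
                acc[r * THETAS + t]++;
            }
        }
    return acc;   /* caller scans for cells above a threshold, then frees */
}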
This is a follow-up to this question.
I am working on a low level C app where I have to draw text. I have decided to store the font I want to use as an array (black and white, each char 128x256, perhaps), then I'd downscale it to the sizes I need with some algorithm (as grayscale, so I can have some crude font smoothing).
Note: this is a toy project, please disregard stuff like doing calculations at runtime or not.
Question is, which algorithm?
I looked up 2xSaI, but it's rather complicated. I'd like something I can read the description of and work out the code for myself (I am a beginner and have been coding in C/C++ for just under a year).
Suggestions, anyone?
Thanks for your time!
Edit: Please note, the input is B&W, the output should be smoothed grayscale
Figure out the rectangle in the source image that will correspond to a destination pixel. For example, if your source image is 44x88 and your destination is 20x40, the upper left pixel in the destination corresponds to the rectangle from (0,0) to (2.2,2.2) in the source image. Now, do an area-average over those pixels:
Area is 2.2 * 2.2 = 4.84. You'll scale the result by 1/4.84.
Pixels at (0,0), (0,1), (1,0), and (1,1) each weigh in at 1 unit.
Pixels at (0,2), (1,2), (2,0), and (2,1) each weigh in at 0.2 unit (because the rectangle only covers 20% of them).
The pixel at (2,2) weighs in at 0.04 (because the rectangle only covers 4% of it).
The total weight is of course 4*1 + 4*0.2 + 0.04 = 4.84.
This one was easy because you started with source and destination pixels lined up evenly at the edge of the image. In general, you'll have partial coverage at all 4 sides/4 corners of the sliding rectangle.
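Here is a sketch of that general case with fractional coverage (src is assumed to hold one 0/1 byte per pixel; dst receives 0-255 greyscale):

/* Area-average downscale: each destination pixel is the weighted mean
   of the source rectangle it covers, including partial edge pixels. */
void downscale(const unsigned char *src, int sw, int sh,
               unsigned char *dst, int dw, int dh)
{
    double xs = (double)sw / dw, ys = (double)sh / dh;  /* scale factors */
    for (int dy = 0; dy < dh; dy++)
        for (int dx = 0; dx < dw; dx++) {
            double x0 = dx * xs, x1 = (dx + 1) * xs;    /* source rect */
            double y0 = dy * ys, y1 = (dy + 1) * ys;
            double sum = 0.0;
            for (int y = (int)y0; y < y1 && y < sh; y++)
                for (int x = (int)x0; x < x1 && x < sw; x++) {
                    /* overlap of source pixel [x,x+1)x[y,y+1) with rect */
                    double wx = (x1 < x + 1 ? x1 : x + 1)
                              - (x0 > x ? x0 : (double)x);
                    double wy = (y1 < y + 1 ? y1 : y + 1)
                              - (y0 > y ? y0 : (double)y);
                    sum += wx * wy * src[y * sw + x];
                }
            dst[dy * dw + dx] =
                (unsigned char)(255.0 * sum / (xs * ys) + 0.5);
        }
}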
Don't bother with algorithms other than area-averaging for downscaling. Most of them are plain wrong (they result in horrible aliasing, at least with a factor smaller than 1/2) and the ones that aren't plain wrong are a good bit more painful to implement and probably won't give you better results.
Consider that your image is an N*M B&W bitmap. For simplicity we'll treat it as char Letter[N][M], where the allowable values are 0 and 1. Now consider that you want to downscale it to unsigned char letter[n][m]. This means each greyscale pixel of letter is computed from the number of white pixels in the corresponding rectangle of the big bitmap:
char Letter[N][M];            /* source: 0 or 1 per pixel          */
unsigned char letter[n][m];   /* destination: greyscale 0..255     */
int rect_sz_X = N / n;        /* size of the rectangle that maps to a */
int rect_sz_Y = M / m;        /* single pixel in the downscaled image */
int i, j, x, y;

for (i = 0; i < n; i++)
    for (j = 0; j < m; j++) {
        int sum = 0;
        for (x = 0; x < rect_sz_X; x++)
            for (y = 0; y < rect_sz_Y; y++)
                sum += Letter[i * rect_sz_X + x][j * rect_sz_Y + y];
        letter[i][j] = (sum * 255) / (rect_sz_X * rect_sz_Y);
    }
Note that when the sizes aren't evenly divisible, the rectangles won't tile the original bitmap exactly (with this simple integer division, some pixels at the borders are ignored). The larger your original bitmap is, the better.
Scaling a bitmapped font is the same problem as scaling any other bitmap. The general class of algorithm that you're after is interpolation. There are quite a few ways to do this; in general, the more visually accurate the result, the more complicated the algorithm. You could start by looking at (in increasing order of complexity):
Nearest-neighbour
Bilinear interpolation
Bicubic interpolation
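As a reference point, bilinear interpolation blends the four source pixels nearest to the sampling position; a sketch (src is assumed to hold one greyscale byte per pixel):

/* Sample src (sw x sh) at the fractional position (fx, fy) using
   bilinear interpolation of the four surrounding pixels. */
unsigned char bilinear(const unsigned char *src, int sw, int sh,
                       double fx, double fy)
{
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = x0 + 1 < sw ? x0 + 1 : x0;   /* clamp at the border */
    int y1 = y0 + 1 < sh ? y0 + 1 : y0;
    double tx = fx - x0, ty = fy - y0;    /* fractional parts, 0..1 */

    double top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
    double bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
    return (unsigned char)(top * (1 - ty) + bot * ty + 0.5);
}

Note that when downscaling by more than a factor of about 2, bilinear sampling skips source pixels entirely, which is why the area-averaging approach from the other answers tends to look better for this use case.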
It's pretty simple. If all you've got is a bitmapped font instead of an outline font, then you have very limited choices in picking an anti-aliasing pixel color. For example, if the bitmapped font's point size is exactly four times as large as the desired display point size, then you can only ever get 17 distinct levels: the number of 'lit' pixels in the 4x4 mapping rectangle, from 0 to 16.
Having to deal with fractional mapping is a programming exercise but not one that improves the quality.
If it is acceptable to constrain the downscaling to powers of 2 (50%, 25%, 12.5%, etc.), then a very simple and fairly good algorithm is to create each downscaled pixel as the majority vote of the source pixels it covers. For example, at 50%, a square of four pixels forms each downscaled pixel: if zero or one of them is on, the output is off; if three or four are on, the output is on. The remaining case, exactly two pixels on, is the artistic one: either always choose on (or off), or look at other surrounding pixels to break the tie.
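A sketch of the 50% case in C (the two-pixel tie is resolved to 'on' here, one of the choices mentioned above; src holds one 0/1 byte per pixel):

/* 2x downscale by majority vote: each output pixel looks at a 2x2
   block of the input; 0-1 lit -> off, 2-4 lit -> on (tie goes to on). */
void halve_bw(const unsigned char *src, int sw, int sh, unsigned char *dst)
{
    for (int y = 0; y < sh / 2; y++)
        for (int x = 0; x < sw / 2; x++) {
            int lit = src[(2 * y) * sw + 2 * x]
                    + src[(2 * y) * sw + 2 * x + 1]
                    + src[(2 * y + 1) * sw + 2 * x]
                    + src[(2 * y + 1) * sw + 2 * x + 1];
            dst[y * (sw / 2) + x] = (unsigned char)(lit >= 2);
        }
}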