Line fit from an array of 2d vectors - c

I have a problem in some C code; I assume it belongs here rather than on the Mathematics exchange.
I have an array of changes in x and y position generated by a user dragging a mouse. How can I determine whether a straight line was drawn or not?
I am currently using linear regression; is there a better (more efficient) way to do this?
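For concreteness, here is a minimal sketch of the kind of least-squares fit I mean (the NPTS name and the 4.0 flatness threshold are placeholders, not my real code; a vertical stroke, where all x are equal, would also need a special case):

#include <stdio.h>

#define NPTS 10

int main(void) {
    int dX[NPTS] = {0, 10, 13, 8, 20, 18, 19, 22, 12, 23};
    int dY[NPTS] = {0, 2, 3, 1, -1, -2, 0, 0, 3, 1};
    double x[NPTS], y[NPTS], sx = 0, sy = 0, sxx = 0, sxy = 0, px = 0, py = 0;
    int i;

    for (i = 0; i < NPTS; i++) {          /* deltas -> absolute positions */
        px += dX[i];  py += dY[i];
        x[i] = px;    y[i] = py;
        sx += x[i];   sy += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }

    double a = (NPTS * sxy - sx * sy) / (NPTS * sxx - sx * sx); /* slope */
    double b = (sy - a * sx) / NPTS;                            /* intercept */

    double mse = 0;
    for (i = 0; i < NPTS; i++) {
        double e = y[i] - (a * x[i] + b);
        mse += e * e;
    }
    mse /= NPTS;

    printf("y = %.3f*x + %.3f, MSE = %.3f\n", a, b, mse);
    puts(mse < 4.0 ? "looks like a straight line" : "does not look like a straight line");
    return 0;
}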
EDIT:
Hough transformation attempt:
#define abSIZE 100
#define ARRAYSIZE 10

int A[abSIZE][abSIZE]; //points in the a-b plane
int dX[ARRAYSIZE] = {0, 10, 13, 8, 20, 18, 19, 22, 12, 23};
int dY[ARRAYSIZE] = {0, 2, 3, 1, -1, -2, 0, 0, 3, 1};
int absX[ARRAYSIZE]; //absolute positions
int absY[ARRAYSIZE];
int error = 0;
int sumx = 0, sumy = 0, i;

//Convert deltas to absolute positions
for (i = 0; i < ARRAYSIZE; i++) {
    absX[i] = sumx += dX[i];
    absY[i] = sumy += dY[i];
}

//Initialise accumulator array to zero
int a, b, x, y;
for (a = -abSIZE/2; a < abSIZE/2; a++) {
    for (b = -abSIZE/2; b < abSIZE/2; b++) {
        A[a+abSIZE/2][b+abSIZE/2] = 0;
    }
}

//Hough transform
int aMax = 0;
int bMax = 0;
int highest = 0;
for (i = 0; i < ARRAYSIZE; i++) {
    x = absX[i];
    y = absY[i];
    for (a = -abSIZE/2; a < abSIZE/2; a++) {
        for (b = -abSIZE/2; b < abSIZE/2; b++) {
            if (a*x + b == y) {
                A[a+abSIZE/2][b+abSIZE/2] += 1;
                if (A[a+abSIZE/2][b+abSIZE/2] > highest) {
                    highest = A[a+abSIZE/2][b+abSIZE/2];
                    aMax = a;
                    bMax = b;
                }
            }
        }
    }
}
printf("Line is Y = %d*X + %d\n", aMax, bMax);

//Calculate MSE
int e;
for (i = 0; i < ARRAYSIZE; i++) {
    e = absY[i] - (aMax * absX[i] + bMax);
    error += e * e;
}
printf("error is: %d\n", error);

Though linear regression sounds like a perfectly reasonable way to solve the task, here's another suggestion: the Hough transform, which may be somewhat more robust against outliers. Here is a very rough sketch of how it can be applied:
- initialize a large matrix A with zeros
- transform your deltas into absolute coordinates (x, y) in the x-y-plane (e.g. start at (0, 0))
- for each point:
  - there are (non-unique) parameters a and b such that a*x + b = y; all such pairs (a, b) form a straight line in the a-b-plane
  - draw this "line" in the a-b-plane by adding ones to the corresponding cells of A, which represents the quantized a-b-plane
- now find the maximum in the a-b-plane matrix A; it corresponds to the parameters (a, b) of the straight line in the x-y-plane that has the most support from the original points
- finally, calculate the MSE to the original points and decide with some threshold whether the move was a straight line
More details e.g. here:
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MARSHALL/node32.html
Edit: here's a quote from Wikipedia that explains why it's better to use a different parametrization to deal with vertical lines (where a would become infinite in ax+b=y):
However, vertical lines pose a problem. They are more naturally described as x = a and would give rise to unbounded values of the slope parameter m. Thus, for computational reasons, Duda and Hart proposed the use of a different pair of parameters, denoted r and theta, for the lines in the Hough transform. These two values, taken in conjunction, define a polar coordinate.
Thanks to Zaw Lin for pointing this out.
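To make the r-theta idea concrete, here is a minimal sketch of a polar-parametrized accumulator (the bin counts TSIZE/RSIZE and the reuse of the absolute positions accumulated from the question's data are my own illustrative choices, not part of the original code):

#include <stdio.h>
#include <math.h>

#define NPTS  10
#define TSIZE 180   /* theta bins of 1 degree over [0, pi) */
#define RSIZE 512   /* r bins, offset by RSIZE/2 so negative r fits too */

int main(void) {
    /* absolute positions accumulated from the dX/dY deltas above */
    int absX[NPTS] = {0, 10, 23, 31, 51, 69, 88, 110, 122, 145};
    int absY[NPTS] = {0, 2, 5, 6, 5, 3, 3, 3, 6, 7};
    static int A[TSIZE][RSIZE];          /* accumulator, zero-initialized */
    const double PI = acos(-1.0);
    int i, t, best = 0, tMax = 0, rMax = 0;

    for (i = 0; i < NPTS; i++) {
        for (t = 0; t < TSIZE; t++) {
            double theta = t * PI / TSIZE;
            /* every line through (x, y) satisfies x*cos(theta) + y*sin(theta) = r */
            int r = (int)lround(absX[i] * cos(theta) + absY[i] * sin(theta)) + RSIZE / 2;
            if (r >= 0 && r < RSIZE) {
                A[t][r]++;
                if (A[t][r] > best) {
                    best = A[t][r];
                    tMax = t;
                    rMax = r - RSIZE / 2;
                }
            }
        }
    }
    printf("best line: x*cos(theta) + y*sin(theta) = r, theta = %d deg, r = %d, votes = %d\n",
           tMax, rMax, best);
    /* unlike a = dy/dx, this parametrization handles vertical strokes without blowing up */
    return 0;
}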

Related

3D Sobel Operator Algorithm in C

I'm currently struggling to make a 3D Sobel edge detector in C (which I am quite new to). It's not exactly working as expected (highlighting non-edges within a solid 3D object) and I was hoping someone might see where I've gone wrong. (and sorry for the poor spacing in this post)
First of all, im is the input image which has been copied into tm with a 1 pixel border on each side.
I loop through the image:
for (z = im.zlo; z <= im.zhi; z++) {
    for (y = im.ylo; y <= im.yhi; y++) {
        for (x = im.xlo; x <= im.xhi; x++) {
I make an array which will house the change in the x, y, and z directions, and loop through a 3x3x3 cube:
int dxdydz[3] = {0, 0, 0};
for (a = -1; a < 2; a++) {
    for (b = -1; b < 2; b++) {
        for (c = -1; c < 2; c++) {
Now here's the meat, where it gets a bit tricky. I'm weighting my Sobel operator such that if you imagine one 2D surface of the kernel, it would be {{1,2,1},{2,4,2},{1,2,1}}. In other words, the weight of a kernel pixel is related to its 4-connected nearness to the center pixel.
To accomplish this, I define e as 3 - (|a| + |b| + |c|), so that it is 0, 1, or 2 for every off-centre voxel that actually contributes. The kernel will be weighted by 3^e at each pixel.
The sign of the kernel pixel will just be determined by the sign of a, b, or c.
int e = 3 - (abs(a) + abs(b) + abs(c));
Now I loop through a, b, and c by packaging them into an array and looping i from 0 to 2. When a, for example, is 0, we don't want to add any value to the x component, so we exclude that with an if statement (8 levels deep!).
int abc[3] = {a, b, c};
for (i = 0; i < 3; i++) {
    if (abc[i] != 0) {
The value to add should just be the image value at that pixel multiplied by the kernel value at that pixel. abc[i] is just -1 or 1, and (int)pow(3, e) is the nearness-to-center weight.
dxdydz[i] += abc[i]*(int)pow(3, e)*tm.u[z+a][y+b][x+c];
}
}
}
}
}
Lastly, I take the sqrt of the sum of the squared changes in x, y, and z:
int mag2 = 0;
for (i = 0; i < 3; i++) {
    mag2 += (int)pow(dxdydz[i], 2);
}
im.u[z][y][x] = (int)sqrt(mag2);
}
}
}
Of course I could just loop through the image and multiply 3x3x3 cubes by the 3D kernels:
int kx[3][3][3] = {{{-1,-2,-1},{ 0, 0, 0},{ 1, 2, 1}},
                   {{-2,-4,-2},{ 0, 0, 0},{ 2, 4, 2}},
                   {{-1,-2,-1},{ 0, 0, 0},{ 1, 2, 1}}};
int ky[3][3][3] = {{{-1,-2,-1},{-2,-4,-2},{-1,-2,-1}},
                   {{ 0, 0, 0},{ 0, 0, 0},{ 0, 0, 0}},
                   {{ 1, 2, 1},{ 2, 4, 2},{ 1, 2, 1}}};
int kz[3][3][3] = {{{-1, 0, 1},{-2, 0, 2},{-1, 0, 1}},
                   {{-2, 0, 2},{-4, 0, 4},{-2, 0, 2}},
                   {{-1, 0, 1},{-2, 0, 2},{-1, 0, 1}}};
But I think the loop approach is a lot sexier.
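For what it's worth, here is a minimal, self-contained sketch of that direct kernel approach (the 5x5x5 test volume, the kd name and the gradient-at-one-voxel helper are my own illustrative choices; the kernel values are the ky given above, reused for the other two directions by permuting its indices):

#include <stdio.h>
#include <math.h>

#define D 5   /* illustrative volume size */

/* kd equals the ky kernel above: the sign varies along the first index,
   the 1-2-1 weights along the other two */
static const int kd[3][3][3] = {
    {{-1,-2,-1},{-2,-4,-2},{-1,-2,-1}},
    {{ 0, 0, 0},{ 0, 0, 0},{ 0, 0, 0}},
    {{ 1, 2, 1},{ 2, 4, 2},{ 1, 2, 1}}
};

/* gradient magnitude at an interior voxel (z, y, x) of vol */
static int sobel3d_at(int vol[D][D][D], int z, int y, int x)
{
    int gz = 0, gy = 0, gx = 0;
    for (int a = -1; a <= 1; a++)
        for (int b = -1; b <= 1; b++)
            for (int c = -1; c <= 1; c++) {
                int v = vol[z + a][y + b][x + c];
                gz += kd[a + 1][b + 1][c + 1] * v; /* derivative along z */
                gy += kd[b + 1][a + 1][c + 1] * v; /* same kernel, axes swapped */
                gx += kd[c + 1][b + 1][a + 1] * v;
            }
    return (int)sqrt((double)(gz * gz + gy * gy + gx * gx));
}

int main(void)
{
    int vol[D][D][D];
    /* half the volume 0, half 10: only the boundary plane should light up */
    for (int z = 0; z < D; z++)
        for (int y = 0; y < D; y++)
            for (int x = 0; x < D; x++)
                vol[z][y][x] = (x < D / 2) ? 0 : 10;

    for (int x = 1; x < D - 1; x++)
        printf("x = %d -> |g| = %d\n", x, sobel3d_at(vol, 2, 2, x));
    return 0;
}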

Generate permutation with keyword

I have to implement the following algorithm, which will be run a few zillion times in the hill-climbing cryptanalysis of a specific cipher.
The algorithm produces a permutation of the standard alphabet {A,B,C,...,Y,Z} from a key K of 7 letters of the same alphabet, as follows:
Assume K = INGXNDM = {9, 14, 7, 24, 14, 4, 13}
From right to left, count K1=9 positions on the alphabet. R is reached, so the first element of the permutation is R: P = {R,...}
Mark R as used; we will have to 'jump' over it later.
From R, count K2=14 positions to the left; we reach D and mark it as used.
P={R,D,...}
The next count is 7; when reaching A we wrap around and consider that Z follows A, so we reach W. Mark it as used: P={R,D,W,...}
The next count is 24, so we reach V, because we jump over R, D and W.
And so on... When K7=13 has been used, we restart with K1=9.
The obtained transposed alphabet is: RDWVGBL ZHUKNFI PJSAMXT CQOYE
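To make the walk-through concrete, here is a plain, unoptimized reference sketch of the counting procedure above (the used[] bookkeeping, the function name and the base-1 key are illustrative; my actual code further below works with base-0 keys and a linked list):

#include <stdio.h>

#define ALPHALEN 26

/* Walk the alphabet right to left (A -> Z -> Y -> ...), skipping letters
   that are already used; keys are base-1 here (I = 9), as in the example. */
static void key_to_alphabet(const int *K, int keylen, char *out)
{
    int used[ALPHALEN] = {0};
    int pos = 0;                                   /* start at A */
    int picked, k = 0;

    for (picked = 0; picked < ALPHALEN; picked++) {
        int steps = K[k % keylen];
        k++;
        while (steps > 0) {
            pos = (pos + ALPHALEN - 1) % ALPHALEN; /* one letter to the left */
            if (!used[pos])
                steps--;                           /* used letters do not count */
        }
        used[pos] = 1;
        out[picked] = (char)('A' + pos);
    }
    out[ALPHALEN] = '\0';
}

int main(void)
{
    int K[7] = { 9, 14, 7, 24, 14, 4, 13 };        /* I N G X N D M, base-1 */
    char alphabet[ALPHALEN + 1];
    key_to_alphabet(K, 7, alphabet);
    printf("%s\n", alphabet);                      /* RDWVGBLZHUKNFIPJSAMXTCQOYE */
    return 0;
}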
In fact I need the inverse permutation in the deciphering code.
My code implements a linked list for skipping the used letters.
It works in base 0 (A = 0, ..., Z = 25) and returns the inverse permutation: Pinv[i] = j means that letter i is at position j.
#include <stdio.h>

#define ALPHALEN 26

void KeyToPermut(int *K, int *Pinv);
int previnit[ALPHALEN], prev[ALPHALEN], nextinit[ALPHALEN], next[ALPHALEN];

int main() {
    int l, Pinv[ALPHALEN], K[7] = { 8, 13, 6, 23, 13, 3, 12 }, P[ALPHALEN];
    // precalculate links between letters, ordered right to left, from Z to A
    for (l = 0; l < ALPHALEN; l++) {
        previnit[l] = l + 1;                                  // prev[A] = B
        if (previnit[l] >= ALPHALEN) previnit[l] -= ALPHALEN; // prev[Z] = A
        nextinit[l] = l - 1;                                  // next[B] = A
        if (nextinit[l] < 0) nextinit[l] += ALPHALEN;         // next[A] = Z
    }
    KeyToPermut(K, Pinv); // this is the code to be optimized
    for (l = 0; l < ALPHALEN; l++) P[Pinv[l]] = l; // calculate direct permutation
    for (l = 0; l < ALPHALEN; l++) printf("%c", P[l] + 65);
    printf("\n");
    return 0;
}
void KeyToPermut(int *K, int *Permut) {
    int l, keyptr = 0, cnt = 0, p = 0;
    // copy prev[] and next[] from precalculated arrays previnit[] and nextinit[]
    for (l = 0; l < ALPHALEN; l++) {
        prev[l] = previnit[l];
        next[l] = nextinit[l];
    }
    while (1) {
        for (l = 0; l <= K[keyptr] % (ALPHALEN - cnt); l++) p = next[p];
        Permut[p] = cnt++;
        if (cnt < ALPHALEN) {
            prev[next[p]] = prev[p]; // link previous and next positions
            next[prev[p]] = next[p];
            keyptr++;
            if (keyptr >= 7) keyptr = 0; // re-use K1 after K7
        }
        else
            break;
    }
}
I have two questions:
1. How can I optimize the code in KeyToPermut? The profiler clearly indicates that the for loop that walks the chain is the bottleneck. Is there perhaps a method that avoids the linked list?
2. Obviously the key space is not 26! but much smaller: 26^7, so only a subset of the 26! permutations can be generated. Do you know how specific the generated permutations are? Do they belong to a known class of permutations? For example, I could not identify (so far) any pattern in the cycles of these permutations.
I use VS2013 and C; other parts of the project are CUDA code (x64 platform).
Thank you for your attention.
Background information: the encryption scheme used by the cipher uses 4 keys K of length 7, so the theoretical key space to be explored for finding the plaintext is 26^28, i.e. about 131 bits. The method could use other key lengths: any value from 1 to 25 would work.
How can I optimize the code in KeyToPermut? The profiler clearly indicates that the for loop that walks the chain is the bottleneck. Is there perhaps a method that avoids the linked list?
I didn't find a method that avoids the linked list, but we can make do with a singly linked list instead of a doubly linked one, since the needed previous position can be obtained from the last iteration of the for loop.
void KeyToPermut(int *K, int *Permut)
{
    int l, keyptr = 0, cnt = 0, p = 0, prev;
    // copy next[] from precalculated array nextinit[]
    for (l = 0; l < ALPHALEN; l++) next[l] = nextinit[l];
    while (1)
    {
        for (l = 0; l <= K[keyptr] % (ALPHALEN - cnt); l++) prev = p, p = next[p];
        Permut[p] = cnt++;
        if (cnt < ALPHALEN)
        {
            next[prev] = next[p]; // link previous to next position
            p = prev;
            keyptr++;
            if (keyptr >= 7) keyptr = 0; // re-use K1 after K7
        }
        else
            break;
    }
}
This saves about 10% of the function's runtime.

Distribute elements between equivalent arrays to achieve balanced sums

I am given a set of elements from, say, 10 to 21 (always sequential).
I generate arrays of the same size, where the size is determined at runtime.
Example of 3 generated arrays (the number of arrays is dynamic, as is the number of elements in all arrays; some elements can be 0s, i.e. not used):
A1 = [10, 11, 12, 13]
A2 = [14, 15, 16, 17]
A3 = [18, 19, 20, 21]
These generated arrays will be given to different processes to do some computations on the elements. My aim is to balance the load for every process that will get an array. What I mean is: with the given example, there are
A1 = 46
A2 = 62
A3 = 78
potential iterations over the elements given for each thread.
I want to rearrange the initial arrays to give an equal amount of work to each process, so for example:
A1 = [21, 11, 12, 13] = 57
A2 = [14, 15, 16, 17] = 62
A3 = [18, 19, 20, 10] = 67
(Not an equal distribution, but fairer than the initial one.) Distributions can differ, as long as they approach some optimal distribution and are better than the worst (initial) case of the first and last arrays. As I see it, different distributions can be achieved using different indexing [where the split of the arrays is made; it can be uneven].
This works fine for the given example, but there may be weird cases...
So, I see this as a reflection problem (for lack of a proper term), where the arrays should be seen with a diagonal through them, like:
10 | 11 12 13
14 15 | 16 17
18 19 20 | 21
And then an obvious substitution can be done...
I tried to implement it like this:
if (rest == 0)
    payload_size = (upper-lower)/(processes-1);
else
    payload_size = (upper-lower)/(processes-1) + 1;
//printf("payload size: %d\n", payload_size);
long payload[payload_size];
int m = 0;
int k = payload_size/2;
int added = 0; //track what been added so far (to skip over already added elements)
int added2 = 0; // same as 'added'
int p = 0;
for (i = lower; i <= upper; i = i+payload_size) {
    for (j = i; j < (i+payload_size); j++) {
        if (j <= upper) {
            if ((j-i) > k) {
                if (added2 > j) {
                    added = j;
                    payload[(j-i)] = j;
                    printf("1 adding data: %d at location: %d\n", payload[(j-i)], (j-i));
                } else {
                    printf("else..\n");
                }
            } else {
                if (added < upper - (m+1)) {
                    payload[(j-i)] = upper - (p*payload_size) - (m++);
                    added2 = payload[(j-i)];
                    printf("2 adding data: %d at location: %d\n", payload[(j-i)], (j-i));
                } else {
                    payload[(j-i)] = j;
                    printf("2.5 adding data: %d at location: %d\n", payload[(j-i)], (j-i));
                }
            }
        } else { payload[(j-i)] = '\0'; }
    }
    p++;
    k = k/2;
    //printf("send to proc: %d\n", ((i)/payload_size)%(processes-1)+1);
}
...but failed horribly.
You can definitely see the problem in the implementation: it is poorly scalable, incomplete, messy, badly written and so on, and on, and on, ...
So, I need help either with the implementation or with an idea of a better approach to do what I want to achieve, given the description.
P.S. I need the solution to be as 'in-liney' as possible (avoid loop nesting) - that is why I am using a bunch of flags and global indexes.
Surely this can be done with extra loops and unnecessary iterations. I invite people who know and appreciate the art of indexing when it comes to arrays.
I am sure there is a solution somewhere out there, but I just cannot make an appropriate Google query to find it.
Hint? I thought of using index % size_of_my_data to achieve this task...
P.S. Application: described here
Here is an O(n) solution I wrote using a deque (double-ended queue; a deque is not strictly necessary and a simple array could be used, but a deque keeps the code clean because of popright and popleft). The code is Python, not pseudocode, but it should be easy to understand (because it's Python):
def balancingSumProblem(seqStart = None, seqStop = None, numberOfArrays = None):
    from random import randint
    from collections import deque
    seq = deque(xrange(seqStart or randint(1, 10),
                       seqStop and seqStop + 1 or randint(11, 30)))
    arrays = [[] for _ in xrange(numberOfArrays or randint(1, 6))]
    print "# of elements: {}".format(len(seq))
    print "# of arrays: {}".format(len(arrays))
    averageNumElements = float(len(seq)) / len(arrays)
    print "average number of elements per array: {}".format(averageNumElements)
    oddIteration = True
    try:
        while seq:
            for array in arrays:
                if len(array) < averageNumElements and oddIteration:
                    array.append(seq.pop()) # pop() is like popright()
                elif len(array) < averageNumElements:
                    array.append(seq.popleft())
            oddIteration = not oddIteration
    except IndexError:
        pass
    print arrays
    print [sum(array) for array in arrays]

balancingSumProblem(10, 21, 3) # Given Example
print "\n---------\n"
balancingSumProblem() # Randomized Test
Basically, from iteration to iteration, it alternates between grabbing large elements and distributing them evenly in the arrays and grabbing small elements and distributing them evenly in the arrays. It goes from out to in (though you could go from in to out) and tries to use what should be the average number of elements per array to balance it out further.
It's not 100 percent accurate with all tests but it does a good job with most randomized tests. You can try running the code here: http://repl.it/cJg
With a simple sequence to assign, you can just iteratively add the min and max elements to each list in turn. There are some termination details to fix up, but that's the general idea. Applied to your example the output would look like:
john-schultzs-macbook-pro:~ jschultz$ ./a.out
10 21 13 18 = 62
11 20 14 17 = 62
12 19 15 16 = 62
A simple reflection assignment like this will be optimal when num_procs evenly divides num_elems. It will be sub-optimal, but still decent, when it doesn't:
#include <stdio.h>

int compute_dist(int lower, int upper, int num_procs)
{
    if (lower > upper || num_procs <= 0)
        return -1;

    int num_elems = upper - lower + 1;
    int num_elems_per_proc_floor = num_elems / num_procs;
    int num_elems_per_proc_ceil = num_elems_per_proc_floor + (num_elems % num_procs != 0);
    int procs[num_procs][num_elems_per_proc_ceil];
    int i, j, sum;

    // assign pairs of (lower, upper) to each process until we can't anymore
    for (i = 0; i + 2 <= num_elems_per_proc_floor; i += 2)
        for (j = 0; j < num_procs; ++j)
        {
            procs[j][i] = lower++;
            procs[j][i+1] = upper--;
        }

    // handle left overs similarly to the above
    // NOTE: actually you could use just this loop alone if you set i = 0 here, but the above loop is more understandable
    for (; i < num_elems_per_proc_ceil; ++i)
        for (j = 0; j < num_procs; ++j)
            if (lower <= upper)
                procs[j][i] = ((0 == i % 2) ? lower++ : upper--);
            else
                procs[j][i] = 0;

    // print assignment results
    for (j = 0; j < num_procs; ++j)
    {
        for (i = 0, sum = 0; i < num_elems_per_proc_ceil; ++i)
        {
            printf("%d ", procs[j][i]);
            sum += procs[j][i];
        }
        printf(" = %d\n", sum);
    }

    return 0;
}

int main()
{
    compute_dist(10, 21, 3);
    return 0;
}
I have used this implementation, which I mentioned in this report. (The implementation works for the cases I've used for testing: (1-15K), (1-30K) and (1-100K) datasets. I am not saying that it will be valid for all cases):
int aFunction(long lower, long upper, int payload_size, int processes)
{
    long result, i, j;
    MPI_Status status;
    long payload[payload_size];
    int m = 0;
    int k = (payload_size/2)+(payload_size%2)+1;
    int lastAdded1 = 0;
    int lastAdded2 = 0;
    int p = 0;
    int substituted = 0;
    int allowUpdate = 1;
    int s;
    int times = 1;
    int times2 = 0;

    for (i = lower; i <= upper; i = i+payload_size) {
        for (j = i; j < (i+payload_size); j++) {
            if (j <= upper) {
                if (k != 0) {
                    if ((j-i) >= k) {
                        payload[(j-i)] = j - (m);
                        lastAdded2 = payload[(j-i)];
                    } else {
                        payload[(j-i)] = upper - (p*payload_size) - (m++) + (p*payload_size);
                        if (allowUpdate) {
                            lastAdded1 = payload[(j-i)];
                            allowUpdate = 0;
                        }
                    }
                } else {
                    int n;
                    int from = lastAdded1 > lastAdded2 ? lastAdded2 : lastAdded1;
                    from = from + 1;
                    int to = lastAdded1 > lastAdded2 ? lastAdded1 : lastAdded2;
                    int tempFrom = (to-from)/payload_size + ((to-from)%payload_size > 0 ? 1 : 0);
                    for (s = 0; s < tempFrom; s++) {
                        int restIndex = -1;
                        for (n = from; n < from+payload_size; n++) {
                            restIndex = restIndex + 1;
                            payload[restIndex] = '\0';
                            if (n < to && n >= from) {
                                payload[restIndex] = n;
                            } else {
                                payload[restIndex] = '\0';
                            }
                        }
                        from = from + payload_size;
                    }
                    return 0;
                }
            } else { payload[(j-i)] = '\0'; }
        }
        p++;
        k = (k/2)+(k%2)+1;
        allowUpdate = 1;
    }
    return 0;
}

C Language - General algorithm to read a square matrix, based on the square number of its side?

So we're reading a matrix and saving it in an array sequentially. We read the matrix from a starting [x,y] point which is provided. Here's an example of some code I wrote to get the values of [x-1,y] [x+1,y] [x,y-1] [x,y+1], which is a cross.
for (i = 0, n = -1, m = 0, array_pos = 0; i < 4; i++, n++, array_pos++) {
    if (x+n < filter_matrix.src.columns && x+n >= 0)
        if (y+m < filter_matrix.src.lines && y+m >= 0) {
            for (k = 0; k < numpixels; k++) {
                arrayToProcess[array_pos].rgb[h] = filter_matrix.src.points[x+n][y+m].rgb[h];
            }
        }
    m = n;
    m++;
}
(The ifs are meant to avoid reading invalid positions; since it's an image we're reading, the origin pixel can be located in a corner. Not relevant to the issue here.)
Now is there a similar generic algorithm which can read ALL the elements around as a square (not just a cross) based on a single parameter, which is the size of the square's side squared?
If it helps, the only values we're dealing with are 9, 25 and 49 (a 3x3 5x5 and 7x7 square).
Here is generalized code for reading the square of size n centered at (x, y):
int startx = x - n/2;
int starty = y - n/2;
for (int u = 0; u < n; u++) {
    for (int v = 0; v < n; v++) {
        int i = startx + u;
        int j = starty + v;
        if (i >= 0 && j >= 0 && i < N && j < M) {
            printf("%d ", Matrix[i][j]);
        }
    }
}
Explanation: Start from the top-left value, which is (x - n/2, y - n/2). Now consider that you are reading a normal square matrix, where i and j are the indices into Matrix[i][j]. We just add startx and starty to shift the window from (0,0) to (x - n/2, y - n/2).
Given:
static inline int min(int x, int y) { return (x < y) ? x : y; }
static inline int max(int x, int y) { return (x > y) ? x : y; }
or equivalent macros, and given that:
the x-coordinates range from 0 to x_max (inclusive),
the y-coordinates range from 0 to y_max (inclusive),
the centre of the square (x,y) is within the bounds,
the square you are creating has sides of (2 * size + 1) (so size is 1, 2, or 3 for the 3x3, 5x5, and 7x7 cases; or if you prefer to have sq_side = one of 3, 5, 7, then size = sq_side / 2),
the integer types are all signed (so x - size can produce a negative value; if they're unsigned, you will get the wrong result using the expressions shown),
then you can ensure that you are within bounds by setting:
x_lo = max(x - size, 0);
x_hi = min(x + size, x_max);
y_lo = max(y - size, 0);
y_hi = min(y + size, y_max);
for (x_pos = x_lo; x_pos <= x_hi; x_pos++)
{
    for (y_pos = y_lo; y_pos <= y_hi; y_pos++)
    {
        // Process the data at array[x_pos][y_pos]
    }
}
Basically, the initial assignments determine the bounds of the array from [x-size][y-size] to [x+size][y+size], but bounded by 0 on the low side and by the maximum sizes on the high end. Then scan over the relevant rectangular (usually square) sub-section of the matrix. Note that this determines the valid ranges once, outside the loops, rather than repeatedly within the loops.
If the integer types are unsigned, you have to ensure you never try to create a negative number during subtraction. The expressions could be rewritten as:
x_lo = x - min(x, size);
x_hi = min(x + size, x_max);
y_lo = y - min(y, size);
y_hi = min(y + size, y_max);
which isn't as symmetric but only uses the min function.
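Stitched together into something runnable, the pieces above might look like this (the 8x6 array, the fill values and the summing "process" step are purely illustrative):

#include <stdio.h>

static inline int min(int x, int y) { return (x < y) ? x : y; }
static inline int max(int x, int y) { return (x > y) ? x : y; }

int main(void)
{
    enum { X_MAX = 7, Y_MAX = 5 };               /* coordinates run 0..x_max, 0..y_max */
    int array[X_MAX + 1][Y_MAX + 1];
    int x_pos, y_pos;

    for (x_pos = 0; x_pos <= X_MAX; x_pos++)     /* fill with recognizable values */
        for (y_pos = 0; y_pos <= Y_MAX; y_pos++)
            array[x_pos][y_pos] = 10 * x_pos + y_pos;

    int x = 1, y = 5, size = 1;                  /* 3x3 square centred next to an edge */
    int x_lo = max(x - size, 0);
    int x_hi = min(x + size, X_MAX);
    int y_lo = max(y - size, 0);
    int y_hi = min(y + size, Y_MAX);

    long sum = 0;
    for (x_pos = x_lo; x_pos <= x_hi; x_pos++)
        for (y_pos = y_lo; y_pos <= y_hi; y_pos++)
            sum += array[x_pos][y_pos];          /* "process" = just sum here */

    printf("clipped %dx%d window at (%d,%d), sum = %ld\n",
           x_hi - x_lo + 1, y_hi - y_lo + 1, x, y, sum);
    return 0;
}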
Given the coordinates (x,y), you first need to find the surrounding elements. You can do that with a double for loop, like this:
for (int i = x-1; i <= x+1; i++) {
    for (int j = y-1; j <= y+1; j++) {
        int elem = square[i][j];
    }
}
Now you just need to do a bit of work to make sure that 0 <= i, j < n, where n is the length of a side.
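For the 3x3 case, that bit of work can be done by clamping the loop bounds before entering the loops (a minimal sketch; the side length n, the centre (x, y) and the cell values are illustrative):

#include <stdio.h>

int main(void)
{
    enum { n = 4 };                              /* illustrative side length */
    int square[n][n], i, j;

    for (i = 0; i < n; i++)                      /* fill with recognizable values */
        for (j = 0; j < n; j++)
            square[i][j] = 10 * i + j;

    int x = 0, y = 3;                            /* centre sitting in a corner */
    int i_lo = (x - 1 < 0) ? 0 : x - 1;
    int i_hi = (x + 1 > n - 1) ? n - 1 : x + 1;
    int j_lo = (y - 1 < 0) ? 0 : y - 1;
    int j_hi = (y + 1 > n - 1) ? n - 1 : y + 1;

    for (i = i_lo; i <= i_hi; i++)
        for (j = j_lo; j <= j_hi; j++)
            printf("%d ", square[i][j]);         /* only the in-bounds neighbours */
    printf("\n");
    return 0;
}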
I don't know whether the (X,Y) in your code is the center of the square. I assume it is.
If the side of the square is odd, generate the coordinates of the points on the square. I assume the center is (0,0). Then the points on the square are
(-side/2, [-side/2, side/2 - 1]); ([-side/2 + 1, side/2], -side/2); (side/2, [side/2 - 1, -side/2]); ([side/2 - 1, side/2], -side/2);
where side is the length of the square's side.
make use of this:
for (i = x-1; i <= x+1; i++) {
    for (j = y-1; j <= y+1; j++) {
        if (i >= 0 && j >= 0 && i < n && j < n) {
            int elem = square[i][j];
        }
    }
}

Progressive loop through pairs of increasing integers

Suppose one wanted to search for pairs of integers x and y that satisfy some equation, such as (off the top of my head) 7 x^2 + x y - 3 y^2 = 5.
(I know there are quite efficient methods for finding integer solutions to quadratics like that; but this is irrelevant for the purpose of the present question.)
The obvious approach is to use a simple double loop: "for x = -max to max; for y = -max to max { blah }". But to allow the search to be stopped and resumed, a more convenient approach, picturing the possible values of x and y as a square lattice of points in the plane, is to work round a "square spiral" outward from the origin, starting and stopping at (say) the top right corner.
So basically, I am asking for a simple and sound "pseudo-code" for the loops to start and stop this process at points (m, m) and (n, n) respectively.
For extra kudos, if the reader is inclined, I suggest also providing the loops for the case where one of x and y can be assumed non-negative, or where both can be assumed non-negative. This is probably somewhat easier, especially the second.
I could whump this up myself without much difficulty, but am interested in seeing neat ideas of others.
This would make quite a good "constructive" interview challenge for those dreaded interviewers who like to torture candidates with white boards ;-)
def enumerateIntegerPairs(fromRadius, toRadius):
    for radius in range(fromRadius, toRadius + 1):
        if radius == 0: yield (0, 0)
        for x in range(-radius, radius): yield (x, radius)
        for y in range(-radius, radius): yield (radius, -y)
        for x in range(-radius, radius): yield (-x, -radius)
        for y in range(-radius, radius): yield (-radius, y)
Here is a straightforward implementation (also on ideone):
#include <stdio.h>

void turn(int *dr, int *dc) {
    int tmp = *dc;
    *dc = -*dr;
    *dr = tmp;
}

int main(void) {
    int N = 3;
    int r = 0, c = 0;
    int sz = 0;
    int dr = 1, dc = 0, cnt = 0;
    while (r != N+1 && c != N+1) {
        printf("%d %d\n", r, c);
        if (cnt == sz) {
            turn(&dr, &dc);
            cnt = 0;
            if (dr == 0 && dc == -1) {
                r++;
                c++;
                sz += 2;
            }
        }
        cnt++;
        r += dr;
        c += dc;
    }
    return 0;
}
The key to the implementation is the turn function, which performs a right turn given a pair of {delta-row, delta-col}. The rest is straightforward arithmetic.
