Generating a random cubic graph with uniform probability (or less) - c

While this may look like homework, I assure you it's not. It stems from some homework assignment I did, though.
Let's call an undirected graph without self-edges "cubic" if every vertex has degree exactly three. Given a positive integer N I'd like to generate a random cubic graph on N vertices. I'd like for it to have uniform probability, that is, if there are M cubic graphs on N vertices the probability of generating each one is 1/M. A weaker condition that is still fine is that every cubic graph has non-zero probability.
I feel there's a quick and smart way to do this, but so far I've been unsuccessful.
I am a bad coder, please bear with this awful code:
PRE: edges = (3*nodes)/2, nodes is even, the constants are selected in such a way that the hash works (BIG_PRIME is bigger than edges, SMALL_PRIME is bigger than nodes, LOAD_FACTOR is small).
void random_cubic_graph() {
    int i, j, k, count = 0;
    int *degree = (int*) calloc(nodes, sizeof(int));
    char guard;

    while (count < edges) {
        /* Try a new edge at random */
        guard = 0;
        i = rand() % nodes;
        j = rand() % nodes;
        /* Check that it is not a self-edge */
        if (i == j)
            guard = 1;
        /* Check that both degrees are still below 3 */
        if (degree[i] > 2 || degree[j] > 2)
            guard = 1;
        /* Check that the edge was not already selected, using a hash table A */
        k = 0;
        while (A[(j + k*BIG_PRIME) % (LOAD_FACTOR*edges)] != 0) {
            if (A[(j + k*BIG_PRIME) % (LOAD_FACTOR*edges)] % SMALL_PRIME == j)
                if ((A[(j + k*BIG_PRIME) % (LOAD_FACTOR*edges)] - j) / SMALL_PRIME == i)
                    guard = 1;
            k++;
        }
        if (guard == 0)
            A[(j + k*BIG_PRIME) % (LOAD_FACTOR*edges)] = hash(i,j);
        k = 0;
        while (A[(i + k*BIG_PRIME) % (LOAD_FACTOR*edges)] != 0) {
            if (A[(i + k*BIG_PRIME) % (LOAD_FACTOR*edges)] % SMALL_PRIME == i)
                if ((A[(i + k*BIG_PRIME) % (LOAD_FACTOR*edges)] - i) / SMALL_PRIME == j)
                    guard = 1;
            k++;
        }
        if (guard == 0)
            A[(i + k*BIG_PRIME) % (LOAD_FACTOR*edges)] = hash(j,i);
        /* If all checks passed, count the edge, print it, and update the degrees. */
        if (guard == 0) {
            count++;
            printf("%d\t%d\n", i, j);
            degree[i]++;
            degree[j]++;
        }
    }
}
The problem is that the final edge to be selected might have to be a self-loop: this happens when N - 1 vertices already have degree 3 and the remaining vertex has degree 1. In that case the algorithm can never terminate. Moreover, I'm not entirely convinced that the probability is uniform.
There's probably much to improve in my code, but can you suggest a better algorithm to implement?

Assume N is even. (Otherwise there cannot be a cubic graph on N vertices).
You can do the following:
Take 3N points and divide them into N groups of 3 points each.
Now pair up these 3N points at random (note that 3N is even), i.e. marry points off two at a time to form 3N/2 marriages.
If there is a pairing between group i and group j, create an edge between i and j. This gives a graph on N vertices.
If this random pairing does not create any multiple edges or loops, you have a cubic graph.
If not, try again. This runs in expected linear time and generates a uniform distribution.
Note: all cubic graphs on N vertices are generated by this method (responding to Hamish's comments).
To see this:
Let G be a cubic graph on N vertices.
Let the vertices be, 1, 2, ...N.
Let the three neighbours of j be A(j), B(j) and C(j).
For each j, construct the group of ordered pairs { (j, A(j)), (j, B(j)), (j, C(j)) }.
This gives us 3N ordered pairs. We pair them up: (u,v) is paired with (v,u).
Thus any cubic graph corresponds to a pairing and vice versa...
More information on this algorithm and faster algorithms can be found here: Generating Random Regular Graphs Quickly.
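For concreteness, here is a minimal C sketch of the pairing procedure just described (the function names and the adjacency-matrix rejection test are my own, and rand() is used naively, ignoring modulo bias):

#include <stdio.h>
#include <stdlib.h>

/* One attempt of the pairing: returns 1 and fills u[], v[] with the 3n/2 edges
   if the random pairing happens to yield a simple graph, 0 otherwise. */
int try_pairing(int n, int *u, int *v) {
    int m = 3 * n;                                 /* number of points             */
    int *pts = malloc(m * sizeof(int));
    char *adj = calloc((size_t)n * n, 1);
    int ok = 1, e = 0;
    for (int i = 0; i < m; i++) pts[i] = i / 3;    /* point i belongs to group i/3 */
    for (int i = m - 1; i > 0; i--) {              /* random matching via a shuffle */
        int j = rand() % (i + 1);
        int t = pts[i]; pts[i] = pts[j]; pts[j] = t;
    }
    for (int i = 0; i < m; i += 2) {               /* pair up consecutive points   */
        int a = pts[i], b = pts[i + 1];
        if (a == b || adj[a * n + b]) { ok = 0; break; }  /* loop or multiple edge */
        adj[a * n + b] = adj[b * n + a] = 1;
        u[e] = a; v[e] = b; e++;
    }
    free(pts); free(adj);
    return ok;
}

void random_cubic(int n) {                         /* n must be even               */
    int *u = malloc(3 * n / 2 * sizeof(int));
    int *v = malloc(3 * n / 2 * sizeof(int));
    while (!try_pairing(n, u, v))
        ;                                          /* expected constant number of retries */
    for (int i = 0; i < 3 * n / 2; i++)
        printf("%d\t%d\n", u[i], v[i]);
    free(u); free(v);
}

int main(void) {
    srand(12345);
    random_cubic(8);
    return 0;
}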

Warning: I make a lot of intuitive-but-maybe-wrong claims in this answer. You should definitely prove them if you intend to use this idea.
Enumerating Cubic Graphs
When dealing with a random choice, a good starting point is to figure out how to enumerate over all of your possible elements. This might reveal some of the structure, and lead you to an algorithm.
Here is my strategy for enumerating cubic graphs: pick the first vertex and iterate over all possible choices of three adjacent vertices. During those iterations, recurse on the next vertex, keeping track of how many more edges each vertex needs for its degree to reach 3. Continue in that fashion until the lowest level is reached. Now you have your first cubic graph. Undo the recently added edges and continue to the next possibility until there are none left. There are a few implementation details you need to consider, but they are generally straightforward.
Generalize Enumeration into Choice
Once you can enumerate all the elements, it is trivial to make a random choice. For example, you can scan the sequence once to compute its size, pick a random number in [0, size), then scan the sequence again to get the element at that offset. This is incredibly inefficient, taking time at LEAST proportional to the number of cubic graphs (which grows very quickly with n), but it works.
Sacrifice Uniform Probability for Efficiency
The obvious speed-up here is to make random edge choices at each level, instead of iterating over each possibility. Unfortunately, this will favor some graphs because of how your early choices affect the availability of later choices. Taking into account the need to track the remaining free vertices, you should be able to achieve O(n log n) time and O(n) space. Significantly better than the enumerating algorithm.
...
It's probably possible to do better. Probably a lot better. But this should get you started.

Another term for cubic graph is 3-regular graph or trivalent graph.
Your problem needs a little more clarification because "the number of cubic graphs" could mean the number of cubic graphs on 2n nodes that are non-isomorphic to one another or the number of (non-isomorphic) cubic graphs on 2n labelled nodes. The former is given by integer sequence A005638, and it is likely a non-trivial problem to uniformly pick a random isomorphism class of cubic graphs efficiently (i.e. not listing them all out and then picking one class). The latter is given by A002829.
There is an article on Wikipedia about random regular graphs that you should take a look at.

Related

Optimizing a method to find the most traversed edge given an adjacency graph and several traversals

I am given N vertices of a tree and its corresponding adjacency graph represented as an N by N array, adjGraph[N][N]. For example, if (1,3) is an edge, then adjGraph[0][2] == 1. Otherwise, adjGraph[i][j] == 0 for (i,j)s that are not edges.
I'm given a series of inputs in the form of:
1 5
which denote that a path has been traversed starting from vertex 1 to vertex 5. I wish to find the edge that was traversed the most times, along with the number of times it was traversed. To do this, I have another N by N array, numPass[N][N], whose elements I first initialize to 0, then increment by 1 every time I identify a path that includes an edge that matches its index. For example, if path (2,4) included edges (2,3) and (3,4), I would increment numPass[1][2] and numPass[2][3] by 1 each.
As I understand it, the main issue to tackle is that the inputs only give information of the starting vertex and ending vertex, and it is up to me to figure out which edges connect the two. Since the given graph is a tree, any path between two vertices is unique. Therefore, I assumed that given the index of the ending vertex for any input path, I would be able to recursively backtrack which edges were connected.
The following is the function code that I have tried to implement with that idea in mind:
// find the (unique) path of edges from vertices x to y
// and increment edges crossed during such a path
void findPath(int x, int y, int N, int adjGraph[][N], int numPass[][N]) {
int temp;
// if the path is a single edge, case is trivial
if (adjGraph[x][y] == 1) {
numPass[x][y] += 1;
return;
}
// otherwise, find path by backtracking from y
backtrack: while (1) {
temp = y-1;
if (adjGraph[temp][y] == 1) {
numPass[temp][y] += 1;
break;
}
}
if (adjGraph[x][temp] == 1) {
numPass[x][temp] += 1;
return;
} else {
y = temp;
goto backtrack;
}
However, the problem is that while my code works fine for small inputs, it runs out of memory for large inputs, since I have a required memory limit of 128MB and time limit of 1 second. The ranges for the inputs are up to 222222 vertices, and 222222 input paths.
How could I optimize my method to satisfy such large inputs?
Get rid of the adjacency matrix (it uses O(N^2) space). Use adjacency lists instead.
Use a more efficient algorithm. Let's make the tree rooted. For a path from a to b we can add 1 at a, add 1 at b, and subtract 2 at their lca (it is easy to see that this way a one is added to every edge on this path, and only to those edges).
After processing all paths, the number of paths going through the edge (v, parent(v)) is just the sum of these values over the subtree of v.
If we use an efficient algorithm to compute the lca, this solution works in O(N + Q * log N), where Q is the number of paths. That looks good enough for these constraints (we could actually do even better by using more complex and more efficient algorithms for finding the lca, but I don't think it's necessary here).
Note: lca means lowest common ancestor.
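For illustration, here is a minimal C sketch of this technique (the helper names and the naive parent-walking LCA are my own; a real solution for 222222 vertices would use an iterative DFS and an O(log N) LCA such as binary lifting):

#include <stdio.h>

#define MAXN 1005

/* adjacency lists stored in flat arrays */
int n, ne;
int head[MAXN], nxt[2 * MAXN], dst[2 * MAXN];
int par[MAXN], dep[MAXN], cnt[MAXN];
int best;                                   /* max traversals over all edges      */

void add_edge(int u, int v) {
    dst[ne] = v; nxt[ne] = head[u]; head[u] = ne++;
    dst[ne] = u; nxt[ne] = head[v]; head[v] = ne++;
}

void root_tree(int v, int p, int d) {       /* DFS that records parent and depth  */
    par[v] = p; dep[v] = d;
    for (int e = head[v]; e != -1; e = nxt[e])
        if (dst[e] != p) root_tree(dst[e], v, d + 1);
}

int lca(int a, int b) {                     /* naive LCA: walk the deeper node up */
    while (dep[a] > dep[b]) a = par[a];
    while (dep[b] > dep[a]) b = par[b];
    while (a != b) { a = par[a]; b = par[b]; }
    return a;
}

void add_path(int a, int b) {               /* register one traversal a..b        */
    cnt[a]++; cnt[b]++; cnt[lca(a, b)] -= 2;
}

int subtree(int v, int p) {                 /* subtree sum of cnt = number of     */
    int s = cnt[v];                         /* traversals of the edge (v, p)      */
    for (int e = head[v]; e != -1; e = nxt[e])
        if (dst[e] != p) s += subtree(dst[e], v);
    if (p != -1 && s > best) best = s;
    return s;
}

int main(void) {
    n = 5;                                  /* tiny demo: the path 0-1-2-3-4      */
    for (int i = 0; i < n; i++) head[i] = -1;
    for (int i = 0; i + 1 < n; i++) add_edge(i, i + 1);
    root_tree(0, -1, 0);
    add_path(0, 4);                         /* traversal from vertex 0 to 4       */
    add_path(1, 3);                         /* traversal from vertex 1 to 3       */
    subtree(0, -1);
    printf("most traversed edge is crossed %d times\n", best);   /* prints 2      */
    return 0;
}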

Generating a connected graph and checking if it has eulerian cycle

So, I wanted to have some fun with graphs and now it's driving me crazy.
First, I generate a connected graph with a given number of edges. This is the easy part, which became my curse. Basically, it works as intended, but the results I'm getting are quite bizarre (well, maybe they're not, and I'm the issue here). The algorithm for generating the graph is fairly simple.
I have two arrays, one of them is filled with numbers from 0 to n - 1, and the other is empty.
At the beginning I shuffle the first one and move its last element to the empty one.
Then, in a loop, I'm creating an edge between the last element of the first array and a random element from the second one and after that I, again, move the last element from the first array to the other one.
After that part is done, I have to create random edges between the vertices until I get as many as I need. This is, again, very easy. I just pick two random numbers in the range from 0 to n - 1 and, if there is no edge between those vertices, I create one.
This is the code:
void generate(int n, double d) {
    initMatrix(n); // <- creates an adjacency matrix n x n, filled with 0s
    int *array1 = malloc(n * sizeof(int));
    int *array2 = malloc(n * sizeof(int));
    int j = n - 1, k = 0;
    for (int i = 0; i < n; ++i) {
        array1[i] = i;
        array2[i] = 0;
    }
    shuffle(array1, 0, n); // <- Fisher-Yates shuffle
    array2[k++] = array1[j--];
    int edges = d * n * (n - 1) * .5;
    if (edges % 2) {
        ++edges;
    }
    while (j >= 0) {
        int r = rand() % k;
        createEdge(array1[j], array2[r]);
        array2[k++] = array1[j--];
        --edges;
    }
    free(array1);
    free(array2);
    while (edges) {
        int a = rand() % n;
        int b = rand() % n;
        if (a == b || checkEdge(a, b)) {
            continue;
        }
        createEdge(a, b);
        --edges;
    }
}
Now, if I print it out, it's a fine graph. Then I want to find a Hamiltonian cycle. This part works. Then I get to my bane - the Eulerian cycle. What's the problem?
Well, first I check if all vertices have even degree. And they do not. Always. Every single time, unless I choose to generate a complete graph.
I now feel destroyed by my own code. Is something wrong? Or is it supposed to be like this? I knew that Eulerian circuits would be rare, but not that rare. Please, help.
Let's analyze the probability of having an Eulerian cycle, and for simplicity let's do it for all graphs with n vertices, regardless of the number of edges.
Given a graph G of size n, choose one arbitrary vertex v. The probability of its degree being even is roughly 1/2 (assuming that for each u1, u2: P((v,u1) exists) = P((v,u2) exists)).
Now, remove v from G, and create a new graph G' with n-1 vertices, and without all edges connected to v.
Similarly, for any arbitrary vertex v' in G': if (v,v') was an edge in G, we need d(v') to be odd; otherwise, we need d(v') to be even (both in G'). Either way, the probability of it is still roughly ~1/2 (independent of the previous degree of v).
....
For the ith round, let #(v) be the number of edges connected to v that have been discarded so far. If #(v) is odd, the probability of its current degree being odd is ~1/2, and if #(v) is even, the probability of its current degree being even is also ~1/2, so we remain with a probability of ~1/2.
We can now see how it works and write a recurrence formula for the probability of the graph having an Eulerian cycle:
P(n) ~= 1/2*P(n-1)
P(1) = 1
This is going to give us P(n) ~= 2^-n, which is very unlikely for reasonable n.
Note, 1/2 is just a rough estimate (and is correct as n -> infinity); the probability is in fact a bit higher, but it is still exponentially small in n - which makes an Eulerian cycle very unlikely for reasonable-size graphs.
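As a quick sanity check of that estimate (my own experiment, not from the answer): generate G(n, 1/2) random graphs and count how often every vertex has even degree (the degree condition for an Eulerian cycle, ignoring connectivity). The observed fraction should be close to 2^-(n-1):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 12, trials = 200000, hits = 0;      /* n must stay <= 32 for deg[]     */
    srand(1);
    for (int t = 0; t < trials; t++) {
        int deg[32] = {0};
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (rand() % 2) { deg[i]++; deg[j]++; }   /* edge present with prob 1/2 */
        int all_even = 1;
        for (int i = 0; i < n; i++)
            if (deg[i] % 2) { all_even = 0; break; }
        hits += all_even;
    }
    printf("all-even fraction: %g   (2^-(n-1) = %g)\n",
           (double)hits / trials, 1.0 / (1 << (n - 1)));
    return 0;
}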

Randomly permuting an array [duplicate]

The famous Fisher-Yates shuffle algorithm can be used to randomly permute an array A of length N:
For k = 1 to N
Pick a random integer j from k to N
Swap A[k] and A[j]
A common mistake that I've been told over and over again not to make is this:
For k = 1 to N
Pick a random integer j from 1 to N
Swap A[k] and A[j]
That is, instead of picking a random integer from k to N, you pick a random integer from 1 to N.
What happens if you make this mistake? I know that the resulting permutation isn't uniformly distributed, but I don't know what guarantees there are on what the resulting distribution will be. In particular, does anyone have an expression for the probability distributions over the final positions of the elements?
An Empirical Approach.
Let's implement the erroneous algorithm in Mathematica:
p = 10; (* Range *)
s = {}
For[l = 1, l <= 30000, l++, (*Iterations*)
a = Range[p];
For[k = 1, k <= p, k++,
i = RandomInteger[{1, p}];
temp = a[[k]];
a[[k]] = a[[i]];
a[[i]] = temp
];
AppendTo[s, a];
]
Now get the number of times each integer is in each position:
r = SortBy[#, #[[1]] &] & /@ Tally /@ Transpose[s]
Let's take three positions in the resulting arrays and plot the frequency distribution for each integer in that position: position 1, position 5 (the middle) and position 10 (the last), then the distributions for all positions plotted together, and finally a version with better statistics over 8 positions (plots omitted here).
Some observations:
For all positions the probability of "1" is the same (1/n).
The probability matrix is symmetric with respect to the big anti-diagonal.
So, the probability for any number in the last position is also uniform (1/n).
You can see those properties in the plots: all lines start from the same point (first property) and there is a final horizontal line (third property).
The second property can be seen from a matrix representation of the experiment, where the rows are the positions, the columns are the occupant number, and the color represents the experimental probability (shown for a 100x100 matrix; image omitted).
Edit
Just for fun, I calculated the exact formula for the second diagonal element (the first is 1/n). The rest can be done, but it's a lot of work.
h[n_] := (n-1)/n^2 + (n-1)^(n-2) n^(-n)
Values verified from n=3 to 6 ( {8/27, 57/256, 564/3125, 7105/46656} )
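If you want to check the formula outside Mathematica, a small C snippet (mine) that evaluates the same expression in floating point reproduces those fractions:

/* Evaluates h[n] = (n-1)/n^2 + (n-1)^(n-2) * n^(-n) and compares it with the
   exact fractions listed above (8/27, 57/256, 564/3125, 7105/46656). */
#include <stdio.h>
#include <math.h>

int main(void) {
    double exact[] = {8.0/27, 57.0/256, 564.0/3125, 7105.0/46656};
    for (int n = 3; n <= 6; n++) {
        double h = (n - 1.0) / (n * n) + pow(n - 1.0, n - 2) * pow(n, -n);
        printf("n=%d  h=%.6f  exact=%.6f\n", n, h, exact[n - 3]);
    }
    return 0;
}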
Edit
Working out a little the general explicit calculation in @wnoise's answer, we can get a little more info.
Replacing 1/n by p[n], so the calculations are held unevaluated, we get for example the first part of the matrix for n = 7 (matrix omitted here),
which, after comparing with results for other values of n, lets us identify some known integer sequences in the matrix:
{{ 1/n, 1/n , ...},
{... .., A007318, ....},
{... .., ... ..., ..},
... ....,
{A129687, ... ... ... ... ... ... ..},
{A131084, A028326 ... ... ... ... ..},
{A028326, A131084 , A129687 ... ....}}
You may find those sequences (in some cases with different signs) in the wonderful http://oeis.org/
Solving the general problem is more difficult, but I hope this is a start
The "common mistake" you mention is shuffling by random transpositions. This problem was studied in full detail by Diaconis and Shahshahani in Generating a random permutation with random transpositions (1981). They do a complete analysis of stopping times and convergence to uniformity. If you cannot get a link to the paper, then please send me an e-mail and I can forward you a copy. It's actually a fun read (as are most of Persi Diaconis's papers).
If the array has repeated entries, then the problem is slightly different. As a shameless plug, this more general problem is addressed by myself, Diaconis and Soundararajan in Appendix B of A Rule of Thumb for Riffle Shuffling (2011).
Let's say
a = 1/N
b = 1-a
Bi(k) is the probability matrix after i swaps for the kth element, i.e. the answer to the question "where is k after i swaps?". For example B0(3) = (0 0 1 0 ... 0) and B1(3) = (a 0 b 0 ... 0). What you want is BN(k) for every k.
Ki is an NxN matrix with 1s in the i-th column and i-th row, zeroes everywhere else.
Ii is the identity matrix but with the element at (i, i) zeroed.
Ai is then a·Ki + b·Ii.
Then Bi(k) = Bi-1(k)·Ai, so BN(k) = B0(k)·A1·A2·...·AN.
But because the B0(k) for k = 1..N stacked together form the identity matrix, the probability that any given element i will at the end be at position j is given by the matrix element (i,j) of the product A1·A2·...·AN.
For example, for N = 4 the product can be written out explicitly, and a density plot for N = 500 (color levels are 100*probability) shows the pattern clearly (matrices and plot omitted).
The pattern is the same for all N>2:
The most probable ending position for k-th element is k-1.
The least probable ending position is k for k < N*ln(2), position 1 otherwise
I knew I had seen this question before...
" why does this simple shuffle algorithm produce biased results? what is a simple reason? " has a lot of good stuff in the answers, especially a link to a blog by Jeff Atwood on Coding Horror.
As you may have already guessed, based on the answer by #belisarius, the exact distribution is highly dependent on the number of elements to be shuffled. Here's Atwood's plot for a 6-element deck:
What a lovely question! I wish I had a full answer.
Fisher-Yates is nice to analyze because once it decides on the first element, it leaves it alone. The biased one can repeatedly swap an element in and out of any place.
We can analyze this the same way we would a Markov chain, by describing the actions as stochastic transition matrices acting linearly on probability distributions. Most elements get left alone, so the diagonal is usually (n-1)/n. On pass k, when they don't get left alone, they get swapped with element k (or a random element if they are element k). This is 1/n in either row or column k. The element in both row and column k is also 1/n. It's easy enough to multiply these matrices together for k going from 1 to n.
We do know that the element in last place will be equally likely to have originally been anywhere, because the last pass swaps the last place equally likely with any other. Similarly, the first element will be equally likely to be placed anywhere. This symmetry is because the transpose reverses the order of matrix multiplication. In fact, the matrix is symmetric in the sense that row i is the same as column (n+1 - i). Beyond that, the numbers don't show much apparent pattern. These exact solutions do show agreement with the simulations run by belisarius: in slot i, the probability of getting j decreases as j rises towards i, reaching its lowest value at i-1, then jumping up to its highest value at i, and decreasing until j reaches n.
In Mathematica I generated each step with
step[k_, n_] := Normal[SparseArray[{{k, i_} -> 1/n,
{j_, k} -> 1/n, {i_, i_} -> (n - 1)/n} , {n, n}]]
(I haven't found it documented anywhere, but the first matching rule is used.)
The final transition matrix can be calculated with:
Fold[Dot, IdentityMatrix[n], Table[step[m, n], {m, s}]]
ListDensityPlot is a useful visualization tool.
Edit (by belisarius)
Just a confirmation. The following code gives the same matrix as in @Eelvex's answer:
step[k_, n_] := Normal[SparseArray[{{k, i_} -> (1/n),
{j_, k} -> (1/n), {i_, i_} -> ((n - 1)/n)}, {n, n}]];
r[n_, s_] := Fold[Dot, IdentityMatrix[n], Table[step[m, n], {m, s}]];
Last@Table[r[4, i], {i, 1, 4}] // MatrixForm
Wikipedia's page on the Fisher-Yates shuffle has a description and example of exactly what will happen in that case.
You can compute the distribution using stochastic matrices. Let the matrix A(i,j) describe the probability of the card originally at position i ending up in position j. Then the kth swap has a matrix Ak given by Ak(i,j) = 1/N if i == k or j == k, (the card in position k can end up anywhere and any card can end up at position k with equal probability), Ak(i,i) = (N - 1)/N for all i != k (every other card will stay in the same place with probability (N-1)/N) and all other elements zero.
The result of the complete shuffle is then given by the product of the matrices AN ... A1.
I expect you're looking for an algebraic description of the probabilities; you can get one by expanding out the above matrix product, but I imagine it will be fairly complex!
UPDATE: I just spotted wnoise's equivalent answer above! oops...
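To reproduce these matrices without Mathematica, here is a small C sketch (my own, not from any of the answers): it builds each pass's transition matrix Ak exactly as defined above and multiplies them in pass order, so for N = 4 its output should match the matrix the Mathematica snippets compute.

#include <stdio.h>

#define N 4

int main(void) {
    double M[N][N] = {0}, A[N][N], T[N][N];
    for (int i = 0; i < N; i++) M[i][i] = 1.0;     /* start with the identity        */

    for (int k = 0; k < N; k++) {                  /* pass k swaps slot k with a random slot */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                if (i == k || j == k) A[i][j] = 1.0 / N;
                else if (i == j)      A[i][j] = (N - 1.0) / N;
                else                  A[i][j] = 0.0;
            }
        for (int i = 0; i < N; i++)                /* T = M * A                      */
            for (int j = 0; j < N; j++) {
                T[i][j] = 0.0;
                for (int m = 0; m < N; m++) T[i][j] += M[i][m] * A[m][j];
            }
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) M[i][j] = T[i][j];
    }

    for (int i = 0; i < N; i++) {                  /* M[i][j] = P(start at i, end at j) */
        for (int j = 0; j < N; j++) printf("%.4f ", M[i][j]);
        printf("\n");
    }
    return 0;
}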
I've looked into this further, and it turns out that this distribution has been studied at length. The reason it's of interest is because this "broken" algorithm is (or was) used in the RSA chip system.
In Shuffling by semi-random transpositions, Elchanan Mossel, Yuval Peres, and Alistair Sinclair study this and a more general class of shuffles. The upshot of that paper appears to be that it takes log(n) broken shuffles to achieve near random distribution.
In The bias of three pseudorandom shuffles (Aequationes Mathematicae, 22, 1981, 268-292), Ethan Bolker and David Robbins analyze this shuffle and determine that the total variation distance to uniformity after a single pass is 1, indicating that it is not very random at all. They give asymptotic analyses as well.
Finally, Laurent Saloff-Coste and Jessica Zuniga found a nice upper bound in their study of inhomogeneous Markov chains.
This question is begging for an interactive visual matrix diagram analysis of the broken shuffle mentioned. Such a tool is on the page Will It Shuffle? - Why random comparators are bad by Mike Bostock.
Bostock has put together an excellent tool that analyzes random comparators. In the dropdown on that page, choose naïve swap (random ↦ random) to see the broken algorithm and the pattern it produces.
His page is informative as it allows one to see the immediate effects a change in logic has on the shuffled data. For example:
This matrix diagram using a non-uniform and very-biased shuffle is produced using a naïve swap (we pick from "1 to N") with code like this:
function shuffle(array) {
  var n = array.length, i = -1, j, t;
  while (++i < n) {
    j = Math.floor(Math.random() * n);
    t = array[j];
    array[j] = array[i];
    array[i] = t;
  }
}
But if we implement a non-biased shuffle, where we pick from "k to N" we should see a diagram like this:
where the distribution is uniform, and is produced from code such as:
function FisherYatesDurstenfeldKnuthshuffle( array ) {
  var pickIndex, arrayPosition = array.length;
  while( --arrayPosition ) {
    pickIndex = Math.floor( Math.random() * ( arrayPosition + 1 ) );
    array[ pickIndex ] = [ array[ arrayPosition ], array[ arrayPosition ] = array[ pickIndex ] ][ 0 ];
  }
}
The excellent answers given so far are concentrating on the distribution, but you have asked also "What happens if you make this mistake?" - which is what I haven't seen answered yet, so I'll give an explanation on this:
The Knuth-Fisher-Yates shuffle algorithm picks 1 out of n elements, then 1 out of n-1 remaining elements and so forth.
You can implement it with two arrays a1 and a2, where you remove one element from a1 and insert it into a2, but the algorithm does it in place (which means that it needs only one array), as is explained very well here (Google: "Shuffling Algorithms Fisher-Yates DataGenetics").
If you don't remove the elements, they can be randomly chosen again, which produces the biased randomness. This is exactly what the 2nd example you are describing does. The first example, the Knuth-Fisher-Yates algorithm, uses a cursor variable running from k to N, which remembers which elements have already been taken, hence avoiding picking elements more than once.
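For completeness, here is a small C sketch of that "two arrays" view (my own illustration, not from the answer): each step removes a uniformly random remaining element from a1 and appends it to a2, which is exactly what keeps the result unbiased (the usual rand() modulo bias is ignored here).

#include <stdio.h>
#include <stdlib.h>

void shuffle_by_removal(int *a1, int *a2, int n) {
    int remaining = n;
    for (int k = 0; k < n; k++) {
        int j = rand() % remaining;        /* pick one of the remaining elements */
        a2[k] = a1[j];                     /* "insert" it into a2                */
        a1[j] = a1[remaining - 1];         /* "remove" it from a1                */
        remaining--;
    }
}

int main(void) {
    int a1[8], a2[8];
    srand(42);
    for (int i = 0; i < 8; i++) a1[i] = i;
    shuffle_by_removal(a1, a2, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a2[i]);
    printf("\n");
    return 0;
}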

Dividing a graph in three parts such the maximum of the sum of weights of the three parts is minimized

I want to divide a graph with N weighted vertices and N-1 edges into three parts such that the maximum of the sums of the weights of the vertices in each of the parts is minimized. This is the actual problem I am trying to solve: http://www.iarcs.org.in/inoi/contests/jan2006/Advanced-1.php
I considered the following method
/* Edges are stored in an array E, and also in an adjacency matrix for depth first search.
   Every edge in E has two attributes a and b, which are the endpoints of the edge. */
min-max = infinity
for i -> 0 to length(E):
    for j -> i+1 to length(E):
        /* Remove the edges E[i] and E[j], then call depth first search from their endpoints.
           The depth first search returns the sum of weights of the vertices it visits;
           we keep track of the maximum weight returned by dfs. */
        Adjacency-matrix[E[i].a][E[i].b] = 0;
        Adjacency-matrix[E[j].a][E[j].b] = 0;
        max = 0
        temp = dfs(E[i].a)
        if temp > max then max = temp
        temp = dfs(E[i].b)
        if temp > max then max = temp
        temp = dfs(E[j].a)
        if temp > max then max = temp
        temp = dfs(E[j].b)
        if temp > max then max = temp
        if max < min-max
            min-max = max
        Adjacency-matrix[E[i].a][E[i].b] = 1;
        Adjacency-matrix[E[j].a][E[j].b] = 1;
        /* The depth first search is called four times, but one call terminates immediately
           if we keep track of the visited vertices, because there are only three components. */
/* After the outer loop terminates, what we have in min-max will be the answer. */
The above algorithm takes O(n^3) time: since the number of edges is n-1, the two nested loops over pairs of edges run O(n^2) times, and each dfs visits every vertex only once, which is O(n).
But the problem is that n can be up to 3000, and O(n^3) time is too slow for this problem. Is there any other method that will solve the question in the link faster than n^3?
EDIT: I implemented @BorisStrandjev's algorithm in C. It gave me a correct answer for the test input in the question, but for all other test inputs it gives a wrong answer. Here is a link to my code on ideone: http://ideone.com/67GSa2. The output here should be 390 but the program prints 395.
I am trying to find whether I have made any mistake in my code, but I don't see any. Can anyone please help me here? The answers my code gives are very close to the correct answer, so is there anything more to the algorithm?
EDIT 2: In the following graph (image omitted):
@BorisStrandjev, your algorithm will choose i as 1 and j as 2 in one of the iterations, but then the third part (3,4) is invalid.
EDIT 3
I finally found the mistake in my code: instead of VAL[i] storing the sum for i and all its descendants, it stored the sum for i and all its ancestors. With that fixed it solves the above example correctly. Thanks to all of you for your help.
Yes, there is a faster method.
I will need a few auxiliary structures, and I will leave their creation and correct initialization to you.
First of all, plant the tree - that is, make the graph directed (root it). Calculate the array VAL[i] for each vertex - the amount of passengers for the vertex and all its descendants (remember we planted the tree, so now this makes sense). Also calculate the boolean matrix desc[i][j] that will be true if vertex i is a descendant of vertex j. Then do the following:
best_val = n
for i in 1...n
    for j in i + 1...n
        val_of_split = 0
        val_of_split_i = VAL[i]
        val_of_split_j = VAL[j]
        if desc[i][j] val_of_split_j -= VAL[i] // subtract all the nodes that go to i
        if desc[j][i] val_of_split_i -= VAL[j]
        val_of_split = max(val_of_split, val_of_split_i)
        val_of_split = max(val_of_split, val_of_split_j)
        val_of_split = max(val_of_split, n - val_of_split_i - val_of_split_j)
        best_val = min(best_val, val_of_split)
After the execution of this cycle the answer will be in best_val. The algorithm is clearly O(n^2); you just need to figure out how to calculate desc[i][j] and VAL[i] within that complexity, but it is not such a complex task - I think you can figure it out yourself.
EDIT: Here I will include the pseudocode for the whole problem. I deliberately did not include it until the OP had tried to solve the problem by himself:
int p[n] := // initialized from the input - the price of the node itself
adjacency_list neighbors := // initialized to store the graph adjacency list
int VAL[n] := { 0 } // the price of a node and all its descendants
bool desc[n][n] := { false } // desc[i][j] - whether i is a descendant of j
bool visited[n] := { false } // whether the dfs has visited the node already
stack parents := { empty-stack } // the stack of nodes visited during dfs

dfs(currentVertex) {
    VAL[currentVertex] = p[currentVertex]
    parents.push(currentVertex)
    visited[currentVertex] = true
    for vertex : parents // a bit extended stack definition supporting iteration
        desc[currentVertex][vertex] = true
    for vertex : neighbors[currentVertex]
        if visited[vertex] continue
        dfs(vertex)
        VAL[currentVertex] += VAL[vertex]
    parents.pop
}

calculate_best() {
    dfs(0)
    best_val = n
    for i in 0...(n - 1)
        for j in i + 1...(n - 1)
            val_of_split = 0
            val_of_split_i = VAL[i]
            val_of_split_j = VAL[j]
            if desc[i][j] val_of_split_j -= VAL[i]
            if desc[j][i] val_of_split_i -= VAL[j]
            val_of_split = max(val_of_split, val_of_split_i)
            val_of_split = max(val_of_split, val_of_split_j)
            val_of_split = max(val_of_split, n - val_of_split_i - val_of_split_j)
            best_val = min(best_val, val_of_split)
    return best_val
}
And the best split will be {descendants of i} \ {descendants of j}, {descendants of j} \ {descendants of i} and {all nodes} \ {descendants of i} U {descendants of j}.
You can use a combination of Binary Search & DFS to solve this problem.
Here's how I would proceed:
Calculate the total weight of the graph, and also find the heaviest edge in the graph. Let them be Sum, MaxEdge resp.
Now we have to run a binary search between this range: [maxEdge, Sum].
In each search iteration, middle = (start + end) / 2. Now, pick a start node and perform a DFS s.t. the sum of edges traversed in the sub-graph is as close to 'middle' as possible, but keep this sum less than middle. This will be one sub-graph. In the same iteration, now pick another node which is unmarked by the previous DFS. Perform another DFS in the same way. Likewise, do it once more, because we need to break the graph into 3 parts.
The min. weight amongst the 3 sub-graphs calculated above is the solution from this iteration.
Keep running this binary search until its end variable exceeds its start variable.
The max of all the mins obtained in step 4 is your answer.
You can do extra book-keeping in order to get the 3-sub-graphs.
Order complexity : N log(Sum) where Sum is the total weight of the graph.
I just noticed that you have talked about weighted vertices, and not edges. In that case, just treat edges as vertices in my solution. It should still work.
EDIT 4: THIS WON'T WORK!!!
If you process the nodes in the link in the order 3,4,5,6,1,2, after processing 6, (I think) you'll have the following sets: {{3,4},{5},{6}}, {{3,4,5},{6}}, {{3,4,5,6}}, with no simple way to split them up again.
I'm just leaving this answer here in case anyone else was thinking of a DP algorithm.
It might work to look at all the already processed neighbours in the DP algorithm.
.
I'm thinking a Dynamic Programming algorithm, where the matrix is (item x number of sets)
n = number of sets
k = number of vertices
// row 0 represents 0 elements included
A[0, 0] = 0
for (s = 1:n)
    A[0, s] = INFINITY
for (i = 1:k)
    for (s = 0:n)
        B = A[i-1, s] with i inserted into the minimum one of its neighbouring sets
        A[i, s] = min(A[i-1, s-1], B) // A[i-1, s-1] = INFINITY if s-1 < 0
EDIT: Explanation of DP:
This is a reasonably basic Dynamic Programming algorithm. If you need a better explanation, you should read up on it some more, it's a very powerful tool.
A is a matrix. The row i represents a graph with all vertices up to i included. The column c represents the solution with number of sets = c.
So A[2,3] would give the best result for a graph containing item 0, item 1 and item 2, with 3 sets, thus each in its own set.
You then start at item 0, calculate the row for each number of sets (the only valid one is number of sets = 1), then do item 1 with the above formula, then item 2, etc.
A[a, b] is then the optimal solution with all vertices up to a included and b number of sets. So you'll just return A[k, n] (the one that has all vertices included and the target number of sets).
EDIT 2: Complexity
O(k*n*b) where b is the branching factor of a node (assuming you use an adjacency list).
Since n = 3, this is O(3*k*b) = O(k*b).
EDIT 3: Deciding which neighbouring set a vertex should be added to
Keep n arrays of k elements each in a union find structure, with each set pointing to the sum for that set. For each new row, to determine which sets a vertex can be added to, we use its adjacency list and look-up the set and value of each of its neighbours. Once we find the best option, we can just add that element to the applicable set and increment its sum by the added element's value.
You'll notice the algorithm only looks down 1 row, so we only need to keep track of the last row (not store the whole matrix), and can modify the previous row's n arrays rather than copying them.

How can I find a number which occurs an odd number of times in a SORTED array in O(n) time?

I have a question and I tried to think over it again and again... but got nothing so posting the question here. Maybe I could get some view-point of others, to try and make it work...
The question is: we are given a SORTED array, which consists of a collection of values occurring an EVEN number of times, except one, which occurs an ODD number of times. We need to find the solution in log n time.
It is easy to find the solution in O(n) time, but it looks pretty tricky to perform in log n time.
Theorem: Every deterministic algorithm for this problem probes Ω(log² n) memory locations in the worst case.
Proof (completely rewritten in a more formal style):
Let k > 0 be an odd integer and let n = k². We describe an adversary that forces (log₂(k + 1))² = Ω(log² n) probes.
We call the maximal subsequences of identical elements groups. The adversary's possible inputs consist of k length-k segments x1 x2 … xk. For each segment xj, there exists an integer bj ∈ [0, k] such that xj consists of bj copies of j - 1 followed by k - bj copies of j. Each group overlaps at most two segments, and each segment overlaps at most two groups.
Group boundaries
| | | | |
0 0 1 1 1 2 2 3 3
| | | |
Segment boundaries
Wherever there is an increase of two, we assume a double boundary by convention.
Group boundaries
| || | |
0 0 0 2 2 2 2 3 3
Claim: The location of the jth group boundary (1 ≤ j ≤ k) is uniquely determined by the segment xj.
Proof: It's just after the ((j - 1) k + bj)th memory location, and xj uniquely determines bj. //
We say that the algorithm has observed the jth group boundary in case the results of its probes of xj uniquely determine xj. By convention, the beginning and the end of the input are always observed. It is possible for the algorithm to uniquely determine the location of a group boundary without observing it.
Group boundaries
| X | | |
0 0 ? 1 2 2 3 3 3
| | | |
Segment boundaries
Given only 0 0 ?, the algorithm cannot tell for sure whether ? is a 0 or a 1. In context, however, ? must be a 1, as otherwise there would be three odd groups, and the group boundary at X can be inferred. These inferences could be problematic for the adversary, but it turns out that they can be made only after the group boundary in question is "irrelevant".
Claim: At any given point during the algorithm's execution, consider the set of group boundaries that it has observed. Exactly one consecutive pair is at odd distance, and the odd group lies between them.
Proof: Every other consecutive pair bounds only even groups. //
Define the odd-length subsequence bounded by the special consecutive pair to be the relevant subsequence.
Claim: No group boundary in the interior of the relevant subsequence is uniquely determined. If there is at least one such boundary, then the identity of the odd group is not uniquely determined.
Proof: Without loss of generality, assume that each memory location not in the relevant subsequence has been probed and that each segment contained in the relevant subsequence has exactly one location that has not been probed. Suppose that the jth group boundary (call it B) lies in the interior of the relevant subsequence. By hypothesis, the probes to xj determine B's location up to two consecutive possibilities. We call the one at odd distance from the left observed boundary odd-left and the other odd-right. For both possibilities, we work left to right and fix the location of every remaining interior group boundary so that the group to its left is even. (We can do this because they each have two consecutive possibilities as well.) If B is at odd-left, then the group to its left is the unique odd group. If B is at odd-right, then the last group in the relevant subsequence is the unique odd group. Both are valid inputs, so the algorithm has uniquely determined neither the location of B nor the odd group. //
Example:
Observed group boundaries; relevant subsequence marked by […]
[ ] |
0 0 Y 1 1 Z 2 3 3
| | | |
Segment boundaries
Possibility #1: Y=0, Z=2
Possibility #2: Y=1, Z=2
Possibility #3: Y=1, Z=1
As a consequence of this claim, the algorithm, regardless of how it works, must narrow the relevant subsequence to one group. By definition, it therefore must observe some group boundaries. The adversary now has the simple task of keeping open as many possibilities as it can.
At any given point during the algorithm's execution, the adversary is internally committed to one possibility for each memory location outside of the relevant subsequence. At the beginning, the relevant subsequence is the entire input, so there are no initial commitments. Whenever the algorithm probes an uncommitted location of xj, the adversary must commit to one of two values: j - 1, or j. If it can avoid letting the jth boundary be observed, it chooses a value that leaves at least half of the remaining possibilities (with respect to observation). Otherwise, it chooses so as to keep at least half of the groups in the relevant interval and commits values for the others.
In this way, the adversary forces the algorithm to observe at least log₂(k + 1) group boundaries, and in observing the jth group boundary, the algorithm is forced to make at least log₂(k + 1) probes.
Extensions:
This result extends straightforwardly to randomized algorithms by randomizing the input, replacing "at best halved" (from the algorithm's point of view) with "at best halved in expectation", and applying standard concentration inequalities.
It also extends to the case where no group can be larger than s copies; in this case the lower bound is Ω(log n log s).
A sorted array suggests a binary search. We have to redefine equality and comparison. Equality simply means an odd number of elements. We can do comparison by observing the index of the first or last element of the group. The first element will be at an even index (0-based) before the odd group, and at an odd index after the odd group. We can find the first and last elements of a group using binary search. The total cost is O((log N)²).
PROOF OF O((log N)²)
T(2) = 1 //to make the summation nice
T(N) = log(N) + T(N/2) //log(N) is finding the first/last elements
For some N=2^k,
T(2^k) = (log 2^k) + T(2^(k-1))
= (log 2^k) + (log 2^(k-1)) + T(2^(k-2))
= (log 2^k) + (log 2^(k-1)) + (log 2^(k-2)) + ... + (log 2^2) + 1
= k + (k-1) + (k-2) + ... + 1
= k(k+1)/2
= (k² + k)/2
= (log(N)² + log(N))/ 2
= O(log(N)²)
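Here is a minimal C sketch of that parity-of-the-boundary idea (helper names are mine; it assumes the array really does contain exactly one value with an odd count):

#include <stdio.h>

static int first_of(const int *a, int lo, int hi, int v) {  /* first index in [lo,hi) holding v */
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] < v) lo = mid + 1; else hi = mid;
    }
    return lo;
}

static int last_of(const int *a, int lo, int hi, int v) {   /* last index in [lo,hi) holding v  */
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] <= v) lo = mid + 1; else hi = mid;
    }
    return lo - 1;
}

int odd_one(const int *a, int n) {
    int lo = 0, hi = n;                         /* invariant: the odd group lies in [lo, hi),  */
    for (;;) {                                  /* and lo is always even                        */
        int mid = lo + (hi - lo) / 2;
        int v = a[mid];
        int f = first_of(a, lo, hi, v);
        int l = last_of(a, lo, hi, v);
        if ((l - f + 1) % 2 == 1) return v;     /* this group itself is the odd one             */
        if (f % 2 == 0) lo = l + 1;             /* group starts on an even index: odd group is to the right */
        else            hi = f;                 /* otherwise it is to the left                  */
    }
}

int main(void) {
    int a[] = {1, 1, 2, 2, 2, 3, 3};
    printf("%d\n", odd_one(a, 7));              /* prints 2 */
    return 0;
}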
Look at the middle element of the array. With a couple of appropriate binary searches, you can find its first and its last appearance in the array. E.g., if the middle element is 'a', you need to find i and j as shown below:
[* * * * a a a a * * *]
^ ^
| |
| |
i j
Is j - i an even number? You are done! Otherwise (and this is the key here), the question to ask is: is i an even or an odd number? Do you see what this piece of knowledge implies? Then the rest is easy.
This answer is in support of the answer posted by "throwawayacct". He deserves the bounty. I spent some time on this question and I'm totally convinced that his proof is correct that you need Ω(log(n)^2) queries to find the number that occurs an odd number of times. I'm convinced because I ended up recreating the exact same argument after only skimming his solution.
In the solution, an adversary creates an input to make life hard for the algorithm, but also simple for a human analyzer. The input consists of k pages that each have k entries. The total number of entries is n = k^2, and it is important that O(log(k)) = O(log(n)) and Ω(log(k)) = Ω(log(n)). To make the input, the adversary makes a string of length k of the form 00...011...1, with the transition in an arbitrary position. Then each symbol in the string is expanded into a page of length k of the form aa...abb...b, where on the ith page, a=i and b=i+1. The transition on each page is also in an arbitrary position, except that the parity agrees with the symbol that the page was expanded from.
It is important to understand the "adversary method" of analyzing an algorithm's worst case. The adversary answers queries about the algorithm's input, without committing to future answers. The answers have to be consistent, and the game is over when the adversary has been pinned down enough for the algorithm to reach a conclusion.
With that background, here are some observations:
1) If you want to learn the parity of a transition in a page by making queries in that page, you have to learn the exact position of the transition and you need Ω(log(k)) queries. Any collection of queries restricts the transition point to an interval, and any interval of length more than 1 has both parities. The most efficient search for the transition in that page is a binary search.
2) The most subtle and most important point: There are two ways to determine the parity of a transition inside a specific page. You can either make enough queries in that page to find the transition, or you can infer the parity if you find the same parity in both an earlier and a later page. There is no escape from this either-or. Any set of queries restricts the transition point in each page to some interval. The only restriction on parities comes from intervals of length 1. Otherwise the transition points are free to wiggle to have any consistent parities.
3) In the adversary method, there are no lucky strikes. For instance, suppose that your first query in some page is toward one end instead of in the middle. Since the adversary hasn't committed to an answer, he's free to put the transition on the long side.
4) The end result is that you are forced to directly probe the parities in Ω(log(k)) pages, and the work for each of these subproblems is also Ω(log(k)).
5) Things are not much better with random choices than with adversarial choices. The math is more complicated, because now you can get partial statistical information, rather than a strict yes you know a parity or no you don't know it. But it makes little difference. For instance, you can give each page length k^2, so that with high probability, the first log(k) queries in each page tell you almost nothing about the parity in that page. The adversary can make random choices at the beginning and it still works.
Start at the middle of the array and walk backward until you get to a value that's different from the one at the center. Check whether the number above that boundary is at an odd or even index. If it's odd, then the number occurring an odd number of times is to the left, so repeat your search between the beginning and the boundary you found. If it's even, then the number occurring an odd number of times must be later in the array, so repeat the search in the right half.
As stated, this has both a logarithmic and a linear component. If you want to keep the whole thing logarithmic, instead of just walking backward through the array to a different value, you want to use a binary search instead. Unless you expect many repetitions of the same numbers, the binary search may not be worthwhile though.
I have an algorithm which works in log(N/C)*log(K), where K is the length of maximum same-value range, and C is the length of range being searched for.
The main difference of this algorithm from most posted before is that it takes advantage of the case where all same-value ranges are short. It finds boundaries not by binary-searching the entire array, but by first quickly finding a rough estimate by jumping back by 1, 2, 4, 8, ... (log(K) iterations) steps, and then binary-searching the resulting range (log(K) again).
The algorithm is as follows (written in C#):
// Finds the start of the range of equal numbers containing the index "index",
// which is assumed to be inside the array
//
// Complexity is O(log(K)) with K being the length of the range
static int findRangeStart(int[] arr, int index)
{
    int candidate = index;
    int value = arr[index];
    int step = 1;
    // find the boundary for binary search:
    while (candidate >= 0 && arr[candidate] == value)
    {
        candidate -= step;
        step *= 2;
    }
    // binary search:
    int a = Math.Max(0, candidate);
    int b = candidate + step / 2;
    if (arr[a] == value)   // guard for the case where the range reaches the start of the array
        return a;
    while (a + 1 != b)
    {
        int c = (a + b) / 2;
        if (arr[c] == value)
            b = c;
        else
            a = c;
    }
    return b;
}
// Finds the index after the only "odd" range of equal numbers in the array.
// The result should be in the range (start; end]
// The "end" is considered to always be the end of some equal number range.
static int search(int[] arr, int start, int end)
{
    if (arr[start] == arr[end - 1])
        return end;
    int middle = (start + end) / 2;
    int rangeStart = findRangeStart(arr, middle);
    if ((rangeStart & 1) == 0)
        return search(arr, middle, end);
    return search(arr, start, rangeStart);
}

// Finds the index after the only "odd" range of equal numbers in the array
static int search(int[] arr)
{
    return search(arr, 0, arr.Length);
}
Take the middle element e. Use binary search to find the first and last occurrence. O(log(n))
If it is odd return e.
Otherwise, recurse onto the side that has an odd number of elements [....]eeee[....]
Runtime will be log(n) + log(n/2) + log(n/4).... = O(log(n)^2).
AHhh. There is an answer.
Do a binary search and as you search, for each value, move backwards until you find the first entry with that same value. If its index is even, it is before the oddball, so move to the right.
If its array index is odd, it is after the oddball, so move to the left.
In pseudocode (this is the general idea, not tested...):
private static int FindOddBall(int[] ary)
{
    int l = 0,
        r = ary.Length - 1;
    int n = (l + r) / 2;
    while (r > l + 2)
    {
        n = (l + r) / 2;
        while (ary[n] == ary[n - 1])
            n = FindBreakIndex(ary, l, n);
        if (n % 2 == 0) // even index: we are on or to the left of the oddball
            l = n;
        else            // odd index: we are to the right of the oddball
            r = n - 1;
    }
    return ary[l];
}

private static int FindBreakIndex(int[] ary, int l, int n)
{
    var t = ary[n];
    var r = n;
    while (ary[n] != t || ary[n] == ary[n - 1])
        if (ary[n] == t)
        {
            r = n;
            n = (l + r) / 2;
        }
        else
        {
            l = n;
            n = (l + r) / 2;
        }
    return n;
}
You can use this algorithm:
int GetSpecialOne(int[] array, int length)
{
    int specialOne = array[0];
    for (int i = 1; i < length; i++)
    {
        specialOne ^= array[i];
    }
    return specialOne;
}
Solved with the help of a similar question which can be found here on http://www.technicalinterviewquestions.net
We don't have any information about the distribution of lengths inside the array, or about the array as a whole, right?
So the array length might be 1, 11, 101, 1001 or anything else - at least 1, with no upper bound - and it must contain at least 1 type of element ('number'), up to (length-1)/2 + 1 distinct elements; for total sizes of 1, 11 and 101 that is 1, 1 to 6, and 1 to 51 elements, and so on.
Shall we assume every possible size to be equally probable? That would lead to a mean sublist length of size/4, wouldn't it?
An array of size 5 could be divided into 1, 2 or 3 sublists.
What seems to be obvious is not that obvious, if we go into details.
An array of size 5 can be 'divided' into one sublist in just one way, with arguable right to call it 'dividing'. It's just a list of 5 equal elements (aaaaa). To avoid confusion, let's assume the elements inside the list are ordered characters, not numbers (a, b, c, ...).
Divided into two sublists, they might be (1, 4), (2, 3), (3, 2), (4, 1): (abbbb, aabbb, aaabb, aaaab).
Now let's look back at the claim made before: shall the 'division' (5) be assumed to have the same probability as each of those 4 divisions into 2 sublists? Or shall we mix them together and assume every partition to be equally probable, (1/5)?
Or can we calculate the solution without knowing the probability of the lengths of the sublists?
The clue is you're looking for log(n). That's less than n.
Stepping through the entire array, one at a time? That's n. That's not going to work.
We know the first two indexes in the array (0 and 1) should be the same number. Same with 50 and 51, if the odd number in the array is after them.
So find the middle element in the array, compare it to the element right after it. If the change in numbers happens on the wrong index, we know the odd number in the array is before it; otherwise, it's after. With one set of comparisons, we figure out which half of the array the target is in.
Keep going from there.
Use a hash table
For each element E in the input set
    if E is already in the hash table
        increment its count
    else
        put E in the hash table with a count of 1
For each key K in the hash table
    if the count of K % 2 = 1
        return K
As this algorithm makes about 2n operations, it belongs to O(n)
Try this:
int getOddOccurrence(int ar[], int ar_size)
{
    int i;
    int xor = 0;
    for (i = 0; i < ar_size; i++)
        xor = xor ^ ar[i];
    return xor;
}
XOR will cancel out every time you XOR with the same number, so 1^1=0 but 1^1^1=1; every pair cancels out, leaving the odd-count number.
Assume indexing start at 0. Binary search for the smallest even i such that x[i] != x[i+1]; your answer is x[i].
edit: due to public demand, here is the code
int f(int *x, int min, int max) {
    int size = max;
    min /= 2;
    max /= 2;
    while (min < max) {
        int i = (min + max) / 2;
        if (i == 0 || x[2*i - 1] == x[2*i])
            min = i + 1;
        else
            max = i - 1;
    }
    if (2*max == size || x[2*max] != x[2*max + 1])
        return x[2*max];
    return x[2*min];
}
