Suppose you have an unweighted DAG and two vertices, a start s and an end t. The problem is to count how many paths there are from s to t of each length 1, 2, 3, ..., N-1, where N is the number of vertices in the DAG.
My approach:
Build a matrix d of size N*N, where d[u][k] is the number of ways to reach u from s in exactly k steps, and set d[s][0] = 1
Find a topological sorting TS of the DAG
Now, for every vertex u in TS
clone the array d[u] as a
shift every element in a right by 1 (i.e., insert a 0 on the left and discard the rightmost element)
for every adjacent vertex v of u, add array a to array d[v]
The answer is d[t]
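Here is an illustrative C++ sketch of the approach above (the function and variable names are my own; it assumes a topological order has already been computed):

    #include <vector>
    using namespace std;

    // d[u][k] = number of walks from s to u using exactly k edges.
    // 'order' must be a topological order of the DAG; 'adj' its adjacency lists.
    vector<long long> count_paths(int n, const vector<vector<int>>& adj,
                                  const vector<int>& order, int s, int t) {
        vector<vector<long long>> d(n, vector<long long>(n, 0));
        d[s][0] = 1;
        for (int u : order) {
            // the "shift right by 1" fused with the addition: one step u -> v
            // turns a k-edge path to u into a (k+1)-edge path to v
            for (int v : adj[u])
                for (int k = 0; k + 1 < n; k++)
                    d[v][k + 1] += d[u][k];
        }
        return d[t];   // d[t][k] = number of s-t paths of length k
    }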
This seems to work in O(V + EV) time. I'm wondering if there is a more efficient, O(V + E), way?
The optimal algorithm is quite likely O(VE). However, a simpler implementation is possible using a BFS that allows vertices to be visited multiple times (in most practical cases, this will use less memory than O(V^2)).
Assume you have two piles, each made of N boxes of different heights. You want to remove boxes so as to obtain two piles of equal heights (if possible). You cannot remove a box which is not at the top or the bottom of its pile! One can see, for instance, that by removing suitable boxes from the top and bottom of each pile we can obtain two piles of equal heights.
Another way to state this problem is: given two arrays of positive numbers, are there two consecutive sub-sequences (one in each array) whose sums are equal?
This problem looks similar to this one, in which we have an array A of size N and a target t, and we want to find a consecutive sub-sequence of A whose sum is t (equivalently, we have a single pile of boxes and we want to remove boxes at the top and the bottom so as to obtain a pile of height t). This problem can be solved in time O(N). On the other hand, the above-mentioned problem has a trivial O(N^2) solution (see the answers below), but is there also a o(N^2) algorithm (O(N log N) for instance)?
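For reference, here is a minimal sketch of that O(N) subroutine (two pointers over positive numbers; the function name is mine, and it assumes t > 0):

    #include <vector>

    // is there a contiguous subsequence of A (all entries positive) with sum t?
    bool has_subarray_with_sum(const std::vector<long long>& A, long long t) {
        long long window = 0;   // sum of A[l..r]
        std::size_t l = 0;
        for (std::size_t r = 0; r < A.size(); r++) {
            window += A[r];                  // extend the window to the right
            while (window > t && l <= r)     // shrink from the left while too big
                window -= A[l++];
            if (window == t) return true;
        }
        return false;
    }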
Addendum: the sizes of the boxes are positive integers, and they are assumed to be all different (otherwise the problem can be trivially solved in O(N)). If we denote by H the total size of the two piles, there is an O(H log H) algorithm (see the answers below). Thus the problem starts to be interesting for H greater than N (an efficient solution for H = N^2 would be a good start ;)
Let's find all the sums in O(n^2) and add them to a hash table:
for l in [0, n)
    cur_sum = 0
    for r in [l, n)
        cur_sum += a[r]
        hash_table.add(cur_sum)
After that, we need to do the same thing for the second array and check if at least one sum occurs in the hash table.
That's much better than a naive O(n^3) solution.
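A runnable C++ version of this idea (the function name is mine):

    #include <unordered_set>
    #include <vector>

    // do the two arrays have contiguous subsequences with equal sums?
    bool equal_subarray_sum_exists(const std::vector<long long>& a,
                                   const std::vector<long long>& b) {
        std::unordered_set<long long> sums;            // all subarray sums of a
        for (std::size_t l = 0; l < a.size(); l++) {
            long long cur = 0;
            for (std::size_t r = l; r < a.size(); r++) {
                cur += a[r];
                sums.insert(cur);
            }
        }
        for (std::size_t l = 0; l < b.size(); l++) {   // same sums for b, with lookup
            long long cur = 0;
            for (std::size_t r = l; r < b.size(); r++) {
                cur += b[r];
                if (sums.count(cur)) return true;
            }
        }
        return false;
    }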
It's also possible to solve this problem in O(H log H) time, where H is the sum of the boxes' heights, provided all heights are integers; but that is useful only if the total height is limited.
The O(H log H) solution goes like this:
We can find all subarray sums quickly. Let's take a look at what a subarray sum is: it's a difference of two prefix sums, P[r] - P[l] = S. If we add H to both sides of the equation, we obtain P[r] + (H - P[l]) = H + S. So we need to check whether there exist two elements of the forms P[i] and H - P[j] that add up to a given value. Let's not just check their existence, but count the number of such pairs.

Now it looks exactly like a multiplication of two polynomials. The coefficients of the first polynomial are the numbers of occurrences of each prefix sum (C[i] = count(j: P[j] == i)). The second polynomial is essentially the same thing reversed (D[i] = C[H - i]). We can multiply these two polynomials in O(H log H) time using the fast Fourier transform (they are both of degree H). After that, we just need to check whether the (H + S) coefficient of the product is non-zero.
Once we know all sums for the first and the second array, we need to check if they have at least one in common.
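Here is a sketch of the counting step for one array (the names are mine; I use a naive O(H^2) convolution for clarity, and swapping in an FFT-based multiply gives the stated O(H log H)):

    #include <vector>
    using namespace std;

    // returns w where w[s] = number of subarrays of 'a' with sum s, 1 <= s <= H
    // (H must be at least the total sum of 'a'; all entries positive)
    vector<long long> subarray_sum_counts(const vector<int>& a, int H) {
        vector<long long> C(H + 1, 0);   // C[i] = number of prefix sums equal to i
        int p = 0;
        C[0] = 1;                        // the empty prefix
        for (int x : a) { p += x; C[p]++; }
        vector<long long> D(H + 1, 0);   // the same polynomial, reversed
        for (int i = 0; i <= H; i++) D[i] = C[H - i];
        vector<long long> prod(2 * H + 1, 0);   // naive multiply; use FFT for speed
        for (int i = 0; i <= H; i++)
            if (C[i])
                for (int j = 0; j <= H; j++) prod[i + j] += C[i] * D[j];
        vector<long long> w(H + 1, 0);
        // (C*D)[H + s] counts pairs of prefix sums differing by exactly s, and
        // with positive entries each such pair corresponds to a subarray of sum s
        for (int s = 1; s <= H; s++) w[s] = prod[H + s];
        return w;
    }

Running this on both arrays and checking whether some s >= 1 has a non-zero count in both answers the original question.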
This sounds similar to the algorithm for finding the length of the longest palindrome in a string. In the palindrome algorithm, the beginning and end of the string are examined to determine whether a palindrome exists; if not, the first character is removed and the rest of the string is compared with the result of taking off the last character.
In this case, you are seeking a box that results from taking off the top and bottom of the pile. If the pile doesn't yield a box of equal size in the other pile, then take the top off the pile and compare the result with taking the bottom off. The recursive pseudocode could look similar to:
global pile2
global sumsOfPile2  # set of all box sums in pile2 (computed earlier)

def FindBoxInPile(pile, top, bottom):
    if top < bottom:  ## base case: nothing left in the pile
        return False
    # find the box created from the bottom to the top
    sumPile1 = 0
    for i in range(bottom, top + 1):
        sumPile1 += pile[i]
    # Determine if an identical box is inside both piles.
    # If so, return True; otherwise, recurse.
    if sumPile1 in sumsOfPile2:
        return True
    # take the top OR the bottom off the pile and compare the results
    # (taking both off is covered by the two calls below)
    return (FindBoxInPile(pile, top - 1, bottom) or
            FindBoxInPile(pile, top, bottom + 1))
A dynamic programming version of this recursion (memoizing on the pair (top, bottom), with prefix sums for the box totals) would run in O(N^2) time.
I am given N vertices of a tree and its adjacency matrix, represented as an N by N array adjGraph[N][N]. For example, if (1,3) is an edge, then adjGraph[0][2] == 1; otherwise adjGraph[i][j] == 0 for (i,j) pairs that are not edges.
I'm given a series of inputs in the form of:
1 5
which denote that a path has been traversed starting from vertex 1 to vertex 5. I wish to find the edge that was traversed the most times, along with the number of times it was traversed. To do this, I have another N by N array, numPass[N][N], whose elements I first initialize to 0, then increment by 1 every time I identify a path that includes the edge matching that index. For example, if path (2,4) included edges (2,3) and (3,4), I would increment numPass[1][2] and numPass[2][3] by 1 each.
As I understand it, the main issue to tackle is that the inputs only give information about the starting vertex and the ending vertex, and it is up to me to figure out which edges connect the two. Since the given graph is a tree, any path between two vertices is unique. Therefore, I assumed that, given the index of the ending vertex for any input path, I would be able to recursively backtrack to find which edges were traversed.
The following is the function code that I have tried to implement with that idea in mind:
// find the (unique) path of edges from vertices x to y
// and increment edges crossed during such a path
void findPath(int x, int y, int N, int adjGraph[][N], int numPass[][N]) {
    int temp;
    // if the path is a single edge, the case is trivial
    if (adjGraph[x][y] == 1) {
        numPass[x][y] += 1;
        return;
    }
    // otherwise, find the path by backtracking from y
backtrack:
    temp = y - 1;
    while (1) {
        if (adjGraph[temp][y] == 1) {
            numPass[temp][y] += 1;
            break;
        }
        temp--;   /* keep scanning for a neighbour of y */
    }
    if (adjGraph[x][temp] == 1) {
        numPass[x][temp] += 1;
        return;
    } else {
        y = temp;
        goto backtrack;
    }
}
However, the problem is that while my code works fine for small inputs, it runs out of memory for large inputs, since I have a required memory limit of 128MB and time limit of 1 second. The ranges for the inputs are up to 222222 vertices, and 222222 input paths.
How could I optimize my method to satisfy such large inputs?
Get rid of the adjacency matrix (it uses O(N^2) space). Use adjacency lists instead.
Use a more efficient algorithm. Let's make the tree rooted. For a path from a to b we can add 1 to a, add 1 to b, and subtract 2 from their lca (it is easy to see that this way a one is added to exactly the edges on this path once subtree sums are taken).
After processing all paths, the number of paths going through the edge above a vertex is just the sum over that vertex's subtree.
If we use an efficient algorithm to compute the lca, this solution works in O(N + Q log N), where Q is the number of paths. That looks good enough for these constraints (we can actually do even better by using more complex and more efficient algorithms for finding the lca, but I don't think it's necessary here).
Note: lca means lowest common ancestor.
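A C++ sketch of this answer (the input format, the choice of vertex 1 as root, and the names are my own assumptions; the lca uses binary lifting, so it is O(log N) per query, and it assumes n >= 2):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int n, q;
        scanf("%d %d", &n, &q);                  // n vertices, q paths (assumed format)
        vector<vector<int>> adj(n + 1);          // adjacency lists: O(N) space
        for (int i = 0; i < n - 1; i++) {
            int u, v;
            scanf("%d %d", &u, &v);
            adj[u].push_back(v);
            adj[v].push_back(u);
        }
        int LOG = 1;
        while ((1 << LOG) < n) LOG++;
        vector<vector<int>> up(LOG + 1, vector<int>(n + 1, 0));  // lifting table
        vector<int> depth(n + 1, 0), order;      // order = vertices in BFS order
        vector<bool> seen(n + 1, false);
        queue<int> bfs;                          // iterative BFS: N can be 222222
        bfs.push(1); seen[1] = true;             // root the tree at vertex 1
        while (!bfs.empty()) {
            int u = bfs.front(); bfs.pop(); order.push_back(u);
            for (int v : adj[u])
                if (!seen[v]) {
                    seen[v] = true; depth[v] = depth[u] + 1; up[0][v] = u;
                    bfs.push(v);
                }
        }
        for (int k = 1; k <= LOG; k++)
            for (int v = 1; v <= n; v++) up[k][v] = up[k - 1][up[k - 1][v]];
        auto lca = [&](int a, int b) {
            if (depth[a] < depth[b]) swap(a, b);
            int diff = depth[a] - depth[b];
            for (int k = 0; k <= LOG; k++)
                if (diff >> k & 1) a = up[k][a];
            if (a == b) return a;
            for (int k = LOG; k >= 0; k--)
                if (up[k][a] != up[k][b]) { a = up[k][a]; b = up[k][b]; }
            return up[0][a];
        };
        vector<long long> cnt(n + 1, 0);
        for (int i = 0; i < q; i++) {            // each path: +1 at ends, -2 at lca
            int a, b;
            scanf("%d %d", &a, &b);
            cnt[a]++; cnt[b]++; cnt[lca(a, b)] -= 2;
        }
        // accumulate bottom-up: cnt[v] becomes the number of paths through
        // the edge (v, parent of v)
        for (int i = (int)order.size() - 1; i > 0; i--)
            cnt[up[0][order[i]]] += cnt[order[i]];
        int best = 2;
        for (int v = 2; v <= n; v++)
            if (cnt[v] > cnt[best]) best = v;
        printf("edge (%d,%d) traversed %lld times\n", up[0][best], best, cnt[best]);
    }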
Q) Given an array A1, A2, ..., AN and a number K, count how many subarrays have an inversion count greater than or equal to K.
N <= 10^5
K <= N*(N-1)/2
So, I came across this question in an interview. I came up with the naive solution of forming all subarrays with two for loops (O(n^2)) and counting the inversions in each subarray using a modified merge sort, which is O(n log n). This leads to a complexity of O(n^3 log n), which I guess can be improved. Any leads on how I can improve it? Thanks!
You can solve it in O(n log n) if I'm not wrong, using two moving pointers.
Start with the left pointer in the first element and move the right pointer until you have a subarray with >= K inversions. To do that, you can use any balanced binary search tree and every time you move the pointer to the right, count how many elements bigger than this one are already in the tree. Then you insert the element in the tree too.
When you hit the point in which you already have >= K inversions, you know that every longer subarray with the same starting element also satisfies the restriction, so you can add them all.
Then move the left pointer one position to the right and subtract the inversions of it (again, look in the tree for elements smaller than it). Now you can do the same as before again.
An amortized analysis easily shows that this is O(n log n), as the two pointers each traverse the array only once and each operation in the tree is O(log n).
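A C++ sketch of this answer, with a Fenwick tree (BIT) over compressed values standing in for the balanced BST (the names and input format are mine; it assumes K >= 1):

    #include <bits/stdc++.h>
    using namespace std;

    struct BIT {
        int n; vector<int> t;
        BIT(int n) : n(n), t(n + 1, 0) {}
        void add(int i, int v) { for (; i <= n; i += i & -i) t[i] += v; }
        int sum(int i) { int s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
    };

    int main() {
        int n; long long K;
        scanf("%d %lld", &n, &K);
        vector<int> a(n);
        for (auto &x : a) scanf("%d", &x);
        vector<int> sorted_a = a;                   // compress values to 1..n
        sort(sorted_a.begin(), sorted_a.end());
        for (auto &x : a)
            x = lower_bound(sorted_a.begin(), sorted_a.end(), x) - sorted_a.begin() + 1;

        BIT bit(n);
        long long answer = 0, inv = 0;              // inv = inversions in a[l..r-1]
        int r = 0;
        for (int l = 0; l < n; l++) {
            // move the right pointer until the window has >= K inversions
            while (r < n && inv < K) {
                inv += bit.sum(n) - bit.sum(a[r]);  // window elements bigger than a[r]
                bit.add(a[r], 1);
                r++;
            }
            if (inv >= K) answer += n - r + 1;      // every longer subarray works too
            bit.add(a[l], -1);                      // remove a[l] from the window
            inv -= bit.sum(a[l] - 1);               // its inversions with smaller elements
        }
        printf("%lld\n", answer);
    }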
Quicksort is a well-known algorithm, but it's complex (for me) to decipher the C. The inline version speeds things up a lot: http://www.corpit.ru/mjt/qsort.html
However, could it be easily converted to output the first m samples of an N-element array?
So, a call that would simply stop the sort after the first m samples are sorted? I suspect not, as it does a quicksort into blocks then stitches the blocks together for the final output. If I make the initial quicksort block size m, then I'm in a bad place, not taking advantage of the clever stuff in qsort.
Thanks in advance
Grog
Use Quickselect, as @R.. suggested, to get the first k elements, then sort them. Running time is O(N) to get the elements and O(k log k) to sort them.
However, empirical evidence suggests that if the number of items to select (k) is less than 1% of the total number of elements (N), then using a binary heap will be faster than Quickselect followed by a sort. When I had to select 200 items from a list of 2 million, the heap selection algorithm was a lot faster. See the linked blog for details.
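In C++, "Quickselect, then sort" is available directly from the standard library: std::nth_element is an introselect (average O(N)) and std::sort handles the O(k log k) part. A minimal sketch (the function name is mine; it assumes k <= v.size()):

    #include <algorithm>
    #include <vector>

    std::vector<int> smallest_k(std::vector<int> v, std::size_t k) {
        std::nth_element(v.begin(), v.begin() + k, v.end());  // partition around the k-th
        v.resize(k);                                          // keep the first k elements
        std::sort(v.begin(), v.end());                        // sort just those k
        return v;
    }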
(Restate the question: given N items, find the largest m of them.)
A simple solution is a priority queue. Feed all N items into the queue, then pop the top m items off. Feeding the N items in is O(N log N) (the in-place build below averages O(N)), and each individual pop operation is O(log N), so removing the top m items is O(m log N).
An in-place algorithm should be relatively straightforward. We have an array of N elements. Each position in the array is numbered, with a number between 1 and N (inclusive). For each position, take its number and divide it by two (rounding down if necessary), and define that position as its parent. Every position, apart from position 1, will have a parent, and most positions (not all) will have two children. For example:
node position: 1 2 3 4 5 6 7 8 9 ...
parent: - 1 1 2 2 3 3 4 4 ...
We want to swap the nodes until each node has a value less than (or equal to) its parent. This will guarantee that the largest value is in position 1. It is quite easy to reorder an array to have this form. Simply go through the nodes in order from position 1 to N, and call this function on it once:
void fixup_position(int x) {
    if (x == 1)
        return;
    int parent_position = x / 2;   // rounding down where necessary
    if (data[x] > data[parent_position]) {
        swap(data[x], data[parent_position]);
        fixup_position(parent_position);   // note this recursive call
    }
}
for (x = 1; x <= N; ++x) {
    fixup_position(x);
}
(Yes, I'm counting the array from position one, not zero! You'll have to take this into account when implementing it for real, but it makes the logic of the priority queue easier to understand.)
The average number of recursive calls (and therefore swaps) is a constant (2, if I remember correctly). So this will be pretty quick, even with large datasets.
It's worth taking a moment to understand why this is correct. Just before calling fixup_position(x), every position up to, but not including, x is in a 'correct' state. By 'correct' I mean that they're not fully sorted, but each node is less than or equal to its parent. A new value is introduced (at position x) and will 'bubble up' through the queue. You might worry that this will invalidate other positions and their parent-child relationships, but it won't: only one node at a time is in an invalid state, and it keeps bubbling up to its rightful place.
This is the step, O(N) on average, that will rearrange your array into a priority queue.
Removing the top m items. After the above method, it's clear that the biggest number will be in position 1, but what about the second-biggest, the third-biggest, and so on? What we do is pop one value at a time from position 1 and then rearrange the data so that the next-biggest value moves into position 1. This is slightly more complex than fixup_position.
for (int y = 1; y <= m; ++y) {
    print the number in position 1   // .... it's the next biggest number
    data[1] = -10000000000000;       // a number smaller than all your data
    fixup_the_other_way(1);          // yes, this is '1', not 'y' !
}
where fixup_the_other_way is:
void fixup_the_other_way(int x) {
    int child1 = 2 * x;
    int child2 = 2 * x + 1;
    if (child1 > N)    // doesn't have any children, we're done here
        return;
    if (child2 > N) {  // has one child, at position[child1]
        swap(data[x], data[child1]);
        fixup_the_other_way(child1);
        return;
    }
    // otherwise, two children; we must identify the biggest child
    int position_of_largest_child = (data[child1] > data[child2]) ? child1 : child2;
    swap(data[x], data[position_of_largest_child]);
    fixup_the_other_way(position_of_largest_child);
    return;
}
This means we print out the biggest remaining item, then replace that with a really small number and force it to 'bubble down' to the bottom of our data structures.
There are two ways to solve the problem efficiently:
1. Priority queue
Algorithm:
Insert the first n items into a max-heap priority queue.
For each of the remaining N - n elements, peek at the max element to check whether the current element is less than it.
If it is less, delete the top element and add the current one.
(A sketch of this approach appears below.)
2. Your problem can be reduced to the selection problem:
Algorithm:
Do randomized selection for the nth element of the N elements (O(N) in the average case).
Sort the first n elements using qsort or any other efficient sorting algorithm.
With either algorithm you would get average-case O(N) performance.
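A minimal C++ sketch of approach 1 (the function name and the use of std::priority_queue are my own choices; it assumes 1 <= n <= data.size()): keep a max-heap of the n smallest elements seen so far, so the heap never grows beyond n.

    #include <algorithm>
    #include <queue>
    #include <vector>

    std::vector<int> smallest_n_via_heap(const std::vector<int>& data, std::size_t n) {
        // max-heap holding the n smallest elements seen so far
        std::priority_queue<int> heap(data.begin(), data.begin() + n);
        for (std::size_t i = n; i < data.size(); i++)
            if (data[i] < heap.top()) {   // smaller than the current maximum?
                heap.pop();               // drop the largest of the kept n
                heap.push(data[i]);
            }
        std::vector<int> out;
        while (!heap.empty()) { out.push_back(heap.top()); heap.pop(); }
        std::reverse(out.begin(), out.end());   // ascending order
        return out;
    }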
I was reading about linear probing in a hash table tutorial and came upon this:
The step size is almost always 1 with linear probing, but it is acceptable to use other step sizes as long as the step size is relatively prime to the table size so that every index is eventually visited. If this restriction isn't met, all of the indices may not be visited...
(The basic problem is: You need to visit every index in an array starting at an arbitrary index and skipping ahead a fixed number of indices [the skip] to the next index, wrapping to the beginning of the array if necessary with modulo.)
I understand why not all indices could be visited if the step size isn't relatively prime to the table size, but I don't understand why the converse is true: that all the indices will be visited if the step size is relatively prime to the array size.
I've observed this relatively prime property working in several examples that I've worked out by hand, but I don't understand why it works in every case.
In short, my question is: Why is every index of an array visited with a step that is relatively prime to the array size? Is there a proof of this?
Thanks!
From Wikipedia about cyclic groups:
The units of the ring Z/nZ are the numbers coprime to n.
Also:
[If two numbers are co-prime] There exist integers x and y such that ax + by = 1
So, if "a" is your step length, and "b" the length of the array, you can reach any index "z" by
axz + byz = z
=>
axz = z (mod b)
i.e stepping "xz" times (and wrapping over the array "yz" times).
The number of steps before returning to the starting index is lcm(A, P)/P = A/gcd(A, P), where A is the array size and P is this magic coprime.
So if gcd(A, P) != 1, the number of steps will be less than A.
On the contrary, if gcd(A, P) == 1 (coprime), the number of steps will be A and all indexes will be visited.
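A small demo of this (illustrative; std::gcd needs C++17): count how many distinct indices a fixed step visits before returning to the start.

    #include <cstdio>
    #include <numeric>

    int visited_count(int size, int step) {
        int i = 0, count = 0;
        do { i = (i + step) % size; count++; } while (i != 0);
        return count;   // always equals size / std::gcd(step, size)
    }

    int main() {
        printf("%d\n", visited_count(10, 3));  // gcd(10,3) == 1: visits all 10 indices
        printf("%d\n", visited_count(10, 4));  // gcd(10,4) == 2: visits only 5
    }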