Rebalancing an AVL Tree

I have the following AVL tree:
          10
         /  \
        5    12
       / \  /  \
      2   8 11  13
     / \ / \
    1  4 7  9
If I insert 3 then I get:
          10
         /  \
        5    12
       / \  /  \
      2   8 11  13
     / \ / \
    1  4 7  9
      /
     3
If I calculate the Balance Factor for each node it seems that every BF is valid:
(Node:BF) -> 10:1, 5:0, 2:-1, 1:0, 4:-1, 8:0, 7:0, 9:0, 3:0, 12:0, 11:0, 13:0
But apparently this tree needs to be rebalanced. Where is there an invalid BF, and how would I go about making the necessary rotations?

10 should have a balance factor of 2: its left subtree has height 4 (along the path 5-2-4-3) while its right subtree has height 2 (12-13). A valid tree after a single rotation might look like, level by level: 5 | 2 10 | 1 4 8 12 | nil nil 3 nil 7 9 11 13.
A possible method for rebalancing is the cut-link algorithm:
1. Name the unbalanced node z, one of its children y, and one of y's children x.
2. Rename the nodes to a, b, c according to their inorder order, and let their four subtrees be T0, T1, T2 and T3 from left to right.
3. Set b as the new root of the subtree, with a as its left child and c as its right child.
4. Reattach the four subtrees, from left to right, as T0, T1, T2 and T3 (a code sketch follows).
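As a rough sketch (Python, with a minimal hypothetical Node class; assuming y and x are chosen on the taller side as usual), the four cut-link cases might look like:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def restructure(z, y, x):
    # Steps 1-2: rename z, y, x to a, b, c by inorder position and
    # collect their four subtrees T0..T3 from left to right.
    if y is z.left and x is y.left:      # left-left
        a, b, c = x, y, z
        t0, t1, t2, t3 = x.left, x.right, y.right, z.right
    elif y is z.left:                    # left-right
        a, b, c = y, x, z
        t0, t1, t2, t3 = y.left, x.left, x.right, z.right
    elif x is y.right:                   # right-right
        a, b, c = z, y, x
        t0, t1, t2, t3 = z.left, y.left, x.left, x.right
    else:                                # right-left
        a, b, c = z, x, y
        t0, t1, t2, t3 = z.left, x.left, x.right, y.right
    # Steps 3-4: b becomes the new subtree root with a and c as its
    # children, and the subtrees are reattached in order.
    b.left, b.right = a, c
    a.left, a.right = t0, t1
    c.left, c.right = t2, t3
    return b  # the caller links b back to z's old parent

In the tree above z = 10, y = 5 and x = 2 (the left-left case), so b = 5 becomes the new subtree root: exactly the single rotation described.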

Related

Compute sum of (min*max) across all subarrays of the array

Given N elements of an array, compute the sum of (min*max) across all the subarrays of the array.
e.g.
N = 5
Array: 5 7 2 3 9
output: 346
(5*5 + 7*7 + 2*2 + 3*3 + 9*9 + 5*7 + 2*7 + 2*3 + 3*9 + 2*7 + 2*7 + 2*9 + 2*7 + 2*9 + 2*9)
here is the complete question
I cannot think of anything better than O(n^2). The editorial solution uses segment trees, which I couldn't understand.
Hint regarding the editorial (the details of which I am uncertain about): if, in O(n) time, we can solve the problem for all the intervals that include both A[i] and A[i+1], where i divides A in half, then we can solve the whole problem in O(n log n) time by divide and conquer: solve left and right separately, and add the intervals that overlap both.
input:
5 7 2|3 9
     i (divides input in half)
Task: find solution in O(n) for all intervals that include 2 and 3.
5 7 2 3 9
    xxx
      2 2   -> prefix min
2 2 2       <- prefix min
      2 4   -> prefix sum min
6 4 2       <- prefix sum min
      3 9   -> prefix max
7 7 3       <- prefix max
Notice that because they are monotonically increasing, maxes can be counted as extending back to the next higher element in the opposite prefix. For example, we can find that the 7 extends back to the 9 by extending pointers in either direction as we mark the current max. We then want to multiply each max by the relevant prefix sum of mins.
The relevant contributions as we extend the pointers marking the current max, multiplying each max by the prefix sum of mins (remembering that intervals must span both the 2 and the 3):
3 * 2
7 * 2
7 * 2
9 * 6
These account for the following intervals:
5 7 2 3 9
    ---
  -----
-------
    -----
  -------
---------
3*2 + 7*2 + 7*2 + 9*2 + 9*2 + 9*2
Now solve the problem for the left and right halves separately and add the results. Doing this recursively is the divide and conquer.
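If you want to sanity-check an implementation on small inputs, a brute-force O(n^2) reference is easy (a Python sketch of my own, not the editorial's solution): extend each interval rightward while carrying the running min and max.

def sum_min_times_max(arr):
    # O(n^2) reference: visit every interval [i..j] once.
    total = 0
    for i in range(len(arr)):
        lo = hi = arr[i]
        for j in range(i, len(arr)):
            lo = min(lo, arr[j])
            hi = max(hi, arr[j])
            total += lo * hi
    return total

print(sum_min_times_max([5, 7, 2, 3, 9]))  # 346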

Data structure for finding maximum of subarrays in better than O(log n) expected time

Given an array of values, how can you build a data structure that lets you find the maximum of any contiguous subarray quickly? Ideally the overhead of building this structure should be small and the structure should allow efficient appends and mutation of single elements.
An example array would be [6, 2, 3, 7, 4, 5, 1, 0, 3]. A request may be to find the maximum of the slice from index 2 to 7 (subarray [3, 7, 4, 5, 1]), which would result in 7.
Let n be the length of the array and k be the length of the slice.
The naïve, O(log k), method
An obvious solution is to build a tree that repeatedly gives a pairwise summary of the maximums:
1 8 4 5 4 0 1 5 6 9 1 7 0 4 0 9 0 7 0 4 5 7 4 3 4 6 3 8 2 4 · ·
 8   5   4   5   9   7   4   9   7   4   7   4   6   8   4   ·
   8       5       9       9       7       7       8       4
       8               9               7               8
               9                               8
                               9
These summaries take at most O(n) space, and the lower levels can be stored efficiently by using short indices. The bottom level, for example, can be a bit array. Appends and single mutations take O(log n) time. There are many other areas for optimization if need be.
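A minimal sketch of this construction (Python; plain lists instead of the compact index encoding, and build_levels is just a name I picked):

def build_levels(values):
    # Each level stores the max of each pair from the level below;
    # odd-length levels are padded with -inf (the · cells above).
    levels = [list(values)]
    while len(levels[-1]) > 1:
        below = levels[-1]
        if len(below) % 2:
            below = below + [float("-inf")]
        levels.append([max(below[i], below[i + 1])
                       for i in range(0, len(below), 2)])
    return levels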
The chosen slice can be split into two slices on a boundary between two triangles. In this example, for the given slice we'd split like so:
    |-------------------------------|
6 9 1 7 0 4 0 9|0 7 0 4 5 7 4 3 4 6 3 8 2 4 · ·
 9   7   4   9 | 7   4   7   4   6   8   4   ·
   9       9   |   7       7       8       4
       9       |       7               8
               |               8
In each triangle we are interested in a forest of these trees that minimally determines the elements we actually care about:
|-------------------------------|
1 7 0 4 0 9|0 7 0 4 5 7 4 3 4 6 3
 7   4   9 | 7   4   7   4   6
       9   |   7       7
           |       7
Note that in this case there are two trees on the left and three on the right. The total number of trees will be at most O(log k), since there are at most two of any given height. We can find the splitting point with a little bit-math
round_to = (start ^ end).bit_length() - 1
split_point = (end >> height) << height
Note that Python's bit_length can be done quickly with the lzcnt instruction on x86 architectures. The relevant trees are on each side of the split. The sizes of the relevant subtrees are encoded in the bits of the residuals of these numbers:
lhs_residuals = split_point - start
rhs_residuals = end - split_point
bin(lhs_residuals)
# eg. 10010110
# sizes = 10000000
#            10000
#              100
#               10
It's hard to traverse the most significant bits of an integer, but if you do a bit reversal (a byteswap instruction plus a few shift-and-masks) you can instead traverse the least significant bits by iterating this:
new_value = value & (value - 1)
lowest_set_bit = value ^ new_value
value = new_value
A traversal down the left and right halves takes O(log k) expected time because there are at most 2log₂ k trees - one per bit for each side.
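The same O(log k) walk can be written without the explicit residual bit-twiddling; here is a sketch against build_levels from above, using the standard bottom-up formulation (it peels off exactly one tree per set bit on each side):

def range_max(levels, start, end):
    # Maximum of values[start:end): climb the levels, consuming a
    # lone node whenever an edge of the range is unaligned.
    best = float("-inf")
    height = 0
    while start < end:
        if start & 1:
            best = max(best, levels[height][start])
            start += 1
        if end & 1:
            best = max(best, levels[height][end - 1])
        start >>= 1
        end >>= 1
        height += 1
    return best

On the array above, range_max(levels, 10, 27) walks the same forest as the diagram and returns 9.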
A tangent: handling residuals in O(1) time and O(n log n) space
O(log k) is better than O(log n), but it's still not groundbreaking. One helpful effect of the previous attempt is that the trees on each side are "attached" to one side; there are only n ranges in their slice, not n² as for an arbitrary slice. You can utilize this by adding cumulative maxima to each level, like so:
1 8 4 5 4 0 1 5 6 9 1 7 0 4 0 9 0 7 0 4 5 7 4 3 4 6 3 8 2 4 · ·
- 8|- 5|- 4|- 5|- 9|- 7|- 4|- 9|- 7|- 4|- 7|- 4|- 6|- 8|- 4|- · left to right
8 -|5 -|4 -|5 -|9 -|7 -|4 -|9 -|7 -|4 -|7 -|4 -|6 -|8 -|4 -|· - right to left
- - 8 8|- - 4 5|- - 9 9|- - 4 9|- - 7 7|- - 7 7|- - 6 8|- - · · left to right
8 8 - -|5 5 - -|9 9 - -|9 9 - -|7 7 - -|7 7 - -|8 8 - -|4 4 - - right to left
- - - - 8 8 8 8|- - - - 9 9 9 9|- - - - 7 7 7 7|- - - - 8 8 · · left to right
8 8 5 5 - - - -|9 9 9 9 - - - -|7 7 7 7 - - - -|8 8 8 8 - - - - right to left
- - - - - - - - 8 9 9 9 9 9 9 9|- - - - - - - - 7 7 7 8 8 8 · · left to right
9 9 9 9 9 9 9 9 - - - - - - - -|8 8 8 8 8 8 8 8 - - - - - - - - right to left
- - - - - - - - - - - - - - - - 9 9 9 9 9 9 9 9 9 9 9 9 9 9 · · left to right
9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 - - - - - - - - - - - - - - - - right to left
The marker - is used for parts that are necessarily the same as the level below them, and so do not need to be copied. In this case, the relevant slices are
    |-------------------------------|
    1 7 0 4 0 9 0 7 0 4 5 7 4 3 4 6 3
    ↓                               ↓
9 9 9 9 - - - -|- - - - - - - - 7 7 7 8 8 8 · ·
right to left  |  left to right
and the wanted maxima are as indicated. The true maximum is then the larger of those two values.
This obviously takes O(n log n) memory, since there are log n levels and each needs a complete row of values (though they can be indexes to save space). Updates, however, take O(n) time as they may propagate - adding a 10 to this would invalidate the whole bottom right-to-left row, for example. Mutations are obviously equally inefficient.
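For what it's worth, here is one way to realize the cumulative rows (Python; essentially what is elsewhere called a disjoint sparse table, storing the - cells explicitly, with names of my own choosing):

def build_rows(values):
    # rows[b][i]: running max from i toward the midpoint of its
    # 2**b-wide block; right-to-left left of the midpoint, and
    # left-to-right right of it.
    n = 1 << max(1, (len(values) - 1).bit_length())
    vals = list(values) + [float("-inf")] * (n - len(values))
    rows = {}
    b = 1
    while (1 << b) <= n:
        size = 1 << b
        row = [float("-inf")] * n
        for block in range(0, n, size):
            mid = block + size // 2
            acc = float("-inf")
            for i in range(mid - 1, block - 1, -1):   # right to left
                acc = max(acc, vals[i])
                row[i] = acc
            acc = float("-inf")
            for i in range(mid, block + size):        # left to right
                acc = max(acc, vals[i])
                row[i] = acc
        rows[b] = row
        b += 1
    return vals, rows

def query(vals, rows, start, last):
    # Max over the inclusive range [start, last] in O(1).
    if start == last:
        return vals[start]
    b = (start ^ last).bit_length()  # straddles a 2**b block's midpoint
    return max(rows[b][start], rows[b][last])

For the slice in the diagram, query(vals, rows, 10, 26) looks up max(9, 7) = 9, the same two cells marked by the arrows.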
O(1) time by answering a different question
Depending on the context you need this for, you may find it is possible to truncate the search depth. This works if you are allowed some leeway in your slice relative to the size of the slice. Since the slices shrink geometrically, although a slice from 0:4294967295 takes a massive 22 iterations, truncating to a fixed quantity of 11 iterations gives the maximum of the slice 0:4292870144, a 0.05% difference. This may be acceptable.
O(1) expected time by exploiting probability
Rounding may be acceptable, but even if it is you're still doing an O(log n) algorithm - just with a smaller, fixed n. It is possible to do a lot better on randomly distributed data.
Consider one side of a forest. As you traverse down it, the fraction of the numbers you've seen grows geometrically relative to the fraction you haven't, so the probability that you've already seen the maximum rises in step. It makes sense that you can use this to your advantage.
Consider this half again:
---------------------|
0 7 0 4 5 7 4 3 4 6 3 8 2 4 · ·
 7   4   7   4   6*  8   4   ·
   7       7       8*      4
       7*              8
               8
After checking the 7*, don't immediately traverse to the 6*. Instead, check the smallest parent of all of the rest, which is the 8*. Only if this parent is larger than the maximum so far do you need to continue traversing down; if it is not, you can stop iterating. It just so happens that the largest value is one past the end here, so we traverse all the way down, but you can imagine this is unusual.
At least half of the time you only need to evaluate the first triangle, in at least half of the remaining cases you only need to look down once more, and so on. This geometric series shows that the average traversal cost is two traversals; less if you include the fact that the remaining triangles can be less than half the size some of the time.
And in the worst case?
The worst case occurs with nonrandom trees. The most pathological is sorted data:
---------------------|
0 1 2 3 4 5 6 7 8 9 a b c d e f
 1   3   5   7   9   b   d   f
   3       7       b       f
       7               f
               f
The maximum is always in the fragment of the range you haven't seen yet, regardless of which slice you choose, so the traversal is always O(log n). Unfortunately sorted data is frequent in practice, and this algorithm is hurt by it (a property shared with several other algorithms, like quicksort). It is possible to mitigate the harm, though.
Not dying on sorted data
If each node says whether it's sorted, or sorted in reverse, then upon reaching that node you don't need to do any more traversal - you can just take the first or last element in the subarray.
---------------------|
0 1 2 3 4 5 6 7 8 9 a b c d e f
 →   →   →   →   →   →   →   →
   →       →       →       →
       →               →
               →
You might find you instead have mostly-sorted data with some small randomization, though, which breaks the scheme:
---------------------|
0 1 2 3 4 5 6 7 a 9 a b d 0 e f
 →   →   →   →   ←   →   ←   →
   →       →       b       f
       →               f
               f
So instead, each node can store the maximum number of levels you can go down whilst remaining sorted, and in which direction. You then skip down that many levels at once. An example:
---------------------|
0 1 2 3 4 5 6 7 a 9 a b d 0 e f
 →1  →1  →1  →1  ←1  →1  ←1  →1
 0   3   5   7   a   b   d   f
   →2      →2      →1      →1
   3       7       b       f
       →3              →2
       7               f
               →3
               f
→n means if you skip down n levels the nodes will all be sorted left to right. The top node is →3 because three levels down is ordered: 0 3 5 7 a b d f. The direction is easy to encode in a single bit. Thus mostly-sortedness is handled gracefully.
This is easy to keep updated, because each node can calculate its annotation from its direct children: if the children are sorted in the same direction, take the minimum of their distances and add one; otherwise reset to a distance of 1, pointing in the direction the children themselves are sorted. The hardest part is the logic in traversal, which looks a bit finicky.
It is still possible to produce examples that require traversal all the way to the bottom, but they should not occur frequently in non-adversarial data.
I have discovered by coincidence the term for this problem:
Range minimum query
Unsurprisingly, this is a well-studied problem, albeit one that seems hard to search for. Wikipedia gives some solutions which are noticeably different to mine.
The O(1) time, O(n log n) space solution in particular is much more effective than my similar aside, since it allows appends in O(log n) time, which may suffice, rather than the terrible O(n) mine caused.
The other approaches are asymptotically decent, and the final result is particularly good. The O(log n) time, O(n) space solution is technically weaker than my final result, but log n is never large and it has better constant factors on search due to its linear scan of memory. Appending elements in both cases is amortized O(1), with the Wikipedia variant doing better with sufficient care. I would expect setting the block size to something fixed and applying the algorithm straightforwardly would be a practical win. In my case, even excessive block sizes of, say, 128 would be plenty fast for searches, and minimise both overhead on append and the constant factor of the space overhead.
The final constant-time approach seems like an academic result of little practical use.

Saving Constructed Means in a Matrix in Stata

I have sales and a time indicator, as follows:
time sales
1 6
2 7
1 5
3 4
2 4
5 7
4 3
3 2
5 1
5 4
3 1
4 9
1 8
I want the mean, stdev, and N of the above saved in a t × 4 matrix (one row per time period; the columns are time period, mean, stdev, and N).
For time = 5 the matrix would be:
time mean stdev N
... ... ... ...
5 4 3 3
... ... ... ...
Just for the mean I tried:
mat t1=J(5,1,0)
forval i = 1/5 {
    summ sales if time == `i'
    mat t1[`i']=r(mean)
}
However, I kept getting an error. Even if it worked I was unsure how to get the other (stdev and N) variables of interest.
You were probably aiming for something like
matrix t1 = J(5, 1, .)
forvalues i = 1/5 {
    summarize sales if time == `i'
    matrix t1[`i', 1] = r(mean)
}
matrix list t1
[U] 14.9 Subscripting specifies that you need matname[r,c]. You were leaving out the second subscript. In Mata you are allowed to subscript vectors in this way, but you never enter Mata here.
An alternative is
forval i = 1/5 {
    summarize sales if time == `i'
    matrix t1 = (nullmat(t1) \ r(mean))
}
With the latter, you have no need of declaring the matrix beforehand. See help nullmat().
But it's probably easiest to use collapse and get all information in one step:
clear all
set more off
input ///
time sales
1 6
2 7
1 5
3 4
2 4
5 7
4 3
3 2
5 1
5 4
3 1
4 9
1 8
end
collapse (mean) msales=sales (sd) sdsales=sales ///
    (count) csales=sales, by(time)
list
Note that count counts nonmissing observations only.
If you want a matrix then convert the variables using mkmat, after the collapse:
mkmat time msales sdsales csales, matrix(summatrix)
matrix list summatrix

Heaping a binary tree

I am learning about heaps and I am having trouble understanding how you are supposed to move each node. I will give you an example tree below:
        1
       / \
      2   3
     / \ / \
    4  5 6  7
   / \ /
  8  9 10
So that is my tree. I am trying to get 10 to the root but do not understand the steps that I take. Would I first look at the bottom of the tree? Here are my attempts:
        1
       / \
      2   3
     / \ / \
    4  5 6  7
   / \ /
  8  9 10
-> Move ten up and the two down.
        1
       / \
     10   3
     / \ / \
    4  5 6  7
   / \ /
  8  9 2
-> Move the 9 up
        1
       / \
     10   3
     / \ / \
    9  5 6  7
   / \ /
  8  4 2
-> move the 7 up
        1
       / \
     10   7
     / \ / \
    9  5 6  3
   / \ /
  8  4 2
-> Move the whole left side up and bring the 1 down.
       10
       / \
      9   7
     / \ / \
    8  5 6  3
   / \ /
  1  4 2
This is what I end up with but I have a feeling this is not right because it is not an ordered tree. Can someone help me understand where I went wrong?
A heap is not an ordered binary tree. The only ordering a (max-)heap preserves is that any child node is less than or equal to its parent node. Child nodes at the same level of the tree can be in any order relative to each other.
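For comparison, the standard bottom-up construction (Floyd's build-heap; a Python sketch of mine, nothing from the question) arrives at exactly the tree you ended with:

def sift_down(a, i, n):
    # Swap a[i] downward until it is >= both of its children.
    while True:
        largest = i
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and a[child] > a[largest]:
                largest = child
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def build_max_heap(a):
    # Heapify every internal node, starting from the last parent.
    for i in range(len(a) // 2 - 1, -1, -1):
        sift_down(a, i, len(a))

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # the tree above in level order
build_max_heap(a)
print(a)  # [10, 9, 7, 8, 5, 6, 3, 1, 4, 2] - your final tree, level by level

So your result is a perfectly valid max-heap.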

Algorithm for Vertex connections From List of Directed Edges

The square of a directed graph G = (V, E) is the graph G2 = (V, E2) such that u→w is in E2 if and only if u ≠ w and there is a vertex v such that both u→v and v→w are in E. The input file simply lists the edges in arbitrary order as ordered pairs of vertices, with each edge on a separate line. The vertices are numbered in order from 1 to the total number of vertices.
*self-loops and duplicate/parallel edges are not allowed
If we look at an example of input data:
1 6
1 4
1 3
2 4
2 8
2 6
2 5
3 5
3 2
3 6
4 7
4 5
4 6
4 8
5 1
5 8
5 7
6 3
6 4
7 5
7 4
7 6
8 1
Then the output would be:
1: 3 4 7 8 5 2 6
2: 5 6 3 4 1 8 7
3: 1 7 8 6 5 4
4: 5 6 8 7 3 1
5: 3 1 4 6
6: 2 7 5 8
7: 1 5 6 8 3 4
8: 6 4 3
I'm writing the code in C.
My thoughts are to run through the file, see how many vertices there are, and then allocate an array of pointers. Then I'd go through the list again, searching for just the lines that have a 1 in them, and look at where those corresponding numbers lead. If it's not a duplicate or the same number (1), I'll add it to a linked list hanging off the array of pointers. I will do this for every vertex number in the file.
However, I feel this is terribly inefficient, and not the best way to go about doing this. If anyone has any other suggestions I would be extremely grateful.
If I get it right, you want to build a result set for each node stating all nodes at a distance of one or two from it.
Therefore, one can hold the edges in an adjacency matrix of bit arrays, where a bit is one when an edge exists and zero if not.
Now one can multiply this matrix with itself. In this case, multiply means you AND a row with a column; the result bit is one if any position matches.
A small example (sorry, don't know how to insert a matrix properly):
0 1 0   0 1 0   0 0 1
0 0 1 x 0 0 1 = 1 1 0
1 1 0   1 1 0   0 1 1
This matrix contains a one for all nodes reachable in two steps; it is simply the adjacency matrix for two steps instead of one. If you now OR this matrix with your initial matrix, you have a matrix which holds all paths of length one and two.
This approach has multiple advantages. First, bit operations are very fast: the CPU works on a whole machine word of the calculation at once, and you can stop computing a result cell as soon as one pair is found that gives a one.
Furthermore, it is well documented how to calculate matrix multiplication in parallel.
You can easily calculate all other path lengths too. For a length k one has to calculate:
A^k = A^(k-1) * A
Hope that helped.
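As an illustration of the bit-array idea, here is a Python sketch using integers as bit rows (the names are mine), run against the question's edge list:

def square_edges(n, edges):
    # row[u] has bit v set iff u -> v is an edge; vertices are 1..n.
    row = [0] * (n + 1)
    for u, v in edges:
        row[u] |= 1 << v
    square = [0] * (n + 1)
    for u in range(1, n + 1):
        acc = 0
        for v in range(1, n + 1):
            if row[u] >> v & 1:      # u -> v exists, so OR in all v -> w
                acc |= row[v]
        square[u] = acc & ~(1 << u)  # enforce u != w
    return square

edges = [(1, 6), (1, 4), (1, 3), (2, 4), (2, 8), (2, 6), (2, 5),
         (3, 5), (3, 2), (3, 6), (4, 7), (4, 5), (4, 6), (4, 8),
         (5, 1), (5, 8), (5, 7), (6, 3), (6, 4), (7, 5), (7, 4),
         (7, 6), (8, 1)]
square = square_edges(8, edges)
for u in range(1, 9):
    print(u, *[v for v in range(1, 9) if square[u] >> v & 1])

This prints each vertex's distance-two neighbours in sorted order; as sets they match the expected output above.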
