Multiple AVL Trees From Sorted List? - avl-tree

I'm working on an AVL tree assignment and have a quick question about their definition: we're given a sorted list, and we have to generate an AVL tree from it in O(n) time. I've completed this (thanks to other help from StackOverflow!), but my result, while a valid AVL tree, differs from the example provided. Can multiple valid AVL trees be generated from the same sorted list?
Thanks!

Yes. Consider the degenerate case of a tree with only two nodes. In this case, either node can be the root, and the other will be a leaf. The two are equivalent as far as overall balance goes.

Yes, for instance, these are two possible AVL trees for <1,2,3,4,5>:
(2 1 (4 3 5))
and
(4 (2 1 3) 5)
where (a T1 T2) denotes a tree with root a, left subtree T1, and right subtree T2.

Related

Can we delete avl tree node in this way

So I was studying AVL trees and came across deleting a particular node from the tree. It was done by deleting the node as in a BST and then rebalancing using the height-difference (balance) factor. But suppose we keep the array containing the elements in insertion order and have a node to delete: I remove that node's occurrence from the array and then construct the AVL tree from scratch. Can this be a good way to do the deletion?
Of course you can delete this way. But the real question when discussing algorithms is: what is the complexity?
The standard deletion algorithm for an AVL tree runs in O(log n); you can find explanations of that fact everywhere online.
Now let's look at your method: converting the AVL tree to a sorted array takes O(n) using an inorder traversal. Then constructing the AVL tree from the sorted array is another O(n).
So, bottom line, this method is simply less efficient: O(n) per deletion instead of O(log n).

Binary Search Tree Complexity based on Values

I have created a binary search tree in C. When I test it, the insertion and search operations take different times to execute. For example, I have two scenarios: inserting random values from 1 to 10000, and inserting sorted values from 1 to 10000. Inserting the random values into my BST takes less time than inserting the sorted values.
The same goes for the search operation: it runs quickly over the randomly inserted values, but takes far longer over the sorted values in my BST.
Now, the problem is the time complexity. Can anyone explain how this is handled? What is the time complexity for all four cases?
Note: inserting and searching the sorted values take almost the same time, though searching takes a bit longer!
If you don't balance the tree, its structure depends on the insertion order, and a "fully unbalanced" binary search tree is equivalent to a sorted linked list.
Thus, the worst case time complexity for your operations is linear in the tree's size, not logarithmic as it would be in a balanced tree.
For instance, if you insert starting from 1 and keep incrementing, you'll end up with

1
 \
  2
   \
    3
     \
      ...

where the right "spine" is a linked list.
Use an AVL tree. It will keep your tree balanced and you will always get O(log n) search time.

Get the amount of conflicts between two arrays [Divide and Conquer]

I'm currently working on a project which involves getting the number of conflicts between two arrays, meaning the differences in the order in which certain numbers are placed in the arrays. Each number occurs only once, and the two arrays are always the same size.
For example:
[1,2,3,4]
[4,3,2,1]
These two arrays have 6 conflicts:
1 comes before 2 in the first array, but 2 comes before 1 in the second, so conflict + 1.
1 comes before 3 in the first array, but 3 comes before 1 in the second, so conflict + 1.
etc.
I've tried several approaches to design an algorithm that computes the count in O(n log n). I already have a Dynamic Programming solution that is O(n²), but I want an algorithm that computes the value by Divide and Conquer.
Anyone has any thought on this?
You can also use a self-balancing binary search tree to find the number of conflicts ("inversions"). Let's take an AVL tree, for example:
Initialize inversion count = 0.
Iterate i from 0 to n-1 and do the following for every arr[i]:
Insertion itself updates the result: count the number of greater nodes as the tree is traversed from root to leaf.
When we insert arr[i], the elements arr[0] to arr[i-1] are already in the AVL tree. All we need to do is count the ones greater than arr[i].
To insert into the AVL tree, we traverse from the root toward a leaf, comparing every node with arr[i].
When arr[i] is smaller than the current node, we increase the inversion count by 1 plus the number of nodes in the current node's right subtree. That is exactly the count of greater elements to the left of arr[i], i.e., inversions.
The time complexity of this solution is O(n log n), since each AVL insert takes O(log n).

How to implement function which picks node of a tree randomly using reservoir sampling in C

I need to write a function in C that picks one node of a tree at random with probability 1/n, where n is the number of nodes, using reservoir sampling or any other clever and efficient way.
I don't know the number of nodes in advance.
How can I do this?
Walk the tree recursively and store (references to) all nodes in a flat list.
After the walk returns, pick one item from the list, whose length is now known.

Binary trees and quicksort?

I have a homework assignment that reads as follows (don't flame/worry, I am not asking you to do my homework):
Write a program that sorts a set of numbers by using the Quick Sort method using a binary search
tree. The recommended implementation is to use a recursive algorithm.
What does this mean? Here are my interpretations thus far, and as I explain below, I think both are flawed:
A. Get an array of numbers (integers, or whatever) from the user. Quicksort them with the normal quicksort algorithm on arrays. Then put stuff into a binary search tree, make the middle element of the array the root, et cetera, done.
B. Get numbers from the user, put them directly one by one into the tree, using standard properties of binary search trees. Tree is 'sorted', all is well--done.
Here's why I'm confused. Option 'A' does everything the assignment asks for, except it doesn't really use the binary tree so much as it throws it last minute in the end since it's a homework assignment on binary trees. This makes me think the intended exercise couldn't have been 'A', since the main topic's not quicksort, but binary trees.
But option 'B' isn't much better--it doesn't use quicksort at all! So, I'm confused.
Here are my questions:
If the interpretation is option 'A', just say so; I have no questions, thank you for your time, goodbye.
If the interpretation is option 'B', why is the sorting method used for inserting values into binary trees the same as quicksort? They don't seem inherently similar, other than that both (in the forms I've learned so far) use a recursive divide-and-conquer strategy and split their input in two.
If the interpretation is something else, what am I supposed to do?
Here's a really cool observation. Suppose you insert a series of values into a binary search tree in some order of your choosing. Some values will end up in the left subtree, and some values will end up in the right subtree. Specifically, the values in the left subtree are less than the root, and the values in the right subtree are greater than the root.
Now, imagine that you were quicksorting the same elements, except that you use the value that was in the root of the BST as the pivot. You'd then put a bunch of elements into the left subarray - the ones less than the pivot - and a bunch of elements into the right subarray - the ones greater than the pivot. Notice that the elements in the left subtree and the right subtree of the BST will correspond perfectly to the elements in the left subarray and the right subarray of the first quicksort step!
When you're putting things into a BST, after you've compared the element against the root, you'd then descend into either the left or right subtree and compare against the root there. In quicksort, after you've partitioned the array into a left and right subarray, you'll pick a pivot for the left and partition it, and pick a pivot for the right and partition it. Again, there's a beautiful correspondence here: each subtree in the overall BST corresponds to doing a pivot step in quicksort using the root of the subtree, then recursively doing the same in the left and right subtrees.
Taking this a step further, we get the following claim:
Every run of quicksort corresponds to a BST, where the root is the initial pivot and each left and right subtree corresponds to the quicksort recursive call in the appropriate subarrays.
This connection is extremely strong: every comparison made in that run of quicksort will be made when inserting the elements into the BST and vice-versa. The comparisons aren't made in the same order, but they're still made nonetheless.
So I suspect that what your instructor is asking you to do is implement quicksort in a different way: rather than manipulating arrays and pivots, just toss everything into a BST in whatever order you like, then walk the tree with an inorder traversal to get the elements back in sorted order.
A really cool consequence of this is that you can think of quicksort as essentially a space-optimized implementation of binary tree sort. The partitioning and pivoting steps correspond to building left and right subtrees and no explicit pointers are needed.
