I am looking for a way to traverse an ordered binary tree with a left and right limit.
So traverse only nodes that are within a left and right bound, preferably inorder and iteratively.
I tried modifying an iterative inorder implementation I found on this site: https://www.geeksforgeeks.org/inorder-tree-traversal-without-recursion/
The problem I am facing right now is that I can't just say "don't go left if the left node <= leftBound and don't go right if the node >= rightBound", because a node smaller/bigger than a given bound can still have children within that bound.
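For reference, this is roughly the iterative inorder skeleton I am modifying, rewritten as a Python sketch (a Node with val, left and right fields is assumed). The comments mark the two places where the bound checks would have to go; as described above, a plain value comparison there is not sufficient:

    class Node:
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    def inorder_in_range(root, left_bound, right_bound):
        # plain iterative inorder with a stack; only in-range values are reported,
        # but the traversal itself is not pruned yet
        stack, node, result = [], root, []
        while stack or node:
            if node:
                stack.append(node)
                node = node.left      # <-- pruning the left descent would go here
            else:
                node = stack.pop()
                if left_bound <= node.val <= right_bound:
                    result.append(node.val)
                node = node.right     # <-- pruning the right descent would go here
        return result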
I have many subtrees saved in a file and I want to search them to find several things for each one of these subtrees, such as: the number of nodes, the number of leaves, and the number of levels the subtree consists of.
To be more precise, the difference between a node and a leaf in my work: a node is any vertex in a subtree that could be a parent or a child, whereas a leaf is only a child vertex, i.e., every leaf is a node but the opposite is not true.
I am facing many problems in this work. The first one: the file containing the subtrees does not show the root node and does not differentiate between parents and children.
The second problem: I read that for searching a tree programmers usually use a recursive method, so I searched the internet for references, algorithms, or pseudo-code, but everything I found deals with binary trees, which is not my case (I am dealing with subtrees of arbitrary shape).
So could anyone kindly help me with a reference, an algorithm, or an example for searching a tree to find the above characteristics of such a subtree?
Another question: is it possible to do this in R?
I will use any language to write the code, but I am mainly interested in C.
Again, please note that my subtrees are not binary.
UPDATE:
Each subtree is represented in my file as a set of edges. You can see below an example of a subtree of size 4:
44180 0
44180 18238
44180 13362
69677 44180
UPDATE: Sorry for the new update, but can I use R in my case even if there is a huge number of subtrees, like 100000 subtrees each with 20 edges (100000*20)?
A tree is well defined.
I am going to assume that each file describes exactly one tree. It also seems reasonable to assume that the line 44180 0 means an edge from node 44180 to node 0. You should double-check these assumptions.
With these assumptions, you can parse the file into the following data structure:
Node
int id (id of self)
Node* parent (pointer to parent Node)
Node** children (list of children Nodes)
Or even simpler:
Node
int id
int parent (id of the parent node)
int* children (ids of all children)
Once you have the entire file parsed into a list/array of Nodes, pick ANY node and follow its parent pointer up repeatedly until you hit a node that has no parent. That final node must be the root of the tree.
Now that you have the pointer to the root Node, you should be able to apply other algorithms.
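A rough Python sketch of that plan (you mentioned C, but the logic ports directly; the helper names and the convention that the root is level 1 are mine). It assumes each line of the file is one edge in the "parent child" format shown in the update, with the first column being the parent:

    from collections import defaultdict

    def read_tree(lines):
        # parse "parent child" edge lines into a children map and a parent map
        children = defaultdict(list)
        parent = {}
        for line in lines:
            p, c = line.split()
            children[p].append(c)
            parent[c] = p
        return children, parent

    def find_root(children, parent):
        # start from any node and walk up until there is no parent
        node = next(iter(children))
        while node in parent:
            node = parent[node]
        return node

    def stats(children, root):
        # return (number of nodes, number of leaves, number of levels)
        nodes = leaves = max_level = 0
        stack = [(root, 1)]
        while stack:
            node, level = stack.pop()
            nodes += 1
            max_level = max(max_level, level)
            kids = children.get(node, [])
            if not kids:
                leaves += 1
            stack.extend((k, level + 1) for k in kids)
        return nodes, leaves, max_level

    edges = ["44180 0", "44180 18238", "44180 13362", "69677 44180"]
    ch, par = read_tree(edges)
    root = find_root(ch, par)
    print(root, stats(ch, root))   # 69677 (5, 3, 3)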
I need help with this problem: I am given an array of N elements, and I want to generate a new array in which, for every index, I keep how many numbers to the left of that index are bigger than its element.
Let's say we have this array {3, 2, 1, 0} and I want to generate this array {0, 1, 2, 3}. In the second array we have zero because there are no elements to the left of the element 3; we have 1 because the number 3 is to the left of the number 2 and it is bigger...
I think this could be done with a binary indexed tree, but I don't know how to implement it.
Thanks in advance.
You can do this in O(N log N) (on average, with a plain BST) by constructing a binary search tree and saving some metadata on the nodes along the way - namely the current size of the right subtree of each node. After each element is added, you can count the number of elements that were previously added to the tree that are larger than it. I will assume you know how to create a binary search tree by inserting nodes one by one. So let's split this into the two things we need to do:
Maintain the size of the right subtree of each node: at each node we decide whether the current value goes into the right subtree (the current value is larger than the node's value) or the left subtree. Whenever we choose to go right, increment that node's rightSubtreeSize by one.
Count how many previously inserted elements are larger than the current element: assuming that for each node we know the size of its right subtree, we can now count how many elements to the left of the current element are larger than it (elements to the left have already been added to the tree). Again, we follow the binary tree insert operation. At each node, if the current value is smaller than the node, it means that the node and its whole right subtree are larger than the current value. So for each node we traverse, we keep a running sum of the elements that are larger than the current value. By the time we finish inserting into the tree, we have the sum we are looking for. A sketch of both steps is given below.
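A minimal Python sketch of both steps (the names are mine; a plain, unbalanced BST is assumed, so the O(N log N) bound holds on average):

    class Node:
        def __init__(self, val):
            self.val = val
            self.left = None
            self.right = None
            self.right_size = 0    # current size of this node's right subtree

    def insert_and_count(root, val):
        # insert val; return (root, number of previously inserted values > val)
        if root is None:
            return Node(val), 0
        node, greater = root, 0
        while True:
            if val < node.val:
                greater += 1 + node.right_size   # node and its right subtree are larger
                if node.left is None:
                    node.left = Node(val)
                    return root, greater
                node = node.left
            else:
                node.right_size += 1             # val joins this right subtree
                if node.right is None:
                    node.right = Node(val)
                    return root, greater
                node = node.right

    def greater_on_left(arr):
        root, out = None, []
        for x in arr:
            root, c = insert_and_count(root, x)
            out.append(c)
        return out

    print(greater_on_left([3, 2, 1, 0]))   # [0, 1, 2, 3]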
Let me know if you need any clarification.
I have a homework assignment that reads as follows (don't flame/worry, I am not asking you to do my homework):
Write a program that sorts a set of numbers by using the Quick Sort method using a binary search
tree. The recommended implementation is to use a recursive algorithm.
What does this mean? Here are my interpretations thus far, and as I explain below, I think both are flawed:
A. Get an array of numbers (integers, or whatever) from the user. Quicksort them with the normal quicksort algorithm on arrays. Then put stuff into a binary search tree, make the middle element of the array the root, et cetera, done.
B. Get numbers from the user, put them directly one by one into the tree, using standard properties of binary search trees. Tree is 'sorted', all is well--done.
Here's why I'm confused. Option 'A' does everything the assignment asks for, except it doesn't really use the binary tree so much as it throws the tree in at the last minute because it's a homework assignment on binary trees. This makes me think the intended exercise couldn't have been 'A', since the main topic isn't quicksort, but binary trees.
But option 'B' isn't much better--it doesn't use quicksort at all! So, I'm confused.
Here are my questions:
If the interpretation is option 'A', just say so; I have no questions, thank you for your time, goodbye.
If the interpretation is option 'B', why is the sorting method used for inserting values into binary trees the same as quicksort? They don't seem inherently similar, other than the fact that they both (in the forms I've learned so far) use a recursive divide-and-conquer strategy and divide their input in two.
If the interpretation is something else... what am I supposed to do?
Here's a really cool observation. Suppose you insert a series of values into a binary search tree in some order of your choosing. Some values will end up in the left subtree, and some values will end up in the right subtree. Specifically, the values in the left subtree are less than the root, and the values in the right subtree are greater than the root.
Now, imagine that you were quicksorting the same elements, except that you use the value that was in the root of the BST as the pivot. You'd then put a bunch of elements into the left subarray - the ones less than the pivot - and a bunch of elements into the right subarray - the ones greater than the pivot. Notice that the elements in the left subtree and the right subtree of the BST will correspond perfectly to the elements in the left subarray and the right subarray of the first quicksort step!
When you're putting things into a BST, after you've compared the element against the root, you'd then descend into either the left or right subtree and compare against the root there. In quicksort, after you've partitioned the array into a left and right subarray, you'll pick a pivot for the left and partition it, and pick a pivot for the right and partition it. Again, there's a beautiful correspondence here - each subtree in the overall BST corresponds to doing a pivot step in quicksort using the root of the subtree, then recursively doing the same in the left and right subtrees.
Taking this a step further, we get the following claim:
Every run of quicksort corresponds to a BST, where the root is the initial pivot and each left and right subtree corresponds to the quicksort recursive call in the appropriate subarrays.
This connection is extremely strong: every comparison made in that run of quicksort will be made when inserting the elements into the BST and vice-versa. The comparisons aren't made in the same order, but they're still made nonetheless.
So I suspect that what your instructor is asking you to do is to implement quicksort in a different way: rather than doing manipulations on arrays and pivots, instead just toss everything into a BST in whatever order you'd like, then walk the tree with an inorder traversal to get back the elements in sorted order.
A really cool consequence of this is that you can think of quicksort as essentially a space-optimized implementation of binary tree sort. The partitioning and pivoting steps correspond to building left and right subtrees and no explicit pointers are needed.
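If that is indeed the intent, a minimal Python sketch of this "tree sort" approach might look like the following (the names are mine, and duplicates are sent to the right subtree):

    class Node:
        def __init__(self, val):
            self.val, self.left, self.right = val, None, None

    def insert(root, val):
        # each insert plays the role of one pivot/partition step of quicksort
        if root is None:
            return Node(val)
        if val < root.val:
            root.left = insert(root.left, val)
        else:
            root.right = insert(root.right, val)
        return root

    def inorder(root, out):
        if root is not None:
            inorder(root.left, out)
            out.append(root.val)
            inorder(root.right, out)

    def tree_sort(values):
        root = None
        for v in values:
            root = insert(root, v)
        out = []
        inorder(root, out)     # the in-order walk yields the values in sorted order
        return out

    print(tree_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]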
I have a theoretical question about balanced BSTs.
I would like to build a perfectly balanced tree that has 2^k - 1 nodes from a regular unbalanced BST. The easiest solution I can think of is to use a sorted array/linked list and recursively divide the array into sub-arrays, and build a perfect balanced BST from it.
However, in the case of extremely large tree sizes, I would need to create an array/list of the same size, so this method would consume a large amount of memory.
Another option is to use AVL rotation functions, inserting element by element and balancing the tree with rotations depending on the tree's balance factor - the difference in height between the left and right subtrees.
My questions are: am I right in my assumptions? Is there any other way to create a perfect BST from an unbalanced BST?
AVL and similar trees are not perfectly balanced so I'm not sure how they are useful in this context.
You can build a doubly-linked list out of tree nodes, using left and right pointers in lieu of forward and backward pointers. Sort that list, and build the tree recursively from the bottom up, consuming the list from left to right.
Building a tree of size 1 is trivial: just bite the leftmost node off the list.
Now if you can build a tree of size N, you can also build a tree of size 2N+1: build a tree of size N, then bite off a single node, then build another tree of size N. The single node will be the root of your larger tree, and the two smaller trees will be its left and right subtrees. Since the list is sorted, the tree is automatically a valid search tree.
This is easy to modify for sizes other than 2^k-1 too.
Update: since you are starting from a search tree, you can build a sorted list directly from it in O(N) time and O(log N) space (perhaps even in O(1) space with a little ingenuity), and build the tree bottom-up also in O(N) time and O(log N) (or O(1)) space.
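A minimal Python sketch of the bottom-up construction (an iterator over a sorted Python list stands in for the sorted doubly-linked list; for n = 2^k - 1 this yields a perfect tree, and the size split below handles other sizes too):

    class Node:
        def __init__(self, val):
            self.val, self.left, self.right = val, None, None

    def sorted_to_bst(values):
        it = iter(values)          # consume the sorted sequence left to right

        def build(n):
            if n == 0:
                return None
            left = build(n // 2)                 # build a left subtree of size floor(n/2)
            root = Node(next(it))                # "bite off" the next node as the root
            root.left = left
            root.right = build(n - n // 2 - 1)   # the rest forms the right subtree
            return root

        return build(len(values))

    tree = sorted_to_bst([1, 2, 3, 4, 5, 6, 7])   # 2^3 - 1 nodes -> a perfect tree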
I have not yet found a situation that really needs a perfectly balanced search tree. If your case really needs one, I would like to hear about it. Usually it is better and faster to have an almost balanced tree.
If you have a large search tree, throwing away all the existing structure is usually not a good idea. Using rotation functions is a good way of getting a more balanced tree while preserving most of the existing structure. But normally you use a suitable data structure to make sure you never have a completely unbalanced tree in the first place: a so-called self-balancing tree.
There are, for example, AVL trees, red-black trees, and splay trees, which use slightly different variants of rotations to keep the tree balanced.
If you really have a totally unbalanced tree, you might have a different problem. In your case, rotating it the AVL way is probably the best way to fix it.
If you are memory constrained, then you can use the split and join operations, which can be done on an AVL tree in O(log n) time and, I believe, constant space.
If you are also able to maintain order statistics, then you can split on the median, make the LHS and RHS perfect, and then join them.
The pseudo-code for a recursive version would be:
Tree MakePerfect(AVLTree tree) {
    Tree left, right;
    Data median;
    if (IsEmpty(tree))
        return tree;    /* base case: an empty tree is already perfect */
    SplitOnMedian(tree, &left, &median, &right);
    left = MakePerfect(left);
    right = MakePerfect(right);
    return Join(left, median, right);
}
This can be implemented in O(n) time and O(log n) space, I believe.
As mentioned in the title, I have a binary search tree. I want to convert it to a sorted doubly linked list using recursion.
My code:
for each node in tree
find max of left sub-tree and assign its right to the present node, present node's left to max
find min of right sub-tree and assign its left to the present node, present node's right to min
and now recursively do the same thing to the other nodes in the BST.
But this solution is not efficient, as it reaches each node more than once. In my quest for optimized code I got a link from Google to the greatTreeList solution. I have searched for the same on SO; both solutions are the same and worked for me. I didn't understand the append function of the solution, as it contains the code
join(alast,b)
join(blast,a)
For a tree whose nodes are inserted in the following order: 10, 5, 9, 6, 8, 7, 12, 11, 13, 14,
can anyone please explain how
join(alast,b)
join(blast,a)
are linking nodes in each recursive call?
I think you are overthinking what is actually quite an easy task: extracting the data from a binary tree in order is as simple as doing a depth-first traversal - this is the point of a binary search tree, that it very efficiently gives you the elements in sorted order.
So what you need to do is a standard depth-first walk of the tree, and each time you find a node, add it to your linked list.
This in-order depth-first recursion is fairly straightforward in pseudocode:
Traverse(Node N)
Traverse(N.left);
Add N to the linked list
Traverse(N.right);
I suggest you try this manually on your example so you see how it works.
In order to convert a binary search tree to a sorted doubly linked list, one typically performs an inorder depth-first traversal, building the list along the traversal.
Try to come up with code that performs an inorder depth-first traversal and prints the items of the binary tree in sorted order. From there, it should be easy to complete your task.
Expanding on Elemental's answer:
Traverse(Node N)
Traverse(N.left);
Add N to the linked list
Traverse(N.right);
To add N to the linked list,
your linked list class or data structure should have an append() method or similar.
Use your imagination, but something like this:
def append(self, N):
    # 'node' is assumed to be a doubly linked list node with val/prev/next fields
    new_tail = node(val=N, prev=self.tail, next=None)
    self.tail.next = new_tail
    self.tail = new_tail
Of course, you also need to set self.head = self.tail the first time you append (when the list is still empty).
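Putting the two answers together, a minimal runnable Python sketch might look like this (the class and method names are mine, and the resulting list is not circular, unlike the greatTreeList version):

    class TreeNode:
        def __init__(self, val):
            self.val, self.left, self.right = val, None, None

    class ListNode:
        def __init__(self, val):
            self.val, self.prev, self.next = val, None, None

    class DoublyLinkedList:
        def __init__(self):
            self.head = self.tail = None

        def append(self, val):
            new_tail = ListNode(val)
            if self.tail is None:            # first element: head and tail coincide
                self.head = self.tail = new_tail
            else:
                new_tail.prev = self.tail
                self.tail.next = new_tail
                self.tail = new_tail

    def bst_insert(root, val):
        if root is None:
            return TreeNode(val)
        if val < root.val:
            root.left = bst_insert(root.left, val)
        else:
            root.right = bst_insert(root.right, val)
        return root

    def bst_to_sorted_dll(root, out):
        # in-order traversal: left subtree, node, right subtree
        if root is not None:
            bst_to_sorted_dll(root.left, out)
            out.append(root.val)
            bst_to_sorted_dll(root.right, out)

    # usage, with the insertion order from the question
    root = None
    for v in [10, 5, 9, 6, 8, 7, 12, 11, 13, 14]:
        root = bst_insert(root, v)
    dll = DoublyLinkedList()
    bst_to_sorted_dll(root, dll)
    node, values = dll.head, []
    while node:
        values.append(node.val)
        node = node.next
    print(values)   # [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]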