In a tree data structure, display tree nodes level by level - c

Question: How can we display tree nodes level by level? Could you please give me a time- and space-efficient solution?
Example:
    A
   / \
  B   C
 / \ / \
D  E F  G
void PrintTree(struct tree *root);
Output:
You have to print the tree nodes level by level:
A
B C
D E F G

If you're feeling brutish, and want to think very simply about the level you are at...
You will need:
Two queues
A slight twist on Jack's approach
So, start with root.
Tack its children onto the first queue.
Step through them, tacking their children onto the second queue as you go.
Switch to the second queue, step through, pushing their children onto the first queue.
Wax on, wax off.
Really it's just a slight expansion of the same idea, the breadth first search or sweep, which is worth thinking about as a pattern, since it applies to a variety of data structures. Almost anything that's a tree or trie, and a few things that aren't, in fact!
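Here is a minimal C sketch of that two-queue approach, assuming struct tree holds a single char plus two child pointers (the question does not show its definition) and using fixed-size array queues for brevity:

#include <stdio.h>

struct tree {
    char data;
    struct tree *left, *right;
};

#define MAXQ 64                        /* plenty for the example */

void PrintTree(struct tree *root)
{
    struct tree *q[2][MAXQ];
    int head[2] = {0, 0}, tail[2] = {0, 0};
    int cur = 0;                       /* queue holding the current level */

    if (root)
        q[cur][tail[cur]++] = root;

    while (head[cur] < tail[cur]) {
        int next = 1 - cur;
        head[next] = tail[next] = 0;   /* the other queue starts out empty */

        while (head[cur] < tail[cur]) {
            struct tree *n = q[cur][head[cur]++];
            printf("%c ", n->data);
            if (n->left)  q[next][tail[next]++] = n->left;
            if (n->right) q[next][tail[next]++] = n->right;
        }
        putchar('\n');                 /* one line per level */
        cur = next;                    /* wax on, wax off: swap queues */
    }
}

int main(void)
{
    /* Build the example tree: A with children B, C; B with D, E; C with F, G. */
    struct tree n[7] = { {'A'}, {'B'}, {'C'}, {'D'}, {'E'}, {'F'}, {'G'} };
    n[0].left = &n[1]; n[0].right = &n[2];
    n[1].left = &n[3]; n[1].right = &n[4];
    n[2].left = &n[5]; n[2].right = &n[6];
    PrintTree(&n[0]);
    return 0;
}

Because the children of the current level always land in the other queue, the newline between levels falls out naturally when the first queue runs dry.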

To save space and time on SO:
http://thecodecracker.com/c-programming/bfs-and-dfs/

This kind of visit is called Breadth-first or Level Order. You can find additional information here.
Basically you
first visit the current node
then all the children of that node
then all the children of every child, and so on
This can be achieved easily with a FIFO structure (see the sketch after these steps):
push the root
until the queue is empty:
take the first element, visit it, and push all its children to the end of the queue
repeat
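A minimal C sketch of exactly those steps, assuming the same kind of illustrative struct tree as in the earlier answer and a fixed-size array standing in for the FIFO (the function name is made up):

#include <stdio.h>

struct tree {
    char data;
    struct tree *left, *right;
};

#define MAXQ 64

void print_breadth_first(struct tree *root)
{
    struct tree *queue[MAXQ];
    int head = 0, tail = 0;

    if (root)
        queue[tail++] = root;                    /* push the root */

    while (head < tail) {                        /* until the queue is empty */
        struct tree *n = queue[head++];          /* take the first element */
        printf("%c ", n->data);                  /* visit it */
        if (n->left)  queue[tail++] = n->left;   /* push its children ... */
        if (n->right) queue[tail++] = n->right;  /* ... to the end of the queue */
    }
    putchar('\n');
}

If a newline is needed after each level, either use the two-queue variant shown earlier or record how many nodes the current level contains before draining them.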

Related

What's the difference between ExitRule and EnterRule in ANTLR4?

I've been looking for the difference between ExitRule and EnterRule in the Listener in "The Definitive ANTLR 4 Reference" book, but I still don't understand it. What is the difference between these two? And how does the Listener traverse the tree?
Simply put, these are auto-generated events created by ANTLR
to keep track of the walker, so to speak.
Imagine you are standing in the middle of a long corridor with countless doors on either side, which you have to open to search for something in the rooms.
To keep track of which doors you've already visited, you mark them with an X and an incrementing number behind it.
enterRule == you open the door of a room.
You go in and search.
exitRule == you leave the room and paint the X and the next number on the door.
Now you are able to tell exactly which rooms you have already visited, but furthermore you are able to go to a specific room again for another search, without having to go all the way back and start all over again.
More technically speaking:
ANTLR creates an enter and an exit method for each rule that is defined.
These methods, also known as callbacks, are used by the walker to walk the given tree.
Using a *ParseTreeListener you provide an entry point which indicates the beginning of the tree, for example enterAssign.
The walker looks for this event and triggers it.
It then looks for a nested enter rule to trigger, and so on...
It keeps walking for as long as it finds further enter rules; eventually exitAssign is triggered and the walker stops its walk.
The key point here is the automated, independent walk behavior.
The ParseTreeVisitor, on the other hand,
will not generate enter/exit rules to walk a tree.
It will generate visitRule methods instead.
The visit methods have to be called explicitly!
That means that if you forget to invoke a visit, none of that node's children get visited at all.
ANTLR Doc --> Parse Tree Listeners
ANTLR Mega Tutorial --> a lot of information, short and precise
Tree Walking
Walking starts from the root node and goes down until it has found the very left-most nested item. Then it goes back up until it reaches the first node and looks for a subtree on the right.
It enters the right subtree if one is found.
Otherwise, it gets further up the chain to find the next node.
...
Until it reaches the root node again.
A short picture:
            parent
              |
             / \
            /   \
       Child1   Child2
         /
        /
  Grandchild
Chain of calls:
enter parent
enter Child1
enter Grandchild
exit Grandchild
exit Child1
enter Child2
exit Child2
exit parent

Clone a Binary Tree with Random Pointers

Can anyone explain how to clone a binary tree in which every node has a random pointer in addition to left and right? Every node has the following structure.
struct node {
    int key;
    struct node *left, *right, *random;
};
This is a very popular interview question, and I am able to figure out the solution based on hashing (which is similar to cloning a linked list). I tried to understand the solution given in the link (approach 2), but I am not able to figure out what it wants to convey, even by reading the code.
I don't expect a solution based on hashing, as it is intuitive and pretty straightforward. Please explain the solution based on modifying the binary tree and cloning it.
The solution presented is based on the idea of interleaving both trees, the original one and its clone.
For every node A in the original tree, its clone cA is created and inserted as A's left child. The original left child of A is shifted one level down in the tree structure and becomes a left child of cA.
For each node B, which is a right child of its parent P (i.e., B == P->right), a pointer to its clone node cB is copied to a clone of its parent.
     P                    P
    / \                  / \
   /   \                /   \
  A     B             cP     B
       / \            / \   / \
      /   \          /   \ /   \
     X     Z        A    cB     Z
                   /    /  \   /
                 cA    X    cZ
                      /
                    cX
Finally, we can extract the cloned tree by traversing the interleaved tree and unlinking every other node on each 'left' path (starting from root->left), together with its 'rightmost' descendants path and, recursively, every other 'left' descendant of those, and so on.
What's important is that each cloned node is a direct left child of its original node. So in the middle part of the algorithm, after inserting the cloned nodes but before extracting them, we can traverse the whole tree walking on original nodes, and whenever we find a random pointer, say A->random == Z, we can copy the binding into the clones by setting cA->random = cZ, which resolves to something like
A->left->random = A->random->left;
This allows cloning random pointers directly and does not require additional hash maps (at the cost of interleaving new nodes into the original tree and extracting them later).
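For illustration, here is a minimal C sketch of just that random-copying pass, assuming the struct node from the question and a tree already interleaved as described above (each clone sits as its original's left child, original right pointers untouched); the function name is made up:

#include <stddef.h>

struct node {
    int key;
    struct node *left, *right, *random;
};

/* Walk the original nodes of the interleaved tree and copy every random
 * pointer into the clones: cA->random = A->random->left (i.e. cZ).
 * In the interleaved tree, the original left child of n is n->left->left,
 * while the original right child is still n->right. */
void copy_randoms(struct node *n)
{
    if (n == NULL)
        return;
    struct node *clone = n->left;      /* cA sits as A's left child */
    clone->random = n->random ? n->random->left : NULL;
    copy_randoms(clone->left);         /* original left child of n  */
    copy_randoms(n->right);            /* original right child of n */
}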
The interleaving method can be simplified a little, I think.
1) For every node A in the original tree, create a clone cA with the same left and right pointers as A. Then, set A's left pointer to cA.
     P                  P
    / \                /
   /   \              /
  A     B           cP
       / \          / \
      /   \        /   \
     X     Z      A     B
                 /     /
                cA    cB
                      / \
                     X   Z
                    /   /
                  cX   cZ
2) Now, given a node and its clone (which is just node.left), the random pointer for the clone is node.random.left (if node.random exists).
3) Finally, the binary tree can be un-interleaved.
I find this interleaving makes reasoning about the code much simpler.
Here is the code:
# Minimal node type used by the functions below.
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
        self.random = None

def clone_and_interleave(root):
    if not root:
        return
    clone_and_interleave(root.left)
    clone_and_interleave(root.right)
    cloned_root = Node(root.data)
    cloned_root.left, cloned_root.right = root.left, root.right
    root.left = cloned_root
    root.right = None  # This isn't necessary, but doesn't hurt either.

def set_randoms(root):
    if not root:
        return
    cloned_root = root.left
    set_randoms(cloned_root.left)
    set_randoms(cloned_root.right)
    cloned_root.random = root.random.left if root.random else None

def unterleave(root):
    if not root:
        return (None, None)
    cloned_root = root.left
    cloned_root.left, root.left = unterleave(cloned_root.left)
    cloned_root.right, root.right = unterleave(cloned_root.right)
    return (cloned_root, root)

def cloneTree(root):
    clone_and_interleave(root)
    set_randoms(root)
    cloned_root, root = unterleave(root)
    return cloned_root
The terminology used in those interview questions is absurdly bad. It’s the case of one unwitting knuckledragger somewhere calling that pointer the “random” pointer and everyone just nods and accepts this as if it were some CS mantra from an ivory tower. Alas, it’s sheer lunacy.
Either what you have is a tree or it isn’t. A tree is an acyclic directed graph with at most a single edge directed toward any node, and adding extra pointers can’t change that - the things the pointers point to must retain this property.
But when the node has a pointer that can point to any other node, it’s not a tree. You’ve got a proper directed graph with cycles in it, and looking at it as if it were a tree is silly at this point. It’s not a tree. It’s just a generic directed graph that you’re cloning. So any relevant directed-graph cloning technique will work, but the insistence on using the terms “tree” and “random pointer” obscures this simple fact and confuses matters terribly.
This snafu indicates that whoever came up with the question was not qualified to be doing any such interviewing. This stuff is covered in any decent introductory data structures textbook, so you’d think it shouldn’t present some astronomical uphill effort to just articulate what you need in a straightforward manner. Let the interviewees deal with users who can’t articulate themselves once they get that job - the data structure interview is neither the place nor the time for that. It reeks of stupidity and carelessness, and leaves a permanently bad aftertaste. It’s probably yet another stupid thing that ended up in some “interview question bank” because one poor soul got asked it by a careless idiot once and now everyone treats it as gospel. It’s yet again the blind leading the blind, and cluelessness abounds.
Copying arbitrary graphs is a well-solved problem, and in all cases you need to retain the state of your traversal somehow. Whether it’s done by inserting nodes into the original graph to mark the progress - one could call it intrusive marking - or by adding data to the copy in progress and removing it when done, or by using an auxiliary structure such as a hash, or by doing a repeat traversal to check whether you made a copy of that node elsewhere - is of secondary importance, since the purpose is always the same: to retain the same state information, just encoding it in various ways, trading off speed and memory use (as always).
When thinking of this problem, you need to tell yourself what sort of state you need to finish the copy, and abstract it away, and implement the copy using this abstract interface. Then you can implement it in a few ways, but at that point the copy itself doesn’t obscure things since you look at this simple abstract state-preserving interface and not at the copy process.
In real life the choice of any particular implementation highly depends on the amount and structure of data being copied, and the extent you have control over it all. If you’re the one controlling the structure of the nodes, then you’ll usually find that they have some padding that you could use to store a bit of state information. Or you’ll find that the memory block allocated for the nodes is actually larger than requested: malloc will often end up providing a block larger than asked for, and all reasonable platforms have APIs that let you retrieve the actual size of the block and thus check if there’s maybe some leftover space just begging to be used. These APIs are not always fast so be careful there of course. But you see where this is going: such optimization requires benchmarks and a clear need driven by demands of the application. Otherwise, use whatever is least likely to be buggy - ideally a C library that provides data structures that you could use right away. If you need a cyclic graph there are libraries that do just that - use them first.
But boy, do I hate that idiotic “random” name of the pointer. Who comes up with this nonsense and why do they pollute so many minds? There’s nothing random about it. And a tree that’s not a tree is not a tree. I’d fail that interviewer in a split second…

Monte Carlo Tree Search Alternating

Could anybody please clarify how the MCTS algorithm iterates for the second player? (I have not found a clear example anywhere.)
Everything I see just seems to look like it is playing, e.g., P1's move every time.
I understand the steps for one agent, but I never find anything showing code where P2 places its counter, which surely must happen when growing the tree.
Essentially I would expect:
for each iter:
select node Player1
expand Player1
select node Player2
expand Player2
rollout
backpropagate
next iter
Is this right? Could anybody please spell out some pseudocode showing that? Iteratively or recursively, I don't mind.
Thanks for any help.
The trick is in the backpropagation part, where you update the "wins" variable from the point of view of the player whose move led into this position.
Code for MCTS
Notice, under the UCT function, especially the comments:
#Backpropagate
while node != None: # backpropagate from the expanded node and work back to the root node
    node.Update(state.GetResult(node.playerJustMoved)) # state is terminal. Update node with result from POV of node.playerJustMoved
    node = node.parentNode
If you follow the function call, you will realize that the visits variable is always updated; wins, however, is not.
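To make the perspective flip concrete, here is a minimal C sketch of that backpropagation step, with a hypothetical mcts_node struct mirroring the fields the linked Python code uses (parentNode, playerJustMoved, wins, visits):

#include <stddef.h>

struct mcts_node {
    struct mcts_node *parent;
    int player_just_moved;   /* the player whose move created this node */
    double wins;             /* reward accumulated from that player's point of view */
    int visits;
};

/* winner is the player who won the rollout (0 for a draw). visits is always
 * incremented; wins is credited only when the player whose move led into this
 * node actually won, so the two players' rewards alternate up the tree. */
void backpropagate(struct mcts_node *node, int winner)
{
    while (node != NULL) {
        node->visits += 1;
        if (winner == node->player_just_moved)
            node->wins += 1.0;
        else if (winner == 0)
            node->wins += 0.5;   /* optional: credit a draw as half a win */
        node = node->parent;
    }
}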

Level order queue implementation in C

I understand the logic of using a queue to visit the nodes in a binary search tree level by level.
However, I tried to implement it in C but I'm stuck because I don't know how to enqueue them properly. Starting with the root I can create a queue, but after that, if I add the children of the root to the queue, I will lose the children of those new nodes, since I am modifying the connections in the queue every time I add a new node.
I could create a new data type that has one more link to use in the linked-list queue; that should work. What is the best approach here?
visit[ing] the nodes in a binary search tree level by level
has a name: it is called a "breadth-first" traversal of the tree. Starting with an empty queue, you enqueue the root node and then repeatedly dequeue the first node in the queue, process it somehow, and enqueue all that node's children, until there are no more nodes enqueued. When exactly you should enqueue a node's children relative to other processing of that node may depend on exactly what processing you intend to perform, especially if it involves structurally modifying the tree.
As long as the per-node processing can affect only the subtree rooted at the then-current node, this is all fine. If you need to be able to affect other parts of the overall tree, however, then a breadth-first traversal probably is not appropriate for your task.
You said
[I] don't know how to enqueue them properly. Starting with the root [I] can create a Queue but after that if [I] add the children of the root to the queue [I] will lose the children of those new nodes since [I] am modifying the connections in the Queue every time [I] add a new node.
The key concept here is that membership and position in the queue are separate and independent from membership and position in the tree. You could manage that by adding additional links to the node structures themselves, or by creating a new structure for the queue elements that contains a pointer to the enqueued BST node. The latter decouples the tree from the queue, which many, including me, would consider preferable for most purposes.
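To make the two options concrete, here is a minimal C sketch; the type and field names (node_intrusive, qitem, tnode) are illustrative, not taken from the question:

#include <stdlib.h>

/* Option A (intrusive): give the tree node an extra link that is used only
 * while the node sits in the queue. */
struct node_intrusive {
    int key;
    struct node_intrusive *left, *right;
    struct node_intrusive *qnext;
};

/* Option B (decoupled): the tree node stays untouched; a separate queue
 * element merely points at it. */
struct node {
    int key;
    struct node *left, *right;
};

struct qitem {
    struct node *tnode;   /* the enqueued BST node                */
    struct qitem *next;   /* position in the queue, not the tree  */
};

/* Enqueueing wraps the tree node; none of its left/right links are modified.
 * The caller keeps track of the head; when tail is NULL, the returned item is
 * also the new head. Dequeueing frees the qitem, never the tree node. */
static struct qitem *enqueue(struct qitem *tail, struct node *n)
{
    struct qitem *item = malloc(sizeof *item);
    item->tnode = n;
    item->next = NULL;
    if (tail)
        tail->next = item;
    return item;          /* new tail */
}

With option B, the loop from the answer above becomes: dequeue a qitem, process item->tnode, enqueue fresh qitems for its non-NULL children, and free the dequeued qitem.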

Rush hour - Iterative Deepening

I have to solve the "rush hour puzzle" with the iterative deepening algorithm. I have read a lot of topics here on Stack Overflow and also on the internet. I think that I understand the iterative deepening algorithm. Basically, you just go deeper into the tree and try to find the solution.
I figured that I need to create a graph or a tree from the puzzle, but I really don't have an idea how. Also, if I had the tree, how would I tell whether something is a valid move or a final state?
There were answers saying that the nodes should be possible moves and that edges connect nodes that can be reached in one move. I can imagine this, but somehow I'm having trouble seeing how this can be useful, or better yet, how it can solve the problem.
Please help me, I'm not asking for complete solution or code sample, I just need some easy explanation of the problem.
There is a reason you need to use the deepening algorithm. Imagine you name each car A, B, C, D... The root node of your tree is the initial board state. Now, move car A. You go down one node in the tree. Move car A back. You are at the initial state, but you made two moves to get here, so you are two nodes down the tree. Repeat over and over. You will never hit a final state.
The root node of your tree is the initial board state. Given that node, add a child node to it for every possible valid move. So each child node will be what the initial board looks like after one move. Now, for each of those child nodes, do the same thing: add a child node for every position that is one move away from that child node.
Eventually, you will hit a solution to the puzzle. When that happens, you print the moves from the root node to the solution child node and quit. This algorithm ensures that you find a solution with the least number of moves.
