I'm learning AVL trees and got TLE with my recursive code. My tutor suggested switching to an iterative solution. I searched and found a solution that saves the parent node in each child.
I wonder whether this could cause memory problems. Does it?
And is there another way to insert and delete in an AVL tree that doesn't need to save the parent in the child nodes? Please give me a hint.
There are several choices when implementing AVL trees:
- recursion or iteration
- store balance factor (height of right minus height of left) or height
- store parent reference or not
Recursive with height tends to give the most elegant solution, but iterative may perform better in some cases, so it is worth considering.
You can read about the choices:
http://www.eternallyconfuzzled.com/tuts/datastructures/jsw_tut_avl.aspx
and view an iterative implementation in Java:
https://github.com/dmcmanam/bbst-showdown
A parent reference is required in the iterative (non-recursive) approach because it is necessary to retrace after an insertion or deletion: in the recursive approach we can retrace via the call stack, whereas in the iterative approach we can only retrace via parent references. See https://en.wikipedia.org/wiki/AVL_tree#Insert and https://en.wikipedia.org/wiki/AVL_tree#Delete.
After an insertion it is necessary to check each of the node's ancestors for consistency with the invariants of AVL trees: this is called "retracing".
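To make the retracing concrete, here is a minimal C sketch of the loop that runs after an iterative insertion. It assumes each node stores a parent pointer and a balance factor (height of right minus height of left, as listed above); rebalance() is a hypothetical helper standing in for the usual single/double rotations, which are not shown.

/* Retracing after insertion: walk up via parent pointers, updating balance
 * factors, until the subtree height stops changing or a rotation fixes it. */
struct avl_node {
    int key;
    int bf;                            /* balance factor: -1, 0 or +1 in a valid tree */
    struct avl_node *left, *right, *parent;
};

struct avl_node *rebalance(struct avl_node *x);   /* hypothetical: does the rotations */

void retrace_after_insert(struct avl_node *child)
{
    for (struct avl_node *parent = child->parent; parent != NULL;
         child = parent, parent = child->parent) {
        if (child == parent->right)
            parent->bf += 1;           /* the right subtree grew by one */
        else
            parent->bf -= 1;           /* the left subtree grew by one */

        if (parent->bf == 0)
            break;                     /* height of this subtree is unchanged: done */
        if (parent->bf == 2 || parent->bf == -2) {
            rebalance(parent);         /* a rotation restores the old height: done */
            break;
        }
        /* bf is +1 or -1: this subtree grew by one, keep walking up */
    }
}

Deletion retraces in the same way, except that a rotation can shrink the subtree, so the loop cannot always stop after the first rebalance.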
Here is an iterative BalanceFactor-based AVL tree implementation in C: https://github.com/xieqing/avl-tree
I know that in depth-first search we always go to the left-most child first. I was wondering whether, when we use BFS, we also have to go left to right, or whether it doesn't matter.
Thank you for your time.
The difference between both algorithms does not depend on where you start searching. Instead it depends on when you start searching.
In depth-first search, you always explore the children of the first child found until there are no more children (this could mean leftmost, rightmost, centermost, etc. depending on the application of the algorithm). You don't start searching the next child of a node until after exploring the children of the previous node.
In breadth-first search, you first identify all children in the order that they are given before going ahead and exploring the first child that you have identified. For example, if you are getting children in a left-to-right fashion, then you would "start from the left" and work to the right, and only then would you move down to the next level and repeat.
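If it helps to see the symmetry, here is a small C sketch (my own illustration, not taken from the site below): the two traversals differ only in the container used for the frontier, a stack for DFS and a queue for BFS; visiting left before right is just the convention chosen here.

#include <stdio.h>

struct node { int value; struct node *left, *right; };

void dfs(struct node *root)
{
    struct node *stack[64];            /* fixed capacity, enough for this example */
    int top = 0;
    if (root) stack[top++] = root;
    while (top > 0) {
        struct node *n = stack[--top]; /* take the most recently found node */
        printf("%d ", n->value);
        if (n->right) stack[top++] = n->right;  /* push right first... */
        if (n->left)  stack[top++] = n->left;   /* ...so the left child is explored first */
    }
}

void bfs(struct node *root)
{
    struct node *queue[64];
    int head = 0, tail = 0;
    if (root) queue[tail++] = root;
    while (head < tail) {
        struct node *n = queue[head++];         /* take the earliest found node */
        printf("%d ", n->value);
        if (n->left)  queue[tail++] = n->left;  /* children are identified left to right... */
        if (n->right) queue[tail++] = n->right; /* ...but only explored after this whole level */
    }
}

int main(void)
{
    struct node n4 = {4, NULL, NULL}, n5 = {5, NULL, NULL}, n3 = {3, NULL, NULL};
    struct node n2 = {2, &n4, &n5};
    struct node n1 = {1, &n2, &n3};
    dfs(&n1); printf("\n");            /* prints: 1 2 4 5 3 */
    bfs(&n1); printf("\n");            /* prints: 1 2 3 4 5 */
    return 0;
}

Swapping the order in which children are pushed only mirrors the visiting order; it doesn't change which algorithm you have.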
Here is a great website that will allow you to play with bfs and dfs so that what I just said makes sense to you:
https://visualgo.net/en/dfsbfs
I need to implement an intelligent agent to play the Abalone game; for this kind of game the best approach seems to be a minimax strategy with alpha-beta pruning.
I have already implemented a naive search algorithm that uses minimax with pruning.
My problem is...
How do I generate the nodes of the tree in which the search is performed?
I have no idea of the right way to do this, nor how to assign the weight to each node.
For generating the tree nodes: You need to implement a method that returns a collection of all possible legal moves given the current board position and the player whose turn it is. All these moves will become children of the node representing the current board position. Repeat until memory is exhausted to generate the game tree ;) or rather until you reach a reasonable tree depth.
For alpha-beta search you also need an evaluation function which calculates the weight for each board position/node. You can do some research or think about such a function yourself, maybe considering the number of stones still on the board. However a bad evaluation function can seriously screw up your results, so take care and run a lot of tests.
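As a rough illustration of how the two pieces fit together, here is a C sketch of the alpha-beta search; Board, the Move encoding, generate_moves(), apply_move(), undo_move() and evaluate() are hypothetical placeholders for your own Abalone representation, not any particular library.

#define MAX_MOVES 128

typedef struct Board Board;                     /* your board representation (opaque here) */
typedef struct { int from, to; } Move;          /* placeholder move encoding */

int  generate_moves(const Board *b, int player, Move out[MAX_MOVES]); /* all legal moves = child nodes */
void apply_move(Board *b, const Move *m);
void undo_move(Board *b, const Move *m);
int  evaluate(const Board *b);                  /* weight of a position, e.g. marble difference */

int alphabeta(Board *b, int depth, int alpha, int beta, int player)
{
    Move moves[MAX_MOVES];
    int n = generate_moves(b, player, moves);

    if (depth == 0 || n == 0)
        return evaluate(b);                     /* leaf: weigh the board position */

    if (player == 1) {                          /* maximizing player */
        int best = -1000000;
        for (int i = 0; i < n; i++) {
            apply_move(b, &moves[i]);
            int v = alphabeta(b, depth - 1, alpha, beta, -1);
            undo_move(b, &moves[i]);
            if (v > best) best = v;
            if (best > alpha) alpha = best;
            if (alpha >= beta) break;           /* beta cut-off */
        }
        return best;
    } else {                                    /* minimizing player */
        int best = 1000000;
        for (int i = 0; i < n; i++) {
            apply_move(b, &moves[i]);
            int v = alphabeta(b, depth - 1, alpha, beta, 1);
            undo_move(b, &moves[i]);
            if (v < best) best = v;
            if (best < beta) beta = best;
            if (alpha >= beta) break;           /* alpha cut-off */
        }
        return best;
    }
}

You would call it from the root as alphabeta(&board, depth, -1000000, 1000000, 1) and keep the root move whose subtree returned the best value.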
If you have trouble coming up with a reasonable evaluation function, I recommend you take a look into Monte-Carlo techniques such as UCT.
http://en.wikipedia.org/wiki/Monte_Carlo_tree_search
These tackle the problem using a probabilistic approach and have some nice advantages over alpha-beta. Also they don't require an evaluation function so you could skip this step.
Good luck!
I have published two libraries for move generation in Abalone. You didn't mention the programming language used for your search implementation, but you can easily port the functions.
For C++, https://sourceforge.net/projects/abnet/
For Python, https://gitlab.com/peer.sommerlund/haliotis
For an evaluation function, the distance between all your marbles, or the distance to their center of gravity (same thing), works nicely. Tino Werner used this with a twist for his program that won ICGA 2003.
For understanding distance when using hex coordinates, I can recommend Amit Patel's page: https://www.redblobgames.com/grids/hexagons/
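As a sketch of that idea (my own illustration using the cube coordinates described on that page; the marble array layout is an assumption):

#include <stdlib.h>

struct hex { int x, y, z; };                    /* cube coordinates, x + y + z == 0 */

static int hex_distance(struct hex a, struct hex b)
{
    return (abs(a.x - b.x) + abs(a.y - b.y) + abs(a.z - b.z)) / 2;
}

/* Sum of distances from each of your marbles to their (truncated) center of
 * gravity: smaller means your marbles are more compact. A full implementation
 * would use proper cube rounding for the center instead of truncation. */
int compactness(const struct hex *marbles, int n)
{
    long sx = 0, sy = 0, sz = 0;
    for (int i = 0; i < n; i++) {
        sx += marbles[i].x;
        sy += marbles[i].y;
        sz += marbles[i].z;
    }
    struct hex center = { (int)(sx / n), (int)(sy / n), (int)(sz / n) };

    int total = 0;
    for (int i = 0; i < n; i++)
        total += hex_distance(marbles[i], center);
    return total;
}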
I have to solve the "Rush Hour" puzzle with an iterative deepening algorithm. I have read a lot of topics here on Stack Overflow and also elsewhere on the internet. I think that I understand the iterative deepening algorithm: basically you just go deeper into the tree and try to find the solution.
I figured that I need to create a graph or a tree from the puzzle, but I really don't have an idea how. Also, if I had the tree, how would I tell whether something is a valid move or a final state?
There were answers saying that the nodes should be the possible moves and that there are edges between the nodes that can be reached in one move. I can imagine this, but somehow I'm having trouble seeing how this is useful, or better yet, how it can solve the problem.
Please help me, I'm not asking for complete solution or code sample, I just need some easy explanation of the problem.
There is a reason you need to use the deepening algorithm. Imagine you name each car A, B, C, D... The root node of your tree is the initial board state. Now, move car A. You go down one node in the tree. Move car A back. You are at the initial state, but you made two moves to get here, so you are two nodes down the tree. Repeat over and over. You will never hit a final state.
The root node of your tree is the initial board state. Given that node, add a child node to it for every possible valid move. So, each child node will be what the initial board looks like after one move. Now, for each of those child nodes, do the same thing: add a child node for every position that is one move away from that child node.
Eventually, you will hit a solution to the puzzle. When that happens, you print the moves from the root node to the solution child node and quit. This algorithm ensures that you find a solution with the least number of moves.
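A minimal C sketch of that procedure, assuming hypothetical Board/Move types and generate_moves(), apply_move(), undo_move() and is_solved() helpers for your own Rush Hour representation:

#include <stdbool.h>

#define MAX_MOVES 64

typedef struct Board Board;                     /* your puzzle representation (opaque here) */
typedef struct { int car, offset; } Move;       /* placeholder move encoding */

int  generate_moves(const Board *b, Move out[MAX_MOVES]); /* every legal move = one child node */
void apply_move(Board *b, const Move *m);
void undo_move(Board *b, const Move *m);
bool is_solved(const Board *b);                 /* final state: the target car can exit */

/* Depth-limited depth-first search. */
static bool dfs_limited(Board *b, int limit)
{
    if (is_solved(b))
        return true;
    if (limit == 0)
        return false;

    Move moves[MAX_MOVES];
    int n = generate_moves(b, moves);
    for (int i = 0; i < n; i++) {
        apply_move(b, &moves[i]);
        bool found = dfs_limited(b, limit - 1);
        undo_move(b, &moves[i]);
        if (found)
            return true;                        /* record moves[i] here to reconstruct the path */
    }
    return false;
}

/* Iterative deepening: retry with a larger depth limit until a solution appears. */
int iterative_deepening(Board *start, int max_depth)
{
    for (int limit = 0; limit <= max_depth; limit++)
        if (dfs_limited(start, limit))
            return limit;                       /* length of a shortest solution */
    return -1;                                  /* no solution within max_depth */
}

Because each pass is a plain depth-limited DFS, the back-and-forth moves described above can only waste time within one depth limit rather than loop forever, and the deepening loop still guarantees that the first solution found uses the fewest moves.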
I have a very simple binary tree structure, something like:
typedef struct nmbintree_node_s nmbintree_node;

struct nmbintree_s {
    unsigned int size;
    int (*cmp)(const void *e1, const void *e2);
    void (*destructor)(void *data);
    nmbintree_node *root;
};

struct nmbintree_node_s {
    void *data;
    struct nmbintree_node_s *right;
    struct nmbintree_node_s *left;
};
Sometimes I need to extract a 'tree' from another, and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'.
I was thinking on two approaches:
1) Using a recursive function, something like:
unsigned int nmbintree_size(nmbintree_node *node) {
    if (node == NULL) {
        return 0;
    }
    return nmbintree_size(node->left) + nmbintree_size(node->right) + 1;
}
2) A preorder / inorder / postorder traversal done in an iterative way (using stack / queue) + counting the nodes.
Which approach do you think is more 'memory-failure-proof' / performant?
Any other suggestions / tips ?
NOTE: I am probably going to use this implementation in the future for small projects of mine. So I don't want to unexpectedly fail :).
Just use a recursive function. It's simple to implement this way and there is no need to make it more complicated.
If you were doing it "manually", you'd basically end up implementing the same thing, except that you wouldn't use the system call stack for temporary variables but your own stack. Usually this won't have any advantages that outweigh the more complicated code.
If you later find out that a substantial amount of time in your program is spent calculating the sizes of trees (which probably won't happen), you can still profile things and try how a manual implementation performs. But then it might also be better to make algorithmic improvements, like keeping track of the changes in size during the extraction process.
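For completeness, a manual version of the size computation looks roughly like this; it assumes the nmbintree_node typedef from the question, and the fixed stack capacity is only for the sketch (a real version would grow the stack or bound the tree depth):

unsigned int nmbintree_size_iterative(nmbintree_node *node)
{
    nmbintree_node *stack[1024];       /* explicit stack instead of the call stack */
    int top = 0;
    unsigned int count = 0;

    if (node != NULL)
        stack[top++] = node;

    while (top > 0) {
        nmbintree_node *n = stack[--top];
        count++;                       /* count this node */
        if (n->left != NULL)
            stack[top++] = n->left;
        if (n->right != NULL)
            stack[top++] = n->right;
    }
    return count;
}

As said above, the only real difference from the recursive version is that the stack is now explicit.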
If your "very simple" binary tree isn't balanced, then the recursive option is scary, because of the unconstrained recursion depth. The iterative traversals have the same time problem, but at least the stack/queue is under your control, so you needn't crash. In fact, with flags and an extra pointer in each node and exclusive access, you can iterate over your tree without any stack/queue at all.
Another option is for each node to store the size of the sub-tree below it. This means that whenever you add or remove something, you have to track all the way up to the root updating all the sizes. So again if the tree isn't balanced that's a hefty operation.
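As a sketch of that option in C: if the sizes are adjusted on the way down to the insertion point, the update costs nothing beyond the walk you already make and needs no parent pointers (the sized_node struct and the use of the question's cmp function are assumptions for illustration):

struct sized_node {
    void *data;
    unsigned int size;                 /* nodes in this subtree, including this one */
    struct sized_node *left, *right;
};

void sized_insert(struct sized_node **root, struct sized_node *new_node,
                  int (*cmp)(const void *e1, const void *e2))
{
    new_node->left = new_node->right = NULL;
    new_node->size = 1;

    while (*root != NULL) {
        (*root)->size++;               /* every ancestor gains one descendant */
        if (cmp(new_node->data, (*root)->data) < 0)
            root = &(*root)->left;
        else
            root = &(*root)->right;    /* duplicates go right in this sketch */
    }
    *root = new_node;
}

The size of any extracted subtree is then just node->size, an O(1) lookup.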
If the tree is balanced, though, then it isn't very deep. All options are failure-proof, and performance is estimated by measurement :-) But based on your tree node struct, either it's not balanced or else you're playing silly games with flags in the least significant bits of pointers...
There might not be much point being very clever with this. For many practical uses of a binary tree (in particular if it's a binary search tree), you realise sooner rather than later that you want it to be balanced. So save your energy for when you reach that point :-)
How big is this tree, and how often do you need to know its size? As sth said, the recursive function is the simplest and probably the fastest.
If the tree is like 10^3 nodes, and you change it 10^3 times per second, then you could just keep an external count, which you decrement when you remove a node, and increment when you add one. Other than that, simple is best.
Personally, I don't like any solution that requires decorating the nodes with extra information like counts and "up" pointers (although sometimes I do it). Any extra data like that makes the structure denormalized, so changing it involves extra code and extra chances for errors.
I've got a manually created NavGraph in a 3D environment. I understand (and have implemented previously) an A* routine to find my way through the graph once you've 'got on the graph'.
What I'm interested in, is the most optimal way to get onto and 'off' the Graph.
Ex:
So the routine goes something like this:
Shoot a ray from the source to the destination; if there's nothing in the way, just walk it.
If there's something in the way, we need to use the graph. To get onto the graph, we need to find the closest visible node on the graph. (To do this, I previously sorted the graph nodes by distance from the source, then fired rays from closest to furthest until I found one that didn't have an obstacle in the way.)
Then run the standard A*...
Then 'exit' the graph through the same method we used to get onto it (this is also how the endpoint for the above A* is calculated): I fire rays from the destination to the closest navgraph node.
So by the time this is all said and done, unless my navgraph is very dense, I've spent more time getting on/off the graph than I have calculating the path...
There has to be a better/faster way? (Is there some kind of spatial subdivision trick?)
You could build a Quadtree of all the nodes, to quickly find the closest node from a given position.
It is very common to have a spatial subdivision of the world. Something like a quadtree or octree is common in 3D worlds, although you could overlay a grid too, or track arbitrary regions, etc. Basically it's a simple data-structures problem of giving yourself some sort of access to N navgraph nodes without needing an O(N) search to find where you are, and your choices tend to come down to some sort of tree or some sort of hash table.
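As one concrete sketch of such a subdivision (a uniform grid rather than a quadtree; the node fields, the cell size and the 2D projection onto the ground plane are assumptions for illustration):

#include <float.h>

#define GRID_DIM   64
#define CELL_CAP   32
#define CELL_SIZE  10.0f               /* world units per cell, tune to node density */

struct nav_node { float x, y, z; /* plus edges, etc. */ };

struct grid_cell { const struct nav_node *nodes[CELL_CAP]; int count; };
struct nav_grid  { struct grid_cell cells[GRID_DIM][GRID_DIM]; };

static void grid_insert(struct nav_grid *g, const struct nav_node *n)
{
    int cx = (int)(n->x / CELL_SIZE), cy = (int)(n->z / CELL_SIZE);
    if (cx < 0 || cy < 0 || cx >= GRID_DIM || cy >= GRID_DIM) return;
    struct grid_cell *c = &g->cells[cx][cy];
    if (c->count < CELL_CAP) c->nodes[c->count++] = n;
}

/* Check the cell containing the query point and its 8 neighbours; widen the
 * search ring if nothing is found (omitted here for brevity). */
static const struct nav_node *grid_nearest(const struct nav_grid *g, float x, float z)
{
    int cx = (int)(x / CELL_SIZE), cy = (int)(z / CELL_SIZE);
    const struct nav_node *best = NULL;
    float best_d2 = FLT_MAX;

    for (int dx = -1; dx <= 1; dx++) {
        for (int dy = -1; dy <= 1; dy++) {
            int ix = cx + dx, iy = cy + dy;
            if (ix < 0 || iy < 0 || ix >= GRID_DIM || iy >= GRID_DIM) continue;
            const struct grid_cell *c = &g->cells[ix][iy];
            for (int i = 0; i < c->count; i++) {
                float ddx = c->nodes[i]->x - x, ddz = c->nodes[i]->z - z;
                float d2 = ddx * ddx + ddz * ddz;
                if (d2 < best_d2) { best_d2 = d2; best = c->nodes[i]; }
            }
        }
    }
    return best;                       /* candidate node for the line-of-sight test */
}

The returned candidate still needs the ray/visibility test before you commit to it as your entry point; a quadtree or octree is the same idea with cells that adapt to node density.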