To improve the performance of the minimax algorithm with alpha-beta pruning, I've implemented iterative deepening:
public Integer iterativeDeepening(int maxDepth, boolean isFirstPlayer) {
    Integer bestCell = -1;
    for (int depth = 1; depth <= maxDepth; depth++) {
        // minimax returns an array; index 1 holds the id of the chosen cell
        bestCell = alphabeta.minimax(depth, false, Integer.MIN_VALUE, Integer.MAX_VALUE)[1];
    }
    return bestCell;
}
where the method iterativeDeepening simply returns the id of the best move.
Firstly, I am not sure whether this is the correct way to implement iterative deepening.
Secondly, I noticed that the AI started making wrong moves. Is it possible for iterative deepening to affect decision making?
While working with transposition tables and iterative deepening I measured significant improvements in the algorithm's speed, but I don't want to sacrifice AI quality for speed.
Firstly, I am not sure whether this is the correct way to implement iterative deepening.
I think it is correct, but if you want iterative deepening to actually speed your algorithm up, you should also add move ordering to it.
The idea is that you use the results of the shallower searches, and at the next iteration search the moves that looked best first.
This gives a better chance of fast cut-offs in your alpha-beta algorithm.
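As a rough illustration (this is a sketch, not your code: legalMoves() and scoreMove() below are hypothetical stand-ins for your move generator and your alpha-beta search):

import java.util.*;

// Sketch only: legalMoves() and scoreMove(move, depth) are assumed helpers.
public int iterativeDeepeningWithOrdering(int maxDepth) {
    List<Integer> moves = new ArrayList<>(legalMoves());
    for (int depth = 1; depth <= maxDepth; depth++) {
        Map<Integer, Integer> score = new HashMap<>();
        for (int move : moves) {
            score.put(move, scoreMove(move, depth)); // search this move to 'depth'
        }
        // Put the moves that looked best at this depth in front, so the next,
        // deeper iteration gets alpha-beta cut-offs as early as possible.
        moves.sort(Comparator.comparing(score::get).reversed());
    }
    return moves.get(0); // best move of the deepest completed iteration
}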
Secondly, I noticed that the AI started making wrong moves. Is it possible for iterative deepening to affect decision making?
Implementing iterative deepening for your alpha-beta algorithm should not affect the decision making at all.
There must be something wrong with your implementation, but there's not enough information to say what it is.
Related
I have some doubts regarding best first search algorithm. The pseudocode that I have is the following:
best first search pseudocode
First doubt: is it complete? I have read that it is not, because it can enter a dead end, but I don't see when that can happen: if the algorithm chooses a node that has no more neighbours, it does not get stuck there, because that node is removed from the open list, and in the next iteration the next node of the open list is processed, so the search continues.
Second doubt: is it optimal? I thought that if it visits the nodes closest to the goal during the search, then the solution it finds would be the shortest, but that is not the case, and I do not understand why, and therefore what makes this algorithm not optimal.
The heuristic I was using is the straight line distance between two points.
Thanks for your help!!
Of course, if the heuristic function underestimates the costs, best-first search is not guaranteed to be optimal. In fact, even if your heuristic function is exactly right, best-first search is never guaranteed to be optimal. Here is a counterexample. Consider the following graph:
The green numbers are the actual costs and the red numbers are the exact heuristic function. Let's try to find a path from node S to node G.
Best-first search would give you S->A->G, following the heuristic function. However, if you look at the graph more closely, you will see that the path S->B->C->G has a lower cost of 5 instead of 6. Thus, this is an example of best-first search performing suboptimally under a perfect heuristic function.
In the general case, best-first search is complete, as in the worst-case scenario it will search the whole space (worst option). It should also be optimal, given that the heuristic function is admissible, meaning it does not overestimate the cost of the path from any of the nodes to the goal. (It also needs to be consistent, meaning it adheres to the triangle inequality; if it is not, the algorithm would not be complete, as it could enter a cycle.)
Checking your algorithm, I do not see how the heuristic function is calculated, and I do not see where the cost of the path to reach a particular node is calculated.
So it needs to calculate the actual cost of the path to reach a particular node, and then add a heuristic estimate of the cost of the path from that node to the goal.
The formula is f(n) = g(n) + h(n), where g(n) is the cost of the path to reach the node and h(n) is the heuristic estimating the cost of the cheapest path from n to the goal.
Check the implementation of the A* algorithm, which is an example of best-first search applied to path planning.
TL;DR: In best-first search you need to calculate the cost of a node as the sum of the cost of the path to reach that node and the heuristic estimate of the cost of the path from that node to the goal. If the heuristic function is admissible and consistent, the algorithm will be optimal and complete.
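For concreteness, here is a minimal sketch of that f(n) = g(n) + h(n) bookkeeping in Java. It is an illustration under assumptions, not a complete implementation: the integer node ids and the edges and h maps are invented for the example (h would hold your straight-line distances, and must cover every reachable node).

import java.util.*;

// Best-first search ordered by f(n) = g(n) + h(n), i.e. A*.
static int bestFirstCost(int start, int goal,
                         Map<Integer, Map<Integer, Integer>> edges,
                         Map<Integer, Integer> h) {
    Map<Integer, Integer> g = new HashMap<>();       // cheapest known path costs
    g.put(start, 0);
    // Queue entries are {f, node}, ordered by f = g + h.
    PriorityQueue<int[]> open = new PriorityQueue<>(Comparator.comparingInt(a -> a[0]));
    open.add(new int[] { h.get(start), start });
    while (!open.isEmpty()) {
        int[] top = open.poll();
        int n = top[1];
        if (top[0] > g.get(n) + h.get(n)) continue;  // stale queue entry
        if (n == goal) return g.get(n);              // cost of the cheapest path
        for (Map.Entry<Integer, Integer> e : edges.getOrDefault(n, Map.of()).entrySet()) {
            int cost = g.get(n) + e.getValue();
            if (cost < g.getOrDefault(e.getKey(), Integer.MAX_VALUE)) {
                g.put(e.getKey(), cost);             // cheaper path found
                open.add(new int[] { cost + h.get(e.getKey()), e.getKey() });
            }
        }
    }
    return -1;                                       // goal unreachable
}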
I have created a Gomoku (5-in-a-row) AI using alpha-beta pruning. It makes moves on a not-so-stupid level. First, let me vaguely describe the grading function of the alpha-beta algorithm.
When it receives a board as input, it first finds all runs of stones and gives each one a score out of 4 possible values depending on its usefulness as a threat, which is decided by length. It then returns the sum of all the run scores.
But the problem is that I chose the scores (4 in total) by hand, and they don't seem like the best choices. So I've decided to implement a genetic algorithm to generate these scores. Each of the genes will be one of the 4 scores. So, for example, the chromosome for the hard-coded scores would be: [5, 40000, 10000000, 50000]
However, because I'm using the genetic algorithm to create the scores of the grading function, I'm not sure how I should implement the genetic fitness function. So instead, I have thought of the following:
Instead of using a fitness function, I'll just merge the selection process together: If I have 2 chromosomes, A and B, and need to select one, I'll simulate a game using both A and B chromosomes in each AI, and select the chromosome which wins.
1. Is this a viable replacement for the fitness function?
2. Because of the characteristics of the alpha-beta algorithm, I need to give the maximum score to the win condition, which in most cases is set to infinity. However, because I can't use infinity, I just used an absurdly large number. Do I also need to add this score to the chromosome? Or, because it's insignificant and doesn't change the values of the grading function, should I leave it as a constant?
3. When initially creating chromosomes, random generation following a standard distribution is said to be optimal. However, the genes in my case have a large deviation. Would it still be okay to generate the chromosomes randomly?
Is this a viable replacement for the fitness function?
Yes, it is. It's a fairly common way to define a fitness function for board games. A single round is probably not enough (but you have to experiment).
A slight variant, sketched here in Java (playGame(), the result constants, population, M and rng are assumed helpers, not part of the original pseudocode), is something like:
// Fitness of agent k: play M games against randomly chosen opponents.
double fitness(int k) {
    double fit = 0;
    for (int round = 0; round < M; round++) {
        int i;                                     // random opponent, i != k
        do { i = rng.nextInt(population.length); } while (i == k);
        switch (playGame(population[k], population[i])) {
            case AGENT_K_WINS: fit += 1; break;
            case AGENT_I_WINS: fit -= 2; break;
            case DRAW:         break;              // fit doesn't change
        }
    }
    return fit;
}
i.e. an agent plays against M randomly selected opponents from the population (with replacement, but avoiding self-matches).
Increasing M decreases the noise, but longer simulation times are required (M = 5 is a value used in some chess-related experiments).
2. Because of the characteristics of the alpha-beta algorithm...
Not sure of the question. A very large value is a standard approach for a static evaluation function to signal a winning condition.
The exact value isn't very important and probably shouldn't be subject to optimization.
3. When initially creating chromosomes, random generation following a standard distribution is said to be optimal. However, the genes in my case have a large deviation. Would it still be okay to generate the chromosomes randomly?
This is somewhat related to the specific genetic algorithm "flavor" you are going to use.
A standard genetic algorithm could work better with not completely random initial values.
Other variants (e.g. Differential Evolution) could be less sensitive to this aspect.
Also take a look at this question / answer: Getting started with machine learning a zero sum game?
What is the difference between best-first search and breadth-first search? And which one do we call "BFS"?
To answer your second question first:
which one do we call "BFS" ?
Typically when we refer to BFS, we are talking about breadth-first search.
What is the difference between best-first search and breadth-first search?
The analogy that I like to consult when comparing such algorithms is robots digging for gold.
Given a hill, our goal is to simply find gold.
Breadth-first search has no prior knowledge of the whereabouts of the gold, so the robot simply digs 1 foot deep along the 10-foot strip; if it doesn't find any gold, it digs 1 foot deeper.
Best-first search, however, has a built-in metal detector, meaning it has prior knowledge. There is, of course, a cost to having a metal detector, and a cost in turning it on and seeing which place would be the best to start digging.
Best-first search is informed whereas Breadth-first search is uninformed, as in one has a metal detector and the other doesn't!
Breadth-first search is complete, meaning it'll find a solution if one exists, and given enough resources will find the optimal solution.
Best-first search is also complete, provided the heuristic (the estimator of the cost, i.e. the prior knowledge) is admissible, meaning it never overestimates the cost of getting to the solution.
I got the BFS image from http://slideplayer.com/slide/9063462/ the Best-first search is my failed attempt at photoshop!
Those are two algorithms for searching a graph (or tree).
Breadth-first looks at all elements (nodes) at a certain depth, trying to find a solution (the searched value or whatever it is), then continues one level deeper and looks at every node there, and so on.
Best-first looks at the "best" node, defined mostly by a heuristic, then checks the best subnode of that node, and so on.
A* is an example of a heuristic (best-first) search, and it is much faster. But you need a heuristic, which you don't need for breadth-first search.
Creating a heuristic takes some effort of your own; breadth-first works out of the box.
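To make the contrast concrete, here is a hedged sketch in Java: the two searches share the same skeleton and differ only in the frontier data structure. The adjacency map and the heuristic h are invented for the example.

import java.util.*;

// Same skeleton for both searches; only the frontier changes.
static boolean search(int start, int goal,
                      Map<Integer, List<Integer>> adjacency,
                      Map<Integer, Integer> h, boolean bestFirst) {
    // Breadth-first: plain FIFO queue. Best-first: priority queue ordered by h.
    Queue<Integer> frontier = bestFirst
        ? new PriorityQueue<>(Comparator.comparingInt(h::get))
        : new ArrayDeque<>();
    Set<Integer> visited = new HashSet<>();
    frontier.add(start);
    visited.add(start);
    while (!frontier.isEmpty()) {
        int node = frontier.poll();
        if (node == goal) return true;
        for (int next : adjacency.getOrDefault(node, List.of())) {
            if (visited.add(next)) frontier.add(next); // add() is true if unseen
        }
    }
    return false;
}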
The description of the minimax algorithm says that both players have to play optimally for the algorithm to be optimal. Intuitively that is understandable. But could anyone make this concrete, or prove what happens if MIN does not play optimally?
Thanks!
The definition of "optimal" is that you play so as to minimize the "score" (or whatever you measure) of your opponent's optimal answer, which is in turn defined by the play that minimizes the score of your optimal answer, and so forth.
Thus, by definition, if you don't play optimally, your opponent has at least one path that will give him a higher score than the best score he could get if you played optimally.
One way to find out what is optimal is to brute-force the entire game tree. For anything beyond trivial problems you can use alpha-beta search, which guarantees the optimum without needing to search the entire tree. If your tree is still too complex, you need a heuristic that estimates what the score of a "position" is and halts at a certain depth.
Was that understandable?
I was having problems with that precise question.
When you think about it for a bit, you get the idea that the minimax tree contains ALL possible games, including the bad ones. So if a player plays a suboptimal game, then that game is part of the tree, but it has been discarded in favor of a better game.
It's similar with alpha-beta. I was getting stuck on what happens if I sacrifice some pieces intentionally to create space and then make a winning move through the gap, i.e. there is a better move further down the tree.
With alpha-beta, let's say a sequence of losing moves followed by a killer move is in fact in the tree; in that case the alpha and beta act as a window filter, a < x < b, and would have discarded it if YOU had a better game. You can see this in alpha-beta if you imagine putting +/- infinity into a pruned branch to see what happens.
In any case, both algorithms recalculate at every move, so if a player plays a suboptimal game, that will open up branches of the tree that are better for the opponent.
Rinse, repeat.
Consider a MIN node whose children are terminal nodes. If MIN plays suboptimally, then the value of the node is greater than or equal to the value it would have if MIN played optimally. Hence, the value of the MAX node that is the MIN node’s parent can only be increased. This argument can be extended by a simple induction all the way to the root. If the suboptimal play by MIN is predictable, then one can do better than a minimax strategy. For example, if MIN always falls for a certain kind of trap and loses, then setting the trap guarantees a win even if there is actually a devastating response for MIN.
Source: https://www.studocu.com/en-us/document/university-of-oregon/introduction-to-artificial-intelligence/assignments/solution-2-past-exam-questions-on-computer-information-system/1052571/view
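Put symbolically (the notation here is added for compactness, it is not from the quoted source): for a MIN node $n$ with children $c_1, \dots, c_k$, optimal play gives $v(n) = \min_{1 \le i \le k} v(c_i)$, so any suboptimal choice $c_j$ by MIN yields $v(c_j) \ge v(n)$, and by induction up the tree the minimax value at the MAX root can only increase.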
I've read in one of my AI books that popular path-finding algorithms (A*, Dijkstra) used in simulations or games are also used to solve the well-known "15-puzzle".
Can anyone give me some pointers on how I would reduce the 15-puzzle to a graph of nodes and edges so that I could apply one of these algorithms?
If I were to treat each node in the graph as a game state then wouldn't that tree become quite large? Or is that just the way to do it?
A good heuristic for A-Star with the 15 puzzle is the number of squares that are in the wrong location. Because you need at least 1 move per square that is out of place, the number of squares out of place is guaranteed to be less than or equal to the number of moves required to solve the puzzle, making it an appropriate heuristic for A-Star.
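As a sketch of that heuristic (the flat-array board layout with 0 for the blank, and the goal state {1, 2, ..., 15, 0}, are assumptions made for the example):

// Number of tiles out of place, ignoring the blank (0).
static int misplacedTiles(int[] board) {
    int count = 0;
    for (int i = 0; i < board.length; i++) {
        int goal = (i + 1) % board.length;       // tile that belongs at index i
        if (board[i] != 0 && board[i] != goal) count++;
    }
    return count;
}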
A quick Google search turns up a couple papers that cover this in some detail: one on Parallel Combinatorial Search, and one on External-Memory Graph Search
General rule of thumb when it comes to algorithmic problems: someone has likely done it before you, and published their findings.
Here is an assignment for the 8-puzzle problem that works through the A* algorithm in some detail, but is also fairly straightforward:
http://www.cs.princeton.edu/courses/archive/spring09/cos226/assignments/8puzzle.html
The graph-theoretic way to solve the problem is to imagine every configuration of the board as a vertex of the graph and then use a breadth-first search with pruning, based on something like the Manhattan distance of the board, to derive a shortest path from the starting configuration to the solution.
One problem with this approach is that for any n x n board where n > 3, the game space becomes so large that it is not clear how you can efficiently mark the visited vertices. In other words, there is no obvious way to assess whether the current configuration of the board is identical to one that has previously been discovered through traversing some other path. Another problem is that the graph size grows so quickly with n (it is approximately (n^2)!) that it is just not suitable for a brute-force attack, as the number of paths becomes computationally infeasible to traverse.
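One practical note on the marking problem for n = 4: each of the 16 cells holds a value 0-15, so a whole configuration packs into a single 64-bit long (4 bits per cell) that can serve as a hash-set key. That makes the identity test cheap; the real obstacle remains the sheer number of states, as argued above. A sketch, assuming a flat-array board:

import java.util.*;

// Pack a 4x4 board (cell values 0-15, 0 = blank) into one long, 4 bits per cell.
static long encode(int[] board) {
    long key = 0;
    for (int v : board) key = (key << 4) | v;
    return key;
}

// Usage: keep a Set<Long> of encoded boards and skip any configuration
// whose encode() value is already in the set.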
This paper by Ian Parberry, A Real-Time Algorithm for the (n^2 − 1)-Puzzle, describes a simple greedy algorithm that iteratively arrives at a solution by completing the first row, then the first column, then the second row... It arrives at a solution almost immediately; however, the solution is far from optimal. Essentially, it solves the problem the way a human would, without leveraging any computational muscle.
This problem is closely related to that of solving the Rubik's cube. The graph of all game states is too large to solve by brute force, but there is a fairly simple 7-step method that can be used by a dexterous human to solve any cube in about 1 to 2 minutes. This path is of course non-optimal. By learning to recognise the patterns that define sequences of moves, the speed can be brought down to 17 seconds. However, this feat by Jiri is somewhat superhuman!
The method Parberry describes moves only one tile at a time; one imagines that the algorithm could be sped up by employing Jiri's dexterity and moving multiple tiles at a time. This would not, as Parberry proves, reduce the path length below order n^3, but it would reduce the coefficient of the leading term.
Remember that A* will search through the problem space proceeding down the most likely path to the goal as defined by your heuristic.
Only in the worst case will it end up having to flood-fill the entire problem space; this tends to happen when there is no actual solution to your problem.
Just use the game tree. Remember that a tree is a special form of graph.
In your case the children of each node will be the game positions reached after you make one of the moves available at the current node.
Here you go http://www.heyes-jones.com/astar.html
Also, be mindful that with the A-Star algorithm, at least, you will need to figure out an admissible heuristic to determine whether a possible next step is closer to the goal than another step.
From my experience with solving the 8-puzzle: you need to create nodes and keep track of each step taken, compute the Manhattan distance for each of the possible next steps, take the one with the shortest distance, update the nodes, and continue until you reach the goal.
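For reference, the Manhattan-distance step above could be computed like this (the row-major flat-array board with 0 as the blank is an assumed representation):

// Sum of Manhattan distances of each tile from its goal cell, blank excluded.
static int manhattan(int[] board, int width) {
    int total = 0;
    for (int i = 0; i < board.length; i++) {
        int tile = board[i];
        if (tile == 0) continue;                  // the blank doesn't count
        int goalIndex = tile - 1;                 // tile t belongs at index t-1
        total += Math.abs(i / width - goalIndex / width)    // row distance
               + Math.abs(i % width - goalIndex % width);   // column distance
    }
    return total;
}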