Genetic Algorithm Sudoku - optimizing mutation - artificial-intelligence

I am in the process of writing a genetic algorithm to solve Sudoku puzzles and was hoping for some input. The algorithm solves puzzles occasionally (about 1 out of 10 times on the same puzzle, with a max of 1,000,000 iterations), and I am trying to get a little input about mutation rates, repopulation, and splicing. Any input is greatly appreciated, as this is brand new to me and I feel like I am not doing things 100% correctly.
A quick overview of the algorithm
Fitness Function
Counts the number of unique values (1 through 9) in each row, column, and 3x3 sub-box. Each unit's unique-value count is divided by 9, giving a floating-point value between 0 and 1, and the sum over all 27 units is divided by 27, giving a total fitness between 0 and 1, where 1 indicates a solved puzzle.
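For concreteness, here is a sketch of that computation in Java (assuming the board is a 9x9 int[][] holding digits 1 through 9; names are illustrative):

import java.util.HashSet;
import java.util.Set;

// Each of the 27 units (9 rows, 9 columns, 9 boxes) contributes
// uniqueCount / 9; the sum is divided by 27, so 1.0 means solved.
static double fitness(int[][] board) {
    double total = 0.0;
    for (int i = 0; i < 9; i++) {
        Set<Integer> row = new HashSet<>();
        Set<Integer> col = new HashSet<>();
        Set<Integer> box = new HashSet<>();
        for (int j = 0; j < 9; j++) {
            row.add(board[i][j]);
            col.add(board[j][i]);
            box.add(board[3 * (i / 3) + j / 3][3 * (i % 3) + j % 3]);
        }
        total += (row.size() + col.size() + box.size()) / 9.0;
    }
    return total / 27.0;
}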
Population Size:
100
Selection:
Roulette-wheel method. Each chromosome is selected at random, where chromosomes with higher fitness values have a proportionally better chance of selection.
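A sketch of this selection in Java (reusing the fitness function above; linear fitness proportionality is my assumption):

import java.util.List;
import java.util.Random;

// Roulette wheel: spin a value in [0, totalFitness) and walk the
// population until the cumulative fitness passes the spin point.
static int[][] rouletteSelect(List<int[][]> population, Random rng) {
    double totalFitness = 0.0;
    for (int[][] board : population) totalFitness += fitness(board);
    double spin = rng.nextDouble() * totalFitness;
    for (int[][] board : population) {
        spin -= fitness(board);
        if (spin <= 0) return board;
    }
    return population.get(population.size() - 1); // floating-point fallback
}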
Reproduction:
Two randomly selected chromosomes/boards swap a randomly selected subset (a row, column, or 3x3 box). The choice of subset (which row, column, or box) is random. The resulting boards are introduced into the population (see the sketch below).
Reproduction Rate: 12% of population per cycle
There are six reproductions per iteration, resulting in 12 new chromosomes per cycle of the algorithm.
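A sketch of this operator in Java for the row case (columns and boxes are analogous; deepCopy is a small helper defined here for completeness):

import java.util.Random;

// Children are copies of the parents with one randomly chosen row swapped.
static int[][][] crossover(int[][] parentA, int[][] parentB, Random rng) {
    int[][] childA = deepCopy(parentA);
    int[][] childB = deepCopy(parentB);
    int row = rng.nextInt(9);
    int[] tmp = childA[row];
    childA[row] = childB[row];
    childB[row] = tmp;
    return new int[][][] { childA, childB };
}

static int[][] deepCopy(int[][] board) {
    int[][] copy = new int[9][];
    for (int r = 0; r < 9; r++) copy[r] = board[r].clone();
    return copy;
}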
Mutation: mutation is applied to 2 percent of the population after 10 iterations with no improvement in the highest fitness.
Listed below are the three mutation methods which have varying weights of selection probability.
1: Swap randomly selected numbers. The method selects two random numbers and swaps them everywhere they occur on the board. This method seems to have the greatest impact early in the algorithm's growth pattern. 25% chance of selection.
2: Introduce random changes: randomly select two cells and change their values. This method seems to help keep the algorithm from converging prematurely. 65% chance of selection.
3: Count the occurrences of each value on the board. A solved board contains exactly nine of each number from 1 through 9. This method takes a number that occurs fewer than 9 times and randomly swaps it with a number that occurs more than 9 times. This seems to have a positive impact on the algorithm, but should only be used sparingly (see the sketch below). 10% chance of selection.
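A sketch of method 3 in Java (note that a real implementation should also skip the puzzle's given cells, which is omitted here):

import java.util.Random;

// Find one digit that occurs fewer than 9 times and one that occurs
// more than 9 times, then overwrite a random occurrence of the surplus
// digit with the scarce one.
static void repairCounts(int[][] board, Random rng) {
    int[] counts = new int[10];
    for (int r = 0; r < 9; r++)
        for (int c = 0; c < 9; c++)
            counts[board[r][c]]++;
    int scarce = -1, surplus = -1;
    for (int v = 1; v <= 9; v++) {
        if (counts[v] < 9) scarce = v;
        if (counts[v] > 9) surplus = v;
    }
    if (scarce == -1 || surplus == -1) return; // counts already balanced
    while (true) {
        int r = rng.nextInt(9), c = rng.nextInt(9);
        if (board[r][c] == surplus) { board[r][c] = scarce; return; }
    }
}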
My main question is at what rate I should apply the mutation methods. It seems that as I increase mutation, I get faster initial results. However, as the result approaches a correct solution, I think the higher rate of change introduces too many bad chromosomes and genes into the population. With a lower rate of change, though, the algorithm seems to converge too early.
One last question is whether there is a better approach to mutation.

You can anneal the mutation rate over time to get the sort of convergence behavior you're describing. But I actually think there are probably bigger gains to be had by modifying other parts of your algorithm.
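For example, one simple annealing schedule is an exponential decay toward a floor (a sketch; the constants are placeholders to tune):

// High mutation early for exploration, decaying toward a floor so the
// population can still escape late-stage stagnation.
static double mutationRate(int generation) {
    double initial = 0.10, floor = 0.01, decay = 0.001;
    return floor + (initial - floor) * Math.exp(-decay * generation);
}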
Roulette wheel selection applies a very high degree of selection pressure in general. It tends to cause a pretty rapid loss of diversity fairly early in the process. Binary tournament selection is usually a better place to start experimenting. It's a more gradual form of pressure, and just as importantly, it's much better controlled.
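Binary tournament selection is also trivial to implement (a sketch, reusing the fitness function from the question; increasing the tournament size increases the pressure):

import java.util.List;
import java.util.Random;

// Draw two individuals uniformly at random and keep the fitter one.
static int[][] tournamentSelect(List<int[][]> population, Random rng) {
    int[][] a = population.get(rng.nextInt(population.size()));
    int[][] b = population.get(rng.nextInt(population.size()));
    return fitness(a) >= fitness(b) ? a : b;
}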
With a less aggressive selection mechanism, you can afford to produce more offspring, since you don't have to worry about producing so many near-copies of the best one or two individuals. Rather than 12% of the population producing offspring (possibly less because of repetition of parents in the mating pool), I'd go with 100%. You don't necessarily need to literally make sure every parent participates, but just generate the same number of offspring as you have parents.
Some form of mild elitism will probably then be helpful so that you don't lose good parents. Maybe keep the best 2-5 individuals from the parent population if they're better than the worst 2-5 offspring.
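A sketch of that replacement step (names are illustrative; elite would be 2-5 as suggested):

import java.util.Comparator;
import java.util.List;

// Let the best few parents displace the worst offspring, but only when
// the parent is actually better.
static void applyElitism(List<int[][]> parents, List<int[][]> offspring, int elite) {
    parents.sort(Comparator.comparingDouble((int[][] b) -> -fitness(b)));  // best first
    offspring.sort(Comparator.comparingDouble((int[][] b) -> fitness(b))); // worst first
    for (int i = 0; i < elite; i++) {
        if (fitness(parents.get(i)) > fitness(offspring.get(i))) {
            offspring.set(i, parents.get(i));
        }
    }
}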
With elitism, you can use a bit higher mutation rate. All three of your operators seem useful. (Note that #3 is actually a form of local search embedded in your genetic algorithm. That's often a huge win in terms of performance. You could in fact extend #3 into a much more sophisticated method that looped until it couldn't figure out how to make any further improvements.)
I don't see an obvious better/worse set of weights for your three mutation operators. I think at that point, you're firmly within the realm of experimental parameter tuning. Another idea is to inject a bit of knowledge into the process and, for example, say that early on in the process, you choose between them randomly. Later, as the algorithm is converging, favor the mutation operators that you think are more likely to help finish "almost-solved" boards.

I once made a fairly competent Sudoku solver, using GA. Blogged about the details (including different representations and mutation) here:
http://fakeguido.blogspot.com/2010/05/solving-sudoku-with-genetic-algorithms.html

Related

Why does an AI genetic algorithm give an equally fit or fitter solution in each generation?

The genetic algorithm is a meta-heuristic algorithm. The claim is that the population evolves into a better (fitter) solution each generation. Why is that?
I am pretty new at AI but want to improve step by step ;-) So please help me understand this algorithm.
At each iteration, a new generation of the population is created. Why will it contain an equally fit or fitter individual?
Create a population of Individuals
WHILE population does not contain an optimal individual AND maximum number of generations not reached
    call: evolve the population
    print fittest of population

method: evolve the population
    Create a new population
    FOR the number of individuals in the population
        Select the fittest individual out of 5 random Individuals
        Select the fittest individual out of 5 random Individuals
        Store the crossover of these (parent) Individuals in the new population
    FOR the number of individuals in the population
        mutate the individual
Is it possible that the next population contains a less fit solution?
Note: Apology for any grammatical mistakes.
Ques: Why will it contain an equally fit or fitter individual?
Ans: Say the algorithm starts with a population of a certain size (say 30 individuals/solutions) and runs for a certain number of generations (say 30).
Initially, each individual is assigned a fitness score, computed by the fitness function (or, in some setups, assigned randomly).
In each generation, all individuals go through some steps (selection, crossover, mutation). In the selection step, individuals with higher fitness values are more likely to be selected. During the crossover process, individuals with higher fitness values are more likely to be chosen as parents, and similarly in the mutation step.
(NOTE: a probability value is used in the selection, crossover, and mutation steps.)
Thus, in the next generation, the new population is more likely to perform better than the previous generation.
For details, you can check this book: https://www.amazon.com/Hands-Genetic-Algorithms-Python-intelligence-ebook/dp/B0842372RQ
Ques: Is it possible that the next population contains a less fit solution?
Ans: Yes. Due to crossover and mutation, some of the offspring (new individuals) may differ so much from their best parent (the individual selected for crossover/mutation) that they do not give a better result.
However, over the generations, the individuals ultimately get better.
It could also contain a less fit solution in order to escape a local optimum. That's why the global best solution must be remembered separately, unless one individual is guaranteed to contain it and survive.

Fitness Function alternatives in Genetic Algorithms for game AI

I have created a Gomoku (5 in a row) AI using alpha-beta pruning. It makes moves on a not-so-stupid level. First, let me vaguely describe the grading function of the alpha-beta algorithm.
When it receives a board as input, it first finds all runs of stones and gives each one of 4 possible scores depending on its usefulness as a threat, which is decided by length. It then returns the sum of all the run scores.
But the problem is that I chose the scores (4 in total) by hand, and they don't seem like the best choices. So I've decided to use a genetic algorithm to generate these scores. Each gene will be one of the 4 scores, so, for example, the chromosome of the hard-coded scores would be: [5, 40000, 10000000, 50000]
However, because I'm using the genetic algorithm to create the scores of the grading function, I'm not sure how I should implement the genetic fitness function. So instead, I have thought of the following:
Instead of using a fitness function, I'll just merge the selection process together: If I have 2 chromosomes, A and B, and need to select one, I'll simulate a game using both A and B chromosomes in each AI, and select the chromosome which wins.
1. Is this a viable replacement for the fitness function?
2. Because of the characteristics of the alpha-beta algorithm, I need to give the max score to the win condition, which in most cases is set to infinity. However, because I can't use infinity, I just used an absurdly large number. Do I also need to add this score to the chromosome? Or, because it's insignificant and doesn't change the values of the grading function, should I leave it as a constant?
3. When initially creating chromosomes, random generation following a standard distribution is said to be optimal. However, the genes in my case have a large deviation. Would it still be okay to generate the chromosomes randomly?
Is this a viable replacement for the fitness function?
Yes, it is. It's a fairly common way to define a fitness function for board games. A single round is probably not enough, though (you have to experiment).
A slight variant is something like:
double fitness(Agent_k)
    fit = 0
    repeat M times
        randomly extract an individual Agent_i (i <> k)
        switch (result of Agent_k vs Agent_i)
            case Agent_k wins: fit = fit + 1
            case Agent_i wins: fit = fit - 2
            case draw:         fit doesn't change
    return fit
i.e. an agent plays M games against randomly selected opponents from the population (with replacement, but avoiding self-matches).
Increasing M decreases the noise, but longer simulation times are required (M = 5 is a value used in some chess-related experiments).
2. Because of the characteristics of the alpha-beta algorithm...
Not sure of the question. A very large value is a standard approach for a static evaluation function signaling a winning condition.
The exact value isn't very important and probably shouldn't be subject to optimization.
3. When initially creating chromosomes, random generation following a standard distribution is said to be optimal. However, the genes in my case have a large deviation. Would it still be okay to generate the chromosomes randomly?
This is somewhat related to the specific genetic algorithm "flavor" you are going to use.
A standard genetic algorithm could work better with not completely random initial values.
Other variants (e.g. Differential Evolution) could be less sensitive to this aspect.
Take also a look at this question / answer: Getting started with machine learning a zero sum game?

Number of simulations per node in Monte Carlo tree search

In the MCTS algorithm described on Wikipedia, exactly one playout (simulation) is performed per node selection. Now, I am experimenting with this algorithm in a simple connect-k game. I wonder: in practice, do we perform more playouts to reduce the variance?
I tried the original algorithm with exactly one random (unbiased) playout. The result is bad compared to my heuristic search with alpha-beta pruning; it converges very slowly. When I perform 500 playouts instead, the noise is much lower. However, each node simulation is then so slow that the algorithm cannot explore other parts of the tree in the given time, so it sometimes misses the most critical move.
I then added the AMAF heuristic (in particular, with the RAVE transition) to the basic MCTS. I don't notice much difference with 500 playouts, perhaps because the variance is already low. I haven't analyzed the result with 1 playout yet.
Could anyone give me any insights?
Typically, you'd do exactly one play-out per selection step. However, subsequent selection steps can go through the same node multiple times.
Consider, for example, a case where there are only two moves available in the root node. If you then run, let's say, 10,000 complete iterations of MCTS (where one iteration = Selection + Expansion + Play-out + Backpropagation), each of the two nodes below the root node will get selected roughly 5,000 times (or maybe one gets selected 9,000 times and the other 1,000 times if the first is clearly a better option than the second, but still, both get selected more than once).
Does this match what you are currently doing in your implementation? If not, try providing some code that you currently have so that we can see where it goes wrong. But if this is how you implemented it (which is how it should be), then there should be no problem with doing only one play-out per selection step.
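For reference, a minimal sketch of that loop in Java-like form (Node, Move, and the four phase methods are assumed/hypothetical names, not a specific library's API):

// One iteration = Selection + Expansion + Play-out + Backpropagation,
// with exactly ONE random play-out per iteration. All names are illustrative.
Node root = new Node(initialState);
for (int iter = 0; iter < 10000; iter++) {
    Node leaf = select(root);        // descend (e.g. by UCB1) to a node that is not fully expanded
    Node child = expand(leaf);       // add one unexplored child (no-op at terminal states)
    double result = playout(child);  // a single random play-out to the end of the game
    backpropagate(child, result);    // update visit/win statistics along the path to the root
}
Move best = mostVisitedChild(root);  // the variance is reduced because the same nodes
                                     // are revisited across many iterations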

Crossover function for a genetic algorithm

I am writing a timetable generator in Java, using AI approaches to satisfy the hard constraints and help find an optimal solution. So far I have implemented iterative construction (a most-constrained-first heuristic) and simulated annealing, and I'm in the process of implementing a genetic algorithm.
Some info on the problem, and how I represent it:
I have a set of events, rooms, features (that events require and rooms satisfy), students, and slots.
The problem consists of assigning a slot and a room to each event, such that no student is required to attend two events in one slot and all the assigned rooms fulfill the necessary requirements.
I have a grading function that, for each set of assignments, grades the soft-constraint violations, so the goal is to minimize this.
The way I am implementing the GA: I start with a population generated by the iterative construction (which can leave events unassigned) and then do the normal steps: evaluate, select, cross, mutate, and keep the best. Rinse and repeat.
My problem is that the solution appears to improve too little. No matter what I do, the population tends toward a random fitness value and gets stuck there. This fitness always differs between runs, but a lower limit nevertheless appears.
I suspect that the problem is in my crossover function, and here is the logic behind it:
Two assignments are randomly chosen to be crossed. Let's call them assignments A and B. For each of B's events, do the following procedure (the order in which B's events are selected is random):
Get the corresponding event in A and compare the assignments. Three different situations can arise:
If only one of them is unassigned, and it is possible to replicate the other assignment in the child, that assignment is chosen.
If both of them are assigned, but only one of them creates no conflicts when assigned to the child, that one is chosen.
If both of them are assigned and neither creates a conflict, one of them is randomly chosen.
In any other case, the event is left unassigned.
This creates a child with some of one parent's assignments and some of the other's, so it seems to me it is a valid function. Moreover, it does not break any hard constraints.
As for mutation, I am using the neighboring function from my SA to produce another assignment based on one of the children, and then replacing that child.
So again: with this setup and an initial population of 100, the GA runs and always tends to stabilize at some random (high) fitness value. Can someone give me a pointer as to what I could possibly be doing wrong?
Thanks
Edit: formatting and clarified some things
I think a GA only makes sense if part of the solution (part of the vector) has significance as a stand-alone part of the solution, so that the crossover function exchanges valid parts of a solution between two solution vectors. Much like a certain part of a DNA sequence controls or affects a specific aspect of the individual (eye color is one gene, for example). In this problem, however, the different parts of the solution vector affect each other, making the crossover almost meaningless. This results (my guess) in the algorithm converging on a single solution rather quickly, with the different crossovers and mutations having only a negative effect on the fitness.
I don't believe a GA is the right tool for this problem.
If you could please provide the original problem statement, I will be able to give you a better solution. Here is my answer for the present moment.
A genetic algorithm is not the best tool for satisfying hard constraints. This is an assignment problem that can be solved using an integer program, a variant of a linear program.
Linear programs let users minimize or maximize some goal modeled by an objective function (your grading function). The objective function is defined as a sum of individual decisions (decision variables), each weighted by its contribution to the objective. Linear programs allow decision variables to take fractional values; integer programs force them to take integer values.
So, what are your decisions? Your decisions are to assign students to slots, and these slots have features which events require and rooms satisfy.
In your case, you want to maximize the number of students that are assigned to a slot.
You also have constraints. In your case, a student may only attend at most one event.
The website below provides a good tutorial on how to model integer programs.
http://people.brunel.ac.uk/~mastjjb/jeb/or/moreip.html
For a java specific implementation, use the link below.
http://javailp.sourceforge.net/
import net.sf.javailp.*; // Java ILP classes used below (SolverFactory, Problem, Linear, ...)

SolverFactory factory = new SolverFactoryLpSolve(); // use lp_solve
factory.setParameter(Solver.VERBOSE, 0);
factory.setParameter(Solver.TIMEOUT, 100); // set timeout to 100 seconds
/**
* Constructing a Problem:
* Maximize: 143x+60y
* Subject to:
* 120x+210y <= 15000
* 110x+30y <= 4000
* x+y <= 75
*
* With x,y being integers
*
*/
Problem problem = new Problem();
Linear linear = new Linear();
linear.add(143, "x");
linear.add(60, "y");
problem.setObjective(linear, OptType.MAX);
linear = new Linear();
linear.add(120, "x");
linear.add(210, "y");
problem.add(linear, "<=", 15000);
linear = new Linear();
linear.add(110, "x");
linear.add(30, "y");
problem.add(linear, "<=", 4000);
linear = new Linear();
linear.add(1, "x");
linear.add(1, "y");
problem.add(linear, "<=", 75);
problem.setVarType("x", Integer.class);
problem.setVarType("y", Integer.class);
Solver solver = factory.get(); // you should use this solver only once for one problem
Result result = solver.solve(problem);
System.out.println(result);
/**
* Extend the problem with x <= 16 and solve it again
*/
problem.setVarUpperBound("x", 16);
solver = factory.get();
result = solver.solve(problem);
System.out.println(result);
// Results in the following output:
// Objective: 6266.0 {y=52, x=22}
// Objective: 5828.0 {y=59, x=16}
I would start by measuring what's going on directly. For example, what fraction of the assignments are falling under your "any other case" catch-all and therefore doing nothing?
Also, while we can't really tell from the information given, it doesn't seem any of your moves can do a "swap", which may be a problem. If a schedule is tightly constrained, then once you find something feasible, it's likely that you won't be able to just move a class from room A to room B, as room B will be in use. You'd need to consider ways of moving a class from A to B along with moving a class from B to A.
You can also sometimes improve things by allowing constraints to be violated. Instead of forbidding crossover from ever violating a constraint, you can allow it, but penalize the fitness in proportion to the "badness" of the violation.
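A sketch of what that fitness might look like (Timetable, the scoring helpers, and HARD_PENALTY are hypothetical names, not from the question):

// Lower is better: the soft-constraint grade plus a weighted count of
// hard-constraint violations (student clashes, missing room features, ...).
static double penalizedFitness(Timetable t) {
    return softConstraintScore(t) + HARD_PENALTY * hardViolationCount(t);
}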
Finally, it's possible that your other operators are the problem as well. If your selection and replacement operators are too aggressive, you can converge very quickly to something that's only slightly better than where you started. Once you converge, it's very difficult for mutations alone to kick you back out into a productive search.
I think there is nothing wrong with a GA for this problem; some people just hate genetic algorithms no matter what.
Here is what I would check:
First, you mention that your GA stabilizes at a random "high" fitness value, but isn't this a good thing? Does "high" fitness correspond to good or bad in your case? It is possible you are favoring high fitness in one part of your code and low fitness in another, thus causing the seemingly random result.
I think you want to be a bit more careful about the logic behind your crossover operation. Basically, there are many situations in all 3 cases where making any of those choices would not increase the fitness of the crossed-over individual at all, yet you are still using up a "resource" (an assignment that could potentially be used for another class/student/etc.). I realize that a GA will traditionally make assignments via crossover that cause worse behavior, but you are already performing a bit of computation in the crossover phase anyway; why not choose an assignment that will actually improve fitness, or maybe not cross at all?
Optional comment to consider: although your iterative construction approach is quite interesting, it may give you an overly complex gene representation that could be causing problems with your crossover. Is it possible to model a single individual solution as an array (or 2D array) of bits or integers? Even if the array turns out to be very long, it may be worth it to use a simpler crossover procedure. I recommend Googling "ga gene representation time tabling"; you may find an approach that you like more and that can more easily scale to many individuals (100 is a rather small population size for a GA, but I understand you are still testing; also, how many generations?).
One final note: I am not sure what language you are working in, but if it is Java and you don't NEED to code the GA by hand, I would recommend taking a look at ECJ. Even if you have to code it by hand, it could help you develop your representation or breeding pipeline.
Newcomers to GA can make any of a number of standard mistakes:
In general, when doing crossover, make sure that the child has some chance of inheriting that which made the parent or parents winner(s) in the first place. In other words, choose a genome representation where the "gene" fragments of the genome have meaningful mappings to the problem statement. A common mistake is to encode everything as a bitvector and then, in crossover, to split the bitvector at random places, splitting up the good thing the bitvector represented and thereby destroying the thing that made the individual float to the top as a good candidate. A vector of (limited) integers is likely to be a better choice, where integers can be replaced by mutation but not by crossover. Not preserving something (doesn't have to be 100%, but it has to be some aspect) of what made parents winners means you are essentially doing random search, which will perform no better than linear search.
In general, use much less mutation than you might think. Mutation is there mainly to keep some diversity in the population. If your initial population doesn't contain anything with a fractional advantage, then your population is too small for the problem at hand and a high mutation rate will, in general, not help.
In this specific case, your crossover function is too complicated. Do not ever put constraints aimed at keeping all solutions valid into the crossover. Instead the crossover function should be free to generate invalid solutions and it is the job of the goal function to somewhat (not totally) penalize the invalid solutions. If your GA works, then the final answers will not contain any invalid assignments, provided 100% valid assignments are at all possible. Insisting on validity in the crossover prevents valid solutions from taking shortcuts through invalid solutions to other and better valid solutions.
I would recommend anyone who thinks they have written a poorly performing GA to conduct the following test: Run the GA a few times, and note the number of generations it took to reach an acceptable result. Then replace the winner selection step and goal function (whatever you use - tournament, ranking, etc) with a random choice, and run it again. If you still converge roughly at the same speed as with the real evaluator/goal function then you didn't actually have a functioning GA. Many people who say GAs don't work have made some mistake in their code which means the GA converges as slowly as random search which is enough to turn anyone off from the technique.

Determining which inputs to weigh in an evolutionary algorithm

I once wrote a Tetris AI that played Tetris quite well. The algorithm I used (described in this paper) is a two-step process.
In the first step, the programmer decides to track inputs that are "interesting" to the problem. In Tetris we might be interested in tracking how many gaps there are in a row because minimizing gaps could help place future pieces more easily. Another might be the average column height because it may be a bad idea to take risks if you're about to lose.
The second step is determining weights associated with each input. This is the part where I used a genetic algorithm. Any learning algorithm will do here, as long as the weights are adjusted over time based on the results. The idea is to let the computer decide how the input relates to the solution.
Using these inputs and their weights we can determine the value of taking any action. For example, if putting the straight line shape all the way in the right column will eliminate the gaps of 4 different rows, then this action could get a very high score if its weight is high. Likewise, laying it flat on top might actually cause gaps and so that action gets a low score.
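In other words, the evaluation reduces to a dot product of weights and input features (a sketch; names are illustrative):

// Hypothetical weighted evaluation: score a candidate action by the
// dot product of the learned weights and the features of the resulting state.
static double actionValue(double[] weights, double[] features) {
    double value = 0.0;
    for (int i = 0; i < weights.length; i++) value += weights[i] * features[i];
    return value;
}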
I've always wondered if there's a way to apply a learning algorithm to the first step, where we find "interesting" potential inputs. It seems possible to write an algorithm where the computer first learns what inputs might be useful, then applies learning to weigh those inputs. Has anything been done like this before? Is it already being used in any AI applications?
In neural networks, you can select 'interesting' potential inputs by finding the ones that have the strongest correlation, positive or negative, with the classifications you're training for. I imagine you can do similarly in other contexts.
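For example, you could screen each candidate input by its Pearson correlation with the training target (a self-contained sketch; the feature/target arrays are assumed to be paired observations):

// Pearson correlation coefficient; keep features with large |r|.
static double correlation(double[] feature, double[] target) {
    int n = feature.length;
    double meanF = 0, meanT = 0;
    for (int i = 0; i < n; i++) { meanF += feature[i]; meanT += target[i]; }
    meanF /= n; meanT /= n;
    double cov = 0, varF = 0, varT = 0;
    for (int i = 0; i < n; i++) {
        cov  += (feature[i] - meanF) * (target[i] - meanT);
        varF += (feature[i] - meanF) * (feature[i] - meanF);
        varT += (target[i] - meanT) * (target[i] - meanT);
    }
    return cov / Math.sqrt(varF * varT);
}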
I think I might approach the problem you're describing by feeding more primitive data to a learning algorithm. For instance, a Tetris game state may be described by the list of occupied cells, and a string of bits describing this information would be a suitable input to that stage of the learning algorithm. Actually training on that is still challenging, though: how do you know whether the results are useful? I suppose you could roll the whole algorithm into a single blob, where the algorithm is fed the successive states of play and the output is just the block placements, with higher-scoring algorithms selected for future generations.
Another choice might be to use a large corpus of plays from other sources, such as recorded plays from human players or a hand-crafted AI, and select the algorithms whose outputs bear a strong correlation to some interesting fact or another from the future play, such as the score earned over the next 10 moves.
Yes, there is a way.
If you choose M candidate features, there are 2^M subsets, so there is a lot to look at.
I would do the following:
For each subset S:
    run your code to optimize the weights W
    save S and the corresponding W
Then, for each S-W pair, you can run G games and save the score L for each one. Now you have a table like this:
feature1  feature2  feature3  featureM  subset_code  game_number  scoreL
1         0         1         1         S1           1            10500
1         0         1         1         S1           2            6230
...
0         1         1         0         S2           G + 1        30120
0         1         1         0         S2           G + 2        25900
Now you can run some feature selection algorithm (PCA, for example) and decide which features are worth keeping to explain scoreL.
A tip: when running the code to optimize W, seed the random number generator so that each different "evolving brain" is tested against the same piece sequence.
I hope it helps!
