What are the differences between Monte Carlo and Markov chain techniques?

I want to develop a RISK board game, which will include an AI for computer players. Moreover, I read two articles, this and this, about it, and I realised that I must learn about Monte Carlo simulation and Markov chain techniques. I thought I would have to use these techniques together, but I gather they are different techniques, each relevant to calculating probabilities over transition states.
So, could anyone explain the important differences between them, and their advantages and disadvantages?
Finally, which one would you prefer if you were to implement an AI for a RISK game?
Here you can find simple, deterministic probabilities for the outcomes of a battle in the RISK board game, and the brute-force algorithm used. There is a tree diagram specifying all possible states. Should I use Monte Carlo or a Markov chain on this tree?

Okay, so I skimmed the articles to get a sense of what they were doing. Here is my overview of the terms you asked about:
A Markov Chain is simply a model of how your system moves from state to state. Developing a Markov model from scratch can sometimes be difficult, but once you have one in hand, they're relatively easy to use, and relatively easy to understand. The basic idea is that your game will have certain states associated with it; that as part of the game, you will move from state to state; and, critically, that this movement from state to state happens based on probability and that you know these probabilities.
Once you have that information, you can represent it all as a graph, where nodes are states and arcs between states (labelled with probabilities) are transitions. You can also represent it as a matrix that satisfies certain constraints, or several other more exotic data structures.
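To make that concrete, here is a minimal sketch (in Java) of a Markov chain as a row-stochastic matrix plus a sampling step. The two states and their transition probabilities are made-up placeholders, not anything taken from the articles:

import java.util.Random;

public class MarkovChain {
    // transition[i][j] = probability of moving from state i to state j;
    // each row must sum to 1.0 (the "certain constraints" mentioned above)
    private final double[][] transition;
    private final Random rng = new Random();

    public MarkovChain(double[][] transition) {
        this.transition = transition;
    }

    // Sample the next state given the current one.
    public int step(int state) {
        double roll = rng.nextDouble();
        double cumulative = 0.0;
        for (int next = 0; next < transition[state].length; next++) {
            cumulative += transition[state][next];
            if (roll < cumulative) return next;
        }
        return transition[state].length - 1; // guard against rounding error
    }

    public static void main(String[] args) {
        // two hypothetical states: 0 = "attacker ahead", 1 = "defender ahead"
        MarkovChain chain = new MarkovChain(new double[][] {
            {0.7, 0.3},
            {0.4, 0.6}
        });
        int state = 0;
        for (int t = 0; t < 10; t++) state = chain.step(state);
        System.out.println("state after 10 steps: " + state);
    }
}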
The short article is actually all about the Markov Chain approach, but-- and this is important-- it is using that approach only as a quick means of estimating what will happen if army A attacks a territory with army B defending it. It's a nice use of the technique, but it is not an AI for Risk, it is merely a module in the AI helping to figure out the likely results of an attack.
Monte Carlo techniques, by contrast, are estimators. Once you have a model of something, whether a Markov model or any other, you often find yourself in the position of wanting to estimate something about it. (Often it happens to be something that you can, if you squint hard enough, put into the form of an integral of something that happens to be very hard to work with in that format.) Monte Carlo techniques just sample randomly and aggregate the results into an estimate of what's going to happen.
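As a tiny illustration of that "integral in disguise" point, here is the textbook Monte Carlo estimate of pi: sample random points in the unit square and aggregate how many fall inside the quarter circle. The trial count is arbitrary:

import java.util.Random;

public class MonteCarloPi {
    public static void main(String[] args) {
        Random rng = new Random();
        int trials = 1_000_000;
        int inside = 0;
        for (int i = 0; i < trials; i++) {
            double x = rng.nextDouble();
            double y = rng.nextDouble();
            if (x * x + y * y <= 1.0) inside++;
        }
        // (area of quarter circle) / (area of square) = pi / 4
        System.out.println("pi ~= " + 4.0 * inside / trials);
    }
}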
Monte Carlo techniques are not, in my opinion, AI techniques. They are a very general purpose technique that happen to be useful for AI, but they are not AI per se. (You can say the same thing about Markov models, but the statement is weaker because Markov models are so extremely useful for planning in AI, that entire philosophies of AI are built around the technique. Markov models are used elsewhere, though, as well.)
So that is what they are. You also asked which one I would use if I had to implement a Risk AI. Well, neither of those is going to be sufficient. Monte Carlo, as I said, is not an AI technique, it's a general math tool. And Markov Models, while they could in theory represent the entirety of a game of Risk, are going to end up being very unwieldy: You would need to represent every state of the game, meaning every possible configuration of armies in territories and every possible configuration of cards in hands, etc. (I am glossing over many details, here: There are a lot of other difficulties with this approach.)
The core of Wolf's thesis is neither the Markov approach nor the Monte Carlo approach; it is actually what he describes as the evaluation function. This is the heart of the AI problem: how to figure out what action is best. The Monte Carlo method in Blatt's paper describes a method to figure out what the expected result of an action is, but that is not the same as figuring out what action is best. Moreover, there's a low-key statement in Wolf's thesis about look-ahead being hard to perform in Risk because the game trees become so very large, which is what led him (I think) to focus so much on the evaluation function.
So my real advice would be this: Read up on search-tree approaches, like minimax, alpha-beta pruning and especially expecti-minimax. You can find good treatments of these early in Russell and Norvig, or even on Wikipedia. Try to understand why these techniques work in general, but are cumbersome for Risk. That will lead you to some discussion of board evaluation techniques. Then go back and look at Wolf's thesis, focusing on his action evaluation function. And finally, focus on the way he tries to automatically learn a good evaluation function.
That is a lot of work. But Risk is not an easy game to develop an AI for.
(If you just want to figure out the expected results of a given attack, though, I'd say go for Monte Carlo. It's clean, very intuitive to understand, and very easy to implement. The only difficulty-- and it's not a big one-- is making sure you run enough trials to get a good result.)
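For instance, a sketch of that approach for a single attack might look like the following. I'm assuming the standard Risk dice rules (attacker rolls up to three dice, defender up to two, highest dice compared pairwise, ties go to the defender), and the army counts and trial count are placeholders:

import java.util.Arrays;
import java.util.Random;

public class RiskBattle {
    static final Random RNG = new Random();

    static int[] roll(int dice) {
        int[] r = new int[dice];
        for (int i = 0; i < dice; i++) r[i] = RNG.nextInt(6) + 1;
        Arrays.sort(r);  // ascending; read from the end for the highest dice
        return r;
    }

    // Simulate one whole battle; returns true if the attacker conquers.
    static boolean attackerWins(int attackers, int defenders) {
        while (attackers > 1 && defenders > 0) {
            int[] a = roll(Math.min(attackers - 1, 3));
            int[] d = roll(Math.min(defenders, 2));
            int comparisons = Math.min(a.length, d.length);
            for (int i = 1; i <= comparisons; i++) {
                if (a[a.length - i] > d[d.length - i]) defenders--;
                else attackers--;  // defender wins ties
            }
        }
        return defenders == 0;
    }

    public static void main(String[] args) {
        int trials = 100_000, wins = 0;
        for (int i = 0; i < trials; i++)
            if (attackerWins(10, 7)) wins++;
        System.out.println("P(attacker wins) ~= " + (double) wins / trials);
    }
}

Running more trials shrinks the error of the estimate (roughly as one over the square root of the trial count), which is the "enough trials" caveat above.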

Markov chains are simply a set of transitions and their probabilities, assuming no memory of past events.
Monte Carlo simulations are repeated samplings of random walks over a set of probabilities.
You can use both together by using a Markov chain to model your probabilities and then a Monte Carlo simulation to examine the expected outcomes.
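A sketch of that combination, with a completely made-up absorbing chain (states 3 and 4 standing in for "attacker wins" and "defender wins"): model the transitions as a matrix, then run many random walks over it and aggregate where they end up:

import java.util.Random;

public class MarkovMonteCarlo {
    public static void main(String[] args) {
        Random rng = new Random();
        // states 0..2 are transient; 3 and 4 are absorbing outcomes
        double[][] p = {
            {0.2, 0.5, 0.0, 0.3, 0.0},
            {0.0, 0.2, 0.4, 0.2, 0.2},
            {0.0, 0.0, 0.1, 0.4, 0.5},
            {0.0, 0.0, 0.0, 1.0, 0.0},
            {0.0, 0.0, 0.0, 0.0, 1.0}
        };
        int trials = 100_000, attackerWins = 0;
        for (int t = 0; t < trials; t++) {
            int s = 0;                  // each walk starts in state 0
            while (s != 3 && s != 4) {  // walk until absorbed
                double roll = rng.nextDouble(), cum = 0.0;
                for (int next = 0; next < p[s].length; next++) {
                    cum += p[s][next];
                    if (roll < cum) { s = next; break; }
                }
            }
            if (s == 3) attackerWins++;
        }
        System.out.println("P(attacker wins) ~= " + (double) attackerWins / trials);
    }
}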
For Risk I don't think I would use Markov chains because I don't see an advantage. A Monte Carlo analysis of your possible moves could help you come up with a pretty good AI in conjunction with a suitable fitness function.
I know this glossed over a lot but I hope it helped.

Related

Artificial neural networks and Markov Processes

I read a little about ANNs and Markov processes. Can someone please help me understand where exactly a Markov process fits in with ANNs and genetic algorithms? Or simply, what could be the role of Markov processes in this scenario?
Thanks a lot
The accepted answer is correct, but I just wanted to add a couple of details.
A Markov process is any system that goes through a series of states randomly in such a way that if you know the current state, you can predict the likelihood of each possible next state. A common example is the weather; if it's sunny now, you can predict that it is likely to be sunny later, regardless of previous weather.
A genetic algorithm is one that begins by generating a bunch of arbitrary random solutions to a given problem. It then checks these solutions to see how good they are. The 'bad' solutions are discarded, the 'good' solutions are kept and combined together to form (hopefully) better solutions, just like successful members of a species breeding a new generation. In theory, repeating this process will give better and better solutions until you eventually have an optimal one.
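For a concrete feel, here is a minimal GA sketch on a toy problem (maximize the number of 1-bits in a bit string). The population size, mutation rate, and keep-the-top-half selection are arbitrary choices, not the only way to do it:

import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

public class SimpleGA {
    static final Random RNG = new Random();
    static final int GENES = 32, POP = 50, GENERATIONS = 100;
    static final double MUTATION_RATE = 0.01;

    // toy fitness: count the 1-bits
    static int fitness(boolean[] g) {
        int f = 0;
        for (boolean b : g) if (b) f++;
        return f;
    }

    // single-point crossover plus per-gene mutation
    static boolean[] crossover(boolean[] a, boolean[] b) {
        int point = RNG.nextInt(GENES);
        boolean[] child = new boolean[GENES];
        for (int i = 0; i < GENES; i++) {
            child[i] = (i < point ? a[i] : b[i]);
            if (RNG.nextDouble() < MUTATION_RATE) child[i] = !child[i];
        }
        return child;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][GENES];
        for (boolean[] g : pop)
            for (int i = 0; i < GENES; i++) g[i] = RNG.nextBoolean();

        for (int gen = 0; gen < GENERATIONS; gen++) {
            // 'bad' half is discarded; 'good' half breeds the replacements
            Arrays.sort(pop, Comparator.<boolean[]>comparingInt(SimpleGA::fitness).reversed());
            for (int i = POP / 2; i < POP; i++)
                pop[i] = crossover(pop[RNG.nextInt(POP / 2)], pop[RNG.nextInt(POP / 2)]);
        }
        int best = 0;
        for (boolean[] g : pop) best = Math.max(best, fitness(g));
        System.out.println("best fitness: " + best);
    }
}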
As you can see, they aren't algorithmically related. However, genetic algorithms are often used to generate Hidden Markov Models, for example here. The basic idea is that an HMM is initialized with random weights, a 'training set' of related Markov processes is run through it, and the weights are adjusted to give the members of the training set the highest probability of occurring. This is often done in speech recognition software.
Markov processes and Artificial Neural Networks are completely different concepts.
A Markov process describes any sequence of events that follows a certain statistical property, the same way the words "Gaussian" or "random" describe a certain set of events in terms of their statistical properties.
An Artificial Neural Network is an algorithm that helps you solve problems, and is not really related to Markov processes. You could be thinking of Hidden Markov Models, which is also an algorithm. HMMs assume the underlying system is a Markov process with hidden states.

Implementing Simulated Annealing

I think I understand the basic concept of simulated annealing. It's basically adding random solutions to cover a better area of the search space at the beginning, then slowly reducing the randomness as the algorithm continues running.
I'm a little confused about how I would implement this in my genetic algorithm.
Can anyone give me a simple explanation of what I need to do, and clarify whether my understanding of how simulated annealing works is correct?
When constructing a new generation of individuals in a genetic algorithm, there are three random aspects to it:
Matching parent individuals to parent individuals, with preference according to their proportional fitness,
Choosing the crossover point, and,
Mutating the offspring.
There's not much you can do about the second one, since that's typically a uniform random distribution. You could conceivably try to add some random factor to the roulette wheel as you're selecting your parent individuals, and then slowly decrease that random function. But that goes against the spirit of the genetic algorithm and (more importantly) I don't think it will do much good. I think it would hurt, actually.
That leaves the third factor-- change the mutation rate from high mutation to low mutation as the generations go by.
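A sketch of that idea, with made-up constants: start the mutation rate high so early generations explore broadly, then cool it geometrically each generation, never quite to zero. The call to your GA's generation step is a hypothetical hook:

public class AnnealedMutation {
    public static void main(String[] args) {
        double mutationRate = 0.20;   // high at first: explore widely
        final double floor = 0.001;   // keep a little mutation forever
        final double decay = 0.97;    // 'cooling' factor per generation
        for (int gen = 0; gen < 200; gen++) {
            // evolveOneGeneration(population, mutationRate);  // your existing GA step
            mutationRate = Math.max(floor, mutationRate * decay);
        }
        System.out.printf("final mutation rate: %.4f%n", mutationRate);
    }
}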
It's really not any more complicated than that.

How do I pick a good representation for a board game tactic for a genetic algorithm?

For my bachelor's thesis I want to write a genetic algorithm that learns to play the game of Stratego (if you don't know this game, it's probably safe to assume I said chess). I haven't done actual AI projects before, so it's an eye-opener to see how little I actually know about implementing things.
The thing I'm stuck with is coming up with a good representation for an actual strategy. I'm probably making some thinking error, but some problems I encounter:
I don't assume you would have a representation containing a lot of transitions between board positions, since that would just be brute-forcing it, right?
What could the branches of a decision tree look like? Any representation I come up with doesn't have interchangeable branches... If I were to use a bit string, which is apparently also common, what would the bits represent?
Do I assign scores to the distance between certain pieces? How would I represent that?
I think I ought to know these things after three-plus years of study, so I feel pretty stupid - this must look like I have no clue at all. Still, any help or tips on what to Google would be appreciated!
I think you could define a decision model and then try to optimize the parameters of that model. You can also create multi-stage decision models. I once did something similar for solving a dynamic dial-a-ride problem (paper here) by modeling it as a two-stage linear decision problem. To give you an example, you could:
For each of your figures, decide which one is to move next. Each figure is characterized by certain features derived from its position on the board, e.g. ability to make a score, danger, protecting x other figures, and so on. Each of these features can be combined (e.g. in a linear model, through a neural network, through a symbolic expression tree, a decision tree, ...) to give you a ranking of which figure to act with next.
Acting with the figure you selected. Again there are a certain number of actions that can be taken, each of which has certain features. Again you can combine and rank them, and one action will have the highest priority; this is the one you choose to perform.
The features you extract can be very simple or insanely complex; it's a trade-off between what you think will work best and how long it takes to compute.
To evaluate and improve the quality of your decision model you can then simulate these decisions in several games against opponents and train the parameters of the model that combines these features to rank the moves (e.g. using a GA). This way you tune the model to win as many games as possible against the specified opponents. You can test the generality of that model by playing against opponents it has not seen before.
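To illustrate the kind of model I mean, here is a sketch of a linear decision model: each candidate move is described by a feature vector, and a weight vector (the part the GA would tune) turns it into a single rank score. The feature names and all numbers are invented for the example:

public class LinearMoveRanker {
    // rank score = weighted sum of the move's features
    static double score(double[] features, double[] weights) {
        double s = 0.0;
        for (int i = 0; i < features.length; i++) s += features[i] * weights[i];
        return s;
    }

    public static void main(String[] args) {
        double[] weights = {1.5, -2.0, 0.7};  // what the GA evolves
        double[][] candidateMoves = {
            {0.9, 0.1, 0.3},  // e.g. {scoring chance, danger, figures protected}
            {0.4, 0.0, 0.8},
        };
        int best = 0;
        for (int m = 1; m < candidateMoves.length; m++)
            if (score(candidateMoves[m], weights) > score(candidateMoves[best], weights))
                best = m;
        System.out.println("act with move " + best);
    }
}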
As Mathew Hall just said, you can use GP for this (if your model is a complex rule), but this is just one kind of model. In my case a linear combination of the weights did very well.
By the way, if you're interested, we've also got software for heuristic optimization which provides you with GA, GP and that kind of thing. It's called HeuristicLab. It's GPL and open source, and comes with a GUI (Windows). We have a howto on evaluating the fitness function in an external program (data exchange using protocol buffers), so you can work on your simulation and your decision model and let the algorithms present in HeuristicLab optimize your parameters.
Vincent,
First, don't feel stupid. You've been (I infer) studying basic computer science for three years; now you're applying those basic techniques to something pretty specialized-- a particular application (Stratego) in a narrow field (artificial intelligence).
Second, make sure your advisor fully understands the rules of Stratego. Stratego is played on a larger board, with more pieces (and more types of pieces) than chess. This gives it a vastly larger space of legal positions, and a vastly larger space of legal moves. It is also a game of hidden information, increasing the difficulty yet again. Your advisor may want to limit the scope of the project, e.g., concentrate on a variant with full observation. I don't know why you think this is simpler, except that the moves of the pieces are a little simpler.
Third, I think the right thing to do at first is to take a look at how games in general are handled in the field of AI. Russell and Norvig, chapters 3 (for general background) and 5 (for two player games) are pretty accessible and well-written. You'll see two basic ideas: One, that you're basically performing a huge search in a tree looking for a win, and two, that for any non-trivial game, the trees are too large, so you search to a certain depth and then cop out with a "board evaluation function" and look for one of those. I think your third bullet point is in this vein.
The board evaluation function is the magic, and probably a good candidate for using either a genetic algorithm, or a genetic program, either of which might be used in conjunction with a neural network. The basic idea is that you are trying to design (or evolve, actually) a function that takes as input a board position, and outputs a single number. Large numbers correspond to strong positions, and small numbers to weak positions. There is a famous paper by Chellapilla and Fogel showing how to do this for a game of Checkers:
http://library.natural-selection.com/Library/1999/Evolving_NN_Checkers.pdf
I think that's a great paper, tying three great strands of AI together: Adversarial search, genetic algorithms, and neural networks. It should give you some inspiration about how to represent your board, how to think about board evaluations, etc.
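To make the shape of the thing concrete, here is a sketch of a board evaluation function: board in, one number out, bigger meaning better for you. The hand-coded material count, the piece values, and the signed int-array board encoding are all assumptions for illustration; the Chellapilla and Fogel approach replaces exactly this hand-coded part with an evolved neural network:

public class MaterialEvaluator {
    // positive entries = our pieces, negative = opponent's (hypothetical encoding)
    static double evaluate(int[][] board, double[] pieceValue) {
        double score = 0.0;
        for (int[] row : board)
            for (int cell : row)
                if (cell != 0)
                    score += Math.signum(cell) * pieceValue[Math.abs(cell)];
        return score;
    }

    public static void main(String[] args) {
        double[] pieceValue = {0, 1, 3, 3, 5, 9, 1000};  // made-up values by piece type
        int[][] board = new int[8][8];
        board[0][0] = 6;    // our piece of type 6
        board[7][7] = -5;   // their piece of type 5
        System.out.println("eval: " + evaluate(board, pieceValue));
    }
}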
Be warned, though, that what you're trying to do is substantially more complex than Chellapilla and Fogel's work. That's okay-- it's 13 years later, after all, and you'll be at this for a while. You're still going to have a problem representing the board, because the AI player has imperfect knowledge of its opponent's state; initially, nothing is known but positions, but eventually as pieces are eliminated in conflict, one can start using First Order Logic or related techniques to start narrowing down individual pieces, and possibly even probabilistic methods to infer information about the whole set. (Some of these may be beyond the scope of an undergrad project.)
The fact that you are having problems coming up with a representation for an actual strategy is not that surprising. In fact, I would argue that it is the most challenging part of what you are attempting. Unfortunately, I haven't heard of Stratego, so, being a bit lazy, I am going to assume you said chess.
The trouble is that a chess strategy is rather a complex thing. You suggested a representation containing lots of transitions between board positions in the GA, but a chess board has more possible positions than there are atoms in the universe, so this is clearly not going to work very well. What you will likely need to do is encode in the GA a series of weights/parameters that are attached to something that takes in the board position and fires out a move; I believe this is what you are hinting at in your second suggestion.
Probably the simplest suggestion would be to use some sort of generic function approximation like a neural network; perceptrons and radial basis functions are two possibilities. You can encode the weights for the various nodes into the GA, although there are other fairly sound ways to train a neural network; see backpropagation. You could perhaps encode the network structure instead, or as well; this also has the advantage that, I am pretty sure, a fair amount of research has been done into developing neural networks with genetic algorithms, so you wouldn't be starting completely from scratch.
You still need to come up with how you are going to present the board to the neural network and interpret the result from it. In particular, with chess you would have to take note that a lot of moves will be illegal. It would be very beneficial if you could encode the board and interpret the result such that only legal moves are presented. I would suggest implementing the mechanics of the system and then playing around with different board representations to see what gives good results. A few top-of-the-head ideas to get you started (see the sketch after this list), although I am not really convinced any of them is an especially great way to do this:
A bit string with all 64 squares one after another, with a number representing what is present in each square. The most obvious, but probably a rather bad representation, as a lot of work will be required to filter out illegal moves.
A bit string with all 64 squares one after another, with a number representing what can move to each square. This has the advantage of embodying the covering concept of chess, where you want to gain as much coverage of the board with your pieces as possible, but it still has problems with illegal moves and dealing with friendly/enemy pieces.
A bit string with all 32 pieces one after another, with a number representing the square each piece occupies.
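A sketch of the third idea, under assumed conventions (a fixed piece ordering, square indices 0-63, and -1 for a captured piece), flattened into the sort of numeric input vector a neural network expects:

public class PieceListEncoding {
    static double[] encode(int[] pieceSquares) {
        double[] input = new double[pieceSquares.length];
        for (int i = 0; i < pieceSquares.length; i++) {
            // scale square index into [0, 1]; captured pieces become -1
            input[i] = pieceSquares[i] < 0 ? -1.0 : pieceSquares[i] / 63.0;
        }
        return input;
    }

    public static void main(String[] args) {
        int[] pieceSquares = new int[32];
        for (int i = 0; i < 32; i++) pieceSquares[i] = i;  // a made-up position
        pieceSquares[5] = -1;                              // one captured piece
        System.out.println("input length: " + encode(pieceSquares).length);
    }
}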
In general, though, I would suggest that chess is rather a complex game to start with; I think it will be rather hard to get something playing to a standard noticeably better than random. I don't know if Stratego is any simpler, but I would strongly suggest you opt for a fairly simple game. This will let you focus on getting the mechanics of the implementation correct and on the representation of the game state.
Anyway hope that is of some help to you.
EDIT: As a quick addition, it is worth looking into how standard chess AIs work; I believe most use some sort of minimax system.
When you say "tactic", do you mean you want the GA to give you a general algorithm to play the game (i.e. evolve an AI) or do you want the game to use a GA to search the space of possible moves to generate a move at each turn?
If you want to do the former, then look into using genetic programming (GP). You could try to use it to produce the best AI you can for a fixed tree size. JGAP already comes with support for GP as well. See the JGAP Robocode example for an instance of this. This approach does mean you need a domain-specific language for a Stratego AI, so you'll need to think carefully about how you expose the board and pieces to it.
Using GP means your fitness function can just be how well the AI does at a fixed number of pre-programmed games, but that requires a good AI player to start with (or a very patient human).
@DonAndre's answer is absolutely correct for movement. In general, problems involving state-based decisions are hard to model with GAs, requiring some form of GP (either explicit or, as @DonAndre suggested, trees that are essentially declarative programs).
A general Stratego player seems to me quite challenging, but if you have a reasonable Stratego playing program, "Setting up your Stratego board" would be an excellent GA problem. The initial positions of your pieces would be the phenotype and the outcome of the external Stratego-playing code would be the fitness. It is intuitively likely that random setups would be disadvantaged versus setups that have a few "good ideas" and that small "good ideas" could be combined into fitter-and-fitter setups.
...
On the general problem of what a decision tree might look like: even trying to come up with a simple example, I kept finding it hard to make one small enough. But maybe consider the case where you are evaluating whether to attack a same-ranked piece (which, IIRC, destroys both your piece and the other piece?):
double locationNeed = aVeryComplexDecisionTree();
if (thatRank == thisRank) {
    double sacrificeWillingness = SACRIFICE_GENETIC_BASE;  // assume range 0.0 - 1.0
    double sacrificeNeed = anotherComplexTree();           // 0.0 - 1.0
    double sacrificeInContext = sacrificeNeed * SACRIFICE_NEED_GENETIC_DISCOUNT; // 0.0 - 1.0
    // sacrifice if the contextual need exceeds this piece's reluctance
    if (sacrificeInContext > 1.0 - sacrificeWillingness) {
        // ...OK, this piece is "willing" to sacrifice itself
    }
}
One way or the other, the basic idea is that you'd still have a lot of coding of Stratego-play, you'd just be seeking places where you could insert parameters that would change the outcome. Here I had the idea of a "base" disposition to sacrifice itself (presumably higher in common pieces) and a "discount" genetically-determined parameter that would weight whether the piece would "accept or reject" the need for a sacrifice.

Machine learning, best technique

I am new to machine learning. I am familiar with SVMs, neural networks and GAs. I'd like to know the best technique to learn for classifying pictures and audio. SVMs do a decent job but take a lot of time. Does anyone know a faster and better one? Also, I'd like to know the fastest library for SVMs.
Your question is a good one, and has to do with the state of the art of classification algorithms. As you say, the choice of classifier depends on your data. In the case of images, I can tell you that there is a method called AdaBoost; read this and this to learn more about it. On the other hand, you will find that lots of people are doing research on this; for example, in Gender Classification of Faces Using Adaboost [Rodrigo Verschae, Javier Ruiz-del-Solar and Mauricio Correa] they say:
"Adaboost-mLBP outperforms all other Adaboost-based methods, as well as baseline methods (SVM, PCA and PCA+SVM)"
Take a look at it.
If your main concern is speed, you should probably take a look at VW and generally at stochastic gradient descent based algorithms for training SVMs.
If the number of features is large in comparison to the number of training examples, then you should go for logistic regression or an SVM without a kernel.
If the number of features is small and the number of training examples is intermediate, then you should use an SVM with a Gaussian kernel.
If the number of features is small and the number of training examples is large, use logistic regression or an SVM without a kernel.
That's according to the Stanford ML class.
For such a task you may need to extract features first. Only after that is classification feasible.
I think feature extraction and selection are important.
For image classification, there are a lot of features, such as raw pixels, SIFT features, color, texture, etc. It would be better to choose some that are suitable for your task.
I'm not familiar with audio classification, but there may be some spectrum features, like the Fourier transform of the signal, or MFCCs.
The method used to classify is also important. Besides the methods in the question, KNN is a reasonable choice, too.
Actually, which features and method to use is closely related to the task.
The method mostly depends on the problem at hand. There is no method that is always the fastest for any problem. Having said that, you should also keep in mind that once you choose an algorithm for speed, you start compromising on accuracy.
For example, since you're trying to classify images, there might be a lot of features compared to the number of training samples at hand. In such cases, if you go for an SVM with kernels, you could end up overfitting, with the variance being too high.
So you would want to choose a method that has a high bias and low variance. Using logistic regression or linear SVM are some ways to do it.
You could also use different types of regularization, or techniques such as SVD, to remove the features that do not contribute much to your output prediction and keep only the most important ones. In other words, choose the features that have little or no correlation between them. Once you do this, you will be able to speed up your SVM algorithms without sacrificing accuracy.
Hope it helps.
There are some good techniques in machine learning, such as boosting and AdaBoost.
One method of classification is boosting. This method iteratively reweights the data, which is then classified by a particular base classifier on each iteration, building up a classification model. Boosting weights each data point on each iteration, and the weight value changes according to how difficult that data point is to classify.
AdaBoost is one ensemble technique; it uses an exponential loss function to improve the accuracy of the predictions made.
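A sketch of AdaBoost's reweighting step, on four made-up examples: data the current weak learner gets wrong has its weight increased for the next round, which is the "difficulty level" idea above:

import java.util.Arrays;

public class AdaBoostSketch {
    public static void main(String[] args) {
        int n = 4;
        double[] w = new double[n];
        Arrays.fill(w, 1.0 / n);                  // start with uniform weights
        int[] label      = {+1, -1, +1, -1};      // true labels
        int[] prediction = {+1, -1, -1, -1};      // one weak learner's output (made up)

        // weighted error of this weak learner
        double error = 0.0;
        for (int i = 0; i < n; i++)
            if (prediction[i] != label[i]) error += w[i];

        // the learner's vote weight (real code must guard error == 0 or error >= 0.5)
        double alpha = 0.5 * Math.log((1 - error) / error);

        // raise the weight of misclassified examples, lower the rest, renormalize
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            w[i] *= Math.exp(-alpha * label[i] * prediction[i]);
            sum += w[i];
        }
        for (int i = 0; i < n; i++) w[i] /= sum;
        System.out.println(Arrays.toString(w));   // the hard example now dominates
    }
}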
I think your question is very open-ended, and the "best classifier for images" will largely depend on the type of image you want to classify. But in general, I suggest you study convolutional neural networks (CNNs) and transfer learning; currently these are the state-of-the-art techniques for the problem.
Check out pre-trained CNN-based models from PyTorch or TensorFlow.
Related to images, I suggest you also study pre-processing of images; pre-processing techniques are very important for highlighting features of the image and improving the generalization of the classifier.

Selecting a best-target algorithm in arcade/strategy game AI programming

I would just like to know the various AI algorithms or logics used in arcade/strategy games for finding/selecting the best target for an individual unit to attack.
I have to write a small piece of AI logic in which a group of units is attacked by various tankers, and I am stuck on finding a good logic or algorithm for selecting the best tanker for a unit to attack.
Data available are:
Tanker position, range, hitpoints, damage.
If anybody knows a suitable algorithm or logic for solving this problem, please respond.
Thanks in advance,
Ramanand.
I'm going to express this in a perspective similar to RPG gamers:
What character would you bring down first in order to strike a crippling blow to the rest of your enemies? It would be common sense to bring down the healers of the party, as they can heal the rest of the team. Once the healers are gone, the team needs to use medicine - which is limited in supply - and once medicine is exhausted, the party is screwed.
Similar logic would apply to the tank program. In your AI, you need to figure out which tanks provide the most strength and support to the user's fleet, and eliminate them first. Don't focus on any other tanks unless they become critical in achieving their goal: Kill the strongest, most useful members of the group first.
So I'm going to break down what I feel most likely pertains to the attributes of your tanks.
RANGE: Far range tanks can hit from a distance but have weak STRENGTH in their attacks.
TANKER POSITION: Closer tanks are faster tanks, but have less STRENGTH in their attacks. Also low HITPOINTS because they're meant for SPEED, and not for DAMAGE.
TANKER HP: Higher HP means a slower-moving tank, as they're stronger. But they won't be close to the front lines.
DAMAGE: Higher DAMAGE means a STRONGER tank with lots of HP, but SLOWER as well to move.
So if I were you, I'd focus first on the tanks that have the highest HP/strongest attacks, followed by the closest ones, and then worry about the ranged tanks - you can't do anything to them anyway until they move into your attack radius :P
And the algorithm would be pretty simple. If you have a list of tanks in a party, create a custom sort for them (using CompareTo) and sort the tanks by class, with the highest-HP tanks at the top of the list, followed by tanks whose focus is speed, and then the ranged ones.
And then go through each item in the list. If it is possible to attack Tank(0), attack. If not, go to Tank(1).
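A sketch of that sort-then-scan loop, using a Java Comparator in place of CompareTo. The Tank fields and all the numbers are assumptions for illustration:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TargetSelection {
    static class Tank {
        final String name;
        final int hitPoints, damage;
        final boolean inRange;  // can we attack it right now?
        Tank(String name, int hitPoints, int damage, boolean inRange) {
            this.name = name; this.hitPoints = hitPoints;
            this.damage = damage; this.inRange = inRange;
        }
    }

    public static void main(String[] args) {
        List<Tank> enemies = new ArrayList<>();
        enemies.add(new Tank("heavy", 120, 40, true));
        enemies.add(new Tank("scout", 30, 10, true));
        enemies.add(new Tank("artillery", 60, 50, false));

        // highest HP first (damage as a tiebreaker)
        enemies.sort(Comparator.comparingInt((Tank t) -> t.hitPoints)
                               .thenComparingInt(t -> t.damage)
                               .reversed());

        // attack the first target we can actually reach
        for (Tank t : enemies) {
            if (t.inRange) {
                System.out.println("attack " + t.name);
                break;
            }
        }
    }
}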
The goal is to attack only one opponent at a time and receive fire from at most one enemy at a time (though, preferably, none).
Ideally, you would attack the tanks by remaining behind cover and flanking them with surprise attacks. This allows you to destroy the tanks one at a time, while receiving no or little fire.
If you don't have cover, then you should use the enemy as cover. Move into a position that puts one enemy behind another. This also improves your chance to hit.
You can also use range to reduce fire from multiple enemies. Retreat until you are only within range of one enemy.
If the enemies can all fire on you, you want to attack one target until it is no longer a threat, then move on to the next target. The goal is to reduce the amount of fire that you receive as quickly as possible.
If more than one enemy can fire on you at the same time, and you can choose your target, you should fire at the one that allows you to reduce the most amount of damage for the least cost. Simply divide the hit points by the damage, and attack the one with the smallest result. You should also figure in any other relevant stats. Range probably affects you and the enemy equally, but considering the ability to maneuver out of the way of fire, closer enemies are more harmful and should be given some weight in the calculation.
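In code, that heuristic is just a division and an argmin. The numbers and the extra closeness weight are placeholders:

public class ThreatScore {
    public static void main(String[] args) {
        double[][] enemies = {
            // {hitPoints, damagePerShot, closenessWeight (closer -> larger)}
            {100, 25, 1.2},
            { 60, 30, 1.0},
            { 80, 10, 1.5},
        };
        int best = 0;
        double bestScore = Double.MAX_VALUE;
        for (int i = 0; i < enemies.length; i++) {
            // shots-to-kill proxy, discounted by how dangerous/close the enemy is
            double score = enemies[i][0] / (enemies[i][1] * enemies[i][2]);
            if (score < bestScore) { bestScore = score; best = i; }
        }
        System.out.println("attack enemy " + best);
    }
}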
If moving decreases the likelihood of being hit, then you should keep moving, typically by circling your opponent to stay at their flank.
Team tactics would mostly include flanking and diversions.
What's the ammo situation, and is it possible to miss a stationary target?
Based on your comments, it sounds like you already have some ad hoc set of rules or heuristics that gives you around 70% success by your own measures, and you want to optimize this further to get a higher win rate.
As a general solution method I would use a hill-climbing algorithm. Since I don't know the details of your current algorithm that is responsible for the 70% success rate, I can only describe in abstract terms how to adapt hill-climbing to optimize your algorithm.
The general principle of hill-climbing is as follows. Hopefully, a small change in some numeric parameter of your current algorithm will be responsible for a small (hopefully linear) change in the resulting success rate. If this is true, then you would first parameterize your current set of rules -- meaning you must decide which numeric parameters in your current algorithm may be tweaked and optimized to achieve a higher success rate. Once you've decided what they are, the learning process is straightforward. Start with your current algorithm. Generate a variety of new algorithms with slightly tweaked parameters, and run your simulations to evaluate the performance of this new set of algorithms. Pick the best one as your next starting point. Repeat this process until the algorithm can't get any better.
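A sketch of that loop, with a fake one-parameter evaluation standing in (hypothetically) for "run N simulated battles with these parameters and return the win rate":

import java.util.Random;

public class HillClimb {
    static final Random RNG = new Random();

    // placeholder fitness: peaks at parameter = 0.7 (made up)
    static double evaluate(double parameter) {
        return 1.0 - Math.abs(parameter - 0.7);
    }

    public static void main(String[] args) {
        double current = 0.1;   // your current algorithm's parameter value
        double step = 0.05;
        for (int iter = 0; iter < 1000; iter++) {
            // try a small random tweak; keep it only if it scores better
            double candidate = current + (RNG.nextDouble() * 2 - 1) * step;
            if (evaluate(candidate) > evaluate(current)) current = candidate;
        }
        System.out.printf("best parameter: %.3f%n", current);
    }
}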
If your algorithm is a set of if-then rules (this includes rule-matching systems), and improving the performance involves reordering or restructuring those rules, then you may want to consider genetic algorithms, which are a little more complex. To apply genetic algorithms, it is essential that you define the mutation and crossover operators such that a single application of mutation or crossover results in a small change in the overall performance, while many applications of mutation and crossover result in a large change in the overall performance of your algorithm. I'm not an expert in this field, but much should come up when you google for "genetic algorithms on decision trees". The pitfall to avoid is that if you simply swap branches in a decision tree for the mutation operator, a single application might modify the root of your decision tree, generating a huge performance difference. This typically adds too much noise for a genetic algorithm, so my advice in this approach is to be very careful about the encoding of your operators.
Note that these two methods are very popular AI methods for learning or improving your current algorithm. You would do all of these simulations and learning offline. Then you would simply deploy the resulting, learned algorithm.

Resources