Can't understand the game tree for the Quarto game - artificial-intelligence

I am going to implement the Quarto game, in which each player chooses the next piece for the other player to place, but I am having trouble drawing the minimax tree and understanding which nodes are max and which are min. The only thing that came to my mind is this:
But I am pretty sure that something is wrong with that.
Can anyone help?

Just to clarify, I'm assuming that at the top node P1 is choosing a piece for P2 to place, at the second node P2 is placing the piece, at the third node P2 is choosing a piece for P1 to place, and then at the fourth node P1 is placing that piece, and so on.
I can see why you might think that you're doing something wrong, because this is not the way that minimax is conventionally set up, but this seems like a logical way to apply it for quarto. You are correctly assigning min and max turns and such, so I don't see anything inherently wrong with this setup. Keeping track of the choosing vs. placing nodes for the purposes of the evaluation function could potentially get tricky, but I think it should be doable. Have you encountered any obstacles with doing it this way? If not, I'd say to give it a shot. It's an interesting twist on standard minimax.
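In case a concrete shape helps, here is a minimal sketch in Python of one way to encode that choose/place alternation in a single recursive minimax. The State methods used here (remaining_pieces, legal_squares, place, is_win, evaluate) are hypothetical placeholders for whatever board representation you end up with; this is only an illustration of the node layout described above, not a tuned engine.

def minimax(state, piece_in_hand, player, depth):
    """Score from MAX's (+1) point of view; `player` is +1 (MAX) or -1 (MIN)."""
    if depth == 0:
        return state.evaluate()

    maximizing = (player == 1)
    pick = max if maximizing else min
    best = -float('inf') if maximizing else float('inf')

    if piece_in_hand is None:
        # "Choose" node: `player` picks a piece that the OPPONENT must place next.
        pieces = state.remaining_pieces()
        if not pieces:                      # board is full and nobody has won
            return 0                        # draw
        for piece in pieces:
            best = pick(best, minimax(state, piece, -player, depth - 1))
    else:
        # "Place" node: `player` puts the piece it was handed on an empty square.
        for square in state.legal_squares():
            child = state.place(piece_in_hand, square)
            if child.is_win():              # `player` just completed a line
                score = player * 1000
            else:                           # same player now chooses a piece for the opponent
                score = minimax(child, None, player, depth - 1)
            best = pick(best, score)
    return best

The point is simply that a "choose" node hands the turn to the opponent without changing the board, while a "place" node changes the board but keeps the same player for the following "choose" decision, which matches the four levels described above.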

Related

Does Alpha Beta / minimax require that each node be a full copy of the gameboard?

This is not a language-specific question, but for the sake of conversation, I currently work in C# 7.
Over the years I've successfully implemented the Alpha Beta pruning algorithm (even in PASCAL, 35 years ago :)
Each time, I've created semi-deep copies (discussed below) of the game state, one for each node of the recursion. I've often wondered if this is necessary and whether perhaps I'm not truly understanding the algorithm.
The interweb is full of requests for help for TicTacToe, which makes me think that this must be a common school assignment question - which kinda clogs searches on this fairly basic topic.
Semi-deep-copies ... it appears to me that each node should know:
the full state of the board
the player whose turn it is
the state of play - ie: { playing, Player1 win, Player2 win, draw }
My question is: does each node need its own copy of the board? ... for example, chess has an 8x8 grid ... is there something more subtle to the algorithm, or do these nodes each need their own snapshot of the board state? Is there some cool way (other than copy-and-apply-possible-move) that nodes can use to derive their state from their parent?
Perhaps someone can explain or point to a "read this, dummy" post, or just confirm that I need to make these instances, as I've attempted to describe, with each recursive call having its own in-memory copy of the game board.
I realize that over the last few decades, memory has become cheap... but combinatorial-explosion is still the main topic. Cheers.
I am not sure if this answers your question, but instead of making a copy for each turn, could you make the move, do the recursive call, and then undo the move? Something like:
board.make_move(move)
eval = minimax(board, ....)
board.unmake_move(move)
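That pattern keeps a single board instance alive for the whole search. A rough sketch of it in Python (the Board methods legal_moves, make_move, unmake_move, is_terminal and evaluate are placeholders for whatever your engine provides):

def minimax(board, depth, maximizing):
    # Depth-limited minimax over one shared, mutable board.
    if depth == 0 or board.is_terminal():
        return board.evaluate()

    best = float('-inf') if maximizing else float('inf')
    for move in board.legal_moves():
        board.make_move(move)                 # mutate the board in place...
        score = minimax(board, depth - 1, not maximizing)
        board.unmake_move(move)               # ...and restore it before trying the next sibling
        best = max(best, score) if maximizing else min(best, score)
    return best

The only requirement is that unmake_move restores everything make_move changed (captured pieces, castling rights, and so on in chess), which is why many engines keep a small per-move undo record instead of copying the whole board.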

Monte Carlo Tree Search Alternating

Could anybody please clarify how the MCTS algorithm iterates for the second player? I have not found any clear example anywhere.
Everything I see seems to look like it is playing, e.g., P1's move every time.
I understand the steps for one agent, but I never find anything showing code where P2 places its counter, which surely must happen when growing the tree.
Essentially I would expect:
for each iter:
    select node Player1
    expand Player1
    select node Player2
    expand Player2
    rollout
    backpropagate
next iter
Is this right? Could anybody please spell out some pseudocode showing that? Either iterative or recursive, I don't mind.
Thanks for any help.
The trick is in the backpropagation part, where you update the "wins" variable from the point of view of the player whose move led into this position.
Code for MCTS
Notice, under the UCT function, especially the comments:
#Backpropagate
while node != None: # backpropagate from the expanded node and work back to the root node
    node.Update(state.GetResult(node.playerJustMoved)) # state is terminal. Update node with result from POV of node.playerJustMoved
    node = node.parentNode
If you follow the function calls, you will realize that the visits variable is always updated; wins, however, is not.
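To make the alternation explicit, here is a compressed sketch of a single iteration written in the style of the linked code. The node and state members used here (untriedMoves, childNodes, UCTSelectChild, AddChild, Clone, DoMove, GetMoves, GetResult, playerJustMoved, parentNode) mirror the quoted snippet and should be treated as assumptions about that interface.

import random

def uct_iteration(rootnode, rootstate):
    node, state = rootnode, rootstate.Clone()

    # Select: descend while the node is fully expanded. Whoever's turn it is
    # in `state` makes the move, so the two players alternate automatically.
    while node.untriedMoves == [] and node.childNodes != []:
        node = node.UCTSelectChild()
        state.DoMove(node.move)

    # Expand: add one child for an untried move (again, for whichever player moves now).
    if node.untriedMoves != []:
        move = random.choice(node.untriedMoves)
        state.DoMove(move)
        node = node.AddChild(move, state)

    # Rollout: play random moves for both players until the game ends.
    while state.GetMoves() != []:
        state.DoMove(random.choice(state.GetMoves()))

    # Backpropagate: score each node from the point of view of the player who
    # made the move leading into it.
    while node is not None:
        node.Update(state.GetResult(node.playerJustMoved))
        node = node.parentNode

There is no explicit "P1 step / P2 step" in the loop: DoMove flips whose turn it is inside the state, and the backpropagation scores each node from the perspective of the player who moved into it, which is what makes the values meaningful for both players.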

How to search three or more arrays row by row for an optimum value in matlab

I have a few variables: three variables R1, R2 and R3, each of size [40 x 1].
I have a fourth variable U of the same dimension. For every U(i) I need to search for an optimum value among R1(i), R2(i) and R3(i), which would return a single-value solution. I intend to plot the optimum value against U(i). I have been trying to wrap my head around the knnsearch function, but no luck.
Anyone out there who could please help?
Thanks
Well, when I can't wrap my head around something, I don't come here first.
A lot of people forget this one because we are online, but read a book on the topic. Have your code open so that when you see something in the book, you can test it out.
Draw out any type of diagram. I call these "napkin diagrams", because I write them on anything, even a napkin.
I play with code until my keyboard has no letters left on it, then I keep plugging away until the keys fall off.
Explore the language APIs.
Check for public repositories that you can play with.
Google is okay for a quick reference, but Google will not teach you anything other than how to google.
I talk my code over with myself all the time; people think I'm nuts, but so do I. It actually works sometimes.
Then, if I still can't get it, I come here with a list of things that I have tried, sample code that has not worked, etc.
I used to hate it when people told me this, but it was the best thing anyone could have done for me, so I tend to do the same now. Thinking about coding is a big part, but you have to get done what you can. Then we all know what level you are at. Plus, it being the end of the semester, a lot of these types of questions are homework...
Thinking is good; now turn those thoughts into a conceptual design. It's okay to be wrong at this stage; it's all just conceptual.
If I understood correctly, this might be what you need:
RR = [R1(:) R2(:) R3(:)];
d = bsxfun(@minus, RR, U(:));
[m, mi] = min(abs(d), [], 2);
answer = RR(sub2ind(size(RR), (1:numel(mi))', mi));
First, put the three vectors into a single matrix:
RR = [R1(:) R2(:) R3(:)];
Next, take the difference with U; bsxfun is ideal for this kind of thing:
d = bsxfun(@minus, RR, U(:));
Now find the minimum absolute difference for each row:
[m, mi] = min(abs(d), [], 2);
The corresponding column indices let you pick out the "best fit" element in each row:
answer = RR(sub2ind(size(RR), (1:numel(mi))', mi));
I had to do some mind reading to get to this 'answer', so feel free to correct my misunderstanding of your problem!
Update: if you just need the highest of the three values, then
val = max([R1(:) R2(:) R3(:)], [], 2);
plot(U, val);
should be all you need...

Unity 2D/3D - Making a computer opponent (AI) for a match-3 game

I'm looking for guidance/advice on where to start to create a computer AI opponent for a match-3 game. This issue has been stumping me for years (literally), as I've not been able to figure it out. I've exhausted Google in finding this answer.
Sample match-3 games that have a computer opponent include: Puzzle Quest and Crystal Battle.
What programming methodologies are used in creating an AI opponent like this, and how can I apply them to Unity 2D scripting? Where/how can I start? I am mainly looking for a tutorial or something to get me started in the right direction. I realize this is not a quick and easy thing to do, but I would like to attempt it step by step so I can better understand things.
Thanks in advance!
There are two problems here:
Generating possible moves
Choosing the best move
If your board is reasonably small, you can simply brute-force both of them. For each position in your grid, check whether you can swap it up, down, left or right, and you have your move generator. (You should already have valid-move checking implemented for a single-player version of the game.)
Choosing the best move will be a bit trickier, because you have to evaluate each move. A common way to do this is the MiniMax method. The general idea is that you build a tree of all possible moves over the next couple of turns and assign a score to each leaf. Then you reduce the tree so that a parent node becomes max(children) if it is the AI's turn to move, and min(children) if the player moves. You end up with the score for your move at the root.
A great resource for basic AI programming like this is the Chess Programming Wiki (you won't need 90% of what is described there; start with the MiniMax and Alpha-Beta algorithms).
On the other hand, for the simplest possible AI you can just pick a move at random; match-3 games are not the most demanding when it comes to planning your moves.
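To make the MiniMax idea above concrete, here is a rough Python sketch. generate_moves() and apply_move() are hypothetical helpers: the first returns all legal swaps, the second returns the resulting board plus the points scored by the swap. Refilled gems are assumed not to create new matches, in line with the strategy described in the edit below.

import random

def minimax(board, depth, ai_to_move, ai_score=0, player_score=0):
    """Best achievable (ai_score - player_score) from this position."""
    moves = generate_moves(board)                  # hypothetical move generator
    if depth == 0 or not moves:
        return ai_score - player_score

    scores = []
    for move in moves:
        new_board, points = apply_move(board, move)    # hypothetical: (board, points scored)
        if ai_to_move:
            scores.append(minimax(new_board, depth - 1, False, ai_score + points, player_score))
        else:
            scores.append(minimax(new_board, depth - 1, True, ai_score, player_score + points))
    return max(scores) if ai_to_move else min(scores)

def choose_ai_move(board, depth=3):
    best_score, best_moves = float('-inf'), []
    for move in generate_moves(board):
        new_board, points = apply_move(board, move)
        score = minimax(new_board, depth - 1, ai_to_move=False, ai_score=points)
        if score > best_score:
            best_score, best_moves = score, [move]
        elif score == best_score:
            best_moves.append(move)
    return random.choice(best_moves) if best_moves else None   # None: no legal move left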
EDIT: As an afterthought, the following seems like a reasonable AI strategy for a match-3 game:
Assuming all the random gems added after each move cannot be matched in any way:
1. Pick a move that makes my opponent unable to move (has no child nodes).
2. If 1. is not possible, pick any move that guarantees me another move no matter which move my opponent picks (no child node is a leaf).
3. If 2. is not possible, pick a random move.

Pacman: how do the eyes find their way back to the monster hole?

I found a lot of references to the AI of the ghosts in Pacman, but none of them mentioned how the eyes find their way back to the central ghost hole after a ghost is eaten by Pacman.
In my implementation I used a simple but awful solution: I just hard-coded, at every corner, which direction should be taken.
Is there a better, or even the best, solution? Maybe a generic one that works with different level designs?
Actually, I'd say your approach is a pretty awesome solution, with almost zero run-time cost compared to any sort of pathfinding.
If you need it to generalise to arbitrary maps, you could use any pathfinding algorithm - breadth-first search is simple to implement, for example - and use that to calculate which directions to encode at each of the corners, before the game is run.
EDIT (11th August 2010): I was just referred to a very detailed page on the Pacman system: The Pac-Man Dossier, and since I have the accepted answer here, I felt I should update it. The article doesn't seem to cover the act of returning to the monster house explicitly but it states that the direct pathfinding in Pac-Man is a case of the following:
continue moving towards the next intersection (although this is essentially a special case of 'when given a choice, choose the direction that doesn't involve reversing your direction', as seen in the next step);
at the intersection, look at the adjacent exit squares, except the one you just came from;
pick the one which is nearest the goal. If more than one is equally near the goal, pick the first valid direction in this order: up, left, down, right (a rough sketch of these rules follows below).
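A direct transcription of those rules might look like the sketch below. The walkable() test and the grid coordinates are assumptions about your maze representation, and squared straight-line distance is used since it orders tiles the same way as the plain distance.

# Tie-break order from the rules above: up, left, down, right.
DIRECTIONS = [("up", (0, -1)), ("left", (-1, 0)), ("down", (0, 1)), ("right", (1, 0))]
REVERSE = {"up": "down", "down": "up", "left": "right", "right": "left"}

def choose_direction(pos, current_dir, target, walkable):
    """At an intersection, pick the exit nearest the target, never reversing."""
    x, y = pos
    best = None
    for name, (dx, dy) in DIRECTIONS:              # list order encodes the tie-break
        if name == REVERSE.get(current_dir):
            continue                               # never turn straight back
        nx, ny = x + dx, y + dy
        if not walkable(nx, ny):
            continue
        dist = (nx - target[0]) ** 2 + (ny - target[1]) ** 2   # squared straight-line distance
        if best is None or dist < best[0]:
            best = (dist, name)
    return best[1] if best else REVERSE.get(current_dir)       # dead end: allow reversing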
I've solved this problem for generic levels that way: before the level starts, I do some kind of "flood fill" from the monster hole; every tile of the maze that isn't a wall gets a number that says how far it is away from the hole. So when the eyes are on a tile with a distance of 68, they look at which of the neighbouring tiles has a distance of 67; that's the way to go.
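A compact version of that pre-pass, assuming the maze is a list of strings where '#' is a wall and the outer border is entirely walls, might be:

from collections import deque

def distance_to_hole(maze, hole):
    """Breadth-first flood fill: dist[tile] = number of steps from that tile to the hole."""
    dist = {hole: 0}
    queue = deque([hole])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if maze[ny][nx] != '#' and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

def eyes_step(pos, dist):
    """Move to the neighbouring tile whose stored distance is smallest."""
    x, y = pos
    return min(((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)),
               key=lambda n: dist.get(n, float('inf')))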
For an alternative to more traditional pathfinding algorithms, you could take a look at the (appropriately-named!) Pac-Man Scent Antiobject pattern.
You could diffuse monster-hole-scent around the maze at startup and have the eyes follow it home.
Once the smell is set up, runtime cost is very low.
Edit: sadly the wikipedia article has been deleted, so WayBack Machine to the rescue...
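A tiny sketch of that idea, under the assumption that walls hold no scent, the hole tile is pinned at full strength, and each relaxation pass lets every open tile take a decayed copy of its strongest neighbour:

def diffuse_scent(maze, hole, passes=200, decay=0.95):
    """One-off pre-pass: spread 'monster-hole scent' over every open tile."""
    h, w = len(maze), len(maze[0])
    scent = [[0.0] * w for _ in range(h)]
    scent[hole[1]][hole[0]] = 1.0                     # the hole is the scent source
    for _ in range(passes):                           # repeat until the scent has spread everywhere
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if maze[y][x] != '#':
                    strongest = max(scent[y][x + 1], scent[y][x - 1],
                                    scent[y + 1][x], scent[y - 1][x])
                    scent[y][x] = max(scent[y][x], strongest * decay)
    return scent

# At run time the eyes simply step onto the neighbouring open tile with the highest scent.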
You should take a look at pathfinding algorithms, like Dijkstra's algorithm or the A* algorithm. That's what your problem is: a graph/path problem.
Any simple solution that works, is maintainable and reliable, and performs well enough is a good solution. It sounds to me like you have already found a good solution ...
A path-finding solution is likely to be more complicated than your current solution, and hence more likely to require debugging. It will probably also be slower.
IMO, if it ain't broken, don't fix it.
EDIT
IMO, if the maze is fixed then your current solution is good / elegant code. Don't make the mistake of equating "good" or "elegant" with "clever". Simple code can also be "good" and "elegant".
If you have configurable maze levels, then maybe you should just do the pathfinding when you initially configure the mazes. Simplest would be to get the maze designer to do it by hand. I'd only bother automating this if you have a bazillion mazes ... or users can design them.
(Aside: if the routes are configured by hand, the maze designer could make a level more interesting by using suboptimal routes ... )
In the original Pacman the ghosts found the yellow pill eater by his "smell": he would leave a trace on the map, and the ghosts would wander around randomly until they found the smell; then they would simply follow the smell path, which led them directly to the player. Each time Pacman moved, the "smell values" would be decreased by 1.
Now, a simple way to reverse the whole process would be to have a "pyramid of ghost smell", which has its highest point at the center of the map; the ghosts then just move in the direction of this smell.
Assuming you already have the logic required for chasing Pacman, why not reuse that? Just change the target. It seems like it would be a lot less work than trying to create a whole new routine using the exact same logic.
It's a pathfinding problem. For a popular algorithm, see http://wiki.gamedev.net/index.php/A*.
How about each square having a value for its distance to the center? This way, for each given square you can get the values of the immediate neighbour squares in all possible directions. You pick the square with the lowest value and move to that square.
Values would be pre-calculated using any available algorithm.
This was the best source that I could find on how it actually worked.
http://gameai.com/wiki/index.php?title=Pac-Man#Respawn
When the ghosts are killed, their disembodied eyes return to their starting location. This is simply accomplished by setting the ghost's target tile to that location. The navigation uses the same rules.
It actually makes sense. Maybe not the most efficient in the world, but a pretty nice way to avoid having to worry about another state or anything along those lines; you are just changing the target.
Side note: I did not realize how awesome those Pac-Man programmers were; they basically made an entire messaging system in a very small space with very limited memory ... that is amazing.
I think your solution is right for the problem. Even simpler than that is to make a new, more "realistic" version where the ghost eyes can go through walls =)
Here's an analogue of ammoQ's flood fill idea, in pseudocode.
queue q
set visited
enqueue q, ghost_origin
add ghost_origin to visited
while q has squares
    p <= dequeue q
    for each square s adjacent to p
        if ( s not in visited ) then
            add s to visited
            s.returndirection <= direction from s to p
            enqueue q, s
        end if
    next
next
The idea is that it's a breadth-first search, so each time you encounter a new adjacent square s, the best path to the hole runs back through p. It's O(N), I believe.
I don't know much about how you implemented your game, but you could do the following:
1. Determine the eyes' position relative to the gate, i.e. is it to the left or right of it? Above or below?
2. Then move the eyes opposite one of the two directions (for example, make them move left if they are to the right of the gate and below it) and check whether any walls prevent you from doing so.
3. If there are walls preventing you from doing so, then make them move opposite the other direction (for example, if the eyes are to the right of and above the gate and were currently moving left, but a wall is in the way, make them move south).
4. Remember, each time they move, to keep checking where the eyes are relative to the gate, and check for when there is no longer a latitudinal offset, i.e. they are only above the gate.
5. In the case that they are only above the gate, move down; if there is a wall, move either left or right, and keep repeating steps 1-4 until the eyes are in the den.
I've never seen a dead end in Pacman, so this code does not account for dead ends.
Also, I have included in my pseudocode a solution for when the eyes would "wobble" between walls that span across the origin.
Some pseudocode:
x = getRelativeOppositeLatitudinalCoordOfGate()
y = getRelativeOppositeLongitudinalCoordOfGate()
origX = x
while (eyesNotInPen())
    x = getRelativeOppositeLatitudinalCoordOfGate()
    y = getRelativeOppositeLongitudinalCoordOfGate()
    // assume x == 0 means the eyes are neither left nor right of the gate,
    // and move() returns false when a wall is in the way
    if (x == 0 && move(y) == false)
        while (move(y) == false)
            move(origX)
            x = getRelativeOppositeLatitudinalCoordOfGate()
        endWhile
    else if (move(x) == false)
        move(y)
    endIf
endWhile
dtb23's suggestion of just picking a random direction at each corner (and eventually you'll find the monster hole) sounds horribly inefficient.
However, you could make use of its inefficient return-to-home algorithm to make the game more fun by introducing more variation in the game difficulty. You'd do this by applying one of the above approaches, such as your waypoints or the flood fill, but doing so non-deterministically. So at every corner, you could generate a random number to decide whether to take the optimal way or a random direction.
As the player progresses through levels, you reduce the likelihood that a random direction is taken. This would add another lever on the overall difficulty level in addition to the level speed, ghost speed, pill-eating pause, etc. You've got more time to relax while the ghosts are just harmless eyes, but that time becomes shorter and shorter as you progress.
Short answer, not very well. :) If you alter the Pac-man maze the eyes won't necessarily come back. Some of the hacks floating around have that problem. So it's dependent on having a cooperative maze.
I would propose that the ghost stores the path he has taken from the hole to the Pacman. So as soon as the ghost dies, he can follow this stored path in the reverse direction.
Knowing that Pacman paths are non-random (i.e., for each specific level 0-255, Inky, Blinky, Pinky, and Clyde will work the exact same path for that level), I would take this and then guess that there are a few master paths that wrap around the entire maze as a "return path" that an eyeball object takes, depending on where it is when Pac-Man ate the ghost.
The ghosts in Pacman follow more or less predictable patterns in terms of trying to match on X or Y first until the goal is met. I always assumed that this was exactly the same for the eyes finding their way back.
Before the game begins, save the nodes (intersections) in the map.
When the monster dies, take the point (coordinates) and find the nearest node in your node list.
Calculate all the paths beginning from that node to the hole.
Take the shortest path by length.
Add the length of the space between the point and the nearest node.
Draw and move on the path.
Enjoy!
My approach is a little memory-intensive (by the standards of the Pacman era), but you only need to compute it once and it works for any level design (including jumps).
Label Nodes Once
When you first load a level, label all the monster lair nodes 0 (representing the distance from the lair). Proceed outward labelling connected nodes 1, nodes connected to them 2, and so on, until all nodes are labelled. (note: this even works if the lair has multiple entrances)
I'm assuming you already have objects representing each node and connections to their neighbours. Pseudo code might look something like this:
public void fillMap(List<Node> nodes) { // call passing lairNodes
    int i = 0;
    while (nodes.count > 0) {
        // Label with distance from lair
        nodes.labelAll(i++);

        // Find connected unlabelled nodes
        nodes = nodes
            .flatMap(n -> n.neighbours)
            .filter(n -> !n.isDistanceAssigned());
    }
}
Eyes Move to Neighbour with Lowest Distance Label
Once all the nodes are labelled, routing the eyes is trivial... just pick the neighbouring node with the lowest distance label (note: if multiple nodes have equal distance, it doesn't matter which is picked). Pseudo code:
public Node moveEyes(final Node current) {
    return current.neighbours.min((n1, n2) -> n1.distance - n2.distance);
}
Fully Labelled Example
For my PacMan game I made a somewhat "shortest multiple path home" algorithm which works for whatever labyrinth I provide it with (within my set of rules). It also works across tunnels.
When the level is loaded, all the path-home data in every crossroad is empty (default), and once the ghosts start to explore the labyrinth, the crossroad path-home information keeps getting updated every time they run into a "new" crossroad, or when they stumble again upon a crossroad they already know but via a different path.
The original Pac-Man didn't use pathfinding or fancy AI. It just made gamers believe there is more depth to it than there actually was, but in fact it was random, as stated in Artificial Intelligence for Games by Ian Millington and John Funge.
Not sure if it's true or not, but it makes a lot of sense to me. Honestly, I don't see the behaviours that people are talking about. Red/Blinky, for example, is not following the player at all times, as they say. Nobody seems to be consistently following the player on purpose. The chance that they will follow you looks random to me. And it's just very tempting to see behaviour in randomness, especially when the chances of getting chased are very high, with four enemies and very limited turning options in a small space. At least in its initial implementation, the game was extremely simple. Check out the book; it's in one of the first chapters.
