I've been trying to formalize a domain and prepare two instances for a puzzle in which a frog is located on an n × n land, with a few obstacles and a lot of insects. The frog can jump any length in the four cardinal directions and cannot land on any obstacle or already-visited place. The frog eats insects whenever it lands on them. The goal is to eat all the insects in the land. F denotes the initial location. The input corresponds to an n × n grid where F specifies the initial location of the frog, 0's denote insects and 1's denote obstacles. This requires planning (AI), but I'm new to STRIPS and PDDL, and I'm stuck and can't figure out what to do. Can anyone help me with this?
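Not PDDL itself, but before writing the domain it can help to pin down what a state and an action are. Here is a rough Python sketch of that formalization (the names are just illustrative, and it assumes the frog may jump over obstacles as long as it doesn't land on them, which the puzzle doesn't state explicitly):

```python
def parse(grid):
    """grid: list of strings over {'F', '0', '1'} (F = frog, 0 = insect, 1 = obstacle)."""
    frog, insects, obstacles = None, set(), set()
    for r, row in enumerate(grid):
        for c, ch in enumerate(row):
            if ch == 'F':
                frog = (r, c)
            elif ch == '0':
                insects.add((r, c))
            elif ch == '1':
                obstacles.add((r, c))
    # a state: (frog position, visited cells, insects still to eat)
    return (frog, frozenset({frog}), frozenset(insects)), obstacles

def successors(state, obstacles, n):
    (r, c), visited, insects = state
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # four cardinal directions
        for k in range(1, n):                           # jumps of any length
            nr, nc = r + k * dr, c + k * dc
            if not (0 <= nr < n and 0 <= nc < n):
                break                                   # left the grid
            cell = (nr, nc)
            if cell in obstacles or cell in visited:
                continue                                # cannot land here (but may jump over)
            yield (cell, visited | {cell}, insects - {cell})
```

The goal test is simply that the set of remaining insects is empty. In STRIPS/PDDL terms this would roughly become predicates such as at, visited, obstacle and insect plus a jump action, but the sketch is only meant to make the state explicit.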
Suppose that we have an M*N maze and there are K dogs in different cells of this maze looking for their houses (their unique houses are also located in cells of the maze). In each step, every dog can stay at its location or move to an adjacent cell in the maze (the eligible moves are up, down, right and left, if possible). What could be a good state space for this problem?
Unique houses means that each dog has its own specific house located somewhere in the maze.
Two dogs can stand on the same cell, too.
I personally think that the sum of the Manhattan distances of each dog from its house could be a good heuristic, but I could not define a good state space myself.
Here is a link to a picture of a sample for k=2 and a 5*5 maze:
Example
Because all of the animals are independent (they don't block each other and they have unique individual goals), you shouldn't model the joint actions of all the agents. You are really solving K independent pathfinding problems, where each one can use the Manhattan distance heuristic individually, given 4-connected movement. If you solve them jointly, you make the problem exponentially larger when it doesn't have to be.
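As a rough sketch of what that looks like (one A* search per dog with the Manhattan heuristic; the grid encoding with 1 = wall and 0 = free is an assumption of the sketch, not part of the question):

```python
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(grid, start, goal):
    """A* on a 4-connected grid; the state for a single dog is just its cell."""
    rows, cols = len(grid), len(grid[0])
    open_heap = [(manhattan(start, goal), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # the four cardinal moves
            nr, nc = cell[0] + dr, cell[1] + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + manhattan((nr, nc), goal), ng,
                                    (nr, nc), path + [(nr, nc)]))
    return None

def solve_dogs(grid, dogs):
    """dogs: list of (start, house) pairs; each is an independent search."""
    return [astar(grid, start, house) for start, house in dogs]
```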
There are lots of ways of building better heuristics or re-using search information, but that's a different question.
I was thinking about implementing the classic river-crossing puzzle game, and I was wondering what would be the most appropriate search algorithm to solve it.
Here is an example of the game
Apparently there is enough information to predict the distance from the current state to the goal, so a heuristic search could be used. Going through the different states, I found a case where the search can enter an endless loop.
Here is the case:
On the left side of the river there are two creatures (one of each kind)
On the right side of the river there are four creatures (two of each kind)
The boat is on the right side
There are 2 legal moves:
Return the two munchkins (those who should not be outnumbered by the others); this will cause a loop.
Return two creatures (one of each kind); this is the move that should be taken.
Is the use of informed search better in this case? What would be an appropriate heuristic search, e.g. hill climbing or A*?
What would be the best way to get past that looping?
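One standard way to avoid the looping is to keep a visited (closed) set of states so that no state is ever expanded twice. Below is a minimal sketch of that, assuming the classic missionaries-and-cannibals style rule (the munchkins must never be outnumbered on either bank) with 3 creatures of each kind; the assumptions are mine, not from the question:

```python
from collections import deque

TOTAL = 3  # 3 creatures of each kind (an assumption)

def legal(m, o):
    """A bank arrangement is legal if, on each bank, there are either no
    munchkins or at least as many munchkins as other creatures."""
    return (0 <= m <= TOTAL and 0 <= o <= TOTAL
            and (m == 0 or m >= o)
            and (TOTAL - m == 0 or TOTAL - m >= TOTAL - o))

def successors(state):
    m, o, boat_left = state                 # munchkins left, others left, boat side
    sign = -1 if boat_left else 1           # the boat carries creatures away from its bank
    for dm, do in ((1, 0), (2, 0), (0, 1), (0, 2), (1, 1)):  # 1 or 2 creatures per crossing
        nm, no = m + sign * dm, o + sign * do
        if legal(nm, no):
            yield (nm, no, not boat_left)

def bfs(start=(TOTAL, TOTAL, True), goal=(0, 0, False)):
    frontier = deque([[start]])
    visited = {start}                       # the closed set is what breaks the loop
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs())
```

With the closed set in place, even plain BFS terminates and returns a shortest crossing sequence; an informed search like A* would use the same duplicate detection.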
The following problem is an exam exercise I found in an Artificial Intelligence course.
"Suggest a heuristic mechanism that allows this problem to be solved, using the Hill-Climbing algorithm. (S=Start point, F=Final point/goal). No diagonal movement is allowed."
Since it's obvious that Manhattan distance or Euclidean distance will send the robot to (3,4), and no backtracking is allowed, what is a possible solution (heuristic mechanism) to this problem?
EDIT: To make the problem clearer, I've marked some of the Manhattan distances on the board:
It would be obvious that, using Manhattan distance, the robot's next move would be to (3,4), since it has a heuristic value of 2; hill climbing will choose that and get stuck forever. The aim is to try and never go down that path by finding the proper heuristic.
I thought of the obstructions as being hot, and of heat as rising. I make the net cost of a cell the sum of the Manhattan distance to F and a heat penalty. Thus there is an attractive force drawing the robot towards F as well as a repelling force pushing it away from the obstructions.
There are two types of heat penalties:
1) It is very bad to touch an obstruction. Look at the 2 or 3 neighboring cells in the row immediately below a given cell. Add 15 if the cell directly below is an obstruction and 10 for every diagonal neighbor in that row which is an obstruction.
2) For cells not in direct contact with the obstructions, the heat is more diffuse. I calculate it as 6 times the average number of obstruction blocks below the cell, in both its column and its neighboring columns.
The following shows the result of combining this all, as well as the path taken from S to F:
A crucial point is the way that the averaging causes the robot to turn left rather than right when it hits the top row. The unheated columns towards the left make that the cooler direction. It is interesting to note how all cells (with the possible exception of the two at the upper-right corner) are drawn to F by this heuristic.
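In code, the penalty described above could look roughly like this (a sketch only: treating 1 as an obstruction, "below" as a larger row index, and "not in direct contact" as "the contact penalty is zero" are assumptions of the sketch):

```python
def heat_penalty(grid, r, c):
    rows, cols = len(grid), len(grid[0])
    contact = 0
    if r + 1 < rows:
        if grid[r + 1][c] == 1:                      # obstruction directly below
            contact += 15
        for dc in (-1, 1):                           # diagonal neighbours in the row below
            if 0 <= c + dc < cols and grid[r + 1][c + dc] == 1:
                contact += 10
    if contact:
        return contact
    # diffuse heat: 6 * average obstruction count below the cell,
    # over this column and its neighbouring columns
    counts = []
    for cc in (c - 1, c, c + 1):
        if 0 <= cc < cols:
            counts.append(sum(grid[rr][cc] for rr in range(r + 1, rows)))
    return 6 * sum(counts) / len(counts)

def cell_cost(grid, r, c, goal):
    gr, gc = goal
    return abs(r - gr) + abs(c - gc) + heat_penalty(grid, r, c)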
I have a chess AI that doesn't always know if it can castle or not. The rooks and kings have move counters that only allow them to participate in castling when the value of the move counter equals zero. A problem occurs when the move counters are zero and there are no pieces blocking the castle, but an enemy piece has the ability to block the castle from afar.
For example, imagine that you are White and you want to castle queenside. The move counters are zero, so your pieces have made zero moves, and your white knight, bishop, and queen are gone. So you think that you can castle. But you actually cannot castle, because there is an enemy rook with a clear line of attack that extends all the way down to the first row, where you have your white rook and white king. If you castled, the king would have to cross the black rook's line of attack. You are the AI, and this situation messed you up.
Now you [the human] might know a way to make you [the AI] smarter when it comes to castling. How would you, as a programmer, fix this problem so that the AI doesn't make this mistake anymore?
Here's some more information...
My board representation is int board[8][8]. I have an array that holds all possible white pieces [max 2 queens, 17 pieces total], int whitePieces[17], and an array that holds all possible black pieces, int blackPieces[17]. Also, to keep track of movement, there is a moveTo[] array and a moveFrom[] array that contain, for each ply, a copy of the moving piece after it moved and before it moved. The rightmost 4-bit hexadecimal digit of the piece integer is the y value, and the 4-bit hexadecimal digit one over from that is the x value. The piece integer also contains byte data representing the piece type, the piece color, the piece's location in the whitePieces or blackPieces array, and a movement counter that keeps track of the number of moves and is used to determine if a king or rook has moved and thus cannot castle.
Your AI should have some sort of 0-ply "threat grid" that shows where every enemy piece can move next turn. Use this info to see whether the squares between the king and the rook(s), and the final castling location(s), are either occupied or under threat.
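A sketch of that idea (illustrative Python rather than the int board[8][8] representation above; board[r][c] is assumed to be None or a (type, color) pair, and attacked_squares is a stand-in for whatever move generator you already have):

```python
def threat_grid(board, enemy_color, attacked_squares):
    """Build an 8x8 boolean grid of squares the enemy can reach next move.
    attacked_squares(board, r, c) is a placeholder for your move generator."""
    threatened = [[False] * 8 for _ in range(8)]
    for r in range(8):
        for c in range(8):
            piece = board[r][c]
            if piece is not None and piece[1] == enemy_color:
                for tr, tc in attacked_squares(board, r, c):
                    threatened[tr][tc] = True
    return threatened

def castle_squares_clear(board, threatened, king_path, between):
    """king_path: squares the king stands on or crosses;
    between: squares that must be empty between king and rook."""
    empty = all(board[r][c] is None for r, c in between)
    safe = all(not threatened[r][c] for r, c in king_path)
    return empty and safe
```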
Had the same problem a long time ago (1978, in Fortran).
Aside from the tests you already mentioned (has the selected rook moved, has the king moved, is the row between them empty), you need to ensure:
The king is not currently in check.
The code that determines if the king is in check can also be used to see if the king would be in check on the 2 squares of interest. So "pretend" to move the king, 1 square at a time, 2 squares left (or right), and run the test at each step.
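A sketch of that "pretend" move, assuming a simple board representation and that in_check(board, color) is the existing check-detection routine mentioned above (all names here are placeholders, not the poster's actual API):

```python
import copy

def can_castle_through(board, color, king_sq, step, in_check):
    """step = -1 for queenside, +1 for kingside (movement along the king's rank).
    Returns False if the king is in check now, or would be in check on either of
    the two squares it crosses/lands on."""
    if in_check(board, color):                  # the king must not currently be in check
        return False
    r, c = king_sq
    for i in (1, 2):                            # the two squares of interest
        trial = copy.deepcopy(board)
        trial[r][c] = None                      # "pretend" to move the king...
        trial[r][c + i * step] = ('K', color)   # ...one square at a time
        if in_check(trial, color):
            return False
    return True
```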
2 other pedantic thoughts:
The flag that is set when a rook is "moved" also needs to be set if the rook is captured. Testing to see if a rook is in the corner is not enough, as it could be another rook.
A pawn promoted to a rook and then not moved cannot be used for castling (see: castling on a file).
Notes:
Rather than 17 pieces, consider staying at 16. (You can have 0-9 queens, 0-10 rooks, 0-10 bishops, 0-8 pawns, 1 king, etc.)
The space the rook is on or passes through may be threatened from the other side.
I'm implementing an algorithm in which I need to manipulate a mesh, adding and deleting edges quickly and iterating quickly over the edges adjacent to a vertex in CCW or CW order.
The winged-edge structure is used in the description of the algorithm I'm working from, but I can't find any concise descriptions of how to perform those operations on this data structure.
I learned about it at university, but that was a while ago.
In response to this question I've searched the web too for any good documentation and found none that is good, but we can go through a quick example of CCW and CW ordering and of insertion/deletion here.
Have a look at this table and graphic:
from this page:
http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/model/winged-e.html
The table gives only the entry for one edge, a; in a real table you have such a row for every edge. You can see you get the:
left predecessor,
left successor,
right predecessor,
right successor
but here comes the critical point: it gives them relative to the direction of the edge, which is X->Y in this case, and to how it is traversed; right-traversed, the order is (e->a->c).
So for the CW order of going through the graph this is very easy to read: edge a has right successor c, and then you look into the row for edge c.
OK, this table is easy to read for CW-order traversal; for CCW you have to think "from which edge did I come when I walked this edge backwards?". Effectively you get the next edge in CCW order by taking the left-traverse predecessor, in this case b, and continuing with the row entry for edge b in the same manner.
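Since the question also asks about iterating over the edges adjacent to a vertex, here is a small sketch of a winged-edge record and that traversal. The field names are mine, and left/right pred/succ are defined relative to walking each face loop so that this edge goes start->end; the linked MTU table defines its left-traverse columns relative to walking the edge backwards, so you may need to swap branches for that convention:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vertex:
    edge: Optional["WEdge"] = None        # one incident edge, enough to start a walk

@dataclass
class WEdge:
    start: Vertex
    end: Vertex
    left_face: Optional[int] = None
    right_face: Optional[int] = None
    left_pred: Optional["WEdge"] = None   # previous edge in the left-face loop
    left_succ: Optional["WEdge"] = None   # next edge in the left-face loop
    right_pred: Optional["WEdge"] = None  # previous edge in the right-face loop
    right_succ: Optional["WEdge"] = None  # next edge in the right-face loop

def edges_around(v: Vertex, clockwise: bool = True):
    """Yield the edges incident to v in CW (or CCW) order, using only the wings."""
    e = v.edge
    if e is None:
        return
    first = e
    while True:
        yield e
        if clockwise:
            e = e.right_succ if e.start is v else e.left_succ
        else:
            e = e.left_pred if e.start is v else e.right_pred
        if e is None or e is first:
            break
```

These same four wing links are exactly the references you have to patch when inserting or deleting an edge, which is what the next part is about.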
Now insertion and deletion: it is clear that you can't just remove the edge and think that the graph would still consist of only triangles; during deletion you have to join two vertices, for example X and Y in the graphic. To do this you first have to make sure that, everywhere the edge a is referred to, we fix that reference.
So where can a be referred to? Only in the edges b, c, d and e (all other edges are too far away to know a), plus in the vertex->edge table if you have one (but let's only consider the edge table in this example).
As an example of how we have to fix edges, let's take a look at c. Like a, c has a left and right predecessor and successor (so 4 edges); which one of those is a? We cannot know without checking, because the table entry for c can have the node Y in either its start or its end node. So we have to check which one it is. Let's assume we find that c has Y in its start node; we then have to check whether a is c's right predecessor (which it is, and which we find out by looking at c's entry and comparing it to a) OR whether it is c's right successor. "Successor??" you might ask? Yes, because remember that the two "left-traverse" columns are relative to walking the edge backwards. So, now we have found that a is c's right predecessor, and we can fix that reference by inserting a's right predecessor in its place. Continue with the other 3 edges and you are done with the edge table. Fixing an additional vertex->edge table is trivial, of course: just look into the entries for X and Y and delete a there.
Adding edges is basically the reverse of this fix-up of 4 other edges, BUT with a little twist. Let's call the node which we want to split Z (it will be split into X and Y). You have to take care that you split it in the right direction, because you can have either d and e combined in a node, or e and c (as if the new edge were horizontal instead of the vertical a in the graphic)! You first have to find out between which 2 edges of the soon-to-be X and between which 2 edges of Y the new edge is added: you just choose which edges shall be on one node and which on the other. In this example graphic, choose that you want b, c and the 2 edges to the north in between them on one node; it follows that the other edges are on the other node, which will become X. You then find by vector subtraction that the new edge a has to be between b and c, not between, say, c and one of the 2 edges in the north. The vector subtraction is the desired position of the new X minus the desired position of Y.