Explore grid with obstacles - c

I need an algorithm for a robot to explore an n*n grid with obstacles (a maze, if you wish). The goal is to visit every square that has no obstacle while avoiding the squares that do. The trick is that an obstacle forces the robot to change its path, which can cause it to miss free squares behind the obstacle. I can lazily increment/decrement the robot's x/y coordinates to move it in any of the four directions when there is no obstacle, and the robot can re-traverse an already-seen path (if needed) in order to reach other free squares. The algorithm should terminate when ALL the free squares have been visited at least once.
Is there a simple, lazy/efficient way to do this? Pseudo-code would be greatly appreciated.

I believe the problem is reducible from the Traveling Salesman Problem, and thus is NP-hard, so you are unlikely to find a polynomial-time algorithm that solves it optimally and efficiently.
However, you might want to adopt some of the heuristics and approximations for TSP; I believe they can be adjusted to this problem as well, since the problem seems very close to TSP in the first place.
EDIT:
If finding the shortest route is not a requirement and any path will do, a simple DFS that maintains a visited set is enough. At the point in the DFS where you return from the recursion, you move back to the previous square; this way the robot is guaranteed to explore all squares, provided there is a path to each of them.
Pseudo-code for the DFS:
search(path, visited, current):
    visited.add(current)
    for each square s adjacent to current:
        if (s is an obstacle): // cannot go to a square which is an obstacle
            continue
        if (s is not in visited): // no point going to a square that was already visited
            path.add(s) // go to s
            search(path, visited, s) // recursively visit all squares accessible from s
            // step back from s to current, so you can visit the next neighbor of current
            path.add(current)
invoke with: search([source], {}, source)
Note that ordering heuristics can be plugged in just before the for-each loop - the heuristic simply reorders the iteration over the neighbors.
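For concreteness, here is a minimal C++ sketch of that DFS, assuming the grid is given as a vector of strings in which '#' marks an obstacle; the grid encoding and the names are illustrative, not part of the question:

#include <string>
#include <utility>
#include <vector>
using namespace std;

const int DR[4] = {-1, 1, 0, 0};
const int DC[4] = {0, 0, -1, 1};

// Appends every move the robot makes (including the backtracking moves) to `path`.
void explore(const vector<string>& grid, vector<vector<bool>>& visited,
             int r, int c, vector<pair<int,int>>& path) {
    visited[r][c] = true;
    for (int d = 0; d < 4; ++d) {
        int nr = r + DR[d], nc = c + DC[d];
        if (nr < 0 || nr >= (int)grid.size() || nc < 0 || nc >= (int)grid[0].size())
            continue;                              // off the grid
        if (grid[nr][nc] == '#' || visited[nr][nc])
            continue;                              // obstacle or already explored
        path.push_back({nr, nc});                  // robot moves onto the neighbor
        explore(grid, visited, nr, nc, path);
        path.push_back({r, c});                    // robot steps back to the current square
    }
}

Seeding path with the start square and calling explore(grid, visited, startRow, startCol, path) yields a walk that visits every free square reachable from the start and retraces its own steps whenever it backs out of a dead end.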

Just keep a list of unexplored neighbors. A clever heuristic for which field from the list to visit next can be used to make it more efficient if necessary.
Pseudocode (uses a Stack to keep track of the unexplored neighbors resulting in a DFS):
// init
Cell first_cell;
CellStack stack;
stack.push(first_cell);
while (!stack.isEmpty()) {
    Cell currentCell = stack.pop();
    currentCell.markVisited();
    for (neighbor in currentCell.getNeighbors()) {
        if (!neighbor.isObstacle() && !neighbor.isVisited()) {
            stack.push(neighbor);
        }
    }
}

I think the best way to solve the problem is recursively: go to the nearest free cell, with the following additional heuristic: among the free neighbors, prefer the one with the fewest free neighbors of its own (prefer dead-end stubs first).
Pseudocode:
// init:
for (cell in all_cells) {
    cell.weight = calc_neighbors_number(cell);
}
path = [];

void calc_path(cell)
{
    cell.visited = true;
    path.push_back(cell);
    preferred_cell = null;
    for (neighbor in cell.neighbors) {
        if (neighbor.visited) {
            continue;
        }
        if (preferred_cell == null || neighbor.weight < preferred_cell.weight) {
            preferred_cell = neighbor;
        }
    }
    if (preferred_cell != null) {
        calc_path(preferred_cell);
    }
}

// Start the algorithm:
calc_path(start);
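For illustration, a minimal C++ sketch of this heuristic, assuming cells are plain structs whose neighbor lists already exclude obstacles (those assumptions are mine, not the answer's):

#include <vector>
using namespace std;

struct Cell {
    bool visited = false;
    int weight = 0;              // number of free neighbors, precomputed once
    vector<Cell*> neighbors;     // adjacent free cells
};

// Extends the path recursively, always preferring the unvisited neighbor
// with the fewest free neighbors ("stubs" first).
void calcPath(Cell* cell, vector<Cell*>& path) {
    cell->visited = true;
    path.push_back(cell);
    Cell* preferred = nullptr;
    for (Cell* n : cell->neighbors) {
        if (n->visited) continue;
        if (preferred == nullptr || n->weight < preferred->weight)
            preferred = n;
    }
    if (preferred != nullptr)
        calcPath(preferred, path);
}

Note that, like the pseudocode above, this greedy walk stops at the first cell with no unvisited neighbors; without backtracking it does not by itself guarantee that every free cell is covered.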


How would Transposition Tables work with Hypermax?

I was wondering if someone out there could help me understand how Transposition Tables could be incorporated into the Hypermax algorithm. Any examples, pseudo-code, tips, or implementation references would be much appreciated!
A little background:
Hypermax is a recursive game-tree search algorithm used for n-player games, typically for 3+ players. It's an extension of minimax and alpha-beta pruning. Generally, at each node in the game tree the current player (the chooser) looks at all of the moves it can make and chooses the one that maximizes its own utility; this is different from minimax/negamax.
I understand how transposition tables work, but I don't know how the values stored in them would be used to initiate cutoffs when a transposition table entry is found. A transposition flag is required in minimax with transposition tables & alpha-beta pruning. I can't seem to wrap my head around how that would be incorporated here.
Hypermax algorithm without transposition tables in JavaScript:
/**
 * @param {*} state A game state object.
 * @param {number[]} alphaVector The alpha vector.
 * @returns {number[]} An array of utility values for each player.
 */
function hypermax(state, alphaVector) {
    // If terminal, return the utilities for all of the players
    if (state.isTerminal()) {
        return state.calculateUtilities();
    }
    // Play out each move
    var moves = state.getLegalMoves();
    var bestUtilityVector = null;
    for (var i = 0; i < moves.length; ++i) {
        var move = moves[i];
        state.doMove(move); // move to child state - updates the board and advances to the next player
        var utilityVector = hypermax(state, alphaVector.slice(0)); // copy the alpha values down
        state.undoMove(move); // return to this state - rolls back the board and the player
        // Select this as best utility if first found
        if (i === 0) {
            bestUtilityVector = utilityVector;
        }
        // Update alpha
        if (utilityVector[state.currentPlayer] > alphaVector[state.currentPlayer]) {
            alphaVector[state.currentPlayer] = utilityVector[state.currentPlayer];
            bestUtilityVector = utilityVector;
        }
        // Alpha prune
        var sum = 0;
        for (var j = 0; j < alphaVector.length; ++j) {
            sum += alphaVector[j];
        }
        if (sum >= 0) {
            break;
        }
    }
    return bestUtilityVector;
}
References:
An implementation of Hypermax without Transposition Tables: https://meatfighter.com/spotai/#references_2
Minimax (negamax variant) with alpha-beta pruning and transposition tables: https://en.wikipedia.org/wiki/Negamax#Negamax_with_alpha_beta_pruning_and_transposition_tables
Original derivation and Proofs of Hypermax: http://uu.diva-portal.org/smash/get/diva2:761634/FULLTEXT01.pdf
The question is quite broad, so this is a similarly broad answer - if there is something specific, please clarify what you don't understand.
Transposition tables are not guaranteed to be correct in multi-player games, but if you implement them carefully they can be. This is discussed briefly in this thesis:
Multi-Player Games, Algorithms and Approaches
To summarize, there are three things to note about transposition tables in multi-player game trees. First, they require that we be consistent with our node-ordering. Second, they can be less effective than in two-player games, due to the fact that it takes more moves for a transposition to occur. Finally, speculative pruning can benefit from transposition tables, as they can offset the cost of re-searching portions of the game tree.
Beyond ordering issues, you may need to store things like the depth of search underneath a branch, the next player to play, and the bounds used for pruning the subtree. If, for instance, you have different bounds for pruning a tree in your first search, you may not produce correct results in the second search.
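As a very rough illustration of that point, here is a sketch (in C++, with made-up names) of what a multi-player transposition-table entry might store; whether a stored utility vector may actually be reused for a cutoff still depends on the caveats above, and the alpha-vector comparison is deliberately left to the caller:

#include <cstdint>
#include <unordered_map>
#include <vector>
using namespace std;

// Illustrative only: the field set is a guess at what a multi-player entry
// needs, not a drop-in for any particular Hypermax implementation.
struct TTEntry {
    int depth;                   // search depth remaining below the stored position
    int playerToMove;            // whose turn it was at the stored position
    vector<double> utilities;    // utility vector the stored search returned
    vector<double> alphaAtStore; // alpha vector in force when the entry was stored
    bool exact;                  // true if the stored search completed without a cutoff
};

unordered_map<uint64_t, TTEntry> table; // keyed by e.g. a Zobrist-style hash

// Reuse an entry only if it was searched at least as deep as we need now.
// Whether its utilities can justify a cutoff also depends on comparing
// alphaAtStore with the current alpha vector, which is left to the caller.
const TTEntry* probe(uint64_t key, int neededDepth) {
    auto it = table.find(key);
    if (it == table.end() || it->second.depth < neededDepth)
        return nullptr;
    return &it->second;
}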
HyperMax is only a slight variant of Max^n with speculative pruning, so you might want to look at that context to see if you can implement things in Max^n.

finding longest path in an adjacency list

I have an adjacency list I have created for a given graph with nodes and weighted edges. I am trying to figure out the best way to find the longest path within the graph. I have a topological sort method, which I've heard can be useful, but I am unsure how to use it to find the longest path. So is there a way to accomplish this with a topological sort, or is there a more efficient method?
Here is an example of my output for the adjacency list (the value in parentheses is the cost of the edge leading to the node after the arrow, i.e. (cost)->node):
Node 0 (4)->1(9)->2
Node 1 (10)->3
Node 2 (8)->3
Node 3
Node 4 (3)->8(3)->7
Node 5 (2)->8(4)->7(2)->0
Node 6 (2)->7(1)->0
Node 7 (5)->9(6)->1(4)->2
Node 8 (6)->9(5)->1
Node 9 (7)->3
Node 10 (12)->4(11)->5(1)->6
Bryan already answered your question above, but I thought I could go in more depth.
First, as he pointed out, this problem is only easily solvable if there are no cycles. If there are cycles you run into the situation where you have infinitely long paths. In that case, you might define a longest path to be any path with no repeated nodes. Unfortunately, this problem can be shown to be NP-Hard. So instead, we'll focus on the problem which it seems like you actually need to solve (since you mentioned the topological sort)--longest path in a Directed Acyclic Graph (DAG). We'll also assume that we have two nodes s and t that are our start and end nodes. The problem is a bit uglier otherwise unless you can make certain assumptions about your graph. If you understand the text below, and such assumptions in your graphs are correct, then perhaps you can remove the s and t restrictions (otherwise, you'll have to run it on every pair of vertices in your graph! Slow...)
The first step in the algorithm is to topologically order the vertices. Intuitively this makes sense. Say you order them from left to right (i.e. the leftmost node will have no incoming edges). The longest path from s to t will generally start from the left and end on the right. It's also impossible for the path to ever go in the left direction. This gives you a sequential ordering to generate the longest path--start at the left and move right.
The next step is to sequentially go left to right and define the longest path for each node. For any node that has no incoming edges, the longest path to that node is 0 (this is true by definition). For any node with incoming edges, recursively define the longest path to that node to be the maximum over all incoming edges + the longest path to get to the "incoming" neighbor (note that this number might be negative, if, for example, all of the incoming edges are negative!). Intuitively this makes sense, but the proof is also trivial:
Suppose our algorithm claims that the longest path to some node v is d but the actual longest path is some d' > d. Pick the "least" such node v (we use the ordering as defined by the topological sort. In other words, we pick the "left-most" node that our algorithm failed at. This is important so that we can assume that our algorithm has correctly determined the longest path for any nodes to the "left" of v). Define the length of the hypothetical longest path to be d' = d_1 + e where d_1 is the length of the hypothetical path up to a node v_prev with edge e to v (note the sloppy naming. The edge e also has weight e). We can define it as such because any path to v must go through one of its neighbors which have an edge going to v (since you can't get to v without getting there via some edge that goes to it). Then d_1 must be the longest path to v_prev (else, contradiction. There is a longer path which contradicts our choice of v as the "least" such node!) and our algorithm would choose the path containing d_1 + e as desired.
To generate the actual path you can figure out which edge was used. Say you've reconstructed the path up to some vertex v which has longest path length d. Then go over all incoming vertices and find the one with longest path length d' = d - e where e is the weight of the edge going into v. You could also just keep track of the parents' of nodes as you go through the algorithm. That is, when you find the longest path to v, set its parent to whichever adjacent node was chosen. You can use simple contradiction to show why either method generates the longest path.
Finally some pseudocode (sorry, it's basically in C#. This is a lot messier to code in C without custom classes and I haven't coded C in a while).
public List<Node> FindLongestPath(Graph graph, Node start, Node end)
{
    var longestPathLengths = new Dictionary<Node, int>();
    var orderedNodes = graph.Nodes.TopologicallySort();
    // Remove any nodes that are topologically less than start.
    // They cannot be in a path from start to end by definition.
    while (orderedNodes.Pop() != start) { }
    // Push it back onto the top of the stack
    orderedNodes.Push(start);
    // Run the algorithm until we process the end node
    while (true)
    {
        var node = orderedNodes.Pop();
        if (node.IncomingEdges.Count() == 0)
        {
            longestPathLengths.Add(node, 0);
        }
        else
        {
            var longestPathLength = int.MinValue;
            foreach (var incomingEdge in node.IncomingEdges)
            {
                var currPathLength = longestPathLengths[incomingEdge.Parent] +
                    incomingEdge.Weight;
                if (currPathLength > longestPathLength)
                {
                    longestPathLength = currPathLength;
                }
            }
            longestPathLengths.Add(node, longestPathLength);
        }
        if (node == end)
        {
            break;
        }
    }
    // Reconstruct the path. Go backwards until we hit start.
    var current = end;
    var longestPath = new List<Node> { end };
    while (current != start)
    {
        foreach (var incomingEdge in current.IncomingEdges)
        {
            if (longestPathLengths[incomingEdge.Parent] ==
                longestPathLengths[current] - incomingEdge.Weight)
            {
                longestPath.Insert(0, incomingEdge.Parent);
                current = incomingEdge.Parent;
                break;
            }
        }
    }
    return longestPath;
}
Note that this implementation is not particularly efficient, but hopefully it's clear! You can optimize in a lot of small ways that should be obvious as you think through the code/implementation. Generally, if you store more stuff in memory, it'll run faster. The way you structure your Graph is also critical. For instance, it didn't seem like you had an IncomingEdges property for your nodes. But without that, finding the incoming edges for each node is a pain (and is not performant!). In my opinion, graph algorithms are conceptually different from, say, algorithms on strings and arrays, because the implementation matters so much! If you read the wiki entries on graph algorithms you'll find they often give three or four different runtimes based on different implementations (with different data structures). Keep this in mind if you care about speed.
Assuming your graph has no cycles (otherwise "longest path" becomes a vague concept), you can indeed use a topological sort. Walk the nodes in topological order and, for each node, compute its longest distance from a source node by looking at all its predecessors and adding the weight of the connecting edge to each predecessor's distance. Then choose the predecessor that gives you the longest distance for this node. The topological sort guarantees that all the predecessors already have their distances correctly determined.
If, in addition to the length of the longest path, you also want the path itself, start at the node that gave the longest length and look at all its predecessors to find the one that produced this length. Then repeat this process until you have reached a source node of the graph.
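A compact sketch of that idea in C++, assuming the DAG is stored as adjacency lists of (target, weight) pairs with nodes numbered 0..n-1 (that representation is an assumption, not taken from the question):

#include <algorithm>
#include <limits>
#include <queue>
#include <vector>
using namespace std;

// Longest distance from `source` to every node of a DAG, where adj[u] holds
// (target, weight) pairs. Unreachable nodes keep a value of "minus infinity".
vector<long long> longestFrom(const vector<vector<pair<int,int>>>& adj, int source) {
    int n = adj.size();
    const long long NEG_INF = numeric_limits<long long>::min() / 2;

    // Kahn's algorithm produces a topological order.
    vector<int> indegree(n, 0);
    for (int u = 0; u < n; ++u)
        for (const auto& e : adj[u])
            indegree[e.first]++;
    queue<int> q;
    for (int u = 0; u < n; ++u)
        if (indegree[u] == 0)
            q.push(u);
    vector<int> order;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        order.push_back(u);
        for (const auto& e : adj[u])
            if (--indegree[e.first] == 0)
                q.push(e.first);
    }

    // Walk the order left to right, relaxing every outgoing edge.
    vector<long long> dist(n, NEG_INF);
    dist[source] = 0;
    for (int u : order)
        if (dist[u] > NEG_INF)
            for (const auto& e : adj[u])
                dist[e.first] = max(dist[e.first], dist[u] + e.second);
    return dist;
}

To recover the path itself, also record for each node which predecessor achieved the maximum, as described above.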

Flood Fill Algorithm - Maze Navigation

I am trying to implement a version of the flood fill algorithm to help find the shortest path through a maze for a micromouse. It works the same way as the regular flood fill, except that each adjacent unfilled cell is assigned a number representing its distance from the start cell. Each time the algorithm moves to a different cell the number is incremented by one. Here is an example for a maze with no walls, starting in the bottom-left corner.
2 3 4
1 2 3
0 1 2
Here is the current code I have ...
void nav_flood_rec(struct nav_array *array, int row, int column, int flood_num)
{
    //Check the base case (not shown here)
    if (base_case)
        return;
    //Assign the flood number
    array->cells[row][column]->flood_number = flood_num;
    //North
    nav_flood_rec(array, row + 1, column, flood_num + 1);
    //East
    nav_flood_rec(array, row, column + 1, flood_num + 1);
    //South
    nav_flood_rec(array, row - 1, column, flood_num + 1);
    //West
    nav_flood_rec(array, row, column - 1, flood_num + 1);
}
The problem I am having is that the recursion does not go one step at a time (kind of vague, but let me explain). Instead of checking all directions and then moving on, the algorithm keeps moving north and never checks the other directions. It seems that I want to make the other recursive calls somehow yield until the other directions are checked. Does anyone have any suggestions?
You've implemented something analogous to a depth-first-search, when what you're describing sounds like you want a breadth-first-search.
Use a queue instead of a stack. You're not using a stack explicitly here, but recursion is essentially an implicit stack. A queue will also avoid the stack overflows that seem likely with that much recursion.
Also, as G.Bach says, you'll need to mark cells as visited so your algorithm terminates.
Wikipedia's article on the subject:
An explicitly queue-based implementation is shown in pseudo-code below. It is similar to the simple recursive solution, except that instead of making recursive calls, it pushes the nodes into a LIFO queue (acting as a stack) for consumption:
Flood-fill (node, target-color, replacement-color):
 1. Set Q to the empty queue.
 2. Add node to the end of Q.
 3. While Q is not empty:
 4.     Set n equal to the last element of Q.
 5.     Remove the last element from Q.
 6.     If the color of n is equal to target-color:
 7.         Set the color of n to replacement-color.
 8.         Add the west node to the end of Q.
 9.         Add the east node to the end of Q.
10.         Add the north node to the end of Q.
11.         Add the south node to the end of Q.
12. Return.
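If it helps, here is a short C++ sketch of the breadth-first version applied to the distance-numbering problem from the question (rather than the color-replacement version quoted above); the can_move helper is an assumption standing in for whatever wall check the real maze code performs:

#include <climits>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

// Placeholder wall test: an open maze with no interior walls. In a real
// micromouse maze this would consult the wall map, which the question
// does not show.
bool can_move(int r, int c, int nr, int nc) { return true; }

// Breadth-first flood fill: dist[r][c] ends up holding the number of steps
// from the start cell, i.e. the 0/1/2... numbering shown in the question.
vector<vector<int>> flood_distances(int rows, int cols, int start_r, int start_c) {
    vector<vector<int>> dist(rows, vector<int>(cols, INT_MAX));
    queue<pair<int,int>> q;
    dist[start_r][start_c] = 0;
    q.push({start_r, start_c});
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    while (!q.empty()) {
        pair<int,int> cell = q.front(); q.pop();
        int r = cell.first, c = cell.second;
        for (int d = 0; d < 4; ++d) {
            int nr = r + dr[d], nc = c + dc[d];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue; // outside the maze
            if (!can_move(r, c, nr, nc)) continue;                      // blocked by a wall
            if (dist[nr][nc] != INT_MAX) continue;                      // already numbered
            dist[nr][nc] = dist[r][c] + 1;                              // one step further out
            q.push({nr, nc});
        }
    }
    return dist;
}

Because the queue is FIFO, every cell is first reached along a shortest route, so each cell receives the smallest possible step count, matching the 0/1/2 example in the question.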
You make the //North recursive call without testing any condition. Therefore, your recursion will, in order:
1) Test for base case
2) Set new flood number
3) Encounter //north and call nav_flood_rec()
4) REPEAT.
As you can see, you will never reach your other calls. You need to add a test condition, branch on it, or something along those lines.
Not really sure what you're trying to do, but you could pass another struct as a parameter and have a value for each direction and then test them for equality... like...
struct decision_maker {
    int north;
    int south;
    int west;
    int east;
};
Then in your code:
/* assume dm is passed as a pointer to a decision_maker struct */
if (dm->north > dm->south) {
    if (dm->south > dm->east) {
        dm->east++;  // increment first
        // call east
    } else if (dm->south > dm->west) {
        dm->west++;  // increment first
        // call west
    } else {
        dm->south++;
        // call south
    }
} else {
    dm->north++;
    // call north
}
/*
 * needs another check or two, like altering the base case a bit;
 * the point should be clear, though.
 */
It will get a little messy but it will do the job.

Rotating a 2-d array by 90 degrees

A frequent question that crops up during array-manipulation exercises is how to rotate a two-dimensional array by 90 degrees. There are a few SO posts that answer how to do it in a variety of programming languages. My question is to clarify one of the answers that is out there and to explore what sort of thought process is required to get to the answer in an organic manner.
The solution to this problem that I found goes as follows:
public static void rotate(int[][] matrix, int n)
{
    for (int layer = 0; layer < n/2; ++layer) {
        int first = layer;
        int last = n - 1 - layer;
        for (int i = first; i < last; ++i) {
            int offset = i - first;
            int top = matrix[first][i];
            matrix[first][i] = matrix[last-offset][first];
            matrix[last-offset][first] = matrix[last][last-offset];
            matrix[last][last-offset] = matrix[i][last];
            matrix[i][last] = top;
        }
    }
}
I have somewhat of an idea of what the code above is trying to do: it swaps out the extremities/corners with a four-way swap and does the same for the other cells, separated by some offset.
Stepping through this code I know it works; what I do not get is the mathematical basis for the algorithm. What is the rationale behind 'layer', 'first', 'last' and the offset?
How did 'last' turn out to be n-1-layer? Why is the offset i-first? What is the offset in the first place?
If somebody could explain the genesis of this algorithm and step me through the thought process to come up with the solution, that will be great.
Thanks
The idea is to break down the big task (rotating a square matrix) into smaller tasks.
First, a square matrix can be broken into concentric square rings. The rotation of a ring is independent from the rotation of other rings, so to rotate the matrix just rotate each of the rings, one by one. In this case, we start at the outermost ring and work inward. We count the rings using layer (or first, same thing), and stop when we get to the middle, which is why it goes up to n/2. (It is worth checking to make sure this will work for odd and even n.) It is useful to keep track of the "far edge" of the ring, using last = n - 1 - layer. For instance, in a 5x5 matrix, the first ring starts at first=0 and ends at last=4, the second ring starts at first=1 and ends at last=3 and so on.
How to rotate a ring? Walk right along the top edge, up along the left edge, left along the bottom edge and down along the right edge, all at the same time. At each step swap the four values around. The coordinate that changes is i, and the number of steps taken so far is offset. For example, when walking around the second ring of the 5x5 matrix, i goes {1,2} and offset goes {0,1}.
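As a concrete check, tracing the outer ring of a 4x4 matrix gives first=0 and last=3, so i runs over {0,1,2} and each iteration cycles four cells one quarter-turn. Below is the same four-way swap transcribed into a small self-contained C++ program (the sample matrix is only an illustration):

#include <cstdio>

// The same four-way swap as the method above, transcribed to C++.
void rotate(int matrix[4][4], int n) {
    for (int layer = 0; layer < n / 2; ++layer) {
        int first = layer;
        int last = n - 1 - layer;
        for (int i = first; i < last; ++i) {
            int offset = i - first;
            int top = matrix[first][i];                                  // save the top cell
            matrix[first][i] = matrix[last - offset][first];             // left -> top
            matrix[last - offset][first] = matrix[last][last - offset];  // bottom -> left
            matrix[last][last - offset] = matrix[i][last];               // right -> bottom
            matrix[i][last] = top;                                       // top -> right
        }
    }
}

int main() {
    int m[4][4] = {{ 1,  2,  3,  4},
                   { 5,  6,  7,  8},
                   { 9, 10, 11, 12},
                   {13, 14, 15, 16}};
    rotate(m, 4);
    // Prints the clockwise rotation:
    // 13  9  5  1 / 14 10  6  2 / 15 11  7  3 / 16 12  8  4
    for (int r = 0; r < 4; ++r, puts(""))
        for (int c = 0; c < 4; ++c)
            printf("%3d", m[r][c]);
    return 0;
}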

How to optimize Dijkstra algorithm for a single shortest path between 2 nodes?

I was trying to understand this implementation in C of the Dijkstra algorithm and at the same time modify it so that only the shortest path between 2 specific nodes (source and destination) is found.
However, I don't know exactly what to do. The way I see it, there's not much to change; I can't touch d[] or prev[] because those arrays hold the important data for the shortest-path calculation.
The only thing I can think of is stopping the algorithm when the path is found, that is, breaking the loop when mini == destination at the moment it is marked as visited.
Is there anything else I could do to make it better or is that enough?
EDIT:
While I appreciate the suggestions given to me, people still fail to answer exactly what I asked. All I want to know is how to optimize the algorithm so that it only searches for the shortest path between 2 nodes. I'm not interested, for now, in any other general optimizations. What I'm saying is: in an algorithm that finds all shortest paths from a node X to all other nodes, how do I optimize it to search only for a specific path?
P.S.: I just noticed that the for loops start at 1 and go until <=; why can't they start at 0 and go until <?
The implementation in your question uses an adjacency matrix, which leads to an O(n^2) implementation. Graphs in the real world are usually sparse: the number of nodes n can be very big, while the number of edges is far less than n^2.
You'd be better off looking at a heap-based Dijkstra implementation.
BTW, no algorithm is known that solves the single-pair shortest-path problem asymptotically faster in the worst case than computing the shortest paths from a single source.
#include <algorithm>
#include <cstring> // for memset
using namespace std;
#define MAXN 100
#define HEAP_SIZE 100
typedef int Graph[MAXN][MAXN];
template <class COST_TYPE>
class Heap
{
public:
int data[HEAP_SIZE],index[HEAP_SIZE],size;
COST_TYPE cost[HEAP_SIZE];
void shift_up(int i)
{
int j;
while(i>0)
{
j=(i-1)/2;
if(cost[data[i]]<cost[data[j]])
{
swap(index[data[i]],index[data[j]]);
swap(data[i],data[j]);
i=j;
}
else break;
}
}
void shift_down(int i)
{
int j,k;
while(2*i+1<size)
{
j=2*i+1;
k=j+1;
if(k<size&&cost[data[k]]<cost[data[j]]&&cost[data[k]]<cost[data[i]])
{
swap(index[data[k]],index[data[i]]);
swap(data[k],data[i]);
i=k;
}
else if(cost[data[j]]<cost[data[i]])
{
swap(index[data[j]],index[data[i]]);
swap(data[j],data[i]);
i=j;
}
else break;
}
}
void init()
{
size=0;
memset(index,-1,sizeof(index));
memset(cost,-1,sizeof(cost));
}
bool empty()
{
return(size==0);
}
int pop()
{
int res=data[0];
data[0]=data[size-1];
index[data[0]]=0;
size--;
shift_down(0);
return res;
}
int top()
{
return data[0];
}
void push(int x,COST_TYPE c)
{
if(index[x]==-1)
{
cost[x]=c;
data[size]=x;
index[x]=size;
size++;
shift_up(index[x]);
}
else
{
if(c<cost[x])
{
cost[x]=c;
shift_up(index[x]);
shift_down(index[x]);
}
}
}
};
int Dijkstra(Graph G,int n,int s,int t)
{
Heap<int> heap;
heap.init();
heap.push(s,0);
while(!heap.empty())
{
int u=heap.pop();
if(u==t)
return heap.cost[t];
for(int i=0;i<n;i++)
if(G[u][i]>=0)
heap.push(i,heap.cost[u]+G[u][i]);
}
return -1;
}
You could perhaps improve somewhat by maintaining separate open and closed lists (unvisited and visited); it may improve seek times a little.
Currently you search for an unvisited node with the smallest distance to the source.
1) You could maintain a separate 'open' list that gets smaller and smaller as you iterate, thus making your search space progressively smaller.
2) If you maintain a 'closed' list (the nodes you have visited) you can check the distance against only those nodes. This will progressively increase your search space, but you don't have to check all nodes on each iteration. Checking the distance against nodes that have not been visited yet serves no purpose.
Also: perhaps consider following the graph when picking the next node to evaluate: on the 'closed' list you could seek the smallest distance and then search for an 'open' node among its connections. (If the node turns out to have no open nodes among its connections, you can remove it from the closed list; it is a dead end.)
You can even use this connectivity to build your open list, which would also help with islands (your code will currently crash if your graph has islands).
You could also pre-build a more efficient connection graph instead of a cross table containing all possible node combinations (e.g. a Node struct with a neighbours[] node list). This would remove having to check all nodes for each node in the dist[][] array.
Instead of initializing all node distances to infinity, you could initialize them to the 'smallest possible optimistic distance' to the target and favor node processing based on that (your possibilities differ here; if the nodes are on a 2D plane you could use the straight-line distance). See descriptions of A* for the heuristic. I once implemented this around a queue; I'm not entirely sure how I would integrate it into this code (without a queue, that is).
The biggest improvement you can make over Dijkstra is using A* instead. Of course, this requires that you have a heuristic function.
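For what it's worth, here is a hedged C++ sketch of that A* variation on top of an ordinary binary heap (not the Heap class above): the only change relative to Dijkstra is that nodes are ordered by g + h, where h is a caller-supplied, admissible (never overestimating) estimate of the remaining distance to the target; with h identically zero it degenerates into the early-exit Dijkstra already shown. The adjacency-list representation and the heuristic array are assumptions of the sketch, not part of the code in the answer above.

#include <functional>
#include <queue>
#include <utility>
#include <vector>
using namespace std;

// A*: identical to Dijkstra except that the priority queue is ordered by
// g(u) + h(u). With an admissible h, the first time the target is popped
// its g value is the length of a shortest path.
int astar(const vector<vector<pair<int,int>>>& adj, int s, int t,
          const vector<int>& h) {
    const int INF = 1000000000;
    vector<int> g(adj.size(), INF);
    // entries are (f = g + h, node), smallest f first
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
    g[s] = 0;
    pq.push({h[s], s});
    while (!pq.empty()) {
        pair<int,int> top = pq.top(); pq.pop();
        int f = top.first, u = top.second;
        if (u == t) return g[t];          // target settled: shortest path found
        if (f > g[u] + h[u]) continue;    // stale entry, a better one was pushed later
        for (const auto& e : adj[u]) {
            int v = e.first, w = e.second;
            if (g[u] + w < g[v]) {
                g[v] = g[u] + w;
                pq.push({g[v] + h[v], v});
            }
        }
    }
    return -1;                            // target unreachable
}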
