TSP where vertices may be visited more than once - C

In a study project, my group is working on a robot that drives on a 5x5 grid containing mines. We don't know the locations of the mines, nor can we drive over them (only the sensor is allowed to pass over and detect them).
Our goal is to scan this matrix for mines as fast as possible, starting from a point located below matrix point (1,5). We decided that the edges and the starting point of this matrix will be the vertices of our TSP problem, but we are stuck at finding a good algorithm which can (if needed) cross an edge multiple times when that is faster.
The only thing we have at the moment is a 41x41 matrix with all possible ways to go from one edge to another (entries can be set to infinite when a mine is detected), and a backup plan: predefine a route and, whenever a mine is detected, send the robot to the next point.
What is the fastest algorithm that can tackle our problem, and can you also provide example C code or an idea of how to create it?

To answer the general question (how do you solve a TSP in which vertices may be visited more than once): just run the Floyd-Warshall algorithm first to compute all-pairs shortest paths, then run a standard TSP solver on the resulting complete graph; any "revisit" is then hidden inside a shortest path between two tour stops.
To answer your specific question: since you don't know where the mines are at the start, you'll have to recompute your route each time you discover that it is blocked.
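To make the first part concrete, here is a minimal Floyd-Warshall sketch over a cost matrix like the 41x41 one you describe; the N, INF and long-based layout are assumptions for illustration, not your actual data structures:

```c
#define N 41                 /* one row/column per vertex, as in your 41x41 matrix */
#define INF 1000000000L      /* "infinite" cost for edges blocked by a mine */

/* Floyd-Warshall: turns the direct-edge cost matrix into an all-pairs
   shortest-path matrix. Afterwards every pair of vertices has a "direct"
   cost that already allows passing through intermediate vertices, so a
   standard TSP solver on top of it implicitly handles revisits. */
void floyd_warshall(long dist[N][N])
{
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (dist[i][k] != INF && dist[k][j] != INF &&
                    dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}
```

When the robot detects a mine, you would set the affected entries back to INF and rerun both this step and the TSP step.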

Related

What is genetic drift and how does it affect EAs?

I have read in some articles on evolutionary computing that the algorithms generally converge to a single solution due to the phenomenon of genetic drift. There is a lot of content on the Internet, but I can't get a deep understanding of this concept. I need to know, simply and precisely:
What is genetic drift in the context of evolutionary computing?
How does it affect the convergence of an evolutionary algorithm?
To better understand the original concept of genetic drift (biology), I suggest you read this Khan Academy article. Simply put, you can think of it as an evolutionary phenomenon in which the frequency of one or more alleles (versions of a gene) in a population changes due to random factors (unrelated to the fitness of each individual). If the fittest individual of a population is struck by lightning, out of pure bad luck, and dies before reproducing, it won't leave offspring (despite having the highest fitness!). This is an example (somewhat absurd, I know) of genetic drift.
Now, in the specific context of evolutionary algorithms, this paper provides a good summary of the subject:
EAs genetic drift can be as a result of a combination of factors, primarily related to selection, fitness function and representation. It happens by unintentional loss of genotypes. For example, random chance that a good genotype solution never gets selected for reproduction. Or, if there is a ‘lifespan’ to a solution and it dies before it can reproduce. Normally such a genotype only resides in the population for a limited number of generations.
(Sloss & Gustafson, 2019)
Finally, I will give you a real example of genetic drift acting on a genetic algorithm. Recently, I used a simple neuroevolution algorithm to create an agent capable of playing the Snake game (GitHub repo). In my implementation of the game, the apples appear at random positions on the screen. When executing the evolutionary process for the first time, I noticed a big fluctuation in the population's best fitness between consecutive generations - overall, it wasn't improving much. Because of this, my algorithm was unable to converge to a good solution.
After some debugging, I found out that this was being caused by genetic drift. Because the apples spawned in random positions, some individuals, not necessarily the fittest, were lucky and got "easy apples", thus achieving a high fitness and leaving more offspring. Do you see the problem here?
Suppose that snake A is better at the game than snake B, because it can move towards the food, while B only moves randomly. Now, suppose that the first food that appeared for snake A was in a corner of the screen (a difficult position) and A died shortly after eating the apple. Now, suppose that snake B was lucky enough to have 3 apples spawn in a row, one after the other. Although B is "dumber" than A, it will leave more offspring, because it achieved a greater fitness. B's offspring will "pollute" the next generation, because they'll probably be "dumb" like B.
I solved the problem by using a better apple-positioning algorithm (I defined a minimum distance between the spawning positions of two consecutive apples) and by calculating each individual's final fitness as the average of its fitness over several playing sessions. This greatly reduced (although it did not eliminate) the interference of genetic drift in my algorithm.
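As a minimal sketch of the averaging idea (in C, with `Individual` and `play_session` as hypothetical stand-ins for your own genome type and game loop):

```c
typedef struct Individual Individual;        /* your genome/agent type (assumed) */
double play_session(const Individual *ind);  /* plays one full game, returns its fitness (assumed) */

/* Average the fitness over several independent sessions, so that one
   lucky or unlucky apple layout has less influence on selection. */
double final_fitness(const Individual *ind, int sessions)
{
    double total = 0.0;
    for (int s = 0; s < sessions; s++)
        total += play_session(ind);
    return total / sessions;
}
```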
I hope this helps. You can also take a look at this video (it's in Portuguese, but English subtitles are available), where I explain some of the strategies I used to make the Snake AI.

How to move in a matrix to reach a goal using the least number of moves

I have recently started programming in C and I have a problem to solve. In practice, I am developing a small game where, in a first phase, I randomly place a number of pawns on a matrix, and in a second phase I place the flags to be conquered.
Each pawn has a target index, which corresponds to a placed flag, and a number of moves in which to reach it.
How can I find an optimal path that from the starting index leads me to a goal with a number of moves <= number of moves for each piece?
If I understand correctly, you are looking for pathfinding algorithms to determine the optimal path.
You can use BFS (Breadth-First Search) or DFS (Depth-First Search), but there are many more algorithms; find info here, and if you want to test them in your browser, I recommend this GitHub page.
In terms of code, you will find implementations of these algorithms on the internet, and lots of information directly on Stack Overflow.
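For instance, here is a rough BFS sketch in C for a small grid; the dimensions, the `blocked` array and the 4-direction move set are assumptions you would adapt to your game:

```c
#include <string.h>

#define ROWS 6
#define COLS 6

/* BFS over a grid: returns the minimum number of moves from (sr,sc)
   to (tr,tc), or -1 if the target is unreachable. blocked[r][c] marks
   impassable cells. */
int bfs_moves(const int blocked[ROWS][COLS],
              int sr, int sc, int tr, int tc)
{
    int dist[ROWS][COLS];
    int queue[ROWS * COLS][2];
    int head = 0, tail = 0;
    const int dr[] = {-1, 1, 0, 0};
    const int dc[] = {0, 0, -1, 1};

    memset(dist, -1, sizeof dist);   /* -1 means "not visited yet" */
    dist[sr][sc] = 0;
    queue[tail][0] = sr; queue[tail][1] = sc; tail++;

    while (head < tail) {
        int r = queue[head][0], c = queue[head][1];
        head++;
        if (r == tr && c == tc)
            return dist[r][c];
        for (int k = 0; k < 4; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < ROWS && nc >= 0 && nc < COLS &&
                !blocked[nr][nc] && dist[nr][nc] == -1) {
                dist[nr][nc] = dist[r][c] + 1;
                queue[tail][0] = nr; queue[tail][1] = nc; tail++;
            }
        }
    }
    return -1;
}
```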
When there is a matrix involved, you can usually use nested for loops to iterate over all the points of the matrix. Since every pawn has its own target flag, you should store the position in the matrix of each pawn and flag. If, for example, it's a 6 by 6 matrix, your pawn is at position (5,0) and the target flag is at position (0,0), you have to decrement the pawn's first coordinate until it reaches the flag. However, if the number of moves for this pawn is 3, it is impossible to reach the flag, because the minimum number of moves needed (the Manhattan distance) is 5. I guess that's a start, and then you build up from there?
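That feasibility check can be written in a couple of lines (assuming 4-directional moves on an otherwise empty grid, where the Manhattan distance is exactly the minimum number of moves):

```c
#include <stdlib.h>

/* Returns 1 if the pawn at (pr,pc) can reach the flag at (fr,fc)
   within its move budget on an open grid, 0 otherwise. */
int can_reach(int pr, int pc, int fr, int fc, int moves)
{
    return abs(pr - fr) + abs(pc - fc) <= moves;
}
```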

How to account for move order in chess board evaluation

I am programming a Chess AI using an alpha-beta pruning algorithm that works at a fixed depth. I was quite surprised to see that setting the AI to a higher depth made it play even worse. But I think I figured out why.
It currently works this way: all positions are listed, and for each of them, every position reachable from that move is listed, and so on, until the fixed depth is reached. The board is then evaluated by checking which pieces are present and assigning a value to every piece type. Finally, the value bubbles up to the root using the minimax algorithm with alpha-beta pruning.
But I need to account for move order. For instance, if there are two options, a checkmate in 2 moves and another in 7 moves, then the first one has to be chosen. The same goes for taking a queen in either 3 or 6 moves.
But since I only evaluate the board at the deepest nodes, and the evaluation only looks at the board itself, the search doesn't know what the previous moves were.
I'm sure there is a better way to evaluate the game that can account for the way the pieces moved through the search.
EDIT: I figured out why it was playing strangely. When I searched for moves at depth 5, the search ended on an AI move (a MAX node level). Because of this, it counted moves such as taking a knight with a rook even if that left the rook vulnerable (the algorithm cannot see it because it doesn't search deeper than that).
So I changed that and set the depth to 6, so that the search ends on a MIN node level.
Its moves now make more sense, as it actually takes revenge when attacked (which it sometimes didn't do before, playing a dumb move instead).
However, it is now more defensive than ever and does not really play: it moves its knight, then moves it back to where it was before, and therefore it ends up losing.
My evaluation is very standard: only the presence of pieces matters to a node's value, so the AI is free to pick whatever strategy it wants without being forced into anything.
Considering that, is this normal behaviour for my algorithm? Is it a sign that my alpha-beta algorithm is badly implemented, or is it perfectly normal with such an evaluation function?
If you want to select the shortest path to a win, you probably also want to select the longest path to a loss. If you were to try to account for this in the evaluation function, you would have to track the path length along with the score and have separate evaluation functions for min and max. It's a lot of complex and confusing overhead.
The standard way to solve this problem is with an iterative deepening approach to the search. First you search 1 move deep for all players, then you run the entire search again searching 2 moves for each player, and so on until you run out of time. If you find a win in 2 moves, you stop searching and you'll never run into the 7-move situation. This also solves your problem of searching odd depths and getting strange evaluations. It has many other benefits, like always having a move ready to go when you run out of time, and some significant algorithmic improvements because you won't need the overhead of tracking visited states.
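A rough sketch of that driver loop in C, with `Board` and `alphabeta_root` as hypothetical stand-ins for your own types and fixed-depth search:

```c
#include <time.h>

typedef struct Board Board;               /* your game state (assumed) */
int alphabeta_root(Board *b, int depth);  /* your fixed-depth search; returns best move (assumed) */

/* Iterative deepening: repeat the whole fixed-depth search with an
   ever-increasing depth, always keeping the move from the deepest
   search that finished within the time budget. */
int iterative_deepening(Board *b, double seconds)
{
    clock_t start = clock();
    int best_move = -1;

    for (int depth = 1; ; depth++) {
        if ((double)(clock() - start) / CLOCKS_PER_SEC >= seconds)
            break;
        best_move = alphabeta_root(b, depth);  /* a real engine would also abort mid-search */
    }
    return best_move;
}
```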
As for the defensive play, that is a little bit of the horizon effect and a little bit of the evaluation function. If you had a perfect evaluation function, the algorithm would only need to see one move deep. Since it's not perfect (and it never is), you'll need to search much deeper. Last I checked, algorithms that can run on your laptop and see about 8 plies deep (a ply is a single move by one player) can compete with strong humans.
In order to let the program choose the shortest checkmate, the standard approach is to give a higher score to mates that occur closer to the root. Of course, you must detect checkmates in the first place and give them some score.
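Concretely, the usual trick is to subtract the ply (distance from the root) from a large mate constant, so a mate in 2 outscores a mate in 7. A sketch, where the constant is an assumption:

```c
#define MATE_VALUE 100000   /* anything far above the largest material score */

/* Score a checkmate detected `ply` half-moves below the root:
   nearer mates get higher scores. */
int mate_score(int ply)
{
    return MATE_VALUE - ply;
}
```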
Also, from what you describe, you need a quiescence search.
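A quiescence search, in rough negamax form, keeps expanding capture moves past the nominal depth so the evaluation never lands in the middle of an exchange. In this sketch `Board`, `Move` and all the helpers are assumed to come from your engine:

```c
#define MAX_MOVES 256

typedef struct Board Board;                       /* your position type (assumed) */
typedef int Move;                                 /* your move encoding (assumed) */
int  evaluate(const Board *b);                    /* static evaluation (assumed) */
int  generate_captures(const Board *b, Move *m);  /* fills m, returns count (assumed) */
void make_move(Board *b, Move m);                 /* assumed */
void unmake_move(Board *b, Move m);               /* assumed */

/* Search only captures until the position is "quiet", using the static
   score as a stand-pat lower bound. */
int quiescence(Board *b, int alpha, int beta)
{
    int stand_pat = evaluate(b);
    if (stand_pat >= beta)
        return beta;
    if (stand_pat > alpha)
        alpha = stand_pat;

    Move moves[MAX_MOVES];
    int n = generate_captures(b, moves);
    for (int i = 0; i < n; i++) {
        make_move(b, moves[i]);
        int score = -quiescence(b, -beta, -alpha);
        unmake_move(b, moves[i]);
        if (score >= beta)
            return beta;
        if (score > alpha)
            alpha = score;
    }
    return alpha;
}
```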
All of this (and much more) is explained in the chess programming wiki. You should check it out:
https://chessprogramming.wikispaces.com/Checkmate#MateScore
https://chessprogramming.wikispaces.com/Quiescence+Search

All or nothing - fast heuristic shortest path algorithm (parallel?)

I'm looking for a good way to find the shortest path between two points in a network (directed, cyclic, weighted) of billions of nodes. Basically, I want an algorithm that will typically find a solution very quickly, even if its worst case is horrible.
I'm open to parallel or distributed algorithms, although they would have to make sense for the size of the data set (an algorithm that works with CUDA on a graphics card would have to process the graph in chunks). I don't plan on using a farm of computers, but potentially a few machines at most.
A Google search gives you a lot of good links; the first result itself discusses parallel implementations of two shortest-path algorithms.
And talking about an implementation on CUDA, remember that billions of nodes means gigabytes of memory. That limits the number of nodes you can keep on one card (for optimum performance) at a time. The largest memory on a graphics card currently on the market is about 6 GB, which gives you an estimate of the number of cards you may need (not necessarily the number of machines).
Look at Dijkstra's algorithm. Essentially it performs an optimized breadth-first-style search, expanding nodes in order of their distance from the source, until it is guaranteed to have found the shortest path. The first path found to the target might be the shortest, but you can't be sure until every other branch of the search has terminated or exceeded that distance.
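For reference, a toy O(V²) version in C over an adjacency matrix; a graph with billions of nodes would of course need adjacency lists and a priority queue instead:

```c
#define N 6                   /* toy graph size (assumption) */
#define INF 1000000000        /* marks "no edge" / "unreached" */

/* Simple Dijkstra: repeatedly settle the nearest unsettled node and
   relax its outgoing edges. dist[] receives shortest distances from src. */
void dijkstra(const int w[N][N], int src, int dist[N])
{
    int done[N] = {0};
    for (int i = 0; i < N; i++) dist[i] = INF;
    dist[src] = 0;

    for (int round = 0; round < N; round++) {
        int u = -1;
        for (int i = 0; i < N; i++)           /* pick nearest unsettled node */
            if (!done[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        if (u == -1 || dist[u] == INF)
            break;                            /* rest of graph unreachable */
        done[u] = 1;
        for (int v = 0; v < N; v++)           /* relax edges out of u */
            if (w[u][v] != INF && dist[u] + w[u][v] < dist[v])
                dist[v] = dist[u] + w[u][v];
    }
}
```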
You could use uniform-cost search. This search algorithm will find an optimal solution in a weighted graph. If I remember correctly, the search complexity (space and time) is O(b^(1 + C*/ε)), where b denotes the branching factor, C* the optimal path cost to the goal, and ε the minimum edge cost.
There is also something called bidirectional search, where you search from the initial state and the goal state at the same time, and hopefully the two frontiers meet somewhere in the middle of the graph :)
I am worried that unless your graph is somehow laid out nicely in memory, you won't get much benefit from using CUDA compared to a well-tuned parallel algorithm on a CPU. The problem is that walking a "totally unordered" graph leads to a lot of random memory accesses.
When you have 32 CUDA threads working together in parallel but their memory accesses are random, the fetches have to be serialized. Since the search algorithm does not perform much heavy mathematical computation, fetching memory is where you are likely to lose most of your time.

Is the board game "Go" NP complete?

There are plenty of Chess AIs around, and evidently some are good enough to beat some of the world's greatest players.
I've heard that many attempts have been made to write successful AIs for the board game Go, but so far nothing has been conceived beyond average amateur level.
Could it be that the task of mathematically calculating the optimal move at any given time in Go is an NP-complete problem?
Chess and Go are both EXPTIME-complete. IIRC, Go has more possible moves, so I think it sits even further up within that complexity class than chess. Wikipedia has a good article on the complexity of Go.
Even if Go were merely in P, it could still be something horrendous like O(n^m), where n is the number of spaces and m is some (large) fixed number. Even being in P doesn't make something reasonable to compute.
Neither Chess nor Go AIs completely evaluate all possibilities before deciding on a move.
Chess AIs use various heuristics to narrow down the search space and to quantify how 'good' a given position on the board happens to be. This can be done recursively by evaluating possible board positions 14-15 moves ahead and choosing a path that leads to a good position.
There's a bit of 'magic' in how a board position is quantified, so that at the top level the AI can simply conclude Move A > Move B and therefore play Move A. But since there is a limited number of pieces and they all have quantifiable value, a 'good enough' algorithm can be implemented.
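As an illustration of that kind of quantification, here is a toy material-only evaluator in C; the 64-square arrays and the piece values are assumptions (real engines add positional terms on top):

```c
enum { EMPTY, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

/* Conventional centipawn values; kings are never traded, so 0. */
static const int piece_value[] = {0, 100, 320, 330, 500, 900, 0};

/* Sum piece values over the board: positive for our pieces, negative
   for the opponent's. piece[sq] holds the piece type on square sq,
   mine[sq] is 1 if that piece belongs to the side being evaluated. */
int evaluate_material(const int piece[64], const int mine[64])
{
    int score = 0;
    for (int sq = 0; sq < 64; sq++)
        score += mine[sq] ? piece_value[piece[sq]] : -piece_value[piece[sq]];
    return score;
}
```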
But it turns out to be a lot harder for a program to evaluate two possible board positions in Go and make that A > B comparison. Without that critical piece, it's a little hard to make the rest of the AI work.
