Search Algorithm for Pacman - C

I need to find the lowest-cost path in a graph represented by a matrix. I researched Dijkstra's algorithm a bit, but I need a vector with the sequence of nodes in the shortest path, not just the distance. The game is being written in Assembly, but if anyone knows an implementation in C, that would help a lot. I will use it to calculate the ghosts' routes, combining it with heuristic algorithms to create the Very Hard mode of the game. I also tried something with A*, but the implementations I found used structs, which are not applicable to my situation. Thanks a lot in advance. ^^

This problem is the basis for the edX AI course. I managed to find breadth-first search code written in C here. From what I can remember, breadth-first search is guaranteed to find the shortest path in an unweighted graph, if one exists.
I don't think it would be too hard to add a heuristic in there either; there should be notes on the edX link that would help with that.
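To make the "sequence of nodes" part concrete, here is a minimal sketch in C of BFS on a grid maze with a predecessor array; the grid contents and the name bfs_path are made up for the example. The same bookkeeping carries over to Dijkstra or A* if you later add weights, and it translates to Assembly easily since it only uses flat arrays, no structs:

    #include <string.h>

    /* Illustrative sketch: BFS over a small grid maze (0 = free, 1 = wall),
       recording each cell's predecessor so the node sequence can be
       reconstructed, not just the distance. Cell ids are row*COLS + col. */
    #define ROWS 5
    #define COLS 5

    static const int grid[ROWS][COLS] = {
        {0,0,0,1,0},
        {1,1,0,1,0},
        {0,0,0,0,0},
        {0,1,1,1,0},
        {0,0,0,0,0}
    };

    /* Fills path[] with the cells from start to goal; returns the number of
       cells in the path, or 0 if the goal is unreachable. */
    int bfs_path(int sr, int sc, int gr, int gc, int path[ROWS * COLS])
    {
        int prev[ROWS * COLS];                 /* predecessor of each cell */
        int queue[ROWS * COLS], head = 0, tail = 0;
        const int dr[4] = {-1, 1, 0, 0}, dc[4] = {0, 0, -1, 1};

        memset(prev, -1, sizeof prev);         /* -1 means "not visited" */
        int start = sr * COLS + sc, goal = gr * COLS + gc;
        prev[start] = start;
        queue[tail++] = start;

        while (head < tail) {
            int cur = queue[head++];
            if (cur == goal) break;
            int r = cur / COLS, c = cur % COLS;
            for (int k = 0; k < 4; k++) {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr < 0 || nr >= ROWS || nc < 0 || nc >= COLS) continue;
                int nxt = nr * COLS + nc;
                if (grid[nr][nc] == 1 || prev[nxt] != -1) continue;
                prev[nxt] = cur;               /* remember how we got here */
                queue[tail++] = nxt;
            }
        }
        if (prev[goal] == -1) return 0;

        /* walk predecessors back from the goal, then reverse into order */
        int n = 0;
        for (int cur = goal; ; cur = prev[cur]) {
            path[n++] = cur;
            if (cur == start) break;
        }
        for (int i = 0; i < n / 2; i++) {
            int t = path[i]; path[i] = path[n - 1 - i]; path[n - 1 - i] = t;
        }
        return n;
    }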

Related

Why is DFS not optimal but BFS optimal?

I have had this question in my mind for a long time but never got a reasonable answer:
In artificial intelligence courses, when it comes to search, it is always said that BFS is optimal but DFS is not. Yet I can come up with many examples where DFS gets to the answer faster. So can anyone explain this? Am I missing something?
Optimal as in "produces the optimal path", not "is the fastest algorithm possible". When searching a state space for a path to a goal, DFS may produce a much longer path than BFS. Note that BFS is only optimal when actions are unweighted; if different actions have different weights, you need something like A*.
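A toy example makes the distinction concrete. Consider an unweighted graph with edges S-A, A-G, and S-G, where S is the start and G the goal. A DFS that happens to visit A first returns the path S -> A -> G (two edges) and stops; BFS expands S's neighbours level by level and returns S -> G (one edge). The DFS answer may even arrive faster, but it is not the optimal path.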
This is because by "optimal strategy" they mean the one whose returned solution maximizes the utility.
Regarding this, nothing guarantees that the first solution found by DFS is optimal. Also, BFS is not optimal in a general sense (e.g. with weighted actions), so your statement as-is is wrong.
The main point here is about being guaranteed that a certain search strategy will always return the optimal result. Any strategy might be the best in a given case, but often (especially in AI), you don't know in what specific case you are, at most you have some hypothesis on that case.
That said, DFS is optimal when the search tree is finite, all action costs are identical, and all solutions have the same length. However limiting this may sound, there is an important class of problems that satisfies these conditions: CSPs (constraint satisfaction problems). Maybe all the examples you thought of fall into this (rather common) category.
You can refer to my answer to this question in which I explain why DFS is not optimal and why BFS is not the best choice for solving uninformed state-space search problems.
You can refer to the link below; it works through an example tree and solves it with both approaches.
link

PACMAN: a short path for eating all the dots

I am trying to find a solution to the PACMAN problem of finding a short path (not the shortest, but a good one) that eats all the dots in a big maze. I've seen a lot of people talking about TSP, Dijkstra, BFS, and A*. I don't think this is a TSP, since I don't have to return to where I started and I can repeat nodes if I want. And I don't think Dijkstra, BFS, or A* alone would help, because I'm not looking for the shortest point-to-point path, and even if I were, they wouldn't give an answer in a reasonable time.
Could anyone give me hints on this? What kind of problem is this? Is it a kind of TSP? What kinds of algorithms approach this problem in an efficient way? I'd appreciate any hints on implementation.
I take it you're trying to do the contest where you find the shortest path through the big maze in under 30 seconds?
I actually did this last year for fun (my college class didn't do the contest). After weeks of research, I was able to do an exact solution of the maze in under 30 seconds.
The heuristic I used was actually an exact heuristic. I wrote a bunch of code to find the minimal path length using a much more efficient algorithm based on graph decomposition and dynamic programming, and then fed the results back into A* as the 'heuristic' value.
The key thing to realize is that while the graph is very big (273 nodes), it has a low carving width (5), meaning it can be solved efficiently using a fixed-parameter tractable algorithm.
Hopefully that's enough keywords to get you on the right track.
Update: I wrote a blog post explaining the solution
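For what it's worth, here is a rough C sketch of how such a search state can be represented: position plus a bitmask of uneaten dots, so the search distinguishes "at this cell with 10 dots left" from "at this cell with 9 dots left". The struct, the 64-dot limit, and the farthest-remaining-dot heuristic below are illustrative assumptions, not the exact decomposition-based heuristic described above:

    #include <stdint.h>

    #define MAX_NODES 512   /* assumed upper bound on maze graph size */

    /* Hypothetical A* state for the eat-all-dots problem: position plus a
       bitmask of the dots still uneaten. Two states are distinct unless both
       fields match, so the search may revisit a cell as long as the set of
       remaining dots differs. (A 64-bit mask caps this sketch at 64 dots;
       a real board needs a wider bitset.) */
    typedef struct {
        uint16_t pos;        /* node index in the maze graph */
        uint64_t dots_left;  /* bit i set => dot i not yet eaten */
        uint32_t g;          /* cost so far */
        uint32_t f;          /* g + heuristic */
    } SearchState;

    /* A simple admissible heuristic: maze distance to the farthest remaining
       dot, assuming all-pairs shortest distances were precomputed into dist. */
    uint32_t heuristic(const SearchState *s, const uint16_t dot_pos[],
                       int ndots, const uint32_t dist[][MAX_NODES])
    {
        uint32_t h = 0;
        for (int i = 0; i < ndots; i++)
            if (s->dots_left & (1ULL << i)) {
                uint32_t d = dist[s->pos][dot_pos[i]];
                if (d > h) h = d;
            }
        return h;
    }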

Matrix solving with C (within CUDA)

As part of a larger problem, I need to solve small linear systems (i.e. NxN where N ≈ 10), so using the relevant CUDA libraries doesn't make any sense in terms of speed.
Unfortunately, it's also unclear to me how to go about solving such systems without pulling in the big guns like GSL, Eigen, etc.
Can anyone point me in the direction of a dense matrix solver (Ax=B) in straight C?
For those interested, the basic structure of the generator for this section of code is:
    ndarray = some.generator(N, N)
    for v in range(N):
        B[v] = _F(v) * constant
        for x in range(N):
            A[v, x] = -_F(v) * ndarray[x, v]
Unfortunately I have approximately zero knowledge of higher mathematics, so any advice would be appreciated.
UPDATE: I've been working away at this and have a near-solution that runs but isn't producing correct results. Anyone lurking is welcome to check out what I've got so far on pastebin.
I'm using Crout decomposition with pivoting, which seems to be the most general approach. The idea for this test is that every thread does the same work. Boring, I know, but the plan is that the matrixcount variable gets increased, actual data is put in, and each thread solves the small matrices individually.
Thanks to everyone who's been checking in on this.
POST-ANSWER UPDATE: Finished the matrix solving code for CPU and GPU operation, check out my lazy-writeup here
CUDA won't help here, that's true. Matrices like that are just too small for it.
What you do to solve a system of linear equations is LU decomposition:
http://en.wikipedia.org/wiki/LU_decomposition
http://mathworld.wolfram.com/LUDecomposition.html
Or, even better, a QR decomposition, computed with Householder reflections or the Gram-Schmidt process.
http://en.wikipedia.org/wiki/QR_decomposition#Computing_the_QR_decomposition
Solving the linear system becomes easy afterwards, but I'm afraid there is always some "higher mathematics" (linear algebra) involved. That, and there are many (many!) C libraries out there for solving linear equations. They don't seem like "big guns" to me.
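For reference, here is a minimal sketch of what such a solver can look like in straight C: Gaussian elimination with partial pivoting, which is mathematically the same as an LU factorization and plenty for N around 10. The function name and the singularity threshold are arbitrary choices, and A and b are overwritten in the process:

    #include <math.h>

    /* Sketch: solve A x = b by Gaussian elimination with partial pivoting.
       A is n x n, stored row-major; A and b are overwritten.
       Returns 0 on success, -1 if A looks numerically singular. */
    int solve_dense(int n, double *A, double *b, double *x)
    {
        for (int k = 0; k < n; k++) {
            /* partial pivoting: largest |entry| in column k, rows k..n-1 */
            int p = k;
            for (int i = k + 1; i < n; i++)
                if (fabs(A[i*n + k]) > fabs(A[p*n + k])) p = i;
            if (fabs(A[p*n + k]) < 1e-12) return -1;   /* arbitrary tolerance */
            if (p != k) {
                for (int j = 0; j < n; j++) {          /* swap rows k and p */
                    double t = A[k*n + j]; A[k*n + j] = A[p*n + j]; A[p*n + j] = t;
                }
                double t = b[k]; b[k] = b[p]; b[p] = t;
            }
            for (int i = k + 1; i < n; i++) {          /* eliminate below pivot */
                double m = A[i*n + k] / A[k*n + k];
                for (int j = k; j < n; j++) A[i*n + j] -= m * A[k*n + j];
                b[i] -= m * b[k];
            }
        }
        for (int i = n - 1; i >= 0; i--) {             /* back-substitution */
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= A[i*n + j] * x[j];
            x[i] = s / A[i*n + i];
        }
        return 0;
    }

Since it uses no library calls beyond fabs, each CUDA thread could run a routine like this on its own small matrix, which matches the one-matrix-per-thread plan above.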

A* implemented in C

Where can I find an A* implementation in C?
I was looking around but it seems my google-fu is not strong enough. I've started writing my own implementation, but then I remembered Stack Overflow and I thought I should ask here first. It seems a bit complicated to write a real A* implementation - I was tempted to just write an implementation of Dijkstra's algorithm for a binary grid, since that's all I really need, but I feel like I want to have a C A* implementation in my repertoire.
Your google-fu is indeed weak, young padawan :-)
Try googling for astar c.
The first and second links are actual code implementations (the first under a liberal MIT licence, no idea about the second).
Here you can find the pseudocode: http://en.wikipedia.org/wiki/A*
To find the right code for you, just search for:
astar graph search algorithm C
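If you do end up rolling your own, the core of A* on a binary grid is fairly compact. Below is an illustrative sketch (linear-scan open list, Manhattan heuristic, unit step costs); a serious implementation would replace the linear scan with a binary heap:

    #include <stdlib.h>

    /* Illustrative A* sketch on a binary grid (0 = free, 1 = wall),
       4-connected moves of unit cost, Manhattan-distance heuristic.
       Cell ids are y*W + x. */
    #define W 8
    #define H 8
    #define INF 0x3fffffff

    static int manhattan(int x, int y, int gx, int gy)
    {
        return abs(x - gx) + abs(y - gy);
    }

    /* Fills path[] with cells from start to goal; returns the number of
       cells in the path, or 0 if no path exists. */
    int astar(const int grid[H][W], int sx, int sy, int gx, int gy, int path[W * H])
    {
        int g[W * H], prev[W * H], open[W * H];
        for (int i = 0; i < W * H; i++) { g[i] = INF; prev[i] = -1; open[i] = 0; }

        int start = sy * W + sx, goal = gy * W + gx;
        g[start] = 0;
        open[start] = 1;

        for (;;) {
            /* pick the open node with the smallest f = g + h (linear scan) */
            int cur = -1, best = INF;
            for (int i = 0; i < W * H; i++)
                if (open[i]) {
                    int f = g[i] + manhattan(i % W, i / W, gx, gy);
                    if (f < best) { best = f; cur = i; }
                }
            if (cur < 0) return 0;             /* open list empty: no path */
            if (cur == goal) break;
            open[cur] = 0;

            int x = cur % W, y = cur / W;
            const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
            for (int k = 0; k < 4; k++) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
                if (grid[ny][nx]) continue;    /* wall */
                int nxt = ny * W + nx;
                if (g[cur] + 1 < g[nxt]) {     /* found a cheaper route */
                    g[nxt] = g[cur] + 1;
                    prev[nxt] = cur;
                    open[nxt] = 1;
                }
            }
        }

        /* reconstruct by walking parents back from the goal, then reverse */
        int n = 0;
        for (int cur = goal; cur != -1; cur = prev[cur]) path[n++] = cur;
        for (int i = 0; i < n / 2; i++) {
            int t = path[i]; path[i] = path[n - 1 - i]; path[n - 1 - i] = t;
        }
        return n;
    }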

How does the Levenberg–Marquardt algorithm work in detail but in an understandable way?

I'm a programmer who wants to learn how the Levenberg–Marquardt curve-fitting algorithm works so that I can implement it myself. Is there a good tutorial anywhere that explains how it works in detail, with the reader being a programmer and not a mathemagician?
My goal is to implement this algorithm in OpenCL so that I can have it run hardware-accelerated.
Minimizing a function is like trying to find the lowest point on a surface. Think of yourself walking on a hilly surface, trying to get to the lowest point. You would find the direction that goes downhill and walk until it doesn't go downhill anymore. Then you would choose a new direction that goes downhill and walk in that direction until it doesn't go downhill anymore, and so on. Eventually (hopefully) you would reach a point where no direction goes downhill anymore. You would then be at a (local) minimum.
The LM algorithm, and many other minimization algorithms, use this scheme.
Suppose that the function being minimized is F and we are at the point x(n) in our iteration. We wish to find the next iterate x(n+1) such that F(x(n+1)) < F(x(n)), i.e. the function value is smaller. In order to choose x(n+1) we need two things: a direction from x(n) and a step size (how far to go in that direction). The LM algorithm determines these values as follows.
First, compute a linear approximation to F at the point x(n). It is easy to find out the downhill direction of a linear function, so we use the linear approximating function to determine the downhill direction.
Next, we need to know how far we can go in this chosen direction. If our approximating linear function is a good approximation for F for a large area around x(n), then we can take a fairly large step. If it's a good approximation only very close to x(n), then we can take only a very small step.
This is what LM does: it calculates a linear approximation to F at x(n), which gives the downhill direction, then it figures out how big a step to take based on how well the linear function approximates F near x(n). LM judges how good the approximation is by taking a step in the chosen direction and comparing how much the linear approximation to F decreased with how much the actual function F decreased. If they are close, the approximating function is good and we can take a slightly larger step next time. If they are not close, the approximation is poor and we should back off and take a smaller step.
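In symbols (the standard LM update for least squares; this is textbook material rather than any particular implementation): with residual vector r(x), Jacobian J, and damping parameter lambda, each iteration solves a damped version of the normal equations and then scores the step with a gain ratio:

    % objective: F(x) = \tfrac{1}{2}\lVert r(x)\rVert^2, with J = \partial r/\partial x at x_n
    (J^\top J + \lambda I)\,\delta = -J^\top r
    % gain ratio: actual decrease vs. decrease predicted by the local model
    % L(\delta) = F(x_n) + \delta^\top J^\top r + \tfrac{1}{2}\,\delta^\top J^\top J\,\delta
    \rho = \frac{F(x_n) - F(x_n + \delta)}{L(0) - L(\delta)}

Large lambda gives a short, gradient-descent-like step (cautious); small lambda approaches the Gauss-Newton step (fast near the minimum). If rho is near 1 the model was trustworthy, so accept the step and decrease lambda; if rho is small or negative, reject the step and increase lambda. This is exactly the "how far can I trust the linear approximation" idea described above.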
Try http://en.wikipedia.org/wiki/Levenberg–Marquardt_algorithm
PDF Tutorial from Ananth Ranganathan
JavaNumerics has a pretty readable implementation
The ICS has a C/C++ implementation
The basic ideas of the LM algorithm can be explained in a few pages - but for a production-grade implementation that is fast and robust, many subtle optimizations are necessary. State of the art is still the Minpack implementation by Moré et al., documented in detail by Moré 1978 (http://link.springer.com/content/pdf/10.1007/BFb0067700.pdf) and in the Minpack user guide (http://www.mcs.anl.gov/~more/ANL8074b.pdf). To study the code, my C translation (https://jugit.fz-juelich.de/mlz/lmfit) is probably more accessible than the original Fortran code.
Try Numerical Recipes (Levenberg-Marquardt is in Section 15.5). It's available online, and I find that they explain algorithms in a way that's detailed (they have complete source code, how much more detailed can you get...), yet accessible.
I used these notes from a course at Purdue University to code up a generic Levenberg-Marquardt curve-fitting algorithm in MATLAB that computes numerical derivatives and therefore accepts any function of the form f(x;p) where p is a vector of fitting parameters.
