Is an optimal algorithm a complete algorithm? - artificial-intelligence

I understand that a complete algorithm is one that is able to find a solution whenever one exists, and that an optimal algorithm is one that finds a least-cost solution.
But is an optimal algorithm also a complete algorithm? Can you please briefly explain?
Thanks.

Yes, by definition. Finding the optimal solution entails proving optimality. This can be done by finding all solutions or by proving that no solution can have better cost than the one found already. In either case, at least one solution has to be found.
If there is no solution, neither an optimal nor a complete algorithm would find one, of course.

The notion of completeness refers to the ability of the algorithm to find a solution if one exists and, if not, to report that no solution is possible.
If an algorithm can find a solution when one exists but is not able to report that there is no solution when none exists, then it is not complete.

Yes. In simple terms:
Completeness: if a solution exists, the algorithm is guaranteed to find one.
Optimality: the algorithm is guaranteed to find the best (least-cost) solution.
So if an algorithm is optimal, it guarantees that the best solution is found. That automatically makes it complete, because whenever a solution exists it has already found one (as guaranteed).

Related

Why DFS is not optimal but BFS is optimal

I have had this question in my mind for a long time but never got a reasonable answer to it:
In artificial intelligence courses, when it comes to search, it is usually said that BFS is optimal but DFS is not. However, I can come up with many examples where DFS finds the answer faster. Can anyone explain this? Am I missing something?
Optimal as in "produces the optimal path", not "is the fastest algorithm possible". When searching a state space for a path to a goal, DFS may produce a much longer path than BFS. Note that BFS is only optimal when actions are unweighted; if different actions have different weights, you need something like A*.
This is because by an optimal strategy they mean one whose returned solution maximizes the utility (or minimizes the cost).
In that sense, nothing guarantees that the first solution found by DFS is optimal. Also, BFS is not optimal in the general sense, so your statement as-is is not quite right.
The main point is the guarantee that a certain search strategy will always return the optimal result. Any strategy might happen to be the best in a given case, but often (especially in AI) you don't know which specific case you are in; at most you have some hypothesis about it.
That said, DFS is optimal when the search tree is finite, all action costs are identical, and all solutions have the same length. However limiting this may sound, there is an important class of problems that satisfies these conditions: CSPs (constraint satisfaction problems). Maybe all the examples you thought of fall into this (rather common) category.
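To make the difference concrete, here is a small self-contained sketch in Python (the graph and node names are made up for illustration). On an unweighted graph with a direct edge from the start to the goal, BFS returns the one-action path, while DFS commits to the first branch it tries and returns a longer path:

from collections import deque

# Toy graph for illustration (node names are made up). There is a direct
# edge S -> G, but DFS explores S -> A first and commits to that branch.
graph = {
    'S': ['A', 'G'],
    'A': ['B'],
    'B': ['G'],
    'G': [],
}

def bfs_path(start, goal):
    # BFS expands nodes in order of depth, so the first path that reaches
    # the goal has the fewest edges (optimal when actions are unweighted).
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

def dfs_path(node, goal, visited=None):
    # DFS returns the first path it happens to complete, with no length guarantee.
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for nxt in graph[node]:
        if nxt not in visited:
            rest = dfs_path(nxt, goal, visited)
            if rest:
                return [node] + rest
    return None

print(bfs_path('S', 'G'))  # ['S', 'G']            (1 action, optimal)
print(dfs_path('S', 'G'))  # ['S', 'A', 'B', 'G']  (3 actions)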
You can refer to my answer to this question in which I explain why DFS is not optimal and why BFS is not the best choice for solving uninformed state-space search problems.
You can refer to the link below; it works through an example tree and solves it with both approaches.
link

Cache Oblivious Search

Please forgive this stupid question, but I didn't find any hint by googling it.
If I have an array (contiguous memory), and I search it sequentially for a given pattern (for example, building the list of all even numbers), am I using a cache-oblivious algorithm? Yes, it's quite a stupid algorithm, but I'm trying to understand here :)
Yes, you are using a cache-oblivious algorithm, since the running time is O(N/B) block (disk) transfers. That cost depends on the block size B, but the algorithm itself doesn't depend on any particular value of B. Additionally, this means that you are both cache-oblivious and cache-efficient.
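For illustration, a minimal sketch of the scan described in the question (the function name is made up): nothing in the code refers to a block size, which is exactly what makes it cache-oblivious.

# Minimal sketch of the sequential scan discussed above. Nothing here mentions
# a block size B, yet one pass over a contiguous array costs about N/B block
# transfers: cache-oblivious and, for a scan, also cache-efficient.
def even_numbers(arr):
    evens = []
    for x in arr:              # single sequential pass over contiguous memory
        if x % 2 == 0:
            evens.append(x)
    return evens

print(even_numbers([3, 8, 5, 12, 7, 4]))  # [8, 12, 4]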

PACMAN: a short path for eating all the dots

I am trying to find a solution to the PACMAN problem of finding a short path (not the shortest, but a good one) that eats all the dots in a big maze. I've seen a lot of people talking about TSP, Dijkstra, BFS, and A*. I don't think this is a TSP, since I don't have to return to where I started and I can repeat nodes if I want. And I don't think Dijkstra, BFS, or A* would help, because I'm not looking for the shortest path, and even if I were, they wouldn't give an answer in a reasonable time.
Could anyone give me hints on this? What kind of problem is this? Is it a kind of TSP? What kind of algorithms approach this problem in an efficient way? I'd appreciate any hints on implementation.
I take it you're trying to do the contest where you find the shortest path through the big maze in under 30 seconds?
I actually did this last year for fun (my college class didn't do the contest). After weeks of research, I was able to compute an exact solution for the maze in under 30 seconds.
The heuristic I used was actually an exact heuristic. I wrote a bunch of code to find the minimal path length using a much more efficient algorithm based on graph decomposition and dynamic programming, and then fed the results back into A* as the 'heuristic' value.
The key thing to realize is that while the graph is very big (273 nodes), it has a low carving width (5), meaning that it can be solved efficiently using a fixed parameter tractable algorithm.
Hopefully that's enough keywords to get you on the right track.
Update: I wrote a blog post explaining the solution
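If you just want a starting point rather than the exact approach above, here is a hedged baseline sketch in Python: plain A* over states of the form (position, set of remaining dots) on an unweighted grid, using the maze distance to the farthest remaining dot as an admissible heuristic. The helpers neighbors and maze_dist are hypothetical and assumed to exist (e.g. maze_dist precomputed by running BFS from every dot). On a big maze this baseline is far too slow, which is exactly why the stronger exact heuristic described above matters:

import heapq
from itertools import count

def astar_eat_all(start, dots, neighbors, maze_dist):
    # A* over states (position, frozenset of remaining dots) on an unweighted grid.
    # neighbors(pos) and maze_dist(a, b) are assumed helpers (hypothetical):
    # maze adjacency and precomputed maze distances.
    def h(pos, remaining):
        # Admissible: Pacman must at least walk to the farthest remaining dot.
        return max((maze_dist(pos, d) for d in remaining), default=0)

    tie = count()  # tie-breaker so the heap never has to compare states
    start_state = (start, frozenset(dots) - {start})
    frontier = [(h(*start_state), 0, next(tie), start_state, [start])]
    best = {start_state: 0}
    while frontier:
        f, g, _, (pos, remaining), path = heapq.heappop(frontier)
        if not remaining:
            return path            # every dot has been eaten
        if g > best[(pos, remaining)]:
            continue               # stale queue entry
        for nxt in neighbors(pos):
            nstate = (nxt, remaining - {nxt})
            ng = g + 1
            if ng < best.get(nstate, float('inf')):
                best[nstate] = ng
                heapq.heappush(frontier,
                               (ng + h(*nstate), ng, next(tie), nstate, path + [nxt]))
    return None                    # some dot is unreachable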

Booth's Algorithm on a string

I tried solving this problem on SPOJ using Booth's Algorithm in O(n) time, but it failed, even though it worked for all the test cases I tried.
Then I did it the brute-force way in O(n^2) time, and it worked. I have attached the code for both approaches; can you tell me where I went wrong, or whether Booth's algorithm is even a correct approach for this problem?
Isn't the problem finding the minimum rotation that gives the lexicographically smallest string?
First approach, Booth's Algorithm: http://ideone.com/J5gl5
Second approach, Brute Force: http://ideone.com/ofTeA
Your algorithm gives the wrong answer for string "ABAED", for example.
Your algorithm returns 7 (even though this is longer than the string!).
The correct answer is 0.
(Note this bug may also be present wherever you found a description of the algorithm! If you look at the history/discussion for the Wikipedia article, there are a lot of edits fixing bugs, both claiming to fix bugs in the original paper and to fix bugs in earlier bugfixes...)
It seems to work a lot better if you replace:
if( lst[i] < lst[ans+i+1] )
with
if( lst[j] < lst[ans+i+1] )
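For reference, here is a sketch of Booth's least-rotation algorithm in Python, following the commonly published description (e.g. the Wikipedia article mentioned above) and using the s[j] comparison from the suggested fix; it returns the starting index of the lexicographically smallest rotation in O(n):

def least_rotation(s):
    # Booth's algorithm: index of the lexicographically least rotation, O(n).
    # Note the comparisons use s[j] (as in the fix above), not s[i].
    s += s                     # doubled string avoids modular arithmetic
    f = [-1] * len(s)          # KMP-style failure function
    k = 0                      # start index of the best rotation found so far
    for j in range(1, len(s)):
        sj = s[j]
        i = f[j - k - 1]
        while i != -1 and sj != s[k + i + 1]:
            if sj < s[k + i + 1]:
                k = j - i - 1
            i = f[i]
        if sj != s[k + i + 1]:     # here i == -1, so s[k + i + 1] == s[k]
            if sj < s[k]:
                k = j
            f[j - k] = -1
        else:
            f[j - k] = i + 1
    return k

print(least_rotation("ABAED"))  # 0
print(least_rotation("BCA"))    # 2 ("ABC" is the smallest rotation)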

Building a suffix tree for a string matching algorithm in a large database

I had an internship interview last week, and I was given a question about searching for a particular string in a large database. I was totally clueless about it during the interview; I just replied "multi-level hashing", as that was the only technique I knew of with good time efficiency. After a bit of googling, I think the answer the interviewer expected was a suffix tree. During my search I found many algorithms for building suffix trees, and there are even research papers on how to build them! So is it really possible to implement a suffix tree for string matching, especially within interview time?
It would be great if someone could shed some light on this.
Thanks in advance.
Usually the interviewer doesn't need a precise answer to these kinds of questions; they're more interested in the way you think about the problem and try to solve it.
Of course, mentioning known algorithms that solve the problem would be a plus, but I find it hard to believe that someone would require "suffix tree" as the answer to that question.
That being said, I don't consider the algorithms for building suffix trees trivial to implement.
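As a point of comparison (and plainly not a suffix tree), a naive suffix array, built by sorting all suffixes and queried with binary search, supports the same substring lookups and is roughly the amount of code one could realistically write during an interview. The sketch below is illustrative only, with O(n^2 log n) construction rather than the linear-time constructions in the literature:

def build_suffix_array(text):
    # Starting indices of all suffixes of `text`, sorted lexicographically.
    # Naive construction for illustration; real implementations avoid the
    # repeated slicing.
    return sorted(range(len(text)), key=lambda i: text[i:])

def contains(text, suffix_array, pattern):
    # Binary search for the first suffix >= pattern, then check the prefix.
    lo, hi = 0, len(suffix_array)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[suffix_array[mid]:] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(suffix_array) and text[suffix_array[lo]:].startswith(pattern)

text = "bananas"
sa = build_suffix_array(text)
print(contains(text, sa, "nan"))   # True
print(contains(text, sa, "band"))  # False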
