I am looking to understand the differences between blind search and heuristic search as used in artificial intelligence.
Blind Search - searching without information.
For example: BFS (a blind search method). We generate all successor states (child nodes) of the current state (current node) and check whether a goal state is among them; if not, we generate the successors of one of the child nodes, and so on. Because we have no information to guide the choice, we simply generate everything.
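For concreteness, here is a minimal BFS sketch in Python; the graph, its state names, and the goal label are made up purely for illustration:

```python
from collections import deque

# Minimal BFS sketch on a hypothetical graph given as an adjacency dict.
# The state names and the graph itself are invented for illustration.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["goal"],
    "E": [],
}

def bfs(start, goal):
    """Expand every successor of the current node, level by level,
    with no information about which child looks more promising."""
    frontier = deque([start])
    visited = {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(child)
    return False

print(bfs("A", "goal"))  # True
```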
Heuristic Search - searching with information.
For example: the A* algorithm. We choose the next state based on the path cost so far plus 'heuristic information' from a heuristic function.
Case example: finding the shortest path.
With blind search, we simply try every location (brute force).
With heuristic search, say we have an estimate of the distance from each available location to the goal. We use that to decide which location to expand next.
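As a rough illustration (not the only way to do it), here is a minimal A* sketch in Python; the locations, coordinates, and road costs are invented, and the straight-line distance to the goal plays the role of the heuristic:

```python
import heapq
import math

# Minimal A* sketch. Locations, coordinates, and road costs are made up.
coords = {"start": (0, 0), "X": (1, 2), "Y": (2, 0), "goal": (4, 0)}
roads = {
    "start": {"X": 3, "Y": 2},
    "X": {"goal": 4},
    "Y": {"goal": 2},
    "goal": {},
}

def h(node, goal="goal"):
    """Heuristic: straight-line distance from node to the goal."""
    (x1, y1), (x2, y2) = coords[node], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

def a_star(start, goal):
    """Pick the next location by lowest f = g (cost so far) + h (estimate)."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for nxt, cost in roads[node].items():
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt))
    return None

print(a_star("start", "goal"))  # 4: start -> Y -> goal
```

The key difference from the blind BFS above is the priority queue ordered by f = g + h, which steers expansion towards the goal instead of fanning out in every direction.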
Blind search:
It is brute force in nature because it uses no domain-specific knowledge.
It can be a very lengthy process.
It is also called uninformed or brute-force search.
It can use a large amount of memory.
The search process keeps many nodes that turn out to be of no use to the search.
It does not use any special (heuristic) function to guide the search.
Examples: depth-first search and breadth-first search.
Heuristic search:
It uses domain-specific knowledge to guide the search.
Using a heuristic reduces the amount of searching required.
It is also called informed search.
Less time is wasted exploring unpromising parts of the search space.
It typically uses less memory.
Heuristic functions are used to guide the search.
Examples: hill climbing, best-first search, A*, and AO*.
This is a pretty vague question, but using a heuristic usually means using logic or prior data to make educated guesses during a search. Blind search (I am guessing) performs the same search without such heuristics and uses a brute-force approach.
A blind search is usually uninformed; that is, it doesn't have any specific knowledge about the problem, whereas a heuristic search has information about the problem and therefore uses that knowledge in its decision making.
I am studying informed search algorithms, and for New Bidirectional A* (NBA*) Search, I know that the space complexity is O(b^d), where d is the depth of the shallowest goal node and b is the branching factor. I have tried to find out what its time complexity is, but I haven't been able to find any exact information about it in online resources. Is the exact time complexity of NBA* Search unknown, and how does it differ from the original Bidirectional A*? Any insights are appreciated.
If you have a specific model of your problem (e.g. a graph that grows uniformly in both directions with unit edge costs and a number of states that grows exponentially), then most bidirectional search algorithms require O(b^(d/2)) node expansions and O(b^(d/2)) time. But this simple model doesn't actually model most real-world problems.
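As a rough back-of-the-envelope comparison under that simple model (the numbers b = 10 and d = 8 are assumed, not taken from any benchmark):

```python
# Rough worked example under the simple uniform model above (numbers are assumed).
b, d = 10, 8
unidirectional = b ** d            # ~100,000,000 expansions to depth d
bidirectional = 2 * b ** (d // 2)  # ~20,000 expansions: two frontiers of depth d/2
print(unidirectional, bidirectional)
```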
Given this, I would not recommend putting significant effort into studying New Bidirectional A*.
The state of the art in bidirectional search has changed massively in the last few years. The current algorithm with the best theoretical guarantees is NBS - Near-Optimal Bidirectional Search. The algorithm finds optimal paths and is near-optimal in node expansions. That is, NBS is guaranteed to perform no more than 2x the necessary expansions of the best possible algorithm (given reasonable theoretical assumptions, such as using the same heuristic). All other algorithms (including A*) can do arbitrarily worse than NBS.
Other variants of NBS, such as DVCBS, have been proposed; they follow the same basic structure and do not have the same guarantees, but they perform well in practice.
In my Intro to AI class, we have been studying:
Uninformed Search (e.g. Depth-First Search)
Informed Search (e.g. A* Search)
Constraint Satisfaction Problems (e.g. Hill Climbing)
Adversarial Search (e.g. Minimax)
In general terms, why would we use, for example, Depth-First Search instead of more complex algorithms such as A* Search? In other words, why choose simple and limited algorithms when we can choose complex ones?
The main reason is efficiency. Some algorithms take much more time/memory than others.
Some algorithms won't work well in certain situations. For example, if there are local maxima, Hill Climbing won't work very well (see the sketch below).
If you expect most paths to lead to the destination, you can use Depth-First Search, which could be much faster than A*.
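Here is a minimal hill-climbing sketch illustrating the local-maximum problem mentioned above; the 1-D objective function and the step size are made up:

```python
# Minimal hill-climbing sketch on a made-up 1-D objective with two peaks.
# Starting near the smaller peak, the search stops there and never reaches
# the global maximum at x = 10 -- the local-maximum problem mentioned above.
def f(x):
    return -(x - 2) ** 2 + 4 if x < 6 else -(x - 10) ** 2 + 9

def hill_climb(x, step=1):
    while True:
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):       # no neighbour improves: a (local) maximum
            return x
        x = best

print(hill_climb(0))   # 2  (stuck on the local maximum)
print(hill_climb(8))   # 10 (global maximum, only because we started near it)
```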
In Artificial Intelligence - A Modern Approach 3rd Edition, I came across an interesting quote stating:
"As yet there is no good understanding of how to combine the two kinds of algorithms [Goal directed reasoning / planning and heuristic search] into a robust and efficient system" (Russel pg 189)
Why is this so? Why is it hard to combine goal oriented planning with a heuristic search? Wouldn't reinforcement learning solve this?
The term “goal-directed reasoning” was used in the 1980s for a backtracking search technique. Sometimes it was called backward reasoning or top-down search; these terms all mean the same thing. It describes how the algorithm traverses the state space, or, more specifically, the order in which the states in the graph are visited. In newer literature this aspect of a planner is no longer explained in detail, because the graph-search part is not the hard part: it simply means putting the nodes on a stack and traversing them.
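To make that idea concrete, here is a minimal sketch of goal-directed (backward) reasoning in Python; the rules and goal names are made up, and the recursion's call stack stands in for the explicit stack:

```python
# Minimal sketch of goal-directed (backward) reasoning over made-up rules.
# Each rule is (head, [subgoals]); a rule with no subgoals is a known fact.
# The recursion's call stack plays the role of the explicit stack mentioned above.
rules = [
    ("at_airport", ["have_ticket", "packed"]),
    ("have_ticket", ["bought_ticket"]),
    ("bought_ticket", []),
    ("packed", []),
]

def prove(goal):
    """Work backwards from the goal towards known facts (top-down search)."""
    return any(
        head == goal and all(prove(sub) for sub in body)
        for head, body in rules
    )

print(prove("at_airport"))  # True: every subgoal bottoms out in a fact
```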
In contrast, the term “heuristic search” means replacing a brute-force solver with a knowledge-based approach. Heuristic search amounts to not traversing the whole graph, but finding a domain-specific strategy that leaves out most of it. And indeed, it is hard to combine backtracking with heuristics; one approach is called grounding. If a grounded problem is available, it is possible to use a backtracking solver on a knowledge-based problem. This is the strategy used in modern PDDL planners, which first describe the domain in a symbolic PDDL notation (which is knowledge based) and then use a fast solver to search the state space.
If we have a problem like the one in this image.
We are in Arad, for example, and we want to go to Fagaras using an informed search algorithm (like A*), and the only information available is what is shown in the picture. Is it possible to come up with a heuristic that is not trivial, but is consistent and admissible?
I have thought of some heuristics for a big (higher-dimensional) tic-tac-toe game. How do I check which of them are actually consistent?
What is meant by consistency anyways?
Heuristics produce some sort of cost estimate for a given state. Consistency in this context means the estimate for a state is less than or equal to the cost of moving to the next state plus the estimate for that new state, i.e. h(n) <= cost(n, n') + h(n') for every successor n'. If this weren't true, it would imply - if the heuristic were accurate - that transitioning from one state to the next could incur negative cost, which is typically impossible or incorrect.
This is intuitive when it comes to pathfinding: you expect every step along the path to take some time or cost, so the estimate can drop by at most the cost of the step taken. It's probably a bit more complex for tic-tac-toe, since you have to somewhat arbitrarily decide what constitutes a 'cost' in your system. If your heuristic can drop by more than the cost of a single move - e.g. because you encode good moves with positive numbers and bad moves with negative numbers - then it cannot be consistent.
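For concreteness, here is a minimal sketch of that check in Python on a made-up weighted graph with assumed heuristic values; consistency requires h(n) <= cost(n, n') + h(n') on every edge:

```python
# Minimal consistency check on a made-up weighted graph.
# Consistency: h(n) <= cost(n, n') + h(n') for every edge (n, n').
edges = {  # node -> {successor: step cost}
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "goal": 5},
    "C": {"goal": 3},
    "goal": {},
}
h = {"A": 6, "B": 5, "C": 3, "goal": 0}  # assumed heuristic values

def is_consistent(edges, h):
    return all(
        h[n] <= cost + h[m]
        for n, succs in edges.items()
        for m, cost in succs.items()
    )

print(is_consistent(edges, h))  # True for these assumed values
```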
However, lack of a consistent heuristic is not always a problem. You may not be guaranteed to reach an optimal solution without one, but it may still speed up the search compared to a brute-force state search.
EDITED: This answer confused admissibility and consistency. I have corrected it to refer to admissibility, but the original question was about consistency, and this answer does not fully answer the question.
You could do it analytically, by distinguishing all the different cases and thereby proving that your heuristic is indeed admissible.
For informed search, a heuristic is admissible for a given search problem (say, the search for the best move in a game) if and only if it never overestimates the 'distance' (cost) to a suitable goal state.
EXAMPLE: Searching for the shortest route to a target city via a network of highways between cities. Here, one could use the Euclidean distance as a heuristic: the length of a straight line to the goal is always shorter than or equal to the best possible route.
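As a small sketch of that example (the coordinates and the road distance are assumed values, not real data):

```python
import math

# Tiny sketch with made-up coordinates (in km) and a made-up road distance.
# A straight line can never be longer than any actual road, so the
# Euclidean estimate never overestimates the true remaining cost.
city = {"A": (0.0, 0.0), "Goal": (30.0, 40.0)}
road_distance_A_to_Goal = 65.0  # the actual road takes a detour (assumed value)

def euclidean(a, b):
    (x1, y1), (x2, y2) = city[a], city[b]
    return math.hypot(x2 - x1, y2 - y1)

h = euclidean("A", "Goal")           # 50.0
print(h <= road_distance_A_to_Goal)  # True: the heuristic does not overestimate here
```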
Admissibility is required by algorithms like A*, which are then guaranteed to be optimal (i.e. they will find the best 'route' to a goal state if one exists).
I would recommend looking the topic up in an AI textbook.