DIT410/TIN174, Artificial Intelligence
Peter Ljunglöf
24 March, 2017
Often we are not given an algorithm to solve a problem, but only
a specification of a solution — we have to search for it.
A typical problem is when the agent is in one state, it has a set of
deterministic actions it can carry out, and wants to get to a goal state.
Many AI problems can be abstracted into the problem of finding
a path in a directed graph.
Often there is more than one way to represent a problem as a graph.
Observable? | fully |
Deterministic? | deterministic |
Episodic? | episodic |
Static? | static |
Discrete? | discrete |
No. of agents | single |
Most complex problems (partially observable, stochastic, sequential)
usually have components using state-space search.
A graph consists of a set \(N\) of nodes and a set \(A\) of ordered pairs of nodes,
called arcs.
Node \(n_2\) is a neighbor of \(n_1\)
if there is an arc from \(n_1\) to \(n_2\).
That is, if \( (n_1, n_2) \in A \).
A path is a sequence of nodes \( (n_0, n_1, \ldots, n_k) \) such that \( (n_{i-1}, n_i) \in A \).
The length of path \( (n_0, n_1, \ldots, n_k) \) is \(k\).
A solution is a path from a start node to a goal node,
given a set of start nodes and goal nodes.
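As a minimal illustration (not from the slides), a directed graph can be represented as a mapping from each node to its list of neighbors; the graph and node names below are made up:

```python
# A directed graph as a dict from each node to its neighbors:
# the arcs are (a,b), (a,c), (b,d) and (c,d).
graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}

def is_path(nodes):
    """Check that every consecutive pair of nodes is an arc of the graph."""
    return all(n2 in graph[n1] for n1, n2 in zip(nodes, nodes[1:]))

print(is_path(['a', 'b', 'd']))   # True: a path of length 2
print(is_path(['a', 'd']))        # False: (a, d) is not an arc
```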
(Russell & Norvig sometimes call the graph nodes states).
We want to drive from Arad to Bucharest in Romania
Grid game: Rob needs to collect coins
\( C_{1}, C_{2}, C_{3}, C_{4} \),
without running out of fuel, and end up at location (1,1):
What is a good representation of the search states and the goal?
States | [room A dirty?, room B dirty?, robot location] |
Initial state | any state |
Actions | left, right, suck, do-nothing |
Goal test | [false, false, –] |
Path cost | 1 per action (0 for do-nothing) |
States | a 3 x 3 matrix of integers |
Initial state | any state |
Actions | move the blank space: left, right, up, down |
Goal test | equal to the goal state |
Path cost | 1 per action |
States | any arrangement of 0 to 8 queens on the board |
Initial state | no queens on the board |
Actions | add a queen to any empty square |
Goal test | 8 queens on the board, none attacked |
Path cost | 1 per move |
This gives us \( 64 \times 63 \times\cdots\times 57 \approx 1.8\times10^{14} \) possible paths to explore!
States | one queen per column in leftmost columns, none attacked |
Initial state | no queens on the board |
Actions | add a queen to a square in the leftmost empty column, make sure that no queen is attacked |
Goal test | 8 queens on the board, none attacked |
Path cost | 1 per move |
Using this formulation, we have only 2,057 paths!
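The 2,057 figure can be checked with a small backtracking count; this is an illustrative sketch, assuming a state is a tuple giving the queen's row for each of the filled leftmost columns:

```python
def count_states(cols=()):
    """Count reachable states in the incremental 8-queens formulation:
    queens are placed left to right and may never attack each other."""
    n = 1                       # count the current (partial) state itself
    col = len(cols)             # the leftmost empty column
    if col < 8:
        for row in range(8):
            # A new queen is attacked on the same row or on a diagonal:
            if all(r != row and abs(r - row) != col - c
                   for c, r in enumerate(cols)):
                n += count_states(cols + (row,))
    return n

print(count_states())           # 2057 (vs ~1.8e14 in the naive formulation)
```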
Donald Knuth conjectured that all positive integers can be obtained by starting with
the number 4 and applying some combination of the factorial, square root, and floor.
\[ \left\lfloor \sqrt{\sqrt{\sqrt{\sqrt{\sqrt{(4!)!}}}}}\right\rfloor = 5 \]
States | positive numbers \( (1, 2, 2.5, 3, \sqrt{2}, 1.23\cdot 10^{456}, \sqrt{\sqrt{2}}, \ldots) \) |
Initial state | 4 |
Actions | apply factorial, square root, or floor operation |
Goal test | any positive integer (e.g., 5) |
Path cost | 1 per move |
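Knuth's example above can be checked directly; a sketch using Python's arbitrary-precision integers (flooring at each step via `isqrt` is safe, since \(\lfloor\sqrt{\lfloor x\rfloor}\rfloor = \lfloor\sqrt{x}\rfloor\) for all \(x\geq 0\)):

```python
from math import factorial, isqrt

# Verify: the floor of five nested square roots of (4!)! equals 5.
n = factorial(factorial(4))     # (4!)! = 24!
for _ in range(5):
    n = isqrt(n)                # one nested (floored) square root
print(n)                        # 5
```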
States | real-valued coordinates of the robot joint angles and of the parts of the object to be assembled |
Actions | continuous motions of robot joints |
Goal test | complete assembly of the object |
Path cost | time to execute |
A generic search algorithm:
Given a graph, start nodes, and a goal description, incrementally
explore paths from the start nodes.
Maintain a frontier of nodes that are to be explored.
As search proceeds, the frontier expands into the unexplored nodes
until a goal node is encountered.
The way in which the frontier is expanded defines the search strategy.
The nodes used while searching are not the same as the graph nodes:
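The generic algorithm can be sketched as follows (an illustration, not official course code); the graph is assumed to be given as a `neighbors` function, and the `select` parameter, which removes and returns one path from the frontier, defines the search strategy:

```python
def generic_search(neighbors, start, is_goal, select):
    """Incrementally explore paths from the start node.
    `select` removes and returns the next path from the frontier."""
    frontier = [(start,)]              # the frontier holds paths, not nodes
    while frontier:
        path = select(frontier)        # the search strategy
        node = path[-1]                # the end node of the path
        if is_goal(node):
            return path
        frontier.extend(path + (n,) for n in neighbors(node))
    return None                        # frontier exhausted: no solution

# select = lambda f: f.pop()   gives depth-first search (stack)
# select = lambda f: f.pop(0)  gives breadth-first search (queue)
```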
Which shaded goal will a depth-first search find first?
Which shaded goal will a breadth-first search find first?
Depth-first search treats the frontier as a stack.
It always selects one of the last elements added to the frontier.
If the list of paths on the frontier is \( [p_{1},p_{2},p_{3},\ldots] \), then:
Does DFS guarantee to find the path with fewest arcs?
What happens on infinite graphs or on graphs with cycles if there is a solution?
What is the time complexity as a function of the path length?
What is the space complexity as a function of the path length?
How does the goal affect the search?
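A stack-based sketch of DFS (illustrative only, with a made-up example graph in the test):

```python
def dfs(neighbors, start, is_goal):
    """Depth-first search: the frontier is a stack (LIFO),
    so one of the last paths added is always expanded first."""
    frontier = [(start,)]
    while frontier:
        path = frontier.pop()          # take the LAST path added
        node = path[-1]
        if is_goal(node):
            return path
        frontier.extend(path + (n,) for n in neighbors(node))
    return None   # note: may loop forever on cyclic or infinite graphs
```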
Breadth-first search treats the frontier as a queue.
It always selects one of the earliest elements added to the frontier.
If the list of paths on the frontier is \( [p_{1},p_{2},\ldots,p_{r}] \), then:
Does BFS guarantee to find the path with fewest arcs?
What happens on infinite graphs or on graphs with cycles if there is a solution?
What is the time complexity as a function of the path length?
What is the space complexity as a function of the path length?
How does the goal affect the search?
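A queue-based sketch of BFS (illustrative only), using `collections.deque` for an efficient FIFO frontier:

```python
from collections import deque

def bfs(neighbors, start, is_goal):
    """Breadth-first search: the frontier is a FIFO queue,
    so the earliest-added path is always expanded first."""
    frontier = deque([(start,)])
    while frontier:
        path = frontier.popleft()      # take the FIRST path added
        node = path[-1]
        if is_goal(node):
            return path                # guaranteed to have fewest arcs
        frontier.extend(path + (n,) for n in neighbors(node))
    return None
```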
Previous methods don’t use the goal to select a path to explore.
Main idea: don’t ignore the goal when selecting paths.
Often there is extra knowledge that can guide the search: heuristics.
\( h(n) \) is an estimate of the cost of the shortest path from node \(n\)
to a goal node.
\(h(n)\) needs to be efficient to compute.
\(h(n)\) is an underestimate if there is no path from \(n\) to a goal
with cost less than \(h(n)\).
An admissible heuristic is a nonnegative heuristic function that is an underestimate of the actual cost of a path to a goal.
Here are some example heuristic functions:
If the nodes are points on a Euclidean plane and the cost is the distance,
\(h(n)\) can be the straight-line distance (SLD) from \(n\) to the closest goal.
If the nodes are locations and cost is time, we can use the distance to
a goal divided by the maximum speed, \(h(n)=d(n)/v_{\max}\).
If the goal is to collect all of the coins and not run out of fuel, we can
use an estimate of how many steps it will take to collect the coins
and return to the goal position, without caring about the fuel consumption.
A heuristic function can be found by solving a simpler (less constrained) version of the problem.
Main idea: select the path whose end is closest to a goal
according to the heuristic function.
Best-first search selects a path on the frontier with minimal \(h\)-value.
It treats the frontier as a priority queue ordered by \(h\).
This is not the shortest path!
Best-first search might fall into an infinite loop!
Does best-first search guarantee to find the path with fewest arcs?
What happens on infinite graphs or on graphs with cycles if there is a solution?
What is the time complexity as a function of the path length?
What is the space complexity as a function of the path length?
How does the goal affect the search?
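A priority-queue sketch of greedy best-first search (illustrative only; the example graph and heuristic in the test are made up):

```python
import heapq

def best_first(neighbors, start, is_goal, h):
    """Greedy best-first search: the frontier is a priority queue,
    always expanding the path whose end node has the smallest h-value."""
    frontier = [(h(start), (start,))]
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return path                # not necessarily the shortest path!
        for n in neighbors(node):
            heapq.heappush(frontier, (h(n), path + (n,)))
    return None
```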
A* search uses both path cost and heuristic values.
\(cost(p)\) is the cost of path \(p\).
\(h(p)\) estimates the cost from the end node of \(p\) to a goal.
\(f(p) = cost(p)+h(p)\),
estimates the total path cost
of going from the start node, via path \(p\) to a goal:
\[ \underbrace{ \underbrace{start\xrightarrow{\textrm{path}~p}~}_{cost(p)} n \underbrace{~\xrightarrow{\textrm{estimate}}~goal}_{h(p)} }_{f(p)} \]
A* is a mix of lowest-cost-first and best-first search.
It treats the frontier as a priority queue ordered by \(f(p)\).
It always selects the node on the frontier with
the lowest estimated distance from the start
to a goal node constrained to go via that node.
Does A* search guarantee to find the path with fewest arcs?
What happens on infinite graphs or on graphs with cycles if there is a solution?
What is the time complexity as a function of the path length?
What is the space complexity as a function of the path length?
How does the goal affect the search?
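An illustrative sketch of A* (not official course code; the example data in the test is made up). Each neighbor comes with its arc cost, and the priority queue is ordered by \(f(p)=cost(p)+h(p)\):

```python
import heapq

def astar(neighbors, start, is_goal, h):
    """A* search: expand the frontier path with the lowest
    f(p) = cost(p) + h(end of p). neighbors(n) yields (node, arc_cost)."""
    frontier = [(h(start), 0, (start,))]           # (f, cost, path)
    while frontier:
        f, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return cost, path
        for n, c in neighbors(node):
            heapq.heappush(frontier,
                           (cost + c + h(n), cost + c, path + (n,)))
    return None
```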
A* guarantees that this is the shortest path!
A* will always find a solution if there is one, because:
The frontier always contains the initial part of a path to a goal,
before that goal is selected.
A* halts, because the costs of the paths on the frontier keep increasing,
and will eventually exceed any finite number.
If there is a solution, A* always finds an optimal one first, provided that:
the branching factor is finite,
arc costs are bounded above zero
(i.e., there is some \(\epsilon>0\)
such that all
of the arc costs are greater than \(\epsilon\)), and
\(h(n)\) is nonnegative and an underestimate of
the cost of the shortest path from \(n\) to a goal node.
The first path that A* finds to a goal is an optimal path, because:
The \(f\)-value of any node on an optimal solution path
is less than or equal to the \(f\)-value of an optimal solution,
because \(h\) is an underestimate of the actual cost.
Thus, the \(f\)-value of a node on an optimal solution path
is less than the \(f\)-value of any non-optimal solution.
Since an element with minimum \(f\)-value is chosen at each step,
a non-optimal solution can never be chosen while
a node that leads to an optimal solution exists on the frontier.
So, before A* can select a non-optimal solution, it will have to pick
all of the nodes on an optimal path, including the optimal solution itself.
A* gradually adds “\(f\)-contours” of nodes (cf. BFS adds layers).
Contour \(i\) has all nodes with \(f=f_{i}\), where \(f_{i}<f_{i+1}\).
If (admissible) \(h_{2}(n)\geq h_{1}(n)\) for all \(n\),
then \(h_{2}\) dominates \(h_{1}\) and is better for search.
Typical search costs (for 8-puzzle):
depth = 14 | DFS ≈ 3,000,000 nodes | A*(\(h_1\)) = 539 nodes | A*(\(h_2\)) = 113 nodes |
depth = 24 | DFS ≈ 54,000,000,000 nodes | A*(\(h_1\)) = 39,135 nodes | A*(\(h_2\)) = 1,641 nodes |
Given any admissible heuristics \(h_{a}\), \(h_{b}\),
the maximum heuristic \(h(n)\)
is also admissible and dominates both:
\[ h(n) = \max(h_{a}(n),h_{b}(n)) \]
Admissible heuristics can be derived from the exact solution cost of
a relaxed problem:
If the rules of the 8-puzzle are relaxed so that a tile can move anywhere,
then \(h_{1}(n)\) (the number of misplaced tiles) gives the length of the shortest solution.
If the rules are relaxed so that a tile can move to any adjacent square,
then \(h_{2}(n)\) (the total Manhattan distance) gives the length of the shortest solution.
Key point: the optimal solution cost of a relaxed problem is never greater than the optimal solution cost of the real problem.
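The two relaxed-problem heuristics for the 8-puzzle, \(h_1\) (misplaced tiles) and \(h_2\) (total Manhattan distance), can be sketched like this, assuming a state is a 9-tuple in row-major order with 0 for the blank:

```python
def h1(state, goal):
    """Misplaced-tiles heuristic: the number of tiles not in their
    goal square (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Manhattan-distance heuristic: the sum, over all tiles, of the
    grid distance from the tile's position to its goal position."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            r, c = divmod(i, 3)        # current row and column
            gr, gc = pos[tile]         # goal row and column
            total += abs(r - gr) + abs(c - gc)
    return total
```

Note that \(h_2(n)\geq h_1(n)\) for every state, since each misplaced tile contributes at least 1 to the Manhattan distance; this is the dominance relation mentioned above.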
Graph search keeps track of visited nodes, so we don’t visit the same node twice.
Suppose that the first time we visit a node is not via an optimal path
\(\Rightarrow\) then graph search will return a suboptimal path.
Under which circumstances can we guarantee that A* graph search is optimal?
Suppose path \(p\) to \(n\) was selected, but there is a shorter path to \(n\).
Suppose this shorter path has an initial part \(p'\) on the frontier, ending at node \(n'\).
\(p\) was selected before \(p'\), which means that: \(cost(p)+h(n)\leq cost(p')+h(n')\).
Let \(cost(n',n)\) be the actual cost of a path from \(n'\) to \(n\).
The path to \(n\) via \(p'\) is shorter than \(p\), i.e.:
\(cost(p')+cost(n',n)<cost(p)\).
Combining the two: \(cost(n',n)<cost(p)-cost(p')\leq h(n')-h(n)\).
So, the problem won't occur if \(|h(n')-h(n)|\leq cost(n',n)\).
A heuristic function \(h\) is consistent (or monotone) if
\( |h(m)-h(n)| \leq cost(m,n) \)
for every arc \((m,n)\).
(This is a form of triangle inequality)
If \(h\) is consistent, then A* graph search will always find
the shortest path to a goal.
This is a strengthening of admissibility.
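On a finite graph, this condition can be tested directly (an illustrative sketch; `arcs` is assumed to map each arc \((m,n)\) to its cost):

```python
def is_consistent(h, arcs):
    """Check |h(m) - h(n)| <= cost(m, n) for every arc (m, n)."""
    return all(abs(h[m] - h[n]) <= c for (m, n), c in arcs.items())

arcs = {('a', 'b'): 2, ('b', 'g'): 3}
print(is_consistent({'a': 4, 'b': 3, 'g': 0}, arcs))   # True
print(is_consistent({'a': 6, 'b': 3, 'g': 0}, arcs))   # False: |6-3| > 2
```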
A* tree search is optimal if \(h(n)\) is admissible.
A* graph search is optimal if \(h(n)\) is consistent.
| Search strategy | Frontier selection | Halts if solution? | Halts if no solution? | Space usage |
|---|---|---|---|---|
| Depth first | Last node added | No | No | Linear |
| Breadth first | First node added | Yes | No | Exp |
| Best first | Global min \(h(p)\) | No | No | Exp |
| Lowest cost first | Minimal \(cost(p)\) | Yes | No | Exp |
| A* | Minimal \(f(p)\) | Yes | No | Exp |
Here is an example demo of several different search algorithms, including A*.
Furthermore, you can play with different heuristics:
http://qiao.github.io/PathFinding.js/visual/
Note that this demo is tailor-made for planar grids,
which is a special case of all possible search graphs.