Uninformed Search Strategies


In this post, we will explore different search strategies for when there is no information about how close a state is to the goal.

Criteria used to choose between search algorithms:

  • Completeness: The algorithm will eventually find the solution if it exists
  • Cost optimality: Whether it finds the solution with the lowest cost
  • Time complexity: Time needed to find the solution
  • Space complexity: Memory needed
| Criterion | Breadth-First | Uniform-Cost | Depth-First | Depth-Limited | Iterative Deepening | Bidirectional |
| --- | --- | --- | --- | --- | --- | --- |
| Completeness | ✅ | ✅ | ⚠️ | ⚠️ | ⚠️ | ⚠️ |
| Cost-Optimal | ⚠️ | ✅ | ❌ | ❌ | ⚠️ | ⚠️ |
| Time | $O(b^d)$ | $O(b^{1+(C^*/\varepsilon)})$ | $O(b^m)$ | $O(b^l)$ | $O(b^d)$ | $O(b^{d/2})$ |
| Space | $O(b^d)$ | $O(b^{1+(C^*/\varepsilon)})$ | $O(bm)$ | $O(bl)$ | $O(bd)$ | $O(b^{d/2})$ |
Summary of the algorithms in this post

Breadth-First Search

This strategy is based on expanding nodes level by level: first the root node, then the successors of the root node, then the successors of those successors, and so on.

Additional efficiency can be obtained with a first-in-first-out queue.1
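To make the idea concrete, here is a minimal breadth-first search sketch in Python. The graph representation (a `successors` function returning the neighbors of a state) and all names are assumptions made for this illustration only:

```python
from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Minimal BFS sketch; `successors(state)` yields the neighboring states."""
    if is_goal(start):
        return [start]
    frontier = deque([start])        # FIFO queue: shallowest node is expanded first
    parents = {start: None}          # doubles as the table of reached states
    while frontier:
        state = frontier.popleft()
        for child in successors(state):
            if child not in parents:             # skip already-reached states
                parents[child] = state
                if is_goal(child):               # goal test at generation time
                    path = [child]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None                                  # no solution exists

# Toy usage: shortest path (in number of actions) from 'A' to 'F'
graph = {'A': 'BC', 'B': 'AD', 'C': 'AE', 'D': 'BF', 'E': 'C', 'F': 'D'}
print(breadth_first_search('A', lambda s: s == 'F', lambda s: graph[s]))
```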

Total number of nodes generated:

$$N = 1 + b + b^2 + b^3 + \dots + b^d \sim O(b^d)$$

  • $b$: Branching factor
  • $d$: Depth of the shallowest solution

All generated nodes must be kept in memory, so both time and space complexity are $O(b^d)$.

Properties:

  • Complete
  • Cost-optimal: The first solution found has the least cost (assuming all actions have the same cost)
  • Exponential time and space complexity

Uniform-Cost Search (Dijkstra’s Algorithm)

In uniform-cost search, the node with the lowest path cost on the frontier is expanded first.
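As a sketch, uniform-cost search can be implemented with a binary-heap priority queue ordered by path cost. The weighted-graph representation and all names below are assumptions made for this illustration:

```python
import heapq

def uniform_cost_search(start, is_goal, successors):
    """Minimal sketch; `successors(state)` yields (child, step_cost) pairs."""
    frontier = [(0, start)]                      # min-heap ordered by path cost g(n)
    best_cost = {start: 0}
    while frontier:
        cost, state = heapq.heappop(frontier)
        if cost > best_cost.get(state, float('inf')):
            continue                             # stale entry; a cheaper path was found
        if is_goal(state):                       # goal test at expansion time
            return cost
        for child, step in successors(state):
            new_cost = cost + step
            if new_cost < best_cost.get(child, float('inf')):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, child))
    return None

# Toy usage: cheapest path cost from 'A' to 'D' (1 + 1 + 2 = 4)
graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1), ('D', 10)],
         'C': [('D', 2)], 'D': []}
print(uniform_cost_search('A', lambda s: s == 'D', lambda s: graph[s]))
```

Note that in this sketch the goal test happens when a node is popped for expansion, not when it is generated; otherwise a more expensive path to the goal could be returned first.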

Time complexity:

$$t \sim O(b^{1+(C^*/\varepsilon)})$$

  • $C^*$: Cost of the optimal solution
  • $\varepsilon > 0$: Lowest step cost
  • Can be much greater than the $b^d$ of breadth-first search
    • When all action costs are equal, the time complexity of uniform-cost search is $b^{d+1}$

Properties:

  • Complete
  • Cost-optimal

Depth-First Search

The deepest node in the frontier is expanded first. It's usually implemented as a tree-like search, without keeping a list of reached nodes.

The search proceeds first to the deepest level of the search tree with no successors, then it goes back to the next deepest unexpanded node.
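A minimal tree-like depth-first sketch is shown below; it keeps only the current path in memory and checks that path for cycles. The graph representation and names are assumptions made for the example:

```python
def depth_first_search(state, is_goal, successors, path=None):
    """Tree-like DFS sketch: recurse into the deepest unexpanded node first.
    Only the current path is stored, not a full table of reached states."""
    path = (path or []) + [state]
    if is_goal(state):
        return path
    for child in successors(state):
        if child in path:            # cycle check along the current path only
            continue
        result = depth_first_search(child, is_goal, successors, path)
        if result is not None:
            return result
    return None

# Toy usage: finds *a* path from 'A' to 'F', not necessarily the shortest one
graph = {'A': 'BC', 'B': 'AD', 'C': 'AE', 'D': 'BF', 'E': 'C', 'F': 'D'}
print(depth_first_search('A', lambda s: s == 'F', lambda s: graph[s]))
```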

The time and memory complexity are, respectively:

$$t \propto \text{\# of states} \quad ; \quad S \sim O(bm)$$

  • $b$: Branching factor
  • $m$: Maximum depth

Properties:

  • Not cost-optimal: Returns the first solution found, not the cheapest
  • Generally incomplete
    • Efficient and complete for finite state spaces.
    • Also complete for acyclic state spaces, but it may expand the same state multiple times via different paths
    • In cyclic state spaces, it can get stuck in an infinite loop
      • Some implementations check nodes for cycles
    • In infinite state spaces, it's not systematic: Can't explore the entire space and can get stuck going down an infinite path
  • Very memory efficient
    • There is a variant that uses even less memory (backtracking search): Only one successor is generated at a time rather than all successors, with $S \sim O(m)$

Depth-Limited Search

In depth-limited search, a depth limit $l$ is supplied and expansion stops at depth $l$. This solves the infinite-path problem of depth-first search.
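A recursive sketch of depth-limited search is given below; a distinct cutoff value lets the caller tell a depth cutoff apart from a genuine failure. All names are assumptions made for the example:

```python
CUTOFF = object()   # sentinel: the depth limit was hit without finding a goal

def depth_limited_search(state, is_goal, successors, limit):
    """Depth-first search that treats nodes at depth `limit` as having no successors."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for child in successors(state):
        result = depth_limited_search(child, is_goal, successors, limit - 1)
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return CUTOFF if cutoff_occurred else None   # None means: no solution below this node

# Toy usage: a limit of 2 is too shallow, a limit of 3 reaches 'F'
graph = {'A': 'BC', 'B': 'D', 'C': 'E', 'D': 'F', 'E': '', 'F': ''}
print(depth_limited_search('A', lambda s: s == 'F', lambda s: graph[s], 3))
```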

Time and space complexity:

$$t \sim O(b^l) \quad ; \quad S \sim O(bl)$$

Properties:

  • An incorrect choice of $l$ can make the search incomplete
  • The algorithm can be improved by checking for cycles among a node's recent ancestors, leaving longer cycles to be cut off by the depth limit
  • $l$ can be chosen based on knowledge of the problem. The minimum depth limit that allows full exploration of the state space is known as the diameter of the state-space graph

Iterative Deepening Search

With this search we don't have to pick a good value for $l$. The algorithm simply tries all values $l = 0, 1, 2, \dots$ until it finds a solution or reports failure (no solution exists).
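A sketch of this idea, built on top of the `depth_limited_search` and `CUTOFF` names from the depth-limited sketch above (both are assumptions of these examples, not library code):

```python
from itertools import count

def iterative_deepening_search(start, is_goal, successors):
    """Run depth-limited search with limits l = 0, 1, 2, ... until either a
    solution is found or a limit finishes without hitting the cutoff."""
    for limit in count():
        result = depth_limited_search(start, is_goal, successors, limit)
        if result is not CUTOFF:     # either a solution or a proven failure (None)
            return result

# Toy usage, reusing the graph from the depth-limited example
graph = {'A': 'BC', 'B': 'D', 'C': 'E', 'D': 'F', 'E': '', 'F': ''}
print(iterative_deepening_search('A', lambda s: s == 'F', lambda s: graph[s]))
```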

Iterative deepening search combines benefits of depth-first and breadth-first search:

  • Modest memory requirement:
    • $S \sim O(bd)$ when there is a solution
    • $S \sim O(bm)$ on finite state spaces with no solution
  • Optimal when all actions have the same cost
  • Complete on finite acyclic state spaces, or on any finite state space by checking for cycles
  • The time complexity is:
    • $t \sim O(b^d)$ when there is a solution
    • $t \sim O(b^m)$ when there is no solution

Iterative deepening search is slower than breadth-first search, but much more memory efficient. The total number of nodes generated in the worst case is:

$$N(\text{IDS}) = db + (d-1)b^2 + (d-2)b^3 + \dots + b^d$$

with a time complexity of $t \sim O(b^d)$, asymptotically the same as breadth-first search.

Bidirectional Search

In this case we will search simultaneously forward from the initial state and backwards from the goal state(s), expecting the two searches to meet. The motivation is based on:

$$b^{d/2} + b^{d/2} \ll b^d$$

Likewise, the path cost covered by each of the two searches will be about $C/2$.

We will have to track two frontiers and two tables of reached states, and we need the ability to reason backwards: if state $s'$ is a successor of $s$ in the forward direction, then $s$ is a successor of $s'$ in the backward direction. A solution is found when the two frontiers collide.
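Here is a minimal sketch of this mechanism with two breadth-first frontiers on an undirected toy graph (so the same successor function can be used in both directions); all names are assumptions made for the illustration, and the stopping rule is kept deliberately simple:

```python
from collections import deque

def bidirectional_search(start, goal, successors):
    """Two BFS frontiers, one per direction, expanded level by level in turns.
    Assumes `successors` is valid in both directions (undirected graph).
    Returns the first path found when the frontiers meet; guaranteeing that
    this path is the shortest one requires a more careful stopping rule."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}   # two reached tables
    frontier_f, frontier_b = deque([start]), deque([goal])

    def join(meet):
        # stitch the forward path to `meet` together with the backward path onwards
        forward = []
        s = meet
        while s is not None:
            forward.append(s)
            s = parents_f[s]
        backward = []
        s = parents_b[meet]
        while s is not None:
            backward.append(s)
            s = parents_b[s]
        return forward[::-1] + backward

    while frontier_f and frontier_b:
        # alternate: expand one full depth level of each frontier per round
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            for _ in range(len(frontier)):
                state = frontier.popleft()
                for child in successors(state):
                    if child not in parents:
                        parents[child] = state
                        if child in other:       # the two frontiers have collided
                            return join(child)
                        frontier.append(child)
    return None

# Toy usage on the undirected path graph A-B-C-D-E-F
graph = {'A': 'B', 'B': 'AC', 'C': 'BD', 'D': 'CE', 'E': 'DF', 'F': 'E'}
print(bidirectional_search('A', 'F', lambda s: graph[s]))
```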

The underlying search algorithm can be chosen from the strategies listed above.

Footnotes

  1. With a FIFO queue, there is no need to evaluate a priority function to choose which successor to expand first.