Forward-Chaining Planning

Formally, forward-chaining planning can be described as search through a landscape where each node is defined by a tuple $\langle S, P \rangle$. $S$ is a world state composed of predicate facts and $P$ is the plan (an ordered sequence of actions) used to reach $S$ from the initial state. Search begins from the initial problem state, corresponding to the tuple $\langle S_{0}, \{\} \rangle$.
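
As a concrete illustration, the node tuple might be represented as in the following minimal Python sketch, which assumes states are encoded as sets of ground predicate facts (here, plain strings) and plans as tuples of action names:

\begin{verbatim}
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Node:
    state: FrozenSet[str]    # S: the world state, a set of predicate facts
    plan: Tuple[str, ...]    # P: the ordered actions used to reach S

def initial_node(initial_facts):
    """The root of the search landscape, corresponding to <S_0, {}>."""
    return Node(state=frozenset(initial_facts), plan=())
\end{verbatim}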

Edges between pairs of nodes in the search landscape correspond to applying actions that lead from one state to another. When an action $A$ is applied to a search-space node $\langle S, P \rangle$, the node $\langle S', P' \rangle$ is reached, where $S'$ is the result of applying the action $A$ in the state $S$ and $P'$ is obtained by appending the action $A$ to $P$. Forward-chaining search through this landscape is restricted to considering only moves in a forward direction: transitions are only ever made from a node with plan $P$ to nodes with a plan $P'$, where $P'$ can be obtained by adding (or `chaining') actions to the end of $P$.
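
Continuing the sketch above, the edge relation can be made explicit under a STRIPS-style add/delete action model; this is an assumption for illustration, since the formalism here does not fix a particular action representation:

\begin{verbatim}
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: FrozenSet[str]
    add_effects: FrozenSet[str]
    del_effects: FrozenSet[str]

def applicable(action: Action, node: Node) -> bool:
    # A can be applied in S when all of its preconditions hold in S.
    return action.preconditions <= node.state

def apply_action(action: Action, node: Node) -> Node:
    # S' applies A's delete effects then add effects to S;
    # P' appends A to the end of P.
    new_state = (node.state - action.del_effects) | action.add_effects
    return Node(state=new_state, plan=node.plan + (action.name,))

def successors(node: Node, actions: List[Action]) -> List[Node]:
    # One outgoing edge per action applicable in S.
    return [apply_action(a, node) for a in actions if applicable(a, node)]
\end{verbatim}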

As unguided search in this manner is prohibitively expensive for all but the smallest problems, heuristics are used to guide search. Commonly, a heuristic value provides an estimate of the distance from a node $\langle S, P \rangle$ to a node $\langle S', P' \rangle$ in which $S'$ is a goal state.
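
For illustration, one common instantiation of this idea is greedy best-first search over the node representation sketched above, taking the heuristic $h$ as a parameter; other heuristic search strategies fit the same scheme:

\begin{verbatim}
import heapq
import itertools

def forward_search(initial_facts, goal, actions, h):
    # Greedy best-first search: always expand the open node with the
    # lowest heuristic estimate of goal distance. `goal` is a set of
    # facts that must all hold; h(state, goal) is the estimator.
    counter = itertools.count()        # tie-breaker for equal h-values
    root = initial_node(initial_facts)
    frontier = [(h(root.state, goal), next(counter), root)]
    seen = {root.state}
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if goal <= node.state:         # S is a goal state: return its plan P
            return node.plan
        for succ in successors(node, actions):
            if succ.state not in seen: # avoid revisiting states
                seen.add(succ.state)
                heapq.heappush(frontier,
                               (h(succ.state, goal), next(counter), succ))
    return None                        # search space exhausted, no plan found
\end{verbatim}

A trivial goal-distance estimator for this sketch is the number of goal facts not yet satisfied, e.g. \verb|h = lambda state, goal: len(goal - state)|.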
