Automatic motion generation is a key part of many applications, from games to art to training scenarios. For many techniques used to generate such motion, however, the range in which the technique will perform well is not known. For example, the ability of a motion graph to generate animations fulfilling a set of requirements---such as efficient navigation over an environment---is not easy to predict from the input motion segments.
This project introduces the idea of assessing a data structure such as a motion graph for its utility in a particular application. We focus on navigation tasks and define metrics for evaluating expected path quality and coverage for a given environment. One key to evaluating a motion graph for navigation tasks is to first embed it into the environment in a way that captures all possible paths that might result from ``playing back'' the motion graph within that environment. This project describes an algorithm for accomplishing this embedding that preserves the flexibility of the original motion graph. We use the metrics defined in this paper to compare motion datasets and to highlight areas where these datasets could be improved.
To introduce this notion of animation-system evaluation, we look at a simple set of metrics intended to measure the ability of the motion graph to create animations that allow the character to efficiently navigate through a target environment.
The coverage metric examines the ability of the character to navigate into and through all points in the environment. To aid in computing this metric, we discretize the environment into a large number of cells and consider all points in a cell to be reachable if the centre point of that cell is reachable, in a manner similar to occupancy grids. This allows computation of measures related to the character's ability to reach points in the environment, such as:
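As a minimal sketch of the discretization idea above: coverage over an occupancy grid can be computed as the fraction of free cells reachable from a starting cell. The grid representation and 4-connected adjacency here are illustrative assumptions, not the paper's embedding-based reachability test.

```python
from collections import deque

def coverage(grid, start):
    """Fraction of free cells reachable from `start`.

    `grid` is a 2-D list: True for free cells, False for obstacles.
    A cell counts as covered if its centre point is reachable,
    approximated here by 4-connected grid adjacency.
    """
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    free = sum(row.count(True) for row in grid)
    return len(seen) / free if free else 0.0
```

In the full system, adjacency would be determined by whether some clip in the embedded motion graph carries the character between cells, rather than by raw grid neighbourhood.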
The path quality metric examines the ability of the motion graph to efficiently navigate the character between two reachable points in the environment. This can be efficiently computed by embedding the motion graph into the environment (see next section) and then randomly sampling source and destination points in a Monte Carlo fashion. The value for each pair of points is the ratio between the length of the best source-to-destination path available in the embedded motion graph and the length of the theoretical shortest obstacle-respecting path. The result of the Monte Carlo sampling is a distribution of path efficiencies that can be used for several measurements, including:
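The Monte Carlo sampling step can be sketched as follows. The two helpers `embedded_path_length` and `shortest_path_length` are hypothetical stand-ins for, respectively, a search over the embedded motion graph and an obstacle-respecting shortest-path computation.

```python
import random

def path_efficiency_samples(cells, embedded_path_length,
                            shortest_path_length, n_samples=1000, seed=0):
    """Sample path-efficiency ratios between random reachable cell pairs.

    embedded_path_length(a, b): length of the best path through the
        embedded motion graph, or None if no path exists (hypothetical).
    shortest_path_length(a, b): theoretical shortest obstacle-respecting
        path length (hypothetical).
    Ratios are >= 1; a ratio of 1 means the motion graph matches
    the optimum.
    """
    rng = random.Random(seed)
    ratios = []
    while len(ratios) < n_samples:
        a, b = rng.sample(cells, 2)  # two distinct reachable cells
        best = embedded_path_length(a, b)
        ideal = shortest_path_length(a, b)
        if best is not None and ideal > 0:
            ratios.append(best / ideal)
    return ratios
```

The returned distribution can then be summarized by, e.g., its mean, worst case, or the fraction of pairs below some efficiency threshold.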
We embed the original motion graph into the target environment, producing in effect a much larger motion graph tailored to the environment. This embedding process also allows our algorithm to take into account the perceptual effects of motion editing.
For the purposes of navigation capability, the relevant part of motion editing is how much the path of a character can be manipulated. Given a particular starting configuration, each clip in the motion graph will terminate at a certain root position and facing; we abstract the effects of motion editing into an editing footprint, which is the collection of root positions and facings into which the clip can be edited while respecting the known editing bounds.
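A footprint as described above might be enumerated as in the sketch below. The edit bounds (a displacement disc and a rotation range) and their discretization are illustrative assumptions, not values from this work.

```python
import math

def editing_footprint(end_x, end_y, end_facing,
                      max_offset=0.25, max_turn=math.radians(10),
                      n_pos=5, n_ang=5):
    """Enumerate a discretized editing footprint for one clip.

    Given the clip's nominal end root position (end_x, end_y) and
    facing (radians), return the (x, y, facing) configurations
    reachable by editing within `max_offset` metres of displacement
    and `max_turn` radians of rotation (hypothetical bounds).
    """
    footprint = []
    for i in range(n_pos):
        dx = -max_offset + 2 * max_offset * i / (n_pos - 1)
        for j in range(n_pos):
            dy = -max_offset + 2 * max_offset * j / (n_pos - 1)
            if math.hypot(dx, dy) > max_offset:
                continue  # keep only offsets inside the editing disc
            for k in range(n_ang):
                dtheta = -max_turn + 2 * max_turn * k / (n_ang - 1)
                footprint.append((end_x + dx, end_y + dy,
                                  end_facing + dtheta))
    return footprint
```

During embedding, each configuration in a footprint would be tested against the environment, so that a clip contributes an edge wherever some edited version of it remains collision-free.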
Summarized briefly, we found:
We are interested in exploring the problem of what makes a good metric. While the ones used for this work were appropriate for a simple navigational task, more complex tasks, such as a game environment or emergency-services training scenario, would demand richer metric sets.
Scalability, in both environment size and motion graph complexity, is an issue for the implementation used in this work. Several options are possible for expanding the approach to work on environments tens of metres on a side using motion graphs containing hundreds of clips representing many distinct actions. Examining extremely large spaces could benefit from a tiling approach; the second figure at the top of this page shows a character navigating through an infinite forest of pillars, and the third figure shows the tile used to analyze that environment, as well as a path through it.
The current approach works only for static environments; extending this to dynamic environments would allow evaluation of more practical scenarios.
The approach used here should generalize to other motion generation systems. One naive way to accomplish this would be to have the motion generation system run the character through a repertoire of actions, either systematically or by hand (i.e., under interactive user control), and treat the resulting clips as motion capture data to be evaluated in the above manner.
A substantially expanded version of this work, with larger and more dynamic environments, motion graphs, and action sets, and featuring more realistic and practical metrics, will be submitted for publication shortly.
Supported in part by the NSF under Grant IIS-0205224 and IIS-0326322.