Evaluating Motion Graphs for Character Navigation

Analysing the properties of motion graphs to determine their capabilities with a target environment
[Figure: visualization of motion-graph variability]

Project Description

Automatic motion generation is a key part of many applications, from games to art to training scenarios. For many techniques used to generate such motion, however, the range of conditions under which the technique will perform well is not known. For example, the ability of a motion graph to generate animations fulfilling a set of requirements---such as efficient navigation through an environment---is not easy to predict from the input motion segments.

This project introduces the idea of assessing a data structure such as a motion graph for its utility in a particular application. We focus on navigation tasks and define metrics for evaluating expected path quality and coverage for a given environment. One key to evaluating a motion graph for navigation tasks is to first embed it into the environment in a way that captures all possible paths that might result from "playing back" the motion graph within that environment. This project describes an algorithm for accomplishing this embedding that preserves the flexibility of the original motion graph. We use these metrics to compare motion datasets and to highlight areas where those datasets could be improved.

Evaluation Metrics

To introduce this notion of animation-system evaluation, we look at a simple set of metrics intended to measure the ability of the motion graph to create animations that allow the character to efficiently navigate through a target environment.

The coverage metric examines the ability of the character to navigate into and through all points in the environment. To aid in computing this metric, we discretize the environment into a large number of cells and consider all points in a cell to be reachable if the centre point of that cell is reachable, in a manner similar to occupancy grids. This allows computation of measures related to the character's ability to reach points in the environment, such as:

  • Fractional coverage of the environment (shown here as a 2D slice, collapsing all angular and pose information by summing over each X,Z cell; brighter regions show greater coverage)
  • Location and size of unreachable regions in the environment (brighter represents greater distance from a reachable location)
    [Figure: coverage holes]
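As a concrete illustration of the coverage computation, the reachability test amounts to a flood fill over the discretized cells. The sketch below is hypothetical code, not the project's implementation; it collapses the grid to 2D (ignoring facing and pose) and assumes a character that can step between 4-connected free cells:

```python
from collections import deque

def coverage(grid, start):
    """Fraction of free cells reachable from `start` via 4-connected moves.

    `grid` is a 2D list with 0 for free cells and 1 for obstacles; a cell
    counts as reachable if its centre can be reached, as in an occupancy grid.
    """
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    frontier = deque([start])
    while frontier:
        x, z = frontier.popleft()
        for nx, nz in ((x + 1, z), (x - 1, z), (x, z + 1), (x, z - 1)):
            if 0 <= nx < rows and 0 <= nz < cols \
                    and grid[nx][nz] == 0 and (nx, nz) not in seen:
                seen.add((nx, nz))
                frontier.append((nx, nz))
    free = sum(row.count(0) for row in grid)
    return len(seen) / free
```

In the full 4D setting, the neighbours of a cell would instead be the cells reachable by playing any clip the motion graph allows from that cell.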
The path quality metric examines the ability of the motion graph to efficiently navigate the character between two reachable points in the environment. It can be computed efficiently by embedding the motion graph into the environment (see next section) and then randomly sampling source and destination points in a Monte Carlo fashion. The value for each pair of points is the ratio between the length of the best source-to-destination path available in the embedded motion graph and the length of the theoretical shortest obstacle-respecting path. The result of the Monte Carlo sampling is a distribution of path efficiencies that can be used for several measurements, including:

  • Estimation of the median path result ("what we expect to see") (Deep red is the theoretical best path; green is the best path using the motion graph)
  • Estimation of the fraction of paths falling outside a user-determined acceptable level, such as 10% longer than optimal ("how often will the results be poor")
    [Figure: paths more than 10% longer than optimal]
  • Characterization of the worst paths, such as the 95th percentile ("how bad is it likely to get")
    [Figure: 95th-percentile path]
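Given a list of sampled efficiency ratios (embedded-graph path length divided by optimal path length), the three statistics above are direct to compute. A minimal sketch; the function name and default threshold are illustrative, not from the project code:

```python
import statistics

def path_quality_stats(ratios, tolerance=0.10):
    """Summarize a Monte Carlo distribution of path-efficiency ratios.

    Each ratio is (best embedded-graph path length) / (shortest
    obstacle-respecting path length), so 1.0 is optimal and larger is worse.
    """
    ordered = sorted(ratios)
    return {
        # "what we expect to see"
        "median": statistics.median(ordered),
        # "how often will the results be poor"
        "frac_over_tolerance": sum(r > 1.0 + tolerance for r in ordered) / len(ordered),
        # "how bad is it likely to get"
        "p95": ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))],
    }
```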
Embedding Algorithm

We embed the original motion graph into the target environment, producing in effect a much larger motion graph tailored to that environment. This embedding process also allows our algorithm to take into account the perceptual effects of motion editing.
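Conceptually, the embedding unrolls the motion graph over the environment: each embedded node is a grid cell tagged with the clip just played, and edges follow the motion graph's clip transitions into the cells permitted by the editing footprint (see the editing model below). A hypothetical sketch, with the footprint abstracted as a callback:

```python
def embed_motion_graph(transitions, cells, footprint):
    """Build the embedded graph as adjacency sets over 4D cells.

    `transitions[c]` lists the clips that may follow clip `c` in the motion
    graph; `cells` is the set of valid (ix, iz, facing, clip) cells; and
    `footprint(cell, next_clip)` yields the cells `next_clip` can end in
    when started from `cell`, given the editing bounds.
    """
    graph = {cell: set() for cell in cells}
    for cell in cells:
        clip = cell[3]
        for nxt in transitions[clip]:
            # keep only endpoints that stay inside the valid environment
            graph[cell] |= {c for c in footprint(cell, nxt) if c in cells}
    return graph
```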

Editing Model

We assume it is known how much motion editing can be applied while still producing animations of sufficient visual quality, perhaps using limitations on motion editing derived by a method similar to our previous work (here, we use a simple linear model).

For the purposes of navigation capability, the relevant part of motion editing is how much the path of a character can be manipulated. Given a particular starting configuration, each clip in the motion graph will terminate at a certain root position and facing; we abstract the effects of motion editing into an editing footprint, which is the collection of root positions and facings into which the clip can be edited while respecting the known editing bounds.
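A sketch of how an editing footprint might be computed under the simple linear model: the allowed position and facing deviations are assumed to grow linearly with clip duration, and the resulting position region is rasterized onto the grid. All names and rates here are illustrative assumptions, not values from the project:

```python
def editing_footprint(end_x, end_z, end_facing, clip_seconds,
                      cell=0.1, pos_per_sec=0.05, deg_per_sec=3.0):
    """Approximate a clip's editing footprint on the ground-plane grid.

    Returns the set of (ix, iz) cells the clip's end position can be edited
    into, plus the (min, max) range of achievable end facings in degrees.
    """
    pos_r = pos_per_sec * clip_seconds      # allowed position deviation
    ang_r = deg_per_sec * clip_seconds      # allowed facing deviation
    n = int(pos_r // cell)
    cells = {
        (round((end_x + dx * cell) / cell), round((end_z + dz * cell) / cell))
        for dx in range(-n, n + 1)
        for dz in range(-n, n + 1)
        if (dx * cell) ** 2 + (dz * cell) ** 2 <= pos_r ** 2
    }
    return cells, (end_facing - ang_r, end_facing + ang_r)
```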

To compute the metrics efficiently and allow the motion graph to be tractably embedded, we discretize the environment using a 4D grid. The axes are the X and Z ground-plane position, the facing of the character's root, and the pose of the character (i.e., the motion clip just played):
[Figure: 4D grid discretization]
While this discretization is conservative, the results are stable with respect to decreasing grid size past a certain saturation level, suggesting that this gives a good approximation of the continuous case.
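A minimal sketch of this mapping from a continuous configuration to a 4D cell; the cell size and number of facing bins are illustrative choices, not the values used in the project:

```python
import math

def cell_index(x, z, facing_deg, clip_id, cell=0.1, n_facings=16):
    """Map (position, facing, pose) to a discrete 4D grid cell.

    Axes: X and Z ground-plane position (in cells of `cell` metres), the
    root facing (quantized into `n_facings` bins over 360 degrees), and
    the pose, identified by the motion clip just played.
    """
    return (math.floor(x / cell),
            math.floor(z / cell),
            int(facing_deg % 360.0 // (360.0 / n_facings)),
            clip_id)
```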

Evaluation Results

Summarized briefly, we found:

  • The coverage metric is largely stable when the radius of the editing footprint is larger than the width of a grid cell.
  • For simple navigational tasks, having a variety of motion clips of the same action (turning left, walking straight, etc.) adds only a small amount to the reachable part of the environment.
  • Missing whole subclasses of actions, such as all gradual turns, can adversely affect the efficiency of character navigation.
Future Work

We are interested in exploring the problem of what makes a good metric. While the metrics used for this work were appropriate for a simple navigational task, more complex tasks, such as a game environment or an emergency-services training scenario, would demand richer metric sets.

Scalability, in both environment size and motion graph complexity, is an issue for the implementation used in this work. Several options are possible for expanding the approach to environments tens of metres on a side, using motion graphs containing hundreds of clips representing many distinct actions. Examining extremely large spaces could benefit from a tiling approach; the second figure at the top of this page shows a character navigating through an infinite forest of pillars, and the third figure shows the tile used to analyze that environment, as well as a path through it.

The current approach works only for static environments; extending this to dynamic environments would allow evaluation of more practical scenarios.

The approach used here should generalize to other motion generation systems. One naive way to accomplish this would be to have the motion generation system run the character through a repertoire of actions, either systematically or by hand (i.e., under interactive user control), and treat the resulting clips as motion capture data to be evaluated in the manner described above.

A much-expanded version of this work, featuring much larger and more dynamic environments, larger motion graphs and action sets, and more realistic and practical metrics, will be submitted for publication shortly.



Project Team

Supported in part by the NSF under Grants IIS-0205224 and IIS-0326322.

Paul Reitsma
Last Updated: November 27, 2005