Vision and Mobile Robotics Laboratory | Software

IntegrateMeshes

Description

IntegrateMeshes is a program that integrates multiple aligned meshes into a single seamless surface mesh. It reads in a set of meshes, transformations that align each mesh to a base mesh, and projection matrices that determine the viewing direction of each mesh; it outputs the integrated mesh. IntegrateMeshes can also be used to blend appearance information collected from multiple views. The theory behind the integration algorithm is given in Digital Equipment Corporation Cambridge Research Laboratory Technical Report CRL-TR-96-4, Registration and Integration of Textured 3-D Data, which also appears in shorter form as a conference paper at 3DIM '97.

IntegrateMeshes uses a naming convention to simplify the command line options. All files read in for integration must start with the same model prefix, and the same prefix is used to name the output files. Suppose we have three views of a robot to be integrated; a reasonable way to name the surface meshes is robot1.wrl, robot2.wrl, and robot3.wrl. IntegrateMeshes also requires that all views be aligned with a single base view. Suppose the alignments of robot2.wrl and robot3.wrl with robot1.wrl are known. The transformation matrices must then be named robot.1.2.trans, which aligns robot2.wrl with robot1.wrl, and robot.1.3.trans, which aligns robot3.wrl with robot1.wrl. Furthermore, the projection matrices that define the sensor origins of the views (in view coordinates) must be named robot1.pa, robot2.pa, and robot3.pa. If appearance blending is not used, IntegrateMeshes outputs two files: the integrated surface robot.wrl and the integrated points and sensor origins in the base coordinate system, robot.points.wrl. The file layout for this example is summarized below.
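
For the three-view example, the working directory and outputs would look like this (all names taken from the convention above):

    Inputs:   robot1.wrl   robot2.wrl   robot3.wrl
              robot.1.2.trans   robot.1.3.trans
              robot1.pa   robot2.pa   robot3.pa
    Outputs:  robot.wrl   robot.points.wrl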

If appearance is being blended, the projection matrices must map 3-D coordinates (x,y,z,1) of each view into image coordinates (u,v,w) before alignment of the view with the base view. That is, each projection matrix should describe the projection in view coordinates, not world coordinates. Each view must also have a 24-bit RGB TIFF image describing the appearance of the view; for the example above, images named robot1.tiff, robot2.tiff, and robot3.tiff must exist. The outputs after appearance blending are the integrated points and sensor origins in the base coordinate system, robot.points.wrl, and the texture-mapped model robot.texture.wrl with texture images robot.*.tiff, which can be viewed with vrweb.
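
This convention is the standard homogeneous perspective projection. A minimal sketch in C of the mapping it requires (the function name and the 3x4 matrix layout are illustrative assumptions, not IntegrateMeshes internals):

    /* Apply a 3x4 projection matrix P to a point (x, y, z) given in
       view coordinates.  P maps (x, y, z, 1) to homogeneous image
       coordinates (u, v, w); the pixel location is (u/w, v/w). */
    void project_view_point(double P[3][4], double x, double y, double z,
                            double *u_img, double *v_img)
    {
        double u = P[0][0]*x + P[0][1]*y + P[0][2]*z + P[0][3];
        double v = P[1][0]*x + P[1][1]*y + P[1][2]*z + P[1][3];
        double w = P[2][0]*x + P[2][1]*y + P[2][2]*z + P[2][3];

        *u_img = u / w;
        *v_img = v / w;
    }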

Files

main.c: contains the main controlling function for the integration of surface meshes.

marchingcubes.c: contains all of the functions for creating marching cubes cases and for deciding when to apply each case to a cube of implicit function values.

probability.c: contains functions that compute the probability contribution of points inserted into the voxel space.

texture.c: contains all of the functions for reading panoramic images and creating a cell of texture for each face in the surface mesh created by integration.

voxel.c: contains all of the functions for updating and maintaining a binary-tree-indexed space of voxels.

vrml.c: contains functions for outputting VRML files that describe the integration algorithm.

cube.h: defines all classes used for marching cubes.

integrateMesh.h: defines global variables for IntegrateMeshes.

probability.h: defines prototypes for the functions in probability.c.

texture.h: defines prototypes for the functions in texture.c.

viewMesh.h: defines the viewpoint, viewface, and view_mesh classes, lower-memory versions of meshpoints, meshfaces, and surface_mesh optimized for describing texture information.

voxel.h: defines the voxel and voxel_space classes.

vrml.h: contains prototypes for the functions in vrml.c.
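
As a rough illustration of the kind of computation probability.c performs, the sketch below shows a Gaussian falloff of a point's contribution with distance; the function and its single sigma are illustrative assumptions (the actual four-parameter sensor model, set by -sigmas, is described in CRL-TR-96-4):

    #include <math.h>

    /* Illustrative only: a point's contribution to a voxel decays as a
       Gaussian of the distance between the point and the voxel center,
       with standard deviation sigma (cf. the -sigmas option). */
    double point_contribution(double dist, double sigma)
    {
        return exp(-(dist * dist) / (2.0 * sigma * sigma));
    }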

Usage

Typing IntegrateMeshes - prints the following options (format, description, default):

Usage: IntegrateMeshes (See IntegrateMeshes.html for complete usage)

Detailed Usage

%S set model prefix name [required]

%d set base view index [required]

-views ... view indices [1 2]

-size %F set voxel size of object [0.25]

-sigmas %F %F %F %F set point probability stdevs (as bs ae be) [.375 .375 .375 .375]

-lambda %F error(=1) vs surface balancer(=0) [0.5]

-pt %F minimum probability threshold [1]

-bb %F %F %F %F %F %F bounding box min max [-1e8 -1e8 -1e8 1e8 1e8 1e8]

-texture turn on texture integration [off]

-tcw %d set texture cell width [8]

-max_wt turn on max weight texture blending [off]

-slices %S output slices with this prefix [off]

-dThresh %F distance threshold

-transDir %S transform directory

-wrlDir %S input wrl directory

Examples

Assume that the current directory contains robot1.wrl, robot2.wrl, robot3.wrl, robot8.wrl, robot1.pa, robot2.pa, robot3.pa, robot8.pa, robot1.tiff, robot2.tiff, robot3.tiff, robot8.tiff, robot.1.2.trans, robot.1.3.trans, and robot.1.8.trans. An example of usage that integrates the four aligned robot views described above using the default settings is:
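
A plausible invocation, assuming the two required positional arguments (model prefix and base view index) come first and -views lists the remaining view indices:

    IntegrateMeshes robot 1 -views 2 3 8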

This command will place the integrated model in robot.wrl and the aligned point sets in robot.points.wrl. The same example as above that produces a coarser model by changing the size and sigmas values is:
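
For example (the doubled values are illustrative):

    IntegrateMeshes robot 1 -views 2 3 8 -size 0.5 -sigmas 0.75 0.75 0.75 0.75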

The same example as above that uses only the surface probability Gaussian for its sensor model and has a modified bounding box is:
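
For example, assuming -lambda 0 selects the surface term alone (per the option listing above), with an illustrative bounding box:

    IntegrateMeshes robot 1 -views 2 3 8 -size 0.5 -sigmas 0.75 0.75 0.75 0.75 -lambda 0 -bb -10 -10 -10 10 10 10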

The same example as above with the addition of linear appearance blending is:
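
For example, adding the -texture flag to the previous command:

    IntegrateMeshes robot 1 -views 2 3 8 -size 0.5 -sigmas 0.75 0.75 0.75 0.75 -lambda 0 -bb -10 -10 -10 10 10 10 -texture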

Finally, the same example as above, but with max weight texture blending and the output of slice images with finer texture maps, is:
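
For example (the slice prefix and cell width are illustrative; a larger -tcw gives finer texture cells than the default of 8):

    IntegrateMeshes robot 1 -views 2 3 8 -size 0.5 -sigmas 0.75 0.75 0.75 0.75 -lambda 0 -bb -10 -10 -10 10 10 10 -texture -max_wt -slices robotSlice -tcw 16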



The VMR Lab is part of the Vision and Autonomous Systems Center within the Robotics Institute in the School of Computer Science, Carnegie Mellon University.