Michael Schuresko, Project Progress, 15-869

My project is an attempt at an algorithm for single-view modeling of non-planar surfaces.


Algorithm Description

Idea #1: The user sketches the borders of the shape to be modelled. The Teddy algorithm (Igarashi, reference 1) is used to create the shape, and the image is reprojected onto the shape as a texture. Texture synthesis then creates the missing pieces of texture.

How Teddy works: given an arbitrary non-convex closed polygon representing the outline of a shape, Teddy finds the Voronoi decomposition of the points on the polygon. Teddy uses the Voronoi edge segments which don't intersect the original outline as the "spine", then rotates the rest of the shape around the spine to create a 3d blob conforming to the outline.
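As an illustration of the spine-extraction step, here is a minimal sketch assuming scipy and shapely are available (the function name spine_edges and the use of the outline vertices as Voronoi sites are my own choices, not something specified by Teddy). It keeps the Voronoi edges that lie entirely inside the outline polygon:

    # Sketch: approximate the Teddy-style "spine" of a closed outline by
    # keeping the Voronoi edges of the outline points that stay inside it.
    import numpy as np
    from scipy.spatial import Voronoi
    from shapely.geometry import Polygon, LineString

    def spine_edges(outline_pts):
        """outline_pts: (N, 2) array of outline vertices in order."""
        poly = Polygon(outline_pts)
        vor = Voronoi(outline_pts)
        spine = []
        for i, j in vor.ridge_vertices:
            if i == -1 or j == -1:                  # skip edges going to infinity
                continue
            seg = LineString([vor.vertices[i], vor.vertices[j]])
            if poly.contains(seg):                  # inside and not crossing the outline
                spine.append((vor.vertices[i], vor.vertices[j]))
        return spine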

How to extend Teddy to exploit information from a single image

Method 1: User can draw lines on the shape that don't correspond to the boundary.

My first idea was that the user could specify that such a curve corresponds to a cross-section of the shape perpendicular to the image plane. Then I realized that the user could instead establish a set of correspondences that morph the image (boundary plus extra curves) to match what it would look like after a small rotation. The resulting morph determines a stereo pair from which 3d can be extracted. Alternatively, if the morph were done not with the Beier-Neely method but by triangulating the image and using piecewise affine warps, the transition to 3d could be done at the triangle level, without stereo vision, and the textures could simply be warped onto the new triangles.
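A minimal sketch of the triangulated variant, assuming numpy and scipy (triangle_affine and piecewise_affine_maps are illustrative names, not part of any existing system). Each matched triangle, before and after the hypothetical small rotation, yields its own 2x3 affine map, which is also the map that would warp that triangle's texture:

    # Sketch: per-triangle affine warps from a set of point correspondences.
    import numpy as np
    from scipy.spatial import Delaunay

    def triangle_affine(src_tri, dst_tri):
        """src_tri, dst_tri: (3, 2) arrays of matching triangle corners.
        Returns the 2x3 matrix A such that dst = A @ [x, y, 1]."""
        src_h = np.hstack([src_tri, np.ones((3, 1))])   # homogeneous source corners
        return np.linalg.solve(src_h, dst_tri).T        # solve src_h @ A.T = dst_tri

    def piecewise_affine_maps(src_pts, dst_pts):
        """Triangulate the source points and return (triangle, affine map) pairs."""
        tri = Delaunay(src_pts)
        return [(simplex, triangle_affine(src_pts[simplex], dst_pts[simplex]))
                for simplex in tri.simplices]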

Method 2: The user sketches the outline of the shape. An initial 3d shape is created with the Teddy algorithm; the user can then rotate the resulting object and paint depth onto it.
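One way the depth-painting step could work (purely illustrative, numpy only): keep a per-pixel depth map for the current view and let each brush stroke add a Gaussian bump to it.

    # Sketch: "painting" depth onto a depth map with a Gaussian brush.
    import numpy as np

    def paint_depth(depth, cx, cy, radius, amount):
        """Add a Gaussian bump of the given radius and strength at (cx, cy)."""
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        brush = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * radius ** 2))
        return depth + amount * brush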

Method 1 uses a polygon representation; method 2 has to convert to a voxel-based representation. Either way, texture synthesis is not as straightforward as it is in the purely planar case. One method of reprojecting textures, based on partial differential equations, would reduce the texture-synthesis problem to a near-planar case.

The current plan: possibly ditch most of the single-view modeling part of this assignment and work on texture synthesis for curved shapes (CSG, voxel or spline-based).

Step 1. Create a scalar field for which the voxel shape is an implicit surface (if the shape is not in voxel format, voxelize it first). This is done by relaxation, the standard technique for numerically solving for potential fields in physics: hold the voxels inside the object constant at 0, hold the voxels on the boundary of a bounding box enclosing the shape at 1, temporarily set all other voxels to random scalar values, then sweep through the 3d array repeatedly until the system stabilizes, setting each unconstrained voxel (i.e. not part of the shape or bounding box) at time t+1 to the average of its neighbors at time t.

Step 2. Take the gradient of this scalar field to give a vector field.

Step 3. For each point on the surface of the blobbie, find a corresponding point on the bounding box by following the gradient vectors out to the bounding box.

Step 4. Use the known texture pixels from the blob shape to begin texture synthesis on the bounding box (which should be similar, but not identical, to texture synthesis on a plane), then reproject the synthesized texture pixels back onto the shape.
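A minimal sketch of the relaxation in step 1 on a voxel grid, numpy only; relax_field and the fixed iteration count are illustrative choices rather than anything from the write-up above:

    # Sketch of step 1: relaxation for the scalar field. Voxels inside the
    # shape are clamped to 0, the bounding-box walls to 1, and every other
    # voxel is repeatedly replaced by the average of its six neighbors.
    import numpy as np

    def relax_field(inside, n_iters=500):
        """inside: 3d boolean array, True for voxels inside the shape."""
        phi = np.random.rand(*inside.shape)          # random initial values
        for _ in range(n_iters):
            phi = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
                   np.roll(phi, 1, 2) + np.roll(phi, -1, 2)) / 6.0
            phi[inside] = 0.0                        # clamp the shape interior
            phi[[0, -1], :, :] = 1.0                 # clamp the bounding-box walls
            phi[:, [0, -1], :] = 1.0
            phi[:, :, [0, -1]] = 1.0
        return phi

    # Step 2: the gradient of phi gives the vector field to follow from the
    # blob surface out to the bounding box, e.g.  gx, gy, gz = np.gradient(phi)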

Alternatively, the shape-construction step copied from Teddy can be replaced by the following: for each point inside the boundary, create a line of "interior" voxels that extends in front of and behind the shape by the closest distance from that point to the shape boundary. The result should be similar to Teddy's, but slower to compute; on the other hand, it is already in voxel form.
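A sketch of this alternative inflation, assuming scipy's Euclidean distance transform (inflate_to_voxels is an illustrative name). The distance transform gives, for each pixel inside the outline mask, the distance to the nearest boundary pixel, which sets how far the column of interior voxels extends in front of and behind the image plane:

    # Sketch: inflate a 2d outline mask directly into a voxel grid.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def inflate_to_voxels(mask):
        """mask: 2d boolean array, True inside the sketched outline.
        Returns a 3d boolean voxel grid indexed (depth, height, width)."""
        dist = distance_transform_edt(mask)          # distance to the outline, in pixels
        half_depth = int(np.ceil(dist.max()))
        z = np.arange(-half_depth, half_depth + 1)[:, None, None]
        # a column of interior voxels at each pixel, extending +/- dist(pixel)
        return mask[None, :, :] & (np.abs(z) <= dist[None, :, :])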

Also, instead of reprojecting textures based on implicit surface shape representations, I could attempt to extend a texture-synthesis algorithm to hypertextures.


Related Literature

1. Takeo Igarashi, Satoshi Matsuoka, Hidehiko Tanaka. "Teddy: A Sketching Interface for 3D Freeform Design." Proceedings of SIGGRAPH 99.

Michael D Schuresko
Last modified: Mon Nov 15 01:46:22 EST 1999