ICP Improvements

Goal: To improve the robustness of the iterative closest point (ICP) algorithm and to adapt it for use with partially overlapping surfaces.

We tested two types of improvements: modified weighting of correspondences (area weighting and the robust Lorentzian and sigmoid weighting functions), and overlap filtering, which discards correspondences in regions where the two surfaces do not overlap.

We performed several sets of tests to evaluate these changes to ICP:

Comparing methods for delaying the onset of overlap filtering
Comparing the effects of area weighting and overlap filtering
Determining the effects of varying the parameters of the Lorentzian and sigmoid functions
Comparison of the Lorentzian, sigmoid, and original ICP methods

Test setup

group    max_trans terrain (mm)    max_trans objects (mm)    max_rot (deg)
easy     1000                      10                        5
med      5000                      50                        15
hard     10000                     100                       25
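
The page does not state how the initial perturbations were drawn; the following Python sketch assumes uniform sampling within each group's bounds and an axis-angle rotation, both of which are assumptions:

    import numpy as np

    # Perturbation bounds per difficulty group: (max_trans for terrain in mm,
    # max_rot in deg).  Object datasets would use the smaller translation column.
    GROUPS = {
        "easy": (1000.0, 5.0),
        "med":  (5000.0, 15.0),
        "hard": (10000.0, 25.0),
    }

    def random_perturbation(group, rng=np.random.default_rng()):
        max_t, max_r = GROUPS[group]
        t = rng.uniform(-max_t, max_t, size=3)           # translation (mm)
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)                     # random rotation axis
        angle = np.radians(rng.uniform(-max_r, max_r))   # rotation angle (rad)
        return t, axis, angle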

Comparing methods for delaying the onset of overlap filtering

Initial tests showed that overlap filtering performed worse on the medium and hard tests. The reason is that for these tests, the initial perturbation moves the data so far that most of it was filtered out as non-overlapping. As a result, convergence was much slower and often reached the wrong minimum. Our solution is to delay the onset of overlap filtering until ICP approaches a solution. With this approach, the errors induced by partially overlapping datasets are still reduced, without starving the early iterations of correspondences.
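
The overlap filter itself is not spelled out on this page; the sketch below assumes two common tests, a distance cutoff plus the normal-angle test mentioned in the conclusions, with both thresholds purely illustrative:

    import numpy as np

    def overlap_filter(pairs, d_max, max_angle_deg=60.0):
        # Keep only correspondences that look like true overlap.  Each pair is
        # (p, q, n_p, n_q): a data point, its closest model point, and their
        # unit surface normals.  d_max and max_angle_deg are assumed parameters.
        cos_min = np.cos(np.radians(max_angle_deg))
        kept = []
        for p, q, n_p, n_q in pairs:
            if np.linalg.norm(p - q) > d_max:
                continue                  # too far apart: likely non-overlapping
            if np.dot(n_p, n_q) < cos_min:
                continue                  # normals disagree: likely wrong surface
            kept.append((p, q, n_p, n_q))
        return kept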

We tried two strategies for delaying overlap filtering onset (both are sketched after this list):

fixed delay - engage overlap filtering only after a fixed number of iterations (the delay parameter).
smart delay (smartOverlap) - monitor the progress of the registration and engage filtering automatically as ICP approaches a solution.
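
A minimal sketch of the gating logic for both strategies; the trigger used by the smart delay is not described on this page, so the overlap-fraction criterion below is only a plausible guess:

    def should_filter(iteration, overlap_fraction, mode,
                      fixed_delay=5, min_overlap=0.5):
        # Decide whether overlap filtering should be active this iteration.
        # overlap_fraction is the fraction of correspondences that would
        # survive the filter; fixed_delay and min_overlap are illustrative.
        if mode == "fixed":
            return iteration >= fixed_delay
        if mode == "smart":
            return overlap_fraction >= min_overlap   # assumed criterion
        return False                                 # mode == "none"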

Another strategy (which we didn't try) might be to look at the derivative of the error function and engage the filtering after the slope falls below a threshold. In our experience, the derivative is fairly constant over most of the iterations, so determining a threshold would be difficult.

Results: (all on one page)
Newmesa 1 and 2 - easy | medium | hard | combined
Angel object - easy | medium | hard | combined

The graphs above show that fixed delay and smart delay perform essentially the same, and both are generally better than no delay. This holds for datasets that overlap significantly and have a steep global minimum in the error surface. But when the overlap is small and the error surface has a shallow local minimum, delaying the overlap filtering is not a good idea: the shallow minimum allows the non-overlapping regions to pull the transformation far enough away from the local minimum (even if it started out quite close) that the algorithm cannot recover even after overlap filtering is enabled.

Results:
Newmesa 3 and 8 - easy | medium | hard | combined

Comparing the effects of area weighting and overlap filtering

These experiments show that in many cases, overlap filtering leads to convergence where the original method fails. Area weighting showed mixed results. One possible reason is that area weighting gives equal influence to prominent features (complex regions) and uninteresting parts of the meshes (flat or constant-curvature regions). As a result, the uninteresting parts, which cover the majority of the surface, tend to dominate. Without area weighting, the features (which typically contain many more data points due to the simplification algorithm) are weighted much more heavily than the uninteresting areas. Terrain datasets are often fairly ambiguous except for a few small interesting areas, so this problem with area weighting is likely to arise.
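
Area weighting is not defined precisely on this page; one natural reading, sketched below, gives each vertex one third of the area of its incident triangles, so a few large triangles in flat, simplified regions carry as much total weight as many small ones on features:

    import numpy as np

    def vertex_area_weights(vertices, faces):
        # vertices: (n, 3) array; faces: iterable of (i, j, k) index triples.
        weights = np.zeros(len(vertices))
        for i, j, k in faces:
            # Triangle area from the cross product of two edge vectors.
            area = 0.5 * np.linalg.norm(
                np.cross(vertices[j] - vertices[i], vertices[k] - vertices[i]))
            weights[[i, j, k]] += area / 3.0
        return weights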

Another problem can occur if the data contains regular symmetry (such as a series of small hills). In this case, ICP can get trapped in local minima that are offset from the correct solution by 1/2 or 1 period. For example, with the hills in the dpiles data, an offset of 1/2 hill can occur because some data points match the 1-period-offset correspondences and some match the 0-offset correspondences.

Results: (all on one page)
Newmesa 1 and 2 - easy | medium | hard | combined
Newmesa 5 and 7 - easy | medium | hard | combined
Newmesa 9 and 11 - easy | medium | hard | combined
Newmesa 3 and 8 - easy | medium | hard | combined
Dpiles 3 and 5 - easy | medium | hard | combined
Angel object - easy | medium | hard | combined
Bunny object - easy | medium | hard | combined

Determining the effects of varying the parameters of the Lorentzian and sigmoid functions

The Lorentzian has one parameter and the sigmoid has two that govern the shape of the weighting function. We ran tests with several values of these parameters to determine whether ICP performance was sensitive to the parameter choice and, if so, which values were best. The results showed that the sigmoid version was relatively insensitive to parameter choice, and the Lorentzian version performed better with large values of sigma.
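
The exact functional forms are not given here; the sketch below uses the standard Lorentzian (Cauchy) M-estimator weight and a logistic falloff for the sigmoid, with defaults taken from the best-performing values reported in the conclusions:

    import numpy as np

    def lorentzian_weight(d, sigma=3200.0):
        # Standard Lorentzian (Cauchy) robust weight; a larger sigma gives a
        # flatter falloff.  d is the closest-point distance.
        return 1.0 / (1.0 + (d / sigma) ** 2)

    def sigmoid_weight(d, dmid=5000.0, slope=0.0007):
        # Assumed form: weight falls from ~1 toward 0 around d = dmid, with
        # slope controlling the sharpness; w(dmid) = 0.5.
        return 1.0 / (1.0 + np.exp(slope * (d - dmid)))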

Results: (all on one page)
Lorentzian graphs - Newmesa 9 and 11 - easy | medium | hard | combined
Sigmoid graphs - Newmesa 9 and 11 - easy | medium | hard | combined

Comparison of the Lorentzian, sigmoid, and original ICP methods

Results: (all on one page)
Newmesa 1 and 2 - easy | medium | hard | combined
Newmesa 5 and 7 - easy | medium | hard | combined
Newmesa 3 and 8 - easy | medium | hard | combined
(note: For 3-8, smartOverlap was turned off because it unfairly penalizes the Lorentzian method. Instead, fixed-delay overlap filtering was used, with delay=0 for the easy datasets and delay=5 for the medium and hard datasets.)

Conclusions

Delay - Delaying the onset of overlap filtering had little effect on the easy tests, but it improved performance on the medium tests and even more so on the hard tests. The exception occurs when the data doesn't overlap much (as with newmesa 3-8). In that case, delaying allows the non-overlapping data to drive the transform far enough from the global minimum that it converges to another minimum (this applies to the easy test). On the medium and hard tests, the initial transform is so far from the global minimum that with a delay of zero, not enough points are initially matched to draw the transform into the global minimum; here the fixed delay and smart overlap methods worked better.

Area weighting - For most datasets, area weighting did not perform noticeably better than the default. One explanation is that area weighting emphasizes points in flat areas (since those areas generally contain larger triangles after simplification), while the non-flat areas contribute more to constraining the solution (especially when the transform is very near the ground truth). This effect is compounded by the fact that the terrain datasets consist primarily of flat areas with occasional non-flat regions such as bushes and rocks. The error surface around the global optimum is not very steep, and area weighting makes it shallower.

Overlap - The results depend on the data. For datasets that contain thin surfaces (such as the angel, which has a thin wing), the angle threshold in the overlap filtering prevents the solution from converging with the back of the wing in mesh1 matching the front of the wing in mesh2. In the newmesa 3-8 test, which overlaps only slightly, the overlap filter prevents the data from sliding such that the two sets overlap too much. In some cases, e.g. newmesa 9-11 and 5-7, the overlap filter tends to produce results that are the same as or slightly worse than no overlap filter. There are several reasons why this could happen: 1) The non-overlapping points could actually pull the transform in the correct direction. 2) Since the overlap filter usually activates fairly early, many non-overlapping points that are actually reasonable correspondences may be ruled out by the filter. One possible solution is to allow the algorithm to converge with no overlap filtering, then activate the filtering and allow it to converge again. 3) If the closest point has a significantly different surface normal, it will be filtered even if there is a good match with the same normal only slightly further away. The solution here is to use 6D ICP with the surface normal as the extra three dimensions.
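
A minimal sketch of the 6D closest-point idea: concatenate each point with its scaled unit normal, so a normal mismatch raises the match distance instead of hard-rejecting the pair. The scale factor alpha is an assumed tuning parameter:

    import numpy as np
    from scipy.spatial import cKDTree

    def closest_points_6d(model_pts, model_nrm, data_pts, data_nrm, alpha=100.0):
        # alpha (same units as the points) trades off normal agreement against
        # positional distance in the 6D nearest-neighbor search.
        model6 = np.hstack([model_pts, alpha * model_nrm])
        data6 = np.hstack([data_pts, alpha * data_nrm])
        dist, idx = cKDTree(model6).query(data6)
        return idx, dist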

Lorentzian and sigmoid parameters - The Lorentzian improved with a larger sigma (3200). The sigmoid results were inconclusive, but dmid=5000, slope=0.0007 performed better on average. These values were used for the head-to-head tests.

Head-to-head tests - Tests on three datasets show that the Lorentzian method performs better than the sigmoid. However, neither consistently outperforms the original method or the overlap-filtered method (dynamic). Considering that the iterative reweighting required by the Lorentzian and sigmoid approaches makes them much more computationally expensive, switching to these new methods cannot be justified.

Possible improvements - 1) Use 6D ICP, incorporating surface normals into the computation of correspondences. 2) For the Lorentzian or sigmoid method, vary the parameters based on matching statistics (similar to those used in the dynamic thresholding): dmid could be the same as dynamicT, and the slope parameter could be computed from the standard deviation.
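
A sketch of improvement 2): how dynamicT is computed is not described here, so the median match distance stands in for it, and tying the slope to the standard deviation through a constant k is likewise an assumption:

    import numpy as np

    def adaptive_sigmoid_params(distances, k=1.0):
        # distances: current closest-point distances for all correspondences.
        dmid = np.median(distances)                  # stand-in for dynamicT
        slope = k / max(np.std(distances), 1e-12)    # steeper as matches tighten
        return dmid, slope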


Last modified: Fri Jun 18 13:42:20 EDT 1999