|PARK, JOHNNY - Purdue University
Submitted to: IEEE International Conference on Robotics and Automation
Publication Type: Peer Reviewed Conference Paper
Publication Acceptance Date: 3/1/2015
Publication Date: 5/1/2015
Citation: Tabb, A., Park, J. 2015. Camera calibration correction in shape from inconsistent silhouette. IEEE International Conference on Robotics and Automation. DOI: 10.1109/ICRA.2015.7139870.
Interpretive Summary: Orchard management in the future will require automation assistance for costly, labor-intensive practices such as pruning, thinning, and harvest. Accurate localization of branches, flowers, and fruit is crucial for automation tasks such as robot-assisted pruning, but efforts to reconstruct accurate 3D shapes of tree components have been largely unsuccessful because of errors in reconstructing tree shape from multiple photographic images. This report presents a new method that uses the Iterated Closest Point approach to correct camera calibration parameters and thereby reconstruct the 3D shapes of trees, including small branches, more accurately. This finding is important for orchard automation applications that depend on accurate tree shape information.
Technical Abstract: The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how small camera calibration error can be corrected when using a previously-published SfIS technique to generate a reconstruction, by using an Iterated Closest Point (ICP) approach. We give formulations for two scenarios: one in which only the external camera calibration parameters (rotation and translation) need to be corrected for each camera, and one in which both internal and external parameters need to be corrected. We formulate the problem as a 2D-3D ICP problem and find approximate solutions with a nonlinear minimization algorithm, the Levenberg-Marquardt method. We demonstrate the ability of our algorithm to create more representative reconstructions of both synthetic and real datasets of thin objects as compared to uncorrected datasets.
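The external-parameter scenario in the abstract can be illustrated as a 2D-3D ICP loop: project candidate 3D points with the current pose, match each projection to its nearest observed 2D silhouette sample, and refine the rotation and translation by nonlinear least squares with the Levenberg-Marquardt method. The sketch below is a minimal, hedged illustration on synthetic noise-free data; the intrinsic matrix `K`, the point set, and the pose perturbations are invented for the example and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical pinhole intrinsics (assumed for the example).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points3d, rvec, tvec):
    """Project 3D points to pixels with pose (rotation vector, translation)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points3d @ R.T + tvec          # world -> camera frame
    uvw = cam @ K.T                      # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]

# Synthetic 3D points in front of the camera and a "true" pose.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (40, 3)) + np.array([0.0, 0.0, 5.0])
rvec_true = np.array([0.02, -0.01, 0.03])
tvec_true = np.array([0.10, -0.05, 0.00])
obs2d = project(X, rvec_true, tvec_true)   # stands in for silhouette samples

# Slightly wrong initial calibration (small perturbation, as in the paper's
# small-error setting).
x0 = np.concatenate([rvec_true + 0.01, tvec_true + 0.05])

def icp_refine(x, n_iters=10):
    for _ in range(n_iters):
        proj = project(X, x[:3], x[3:])
        # ICP "closest point" step: nearest observed 2D sample per projection.
        d = np.linalg.norm(proj[:, None, :] - obs2d[None, :, :], axis=2)
        match = d.argmin(axis=1)
        # Levenberg-Marquardt refinement of the matched residuals.
        def resid(p):
            return (project(X, p[:3], p[3:]) - obs2d[match]).ravel()
        x = least_squares(resid, x, method='lm').x
    return x

x_est = icp_refine(x0)
err_before = np.linalg.norm(project(X, x0[:3], x0[3:]) - obs2d, axis=1).max()
err_after = np.linalg.norm(project(X, x_est[:3], x_est[3:]) - obs2d, axis=1).max()
```

Because the synthetic data are noise-free and the perturbation is small, the nearest-neighbor matches are mostly correct from the start and the loop drives the reprojection error essentially to zero; the internal-plus-external scenario would add the entries of `K` to the parameter vector in the same residual.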