Memory-Efficient Interactive Online Reconstruction from Depth Image Streams

Florian Reichl, Jakob Weiss and Rüdiger Westermann
Department of Informatics, Technische Universität München, Germany

Abstract

We describe how the pipeline for 3D online reconstruction using commodity depth and image scanning hardware can be made scalable to large spatial extents and high scanning resolutions. Our modified pipeline requires less than 10% of the memory of previous approaches at similar speed and resolution. To achieve this, we avoid storing a 3D distance field and weight map during online scene reconstruction. Instead, surface samples are binned into a high-resolution binary voxel grid, which is used in combination with caching and deferred processing of depth images to reconstruct the scene geometry. For pose estimation, GPU ray-casting is performed on the binary voxel grid; a one-to-one comparison to level-set ray-casting in a distance volume indicates slightly lower pose accuracy. To enable unlimited spatial extents and to store the acquired samples at the appropriate level of detail, we combine a hash map with a hierarchical tree representation.
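
To illustrate the central data structure, the following C++ sketch bins world-space surface samples into a sparse binary voxel grid whose bricks are allocated on demand through a hash map, with one occupancy bit per voxel. This is only a minimal single-threaded illustration under assumptions of our own (the names BinaryVoxelGrid, BRICK_DIM and voxelSize are illustrative, the hierarchical tree over the bricks and the GPU implementation are omitted); it is not the implementation described in the paper.

    // Minimal sketch, not the paper's code: sparse binary voxel grid of
    // hashed 8^3 bricks, one occupancy bit per voxel.
    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>

    struct BrickKey {                 // integer brick coordinates
        int x, y, z;
        bool operator==(const BrickKey& o) const {
            return x == o.x && y == o.y && z == o.z;
        }
    };

    struct BrickKeyHash {             // simple spatial hash over brick coordinates
        std::size_t operator()(const BrickKey& k) const {
            return (std::size_t(k.x) * 73856093u)
                 ^ (std::size_t(k.y) * 19349669u)
                 ^ (std::size_t(k.z) * 83492791u);
        }
    };

    class BinaryVoxelGrid {
        static constexpr int BRICK_DIM  = 8;                       // 8^3 voxels per brick
        static constexpr int BRICK_BITS = BRICK_DIM * BRICK_DIM * BRICK_DIM;
        using Brick = std::array<std::uint64_t, BRICK_BITS / 64>;  // 512 bits = 64 bytes

        float voxelSize;                                           // voxel edge length in meters
        std::unordered_map<BrickKey, Brick, BrickKeyHash> bricks;

        static int floorDiv(int v, int d) {                        // floor division, also for negatives
            return (v >= 0) ? v / d : -((-v + d - 1) / d);
        }

    public:
        explicit BinaryVoxelGrid(float voxelSizeMeters) : voxelSize(voxelSizeMeters) {}

        // Bin one world-space surface sample: set the occupancy bit of its voxel.
        void insert(float px, float py, float pz) {
            const int vx = int(std::floor(px / voxelSize));
            const int vy = int(std::floor(py / voxelSize));
            const int vz = int(std::floor(pz / voxelSize));

            const BrickKey key{ floorDiv(vx, BRICK_DIM),
                                floorDiv(vy, BRICK_DIM),
                                floorDiv(vz, BRICK_DIM) };
            Brick& brick = bricks[key];                            // zero-initialized on first access

            const int lx = vx - key.x * BRICK_DIM;                 // local voxel index inside the brick
            const int ly = vy - key.y * BRICK_DIM;
            const int lz = vz - key.z * BRICK_DIM;
            const int bit = (lz * BRICK_DIM + ly) * BRICK_DIM + lx;
            brick[bit >> 6] |= std::uint64_t(1) << (bit & 63);
        }

        // Occupancy query, e.g. when stepping a ray through the grid.
        bool occupied(float px, float py, float pz) const {
            const int vx = int(std::floor(px / voxelSize));
            const int vy = int(std::floor(py / voxelSize));
            const int vz = int(std::floor(pz / voxelSize));
            const BrickKey key{ floorDiv(vx, BRICK_DIM),
                                floorDiv(vy, BRICK_DIM),
                                floorDiv(vz, BRICK_DIM) };
            const auto it = bricks.find(key);
            if (it == bricks.end()) return false;                  // unallocated brick: empty space
            const int lx = vx - key.x * BRICK_DIM;
            const int ly = vy - key.y * BRICK_DIM;
            const int lz = vz - key.z * BRICK_DIM;
            const int bit = (lz * BRICK_DIM + ly) * BRICK_DIM + lx;
            return (it->second[bit >> 6] >> (bit & 63)) & 1u;
        }
    };

In this sketch, memory is only spent on bricks that actually receive surface samples, and each brick stores a single bit per voxel rather than a distance value and a weight, which is the kind of saving the abstract refers to; a pose-estimation ray-caster would repeatedly call an occupancy test like the one above while marching along each ray.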

Associated publications

Memory-Efficient Interactive Online Reconstruction from Depth Image Streams
F. Reichl, J. Weiss, R. Westermann, Computer Graphics Forum (2016), to appear [PDF] [BIBTEX]