Creating Datasets for 3D Reconstruction on Mobile Devices, Poster 35
Abstract
The goal of this project is to build a system that creates datasets for evaluating 3D object reconstruction on mobile devices. These datasets will contribute to the Middlebury Computer Vision Benchmarks (http://vision.middlebury.edu/), a widely known resource for researchers working in stereo vision. By comparing their algorithmically generated depth maps with our “ground truth” benchmarks, researchers can test, compare, and improve their algorithms. The field of computer vision is currently experiencing a “mobile revolution”: there is increasing interest in generating high-quality 3D models with mobile devices such as smartphones. This technology promises wide-ranging applications, including e-commerce, social media, and virtual reality. The mobile movement is in urgent need of new datasets that take advantage of the full capabilities of mobile hardware. Last summer, we began to address this need by developing the backbone of a new framework for dataset acquisition on mobile devices, which will eventually produce ground-truth depth maps. The framework is modeled after the ActiveLighting system built by Middlebury students in previous years. Our new MobileLighting suite coordinates the projection of binary code patterns, the capture of calibration and structured-lighting images, and the positioning of the mobile device. It also implements an image-processing pipeline with an improved thresholding algorithm. The calibration program supports three modes and uses a novel 3D calibration rig adapted from the ArUco library.
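The abstract refers to projecting binary code patterns and decoding the resulting structured-lighting images. Below is a minimal Python/NumPy sketch of that general technique (binary-reflected Gray code stripe generation and per-pixel decoding), not the authors' actual MobileLighting code; the projector resolution, function names, and threshold value are assumptions for illustration only.

```python
# Minimal sketch of binary (Gray code) structured-light pattern generation and
# decoding. This illustrates the general technique named in the abstract; it is
# NOT the MobileLighting implementation. PROJ_WIDTH, the function names, and the
# threshold are hypothetical choices made for this example.
import numpy as np

PROJ_WIDTH = 1024          # assumed projector width (power of two for simplicity)
NUM_BITS = int(np.log2(PROJ_WIDTH))

def generate_gray_code_patterns(width=PROJ_WIDTH, height=768):
    """Return a list of (pattern, inverse) image pairs encoding projector columns."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                      # binary-reflected Gray code per column
    patterns = []
    for bit in range(NUM_BITS - 1, -1, -1):        # most significant bit first
        stripe = ((gray >> bit) & 1).astype(np.uint8) * 255
        img = np.tile(stripe, (height, 1))         # repeat the 1-D stripe over all rows
        patterns.append((img, 255 - img))          # inverse pattern helps thresholding
    return patterns

def decode_gray_code(captures, threshold=10):
    """Recover a per-pixel projector-column map from (pattern, inverse) capture pairs."""
    h, w = captures[0][0].shape
    gray_val = np.zeros((h, w), dtype=np.int32)
    valid = np.ones((h, w), dtype=bool)
    for img, inv in captures:                      # same MSB-first order as generation
        diff = img.astype(np.int32) - inv.astype(np.int32)
        valid &= np.abs(diff) > threshold          # mark pixels with too little contrast
        bit = (diff > 0).astype(np.int32)
        gray_val = (gray_val << 1) | bit
    # convert Gray code back to binary column indices
    col = gray_val.copy()
    shift = 1
    while shift < NUM_BITS:
        col ^= col >> shift
        shift <<= 1
    col[~valid] = -1                               # -1 marks undecodable pixels
    return col
```

Projecting each stripe pattern together with its inverse allows decoding by the sign of the per-pixel difference rather than a fixed brightness cutoff, which is the kind of robustness an improved thresholding step in an image-processing pipeline aims to provide.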
Authors
-
Kyle Meredith '19
-
Nicholas Mosier '20
Topic Area
Science & Technology
Session
P2 » Poster Presentations: Group 2 and Refreshments (2:45pm - Friday, 20th April, MBH Great Hall, 331 and 338)