Pairwise registration in indoor environments using adaptive combination of 2D and 3D cues
- Authors:
- Perafán Villota, Juan Carlos
Leno Da Silva, Felipe
Reali Costa, Anna Helena
de Souza Jacomini, Ricardo
- Resource type:
- Journal article
- Publication date:
- 2018
- Institution:
- Universidad Autónoma de Occidente
- Repository:
- RED: Repositorio Educativo Digital UAO
- Language:
- eng
- OAI Identifier:
- oai:red.uao.edu.co:10614/11388
- Keywords:
- Data compression (Computer science)
Pairwise registration
RGB-D data
Local descriptors
Keypoint detectors
- Rights:
- openAccess
- License:
- All Rights Reserved - Universidad Autónoma de Occidente
Summary: Pairwise frame registration of indoor scenes with sparse 2D local features is not particularly robust under varying lighting conditions or low visual texture. In such cases, 3D local features can be a solution: these attributes come from the 3D points themselves and are therefore resistant to variations in visual texture and illumination. However, 3D features in turn hamper registration when the scene has little geometric structure. Frameworks that use both types of features have been proposed, but they do not take the scene type into account to better exploit 2D or 3D features. Because varying conditions are inevitable in real indoor scenes, we propose a new framework that improves pairwise registration of consecutive frames through an adaptive combination of sparse 2D and 3D features. In our proposal, the proportion of 2D and 3D features used in the registration is set automatically according to the levels of geometric structure and visual texture in each scene. The effectiveness of the proposed framework is demonstrated by experimental results in challenging scenarios, on datasets that include unrestricted RGB-D camera motion in indoor environments and natural changes in illumination.
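To make the adaptive idea concrete, here is a minimal sketch in Python (using OpenCV and NumPy) of how the 2D/3D proportion could be set per frame pair. The paper's own measures of visual texture and geometric structure are not reproduced here; the keypoint-density and depth-gradient heuristics, the `texture_score`/`structure_score`/`feature_split` names, and all parameters are illustrative assumptions.

```python
# Minimal sketch of the adaptive 2D/3D split described in the abstract.
# The scoring heuristics below are placeholder assumptions, not the
# paper's actual formulation.
import numpy as np
import cv2


def texture_score(gray, max_features=1000):
    """Proxy for visual texture: fraction of an ORB keypoint budget detected."""
    keypoints = cv2.ORB_create(nfeatures=max_features).detect(gray, None)
    return len(keypoints) / max_features


def structure_score(depth):
    """Proxy for geometric structure: spread of depth gradients, squashed to [0, 1)."""
    depth = depth.astype(np.float32)
    gx = cv2.Sobel(depth, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(depth, cv2.CV_32F, 0, 1)
    return float(np.tanh(np.nanstd(np.hypot(gx, gy))))


def feature_split(gray, depth, budget=500, eps=1e-6):
    """Divide a correspondence budget between 2D and 3D features for one frame pair."""
    t = texture_score(gray)
    s = structure_score(depth)
    alpha = t / (t + s + eps)       # proportion assigned to 2D features
    n2d = int(round(alpha * budget))
    return n2d, budget - n2d        # (2D count, 3D count)
```

In a full pipeline, the returned counts would cap how many sparse 2D (image keypoint) and 3D (point-cloud keypoint) correspondences feed the pose estimation for each pair of consecutive frames, so that textured scenes lean on 2D cues and geometrically rich but poorly lit or textureless scenes lean on 3D cues.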