Pairwise registration in indoor environments using adaptive combination of 2D and 3D cues

Pairwise frame registration of indoor scenes with sparse 2D local features is not particularly robust under varying lighting conditions or low visual texture. In this case, the use of 3D local features can be a solution, as such attributes come from the 3D points themselves and are resistant to visual texture and illumination variations. However, they also hamper the registration task in cases where the scene has little geometric structure. Frameworks that use both types of features have been proposed, but they do not take into account the type of scene to better exploit the use of 2D or 3D features. Because varying conditions are inevitable in real indoor scenes, we propose a new framework to improve pairwise registration of consecutive frames using an adaptive combination of sparse 2D and 3D features. In our proposal, the proportion of 2D and 3D features used in the registration is automatically defined according to the levels of geometric structure and visual texture contained in each scene. The effectiveness of our proposed framework is demonstrated by experimental results from challenging scenarios with datasets including unrestricted RGB-D camera motion in indoor environments and natural changes in illumination.
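
To make the adaptive idea concrete, the sketch below scores a frame pair for visual texture and geometric structure and mixes 2D and 3D correspondences in proportion to those scores before fitting a rigid transform. This is a minimal sketch under assumed heuristics (image-gradient magnitude as the texture proxy, surface-normal spread as the structure proxy); all function names and constants are hypothetical illustrations, not the authors' implementation.

```python
# A minimal sketch of the adaptive 2D/3D weighting idea, using only NumPy.
# The scoring heuristics, function names, and constants below are
# illustrative assumptions -- they are not the paper's actual method.
import numpy as np

def texture_score(gray):
    """Visual-texture proxy in [0, 1): mean image-gradient magnitude."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return float(np.tanh(np.hypot(gx, gy).mean() / 32.0))

def structure_score(normals):
    """Geometric-structure proxy in [0, 1]: spread of unit surface normals.

    Flat scenes (e.g., a bare wall) have normals clustered around one
    direction; geometrically rich scenes do not.
    """
    mean_n = normals.mean(axis=0)
    mean_n /= np.linalg.norm(mean_n) + 1e-12
    return float(1.0 - np.abs(normals @ mean_n).mean())

def adaptive_weights(gray, normals):
    """Fraction of 2D vs. 3D correspondences to use for this frame pair."""
    t, s = texture_score(gray), structure_score(normals)
    w2d = t / (t + s + 1e-12)
    return w2d, 1.0 - w2d

def rigid_transform(src, dst):
    """Least-squares rigid alignment (Kabsch) of paired 3D points.

    Returns R, t such that dst_i ~= R @ src_i + t for each matched pair.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Diagonal correction guards against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In a full pipeline one would draw roughly w2d·N matches from a 2D pool (e.g., SIFT keypoints back-projected to 3D) and w3d·N from a 3D pool (e.g., SHOT or FPFH matches), then run RANSAC with rigid_transform as the model fitter. The paper's actual scene measures and weighting scheme differ in detail, so treat this sketch purely as a reading aid.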

Full description

Authors:
Perafán Villota, Juan Carlos
Leno Da Silva, Felipe
Reali Costa, Anna Helena
de Souza Jacomini, Ricardo
Resource type:
Journal article
Publication date:
2018
Institution:
Universidad Autónoma de Occidente
Repository:
RED: Repositorio Educativo Digital UAO
Language:
eng
OAI Identifier:
oai:red.uao.edu.co:10614/11388
Online access:
http://hdl.handle.net/10614/11388
https://doi.org/10.1016/j.imavis.2017.08.008
Keywords:
Data compression (Computer science)
Pairwise registration
RGB-D data
Local descriptors
Keypoint detectors
Rights:
openAccess
License:
All rights reserved - Universidad Autónoma de Occidente
ISSN:
0262-8856
Citation:
Villota, J. C. P., da Silva, F. L., de Souza Jacomini, R., & Costa, A. H. R. (2018). Pairwise registration in indoor environments using adaptive combination of 2D and 3D cues. Image and Vision Computing, 69, 113-124.
Journal:
Image and Vision Computing, volume 69, pages 113-124 (January 2018)
dc.relation.references.none.fl_str_mv [1] Z. Xie, S. Xu, X. Li, A high-accuracy method for fine registration of overlapping point clouds, Image Vis. Comput. 28 (4) (2010) 563–570.
[2] P. Henry, M. Krainin, E. Herbst, X. Ren, D. Fox, RGB-D mapping: using Kinect-style depth cameras for dense 3D modeling of indoor environments, Int. J. Robot. Res. 31 (5) (2012) 647–663.
[3] S. Li, A. Calway, RGBD relocalisation using pairwise geometry and concise key point sets, IEEE Int. Conf. Robot. Autom. (ICRA) (2015) 6374–6379.
[4] B.C. Russell, J. Sivic, W.T. Freeman, A. Zisserman, A.a. Efros, Segmenting scenes by matching image composites, Adv. Neural Inf. Proces. Syst. (NIPS) (2009) 1–9.
[5] S. Gupta, P. Arbelaez, R. Girshick, J. Malik, Indoor scene understanding with RGB-D images: bottom-up segmentation, object detection and semantic segmentation, Int. J. Comput. Vis. 112 (2) (2015) 133–149.
[6] T. Shao, W. Xu, K. Zhou, J. Wang, D. Li, B. Guo, An interactive approach to semantic modeling of indoor scenes with an RGBD camera, ACM Trans. Graph 31 (6). (2012)136:1–136:11.
[7] A. Geiger, C. Wang, Joint 3D object and layout inference from a single RGB-D image, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 9358, 2015. pp. 183–195.
[8] M. Firman, D. Thomas, S. Julier, A. sugimoto, Learning to discover objects in RGB-D images using correlation clustering, IEEE International Conference on Intelligent Robots and Systems (IROS), 2013. pp. 1107–1112.
[9] E. Bylow, J. Sturm, C. Kerl, F. Kahl, D. Cremers, Real-time camera tracking and 3D reconstruction using signed distance functions, Robotics: Science and Systems Conference (RSS), 2013.
[10] T. Tykkälä, A.I. Comport, J.K. Kämäräinen, H. Hartikainen, Live RGB-D camera tracking for television production studios, J. Vis. Commun. Image Represent. 25 (1) (2014) 207–217.
[11] V. Morell-Gimenez, M. Saval-Calvo, J. Azorin-Lopez, J. Garcia-Rodriguez, M. Cazola, S. Orts-Escolano, A. Fuster-Guillo, A comparative study of registration methods for RGB-D video of static scenes, Sensors 14 (1) (2014) 8547–8576.
[12] D.G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comput. Vis. 60 (2) (2004) 91–110.
[13] Y. Díez, F. Roure, X. Llado, J. Salvi, A qualitative review on 3D coarse registration methods, ACM Comput. Surv. 47 (3) (2015) 1–36.
[14] M. Fischler, R.C. Bolles, Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24 (6) (1981) 381–395.
[15] F. Endres, J. Hess, N. Engelhard, J. Sturm, D. Cremers, W. Burgard, An evaluation of the RGB-D SLAM system, IEEE International Conference on Robotics and Automation (ICRA), 2012. pp. 1691–1696.
[16] D. Holz, A.E. Ichim, F. Tombari, R.B. Rusu, S. Behnke, Registration with the point cloud library: a modular framework for aligning in 3-D, IEEE Robot. Autom. Mag. 22 (4) (2015) 110–124.
[17] S.M. Prakhya, U. Qayyum, Sparse depth odometry: 3D keypoint based pose estimation from dense depth data, IEEE International Conference on Robotics and Automation (ICRA), 2015. pp. 4216–4223.
[18] C.-C. Wang, C. Thorpe, S. Thrun, M. Hebert, H. Durrant-Whyte, Simultaneous localization, mapping and moving object tracking, Int. J. Robot. Res. 26 (9) (2007) 889–916. Sage Publications.
[19] A. Aldoma, F. Tombari, L.D. Stefano, M. Vincze, A global hypothesis verification framework for 3D object recognition in clutter, IEEE Trans. Pattern Anal. Mach. Learn 38 (7) (2016) 1383–1396.
[20] J. Xie, Y.-F. Hsu, R.S. Feris, M.-T. Sun, Fine registration of 3D point clouds fusing structural and photometric information using an RGB-D camera, J. Vis. Commun. Image Represent. 32 (1) (2015) 194–204.
[21] P.J. Besl, N.D. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Mach. Learn 14 (2) (1992) 239–256.
[22] H. Kim, A. Hilton, Influence of colour and feature geometry on multi-modal 3D point clouds data registration, 2nd International Conference on 3D Vision, 1, 2014. pp. 202–209.
[23] J.C.P. Villota, A.H.R. Costa, Aligning RGB-D point clouds through adaptive integration of color and depth cues, 12th Latin American Robotics Symposium (LARS), 2015. pp. 309–314.
[24] D.G. Lowe, Object recognition from local scale-invariant features, 7th IEEE International Conference on Computer Vision (ICCV), 2, 1999. pp. 1150–1157.
[25] E. Rosten, R. Porter, T. Drummond, Faster and better: a machine learning approach to corner detection, IEEE Trans. Pattern Anal. Mach. Intell. 32 (1) (2010) 105–119.
[26] M. Calonder, V. Lepetit, P. Fua, Keypoint signatures for fast learning and recognition, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 5302, 2008. pp. 58–71. LNCS.
[27] E. Rublee, V. Rabaud, K. Konolige, G. Bradski, ORB: an efficient alternative to SIFT or SURF, IEEE International Conference on Computer Vision (ICCV), 2011. pp. 2564–2571.
[28] H. Bay, T. Tuytelaars, L.V.a.n. Gool, SURF: speeded up robust features, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3951, 2006. pp. 404–417. lNCS.
[29] S. Filipe, L.A. Alexandre, A comparative evaluation of 3D keypoint detectors in a RGB-D object dataset, IEEE International Conference on Computer Vision Theory and Applications (VISAPP), 2014. pp. 476–483.
[30] C. Harris, M. Stephens, A combined corner and edge detector, Alvey Vision Conference, 1988. pp. 147–151.
[31] C. Tomasi, T. Kanade, Detection and tracking of point features, Technical Report CMU-CS-91-132, School of Computer Science, Carnegie Mellon University. 1991. pp. 1–22.
[32] A. Flint, A. Dick, A. Van Den Hengel, Thrift: local 3D structure recognition, 9th Biennial Conference of the Australian Pattern Recognition Society, Digital Image Computing Techniques and Applications (DICTA), 2007. pp. 182–188.
[33] S. Smith, J. Brady, SUSANA: a new approach to low level image processing, Int. J. Comput. Vis. 23 (1) (1997) 45–78.
[34] Z. Yu, Intrinsic shape signatures: a shape descriptor for 3D object recognition, IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. 2009, pp. 689–696.
[35] M. Desbrun, M. Meyer, P. Schröder, A.H. Barr, Implicit fairing of irregular meses using diffusion and curvature flow, 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 1999. pp. 317–324.
[36] R.B. Rusu, S. Cousins, 3D is here: point cloud library, IEEE International Conference on Robotics and Automation (ICRA), 2011. pp. 1–4.
[37] B. Steder, R. Rusu, K. Konolige, W. Burgard, NARF: 3D range image features for object recognition, Workshop on Defining and Solving Realistic Perception Problems in Personal Robotics at the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2010.
[38] T. Fiolka, J. Stuckler, D.A. Klein, D. Schulz, S. Behnke, SURE: surface entropy for distinctive 3D features, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7463, 2012. pp. 74–93. LNAI.
[39] S. Salti, A. Petrelli, F. Tombari, L.D. Stefano, On the affinity between 3D detectors and descriptors, Second Joint 3DIM/3DPVT Conference: 3D Imaging, Modeling, Processing, Visualization & Transmission, 1, 2012. pp. 425–4313.
[40] R. Hänsch, T. Weber, O. Hellwich, Comparison of 3D interest point detectors and descriptors for point cloud fusion, ISPRS Annals of Photogrammetry, Remote. Sens. Spat. Inf. Sci. (2014) 57–64. II-3 (September).
[41] Y. Guo, M. Bennamoun, F. Sohel, M. Lu, J. Wan, N.M. Kwok, A comprehensive performance evaluation of 3D local feature descriptors, Int. J. Comput. Vis. 116 (1) (2016) 66–89.
[42] R.B. Rusu, N. Blodow, M. Beetz, Fast point feature histograms (FPFH) for 3D registration, IEEE International Conference on Robotics and Automation (ICRA), 2009. pp. 3212–3217.
[43] S. Salti, F. Tombari, L. Di Stefano, SHOT: unique signatures of histograms for surface and texture description, Comput. Vis. Image Underst. 125 (2014) 251–264.
[44] M. Muja, D.G. Lowe, Scalable nearest neighbor algorithms for high dimensional data, IEEE Trans. Pattern Anal. Mach. Intell. 36 (11) (2014) 2227–2240.
[45] T. Ojala, M. Pietikainen, D. Harwood, Performance evaluation of texture measures with classification based on Kullback discrimination of distributions, 12th International Conference on Pattern Recognition (ICPR), 1, 1994. pp. 582–585.
[46] S. Chun, C. Lee, S. Lee, Facial expression recognition using extended local binary patterns of 3D curvature, Multimedia and Ubiquitous Engineering, 2013. pp. 1005–1012.
[47] C. Cortes, V. Vapnik, Support-vector networks, Mach. Learn. 20 (3) (1995) 273–297.
[48] J. Sturm, N. Engelhard, F. Endres, W. Burgard, D. Cremers, A benchmark for the evaluation of RGB-D SLAM systems, IEEE International Conference on Intelligent Robots and Systems (IROS), 2012. pp. 573–580.
[49] J. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Third ed., Pearson Education, Inc, 2010.
Creative Commons license:
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
https://creativecommons.org/licenses/by-nc-nd/4.0/
Format:
application/pdf, 12 pages
Publisher:
Elsevier
Source:
https://www.sciencedirect.com/science/article/pii/S0262885617301245