MarkerPose: Robust Real-time Planar Target Tracking for Accurate Stereo Pose Estimation

Despite the attention marker-less pose estimation has attracted in recent years, marker-based approaches still provide unbeatable accuracy under controlled environmental conditions. Thus, they are used in many fields such as robotics or biomedical applications, but they are primarily implemented through classical approaches, which require extensive heuristics and parameter tuning for reliable performance across environments. In this work, we propose MarkerPose, a robust, real-time pose estimation system based on a planar target of three circles and a stereo vision system. MarkerPose is meant for high-accuracy pose estimation applications. Our method consists of two deep neural networks for marker point detection: a SuperPoint-like network for pixel-level keypoint localization and classification, and EllipSegNet, a lightweight ellipse segmentation network we introduce for sub-pixel keypoint detection. The marker's pose is estimated through stereo triangulation. The target point detection is robust to low lighting and motion blur. We compared MarkerPose with a detection method based on classical computer vision techniques, using a robotic arm for validation. The results show our method provides better accuracy than the classical technique. Finally, we demonstrate the suitability of MarkerPose in a 3D freehand ultrasound system, an application where highly accurate pose estimation is required. Code is available in Python and C++.
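The geometric core of the stereo pose estimation the abstract describes — triangulate the three detected circle centers from a calibrated stereo pair, then assemble the marker's rigid pose from the three 3D points — can be sketched in plain NumPy. This is a minimal illustration under assumed conventions (linear DLT triangulation; marker origin at the first center, x-axis toward the second, z-axis normal to the target plane), not the authors' released implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coords (u, v).
    Returns the 3D point as the null vector of the stacked constraints."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # homogeneous solution, up to scale
    return X[:3] / X[3]

def pose_from_three_points(p0, p1, p2):
    """Rigid transform (R, t) of a planar three-circle target from its
    triangulated centers, under the axis convention assumed above."""
    x = p1 - p0
    x /= np.linalg.norm(x)
    z = np.cross(x, p2 - p0)       # normal to the target plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])
    return R, p0

# Hypothetical calibrated rig: identical intrinsics, 10 cm baseline.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Three circle centers of a target facing the rig, 1 m away.
centers = [np.array([0.0, 0.0, 1.0]),
           np.array([0.1, 0.0, 1.0]),
           np.array([0.0, 0.1, 1.0])]
pts3d = [triangulate(P1, P2, project(P1, X), project(P2, X)) for X in centers]
R, t = pose_from_three_points(*pts3d)
```

In the released code the 2D centers would come from the SuperPoint-like network and EllipSegNet; here they are synthesized by projecting known 3D points through an assumed rig, so the recovered pose can be checked against ground truth.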


Authors:
Meza, Jhacson
Romero, Lenny A.
Marrugo Hernández, Andrés Guillermo
Resource type:
Publication date:
2021
Institution:
Universidad Tecnológica de Bolívar
Repository:
Repositorio Institucional UTB
Language:
eng
OAI Identifier:
oai:repositorio.utb.edu.co:20.500.12585/10407
Online access:
https://hdl.handle.net/20.500.12585/10407
Keywords:
Robotics
MarkerPose
Biomedical applications
Neural networks
Computer vision
LEMB
Rights
openAccess
License
http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.date.issued.none.fl_str_mv 2021-05-29
dc.date.accessioned.none.fl_str_mv 2022-01-26T14:22:55Z
dc.date.available.none.fl_str_mv 2022-01-26T14:22:55Z
dc.date.submitted.none.fl_str_mv 2022-01-25
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/article
dc.type.hasversion.spa.fl_str_mv info:eu-repo/semantics/restrictedAccess
dc.type.spa.spa.fl_str_mv http://purl.org/coar/resource_type/c_2df8fbb1
dc.identifier.citation.spa.fl_str_mv Meza, Jhacson & Romero, Lenny & Marrugo, Andrés. (2021). MarkerPose: Robust Real-time Planar Target Tracking for Accurate Stereo Pose Estimation.
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/20.500.12585/10407
dc.identifier.doi.none.fl_str_mv 10.1109/CVPRW53098.2021.00141
dc.identifier.instname.spa.fl_str_mv Universidad Tecnológica de Bolívar
dc.identifier.reponame.spa.fl_str_mv Repositorio Universidad Tecnológica de Bolívar
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.uri.*.fl_str_mv http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.cc.*.fl_str_mv Attribution-NonCommercial-NoDerivatives 4.0 Internacional
dc.format.extent.none.fl_str_mv 9 pages
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.place.spa.fl_str_mv Cartagena de Indias
dc.source.spa.fl_str_mv Computer Vision and Pattern Recognition
bitstream.url.fl_str_mv https://repositorio.utb.edu.co/bitstream/20.500.12585/10407/1/2105.00368.pdf