MarkerPose: Robust real-time planar target tracking for accurate stereo pose estimation

Despite the attention marker-less pose estimation has attracted in recent years, marker-based approaches still provide unbeatable accuracy under controlled environmental conditions. Thus, they are used in many fields such as robotics or biomedical applications but are primarily implemented through classical approaches, which require lots of heuristics and parameter tuning for reliable performance under different environments. In this work, we propose MarkerPose, a robust, real-time pose estimation system based on a planar target of three circles and a stereo vision system. MarkerPose is meant for high-accuracy pose estimation applications. Our method consists of two deep neural networks for marker point detection: a SuperPoint-like network for pixel-level accuracy keypoint localization and classification, and EllipSegNet, a lightweight ellipse segmentation network we introduce for sub-pixel-level accuracy keypoint detection. The marker's pose is estimated through stereo triangulation. The target point detection is robust to low lighting and motion blur conditions. We compared MarkerPose with a detection method based on classical computer vision techniques using a robotic arm for validation. The results show our method provides better accuracy than the classical technique. Finally, we demonstrate the suitability of MarkerPose in a 3D freehand ultrasound system, an application where highly accurate pose estimation is required. Code is available in Python and C++ at https://github.com/jhacsonmeza/MarkerPose. © 2021 IEEE.
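The stereo triangulation step mentioned in the abstract can be sketched with standard linear (DLT) triangulation. This is a generic illustration, not the MarkerPose implementation: in a real setup the projection matrices come from stereo calibration and the pixel coordinates from the detected marker centres.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two cameras.

    P1, P2 : (3, 4) camera projection matrices from stereo calibration.
    x1, x2 : (u, v) pixel coordinates of the same marker point in the
             left and right image.
    Returns the 3D point in the reference (left-camera) frame.
    """
    # Each image contributes two linear constraints on the homogeneous
    # 3D point X: u * (P[2] @ X) = P[0] @ X, and similarly for v.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applied to each of the three circle centres detected in both views, this yields the three 3D points from which the marker's pose can be computed.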

Authors:
Meza, Jhacson
Romero, Lenny A.
Marrugo, Andres G.
Resource type:
Article
Publication date:
2021
Institution:
Universidad Tecnológica de Bolívar
Repository:
Repositorio Institucional UTB
Language:
eng
OAI Identifier:
oai:repositorio.utb.edu.co:20.500.12585/12383
Online access:
https://hdl.handle.net/20.500.12585/12383
Keywords:
Object Detection;
Deep Learning;
IOU
LEMB
Rights:
openAccess
License:
http://creativecommons.org/licenses/by-nc-nd/4.0/
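The abstract's sub-pixel keypoint detection via ellipse segmentation can be illustrated with one common post-processing choice: the intensity-weighted centroid of the predicted mask. This is a hedged sketch of the general idea; how EllipSegNet's output is actually post-processed may differ.

```python
import numpy as np

def subpixel_center(mask):
    """Sub-pixel (x, y) centre of a soft segmentation mask.

    mask : (H, W) array of per-pixel probabilities that the pixel
           belongs to the ellipse. The probability-weighted centroid
           gives a sub-pixel estimate of the ellipse centre.
    """
    ys, xs = np.mgrid[0:mask.shape[0], 0:mask.shape[1]]
    w = mask.sum()
    return (xs * mask).sum() / w, (ys * mask).sum() / w
```

For a symmetric mask the centroid coincides with the ellipse centre, which is why segmenting the full ellipse, rather than detecting a single pixel, recovers the centre with sub-pixel accuracy.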
DOI:
10.1109/CVPRW53098.2021.00141
Source:
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Extent:
9 pages
Publication place:
Cartagena de Indias
License:
Attribution-NonCommercial-NoDerivatives 4.0 International
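Once the three circle centres have been triangulated, the marker's pose follows from rigidly aligning the known model geometry to the reconstructed 3D points. Below is a minimal sketch of one standard way to do this, the Kabsch/SVD alignment; the three-point model coordinates in the test are hypothetical, not the actual target dimensions, and the paper does not specify that this exact formulation is used.

```python
import numpy as np

def rigid_transform(model, observed):
    """Least-squares rotation R and translation t with observed ~ R @ model + t.

    model, observed : (N, 3) arrays of corresponding 3D points (N >= 3,
    non-collinear). Solved with the Kabsch/SVD method.
    """
    cm = model.mean(axis=0)
    co = observed.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (model - cm).T @ (observed - co)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against returning a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```

The returned pair (R, t) is the marker's pose in the camera frame, which is the quantity a downstream application such as 3D freehand ultrasound consumes.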