Automatic detection of Parkinson’s disease from components of modulators in speech signals
Parkinson’s Disease (PD) is the second most common neurodegenerative disorder after Alzheimer’s disease. This disorder mainly affects older adults, with a prevalence of about 2%, and about 89% of people diagnosed with PD also develop speech disorders. This has led the scientific community to investigate the information...
- Authors:
- Moofarrry, Jhon Freddy
- Argüello-Vélez, Patricia
- Sarria-Paja, Milton
- Resource type:
- Journal article
- Publication date:
- 2020
- Institution:
- Corporación Universidad de la Costa
- Repository:
- REDICUC - Repositorio CUC
- Language:
- eng
- OAI Identifier:
- oai:repositorio.cuc.edu.co:11323/8725
- Online access:
- https://hdl.handle.net/11323/8725
https://doi.org/10.17981/cesta.01.01.2020.05
https://repositorio.cuc.edu.co/
- Keywords:
- Modulation spectrum
Covariance features
Parkinson’s disease
Speech signals
Pattern recognition
Espectro de modulación
Enfermedad de Parkinson
Señales de voz
Reconocimiento de patrones
Características de covarianza
- Rights
- openAccess
- License
- CC0 1.0 Universal
id |
RCUC2_69bf62c9f63195dae01107f2c6559b83 |
---|---|
oai_identifier_str |
oai:repositorio.cuc.edu.co:11323/8725 |
network_acronym_str |
RCUC2 |
network_name_str |
REDICUC - Repositorio CUC |
repository_id_str |
|
dc.title.spa.fl_str_mv |
Automatic detection of Parkinson’s disease from components of modulators in speech signals |
dc.title.translated.spa.fl_str_mv |
Detección automática de la enfermedad de Parkinson usando componentes moduladoras de señales de voz |
title |
Automatic detection of Parkinson’s disease from components of modulators in speech signals |
spellingShingle |
Automatic detection of Parkinson’s disease from components of modulators in speech signals Modulation spectrum Covariance features Parkinson’s disease Speech signals Pattern recognition Espectro de modulación Enfermedad de Parkinson Señales de voz Reconocimiento de patrones Características de covarianza |
title_short |
Automatic detection of Parkinson’s disease from components of modulators in speech signals |
title_full |
Automatic detection of Parkinson’s disease from components of modulators in speech signals |
title_fullStr |
Automatic detection of Parkinson’s disease from components of modulators in speech signals |
title_full_unstemmed |
Automatic detection of Parkinson’s disease from components of modulators in speech signals |
title_sort |
Automatic detection of Parkinson’s disease from components of modulators in speech signals |
dc.creator.fl_str_mv |
Moofarrry, Jhon Freddy; Argüello-Vélez, Patricia; Sarria-Paja, Milton |
dc.contributor.author.spa.fl_str_mv |
Moofarrry, Jhon Freddy; Argüello-Vélez, Patricia; Sarria-Paja, Milton |
dc.subject.proposal.eng.fl_str_mv |
Modulation spectrum Covariance features |
topic |
Modulation spectrum Covariance features Parkinson’s disease Speech signals Pattern recognition Espectro de modulación Enfermedad de Parkinson Señales de voz Reconocimiento de patrones Características de covarianza |
dc.subject.proposal.spa.fl_str_mv |
Parkinson’s disease Speech signals Pattern recognition Espectro de modulación Enfermedad de Parkinson Señales de voz Reconocimiento de patrones Características de covarianza |
description |
Parkinson’s Disease (PD) is the second most common neurodegenerative disorder after Alzheimer’s disease. This disorder mainly affects older adults, with a prevalence of about 2%, and about 89% of people diagnosed with PD also develop speech disorders. This has led the scientific community to investigate the information embedded in the speech signals of Parkinson’s patients, which has enabled not only diagnosis of the pathology but also follow-up of its evolution. In recent years, a large number of studies have focused on the automatic detection of voice-related pathologies in order to evaluate the voice objectively and non-invasively. In cases where the pathology primarily affects the vibratory patterns of the vocal folds, as in Parkinson’s, the analyses are typically performed on recordings of sustained vowels. This article proposes using information from slow and rapid variations in speech signals, also known as modulating components, combined with an effective dimensionality reduction approach whose output serves as input to the classification system. The proposed approach achieves classification rates higher than 88%, surpassing the classical approach based on Mel Frequency Cepstral Coefficients (MFCC). The results show that the information extracted from slowly varying components is highly discriminative for the task at hand and could support assisted-diagnosis systems for PD. |
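The abstract describes the pipeline only at a high level: modulation (envelope) components are extracted from the speech signal, a dimensionality reduction step produces a compact representation, and a classifier is compared against an MFCC baseline. The sketch below is a minimal, illustrative realization of that idea in Python, not the authors' implementation; the band edges, the Hilbert-envelope front end, the 20 Hz modulation cutoff, the covariance pooling (suggested by the "Covariance features" keyword), and the SVM settings are all assumptions made for illustration.

```python
# Illustrative sketch only: band-wise modulation-envelope features pooled with a
# covariance descriptor and fed to an SVM. All parameter choices are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def modulation_covariance_features(x, fs=16000, n_bands=8,
                                   frame_s=0.256, hop_s=0.064, max_mod_hz=20.0):
    """Covariance descriptor of band-wise low-frequency modulation energies."""
    # Acoustic filterbank: log-spaced band edges between 100 Hz and 4 kHz (assumed).
    edges = np.logspace(np.log10(100.0), np.log10(min(4000.0, fs / 2 - 1)), n_bands + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        envelopes.append(np.abs(hilbert(band)))      # temporal (amplitude) envelope
    envelopes = np.stack(envelopes)                  # shape: (n_bands, n_samples)

    win, hop = int(frame_s * fs), int(hop_s * fs)
    k = max(2, int(max_mod_hz * win / fs))           # keep only slow modulation bins
    frames = []
    for start in range(0, envelopes.shape[1] - win + 1, hop):
        seg = envelopes[:, start:start + win] * np.hanning(win)
        mod = np.abs(np.fft.rfft(seg, axis=1))       # per-band modulation spectrum
        frames.append(np.log(mod[:, :k].mean(axis=1) + 1e-10))
    feats = np.array(frames)                         # (n_frames, n_bands)

    cov = np.cov(feats, rowvar=False)                # covariance pooling over time
    iu = np.triu_indices(cov.shape[0])
    return cov[iu]                                   # fixed-length vector per recording


# Toy usage with synthetic signals; replace with real recordings and PD/control labels.
rng = np.random.default_rng(0)
X = np.array([modulation_covariance_features(rng.standard_normal(3 * 16000))
              for _ in range(10)])
y = np.array([0, 1] * 5)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Covariance pooling is used in this sketch because it turns a variable number of analysis frames into a single fixed-length vector per recording, which is the form of input a conventional SVM classifier expects.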
publishDate |
2020 |
dc.date.issued.none.fl_str_mv |
2020 |
dc.date.accessioned.none.fl_str_mv |
2021-09-21T13:31:48Z |
dc.date.available.none.fl_str_mv |
2021-09-21T13:31:48Z |
dc.type.spa.fl_str_mv |
Artículo de revista |
dc.type.coar.fl_str_mv |
http://purl.org/coar/resource_type/c_2df8fbb1 |
dc.type.coar.spa.fl_str_mv |
http://purl.org/coar/resource_type/c_6501 |
dc.type.content.spa.fl_str_mv |
Text |
dc.type.driver.spa.fl_str_mv |
info:eu-repo/semantics/article |
dc.type.redcol.spa.fl_str_mv |
http://purl.org/redcol/resource_type/ART |
dc.type.version.spa.fl_str_mv |
info:eu-repo/semantics/acceptedVersion |
format |
http://purl.org/coar/resource_type/c_6501 |
status_str |
acceptedVersion |
dc.identifier.citation.spa.fl_str_mv |
J. Moofarry, P. Argüello-Velez & J. Sarria-Paja, “Automatic detection of Parkinson’s disease from components of modulators in speech signals”, J. Comput. Electron. Sci.: Theory Appl., vol. 1, no. 1, pp. 71–82, 2020. https://doi.org/10.17981/cesta.01.01.2020.05 |
dc.identifier.uri.spa.fl_str_mv |
https://hdl.handle.net/11323/8725 |
dc.identifier.url.spa.fl_str_mv |
https://doi.org/10.17981/cesta.01.01.2020.05 |
dc.identifier.doi.spa.fl_str_mv |
10.17981/cesta.01.01.2020.05 |
dc.identifier.eissn.spa.fl_str_mv |
2745-0090 |
dc.identifier.instname.spa.fl_str_mv |
Corporación Universidad de la Costa |
dc.identifier.reponame.spa.fl_str_mv |
REDICUC - Repositorio CUC |
dc.identifier.repourl.spa.fl_str_mv |
https://repositorio.cuc.edu.co/ |
identifier_str_mv |
J. Moofarry, P. Argüello-Velez & J. Sarria-Paja, “Automatic detection of Parkinson’s disease from components of modulators in speech signals”, J. Comput. Electron. Sci.: Theory Appl., vol. 1, no. 1, pp. 71–82, 2020. https://doi.org/10.17981/cesta.01.01.2020.05 10.17981/cesta.01.01.2020.05 2745-0090 Corporación Universidad de la Costa REDICUC - Repositorio CUC |
url |
https://hdl.handle.net/11323/8725 https://doi.org/10.17981/cesta.01.01.2020.05 https://repositorio.cuc.edu.co/ |
dc.language.iso.none.fl_str_mv |
eng |
language |
eng |
dc.relation.ispartofjournal.spa.fl_str_mv |
Computer and Electronic Sciences: Theory and Applications |
dc.relation.citationendpage.spa.fl_str_mv |
82 |
dc.relation.citationstartpage.spa.fl_str_mv |
71 |
dc.relation.citationissue.spa.fl_str_mv |
1 |
dc.relation.citationvolume.spa.fl_str_mv |
1 |
dc.relation.ispartofjournalabbrev.spa.fl_str_mv |
CESTA |
dc.rights.spa.fl_str_mv |
CC0 1.0 Universal © The author; licensee Universidad de la Costa - CUC. |
dc.rights.uri.spa.fl_str_mv |
https://creativecommons.org/licenses/by-nc-nd/4.0/ |
dc.rights.accessrights.spa.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.rights.coar.spa.fl_str_mv |
http://purl.org/coar/access_right/c_abf2 |
rights_invalid_str_mv |
CC0 1.0 Universal © The author; licensee Universidad de la Costa - CUC. https://creativecommons.org/licenses/by-nc-nd/4.0/ http://purl.org/coar/access_right/c_abf2 |
eu_rights_str_mv |
openAccess |
dc.format.extent.spa.fl_str_mv |
12 páginas |
dc.format.mimetype.spa.fl_str_mv |
application/pdf |
dc.publisher.spa.fl_str_mv |
Corporación Universidad de la Costa |
dc.publisher.place.spa.fl_str_mv |
Barranquilla |
dc.source.spa.fl_str_mv |
Computer and Electronic Sciences: Theory and Applications |
institution |
Corporación Universidad de la Costa |
dc.source.url.spa.fl_str_mv |
https://revistascientificas.cuc.edu.co/CESTA/article/view/3374 |
bitstream.url.fl_str_mv |
https://repositorio.cuc.edu.co/bitstreams/71ed19b8-198c-4f55-bc06-0a302cca7608/download https://repositorio.cuc.edu.co/bitstreams/f98372fa-87a6-4acf-b32a-cc33142a3bac/download https://repositorio.cuc.edu.co/bitstreams/0f20f67a-63b1-47e9-9bc7-1e89064ebba8/download https://repositorio.cuc.edu.co/bitstreams/6c02f317-dd61-4329-af32-5083ffbf5e98/download https://repositorio.cuc.edu.co/bitstreams/2e754583-275e-43d1-9fd1-e93cf8d0c8ad/download https://repositorio.cuc.edu.co/bitstreams/3fa316d4-9f97-4e01-8a57-f8cc5c792c80/download |
bitstream.checksum.fl_str_mv |
8fbdc860c5cf0eda0d355e2911c2359a 42fd4ad1e89814f5e4a476b409eb708c e30e9215131d99561d40d6b0abbe9bad 224d64c0422095c560ae0c9997334b69 224d64c0422095c560ae0c9997334b69 d1d321688e85d32156090e6b6263c727 |
bitstream.checksumAlgorithm.fl_str_mv |
MD5 MD5 MD5 MD5 MD5 MD5 |
repository.name.fl_str_mv |
Repositorio de la Universidad de la Costa CUC |
repository.mail.fl_str_mv |
repdigital@cuc.edu.co |
_version_ |
1811760746807164928 |
spelling |
Moofarrry, Jhon Freddy (ORCID 0000-0002-0366-5396); Argüello-Vélez, Patricia (ORCID 0000-0002-5733-3506); Sarria-Paja, Milton (ORCID 0000-0003-4288-1742)
ORIGINAL: Automatic detection of Parkinson's disease from components of modulators in speech signals.pdf (application/pdf, 861846 bytes, MD5 8fbdc860c5cf0eda0d355e2911c2359a)
CC-LICENSE: license_rdf (application/rdf+xml, 701 bytes, MD5 42fd4ad1e89814f5e4a476b409eb708c)
LICENSE: license.txt (text/plain, 3196 bytes, MD5 e30e9215131d99561d40d6b0abbe9bad)
THUMBNAIL: Automatic detection of Parkinson's disease from components of modulators in speech signals.pdf.jpg (image/jpeg, 76456 bytes, MD5 224d64c0422095c560ae0c9997334b69)
TEXT: Automatic detection of Parkinson's disease from components of modulators in speech signals.pdf.txt (text/plain, 52698 bytes, MD5 d1d321688e85d32156090e6b6263c727) |