Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet

Ilustraciones, gráficos

Autores:
Álvarez Osorio, Carlos Andres
Tipo de recurso:
Trabajo de grado - Maestría
Fecha de publicación:
2024
Institución:
Universidad Nacional de Colombia
Repositorio:
Universidad Nacional de Colombia
Idioma:
spa
OAI Identifier:
oai:repositorio.unal.edu.co:unal/86601
Acceso en línea:
https://repositorio.unal.edu.co/handle/unal/86601
https://repositorio.unal.edu.co/
Palabra clave:
620 - Ingeniería y operaciones afines
Seguridad ciudadana - Medellín (Antioquia, Colombia)
Prevención del delito - Medellín (Antioquia, Colombia)
Innovaciones tecnológicas - Medellín (Antioquia, Colombia)
Seguridad Ciudadana
Transformada Wavelet
Señal de Audio
Modos de Correlación Wavelet
Aprendizaje de Máquina
Citizen security
Wavelet Transform
Audio signal
Wavelet Correlation Modes
Machine Learning
Rights
openAccess
License
Atribución-NoComercial-SinDerivadas 4.0 Internacional
id UNACIONAL2_5b233ca44081484bd9f2b7d72bea6eca
oai_identifier_str oai:repositorio.unal.edu.co:unal/86601
network_acronym_str UNACIONAL2
network_name_str Universidad Nacional de Colombia
repository_id_str
dc.title.spa.fl_str_mv Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet
dc.title.translated.eng.fl_str_mv Identification of crime events from audio signals using wavelet correlation modes
title Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet
spellingShingle Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet
620 - Ingeniería y operaciones afines
Seguridad ciudadana - Medellín (Antioquia, Colombia)
Prevención del delito - Medellín (Antioquia, Colombia)
Innovaciones tecnológicas - Medellín (Antioquia, Colombia)
Seguridad Ciudadana
Transformada Wavelet
Señal de Audio
Modos de Correlación Wavelet
Aprendizaje de Máquina
Citizen security
Wavelet Transform
Audio signal
Wavelet Correlation Modes
Machine Learning
title_short Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet
title_full Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet
title_fullStr Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet
title_full_unstemmed Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet
title_sort Identificación de eventos delictivos a partir de señales de audio utilizando modos de correlación wavelet
dc.creator.fl_str_mv Álvarez Osorio, Carlos Andres
dc.contributor.advisor.none.fl_str_mv Bolaños Martinez, Freddy
Fletscher Bocanegra, Luis Alejandro
dc.contributor.author.none.fl_str_mv Álvarez Osorio, Carlos Andres
dc.subject.ddc.spa.fl_str_mv 620 - Ingeniería y operaciones afines
topic 620 - Ingeniería y operaciones afines
Seguridad ciudadana - Medellín (Antioquia, Colombia)
Prevención del delito - Medellín (Antioquia, Colombia)
Innovaciones tecnológicas - Medellín (Antioquia, Colombia)
Seguridad Ciudadana
Transformada Wavelet
Señal de Audio
Modos de Correlación Wavelet
Aprendizaje de Máquina
Citizen security
Wavelet Transform
Audio signal
Wavelet Correlation Modes
Machine Learning
dc.subject.lemb.none.fl_str_mv Seguridad ciudadana - Medellín (Antioquia, Colombia)
Prevención del delito - Medellín (Antioquia, Colombia)
Innovaciones tecnológicas - Medellín (Antioquia, Colombia)
dc.subject.proposal.spa.fl_str_mv Seguridad Ciudadana
Transformada Wavelet
Señal de Audio
Modos de Correlación Wavelet
Aprendizaje de Máquina
dc.subject.proposal.eng.fl_str_mv Citizen security
Wavelet Transform
Audio signal
Wavelet Correlation Modes
Machine Learning
description Ilustraciones, gráficos

Durante muchos años, la seguridad en los entornos urbanos ha sido una preocupación para los habitantes de todas las ciudades del mundo. A causa de la inseguridad que existe en mayor o menor medida, los gobiernos en diferentes regiones del mundo buscan continuamente mecanismos para prevenir actos criminales. Este estudio presenta un método que busca prevenir delitos utilizando señales de audio que pueden capturarse en un entorno urbano. Para ello se utiliza una técnica emergente, denominada modos de correlación wavelet, con el fin de representar señales de audio de forma compacta; a continuación se implementa un método basado en aprendizaje automático para clasificar las señales de audio en dos categorías: aquellas asociadas con un evento o crimen violento y aquellas asociadas con un evento común (no violento o no criminal). En el estudio realizado fue posible concluir que los modos de correlación wavelet permiten clasificar este tipo de señales con una precisión mayor al 80 %, con tiempos de ejecución menores a 1 segundo y utilizando un máximo de 4 características para el entrenamiento de los modelos. Las pruebas se realizaron en un computador portátil con sistema operativo de 64 bits, procesador x64 AMD Ryzen 5 5500U, Windows 11 y 16 GB de RAM. (Tomado de la fuente)

For many years, safety in urban environments has been a concern for the inhabitants of cities worldwide. Because of the insecurity that exists to a greater or lesser extent, governments in different regions of the world continually seek mechanisms to prevent criminal acts. This study introduces a method to prevent crime events using audio signals that can be captured in an urban environment. For this, an emerging technique called wavelet correlation modes is used to represent audio signals in a compact manner, after which a machine learning-based method is implemented to classify the audio signals into two categories: those associated with a violent event or crime, and those associated with a common event (non-violent or non-criminal). The study concluded that wavelet correlation modes allow the classification of this type of signal with a precision greater than 80 % and execution times of less than 1 second, using at most 4 features to train the models. The tests were carried out on a laptop with a 64-bit operating system, an x64 AMD Ryzen 5 5500U processor, Windows 11, and 16 GB of RAM.

Este trabajo fue apoyado por el Fondo de Ciencia, Tecnología e Innovación (FCTeI) del Sistema General de Regalías (SGR) bajo el proyecto identificado con el código BPIN 2020000100044.
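The description above sketches the processing chain only at a high level: a wavelet decomposition of each audio clip, a correlation analysis of the resulting coefficient series (the ‘‘wavelet correlation modes’’), at most four features, and a machine-learning classifier; the reference list cites librosa, PyWavelets and scikit-learn as tooling. The snippet below is a minimal illustrative sketch of such a pipeline, not the thesis's confirmed implementation: the function name wavelet_correlation_features, the db4 wavelet, the 4-level stationary wavelet transform, the use of the correlation-matrix eigenvalues as features, and the SVM classifier are all assumptions made here.

    # Illustrative sketch only: the exact definition of the "wavelet correlation
    # modes" used in the thesis is not given in this record; db4, 4 levels and
    # the SVM classifier are assumptions.
    import numpy as np
    import librosa
    import pywt
    from sklearn.svm import SVC

    def wavelet_correlation_features(path, wavelet="db4", level=4):
        # Load the clip at its native sampling rate, mixed down to mono.
        x, _ = librosa.load(path, sr=None, mono=True)
        # The stationary wavelet transform requires len(x) % 2**level == 0.
        n = (len(x) // 2 ** level) * 2 ** level
        coeffs = pywt.swt(x[:n], wavelet, level=level)
        # One row per decomposition level: the detail coefficients of each band.
        details = np.vstack([cd for _, cd in coeffs])
        # Correlation matrix between bands; its eigenvalues (largest first)
        # serve as a compact, at-most-4-element feature vector.
        corr = np.corrcoef(details)
        return np.linalg.eigvalsh(corr)[::-1][:4]

    # Hypothetical usage: X, y built from labelled violent / non-violent clips.
    # clf = SVC(kernel="rbf").fit(X_train, y_train)
    # y_pred = clf.predict(X_test)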
publishDate 2024
dc.date.accessioned.none.fl_str_mv 2024-07-23T20:23:58Z
dc.date.available.none.fl_str_mv 2024-07-23T20:23:58Z
dc.date.issued.none.fl_str_mv 2024
dc.type.spa.fl_str_mv Trabajo de grado - Maestría
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/masterThesis
dc.type.version.spa.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.type.content.spa.fl_str_mv Text
dc.type.redcol.spa.fl_str_mv http://purl.org/redcol/resource_type/TM
status_str acceptedVersion
dc.identifier.uri.none.fl_str_mv https://repositorio.unal.edu.co/handle/unal/86601
dc.identifier.instname.spa.fl_str_mv Universidad Nacional de Colombia
dc.identifier.reponame.spa.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
dc.identifier.repourl.spa.fl_str_mv https://repositorio.unal.edu.co/
url https://repositorio.unal.edu.co/handle/unal/86601
https://repositorio.unal.edu.co/
identifier_str_mv Universidad Nacional de Colombia
Repositorio Institucional Universidad Nacional de Colombia
dc.language.iso.spa.fl_str_mv spa
language spa
dc.relation.indexed.spa.fl_str_mv LaReferencia
dc.relation.references.spa.fl_str_mv A. Gholamy, V. Kreinovich, and O. Kosheleva, ‘‘Why 70/30 or 80/20 relation between training and testing sets: A pedagogical explanation,’’ 2018.
A. Pitt, D. Dixon, et al., ‘‘Litmaps,’’ Litmaps Ltd., 2023.
S. Waldekar and G. Saha, ‘‘Analysis and classification of acoustic scenes with wavelet transform-based mel-scaled features,’’ Multimedia Tools and Applications, vol. 79, pp. 7911--7926, 3 2020.
G. Shen and B. Liu, ‘‘The visions, technologies, applications and security issues of internet of things,’’ pp. 1--4, IEEE, 5 2011
Markets and Markets, ‘‘Research and markets, internet of things (iot) market global forecast to 2021,’’ 2022.
B. Bowerman, J. Braverman, J. Taylor, H. Todosow, and U. von Wimmersperg, ‘‘The vision of a smart city,’’ 2nd International Life Extension Technology Workshop, 2000.
Gobernación de Antioquia, ‘‘Plan de desarrollo Unidos por la Vida,’’ 2020.
CEJ, ‘‘En 2021 aumentó el hurto a personas y otros delitos, advierte el reloj de la criminalidad de la CEJ,’’ 2021.
Medellín Cómo Vamos, ‘‘Informe de calidad de vida de Medellín, 2020. Seguridad ciudadana y convivencia,’’ 2020.
J. D. Rodríguez-Ortega, Y. A. Duarte-Velásquez, C. Gómez-Toro, and J. A. Cadavid-Carmona, ‘‘Seguridad ciudadana, violencia y criminalidad: una visión holística y criminológica de las cifras estadísticas del 2018,’’ Revista Criminalidad, vol. 61, pp. 9--58, 12 2019.
F. L. A. Tamayo-Arboleda and E. Norza, ‘‘Midiendo el crimen: cifras de criminalidad y operatividad policial en Colombia, año 2017,’’ Revista Criminalidad, vol. 60, pp. 49--71, 12 2018.
S. Mondal and A. Das Barman, ‘‘Deep learning technique based real-time audio event detection experiment in a distributed system architecture,’’ Computers and Electrical Engineering, vol. 102, p. 108252, 2022.
Y. Yamamoto, J. Nam, H. Terasawa, and Y. Hiraga, ‘‘Investigating time-frequency representations for audio feature extraction in singing technique classification,’’ in 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 890--896, 2021
M. Esmaeilpour, P. Cardinal, and A. L. Koerich, ‘‘Unsupervised feature learning for environmental sound classification using weighted cycle-consistent generative adversarial network,’’ Applied Soft Computing, vol. 86, p. 105912, 1 2020.
I. D. Ilyashenko, R. S. Nasretdinov, Y. A. Filin, and A. A. Lependin, ‘‘Trainable wavelet-like transform for feature extraction to audio classification,’’ Journal of Physics: Conference Series, vol. 1333, p. 32029, 10 2019.
D. Guillén, H. Esponda, E. Vázquez, and G. Idárraga-Ospina, ‘‘Algorithm for transformer differential protection based on wavelet correlation modes,’’ IET Generation, Transmission & Distribution, vol. 10, no. 12, pp. 2871--2879, 2016.
A. Rabaoui, M. Davy, S. Rossignol, and N. Ellouze, ‘‘Using one-class svms and wavelets for audio surveillance,’’ IEEE Transactions on Information Forensics and Security, vol. 3, pp. 763--775, 12 2008
A. I. Middya, B. Nag, and S. Roy, ‘‘Deep learning based multimodal emotion recognition using model-level fusion of audio–visual modalities,’’ Knowledge-Based Systems, vol. 244, p. 108580, 5 2022
Y. Xu, J. Yang, and K. Mao, ‘‘Semantic-filtered soft-split-aware video captioning with audio-augmented feature,’’ Neurocomputing, vol. 357, pp. 24--35, 9 2019
S. Durai, ‘‘Wavelet based feature vector formation for audio signal classification,’’ 10 2007
T. Düzenli and N. Ozkurt, ‘‘Comparison of wavelet based feature extraction methods for speech/music discrimination,’’ Istanbul University - Journal of Electrical and Electronics Engineering, vol. 11, pp. 617--621, 10 2011.
G. Tzanetakis, G. Essl, and P. Cook, ‘‘Audio analysis using the discrete wavelet transform,’’ pp. 318--323, 10 2001
C.-C. Lin, S.-H. Chen, T.-K. Truong, and Y. Chang, ‘‘Audio classification and categorization based on wavelets and support vector machine,’’ IEEE Transactions on Speech and Audio Processing, vol. 13, pp. 644--651, 2005.
K. Kim, D. H. Youn, and C. Lee, ‘‘Evaluation of wavelet filters for speech recognition,’’ SMC 2000 Conference Proceedings. 2000 IEEE International Conference on Systems, Man and Cybernetics. ’Cybernetics Evolving to Systems, Humans, Organizations, and their Complex Interactions’ (Cat. No.00CH37166), pp. 2891--2894, 2000.
J. Gowdy and Z. Tufekci, ‘‘Mel-scaled discrete wavelet coefficients for speech recognition,’’ pp. 1351--1354, IEEE, 2000.
S. K. Kopparapu and M. Laxminarayana, ‘‘Choice of mel filter bank in computing mfcc of a resampled speech,’’ pp. 121--124, IEEE, 5 2010.
K. V. K. Kishore and P. K. Satish, ‘‘Emotion recognition in speech using mfcc and wavelet features,’’ pp. 842--847, 2013.
J. Salamon, C. Jacoby, and J. P. Bello, ‘‘A dataset and taxonomy for urban sound research,’’ pp. 1041--1044, ACM, 11 2014.
A. Rakotomamonjy and G. Gasso, ‘‘Histogram of gradients of time-frequency representations for audio scene detection,’’ IEEE/ACM Transactions on Audio, Speech, and Language Processing, pp. 1--1, 2014.
S. R. Kadiri and P. Alku, ‘‘Subjective evaluation of basic emotions from audio–visual data,’’ Sensors, vol. 22, p. 4931, 6 2022.
K. Umapathy, S. Krishnan, and R. K. Rao, ‘‘Audio signal feature extraction and classification using local discriminant bases,’’ IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, pp. 1236--1246, 2007
F. Weninger, F. Eyben, B. W. Schuller, M. Mortillaro, and K. R. Scherer, ‘‘On the acoustics of emotion in audio: What speech, music, and sound have in common,’’ Frontiers in Psychology, vol. 4, 2013
P. Wu, J. Liu, Y. Shi, Y. Sun, F. Shao, Z. Wu, and Z. Yang, ‘‘Not only look, but also listen: Learning multimodal violence detection under weak supervision,’’ pp. 322--339, 10 2020
J. Liu, Y. Zhang, D. Lv, J. Lu, H. Xu, S. Xie, and Y. Xiong, ‘‘Classification method of birdsong based on gaborwt feature image and convolutional neural network,’’ pp. 134--140, 2021
W. Dai, C. Dai, S. Qu, J. Li, and S. Das, ‘‘Very deep convolutional neural networks for raw waveforms,’’ pp. 421--425, IEEE, 3 2017.
L. Deng, G. Hinton, and B. Kingsbury, ‘‘New types of deep neural network learning for speech recognition and related applications: an overview,’’ pp. 8599--8603, IEEE, 5 2013
J. Sharma, O.-C. Granmo, and M. Goodwin, ‘‘Environment sound classification using multiple feature channels and deep convolutional neural networks,’’ 10 2019
S. Lee and H.-S. Pang, ‘‘Feature extraction based on the non-negative matrix factorization of convolutional neural networks for monitoring domestic activity with acoustic signals,’’ IEEE Access, vol. 8, pp. 122384--122395, 2020.
J. Salamon and J. P. Bello, ‘‘Deep convolutional neural networks and data augmentation for environmental sound classification,’’ IEEE Signal Processing Letters, vol. 24, pp. 279--283, 3 2017.
T. Lv, H. yong Zhang, and C. hui Yan, ‘‘Double mode surveillance system based on remote audio/video signals acquisition,’’ Applied Acoustics, vol. 129, pp. 316--321, 1 2018.
G. Parascandolo, H. Huttunen, and T. Virtanen, ‘‘Recurrent neural networks for polyphonic sound event detection in real life recordings,’’ pp. 6440--6444, IEEE, 3 2016
Y. Tokozume and T. Harada, ‘‘Learning environmental sounds with end-to-end convolutional neural network,’’ pp. 2721--2725, IEEE, 3 2017.
D. Burgund, S. Nikolovski, D. Galić, and N. Maravić, ‘‘Pearson correlation in determination of quality of current transformers,’’ Sensors, vol. 23, no. 5, 2023
G. S. Shuhe Han, ‘‘Protection algorithm based on deep convolution neural network algorithm,’’ Computational Intelligence and Neuroscience, 2022.
A. Roy, D. Singh, R. K. Misra, and A. Singh, ‘‘Differential protection scheme for power transformers using matched wavelets,’’ IET Generation, Transmission & Distribution, vol. 13, pp. 2423--2437, May 2019.
S. Jena and B. R. Bhalja, ‘‘Initial travelling wavefront-based bus zone protection scheme,’’ IET Generation, Transmission & Distribution, vol. 13, pp. 3216--3229, July 2019.
R. C. Hendriks and T. Gerkmann, ‘‘Noise correlation matrix estimation for multi-microphone speech enhancement,’’ IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 223--233, 2012.
H. Hu, Z. He, Y. Zhang, and S. Gao, ‘‘Modal frequency sensitivity analysis and application using complex nodal matrix,’’ IEEE Transactions on Power Delivery, vol. 29, no. 2, pp. 969--971, 2014
P. S. Addison, The Illustrated Wavelet Transform Handbook. CRC Press, Jan. 2017
E. Johansson, ‘‘Wavelet theory and some of its applications,’’ Luleå University of Technology Department of Mathematics, vol. 48, 10 2005.
R. R. Merry, ‘‘Wavelet theory and applications : a literature study,’’ 2005.
A. Hadd and J. L. Rodgers, Understanding correlation matrices, vol. 186. SAGE Publications, Inc, 1 2021.
B. Kröse and P. van der Smagt, ‘‘An introduction to neural networks,’’ J Comput Sci, vol. 48, 01 1996.
M. Awad and R. Khanna, ‘‘Support vector machines for classification,’’ pp. 39--66, 04 2015
N. Shreyas, M. Venkatraman, S. Malini, and S. Chandrakala, ‘‘Trends of sound event recognition in audio surveillance: A recent review and study,’’ The Cognitive Approach in Cloud Computing and Internet of Things Technologies for Surveillance Tracking Systems, pp. 95--106, 2020
M. Elesawy, M. Hussein, et al., ‘‘Real life violence situations dataset,’’ Kaggle, 2019.
T. S., ‘‘Sound events for surveillance applications (1.0.0) [data set],’’ 2019.
S. R. Livingstone and F. A. Russo, ‘‘Ravdess emotional speech audio,’’ 2019.
B. McFee, M. McVicar, D. Faronbi, I. Roman, M. Gover, S. Balke, S. Seyfarth, A. Malek, C. Raffel, V. Lostanlen, B. van Niekirk, D. Lee, F. Cwitkowitz, F. Zalkow, O. Nieto, D. Ellis, J. Mason, K. Lee, B. Steers, E. Halvachs, C. Thomé, F. Robert-Stöter, R. Bittner, Z. Wei, A. Weiss, E. Battenberg, K. Choi, R. Yamamoto, C. Carr, A. Metsai, S. Sullivan, P. Friesch, A. Krishnakumar, S. Hidaka, S. Kowalik, F. Keller, D. Mazur, A. Chabot-Leclerc, C. Hawthorne, C. Ramaprasad, M. Keum, J. Gomez, W. Monroe, V. A. Morozov, K. Eliasi, nullmightybofo, P. Biberstein, N. D. Sergin, R. Hennequin, R. Naktinis, beantowel, T. Kim, J. P. Åsen, J. Lim, A. Malins, D. Hereñú, S. van der Struijk, L. Nickel, J. Wu, Z. Wang, T. Gates, M. Vollrath, A. Sarroff, Xiao-Ming, A. Porter, S. Kranzler, Voodoohop, M. D. Gangi, H. Jinoz, C. Guerrero, A. Mazhar, toddrme2178, Z. Baratz, A. Kostin, X. Zhuang, C. T. Lo, P. Campr, E. Semeniuc, M. Biswal, S. Moura, P. Brossier, H. Lee, and W. Pimenta, ‘‘librosa/librosa: 0.10.1,’’ Aug. 2023.
G. R. Lee, R. Gommers, F. Waselewski, K. Wohlfahrt, and A. O'Leary, ‘‘Pywavelets: A python package for wavelet analysis,’’ Journal of Open Source Software, vol. 4, no. 36, p. 1237, 2019.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, ‘‘Scikit-learn: Machine learning in Python,’’ Journal of Machine Learning Research, vol. 12, pp. 2825--2830, 2011.
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.license.spa.fl_str_mv Atribución-NoComercial-SinDerivadas 4.0 Internacional
dc.rights.uri.spa.fl_str_mv http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
rights_invalid_str_mv Atribución-NoComercial-SinDerivadas 4.0 Internacional
http://creativecommons.org/licenses/by-nc-nd/4.0/
http://purl.org/coar/access_right/c_abf2
eu_rights_str_mv openAccess
dc.format.extent.spa.fl_str_mv 54 páginas
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.coverage.city.none.fl_str_mv Medellín (Antioquia, Colombia)
dc.publisher.spa.fl_str_mv Universidad Nacional de Colombia
dc.publisher.program.spa.fl_str_mv Medellín - Minas - Maestría en Ingeniería - Automatización Industrial
dc.publisher.faculty.spa.fl_str_mv Facultad de Minas
dc.publisher.place.spa.fl_str_mv Medellín, Colombia
dc.publisher.branch.spa.fl_str_mv Universidad Nacional de Colombia - Sede Medellín
institution Universidad Nacional de Colombia
bitstream.url.fl_str_mv https://repositorio.unal.edu.co/bitstream/unal/86601/1/license.txt
https://repositorio.unal.edu.co/bitstream/unal/86601/2/1152199519.2024.pdf
https://repositorio.unal.edu.co/bitstream/unal/86601/3/1152199519.2024.pdf.jpg
bitstream.checksum.fl_str_mv eb34b1cf90b7e1103fc9dfd26be24b4a
2e601cd83340648fb1c8340338c3bbe6
cd1d6a0101658a581ae2f08d029db779
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional Universidad Nacional de Colombia
repository.mail.fl_str_mv repositorio_nal@unal.edu.co
_version_ 1814089300674871296