Design of a protocol for the measurement of physiological and emotional responses to sound stimuli

A protocol for the measurement of physiological and emotional responses to sound stimuli is presented. The sounds used in the study were created by comparing different generation methods and possible bandwidths. They correspond to white noise filtered at 3 center frequencies, namely 125 Hz, 500 Hz, and 3150 Hz, with a variable bandwidth based on 1/3 octaves. Additionally, spatial information was added to the sounds by convolving them with a simulated binaural impulse response. Two experiments were conducted. The first presented the 3 sounds at different sound pressure levels, between 50 dB and 80 dB in 6 steps. Both valence and arousal changed as the level increased, with valence decreasing and arousal increasing, suggesting a possible relation between the emotions a sound elicits and its sound pressure level. The second experiment presented image and sound simultaneously: the sounds described above at a fixed level of 65 dB, together with two images, one with positive semantic content and one with negative, both taken from the International Affective Picture System (IAPS). Responses were measured with the Self-Assessment Manikin (SAM) and Noldus FaceReader technology. The results obtained with the SAM were inconclusive, probably due to sample size, experiment design, and other factors. The results obtained with the FaceReader showed clear reactions from participants to the audiovisual stimuli.
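The stimulus construction described in the abstract can be illustrated with a minimal sketch. This is not the author's exact procedure (the thesis compared several methods): the brick-wall FFT filter, the sample rate, and the `brir_left` placeholder are assumptions made here for illustration. It produces white noise limited to one 1/3-octave band around each center frequency and scales it to the six presentation levels.

```python
import numpy as np

def third_octave_noise(fc, fs=44100, dur=1.0, seed=0):
    """White noise band-limited to the 1/3-octave band centered at fc (Hz),
    using a frequency-domain (brick-wall) filter -- one possible method."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.standard_normal(n)
    # 1/3-octave band edges: fc / 2^(1/6) and fc * 2^(1/6)
    f_lo, f_hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0  # zero out-of-band bins
    return np.fft.irfft(spec, n=n)

def scale_to_level(x, level_db, ref_db=80.0):
    """Scale a signal (level_db - ref_db) dB relative to the reference level."""
    return x * 10 ** ((level_db - ref_db) / 20)

# Six presentation levels between 50 dB and 80 dB, as in the first experiment.
levels = np.linspace(50, 80, 6)  # 50, 56, 62, 68, 74, 80
stimuli = {fc: [scale_to_level(third_octave_noise(fc), L) for L in levels]
           for fc in (125, 500, 3150)}

# Spatialization (sketch only): convolve each stimulus with a simulated
# binaural impulse response; `brir_left` is a hypothetical BRIR channel.
# left_ear = np.convolve(stimuli[500][0], brir_left)
```

Each 6 dB step halves/doubles the amplitude (a factor of 10^(6/20) ≈ 2), so the full 30 dB range spans an amplitude ratio of about 31.6.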

Full description

Author:
Macía Arango, Andrés Felipe
Resource type:
Bachelor's thesis (Trabajo de grado - Pregrado)
Publication date:
2017
Institution:
Universidad de San Buenaventura
Repository:
Repositorio USB
Language:
Spanish (spa)
OAI Identifier:
oai:bibliotecadigital.usb.edu.co:10819/4131
Online access:
http://hdl.handle.net/10819/4131
Keywords:
Acoustic noise
Sound
SAM
Face reader
Emotions
Crossmodal
Stimulation (Estimulación)
Noise (Ruido)
Acoustic measurement (Medición acústica)
Sound level meter (Sonómetro)
Rights
License
Attribution-NonCommercial-NoDerivs 2.5 Colombia (CC BY-NC-ND 2.5 CO)
Advisor:
Ochoa Villegas, Jonathan
Extent:
63 pages
Format:
application/pdf (online resource)
Faculty:
Engineering (Ingenierías)
Program:
Sound Engineering (Ingeniería de Sonido)
Campus:
Medellín
License URL:
http://creativecommons.org/licenses/by-nc-nd/2.5/co/