Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución

The advance of artificial intelligence (AI) and its large language models in medicine, especially in clinical diagnosis, raises dilemmas and uncertainties that should not be set aside and that demand analysis from a bioethical perspective. These tools promise to improve diagnostic accuracy and optimize resources, but they also generate uncertainty as well as bioethical and legal concerns. In this context, the precautionary principle must be considered: it calls for acting in the face of potentially significant or irreparable risks, even without complete certainty about them. Through a critical essay, the work seeks to identify the conditions needed to integrate these tools safely and responsibly, giving priority to the precautionary principle. It advocates a prudent approach that balances technological innovation with patient safety and the fundamental values of contemporary medicine.


Autores:
Rincón Arenas, Paula Steffany
Tipo de recurso:
https://purl.org/coar/resource_type/c_7a1f
Fecha de publicación:
2024
Institución:
Universidad El Bosque
Repositorio:
Repositorio U. El Bosque
Idioma:
spa
OAI Identifier:
oai:repositorio.unbosque.edu.co:20.500.12495/13868
Acceso en línea:
https://hdl.handle.net/20.500.12495/13868
Palabra clave:
Bioética
Principio de precaución
Inteligencia artificial
Diagnóstico clínico
Incertidumbre
Bioethics
Precautionary principle
Artificial intelligence
Clinical diagnosis
Uncertainty
WB60
Rights
License
Attribution-NonCommercial-ShareAlike 4.0 International
id UNBOSQUE2_6e22505f7d60e0c72273704d87b1e041
oai_identifier_str oai:repositorio.unbosque.edu.co:20.500.12495/13868
network_acronym_str UNBOSQUE2
network_name_str Repositorio U. El Bosque
repository_id_str
dc.title.none.fl_str_mv Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución
dc.title.translated.none.fl_str_mv Using Large Language Models to Make Clinical Diagnoses: From the Precautionary Principle
title Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución
spellingShingle Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución
Bioética
Principio de precaución
Inteligencia artificial
Diagnóstico clínico
Incertidumbre
Bioethics
Precautionary principle
Artificial intelligence
Clinical diagnosis
Uncertainty
WB60
title_short Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución
title_full Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución
title_fullStr Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución
title_full_unstemmed Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución
title_sort Uso de grandes modelos de lenguaje para realizar diagnósticos clínicos: desde el principio de precaución
dc.creator.fl_str_mv Rincón Arenas, Paula Steffany
dc.contributor.advisor.none.fl_str_mv Tellez Alarcon, Manuela
dc.contributor.author.none.fl_str_mv Rincón Arenas, Paula Steffany
dc.contributor.orcid.none.fl_str_mv Rincón Arenas, Paula Steffany [0009-0008-8445-9738]
dc.subject.none.fl_str_mv Bioética
Principio de precaución
Inteligencia artificial
Diagnóstico clínico
Incertidumbre
topic Bioética
Principio de precaución
Inteligencia artificial
Diagnóstico clínico
Incertidumbre
Bioethics
Precautionary principle
Artificial intelligence
Clinical diagnosis
Uncertainty
WB60
dc.subject.keywords.none.fl_str_mv Bioethics
Precautionary principle
Artificial intelligence
Clinical diagnosis
Uncertainty
dc.subject.nlm.none.fl_str_mv WB60
description The advance of artificial intelligence (AI) and its large language models in medicine, especially in clinical diagnosis, raises dilemmas and uncertainties that should not be set aside and that demand analysis from a bioethical perspective. These tools promise to improve diagnostic accuracy and optimize resources, but they also generate uncertainty as well as bioethical and legal concerns. In this context, the precautionary principle must be considered: it calls for acting in the face of potentially significant or irreparable risks, even without complete certainty about them. Through a critical essay, the work seeks to identify the conditions needed to integrate these tools safely and responsibly, giving priority to the precautionary principle. It advocates a prudent approach that balances technological innovation with patient safety and the fundamental values of contemporary medicine.
publishDate 2024
dc.date.issued.none.fl_str_mv 2024-11
dc.date.accessioned.none.fl_str_mv 2025-02-06T21:04:59Z
dc.date.available.none.fl_str_mv 2025-02-06T21:04:59Z
dc.type.coar.fl_str_mv http://purl.org/coar/resource_type/c_7a1f
dc.type.local.spa.fl_str_mv Tesis/Trabajo de grado - Monografía - Especialización
dc.type.coar.none.fl_str_mv https://purl.org/coar/resource_type/c_7a1f
dc.type.driver.none.fl_str_mv info:eu-repo/semantics/bachelorThesis
dc.type.coarversion.none.fl_str_mv https://purl.org/coar/version/c_ab4af688f83e57aa
format https://purl.org/coar/resource_type/c_7a1f
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/20.500.12495/13868
dc.identifier.instname.spa.fl_str_mv instname:Universidad El Bosque
dc.identifier.reponame.spa.fl_str_mv reponame:Repositorio Institucional Universidad El Bosque
dc.identifier.repourl.none.fl_str_mv repourl:https://repositorio.unbosque.edu.co
url https://hdl.handle.net/20.500.12495/13868
identifier_str_mv instname:Universidad El Bosque
reponame:Repositorio Institucional Universidad El Bosque
repourl:https://repositorio.unbosque.edu.co
dc.language.iso.fl_str_mv spa
language spa
dc.relation.references.none.fl_str_mv Aamodt, A., & Plaza, E. (1994). Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AI Communications, 7, 39-59. https://www.iiia.csic.es/~enric/papers/AICom.pdf
Abrams, C. (2019). Google's Effort to Prevent Blindness Shows AI Challenges. The Wall Street Journal. https://www.wsj.com/articles/googles-effort-to-prevent-blindness-hits-roadblock-11548504004.
Agrawal, A. (2009). Medication errors: prevention using information technology systems. Br J Clin Pharmacol, 67(6), 681-686. https://pubmed.ncbi.nlm.nih.gov/19594538/
Ahuja, A. S. (2019). The impact of artificial intelligence in medicine on the future role of the physician. PeerJ, 7, e7702. https://peerj.com/articles/7702/
Howard, A., Hope, W., & Gerada, A. (2023). ChatGPT and antimicrobial advice: the end of the consulting infection doctor? The Lancet Infectious Diseases, 23(4), 405-406. https://www.thelancet.com/pdfs/journals/laninf/PIIS1473-3099(23)00113-5.pdf
Alexander, P. L., Martindale, C. D., Llewellyn, R. O., de Visser, B., Ng, V., Ngai, A. U., Kale, L. F., di Ruffano, R. M., Golub, G. S., Collins, D., Moher, M. D., McCradden, L., Oakden-Rayner, S. C., Rivera, M., Calvert, C. J., Kelly, C. S., Lee, C., Yau, A.-W., Chan, P. A., Keane, A. L. B., Denniston, A. K., & Liu, X. (2024). Concordance of randomised controlled trials for artificial intelligence interventions with the CONSORT-AI reporting guidelines. Nature Communications, 15(1619). https://doi.org/10.1038/s41467-024-45355-3
Andorno, R. (2008). Principio de precaución. En J. C. Tealdi (Dir.), Diccionario latinoamericano de bioética (pp. 345-347). Bogotá: UNESCO-Universidad Nacional de Colombia.
Aquino, S. T. (2012, septiembre). Partes cuasi integrales de la prudencia. Suma teológica - Parte II-IIae - Cuestión 49. https://hjg.com.ar/sumat/c/c49.html#a8
Attewell, P. (1987). The Deskilling Controversy. Work and Occupations, 14(3), 323-346. https://onwork.edu.au/bibitem/1987-Attewell,Paul-The+Deskilling+Controversy/
Aung, Y. Y. M., Wong, D. C. S., & Ting, D. S. W. (2021). The promise of artificial intelligence: a review of the opportunities and challenges of artificial intelligence in healthcare. British Medical Bulletin, 139(1), 4-15. https://doi.org/10.1093/bmb/ldab016
BBC News Mundo. (2023, 20 de abril). Qué es la misteriosa “caja negra” de la inteligencia artificial que desconcierta a los expertos (y por qué aún no entendemos cómo aprenden las máquinas). BBC News Mundo. https://www.bbc.com/mundo/noticias-65331262
Beede, E., Baylor, E., Hersch, F., & Iurchenko, A. (2020). A human-centered evaluation of a deep learning system deployed in clinics for the detection of diabetic retinopathy. Association for Computing Machinery. https://doi.org/10.1145/3313831.3376718
Blanco-Gonzalez, A., Cabezón, A., Seco-González, A., Conde-Torres, D., Antelo-Riveiro, P., Piñeiro, Á., & Garcia-Fandino, R. (2023). The role of AI in drug discovery: Challenges, opportunities, and strategies. Pharmaceuticals, 16(6), 891. https://www.mdpi.com/1424-8247/16/6/891
Blas-Lahitte H. S. V. (2011). Aportes para una bioética medioambiental y la cohabitabilidad humana desde una visión relacional. Persona y Bioética, 1(15), 40-51. https://personaybioetica.unisabana.edu.co/index.php/personaybioetica/article/view/1909
Bonamigo, E. L. (2010). El principio de precaución: Un nuevo principio bioético y biojurídico. [Tesis de doctorado, Universidad Rey Juan Carlos]. http://www.estsp.ipp.pt/fileManager/editor/Documentos_Publicos/Comissao%20de%20Etica/Acervo%20C.E./Principios_bioeticos/7.pdf
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., & Amodei, D. (2020). Language Models are Few-Shot Learners. arXiv.org. https://arxiv.org/abs/2005.14165
Capella, V. B. (2007). Intervenciones genéticas en la línea germinal humana y justicia. Biotecnología y Posthumanismo., 461-486. https://dialnet.unirioja.es/servlet/articulo?codigo=2671225
Cascella, M., Montomoli, J., Bellini, V., & Bignami, E. (2023). Evaluating the feasibility of ChatGPT in healthcare: An analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47. https://pmc.ncbi.nlm.nih.gov/articles/PMC9985086/
Castelvecchi, D. (2016). Can we open the black box of AI? Nature, 538(7623). https://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731
Chanchan-He, W., Liu, W., Xu, J., Huang, Y., Dong, Z., Wu, Y., & Kharrazi, H. (2024). Efficiency, accuracy, and health professionals' perspectives regarding artificial intelligence in radiology practice: A scoping review. iRADIOLOGY, 2(2), 156–172. https://doi.org/10.1002/ird3.63
Chen, I. Y., Szolovits, P., & Ghassemi, M. (2019, febrero). Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics, 21(2), E167-E179. https://journalofethics.ama-assn.org/article/can-ai-help-reduce-disparities-general-medical-and-mental-health-care/2019-02
Cohen, J. P., Cao, T., Viviano, J. D., Huang, C. W., Fralick, M., Ghassemi, M., Mamdani, M., Greiner, R., & Bengio, Y. (2021). Problems in the deployment of machine-learned models in health care. CMAJ, 193(35), 1391-1394. https://pubmed.ncbi.nlm.nih.gov/34462316/
Comisión Mundial de Ética del Conocimiento Científico y la Tecnología [COMEST]. (2005). Informe del Grupo de Expertos sobre el Principio Precautorio. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000139578_spa
Comisión Europea. (2000). Comunicación de la Comisión sobre el recurso al principio de precaución. https://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2000:0001:FIN:es:PDF
Córdoba, S. M. (2008). Estudios contemporáneos sobre ética. Universitas.
Cortina, A. (2004). Fundamentos filosóficos del principio de precaución. En R. C. Maria, Principio de precaución, Biotecnología y Derecho (pp. 3-16). Comares
Council of Europe. (2018). Discrimination, artificial intelligence, and algorithmic decision-making. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73
Couzin-Frankel, J. (2019). Medicine contends with how to use artificial intelligence. Science, 364(6446). https://www.science.org/doi/10.1126/science.364.6446.1119
Dias, L. D. (2021). Deskilling of medical professionals: an unintended consequence of AI implementation? Giornale di filosofia, 2(2). https://mimesisjournals.com/ojs/index.php/giornale-filosofia/article/view/1691
Durning, S. J., Artino, A. R., Schuwirth, L., & Van Der Vleuten, C. (2013). Clarifying Assumptions to Enhance Our Understanding and Assessment of Clinical Reasoning. Academic Medicine, 88(4), 442-448. https://doi.org/10.1097/acm.0b013e3182851b5b
European Commission Executive Agency for Small and Medium-size Enterprises. (2020). Artificial intelligence-based software as a medical device. https://ati.ec.europa.eu/sites/default/files/202007/ATI%20-%20Artificial%20Intelligencebased%20software%20as%20a%20medical%20device.pdf
Fernández, R. (2019). Robótica, inteligencia artificial y seguridad: ¿Cómo encajar la responsabilidad civil? https://riunet.upv.es/bitstream/handle/10251/117875/Rob%c3%b3tica.pdf?sequence=1&isAllowed=y
Foster, K. R., Koprowski, R., & Skufca, J. D. (2014). Machine learning, medical diagnosis, and biomedical engineering research - Commentary. BioMedical Engineering OnLine, 13(94). https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/1475-925X-13-94
Garrido, G. M. T. (2024). El Principio de Precaución en Bioética. bio.ética web. https://www.bioeticaweb.com/el-principio-de-precauciasn-en-bioactica-g-tomais-garrido/
Gaube, S., Suresh, H., Raue, M., Merritt, A., Berkowitz, S. J., Lermer, E., Coughlin, J. F., Guttag, J. V., Colak, E., & Ghassemi, M. (2021). Do as AI say: susceptibility in deployment of clinical decision-aids. Npj Digital Medicine, 4(1). https://doi.org/10.1038/s41746-021-00385-9
Gebauer, S., & Eckert, C. (2024). Survey of US physicians’ attitudes and knowledge of AI. BMJ Evidence-based Medicine, 29(4), 279-281. https://doi.org/10.1136/bmjebm-2023-112726
Glauser, W. (2020). AI in health care: Improving outcomes or threatening equity? Canadian Medical Association Journal, 192(1). https://pmc.ncbi.nlm.nih.gov/articles/PMC6944301/
Gordijn, B., & Have, H. T. (2023). ChatGPT: Evolution or revolution? Medicine, Health Care and Philosophy, 26, 1–2. https://doi.org/10.1007/s11019-023-10136-0
Grote, T. (2021). Trustworthy medical AI systems need to know when they don’t know. Journal of Medical Ethics, 47, 337-338. https://jme.bmj.com/content/47/5/337.long
Gruetzemacher, R., Paradice, D., & Lee, K. B. (2020). Forecasting extreme labor displacement: A survey of AI practitioners. Technological Forecasting And Social Change, 161, 120323. https://doi.org/10.1016/j.techfore.2020.120323
Guío-Español, A., Tamayo-Uribe, E., & Gomez-Ayerbe, P. (2021). Marco ético para la inteligencia artificial en Colombia. Ministerio de Ciencia, Tecnología e Innovación. https://minciencias.gov.co/sites/default/files/marco-etico-ia-colombia-2021.pdf
Hoff, T. (2011). Deskilling and adaptation among primary care physicians using two work innovations. Health Care Management Review, 338-348. https://pubmed.ncbi.nlm.nih.gov/21685794/
Holdsworth, J., & Scapicchio, M. (2024, junio 17). ¿Qué es el deep learning?. IBM. https://www.ibm.com/es-es/topics/deep-learning
Holohan, M. (2023, 11 de septiembre). A boy saw 17 doctors over 3 years for chronic pain. ChatGPT found the diagnosis. TODAY all day. https://www.today.com/health/mom-chatgpt-diagnosis-pain-rcna101843
Hottois, G. (2005). Cultura tecnocientífica y medio ambiente. La biodiversidad en el tecnocosmos. En J. Escobar Triana, C. E. Maldonado, M. A. Sánchez G., P. Simón Lorda, K. Cranley Glass, R. Villarroel, A. Couceiro Vidal, M. F. Castro F., Y. Bernal G., T. León Sicard, & S. E. Arango D. (Eds.). Bioética y medio ambiente. Colección Bios y Ethos (pp. 21-41). Kimpres Ltda.
IBM. (s.f.). Qué son los modelos de lenguaje grande (LLM). https://www.ibm.com/mx-es/topics/large-language-models?mhsrc=ibmsearch_a&mhq=que%20son%20los%20llm
Habli, I., Lawton, T., & Porter, Z. (2020). Artificial intelligence in health care: accountability and safety. Bulletin of the World Health Organization, 98(4), 251-256. https://pmc.ncbi.nlm.nih.gov/articles/PMC7133468/
Jonas, H. (1995). El principio de responsabilidad: Ensayo de una ética para la civilización tecnológica. Herder. https://doi.org/10.2307/j.ctvt9k2sz
Juarez, J. M. (Marzo de 2018). #1 Tres cosas que saber sobre la Inteligencia Artificial en Medicina [Mensaje en un blog]. Blog de IA, Computación y Medicina https://webs.um.es/jmjuarez/1-queesinteligenciaartificialmedicina/
Jungmann, F., Jorg, T., Hahn, F., Pinto Dos Santos, D., Jungmann, S. M., Düber, C., Mildenberger, P., & Kloeckner, R. (2021). Attitudes toward artificial intelligence among radiologists, IT specialists, and industry. Academic Radiology, 28(6), 834-840. https://pubmed.ncbi.nlm.nih.gov/32414637/
Kim, D. W., Jang, H. Y., Kim, K. W., & Shin, Y. (2019). Design characteristics of studies reporting the performance of artificial intelligence algorithms for diagnostic analysis of medical images: Results from recently published papers. Korean Journal of Radiology, 20(3), 405-410. https://pubmed.ncbi.nlm.nih.gov/30799571/
Khedkar, S., Gandhi, P., Shinde, G., & Subramanian, V. (2019). Deep Learning and Explainable AI in Healthcare Using EHR. Springer. https://doi.org/10.1007/978-3-030-33966-1_7
Kompa, B., Snoek, J., & Beam, A. L. (2021). Second opinion needed: Communicating uncertainty in medical machine learning. npj digital medicine, 4(4). https://www.nature.com/articles/s41746-020-00367-3
Kottow, M. (2011). Bioética pública: una propuesta. Revista Bioética, 19(1), 61-76. http://www.redalyc.org/articulo.oa?id=361533255005
Kung, T. H., Cheatham, M., ChatGPT, Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2022). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. medRxiv. https://doi.org/10.1101/2022.12.19.22283643
Lakhani, P., & Sundaram, B. (2017). Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks. Radiology, 284(2), 574-582. https://pubmed.ncbi.nlm.nih.gov/28436741/
Langlotz, C. P. (2019). Will Artificial Intelligence Replace Radiologists? Radiology: Artificial Intelligence, 1(3). https://pmc.ncbi.nlm.nih.gov/articles/PMC8017417/
Lebrún, G. (1999). Sobre a Tecnofobia. En NOVAES Adauto. A crise da razão. (pp. 471-494). São Paulo: Companhia das Letras.
Lee, J. T., Moffett, A. T., Maliha, G., Faraji, Z., Kanter, G. P., & Weissman, G. E. (2023). Analysis of devices authorized by the FDA for clinical decision support in critical care. JAMA Internal Medicine, 183(12), 1399-1402. https://pubmed.ncbi.nlm.nih.gov/37812404/
Leiss, W., Beck, U., Ritter, M., & Lash, S. (2000). Risk society: Towards a new modernity. Canadian Journal of Sociology, 19(4), 544-547.
Lenharo, M. (2024). The testing of AI in medicine is a mess. Here's how it should be done. Nature, 632, 722-724. https://www.nature.com/articles/d41586-024-02675-0
Liu, X., Faes, L., Kale, A. U., Wagner, S. K., Fu, D. J., Bruynseels, A., Mahendiran, T., Moraes, G., Shamdas, M., Kern, C., Ledsam, J. R., Schmid, M. K., Balaskas, K., Topol, E. J., Bachmann, L. M., Keane, P. A., & Denniston, A. K. (2019). A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digital Health, 1, 271-297. https://pubmed.ncbi.nlm.nih.gov/33323251/
Liu, Y., Gadepalli, K., Norouzi, M., Dahl, G. E., Kohlberger, T., Boyko, A., Venugopalan, S., Timofeev, A., Nelson, P. Q., Corrado, G. S., Hipp, J. D., Peng, L., & Stumpe, M. C. (2017). Detecting cancer metastases on gigapixel pathology images. arXiv. https://doi.org/10.48550/arXiv.1703.02442
Llano, A. A. (6 de Mayo de 2024). Proyectos de Ley para uso responsable de IA y protección laboral ante nuevas tecnologías. Congreso de la República de Colombia. https://www.senado.gov.co/index.php/el-senado/noticias/5476-proyectos-de-ley-para-uso-responsable-de-ia-y-proteccion-laboral-ante-nuevas-tecnologias#:~:text=%2DA%20plenaria%20de%20Senado%20pasó,y%20equidad%20para%20sus%20usuarios.
Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10. https://www.nature.com/articles/s41598-020-62877-0
Maheshwari, K., Shimada, T., Yang, D., Khanna, S., Cywinski, J. B., Irefin, S. A., Ayad, S., Turan, A., Ruetzler, K., Qiu, Y., Saha, P., Mascha, E. J., & Sessler, D. I. (2020). Hypotension prediction index for prevention of hypotension during moderate- to high-risk noncardiac surgery. Anesthesiology, 133(6), 1214-1222. https://pubs.asahq.org/anesthesiology/article/133/6/1214/110700/Hypotension-Prediction-Index-for-Prevention-of
Mann, D. L. (2023). Artificial Intelligence Discusses the Role of Artificial Intelligence in Translational Medicine: A JACC: Basic to Translational Science Interview With ChatGPT. JACC: Basic to Translational Science, 8(2), 221-223. https://pmc.ncbi.nlm.nih.gov/articles/PMC9998448/
Marcos, A. (2001). Ética ambiental. Universidad de Valladolid. http://www.fyl.uva.es/~wfilosof/webMarcos/textos/Etica_Ambiental_2as_pruebas.pdf
Nagendran, M., Chen, Y., Lovejoy, C. A., Gordon, A. C., Komorowski, M., Harvey, H., Topol, E. J., Ioannidis, J. P. A., Collins, G. S., & Maruthappu, M. (2020). Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ, m689. https://doi.org/10.1136/bmj.m689
Nnamoko, N., & Korkontzelos, I. (2020). Efficient treatment of outliers and class imbalance for diabetes prediction. Artificial Intelligence In Medicine, 104, 101815. https://doi.org/10.1016/j.artmed.2020.101815
Norman, G. (2005). Research in clinical reasoning: past history and current trends. Medical Education, 39(4), 418-427. https://doi.org/10.1111/j.1365-2929.2005.02127.x
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the Future — Big Data, Machine Learning, and Clinical Medicine. New England Journal Of Medicine, 375(13), 1216-1219. https://doi.org/10.1056/nejmp1606181
Ocampo, E. B. (2012). El principio precautorio y sus fundamentos filosóficos. México: Universidad Nacional Autónoma de México. Instituto de Investigaciones Jurídicas.
Organización de Naciones Unidas [ONU]. (1992). Declaración de Río sobre el Medio Ambiente y el Desarrollo. Declaración, Conferencia de las Naciones Unidas sobre el Medio Ambiente y el Desarrollo, Departamento de Asuntos Económicos y Sociales, Río de Janeiro. https://www.un.org/spanish/esa/sustdev/agenda21/riodeclaration.htm
O’Riordan, T., & Jordan, A. (1995). The Precautionary Principle in Contemporary Environmental Politics. Environmental Values, 4(3), 191-212. https://doi.org/10.3197/096327195776679475
Parray, A. A., Inam, Z. M., Ramonfaur, D., Haider, S. S., Mistry, S. K., & Pandya, A. K. (2023). ChatGPT and global public health: Applications, challenges, ethical considerations and mitigation strategies. Global Transitions, 5, 50-54. https://doi.org/10.1016/j.glt.2023.05.001
Parikh, R. B., & Helmchen, L. A. (2022). Paying for artificial intelligence in medicine. Npj Digital Medicine, 5(1). https://doi.org/10.1038/s41746-022-00609-6
Pelaccia, T., Forestier, G., & Wemmert, C. (2019). Deconstructing the diagnostic reasoning of human versus artificial intelligence. Canadian Medical Association Journal, 191(48), E1332-E1335. https://doi.org/10.1503/cmaj.190506
Prevedello, L. M., Halabi, S. S., Shih, G., Wu, C. C., Kohli, M. D., Chokshi, F. H., Erickson, B. J., Kalpathy-Cramer, J., Andriole, K. P., & Flanders, A. E. (2019). Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiology: Artificial Intelligence, 1. https://pubmed.ncbi.nlm.nih.gov/33937783/
Puttha, R., Thalava, H., Mehta, J., & Thalava, R. (2024). 6866 Health care provider’s perception of artificial intelligence: focusing on our change drivers. BMJ, A343.2-A343. https://doi.org/10.1136/archdischild-2024-rcpch.541
Raffensperger, C., & Tickner, J. (1999). Protecting public health and the environment: Implementing the Precautionary Principle. Island Press.
Rajkomar, A., Oren, E., Chen, K., Dai, A. M., Hajaj, N., Hardt, M., Liu, P. J., Marcus, J., Sun, M., Sundberg, P., Yee, H., Zhang, K., Zhang, Y., Flores, G., Duggan, G. E., Irvine, J., Le, Q., Litsch, K., Mossin, A., & Dean, J. (2018). Scalable and accurate deep learning with electronic health records. npj Digital Medicine, 1(18). https://doi.org/10.1038/s41746-018-0029-1
Ravaut, M., Sadeghi, H., Leung, K. K., Volkovs, M., Kornas, K., Harish, V., Watson, T., Lewis, G. F., Weisman, A., Poutanen, T., & Rosella, L. (2021). Predicting adverse outcomes due to diabetes complications with machine learning using administrative health data. npj Digital Medicine, 24. https://www.nature.com/articles/s41746-021-00394-8
Real Academia Española. (2003). Diccionario de la lengua española. Planeta.
Indian Council of Medical Research, Aurora, N., Rao, M. V. V., Mathur, R., Singh, H., Menon, G. R., Sharma, S., Singh, M. P., & Cell, T. A. (2023). Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare, India, 2023. En Zenodo (CERN European Organization for Nuclear Research). https://doi.org/10.5281/zenodo.8262489
Riechmann, J. (2002). Introducción: Un principio para reorientar las relaciones de la humanidad con la biosfera. En J. Riechmann & J. Tickner (Eds.), El principio de precaución en medio ambiente y salud pública: de las definiciones a la práctica (p. 18). Icaria Ediciones.
Riechmann, J. (2007). Introducción al principio de precaución. En J. A. García, El cáncer, una enfermedad prevenible. Murcia.
Rinard, R. G. (1996). Technology, Deskilling, and Nurses: The Impact of the Technologically Changing Environment. Advances in Nursing Science, 18(4), 60-69. https://pubmed.ncbi.nlm.nih.gov/8790690/
Rivera, S. C., Liu, X., Chan, A., Denniston, A. K., Calvert, M. J., Darzi, A., Holmes, C., Yau, C., Moher, D., Ashrafian, H., Deeks, J. J., Di Ruffano, L. F., Faes, L., Keane, P. A., Vollmer, S. J., Lee, A. Y., Jonas, A., Esteva, A., Beam, A. L., & Rowley, S. (2020). Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nature Medicine, 26(9), 1351-1363. https://doi.org/10.1038/s41591-020-1037-7
Romeo-Casabona, C. M. (2004). Salud Humana, Biotecnología y Principio de Precaución. En C. M. Romero-Carmona, El Principio de Precaución y su Proyección en el Derecho Administrativo Español (pp. 215-256). Lerdo Print.
Roqué, M. V., Macpherson, I., & Gonzalvo Cirac, M. (2015). El principio de precaución y los límites en biomedicina. Persona Y Bioética, 19(1). https://personaybioetica.unisabana.edu.co/index.php/personaybioetica/article/view/4870
Ruiz, I. (8 de abril de 2019). Una mirada crítica a las relaciones laborales [Mensaje en un blog]. Blog del Derecho del Trabajo y de la Seguridad Social. https://ignasibeltran.com/2019/04/08/automatizacion-y-obsolescencia-humana/
Shen, Y., Heacock, L., Elias, J., Hentel, K. D., Reig, B., Shih, G., & Moy, L. (2023). ChatGPT and Other Large Language Models Are Double-edged Swords. Radiology, 307(2). https://doi.org/10.1148/radiol.230163
Singh, D., Nagaraj, S., Mashouri, P., Drysdale, E., Fischer, J., Goldenberg, A., & Brudno, M. (2022). Assessment of Machine Learning–Based Medical Directives to Expedite Care in Pediatric Emergency Medicine. JAMA Network Open, 5(3), e222599. https://doi.org/10.1001/jamanetworkopen.2022.2599
Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., Scales, N., Tanwani, A., Cole-Lewis, H., Pfohl, S., Payne, P., Seneviratne, M., Gamble, P., Kelly, C., Scharli, N., Chowdhery, A., Mansfield, P., Aguera y Arcas, B., Webster, D., Corrado, G. S., Matias, Y., Chou, K., Gottweis, J., Tomasev, N., Liu, Y., Rajkomar, A., Barral, J., Semturs, C., Karthikesalingam, A., & Natarajan, V. (2022). Large language models encode clinical knowledge. arXiv. https://arxiv.org/abs/2212.13138
Shortliffe, E. H., & Sepúlveda, M. J. (2018). Clinical decision support in the era of artificial intelligence. JAMA, 320(21), 2199–2200. https://doi.org/10.1001/jama.2018.16356
Tapia-Hermida, A. J. (2020). Decálogo de la inteligencia artificial ética y responsable en la Unión Europea. Diario La Ley, 87.
Tejani, A. S., Klontzas, M. E., Gatti, A. A., Mongan, J. T., Moy, L., Park, S. H., Kahn, C. E., Abbara, S., Afat, S., Anazodo, U. C., Andreychenko, A., Asselbergs, F. W., Badano, A., Baessler, B., Bold, B., Bisdas, S., Brismar, T. B., Cacciamani, G. E., Carrino, J. A., & Zins, M. (2024). Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update. Radiology Artificial Intelligence, 6(4). https://doi.org/10.1148/ryai.240300
Thrall, J. H., Li, X., Li, Q., Cruz, C., Do, S., Dreyer, K., & Brink, J. (2018). Artificial intelligence and machine learning in radiology: Opportunities, challenges, pitfalls, and criteria for success. Journal of the American College of Radiology, 15(3), 504-508. https://pubmed.ncbi.nlm.nih.gov/29402533/
Tickner, J. A. (Julio de 2002). Aplicando el principio de precaución. Un proceso de seis etapas. https://www.daphnia.es/revista/29/articulo/161/Aplicando-el-Principio-de-Precaucion.-Un-proceso-en-seisetapas#:~:text=Los%20pasos%20son%20simples%3A%201,y%206)%20realizar%20un%20seguimiento.
Tickner, J., Raffensperger, C., & Meyers, N. (1999). El principio precautorio en acción. Manual. Red de Ciencia y Salud ambiental, 10-11. https://andoni.garritz.com/documentos/Lecturas.CS.%20Garritz/Principio.precautorio/El%20Principio%20Precautorio.pdf
Raffensperger, C., & Tickner, J. (s.f.). Protecting Public Health and the Environment. Implementing the Precautionary Principle. Island Press, 8-9. https://islandpress.org/books/protecting-public-health-and-environment#desc
Veinot, T. C., Mitchell, H., & Ancker, J. S. (2018). Good intentions are not enough: how informatics interventions can worsen inequality. Journal Of The American Medical Informatics Association, 25(8), 1080-1088. https://doi.org/10.1093/jamia/ocy052
Vogel, L. (2019). Rise of medical AI poses new legal risks for doctors. CMAJ, 191(42), 1173-1174. https://pmc.ncbi.nlm.nih.gov/articles/PMC6805168/
Vogel, L. B. (2020). How should specialist physicians prepare for the AI revolution? CMAJ, 192(21). https://www.cmaj.ca/content/192/21/e595
Weyerer, J. C., & Langer, P. F. (2019). Garbage in, garbage out: The vicious cycle of AI-based discrimination in the public sector. ACM digital library. https://dl.acm.org/doi/10.1145/3325112.3328220
Wijnberge, M., Geerts, B. F., Hol, L., Lemmers, N., Mulder, M. P., Berge, P., Schenk, J., Terwindt, L. E., Hollmann, M. W., Vlaar, A. P., & Veelo, D. P. (2020). Effect of a machine learning-derived early warning system for intraoperative hypotension vs standard care on depth and duration of intraoperative hypotension during elective noncardiac surgery: The HYPE randomized clinical trial. JAMA, 323(11), 1052-1060. https://jamanetwork.com/journals/jama/fullarticle/2761469
Wong, O. G. W., N. I. (2019). Machine learning interpretation of extended human papillomavirus genotyping by Onclarity in an Asian cervical cancer screening population. Journal Of Clinical Microbiology.
Wu, K., Wu, E., Ho, D. E., & Zou, J. (2024, febrero 13). Generando errores médicos: GenAI y referencias médicas erróneas. Centro de Estudios Económicos de Baja California. https://tribunaeconomica.com.mx/publicaciones/seccion/academia/generando-errores-medicos-genai-y-referencias-medicas-erroneas
Yao, X., Rushlow, D. R., Inselman, J. W., McCoy, R. G., Thacher, T. D., Behnken, E. M., Bernard, M. E., Rosas, S. L., Akfaly, A., Misra, A., Molling, P. E., Krien, J. S., Foss, R. M., Barry, B. A., Siontis, K. C., Kapa, S., Pellikka, P. A., Lopez-Jimenez, F., Attia, Z. I., & Noseworthy, P. A. (2021). Artificial intelligence–enabled electrocardiograms for identification of patients with low ejection fraction: a pragmatic, randomized clinical trial. Nature Medicine, 27(5), 815-819. https://doi.org/10.1038/s41591-021-01335-4
Yusuf, M., Atal, I., Li, J., Smith, P., Ravaud, P., Fergie, M., Callaghan, M., & Selfe, J. (2020). Reporting quality of studies using machine learning models for medical diagnosis: a systematic review. BMJ Open, 10(3), e034568. https://doi.org/10.1136/bmjopen-2019-034568
dc.rights.en.fl_str_mv Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.uri.none.fl_str_mv http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.rights.local.spa.fl_str_mv Acceso abierto
dc.rights.accessrights.none.fl_str_mv https://purl.org/coar/access_right/c_abf2
rights_invalid_str_mv Attribution-NonCommercial-ShareAlike 4.0 International
http://creativecommons.org/licenses/by-nc-sa/4.0/
Acceso abierto
https://purl.org/coar/access_right/c_abf2
http://purl.org/coar/access_right/c_abf2
dc.format.mimetype.none.fl_str_mv application/pdf
dc.publisher.program.spa.fl_str_mv Especialización en Bioética
dc.publisher.grantor.spa.fl_str_mv Universidad El Bosque
dc.publisher.faculty.spa.fl_str_mv Departamento de Bioética
institution Universidad El Bosque
bitstream.url.fl_str_mv https://repositorio.unbosque.edu.co/bitstreams/6b3a4579-92e0-4620-97b9-a3d03d0f08a5/download
https://repositorio.unbosque.edu.co/bitstreams/b515e75d-6403-4c12-b764-5b8ff4f32f9b/download
https://repositorio.unbosque.edu.co/bitstreams/fa76a85e-52a3-40d2-88e0-5dc715a9beab/download
https://repositorio.unbosque.edu.co/bitstreams/10e92c42-765d-43c9-8c09-f88b2288f109/download
https://repositorio.unbosque.edu.co/bitstreams/e8874bad-152f-40bc-ae5c-b34738f38c37/download
https://repositorio.unbosque.edu.co/bitstreams/25e4578f-6b26-4869-9f53-b0c2117961f2/download
https://repositorio.unbosque.edu.co/bitstreams/9a8b3905-d4bf-4895-ae40-400361bd3dfc/download
bitstream.checksum.fl_str_mv 5643bfd9bcf29d560eeec56d584edaa9
08b943080e6b504f9874d61f94383849
84959dc95a1cad6830e2ffa1ba8df91e
cc7193cf7ba511822efd8c565c3263d2
17cc15b951e7cc6b3728a574117320f9
c106dc010fe56154a24496d4b477a609
2b70d982e00f9475c7dfc54188e12eec
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
MD5
MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional Universidad El Bosque
repository.mail.fl_str_mv bibliotecas@biteca.com
_version_ 1828164645500747776