La vulnerabilidad antropotécnica: una propuesta para la interpretación de las transformaciones de la relación médico paciente en la era de la inteligencia artificial

Authors:
Riaño Moreno, Julián Camilo
Resource type:
https://purl.org/coar/resource_type/c_db06
Publication date:
2024
Institution:
Universidad El Bosque
Repository:
Repositorio U. El Bosque
Language:
spa
OAI Identifier:
oai:repositorio.unbosque.edu.co:20.500.12495/12842
Online access:
https://hdl.handle.net/20.500.12495/12842
Keywords:
Inteligencia Artificial
Relación médico-paciente
Salud Digital
Interacción humano-máquina
Vulnerabilidad
Bioética
Ética médica
Artificial Intelligence
Doctor-Patient Relationship
Digital Health
Human-Machine Interaction
Vulnerability
Bioethics
Medical Ethics
WB60
Rights
closedAccess
License
Closed access
dc.title.none.fl_str_mv La vulnerabilidad antropotécnica: una propuesta para la interpretación de las transformaciones de la relación médico paciente en la era de la inteligencia artificial
dc.title.translated.none.fl_str_mv The anthropotechnical vulnerability: a proposal for interpreting the transformations of the doctor-patient relationship in the era of artificial intelligence
dc.creator.fl_str_mv Riaño Moreno, Julián Camilo
dc.contributor.advisor.none.fl_str_mv Escobar Triana, Jaime
dc.contributor.author.none.fl_str_mv Riaño Moreno, Julián Camilo
dc.subject.none.fl_str_mv Inteligencia Artificial
Relación médico-paciente
Salud Digital
Interacción humano-máquina
Vulnerabilidad
Bioética
Ética médica
dc.subject.keywords.none.fl_str_mv Artificial Intelligence
Doctor-Patient Relationship
Digital Health
Human-Machine Interaction
Vulnerability
Bioethics
Medical Ethics
dc.subject.nlm.none.fl_str_mv WB60
description Artificial intelligence (AI), as an autonomous, evolving, and highly efficient technology, promises to automate and optimize processes across many sectors, including politics, the economy, business, and health. This promise entails a potential transformation of multiple human activities, including medical practice and, in particular, the doctor-patient relationship (DPR). The insertion of AI into medicine places both physician and patient at the core of an imposing industrial and digital complex. The physician takes on the role of information intermediary, while the patient acts as consumer, manager, and producer within the digital environment. In this context, AI prioritizes values tied to its industry, such as efficiency and precision, as well as privatization and economic performance, over values intrinsic to the DPR, such as honesty, trust, and confidentiality. This dynamic engenders new forms of vulnerability that require careful analysis from a perspective that integrates the relationship between vulnerability and technology. When the interactions between vulnerability and AI are analyzed, it becomes apparent that the usual perspectives on vulnerability are technical and centered on engineering and design, focused on identifying and minimizing threats and risks. Although this approach is essential, its similar adoption in medicine is striking: it points to an instrumental perception of AI in this field and an urgency to implement it in the health sciences, in line with a medical techno-solutionism, and it also signals an alignment of medicine with the values of the actors and industries of the digital world. Traditional approaches to vulnerability in healthcare, by contrast, center on the care and protection of persons, treating vulnerability as a characteristic inherent to individuals whose autonomy is compromised or as an adjective applied to groups that cannot protect themselves or are dependent. As a result, in the DPR only patients are usually regarded as vulnerable, ignoring the vulnerability inherent to both physician and patient. This limitation of understandings of vulnerability, restricted to healthcare or engineering contexts, is problematic because it ignores the complexity of vulnerability in the DPR, reducing it to a merely technical or instrumental rather than ethical matter. Within the framework of this research it therefore becomes necessary to move beyond the technical perspective on vulnerability and to integrate it, orienting it toward the bioethical domain. This raises questions about the relationships and meanings of vulnerability and technology in bioethical analyses, and about whether these are adequate for addressing the transformations of the DPR driven by AI. The aim is to understand how these bioethical approaches can respond to the challenges posed by the incorporation of AI into medicine, especially with regard to the dynamics of vulnerability between physicians and patients. To answer these questions, an analysis was carried out of the most recent accounts of vulnerability from three different ethical perspectives: Henk ten Have's global bioethics, Corine Pelluchon's ethics of the caress, and Mark Coeckelbergh's ethics of technology.
This analysis was carried out through a hermeneutic methodology, using CAQDAS-assisted semantic analysis and a dialectical-interpretative exercise following Paul Ricoeur's "hermeneutics of distance." The central anticipation of meaning of this work was that the meanings of vulnerability depend on each author's meanings of technology, an aspect frequently overlooked in traditional bioethical discourse. Thus, understanding the transformations of the DPR brought about by AI implies rethinking the meanings of vulnerability on the basis of technology, considering new kinds of actors and relationships, such as human-machine interaction. The methodology and the development of this anticipation are detailed in the first chapter of the thesis. That chapter establishes three specific objectives that guide the central purpose of the study: to understand the transformations of the DPR brought about by AI by analyzing the relationships and meanings of vulnerability and technology according to Henk ten Have, Corine Pelluchon, and Mark Coeckelbergh. These objectives are developed in chapters two through four. The first objective, addressed in the second chapter, is to describe the transformations of the DPR driven by AI between 2010 and 2021. The second objective, dealt with in the third chapter, is to explain the convergences and divergences among the meanings of vulnerability and technology in the reflections of the aforementioned authors. The third objective, set out in the fourth chapter, is to propose a model of vulnerability that allows the DPR to be understood in the context of AI, based on these authors' meanings of vulnerability and technology. These chapters form a sequence that describes the current situation (chapter two), provides an analytical framework (chapter three), and presents a proposal (chapter four). The sequential analysis carried out in this work culminates in the proposal of an ethics of vulnerability grounded in the idea of "anthropotechnical vulnerability." This conception draws on several theories and philosophies, among them postphenomenology, interaction theory, sociotechnical systems (STS) theory, Sloterdijk's anthropotechnics, and moral imagination. This approach provides a novel perspective for addressing the transformations observed in the DPR in the context of AI, taking into account not only the traditional actors of the DPR but also other human actors, such as developers and designers; non-human actors, such as the technological devices themselves; and meta-actors, such as the digital industry, among others. To encompass the complexity of the DPR in the AI era, the "STS model of the DPI/DPR" is proposed as an analytical framework that allows a detailed exploration of these transformations and of the emerging vulnerability. By integrating sociotechnical aspects and recognizing technology as a "system," this model offers a deeper understanding of how the doctor-patient interaction (DPI) is influenced and modified by the different actors and forms of interaction that emerge because of AI. In this way, the proposal of "anthropotechnical vulnerability," as the central concept developed in this work, suggests that vulnerability in the context of the DPR is neither static nor one-dimensional, but dynamic and affected by multiple technological and human factors.
This perspective broadens the understanding of vulnerability beyond the simple categorization of individuals or groups as "vulnerable," toward a more integrated understanding of how technology and collective human interaction shape and redefine vulnerability in the health domain. This work represents one of the first attempts in Latin America to propose foundational elements for a bioethics of AI, moving beyond the traditional approaches of AI ethics, AI humanisms, and the reductionist dimensions of practical ethics and their engineering model. It offers an epistemological and methodological framework for analyzing vulnerability, a category generally neglected in AI ethics and commonly approached from an engineering perspective that constrains reflection on it. This work thus positions itself as a pioneering contribution to biomedical ethics and to bioethics in the digital era.
dc.date.accessioned.none.fl_str_mv 2024-08-06T19:49:28Z
dc.date.available.none.fl_str_mv 2024-08-06T19:49:28Z
dc.date.issued.none.fl_str_mv 2024-07
dc.type.coar.fl_str_mv http://purl.org/coar/resource_type/c_db06
dc.type.local.spa.fl_str_mv Tesis/Trabajo de grado - Monografía - Doctorado
dc.type.coar.none.fl_str_mv https://purl.org/coar/resource_type/c_db06
dc.type.driver.none.fl_str_mv info:eu-repo/semantics/doctoralThesis
dc.type.coarversion.none.fl_str_mv https://purl.org/coar/version/c_ab4af688f83e57aa
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/20.500.12495/12842
dc.identifier.instname.spa.fl_str_mv instname:Universidad El Bosque
dc.identifier.reponame.spa.fl_str_mv reponame:Repositorio Institucional Universidad El Bosque
dc.identifier.repourl.none.fl_str_mv repourl:https://repositorio.unbosque.edu.co
dc.language.iso.fl_str_mv spa
dc.relation.references.none.fl_str_mv Abate, T. (2020). Smarter Hospitals: How AI-Enabled Sensors Could Save Lives. Obtenido de https://hai.stanford.edu/news/smarter-hospitals-how-ai-enabled-sensors-could-save-lives
Abdalla, M., & Abdalla, M. (30 de 07 de 2021). The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity. Association for Computing Machinery. doi:10.1145/3461702.3462563
Adela Martin, D., Conlon , E., & Bowe , B. (2021). Multi-level Review of Engineering Ethics Education: Towards a Socio-technical Orientation of Engineering Education for Ethics. Sci Eng Ethics, 27(60). doi:https://doi.org/10.1007/s11948-021-00333-6
Agamben, G. (2006). Homo sacer. El poder soberano y la nuda vida. Valencia: Pre-Textos.
Ahuja, A. S. (2019). The impact of artificial intelligence in medicine on the future role of the physician. PeerJ, 7(e7702). doi:https://doi.org/10.7717/peerj.7702
Akbari, A. (2021). Authoritarian Surveillance: A Corona Test. Surveillance & Society, 19(1), 98-103. doi:doi:10.24908/ss.v19i1.14545
Albrieu, R. R. (2018). Inteligencia artificial y crecimiento económico. Oportunidades y desafíos para Colombia. CIPPEC. Obtenido de https://news.microsoft.com/wp-content/uploads/prod/sites/41/2018/11/IA-y-Crecimiento-COLOMBIA.pdf
Altman, I., & Dalmas A, T. (1973). Social penetration: The development of interpersonal relationships. Holt, Rinehart & Winston.
Alvarellos, M., Sheppard, H., Knarston, I., Davison, C., Raine , N., Seeger, T., . . . Chatzou Dunford, M. (2022). Democratizing clinical-genomic data: How federated platforms can promote benefits sharing in genomics. Genet, 13(1045450). doi:https://doi.org/10.3389/fgene.2022.1045450
Amazon. (23 de 02 de 2023). One Medical Joins Amazon to Make It Easier for People to Get and Stay Healthier. Obtenido de https://www.onemedical.com/mediacenter/one-medical-joins-amazon/
Aminololama-Shakeri, S., & López, J. E. (2019). The Doctor-Patient Relationship With Artificial Intelligence. AJR. American journal of roentgenology, 212(2), 308–310. doi: 10.2214/AJR.18.20509
Andersen, T., Langstrup, H., & Lomborg , S. (2020). Experiences With Wearable Activity Data During Self-Care by Chronic Heart Patients: Qualitative Study. Journal of medical Internet research, 20(7), e15873. doi:doi: 10.2196/15873
Arendt, H. (2016). La condición humana. Paidós.
Arisa, E., Nagakura, K., & Fujita, T. (2020). Proposal for Type Classification for Building Trust in Medical Artificial Intelligence Systems. AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 251-257. doi:https://doi.org/10.1145/3375627.3375846
Aristóteles. (1973). Ética a Nicómaco. Sepan Cuantos.
Arthur, C. (23 de 08 de 2013). Tech giants may be huge, but nothing matches big data. Obtenido de The Guardian: https://www.theguardian.com/technology/2013/aug/23/tech-giants-data
Ballesteros de Valderrama, B. P. (2005). El concepto de significado desde el análisis del comportamiento y otras perspectivas. Universitas psychologica, 4(2), 231-244. Obtenido de http://www.scielo.unal.edu.co/scielo.php?pid=S1657-92672005000200010&script=sci_abstract&tlng=es
Banginwar, S. A. (2020). Impact of internet on doctor-patient relationship. International Journal of Basic & Clinical Pharmacology, 9(5), 731-735. doi:doi:http://dx.doi.org/10.18203/2319-2003.ijbcp20201748
Bergson, H. (1988). Essai sur les données immédiates de la conscience. Félix Alcan.
Bhuiyan , J., & Robins-Early, N. (14 de 06 de 2023). The EU is leading the way on AI laws. The US is still playing catch-up. Obtenido de The Guardian: https://www.theguardian.com/technology/2023/jun/13/artificial-intelligence-us-regulation
Big tech spends more on lobbying than pharma, finance and chemicals firms combined: report. (07 de 09 de 2021). Obtenido de https://www.campaignlive.co.uk/article/big-tech-spends-lobbying-pharma-finance-chemicals-firms-combined-report/1726554
Binagwaho, A., Mathewos, K., & Davis, S. (2021). Time for the ethical management of COVID 19 vaccines. The Lancet. Global health, 9(8). doi:DOI: 10.1016/S2214-109X(21)00180
Blease, C. (2023). Open AI meets open notes: surveillance capitalism, patient privacy and online record access. Journal of Medical Ethics. doi:http://orcid.org/0000-0002-0205-1165
Bogataj, T. (2023). Chapter 1 - Unpacking digital sovereignty through data governance by Melody Musoni 1. Obtenido de European Centre for Development Policy Management: https://policycommons.net/artifacts/3846704/chapter-1/4652659/ on 20 Dec 2023. CID: 20.500.12592/fpkqh2.
Boldt, J. (2019). The concept of vulnerability in medical ethics and philosophy. Philosophy, Ethics, and Humanities in Medicine, 14(6), 1-8. doi:https://doi.org/10.1186/s13010-019-0075-6
Bostrom, N. (2005). The Fable of the Dragon-Tyrant. Journal of Medical Ethics, 31(5), 273-277. Obtenido de https://nickbostrom.com/fable/dragon.pdf
Bostrom, N., & Yudkowsky, E. (2014). The Ethics of Artificial Intelligence. En K. Frankish, & W. Ramsey (Eds.), Cambridge Handbook of Artificial Intelligence. Cambridge University Press. doi:10.1017/CBO9781139046855.020
Bowlby, J. (1969). Attachment and loss. Hogarth Press and the Institute of Psycho-Analysis, 1.
Bowles, C. (2020). Future Ethics: the must-read guide to the ethics of emerging tech. NowNext.
Brandts-Longtin, O., Lalu, M., A Adie, E., Albert, M., Almoli, E., Almoli, F., . . . Montroy, J., P. (2022). Assessing the impact of predatory journals on policy and guidance documents: a cross-sectional study protocol. BMJ open, 12(4), e059445. doi:https://doi.org/10.1136/bmjopen-2021-059445
Buccella, A. (2023). "AI for all” is a matter of social justice. AI Ethics, 1143–1152. doi:https://doi.org/10.1007/s43681-022-00222-z
Byrne, R., & Whiten, A. (1998). Machiavellian Intelligence Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford University Press.
Calvo, T. &. (1991). Paul Ricoeur: los caminos de la interpretación. Barcelona, España: Anthropos.
Caplan, A. L. (1980). Ethical engineers need not apply: The state of applied ethics today. Science, Technology, & Human Values, 5(4), 24-32.
Capuzzo, K. (13 de 06 de 2023). 4 Step Guide on How to Transition from Healthcare to Tech. Obtenido de https://blog.qwasar.io/blog/4-step-guide-on-how-to-transition-from-healthcare-to-tech
Carnemolla, P. (2018). Ageing in place and the internet of things – how smart home technologies, the built environment and caregiving intersect. Vis. in Eng, 6(7). doi:https://doi.org/10.1186/s40327-018-0066-5
Castro-Gómez, S. (2012). Sobre el concepto de antropotécnica en Peter Sloterdijk. Revista de Estudios Sociales, 43, 63-73. Obtenido de http://journals.openedition.org/revestudsoc/7127
Cenci, A., & Cawthorne , D. (2020). Refining Value Sensitive Design: A (Capability-Based) Procedural Ethics Approach to Technological Design for Well-Being. Sci Eng Ethics, 26, 2629–2662. doi:https://doi.org/10.1007/s11948-020-00223-3
Chugunova, M., & Sele, D. (2022). We and It: An Interdisciplinary Review of the Experimental Evidence on How Humans Interact with Machines. Journal of Behavioral and Experimental Economics, 99(101897), 102. doi:http://dx.doi.org/10.2139/ssrn.3692293
Cipolla, C. (2018). Designing for Vulnerability: Interpersonal Relations and Design. The Journal of Design, Economics, and Innovation, 4(1), 111-122. doi:doi:https://doi.org/10.1016/j.sheji.2018.03.001
Clark, A., & Chalmers, D. (2011). La mente extendida. CIC. Cuadernos de Información y Comunicación, 16, 15-28. Obtenido de https://www.redalyc.org/articulo.oa?id=93521629002
Clusmann, J., Kolbinger, F., Muti, H., Carrero, Z., Eckardt, J.-N., Laleh, N., . . . Kather, J. (2023). The future landscape of large language models in medicine. Commun Med, 3(141). doi:https://doi.org/10.1038/s43856-023-00370-1).
Coeckelbergh, M. (2010). Health Care, Capabilities, and AI Assistive Technologies. Ethical Theory and Moral Practice, 13, 181–190. doi:doi:https://doi.org/10.1007/s10677-009-9186-2
Coeckelbergh, M. (2011). Artificial companions: empathy and vulnerability mirroring in human robot relations. Studies in ethics, law, and technology, 4(3). doi:DOI:10.2202/1941-6008.1126
Coeckelbergh, M. (2011). Vulnerable Cyborgs: Learning to Live with our Dragons. Journal of Evolution and Technology, 22(1), 1-9.
Coeckelbergh, M. (2013). Drones, Information Technology, and Distance: Mapping The Moral Epistemology Of Remote Fighting. Ethics and Information Technology, 15(2). doi:DOI:10.1007/s10676-013-9313-6
Coeckelbergh, M. (2013). Human Being@Risk. Enhancement, Technology, and the Evaluation of Vulnerability Transformations. Springer.
Coeckelbergh, M. (2014). Good healthcare is in the “how”: The quality of care, the role of machines, and the need for new skills. En Machine medical ethics (págs. 33-47). Cham: Springer International Publishing.
Coeckelbergh, M. (2014). The Moral Standing of Machines: Towards a Relational and Non Cartesian Moral Hermeneutics. Philos. Technol, 27, 61–77. doi:https://doi.org/10.1007/s13347-013-0133-8
Coeckelbergh, M. (2015). Artificial agents, good care, and modernity. Theoretical Medicine and Bioethics, 36, 265-277.
Coeckelbergh, M. (2017). Hacking Technological Practices and the Vulnerability of the Modern Hero. Found Sci, 22, 357–362.
Coeckelbergh, M. (2017). The Art of Living with ICTs: The Ethics–Aesthetics of Vulnerability Coping and Its Implications for Understanding and Evaluating ICT Cultures. Found Sci,22, 339–348.
Coeckelbergh, M. (2019). Moved by Machines: Performance Metaphors and Philosophy of Technology. Routledge.
Coeckelbergh, M. (2020). AI Ethics. The MIT Press.
Coeckelbergh, M. (2020). Technoperformances: using metaphors from the performance arts for a postphenomenology and posthermeneutics of technology use. AI & SOCIETY, 35(3), 557-568. doi:https://doi.org/10.1007/s00146-019-00926-7
Coeckelbergh, M. (2020). The Postdigital in Pandemic Times: a Comment on the Covid-19 Crisis and its Political Epistemologies. Postdigit Sci Educ, 2, 547–550. doi:https://doi.org/10.1007/s42438-020-00119-2
Coeckelbergh, M. (2021). Time Machines: Artificial Intelligence, Process, and Narrative. Philos. Technol, 34, 1623–1638. doi:https://doi.org/10.1007/s13347-021-00479-y
Coeckelbergh, M. (2022). The Political Philosophy of AI: An Introduction.
Coeckelbergh, M., & Wackers, G. (2007). Imagination, distributed responsibility and vulnerable technological systems: the case of Snorre A. SCI ENG ETHICS, 13, 235–248. doi:https://doi.org/10.1007/s11948-007-9008-7
Collins, R. (2005). Interaction Ritual Chains. Princeton Univerity Press.
Cook, A., Thompson, M., & Ross, P. (2023). Virtual first impressions: Zoom backgrounds affect judgements of trust and competence. Plos One. doi:https://doi.org/10.1371/journal.pone.0291444
Cooper, A, & Rodman, A. (2023). AI and Medical Education - A 21st-Century Pandora's Box. The New England journal of medicine, 389(5), 385–387. doi:https://doi.org/10.1056/NEJMp2304993
Couture, S., & Toupin, S. (12 de 08 de 2019). What does the notion of “sovereignty” mean when referring to the digital? New Media & Society, 21(10). doi:https://doi.org/10.1177/1461444819865984
Cowie, M. &. (2021). Remote monitoring and digital health tools in CVD management. Nature Reviews Cardiology, 18, 457–458. doi:doi.org/10.1038/s41569-021-00548-x
Crawford, K. (04 de 06 de 2021). Time to regulate AI that interprets human emotions. Nature. doi:https://doi.org/10.1038/d41586-021-00868-5
Cummings, M. L. (2006). Integrating ethics in design through the value-sensitive design approach. Science and engineering ethics, 12, 701–715. doi:https://doi.org/10.1007/s11948-006-0065-0
Dalton-Brown, S. (2020). The Ethics of Medical AI and the Physician-Patient Relationship. Camb Q Healthc Ethics, 29(1), 115-121. doi:doi: 10.1017/S0963180119000847
D'Amore, F., & Pirone , F. (2018). Doctor 2.0 and i-Patient: information technology in medicine and its influence on the physician-patient relation. Italian Journal Of Medicine, 12(1). doi:https://doi.org/10.4081/itjm.2018.956
Davies, R, Ives, J, & Dunn, M. A . (2015). A systematic review of empirical bioethics methodologies. BMC Med Ethics, 16(15). doi:https://doi.org/10.1186/s12910-015-0010-3
de Boer, B. (2021). Explaining multistability: postphenomenology and affordances of technologies. AI & Soc. doi:https://doi.org/10.1007/s00146-021-01272-3
de Vries, M. J. (2005). Technology and the nature of humans. En M. J. de Vries, Teaching about Technology: An Introduction to the Philosophy of Technology for non-philosophers. Springer.
DeCamp, M., & Tilburt, J. C. (2019). Why we cannot trust artificial intelligence in medicine. Lancet Digit Health, 1(8). doi:10.1016/S2589-7500(19)30197-9
Demographic Change and Healthy Ageing, Health Ethics & Governance (HEG). (2022). Ageism in artificial intelligence for health. WHO.
Departamento Administrativo de la Presidencia de la República. (30 de 03 de 2021). Obtenido de Cámara Colombiana de Informática y Telecomunicaciones: https://dapre.presidencia.gov.co/TD/MARCO-ETICO-PARA-LA-INTELIGENCIA-ARTIFICIAL-EN-COLOMBIA-2021.pdf
Dorr Goold, S., & Lipkin, M. (1999). The doctor-patient relationship: challenges, opportunities, and strategies. Journal of General Internal Medicine, 14(Suppl 1), S26–S33. doi:10.1046/j.1525-1497.1999.00267.x
Doudna, J., & Charpentier, E. (2014). The new frontier of genome engineering with CRISPR-Cas9. Science, 346(6213). doi:10.1126/science.1258096
Drummond, D. (2021). Between competence and warmth: the remaining place of the physician in the era of artificial intelligence. npj Digit. Med, 4(85). doi:https://doi.org/10.1038/s41746-021-00457-w
Duarte, A. (2004). Biopolítica y diseminación de la violencia Arendt y la crítica del presente. Pasajes: Revista de pensamiento contemporáneo, 97-105.
Dubov, A., & Shoptawb, S. (2020). The Value and Ethics of Using Technology to Contain the COVID-19 Epidemic. he American journal of bioethics : AJOB, 20(7), W7–W11. doi:https://doi.org/10.1080/15265161.2020.1764136
Dusek, V. (2009). What Is Technology? Defining or Characterizing Technology. En V. Dusek, Philosophy of Technology: An Introduction (págs. 26-37). Oxford: Blackwell Publishing.
Dutt, R. (2020). The impact of artificial intelligence on healthcare insurances. Artificial Intelligence in Healthcare, 271-293. doi:DOI:10.1016/B978-0-12-818438 7.00011-3
Eichler, E. (2019). Genetic Variation, Comparative Genomics,and the diagnosis of disease. N Engl J Med, 381, 64-74.
Ekmekci, P., & Arda, B. (2020). Artificial Intelligence and Bioethics. Switzerland: Springer.doi.org/10.1007/978-3-030-52448-7
Emanuel , E. J., & Dubler, N. N. (1995). Preserving the physician-patient relationship in the era of managed care. JAMA, 273(4), 323–329.
Emery, Fred E, & Eric L. Trist. (1960). Socio-technical systems. Management science, models and techniques, 2, 83-97.
Emanuel, E. J., & Emanuel, L. L. (1992). Four Models of the Physician-Patient Relationship. JAMA.
Fernandes Martins, M., & Murry, L. T. (2022). Direct-to-consumer genetic testing: an updated systematic review of healthcare professionals’ knowledge and views, and ethical and legal concerns. European Journal of Human Genetics, 30, 1331–1343. doi: https://doi.org/10.1038/s41431-022-01205-8
Ferraris, M. (1998). La Hermenéutica. Taurus.
Fineman, M. (2008). The Vulnerable Subject: Anchoring Equality in the Human Condition. Yale Journal of Law & Feminism, 20(1), 1-24.
FirstPost. (05 de 06 de 2023). AI Groom: US woman creates AI bot, marries it and starts family, calls 'him' the perfect husband. Obtenido de https://www.firstpost.com/world/us-woman-creates-ai-bot-marries-it-and-starts-family-calls-him-the-perfect-husband-12693012.html
Fogg, B. J. (2003). Persuasive Technology: Using Computers to Change What We Think and Do. San Francisco, Estados Unidos: Morgan Kaufmann Publishers.
Forbes. (29 de 04 de 2021). How Digital Transformation Impacts The Future Of Career Transitions. Obtenido de https://www.forbes.com/sites/forbeshumanresourcescouncil/2021/04/29/how-digital-transformation-impacts-the-future-of-career-transitions/?sh=59ab9f5e3f71
Forbes Colombia. (29 de 05 de 2022). Cómo abogados, psicólogos y hasta filósofos se están convirtiendo desarrolladores de software y científicos de datos. Obtenido de Forbes: https://forbes.co/2022/05/29/editors-picks/como-abogados-psicologos-y-hasta-filosofos-se-estan-convirtiendo-desarrolladores-de-software-y-cientificos-de-dato
Fosso Wamba, S., Bawack, E., Guthrie, C., Queiroz, M., & André Carillo, K. (2021). Are we preparing for a good AI society? A bibliometric review and research agenda. Technological Forecasting and Social Change, 167(120482), 1-27. doi:doi:doi.org/10.1016/j.techfore.2020.120482
Foucault, M. (2002). Vigilar y castigar: Nacimiento de la prisión. Siglo XXI. Obtenido de https://www.ivanillich.org.mx/FoucaultCastigar.pdf
Fox, B. (29 de 06 de 2022). Healthcare Companies Spent More on Lobbying Than Any Other Industry Last Year. Obtenido de Promarket: https://www.promarket.org/2022/06/29/healthcare-companies-spent-more-on-lobbying-than-any-other-industry-last-year
Future Of Life Institute. (2017). Principios de IA de Asilomar. Obtenido de https://futureoflife.org/open-letter/ai-principles/
Gartner. (s.f.). Hype Cycle de Gartner. Obtenido de https://www.gartner.es/es/metodologias/hype-cycle
Gaube, S., Suresh, H., Raue, M., Merritt, A., Berkowitz, S., Lermer, E., . . . Ghassemi , M. (2021). Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digital Medicine , 4(31). doi:doi:doi.org/10.1038/s41746-021-00385-9
Gegúndez Fernández, J. (2018). Technification versus humanisation. Artificial intelligence for medical diagnosis. Arch Soc Esp Oftalmol (Engl Ed), 93(3), e17-e19. DOI: 10.1016/j.oftal.2017.11.004.
Giddens, A. (1991). Modernity and Self Identity: Self and Society in the Late Modern Age.Cambridge: Polity Press.
Github. (2020). Citaciones abiertas/coronavirus. Obtenido de https://github.com/opencitations/coronavirus/blob/master/data/dois_no_ref.csv
Gleeson, D., Townsend, B., Tenni, B., & Phillips, T. (2023). Global inequities in access to COVID-19 health products and technologies: A political economy analysis. Health Place, 83(103051). doi:DOI: 10.1016/j.healthplace.2023.103051
Glenn, J. C., & Gordon, T. J. (30 de 04 de 2009). Futures Research Methodology — Version 3.0. (T. M. Project, Ed.) Washington, D.C.
Goasduff, L. (11 de 04 de 2022). Choose Adaptive Data Governance Over One-Size-Fits-All for Greater Flexibility. Obtenido de Gartner: https://www.gartner.com/en/articles/choose-adaptive-data-governance-over-one-size-fits-all-for-greater-flexib
Gómez-Vírseda, C, de Maeseneer, Y, & Gastmans, C. (2020). Relational autonomy in end-of-life care ethics: a contextualized approach to real-life complexities. BMC Med Ethics, 21(50). doi:https://doi.org/10.1186/s12910-020-00495-1
Goodday, S. M., Geddes, J. R., & Friend, S. H. (2021). Disrupting the power balance between doctors and patients in the digital era. The Lancet. Digital health, 3(3), e142–e143. doi:https://doi.org/10.1016/S2589-7500(21)00004-2
Gouldner, A. (1960). The Norm of Reciprocity: A Preliminary Statement. American Sociological Review, 25(2), 161-178. doi:https://doi.org/10.2307/2092623
Gravett, W. (2022). Digital neocolonialism: the Chinese surveillance state in Africa. African Journal of International and Comparative Law, 30(1), 39-58.
Gröger, C. (2021). There is no AI without data. Association for Computing Machinery, 64(11), 98–108. doi:https://doi.org/10.1145/3448247
Gu, H. (2023). Data, Big Tech, and the New Concept of Sovereignty. J OF CHIN POLIT SCI . doi:https://doi.org/10.1007/s11366-023-09855-1
H+Pedia. (2023). Obtenido de https://hpluspedia.org/wiki/File:Transhumanism_Futures_Wheel.png
Haan, M, Ongena, Y. P., Hommes, S, & Kwee, T. C . (2019). A Qualitative Study to Understand Patient Perspective on the Use of Artificial Intelligence in Radiology. Journal of the American College of Radiology : JACR, 16(10), 1416–1419. doi:https://doi.org/10.1016/j.jacr.2018.12.043
Haas, B. (04 de 04 de 2017). Chinese man 'marries' robot he built himself. Obtenido de The Guardian: https://www.theguardian.com/world/2017/apr/04/chinese-man-marries-robot-built-himself
Haenlein, M., & Kaplan, A. (2019). A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. California Management Review, 61(4), 5-14. doi:doi:doi.org/10.1177/0008125619864925
Hamet, P., & Tremblay, J. (2017). Artificial intelligence in medicine. Metabolism: clinical and experimental, 69(Suppl), S36-S40. doi:https://doi.org/10.1016/j.metabol.2017.01.011
Hannibal, G. (2021). Focusing on the Vulnerabilities of Robots through Expert Interviews for Trust in Human-Robot Interaction. Association for Computing Machinery, 288–293. doi:10.1145/3434074.3447178
Hannibal, G. (s.f.). Trust in HRI: Probing Vulnerability as an Active Precondition. 2021. Obtenido de https://www.youtube.com/watch?v=DzpdRXZgMwk
Hannibal, Glenda, & Weiss, Astrid. (s.f.). Exploring the Situated Vulnerabilities of Robots for Interpersonal Trust in Human-Robot Interaction. En Trust in Robots (págs. 33–56). TU Wien Academic Press. doi:https://doi.org/10.34727/2022/isbn.978-3-85448-052-5_2
Hansson, S. O. (2009). Philosophy of Medical Technology. En A. Meijers, Philosophy of Technology and Engineering Sciences (Vol. 9). Amsterdam, North Holland.
Harari, Y. N. (2017). Homo Deus: A Brief History of Tomorrow. Harper.
Harari, Y. N. (10 de 2018). Why Technology Favors Tyranny. The Atlantic. Obtenido de https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
Harris, J. (2023). An AI-Enhanced Electronic Health Record Could Boost Primary Care Productivity. JAMA, 330(9), 801-802. doi:DOI: 10.1001/jama.2023.14525
Hassanpour, A., Nguyen, A., Rani, A., Shaikh, S., Xu, Y., & Zhang, H. (2022). Big Tech Companies Impact on Research at the Faculty of Information Technology and Electrical Engineering. Computers and Society. doi:https://arxiv.org/abs/2205.01039
Hazarika, I. (2020). Artificial intelligence: opportunities and implications for the health workforce. International health, 12(4), 241–245. doi:DOI: 10.1093/inthealth/ihaa007
Heidegger, M. (1999). The Question Concerning Technology. En M. Heidegger, Basic Writings.Londres: Routledge.
Hendrikse, R., Adriaans, I., Klinge, T., & Fernández, R. (2021). The Big Techification of Everything. Science as Culture, 31(1), 59-71. doi:DOI:10.1080/09505431.2021.1984423
Herkert, J. (2005). Ways of thinking about and teaching ethical problem solving: Microethics and macroethics in engineering. SCI ENG ETHICS , 11, 373–385. doi:https://doi.org/10.1007/s11948-005-0006-3
Heyen, N.B., & Salloch, S. (2021). The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. BMC Medical Ethics, 22(112). doi:https://doi.org/10.1186/s12910-021-00679-3
Hinde, R. A. (1976). Interactions, Relationships and Social Structure. Man, 11(1), 1-17. doi:https://doi.org/10.2307/2800384
Hinde, R. A. (1997). Relationships: A Dialectical Perspective. Psychology Press.
Hoc, J. M. (2000). From human-machine interaction to human-machine cooperation. Ergonomics, 47(7), 833-843. doi:doi:10.1080/001401300409044
Hofmann, B. (2001). The technological invention of disease. Journal of Medical Ethics: Medical Humanities, 27, 10-19.
Hommels, A., Mesman, J., & Bijker, W. E. (2014). Vulnerability Technological Cultures: New Directions in Research and Governance. The MIT press. doi:https://doi.org/10.7551/mitpress/9209.001.0001
Idhe, D. (2009). Postphenomenology and Technoscience: The Peking University Lectures. State University New York Press.
Ienca, M., & Vayena, E. (2020). On the responsible use of digital data to tackle the COVID-19 pandemic. Nature Medicine, 26, 463–464. doi:doi:https://doi.org/10.1038/s41591-020-0832-5
Ihde, D. (1990). Technology and the Lifeworld: From Garden to Earth. Indiana University Press.
Ihde, D. (2015). Acoustic Technics. Lexington Books.
Ihde, D. (2019). Medical Technics. Minnesota: University of Minnesota Press.
Nature Machine Intelligence. (2021). People have the AI power. Nat Mach Intell, 3, 275. doi:https://doi.org/10.1038/s42256-021-00340-z
Jeanne, L., Bourdin, S., Nadou, F., & Noiret , G. (2023). Economic globalization and the COVID-19 pandemic: global spread and inequalities. GeoJournal, 88, 1181–1188. doi:https://doi.org/10.1007/s10708-022-10607-6
Jecker, N. S. (2021). Nothing to be ashamed of: sex robots for older adults with disabilities. Journal of Medical Ethics, 47(1). doi:doi:dx.doi.org/10.1136/medethics-2020-106645
Jinek, M., Chylinski, K., Fonfara, I., Hauer, M., Doudna, J., & Charpentier, E. (2012). A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science, 337(6096), 816-821. doi:10.1126/science.1225829
Johannsen, G. (2019). Human-Machine Interaction (Vol. 21). CIRP Encyclopedia of Production Engineering.
Jonas, H. (2000). The Vulnerability of the Human Condition. En &. P. J. Rendtorff, Basic Ehical Principle in Bioethics and Biolaw. Autonomy, Dignity, Integrity and Vulnerability (págs. 115-122). Centre for Ethics and Law/Institut Borja de Bioética.
Kaltenbach, T. (2014). The impact of E-health on the pharmaceutical industry. International Journal of Healthcare Management, 7(4), 223-225. doi:doi:10.1179/2047970014Z.000000000103
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25. doi:doi:doi.org/10.1016/j.bushor.2018.08.004
Karches, K. E. (2018). Against the iDoctor: why artificial intelligence should not replace physician judgment. Theoretical Medicine and Bioethics, 39, 91–110. doi:https://doi.org/10.1007/s11017-018-9442-3
Kasperbauer , T. (2020). Conflicting roles for humans in learning health systems and AI-enabled healthcare. Journal Of Evaluation in Clinical Practice, 27(3). doi:https://doi.org/10.1111/jep.13510
Kickbusch , I., Piselli, D., Agrawal, A., Balicer, R., Banner, O., Adelhardt , M., . . . Xue, L. (2021). The Lancet and Financial Times Commission on governing health futures 2030: growing up in a digital world. Lancet (London, England), 398(10312), 1727–1776. doi:https://doi.org/10.1016/S0140-6736(21)01824-9
Kim, J. (2005). Physicalism, or Something Near Enough. Princeton Monographs in Philosophy.
Kittay, E. (2011). The Ethics of Care, Dependence, and Disability. Ratio Juris, 24(1), 49-58.
Komasawa, N., & Yokohira, M. (2023). Simulation-Based Education in the Artificial Intelligence Era. Cureus, 15(6), e40940. doi:https://doi.org/10.7759/cureus.40940
Koncz, A. (13 de 09 de 2022). The First Database Of Tech Giants Collaborating With Healthcare: What Can We Learn? The Medical Futurist. Obtenido de https://medicalfuturist.com/the-first-database-of-tech-giants-collaborating-with-healthcare-what-can-we-learn/
Koncz, A. (19 de 10 de 2023). Digital Health Anxiety: When Wellness Tech Becomes A Stressor. Obtenido de The Medical Futurist: https://medicalfuturist.com/digital-health-anxiety-when-wellness-tech-becomes-a-stressor/
Kshetri, N. (2023). ChatGPT in Developing Economies. IT Professional, 25(2), 16–19. doi:https://doi.org/10.1109/MITP.2023.3254639
Kudina, O. (2019). The technological mediation of morality: Value dynamism, and the complex. PhD dissertation. Enschede: University of Twente.
Kudina, O., & Coeckelbergh, M. (2021). Alexa, define empowerment: voice assistants at home, appropriation and technoperformances. Journal of Information, Communication and Ethics in Society, 19(2), 299-312. doi:doi:10.1108/JICES-06-2020-0072
Kumar, K., Kumar, N., & Rachna, S. R. (2020). Role of IoT to avoid spreading of COVID-19. International Journal of Intelligent Networks, 1, 32-35. doi:doi:https://doi.org/10.1016/j.ijin.2020.05.002.
Lage Gonçalves, L., Nardi, A., & Spear King, A. (2023). Digital Dependence in Organizations: Impacts on the Physical and Mental Health of Employees. Clinical Practice And Epidemiology In Mental Health, 19. doi:DOI: 10.2174/17450179-v19-e230109-2022-17
Latour, B. (1987). Science in Action: how to Follow Scientists and Engineers through Society.Cambridge: Harvard University Press.
Lee, K.-F., & Li, K. (2018). AI Superpowers: China, Silicon Valley, and the New World Order. Boston, Estados Unidos: Houghton Mifflin Harcourt.
Lee, N. (2019). Brave New World of Transhumanism. En N. Lee, The Transhumanism Handbook. Switzerland: Springer Nature.
Leite, H., Hodgkinson, I. R., & Gruber, T. (2020). New development: ‘Healing at a distance’—telemedicine and COVID-19. Public Money & Management, 40(6), 483-485. doi:doi:10.1080/09540962.2020.1748855
Lerner, I., Veil, R., Nguyen, D.-P., Phuc Luu, V., & Jantzen, R. (2018). Revolution in Health Care: How Will Data Science Impact Doctor–Patient Relationships? Frontiers in Public Health, 6. doi:https://doi.org/10.3389/fpubh.2018.00099
Levinas, E. (1972). Humanismo del otro hombre. Buenos Aires, Argentina: Siglo XXI.
Lin, S. Y., Mahoney, M. R., & Sinsky, C. A. (2019). Ten Ways Artificial Intelligence Will Transform Primary Care. J GEN INTERN MED, 34, 626–1630. doi:https://doi.org/10.1007/s11606-019-05035-1
Liu, X., Keane, P., & Denniston, A. (2018). Time to regenerate: the doctor in the age of artificial intelligence. Journal of the Royal Society of Medicine, 111(4), 113-116. doi:https://doi.org/10.1177/0141076818762648
Loten, A., & Bousquette, I. (01 de 09 de 2022). Tech Companies Say Going Private Comes With Benefits. Obtenido de The Wall Street Journal.
Luchini, C., Pea, A., & Scarpa, A. (2022). Artificial intelligence in oncology: current applications and future perspectives. British Journal of Cancer, 126, 4–9 . doi:https://doi.org/10.1038/s41416-021-01633-1
Lupton, D. (2020). A more-than-human approach to bioethics: The example of digital health. Bioethics, 34, 1-8. doi:doi:10.1111/bioe.12798
Luxton, D. (2014). Recommendations for the ethical use and design of artificial intelligent care providers. Artificial intelligence in medicine, 62(1), 1-10. doi:DOI: 10.1016/j.artmed.2014.06.004
Mackenzie, C, Rogers, W, & Dodds, S. (2013). Vulnerability: New Essays in Ethics and Feminist Philosophy. Oxford University Press.
Maliandi, R. (2010). Fenomenología de la conflictividad. Las Cuarenta. DOI:10.24316/prometeica.v0i3.64
McAninch , A. (2023). Go Big or Go Home? A New Case for Integrating Micro-ethics and Macro-ethics in Engineering Ethics Education. Science and engineering ethics, 29(3), 20. doi:https://doi.org/10.1007/s11948-023-00441-5
McKendrick, J. (05 de 07 de 2016). Is All-Cloud Computing Inevitable? Analysts Suggest It Is. Obtenido de Forbes: https://www.forbes.com/sites/joemckendrick/2016/07/05/is-all-cloud-computing-inevitable-analysts-suggest-it-is/?sh=71497bccebf0
McWhinney, I. R. (1978). Medical Knowledge and the Rise of Technology. The Journal of Medicine and Philosophy, 3(4), 293-304.
Mello, M. M., & Wang, J. C. (2020). Ethics and governance for digital disease surveillance. Science, 368(6494), 951-954. doi: doi:10.1126/science.abb9045
Menéndez, E. (2020). Modelo médico hegemónico: tendencias posibles y tendencias más o menos imaginarias. Salud Colectiva, 16. doi:https://doi.org/10.18294/sc.2020.2615
Miller, B., Blanks, W., & Yagi , B. (2023). The 510(k) Third Party Review Program: Promise and Potential. J Med Syst , 47(93). doi:https://doi.org/10.1007/s10916-023-01986-5
Mims, C. (08 de 04 de 2013). Is Big Tech’s R&D Spending Actually Hurting Innovation in the U.S.? Obtenido de The Wall Street Journal: https://www.wsj.com/articles/is-big-techs-r-d-spending-actually-hurting-innovation-in-the-u-s-acfa004e
Mittelstadt, B. (2021). The impact of artificial intelligence on the doctor-patient relationship. Council of Europe. Obtenido de https://www.coe.int/en/web/bioethics/report-impact-of-ai-on-the-doctor-patient-relationship
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philos. Technol, 33, 659–684.
Moreno Hernández, H. (2008). Profanación a la biopolítica: a propósito de giorgio agamben. Revista de Ciencias Sociales de la Universidad Iberoamericana, III(6).
Morozov, E. (2013). To save everithing, click here: the folly of technological solutionism. New York: PublicAffairs.
Morton , C., Smith , S., Lwin, T., George, M., & Williams, M. (2019). Computer Programming: Should Medical Students Be Learning It? JMIR Med Educ , 5(1).
Muehlematter, U., Bluethgen, C., & Vokinger, K. (2023). FDA-cleared artificial intelligence and machine learning-based medical devices and their 510(k) predicate networks. The Lancet Digital Healt, 5(9). doi:DOI:https://doi.org/10.1016/S2589-7500(23)00126-7
Münch, C., Marx , E., Benz, L., Hartmann, E., & Matzner, M. (2022). Capabilities of digital servitization: Evidence from the socio-technical systems theory. Technological Forecasting and Social Change, 176(121361). doi:https://doi.org/10.1016/j.techfore.2021.121361.
Munn, L. (2023). The uselessness of AI ethics. AI and Ethics, 3, 869–877. doi:https://doi.org/10.1007/s43681-022-00209-w
Murphy, K., Di Ruggiero, E., Upshur, R., Willison, D. J., Malhotra, N., Cai, J., . Gibson, J. (2021). Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics, 22(14). doi:https://doi.org/10.1186/s12910-021-00577-8
Neves, M. P. (2007). Sentidos da vulnerabilidade; característica, condição, princípio. Revista Brasileira de Bioética, 2, 157-172.
Ng, A. (2019). IA para todos, by deeplearning.ai. Obtenido de Coursera: www.coursera.org
North Whitehead, A. (1978). Process and Reality: An Essay in Cosmology. The Free Press.
Nundy S, Montgomery T, & Wachter RM. (2019). Promoting Trust Between Patients and Physicians in the Era of Artificial Intelligence. JAMA, 322(6), 497–498. doi:10.1001/jama.2018.20563
Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the Future — Big Data, Machine Learning, and Clinical Medicine. New England Journal of Medicine, 375(13), 1216-1219. doi.org/10.1056/NEJMp1606181
Olaronke, I., Oluwaseun, O., & Rhoda, I. (2017). State of the art: a study of human-robot interaction in healthcare. International Journal of Information Engineering and Electronic Business, 9(3), 43-55. doi:doi:10.5815/ijieeb.2017.03.06
Oosterlaken, I. (2014). Human Capabilities in Design for Values. En van den Hoven, J, Vermaas, P, & van de Poel, I, Handbook of Ethics, Values, and Technological Design (págs. 1–26). Springer. Obtenido de https://doi.org/10.1007/978-94-007-6994-6_7-1
Ostherr, K. (2020). Artificial Intelligence and Medical Humanities. J Med Humanit, 43, 211–232. doi:https://doi.org/10.1007/s10912-020-09636-4
Pellegrino, E. D., & Thomasma, D. C. (1993). The Virtues in Medical Practice. New York, Estados Unidos: Oxford University Press.
Pelluchon, C. (2013). Elementos para una ética de la vulnerabilidad. Los hombres, los animales, la naturaleza. Bogotá, Colombia: U. Javeriana; Universidad El Bosque.
Pelluchon, C. (2013). La Autonomia Quebrada: Bioética y filosofía. Bogotá, Colombia: Universidad El Bosque.
Pelluchon, C. (2015). Elementos para una ética de la vulnerabilidad. Los hombres, los animales, la naturaleza. Pontificia Universidad Javeriana.
Pelluchon, C. (2016). Taking Vulnerability Seriously: What Does It Change for Bioethics and Politics? En A. Masferrer,, & E. García-Sánchez (Edits.), Human Dignity of the Vulnerable in the Age of Rights (Vol. 55). Obtenido de https://doi.org/10.1007/978-3-319-32693-1_13
Perri, L. (23 de 08 de 2023). What’s New in the 2023 Gartner Hype Cycle for Emerging Technologies. Gartner. Obtenido de https://www.gartner.com/en/articles/what-s-new-in-the-2023-gartner-hype-cycle-for-emerging-technologie
Pharmaceutical-Technology. (09 de 03 de 2021). COVID-19 accelerated digital transformation of the pharma industry by five years: Poll. Obtenido de https://www.pharmaceutical-technology.com/news/covid-19-accelerated-digital-transformation-of-the-pharma-industry-by-five-years-poll
Pinto Bustamante, B., Riaño-Moreno, J., Clavijo Montaya, H., Cárdenas Galindo, M., & Campos Figueredo, W. (2023). Bioethics and artificial intelligence: between deliberation on values and rational choice theory. Robot AI, 10. doi:https://doi.org/10.3389/frobt.2023.1140901
Ponce-Correa, A., Ospina-Ospina, A., & Correa-Gutierrez, R. (s.f.). Curriculum Analysis Of Ethics In Engineering: A Case Study. DYNA, 89(222), 67-73. doi:https://doi.org/10.15446/dyna.v89n222.101800
Prabhu, S. P. (2019). Ethical challenges of machine learning and deep learning algorithms. The Lancet. Oncology, 20(5), 621–622. doi:https://doi.org/10.1016/S1470-2045(19)30230-X
Prainsack, B, & Forgó, N. (2022). Why paying individual people for their health data is a bad idea. Nature Medicine, 28, 1989–1991. doi:https://doi.org/10.1038/s41591-022-01955-4
Prainsack, B., & Van Hoyweghen , I. (2020). Shifting Solidarities: Personalisation in Insurance and Medicine. Cham: Palgrave Macmillan. doi:https://doi.org/10.1007/978-3-030-44062-6_7
Prainsack, B., El-Sayed, S., Forgó, N., Szoszkiewicz, Ł., & Baumer, P. (2022). Data solidarity: a blueprint for governing health futures. The Lancet Digital Healt, 4(11), E773-E774. doi:DOI:https://doi.org/10.1016/S2589-7500(22)00189-3
Prati, Andrea, Shan, Caifeng, & Wang, Kevin I-Kai. (2019). Sensors, vision and networks: From video surveillance to activity recognition and health monitoring. Journal of Ambient Intelligence and Smart Environments, 11(1), 5-22. doi:10.3233/AIS-180510
Psychologs Magazine. (06 de 09 de 2021). Smart Watches And Mental Health. Obtenido de Psychologs, India's First Mental Health Magazine: https://www.psychologs.com/smartwatches-and-mental-health/
Rackimuthu, S., Narain, K., Lal, A., Nawaz, F., Mohanan,, P., Yasir Essar, M., & Ashworth, H. (2022). Redressing COVID-19 vaccine inequity amidst booster doses: charting a bold path for global health solidarity, together. Globalization and Health, 18(23). doi:https://doi.org/10.1186/s12992-022-00817-5
Rampton, V. (2020). Artificial intelligence versus clinicians. BMJ (Clinical research ed.), 369, m1326. doi:10.1136/bmj.m1326
Raphael, B. (1976). The Thinking Computer: Mind Inside Matter. San Francisco: Freeman and Company.
Reddy, H., Joshi, S., Joshi, A., & Wagh, V. (2022). A Critical Review of Global Digital Divide and the Role of Technology in Healthcare. Cureus , 14(9), e29739. doi:10.7759/cureus.29739
Rezaev, A. V., Starikov, V. S, & Tregubova, N. D. (2018). Sociological Considerations on Human-Machine Interactions: from Artificial Intelligence to Artificial Sociality. Conferencia Internacional sobre Industria, Negocios y Ciencias Sociales (págs. 364-371). Tokio: Waseda University.
Ricoeur, P. (1996). Les trois niveaux du jugement médical. Esprit.
Ricoeur, P. (2010). Del texto a la acción. Mexico, D.F: Fondo de Cultura Económica.
Rostislavovna Schislyaeva, E., & Saychenko, O. (2022). Labor Market Soft Skills in the Context of Digitalization of the Economy. Social Sciences, 11(3). doi:10.3390/socsci11030091
Rousseau, J.-J. (2003). Sobre las ciencias y las artes. Madrid: Alianza Editorial.
Rowley, J. (2007). The wisdom hierarchy: representations of the DIKW hierarchy. Journal of Information Science, 33(2), 163–180. doi:DOI: 10.1177/0165551506070706
Ruíz, J., Cantú, G., Ávila, D., Gamboa, J. D., Juarez, L., de Hoyos, A., . . . Garduño, J. (2015). Revisión de modelos para el análisis de dilemas éticos. Boletín Médico del Hospital Infantil de México, 72(2), 89-98. doi:https://doi.org/10.1016/j.bmhimx.2015.03.006
Sappleton, N., & Takruri-Rizk, H. (2008). The Gender Subtext of Science, Engineering, and Technology (SET) Organizations: A Review and Critique. Women's Studies, 37(3). doi:https://doi.org/10.1080/00497870801917242
Savulescu, J. (2012). Moral Enhancement, Freedom, And The God Machine. The Monist, 95(3), 399-421. doi:doi:doi:10.5840/monist201295321
Schaper, M., Wöhlke, S., & Schicktanz , S. (2019). “I would rather have it done by a doctor”laypeople’s perceptions of direct-to-consumer genetic testing (DTC GT) and its ethical implications. Med Health Care and Philos, 22, 31-40. doi:https://doi.org/10.1007/s11019-018-9837-y
Schoenhagen, P., & Mehta, N. (2017). Big data, smart computer systems, and doctor-patient relationship. European heart journal, 38(7), 508–510. doi:https://doi.org/10.1093/eurheartj/ehw217
Schwab, K. (26 de 02 de 2021). ‘This is bigger than just Timnit’: How Google tried to silence a critic and ignited a movement. Obtenido de Fast Company: https://www.fastcompany.com/90608471/timnit-gebru-google-ai-ethics-equitable-tech-movemen
Semana. (25 de 11 de 2014). Así controlan las instituciones y empresas de salud a los médicos. Obtenido de https://www.semana.com/nacion/articulo/las-eps-controlan-los-medicos-con-polemicos-metodos/409528-3/
Semana. (05 de 09 de 2020). Colombia, cada vez más rezagada en inteligencia artificial. Semana. Obtenido de https://www.semana.com/tecnologia/articulo/estados-unidos-y-china-los-primeros-en-inteligencia-artificial--noticias-hoy/701009
Shew, A. (2020). Ableism, Technoableism, and Future AI. IEEE Technology and Society Magazine, 39(1), 40-85. doi:doi: 10.1109/MTS.2020.2967492
Singhal, K., Azizi, S., Tu, T., Mahdavi, S. S., Wei, J., Chung, H. W., . . . Corrado, G. S. (2023). Large language models encode clinical knowledge. Nature, 620, 172–180. doi:https://doi.org/10.1038/s41586-023-06291-2
Sisk, B. A, & Baker, J. N. (2018). Microethics of Communication-Hidden Roles of Bias and Heuristics in the Words We Choose. JAMA pediatrics, 172(12), 1115–1116. doi:https://doi.org/10.1001/jamapediatrics.2018.3111
Sloterdijk, P. (2001). Normas sobre el parque humano. Una respuesta a la Carta sobre el humanismo de Heidegger. Madrid: Ediciones Siruela.
Sloterdijk, P. (2003). Esferas I. Burbujas. Microesferología. Ediciones Siruela.
Sloterdijk, P. (2012). Has de cambiar tu vida. Pre-Textos.
Smite, D., Brede Moe, N., Hildrum, J., Gonzalez-Huerta, J., & Mendez, D. (2023). Work-from home is here to stay: Call for flexibility in post-pandemic work policies. Journal of Systems and Software, 195(111552). doi:https://doi.org/10.1016/j.jss.2022.111552
Smits, M., Ludden, G., Peters, R., Bredie, S., van Goor, H., & Paul Verbeek, P. (2022). Values that Matter: A New Method to Design and Assess Moral Mediation of Technology. Design issues, 38(1), 39-54. doi:https://doi.org/10.1162/desi_a_00669
Solbakk, J. H. (2011). Vulnerabilidad: ¿un principio fútil o útil en la ética de la asistencia sanitaria? Revista Redbioética/UNESCO, 1(3), 89-101.
Srivastava, T., & Waghmare, L. (2020). Implications of Artificial Intelligence (AI) on Dynamics of Medical Education and Care: A Perspective. Journal of Clinical and Diagnostic Research, 14(3), JI01-JI02. doi:DOI: 10.7860/JCDR/2020/43293.13565
Srnicek, N. (2018). Capitalismo De Plataformas. Caja Negra.
Stahl, B. C. (2021). AI Ecosystems for Human Flourishing: The Recommendations. En Artificial Intelligence for a Better Future. SpringerBriefs in Research and Innovation Governance, 91–115. doi:https://doi.org/10.1007/978-3-030-69978-9_7
Starke, G., van den Brule, R., Elger, B., & Haselager, P. (2022). Intentional machines: A defence of trust in medical artificial intelligence. Bioethics: Special Issue: Promises And Challenges Of Medical Ai, 36(2), 154-161. doi:https://doi.org/10.1111/bioe.12891
Stempsey, W. E. (2006). Emerging Medical Technologies and Emerging Conceptions of Health. Theor Med Bioeth, 27, 227–243. doi:https://doi.org/10.1007/s11017-006-9003-z
Stiegler, B. (1994). La técnica y el tiempo I: el pecado de Epimeteo. Hondarribia: Argiraletxe Hiru.
Su, H., Lallo, A. D., Murphy, R. R., Taylor, R. H., Garibaldi, B. T., & Krieger, A. (2021). Physical human–robot interaction for clinical care in infectious environments. Nature Machine Intelligence, 3, 184-186. doi:https://doi.org/10.1038/s42256-021-00324-z
Susan, S. (2020). What COVID-19 Reveals About Twenty-First Century Capitalism: Adversity and Opportunity. Development, 63, 150–156. doi:https://doi.org/10.1057/s41301-020-00263-z
Takshi, S. (2021). Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions. Journal of law and health, 34(2), 215–251.
Taylor, L. (2021). There Is an App for That: Technological Solutionism as COVID-19 Policy in the Global North. En E. Aarts, M. Fleuren, M. Sitskoorn, & T. Wilthagen, The New Common (págs. 209–215). Switzerland: Springer Nature. doi:10.1007/978-3-030-65355-2_30
Teepe, G., Glase, E., & Reips, U.-D. (07 de 04 de 2023). Increasing digitalization is associated with anxiety and depression: A Google Ngram analysis. PLOS ONE. doi:https://doi.org/10.1371/journal.pone.0284091
ten Have, H. (2015). Respect for Human Vulnerability: The Emergence of a New Principle in Bioethics. Bioethical Inquiry, 12, 395–408. doi:https://doi.org/10.1007/s11673-015-9641-9
ten Have, H. (2016). Global Bioethics: An introduction (1 ed.). New York, Estados Unidos: Routledge.
The Economist. (20 de 06 de 2022). Alphabet is spending billions to become a force in health care. Obtenido de https://www.economist.com/business/2022/06/20/alphabet-is-spending-billions-to-become-a-force-in-health-care
The Guardian. (08 de 03 de 2015). Silicon Valley is cool and powerful. But where are the women? Obtenido de https://www.theguardian.com/technology/2015/mar/08/sexism-silicon-valley-wome
The New York Times. (23 de 07 de 2022). Google Fires Engineer Who Claims Its A.I. Is Conscious. Obtenido de https://www.nytimes.com/2022/07/23/technology/google-engineer-artificial-intelligence.html
Thomson, S., & Ip, E. C. (2020). COVID-19 emergency measures and the impending authoritarian pandemic. Journal of Law and the Biosciences, 7(1), lsaa064. doi:10.1093/jlb/lsaa064
Tiku, N. (11 de 06 de 2022). The Google engineer who thinks the company’s AI has come to life. Obtenido de The Washington Post: https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
Timmermans, S., & Berg, M. (2003). The practice of medical technology. Sociology of Health & Illness, 25(3), 97-114. doi:10.1111/1467-9566.00342
Toma, A., Senkaiahliyan, S., Lawler, P., Rubin, B., & Wang, B. (30 de 11 de 2023). Generative AI could revolutionize health care — but not if control is ceded to big tech. Obtenido de Nature: https://www.nature.com/articles/d41586-023-03803-y
Topol, E. (2014). The Patient Will See You Now: The Future of Medicine is in Your Hands. New York, Estados Unidos: Basic Books.
Topol, E. (2015). The Patient Will See You Now: The Future of Medicine is in Your Hands. J Clin Sleep Med, 11(6), 689–690. doi:10.5664/jcsm.4788
Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.
Topol, E. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56. doi:10.1038/s41591-018-0300-7
Tozzo, P., Angiola, F., Gabbin, A., Politi, C., & Caenazzo, L. (2021). The difficult role of Artificial Intelligence in Medical Liability: to err is not only human. La Clinica terapeutica, 172(6), 527-528. doi:10.7417/CT.2021.2372
Tran, B.-X., Thu Vu, G., Hai Ha, G., Vuong, Q.-H., Tung Ho, M., Vuong, T.-T., . . . M Ho, R. (2019). Global Evolution of Research in Artificial Intelligence in Health and Medicine: A Bibliometric Study. Journal of Clinical Medicine, 8(3), 360. doi:https://doi.org/10.3390/jcm8030360
Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3-38. doi:https://doi.org/10.1177/001872675100400101
Truong, A. (2019). Are you ready to be diagnosed without a human doctor? A discussion about artificial intelligence, technology, and humanism in dermatology. Int J Womens Dermatol, 5(4), 267–268. doi:10.1016/j.ijwd.2019.05.001
Tucker, G. (2015). Forming an ethical paradigm for morally sentient robots: Sentience is not necessary for evil. IEEE International Symposium on Technology and Society, 1-5. doi:10.1109/ISTAS.2015.7439420
Beck, U. (1998). Politics of Risk Society. En J. Franklin (Ed.), The Politics of Risk Society. Polity Press.
Umbrello, S. (2022). The Role of Engineers in Harmonising Human Values for AI Systems Design. Journal of Responsible Technology, 10. doi:https://doi.org/10.1016/j.jrt.2022.100031
UNESCO. (2005). Declaración Universal sobre Bioética y Derechos Humanos. Obtenido de UNESCO: http://portal.unesco.org/es/ev.php-URL_ID=31058&URL_DO=DO_TOPIC&URL_SECTION=201.html
UNESCO. (2015). Parte 1: Programa Temático. Programa de Educação em Ética. Obtenido de https://unesdoc.unesco.org/ark:/48223/pf0000163613_por
UNESCO. (16 de 01 de 2023). Data solidarity: why sharing is not always caring. Obtenido de https://en.unesco.org/inclusivepolicylab/analytics/data-solidarity-why-sharing-not-always-caring
Universidad Externado De Colombia. (13 de 04 de 2018). Colombia le apuesta a la implementación de la inteligencia artificial. Obtenido de https://www.uexternado.edu.co/derecho/colombia-le-apuesta-la-implementacion-de-la-inteligenciaartificial/#:~:text=Seg%C3%BAn%20explic%C3%B3%2C%20el%20Gobierno%20colombiano,ciento%20de%20las%20aplicaciones%20empresariales
University Of Denver. (s.f.). 18 Skills All Programmers Need to Have. Obtenido de https://bootcamp.du.edu/blog/programming-skills/
Vakkuri, V., Kemell, K.-K., Jantunen, M., & Abrahamsson, P. (2020). “This is Just a Prototype”: How Ethics Are Ignored in Software Startup-Like Environments. En V. Stray, R. Hoda, M. Paasivaara, & P. Kruchten, Agile Processes in Software Engineering and Extreme Programming (Vol. 383, págs. 195–210). Cham: Springer.
van de Poel, I. (2021). Design for value change. Ethics Inf Technol, 23, 27–31. doi:https://doi.org/10.1007/s10676-018-9461-9
Van Noorden, R., & Thompson, B. (2023). Audio long read: Medicine is plagued by untrustworthy clinical trials. How many studies are faked or flawed? doi:https://doi.org/10.1038/d41586-023-02627-0
van Weert, J. (2020). Facing frailty by effective digital and patient-provider communication. Patient Educ Couns, 103(3), 433-435. doi:10.1016/j.pec.2020.02.020
Vegas-Motta, E. (2020). Hermenéutica: un concepto, múltiples visiones. Revista Estudios Culturales, 13(25), 121-130.
Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. Pennsylvania: Pennsylvania State University Press.
Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: The University of Chicago press.
Verbeek, P.-P. (2020). Politicizing Postphenomenology. Philosophy of Engineering and Technology. doi:https://doi.org/10.1007/978-3-030-35967-6_9
Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What This Computer Needs Is a Physician: Humanism and Artificial Intelligence. JAMA, 319(1), 19–20. doi:10.1001/jama.2017.19198
Vermeer, L., & Thomas , M. (2020). Pharmaceutical/high-tech alliances; transforming healthcare? Digitalization in the healthcare industry. Strategic Direction, 36(12), 43-46. doi:https://doi.org/10.1108/SD-06-2020-0113
Viernes, F. (14 de 09 de 2021). Stop Saying ‘Data is the New Oil’. Obtenido de Medium: https://medium.com/geekculture/stop-saying-data-is-the-new-oil-a2422727218c
Vokinger, K. N., & Gasser, U. (2021). Regulating AI in medicine in the United States and Europe. Nature Machine Intelligence, 3(9), 738–739. doi:10.1038/s42256-021-00386-z
Vos, R., & Willems, D. L. (2000). Technology in medicine: ontology, epistemology, ethics and social philosophy at the crossroads. Theoretical Medicine and Bioethics, 21, 1–7.
Wang, J., Yang, J., Zhang, H., Lu, H., Skreta, M., Husić, M., . . . Brudno, M. (2022). PhenoPad: Building AI enabled note-taking interfaces for patient encounters. npj Digit. Med, 5(12). doi:https://doi.org/10.1038/s41746-021-00555-9
Wang, X., & Luan, W. (2022). Research progress on digital health literacy of older adults: A scoping review. Frontiers in Public Health, 10. doi: https://doi.org/10.3389/fpubh.2022.906089
Wartman, S. A. (2021). Medicine, Machines, and Medical Education. Academic Medicine: Journal of the Association of American Medical Colleges, 96(7), 947–950. doi:https://doi.org/10.1097/ACM.0000000000004113
Webster, P. (2023). Big tech companies invest billions in health research. Nature Medicine, 29, 1034–1037. doi:https://doi.org/10.1038/s41591-023-02290-y
Weiner, M., & Biondich, P. (2006). The influence of information technology on patient-physician relationships. Journal of general internal medicine, 21(Suppl 1), S35-9. doi:10.1111/j.1525-1497.2006.00307.x
Weinstein, J. (2019). Artificial Intelligence: Have You Met Your New Friends; Siri, Cortona, Alexa, Dot, Spot, and Puck. Spine, 44(1), 1-4. doi:https://doi.org/10.1097/BRS.0000000000002913
Wenk, H. (2020). Kommunikation in Zeiten künstlicher Intelligenz / Communication in the age of artificial intelligence. Gefässchirurgie, 25. doi:10.1007/s00772-020-00644-1
Whitelaw, S., Mamas, M. A., Topol, E., & Spall, H. (2020). Applications of digital technology in COVID-19 pandemic planning and response. The Lancet Digital Health, 2(8), e435-e440. doi:10.1016/S2589-7500(20)30142-4
WHO. (09 de 09 de 2020). Tracking COVID-19: Contact Tracing in the Digital Age. Obtenido de World Health Organization: https://www.who.int/news-room/feature-stories/detail/tracking-covid-19-contact-tracing-in-the-digital-age#:~:text=contact%20tracing%20is%20the%20process,the%20last%20two%20weeks
WHO. (2021). Ethics and governance of artificial intelligence for health. Obtenido de World Health Organization: https://www.who.int/publications/i/item/9789240029200
WHO. (28 de 06 de 2021). Ética y gobernanza de la inteligencia artificial para la salud. Obtenido de https://www.who.int/publications/i/item/9789240029200
Wu, E., Wu, K., Daneshjou, R., Ouyang, D., Ho, D. E., & Zou, J. (2021). How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nature Medicine, 27, 582-584. doi:https://doi.org/10.1038/s41591-021-01312-x
Xolocotzi Yáñez, Á. (2020). La verdad del cuerpo. Heidegger y la ambigüedad de lo corporal. Universidad de Antioquia. DOI: https://doi.org/10.17533/udea.ef.n61a09
Xu, W., & Ouyang, F. (2022). The application of AI technologies in STEM education: a systematic review from 2011 to 2021. International Journal of STEM Education, 9(59). doi:https://doi.org/10.1186/s40594-022-00377-5
Yee, V., Bajaj, S., & Cody Stanford, F. (2022). Paradox of telemedicine: building or neglecting trust and equity. The Lancet Digital Health, 4(7), E480-E481. doi:https://doi.org/10.1016/S2589-7500(22)00100-5
Zhang, P. (2010). Advanced Industrial Control Technology. William Andrew Publishing.
Zwart, H. (2008). Challenges of Macro-ethics: Bioethics and the Transformation of Knowledge Production. Bioethical Inquiry, 5, 283–293. doi:https://doi.org/10.1007/s11673-008-9110-9
dc.rights.local.spa.fl_str_mv Acceso cerrado
dc.rights.accessrights.none.fl_str_mv info:eu-repo/semantics/closedAccess
http://purl.org/coar/access_right/c_14cb
rights_invalid_str_mv Acceso cerrado
http://purl.org/coar/access_right/c_14cb
eu_rights_str_mv closedAccess
dc.format.mimetype.none.fl_str_mv application/pdf
dc.publisher.program.spa.fl_str_mv Doctorado en Bioética
dc.publisher.grantor.spa.fl_str_mv Universidad El Bosque
dc.publisher.faculty.spa.fl_str_mv Departamento de Bioética
institution Universidad El Bosque
bitstream.url.fl_str_mv https://repositorio.unbosque.edu.co/bitstreams/26dfbb23-dcb3-4518-922a-57326e9a2d2e/download
https://repositorio.unbosque.edu.co/bitstreams/de7722b3-9772-49bb-9b77-f39d49605864/download
https://repositorio.unbosque.edu.co/bitstreams/e5fcf422-5a7e-4b6f-a716-b93d3116960a/download
https://repositorio.unbosque.edu.co/bitstreams/21221828-bc1e-47d1-a4c6-c150fb71d3c5/download
https://repositorio.unbosque.edu.co/bitstreams/8643bf99-d385-4f9d-82e9-43976efa3518/download
https://repositorio.unbosque.edu.co/bitstreams/7cef085e-1b0b-421f-8209-44e35a90d439/download
bitstream.checksum.fl_str_mv 1225ec77593e308fdf0afe39822c7fea
17cc15b951e7cc6b3728a574117320f9
45ef8d169dba1706ebae1e22ae706451
5643bfd9bcf29d560eeec56d584edaa9
b3fa90e2b273aaf9436c7c860c221841
7b9823febb1476d002a54e9ad10023d2
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional Universidad El Bosque
repository.mail.fl_str_mv bibliotecas@biteca.com
_version_ 1828164441007456256
Taking Vulnerability Seriously: What Does It Change for Bioethics and Politics? En A. Masferrer,, & E. García-Sánchez (Edits.), Human Dignity of the Vulnerable in the Age of Rights (Vol. 55). Obtenido de https://doi.org/10.1007/978-3-319-32693-1_13Perri, L. (23 de 08 de 2023). What’s New in the 2023 Gartner Hype Cycle for Emerging Technologies. Gartner. Obtenido de https://www.gartner.com/en/articles/what-s-new-in the-2023-gartner-hype-cycle-for-emerging-technologiePharmaceutical-Technology. (09 de 03 de 2021). COVID-19 accelerated digital transformation of the pharma industry by five years: Poll. Obtenido de https://www.pharmaceutical technology.com/news/covid-19-accelerated-digital-transformation-of-the-pharma industry-by-five-years-pollPinto Bustamante, B., Riaño-Moreno, J., Clavijo Montaya, H., Cárdenas Galindo, M., & Campos Figueredo, W. (2023). Bioethics and artificial intelligence: between deliberation on values and rational choice theory. Robot AI, 10. doi:https://doi.org/10.3389/frobt.2023.1140901Ponce-Correa, A., Ospina-Ospina, A., & Correa-Gutierrez, R. (s.f.). Curriculum Analysis Of Ethics In Engineering: A Case Study. DYNA, 89(222), 67-73. doi:https://doi.org/10.15446/dyna.v89n222.101800Prabhu, S. P. (2019). Ethical challenges of machine learning and deep learning algorithms. The Lancet. Oncology, 20(5), 621–622. doi:https://doi.org/10.1016/S1470-2045(19)30230-XPrainsack, B, & Forgó, N. (2022). Why paying individual people for their health data is a bad idea. Nature Medicine, 28, 1989–1991. doi:https://doi.org/10.1038/s41591-022-01955-4Prainsack, B., & Van Hoyweghen , I. (2020). Shifting Solidarities: Personalisation in Insurance and Medicine. Cham: Palgrave Macmillan. doi:https://doi.org/10.1007/978-3-030-44062-6_7Prainsack, B., El-Sayed, S., Forgó, N., Szoszkiewicz, Ł., & Baumer, P. (2022). Data solidarity: a blueprint for governing health futures. The Lancet Digital Healt, 4(11), E773-E774. doi:DOI:https://doi.org/10.1016/S2589-7500(22)00189-3Prati, Andrea, Shan, Caifeng, & Wang, Kevin I-Kai. (2019). Sensors, vision and networks: From video surveillance to activity recognition and health monitoring. Journal of Ambient Intelligence and Smart Environments, 11(1), 5-22. doi:10.3233/AIS-180510Psychologs Magazine. (06 de 09 de 2021). Smart Watches And Mental Health. Obtenido de Psychologs Indian's Firts Mental Healt: https://www.psychologs.com/smartwatches-and mental-health/Rackimuthu, S., Narain, K., Lal, A., Nawaz, F., Mohanan,, P., Yasir Essar, M., & Ashworth, H. (2022). Redressing COVID-19 vaccine inequity amidst booster doses: charting a bold path for global health solidarity, together. Globalization and Health, 18(23). doi:https://doi.org/10.1186/s12992-022-00817-5Rampton V. (2020). Artificial intelligence versus clinicians. BMJ (Clinical research ed.), 3(369).DOI: 10.1136/bmj.m1326Raphael, B. (1976). The Thinking Computer: Mind Inside Matter. San Francisco: Freeman and Company.Reddy, H., Joshi, S., Joshi, A., & Wagh, V. (2022). A Critical Review of Global Digital Divide and the Role of Technology in Healthcare. Cureus , 14(9), e29739. doi:10.7759/cureus.29739Rezaev, A. V., Starikov, V. S, & Tregubova, N. D. (2018). Sociological Considerations on Human-Machine Interactions: from Artificial Intelligence to Artificial Sociality. Conferencia Internacional sobre Industria, Negocios y Ciencias Sociales (págs. 364-371). Tokio: Waseda University.Ricoeur, P. (1996). Les trois niveaux du jugement médical. Esprit.Ricoeur, P. (2010). Del texto a la acción. 
Mexico, D.F: Fondo de Cultura Económica.Rostislavovna Schislyaeva, E., & Saychenko, O. (2022). Labor Market Soft Skills in the Context of Digitalization of the Economy. Social Sciences, 11(3). doi:10.3390/socsci11030091Rousseau, J.-J. (2003). Sobre las ciencias y las artes. Madrid: Alianza Editorial.Rowley, J. (2007). The wisdom hierarchy: representations of the DIKW hierarchy. Journal of Information Science, 33(2), 163–180. doi:DOI: 10.1177/0165551506070706Ruíz, J., Cantú, G., Ávila, D., Gamboa, J. D., Juarez, L., de Hoyos, A., . . . Garduño, J. (2015). Revisión de modelos para el análisis de dilemas éticos. Boletín Médico del Hospital Infantil de México, 72(2), 89-98. https://doi.org/https://doi.org/10.1016/j.bmhimx.2015.03.006Sappleton, N., & Takruri-Rizk, H. (2008). The Gender Subtext of Science, Engineering, and Technology (SET) Organizations: A Review and Critique. Women's Studies, 37(3). doi:https://doi.org/10.1080/00497870801917242Savulescu, J. (2012). Moral Enhancement, Freedom, And The God Machine. The Monist, 95(3), 399-421. doi:doi:doi:10.5840/monist201295321Schaper, M., Wöhlke, S., & Schicktanz , S. (2019). “I would rather have it done by a doctor”laypeople’s perceptions of direct-to-consumer genetic testing (DTC GT) and its ethical implications. Med Health Care and Philos, 22, 31-40. doi:https://doi.org/10.1007/s11019-018-9837-ySchoenhagen, P., & Mehta, N. (2017). Big data, smart computer systems, and doctor-patient relationship. European heart journal, 38(7), 508–510. doi:https://doi.org/10.1093/eurheartj/ehw217Schwab, K. (26 de 02 de 2021). ‘This is bigger than just Timnit’: How Google tried to silence a critic and ignited a movement. Obtenido de Fast Company: https://www.fastcompany.com/90608471/timnit-gebru-google-ai-ethics-equitable-tech movemenSemana. (25 de 11 de 2014). Así controlan las instituciones y empresas de salud a los médicos. Obtenido de https://www.semana.com/nacion/articulo/las-eps-controlan-los-medicos con-polemicos-metodos/409528-3/)./Semana. (05 de 09 de 2020). Colombia, cada vez más rezagada en inteligencia artificial. Semana. Obtenido de Obtenido de https://www.semana.com/tecnologia/articulo/estados-unidos-y china-los-primeros-en-inteligencia-artificial--noticias-hoy/701009Shew, A. (2020). Ableism, Technoableism, and Future AI. IEEE Technology and Society Magazine, 39(1), 40-85. doi:doi: 10.1109/MTS.2020.2967492Singhal, Karan, Azizi, Shekoofeh, Tu, Tao, Mahdavi, S. Sara, Wei, Jason, Chung, Hyung Won.Corrado, Greg S. (2023). Large language models encode clinical knowledge. Nature, 620, 172–180. doi:). https://doi.org/10.1038/s41586-023-06291-2Sisk, B. A, & Baker, J. N. (2018). Microethics of Communication-Hidden Roles of Bias and Heuristics in the Words We Choose. JAMA pediatrics, 172(12), 1115–1116. doi:https://doi.org/10.1001/jamapediatrics.2018.3111Sloterdijk, P. (2001). Normas sobre el parque humano. Una respuesta a la Carta sobre el humanismo de Heidegger. Madrid: Ediciones Siruela.Sloterdijk, P. (2003). Esferas I. Burbujas. Microesferología. Ediciones Siruela.Sloterdijk, P. (2012). Has de cambiar tu vida. Pre-Textos.Smite, D., Brede Moe, N., Hildrum, J., Gonzalez-Huerta, J., & Mendez, D. (2023). Work-from home is here to stay: Call for flexibility in post-pandemic work policies. Journal of Systems and Software, 195(111552). doi:https://doi.org/10.1016/j.jss.2022.111552Smits, M., Ludden, G., Peters, R., Bredie, S., van Goor, H., & Paul Verbeek, P. (2022). Values that Matter: A New Method to Design and Assess Moral Mediation of Technology. 
Design issues, 38(1), 39-54. doi:https://doi.org/10.1162/desi_a_00669Solbakk, J. H. (2011). Vulnerabilidad: ¿un principio fútil o útil en la ética de la asistencia sanitaria? Revista Redbioética/UNESCO, 1(3), 89-101.Srivastava, T., & Waghmare, L. (2020). Implications of Artificial Intelligence (AI) on Dynamics of Medical Education and Care: A Perspective. Journal of Clinical and Diagnostic Research, 14(3), JI01-JI02. doi:DOI: 10.7860/JCDR/2020/43293.13565Srnicek, N. (2018). Capitalismo De Plataformas. Caja Negra.Stahl, B. (2021). AI Ecosystems for Human Flourishing: The Recommendations. SpringerBriefs in Research and Innovation Governance, 91–115. doi:doi:10.1007/978-3-030-69978-9_7Stahl, B. C. (2021). AI Ecosystems for Human Flourishing: The Recommendations. In: Artificial Intelligence for a Better Future. SpringerBriefs in Research and Innovation Governance. doi:https://doi.org/10.1007/978-3-030-69978-9_7Starke, G., van den Brule, R., Elger, B., & Haselager, P. (2022). Intentional machines: A defence of trust in medical artificial intelligence. Bioethics: Special Issue: Promises And Challenges Of Medical Ai, 36(2), 154-161. doi:https://doi.org/10.1111/bioe.12891Stempsey, W. E. (2006). Emerging Medical Technologies and Emerging Conceptions of Health. Theor Med Bioeth, 27, 227–243. doi:https://doi.org/10.1007/s11017-006-9003-zStiegler, B. (1994). La técnica y el tiempo I: el pecado de Epimeteo. Hondarribia: Argiraletxe Hiru.Su, H., Lallo, A. D, Murphy, R. R, Taylor, R. H., Garibaldi, B. T, & Krieger, A. (2021). Physical human–robot interaction for clinical care in infectious environments. Nature Machine Intelligence, 3, 184-186. doi:doi:doi.org/10.1038/s42256-021-00324-zSusan, S. (2020). What COVID-19 Reveals About Twenty-First Century Capitalism: Adversity and Opportunity. Development, 63, 150–156. doi:doi:https://doi.org/10.1057/s41301-020-00263-zTakshi S. (2021). Unexpected Inequality: Disparate-Impact From Artificial Intelligence in Healthcare Decisions. Journal of law and health, 34(2), 215–251Taylor, L. (2021). There Is an App for That: Technological Solutionism as COVID-19 Policy in the Global North. En E. Aarts, M. Fleuren, M. Sitskoorn, & T. Wilthagen, The New Common (págs. 209–215). Switzerland: Springer Nature. doi:doi:doi: 10.1007/978-3-030-65355-2_30Teepe, G., Glase, E., & Reips, U.-D. (07 de 04 de 2023). Increasing digitalization is associated with anxiety and depression: A Google Ngram analysis. doi:https://doi.org/10.1371/journal.pone.0284091ten Have, H. (2015). Respect for Human Vulnerability: The Emergence of a New Principle in Bioethics. Bioethical Inquiry, 12, 395–408. doi:https://doi.org/10.1007/s11673-015-9641-9ten Have, H. (2015). Respect for Human Vulnerability: The Emergence of a New Principle in Bioethics. Bioethical Inquiry, 12, 395–408. doi:https://doi.org/10.1007/s11673-015-9641-9ten Have, H. (2015). Respect for Human Vulnerability: The Emergence of a New Principle. Bioethical Inquiry, 12, 395–408. doi:https://doi.org/10.1007/s11673-015-9641-9ten Have, H. (2015). Respect for Human Vulnerability: The Emergence of a New Principle in Bioethics. Bioethical Inquiry, 12, 395–408. doi:https://doi.org/10.1007/s11673-015-9641-9ten Have, H. (2015). Respect for Human Vulnerability: The Emergence of a New Principle in Bioethics. Bioethical Inquiry, 12, 395–408. doi:https://doi.org/10.1007/s11673-015-9641-9ten Have, K. (2016). Global Bioethics: An introduction (1 ed.). New York, Estados Unidos: Routledge.The Economist. (20 de 06 de 2022). 
Alphabet is spending billions to become a force in health care. Obtenido de https://www.economist.com/business/2022/06/20/alphabet-is spending-billions-to-become-a-force-in-health-careThe Guardian. (08 de 03 de 2015). Silicon Valley is cool and powerful. But where are the women Obtenido de https://www.theguardian.com/technology/2015/mar/08/sexism silicon-valley-womeThe New York Times. (23 de 07 de 2022). Google Fires Engineer Who Claims Its A.I. Is Conscious. Obtenido de https://www.nytimes.com/2022/07/23/technology/google engineer-artificial-intelligence.htmlThomson, S., & C Ip , E. (2020). COVID-19 emergency measures and the impending authoritarian pandemic. Journal of Law and the Biosciences, 7(1 lsaa064). doi:doi:10.1093/jlb/lsaa064Tiku, N. (11 de 06 de 2022). The Google engineer who thinks the company’s AI has come to life. Obtenido de The Washington Post: https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake lemoine/Timmermans, S., & Berg, M. (2003). The practice of medical technology. Sociology of Health & Illness, 25(3), 97-114. doi:doi:10.1111/1467-9566.00342Toma, A., Senkaiahliyan, S., Lawler, P., Rubin, B., & Wang, B. (30 de 11 de 2023). Generative AI could revolutionize health care — but not if control is ceded to big tech. Obtenido de Natura: https://www.nature.com/articles/d41586-023-03803-yTopol, E. (2014). The Patient Will See You Now: The Future of Medicine is in Your Hands. New York, Estados Unidos: Basic Books.Topol, E. (2015). The Patient Will See You Now: The Future of Medicine is in Your Hands. J Clin Sleep Med, 11(6), 689–690. doi:doi: 10.5664/jcsm.4788Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic BooksTopol, E. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56. doi:doi:10.1038/s41591-018-0300-7Tozzo, P , Angiola, F, Gabbin, A, Politi, C, & Caenazzo, L. (2021). The difficult role of Artificial Intelligence in Medical Liability: to err is not only human. La Clinica terapeutica, 172(6), 527-528. doi:DOI: 10.7417/CT.2021.2372Tran, B.-X., Thu Vu, G., Hai Ha, G., Vuong , Q.-H., Tung Ho, M., Vuong, T.-T., M Ho, R. (2019). Global Evolution of Research in Artificial Intelligence in Health and Medicine: A Bibliometric Study. Journal of Clinical Medicine, 8(3(360)), 1-18. doi:https://doi.org/10.3390/jcm8030360Trist, E. L, & Bamforth, K. W. (1951). Human Relations. 4(1), 3-38. doi:https://doi org.ezproxy.unbosque.edu.co/10.1177/001872675100400101Truong, A. (2019). Are you ready to be diagnosed without a human doctor? A discussion about artificial intelligence, technology, and humanism in dermatology. Int J Womens Dermatol, 5(4), 267–268. doi:doi: 10.1016/j.ijwd.2019.05.001Tucker, G. (2015). Forming an ethical paradigm for morally sentient robots: Sentience is not necessary for evil. EEE International Symposium on Technology and Society, 1-5.doi: 10.1109/ISTAS.2015.7439420.Ulrich, B. (1998). Politics of Risk Society. En J. Franklin, In The Politics of Risk Society. Polity Press.Umbrello, S. (2022). The Role of Engineers in Harmonising Human Values for AI Systems Design. Journal of Responsible Technology, 10. doi:https://doi.org/10.1016/j.jrt.2022.100031UNESCO. (2005). Declaración Universal sobre Bioética y Derechos Hujmanos. Obtenido de UNESCO: http://portal.unesco.org/es/ev.php URL_ID=31058&URL_DO=DO_TOPIC&URL_SECTION=201.htmlUNESCO. (2015). Parte 1: Programa Temáticoprograma De Educação Em Ética. 
Obtenido de https://unesdoc.unesco.org/ark:/48223/pf0000163613_porUNESCO. (16 de 01 de 2023). Data solidarity: why sharing is not always caring . Obtenido de https://en.unesco.org/inclusivepolicylab/analytics/data-solidarity-why-sharing-not always-caring%C2%A0Universidad Externado De Colombia. (13 de 04 de 2018). Colombia le apuesta a la implementación de la inteligencia artificial. Obtenido de https://www.uexternado.edu.co/derecho/colombia-le-apuesta-la-implementacion-de-la inteligenciaartificial/#:~:text=Seg%C3%BAn%20explic%C3%B3%2C%20el%20Gobierno%20colombiano,ciento%20de%20las%20aplicaciones%20empresarialesUniversity Of Denver. (s.f.). 18 Skills All Programmers Need to Have. Obtenido de https://bootcamp.du.edu/blog/programming-skills/Vakkuri, V., Kemell, K.-K., Jantunen, M., & Abrahamsson, P. (2020). “This is Just a Prototype”: How Ethics Are Ignored in Software Startup-Like Environments. En V. Stray, R. Hoda, M. Paasivaara, & P. Kruchten, Agile Processes in Software Engineering and Extreme Programming (Vol. 383, págs. 195–210). Cham: Spring.van de Poel, I. (2021). Design for value change. Ethics Inf Technol, 23, 27–31. doi:https://doi.org/10.1007/s10676-018-9461-9Van Noorden, R., & Thompson, B. (2023). Audio long read: Medicine is plagued by untrustworthy clinical trials. How many studies are faked or flawed. doi: ¿https://doi.org/10.1038/d41586-023-02627-0van Weert , J. (2020). Facing frailty by effective digital and patient-provider communication.Patient Educ Couns, 103(3), 433-435. doi:doi: 10.1016/j.pec.2020.02.020.Vegas-Motta, E. (2020). Hermenéutica: un concepto, múltiples visiones. Revista Estudios Culturales, 13(25), 121-130.Verbeek, P.-P. (2005). What things do: Philosophical reflections on technology, agency, and design. Pennsylvania: Pennsylvania State Univeristy Press.Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: The University of Chicago press.Verbeek, P.-P. (2020). Politicizing Postphenomenology. hilosophy of Engineering and Technology. doi:https://doi.org/10.1007/978-3-030-35967-6_9Verghese, A., Shah, N. H., & Harrington, R. A. (2018). What This Computer Needs Is a Physician Humanism and Artificial Intelligence. JAMA, 319(1), 19–20. doi:10.1001/jama.2017.19198Vermeer, L., & Thomas , M. (2020). Pharmaceutical/high-tech alliances; transforming healthcare? Digitalization in the healthcare industry. Strategic Direction, 36(12), 43-46. doi:https://doi.org/10.1108/SD-06-2020-0113Viernes, F. (14 de 09 de 2021). Stop Saying ‘Data is the New Oil’. Obtenido de Medium: https://medium.com/geekculture/stop-saying-data-is-the-new-oil-a2422727218cVokinger, K. N., & Gasser, U. (2021). Regulating AI in medicine in the United States and Europe. National Library of Medicine, 3(9), 738–739. doi:doi: 10.1038/s42256-021-00386-zVos, R., & Willems, D. L. (2000). Technology in medicine: ontology, epistemology, ethics and social philosophy at the crossroads. Theoretical Medicine and Bioethics, 21, 1–7.Wang, J, Yang, J, Zhang, H., Lu, H., Skreta, M., Husić, M., . . . Brudno, M. (2022). PhenoPad: Building AI enabled note-taking interfaces for patient encounters. npj Digit. Med, 5(12). doi:https://doi.org/10.1038/s41746-021-00555-9Wang, X., & Luan, W. (2022). Research progress on digital health literacy of older adults: A scoping review. Frontiers in Public Health, 10. doi: https://doi.org/10.3389/fpubh.2022.906089Wartman, S. A. (2021). Medicine, Machines, and Medical Education. 
Academic medicine : journal of the Association of American Medical Colleges, 96(7), 947–950. doi: https://doi.org/10.1097/ACM.0000000000004113Webster, P. (13 de 04 de 2023). Big tech companies invest billions in health research. Obtenido de Nature Medicine: https://www.nature.com/articles/s41591-023-02290-yWebster, P. (2023). Big tech companies invest billions in health research. Nature Medicine, 29, 1034–1037. doi:https://doi.org/10.1038/s41591-023-02290-yWeiner, M., & Biondich, P. (2006). The influence of information technology on patient physician relationships. Journal of general internal medicine, 21(Suppl 1), S35-9. doi:10.1111/j.1525-1497.2006.00307.x.Weinstein, J. (2019). Artificial Intelligence: Have You Met Your New Friends; Siri, Cortona, Alexa, Dot, Spot, and Puck. Spine, 44(1), 1-4. doi:https://doi.org/10.1097/BRS.0000000000002913Wenk, H. (2020). Kommunikation in Zeiten künstlicher IntelligenzCommunication in the age of artificial intelligence. Gefässchirurgie, 25. doi:DOI:10.1007/s00772-020-00644-1Whitelaw, S., Mamas, M. A, Topol, E., & Spall, H. (2020). Applications of digital technology in COVID-19 pandemic planning and response. The Lancet Digital Health, 2(8), e435-e440. doi:doi:10.1016/S2589-7500(20)30142-4.WHO. (09 de 09 de 2020). Tracking COVID-19: Contact Tracing in the Digital Age. Obtenido de World Healt Organization: https://www.who.int/news-room/feature stories/detail/tracking-covid-19-contact-tracing-in-the-digital age#:~:text=contact%20tracing%20is%20the%20process,the%20last%20two%20weeksWHO. (2021). Ethics and governance of artificial intelligence for health. Obtenido de World Healt Organization: https://www.who.int/publications/i/item/9789240029200WHO. (28 de 06 de 2021). Ética y gobernanza de la inteligencia artificial para la salud. Obtenido de https://www.who.int/publications/i/item/9789240029200Wu, E., Wu, K, Daneshjou, R, Ouyang, D, Ho, D. E, & Zou, J. (2021). How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals. Nature Medicine, 27, 582-584. doi:doi:doi.org/10.1038/s41591-021-01312-xXolocotzi Yáñez, Á. (2020). La verdad del cuerpo. Heidegger y la ambigüedad de lo corporal. Universidad de Antioquia. DOI: https://doi.org/10.17533/udea.ef.n61a09Xu, W., & Ouyang, F. (2022). The application of AI technologies in STEM education: a systematic review from 2011 to 2021. International Journal of STEM Education, 9(59). doi:https://doi.org/10.1186/s40594-022-00377-5Yee, V., Bajaj, S., & Cody Stanford, F. (2022). Paradox of telemedicine: building or neglecting trust and equity. The Lancet Digital Healt, 4(7), E480-E481. doi:DOI:https://doi.org/10.1016/S2589-7500(22)00100-5Zang, P. (2010). Advanced Industrial Control Technology. William Andrew Publishing.Zwart, H. (2008). Challenges of Macro-ethics: Bioethics and the Transformation of Knowledge Production. 