ISeeU2: Visually interpretable mortality prediction inside the ICU using deep learning and free-text medical notes

Authors:
Caicedo-Torres, William
Gutierrez, Jairo
Resource type:
Publication date:
2022
Institution:
Universidad Tecnológica de Bolívar
Repository:
Repositorio Institucional UTB
Language:
eng
OAI Identifier:
oai:repositorio.utb.edu.co:20.500.12585/12197
Online access:
https://hdl.handle.net/20.500.12585/12197
Keywords:
Imbalanced Data;
Cost-Sensitive Learning;
Data Classification
LEMB
Rights
openAccess
License
http://creativecommons.org/licenses/by-nc-nd/4.0/
Description
Summary: Accurate mortality prediction allows Intensive Care Units (ICUs) to adequately benchmark clinical practice and identify patients with unexpected outcomes. Traditionally, simple statistical models have been used to assess patient death risk, often with sub-optimal performance. Deep Learning, on the other hand, holds promise to positively impact clinical practice by leveraging medical data to assist diagnosis and prediction, including mortality prediction. However, since it remains an open question whether powerful Deep Learning models attend to correlations backed by sound medical knowledge when generating predictions, additional interpretability tools are needed to foster trust and encourage the use of AI by clinicians. In this work we present an interpretable Deep Learning model trained on MIMIC-III to predict mortality inside the ICU using raw nursing notes, together with visual explanations of word importance based on the Shapley value. Our model reaches a ROC AUC of 0.8629 (±0.0058), outperforming the traditional SAPS-II score and an LSTM recurrent neural network baseline, while providing enhanced interpretability compared with similar Deep Learning approaches. Supporting code can be found at https://github.com/williamcaicedo/ISeeU2. © 2022 Elsevier Ltd
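
Illustrative note: the abstract describes word-importance explanations based on the Shapley value. The sketch below is not the authors' ISeeU2 implementation (see the GitHub repository above for that); it only illustrates, under assumed names such as `predict_proba` and a toy classifier, how per-word Shapley values can be estimated by Monte Carlo sampling of word coalitions for a generic note-based mortality model.

```python
import random
from typing import Callable, List


def shapley_word_importance(
    predict_proba: Callable[[List[str]], float],
    words: List[str],
    n_samples: int = 200,
    mask_token: str = "",
    seed: int = 0,
) -> List[float]:
    """Monte Carlo estimate of each word's Shapley value.

    predict_proba: hypothetical stand-in for a trained note classifier,
    mapping a token list to P(mortality). Words outside the current
    coalition are replaced by `mask_token` so sequence length is preserved.
    """
    rng = random.Random(seed)
    n = len(words)
    contributions = [0.0] * n

    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)                    # random permutation of word indices
        present = [mask_token] * n            # start from the empty coalition
        prev_score = predict_proba(present)
        for idx in order:                     # add words one at a time
            present[idx] = words[idx]
            score = predict_proba(present)
            contributions[idx] += score - prev_score  # marginal contribution
            prev_score = score

    return [c / n_samples for c in contributions]


if __name__ == "__main__":
    # Toy model (assumption, for demonstration only): risk grows with the
    # number of "alarming" words present in the note.
    ALARM = {"unresponsive", "hypotensive", "intubated"}

    def toy_model(tokens: List[str]) -> float:
        return min(1.0, 0.1 + 0.3 * sum(t in ALARM for t in tokens))

    note = "patient unresponsive and hypotensive overnight".split()
    for word, phi in zip(note, shapley_word_importance(toy_model, note)):
        print(f"{word:>15s}  {phi:+.3f}")
```

In a visual explanation, each estimated value can be rendered as a color intensity over the corresponding word, so that words pushing the predicted mortality risk up or down are immediately visible to the clinician.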