A Deep Learning model for automatic grading of prostate cancer histopathology images
- Authors:
- Medina Carrillo, Sebastian Rodrigo
- Resource type:
- Publication date:
- 2024
- Institution:
- Universidad Nacional de Colombia
- Repository:
- Universidad Nacional de Colombia
- Language:
- eng
- OAI Identifier:
- oai:repositorio.unal.edu.co:unal/86504
- Keywords:
- 610 - Medicine and health::616 - Diseases
- 000 - Computer science, information and general works::004 - Data processing, computer science
- Deep Learning
- Prostatic Neoplasms/diagnostic imaging
- Pathology
- Prostate cancer
- Histopathology
- Machine learning
- Cancer grading
- Density matrix
- Interpretability
- Rights
- openAccess
- License
- Attribution-NonCommercial-ShareAlike 4.0 International
Summary: Gleason grading is recognized as the standard method for diagnosing prostate cancer. However, it is subject to significant inter-observer variability due to its reliance on subjective visual assessment. Current deep learning approaches to grading often require exhaustive pixel-level annotations and are generally limited to patch-level predictions, which do not incorporate slide-level information. Recently, weakly supervised techniques have shown promise in generating whole-slide label predictions from pathology report labels, which are more readily available. However, these methods frequently lack visual and quantitative interpretability, reinforcing the black-box nature of deep learning models and hindering their clinical adoption. This thesis introduces WiSDoM, a novel weakly supervised and interpretable approach that leverages attention mechanisms and Kernel Density Matrices for the grading of prostate cancer on whole-slide images. The method is adaptable to varying levels of supervision. WiSDoM facilitates multi-scale interpretability through several features: detailed heatmaps that provide granular visual insight by highlighting critical morphological features without requiring tissue annotations; example-based phenotypical prototypes that illustrate the internal representation learned by the model, aiding clinical verification; and visual-quantitative measures of model uncertainty, which make the model's decision-making process more transparent, a crucial factor for clinical use. WiSDoM has been validated on core-needle biopsies from two different institutions, demonstrating robust agreement with the reference standard (quadratically weighted kappa of 0.93). WiSDoM achieves state-of-the-art inter-observer agreement on the publicly available PANDA Challenge dataset while remaining clinically interpretable.
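The quadratically weighted kappa cited in the summary penalises each disagreement by the squared distance between the assigned grade groups, so an off-by-one grading error costs far less than confusing a benign core with a high-grade one. As an illustration only (this is a generic sketch of the metric, not the thesis's evaluation code), a minimal pure-Python implementation:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic disagreement weights.

    y_true, y_pred: integer grade labels in [0, n_classes).
    Returns 1.0 for perfect agreement, 0.0 for chance-level agreement.
    """
    # Observed confusion matrix.
    obs = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    total = len(y_true)
    row = [sum(obs[i]) for i in range(n_classes)]                 # rater-1 marginals
    col = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]  # rater-2 marginals

    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            # Quadratic weight: 0 on the diagonal, 1 at maximum grade distance.
            w = (i - j) ** 2 / (n_classes - 1) ** 2
            num += w * obs[i][j]                  # observed weighted disagreement
            den += w * row[i] * col[j] / total    # disagreement expected by chance
    return 1.0 - num / den


grades = [0, 1, 2, 3, 4, 5]          # e.g. ISUP grade groups 0-5
print(quadratic_weighted_kappa(grades, grades, 6))                 # 1.0 (perfect)
print(quadratic_weighted_kappa(grades, [0, 1, 2, 3, 4, 4], 6))     # close to 1: one adjacent-grade error
```

Note how a single adjacent-grade error leaves kappa near 1, while the same error count at a larger grade distance would lower it sharply; this distance sensitivity is why the metric is standard for ordinal tasks such as Gleason/ISUP grading.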