Deep learning architectures for the analysis and classification of brain tumors in MR images
- Authors:
Osorio-Barone, A.
Contreras Ortiz, Sonia Helena
- Resource type:
- Publication date:
- 2020
- Institution:
- Universidad Tecnológica de Bolívar
- Repository:
- Repositorio Institucional UTB
- Language:
- eng
- OAI Identifier:
- oai:repositorio.utb.edu.co:20.500.12585/9955
- Online access:
- https://hdl.handle.net/20.500.12585/9955
https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11583/115830B/Deep-learning-architectures-for-the-analysis-and-classification-of-brain/10.1117/12.2579618.short?SSO=1
- Keywords:
- Bioinformatics
Brain
Computer aided diagnosis
Convolutional neural networks
Image classification
Image enhancement
Learning systems
Magnetic resonance
Magnetic resonance imaging
Network architecture
Transfer learning
Tumors
LEMB
- Rights
- closedAccess
- License
- http://purl.org/coar/access_right/c_14cb
Summary: The need to make timely and accurate diagnoses of brain diseases has posed challenges to computer-aided diagnosis systems. In this field, advances in deep learning techniques play an important role, as they carry out processes to extract relevant anatomical and functional characteristics of the tissues in order to classify them. In this paper, a study of several convolutional neural network (CNN) architectures is presented, with the aim of classifying three types of brain tumors in high-contrast magnetic resonance (MR) images. The architectures studied were VGG16, ResNet50, and Xception, whose implementations are provided in the Keras framework. The evaluation of these architectures was preceded by data augmentation and transfer learning, which improved the effectiveness of the training process thanks to the use of models pre-trained on the ImageNet dataset. The VGG16 architecture achieved the best performance, with an accuracy of 98.04%, followed by ResNet50 with 94.89%, and finally Xception with 92.18%.
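The abstract describes a Keras pipeline that combines ImageNet transfer learning with data augmentation for three-class tumor classification. Below is a minimal sketch of such a setup for the best-performing backbone (VGG16). The class labels, input size, augmentation parameters, and classifier head are assumptions for illustration, not the paper's exact configuration.

```python
# Hedged sketch: transfer learning with a Keras VGG16 backbone pre-trained on
# ImageNet, approximating the approach described in the abstract. The number
# of classes, input shape, augmentation values, and dense head are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 3            # three tumor types (assumed label set)
INPUT_SHAPE = (224, 224, 3)

# Data augmentation, as mentioned in the abstract (parameters are assumptions).
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Pre-trained convolutional base; weights frozen for transfer learning.
base = VGG16(weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
base.trainable = False

inputs = layers.Input(shape=INPUT_SHAPE)
x = augment(inputs)
x = tf.keras.applications.vgg16.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The same pattern applies to the other backbones evaluated in the paper by swapping `VGG16` for `tf.keras.applications.ResNet50` or `tf.keras.applications.Xception` together with the corresponding `preprocess_input` function.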