Evaluating supervised learning approaches for spatial-domain multi-focus image fusion

Image fusion is the generation of an image that combines the most relevant information from a set of images of the same scene, acquired with different cameras or camera settings. Multi-Focus Image Fusion (MFIF) aims to generate an image with extended depth-of-field from a set of images taken at different focal distances or focal planes, and it proposes a solution to the typical limited depth-of-field problem in an optical system configuration. A broad variety of works in the literature address this problem; the primary approaches are domain transformations and block-of-pixels analysis. In this work, we evaluate different supervised machine learning systems applied to MFIF, including k-nearest neighbors, linear discriminant analysis, neural networks, and support vector machines. We started from two images at different focal distances and divided them into rectangular regions. The main objective of the machine-learning-based classification system is to choose the parts of both images that must appear in the fused image in order to obtain a completely focused image. For focus quantification, we used the most popular metrics proposed in the literature, such as Laplacian energy, sum-modified Laplacian, and gradient energy, among others. The evaluation of the proposed method considered classifier testing and fusion quality metrics commonly used in research, such as visual information fidelity and mutual information feature. Our results strongly suggest that the automatic classification concept satisfactorily addresses the MFIF problem.
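The abstract outlines the whole pipeline: split the two source images into rectangular blocks, describe each block with focus measures, and let a supervised classifier decide which source is in focus for that block. Below is a minimal Python sketch of that idea; it is not the authors' implementation, and the block size, the six-value feature layout, the helper names, and the k-NN example are illustrative assumptions (SciPy and scikit-learn are assumed as libraries).

```python
import numpy as np
from scipy.ndimage import convolve, laplace, sobel
from sklearn.neighbors import KNeighborsClassifier


def energy_of_laplacian(tile):
    # EOL: sum of squared Laplacian responses over the tile.
    return float(np.sum(laplace(tile) ** 2))


def sum_modified_laplacian(tile):
    # SML: sum of |second difference in x| + |second difference in y|.
    k = np.array([[-1.0, 2.0, -1.0]])
    return float(np.sum(np.abs(convolve(tile, k)) + np.abs(convolve(tile, k.T))))


def gradient_energy(tile):
    # Tenengrad-style measure: sum of squared Sobel gradient magnitudes.
    return float(np.sum(sobel(tile, axis=0) ** 2 + sobel(tile, axis=1) ** 2))


def focus_features(tile_a, tile_b):
    # One feature vector per block position: the three focus measures of the
    # corresponding tiles from both source images (6 values in total).
    measures = (energy_of_laplacian, sum_modified_laplacian, gradient_energy)
    return np.array([m(t) for t in (tile_a, tile_b) for m in measures])


def iter_blocks(shape, block):
    # Yield slices for full block x block tiles; border handling is omitted.
    rows, cols = shape
    for r in range(0, rows - rows % block, block):
        for c in range(0, cols - cols % block, block):
            yield slice(r, r + block), slice(c, c + block)


def fuse(img_a, img_b, clf, block=32):
    # Copy into the fused image, block by block, the source that the trained
    # classifier predicts to be in focus (label 0 -> img_a, label 1 -> img_b).
    img_a = img_a.astype(float)
    img_b = img_b.astype(float)
    fused = np.zeros_like(img_a)
    for rs, cs in iter_blocks(img_a.shape, block):
        x = focus_features(img_a[rs, cs], img_b[rs, cs]).reshape(1, -1)
        src = img_a if clf.predict(x)[0] == 0 else img_b
        fused[rs, cs] = src[rs, cs]
    return fused


# Training is ordinary supervised learning: blocks with a known focused source
# (e.g. from synthetically blurred image pairs) give features X and labels y.
# k-NN is shown; LDA, an SVM, or a small neural network plug in the same way:
#   clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
#   fused = fuse(near_focus_image, far_focus_image, clf)
```

Because the fusion step only relies on `predict`, any of the evaluated classifier families (k-nearest neighbors, linear discriminant analysis, support vector machines, neural networks) can be substituted without changing the rest of the sketch.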
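The abstract also names visual information fidelity and the mutual information feature as fusion quality metrics. Those exact measures are more elaborate than can be shown here; the snippet below is only a hypothetical histogram-based mutual information score between a fused result and one source image, to illustrate the evaluation idea.

```python
import numpy as np


def mutual_information(img_x, img_y, bins=64):
    # Joint gray-level histogram, normalised into a joint probability table.
    joint, _, _ = np.histogram2d(img_x.ravel(), img_y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_y
    nz = pxy > 0                          # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```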

Authors:
Atencio Ortiz, Pedro
Sánchez Torres, German
Branch Bedoya, John Willian
Resource type:
Journal article
Publication date:
2017-07-01
Institution:
Universidad Nacional de Colombia
Repository:
Universidad Nacional de Colombia
Language:
spa
OAI Identifier:
oai:repositorio.unal.edu.co:unal/60367
Online access:
https://repositorio.unal.edu.co/handle/unal/60367
http://bdigital.unal.edu.co/58699/
Keywords:
62 Ingeniería y operaciones afines / Engineering
Multi-focus image fusion
image processing
supervised learning
machine learning
Fusión de imágenes multifoco
procesamiento de imágenes
aprendizaje supervisado
aprendizaje de máquina
Rights:
openAccess (rights reserved by Universidad Nacional de Colombia)
License:
Attribution-NonCommercial 4.0 International, http://creativecommons.org/licenses/by-nc/4.0/
Published in:
DYNA (Universidad Nacional de Colombia, Revistas electrónicas UN)
https://revistas.unal.edu.co/index.php/dyna/article/view/63389
Citation:
Atencio Ortiz, Pedro; Sánchez Torres, German; Branch Bedoya, John Willian (2017). Evaluating supervised learning approaches for spatial-domain multi-focus image fusion. DYNA, 84 (202), pp. 137-146. ISSN 2346-2183.
Publisher:
Universidad Nacional de Colombia (Sede Medellín), Facultad de Minas.
Full text (PDF):
https://repositorio.unal.edu.co/bitstream/unal/60367/1/63389-350173-1-PB.pdf