Correction of Banding Errors in Satellite Images With Generative Adversarial Networks (GAN)
This research proposes an innovative method for correcting banding errors in satellite images based on Generative Adversarial Networks (GAN). Small satellites are frequently launched into space to obtain images that can be used in scientific or military research, commercial activities, and urban planning, among other applications. However, their small cameras are more susceptible to radiometric and geometric errors, as well as other distortions caused by atmospheric interference. The proposed method was compared to the conventional correction technique using experimental data, showing similar performance (92.64% and 90.05% accuracy, respectively). These experimental results suggest that generative models using Artificial Intelligence (AI) techniques, specifically Deep Learning, are approaching the automatic-correction quality of conventional methods. Advantages of the GAN models include automating the task of correcting banding in satellite images, reducing the required time, and enabling processing without prior technical knowledge of Geographic Information Systems (GIS). This technique could therefore become a valuable tool for satellite image processing, improving the accuracy of results and making the process more efficient. The research is particularly relevant to the field of remote sensing and can have practical applications in various industries.
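Banding is a striping artifact in which individual detector rows (or columns) acquire the wrong gain and offset. The "conventional correction technique" that the GAN is benchmarked against is typically a statistical destriping of this kind. The sketch below simulates banding on synthetic data and removes it by moment matching; all parameters (stripe period, gain, offset) are illustrative and not taken from the paper.

```python
import numpy as np

def add_banding(img, period=8, gain=0.6, offset=20.0):
    """Simulate row-wise banding: every `period`-th row gets a gain/offset error."""
    out = img.astype(float).copy()
    out[::period] = out[::period] * gain + offset
    return out

def destripe_moment_matching(img, period=8):
    """Conventional correction: rescale the striped rows so their mean and
    standard deviation match those of the unaffected rows."""
    out = img.astype(float).copy()
    striped = out[::period]
    clean = np.delete(out, np.s_[::period], axis=0)
    out[::period] = (striped - striped.mean()) * (clean.std() / (striped.std() + 1e-12)) + clean.mean()
    return out

rng = np.random.default_rng(1)
truth = rng.uniform(50.0, 200.0, size=(64, 64))   # synthetic "clean" scene
banded = add_banding(truth)
fixed = destripe_moment_matching(banded)

err_before = np.abs(banded - truth).mean()
err_after = np.abs(fixed - truth).mean()
print(f"mean abs error: {err_before:.2f} -> {err_after:.2f}")
```

Moment matching works here because the simulated error is an affine distortion of each striped row, so matching first- and second-order statistics approximately inverts it; a learned GAN correction aims to handle errors this simple model cannot.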
- Authors:
-
Zárate L., Paola
Arroyo H., Christian
Rincón U., Sonia
López Sotelo, Jesús Alfonso
- Resource type:
- Research article
- Publication date:
- 2023
- Institution:
- Universidad Autónoma de Occidente
- Repository:
- RED: Repositorio Educativo Digital UAO
- Language:
- eng
- OAI Identifier:
- oai:red.uao.edu.co:10614/15860
- Online access:
- https://hdl.handle.net/10614/15860
https://red.uao.edu.co/
- Keywords:
- Artificial neural network
Deep learning
Generative adversarial network
Satellite images
Radiometric error
Banding
- Rights
- openAccess
- License
- All rights reserved - IEEE, 2023
id |
REPOUAO2_ee5fb05b4719bc71377ac5e775378337 |
oai_identifier_str |
oai:red.uao.edu.co:10614/15860 |
network_acronym_str |
REPOUAO2 |
network_name_str |
RED: Repositorio Educativo Digital UAO |
repository_id_str |
|
dc.title.eng.fl_str_mv |
Correction of Banding Errors in Satellite Images With Generative Adversarial Networks (GAN) |
dc.creator.fl_str_mv |
Zárate L., Paola; Arroyo H., Christian; Rincón U., Sonia; López Sotelo, Jesús Alfonso |
dc.subject.proposal.eng.fl_str_mv |
Artificial neural network; Deep learning; Generative adversarial network; Satellite images; Radiometric error; Banding |
description |
This research proposes an innovative method for correcting banding errors in satellite images based on Generative Adversarial Networks (GAN). Small satellites are frequently launched into space to obtain images that can be used in scientific or military research, commercial activities, and urban planning, among other applications. However, their small cameras are more susceptible to radiometric and geometric errors, as well as other distortions caused by atmospheric interference. The proposed method was compared to the conventional correction technique using experimental data, showing similar performance (92.64% and 90.05% accuracy, respectively). These experimental results suggest that generative models using Artificial Intelligence (AI) techniques, specifically Deep Learning, are approaching the automatic-correction quality of conventional methods. Advantages of the GAN models include automating the task of correcting banding in satellite images, reducing the required time, and enabling processing without prior technical knowledge of Geographic Information Systems (GIS). This technique could therefore become a valuable tool for satellite image processing, improving the accuracy of results and making the process more efficient. The research is particularly relevant to the field of remote sensing and can have practical applications in various industries. |
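Beyond the accuracy figures quoted in the description, image-restoration results of this kind are commonly scored against a ground-truth image with reference metrics such as peak signal-to-noise ratio (PSNR), where higher is better. A generic sketch with hypothetical data, not the paper's evaluation code:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) of `test` against `reference`."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((32, 32), 100.0)       # hypothetical ground-truth patch
noisy = ref + 10.0                   # uniform error of 10 DN -> MSE = 100
print(f"PSNR: {psnr(ref, noisy):.2f} dB")   # ~28.13 dB for an 8-bit range
```

A corrected image can then be compared to the reference both before and after correction; a successful destriping step should raise the PSNR.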
publishDate |
2023 |
dc.date.issued.none.fl_str_mv |
2023 |
dc.date.accessioned.none.fl_str_mv |
2024-10-15T14:20:36Z |
dc.date.available.none.fl_str_mv |
2024-10-15T14:20:36Z |
dc.type.spa.fl_str_mv |
Journal article |
dc.type.coarversion.fl_str_mv |
http://purl.org/coar/version/c_970fb48d4fbd8a85 |
dc.type.coar.eng.fl_str_mv |
http://purl.org/coar/resource_type/c_2df8fbb1 |
dc.type.content.eng.fl_str_mv |
Text |
dc.type.driver.eng.fl_str_mv |
info:eu-repo/semantics/article |
dc.type.redcol.eng.fl_str_mv |
http://purl.org/redcol/resource_type/ART |
dc.type.version.eng.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.identifier.citation.spa.fl_str_mv |
Zárate L., P.; López Sotelo, J. A.; Arroyo H., Ch. and Rincón U., S. (2023). Correction of Banding Errors in Satellite Images With Generative Adversarial Networks (GAN). IEEE Access, vol. 11, 11 p. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10131946 |
dc.identifier.uri.none.fl_str_mv |
https://hdl.handle.net/10614/15860 |
dc.identifier.doi.spa.fl_str_mv |
10.1109/ACCESS.2023.3279265 |
dc.identifier.eissn.spa.fl_str_mv |
21693536 |
dc.identifier.instname.spa.fl_str_mv |
Universidad Autónoma de Occidente |
dc.identifier.reponame.spa.fl_str_mv |
Repositorio Educativo Digital UAO |
dc.identifier.repourl.none.fl_str_mv |
https://red.uao.edu.co/ |
dc.language.iso.eng.fl_str_mv |
eng |
dc.relation.citationendpage.spa.fl_str_mv |
51970 |
dc.relation.citationstartpage.spa.fl_str_mv |
51960 |
dc.relation.citationvolume.spa.fl_str_mv |
11 |
dc.relation.ispartofjournal.eng.fl_str_mv |
IEEE Access |
dc.rights.spa.fl_str_mv |
Derechos reservados - IEEE, 2023 |
dc.rights.coar.fl_str_mv |
http://purl.org/coar/access_right/c_abf2 |
dc.rights.uri.eng.fl_str_mv |
https://creativecommons.org/licenses/by-nc-nd/4.0/ |
dc.rights.accessrights.eng.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.rights.creativecommons.spa.fl_str_mv |
Atribución-NoComercial-SinDerivadas 4.0 Internacional (CC BY-NC-ND 4.0) |
dc.format.extent.spa.fl_str_mv |
11 pages |
dc.format.mimetype.none.fl_str_mv |
application/pdf |
dc.publisher.eng.fl_str_mv |
IEEE |
dc.publisher.place.eng.fl_str_mv |
Piscataway |
institution |
Universidad Autónoma de Occidente |
repository.name.fl_str_mv |
Repositorio Digital Universidad Autónoma de Occidente |
repository.mail.fl_str_mv |
repositorio@uao.edu.co |