An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework

To extract performance from supercomputers, programmers in the High Performance Computing (HPC) community are often required to combine several frameworks to take advantage of the multiple levels of parallelism. Over the years, efforts have been made to simplify this situation by creating frameworks that can exploit multiple levels at once, although this often means that the programmer has to learn a new library. On the other hand, some frameworks were created by extending the capabilities of established paradigms. In this paper, we explore one of these frameworks, OpenMP Cluster. As its name implies, it extends the OpenMP API, allowing seasoned programmers to leverage their experience and use a single API for both shared-memory and distributed-memory parallelism. We took an existing plasma physics code written with MPI+OpenMP and ported it to OpenMP Cluster. We also show that, under certain conditions, the performance of OpenMP Cluster is similar to that of the MPI+OpenMP code.

Authors:
Asch, Christian
Francesquini, Emilio
Meneses, Esteban
Resource type:
Research article
Publication date:
2024
Institution:
Universidad Autónoma de Bucaramanga - UNAB
Repository:
Repositorio UNAB
Language:
spa
OAI Identifier:
oai:repository.unab.edu.co:20.500.12749/26655
Online access:
http://hdl.handle.net/20.500.12749/26655
https://doi.org/10.29375/25392115.5053
Keywords:
Parallel Programming
Directive-based Programming
Plasma Physics
Rights
License
http://purl.org/coar/access_right/c_abf2
id UNAB2_3a362fe5ecee3d535f634c88b14e65fa
oai_identifier_str oai:repository.unab.edu.co:20.500.12749/26655
network_acronym_str UNAB2
network_name_str Repositorio UNAB
repository_id_str
dc.title.eng.fl_str_mv An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework
title An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework
spellingShingle An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework
Parallel Programming
Directive-based Programming
Plasma Physics
title_short An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework
title_full An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework
title_fullStr An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework
title_full_unstemmed An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework
title_sort An Implementation of a Plasma Physics Application for Distributed-memory Supercomputers using a Directive-based Programming Framework
dc.creator.fl_str_mv Asch, Christian
Francesquini, Emilio
Meneses, Esteban
dc.contributor.author.none.fl_str_mv Asch, Christian
Francesquini, Emilio
Meneses, Esteban
dc.contributor.orcid.spa.fl_str_mv Asch, Christian [0000-0002-3111-4858]
Francesquini, Emilio [0000-0002-5374-2521]
Meneses, Esteban [0000-0002-4307-6000]
dc.subject.keywords.eng.fl_str_mv Parallel Programming
Directive-based Programming
Plasma Physics
topic Parallel Programming
Directive-based Programming
Plasma Physics
description To extract performance from supercomputers, programmers in the High Performance Computing (HPC) community are often required to combine several frameworks to take advantage of the multiple levels of parallelism. Over the years, efforts have been made to simplify this situation by creating frameworks that can exploit multiple levels at once, although this often means that the programmer has to learn a new library. On the other hand, some frameworks were created by extending the capabilities of established paradigms. In this paper, we explore one of these frameworks, OpenMP Cluster. As its name implies, it extends the OpenMP API, allowing seasoned programmers to leverage their experience and use a single API for both shared-memory and distributed-memory parallelism. We took an existing plasma physics code written with MPI+OpenMP and ported it to OpenMP Cluster. We also show that, under certain conditions, the performance of OpenMP Cluster is similar to that of the MPI+OpenMP code.
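
The record does not reproduce the port itself, but the programming model can be illustrated with standard OpenMP target tasking, which OpenMP Cluster (OMPC) reuses to schedule tasks across the nodes of a cluster (Yviquel et al., 2023). The sketch below is a minimal, hypothetical example, not the authors' actual plasma physics port: trace_chunk, the particle count, and the task granularity are all assumptions made for illustration.

#include <stdio.h>
#include <stdlib.h>

#define N_PARTICLES 1024
#define CHUNK 256            /* particles per task: assumed granularity */

/* Hypothetical kernel: advance one chunk of particles by one time step. */
#pragma omp declare target
void trace_chunk(double *pos, int begin, int end) {
    for (int i = begin; i < end; i++)
        pos[i] += 0.1;       /* stand-in for a real field-line step */
}
#pragma omp end declare target

int main(void) {
    double *pos = calloc(N_PARTICLES, sizeof(double));

    #pragma omp parallel
    #pragma omp single
    {
        for (int b = 0; b < N_PARTICLES; b += CHUNK) {
            /* Each target task carries its own data mapping; under OMPC the
             * runtime may place it on a remote node, while a plain OpenMP
             * compiler falls back to the local device or host. */
            #pragma omp target nowait map(tofrom: pos[b:CHUNK]) \
                depend(inout: pos[b])
            trace_chunk(pos, b, b + CHUNK);
        }
        #pragma omp taskwait /* wait for all outstanding target tasks */
    }

    printf("pos[0] after one step: %f\n", pos[0]);
    free(pos);
    return 0;
}

Because OMPC keeps the standard directive syntax, the same source compiles with an ordinary OpenMP compiler (running the tasks locally) or with the OMPC toolchain, whose runtime distributes the target tasks over the cluster and handles the inter-node communication that an MPI+OpenMP code would otherwise manage explicitly.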
publishDate 2024
dc.date.accessioned.none.fl_str_mv 2024-09-19T21:16:07Z
dc.date.available.none.fl_str_mv 2024-09-19T21:16:07Z
dc.date.issued.none.fl_str_mv 2024-06-18
dc.type.coarversion.fl_str_mv http://purl.org/coar/version/c_970fb48d4fbd8a85
dc.type.driver.none.fl_str_mv info:eu-repo/semantics/article
dc.type.local.spa.fl_str_mv Artículo
dc.type.coar.none.fl_str_mv http://purl.org/coar/resource_type/c_2df8fbb1
dc.type.redcol.none.fl_str_mv http://purl.org/redcol/resource_type/ART
format http://purl.org/coar/resource_type/c_2df8fbb1
dc.identifier.issn.spa.fl_str_mv ISSN: 1657-2831
e-ISSN: 2539-2115
dc.identifier.uri.none.fl_str_mv http://hdl.handle.net/20.500.12749/26655
dc.identifier.instname.spa.fl_str_mv instname:Universidad Autónoma de Bucaramanga UNAB
dc.identifier.repourl.spa.fl_str_mv repourl:https://repository.unab.edu.co
dc.identifier.doi.none.fl_str_mv https://doi.org/10.29375/25392115.5053
identifier_str_mv ISSN: 1657-2831
e-ISSN: 2539-2115
instname:Universidad Autónoma de Bucaramanga UNAB
repourl:https://repository.unab.edu.co
url http://hdl.handle.net/20.500.12749/26655
https://doi.org/10.29375/25392115.5053
dc.language.iso.spa.fl_str_mv spa
language spa
dc.relation.spa.fl_str_mv https://revistas.unab.edu.co/index.php/rcc/article/view/5053/3967
dc.relation.uri.spa.fl_str_mv https://revistas.unab.edu.co/index.php/rcc/issue/view/297
dc.relation.references.none.fl_str_mv Allmann-Rahn, F., Lautenbach, S., Deisenhofer, M., & Grauer, R. (2024, March). The muphyII Code: Multiphysics Plasma Simulation on Large HPC Systems. Computer Physics Communications, 296, 109064. doi:https://doi.org/10.1016/j.cpc.2023.109064
Choi, J. Y., Chang, C.-S., Dominski, J., Klasky, S., Merlo, G., Suchyta, E., . . . Wood, C. (2018). Coupling Exascale Multiphysics Applications: Methods and Lessons Learned. 2018 IEEE International Conference on e-Science and Grid Computing (pp. 442-452). Amsterdam, Netherlands: IEEE. doi:10.1109/eScience.2018.00133
Coto-Vílchez, F., Vargas, V. I., Solano-Piedra, R., Rojas-Quesada, M. A., Araya-Solano, L. A., Ramírez, A. A., . . . Arias, S. (2020, July 8). Progress on the small modular stellarator SCR-1: new diagnostics and heating scenarios. Journal of Plasma Physics, 86(4), 815860401. doi:10.1017/S0022377820000677
Di Francia Rosso, P. H., & Francesquini, E. (2022). OCFTL: An MPI Implementation-Independent Fault Tolerance Library for Task-Based Applications. In I. Gitler, C. J. Barrios Hernández, & E. Meneses (Ed.), High Performance Computing. 8th Latin American Conference, CARLA 2021, Guadalajara, Mexico, October 6–8, 2021, Revised Selected Papers. 1540, pp. 131-147. Springer, Cham. doi:10.1007/978-3-031-04209-6_10
Jiménez, D., Campos-Duarte, L., Solano-Piedra, R., Araya-Solano, L. A., Meneses, E., & Vargas, I. (2020). BS-SOLCTRA: Towards a Parallel Magnetic Plasma Confinement Simulation Framework for Modular Stellarator Devices. In J. L. Crespo-Mariño, & E. Meneses-Rojas (Ed.), High Performance Computing. 6th Latin American Conference, CARLA 2019, Turrialba, Costa Rica, September 25–27, 2019, Revised Selected Papers. 1087, pp. 33-48. Springer, Cham. doi:10.1007/978-3-030-41005-6_3
Jiménez, D., Herrera-Mora, J., Rampp, M., Laure, E., & Meneses, E. (2022). Implementing a GPU-Portable Field Line Tracing Application with OpenMP Offload. In P. Navaux, C. J. Barrios H, C. Osthoff, & G. Guerrero (Ed.), High Performance Computing. 9th Latin American Conference, CARLA 2022, Porto Alegre, Brazil, September 26–30, 2022, Revised Selected Papers (pp. 31-46). Springer International Publishing. doi:10.1007/978-3-031-23821-5_3
Jiménez, D., Meneses, E., & Vargas, V. I. (2021, July 17). Adaptive Plasma Physics Simulations: Dealing with Load Imbalance using Charm++. PEARC '21: Practice and Experience in Advanced Research Computing. Article No. 3, pp. 1-8. New York, NY, USA: Association for Computing Machinery. doi:10.1145/3437359.3465566
Topcuoglu, H., Hariri, S., & Wu, M.-Y. (2002, March). Performance-effective and low-complexity task scheduling for heterogeneous computing. IEEE Transactions on Parallel and Distributed Systems, 13(3), 260-274. doi:10.1109/71.993206
Yviquel, H., Pereira, M., Francesquini, E., Valarini, G., Leite, G., Rosso, P., . . . Araujo, G. (2023, January). The OpenMP Cluster Programming Model. ICPP Workshops '22: Workshop Proceedings of the 51st International Conference on Parallel Processing. Article No. 17, pp. 1-11. Bordeaux, France: Association for Computing Machinery. doi:10.1145/3547276.3548444
dc.rights.coar.fl_str_mv http://purl.org/coar/access_right/c_abf2
rights_invalid_str_mv http://purl.org/coar/access_right/c_abf2
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Universidad Autónoma de Bucaramanga UNAB
dc.source.spa.fl_str_mv Vol. 25 No. 1 (2024): Revista Colombiana de Computación (January-June); 39-47
institution Universidad Autónoma de Bucaramanga - UNAB
bitstream.url.fl_str_mv https://repository.unab.edu.co/bitstream/20.500.12749/26655/2/license.txt
https://repository.unab.edu.co/bitstream/20.500.12749/26655/1/Art%c3%adculo.pdf
https://repository.unab.edu.co/bitstream/20.500.12749/26655/3/Art%c3%adculo.pdf.jpg
bitstream.checksum.fl_str_mv 855f7d18ea80f5df821f7004dff2f316
c124f4cd7c0893c9bdf7c3b592d364e0
a4e73e31d19c38e5c2a50f9e14a762ef
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
repository.name.fl_str_mv Repositorio Institucional | Universidad Autónoma de Bucaramanga - UNAB
repository.mail.fl_str_mv repositorio@unab.edu.co
_version_ 1812205663278858240