Computación científica paralela mediante uso de herramientas para paso de mensajes

Users of Scientific Computing Environments (SCEs) always demand more computing power for their applications. Using the proposed toolboxes, users of the well-known Matlab® and Octave platforms can, on a cluster of computers, parallelize their interpreted applications using message passing...

Full description

Authors:
Fernández, Francisco J.
Anguita, Mancia
Resource type:
Journal article
Date of publication:
2012
Institution:
Corporación Universidad de la Costa
Repository:
REDICUC - Repositorio CUC
Language:
spa
OAI Identifier:
oai:repositorio.cuc.edu.co:11323/2658
Online access:
https://hdl.handle.net/11323/2658
https://repositorio.cuc.edu.co/
Keywords:
Scientific computing
Parallel programming
High performance computing
Cluster computing
Message-passing
Parallel Matlab
Rights
openAccess
License
http://purl.org/coar/access_right/c_abf2
Translated title:
Parallel scientific computing with message-passing toolboxes
Description:
Users of Scientific Computing Environments (SCEs) always demand more computing power for their applications. Using the proposed toolboxes, users of the well-known Matlab® and Octave platforms can, on a cluster of computers, parallelize their interpreted applications using message passing, as provided by PVM (Parallel Virtual Machine) or MPI (Message Passing Interface). For many SCE applications it is possible to find a parallelization scheme with nearly linear speedup. These toolboxes are practically exhaustive interfaces to the corresponding libraries; they support all the data types available in the base SCE and were designed with performance and ease of maintenance in mind. This article summarizes previous work, its impact, and some results obtained by end users. Based on the most recent toolbox, the MPI Toolbox for Octave, its main features are briefly described and a case study, the Mandelbrot set, is presented.
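The Mandelbrot set case study mentioned above is a classic embarrassingly parallel workload: the image is split into rows, each process computes its share, and the master gathers the pieces. The sketch below is illustrative only — plain Python with the message-passing ranks simulated sequentially; the function names, grid bounds, and round-robin assignment are this sketch's assumptions, not the authors' Octave MPI Toolbox code:

```python
def mandelbrot_row(y, width, height, max_iter=50):
    """Escape-time counts for one image row over the region [-2, 1] x [-1.5, 1.5]."""
    c_im = -1.5 + 3.0 * y / (height - 1)
    row = []
    for x in range(width):
        c = complex(-2.0 + 3.0 * x / (width - 1), c_im)
        z = 0j
        n = 0
        while abs(z) <= 2.0 and n < max_iter:
            z = z * z + c
            n += 1
        row.append(n)
    return row


def parallel_mandelbrot(width, height, nranks):
    # "Scatter": rank r is assigned rows r, r + nranks, r + 2*nranks, ...
    # Round-robin assignment roughly balances load, since per-row cost varies.
    partial = {
        r: {y: mandelbrot_row(y, width, height) for y in range(r, height, nranks)}
        for r in range(nranks)
    }
    # "Gather": the master merges each rank's rows into the final image.
    image = [None] * height
    for rows in partial.values():
        for y, row in rows.items():
            image[y] = row
    return image
```

Under a real message-passing toolbox, each rank would run as a separate interpreter process and each per-rank block of rows would travel back to the master as a message; because row costs are uneven, interleaved assignment is one way to approach the near-linear speedup the article reports.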
Date of issue:
2012-10-31
Added to repository:
2019-02-21
Version:
Accepted version
Citation:
Fernández, F., & Anguita, M. (2012). Computación científica paralela mediante uso de herramientas para paso de mensajes. INGE CUC, 8(1), 51-84. https://revistascientificas.cuc.edu.co/ingecuc/article/view/223
ISSN:
0122-6517 (print); 2382-4700 (electronic)
Journal:
INGE CUC; Vol. 8, No. 1 (2012)
dc.relation.references.spa.fl_str_mv [1] C. B. Moler, Numerical Computing with MATLAB, Revised Reprint. SIAM, 2004, 2008. Para otras referencias autorizadas ver también http://www.mathworks.com/support/books/
[2] Web de The MathWorks, Disponible en: http://www.mathworks.com/products/pfo/
[3] Web de The MathWorks, Disponible en: http://www.mathworks.com/products/matlab/
[4] J . W. Eaton, D. Bateman y S. Hauberg, GNU Octave Manual. Network Theory Ltd., 2008.
[5] J . W. Eaton, “GNU Octave: History and outlook for the future” in Conference Proceedings of the 2005 AIChE Annual Meeting, Cincinnati Ohio, November 1, 2005.
[6] J . W. Eaton and J. B. Rawlings, “Ten Years of Octave - Recent Developments and Plans for the Future” in Proceedings of the 3rd International Workshop on Distributed Statistical Computing DSC-2003, Vienna, Austria, 2003.
[7] J . W. Eaton, “Octave: Past, Present and Future” in Proceedings of the 2nd International Workshop on Distributed Statistical Computing DSC-2001, Vienna, Austria, 2001.
[8] Web de Octave, Disponible en: http://www.gnu.org/software/octave/.
[9] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek and V. Sunderam, PVM: Parallel Virtual Machine. A Users’ Guide and Tutorial for Networked Parallel Computing. The MIT Press, 1994.
[10] Web PVM, Disponible en: http://www.csm.ornl.gov/pvm/.
[11] MPI Forum, “MPI: A Message-Passing Interface standard.” Int. J. Supercomput. Appl. High Perform. Comput., vol. 8, no. 3/4, pp. 159-416, 1994. Ver también los documentos del MPI Forum: MPI 2.2 standard (2009), MPI 3.0 Draft (2012), University of Tennessee, Knoxville. Disponible en: http://www.mpi-forum.org/
[12] W. Gropp, E. Lusk and A. Skjellum, Using MPI: Portable Parallel Programming with the Message Passing Interface, 2nd Edition. The MIT Press, 1999.
[13] W. Gropp, E. Lusk and R. Thakur, Using MPI-2: Advanced Features of the Message- Passing Interface. The MIT Press, 1999.
[14] G. Burns, R. Daoud and J. Vaigl, “LAM: an open cluster environment for MPI” in Proceedings of Supercomputing symposium, 1994, pp. 379-386.
[15] J . Squyres and A. Lumsdaine, “A component architecture for LAM/MPI” in Proceedings of the 10th European PVM/MPI Users’ Group Meeting, Lect. Notes Comput. Sc., vol. 2840, pp. 379-387, 2003.
[16] Web LAM, Disponible en: http://www.lam-mpi.org/about/overview/.
[17] E. Gabriel et al., Open-MPI team, “Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation” in Proceedings, 11th European PVM/MPI Users’ Group Meeting, Budapest, Hungary, September 2004.
[18] Web Open-MPI, Disponible en: http://www.open-mpi.org/.
[19] J . Fernández, “PVMTB (Parallel Virtual Machine Toolbox)” in III Congreso Usuarios MATLAB’99, 17-19 Nov. 1999, UNED, Madrid, Spain. pp.523–532. Disponible en: http://atc.ugr.es/~javier/investigacion/papers/Users99.pdf
[20] J. Fernández, “Message passing under MATLAB” in Proceedings of the Advanced Simulation Technologies Conference ASTC’01, Seattle Washington, April 22- 26, 2001, pp. 73-82.
[21] J. Fernández, A. Cañas, A. F. Díaz, J. González, J. Ortega and A. Prieto, “Performance of message-passing MATLAB toolboxes” in Proc. of the VecPar 2002, Lect. Notes Comput. Sc., vol. 2565, pp. 228-241, 2003. URL de las Toolboxes http://www.ugr.es/~jfernand
[22] J . Fernández, M. Anguita, S. Mota, A. Cañas, E. Ortigosa and F.J. Rojas, “Parallel programming toolboxes for Octave (poster)” in Proc. of the VecPar 2004, Valencia, Spain, June 28-30 2004, pp. 797-806. Disponible en: http://www.ugr.es/~jfernand/ investigacion/papers/VecPar04.pdf
[23] J . Fernández, M. Anguita, E. Ros and J.L. Bernier, “SCE toolboxes for the development of high-level parallel applications” in Proc. of the 6th ICCS 2006, Part II, Reading, United Kingdom, May 28-31, 2006. Lect. Notes Comput. Sc., vol. 3992, pp. 518-525.
[24] R. Pfarrhofer, P. Bachhiesl, M. Kelz, H. Stögner and A. Uhl, “MDICE - A MAT LAB toolbox for efficient cluster computing” in Proc. of Parallel Computing (Parco’ 03), Dresden, Germany, September 2-5, 2003, pp. 535-542.
[25] R. Pfarrhofer, M. Kelz, P. Bachhiesl, H. Stögner and A. Uhl, “Distributed optimization of fiber optic network layout using MATLAB” in Proc. ICCSA 2004, Part III, Lect. Notes Comput. Sc., vol. 3045, 2004, pp. 538-547.
[26] D . Petcu, D. Dubu and M. Paprzycki, “Extending Maple to the Grid: Design and implementation” in Proc. of the 3rd ISPDC/ HeteroPar’04, University College Cork, Ireland, July 5th - 7th 2004, pp. 209-216. DOI: 10.1109/ISPDC.2004.25.
[27] D . Petcu, M. Paprzycki and D. Dubu, “Design and implementation of a Grid extension for Maple” Scientific Programming, vol. 13, no. 2, 2005, pp. 137-149.
[28] D. Petcu, “Editorial: Challenges concerning symbolic computations on grids” Scalable Computing: Practice and Experience, vol. 6, no. 3, September 2005, pp. iii-iv.
[29] S. Goasguen, A. R. Butt, K. D. Colby and M. S. Lundstrorn, “Parallelization of the nanoscale device simulator nanoMOS-2.0 using a 100 nodes linux cluster,” in Proc. of the 2nd IEEE Conference on Nanotechnology, pp. 409-412, 2002. DOI 10.1109/ NANO.2002.1032277.
[30] S. Goasguen, R. Venugopal and M. S. Lundstrom, “Modeling transport in nanoscale silicon and molecular devices on parallel machines,” in Proc. of the 3rd IEEE Conference on Nanotechnology, vol. 1, pp. 398-401, 2003. DOI 10.1109/ NANO.2003.1231802.
[31] S. D. Canto, A. P. de Madrid and S. D. Bencomo, “Dynamic programming on clusters for solving control problems” in Proc. of the 4th Asian Control Conference ASCC’02, Suntec, Singapore, September 25-27, 2002.
[32] M. Parrilla, J. Aranda and S. D. Canto, “Parallel evolutionary computation: application of an EA to controller design,” in Proc. IWINAC 2005, Lect. Notes Comput. Sc., vol. 3562, pp. 153-162. DOI: 10.1007/11499305_16.
[33] S. D. Canto, A. P. de Madrid and S. D. Bencomo, “Parallel dynamic programming on clusters of workstations,” in IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 9, pp. 785-798, 2005.
[34] M. Creel, “User-friendly parallel computations with Econometric examples,” in Proc. of the 11th Int. Conf. on Computing in Economics and Finance, paper no. 445, 2005, Jun 23-25, Washington DC.
[35] M. Creel, “Creating and using a non-dedicated HPC cluster with Parallel-Knoppix,” in Proc. of the 12th International Conference on Computing in Economics and Finance, no. 202, Cyprus, Jun 22-24, 2006.
[36] M. Creel, “User-friendly parallel computations with Econometric examples,” Computational Economics, vol. 26, no. 2, pp. 107-128, Springer, October 2005. DOI: 10.1007/s10614-005-6868-2.
[37] J . A. Vrugt, H. V. Gupta, B. Ó Nualláin and W. Bouten, “Real-Time data assimilation for operational ensemble streamflow forecasting,” Journal of Hydrometeorology, vol. 7, no. 3, pp. 548-565, June 2006. DOI: 10.1175/JH M504.1
[38] J . A. Vrugt, B. Ó Nualláin, B. A. Robinson, W. Bouten, S. C. Dekker and P. M. A. Sloot, “Application of parallel computing to stochastic parameter estimation in environmental models,” Computers & Geosciences, vol. 32, iss. 8, October 2006, pp. 1139-1155. DOI: 10.1016/ j.cageo.2005.10.015. Ver pies de página en p. 1140, Sect. 4, Figs. 6, 8, Sect. 6.
[39] J . A. Vrugt, H. V. Gupta, S. C. Dekker, S. Sorooshian, T. Wagener and W. Bouten, “Application of stochastic parameter optimization to the Sacramento Soil Moisture Accounting model,” Journal of Hydrology, vol. 325, 2006, pp. 288–307. DOI: 10.1016/j.jhydrol.2005.10. 041. pp. 291, 305.
[40] J . August and T. Kanade, “Scalable regularized tomography without repeated projections,” in Proc. 18th Int. Parallel and Distributed Processing Symposium (IPDPS’04), pp. 232-239, 26-30 April 2004, Santa Fe, New Mexico. DOI: 10.1109/IPDPS.2004.1303277. Sects. 4/5, p. 237, Fig. 5, p. 238.
[41] T. Varslot and S.-E. Måsøy, “Forward propagation of acoustic pressure pulses in 3D soft biological tissue,” Modelling, Identification and Control, vol. 27, no. 3, pp. 181- 190. Ver Sect. 5, p. 196, y último párrafo en las conclusiones.
[42] M. Zhao, V. Chadha and R. J. Figueiredo, “Supporting application-tailored Grid File System sessions with WSRF-based services,” in Proc. of the 14th IEEE Int. Symp. on High Perf. Distributed Computing HPDC-14, pp. 24-33, 2005. DOI 10.1109/ HPDC.2005.1520930.
[43] M. Zhao and R. J. Figueiredo, “Application- tailored cache consistency for Wide- Area File Systems,” in Proc. of the 26th International Conference on Distributed Computing Systems (ICDCS 2006), pp. 41-50, July 4-7 2006, Lisboa, Portugal. DOI: 10.1109/ICDCS.2006.17.
[44] J . Kepner and S. Ahalt, “MatlabMPI,” Journal of Parallel and Distributed Computing, vol. 64, iss. 8, pp. 997-1005, Elsevier, August 2004. DOI: 10.1016/j.jpdc.2004.03.018.
[45] R. Choy and A. Edelman, “Parallel MATLAB: doing it right,” Proceedings of the IEEE, vol. 93, iss. 2, Feb. 2005, pp. 331- 341. DOI: 10.1109/JPROC.2004.840490.
[46] S. Raghunathan, “Making a supercomputer do what you want: High-level tools for parallel programming,” Computing in Science & Engineering, vol. 8, no. 5, Sept.-Oct. 2006, pp. 70-80. DOI: 10.1109/ MCSE.2006.93.
[47] R. Soganci, F. Gürgen and H. Topcuoglu, “Parallel Implementation of a VQ-based text-independent speaker identification,” in Proc. 3rd ADVIS 2004, Lect. Notes Comput. Sc., vol. 3261, pp. 291-300, 2004.
[48] C. Bekas, E. Kokiopoulou and E. Gallopoulos, “The design of a distributed MATLAB- based environment for computing pseudospectra,” Future Generation Computing Systems, vol. 21, iss. 6, pp. 930- 941, Elsevier, Jun 2005. DOI: 10.1016/j. future.2003.12.017.
[49] J . Kepner, “Parallel Programming with MatlabMPI,” in Agenda 5th Annual Workshop on High Performance Embedded Computing HPEC’01, MIT Lincoln Laboratory, Lexington, MA, 27-29 Nov. 2001. Disponible en: http://arxiv.org/abs/astroph/0107406.
[50] E. Manolakos, “Rapid Prototyping of Matlab/Java Distributed Applications using the JavaPorts components,” in Proc. 6th Annual Workshop on High Performance Embedded Computing HPEC’02, MIT Lincoln Laboratory, Lexington, MA, 24- 26 Sept. 2002. Disponible en: http://www. ll.mit.edu/HPEC/agendas/proc02/presentations/pdfs/4.4-manolakos.PDF
[51] S. Gallopoulos, “PSEs in Computational Science and Engineering education & training,” in Advanced Environments and Tools for High Performance Computing, EuroConference on Problem Solving Environments and the Information Society, University of Thessaly, Greece, 14-19 June 2003. pp. 50, 77.
[52] G. Landi, E. L. Piccolomini and F. Zama, “A parallel software for the reconstruction of dynamic MRI sequences,” in Proc. 10th EuroPVM/MPI, Lect. Notes Comput. Sc., vol. 2840, pp. 511-519, Springer, 2003.
[53] T. Andersen, A. Enmark, D. Moraru, C. Fan, M. Owner-Petersen, H. Riewaldt, M. Browne and A. Shearer, “A parallel integrated model of the Euro50,” in Proc. Of the SPIE, vol. 5497, paper-ID [5497-25], Europe International Symposium on Astronomical Telescopes, 21-25 June 2004, Glasgow, Scotland, United Kingdom.
[54] M. Browne, T. Andersen, A. Enmark, D. Moraru and A. Shearer, “Parallelization of MATLAB for Euro50 integrated modeling,” in Proc. of the SPIE, vol. 5497, paper-ID [5497-71], Europe International Symposium on Astronomical Telescopes, 21-25 June 2004, Glasgow, Scotland, United Kingdom.
[55] D . Petcu, D. Tepeneu, M. Paprzycki, T. Mizutani and T. Ida, “Survey of symbolic computations on the Grid,” in Proc. of the 3rd Int. Conference Sciences of Electronic, Technologies of Information and Telecommunications SETIT 2005, Susa, Tunisia, March 27-31, 2005. Ver p. 4/11.
[56] D . Petcu, D. Tepeneu, M. Paprzycki, and T. Ida, “Symbolic computations on the Grid,” in B. Di Martino et al. (eds.), Engineering the Grid; Status and Perspective, America Scientific Publishers, Los Angeles, CA, Jan. 2006, Ch. 27, pp. 91-107.
[57] S. Goasguen, “High performance computing for nanoscale device simulation,” in The Army Research Office (ARO) FY2001 Defense University Research Initiative on Nanotechnology (DURINT), Kick-Off Meeting, Third-Year Review, July 24-25, 2003. Disponible en: http://nanolab.phy.stevens-tech.edu/DU RINT2003/PDF/Goasquen.pdf. pp. 9-22, 28/34. Enlace recuperable mediante Internet Archive Way Back Machine http://liveweb.archive.org/http://nanolab.phy.stevens-tech.edu/DU RINT2003/PDF/Goasquen.pdf.
[58] C. Shue, J. Hursey and A. Chauhan, “MPI over scripting languages: usability and performance tradeoffs,” IUCS Technical Report TR631, University of Indiana, Feb. 2006. Ver Fig. 5, pp. 11/13.
[59] M. Collette, B. Corey and J. Johnson, “High performace tools and technologies,” Technical Report UCRL-TR-209289, Lawrence Livermore National Laboratory (LLNL), US. Dept. of Energy, December 2004. pp. 67-68/79.
[60] R. Serban, LLNL: SundialsTB, a Matlab Interface to SUNDIALS. Disponible en: http://www.llnl.gov/CASC/sundials/documentation/stb_guide/sundialsTB.html
[61] M. Creel, OctaveForge: Econometrics package for Octave. Disponible en: http:// octave.sourceforge.net/econometrics/index.html.
[62] M. Creel, Universidad Autónoma de Barcelona, Spain: Parallel-Knoppix Linux, Disponible en: http://pareto.uab.es/mcreel/ParallelKnoppix/. Enlace recuperable mediante Internet WayBack Machine http://web.archive.org/web/20080819061319/http://pareto.uab.es/mcreel/ParallelKnoppix/
[63] M. Creel, Universidad Autónoma de Barcelona, Spain: PelicanHPC Linux, Disponible en: http://pareto.uab.es/mcreel/PelicanHPC/.
[64] VL-E Project, Virtual Laboratory for e- Science, Dutch EZ ICT innovation program. Disponible en: http://poc.vl-e.nl/.
[65] S. Goasguen, “A guided tour of nanoMOS code and some tips on how to parallelize it,” summer course “Electron Devices at the Nano/Molecular Scale,” Summer School at UIUC, University of Illinois, Urbana-Champaign, May, 21-22, 2002. Disponible en: http://www.mcc.uiuc.edu/summerschool/2002/Mark%20Lundstrom/Lundstrom_files/COURSE19.pdf, pp. 10-11, 13, 17/17.
[66] S. Goasguen, “On the use of MPI and PVM in Matlab,” Computing Research Institute Seminars, CS 111, March 26th, 2003. Disponible en: http://www. cs.purdue.edu/calendar/webevent. cgi?cmd=showevent&id=426
[67] M. Law, “MATLAB laboratory for MPI Toolbox (MPITB),” MATH2160 Mathematical & Statistical Software Laboratory, Nov. 2003, Hong-Kong Baptist University HKBU. Disponible en: http://www.sci.hkbu.edu.hk/tdgc/TeachingMaterial/MPITB/MPITB.pdf
[68] M. Law, “Guest Lecture on Cluster Computing,” COMP3320 Distributed Systems, Cluster Computing 2006 Lecture, HKBU. Disponible en: http://www.comp.hkbu.edu.hk/~jng/comp3320/3320-Cluster2006.ppt, pp. 16/22.
[69] M. Law, “Experiencing cluster computing, Class 2: Overview,” Learning Computational Science on a Parallel Architecture, tutorial 5, HKBU, 2002-2012. Disponible en: http://www.sci.hkbu.edu.hk/tdgc/tutorial/ExpClusterComp/ExpCluster-Comp02.ppt, pp. 6, 33/46.
[70] M. Law, “Recurring HPC course, Syllabus V: MPI Toolbox (2 hrs),” Learning Computational Science on a Parallel Architecture, tutorial 3, HKBU, 2002-2012. Disponible en: http://www.sci.hkbu.edu.hk/tdgc/tutorial/RHPCC/syllabus.php
[71] B. Skinner, “Introduction to MPITB,” Users’ presentations, Matlab support, Research Computing website, University of Technology Sydney (UTS). Disponible en: http://services.eng.uts.edu.au/ResearchComputing/support/matlab.
[72] A. H. Davis, “Structuring collaborative visualization using facet trees: Design and Implementation of a Prototype System,” Honours Thesis, School of CIT, Griffith University, Australia, October 2000. Disponible en: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.1715&rep=rep1&type=pdf, pp. 10/163
[73] L. Y. Choy, “MATLAB*P 2.0: Interactive supercomputing made practical,” M. Sc. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Sept. 2002. Disponible en: http://people.csail.mit.edu/cly/thesis.ps.gz, pp. 12, 14/67.
[74] H . Singh, “Parallel programming in MATLAB,” M. Sc. Thesis, University of East Anglia, 2002. Disponible en: http://www.uea.ac.uk/~a207713/ thesis/report.pdf, pp. 26-27, 65, 69/108
[75] S. Merchant, “Approaches for MATLAB applications acceleration using HPRC,” M. Sc. Thesis, ECE Department, University of Tennessee, Knoxville, August 2003. Disponible en: http://web.eecs.utk.edu/~gdp/pdf/merchant-ms-thesis.pdf, pp.10/163.
[76] A. Rajeev, “Peer-to-peer support for Matlab- style computing,” M. Sc Thesis, Texas A&M University, May 2004. Disponible en: http://repository.tamu.edu//handle/1969.1/503, pp. 10-12, 49/53.
[77] R. Choy, “Parallel Matlab survey,” Disponible en: http://people.csail.mit.edu/cly/survey.html. Internet WayBack Machine http://web.archive.org/web/20070705000531/http://people.csail.mit.edu/cly/survey.html
[78] C. Moler, The Mathworks, Natick, MA, personal communication, comp.soft-sys. matlab newsgroup, Nov. 21, 2001. Disponible en: http://groups.google.com/group/comp.soft-sys.matlab/browse_frm/thread/cb42ee88cd8af834/8b08b6005b61be54.
[79] S. Pawletta, T. Pawletta, W. Drewelow, P. Duenow and M. Suesse, “A Matlab toolbox for distributed and parallel processing,” Int. Matlab Conference 95, Cambridge, MA, 8 pages.
[80] S. Pawletta, T. Pawletta and W. Drewelow, “Distributed and parallel simulation in an interactive environment,” in Proc. of the 1995 EUROSIM Conference, EUROSIM ‘95, Vienna, Austria, Elsevier Science Publisher B.V., September 1995, pp. 345-350.
[81] S. Pawletta, T. Pawletta and W. Drewelow, “HLA-based Simulation within an Interactive Engineering Environment,” In Proc. 4th IEEE Int. Workshop on Distributed Simulation and Real-Time Applications, DS-RT’2000, San Francisco, CA, USA, August 2000, pp. 97-102. DOI: 10.1109/DISRTA.2000.874068.
[82] S. Pawletta, W. Drewelow and T. Pawletta, “Distributed and Parallel Processing with Matlab -- The DP Toolbox,” Simulation News Europe (SNE), Hrsg. ARGESIM/ ASIM, Wien, 2001, no. 31, pp. 13-14.
[83] T. Pawletta, C. Deatcu, O. Hagendorf, S. Pawletta and G. Colquhoun, “DEVS-Based Modeling and Simulation in Scientific and Technical Computing Environments,” in Proc. of DEVS Integrative M&S Symposium (DEVS’06) - 2006 Spring Simulation Multiconference (SpringSim’06), Huntsville/ AL, USA, April 2-6, 2006, pp. 151-158.
[84] J . Hollingsworth, K. Liu and P. Pauca, “PT v. 1.00: Manual and Reference Pages,” Technical Report, Mathematics and Computer Science Department, Wake Forest University, 1996. Disponible en: http://www.math.wfu.edu/pt/pt.html.
[85] A. E. Trefethen, V. S. Menon, C. Chang, G. Czajkowski, C. Myers and L. N. Trefethen, “MultiMATLAB: MATLAB on Multiple Processors,” Technical Report: TR96-1586, Cornell University, NY, USA, 1996. Disponible en: http://portal.acm.org/citation.cfm?id=866863. See also URL: http://www.cs.cornell.edu/Info/People/lnt/multimatlab.html
[86] V. Menon and A. E. Trefethen, “Multi- MATLAB: integrating MATLAB with high-performance parallel computing,” in Proceedings of the 1997 ACM/IEEE Conference on Supercomputing (San Jose, CA, November 15-21, 1997). Supercomputing’ 97. 18 pages. DOI: http://doi.acm. org/10.1145/509593.509623.
[87] J . Zollweg, CMTM (Cornell Multitask Toolbox for MATLAB) web, Disponible en: http://www.tc.cornell.edu/Services/Support/Forms/cmtm.
[88] P. Husbands and C. Isbell, PPserver (Parallel Problems Server) web, Disponible en: http://crd.lbl.gov/~parry/text/ppserver/
[89] P. Husbands and C. Isbell, “The Parallel Problems Server: A Client-Server Model for Large Scale Scientific Computation,” in Proceedings of the Third International Conference on Vector and Parallel Processing, VecPar’98. Portugal, 1998, pp. 156-169.
[90] P. Husbands, C. Isbell and A. Edelman, “Interactive Supercomputing with MITMatlab,” MIT AI Memo 1642. Presented at: the Second IMA Conference on Parallel Computation. Oxford, 1998. 10 pages.
[91] P. Husbands and C. Isbell, “MATLAB*P: A Tool for Interactive Supercomputing,” in Proc. 9th SIAM Conference on Parallel Processing for Scientific Computing, 1999.
[92] C. Isbell and P. Husbands, “The Parallel Problems Server: An Interactive Tool for Large Scale Machine Learning,” in Advances in Neural Information Processing Systems, vol. 12, 2000.
[93] C. Moler, “Objectively speaking,” Cleve’s Corner, MATLAB News&Notes, Winter 1999. The Mathworks. Disponible en: http://www.mathworks.com/company/newsletters/news_notes/clevescorner/
[94] C. Moler, “MATLAB incorporates LAPACK,” Cleve’s Corner, MATLAB News&Notes, Winter 2000. The Mathworks. Disponible en: http://www.mathworks.com/company/newsletters/news_notes/clevescorner/
[95] R. Choy, D. Cheng, A. Edelman, J. Gilbert and V. Shah, “Star-P: High Productivity Parallel Computing,” in Proc. 8th HPEC 2004, MIT Lincoln Laboratory, 28-30 Sept. 2004. Disponible en: http://www.ll.mit.edu/HPEC/agendas/proc04/abstracts/choy_ron.pdf.
[96] V. Shah and J. R. Gilbert, “Sparse Matrices in Matlab*P: Design and Implementation,” in Proc. 11th International Conference High Performance Computing - HiPC 2004, Bangalore, India, December 19-22, 2004, pp. 144-155. DOI: 10.1007/b104576.
[97] J . Gilbert, V. Shah, T. Letsche, S. Reinhardt and A. Edelman, “An Interactive Approach to Parallel Combinatorial Algorithms with Star-P”, in Proc. 9th HPEC 2005, MIT Lincoln Laboratory, 20-22 Sept. 2005. Disponible en: http://www.ll.mit.edu/HPEC/agendas/proc05/Day_2/Abstracts/ 1030_Edelman_A.PDF
[98] A. Edelman, P. Husbands and S. Leibman, “Interactive Supercomputing’s Star-P platform: Parallel MATLAB and MPI Homework Classroom Study on High Level Language Productivity,” in Proc. 10th High Performance Embedded Computing Workshop (HPEC 2006), MIT Lincoln Lab., 2006.
[99] A. Edelman, “Parallel MATLAB(R) doing it right,” in 6th Annual Workshop on Linux Clusters for Super Computing, LCSC’05, October 17-19, 2005. National Supercomputer Centre (NSC). Linköping University, Sweden. Disponible en: http://www.nsc.liu.se/lcsc2005/programme.html#LCSC,
[100] J . Kepner and N. Travinin, “Parallel Matlab: The Next Generation,” in Proc. 7th HPEC 2003, MIT Lincoln Laboratory, 23-25 Sept. 2003.
[101] R. Haney, A. Funk, J. Kepner, H. Kim, C. Rader, A. Reuther and N. Travinin, “pMatlab takes the HPCchallenge,” in Proc. 8th HPEC 2004, MIT Lincoln Laboratory, 28-30 Sept. 2004.
[102] J . Kepner, T. Currie, H. Kim, B. Mathew, A. McCabe, M. Moore, D. Rabinkin, A. Reuther, A. Rhoades, L. Tella and N. Travinin, “Deployment of SAR and GMTI Signal Processing on a Boeing 707 Aircraft using pMatlab and a Bladed Linux Cluster,” in Proc. 8th HPEC 2004, MIT Lincoln Laboratory, 28-30 Sept. 2004.
[103] J . Kepner, “HPC Productivity: An Overarching View,” International Journal of High Performance Computing Applications, Special Issue on HPC Productivity, J. Kepner (editor), vol. 18, no. 4, pp. 393- 397, Nov. 2004.
[104] N. Travinin, R. Bond, J. Kepner and H. Kim, “pMatlab: High Productivity, High Performance Scientific Computing,” 2005 SIAM Conference on Computational Science and Engineering, February 12, Orlando, FL. USA.
[105] N. Travinin, H. Hoffmann, R. Bond, H. Chan, J. Kepner and E. Wong, “pMapper: Automated Mapping of Parallel Matlab Programs,” in Proc. 9th HPEC 2005, MIT Lincoln Laboratory, 20-22 Sept. 2005. Disponible en: http://www.ll.mit.edu/HPEC/agendas/proc05/Day_2/Abstracts/1345_ Travinin_A.PDF
[106] N. Bliss and J. Kepner, “pMatlab parallel matlab library,” International Journal of High Performance Computing Applications, Special Issue on High Level Programming Languages and Models, J. Kepner and H. Zima (editors), 2006.
[107] The MathWorks, Distributed ComputingToolbox, Enlaces de 2007 recuperables mediante Internet WayBack Machine, URL: http://web.archive.org/web/20071217040037/ http://www.mathworks.com/products/distribtb/
[108] C. Moler, “Parallel MATLAB,” invited presentation in Workshop on State-of-theart in Scientific and Parallel Computing, PARA’06, Umeå, Sweden, June 18-21, 2006. Session IP4, Disponible en: http://web.archive.org/web/20070126024524/ http://www.hpc2n.umu.se/para06/index.php?content=ip4, http://www.hpc2n.umu.se/node/647
[109] J . Gilbert, S. Reinhardt and V. Shah, “High-performance graph algorithms from parallel sparse matrices,” in Workshop on State-of-the-art in Scientific and Parallel Computing, PARA’06, Umeå, Sweden, June 18-21, 2006. Session MS5, Disponible en: http://www.hpc2n.umu.se/para06/ index.php?content=ms5
[110] The MathWorks, Media Newsletter April 2006, article “Distributed Computing Speeds Application Development for Technical Computing” Disponible en: http://www.mathworks.com/company/pressroom/newsletter/may06/distrib.html
[111] Workshop on distributed computing with MATLAB, March 20-22, 2006, Arctic Region Supercomputing Center (ARSC), University of Alaska Fairbanks campus. Disponible en: http://www.arsc.edu/support/training/MATLAB_2006.html.
[112] C. Moler, “Is it Finally the Time for a Parallel Matlab?,” Householder Symposium XVI, 2005, Householder Meeting on Numerical Linear Algebra, May 23-27, 2005, Champion, Pennsylvania, USA.
[113] A. Edelman et al., Computational Research in Boston (CRiB) seminar series. Disponible en: http://www-math.mit.edu/crib/
[114] A. Edelman, R. Choy, J. Gilbert and V. Shah, “Parallel Computing Made Easy with STAR-P,” short course in SIAM Conference on Parallel Processing for Scientific Computing (PP04), San Francisco, CA, Feb. 24, 2004. Disponible en: http://www.siam.org/meetings/pp04/edelman.htm
[114] A. Edelman, R. Choy, J. Gilbert and V. Shah, “Parallel Computing Made Easy with STAR-P,” short course in SIAM Conference on Parallel Processing for Scientific Computing (PP04), San Francisco, CA, Feb. 24, 2004. Disponible en: http://www.siam.org/meetings/pp04/ edelman.htm
[115] A. Edelman, “Interactive Parallel MATLAB® with Star-p,” Seminar in Umeå Center for Interaction Technology, Oct. 20th 2005. Disponible en: http://www. ucit.umu.se/main.php?view=sem/9. Enlace recuperable mediante Internet WayBack Machine http://web.archive. org/web/20070520201742/http://www.ucit.umu.se/main.php?view=sem/9
[116] The MathWorks, SC|05 & SC|06 booth press releases, Disponible en: http://web.archive.org/web/20080905111539/http://www.mathworks.com/company/pressroom/articles/article11350.html, http://web.archive.org/web/20071012121615/http://www.mathworks.com/company/events/tradeshows/tradeshow12352.html
[117] J. Kepner, A. Reuther, L. Dean and S. Grad-Freilich, “Parallel and Distributed Computing with MATLAB,” Tutorial S07 in SuperComputing 2005, SC|05. Disponible en: http://sc05.supercomputing.org/schedule/event_detail.php?evid=5115
[118] A. Edelman, J. Nehrbass, “Interactive Supercomputing with Star-P and MATLAB,” Tutorial S11 in SuperComputing 2005, SC|05. Disponible en: http://sc05.supercomputing.org/schedule/event_detail.php?evid=5104
[119] J. Kepner, “HPCS (Workshop on High Productivity Computing Systems),” Workshop in SuperComputing 2005, SC|05. Disponible en: http://sc05.supercomputing.org/schedule/event_detail.php?evid=5284
[120] S. Chapin and J. Worringen, “Operating Systems,” International Journal of High Performance Computing Applications, vol. 15, no. 2, pp. 115-123. SAGE publications, 2001.
[121] M. Baker, “Preface,” International Journal of High Performance Computing Applications, vol. 15, no. 2, p. 91. SAGE publications, 2001.
[122] B. Wilkinson and M. Allen, Parallel Programming: Techniques and Applications using Networked Workstations and Parallel Computers, 2nd Ed., Pearson - Prentice Hall, NJ, 1999.
[123] Wikipedia, Mandelbrot set. Disponible en: http://en.wikipedia.org/wiki/Mandelbrot_set
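The case study cited in [123], the Mandelbrot set, is a classic embarrassingly parallel workload: each point's escape-iteration count is independent of every other point's, so image rows can be farmed out to worker processes with the near-linear speedup the abstract reports. The article's own code targets Octave with the MPI Toolbox; purely as an illustration of the same row-wise decomposition, here is a minimal Python sketch (all names and parameters are hypothetical, not taken from the paper):

```python
from multiprocessing import Pool

def escape_iters(c, max_iter=100):
    # Iterations of z -> z*z + c before |z| exceeds 2
    # (returns max_iter if the point never escapes, i.e. is in the set).
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

def mandelbrot_row(args):
    # Compute one image row over the region [-2, 1] x [-1.5, 1.5].
    # Rows are mutually independent, which makes the job trivially parallel.
    y, width, height, max_iter = args
    return [escape_iters(complex(-2.0 + 3.0 * x / width,
                                 -1.5 + 3.0 * y / height), max_iter)
            for x in range(width)]

def mandelbrot(width=32, height=16, max_iter=50, workers=2):
    # Master/worker scheme: the process pool stands in for the MPI worker
    # processes, each computing a subset of the rows.
    tasks = [(y, width, height, max_iter) for y in range(height)]
    with Pool(workers) as pool:
        return pool.map(mandelbrot_row, tasks)

if __name__ == "__main__":
    image = mandelbrot()
    print(len(image), len(image[0]))  # 16 32
```

In the Octave/MPITB setting described by the article, the same decomposition would distribute rows to workers via message passing rather than a shared-memory process pool; the pool here is only a stand-in for that communication layer.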
dc.relation.ispartofjournalabbrev.spa.fl_str_mv INGE CUC
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.coar.spa.fl_str_mv http://purl.org/coar/access_right/c_abf2
eu_rights_str_mv openAccess
rights_invalid_str_mv http://purl.org/coar/access_right/c_abf2
dc.format.mimetype.spa.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Corporación Universidad de la Costa
dc.source.spa.fl_str_mv INGE CUC
institution Corporación Universidad de la Costa
dc.source.url.spa.fl_str_mv https://revistascientificas.cuc.edu.co/ingecuc/article/view/223
bitstream.url.fl_str_mv https://repositorio.cuc.edu.co/bitstreams/289d4c3d-c70b-4066-b966-4ce7c39f5414/download
https://repositorio.cuc.edu.co/bitstreams/235567e2-c19d-4d7f-8293-d6304c05b676/download
https://repositorio.cuc.edu.co/bitstreams/46538ab6-a559-4f2f-8ea6-8a388913c6b4/download
https://repositorio.cuc.edu.co/bitstreams/b15c66bc-7b50-40ec-8930-f837f522ee25/download
bitstream.checksum.fl_str_mv 91793af6c33fc2875b0b4c1b79cc97e8
8a4605be74aa9ea9d79846c1fba20a33
0754abd2f738b715a6a299d09de27509
c86c94a73be3d99cbeaf3b06ec83b4e1
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
MD5
MD5
repository.name.fl_str_mv Repositorio de la Universidad de la Costa CUC
repository.mail.fl_str_mv repdigital@cuc.edu.co
_version_ 1811760766967087104
spelling Fernández, Francisco J.Anguita, Mancia2019-02-21T00:07:57Z2019-02-21T00:07:57Z2012-10-31Fernández, F., & Anguita, M. (2012). Computación científica paralela mediante uso de herramientas para paso de mensajes. INGE CUC, 8(1), 51-84. Recuperado a partir de https://revistascientificas.cuc.edu.co/ingecuc/article/view/2230122-6517, 2382-4700 electrónicohttps://hdl.handle.net/11323/26582382-4700Corporación Universidad de la Costa0122-6517REDICUC - Repositorio CUChttps://repositorio.cuc.edu.co/Los usuarios de Entornos de Computación Científica (SCE, por sus siglas en inglés) siempre requieren mayor potencia de cálculo para sus aplicaciones. Utilizando las herramientas propuestas, los usuarios de las conocidas plataformas Matlab® y Octave, en un cluster de computadores, pueden paralelizar sus aplicaciones interpretadas utilizando paso de mensajes, como el proporcionado por PVM (Parallel Virtual Machine) o MPI (Message Passing Interface). Para muchas aplicaciones SCE es posible encontrar un esquema de paralelización con ganancia en velocidad casi lineal. Estas herramientas son interfaces prácticamente exhaustivas a las correspondientes librerías, soportan todos los tipos de datos compatibles en el SCE base y se han diseñado teniendo en cuenta el rendimiento y la facilidad de mantenimiento. En este artículo se resumen trabajos anteriores, su repercusión, y algunos resultados obtenidos por usuarios finales. Con base en la herramienta más reciente, la Toolbox MPI para Octave, se describen brevemente sus características principales, y se presenta un estudio de caso, el conjunto de Mandelbrot. Users of Scientific Computing Environments (SCE) always demand more computing power for their CPU-intensive SCE applications. 
Using the proposed toolboxes, users of the well-known Matlab® and Octave platforms in a computer cluster can parallelize their interpreted applications using the native multi-computer programming paradigm of message-passing, such as that provided by PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). For many SCE applications, a parallelization scheme can be found so that the resulting speedup is nearly linear in the number of computers used. The toolboxes are almost comprehensive interfaces to the corresponding libraries; they support all the compatible data types in the base SCE, and they have been designed with performance and maintainability in mind. In this paper, we summarize our previous work, its repercussion, and some results obtained by end-users. Focusing on our most recent MPI Toolbox for Octave, we briefly describe its main features, and introduce a case study: the Mandelbrot set. Fernández, Francisco J.-5988be9e-6feb-47f0-aeed-3b1b02e99726-0Anguita, Mancia-2d88536d-b50b-41b5-8fff-0afbbd012a4d-0application/pdfspaCorporación Universidad de la CostaINGE CUC; Vol. 8, Núm. 1 (2012)INGE CUCINGE CUC[1] C. B. Moler, Numerical Computing with MATLAB, Revised Reprint. SIAM, 2004, 2008. Para otras referencias autorizadas ver también http://www.mathworks.com/support/books/[2] Web de The MathWorks, Disponible en: http://www.mathworks.com/products/pfo/[3] Web de The MathWorks, Disponible en: http://www.mathworks.com/products/matlab/[4] J. W. Eaton, D. Bateman y S. Hauberg, GNU Octave Manual. Network Theory Ltd., 2008.[5] J. W. Eaton, “GNU Octave: History and outlook for the future” in Conference Proceedings of the 2005 AIChE Annual Meeting, Cincinnati Ohio, November 1, 2005.[6] J. W. Eaton and J. B. Rawlings, “Ten Years of Octave - Recent Developments and Plans for the Future” in Proceedings of the 3rd International Workshop on Distributed Statistical Computing DSC-2003, Vienna, Austria, 2003.[7] J. W. 
Eaton, “Octave: Past, Present and Future” in Proceedings of the 2nd International Workshop on Distributed Statistical Computing DSC-2001, Vienna, Austria, 2001.[8] Web de Octave, Disponible en: http://www.gnu.org/software/octave/.[9] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek and V. Sunderam, PVM: Parallel Virtual Machine. A Users’ Guide and Tutorial for Networked Parallel Computing. The MIT Press, 1994.[10] Web PVM, Disponible en: http://www.csm.ornl.gov/pvm/.[11] MPI Forum, “MPI: A Message-Passing Interface standard.” Int. J. Supercomput. Appl. High Perform. Comput., vol. 8, no. 3/4, pp. 159-416, 1994. Ver también los documentos del MPI Forum: MPI 2.2 standard (2009), MPI 3.0 Draft (2012), University of Tennessee, Knoxville. Disponible en: http://www.mpi-forum.org/[12] W. Gropp, E. Lusk and A. Skjellum, Using MPI: Portable Parallel Programming with the Message Passing Interface, 2nd Edition. The MIT Press, 1999.[13] W. Gropp, E. Lusk and R. Thakur, Using MPI-2: Advanced Features of the Message- Passing Interface. The MIT Press, 1999.[14] G. Burns, R. Daoud and J. Vaigl, “LAM: an open cluster environment for MPI” in Proceedings of Supercomputing symposium, 1994, pp. 379-386.[15] J . Squyres and A. Lumsdaine, “A component architecture for LAM/MPI” in Proceedings of the 10th European PVM/MPI Users’ Group Meeting, Lect. Notes Comput. Sc., vol. 2840, pp. 379-387, 2003.[16] Web LAM, Disponible en: http://www.lam-mpi.org/about/overview/.[17] E. Gabriel et al., Open-MPI team, “Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation” in Proceedings, 11th European PVM/MPI Users’ Group Meeting, Budapest, Hungary, September 2004.[18] Web Open-MPI, Disponible en: http://www.open-mpi.org/.[19] J . Fernández, “PVMTB (Parallel Virtual Machine Toolbox)” in III Congreso Usuarios MATLAB’99, 17-19 Nov. 1999, UNED, Madrid, Spain. pp.523–532. Disponible en: http://atc.ugr.es/~javier/investigacion/papers/Users99.pdf[20] J. 
Fernández, “Message passing under MATLAB” in Proceedings of the Advanced Simulation Technologies Conference ASTC’01, Seattle Washington, April 22- 26, 2001, pp. 73-82.[21] J. Fernández, A. Cañas, A. F. Díaz, J. González, J. Ortega and A. Prieto, “Performance of message-passing MATLAB toolboxes” in Proc. of the VecPar 2002, Lect. Notes Comput. Sc., vol. 2565, pp. 228-241, 2003. URL de las Toolboxes http://www.ugr.es/~jfernand[22] J . Fernández, M. Anguita, S. Mota, A. Cañas, E. Ortigosa and F.J. Rojas, “Parallel programming toolboxes for Octave (poster)” in Proc. of the VecPar 2004, Valencia, Spain, June 28-30 2004, pp. 797-806. Disponible en: http://www.ugr.es/~jfernand/ investigacion/papers/VecPar04.pdf[23] J . Fernández, M. Anguita, E. Ros and J.L. Bernier, “SCE toolboxes for the development of high-level parallel applications” in Proc. of the 6th ICCS 2006, Part II, Reading, United Kingdom, May 28-31, 2006. Lect. Notes Comput. Sc., vol. 3992, pp. 518-525.[24] R. Pfarrhofer, P. Bachhiesl, M. Kelz, H. Stögner and A. Uhl, “MDICE - A MAT LAB toolbox for efficient cluster computing” in Proc. of Parallel Computing (Parco’ 03), Dresden, Germany, September 2-5, 2003, pp. 535-542.[25] R. Pfarrhofer, M. Kelz, P. Bachhiesl, H. Stögner and A. Uhl, “Distributed optimization of fiber optic network layout using MATLAB” in Proc. ICCSA 2004, Part III, Lect. Notes Comput. Sc., vol. 3045, 2004, pp. 538-547.[26] D . Petcu, D. Dubu and M. Paprzycki, “Extending Maple to the Grid: Design and implementation” in Proc. of the 3rd ISPDC/ HeteroPar’04, University College Cork, Ireland, July 5th - 7th 2004, pp. 209-216. DOI: 10.1109/ISPDC.2004.25.[27] D . Petcu, M. Paprzycki and D. Dubu, “Design and implementation of a Grid extension for Maple” Scientific Programming, vol. 13, no. 2, 2005, pp. 137-149.[28] D. Petcu, “Editorial: Challenges concerning symbolic computations on grids” Scalable Computing: Practice and Experience, vol. 6, no. 3, September 2005, pp. iii-iv.[29] S. Goasguen, A. R. 
Butt, K. D. Colby and M. S. Lundstrorn, “Parallelization of the nanoscale device simulator nanoMOS-2.0 using a 100 nodes linux cluster,” in Proc. of the 2nd IEEE Conference on Nanotechnology, pp. 409-412, 2002. DOI 10.1109/ NANO.2002.1032277.[30] S. Goasguen, R. Venugopal and M. S. Lundstrom, “Modeling transport in nanoscale silicon and molecular devices on parallel machines,” in Proc. of the 3rd IEEE Conference on Nanotechnology, vol. 1, pp. 398-401, 2003. DOI 10.1109/ NANO.2003.1231802.[31] S. D. Canto, A. P. de Madrid and S. D. Bencomo, “Dynamic programming on clusters for solving control problems” in Proc. of the 4th Asian Control Conference ASCC’02, Suntec, Singapore, September 25-27, 2002.[32] M. Parrilla, J. Aranda and S. D. Canto, “Parallel evolutionary computation: application of an EA to controller design,” in Proc. IWINAC 2005, Lect. Notes Comput. Sc., vol. 3562, pp. 153-162. DOI: 10.1007/11499305_16.[33] S. D. Canto, A. P. de Madrid and S. D. Bencomo, “Parallel dynamic programming on clusters of workstations,” in IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 9, pp. 785-798, 2005.[34] M. Creel, “User-friendly parallel computations with Econometric examples,” in Proc. of the 11th Int. Conf. on Computing in Economics and Finance, paper no. 445, 2005, Jun 23-25, Washington DC.[35] M. Creel, “Creating and using a non-dedicated HPC cluster with Parallel-Knoppix,” in Proc. of the 12th International Conference on Computing in Economics and Finance, no. 202, Cyprus, Jun 22-24, 2006.[36] M. Creel, “User-friendly parallel computations with Econometric examples,” Computational Economics, vol. 26, no. 2, pp. 107-128, Springer, October 2005. DOI: 10.1007/s10614-005-6868-2.[37] J . A. Vrugt, H. V. Gupta, B. Ó Nualláin and W. Bouten, “Real-Time data assimilation for operational ensemble streamflow forecasting,” Journal of Hydrometeorology, vol. 7, no. 3, pp. 548-565, June 2006. DOI: 10.1175/JH M504.1[38] J . A. Vrugt, B. Ó Nualláin, B. A. 
Robinson, W. Bouten, S. C. Dekker and P. M. A. Sloot, “Application of parallel computing to stochastic parameter estimation in environmental models,” Computers & Geosciences, vol. 32, iss. 8, October 2006, pp. 1139-1155. DOI: 10.1016/ j.cageo.2005.10.015. Ver pies de página en p. 1140, Sect. 4, Figs. 6, 8, Sect. 6.[39] J . A. Vrugt, H. V. Gupta, S. C. Dekker, S. Sorooshian, T. Wagener and W. Bouten, “Application of stochastic parameter optimization to the Sacramento Soil Moisture Accounting model,” Journal of Hydrology, vol. 325, 2006, pp. 288–307. DOI: 10.1016/j.jhydrol.2005.10. 041. pp. 291, 305.[40] J . August and T. Kanade, “Scalable regularized tomography without repeated projections,” in Proc. 18th Int. Parallel and Distributed Processing Symposium (IPDPS’04), pp. 232-239, 26-30 April 2004, Santa Fe, New Mexico. DOI: 10.1109/IPDPS.2004.1303277. Sects. 4/5, p. 237, Fig. 5, p. 238.[41] T. Varslot and S.-E. Måsøy, “Forward propagation of acoustic pressure pulses in 3D soft biological tissue,” Modelling, Identification and Control, vol. 27, no. 3, pp. 181- 190. Ver Sect. 5, p. 196, y último párrafo en las conclusiones.[42] M. Zhao, V. Chadha and R. J. Figueiredo, “Supporting application-tailored Grid File System sessions with WSRF-based services,” in Proc. of the 14th IEEE Int. Symp. on High Perf. Distributed Computing HPDC-14, pp. 24-33, 2005. DOI 10.1109/ HPDC.2005.1520930.[43] M. Zhao and R. J. Figueiredo, “Application- tailored cache consistency for Wide- Area File Systems,” in Proc. of the 26th International Conference on Distributed Computing Systems (ICDCS 2006), pp. 41-50, July 4-7 2006, Lisboa, Portugal. DOI: 10.1109/ICDCS.2006.17.[44] J . Kepner and S. Ahalt, “MatlabMPI,” Journal of Parallel and Distributed Computing, vol. 64, iss. 8, pp. 997-1005, Elsevier, August 2004. DOI: 10.1016/j.jpdc.2004.03.018.[45] R. Choy and A. Edelman, “Parallel MATLAB: doing it right,” Proceedings of the IEEE, vol. 93, iss. 2, Feb. 2005, pp. 331- 341. 
DOI: 10.1109/JPROC.2004.840490.[46] S. Raghunathan, “Making a supercomputer do what you want: High-level tools for parallel programming,” Computing in Science & Engineering, vol. 8, no. 5, Sept.-Oct. 2006, pp. 70-80. DOI: 10.1109/ MCSE.2006.93.[47] R. Soganci, F. Gürgen and H. Topcuoglu, “Parallel Implementation of a VQ-based text-independent speaker identification,” in Proc. 3rd ADVIS 2004, Lect. Notes Comput. Sc., vol. 3261, pp. 291-300, 2004.[48] C. Bekas, E. Kokiopoulou and E. Gallopoulos, “The design of a distributed MATLAB- based environment for computing pseudospectra,” Future Generation Computing Systems, vol. 21, iss. 6, pp. 930- 941, Elsevier, Jun 2005. DOI: 10.1016/j. future.2003.12.017.[49] J . Kepner, “Parallel Programming with MatlabMPI,” in Agenda 5th Annual Workshop on High Performance Embedded Computing HPEC’01, MIT Lincoln Laboratory, Lexington, MA, 27-29 Nov. 2001. Disponible en: http://arxiv.org/abs/astroph/0107406.[50] E. Manolakos, “Rapid Prototyping of Matlab/Java Distributed Applications using the JavaPorts components,” in Proc. 6th Annual Workshop on High Performance Embedded Computing HPEC’02, MIT Lincoln Laboratory, Lexington, MA, 24- 26 Sept. 2002. Disponible en: http://www. ll.mit.edu/HPEC/agendas/proc02/presentations/pdfs/4.4-manolakos.PDF[51] S. Gallopoulos, “PSEs in Computational Science and Engineering education & training,” in Advanced Environments and Tools for High Performance Computing, EuroConference on Problem Solving Environments and the Information Society, University of Thessaly, Greece, 14-19 June 2003. pp. 50, 77.[52] G. Landi, E. L. Piccolomini and F. Zama, “A parallel software for the reconstruction of dynamic MRI sequences,” in Proc. 10th EuroPVM/MPI, Lect. Notes Comput. Sc., vol. 2840, pp. 511-519, Springer, 2003.[53] T. Andersen, A. Enmark, D. Moraru, C. Fan, M. Owner-Petersen, H. Riewaldt, M. Browne and A. Shearer, “A parallel integrated model of the Euro50,” in Proc. Of the SPIE, vol. 
5497, paper-ID [5497-25], Europe International Symposium on Astronomical Telescopes, 21-25 June 2004, Glasgow, Scotland, United Kingdom.[54] M. Browne, T. Andersen, A. Enmark, D. Moraru and A. Shearer, “Parallelization of MATLAB for Euro50 integrated modeling,” in Proc. of the SPIE, vol. 5497, paper-ID [5497-71], Europe International Symposium on Astronomical Telescopes, 21-25 June 2004, Glasgow, Scotland, United Kingdom.[55] D . Petcu, D. Tepeneu, M. Paprzycki, T. Mizutani and T. Ida, “Survey of symbolic computations on the Grid,” in Proc. of the 3rd Int. Conference Sciences of Electronic, Technologies of Information and Telecommunications SETIT 2005, Susa, Tunisia, March 27-31, 2005. Ver p. 4/11.[56] D . Petcu, D. Tepeneu, M. Paprzycki, and T. Ida, “Symbolic computations on the Grid,” in B. Di Martino et al. (eds.), Engineering the Grid; Status and Perspective, America Scientific Publishers, Los Angeles, CA, Jan. 2006, Ch. 27, pp. 91-107.[57] S. Goasguen, “High performance computing for nanoscale device simulation,” in The Army Research Office (ARO) FY2001 Defense University Research Initiative on Nanotechnology (DURINT), Kick-Off Meeting, Third-Year Review, July 24-25, 2003. Disponible en: http://nanolab.phy.stevens-tech.edu/DU RINT2003/PDF/Goasquen.pdf. pp. 9-22, 28/34. Enlace recuperable mediante Internet Archive Way Back Machine http://liveweb.archive.org/http://nanolab.phy.stevens-tech.edu/DU RINT2003/PDF/Goasquen.pdf.[58] C. Shue, J. Hursey and A. Chauhan, “MPI over scripting languages: usability and performance tradeoffs,” IUCS Technical Report TR631, University of Indiana, Feb. 2006. Ver Fig. 5, pp. 11/13.[59] M. Collette, B. Corey and J. Johnson, “High performace tools and technologies,” Technical Report UCRL-TR-209289, Lawrence Livermore National Laboratory (LLNL), US. Dept. of Energy, December 2004. pp. 67-68/79.[60] R. Serban, LLNL: SundialsTB, a Matlab Interface to SUNDIALS. 
Disponible en: http://www.llnl.gov/CASC/sundials/documentation/stb_guide/sundialsTB.html[61] M. Creel, OctaveForge: Econometrics package for Octave. Disponible en: http:// octave.sourceforge.net/econometrics/index.html.[62] M. Creel, Universidad Autónoma de Barcelona, Spain: Parallel-Knoppix Linux, Disponible en: http://pareto.uab.es/mcreel/ParallelKnoppix/. Enlace recuperable mediante Internet WayBack Machine http://web.archive.org/web/20080819061319/http://pareto.uab.es/mcreel/ParallelKnoppix/[63] M. Creel, Universidad Autónoma de Barcelona, Spain: PelicanHPC Linux, Disponible en: http://pareto.uab.es/mcreel/PelicanHPC/.[64] VL-E Project, Virtual Laboratory for e- Science, Dutch EZ ICT innovation program. Disponible en: http://poc.vl-e.nl/.[65] S. Goasguen, “A guided tour of nanoMOS code and some tips on how to parallelize it,” summer course “Electron Devices at the Nano/Molecular Scale,” Summer School at UIUC, University of Illinois, Urbana-Champaign, May, 21-22, 2002. Disponible en: http://www.mcc.uiuc.edu/summerschool/2002/Mark%20Lundstrom/Lundstrom_files/COURSE19.pdf, pp. 10-11, 13, 17/17.[66] S. Goasguen, “On the use of MPI and PVM in Matlab,” Computing Research Institute Seminars, CS 111, March 26th, 2003. Disponible en: http://www. cs.purdue.edu/calendar/webevent. cgi?cmd=showevent&id=426[67] M. Law, “MATLAB laboratory for MPI Toolbox (MPITB),” MATH2160 Mathematical & Statistical Software Laboratory, Nov. 2003, Hong-Kong Baptist University HKBU. Disponible en: http://www.sci.hkbu.edu.hk/tdgc/TeachingMaterial/MPITB/MPITB.pdf[68] M. Law, “Guest Lecture on Cluster Computing,” COMP3320 Distributed Systems, Cluster Computing 2006 Lecture, HKBU. Disponible en: http://www.comp.hkbu.edu.hk/~jng/comp3320/3320-Cluster2006.ppt, pp. 16/22.[69] M. Law, “Experiencing cluster computing, Class 2: Overview,” Learning Computational Science on a Parallel Architecture, tutorial 5, HKBU, 2002-2012. 
Disponible en: http://www.sci.hkbu.edu.hk/tdgc/tutorial/ExpClusterComp/ExpCluster-Comp02.ppt, pp. 6, 33/46.[70] M. Law, “Recurring HPC course, Syllabus V: MPI Toolbox (2 hrs),” Learning Computational Science on a Parallel Architecture, tutorial 3, HKBU, 2002-2012. Disponible en: http://www.sci.hkbu.edu.hk/tdgc/tutorial/RHPCC/syllabus.php[71] B. Skinner, “Introduction to MPITB,” Users’ presentations, Matlab support, Research Computing website, University of Technology Sydney (UTS). Disponible en: http://services.eng.uts.edu.au/ResearchComputing/support/matlab.[72] A. H. Davis, “Structuring collaborative visualization using facet trees: Design and Implementation of a Prototype System,” Honours Thesis, School of CIT, Griffith University, Australia, October 2000. Disponible en: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.133.1715&rep=rep1&type=pdf, pp. 10/163[73] L. Y. Choy, “MATLAB*P 2.0: Interactive supercomputing made practical,” M. Sc. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Sept. 2002. Disponible en: http://people.csail.mit.edu/cly/thesis.ps.gz, pp. 12, 14/67.[74] H . Singh, “Parallel programming in MATLAB,” M. Sc. Thesis, University of East Anglia, 2002. Disponible en: http://www.uea.ac.uk/~a207713/ thesis/report.pdf, pp. 26-27, 65, 69/108[75] S. Merchant, “Approaches for MATLAB applications acceleration using HPRC,” M. Sc. Thesis, ECE Department, University of Tennessee, Knoxville, August 2003. Disponible en: http://web.eecs.utk.edu/~gdp/pdf/merchant-ms-thesis.pdf, pp.10/163.[76] A. Rajeev, “Peer-to-peer support for Matlab- style computing,” M. Sc Thesis, Texas A&M University, May 2004. Disponible en: http://repository.tamu.edu//handle/1969.1/503, pp. 10-12, 49/53.[77] R. Choy, “Parallel Matlab survey,” Disponible en: http://people.csail.mit.edu/cly/survey.html. Internet WayBack Machine http://web.archive.org/web/20070705000531/http://people.csail.mit.edu/cly/survey.html[78] C. 
Moler, The Mathworks, Natick, MA, personal communication, comp.soft-sys. matlab newsgroup, Nov. 21, 2001. Disponible en: http://groups.google.com/group/comp.soft-sys.matlab/browse_frm/thread/cb42ee88cd8af834/8b08b6005b61be54.[79] S. Pawletta, T. Pawletta, W. Drewelow, P. Duenow and M. Suesse, “A Matlab toolbox for distributed and parallel processing,” Int. Matlab Conference 95, Cambridge, MA, 8 pages.[80] S. Pawletta, T. Pawletta and W. Drewelow, “Distributed and parallel simulation in an interactive environment,” in Proc. of the 1995 EUROSIM Conference, EUROSIM ‘95, Vienna, Austria, Elsevier Science Publisher B.V., September 1995, pp. 345-350.[81] S. Pawletta, T. Pawletta and W. Drewelow, “HLA-based Simulation within an Interactive Engineering Environment,” In Proc. 4th IEEE Int. Workshop on Distributed Simulation and Real-Time Applications, DS-RT’2000, San Francisco, CA, USA, August 2000, pp. 97-102. DOI: 10.1109/DISRTA.2000.874068.[82] S. Pawletta, W. Drewelow and T. Pawletta, “Distributed and Parallel Processing with Matlab -- The DP Toolbox,” Simulation News Europe (SNE), Hrsg. ARGESIM/ ASIM, Wien, 2001, no. 31, pp. 13-14.[83] T. Pawletta, C. Deatcu, O. Hagendorf, S. Pawletta and G. Colquhoun, “DEVS-Based Modeling and Simulation in Scientific and Technical Computing Environments,” in Proc. of DEVS Integrative M&S Symposium (DEVS’06) - 2006 Spring Simulation Multiconference (SpringSim’06), Huntsville/ AL, USA, April 2-6, 2006, pp. 151-158.[84] J . Hollingsworth, K. Liu and P. Pauca, “PT v. 1.00: Manual and Reference Pages,” Technical Report, Mathematics and Computer Science Department, Wake Forest University, 1996. Disponible en: http://www.math.wfu.edu/pt/pt.html.[85] A. E. Trefethen, V. S. Menon, C. Chang, G. Czajkowski, C. Myers and L. N. Trefethen, “MultiMATLAB: MATLAB on Multiple Processors,” Technical Report: TR96-1586, Cornell University, NY, USA, 1996. Disponible en: http://portal.acm.org/citation.cfm?id=866863. 
See also URL: http://www.cs.cornell.edu/Info/People/lnt/multimatlab.html[86] V. Menon and A. E. Trefethen, “Multi- MATLAB: integrating MATLAB with high-performance parallel computing,” in Proceedings of the 1997 ACM/IEEE Conference on Supercomputing (San Jose, CA, November 15-21, 1997). Supercomputing’ 97. 18 pages. DOI: http://doi.acm. org/10.1145/509593.509623.[87] J . Zollweg, CMTM (Cornell Multitask Toolbox for MATLAB) web, Disponible en: http://www.tc.cornell.edu/Services/Support/Forms/cmtm.[88] P. Husbands and C. Isbell, PPserver (Parallel Problems Server) web, Disponible en: http://crd.lbl.gov/~parry/text/ppserver/[89] P. Husbands and C. Isbell, “The Parallel Problems Server: A Client-Server Model for Large Scale Scientific Computation,” in Proceedings of the Third International Conference on Vector and Parallel Processing, VecPar’98. Portugal, 1998, pp. 156-169.[90] P. Husbands, C. Isbell and A. Edelman, “Interactive Supercomputing with MITMatlab,” MIT AI Memo 1642. Presented at: the Second IMA Conference on Parallel Computation. Oxford, 1998. 10 pages.[91] P. Husbands and C. Isbell, “MATLAB*P: A Tool for Interactive Supercomputing,” in Proc. 9th SIAM Conference on Parallel Processing for Scientific Computing, 1999.[92] C. Isbell and P. Husbands, “The Parallel Problems Server: An Interactive Tool for Large Scale Machine Learning,” in Advances in Neural Information Processing Systems, vol. 12, 2000.[93] C. Moler, “Objectively speaking,” Cleve’s Corner, MATLAB News&Notes, Winter 1999. The Mathworks. Disponible en: http://www.mathworks.com/company/newsletters/news_notes/clevescorner/[94] C. Moler, “MATLAB incorporates LAPACK,” Cleve’s Corner, MATLAB News&Notes, Winter 2000. The Mathworks. Disponible en: http://www.mathworks.com/company/newsletters/news_notes/clevescorner/[95] R. Choy, D. Cheng, A. Edelman, J. Gilbert and V. Shah, “Star-P: High Productivity Parallel Computing,” in Proc. 8th HPEC 2004, MIT Lincoln Laboratory, 28-30 Sept. 2004. 
Disponible en: http://www.ll.mit.edu/HPEC/agendas/proc04/abstracts/choy_ron.pdf.[96] V. Shah and J. R. Gilbert, “Sparse Matrices in Matlab*P: Design and Implementation,” in Proc. 11th International Conference High Performance Computing - HiPC 2004, Bangalore, India, December 19-22, 2004, pp. 144-155. DOI: 10.1007/b104576.[97] J . Gilbert, V. Shah, T. Letsche, S. Reinhardt and A. Edelman, “An Interactive Approach to Parallel Combinatorial Algorithms with Star-P”, in Proc. 9th HPEC 2005, MIT Lincoln Laboratory, 20-22 Sept. 2005. Disponible en: http://www.ll.mit.edu/HPEC/agendas/proc05/Day_2/Abstracts/ 1030_Edelman_A.PDF[98] A. Edelman, P. Husbands and S. Leibman, “Interactive Supercomputing’s Star-P platform: Parallel MATLAB and MPI Homework Classroom Study on High Level Language Productivity,” in Proc. 10th High Performance Embedded Computing Workshop (HPEC 2006), MIT Lincoln Lab., 2006.[99] A. Edelman, “Parallel MATLAB(R) doing it right,” in 6th Annual Workshop on Linux Clusters for Super Computing, LCSC’05, October 17-19, 2005. National Supercomputer Centre (NSC). Linköping University, Sweden. Disponible en: http://www.nsc.liu.se/lcsc2005/programme.html#LCSC,[100] J . Kepner and N. Travinin, “Parallel Matlab: The Next Generation,” in Proc. 7th HPEC 2003, MIT Lincoln Laboratory, 23-25 Sept. 2003.[101] R. Haney, A. Funk, J. Kepner, H. Kim, C. Rader, A. Reuther and N. Travinin, “pMatlab takes the HPCchallenge,” in Proc. 8th HPEC 2004, MIT Lincoln Laboratory, 28-30 Sept. 2004.[102] J . Kepner, T. Currie, H. Kim, B. Mathew, A. McCabe, M. Moore, D. Rabinkin, A. Reuther, A. Rhoades, L. Tella and N. Travinin, “Deployment of SAR and GMTI Signal Processing on a Boeing 707 Aircraft using pMatlab and a Bladed Linux Cluster,” in Proc. 8th HPEC 2004, MIT Lincoln Laboratory, 28-30 Sept. 2004.[103] J . Kepner, “HPC Productivity: An Overarching View,” International Journal of High Performance Computing Applications, Special Issue on HPC Productivity, J. Kepner (editor), vol. 
18, no. 4, pp. 393-397, Nov. 2004.[104] N. Travinin, R. Bond, J. Kepner and H. Kim, “pMatlab: High Productivity, High Performance Scientific Computing,” 2005 SIAM Conference on Computational Science and Engineering, February 12, Orlando, FL. USA.[105] N. Travinin, H. Hoffmann, R. Bond, H. Chan, J. Kepner and E. Wong, “pMapper: Automated Mapping of Parallel Matlab Programs,” in Proc. 9th HPEC 2005, MIT Lincoln Laboratory, 20-22 Sept. 2005. Disponible en: http://www.ll.mit.edu/HPEC/agendas/proc05/Day_2/Abstracts/1345_Travinin_A.PDF[106] N. Bliss and J. Kepner, “pMatlab parallel matlab library,” International Journal of High Performance Computing Applications, Special Issue on High Level Programming Languages and Models, J. Kepner and H. Zima (editors), 2006.[107] The MathWorks, Distributed Computing Toolbox, Enlaces de 2007 recuperables mediante Internet WayBack Machine, URL: http://web.archive.org/web/20071217040037/http://www.mathworks.com/products/distribtb/[108] C. Moler, “Parallel MATLAB,” invited presentation in Workshop on State-of-the-art in Scientific and Parallel Computing, PARA’06, Umeå, Sweden, June 18-21, 2006. Session IP4, Disponible en: http://web.archive.org/web/20070126024524/http://www.hpc2n.umu.se/para06/index.php?content=ip4, http://www.hpc2n.umu.se/node/647[109] J. Gilbert, S. Reinhardt and V. Shah, “High-performance graph algorithms from parallel sparse matrices,” in Workshop on State-of-the-art in Scientific and Parallel Computing, PARA’06, Umeå, Sweden, June 18-21, 2006. Session MS5, Disponible en: http://www.hpc2n.umu.se/para06/index.php?content=ms5[110] The MathWorks, Media Newsletter April 2006, article “Distributed Computing Speeds Application Development for Technical Computing” Disponible en: http://www.mathworks.com/company/pressroom/newsletter/may06/distrib.html[111] Workshop on distributed computing with MATLAB, March 20-22, 2006, Arctic Region Supercomputing Center (ARSC), University of Alaska Fairbanks campus. 
Disponible en: http://www.arsc.edu/support/training/MATLAB_2006.html.[112] C. Moler, “Is it Finally the Time for a Parallel Matlab?,” Householder Symposium XVI, 2005, Householder Meeting on Numerical Linear Algebra, May 23-27, 2005, Champion, Pennsylvania, USA.[113] A. Edelman et al., Computational Research in Boston (CRiB) seminar series. Disponible en: http://www-math.mit.edu/crib/[114] A. Edelman, R. Choy, J. Gilbert and V. Shah, “Parallel Computing Made Easy with STAR-P,” short course in SIAM Conference on Parallel Processing for Scientific Computing (PP04), San Francisco, CA, Feb. 24, 2004. Disponible en: http://www.siam.org/meetings/pp04/edelman.htm[115] A. Edelman, “Interactive Parallel MATLAB® with Star-p,” Seminar in Umeå Center for Interaction Technology, Oct. 20th 2005. Disponible en: http://www.ucit.umu.se/main.php?view=sem/9. Enlace recuperable mediante Internet WayBack Machine http://web.archive.org/web/20070520201742/http://www.ucit.umu.se/main.php?view=sem/9[116] The MathWorks, SC|05 & SC|06 booth press releases, Disponible en: http://web.archive.org/web/20080905111539/http://www.mathworks.com/company/pressroom/articles/article11350.html, http://web.archive.org/web/20071012121615/http://www.mathworks.com/company/events/tradeshows/tradeshow12352.html[117] J. Kepner, A. Reuther, L. Dean and S. Grad-Freilich, “Parallel and Distributed Computing with MATLAB,” Tutorial S07 in SuperComputing 2005, SC|05. Disponible en: http://sc05.supercomputing.org/schedule/event_detail.php?evid=5115[118] A. Edelman, J. Nehrbass, “Interactive Supercomputing with Star-P and MATLAB,” Tutorial S11 in SuperComputing 2005, SC|05. 
Disponible en: http://sc05.supercomputing.org/schedule/event_detail.php?evid=5104[119] J. Kepner, “HPCS (Workshop on High Productivity Computing Systems),” Workshop in SuperComputing 2005, SC|05. Disponible en: http://sc05.supercomputing.org/schedule/event_detail.php?evid=5284[120] S. Chapin and J. Worringen, “Operating Systems,” International Journal of High Performance Computing Applications, vol. 15, no. 2, pp. 115-123. SAGE publications, 2001.[121] M. Baker, “Preface,” International Journal of High Performance Computing Applications, vol. 15, no. 2, p. 91. SAGE publications, 2001.[122] B. Wilkinson and M. Allen, Parallel Programming: Techniques and Applications using Networked Workstations and Parallel Computers, 2nd Ed., Pearson - Prentice Hall, NJ, 1999.[123] Wikipedia, Mandelbrot set. Disponible en: http://en.wikipedia.org/wiki/Mandelbrot_setINGE CUCINGE CUChttps://revistascientificas.cuc.edu.co/ingecuc/article/view/223Computación científicaProgramación paralelaComputación de altas prestacionesComputación clusterPaso de mensajesMatlab paraleloScientific computingParallel programmingHigh performance computingCluster computingMessage-passingParallel MatlabComputación científica paralela mediante uso de herramientas para paso de mensajesParallel scientific computing with message-passing toolboxesArtículo de revistahttp://purl.org/coar/resource_type/c_6501http://purl.org/coar/resource_type/c_2df8fbb1Textinfo:eu-repo/semantics/articlehttp://purl.org/redcol/resource_type/ARTinfo:eu-repo/semantics/acceptedVersioninfo:eu-repo/semantics/openAccesshttp://purl.org/coar/access_right/c_abf2PublicationORIGINALComputación científica paralela mediante uso de herramientas para paso de mensajes.pdfComputación científica paralela mediante uso de herramientas para paso de mensajes.pdfapplication/pdf2130497https://repositorio.cuc.edu.co/bitstreams/289d4c3d-c70b-4066-b966-4ce7c39f5414/download91793af6c33fc2875b0b4c1b79cc97e8MD51LICENSElicense.txtlicense.txttext/plain; 