BSC Training Course: Parallel Programming Workshop

Date: 14/Oct/2024, 09:30 - 18/Oct/2024, 17:30

Place:

C6-E101, UPC Campus Nord, Barcelona


Target group / Level (all courses are designed for specialists with at least a 1st-cycle degree or similar background experience):

  • INTERMEDIATE: for trainees with some theoretical and practical knowledge.
  • ADVANCED: for trainees able to work independently and requiring guidance for solving complex problems.

Cost: There is no registration fee.


Objectives

The objectives of this course are to understand the fundamental concepts supporting message-passing and shared-memory programming models. The course covers the two widely used programming models: MPI for distributed-memory environments and OpenMP for shared-memory architectures. It also presents the main tools developed at BSC to collect information about and analyze the execution of parallel applications: Paraver and Extrae.

It also presents the Parallware Assistant tool, which can automatically parallelize many program structures and gives the programmer hints on how to change the code to improve parallelization. The course also covers debugging alternatives, including GDB and TotalView. The use of OpenMP in conjunction with MPI to better exploit the shared-memory capabilities of the compute nodes in current clustered architectures is also considered. Paraver is used throughout the course as the tool to understand the behavior and performance of parallelized codes. The course is taught through formal lectures and practical/programming sessions that reinforce the key concepts and set up the compilation/execution environment.


Requirements

Prerequisites: Fortran, C or C++ programming. All examples in the course will be done in C.
Attendees can bring their own applications and work on parallelizing and analyzing them during the course.

Software requirements: an SSH client (to connect to the HPC systems) and an X server (to enable remote visual tools).

Learning Outcomes

Students who complete this course will be able to develop benchmarks and applications with the MPI, OpenMP and mixed MPI/OpenMP programming models, as well as analyze their execution and tune their behavior on parallel architectures.


Academic Staff

Course Convener: Xavier Martorell, CS/Programming Models

Lecturers: 

BSC - Computer Sciences department

Judit Giménez - Performance Tools - Group Manager
German Llort - Performance Tools - Senior Researcher
Marc Jordà - Accelerators and Communications for High Performance Computing - Research Engineer
Antonio Peña - Accelerators and Communications for High Performance Computing - Senior Researcher
Javier Teruel - Best Practices for Performance and Programmability - Group Coordinator
Xavier Martorell - Programming Models - Group Manager

Materials

INTELLECTUAL PROPERTY RIGHTS NOTICE:

  • The User may only download, make and retain a copy of the materials for his/her use for non‐commercial and research purposes.
  • The User may not use the material commercially unless prior written consent has been granted by the Licensor, and cannot remove, obscure or modify copyright notices, acknowledgement text or other means of identification or disclaimers as they appear.
  • For further details, please contact BSC‐CNS patc [at] bsc [dot] es

Further information

BSC Training Courses do not charge fees.
PLEASE BRING YOUR OWN LAPTOP.

CONTACT US for further details about MSc, PhD, Post Doc studies, exchanges and collaboration in education and training with BSC.

For further details about Postgraduate Studies at UPC - Barcelona School of Informatics (FIB), visit the website.

Sponsor: BSC