Asynchronous Hybrid and Heterogeneous Parallel Programming with MPI/OmpSs for Exascale Systems

SESSION: Asynchronous Hybrid and Heterogeneous Parallel Programming with MPI/OmpSs for Exascale Systems

EVENT TYPE: Tutorials

TIME: 1:30PM - 5:00PM

Presenter(s): Jesus Labarta, Xavier Martorell, Christoph Niethammer, Costas Bekas

ROOM: 251-E

ABSTRACT:
Due to its asynchronous nature and look-ahead capabilities, MPI/OmpSs is a promising programming model for future exascale systems, with the potential to exploit unprecedented amounts of parallelism while coping with memory latency, network latency and load imbalance. Many large-scale applications are already seeing very positive results from their ports to MPI/OmpSs (see the EU projects Montblanc and TEXT). We will first cover the basic concepts of the programming model. OmpSs can be seen as an extension of the OpenMP model; unlike OpenMP, however, task dependencies are determined at runtime from the directionality of the tasks' data arguments. The OmpSs runtime supports asynchronous execution of tasks on heterogeneous systems such as SMPs, GPUs and clusters thereof. The integration of OmpSs with MPI eases the migration of existing MPI applications and automatically improves their performance by overlapping computation with communication between tasks on remote nodes. The tutorial also covers the constellation of development and performance tools available for the MPI/OmpSs programming model: the methodology for identifying OmpSs tasks, the Ayudame/Temanejo debugging toolset, and the Paraver performance analysis tools. Experiences from the parallelization of real applications with MPI/OmpSs will be presented, and the tutorial will include a demo.
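
The runtime dependency detection described above can be illustrated with a short sketch. This is a minimal, illustrative example, not taken from the tutorial material: it assumes the in/out directionality clauses and the taskwait directive as commonly documented for OmpSs, and compilation with the Mercurium/Nanos++ toolchain. The runtime builds the task graph from the declared directionality, so the two producer tasks may run concurrently and the consumer task is ordered after both; in the MPI/OmpSs combination, an MPI call wrapped in a task can overlap with computation in the same way.

/* Minimal OmpSs sketch (illustrative assumption, not tutorial material).
 * The runtime derives the task graph from the in/out annotations:
 * the two producer tasks are independent and may overlap, while the
 * consumer task is automatically ordered after both producers. */

#include <stdio.h>

#define N 4

int main(void)
{
    double a[N] = {0}, b[N] = {0}, c[N] = {0};

    #pragma omp task out(a)              /* task 1: produces a */
    for (int i = 0; i < N; i++)
        a[i] = i;

    #pragma omp task out(b)              /* task 2: produces b; independent of task 1 */
    for (int i = 0; i < N; i++)
        b[i] = 2 * i;

    #pragma omp task in(a, b) out(c)     /* task 3: consumes a and b; the runtime
                                            schedules it after both producers */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    #pragma omp taskwait                 /* wait for the whole task graph */

    printf("c[%d] = %f\n", N - 1, c[N - 1]);
    return 0;
}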

Chair/Presenter Details:

Jesus Labarta - Barcelona Supercomputing Center

Xavier Martorell - Technical University of Catalunya

Christoph Niethammer - High Performance Computing Center Stuttgart

Costas Bekas - IBM Zurich Research Laboratory
