BEGIN:VCALENDAR
PRODID:-//Microsoft Corporation//Outlook MIMEDIR//EN
VERSION:1.0
BEGIN:VEVENT
DTSTART:20121112T203000Z
DTEND:20121113T000000Z
LOCATION:251-E
DESCRIPTION;ENCODING=QUOTED-PRINTABLE:ABSTRACT: Due to its asynchronous nature and look-ahead capabilities, MPI/OmpSs is a promising=0Aprogramming model approach for future exascale systems, with the potential to exploit=0Aunprecedented amounts of parallelism, while coping with memory latency, network latency=0Aand load imbalance. Many large-scale applications are already seeing very positive results=0Afrom their ports to MPI/OmpSs (see EU projects Montblanc, TEXT). We will first cover the=0Abasic concepts of the programming model. OmpSs can be seen as an extension of the=0AOpenMP model. Unlike OpenMP, however, task dependencies are determined at runtime=0Athanks to the directionality of data arguments. The OmpSs runtime supports asynchronous=0Aexecution of tasks on heterogeneous systems such as SMPs, GPUs and clusters thereof. The=0Aintegration of OmpSs with MPI facilitates the migration of current MPI applications and=0Aautomatically improves the performance of these applications by overlapping computation=0Awith communication between tasks on remote nodes. The tutorial will also cover the constellation of development and performance tools available for the MPI/OmpSs programming model: the methodology to determine OmpSs tasks, the Ayudame/Temanejo debugging toolset, and the Paraver performance analysis tools. Experiences on the parallelization of real applications using MPI/OmpSs will also be presented. The tutorial will also include a demo.
SUMMARY:Asynchronous Hybrid and Heterogeneous Parallel Programming with MPI/OmpSs for Exascale Systems
PRIORITY:3
END:VEVENT
END:VCALENDAR