EP017 HPX: A cure for performance impaired parallel applications

May 1, 2019. | By: Admin

Outline

In episode 17, we interviewed Adrian Serio, the Scientific Program Coordinator of the STELLAR group, about HPX, the C++ Standard Library for Concurrency and Parallelism. We started with a general discussion about parallel computing: where it comes from, where it is going, and what gains we can still expect. We then clarified what the C++ standards are and how HPX is developed to be standard compliant. HPX was compared to other parallelism libraries such as MPI, and we learned that HPX is a foundation on which software for domain-specific applications can be built. Adrian explained how HPX can be used to take advantage of hardware accelerators such as the Intel Xeon Phi or GPUs. Finally, we looked at the inception of the project and the sources of contributions to it.

About Adrian Serio:

Adrian Serio is the Scientific Program Coordinator for the STELLAR group at Louisiana State University. In this role, he assists in the development of HPX, a distributed C++ runtime system for parallelism and concurrency. Adrian was surprised to find himself working in HPC after graduating with a bachelor's degree in biological engineering in 2011. Nevertheless, he loves working in a university setting, where he is exposed to cutting-edge research and works with an international team of collaborators.

About the C++ Standard Library for Concurrency and Parallelism (HPX):

High Performance ParalleX (HPX) is an environment for high performance computing. It is currently under active development by the STELLAR group at Louisiana State University. Focused on scientific computing, it provides an alternative execution model to conventional approaches such as MPI. HPX aims to overcome the challenges MPI faces on increasingly large supercomputers by using asynchronous communication between nodes and lightweight control objects instead of global barriers, allowing application developers to exploit fine-grained parallelism.
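To give a flavor of the futurized, asynchronous style described above, here is a minimal sketch (not taken from the episode) of launching work with HPX futures instead of blocking synchronization. It assumes a working HPX installation; the exact header paths are an assumption and vary between HPX releases.

```cpp
// Minimal sketch: asynchronous tasks and futures with HPX.
// Header paths are an assumption and may differ between HPX releases.
#include <hpx/hpx_main.hpp>      // lets main() run inside the HPX runtime
#include <hpx/include/lcos.hpp>  // hpx::async, hpx::future
#include <iostream>

// An ordinary function we want to run as a lightweight HPX task.
int square(int x) { return x * x; }

int main()
{
    // Launch the work without blocking; the future is a lightweight
    // handle to the eventual result rather than a global barrier.
    hpx::future<int> f = hpx::async(square, 6);

    // Attach a continuation that runs as soon as the result is ready.
    hpx::future<int> g = f.then(
        [](hpx::future<int> r) { return r.get() + 1; });

    std::cout << g.get() << '\n';  // prints 37
    return 0;
}
```

The same style extends to distributed runs, where a future can refer to work executed on a remote node, which is how HPX replaces explicit message passing and global barriers with asynchronous data flow.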

Links

Listen to this episode here or add our RSS feed to your favourite podcast application.

You can also download this episode as MP3 or OGG.


