Software


Abstraction Library for Parallel Kernel Acceleration

The alpaka library is a header-only C++14 abstraction library for accelerator development.

Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism.

It is platform-independent and supports the concurrent and cooperative use of multiple devices, such as the host's CPU as well as attached accelerators, for instance CUDA GPUs and Xeon Phis (currently native execution only). A multitude of accelerator back-end variants using CUDA, OpenMP (2.0/4.0), Boost.Fiber, std::thread and also serial execution is provided and can be selected depending on the device. Only one implementation of a user kernel is required: kernels are written as function objects with a special interface, so there is no need to write separate CUDA, OpenMP or custom threading code. Accelerator back-ends can be mixed within a device queue, and the decision which back-end executes which kernel can be made at runtime.

The abstraction used is very similar to the CUDA grid-blocks-threads division strategy. An algorithm that is to be parallelized has to be divided into a multi-dimensional grid of small, uniform work items. The functions executed for these work items are called kernels and run in parallel threads. The threads in the grid are organized in blocks: all threads of a block are executed in parallel and can interact via fast shared memory, whereas blocks are executed independently and cannot interact in any way. The block execution order is unspecified and depends on the accelerator in use. Through this abstraction, the execution can be optimally adapted to the available hardware.
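
The grid-blocks-threads decomposition can be sketched in plain Python. This mimics the execution model only, not the alpaka C++ API; all names below are illustrative:

```python
# Plain-Python sketch (not the alpaka API) of the grid-blocks-threads
# decomposition: a 1-D problem is split into a grid of blocks; each
# block runs its threads together and owns a per-block "shared memory";
# the blocks themselves are independent of each other.

def run_grid(kernel, problem_size, block_size):
    num_blocks = (problem_size + block_size - 1) // block_size
    result = [0] * problem_size
    for block_idx in range(num_blocks):       # blocks: independent, any order
        shared = {}                           # fast per-block shared memory
        for thread_idx in range(block_size):  # threads of one block cooperate
            global_idx = block_idx * block_size + thread_idx
            if global_idx < problem_size:     # guard the grid boundary
                kernel(global_idx, shared, result)
    return result

# a kernel is just a function object, written once for every back-end
def square_kernel(i, shared, out):
    out[i] = i * i

print(run_grid(square_kernel, 10, 4))  # [0, 1, 4, 9, ..., 81]
```

On a real accelerator the blocks and threads run concurrently; the sequential loops here only show how the index space is carved up.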

Further information:

https://github.com/alpaka-group/alpaka

Online Tutorial:

Reference:

E. Zenker, B. Worpitz, R. Widera, A. Huebl, G. Juckeland, A. Knüpfer, W. E. Nagel, M. Bussmann (2016). Alpaka – An Abstraction Library for Parallel Kernel Acceleration, IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), doi: 10.1109/IPDPSW.2016.50

Contact:


Average-atom code for Matter under Extreme Conditions

atoMEC is a Python-based average-atom code for simulations of high-energy-density phenomena such as warm dense matter. It is designed as an open-source, modular Python package.

atoMEC uses Kohn-Sham density functional theory, in combination with an average-atom approximation, to solve the electronic structure problem for single-element materials at finite temperature. It is currently able to compute several important properties, such as total energies, Kohn-Sham eigenvalues, pressure and mean ionization states, for various materials under a wide range of densities and temperatures. Several developments and extensions are planned, for example time-dependent phenomena, non-equilibrium and non-adiabatic effects, and new approximations for finite temperatures.

More information on the average-atom methodology and Kohn-Sham density functional theory can be found (for example) in reference [1].
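
As a small illustration of the finite-temperature ingredient mentioned above (this is not atoMEC's API, and the eigenvalues are hypothetical), electronic levels at temperature T are filled according to the Fermi-Dirac distribution:

```python
import math

# Generic finite-temperature illustration (not the atoMEC API):
# Kohn-Sham eigenvalues are occupied according to the Fermi-Dirac
# distribution f(eps) = 1 / (1 + exp((eps - mu) / kT)).

def fermi_dirac(eps, mu, kT):
    """Occupation of a level at energy eps for chemical potential mu."""
    return 1.0 / (1.0 + math.exp((eps - mu) / kT))

# hypothetical eigenvalues (Hartree) and degeneracies, for illustration only
levels = [(-2.0, 2), (-0.5, 2), (-0.1, 6)]
mu, kT = -0.3, 0.05

occupations = [g * fermi_dirac(eps, mu, kT) for eps, g in levels]
n_bound = sum(occupations)
print(f"bound electrons: {n_bound:.3f}")
```

In an actual average-atom calculation the eigenvalues come from solving the Kohn-Sham equations self-consistently, and mu is fixed by electron-number conservation; the mean ionization state then follows from the occupations of bound versus free states.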

Further information:

https://github.com/atomec-project/atoMEC

References:

[1] T. J. Callow, E. Kraisler, S. B. Hansen, and A. Cangi (2021). First-principles derivation and properties of density-functional average-atom models. arXiv preprint arXiv:2103.09928.

[2] T. J. Callow, D. Kotik, E. Tsvetoslavova Stankulova, E. Kraisler, and A. Cangi (2021). atoMEC (Version 1.0.0) [Computer software]. https://doi.org/10.5281/zenodo.5205718.

Contact:


Low-Level Abstraction of Memory Access

LLAMA is a cross-platform, header-only C++17 template library for the abstraction of memory access patterns. It distinguishes between the algorithm's view of the memory and the real layout in the background. This enables performance portability for multicore, manycore and GPU applications with the very same code.

In contrast to many other solutions, LLAMA can define nested data structures of arbitrary depth. It is not limited to struct-of-arrays and array-of-structs data layouts, but can also explicitly define memory layouts with padding, blocking, striding or any other run-time or compile-time access pattern.

To achieve this goal, LLAMA is split into mostly independent, orthogonal parts written completely in modern C++17, so that it runs on as many architectures and with as many compilers as possible while still supporting the extensions needed, e.g., to run on GPUs or other many-core hardware.
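
The central idea, separating the algorithm's view of a record from the physical memory layout, can be sketched in plain Python. This is illustrative only; LLAMA's actual mappings are C++ compile-time constructs, and the names below are made up:

```python
# Plain-Python sketch (not the LLAMA API) of separating the algorithm's
# *view* of a record {x, y} from the physical memory *layout*: the
# algorithm only uses view.get/set; the mapping function alone decides
# whether the backing buffer is array-of-structs or struct-of-arrays.

class View:
    def __init__(self, buffer, mapping):
        self.buffer, self.mapping = buffer, mapping

    def get(self, i, field):
        return self.buffer[self.mapping(i, field)]

    def set(self, i, field, value):
        self.buffer[self.mapping(i, field)] = value

N = 4
FIELDS = {"x": 0, "y": 1}

def aos(i, field):   # x0 y0 x1 y1 ...  (array of structs)
    return i * len(FIELDS) + FIELDS[field]

def soa(i, field):   # x0 x1 ... y0 y1 ...  (struct of arrays)
    return FIELDS[field] * N + i

buffers = {}
for name, mapping in (("aos", aos), ("soa", soa)):
    view = View([0.0] * (N * len(FIELDS)), mapping)
    for i in range(N):            # the algorithm is layout-agnostic
        view.set(i, "x", float(i))
        view.set(i, "y", 2.0 * i)
    buffers[name] = view.buffer
    print(name, view.buffer)
```

The same loop body produces two different physical buffers; swapping the mapping (e.g. adding padding or blocking) requires no change to the algorithm, which is exactly the portability LLAMA provides at zero runtime cost in C++.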

Further information:

https://github.com/alpaka-group/llama
video recording (from 00:40 min onwards) and slides of a May 2021 Compute Accelerator Forum presentation

Reference:

B. M. Gruber, G. Amadio, J. Blomer, A. Matthes, R. Widera, M. Bussmann (2021). LLAMA: The Low-Level Abstraction For Memory Access. arxiv.org/abs/2106.04284

Contact:


Materials Learning Algorithms

MALA is a data-driven framework to generate surrogate models of density functional theory calculations based on machine learning. Its purpose is to enable multiscale modeling by bypassing computationally expensive steps in state-of-the-art density functional simulations.

MALA is designed as a modular, open-source Python package. It enables users to perform the entire modeling toolchain with only a few lines of code. MALA is jointly developed by Sandia National Laboratories (SNL) and the Center for Advanced Systems Understanding (CASUS).
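
MALA itself trains deep neural networks on DFT data; as a minimal stdlib stand-in for the surrogate-modeling idea, the sketch below fits a cheap interpolating model to a few samples of an "expensive" calculation and then bypasses further expensive evaluations. All names are illustrative, not MALA's API:

```python
# Generic surrogate-modeling illustration (not the MALA API): sample an
# expensive calculation a few times, fit a cheap model, and use the
# model instead of re-running the expensive step.

def expensive_calculation(x):
    """Stand-in for a costly first-principles step (here just x^3 - x)."""
    return x ** 3 - x

def lagrange_surrogate(xs, ys):
    """Build an interpolating polynomial model through the samples."""
    def model(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return model

xs = [-1.0, -0.5, 0.5, 1.0]                # a few expensive training runs
ys = [expensive_calculation(x) for x in xs]
model = lagrange_surrogate(xs, ys)

# the cheap surrogate now predicts unseen points without the expensive step
print(model(0.3), expensive_calculation(0.3))
```

The real workflow replaces the toy cubic with finite-temperature Kohn-Sham DFT results and the interpolant with a trained neural network, but the economics are the same: pay the expensive price only for the training data.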

Further information:

https://github.com/mala-project/mala

Reference:

J. A. Ellis, L. Fiedler, G. A. Popoola, N. A. Modine, J. A. Stephens, A. P. Thompson, A. Cangi, S. Rajamanickam (2021). Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Phys. Rev. B 104, 035120

Contact:


Multivariate Interpolation in Python

The Python package minterpy is based on an optimised implementation of the multivariate interpolation algorithm given by M. Hecht et al. [1,2]. It thereby provides software solutions that lift the curse of dimensionality from interpolation tasks. Since interpolation is the bottleneck of many computational challenges, minterpy aims to free the empirical sciences from these computational limitations.

minterpy is continuously extended and improved by adding further functionality and modules that provide novel digital solutions to a broad field of computational challenges, including but not limited to:

  • multivariate interpolation
  • non-linear polynomial regression
  • numerical integration
  • global (black-box) optimization
  • surface level-set methods
  • non-periodic spectral partial differential equations (PDE) solvers on flat and complex geometries
  • machine learning regularization
  • data reconstruction
  • computational solutions in algebraic geometry

As an open-source Python package, minterpy is easily accessible and open to further development and improvement by the Python community.
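
For context, here is the naive tensor-product baseline that minterpy improves upon (this sketch is not the minterpy API): interpolating on a full grid of Chebyshev nodes works, but the number of nodes grows exponentially with the dimension, which is exactly the curse of dimensionality that the unisolvent-node algorithm of [1] lifts.

```python
import math

# Naive 2-D tensor-product interpolation on Chebyshev nodes (baseline
# sketch, not the minterpy API).  The node count is n**dim, i.e.
# exponential in the dimension; minterpy's unisolvent nodes avoid this.

def cheb_nodes(n):
    return [math.cos(math.pi * (2 * k + 1) / (2 * n)) for k in range(n)]

def lagrange_basis(nodes, i, x):
    out = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            out *= (x - xj) / (nodes[i] - xj)
    return out

def interpolate_2d(f, n):
    nodes = cheb_nodes(n)
    values = [[f(xi, yj) for yj in nodes] for xi in nodes]
    def p(x, y):
        return sum(values[i][j]
                   * lagrange_basis(nodes, i, x)
                   * lagrange_basis(nodes, j, y)
                   for i in range(n) for j in range(n))
    return p

f = lambda x, y: x * y + y ** 2           # degree-2 target, recovered exactly
p = interpolate_2d(f, 4)
print(abs(p(0.3, -0.7) - f(0.3, -0.7)))   # ~0 up to rounding
```

Already in ten dimensions a modest 4 nodes per axis would need 4**10 ≈ one million samples, which is why sub-exponential node sets matter.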

Further information:

https://github.com/casus/minterpy

References:

[1] M. Hecht, K. Gonciarz, J. Michelfeit, V. Sivkin and I. F. Sbalzarini (2020). Multivariate interpolation on unisolvent nodes – lifting the curse of dimensionality. arxiv.org/abs/2010.10824

[2] M. Hecht and I. F. Sbalzarini (2018). Fast interpolation and Fourier transform in high-dimensional spaces. In Intelligent Computing. Proc. 2018 IEEE Computing Conf., Vol. 2, volume 857 of Advances in Intelligent Systems and Computing, pages 53–75, London, UK, Springer Nature

Contact:


Open and F.A.I.R. IO at the Exascale

openPMD is an open meta-data schema that provides meaning and self-description for data sets in science and engineering. The openPMD-api, developed openly in a collaboration between the Center for Advanced Systems Understanding, Helmholtz-Zentrum Dresden-Rossendorf and Lawrence Berkeley National Laboratory, is an IO middleware library that assists domain scientists in describing their data along the openPMD standard for FAIR particle-mesh data; it is already used in numerous physics simulations.

Compared to implementing application codes directly against high-speed IO backends, using the openPMD-api saves thousands of lines of code per application, reduces integration time for application developers, promotes the sharing of best practices for IO library tuning options, provides a high-level interface to describe scientific data, and preserves this standardized meta-data for analysis and coupling workflows.

The openPMD-api supports high-speed backends such as HDF5, ADIOS1 and ADIOS2, thus preparing scientific codes for the IO efficiency required at the exascale. These backends, as well as their configurations, can be selected at runtime, providing flexibility up to the choice of streaming IO (via ADIOS2). A serial JSON backend serves for prototyping and learning purposes, as well as for users who prefer ASCII output.
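
To illustrate what self-describing output adds over a raw array, here is a schematic sketch using only Python's stdlib json module. The attribute names follow the spirit of the openPMD standard, but the layout is deliberately simplified and this is not the openPMD-api:

```python
import json

# Schematic sketch (not the openPMD-api) of self-describing data: the
# raw numbers travel together with the meta-data a reader needs to
# interpret them, so any consumer can reconstruct units and geometry.

record = {
    "openPMD": "1.1.0",                   # schema version the data follows
    "meshes": {
        "E": {
            "gridSpacing": [0.5],         # cell size along each axis
            "gridUnitSI": 1.0,            # scale of grid units to SI
            "axisLabels": ["x"],
            "x": {
                "data": [0.0, 0.1, 0.2, 0.3],
                "unitSI": 1.0,            # scale of values to SI units
            },
        }
    },
}

text = json.dumps(record, indent=2)       # human-readable ASCII output
restored = json.loads(text)               # any reader can interpret it
print(restored["meshes"]["E"]["axisLabels"])
```

With the openPMD-api, the same description is written through a single interface while the actual container (HDF5, ADIOS2, JSON, or a stream) is chosen at runtime.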

Additional language bindings on top of the core C++ implementation ease integration into many established post-processing frameworks, such as Python bindings for parallel readers into ParaView, Dask and domain-specific analysis tools.

Further information:

https://github.com/openPMD/openPMD-api/

https://openpmd.org

References:

Axel Huebl et al. (2015). openPMD: A meta data standard for particle and mesh based data. https://doi.org/10.5281/zenodo.591699

Fabian Koller et al. (2018). openPMD-api: C++ & Python API for Scientific I/O with openPMD. https://doi.org/10.14278/rodare.27

Franz Poeschel et al. (2022). Transitioning from File-Based HPC Workflows to Streaming Data Pipelines with openPMD and ADIOS2. https://doi.org/10.1007/978-3-030-96498-6_6

Contact: