CASUS Software Repository

alpaka – Abstraction Library for Parallel Kernel Acceleration

The alpaka library is a header-only C++14 abstraction library for accelerator development.

Its aim is to provide performance portability across accelerators through the abstraction (not hiding!) of the underlying levels of parallelism.

It is platform independent and supports the concurrent and cooperative use of multiple devices, such as the host's CPU as well as attached accelerators, for instance CUDA GPUs and Xeon Phis (currently native execution only). A multitude of accelerator back-end variants using CUDA, OpenMP (2.0/4.0), Boost.Fiber, std::thread and also serial execution is provided and can be selected depending on the device. Only a single implementation of each user kernel is required: kernels are written as function objects with a special interface, so there is no need to write special CUDA, OpenMP or custom threading code. Accelerator back-ends can be mixed within a device queue, and the decision which accelerator back-end executes which kernel can be made at runtime.

The abstraction used is very similar to the CUDA grid-blocks-threads division strategy. An algorithm that is to be parallelized has to be divided into a multi-dimensional grid of small, uniform work items. These functions are called kernels and are executed in parallel threads. The threads in the grid are organized in blocks: all threads in a block are executed in parallel and can interact via fast shared memory, while blocks are executed independently and cannot interact in any way. The block execution order is unspecified and depends on the accelerator in use. By using this abstraction, the execution can be adapted optimally to the available hardware.
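The grid-blocks-threads decomposition described above can be sketched in plain Python (this is not the alpaka C++ API; all names here are illustrative):

```python
# Conceptual sketch of the grid-blocks-threads decomposition, in plain
# Python (not the alpaka C++ API; all names are illustrative).

def run_grid(kernel, grid_dim, block_dim, *args):
    """Execute `kernel` once per thread of a 1-D grid.

    On a real accelerator, threads within a block run in parallel and
    can interact via per-block shared memory; blocks are independent
    and may be scheduled in any order.
    """
    for block in range(grid_dim):          # block order is unspecified
        shared = {}                        # fast per-block shared memory
        for thread in range(block_dim):    # the threads of one block
            kernel(block, thread, block_dim, shared, *args)

def axpy_kernel(block, thread, block_dim, shared, a, x, y, out):
    """One uniform work item: out[i] = a * x[i] + y[i]."""
    i = block * block_dim + thread         # global thread index
    if i < len(x):                         # guard against grid overhang
        out[i] = a * x[i] + y[i]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0, 20.0, 30.0, 40.0, 50.0]
out = [0.0] * len(x)
run_grid(axpy_kernel, 3, 2, 2.0, x, y, out)   # 3 blocks of 2 threads
```

Note the guard in the kernel: the grid of 3 × 2 = 6 threads is slightly larger than the 5-element problem, which is the usual situation when the problem size is not a multiple of the block size.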

Further information:


E. Zenker, B. Worpitz, R. Widera, A. Huebl, G. Juckeland, A. Knüpfer, W. E. Nagel, M. Bussmann: Alpaka – An Abstraction Library for Parallel Kernel Acceleration, 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2016, DOI: 10.1109/IPDPSW.2016.50


atoMEC – Average-atom code for Matter under Extreme Conditions

atoMEC is a Python-based average-atom code for simulations of high energy density phenomena such as in warm dense matter. It is designed as an open-source, modular Python package.

atoMEC uses Kohn-Sham density functional theory, in combination with an average-atom approximation, to solve the electronic structure problem for single-element materials at finite temperature. It is currently able to compute several important properties, such as total energies, Kohn-Sham eigenvalues, pressure and mean ionization states, for various materials under a wide range of densities and temperatures. Several developments and extensions are planned, for example time-dependent phenomena, non-equilibrium and non-adiabatic effects, and new approximations for finite temperatures.
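The central finite-temperature ingredient of this approach can be illustrated with a short sketch (this is not the atoMEC API; energies and degeneracies below are made up): Kohn-Sham eigenvalues are occupied according to the Fermi-Dirac distribution, with the chemical potential fixed so that the occupations sum to the electron count.

```python
# Illustrative sketch (not the atoMEC API): Fermi-Dirac occupation of
# Kohn-Sham eigenvalues at finite temperature, with the chemical
# potential determined by the total electron number.
import math

def fermi_dirac(eps, mu, kT):
    """Occupation of a level with energy eps (all in atomic units)."""
    x = (eps - mu) / kT
    if x > 500.0:                  # avoid overflow in exp for deep bisection bounds
        return 0.0
    return 1.0 / (1.0 + math.exp(x))

def chemical_potential(eigvals, degeneracies, n_elec, kT):
    """Bisect for mu such that the summed occupations equal n_elec."""
    lo, hi = min(eigvals) - 10.0, max(eigvals) + 10.0
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        n = sum(g * fermi_dirac(e, mu, kT)
                for e, g in zip(eigvals, degeneracies))
        if n < n_elec:
            lo = mu
        else:
            hi = mu
    return mu

# Toy spectrum: three levels with degeneracies, 3 electrons, kT = 0.1 Ha.
eigvals = [-2.0, -1.0, -0.1]
degen = [2, 6, 10]
mu = chemical_potential(eigvals, degen, n_elec=3.0, kT=0.1)
occ = [g * fermi_dirac(e, mu, 0.1) for e, g in zip(eigvals, degen)]
```

At zero temperature the occupations would be a sharp step; at finite temperature they smear out, which is what makes quantities such as the mean ionization state temperature dependent.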

More information on the average-atom methodology and Kohn-Sham density functional theory can be found (for example) in reference [1].

Further information:


[1] T. J. Callow, E. Kraisler, S. B. Hansen, and A. Cangi (2021). First-principles derivation and properties of density-functional average-atom models. arXiv preprint arXiv:2103.09928.

[2] T. Callow, D. Kotik, E. Tsvetoslavova Stankulova, E. Kraisler, and A. Cangi (2021). atoMEC (Version 1.0.0) [Computer software].


MALA – Materials Learning Algorithms

MALA is a data-driven framework to generate surrogate models of density functional theory calculations based on machine learning. Its purpose is to enable multiscale modeling by bypassing computationally expensive steps in state-of-the-art density functional simulations.

MALA is designed as a modular, open-source Python package. It enables users to perform the entire modeling toolchain using only a few lines of code. MALA is jointly developed by Sandia National Laboratories (SNL) and the Center for Advanced Systems Understanding (CASUS).
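The surrogate-model idea itself can be sketched in a few lines of Python (this is a deliberately minimal stand-in, not MALA's actual API or neural-network models; the descriptor/energy values are made up): learn a cheap map from descriptors to a target quantity using reference data, then predict at new inputs instead of running another expensive calculation.

```python
# Minimal surrogate-model sketch (not the MALA API): fit a 1-D linear
# model mapping a local descriptor to an energy-like target, then use
# it for cheap predictions in place of a new DFT calculation.

def fit_linear_surrogate(descriptors, targets):
    """Closed-form 1-D least-squares fit: targets ~ w * d + b."""
    n = len(descriptors)
    mean_d = sum(descriptors) / n
    mean_t = sum(targets) / n
    cov = sum((d - mean_d) * (t - mean_t)
              for d, t in zip(descriptors, targets))
    var = sum((d - mean_d) ** 2 for d in descriptors)
    w = cov / var
    b = mean_t - w * mean_d
    return w, b

# "Training data": descriptor/target pairs that a real workflow would
# obtain from expensive reference DFT runs (values here are invented).
d_train = [0.5, 1.0, 1.5, 2.0]
e_train = [2.0, 3.0, 4.0, 5.0]
w, b = fit_linear_surrogate(d_train, e_train)
e_pred = w * 3.0 + b     # cheap prediction at a new descriptor value
```

A real MALA workflow replaces the linear fit with deep neural networks and physically meaningful descriptors, but the structure is the same: train once on expensive reference data, then predict cheaply.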

Further information:


J. A. Ellis, L. Fiedler, G. A. Popoola, N. A. Modine, J. A. Stephens, A. P. Thompson, A. Cangi, S. Rajamanickam: Accelerating Finite-temperature Kohn-Sham Density Functional Theory with Deep Neural Networks. Phys. Rev. B 104, 035120 (2021).