Rohan Babbar
Spack: Package manager for MPI Cluster

While building and managing my MPI cluster, I wanted something that could easily manage my software packages, support multiple versions of the same package, and work with multiple MPI implementations. I used to spend hours fixing old builds and making sure my experiments ran without anything breaking.

A few months back I found Spack, an open-source, flexible package manager that makes it easy to install software packages in an HPC environment. It offers several advantages for anyone trying to build an HPC cluster:

  1. Spack is open source and can be installed quite easily. Many packages, including containers such as Podman and Apptainer, can be installed with a single command.
  2. Spack supports multiple versions of the same package; for example, you can install two separate versions of mpich and load whichever you want. Spack creates a separate build for each.
  3. Spack supports multiple implementations of software packages and handles loading and unloading packages with ease.
  4. Spack also provides environments, whose use is quite similar to environments in conda.
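As a sketch of points 2 and 3, here is what managing two side-by-side mpich builds could look like. This assumes Spack is installed and its setup script has been sourced; the version numbers are illustrative, not prescriptive.

```shell
#!/usr/bin/env bash
# Sketch only: assumes `source <spack-root>/share/spack/setup-env.sh`
# has been run. Versions below are examples, not recommendations.
if command -v spack >/dev/null 2>&1; then
    spack install mpich@4.1      # one build of mpich
    spack install mpich@3.4.3    # a second, independent build
    spack load mpich@3.4.3       # pick the version for this session
    which mpicc                  # resolves to the 3.4.3 build
    spack unload mpich@3.4.3
fi
status="ok"
```

Because each version gets its own build directory, loading one never clobbers the other.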

Quick commands

Spack supports a lot of commands, but here are some I personally use and recommend:

| Command | Description |
| --- | --- |
| `spack install/uninstall <package>@<version>` | Install or uninstall packages (down to specific versions) on your cluster |
| `spack load/unload <package>` | Load or unload the package you want to use, with all dependencies handled automatically |
| `spack list` | See all available packages |
| `spack env create <env>`<br>`spack env activate <env>`<br>`spack add <package>`<br>`spack install` | Create and activate a Spack environment, then install packages into it for a reproducible setup |
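The environment workflow from the last row, written out as a script. The environment and package names are just examples; this assumes Spack is installed and sourced.

```shell
#!/usr/bin/env bash
# Sketch of a reproducible Spack environment, similar in spirit to a
# conda env. Names below are illustrative.
if command -v spack >/dev/null 2>&1; then
    spack env create my-mpi-env      # create the environment
    spack env activate my-mpi-env    # enter it
    spack add openmpi                # record the spec in the env's spack.yaml
    spack install                    # concretize and build everything added
    spack env deactivate
fi
status="ok"
```

The environment's `spack.yaml` file can be committed to version control, so collaborators can rebuild the same stack.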

Another important tool I personally use is Lmod (Lua-based environment modules), which makes loading and unloading packages easier. Spack integrates with Lmod to generate module files and the commands to manipulate them.
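A sketch of how the Spack/Lmod handoff can look in practice. This assumes Lmod is installed and Spack's Lmod module generation is enabled in `modules.yaml`; the package name is an example.

```shell
#!/usr/bin/env bash
# Sketch: regenerate Lmod module files from installed Spack packages,
# then manage software with standard `module` commands instead of
# `spack load`. Assumes both Spack and Lmod are already set up.
if command -v spack >/dev/null 2>&1 && command -v module >/dev/null 2>&1; then
    spack module lmod refresh -y    # (re)generate module files
    module avail                    # list what Spack has exposed
    module load mpich               # load via Lmod
    module unload mpich
fi
status="ok"
```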

How I use it in my MPI cluster

  • Ensure that each node uses Spack as its package manager, with mpich and openmpi preinstalled via Spack. I install other packages as needed.

  • SSH into the login node, then write and run scripts that perform computations on the compute nodes, using spack or Lmod commands to load/unload software packages.

  • Run experiments and collect the results; Spack handles all the dependencies automatically when loading packages.
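Putting the three steps together, a minimal driver script run from the login node might look like this. The hostfile path, process count, and program name are all assumptions for illustration.

```shell
#!/usr/bin/env bash
# Sketch of a login-node run script: load an MPI implementation via
# Spack, then launch a program across compute nodes. The hostfile
# (~/hosts), -np value, and ./my_experiment binary are placeholders.
if command -v spack >/dev/null 2>&1; then
    spack load mpich                          # dependencies come along
    mpirun -np 8 -f ~/hosts ./my_experiment   # -f <hostfile> is mpich's syntax
    spack unload mpich
fi
status="ok"
```

Swapping `spack load mpich` for `spack load openmpi` (and adjusting the launcher flags accordingly) is all it takes to rerun the same experiment under a different MPI implementation.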

Conclusion

Even though Spack is slightly slower when installing packages, because it builds them from source, its flexibility across different supercomputing and HPC environments is great. The number of available packages is huge and ever increasing, making it an obvious choice for HPC experiments.
