Installation

You can build and run RDycore on the following platforms:

  • Linux and Mac laptops and workstations
  • Frontier (Oak Ridge National Laboratory)
  • Perlmutter (NERSC)

Required Software

To build RDycore, you need:

  • CMake v3.14+ for configuration
  • GNU Make or Ninja for compilation and testing
  • reliable C, C++, and Fortran compilers
  • a working MPI installation (like OpenMPI or MPICH)
  • PETSc, built with the following third-party libraries:
    • cgns
    • exodusii
    • fblaslapack
    • hdf5
    • libceed
    • metis
    • muparser
    • netcdf
    • parmetis
    • pnetcdf
    • zlib

You can obtain all of these freely on the Linux and Mac platforms. On Linux, just use your favorite package manager. On a Mac, you can get the Clang C/C++ compiler by installing Xcode, and then use a package manager like Homebrew or MacPorts to get the rest.
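
For example, on a Mac with Homebrew, something like the following installs the basic toolchain (the formula names here are assumptions; check brew search if one has changed):

# Install build tools with Homebrew; gcc provides gfortran,
# and open-mpi provides the mpicc/mpifort compiler wrappers
brew install cmake ninja gcc open-mpi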

Which version of PETSc?

Check our automated testing workflow for the proper Git hash to use to build RDycore. The linked line specifies a Docker image containing the "blessed" version of PETSc, which can be read as follows:

coherellc/rdycore-petsc:fc288817-int32
  • coherellc is the name of the DockerHub organization hosting the image
  • rdycore-petsc is the name of the Docker image
  • fc288817 is the hash of the Git commit in the PETSc repository used to build PETSc for RDycore
  • int32 (or int64) indicates whether the PETSc installation within the image uses 32-bit or 64-bit integers for the PetscInt data type.
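
If you want to experiment with the same environment locally, you can pull this image directly (assuming you have Docker installed; the tag shown is the example above and may not be the current "blessed" one):

docker pull coherellc/rdycore-petsc:fc288817-int32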

See our PETSc Dockerfile for an example of the configure command we use to build PETSc in our continuous integration environment.
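
As a rough sketch only (the Dockerfile is authoritative), a PETSc configure command that pulls in the required third-party libraries via PETSc's --download-<package> convention looks something like this:

cd $PETSC_DIR
# Ask PETSc to download and build each required third-party library
./configure \
  --download-cgns --download-exodusii --download-fblaslapack \
  --download-hdf5 --download-libceed --download-metis \
  --download-muparser --download-netcdf --download-parmetis \
  --download-pnetcdf --download-zlib
# then build PETSc with the make command that configure prints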

Clone the Repository

First, get the source code from GitHub, cloning with either SSH or HTTPS:

git clone git@github.com:RDycore/RDycore.git

or

git clone https://github.com/RDycore/RDycore.git

This places an RDycore folder in your current working directory. Next, initialize and update the submodules:

cd RDycore
git submodule update --init --recursive

Configure RDycore

RDycore uses CMake, and can be easily configured as long as PETSc is installed and the PETSC_DIR and PETSC_ARCH environment variables are set properly. Usually all you need to do is change to your RDycore source directory and type

cmake -S . -B build

where build is the name of your build directory relative to the source directory. If you want to install RDycore somewhere afterward, e.g. to be able to configure E3SM to use it, you can set the prefix for the installation path using the CMAKE_INSTALL_PREFIX parameter:

cmake -S . -B build -DCMAKE_INSTALL_PREFIX=/path/to/install

If you want to build RDycore with Ninja instead of Make, you can ask CMake to generate Ninja files instead of Makefiles with the -G (generator) flag:

cmake -S . -B build -G Ninja

Supported configuration options

CMake allows you to specify build options with the -D flag, as shown above. Here are the options supported by RDycore:

  • CMAKE_INSTALL_PREFIX=/path/to/install: a path to which the RDycore library and driver are installed with make install (see Build, Test, and Install RDycore below)
  • CMAKE_BUILD_TYPE=Debug|Release: controls whether a build has debugging information or whether it is optimized
  • CMAKE_VERBOSE_MAKEFILE=ON|OFF: if ON, displays compiler and linker output while building. Otherwise displays only the file being built.
  • ENABLE_COVERAGE=ON|OFF: if ON, enables code coverage instrumentation.
  • ENABLE_FORTRAN=ON|OFF: if ON, enables Fortran 90 bindings. This is ON by default, but can be used to work around situations in which Fortran compilers are causing issues.

Since RDycore gets most of its configuration information from PETSc, we don't need to use most other CMake options.
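
For example, a debug build with Fortran bindings disabled can be configured like so:

cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug -DENABLE_FORTRAN=OFF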

Considerations for Apple hardware

If you're on a Mac, make sure you have installed the Xcode Command Line Tools. Once installed, these tools live in /Library/Developer/CommandLineTools/usr/bin/, so add this directory to your PATH.
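
A minimal setup, assuming the standard installer locations, looks like this:

# Install the Command Line Tools if they're missing
xcode-select --install

# Put the tools on your PATH
export PATH="/Library/Developer/CommandLineTools/usr/bin:$PATH"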

Build, Test, and Install RDycore

After you've configured RDycore, you can build it:

  1. Change to your build directory (e.g. cd build)
  2. Type make -j or ninja to build the library.
  3. To run tests for the library (and the included drivers), type make test or ninja test.
  4. To install the model to the location indicated by your CMAKE_INSTALL_PREFIX (if you specified it), type make install or ninja install. By default, products are installed in the include, lib, bin, and share subdirectories of this prefix. (A complete command sequence follows this list.)
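
Putting it all together, for a build directory named build with an installation prefix set at configure time:

cd build
make -j       # or: ninja
make test     # or: ninja test
make install  # or: ninja install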

Running Tests

RDycore uses CTest, CMake's testing program, to run its tests. CTest is very fancy and allows us to run tests selectively and in various ways, but all you need to do to run all the tests for RDycore is to change to your build directory and type

make test

or

ninja test

This runs every test defined in your build configuration and dumps the results to Testing/Temporary/LastTest.log.
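
You can also invoke ctest directly from the build directory to run a subset of tests. For instance (the name pattern swe here is a hypothetical example):

# Run only tests whose names match the given regex,
# printing output for any failures
ctest -R swe --output-on-failure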

Measuring Code Coverage

RDycore can use gcov or lcov to analyze code coverage (the fraction of source code that is exercised by programs and tests) with the GCC or Clang compilers.

To instrument the rdycore library and unit tests for code coverage analysis, pass the -DENABLE_COVERAGE=ON flag to CMake when configuring your build. Then, after building and running tests, type

make coverage

or

ninja coverage

to generate a single report (coverage.info) containing all coverage information. See the documentation for gcov and lcov (linked above) for details on how to interpret this information.
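
If you have lcov installed, its genhtml tool can turn this report into a browsable HTML summary (the output directory name below is arbitrary):

genhtml coverage.info --output-directory coverage_html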

Checking for memory errors and leaks with Valgrind

If you're using a Linux system and have Valgrind installed, you can run our tests using Valgrind's memcheck tool with

make memcheck

or

ninja memcheck
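
To check a single driver run rather than the whole test suite, you can also invoke Valgrind directly; the driver path and input file below are hypothetical, so adjust them to your build:

# Run one driver executable under memcheck with full leak details
valgrind --leak-check=full ./driver/rdycore my_config.yaml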

Making code changes and rebuilding

Notice that you must build RDycore in a build tree, separate from its source tree. This is standard practice in CMake-based build systems, and it allows you to build several different configurations without leaving generated and compiled files all over your source directory. However, you might have to change the way you work in order to be productive in this kind of environment.

When you make a code change, make sure you build from the build directory that you created when you configured RDycore:

cd /path/to/RDycore/build
make -j  # or ninja

You can also run tests from this build directory with make test.

This is very different from how some people like to work. One method of making this easier is to use an editor in a dedicated window, and have another window open with a terminal, sitting in your build directory. If you're using a fancy modern editor, it might have a CMake-based workflow that handles all of this for you.

The build directory has a structure that mirrors the source directory. If you're using Make (and not Ninja) to build RDycore, you can type make in any one of its subdirectories to do partial builds. In practice, though, it's safest to always build from the top of the build tree.
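
For example, after editing a file under src, a partial build might look like this (the src subdirectory name is an assumption; use whichever subdirectory you changed):

cd /path/to/RDycore/build/src
make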

Preinstalled PETSc for RDycore on certain DOE machines

The RDycore team supports installation of the model on the following DOE machines:

  1. Perlmutter at NERSC
  2. Frontier at OLCF

First, run the following shell script to set PETSc-related environment variables and load the appropriate modules:

source config/set_petsc_settings.sh --mach <machine_name> --config <configuration>

Multiple configurations of PETSc have been pre-installed on these supported machines under RDycore's project directories. Information about the available PETSc configurations can be obtained via ./config/set_petsc_settings.sh.

The Perlmutter system has two types of compute nodes, CPU-only and CPU-GPU, and RDycore must be built separately for each type. The CPU-only nodes have 128 cores (2 x 64-core AMD EPYC CPUs), while the CPU-GPU nodes have 1 x 64-core AMD EPYC CPU and 4 x NVIDIA A100 GPUs. RDycore uses PETSc's and libCEED's support for CUDA to run on Perlmutter GPUs.

Frontier has a single type of compute node, with one 64-core AMD EPYC CPU and 4 AMD MI250X GPUs. Each GPU has 2 Graphics Compute Dies (GCDs), for a total of 8 GCDs per node. Of the 64 cores, only 56 are allocatable. RDycore uses PETSc's and libCEED's support for HIP to run on AMD GPUs.

NOTE: Replace make -j4 with ninja below to use Ninja instead of Make.

Example: Building and running RDycore on Perlmutter CPU nodes

cd /path/to/RDycore

# Set PETSc environment variables for Perlmutter CPU nodes
source config/set_petsc_settings.sh --mach pm-cpu --config 1

# Build RDycore
cmake -S . -B build-$PETSC_ARCH -DCMAKE_INSTALL_PREFIX=$PWD/build-$PETSC_ARCH
cd build-$PETSC_ARCH
make -j4 install

# Use an interactive job queue
salloc --nodes 1 --qos interactive --time 00:30:00 --constraint cpu \
--account=<project-id>

# Change to the directory containing tests
cd driver/tests/swe_roe

# Run on 4 MPI tasks on CPUs
srun -N 1 -n 4 ../../rdycore ex2b_ic_file.yaml -ceed /cpu/self -log_view

Example: Building and running RDycore on Perlmutter GPU nodes

cd /path/to/RDycore

# Set PETSc environment variables for Perlmutter GPU nodes
source config/set_petsc_settings.sh --mach pm-gpu --config 1

# Build RDycore
cmake -S . -B build-$PETSC_ARCH -DCMAKE_INSTALL_PREFIX=$PWD/build-$PETSC_ARCH
cd build-$PETSC_ARCH
make -j4 install

# Use an interactive job queue
salloc --nodes 1 --qos interactive --time 00:30:00 --constraint gpu \
--gpus 4 --account=<project-id>_g

# Change to the directory containing tests
cd driver/tests/swe_roe

# Run on 4 GPUs using CUDA
srun -N 1 -n 4 -c 32 ../../rdycore ex2b_ic_file.yaml \
-ceed /gpu/cuda -dm_vec_type cuda -log_view -log_view_gpu_time

Example: Building and running RDycore on Frontier

cd /path/to/RDycore

# Set PETSc environment variables for Frontier
source config/set_petsc_settings.sh --mach frontier --config 1

# Build RDycore
cmake -S . -B build-$PETSC_ARCH -DCMAKE_INSTALL_PREFIX=$PWD/build-$PETSC_ARCH
cd build-$PETSC_ARCH
make -j4 install

# Use an interactive job queue
salloc -N 1 -A <project-id> -t 0:30:00 -p batch

# Change to the directory containing tests
cd driver/tests/swe_roe

# Run on CPUs
srun -N 1 -n8 -c1 ../../rdycore ex2b_ic_file.yaml -ceed /cpu/self -log_view

# Run on 8 GPUs using HIP
srun -N 1 -n8 -c1 ../../rdycore ex2b_ic_file.yaml \
-ceed /gpu/hip -dm_vec_type hip -log_view -log_view_gpu_time

Frequently Asked Questions (FAQ)

When I try to configure RDycore, PETSc is not found! I get an error message like this:

-- Checking for module 'PETSc'
--   No package 'PETSc' found
CMake Error at <gibberish>
  A required package was not found
Call Stack (most recent call first):
  <more gibberish>/Modules/FindPkgConfig.cmake:829 (_pkg_check_modules_internal)
  CMakeLists.txt:39 (pkg_check_modules)


-- Configuring incomplete, errors occurred!

How do I tell CMake where to find PETSc?

RDycore uses the UNIX pkg-config tool to figure out where PETSc is installed. Specifically, it looks for a file named <package-name>.pc (here, PETSc.pc) that contains relevant information about the installation. You can tell pkg-config where to search using the PKG_CONFIG_PATH environment variable, much as the PATH variable tells your shell where to find commands.

Typically, PETSc installs PETSc.pc in the following location:

$PETSC_DIR/$PETSC_ARCH/lib/pkgconfig/PETSc.pc

So if you modify PKG_CONFIG_PATH to contain $PETSC_DIR/$PETSC_ARCH/lib/pkgconfig (e.g. export PKG_CONFIG_PATH="$PETSC_DIR/$PETSC_ARCH/lib/pkgconfig:$PKG_CONFIG_PATH"), CMake can find your PETSc installation. This is usually necessary if PETSc is installed in an unusual location.
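
Putting it together (the PETSC_ARCH value below is just an example; use whatever your PETSc build produced):

export PETSC_DIR=/path/to/petsc
export PETSC_ARCH=arch-linux-c-opt
export PKG_CONFIG_PATH="$PETSC_DIR/$PETSC_ARCH/lib/pkgconfig:$PKG_CONFIG_PATH"
cmake -S . -B build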

When I type make, it builds some things and then just sits there for hours! What's going on?

Currently (around April 8, 2025), CMake is not able to generate Makefiles for PETSc's Fortran bindings, so you should use Ninja instead to build RDycore. Just add -G Ninja to your CMake configuration command and use ninja instead of make (without the -j flag, as Ninja always does parallel builds).