2.2. Install from source code#

Please visit our GitHub page to download the latest released version or the development version.

Or get the DeePMD-kit source code by git clone

cd /some/workspace
git clone https://github.com/deepmodeling/deepmd-kit.git deepmd-kit

For convenience, you may want to record the location of the source in a variable, say deepmd_source_dir, by

cd deepmd-kit
deepmd_source_dir=`pwd`

2.2.1. Install the Python interface#

2.2.1.1. Install Backend’s Python interface#

First, check the Python version on your machine. Python 3.9 or above is required.

python --version
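
The same check can be done programmatically. The following is a small illustrative sketch (not part of DeePMD-kit) using only the standard library:

```python
import sys

def check_python_version(minimum=(3, 9)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info >= minimum

if not check_python_version():
    print("DeePMD-kit requires Python 3.9 or later")
```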

We use the virtual environment approach to install the backend's Python interface. Assume that the Python interface will be installed in the virtual environment directory $deepmd_venv:

virtualenv -p python3 $deepmd_venv
source $deepmd_venv/bin/activate
pip install --upgrade pip

Full instructions to install TensorFlow can be found on the official TensorFlow website. TensorFlow 2.7 or later is supported.

pip install --upgrade tensorflow

If one does not need GPU support in DeePMD-kit and is concerned about package size, install the CPU-only version of TensorFlow instead:

pip install --upgrade tensorflow-cpu

One can also use conda to install TensorFlow from conda-forge.

To verify the installation, run

python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

One can also build the TensorFlow Python interface from source for customized hardware optimization, such as CUDA, ROCm, or oneDNN support.

To install PyTorch, run

pip install torch

Follow PyTorch documentation to install PyTorch built against different CUDA versions or without CUDA.
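
As an example, the PyTorch install page documents a dedicated wheel index for CPU-only builds; a typical invocation (check the PyTorch website for the current URL matching your platform) is:

```shell
# CPU-only PyTorch build, which avoids the large CUDA dependencies
pip install torch --index-url https://download.pytorch.org/whl/cpu
```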

One can also use conda to install PyTorch from conda-forge.

To install JAX AI Stack, run

pip install jax-ai-stack

One can also install packages in JAX AI Stack manually. Follow JAX documentation to install JAX built against different CUDA versions or without CUDA.
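
As a sketch, the JAX install documentation currently describes CUDA-enabled wheels installed via a pip extra; the exact extra name depends on your CUDA version and the JAX release:

```shell
# JAX with bundled CUDA 12 support (extra name taken from the JAX install docs)
pip install --upgrade "jax[cuda12]"
```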

One can also use conda to install JAX from conda-forge.

Every time a new shell is started, the virtual environment must be activated before using DeePMD-kit:

source $deepmd_venv/bin/activate

To exit the virtual environment, run

deactivate

If there are multiple Python interpreters named like python3.x, a specific one can be chosen by, for example,

virtualenv -p python3.9 $deepmd_venv

Remember to activate the virtual environment every time you use DeePMD-kit.

2.2.1.2. Install DeePMD-kit's Python interface#

Check the compiler version on your machine

gcc --version

By default, DeePMD-kit uses C++14, so the compiler needs to support C++14 (GCC 5 or later). The backend package may use a higher C++ standard and thus require a newer compiler (for example, GCC 7 for C++17).

Note that TensorFlow may have specific requirements for the compiler version to support the C++ standard version and _GLIBCXX_USE_CXX11_ABI used by TensorFlow. It is recommended to use the same compiler version as TensorFlow, which can be printed by python -c "import tensorflow;print(tensorflow.version.COMPILER_VERSION)".

You can set the environment variable export DP_ENABLE_PYTORCH=1 to enable customized C++ OPs in the PyTorch backend. Note that PyTorch may have specific requirements for the compiler version to support the C++ standard version and _GLIBCXX_USE_CXX11_ABI used by PyTorch.

Execute

cd $deepmd_source_dir
pip install .

One may set the following environment variables before executing pip:

DP_VARIANT#

Choices: cpu, cuda, rocm; Default: cpu

Build the CPU variant, or a GPU variant with CUDA or ROCm support.

CUDAToolkit_ROOT#

Type: Path; Default: Detected automatically

The path to the CUDA toolkit directory. CUDA 9.0 or later is supported. NVCC is required.

ROCM_ROOT#

Type: Path; Default: Detected automatically

The path to the ROCm toolkit directory. If ROCM_ROOT is not set, ROCM_PATH is used instead; if ROCM_PATH is also not set, the path is detected using hipconfig --rocmpath.

DP_ENABLE_TENSORFLOW#

Choices: 0, 1; Default: 1

Enable the TensorFlow backend.

DP_ENABLE_PYTORCH#

Choices: 0, 1; Default: 0

Enable customized C++ OPs for the PyTorch backend. PyTorch can still run without customized C++ OPs, but features will be limited.

TENSORFLOW_ROOT#

Type: Path; Default: Detected automatically

The path to the TensorFlow Python library. If not given, by default the installer only finds TensorFlow under the user site-packages directory (site.getusersitepackages()) or the system site-packages directory (sysconfig.get_path("purelib")), due to the limitations of PEP 517. If not found, the latest TensorFlow from PyPI (or the version given by the environment variable TENSORFLOW_VERSION, if set) will be built against.

PYTORCH_ROOT#

Type: Path; Default: Detected automatically

The path to the PyTorch Python library. If not given, by default the installer only finds PyTorch under the user site-packages directory (site.getusersitepackages()) or the system site-packages directory (sysconfig.get_path("purelib")), due to the limitations of PEP 517. If not found, the latest PyTorch from PyPI (or the version given by the environment variable PYTORCH_VERSION, if set) will be built against.

DP_ENABLE_NATIVE_OPTIMIZATION#

Choices: 0, 1; Default: 0

Enable compilation optimization for the native machine’s CPU type. Do not enable it if generated code will run on different CPUs.

CMAKE_ARGS#

Type: string

Additional CMake arguments.

<LANG>FLAGS#

<LANG>=CXX, CUDA or HIP

Type: string

Default compilation flags to be used when compiling <LANG> files. See CMake documentation for details.

Other CMake environment variables may also be critical.
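
Putting the variables above together, a hypothetical GPU build with the PyTorch C++ OPs enabled might look like this (the CUDA toolkit path is a placeholder for your own installation):

```shell
cd $deepmd_source_dir
# Build the CUDA variant and enable the customized PyTorch C++ OPs
DP_VARIANT=cuda DP_ENABLE_PYTORCH=1 CUDAToolkit_ROOT=/usr/local/cuda pip install .
```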

To test the installation, one should first jump out of the source directory

cd /some/other/workspace

then execute

dp -h

It will print the help information like

usage: dp [-h]
          [-b {jax,tensorflow,tf,pytorch,pt} | --jax | --tensorflow | --pytorch]
          [--version]
          {transfer,train,freeze,test,compress,doc-train-input,model-devi,convert-from,neighbor-stat,change-bias,train-nvnmd,gui,convert-backend,show}
          ...

DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics

options:
  -h, --help            show this help message and exit
  -b {jax,tensorflow,tf,pytorch,pt}, --backend {jax,tensorflow,tf,pytorch,pt}
                        The backend of the model. Default can be set by environment variable DP_BACKEND. (default: tensorflow)
  --jax                 Alias for --backend jax (default: None)
  --tensorflow, --tf    Alias for --backend tensorflow (default: None)
  --pytorch, --pt       Alias for --backend pytorch (default: None)
  --version             show program's version number and exit

Valid subcommands:
  {transfer,train,freeze,test,compress,doc-train-input,model-devi,convert-from,neighbor-stat,change-bias,train-nvnmd,gui,convert-backend,show}
    transfer            (Supported backend: TensorFlow) pass parameters to another model
    train               train a model
    freeze              freeze the model
    test                test the model
    compress            Compress a model
    doc-train-input     print the documentation (in rst format) of input training parameters.
    model-devi          calculate model deviation
    convert-from        (Supported backend: TensorFlow) convert lower model version to supported version
    neighbor-stat       Calculate neighbor statistics
    change-bias         (Supported backend: PyTorch) Change model out bias according to the input data.
    train-nvnmd         (Supported backend: TensorFlow) train nvnmd model
    gui                 Serve DP-GUI.
    convert-backend     Convert model to another backend.
    show                Show the information of a model

Use --tf or --pt to choose the backend:
    dp --tf train input.json
    dp --pt train input.json

2.2.1.3. Install Horovod and mpi4py (TensorFlow)#

Horovod and mpi4py are used for parallel training. For better performance on GPU, please follow the tuning steps in Horovod on GPU.

# With GPU, prefer NCCL as a communicator.
HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_NCCL_HOME=/path/to/nccl pip install horovod mpi4py

If you work in a CPU environment, please prepare the runtime as below:

# By default, MPI is used as communicator.
HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 pip install horovod mpi4py

To ensure Horovod has been built with proper framework support enabled, one can invoke the horovodrun --check-build command, e.g.,

$ horovodrun --check-build

Horovod v0.22.1:

Available Frameworks:
    [X] TensorFlow
    [X] PyTorch
    [ ] MXNet

Available Controllers:
    [X] MPI
    [X] Gloo

Available Tensor Operations:
    [X] NCCL
    [ ] DDL
    [ ] CCL
    [X] MPI
    [X] Gloo

Since version 2.0.1, Horovod and mpi4py with MPICH support are shipped with the installer.

If you don’t install Horovod, DeePMD-kit will fall back to serial mode.

2.2.2. Install the C++ interface#

If one does not need to use DeePMD-kit with LAMMPS or i-PI, then the Python interface installed in the previous section does everything, and this section can safely be skipped.

2.2.2.1. Install Backends’ C++ interface (optional)#

The C++ interfaces of both TensorFlow and JAX backends are based on the TensorFlow C++ library.

Since TensorFlow 2.12, the TensorFlow C++ library (libtensorflow_cc) is packaged inside the Python library, so you can usually skip building the TensorFlow C++ library manually. If that does not work for you, you can still build it manually.
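
To see where the libraries bundled with the TensorFlow wheel live, one can query the public tf.sysconfig API; this small sketch degrades gracefully when TensorFlow is absent:

```python
def tf_library_dir():
    """Return the directory of the TensorFlow libraries bundled with the
    Python package, or None if TensorFlow is not installed."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    return tf.sysconfig.get_lib()

print(tf_library_dir())
```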

The C++ interface of DeePMD-kit was tested with compiler GCC >= 4.8. Note that the i-PI support is only compiled with GCC >= 4.8, and that TensorFlow may have specific requirements for the compiler version.

First, the C++ interface of TensorFlow should be installed. Note that the TensorFlow version should be consistent with that of the Python interface. You may follow the instruction or run the script $deepmd_source_dir/source/install/build_tf.py to install the corresponding C++ interface.

If you have installed PyTorch using pip, you can use libtorch inside the PyTorch Python package. You can also download libtorch prebuilt library from the PyTorch website.
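
The libtorch bundled with the PyTorch wheel exposes its CMake prefix via torch.utils.cmake_prefix_path (documented by PyTorch); a sketch to print it, skipping gracefully when PyTorch is absent:

```python
def torch_cmake_prefix():
    """Return the CMake prefix path of the libtorch bundled with the PyTorch
    Python package, or None if PyTorch is not installed."""
    try:
        import torch
    except ImportError:
        return None
    return torch.utils.cmake_prefix_path

print(torch_cmake_prefix())
```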

The JAX backend only depends on the TensorFlow C API, which is included in both the TensorFlow C++ library and the TensorFlow C library. If you want to use the TensorFlow C++ library, just enable the TensorFlow backend (which depends on the TensorFlow C++ library) and nothing else needs to be done. If you want to use the TensorFlow C library and disable the TensorFlow backend, download the TensorFlow C library from this page.

2.2.2.2. Install DeePMD-kit’s C++ interface#

Now go to the source code directory of DeePMD-kit and make a building place.

cd $deepmd_source_dir/source
mkdir build
cd build

The installation requires CMake 3.16 or later for the CPU version, CMake 3.23 or later for CUDA support, and CMake 3.21 or later for ROCm support. One can install CMake via pip if it is not installed or the installed version does not satisfy the requirement:

pip install -U cmake

You must enable at least one backend. If you enable two or more backends, these backend libraries must be built in a compatible way, e.g. using the same _GLIBCXX_USE_CXX11_ABI flag. We recommend using conda packages from conda-forge, which are usually compatible with each other.

Assuming you have activated the TensorFlow Python environment and want to install DeePMD-kit into the path $deepmd_root, execute CMake:

cmake -DENABLE_TENSORFLOW=TRUE -DUSE_TF_PYTHON_LIBS=TRUE -DCMAKE_INSTALL_PREFIX=$deepmd_root ..

If you specify -DUSE_TF_PYTHON_LIBS=FALSE, you need to pass the location of the installed TensorFlow C++ interface via -DTENSORFLOW_ROOT=${tensorflow_root}.

Assuming you have installed PyTorch (either the Python or the C++ interface) under $torch_root, execute CMake:

cmake -DENABLE_PYTORCH=TRUE -DCMAKE_PREFIX_PATH=$torch_root -DCMAKE_INSTALL_PREFIX=$deepmd_root ..

You can specify -DUSE_PT_PYTHON_LIBS=TRUE to use libtorch from the Python installation, but be careful: PyTorch PyPI packages are still built with _GLIBCXX_USE_CXX11_ABI=0, which may not be compatible with other libraries.

cmake -DENABLE_PYTORCH=TRUE -DUSE_PT_PYTHON_LIBS=TRUE -DCMAKE_INSTALL_PREFIX=$deepmd_root ..

If you want to use the TensorFlow C++ library, just enable the TensorFlow backend and nothing else needs to be done. If you want to use the TensorFlow C library and disable the TensorFlow backend, set ENABLE_JAX to ON and CMAKE_PREFIX_PATH to the root directory of the TensorFlow C library.

cmake -DENABLE_JAX=ON -D CMAKE_PREFIX_PATH=${tensorflow_c_root} ..

One may add the following CMake variables to cmake using the -D <var>=<value> option:

ENABLE_TENSORFLOW#

Type: BOOL (ON/OFF), Default: OFF

Whether to build the TensorFlow backend and the JAX backend. Setting this option to ON will also set ENABLE_JAX to ON.

ENABLE_PYTORCH#

Type: BOOL (ON/OFF), Default: OFF

Whether to build the PyTorch backend.

ENABLE_JAX#

Type: BOOL (ON/OFF), Default: OFF

Build the JAX backend. If ENABLE_TENSORFLOW is ON, the TensorFlow C++ library is used to build the JAX backend; if ENABLE_TENSORFLOW is OFF, the TensorFlow C library is used instead.

TENSORFLOW_ROOT#

Type: PATH

The path to TensorFlow's C++ interface (used by the TensorFlow and JAX backends).

CMAKE_INSTALL_PREFIX#

Type: PATH

The path where DeePMD-kit will be installed. See also the CMake documentation.

USE_CUDA_TOOLKIT#

Type: BOOL (ON/OFF), Default: OFF

If TRUE, build GPU support with the CUDA toolkit.

CUDAToolkit_ROOT#

Type: PATH, Default: Search automatically

The path to the CUDA toolkit directory. CUDA 9.0 or later is supported. NVCC is required. See also CMake documentation.

USE_ROCM_TOOLKIT#

Type: BOOL (ON/OFF), Default: OFF

If TRUE, build GPU support with the ROCm toolkit.

CMAKE_HIP_COMPILER_ROCM_ROOT#

Type: PATH, Default: Search automatically

The path to the ROCm toolkit directory. See also the ROCm documentation.

LAMMPS_SOURCE_ROOT#

Type: PATH

Only necessary for using LAMMPS plugin mode. The path to the LAMMPS source code. LAMMPS 8Apr2021 or later is supported. If not assigned, the plugin mode will not be enabled.

USE_TF_PYTHON_LIBS#

Type: BOOL (ON/OFF), Default: OFF

If TRUE, build the C++ interface with TensorFlow's Python libraries (TensorFlow's Python interface is required). There is then no need to build TensorFlow's C++ interface.

USE_PT_PYTHON_LIBS#

Type: BOOL (ON/OFF), Default: OFF

If TRUE, build the C++ interface with PyTorch's Python libraries (PyTorch's Python interface is required). There is then no need to download PyTorch's C++ libraries.

ENABLE_NATIVE_OPTIMIZATION#

Type: BOOL (ON/OFF), Default: OFF

Enable compilation optimization for the native machine’s CPU type. Do not enable it if generated code will run on different CPUs.

CMAKE_<LANG>_FLAGS#

(<LANG>=CXX, CUDA or HIP)

Type: STRING

Default compilation flags to be used when compiling <LANG> files. See also CMake documentation.
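
As an illustrative combination of the variables above, a build enabling both the TensorFlow and PyTorch backends, CUDA support, and the LAMMPS plugin might look like this (all paths are placeholders for your own installation):

```shell
cmake -DENABLE_TENSORFLOW=TRUE -DUSE_TF_PYTHON_LIBS=TRUE \
      -DENABLE_PYTORCH=TRUE -DCMAKE_PREFIX_PATH=$torch_root \
      -DUSE_CUDA_TOOLKIT=TRUE \
      -DLAMMPS_SOURCE_ROOT=/path/to/lammps \
      -DCMAKE_INSTALL_PREFIX=$deepmd_root ..
```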


If CMake executed successfully, run the following make commands to build the package:

make -j4
make install

Option -j4 means using 4 processes in parallel. You may want to use a different number according to your hardware.

If everything works fine, you will have the executable and libraries installed in $deepmd_root/bin and $deepmd_root/lib:

$ ls $deepmd_root/bin
$ ls $deepmd_root/lib