2.2. Install from source code
Please visit our GitHub repository to download the latest released version or the development version.
Alternatively, get the DeePMD-kit source code by git clone:
cd /some/workspace
git clone https://github.com/deepmodeling/deepmd-kit.git deepmd-kit
For convenience, you may want to record the location of the source in a variable, say deepmd_source_dir, by
cd deepmd-kit
deepmd_source_dir=`pwd`
2.2.1. Install the Python interface
2.2.1.1. Install Backend’s Python interface
First, check the Python version on your machine. Python 3.8 or above is required.
python --version
We follow the virtual environment approach to install the backend's Python interface. Now we assume that the Python interface will be installed in the virtual environment directory $deepmd_venv:
virtualenv -p python3 $deepmd_venv
source $deepmd_venv/bin/activate
pip install --upgrade pip
Full instructions for installing TensorFlow can be found on the official TensorFlow website. TensorFlow 2.2 or later is supported.
pip install --upgrade tensorflow
If one does not need the GPU support of DeePMD-kit and is concerned about package size, the CPU-only version of TensorFlow should be installed by
pip install --upgrade tensorflow-cpu
One can also use conda to install TensorFlow from conda-forge.
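For instance, the conda-forge route could look like the following sketch, assuming conda is available and an appropriate environment is activated:
# A sketch: install TensorFlow from the conda-forge channel.
conda install -c conda-forge tensorflow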
To verify the installation, run
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
One can also build the TensorFlow Python interface from source for customized hardware optimization, such as CUDA, ROCM, or OneDNN support.
To install PyTorch, run
pip install torch
Follow PyTorch documentation to install PyTorch built against different CUDA versions or without CUDA.
One can also use conda to install PyTorch from conda-forge.
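As a sketch of these alternatives, a CPU-only pip installation and a conda-forge installation might look like the following (the index URL follows the PyTorch documentation; adjust it for a specific CUDA version):
# A sketch: install the CPU-only PyTorch wheel via the documented CPU index.
pip install torch --index-url https://download.pytorch.org/whl/cpu
# Alternatively, install PyTorch from conda-forge (assuming conda is available).
conda install -c conda-forge pytorch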
It is important that, every time a new shell is started and one wants to use DeePMD-kit, the virtual environment should be activated by
source $deepmd_venv/bin/activate
If one wants to exit the virtual environment, run
deactivate
If one has multiple Python interpreters named something like python3.x, a specific one can be selected by, for example,
virtualenv -p python3.8 $deepmd_venv
One should remember to activate the virtual environment every time he/she uses DeePMD-kit.
2.2.1.2. Install the DeePMD-kit's Python interface
Check the compiler version on your machine
gcc --version
GCC 4.8 or later is supported by DeePMD-kit.
Note that TensorFlow may have specific requirements for the compiler version in order to support the C++ standard version and the _GLIBCXX_USE_CXX11_ABI flag used by TensorFlow. It is recommended to use the same compiler version as TensorFlow, which can be printed by python -c "import tensorflow;print(tensorflow.version.COMPILER_VERSION)".
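If you also want to inspect the ABI flag of the installed TensorFlow build, a quick check could be the following sketch (both attributes are part of TensorFlow's public Python API):
# Prints the compiler used to build TensorFlow and its _GLIBCXX_USE_CXX11_ABI flag (0 or 1).
python -c "import tensorflow as tf; print(tf.version.COMPILER_VERSION, tf.sysconfig.CXX11_ABI_FLAG)"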
You can set the environment variable DP_ENABLE_PYTORCH=1 (e.g. export DP_ENABLE_PYTORCH=1) to enable customized C++ OPs in the PyTorch backend. Note that PyTorch may have specific requirements for the compiler version to support the C++ standard version and the _GLIBCXX_USE_CXX11_ABI flag used by PyTorch.
The customized C++ OPs are not enabled by default because the TensorFlow and PyTorch packages from PyPI use different _GLIBCXX_USE_CXX11_ABI flags. We recommend conda-forge packages in this case.
Execute
cd $deepmd_source_dir
pip install .
One may set the following environment variables before executing pip:
Environment variables | Allowed value | Default value | Usage |
---|---|---|---|
DP_VARIANT | cpu, cuda, rocm | cpu | Build CPU variant or GPU variant with CUDA or ROCM support. |
CUDAToolkit_ROOT | Path | Detected automatically | The path to the CUDA toolkit directory. CUDA 9.0 or later is supported. NVCC is required. |
ROCM_ROOT | Path | Detected automatically | The path to the ROCM toolkit directory. |
DP_ENABLE_TENSORFLOW | 0, 1 | 1 | Enable the TensorFlow backend. |
DP_ENABLE_PYTORCH | 0, 1 | 0 | Enable customized C++ OPs for the PyTorch backend. PyTorch can still run without customized C++ OPs, but features will be limited. |
TENSORFLOW_ROOT | Path | Detected automatically | The path to the TensorFlow Python library. By default the installer only finds TensorFlow under the user or system site-package directories. |
DP_ENABLE_NATIVE_OPTIMIZATION | 0, 1 | 0 | Enable compilation optimization for the native machine's CPU type. Do not enable it if generated code will run on different CPUs. |
CMAKE_ARGS | str | - | Additional CMake arguments. |
<LANG>FLAGS (<LANG>=CXX, CUDA or HIP) | str | - | Default compilation flags to be used when compiling <LANG> files. |
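For instance, a CUDA-enabled build could be requested as in the following sketch, run from $deepmd_source_dir (the CUDA path is illustrative and should match your system):
# A sketch: select the CUDA variant and point the installer at an illustrative CUDA location.
DP_VARIANT=cuda CUDAToolkit_ROOT=/usr/local/cuda pip install .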
To test the installation, one should first jump out of the source directory
cd /some/other/workspace
then execute
dp -h
It will print the help information like
usage: dp [-h] {train,freeze,test} ...
DeePMD-kit: A deep learning package for many-body potential energy
representation and molecular dynamics
optional arguments:
-h, --help show this help message and exit
Valid subcommands:
{train,freeze,test}
train train a model
freeze freeze the model
test test the model
2.2.1.3. Install horovod and mpi4py
Horovod and mpi4py are used for parallel training. For better performance on GPU, please follow the tuning steps in Horovod on GPU.
# With GPU, prefer NCCL as a communicator.
HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_NCCL_HOME=/path/to/nccl pip install horovod mpi4py
If you work in a CPU environment, please prepare the runtime as below:
# By default, MPI is used as communicator.
HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 pip install horovod mpi4py
To ensure Horovod has been built with proper framework support enabled, one can invoke the horovodrun --check-build
command, e.g.,
$ horovodrun --check-build
Horovod v0.22.1:
Available Frameworks:
[X] TensorFlow
[X] PyTorch
[ ] MXNet
Available Controllers:
[X] MPI
[X] Gloo
Available Tensor Operations:
[X] NCCL
[ ] DDL
[ ] CCL
[X] MPI
[X] Gloo
Since version 2.0.1, Horovod and mpi4py with MPICH support are shipped with the installer.
If you don’t install Horovod, DeePMD-kit will fall back to serial mode.
2.2.2. Install the C++ interface
If one does not need to use DeePMD-kit with LAMMPS or i-PI, then the Python interface installed in the previous section does everything and this section can be safely skipped.
2.2.2.1. Install Backends’ C++ interface (optional)
Since TensorFlow 2.12, the TensorFlow C++ library (libtensorflow_cc) is packaged inside the Python library. Thus, you can skip building the TensorFlow C++ library manually. If that does not work for you, you can still build it manually.
The C++ interface of DeePMD-kit was tested with compiler GCC >= 4.8. Note that the i-PI support is only compiled with GCC >= 4.8, and that TensorFlow may have specific requirements for the compiler version.
First, the C++ interface of TensorFlow should be installed. Note that the version of TensorFlow should be consistent with the Python interface. You may follow the instruction or run the script $deepmd_source_dir/source/install/build_tf.py to install the corresponding C++ interface.
If you have installed PyTorch using pip, you can use libtorch inside the PyTorch Python package. You can also download libtorch prebuilt library from the PyTorch website.
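If PyTorch was installed with pip, one way to locate the bundled libtorch for the CMake step in the next section is to ask PyTorch itself, as in this sketch:
# Prints the CMake prefix path of the libtorch shipped inside the pip-installed PyTorch package;
# the printed path can be used as $torch_root / CMAKE_PREFIX_PATH below.
python -c "import torch; print(torch.utils.cmake_prefix_path)"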
2.2.2.2. Install DeePMD-kit’s C++ interface
Now go to the source code directory of DeePMD-kit and create a build directory.
cd $deepmd_source_dir/source
mkdir build
cd build
The installation requires CMake 3.16 or later for the CPU version, CMake 3.23 or later for CUDA support, and CMake 3.21 or later for ROCM support. One can install CMake via pip if it is not installed or the installed version does not satisfy the requirement:
pip install -U cmake
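After installing or upgrading, one may double-check that the requirement above is met:
# Verify the CMake version against the requirements listed above.
cmake --version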
You must enable at least one backend. If you enable two or more backends, these backend libraries must be built in a compatible way, e.g. using the same _GLIBCXX_USE_CXX11_ABI flag. We recommend using conda packages from conda-forge, which are usually compatible with each other.
Assuming you have activated the TensorFlow Python environment and want to install DeePMD-kit into the path $deepmd_root, execute CMake:
cmake -DENABLE_TENSORFLOW=TRUE -DUSE_TF_PYTHON_LIBS=TRUE -DCMAKE_INSTALL_PREFIX=$deepmd_root ..
If you specify -DUSE_TF_PYTHON_LIBS=FALSE, you need to give the location where TensorFlow's C++ interface is installed to -DTENSORFLOW_ROOT=${tensorflow_root}.
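In that case, the invocation could look like the following sketch, assuming the TensorFlow C++ library was installed to $tensorflow_root:
# A sketch: build against a manually installed TensorFlow C++ library instead of the Python libraries.
cmake -DENABLE_TENSORFLOW=TRUE -DUSE_TF_PYTHON_LIBS=FALSE -DTENSORFLOW_ROOT=${tensorflow_root} -DCMAKE_INSTALL_PREFIX=$deepmd_root ..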
Assuming you have installed PyTorch (either the Python or the C++ interface) to $torch_root, execute CMake:
cmake -DENABLE_PYTORCH=TRUE -DCMAKE_PREFIX_PATH=$torch_root -DCMAKE_INSTALL_PREFIX=$deepmd_root ..
You can specify -DUSE_PT_PYTHON_LIBS=TRUE to use libtorch from the Python installation, but be aware that PyTorch PyPI packages are still built with _GLIBCXX_USE_CXX11_ABI=0, which may not be compatible with other libraries.
cmake -DENABLE_PYTORCH=TRUE -DUSE_PT_PYTHON_LIBS=TRUE -DCMAKE_INSTALL_PREFIX=$deepmd_root ..
One may add the following arguments to cmake:
CMake Arguments | Allowed value | Default value | Usage |
---|---|---|---|
-DENABLE_TENSORFLOW=<value> | TRUE or FALSE | FALSE | Whether building the TensorFlow backend. |
-DENABLE_PYTORCH=<value> | TRUE or FALSE | FALSE | Whether building the PyTorch backend. |
-DTENSORFLOW_ROOT=<value> | Path | - | The path to TensorFlow's C++ interface. |
-DCMAKE_INSTALL_PREFIX=<value> | Path | - | The path where DeePMD-kit will be installed. |
-DUSE_CUDA_TOOLKIT=<value> | TRUE or FALSE | FALSE | If TRUE, build GPU support with the CUDA toolkit. |
-DCUDAToolkit_ROOT=<value> | Path | Detected automatically | The path to the CUDA toolkit directory. CUDA 9.0 or later is supported. NVCC is required. |
-DUSE_ROCM_TOOLKIT=<value> | TRUE or FALSE | FALSE | If TRUE, build GPU support with the ROCM toolkit. |
-DCMAKE_HIP_COMPILER_ROCM_ROOT=<value> | Path | Detected automatically | The path to the ROCM toolkit directory. |
-DLAMMPS_SOURCE_ROOT=<value> | Path | - | Only necessary for the LAMMPS plugin mode. The path to the LAMMPS source code. LAMMPS 8Apr2021 or later is supported. If not assigned, the plugin mode will not be enabled. |
-DUSE_TF_PYTHON_LIBS=<value> | TRUE or FALSE | FALSE | If TRUE, build the C++ interface with TensorFlow's Python libraries; the pre-installed TensorFlow C++ library is then not required. |
-DUSE_PT_PYTHON_LIBS=<value> | TRUE or FALSE | FALSE | If TRUE, build the C++ interface with libtorch from PyTorch's Python installation; the pre-installed PyTorch C++ library is then not required. |
-DENABLE_NATIVE_OPTIMIZATION=<value> | TRUE or FALSE | FALSE | Enable compilation optimization for the native machine's CPU type. Do not enable it if generated code will run on different CPUs. |
-DCMAKE_<LANG>_FLAGS=<value> (<LANG>=CXX, CUDA or HIP) | str | - | Default compilation flags to be used when compiling <LANG> files. |
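Putting several of these arguments together, a build that enables both backends with CUDA support might look like the following sketch (the flags are taken from the table above; adjust the paths to your system):
# A sketch: enable both backends, reuse TensorFlow's Python libraries, point CMake at libtorch
# via $torch_root, and build CUDA support.
cmake -DENABLE_TENSORFLOW=TRUE -DUSE_TF_PYTHON_LIBS=TRUE \
      -DENABLE_PYTORCH=TRUE -DCMAKE_PREFIX_PATH=$torch_root \
      -DUSE_CUDA_TOOLKIT=TRUE -DCMAKE_INSTALL_PREFIX=$deepmd_root ..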
If CMake has been executed successfully, run the following make commands to build the package:
make -j4
make install
The option -j4 means using 4 processes in parallel. You may want to use a different number according to your hardware.
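For example, to use all available CPU cores on Linux, a common sketch relies on the nproc utility:
# Build with as many parallel jobs as there are processing units, then install.
make -j$(nproc)
make install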
If everything works fine, you will have the executables and libraries installed in $deepmd_root/bin and $deepmd_root/lib:
$ ls $deepmd_root/bin
$ ls $deepmd_root/lib