deepmd.dpmodel.descriptor.dpa1

Module Contents

Classes

DescrptDPA1

Attention-based descriptor which is proposed in the pretrainable DPA-1[1] model.

NeighborGatedAttention

The unit operation of a native model.

NeighborGatedAttentionLayer

The unit operation of a native model.

GatedAttentionLayer

The unit operation of a native model.

Functions

np_softmax(x[, axis])

np_normalize(x[, axis])

Attributes

__version__

deepmd.dpmodel.descriptor.dpa1.__version__ = 'unknown'[source]
deepmd.dpmodel.descriptor.dpa1.np_softmax(x, axis=-1)[source]
deepmd.dpmodel.descriptor.dpa1.np_normalize(x, axis=-1)[source]
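For reference, a minimal NumPy sketch of what these two helpers compute (the in-module versions may handle masked or degenerate entries differently):

import numpy as np

def np_softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(x)
    return e / np.sum(e, axis=axis, keepdims=True)

def np_normalize(x, axis=-1):
    # L2-normalize vectors along the given axis.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)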
class deepmd.dpmodel.descriptor.dpa1.DescrptDPA1(rcut: float, rcut_smth: float, sel: List[int] | int, ntypes: int, neuron: List[int] = [25, 50, 100], axis_neuron: int = 8, tebd_dim: int = 8, tebd_input_mode: str = 'concat', resnet_dt: bool = False, trainable: bool = True, type_one_side: bool = False, attn: int = 128, attn_layer: int = 2, attn_dotr: bool = True, attn_mask: bool = False, exclude_types: List[List[int]] = [], env_protection: float = 0.0, set_davg_zero: bool = False, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, scaling_factor=1.0, normalize: bool = True, temperature: float | None = None, trainable_ln: bool = True, ln_eps: float | None = 1e-05, smooth_type_embedding: bool = True, concat_output_tebd: bool = True, spin: Any | None = None, seed: int | None = None)[source]

Bases: deepmd.dpmodel.NativeOP, deepmd.dpmodel.descriptor.base_descriptor.BaseDescriptor

Attention-based descriptor which is proposed in the pretrainable DPA-1[1] model.

This descriptor, \(\mathcal{D}^i \in \mathbb{R}^{M \times M_{<}}\), is given by

\[\mathcal{D}^i = \frac{1}{N_c^2}(\hat{\mathcal{G}}^i)^T \mathcal{R}^i (\mathcal{R}^i)^T \hat{\mathcal{G}}^i_<,\]

where \(\hat{\mathcal{G}}^i\) represents the embedding matrix \(\mathcal{G}^i\) after the additional self-attention mechanism and \(\mathcal{R}^i\) is defined by the full case in the se_e2_a descriptor. Note that we obtain \(\mathcal{G}^i\) using the type embedding method by default in this descriptor.
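As a shape check only, the contraction above can be sketched with NumPy arrays of illustrative sizes (all values below are random placeholders, not taken from a trained model):

import numpy as np

nc, m, m_lt = 120, 100, 8            # neighbors N_c, full width M, axis width M_<
g_hat = np.random.rand(nc, m)        # \hat{G}^i: embedding matrix after self-attention
r_i = np.random.rand(nc, 4)          # R^i: environment matrix rows (s, s*x/r, s*y/r, s*z/r)
g_hat_lt = g_hat[:, :m_lt]           # \hat{G}^i_<: first M_< columns of \hat{G}^i

# D^i = (1/N_c^2) (\hat{G}^i)^T R^i (R^i)^T \hat{G}^i_<  ->  an M x M_< matrix per atom
d_i = (g_hat.T @ r_i) @ (r_i.T @ g_hat_lt) / nc**2
print(d_i.shape)                     # (100, 8)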

To perform the self-attention mechanism, the queries \(\mathcal{Q}^{i,l} \in \mathbb{R}^{N_c\times d_k}\), keys \(\mathcal{K}^{i,l} \in \mathbb{R}^{N_c\times d_k}\), and values \(\mathcal{V}^{i,l} \in \mathbb{R}^{N_c\times d_v}\) are first obtained:

\[\left(\mathcal{Q}^{i,l}\right)_{j}=Q_{l}\left(\left(\mathcal{G}^{i,l-1}\right)_{j}\right),\]
\[\left(\mathcal{K}^{i,l}\right)_{j}=K_{l}\left(\left(\mathcal{G}^{i,l-1}\right)_{j}\right),\]
\[\left(\mathcal{V}^{i,l}\right)_{j}=V_{l}\left(\left(\mathcal{G}^{i,l-1}\right)_{j}\right),\]

where \(Q_{l}\), \(K_{l}\), \(V_{l}\) represent three trainable linear transformations that output the queries and keys of dimension \(d_k\) and values of dimension \(d_v\), and \(l\) is the index of the attention layer. The input embedding matrix to the attention layers, denoted by \(\mathcal{G}^{i,0}\), is chosen as the two-body embedding matrix.

Then the scaled dot-product attention method is adopted:

\[A(\mathcal{Q}^{i,l}, \mathcal{K}^{i,l}, \mathcal{V}^{i,l}, \mathcal{R}^{i,l})=\varphi\left(\mathcal{Q}^{i,l}, \mathcal{K}^{i,l},\mathcal{R}^{i,l}\right)\mathcal{V}^{i,l},\]

where \(\varphi\left(\mathcal{Q}^{i,l}, \mathcal{K}^{i,l},\mathcal{R}^{i,l}\right) \in \mathbb{R}^{N_c\times N_c}\) are the attention weights. In the original attention method, one typically has \(\varphi\left(\mathcal{Q}^{i,l}, \mathcal{K}^{i,l}\right)=\mathrm{softmax}\left(\frac{\mathcal{Q}^{i,l} (\mathcal{K}^{i,l})^{T}}{\sqrt{d_{k}}}\right)\), with \(\sqrt{d_{k}}\) being the normalization temperature. This is slightly modified to incorporate the angular information:

\[\varphi\left(\mathcal{Q}^{i,l}, \mathcal{K}^{i,l},\mathcal{R}^{i,l}\right) = \mathrm{softmax}\left(\frac{\mathcal{Q}^{i,l} (\mathcal{K}^{i,l})^{T}}{\sqrt{d_{k}}}\right) \odot \hat{\mathcal{R}}^{i}(\hat{\mathcal{R}}^{i})^{T},\]
where \(\hat{\mathcal{R}}^{i} \in \mathbb{R}^{N_c\times 3}\) denotes the normalized relative coordinates, \(\hat{\mathcal{R}}^{i}_{j} = \frac{\boldsymbol{r}_{ij}}{\lVert \boldsymbol{r}_{ij} \rVert}\), and \(\odot\) means element-wise multiplication.

Then layer normalization is added in a residual way to finally obtain the self-attention local embedding matrix \(\hat{\mathcal{G}}^{i} = \mathcal{G}^{i,L_a}\) after \(L_a\) attention layers:

\[\mathcal{G}^{i,l} = \mathcal{G}^{i,l-1} + \mathrm{LayerNorm}(A(\mathcal{Q}^{i,l}, \mathcal{K}^{i,l}, \mathcal{V}^{i,l}, \mathcal{R}^{i,l})).\]
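Putting the attention equations together, a minimal single-layer sketch using this module's np_softmax and np_normalize helpers (the neighbor mask, smooth switch, and layer normalization of the real implementation are omitted; all shapes and weights are illustrative placeholders):

import numpy as np
from deepmd.dpmodel.descriptor.dpa1 import np_softmax, np_normalize

nc, d_k = 120, 128                               # neighbors N_c, attention width d_k (= d_v here)
g_prev = np.random.rand(nc, d_k)                 # G^{i,l-1}: input embedding matrix
r_hat = np_normalize(np.random.rand(nc, 3))      # \hat{R}^i: normalized relative coordinates
w_q, w_k, w_v = (np.random.rand(d_k, d_k) for _ in range(3))  # stand-ins for Q_l, K_l, V_l

q, k, v = g_prev @ w_q, g_prev @ w_k, g_prev @ w_v
weights = np_softmax(q @ k.T / np.sqrt(d_k), axis=-1)  # scaled dot-product attention
weights = weights * (r_hat @ r_hat.T)                  # angular gate (attn_dotr)
g_new = g_prev + weights @ v                           # residual update; LayerNorm omitted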
Parameters:
rcut: float

The cut-off radius \(r_c\)

rcut_smth: float

From where the environment matrix should be smoothed \(r_s\)

sel: list[int], int

list[int]: sel[i] specifies the maximum number of type-i atoms within the cut-off radius. int: the total maximum number of atoms within the cut-off radius.

ntypes: int

Number of element types

neuron: list[int]

Number of neurons in each hidden layer of the embedding net \(\mathcal{N}\)

axis_neuron: int

Number of the axis neuron \(M_2\) (number of columns of the sub-matrix of the embedding matrix)

tebd_dim: int

Dimension of the type embedding

tebd_input_mode: str

The way to mix the type embeddings. Supported options are concat. (TODO need to support stripped_type_embedding option)

resnet_dt: bool

Time-step dt in the resnet construction: y = x + dt * phi(Wx + b)

trainable: bool

If the weights of this descriptor are trainable.

trainable_ln: bool

Whether to use trainable shift and scale weights in layer normalization.

ln_eps: float, Optional

The epsilon value for layer normalization.

type_one_side: bool

If ‘False’, type embeddings of both neighbor and central atoms are considered. If ‘True’, only type embeddings of neighbor atoms are considered. Default is ‘False’.

attn: int

Hidden dimension of the attention vectors

attn_layer: int

Number of attention layers

attn_dotr: bool

Whether to dot the angular gate into the attention weights.

attn_mask: bool

Whether to mask the diagonal of the attention weights. Only False is supported, to keep consistent with other backend references; the True option is not implemented in this version.

exclude_types: List[List[int]]

The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.

env_protection: float

Protection parameter to prevent division by zero errors during environment matrix calculations.

set_davg_zero: bool

Set the shift of embedding net input to zero.

activation_function: str

The activation function in the embedding net. Supported options are “linear”, “softplus”, “relu6”, “relu”, “tanh”, “sigmoid”, “none”, “gelu”, “gelu_tf”.

precision: str

The precision of the embedding net parameters. Supported options are “float64”, “float16”, “float32”, “default”.

scaling_factor: float

The scaling factor of normalization in calculations of attention weights. If temperature is None, the scaling of attention weights is (N_dim * scaling_factor)**0.5

normalize: bool

Whether to normalize the hidden vectors in attention weights calculation.

temperature: float

If not None, the scaling of attention weights is temperature itself.

smooth_type_embedding: bool

Whether to use smooth process in attention weights calculation.

concat_output_tebd: bool

Whether to concat type embedding at the output of the descriptor.

spin

The old implementation of deepspin. Only None is supported, to keep consistent with other backend references; non-None options are not implemented in this version.

References

[1]

Duo Zhang, Hangrui Bi, Fu-Zhi Dai, Wanrun Jiang, Linfeng Zhang, and Han Wang. 2022. DPA-1: Pretraining of Attention-based Deep Potential Model for Molecular Simulation. arXiv preprint arXiv:2208.08236.
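A minimal construction sketch; the values below are illustrative, not recommended settings:

from deepmd.dpmodel.descriptor.dpa1 import DescrptDPA1

descriptor = DescrptDPA1(
    rcut=6.0,        # cut-off radius r_c in the model's length unit
    rcut_smth=0.5,   # switching starts at r_s
    sel=120,         # total maximum number of neighbors within r_c
    ntypes=2,        # number of element types
    attn_layer=2,    # number of attention layers L_a
)
print(descriptor.get_dim_out())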

property dim_out[source]

Returns the output dimension of this descriptor.

__setitem__(key, value)[source]
__getitem__(key)[source]
get_dim_out()[source]

Returns the output dimension of this descriptor.

get_dim_emb()[source]

Returns the embedding (g2) dimension of this descriptor.

get_rcut()[source]

Returns cutoff radius.

get_sel()[source]

Returns the number of selected atoms in the cut-off radius.

mixed_types()[source]

If true, the descriptor 1. assumes the total number of atoms is aligned across frames; 2. requires a neighbor list that does not distinguish different atomic types.

If false, the descriptor 1. assumes the total number of atoms of each atom type is aligned across frames; 2. requires a neighbor list that distinguishes different atomic types.

abstract share_params(base_class, shared_level, resume=False)[source]

Share the parameters of self with base_class at the given shared_level during multitask training. If not starting from a checkpoint (resume is False), some separate parameters (e.g. mean and stddev) will be re-calculated across different classes.

get_ntypes() int[source]

Returns the number of element types.

abstract compute_input_stats(merged: List[dict], path: deepmd.utils.path.DPPath | None = None)[source]

Update mean and stddev for descriptor elements.

cal_g(ss, embedding_idx)[source]
reinit_exclude(exclude_types: List[Tuple[int, int]] = [])[source]
call(coord_ext, atype_ext, nlist, mapping: numpy.ndarray | None = None)[source]

Compute the descriptor.

Parameters:
coord_ext

The extended coordinates of atoms. shape: nf x (nall x 3)

atype_ext

The extended atom types. shape: nf x nall

nlist

The neighbor list. shape: nf x nloc x nnei

mapping

The index mapping from the extended to the local region. Not used by this descriptor.

Returns:
descriptor

The descriptor. shape: nf x nloc x (ng x axis_neuron)

gr

The rotationally equivariant and permutationally invariant single particle representation. shape: nf x nloc x ng x 3

g2

The rotationally invariant pair-particle representation. This descriptor returns None.

h2

The rotationally equivariant pair-particle representation. This descriptor returns None.

sw

The smooth switch function.
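Continuing from the construction sketch above, a toy two-atom example showing the expected input shapes; in a real workflow the extended coordinates, types, and neighbor list come from the neighbor-list utilities rather than being built by hand:

import numpy as np

nf, nall, nloc, nnei = 1, 2, 2, 120   # nnei matches the sel used at construction
coord_ext = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 1.5]])   # nf x (nall x 3)
atype_ext = np.zeros((nf, nall), dtype=int)               # nf x nall

# Hand-built neighbor list: each local atom sees the other; -1 pads unused slots.
nlist = -np.ones((nf, nloc, nnei), dtype=int)
nlist[0, 0, 0] = 1
nlist[0, 1, 0] = 0

descrpt, gr, g2, h2, sw = descriptor.call(coord_ext, atype_ext, nlist)
print(descrpt.shape)   # (nf, nloc, ng * axis_neuron), plus tebd_dim if concat_output_tebd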

serialize() dict[source]

Serialize the descriptor to dict.

classmethod deserialize(data: dict) DescrptDPA1[source]

Deserialize from dict.
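A round-trip sketch using the descriptor built above:

data = descriptor.serialize()              # plain dict, suitable for JSON/YAML storage
restored = DescrptDPA1.deserialize(data)   # reconstruct an equivalent descriptor
assert restored.get_rcut() == descriptor.get_rcut()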

classmethod update_sel(global_jdata: dict, local_jdata: dict)[source]

Update the selection and perform neighbor statistics.

Parameters:
global_jdata: dict

The global data, containing the training section

local_jdata: dict

The local data referring to the current class

class deepmd.dpmodel.descriptor.dpa1.NeighborGatedAttention(layer_num: int, nnei: int, embed_dim: int, hidden_dim: int, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, trainable_ln: bool = True, ln_eps: float = 1e-05, smooth: bool = True, precision: str = DEFAULT_PRECISION)[source]

Bases: deepmd.dpmodel.NativeOP

The unit operation of a native model.

call(input_G, nei_mask, input_r: numpy.ndarray | None = None, sw: numpy.ndarray | None = None)[source]

Forward pass in NumPy implementation.

__getitem__(key)[source]
__setitem__(key, value)[source]
serialize()[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) NeighborGatedAttention[source]

Deserialize the networks from a dict.

Parameters:
data: dict

The dict to deserialize from.

class deepmd.dpmodel.descriptor.dpa1.NeighborGatedAttentionLayer(nnei: int, embed_dim: int, hidden_dim: int, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, trainable_ln: bool = True, ln_eps: float = 1e-05, smooth: bool = True, precision: str = DEFAULT_PRECISION)[source]

Bases: deepmd.dpmodel.NativeOP

The unit operation of a native model.

call(x, nei_mask, input_r: numpy.ndarray | None = None, sw: numpy.ndarray | None = None)[source]

Forward pass in NumPy implementation.

serialize() dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data) NeighborGatedAttentionLayer[source]

Deserialize the networks from a dict.

Parameters:
data: dict

The dict to deserialize from.

class deepmd.dpmodel.descriptor.dpa1.GatedAttentionLayer(nnei: int, embed_dim: int, hidden_dim: int, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, bias: bool = True, smooth: bool = True, precision: str = DEFAULT_PRECISION)[source]

Bases: deepmd.dpmodel.NativeOP

The unit operation of a native model.

call(query, nei_mask, input_r=None, sw=None, attnw_shift=20.0)[source]

Forward pass in NumPy implementation.

serialize()[source]
classmethod deserialize(data)[source]