deepmd.tf.fit

Package Contents

Classes

DipoleFittingSeA

Fit the atomic dipole with descriptor se_a.

DOSFitting

Fitting the density of states (DOS) of the system.

EnerFitting

Fitting the energy of the system. The force and the virial can also be trained.

Fitting

A class to remove type from input arguments.

GlobalPolarFittingSeA

Fit the system polarizability with descriptor se_a.

PolarFittingSeA

Fit the atomic polarizability with descriptor se_a.

class deepmd.tf.fit.DipoleFittingSeA(ntypes: int, dim_descrpt: int, embedding_width: int, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, sel_type: List[int] | None = None, seed: int | None = None, activation_function: str = 'tanh', precision: str = 'default', uniform_seed: bool = False, mixed_types: bool = False, **kwargs)[source]

Bases: deepmd.tf.fit.fitting.Fitting

Fit the atomic dipole with descriptor se_a.

Parameters:
ntypes

The number of atom types (ntypes) of the descriptor \(\mathcal{D}\)

dim_descrpt

The dimension of the descriptor \(\mathcal{D}\)

embedding_width

The rotation matrix dimension of the descriptor \(\mathcal{D}\)

neuron : List[int]

Number of neurons in each hidden layer of the fitting net

resnet_dt : bool

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

sel_type : List[int]

The atom types selected to have an atomic dipole prediction. If None, all atoms are selected.

seed : int

Random seed for initializing the network parameters.

activation_function : str

The activation function in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision : str

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

uniform_seed

Only for backward compatibility; restores the old behavior of using the random seed.

mixed_types : bool

If true, use a uniform fitting net for all atom types, otherwise use different fitting nets for different atom types.
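
Examples

A minimal construction sketch, assuming deepmd-kit with the TensorFlow backend is installed; the dimension values are illustrative, not prescribed by this API:

>>> from deepmd.tf.fit import DipoleFittingSeA
>>> # ntypes, dim_descrpt and embedding_width must match the descriptor in a real model
>>> fit = DipoleFittingSeA(ntypes=2, dim_descrpt=100, embedding_width=16)
>>> fit.get_out_size()
3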

get_sel_type() int[source]

Get selected type.

get_out_size() int[source]

Get the output size. Should be 3.

_build_lower(start_index, natoms, inputs, rot_mat, suffix='', reuse=None)[source]
build(input_d: deepmd.tf.env.tf.Tensor, rot_mat: deepmd.tf.env.tf.Tensor, natoms: deepmd.tf.env.tf.Tensor, input_dict: dict | None = None, reuse: bool | None = None, suffix: str = '') deepmd.tf.env.tf.Tensor[source]

Build the computational graph for fitting net.

Parameters:
input_d

The input descriptor

rot_mat

The rotation matrix from the descriptor.

natoms

The number of atoms. This tensor has the length of Ntypes + 2:
natoms[0]: number of local atoms;
natoms[1]: total number of atoms held by this processor;
natoms[i]: 2 <= i < Ntypes + 2, number of type i atoms.

input_dict

Additional dict for inputs.

reuse

Whether the weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns:
dipole

The atomic dipole.
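
As an illustration of the natoms layout described above, a hedged NumPy sketch with hypothetical atom counts:

>>> import numpy as np
>>> # 2 atom types; 3 atoms of type 0 and 1 atom of type 1; no ghost atoms,
>>> # so the local count equals the total count held by this processor
>>> natoms = np.array([4, 4, 3, 1], dtype=np.int32)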

init_variables(graph: deepmd.tf.env.tf.Graph, graph_def: deepmd.tf.env.tf.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict.

Parameters:
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

Suffix of the name scope

enable_mixed_precision(mixed_prec: dict | None = None) None[source]

Receive the mixed precision setting.

Parameters:
mixed_prec

The mixed precision setting used in the embedding net

get_loss(loss: dict, lr) deepmd.tf.loss.loss.Loss[source]

Get the loss function.

Parameters:
loss : dict

the loss dict

lr : LearningRateExp

the learning rate

Returns:
Loss

the loss function

serialize(suffix: str) dict[source]

Serialize the model.

Returns:
dict

The serialized data

classmethod deserialize(data: dict, suffix: str)[source]

Deserialize the model.

Parameters:
data : dict

The serialized data

Returns:
Model

The deserialized model

class deepmd.tf.fit.DOSFitting(ntypes: int, dim_descrpt: int, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, numb_dos: int = 300, rcond: float | None = None, trainable: List[bool] | None = None, seed: int | None = None, activation_function: str = 'tanh', precision: str = 'default', uniform_seed: bool = False, layer_name: List[str | None] | None = None, use_aparam_as_mask: bool = False, mixed_types: bool = False, **kwargs)[source]

Bases: deepmd.tf.fit.fitting.Fitting

Fitting the density of states (DOS) of the system. The energy should be shifted by the Fermi level.

Parameters:
ntypes

The number of atom types (ntypes) of the descriptor \(\mathcal{D}\)

dim_descrpt

The dimension of the descriptor \(\mathcal{D}\)

neuron

Number of neurons \(N\) in each hidden layer of the fitting net

resnet_dt

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

numb_fparam

Number of frame parameter

numb_aparam

Number of atomic parameter

numb_dos

Number of gridpoints on which the DOS is evaluated (NEDOS in VASP)

rcond

The condition number for the regression of atomic energy.

trainable

If the weights of fitting net are trainable. Suppose that we have \(N_l\) hidden layers in the fitting net, this list is of length \(N_l + 1\), specifying if the hidden layers and the output layer are trainable.

seed

Random seed for initializing the network parameters.

activation_function

The activation function \(\boldsymbol{\phi}\) in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

uniform_seed

Only for backward compatibility; restores the old behavior of using the random seed.

layer_name : list[Optional[str]], optional

The name of each layer. If two layers, either in the same fitting or in different fittings, have the same name, they will share the same neural network parameters.

use_aparam_as_mask : bool, optional

If True, the atomic parameters will be used as a mask that determines whether an atom is real or virtual, and aparam will not be used as atomic parameters for the embedding.

mixed_types : bool

If true, use a uniform fitting net for all atom types, otherwise use different fitting nets for different atom types.
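
Examples

A minimal construction sketch, assuming deepmd-kit with the TensorFlow backend is installed; the values are illustrative:

>>> from deepmd.tf.fit import DOSFitting
>>> fit = DOSFitting(ntypes=2, dim_descrpt=100, numb_dos=250)
>>> fit.get_numb_dos()
250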

get_numb_fparam() int[source]

Get the number of frame parameters.

get_numb_aparam() int[source]

Get the number of atomic parameters.

get_numb_dos() int[source]

Get the number of gridpoints in energy space.

compute_output_stats(all_stat: dict, mixed_type: bool = False) None[source]

Compute the output statistics.

Parameters:
all_stat

Must have the following components: all_stat['dos'] of shape n_sys x n_batch x n_frame x numb_dos. Can be prepared by model.make_stat_input.

mixed_type

Whether to perform the mixed_type mode. If True, the input data has the mixed_type format (see doc/model/train_se_atten.md), in which frames in a system may have different natoms_vec(s), with the same nloc.
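
A hedged sketch of the all_stat layout stated above (the counts are hypothetical; in practice the dict is prepared by model.make_stat_input):

>>> import numpy as np
>>> # 1 system, 1 batch, 4 frames, numb_dos = 300 gridpoints
>>> all_stat = {"dos": np.zeros((1, 1, 4, 300))}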

_compute_output_stats(all_stat, rcond=0.001, mixed_type=False)[source]
compute_input_stats(all_stat: dict, protection: float = 0.01) None[source]

Compute the input statistics.

Parameters:
all_stat

If numb_fparam > 0, must have all_stat['fparam']; if numb_aparam > 0, must have all_stat['aparam']. Can be prepared by model.make_stat_input.

protection

Divide-by-zero protection

_compute_std(sumv2, sumv, sumn)[source]
_build_lower(start_index, natoms, inputs, fparam=None, aparam=None, bias_dos=0.0, type_suffix='', suffix='', reuse=None)[source]
build(inputs: deepmd.tf.env.tf.Tensor, natoms: deepmd.tf.env.tf.Tensor, input_dict: dict | None = None, reuse: bool | None = None, suffix: str = '') deepmd.tf.env.tf.Tensor[source]

Build the computational graph for fitting net.

Parameters:
inputs

The input descriptor

input_dict

Additional dict for inputs. If numb_fparam > 0, should have input_dict['fparam']; if numb_aparam > 0, should have input_dict['aparam'].

natoms

The number of atoms. This tensor has the length of Ntypes + 2:
natoms[0]: number of local atoms;
natoms[1]: total number of atoms held by this processor;
natoms[i]: 2 <= i < Ntypes + 2, number of type i atoms.

reuse

Whether the weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns:
dos

The system density of states

init_variables(graph: deepmd.tf.env.tf.Graph, graph_def: deepmd.tf.env.tf.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict.

Parameters:
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

Suffix of the name scope

enable_mixed_precision(mixed_prec: dict | None = None) None[source]

Receive the mixed precision setting.

Parameters:
mixed_prec

The mixed precision setting used in the embedding net

get_loss(loss: dict, lr) deepmd.tf.loss.loss.Loss[source]

Get the loss function.

Parameters:
loss : dict

the loss dict

lr : LearningRateExp

the learning rate

Returns:
Loss

the loss function

classmethod deserialize(data: dict, suffix: str = '')[source]

Deserialize the model.

Parameters:
data : dict

The serialized data

Returns:
Model

The deserialized model

serialize(suffix: str = '') dict[source]

Serialize the model.

Returns:
dict

The serialized data

class deepmd.tf.fit.EnerFitting(ntypes: int, dim_descrpt: int, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, rcond: float | None = None, tot_ener_zero: bool = False, trainable: List[bool] | None = None, seed: int | None = None, atom_ener: List[float] = [], activation_function: str = 'tanh', precision: str = 'default', uniform_seed: bool = False, layer_name: List[str | None] | None = None, use_aparam_as_mask: bool = False, spin: deepmd.tf.utils.spin.Spin | None = None, mixed_types: bool = False, **kwargs)[source]

Bases: deepmd.tf.fit.fitting.Fitting

Fitting the energy of the system. The force and the virial can also be trained.

The potential energy \(E\) is a fitting network function of the descriptor \(\mathcal{D}\):

\[E(\mathcal{D}) = \mathcal{L}^{(n)} \circ \mathcal{L}^{(n-1)} \circ \cdots \circ \mathcal{L}^{(1)} \circ \mathcal{L}^{(0)}\]

The first \(n\) hidden layers \(\mathcal{L}^{(0)}, \cdots, \mathcal{L}^{(n-1)}\) are given by

\[\mathbf{y}=\mathcal{L}(\mathbf{x};\mathbf{w},\mathbf{b})= \boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b})\]

where \(\mathbf{x} \in \mathbb{R}^{N_1}\) is the input vector and \(\mathbf{y} \in \mathbb{R}^{N_2}\) is the output vector. \(\mathbf{w} \in \mathbb{R}^{N_1 \times N_2}\) and \(\mathbf{b} \in \mathbb{R}^{N_2}\) are weights and biases, respectively, both of which are trainable if trainable[i] is True. \(\boldsymbol{\phi}\) is the activation function.

The output layer \(\mathcal{L}^{(n)}\) is given by

\[\mathbf{y}=\mathcal{L}^{(n)}(\mathbf{x};\mathbf{w},\mathbf{b})= \mathbf{x}^T\mathbf{w}+\mathbf{b}\]

where \(\mathbf{x} \in \mathbb{R}^{N_{n-1}}\) is the input vector and \(\mathbf{y} \in \mathbb{R}\) is the output scalar. \(\mathbf{w} \in \mathbb{R}^{N_{n-1}}\) and \(\mathbf{b} \in \mathbb{R}\) are weights and bias, respectively, both of which are trainable if trainable[n] is True.
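
The two equations above amount to a plain multilayer perceptron whose last layer is linear and scalar-valued. A minimal NumPy sketch of that composition (random weights, tanh standing in for the activation, no ResNet shortcut):

import numpy as np

def hidden_layer(x, w, b):
    # y = phi(x^T w + b); tanh stands in for the activation phi
    return np.tanh(x @ w + b)

def output_layer(x, w, b):
    # y = x^T w + b; linear map to a scalar energy
    return x @ w + b

rng = np.random.default_rng(0)
d = rng.normal(size=100)                         # descriptor D of one atom
w0, b0 = rng.normal(size=(100, 120)), np.zeros(120)
w1, b1 = rng.normal(size=(120, 120)), np.zeros(120)
wn, bn = rng.normal(size=120), 0.0
energy = output_layer(hidden_layer(hidden_layer(d, w0, b0), w1, b1), wn, bn)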

Parameters:
ntypes

The number of atom types (ntypes) of the descriptor \(\mathcal{D}\)

dim_descrpt

The dimension of the descriptor \(\mathcal{D}\)

neuron

Number of neurons \(N\) in each hidden layer of the fitting net

resnet_dt

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

numb_fparam

Number of frame parameter

numb_aparam

Number of atomic parameter

rcond

The condition number for the regression of atomic energy.

tot_ener_zero

Force the total energy to zero. Useful for the charge fitting.

trainable

If the weights of fitting net are trainable. Suppose that we have \(N_l\) hidden layers in the fitting net, this list is of length \(N_l + 1\), specifying if the hidden layers and the output layer are trainable.

seed

Random seed for initializing the network parameters.

atom_ener

Specifying atomic energy contribution in vacuum. The set_davg_zero key in the descriptor should be set.

activation_function

The activation function \(\boldsymbol{\phi}\) in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

uniform_seed

Only for backward compatibility; restores the old behavior of using the random seed.

layer_name : list[Optional[str]], optional

The name of each layer. If two layers, either in the same fitting or in different fittings, have the same name, they will share the same neural network parameters.

use_aparam_as_mask : bool, optional

If True, the atomic parameters will be used as a mask that determines whether an atom is real or virtual, and aparam will not be used as atomic parameters for the embedding.

mixed_types : bool

If true, use a uniform fitting net for all atom types, otherwise use different fitting nets for different atom types.
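
Examples

A minimal construction sketch, assuming deepmd-kit with the TensorFlow backend is installed; the values are illustrative:

>>> from deepmd.tf.fit import EnerFitting
>>> fit = EnerFitting(ntypes=2, dim_descrpt=100, numb_fparam=1)
>>> fit.get_numb_fparam()
1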

get_numb_fparam() int[source]

Get the number of frame parameters.

get_numb_aparam() int[source]

Get the number of atomic parameters.

compute_output_stats(all_stat: dict, mixed_type: bool = False) None[source]

Compute the output statistics.

Parameters:
all_stat

Must have the following components: all_stat['energy'] of shape n_sys x n_batch x n_frame. Can be prepared by model.make_stat_input.

mixed_type

Whether to perform the mixed_type mode. If True, the input data has the mixed_type format (see doc/model/train_se_atten.md), in which frames in a system may have different natoms_vec(s), with the same nloc.

_compute_output_stats(all_stat, rcond=0.001, mixed_type=False)[source]
compute_input_stats(all_stat: dict, protection: float = 0.01) None[source]

Compute the input statistics.

Parameters:
all_stat

If numb_fparam > 0, must have all_stat['fparam']; if numb_aparam > 0, must have all_stat['aparam']. Can be prepared by model.make_stat_input.

protection

Divide-by-zero protection

_compute_std(sumv2, sumv, sumn)[source]
_build_lower(start_index, natoms, inputs, fparam=None, aparam=None, bias_atom_e=0.0, type_suffix='', suffix='', reuse=None)[source]
build(inputs: deepmd.tf.env.tf.Tensor, natoms: deepmd.tf.env.tf.Tensor, input_dict: dict | None = None, reuse: bool | None = None, suffix: str = '') deepmd.tf.env.tf.Tensor[source]

Build the computational graph for fitting net.

Parameters:
inputs

The input descriptor

input_dict

Additional dict for inputs. If numb_fparam > 0, should have input_dict['fparam']; if numb_aparam > 0, should have input_dict['aparam'].

natoms

The number of atoms. This tensor has the length of Ntypes + 2:
natoms[0]: number of local atoms;
natoms[1]: total number of atoms held by this processor;
natoms[i]: 2 <= i < Ntypes + 2, number of type i atoms.

reuse

Whether the weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns:
ener

The system energy

init_variables(graph: deepmd.tf.env.tf.Graph, graph_def: deepmd.tf.env.tf.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict.

Parameters:
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

Suffix of the name scope

change_energy_bias(data, frozen_model, origin_type_map, full_type_map, bias_adjust_mode='change-by-statistic', ntest=10) None[source]
enable_mixed_precision(mixed_prec: dict | None = None) None[source]

Receive the mixed precision setting.

Parameters:
mixed_prec

The mixed precision setting used in the embedding net

get_loss(loss: dict, lr) deepmd.tf.loss.loss.Loss[source]

Get the loss function.

Parameters:
loss : dict

The loss function parameters.

lr : LearningRateExp

The learning rate.

Returns:
Loss

The loss function.

classmethod deserialize(data: dict, suffix: str = '')[source]

Deserialize the model.

Parameters:
data : dict

The serialized data

Returns:
Model

The deserialized model

serialize(suffix: str = '') dict[source]

Serialize the model.

Returns:
dict

The serialized data

class deepmd.tf.fit.Fitting[source]

Bases: deepmd.tf.utils.PluginVariant, make_plugin_registry('fitting')

A class to remove type from input arguments.

property precision: deepmd.tf.env.tf.DType

Precision of fitting network.

abstract init_variables(graph: deepmd.tf.env.tf.Graph, graph_def: deepmd.tf.env.tf.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict.

Parameters:
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

Suffix of the name scope

Notes

This method is called by others when the fitting supports initialization from the given variables.

abstract get_loss(loss: dict, lr) deepmd.tf.loss.loss.Loss[source]

Get the loss function.

Parameters:
loss : dict

the loss dict

lr : LearningRateExp

the learning rate

Returns:
Loss

the loss function

classmethod deserialize(data: dict, suffix: str = '') Fitting[source]

Deserialize the fitting.

There is no suffix in a native DP model, but it is important for the TF backend.

Parameters:
data : dict

The serialized data

suffix : str, optional

Name suffix to identify this fitting

Returns:
Fitting

The deserialized fitting

abstract serialize(suffix: str = '') dict[source]

Serialize the fitting.

There is no suffix in a native DP model, but it is important for the TF backend.

Parameters:
suffix : str, optional

Name suffix to identify this fitting

Returns:
dict

The serialized data
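
A hedged sketch of the intended round-trip on a concrete subclass; in the TF backend the fitting's network variables must already exist (for example, restored from a trained model) for serialize to succeed:

>>> from deepmd.tf.fit import EnerFitting
>>> data = fit.serialize(suffix="")              # fit: a trained EnerFitting instance
>>> restored = EnerFitting.deserialize(data, suffix="")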

serialize_network(ntypes: int, ndim: int, in_dim: int, neuron: List[int], activation_function: str, resnet_dt: bool, variables: dict, out_dim: int | None = 1, suffix: str = '') dict[source]

Serialize network.

Parameters:
ntypes : int

The number of types

ndim : int

The dimension of elements

in_dim : int

The input dimension

neuron : List[int]

The neuron list

activation_function : str

The activation function

resnet_dt : bool

Whether to use resnet

variables : dict

The input variables

suffix : str, optional

The suffix of the scope

out_dim : int, optional

The output dimension

Returns:
dict

The converted network data

classmethod deserialize_network(data: dict, suffix: str = '') dict[source]

Deserialize network.

Parameters:
data : dict

The input network data

suffix : str, optional

The suffix of the scope

Returns:
variables : dict

The input variables

class deepmd.tf.fit.GlobalPolarFittingSeA(descrpt: deepmd.tf.env.tf.Tensor, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, sel_type: List[int] | None = None, fit_diag: bool = True, scale: List[float] | None = None, diag_shift: List[float] | None = None, seed: int | None = None, activation_function: str = 'tanh', precision: str = 'default')[source]

Fit the system polarizability with descriptor se_a.

Parameters:
descrpt : tf.Tensor

The descriptor

neuron : List[int]

Number of neurons in each hidden layer of the fitting net

resnet_dt : bool

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

sel_type : List[int]

The atom types selected to have an atomic polarizability prediction

fit_diag : bool

Fit the diagonal part of the rotational invariant polarizability matrix, which will be converted to normal polarizability matrix by contracting with the rotation matrix.

scale : List[float]

The output of the fitting net (polarizability matrix) for type i atom will be scaled by scale[i]

diag_shift : List[float]

The diagonal part of the polarizability matrix of type i will be shifted by diag_shift[i]. The shift operation is carried out after scale.

seed : int

Random seed for initializing the network parameters.

activation_function : str

The activation function in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision : str

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

get_sel_type() int[source]

Get selected atom types.

get_out_size() int[source]

Get the output size. Should be 9.

build(input_d, rot_mat, natoms, input_dict: dict | None = None, reuse=None, suffix='') deepmd.tf.env.tf.Tensor[source]

Build the computational graph for fitting net.

Parameters:
input_d

The input descriptor

rot_mat

The rotation matrix from the descriptor.

natoms

The number of atoms. This tensor has the length of Ntypes + 2:
natoms[0]: number of local atoms;
natoms[1]: total number of atoms held by this processor;
natoms[i]: 2 <= i < Ntypes + 2, number of type i atoms.

input_dict

Additional dict for inputs.

reuse

Whether the weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns:
polar

The system polarizability

init_variables(graph: deepmd.tf.env.tf.Graph, graph_def: deepmd.tf.env.tf.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict.

Parameters:
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

Suffix of the name scope

enable_mixed_precision(mixed_prec: dict | None = None) None[source]

Receive the mixed precision setting.

Parameters:
mixed_prec

The mixed precision setting used in the embedding net

get_loss(loss: dict, lr) deepmd.tf.loss.loss.Loss[source]

Get the loss function.

Parameters:
loss : dict

the loss dict

lr : LearningRateExp

the learning rate

Returns:
Loss

the loss function

class deepmd.tf.fit.PolarFittingSeA(ntypes: int, dim_descrpt: int, embedding_width: int, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, sel_type: List[int] | None = None, fit_diag: bool = True, scale: List[float] | None = None, shift_diag: bool = True, seed: int | None = None, activation_function: str = 'tanh', precision: str = 'default', uniform_seed: bool = False, mixed_types: bool = False, **kwargs)[source]

Bases: deepmd.tf.fit.fitting.Fitting

Fit the atomic polarizability with descriptor se_a.

Parameters:
ntypes

The number of atom types (ntypes) of the descriptor \(\mathcal{D}\)

dim_descrpt

The dimension of the descriptor \(\mathcal{D}\)

embedding_width

The rotation matrix dimension of the descriptor \(\mathcal{D}\)

neuron : List[int]

Number of neurons in each hidden layer of the fitting net

resnet_dt : bool

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

sel_type : List[int]

The atom types selected to have an atomic polarizability prediction. If None, all atoms are selected.

fit_diag : bool

Fit the diagonal part of the rotational invariant polarizability matrix, which will be converted to normal polarizability matrix by contracting with the rotation matrix.

scale : List[float]

The output of the fitting net (polarizability matrix) for type i atom will be scaled by scale[i]

shift_diag : bool

Whether to shift the diagonal part of the polarizability matrix. The shift operation is carried out after scale.

seed : int

Random seed for initializing the network parameters.

activation_function : str

The activation function in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision : str

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

uniform_seed

Only for backward compatibility; restores the old behavior of using the random seed.

mixed_types : bool

If true, use a uniform fitting net for all atom types, otherwise use different fitting nets for different atom types.
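
Examples

A minimal construction sketch, assuming deepmd-kit with the TensorFlow backend is installed; the values are illustrative:

>>> from deepmd.tf.fit import PolarFittingSeA
>>> fit = PolarFittingSeA(ntypes=2, dim_descrpt=100, embedding_width=16)
>>> fit.get_out_size()
9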

get_sel_type() List[int][source]

Get selected atom types.

get_out_size() int[source]

Get the output size. Should be 9.

compute_output_stats(all_stat)[source]

Compute the output statistics.

Parameters:
all_stat

Dictionary of inputs. Can be prepared by model.make_stat_input.

_build_lower(start_index, natoms, inputs, rot_mat, suffix='', reuse=None)[source]
build(input_d: deepmd.tf.env.tf.Tensor, rot_mat: deepmd.tf.env.tf.Tensor, natoms: deepmd.tf.env.tf.Tensor, input_dict: dict | None = None, reuse: bool | None = None, suffix: str = '')[source]

Build the computational graph for fitting net.

Parameters:
input_d

The input descriptor

rot_mat

The rotation matrix from the descriptor.

natoms

The number of atoms. This tensor has the length of Ntypes + 2:
natoms[0]: number of local atoms;
natoms[1]: total number of atoms held by this processor;
natoms[i]: 2 <= i < Ntypes + 2, number of type i atoms.

input_dict

Additional dict for inputs.

reuse

Whether the weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns:
atomic_polar

The atomic polarizability

init_variables(graph: deepmd.tf.env.tf.Graph, graph_def: deepmd.tf.env.tf.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict.

Parameters:
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

Suffix of the name scope

enable_mixed_precision(mixed_prec: dict | None = None) None[source]

Receive the mixed precision setting.

Parameters:
mixed_prec

The mixed precision setting used in the embedding net

get_loss(loss: dict, lr) deepmd.tf.loss.loss.Loss[source]

Get the loss function.

serialize(suffix: str) dict[source]

Serialize the model.

Returns:
dict

The serialized data

classmethod deserialize(data: dict, suffix: str)[source]

Deserialize the model.

Parameters:
datadict

The serialized data

Returns:
Model

The deserialized model