deepmd.pt.optimizer

Submodules

Package Contents

Classes

KFOptimizerWrapper

LKFOptimizer
    Base class for all optimizers.

class deepmd.pt.optimizer.KFOptimizerWrapper(model: torch.nn.Module, optimizer: torch.optim.optimizer.Optimizer, atoms_selected: int, atoms_per_group: int, is_distributed: bool = False)[source]
update_energy(inputs: dict, Etot_label: torch.Tensor, update_prefactor: float = 1) → None[source]
update_force(inputs: dict, Force_label: torch.Tensor, update_prefactor: float = 1) → None[source]
update_denoise_coord(inputs: dict, clean_coord: torch.Tensor, update_prefactor: float = 1, mask_loss_coord: bool = True, coord_mask: torch.Tensor = None) → None[source]
__sample(atoms_selected: int, atoms_per_group: int, natoms: int) → numpy.ndarray[source]
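
A minimal construction sketch based only on the signature above. The toy torch.nn.Linear stands in for a real DeePMD model, and the commented update calls assume a batch input dict and label tensors prepared by the training loop; the atoms_selected and atoms_per_group readings in the comments are assumptions, not taken from the docstring.

    import torch
    from deepmd.pt.optimizer import KFOptimizerWrapper, LKFOptimizer

    # Toy stand-in for a DeePMD model; a real model is expected to consume the
    # `inputs` dict passed to the update_* methods.
    model = torch.nn.Linear(4, 1)
    optimizer = LKFOptimizer(model.parameters())

    wrapper = KFOptimizerWrapper(
        model,
        optimizer,
        atoms_selected=24,   # assumed: atoms sampled for each force update
        atoms_per_group=6,   # assumed: atoms updated together in one Kalman step
        is_distributed=False,
    )

    # In a training loop one would then call, per batch (inputs and labels are
    # placeholders for whatever the data loader provides):
    #   wrapper.update_energy(inputs, energy_labels)
    #   wrapper.update_force(inputs, force_labels, update_prefactor=0.5)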
class deepmd.pt.optimizer.LKFOptimizer(params, kalman_lambda=0.98, kalman_nue=0.9987, block_size=5120)[source]

Bases: torch.optim.optimizer.Optimizer

Base class for all optimizers.

Warning

Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators over values of dictionaries.

Parameters:
  • params (iterable) – an iterable of torch.Tensors or dicts. Specifies what Tensors should be optimized.

  • defaults (dict) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
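
A minimal construction sketch under the signature above. Reading kalman_lambda as the filter's forgetting factor, kalman_nue as the rate at which that factor decays toward 1, and block_size as the maximum number of parameters per covariance block is an assumption, not taken from the docstring.

    import torch
    from deepmd.pt.optimizer import LKFOptimizer

    model = torch.nn.Linear(16, 1)
    optimizer = LKFOptimizer(
        model.parameters(),   # iterable of Tensors to optimize
        kalman_lambda=0.98,   # assumed: forgetting factor of the Kalman filter
        kalman_nue=0.9987,    # assumed: decay rate of lambda toward 1
        block_size=5120,      # assumed: max parameters per covariance block
    )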

__init_P()[source]
__get_blocksize()[source]
__get_nue()[source]
__split_weights(weight)[source]
__update(H, error, weights)[source]
set_grad_prefactor(grad_prefactor)[source]
step(error)[source]

Performs a single optimization step (parameter update).

Parameters:

error (torch.Tensor) – the prediction error that drives the Kalman-filter parameter update. The optional closure argument of the base class step() is not used here.

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.
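
A minimal sketch of calling step(error) directly, assuming the update consumes the current parameter gradients together with a scalar error tensor; the sign handling and prefactor scaling that KFOptimizerWrapper applies before calling step are omitted.

    import torch
    from deepmd.pt.optimizer import LKFOptimizer

    model = torch.nn.Linear(4, 1)
    optimizer = LKFOptimizer(model.parameters())

    x = torch.randn(8, 4)
    label = torch.randn(8, 1)

    optimizer.set_grad_prefactor(1.0)          # assumed: scales gradients inside the update
    optimizer.zero_grad()
    pred = model(x)
    error = (label - pred).detach().mean()     # scalar prediction error fed to the filter
    pred.sum().backward()                      # gradients of the prediction, not of a loss
    optimizer.step(error)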

get_device_id(index)[source]