deepmd.pt.model.descriptor.repformer_layer

Module Contents

Classes

Atten2Map

Atten2MultiHeadApply

Atten2EquiVarApply

LocalAtten

RepformerLayer

All of the above subclass torch.nn.Module; the inherited base-class docstring is reproduced once, under Atten2Map below.

Functions

torch_linear(*args, **kwargs)

_make_nei_g1(→ torch.Tensor)

_apply_nlist_mask(→ torch.Tensor)

_apply_switch(→ torch.Tensor)

_apply_h_norm(→ torch.Tensor)

Normalize h by the std of vector length.

deepmd.pt.model.descriptor.repformer_layer.torch_linear(*args, **kwargs)[source]
deepmd.pt.model.descriptor.repformer_layer._make_nei_g1(g1_ext: torch.Tensor, nlist: torch.Tensor) torch.Tensor[source]
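
Gathers per-neighbor single-atom features from the extended g1 tensor using the neighbor list. A minimal sketch of the gathering step, assuming the shapes documented under RepformerLayer.forward() below (the actual implementation may differ in details):

import torch

def make_nei_g1_sketch(g1_ext: torch.Tensor, nlist: torch.Tensor) -> torch.Tensor:
    # g1_ext: nf x nall x ng1; nlist: nf x nloc x nnei (long; padded entries are 0)
    nf, nloc, nnei = nlist.shape
    ng1 = g1_ext.shape[-1]
    # broadcast the neighbor indices over the feature dimension
    index = nlist.reshape(nf, nloc * nnei, 1).expand(-1, -1, ng1)
    gg1 = torch.gather(g1_ext, dim=1, index=index)
    # nf x nloc x nnei x ng1
    return gg1.view(nf, nloc, nnei, ng1)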
deepmd.pt.model.descriptor.repformer_layer._apply_nlist_mask(gg: torch.Tensor, nlist_mask: torch.Tensor) torch.Tensor[source]
deepmd.pt.model.descriptor.repformer_layer._apply_switch(gg: torch.Tensor, sw: torch.Tensor) torch.Tensor[source]
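
Both masking helpers broadcast a per-neighbor scalar over the feature dimension. A plausible sketch of the two operations, assuming gg has shape nf x nloc x nnei x ng2 and nlist_mask is boolean:

import torch

def apply_nlist_mask_sketch(gg: torch.Tensor, nlist_mask: torch.Tensor) -> torch.Tensor:
    # zero out the features of padded (non-real) neighbors
    return gg.masked_fill(~nlist_mask.unsqueeze(-1), 0.0)

def apply_switch_sketch(gg: torch.Tensor, sw: torch.Tensor) -> torch.Tensor:
    # scale each neighbor's features by its smooth switch value
    return gg * sw.unsqueeze(-1)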
deepmd.pt.model.descriptor.repformer_layer._apply_h_norm(hh: torch.Tensor) torch.Tensor[source]

Normalize h by the std of the vector length. Whether this is the best way to normalize is left as an open question in the source.
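
A sketch of this normalization: divide the equivariant vectors by a factor built from the standard deviation of their lengths over the neighbor axis. The exact form of the denominator (here 1 + std) is an assumption:

import torch

def apply_h_norm_sketch(hh: torch.Tensor) -> torch.Tensor:
    # hh: nf x nloc x nnei x 3
    normh = torch.linalg.norm(hh, dim=-1)   # nf x nloc x nnei, vector lengths
    std = torch.std(normh, dim=-1)          # nf x nloc, std over neighbors
    # the additive 1.0 keeps the division well-behaved when std is near zero (assumed)
    return hh / (1.0 + std[:, :, None, None])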

class deepmd.pt.model.descriptor.repformer_layer.Atten2Map(ni: int, nd: int, nh: int, has_gate: bool = False, smooth: bool = True, attnw_shift: float = 20.0)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

forward(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor[source]
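
From the signatures, ni is the pair-channel width, nd the attention hidden width, and nh the number of heads; the module maps the invariant pair channel g2 to per-head attention weights over neighbor pairs. A hypothetical shape check (the output layout nf x nloc x nnei x nnei x nh and the float64 dtype are assumptions; deepmd's global PyTorch precision is configurable):

import torch
from deepmd.pt.model.descriptor.repformer_layer import Atten2Map

nf, nloc, nnei, ng2 = 2, 8, 10, 16
attn = Atten2Map(ni=ng2, nd=16, nh=4, has_gate=False, smooth=True)
g2 = torch.rand(nf, nloc, nnei, ng2, dtype=torch.float64)
h2 = torch.rand(nf, nloc, nnei, 3, dtype=torch.float64)
nlist_mask = torch.ones(nf, nloc, nnei, dtype=torch.bool)
sw = torch.ones(nf, nloc, nnei, dtype=torch.float64)
aa = attn(g2, h2, nlist_mask, sw)
# expected: per-head attention over neighbor pairs, e.g. nf x nloc x nnei x nnei x nh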
class deepmd.pt.model.descriptor.repformer_layer.Atten2MultiHeadApply(ni: int, nh: int)[source]

Bases: torch.nn.Module

Base class for all neural network modules; see the full torch.nn.Module docstring under Atten2Map above.

forward(AA: torch.Tensor, g2: torch.Tensor) torch.Tensor[source]
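
The name suggests this module applies the attention map AA from Atten2Map to the invariant pair channel g2 and then mixes the heads. A minimal einsum sketch of the core contraction (the learned value and head-mixing projections of the real module are omitted; shapes are assumptions):

import torch

def atten2_apply_sketch(AA: torch.Tensor, g2: torch.Tensor) -> torch.Tensor:
    # AA: nf x nloc x nnei x nnei x nh; g2: nf x nloc x nnei x ng2
    # for each head, mix neighbor features with the attention weights
    return torch.einsum("flijh,fljg->flihg", AA, g2)
    # nf x nloc x nnei x nh x ng2; the real module would flatten the head
    # dimension and project back to ng2 with a linear layer (assumed)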
class deepmd.pt.model.descriptor.repformer_layer.Atten2EquiVarApply(ni: int, nh: int)[source]

Bases: torch.nn.Module

Base class for all neural network modules; see the full torch.nn.Module docstring under Atten2Map above.

forward(AA: torch.Tensor, h2: torch.Tensor) torch.Tensor[source]
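
The equivariant counterpart: the attention map weights the neighbor vectors h2 instead of the invariant features, which preserves equivariance because only linear combinations of the vectors are taken. A sketch under the same shape assumptions:

import torch

def atten2_equivar_apply_sketch(AA: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    # AA: nf x nloc x nnei x nnei x nh; h2: nf x nloc x nnei x 3
    h2h = torch.einsum("flijh,fljd->flidh", AA, h2)   # per-head attended vectors
    # the real module mixes heads with a learned linear map; a mean is a stand-in (assumed)
    return h2h.mean(dim=-1)   # nf x nloc x nnei x 3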
class deepmd.pt.model.descriptor.repformer_layer.LocalAtten(ni: int, nd: int, nh: int, smooth: bool = True, attnw_shift: float = 20.0)[source]

Bases: torch.nn.Module

Base class for all neural network modules; see the full torch.nn.Module docstring under Atten2Map above.

forward(g1: torch.Tensor, gg1: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor[source]
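
LocalAtten attends from each local atom's g1 (the query) over its neighbors' gathered features gg1 (keys and values), e.g. as produced by _make_nei_g1. A hypothetical usage sketch under the same shape and dtype assumptions as above:

import torch
from deepmd.pt.model.descriptor.repformer_layer import LocalAtten

nf, nloc, nnei, ng1 = 2, 8, 10, 128
attn = LocalAtten(ni=ng1, nd=64, nh=4, smooth=True)
g1 = torch.rand(nf, nloc, ng1, dtype=torch.float64)
gg1 = torch.rand(nf, nloc, nnei, ng1, dtype=torch.float64)
nlist_mask = torch.ones(nf, nloc, nnei, dtype=torch.bool)
sw = torch.ones(nf, nloc, nnei, dtype=torch.float64)
out = attn(g1, gg1, nlist_mask, sw)   # expected: nf x nloc x ng1 (assumed)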
class deepmd.pt.model.descriptor.repformer_layer.RepformerLayer(rcut, rcut_smth, sel: int, ntypes: int, g1_dim=128, g2_dim=16, axis_dim: int = 4, update_chnnl_2: bool = True, do_bn_mode: str = 'no', bn_momentum: float = 0.1, update_g1_has_conv: bool = True, update_g1_has_drrd: bool = True, update_g1_has_grrg: bool = True, update_g1_has_attn: bool = True, update_g2_has_g1g1: bool = True, update_g2_has_attn: bool = True, update_h2: bool = False, attn1_hidden: int = 64, attn1_nhead: int = 4, attn2_hidden: int = 16, attn2_nhead: int = 4, attn2_has_gate: bool = False, activation_function: str = 'tanh', update_style: str = 'res_avg', set_davg_zero: bool = True, smooth: bool = True)[source]

Bases: torch.nn.Module

Base class for all neural network modules; see the full torch.nn.Module docstring under Atten2Map above.

cal_1_dim(g1d: int, g2d: int, ax: int) int[source]
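
cal_1_dim computes the input width of the linear layer that updates g1: the base g1 width plus one block per enabled update channel. A plausible reconstruction, inferring the contributing flags and their multiplicities from the constructor options:

def cal_1_dim_sketch(g1d: int, g2d: int, ax: int,
                     has_conv: bool = True, has_drrd: bool = True,
                     has_grrg: bool = True) -> int:
    ret = g1d                 # the g1 channel itself
    if has_grrg:
        ret += g2d * ax       # symmetrized g2 statistics over axis_dim columns
    if has_drrd:
        ret += g1d * ax       # symmetrized neighbor-g1 statistics
    if has_conv:
        ret += g2d            # convolution-style g2 contribution
    return ret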
_update_h2(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor[source]
_update_g1_conv(gg1: torch.Tensor, g2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor[source]
_cal_h2g2(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor[source]
_cal_grrg(h2g2: torch.Tensor) torch.Tensor[source]
_update_g1_grrg(g2: torch.Tensor, h2: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor[source]
_update_g2_g1g1(g1: torch.Tensor, gg1: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor) torch.Tensor[source]
_apply_bn(bn_number: int, gg: torch.Tensor)[source]
_apply_nb_1(bn_number: int, gg: torch.Tensor) torch.Tensor[source]
_apply_nb_2(bn_number: int, gg: torch.Tensor) torch.Tensor[source]
_apply_bn_uni(bn_number: int, gg: torch.Tensor, mode: str = '1') torch.Tensor[source]
_apply_bn_comp(bn_number: int, gg: torch.Tensor) torch.Tensor[source]
forward(g1_ext: torch.Tensor, g2: torch.Tensor, h2: torch.Tensor, nlist: torch.Tensor, nlist_mask: torch.Tensor, sw: torch.Tensor)[source]
Parameters:

g1_ext : nf x nall x ng1 extended single-atom channel
g2 : nf x nloc x nnei x ng2 pair-atom channel, invariant
h2 : nf x nloc x nnei x 3 pair-atom channel, equivariant
nlist : nf x nloc x nnei neighbor list (padded neighbors are set to 0)
nlist_mask : nf x nloc x nnei mask of the neighbor list; 1 for a real neighbor, 0 otherwise
sw : nf x nloc x nnei switch function

Returns:

g1 : nf x nloc x ng1 updated single-atom channel
g2 : nf x nloc x nnei x ng2 updated pair-atom channel, invariant
h2 : nf x nloc x nnei x 3 updated pair-atom channel, equivariant
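
A hypothetical end-to-end call wiring the documented shapes together. The dtype and constructor arguments are assumptions; the all-zero neighbor data simply exercises the padded path, since padded indices point at entry 0 of the extended channel:

import torch
from deepmd.pt.model.descriptor.repformer_layer import RepformerLayer

nf, nloc, nall, nnei = 1, 8, 32, 10
layer = RepformerLayer(rcut=4.0, rcut_smth=3.5, sel=nnei, ntypes=2)
g1_ext = torch.rand(nf, nall, 128, dtype=torch.float64)   # g1_dim defaults to 128
g2 = torch.rand(nf, nloc, nnei, 16, dtype=torch.float64)  # g2_dim defaults to 16
h2 = torch.rand(nf, nloc, nnei, 3, dtype=torch.float64)
nlist = torch.zeros(nf, nloc, nnei, dtype=torch.long)     # padded neighbors are 0
nlist_mask = torch.zeros(nf, nloc, nnei, dtype=torch.bool)
sw = torch.zeros(nf, nloc, nnei, dtype=torch.float64)
g1_new, g2_new, h2_new = layer(g1_ext, g2, h2, nlist, nlist_mask, sw)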
list_update_res_avg(update_list: List[torch.Tensor]) torch.Tensor[source]
list_update_res_incr(update_list: List[torch.Tensor]) torch.Tensor[source]
list_update(update_list: List[torch.Tensor]) torch.Tensor[source]
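
list_update dispatches on the update_style constructor argument: 'res_avg' combines all residual updates with a uniform scale, while 'res_incr' keeps the first item at full scale and adds the rest with a smaller increment. A sketch of the two combination rules; the exact normalization factors (1/sqrt(n) variants) are assumptions:

from typing import List

import torch

def list_update_res_avg_sketch(update_list: List[torch.Tensor]) -> torch.Tensor:
    # scale the summed updates so the variance stays roughly constant (assumed)
    n = len(update_list)
    return sum(update_list) / (float(n) ** 0.5)

def list_update_res_incr_sketch(update_list: List[torch.Tensor]) -> torch.Tensor:
    # keep the first item at full scale; add remaining updates scaled down
    n = len(update_list)
    scale = 1.0 / (float(n - 1) ** 0.5) if n > 1 else 0.0
    return update_list[0] + scale * sum(update_list[1:])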
_bn_layer(nf: int = 1) Callable[source]