gigl.src.common.models.layers.decoder#
Classes#
| Class | Description |
| --- | --- |
| `DecoderType` | Enumeration of decoder types supported by `LinkPredictionDecoder`. |
| `LinkPredictionDecoder` | Neural network module that decodes node embeddings into link prediction scores. |
Module Contents#
- class gigl.src.common.models.layers.decoder.DecoderType[source]#
Bases: `enum.Enum`

Generic enumeration. Derive from this class to define new enumerations.
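As a rough illustration, the enumeration might look like the minimal sketch below. Only the `inner_product` member is confirmed (it appears as the default `decoder_type` of `LinkPredictionDecoder` below); anything beyond that is an assumption.

```python
from enum import Enum

class DecoderType(Enum):
    # Confirmed member: used as the default `decoder_type` of LinkPredictionDecoder.
    inner_product = "inner_product"
    # Other members (e.g. an MLP-based decoder, hinted at by the
    # `decoder_channel_list` parameter) are assumptions, not documented here.
```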
- class gigl.src.common.models.layers.decoder.LinkPredictionDecoder(decoder_type=DecoderType.inner_product, decoder_channel_list=None, act=F.relu, act_first=False, bias=False, plain_last=False, norm=None)[source]#
Bases: `torch.nn.Module`

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

```python
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
```

Submodules assigned in this way will be registered, and their parameters will be converted too when you call `to()`, etc.

Note: As per the example above, an `__init__()` call to the parent class must be made before assignment on the child.

- Variables:
- training (bool) – Boolean indicating whether this module is in training or evaluation mode.
- Parameters:
- decoder_type (DecoderType) 
- decoder_channel_list (Optional[list[int]]) 
- act (Union[str, Callable, None]) 
- act_first (bool) 
- bias (Union[bool, list[bool]]) 
- plain_last (bool) 
- norm (Optional[Union[str, Callable]]) 
 
Initialize internal Module state, shared by both `nn.Module` and `ScriptModule`.
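For orientation, here is a hypothetical usage sketch of `LinkPredictionDecoder`. The constructor arguments mirror the signature documented above, but the scoring call is an assumption about how an inner-product link prediction decoder is typically invoked (pairwise dot products of source and destination node embeddings); it is not taken from this page.

```python
import torch
from gigl.src.common.models.layers.decoder import DecoderType, LinkPredictionDecoder

# Construct the decoder using the documented defaults for a plain inner product.
decoder = LinkPredictionDecoder(
    decoder_type=DecoderType.inner_product,
    decoder_channel_list=None,  # no extra MLP layers for a plain inner product
)

src_emb = torch.randn(128, 64)  # embeddings of source nodes
dst_emb = torch.randn(128, 64)  # embeddings of candidate destination nodes

# With an inner-product decoder, the score for pair i is typically the dot
# product <src_emb[i], dst_emb[i]>. The call signature below is assumed for
# illustration, not taken from this documentation.
scores = decoder(src_emb, dst_emb)
```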
