gigl.src.common.models.pyg.homogeneous
Attributes
DEFAULT_NUM_GNN_HOPS – default number of GNN layers (hops); used as the default value of num_layers in the classes below.
Classes
BasicHomogeneousGNN | Abstract base class for homogeneous GNNs built from a stack of PyG convolution layers. |
EdgeAttrGAT | GAT variant whose attention layers consume edge attributes. |
GAT | Graph attention network (GAT). |
GATv2 | Graph attention network v2 (GATv2). |
GIN | Graph isomorphism network (GIN). |
GINE | GIN variant that incorporates edge features (GINE). |
GraphSAGE | GraphSAGE model. |
Transformer | Graph transformer model. |
TwoLayerGCN | Simple two-layer GCN implementation using PyG constructs. |
Module Contents
- class gigl.src.common.models.pyg.homogeneous.BasicHomogeneousGNN(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)
Bases: torch.nn.Module, gigl.src.common.types.model.GnnModel
Abstract base class for homogeneous GNNs built from a stack of PyG convolution layers; concrete subclasses implement init_conv_layers(). As an nn.Module subclass, it inherits the standard module behavior:
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self) -> None:
            super().__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and their parameters will be converted too when you call to(), etc.
Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.
- Variables:
training (bool) – whether this module is in training or evaluation mode.
- Parameters:
in_dim (int)
hid_dim (int)
out_dim (int)
conv_kwargs (Dict[str, Any])
edge_dim (Optional[int])
num_layers (int)
activation (Callable)
activation_before_norm (bool)
activation_after_last_conv (bool)
dropout (float)
batchnorm (bool)
linear_layer (bool)
return_emb (bool)
should_l2_normalize_embedding_layer_output (bool)
jk_mode (Optional[str])
jk_lstm_dim (Optional[int])
feature_interaction_layer (Optional[gigl.src.common.models.pyg.nn.models.feature_interaction.FeatureInteraction])
feature_embedding_layer (Optional[gigl.src.common.models.pyg.nn.models.feature_embedding.FeatureEmbeddingLayer])
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(data, device=None)
- Parameters:
data (torch_geometric.data.Data)
device (Optional[torch.device])
- Return type:
torch.Tensor
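A minimal sketch of a forward pass, using the GraphSAGE subclass documented below; the toy graph and dimensions are illustrative assumptions, not values from this module:

    import torch
    from torch_geometric.data import Data

    from gigl.src.common.models.pyg.homogeneous import GraphSAGE

    # Hypothetical toy graph: 4 nodes with 8-dim features and 4 directed edges.
    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]], dtype=torch.long)
    data = Data(x=x, edge_index=edge_index)

    model = GraphSAGE(in_dim=8, hid_dim=16, out_dim=32, num_layers=2)
    model.eval()
    with torch.no_grad():
        out = model(data)  # node embeddings; expected shape [4, 32]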
- abstract init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)
- Parameters:
in_dim (Union[int, Tuple[int, int]])
out_dim (int)
edge_dim (Optional[int])
hid_dim (int)
num_layers (int)
- Return type:
torch.nn.ModuleList
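Concrete subclasses supply the convolution stack through this hook. A sketch of what an implementation might look like, modeled on the subclasses below (MySAGE and its layer wiring are assumptions, not code from this module):

    import torch.nn as nn
    from torch_geometric.nn import SAGEConv

    from gigl.src.common.models.pyg.homogeneous import BasicHomogeneousGNN

    class MySAGE(BasicHomogeneousGNN):
        # Hypothetical subclass: one SAGEConv per hop,
        # wired in_dim -> hid_dim -> ... -> out_dim.
        def init_conv_layers(self, in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs):
            dims = [in_dim] + [hid_dim] * (num_layers - 1) + [out_dim]
            return nn.ModuleList(
                SAGEConv(src, dst, **kwargs) for src, dst in zip(dims[:-1], dims[1:])
            )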
- property graph_backend: gigl.src.common.types.model.GraphBackend
- Return type:
gigl.src.common.types.model.GraphBackend
- class gigl.src.common.models.pyg.homogeneous.EdgeAttrGAT(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)
Bases: BasicHomogeneousGNN
GAT variant whose attention layers consume edge attributes (set edge_dim accordingly).
Constructor parameters, forward(), and inherited nn.Module behavior are identical to BasicHomogeneousGNN above.
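A sketch of supplying edge attributes, assuming the forward pass reads data.edge_attr when edge_dim is set (the toy dimensions are illustrative):

    import torch
    from torch_geometric.data import Data

    from gigl.src.common.models.pyg.homogeneous import EdgeAttrGAT

    x = torch.randn(4, 8)                      # 4 nodes, 8-dim node features
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])
    edge_attr = torch.randn(4, 5)              # 4 edges, 5-dim edge features
    data = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)

    model = EdgeAttrGAT(in_dim=8, hid_dim=16, out_dim=32, edge_dim=5)
    out = model(data)                          # torch.Tensor of node embeddings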
- class gigl.src.common.models.pyg.homogeneous.GAT(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)
Bases: BasicHomogeneousGNN
Graph attention network (GAT) built on the base convolution stack.
Constructor parameters, forward(), and inherited nn.Module behavior are identical to BasicHomogeneousGNN above.
- class gigl.src.common.models.pyg.homogeneous.GATv2(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)
Bases: BasicHomogeneousGNN
Graph attention network v2 (GATv2).
Constructor parameters, forward(), and inherited nn.Module behavior are identical to BasicHomogeneousGNN above.
- class gigl.src.common.models.pyg.homogeneous.GIN(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)
Bases: BasicHomogeneousGNN
Graph isomorphism network (GIN).
Constructor parameters, forward(), and inherited nn.Module behavior are identical to BasicHomogeneousGNN above.
- class gigl.src.common.models.pyg.homogeneous.GINE(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)
Bases: BasicHomogeneousGNN
GIN variant that incorporates edge features (GINE); set edge_dim accordingly.
Constructor parameters, forward(), and inherited nn.Module behavior are identical to BasicHomogeneousGNN above.
- class gigl.src.common.models.pyg.homogeneous.GraphSAGE(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)
Bases: BasicHomogeneousGNN
GraphSAGE model.
Constructor parameters, forward(), and inherited nn.Module behavior are identical to BasicHomogeneousGNN above.
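Because GraphSAGE shares the base constructor, options such as normalization, dropout, and jumping knowledge can be combined freely; that jk_mode accepts the PyG JumpingKnowledge modes ("cat", "max", "lstm") is an assumption here, not documented above:

    from gigl.src.common.models.pyg.homogeneous import GraphSAGE

    model = GraphSAGE(
        in_dim=8,
        hid_dim=16,
        out_dim=32,
        num_layers=3,
        batchnorm=True,                                   # normalize between conv layers
        dropout=0.1,
        jk_mode="cat",                                    # assumed JumpingKnowledge mode
        should_l2_normalize_embedding_layer_output=True,  # L2-normalize final embeddings
    )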
- class gigl.src.common.models.pyg.homogeneous.Transformer(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)
Bases: BasicHomogeneousGNN
Graph transformer model.
Constructor parameters, forward(), and inherited nn.Module behavior are identical to BasicHomogeneousGNN above.
- class gigl.src.common.models.pyg.homogeneous.TwoLayerGCN(in_dim, out_dim, hid_dim=16, is_training=True, should_l2_normalize_output=False, **kwargs)
Bases: torch.nn.Module, gigl.src.common.types.model.GnnModel
Simple two-layer GCN implementation using PyG constructs. Inherited nn.Module behavior is as described for BasicHomogeneousGNN above.
- Parameters:
in_dim (int) – number of input features
out_dim (int) – number of output classes
hid_dim (int, optional) – number of hidden features. Defaults to 16.
is_training (bool)
should_l2_normalize_output (bool)
**kwargs – additional arguments for all GCNConv layers (torch_geometric.nn.conv.MessagePassing)
- property graph_backend: gigl.src.common.types.model.GraphBackend
- Return type:
gigl.src.common.types.model.GraphBackend
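A minimal usage sketch for TwoLayerGCN; the toy graph is an assumption, and calling the model as model(data) mirrors the forward(data) signature of the classes above rather than anything documented here:

    import torch
    from torch_geometric.data import Data

    from gigl.src.common.models.pyg.homogeneous import TwoLayerGCN

    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])
    data = Data(x=x, edge_index=edge_index)

    model = TwoLayerGCN(in_dim=8, out_dim=3, hid_dim=16)
    logits = model(data)  # assumed call signature; returns per-node class scores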