gigl.src.common.models.pyg.homogeneous#

Attributes#

logger

Classes#

BasicHomogeneousGNN

Base class for the homogeneous PyG GNN models in this module.

EdgeAttrGAT

GAT-based homogeneous GNN whose attention convolutions consume edge attributes.

GAT

Homogeneous GNN built from GAT attention convolutions.

GATv2

Homogeneous GNN built from GATv2 attention convolutions.

GIN

Homogeneous GNN built from GIN (Graph Isomorphism Network) convolutions.

GINE

Homogeneous GNN built from GINE convolutions, the edge-feature-aware variant of GIN.

GraphSAGE

Homogeneous GNN built from GraphSAGE convolutions.

Transformer

Homogeneous GNN built from graph transformer convolutions.

TwoLayerGCN

Simple two-layer GCN implemented with PyG constructs.

Module Contents#

class gigl.src.common.models.pyg.homogeneous.BasicHomogeneousGNN(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)[source]#

Bases: torch.nn.Module, gigl.src.common.types.model.GnnModel

Base class for the homogeneous GNN models in this module. Concrete subclasses implement init_conv_layers() to supply their torch_geometric convolution stack; this class provides the shared forward() pass and the constructor options listed in the signature above (activation, dropout, batchnorm, jumping knowledge, optional linear layer, L2-normalized embeddings).

As with any torch.nn.Module, submodules can be nested in a tree structure by assigning them as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Whether this module is in training or evaluation mode.

forward(data, device=None)[source]#
Parameters:
  • data (torch_geometric.data.Data)

  • device (Optional[torch.device])

Return type:

torch.Tensor

abstract init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)[source]#
Parameters:
  • in_dim (Union[int, Tuple[int, int]])

  • out_dim (int)

  • edge_dim (Optional[int])

  • hid_dim (int)

  • num_layers (int)

Return type:

torch.nn.ModuleList
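
A minimal sketch of how a concrete subclass might satisfy this abstract method; the choice of torch_geometric's SAGEConv and the dimension bookkeeping are illustrative assumptions, not taken from the module (GraphSAGE below already ships this behavior):

from typing import Optional, Tuple, Union

import torch
from torch_geometric.nn import SAGEConv

from gigl.src.common.models.pyg.homogeneous import BasicHomogeneousGNN


class MySageGNN(BasicHomogeneousGNN):
    """Hypothetical subclass that stacks SAGEConv layers."""

    def init_conv_layers(
        self,
        in_dim: Union[int, Tuple[int, int]],
        out_dim: int,
        edge_dim: Optional[int],
        hid_dim: int,
        num_layers: int,
        **kwargs,
    ) -> torch.nn.ModuleList:
        # Map in_dim -> hid_dim on the first layer, keep hid_dim in between,
        # and project hid_dim -> out_dim on the last layer. Any conv-specific
        # kwargs are forwarded to each SAGEConv.
        dims = [in_dim] + [hid_dim] * (num_layers - 1) + [out_dim]
        return torch.nn.ModuleList(
            [SAGEConv(dims[i], dims[i + 1], **kwargs) for i in range(num_layers)]
        )

The only contract shown here is the return type: a ModuleList with one convolution per message-passing layer.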

activation[source]#
activation_after_last_conv = False[source]#
activation_before_norm = False[source]#
batchnorm = False[source]#
conv_layers: torch.nn.ModuleList[source]#
dropout[source]#
feats_interaction = None[source]#
feature_embedding_layer = None[source]#
property graph_backend: gigl.src.common.types.model.GraphBackend[source]#
Return type:

gigl.src.common.types.model.GraphBackend

hid_dim[source]#
in_dim[source]#
linear_layer = False[source]#
num_layers = 2[source]#
out_dim[source]#
return_emb = False[source]#
should_l2_normalize_embedding_layer_output = False[source]#
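
A minimal end-to-end sketch of the forward() contract above, assuming a toy torch_geometric.data.Data graph; GraphSAGE (documented below) stands in for any concrete subclass, and the dimensions are arbitrary:

import torch
from torch_geometric.data import Data

from gigl.src.common.models.pyg.homogeneous import GraphSAGE

# Toy homogeneous graph: 4 nodes with 8-dim features, 4 directed edges.
data = Data(
    x=torch.randn(4, 8),
    edge_index=torch.tensor([[0, 1, 2, 3],
                             [1, 2, 3, 0]], dtype=torch.long),
)

# Two message-passing layers, 16-dim hidden state, 32-dim output embeddings.
model = GraphSAGE(in_dim=8, hid_dim=16, out_dim=32, num_layers=2)

model.eval()
with torch.no_grad():
    out = model(data)  # forward(data, device=None) -> torch.Tensor
# out is expected to hold one out_dim-sized embedding per node.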
class gigl.src.common.models.pyg.homogeneous.EdgeAttrGAT(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)[source]#

Bases: BasicHomogeneousGNN

GAT-based homogeneous GNN whose attention convolutions consume edge attributes (edge_dim). Constructor arguments and forward() behavior are inherited from BasicHomogeneousGNN.

init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)[source]#
Parameters:
  • in_dim (Union[int, Tuple[int, int]])

  • out_dim (int)

  • edge_dim (Optional[int])

  • hid_dim (int)

  • num_layers (int)

Return type:

torch.nn.ModuleList

supports_edge_attr = True[source]#
supports_edge_weight = False[source]#
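
A usage sketch assuming edge attributes are stored on data.edge_attr and that edge_dim is set to their width; all sizes are arbitrary:

import torch
from torch_geometric.data import Data

from gigl.src.common.models.pyg.homogeneous import EdgeAttrGAT

# Toy graph whose 5 edges each carry a 4-dimensional attribute vector.
data = Data(
    x=torch.randn(5, 8),
    edge_index=torch.tensor([[0, 1, 2, 3, 4],
                             [1, 2, 3, 4, 0]], dtype=torch.long),
    edge_attr=torch.randn(5, 4),
)

# edge_dim should match the width of data.edge_attr so the attention
# layers can consume the edge features (supports_edge_attr is True).
model = EdgeAttrGAT(in_dim=8, hid_dim=16, out_dim=32, edge_dim=4)
emb = model(data)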
class gigl.src.common.models.pyg.homogeneous.GAT(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)[source]#

Bases: BasicHomogeneousGNN

Homogeneous GNN built from GAT attention convolutions. Constructor arguments and forward() behavior are inherited from BasicHomogeneousGNN.

init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)[source]#
Parameters:
  • in_dim (Union[int, Tuple[int, int]])

  • out_dim (int)

  • edge_dim (Optional[int])

  • hid_dim (int)

  • num_layers (int)

Return type:

torch.nn.ModuleList

supports_edge_attr = True[source]#
supports_edge_weight = False[source]#
class gigl.src.common.models.pyg.homogeneous.GATv2(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)[source]#

Bases: BasicHomogeneousGNN

Homogeneous GNN built from GATv2 attention convolutions. Constructor arguments and forward() behavior are inherited from BasicHomogeneousGNN.

init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)[source]#
Parameters:
  • in_dim (Union[int, Tuple[int, int]])

  • out_dim (int)

  • edge_dim (Optional[int])

  • hid_dim (int)

  • num_layers (int)

Return type:

torch.nn.ModuleList

supports_edge_attr = True[source]#
supports_edge_weight = False[source]#
class gigl.src.common.models.pyg.homogeneous.GIN(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)[source]#

Bases: BasicHomogeneousGNN

Homogeneous GNN built from GIN (Graph Isomorphism Network) convolutions. Constructor arguments and forward() behavior are inherited from BasicHomogeneousGNN.

init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)[source]#
Parameters:
  • in_dim (Union[int, Tuple[int, int]])

  • out_dim (int)

  • edge_dim (Optional[int])

  • hid_dim (int)

  • num_layers (int)

Return type:

torch.nn.ModuleList

supports_edge_attr = False[source]#
supports_edge_weight = False[source]#
class gigl.src.common.models.pyg.homogeneous.GINE(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)[source]#

Bases: BasicHomogeneousGNN

Homogeneous GNN built from GINE convolutions, the edge-feature-aware variant of GIN. Constructor arguments and forward() behavior are inherited from BasicHomogeneousGNN.

init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)[source]#
Parameters:
  • in_dim (Union[int, Tuple[int, int]])

  • out_dim (int)

  • edge_dim (Optional[int])

  • hid_dim (int)

  • num_layers (int)

Return type:

torch.nn.ModuleList

supports_edge_attr = True[source]#
supports_edge_weight = False[source]#
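
Because supports_edge_attr and supports_edge_weight are class-level attributes, they can be inspected before a model is constructed; the selection helper below is a hypothetical illustration, not part of the module:

from gigl.src.common.models.pyg.homogeneous import GIN, GINE


def pick_gin_variant(has_edge_features: bool) -> type:
    """Hypothetical helper: prefer GINE when edge attributes are available."""
    if has_edge_features and GINE.supports_edge_attr:
        return GINE
    return GIN


model_cls = pick_gin_variant(has_edge_features=True)  # -> GINE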
class gigl.src.common.models.pyg.homogeneous.GraphSAGE(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)[source]#

Bases: BasicHomogeneousGNN

Homogeneous GNN built from GraphSAGE convolutions. Constructor arguments and forward() behavior are inherited from BasicHomogeneousGNN.

init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)[source]#
Parameters:
  • in_dim (Union[int, Tuple[int, int]])

  • out_dim (int)

  • edge_dim (Optional[int])

  • hid_dim (int)

  • num_layers (int)

Return type:

torch.nn.ModuleList

supports_edge_attr = False[source]#
supports_edge_weight = False[source]#
class gigl.src.common.models.pyg.homogeneous.Transformer(in_dim, hid_dim, out_dim, conv_kwargs={}, edge_dim=None, num_layers=DEFAULT_NUM_GNN_HOPS, activation=F.relu, activation_before_norm=False, activation_after_last_conv=False, dropout=0.0, batchnorm=False, linear_layer=False, return_emb=False, should_l2_normalize_embedding_layer_output=False, jk_mode=None, jk_lstm_dim=None, feature_interaction_layer=None, feature_embedding_layer=None, **kwargs)[source]#

Bases: BasicHomogeneousGNN

Homogeneous GNN built from graph transformer convolutions. Constructor arguments and forward() behavior are inherited from BasicHomogeneousGNN.

init_conv_layers(in_dim, out_dim, edge_dim, hid_dim, num_layers, **kwargs)[source]#
Parameters:
  • in_dim (Union[int, Tuple[int, int]])

  • out_dim (int)

  • edge_dim (Optional[int])

  • hid_dim (int)

  • num_layers (int)

Return type:

torch.nn.ModuleList

supports_edge_attr = True[source]#
supports_edge_weight = False[source]#
class gigl.src.common.models.pyg.homogeneous.TwoLayerGCN(in_dim, out_dim, hid_dim=16, is_training=True, should_l2_normalize_output=False, **kwargs)[source]#

Bases: torch.nn.Module, gigl.src.common.types.model.GnnModel

Simple two-layer GCN implemented with PyG constructs.

Parameters:
  • in_dim (int) – number of input features

  • out_dim (int) – number of output classes

  • hid_dim (int) – number of hidden features. Defaults to 16.

  • is_training (bool)

  • should_l2_normalize_output (bool)

  • **kwargs – additional keyword arguments passed to all GCNConv layers (torch_geometric.nn.conv.MessagePassing)

forward(data)[source]#
Parameters:

data (torch_geometric.data.Data)

Return type:

torch.Tensor

conv1[source]#
conv2[source]#
property graph_backend: gigl.src.common.types.model.GraphBackend[source]#
Return type:

gigl.src.common.types.model.GraphBackend

is_training = True[source]#
should_normalize = False[source]#
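
A minimal sketch of using TwoLayerGCN for node classification on a toy graph; the feature and class counts are arbitrary assumptions:

import torch
from torch_geometric.data import Data

from gigl.src.common.models.pyg.homogeneous import TwoLayerGCN

# Toy node-classification setup: 6 nodes, 10-dim features, 3 classes.
data = Data(
    x=torch.randn(6, 10),
    edge_index=torch.tensor([[0, 1, 2, 3, 4, 5],
                             [1, 2, 3, 4, 5, 0]], dtype=torch.long),
)

model = TwoLayerGCN(in_dim=10, out_dim=3, hid_dim=16)
logits = model(data)  # forward(data) -> torch.Tensor, one row per node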
gigl.src.common.models.pyg.homogeneous.logger[source]#