gigl.src.common.models.pyg.heterogeneous#

Classes#

HGT

Heterogeneous Graph Transformer model. Paper: https://arxiv.org/pdf/2003.01332.pdf

SimpleHGN

SimpleHGN model, from the paper: https://arxiv.org/pdf/2112.14936

Module Contents#

class gigl.src.common.models.pyg.heterogeneous.HGT(node_type_to_feat_dim_map, edge_type_to_feat_dim_map, hid_dim, out_dim=128, num_layers=2, num_heads=2, should_l2_normalize_embedding_layer_output=False, feature_embedding_layers=None, **kwargs)[source]#

Bases: torch.nn.Module

Heterogeneous Graph Transformer model. Paper: https://arxiv.org/pdf/2003.01332.pdf

This implementation is based on the example from: pyg-team/pytorch_geometric

Parameters:
  • node_type_to_feat_dim_map (Dict[NodeType, int]) – Dictionary mapping node types to their input dimensions.

  • edge_type_to_feat_dim_map (Dict[EdgeType, int]) – Dictionary mapping edge types to their feature dimensions.

  • hid_dim (int) – Hidden dimension size.

  • out_dim (int, optional) – Output dimension size. Defaults to 128.

  • num_layers (int, optional) – Number of layers. Defaults to 2.

  • num_heads (int, optional) – Number of attention heads. Defaults to 2.

  • should_l2_normalize_embedding_layer_output (bool, optional) – Whether to L2-normalize the output of the embedding layer. Defaults to False.

  • feature_embedding_layers (optional) – Defaults to None.
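A minimal construction sketch, not taken from the library's docs: the toy "user"/"item" schema, the dimension values, and the string-based NodeType/Relation/EdgeType construction below are illustrative assumptions; check gigl.src.common.types.graph_data for the exact constructors.

# Hypothetical usage sketch for HGT on a toy "user"/"item" graph.
# NodeType/Relation/EdgeType construction from strings is an assumption;
# see gigl.src.common.types.graph_data for the actual API.
from gigl.src.common.models.pyg.heterogeneous import HGT
from gigl.src.common.types.graph_data import EdgeType, NodeType, Relation

user, item = NodeType("user"), NodeType("item")
clicked = EdgeType(user, Relation("clicked"), item)

model = HGT(
    node_type_to_feat_dim_map={user: 64, item: 32},  # input dims per node type
    edge_type_to_feat_dim_map={clicked: 0},          # feature dims per edge type
    hid_dim=256,
    out_dim=128,     # size of the returned embeddings
    num_layers=2,
    num_heads=2,
)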
forward(data, output_node_types, device)[source]#

Runs the forward pass of the module.

Parameters:
  • data (torch_geometric.data.hetero_data.HeteroData) – Input HeteroData object.

  • output_node_types (List[gigl.src.common.types.graph_data.NodeType]) – List of node types for which to return the output embeddings.

  • device (torch.device)

Returns:

Dictionary with node types as keys and output tensors as values.

Return type:

Dict[NodeType, torch.Tensor]
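A forward-pass sketch under the same assumptions, continuing from the construction sketch above; the feature tensors and edge indices are random placeholders.

# Hypothetical sketch: build a small HeteroData whose keys match the
# node/edge type maps the model was constructed with, then run forward.
import torch
from torch_geometric.data import HeteroData

data = HeteroData()
data["user"].x = torch.randn(10, 64)  # 10 "user" nodes, 64-dim features
data["item"].x = torch.randn(20, 32)  # 20 "item" nodes, 32-dim features
src = torch.randint(0, 10, (30,))     # source "user" node indices
dst = torch.randint(0, 20, (30,))     # destination "item" node indices
data["user", "clicked", "item"].edge_index = torch.stack([src, dst])

out = model(data, output_node_types=[user], device=torch.device("cpu"))
# out is a Dict[NodeType, torch.Tensor]; out[user] has shape [10, 128]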

convs[source]#
feature_embedding_layers = None[source]#
lin[source]#
lin_dict[source]#
should_l2_normalize_embedding_layer_output = False[source]#
class gigl.src.common.models.pyg.heterogeneous.SimpleHGN(node_type_to_feat_dim_map, edge_type_to_feat_dim_map, node_hid_dim, edge_hid_dim, edge_type_dim, node_out_dim=128, num_layers=2, num_heads=2, should_use_node_residual=True, negative_slope=0.2, dropout=0.0, activation=F.elu, should_l2_normalize_embedding_layer_output=False, **kwargs)[source]#

Bases: torch.nn.Module


SimpleHGN layer from the paper: https://arxiv.org/pdf/2112.14936

Parameters:
  • node_type_to_feat_dim_map (Dict[NodeType, int]) – Dictionary mapping node types to their input dimensions.

  • edge_type_to_feat_dim_map (Dict[EdgeType, int]) – Dictionary mapping edge types to their feature dimensions.

  • node_hid_dim (int) – Hidden dimension size for node features.

  • edge_hid_dim (int) – Hidden dimension size for edge features.

  • edge_type_dim (int) – Hidden dimension size for edge types.

  • node_out_dim (int) – Output dimension size for node features. Defaults to 128.

  • num_layers (int) – Number of layers. Defaults to 2.

  • num_heads (int) – Number of attention heads. Defaults to 2.

  • should_use_node_residual (bool) – Whether to use node residual. Defaults to True.

  • negative_slope (float) – Negative slope used in the LeakyReLU. Defaults to 0.2.

  • dropout (float) – Dropout rate. Defaults to 0.0.

  • activation – Activation function. Defaults to F.elu.

  • should_l2_normalize_embedding_layer_output (bool, optional) – Whether to L2-normalize the output of the embedding layer. Defaults to False.
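As with HGT, a hypothetical construction sketch, reusing the user, item, and clicked types assumed in the HGT example above; all dimension values are illustrative, not recommended settings.

# Hypothetical sketch: SimpleHGN additionally learns edge-type
# representations, so it takes hidden sizes for edges as well as nodes.
import torch.nn.functional as F
from gigl.src.common.models.pyg.heterogeneous import SimpleHGN

model = SimpleHGN(
    node_type_to_feat_dim_map={user: 64, item: 32},
    edge_type_to_feat_dim_map={clicked: 0},
    node_hid_dim=256,
    edge_hid_dim=64,    # hidden size for edge features
    edge_type_dim=32,   # size of the learned edge-type representation
    node_out_dim=128,   # size of the returned node embeddings
    num_layers=2,
    num_heads=2,
    should_use_node_residual=True,
    negative_slope=0.2,
    dropout=0.1,
    activation=F.elu,
)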

forward(data, output_node_types, device)[source]#

Runs the forward pass of the module.

Parameters:
  • data (torch_geometric.data.hetero_data.HeteroData) – Input HeteroData object.

  • output_node_types (List[gigl.src.common.types.graph_data.NodeType]) – List of node types for which to return the output embeddings.

  • device (torch.device)

Returns:

Dictionary with node types as keys and output tensors as values.

Return type:

Dict[gigl.src.common.types.graph_data.NodeType, torch.Tensor]
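The calling convention matches HGT.forward; a sketch reusing the HeteroData built in the HGT example above:

out = model(data, output_node_types=[user, item], device=torch.device("cpu"))
# out[item] has shape [20, 128] (node_out_dim)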

activation[source]#
convs[source]#
edge_type_lin_dict[source]#
lin[source]#
node_type_lin_dict[source]#
num_layers = 2[source]#
should_have_edge_features: bool[source]#
should_l2_normalize_embedding_layer_output = False[source]#