pgl.nn

Graph Convolution Layers

This package implements common layers to help build graph neural networks.

class pgl.nn.conv.GCNConv(input_size, output_size, activation=None, norm=True)[source]

Bases: paddle.fluid.dygraph.layers.Layer

Implementation of graph convolutional neural networks (GCN)

This is an implementation of the paper Semi-Supervised Classification with Graph Convolutional Networks (https://arxiv.org/pdf/1609.02907.pdf).

Parameters
  • input_size – The size of the inputs.

  • output_size – The size of the outputs.

  • activation – The activation for the output.

  • norm – If norm is True, then the feature will be normalized.

forward(graph, feature, norm=None)[source]
Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, input_size)

  • norm – (default None). If norm is not None, the feature will be normalized by the given norm. If norm is None and self.norm is True, the Laplacian degree norm is used.

Returns

A tensor with shape (num_nodes, output_size)
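
Example

A minimal usage sketch (not part of the original reference); the toy graph, feature sizes, and the pgl.Graph.tensor() conversion to paddle tensors are illustrative assumptions.

    import paddle
    import pgl

    # Toy graph: 4 nodes connected in a directed cycle (assumed example data).
    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    feature = paddle.randn([4, 16])                          # (num_nodes, input_size)
    gcn = pgl.nn.conv.GCNConv(input_size=16, output_size=8)
    out = gcn(graph, feature)                                # (num_nodes, output_size) == (4, 8)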

class pgl.nn.conv.GATConv(input_size, hidden_size, feat_drop=0.6, attn_drop=0.6, num_heads=1, concat=True, activation=None)[source]

Bases: paddle.fluid.dygraph.layers.Layer

Implementation of graph attention networks (GAT)

This is an implementation of the paper Graph Attention Networks (https://arxiv.org/abs/1710.10903).

Parameters
  • input_size – The size of the inputs.

  • hidden_size – The hidden size for GAT.

  • activation – (default None) The activation for the output.

  • num_heads – (default 1) The number of attention heads.

  • feat_drop – (default 0.6) Dropout rate for input features.

  • attn_drop – (default 0.6) Dropout rate for attention.

  • concat – (default True) Whether to concatenate the output heads or average them.

forward(graph, feature)[source]
Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, input_size)

Returns

If concat=True, return a tensor with shape (num_nodes, num_heads * hidden_size); otherwise the heads are averaged and a tensor with shape (num_nodes, hidden_size) is returned.
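
Example

A minimal sketch showing how concat changes the output shape; the toy graph and sizes are assumed example data.

    import paddle
    import pgl

    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    feature = paddle.randn([4, 16])
    gat = pgl.nn.conv.GATConv(input_size=16, hidden_size=8, num_heads=4, concat=True)
    out = gat(graph, feature)          # heads concatenated: (4, 4 * 8) == (4, 32)

    gat_avg = pgl.nn.conv.GATConv(input_size=16, hidden_size=8, num_heads=4, concat=False)
    out_avg = gat_avg(graph, feature)  # heads averaged: (4, 8)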

class pgl.nn.conv.APPNP(alpha=0.2, k_hop=10)[source]

Bases: paddle.fluid.dygraph.layers.Layer

Implementation of APPNP from “Predict then Propagate: Graph Neural Networks meet Personalized PageRank” (ICLR 2019).

Parameters
  • k_hop – Number of propagation steps (K in the paper).

  • alpha – The hyperparameter of alpha in the paper.

Returns

A tensor with the same shape as the input feature, (num_nodes, input_size).

forward(graph, feature, norm=None)[source]
Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, input_size)

  • norm – (default None). If norm is not None, the feature will be normalized by the given norm. If norm is None, the Laplacian degree norm is used.

Returns

A tensor with shape (num_nodes, input_size); APPNP propagates features without changing their size.
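
Example

A minimal sketch of the “predict then propagate” pattern; the linear prediction layer and the toy data are assumptions for illustration.

    import paddle
    import pgl

    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    feature = paddle.randn([4, 16])
    predict = paddle.nn.Linear(16, 7)   # per-node prediction head (assumed, not part of APPNP)
    appnp = pgl.nn.conv.APPNP()         # defaults: alpha=0.2, k_hop=10

    logits = predict(feature)           # (4, 7)
    out = appnp(graph, logits)          # propagated, same shape: (4, 7)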

class pgl.nn.conv.GCNII(hidden_size, activation=None, lambda_l=0.5, alpha=0.2, k_hop=10, dropout=0.6)[source]

Bases: paddle.fluid.dygraph.layers.Layer

Implementation of GCNII of “Simple and Deep Graph Convolutional Networks”

paper: https://arxiv.org/pdf/2007.02133.pdf

Parameters
  • hidden_size – The size of inputs and outputs.

  • activation – The activation for the output.

  • k_hop – Number of layers for GCNII.

  • lambda_l – The hyperparameter of lambda in the paper.

  • alpha – The hyperparameter of alpha in the paper.

  • dropout – Feature dropout rate.

forward(graph, feature, norm=None)[source]
Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, input_size)

  • norm – (default None). If norm is not None, the feature will be normalized by the given norm. If norm is None, the Laplacian degree norm is used.

Returns

A tensor with shape (num_nodes, hidden_size)
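
Example

A minimal sketch; since hidden_size is the size of both inputs and outputs, the input feature is first projected to that size. The toy data and the projection layer are assumptions.

    import paddle
    import pgl

    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    proj = paddle.nn.Linear(16, 64)       # project raw features to hidden_size (assumed)
    hidden = proj(paddle.randn([4, 16]))  # (num_nodes, hidden_size)

    gcnii = pgl.nn.conv.GCNII(hidden_size=64)
    out = gcnii(graph, hidden)            # (num_nodes, hidden_size) == (4, 64)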

class pgl.nn.conv.TransformerConv(input_size, hidden_size, num_heads=4, feat_drop=0.6, attn_drop=0.6, concat=True, skip_feat=True, gate=False, layer_norm=True, activation='relu')[source]

Bases: paddle.fluid.dygraph.layers.Layer

forward(graph, feature, edge_feat=None)[source]
Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, input_size)

  • edge_feat – (default None) A tensor of edge features; if provided, it is incorporated into the attention computation.

reduce_attention(msg)[source]

send_attention(src_feat, dst_feat, edge_feat)[source]

send_recv(graph, q, k, v, edge_feat)[source]
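
Example

A minimal sketch based on the forward signature above; the toy data and the expected output shape are assumptions.

    import paddle
    import pgl

    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    feature = paddle.randn([4, 16])
    conv = pgl.nn.conv.TransformerConv(input_size=16, hidden_size=8, num_heads=4)
    out = conv(graph, feature)  # with concat=True (default), assumed shape (4, 4 * 8) == (4, 32)
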
class pgl.nn.conv.GINConv(input_size, output_size, activation=None, init_eps=0.0, train_eps=False)[source]

Bases: paddle.fluid.dygraph.layers.Layer

Implementation of Graph Isomorphism Network (GIN) layer.

This is an implementation of the paper How Powerful are Graph Neural Networks? (https://arxiv.org/pdf/1810.00826.pdf). In their implementation, all MLPs have 2 layers. Batch normalization is applied on every hidden layer.

Parameters
  • input_size – The size of input.

  • output_size – The size of output.

  • activation – The activation for the output.

  • init_eps – (float, optional) Initial \(\epsilon\) value. Default is 0.

  • train_eps – (bool, optional) If True, \(\epsilon\) will be a learnable parameter.

forward(graph, feature)[source]
Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, input_size)

Returns

A tensor with shape (num_nodes, output_size)
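
Example

A minimal sketch; the toy graph and sizes are assumed example data.

    import paddle
    import pgl

    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    feature = paddle.randn([4, 16])
    gin = pgl.nn.conv.GINConv(input_size=16, output_size=32, train_eps=True)
    out = gin(graph, feature)  # (num_nodes, output_size) == (4, 32)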

class pgl.nn.conv.GraphSageConv(input_size, hidden_size, aggr_func='sum')[source]

Bases: paddle.fluid.dygraph.layers.Layer

GraphSAGE is a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data.

Paper reference: Hamilton, Will, Zhitao Ying, and Jure Leskovec. “Inductive representation learning on large graphs.” Advances in neural information processing systems. 2017.

Parameters
  • input_size – The size of the inputs.

  • hidden_size – The size of the outputs.

  • aggr_func – (default “sum”) Aggregation function for GraphSAGE [“sum”, “mean”, “max”, “min”].

forward(graph, feature, act=None)[source]
Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, input_size)

  • act – (default None) Activation applied to the outputs before normalization.

Returns

A tensor with shape (num_nodes, hidden_size)
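
Example

A minimal sketch; the toy graph and sizes are assumed example data.

    import paddle
    import pgl

    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    feature = paddle.randn([4, 16])
    sage = pgl.nn.conv.GraphSageConv(input_size=16, hidden_size=32, aggr_func="mean")
    out = sage(graph, feature)  # (num_nodes, hidden_size) == (4, 32)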

class pgl.nn.conv.PinSageConv(input_size, hidden_size, aggr_func='sum')[source]

Bases: paddle.fluid.dygraph.layers.Layer

PinSage combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information.

Paper reference: Ying, Rex, et al. “Graph convolutional neural networks for web-scale recommender systems.” Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018.

Parameters
  • input_size – The size of the inputs.

  • hidden_size – The size of the outputs.

  • aggr_func – (default “sum”) Aggregation function for PinSage [“sum”, “mean”, “max”, “min”].

forward(graph, nfeat, efeat, act=None)[source]
Parameters
  • graph – pgl.Graph instance.

  • nfeat – A tensor with shape (num_nodes, input_size)

  • efeat – A tensor with shape (num_edges, 1) denoting the edge weights.

  • act – (default None) Activation applied to the outputs before normalization.

Returns

A tensor with shape (num_nodes, hidden_size)
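
Example

A minimal sketch; the all-ones edge-weight tensor and the toy graph are placeholder assumptions.

    import paddle
    import pgl

    # Toy graph with 4 nodes and 4 directed edges (assumed example data).
    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    nfeat = paddle.randn([4, 16])    # node features, (num_nodes, input_size)
    efeat = paddle.ones([4, 1])      # one weight per edge, (num_edges, 1)

    pinsage = pgl.nn.conv.PinSageConv(input_size=16, hidden_size=32)
    out = pinsage(graph, nfeat, efeat)  # (num_nodes, hidden_size) == (4, 32)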

Graph Pooling Layers

This package implements common pooling layers to help build graph neural networks.

class pgl.nn.pool.GraphPool[source]

Bases: paddle.fluid.dygraph.layers.Layer

Implementation of graph pooling.

This layer aggregates node features into graph-level representations using the given pool type.

Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, feature_size).

  • pool_type – The type of pooling (“sum”, “mean”, “min”, “max”).

Returns

A tensor with shape (num_graph, feature_size)

forward(graph, feature, pool_type)[source]
Parameters
  • graph – pgl.Graph instance.

  • feature – A tensor with shape (num_nodes, feature_size).

  • pool_type – The type of pooling (“sum”, “mean”, “min”, “max”).

Returns

A tensor with shape (num_graph, feature_size)
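
Example

A minimal sketch pooling a single graph; the toy data and the assumption that a single (unbatched) graph pools to one row are illustrative, not part of this reference.

    import paddle
    import pgl

    graph = pgl.Graph(num_nodes=4, edges=[(0, 1), (1, 2), (2, 3), (3, 0)])
    graph.tensor()  # assumed PGL 2.x API: convert numpy storage to paddle tensors

    feature = paddle.randn([4, 16])
    pool = pgl.nn.pool.GraphPool()
    graph_repr = pool(graph, feature, pool_type="sum")  # assumed (num_graph, feature_size) == (1, 16)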