pgl.layers: Predefined graph neural network layers.
APIs for building graph neural network layers.
-
pgl.layers.gcn(gw, feature, hidden_size, activation, name, norm=None)
Implementation of graph convolutional networks (GCN).
This is an implementation of the paper Semi-Supervised Classification with Graph Convolutional Networks (https://arxiv.org/pdf/1609.02907.pdf).
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
feature – A tensor with shape (num_nodes, feature_size).
hidden_size – The hidden size for gcn.
activation – The activation for the output.
name – The gcn layer name.
norm – If norm is not None, the feature will be normalized. norm must be a tensor with shape (num_nodes,) and dtype float32.
- Returns
A tensor with shape (num_nodes, hidden_size)
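As a rough illustration of the propagation rule behind this layer — not PGL code, all names below are invented for the sketch — a dense NumPy version of H' = act(D^-1/2 (A + I) D^-1/2 H W):

```python
import numpy as np

# Illustrative sketch of the GCN propagation rule (Kipf & Welling, 2017).
# PGL operates on sparse graph wrappers; this dense version only shows the math.
def gcn_layer(adj, feature, weight, activation=np.tanh):
    # Add self-loops so each node keeps its own feature.
    a_hat = adj + np.eye(adj.shape[0])
    # Symmetric normalization by node degree.
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Propagate, then transform: (num_nodes, hidden_size).
    return activation(a_norm @ feature @ weight)

# A 3-node path graph, feature_size = 2, hidden_size = 4.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=np.float64)
feat = np.random.rand(3, 2)
w = np.random.rand(2, 4)
out = gcn_layer(adj, feat, w)
print(out.shape)  # (3, 4)
```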
-
pgl.layers.gat(gw, feature, hidden_size, activation, name, num_heads=8, feat_drop=0.6, attn_drop=0.6, is_test=False)
Implementation of graph attention networks (GAT).
This is an implementation of the paper Graph Attention Networks (https://arxiv.org/abs/1710.10903).
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
feature – A tensor with shape (num_nodes, feature_size).
hidden_size – The hidden size for gat.
activation – The activation for the output.
name – The gat layer name.
num_heads – The number of attention heads in gat.
feat_drop – Dropout rate for the node features.
attn_drop – Dropout rate for the attention coefficients.
is_test – Whether in test phase.
- Returns
A tensor with shape (num_nodes, hidden_size * num_heads)
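A single attention head can be sketched in NumPy as follows. This is an illustration of the attention mechanism from the GAT paper (using its decomposed form a^T [Wh_i || Wh_j] = a_l·Wh_i + a_r·Wh_j), not the PGL implementation; dropout is omitted and all names are invented:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_head(adj, feature, weight, attn_l, attn_r):
    # Linear projection: (num_nodes, hidden_size).
    h = feature @ weight
    # e_ij = LeakyReLU(a_l . h_i + a_r . h_j), the paper's decomposed
    # form of a^T [h_i || h_j].
    el = h @ attn_l  # (num_nodes,)
    er = h @ attn_r
    e = leaky_relu(el[:, None] + er[None, :])
    # Mask non-edges (self-loops included), then softmax per row.
    mask = adj + np.eye(adj.shape[0])
    e = np.where(mask > 0, e, -np.inf)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    # Weighted sum of neighbor features: (num_nodes, hidden_size).
    return alpha @ h

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=np.float64)
feat = np.random.rand(3, 2)
w = np.random.rand(2, 4)
out = gat_head(adj, feat, w, np.random.rand(4), np.random.rand(4))
print(out.shape)  # (3, 4)
```

With num_heads heads, the per-head outputs are concatenated, which is why the layer returns (num_nodes, hidden_size * num_heads).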
-
pgl.layers.gin(gw, feature, hidden_size, activation, name, init_eps=0.0, train_eps=False)
Implementation of the Graph Isomorphism Network (GIN) layer.
This is an implementation of the paper How Powerful are Graph Neural Networks? (https://arxiv.org/pdf/1810.00826.pdf).
In this implementation, all MLPs have 2 layers, and batch normalization is applied on every hidden layer.
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
feature – A tensor with shape (num_nodes, feature_size).
hidden_size – The hidden size for gin.
activation – The activation for the output.
name – The gin layer name.
init_eps – float, optional. Initial \(\epsilon\) value; default is 0.
train_eps – bool, optional. If True, \(\epsilon\) will be a learnable parameter.
- Returns
A tensor with shape (num_nodes, hidden_size).
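The GIN update rule is h_i' = MLP((1 + ε) h_i + Σ_{j∈N(i)} h_j). A minimal NumPy sketch of that rule (batch normalization omitted; names invented, not PGL code):

```python
import numpy as np

def gin_layer(adj, feature, w1, w2, eps=0.0):
    # (1 + eps) * own feature plus the sum over neighbors.
    agg = (1.0 + eps) * feature + adj @ feature
    # Two-layer MLP with ReLU, matching the paper's 2-layer setup.
    hidden = np.maximum(agg @ w1, 0.0)
    return hidden @ w2

# Two connected nodes with one-hot features; identity weights make the
# result easy to check by hand.
adj = np.array([[0, 1], [1, 0]], dtype=np.float64)
feat = np.array([[1.0, 0.0], [0.0, 1.0]])
out = gin_layer(adj, feat, np.eye(2), np.eye(2), eps=0.0)
print(out)  # [[1. 1.] [1. 1.]]
```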
-
pgl.layers.gaan(gw, feature, hidden_size_a, hidden_size_v, hidden_size_m, hidden_size_o, heads, name)
Implementation of GaAN (Gated Attention Networks, https://arxiv.org/abs/1803.07294).
-
pgl.layers.gen_conv(gw, feature, name, beta=None)
Implementation of GENeralized Graph Convolution (GENConv); see the paper “DeeperGCN: All You Need to Train Deeper GCNs” (https://arxiv.org/pdf/2006.07739.pdf).
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
feature – A tensor with shape (num_nodes, feature_size).
beta – Temperature of the softmax aggregator: a value in [0, +infinity], the string “dynamic”, or None.
name – The deeper gcn layer name.
- Returns
A tensor with shape (num_nodes, feature_size)
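The role of beta can be seen in the paper's SoftMax aggregator: neighbor features are combined with per-channel softmax weights at temperature beta, which interpolates between mean pooling (beta → 0) and max pooling (beta → ∞). A NumPy sketch of that aggregator alone (not the full PGL layer; names invented):

```python
import numpy as np

def softmax_agg(neighbor_feats, beta=1.0):
    # Per-channel softmax over the neighbor axis, scaled by beta.
    # Small beta approaches mean pooling; large beta approaches max pooling.
    z = beta * neighbor_feats
    w = np.exp(z - z.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    return (w * neighbor_feats).sum(axis=0)

# Two neighbors of a node, feature_size = 2.
neigh = np.array([[1.0, 4.0],
                  [3.0, 2.0]])
out_small_beta = softmax_agg(neigh, beta=1e-6)   # ~ mean: [2., 3.]
out_large_beta = softmax_agg(neigh, beta=100.0)  # ~ max:  [3., 4.]
print(out_small_beta, out_large_beta)
```

“dynamic” in the PGL signature suggests a learnable beta, consistent with the paper's learnable temperature.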
-
pgl.layers.appnp(gw, feature, edge_dropout=0, alpha=0.2, k_hop=10)
Implementation of APPNP from “Predict then Propagate: Graph Neural Networks meet Personalized PageRank” (ICLR 2019).
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
feature – A tensor with shape (num_nodes, feature_size).
edge_dropout – Edge dropout rate.
alpha – Teleport probability of the personalized PageRank propagation.
k_hop – Number of propagation steps.
- Returns
A tensor with shape (num_nodes, feature_size)
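APPNP iterates Z^(k+1) = (1 − α) Â Z^(k) + α H, mixing the propagated signal with the original prediction H at each step. A dense NumPy sketch of the propagation (edge dropout omitted; names invented, not PGL code):

```python
import numpy as np

def appnp_propagate(adj, feature, alpha=0.2, k_hop=10):
    # Symmetrically normalized adjacency with self-loops.
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Personalized-PageRank-style propagation: every step teleports
    # back to the original features with probability alpha.
    z = feature
    for _ in range(k_hop):
        z = (1.0 - alpha) * (a_norm @ z) + alpha * feature
    return z

adj = np.array([[0, 1], [1, 0]], dtype=np.float64)
feat = np.array([[1.0], [0.0]])
out = appnp_propagate(adj, feat, alpha=0.2, k_hop=10)
print(out.shape)  # (2, 1)
```

Note that with alpha = 1 the propagation is a no-op (pure teleport), and the output shape always matches the input feature shape.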
-
pgl.layers.gcnii(gw, feature, name, activation=None, alpha=0.5, lambda_l=0.5, k_hop=1, dropout=0.5, is_test=False)
Implementation of GCNII from “Simple and Deep Graph Convolutional Networks” (https://arxiv.org/pdf/2007.02133.pdf).
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
feature – A tensor with shape (num_nodes, feature_size).
name – The gcnii layer name.
activation – The activation for the output.
alpha – The hyperparameter alpha in the paper.
lambda_l – The hyperparameter lambda in the paper.
k_hop – Number of layers for gcnii.
dropout – Feature dropout rate.
is_test – Whether in test phase.
- Returns
A tensor with shape (num_nodes, feature_size)
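One GCNII layer combines an initial-residual connection (weight alpha to the input H^(0)) with an identity mapping on the weight matrix (strength β_l = log(λ/l + 1) at depth l). A dense NumPy sketch of a single layer, following the paper's formula — not PGL code, names invented, dropout omitted:

```python
import numpy as np

def gcnii_layer(a_norm, h, h0, weight, layer, alpha=0.5, lambda_l=0.5):
    # beta_l = log(lambda_l / l + 1): how much of the trained weight
    # matrix is applied at depth l (the rest stays an identity map).
    beta = np.log(lambda_l / layer + 1.0)
    # Initial residual: mix the propagated signal with the input H^(0).
    support = (1.0 - alpha) * (a_norm @ h) + alpha * h0
    out = support @ ((1.0 - beta) * np.eye(weight.shape[0]) + beta * weight)
    return np.maximum(out, 0.0)  # ReLU activation

# Two connected nodes, feature_size = 3.
adj = np.array([[0, 1], [1, 0]], dtype=np.float64)
a_hat = adj + np.eye(2)
d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
h0 = np.random.rand(2, 3)
w = np.random.rand(3, 3)
h = gcnii_layer(a_norm, h0, h0, w, layer=1)
print(h.shape)  # (2, 3)
```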
-
class pgl.layers.Set2Set(input_dim, n_iters, n_layers)
Bases: object
Implementation of the set2set pooling operator.
This is an implementation of the paper Order Matters: Sequence to Sequence for Sets (https://arxiv.org/pdf/1511.06391.pdf).
-
pgl.layers.graph_pooling(gw, node_feat, pool_type)
Implementation of graph pooling.
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
node_feat – A tensor with shape (num_nodes, feature_size).
pool_type – The type of pooling (“sum”, “average”, “min”).
- Returns
A tensor with shape (num_graph, feature_size)
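Graph pooling reduces the node features of each graph in a batch to a single vector. A NumPy sketch of the idea, assuming each node's graph membership is given by a graph-id array (an assumption for this sketch; PGL tracks membership inside the graph wrapper):

```python
import numpy as np

def graph_pooling(node_feat, graph_id, pool_type="sum"):
    # Reduce node features to one vector per graph, where graph_id[i]
    # is the graph that node i belongs to.
    num_graphs = int(graph_id.max()) + 1
    out = np.zeros((num_graphs, node_feat.shape[1]))
    for g in range(num_graphs):
        seg = node_feat[graph_id == g]
        if pool_type == "sum":
            out[g] = seg.sum(axis=0)
        elif pool_type == "average":
            out[g] = seg.mean(axis=0)
        elif pool_type == "min":
            out[g] = seg.min(axis=0)
    return out

feat = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
gid = np.array([0, 0, 1])  # nodes 0-1 in graph 0, node 2 in graph 1
print(graph_pooling(feat, gid, "sum"))      # [[4. 6.] [5. 6.]]
print(graph_pooling(feat, gid, "average"))  # [[2. 3.] [5. 6.]]
```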
-
pgl.layers.graph_norm(gw, feature)
Implementation of graph normalization.
Reference paper: Benchmarking Graph Neural Networks.
Each node's features are divided by sqrt(num_nodes) of the graph it belongs to.
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
feature – A tensor with shape (num_nodes, hidden_size)
- Returns
A tensor with shape (num_nodes, hidden_size)
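The per-graph scaling can be sketched in a few lines of NumPy, again assuming graph membership is given as a graph-id array (PGL derives this from the graph wrapper):

```python
import numpy as np

def graph_norm(feature, graph_id):
    # Divide each node's features by sqrt(num_nodes) of its own graph.
    counts = np.bincount(graph_id)           # nodes per graph
    scale = 1.0 / np.sqrt(counts[graph_id])  # per-node scale factor
    return feature * scale[:, None]

feat = np.ones((3, 2))
gid = np.array([0, 0, 1])  # graph 0 has 2 nodes, graph 1 has 1 node
out = graph_norm(feat, gid)
print(out)  # rows 0-1 scaled by 1/sqrt(2), row 2 unchanged
```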
-
pgl.layers.graph_gather(gw, feature, index)
Implementation of graph gather.
Gathers the feature at the given index for each graph.
- Parameters
gw – Graph wrapper object (StaticGraphWrapper or GraphWrapper).
feature – A tensor with shape (num_nodes, hidden_size).
index (int32) – A tensor of rank K whose first dimension denotes the graph, with shape (num_graph,) or (num_graph, k1, k2, k3, …, kn). Warning: negative indices are not supported.
- Returns
A tensor with shape (num_graph, k1, k2, k3, …, kn, hidden_size)
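For the rank-1 index case, the operation amounts to picking one node's feature per graph. A NumPy sketch under the assumption that the index is local to each graph and that node ownership is described by per-graph offsets (both are assumptions for this sketch; PGL resolves ownership through the graph wrapper):

```python
import numpy as np

def graph_gather(node_feat, graph_lod, index):
    # graph_lod holds cumulative node offsets, e.g. [0, 2, 5] means
    # graph 0 owns nodes 0..1 and graph 1 owns nodes 2..4.
    # index[g] selects a node local to graph g; negative indices are
    # not supported, matching the layer's warning.
    offsets = np.asarray(graph_lod[:-1])
    return node_feat[offsets + index]

feat = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
lod = [0, 2, 5]
idx = np.array([1, 2])  # node 1 of graph 0, node 2 of graph 1
print(graph_gather(feat, lod, idx))  # [[1.] [4.]]
```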