greatx.nn.models
greatx.nn.models.surrogate
Surrogate: Base class for attackers or defenders that require a surrogate model for estimating labels or computing gradient information.
- class Surrogate(device: str = 'cpu')[source]
Base class for attackers or defenders that require a surrogate model for estimating labels or computing gradient information.
- Parameters:
device (str, optional) – the device on which the model runs, by default “cpu”
- setup_surrogate(surrogate: Module, *, tau: float = 1.0, freeze: bool = True, required: Optional[Union[Module, Tuple[Module]]] = None) Surrogate [source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
tau (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns:
the class itself
- Return type:
Surrogate
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of required
- clip_grad(grad: Tensor, grad_clip: Optional[float]) Tensor [source]
Gradient clipping function
- Parameters:
grad (Tensor) – the input gradients to clip
grad_clip (Optional[float]) – the clipping threshold of the gradients
- Returns:
the clipped gradients
- Return type:
Tensor
- estimate_self_training_labels(nodes: Optional[Tensor] = None) Tensor [source]
Estimate the labels of nodes using the trained surrogate model.
- Parameters:
nodes (Optional[Tensor], optional) – the input nodes; if None, all nodes in the graph are used, by default None
- Returns:
the labels of the input nodes.
- Return type:
Tensor
- freeze_surrogate() Surrogate [source]
Freeze the parameters of the surrogate model.
- Returns:
the class itself
- Return type:
Surrogate
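A minimal usage sketch (the trained GCN, the tau value, and the standalone use of the mixin are illustrative assumptions; in practice, Surrogate is mixed into attackers and defenses that also hold the graph data on which estimate_self_training_labels() operates):
>>> from greatx.nn.models import GCN
>>> from greatx.nn.models.surrogate import Surrogate
>>> surrogate_model = GCN(100, 10)        # assumed to be already trained
>>> helper = Surrogate(device='cpu')
>>> helper = helper.setup_surrogate(surrogate_model, tau=1.0, freeze=True,
...                                 required=GCN)    # returns the class itself
>>> labels = helper.estimate_self_training_labels()  # pseudo-labels for all nodes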
greatx.nn.models.supervised
GCN: Graph Convolution Network (GCN) from the "Semi-supervised Classification with Graph Convolutional Networks" paper (ICLR'17)
SGC: The Simple Graph Convolution Network (SGC) from the "Simplifying Graph Convolutional Networks" paper (ICML'19)
SSGC: The Simple Spectral Graph Convolution Network (SSGC) from the "Simple Spectral Graph Convolution" paper (ICLR'21)
DGC: The Decoupled Graph Convolution Network (DGC) from the "Dissecting the Diffusion Process in Linear Graph Convolutional Networks" paper (NeurIPS'21)
GAT: Graph Attention Networks (GAT) from the "Graph Attention Networks" paper (ICLR'18)
APPNP: Implementation of Approximated personalized propagation of neural predictions (APPNP) from the "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" paper (ICLR'19)
DAGNN: The DAGNN operator from the "Towards Deeper Graph Neural Networks" paper (KDD'20)
JKNet: Implementation of Graph Convolution Network with Jumping Knowledge (JKNet) from the "Representation Learning on Graphs with Jumping Knowledge Networks" paper (ICML'18)
TAGCN: Topology adaptive graph convolution network (TAGCN) from the "Topology Adaptive Graph Convolutional Networks" paper (arXiv'17)
NLGCN: Non-Local Graph Neural Networks (NLGNN) with GCN as backbone from the "Non-Local Graph Neural Networks" paper (TPAMI'22)
NLGAT: Non-Local Graph Neural Networks (NLGNN) with GAT as backbone from the "Non-Local Graph Neural Networks" paper (TPAMI'22)
NLMLP: Non-Local Graph Neural Networks (NLGNN) with MLP as backbone from the "Non-Local Graph Neural Networks" paper (TPAMI'22)
LogisticRegression: Simple logistic regression model for self-supervised/unsupervised learning.
MLP: Implementation of Multi-layer Perceptron (MLP) or Feed-forward Neural Network (FNN).
MedianGCN: Graph Convolution Network (GCN) with median aggregation (MedianGCN) from the "Understanding Structural Vulnerability in Graph Convolutional Networks" paper (IJCAI'21)
RobustGCN: Robust graph convolutional network (RobustGCN) from the "Robust Graph Convolutional Networks Against Adversarial Attacks" paper (KDD'19)
AirGNN: Graph Neural Networks with Adaptive Residual (AirGNN) from the "Graph Neural Networks with Adaptive Residual" paper (NeurIPS'21)
ElasticGNN: Graph Neural Networks with elastic message passing (ElasticGNN) from the "Elastic Graph Neural Networks" paper (ICML'21)
SoftMedianGCN: Graph Convolution Network (GCN) with soft median aggregation (SoftMedianGCN) from the "Robustness of Graph Neural Networks at Scale" paper (NeurIPS'21)
SimPGCN: Similarity Preserving Graph Convolution Network (SimPGCN) from the "Node Similarity Preserving Graph Convolutional Networks" paper (WSDM'21)
GNNGUARD: Graph Convolution Network (GCN) with greatx.defense.GNNGUARD from the "GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks" paper (NeurIPS'20)
SAT: Graph Convolution Network with Spectral Adversarial Training (SAT) from the "Spectral Adversarial Training for Robust Graph Neural Network" paper (arXiv'22)
RTGCN: The robust tensor graph convolutional operator from the "Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation" paper (KDD'22)
SpikingGCN: The spiking graph convolutional neural network from the "Spiking Graph Convolutional Networks" paper (IJCAI'22)
- class GCN(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], dropout: float = 0.5, bias: bool = True, bn: bool = False, normalize: bool = True)[source]
Graph Convolution Network (GCN) from the “Semi-supervised Classification with Graph Convolutional Networks” paper (ICLR’17)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
Examples
>>> # GCN with one hidden layer
>>> model = GCN(100, 10)
>>> # GCN with two hidden layers
>>> model = GCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # GCN with two hidden layers, without first activation
>>> model = GCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # GCN with deep architectures, each layer has elu activation
>>> model = GCN(100, 10, hids=[16]*8, acts=['elu'])
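GreatX models follow the PyG calling convention of forward(x, edge_index, edge_weight=None) (see, e.g., JKNet.forward below), so a forward pass looks like the following sketch; the toy tensors are illustrative:
>>> import torch
>>> from greatx.nn.models import GCN
>>> model = GCN(100, 10)
>>> x = torch.randn(4, 100)                            # 4 nodes, 100 features
>>> edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])  # 3 directed edges
>>> logits = model(x, edge_index)                      # shape [4, 10]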
- class SGC(in_channels, out_channels, hids: List[int] = [], acts: List[str] = [], K: int = 2, dropout: float = 0.0, bias: bool = True, cached: bool = True, bn: bool = False)[source]
The Simple Graph Convolution Network (SGC) from the “Simplifying Graph Convolutional Networks” paper (ICML’19)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default []
acts (List[str], optional) – the activation function for each hidden layer, by default []
K (int, optional) – the number of propagation steps, by default 2
dropout (float, optional) – the dropout ratio of model, by default 0.
bias (bool, optional) – whether to use bias in the layers, by default True
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Note
To accept a different graph as input, please call cache_clear() first to clear the cached results.
Examples
>>> # SGC without hidden layer
>>> model = SGC(100, 10)
>>> # SGC with two hidden layers
>>> model = SGC(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SGC with two hidden layers, without first activation
>>> model = SGC(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SGC with deep architectures, each layer has elu activation
>>> model = SGC(100, 10, hids=[16]*8, acts=['elu'])
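Since cached=True by default, reusing one SGC instance on a second graph requires clearing the cached propagation first, as the Note above describes; a sketch (x1/edge_index1 and x2/edge_index2 are assumed inputs):
>>> out1 = model(x1, edge_index1)  # caches the propagated features
>>> model.cache_clear()            # drop the cache before switching graphs
>>> out2 = model(x2, edge_index2)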
- class SSGC(in_channels, out_channels, hids: List[int] = [], acts: List[str] = [], dropout: float = 0.0, K: int = 5, alpha: float = 0.1, bias: bool = True, cached: bool = True, bn: bool = False)[source]
The Simple Spectral Graph Convolution Network (SSGC) from the “Simple Spectral Graph Convolution” paper (ICLR’21)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default []
acts (List[str], optional) – the activation function for each hidden layer, by default []
K (int, optional) – the number of propagation steps, by default 5
alpha (float) – Teleport probability \(\alpha\), by default 0.1
dropout (float, optional) – the dropout ratio of model, by default 0.
bias (bool, optional) – whether to use bias in the layers, by default True
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Note
To accept a different graph as input, please call cache_clear() first to clear the cached results.
Examples
>>> # SSGC without hidden layer
>>> model = SSGC(100, 10)
>>> # SSGC with two hidden layers
>>> model = SSGC(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SSGC with two hidden layers, without first activation
>>> model = SSGC(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SSGC with deep architectures, each layer has elu activation
>>> model = SSGC(100, 10, hids=[16]*8, acts=['elu'])
- class DGC(in_channels, out_channels, hids: List[int] = [], acts: List[str] = [], dropout: float = 0.0, K: int = 5, t: float = 5.27, bias: bool = True, cached: bool = True, bn: bool = False)[source]
The Decoupled Graph Convolution Network (DGC) from the “Dissecting the Diffusion Process in Linear Graph Convolutional Networks” paper (NeurIPS’21)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default []
acts (List[str], optional) – the activation function for each hidden layer, by default []
K (int, optional) – the number of propagation steps, by default 5
t (float) – Terminal time \(t\), by default 5.27
dropout (float, optional) – the dropout ratio of model, by default 0.
bias (bool, optional) – whether to use bias in the layers, by default True
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Note
To accept a different graph as input, please call cache_clear() first to clear the cached results.
Examples
>>> # DGC without hidden layer
>>> model = DGC(100, 10)
>>> # DGC with two hidden layers
>>> model = DGC(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # DGC with two hidden layers, without first activation
>>> model = DGC(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # DGC with deep architectures, each layer has elu activation
>>> model = DGC(100, 10, hids=[16]*8, acts=['elu'])
- class GAT(in_channels: int, out_channels: int, hids: List[int] = [8], num_heads: List[int] = [8], acts: List[str] = ['elu'], dropout: float = 0.6, bias: bool = True, bn: bool = False, includes=['num_heads'])[source]
Graph Attention Networks (GAT) from the “Graph Attention Networks” paper (ICLR’18)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [8]
num_heads (List[int], optional) – the number of attention heads for each hidden layer, by default [8]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘elu’]
dropout (float, optional) – the dropout ratio of model, by default 0.6
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # GAT with one hidden layer
>>> model = GAT(100, 10)
>>> # GAT with two hidden layers
>>> model = GAT(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # GAT with two hidden layers, without first activation
>>> model = GAT(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # GAT with deep architectures, each layer has elu activation
>>> model = GAT(100, 10, hids=[16]*8, acts=['elu'])
Reference:
Author’s code: https://github.com/PetarV-/GAT
Pytorch implementation: https://github.com/Diego999/pyGAT
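num_heads pairs element-wise with hids, one attention-head count per hidden layer; a sketch with hypothetical head counts:
>>> # GAT with two hidden layers and per-layer attention heads
>>> model = GAT(100, 10, hids=[32, 16], num_heads=[8, 4], acts=['elu', 'elu'])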
- class APPNP(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], dropout: float = 0.8, K: int = 10, alpha: float = 0.1, bn: bool = False, bias: bool = True, cached: bool = False)[source]
Implementation of Approximated personalized propagation of neural predictions (APPNP) from the “Predict then Propagate: Graph Neural Networks meet Personalized PageRank” paper (ICLR’19)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.8
K (int, optional) – the number of propagation steps, by default 10
alpha (float) – Teleport probability \(\alpha\), by default 0.1
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
cached (bool, optional) – whether the layer will cache the computation of propagation on first execution, and will use the cached version for further executions, by default False
Note
To accept a different graph as input, please call cache_clear() first to clear the cached results.
Examples
>>> # APPNP without hidden layer
>>> model = APPNP(100, 10)
>>> # APPNP with two hidden layers
>>> model = APPNP(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # APPNP with two hidden layers, without first activation
>>> model = APPNP(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # APPNP with deep architectures, each layer has elu activation
>>> model = APPNP(100, 10, hids=[16]*8, acts=['elu'])
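K and alpha control the personalized-PageRank propagation \(Z^{(k+1)} = (1-\alpha)\,\mathbf{\hat{A}} Z^{(k)} + \alpha H\), where \(H\) is the MLP prediction and \(\mathbf{\hat{A}}\) the symmetrically normalized adjacency. A dense plain-torch sketch of this iteration (illustrative only, not the library's sparse implementation):
>>> import torch
>>> def appnp_propagate(A_hat, H, K=10, alpha=0.1):
...     # H: initial predictions [N, C]; A_hat: normalized adjacency [N, N]
...     Z = H
...     for _ in range(K):
...         Z = (1 - alpha) * (A_hat @ Z) + alpha * H  # propagate + teleport
...     return Z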
- class DAGNN(in_channels: int, out_channels: int, hids: List[int] = [64], acts: List[str] = ['relu'], dropout: float = 0.5, K: int = 10, bn: bool = False, bias: bool = True)[source]
The DAGNN operator from the “Towards Deeper Graph Neural Networks” paper (KDD’20)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [64]
K (int, optional) – the number of propagation steps, by default 10
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # DAGNN with one hidden layer
>>> model = DAGNN(100, 10)
>>> # DAGNN with two hidden layers
>>> model = DAGNN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # DAGNN with two hidden layers, without first activation
>>> model = DAGNN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # DAGNN with deep architectures, each layer has elu activation
>>> model = DAGNN(100, 10, hids=[16]*8, acts=['elu'])
- class JKNet(in_channels: int, out_channels: int, hids: List[int] = [16, 16, 16], acts: List[str] = ['relu', 'relu', 'relu'], dropout: float = 0.5, mode: str = 'cat', bn: bool = False, bias: bool = True)[source]
Implementation of Graph Convolution Network with Jumping knowledge (JKNet) from the “Representation Learning on Graphs with Jumping Knowledge Networks” paper (ICML’18)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16, 16, 16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’, ‘relu’, ‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
mode (str, optional) – the mode of jumping knowledge, including ‘cat’, ‘lstm’, and ‘max’, by default ‘cat’
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Note
To accept a different graph as input, please call cache_clear() first to clear the cached results.
Examples
>>> # JKNet with five hidden layers
>>> model = JKNet(100, 10, hids=[16]*5)
- forward(x, edge_index, edge_weight=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
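In other words, invoke the module itself so that registered hooks run; a sketch:
>>> model = JKNet(100, 10)
>>> out = model(x, edge_index)            # preferred: runs registered hooks
>>> # out = model.forward(x, edge_index)  # works, but silently skips hooks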
- class TAGCN(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], K: int = 2, dropout: float = 0.5, bias: bool = True, normalize: bool = True, bn: bool = False)[source]
Topology adaptive graph convolution network (TAGCN) from the “Topology Adaptive Graph Convolutional Networks” paper (arXiv’17)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
K (int) – the number of propagation steps, by default 2
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # TAGCN with one hidden layer
>>> model = TAGCN(100, 10)
>>> # TAGCN with two hidden layers
>>> model = TAGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # TAGCN with two hidden layers, without first activation
>>> model = TAGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # TAGCN with deep architectures, each layer has elu activation
>>> model = TAGCN(100, 10, hids=[16]*8, acts=['elu'])
See also
greatx.nn.layers.TAGCNConv
- class NLGCN(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], kernel: int = 5, dropout: float = 0.5, bn: bool = False, normalize: bool = True, bias: bool = True)[source]
Non-Local Graph Neural Networks (NLGNN) with GCN as backbone from the “Non-Local Graph Neural Networks” paper (TPAMI’22)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
kernel (int,) – the kernel size used in nn.Conv1d, by default 5
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # NLGCN with one hidden layer
>>> model = NLGCN(100, 10)
>>> # NLGCN with two hidden layers
>>> model = NLGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # NLGCN with two hidden layers, without first activation
>>> model = NLGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # NLGCN with deep architectures, each layer has elu activation
>>> model = NLGCN(100, 10, hids=[16]*8, acts=['elu'])
- class NLGAT(in_channels: int, out_channels: int, hids: List[int] = [8], num_heads: list = [8], acts: List[str] = ['elu'], kernel: int = 5, dropout: float = 0.6, bias: bool = True, bn: bool = False, includes=['num_heads'])[source]
Non-Local Graph Neural Networks (NLGNN) with GAT as backbone from the “Non-Local Graph Neural Networks” paper (TPAMI’22)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [8]
num_heads (list, optional) – the number of attention heads for each hidden layer, by default [8]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘elu’]
kernel (int,) – the kernel size used in nn.Conv1d, by default 5
dropout (float, optional) – the dropout ratio of model, by default 0.6
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # NLGAT with one hidden layer
>>> model = NLGAT(100, 10)
>>> # NLGAT with two hidden layers
>>> model = NLGAT(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # NLGAT with two hidden layers, without first activation
>>> model = NLGAT(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # NLGAT with deep architectures, each layer has elu activation
>>> model = NLGAT(100, 10, hids=[16]*8, acts=['elu'])
- class NLMLP(in_channels: int, out_channels: int, hids: List[int] = [32], acts: List[str] = ['relu'], kernel: int = 5, dropout: float = 0.5, bias: bool = True, bn: bool = False)[source]
Non-Local Graph Neural Networks (NLGNN) with MLP as backbone from the “Non-Local Graph Neural Networks” paper (TPAMI’22)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [32]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
kernel (int,) – the kernel size used in nn.Conv1d, by default 5
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the Linear layer, by default False
Examples
>>> # NLMLP with one hidden layer
>>> model = NLMLP(100, 10)
>>> # NLMLP with two hidden layers
>>> model = NLMLP(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # NLMLP with two hidden layers, without first activation
>>> model = NLMLP(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # NLMLP with deep architectures, each layer has elu activation
>>> model = NLMLP(100, 10, hids=[16]*8, acts=['elu'])
- class LogisticRegression(in_channels: int, out_channels: int, bias: bool = True)[source]
Simple logistic regression model for self-supervised/unsupervised learning.
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
bias (bool, optional) – whether to use bias in the layers, by default True
Examples
>>> # LogisticRegression without hidden layer
>>> model = LogisticRegression(100, 10)
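A typical use is as a linear probe over frozen embeddings; a sketch assuming embeddings z, labels y, and a boolean train_mask (all hypothetical names here):
>>> import torch
>>> import torch.nn.functional as F
>>> probe = LogisticRegression(z.size(1), int(y.max()) + 1)
>>> optimizer = torch.optim.Adam(probe.parameters(), lr=0.01)
>>> for _ in range(100):
...     optimizer.zero_grad()
...     loss = F.cross_entropy(probe(z[train_mask]), y[train_mask])
...     loss.backward()
...     optimizer.step()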
- class MLP(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], dropout: float = 0.5, bias: bool = True, bn: bool = False)[source]
Implementation of Multi-layer Perceptron (MLP) or Feed-forward Neural Network (FNN).
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the Linear layer, by default False
Examples
>>> # MLP with one hidden layer
>>> model = MLP(100, 10)
>>> # MLP with two hidden layers
>>> model = MLP(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # MLP with two hidden layers, without first activation
>>> model = MLP(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # MLP with deep architectures, each layer has elu activation
>>> model = MLP(100, 10, hids=[16]*8, acts=['elu'])
- class MedianGCN(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], reduce: str = 'median', dropout: float = 0.5, bn: bool = False, normalize: bool = False, bias: bool = True)[source]
Graph Convolution Network (GCN) with median aggregation (MedianGCN) from the “Understanding Structural Vulnerability in Graph Convolutional Networks” paper (IJCAI’21)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
reduce (str) – aggregation function, including {‘median’, ‘sample_median’}, where median uses the exact median as the aggregation function, while sample_median approximates the median with a fixed set of sampled nodes. sample_median is much faster and more scalable than median. By default, median is used.
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default False
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # MedianGCN with one hidden layer
>>> model = MedianGCN(100, 10)
>>> # MedianGCN with two hidden layers
>>> model = MedianGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # MedianGCN with two hidden layers, without first activation
>>> model = MedianGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # MedianGCN with deep architectures, each layer has elu activation
>>> model = MedianGCN(100, 10, hids=[16]*8, acts=['elu'])
>>> # MedianGCN with sample median aggregation
>>> model = MedianGCN(100, 10, reduce='sample_median')
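The intuition behind median aggregation: a few adversarially injected neighbors can drag a mean arbitrarily far, while the median barely moves. An illustrative comparison in plain torch:
>>> import torch
>>> feats = torch.tensor([[0.9], [1.0], [1.2], [50.0]])  # one outlier neighbor
>>> feats.mean(dim=0)            # tensor([13.2750]), dominated by the outlier
>>> feats.median(dim=0).values   # tensor([1.]), robust to the outlier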
- class RobustGCN(in_channels: int, out_channels: int, hids: List[int] = [32], acts: List[str] = ['relu'], dropout: float = 0.5, bias: bool = True, gamma: float = 1.0, kl: float = 0.0005, bn: bool = False)[source]
Robust graph convolutional network (RobustGCN) from the “Robust Graph Convolutional Networks Against Adversarial Attacks” paper (KDD’19)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [32]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
gamma (float, optional) – the scale of attention on the variances, by default 1.0
kl (float, optional) – trade-off hyperparameter for the KL divergence loss, by default 5e-4
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # RobustGCN with one hidden layer
>>> model = RobustGCN(100, 10)
>>> # RobustGCN with two hidden layers
>>> model = RobustGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # RobustGCN with two hidden layers, without first activation
>>> model = RobustGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # RobustGCN with deep architectures, each layer has elu activation
>>> model = RobustGCN(100, 10, hids=[16]*8, acts=['elu'])
- class AirGNN(in_channels: int, out_channels: int, hids: List[int] = [64], acts: List[str] = ['relu'], K: int = 3, lambda_amp: float = 0.5, dropout: float = 0.8, bias: bool = True, bn: bool = False)[source]
Graph Neural Networks with Adaptive residual (AirGNN) from the “Graph Neural Networks with Adaptive Residual” paper (NeurIPS’21)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [64]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
K (int, optional) – the number of propagation steps during message passing, by default 3
lambda_amp (float, optional) – trade-off for adaptive message passing, by default 0.5
dropout (float, optional) – the dropout ratio of model, by default 0.8
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # AirGNN with one hidden layer
>>> model = AirGNN(100, 10)
>>> # AirGNN with two hidden layers
>>> model = AirGNN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # AirGNN with two hidden layers, without first activation
>>> model = AirGNN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # AirGNN with deep architectures, each layer has elu activation
>>> model = AirGNN(100, 10, hids=[16]*8, acts=['elu'])
- class ElasticGNN(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], K: int = 3, lambda1: float = 3, lambda2: float = 3, cached: bool = True, dropout: float = 0.8, bias: bool = True, bn: bool = False)[source]
Graph Neural Networks with elastic message passing (ElasticGNN) from the “Elastic Graph Neural Networks” paper (ICML’21)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
K (int, optional) – the number of propagation steps during message passing, by default 3
lambda1 (float, optional) – trade-off hyperparameter, by default 3
lambda2 (float, optional) – trade-off hyperparameter, by default 3
L21 (bool, optional) – whether to use row-wise projection on the l2 ball of radius λ1, by default True
cached (bool, optional) – whether to cache the incident matrix, by default True
dropout (float, optional) – the dropout ratio of model, by default 0.8
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # ElasticGNN with one hidden layer
>>> model = ElasticGNN(100, 10)
>>> # ElasticGNN with two hidden layers
>>> model = ElasticGNN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # ElasticGNN with two hidden layers, without first activation
>>> model = ElasticGNN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # ElasticGNN with deep architectures, each layer has elu activation
>>> model = ElasticGNN(100, 10, hids=[16]*8, acts=['elu'])
See also
greatx.nn.layers.ElasticGNN
- class SoftMedianGCN(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], dropout: float = 0.5, bias: bool = True, normalize: bool = False, row_normalize: bool = False, cached: bool = True, bn: bool = False)[source]
Graph Convolution Network (GCN) with soft median aggregation (SoftMedianGCN) from the “Robustness of Graph Neural Networks at Scale” paper (NeurIPS’21)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default False
row_normalize (bool, optional) – whether to perform row-normalization on the fly, by default False
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})\) and sorted edges on first execution, and will use the cached version for further executions, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # SoftMedianGCN with one hidden layer
>>> model = SoftMedianGCN(100, 10)
>>> # SoftMedianGCN with two hidden layers
>>> model = SoftMedianGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SoftMedianGCN with two hidden layers, without first activation
>>> model = SoftMedianGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SoftMedianGCN with deep architectures, each layer has elu activation
>>> model = SoftMedianGCN(100, 10, hids=[16]*8, acts=['elu'])
- class SimPGCN(in_channels: int, out_channels: int, hids: List[int] = [64], acts: List[str] = [None], dropout: float = 0.5, bias: bool = True, gamma: float = 0.01, lambda_: float = 5.0, bn: bool = False)[source]
Similarity Preserving Graph Convolution Network (SimPGCN) from the “Node Similarity Preserving Graph Convolutional Networks” paper (WSDM’21)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [64]
acts (List[str], optional) – the activation function for each hidden layer, by default [None]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
gamma (float, optional) – trade-off hyperparameter, by default 0.01
lambda_ (float, optional) – trade-off hyperparameter for the embedding loss, by default 5.0
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False (currently NOT implemented)
Examples
>>> # SimPGCN with one hidden layer
>>> model = SimPGCN(100, 10)
>>> # SimPGCN with two hidden layers
>>> model = SimPGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SimPGCN with two hidden layers, without first activation
>>> model = SimPGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SimPGCN with deep architectures, each layer has elu activation
>>> model = SimPGCN(100, 10, hids=[16]*8, acts=['elu'])
- class GNNGUARD(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], dropout: float = 0.5, bn: bool = False, normalize: bool = True, bias: bool = True)[source]
Graph Convolution Network (GCN) with greatx.defense.GNNGUARD from the “GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks” paper (NeurIPS’20)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
Examples
>>> # GNNGUARD with one hidden layer
>>> model = GNNGUARD(100, 10)
>>> # GNNGUARD with two hidden layers
>>> model = GNNGUARD(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # GNNGUARD with two hidden layers, without first activation
>>> model = GNNGUARD(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # GNNGUARD with deep architectures, each layer has elu activation
>>> model = GNNGUARD(100, 10, hids=[16]*8, acts=['elu'])
- class SAT(in_channels: int, out_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], dropout: float = 0.5, bias: bool = False, normalize: bool = True, bn: bool = False)[source]
Graph Convolution Network with Spectral Adversarial Training (SAT) from the “Spectral Adversarial Training for Robust Graph Neural Network” paper (arXiv’22)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default False
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # SAT with one hidden layer
>>> model = SAT(100, 10)
>>> # SAT with two hidden layers
>>> model = SAT(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SAT with two hidden layers, without first activation
>>> model = SAT(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SAT with deep architectures, each layer has elu activation
>>> model = SAT(100, 10, hids=[16]*8, acts=['elu'])
- class RTGCN(in_channels: int, out_channels: int, num_nodes: int, num_channels: int, hids: List[int] = [16], acts: List[str] = ['relu'], dropout: float = 0.5, bias: bool = True, bn: bool = False)[source]
The robust tensor graph convolutional operator from the “Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation” paper (KDD’22)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
num_nodes (int) – number of input nodes
num_channels (int) – number of input channels (adjacency matrices)
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [16]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # RTGCN with one hidden layer
>>> num_nodes = 2485
>>> num_channels = 3
>>> model = RTGCN(100, 10, num_nodes, num_channels)
- class SpikingGCN(in_channels, out_channels, hids: List[int] = [], acts: List[str] = [], K: int = 2, T: int = 20, tau: float = 2.0, v_threshold: float = 1.0, v_reset: float = 0.0, dropout: float = 0.0, bias: bool = True, cached: bool = True, bn: bool = False)[source]
The spiking graph convolutional neural network from the “Spiking Graph Convolutional Networks” paper (IJCAI’22)
- Parameters:
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default []
acts (List[str], optional) – the activation function for each hidden layer, by default []
K (int, optional) – the number of propagation steps, by default 2
T (int) – the number of time steps, by default 20
tau (float) – the \(\tau\) in LIF neuron, by default 2.0
v_threshold (float) – the threshold \(V_{th}\) in LIF neuron, by default 1.0
v_reset (float) – the reset level \(V_{reset}\) in LIF neuron, by default 0
dropout (float, optional) – the dropout ratio of model, by default 0.
bias (bool, optional) – whether to use bias in the layers, by default True
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
bn_input (bool, optional) – whether to use BatchNorm1d before input to the convolution layer, by default False
Note
To accept a different graph as input, please call cache_clear() first to clear the cached results.
Examples
>>> # SpikingGCN without hidden layer
>>> model = SpikingGCN(100, 10)
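How tau, v_threshold, and v_reset interact in a leaky integrate-and-fire (LIF) neuron; a common discretized update, sketched in plain torch (illustrative, not necessarily the library's exact implementation):
>>> import torch
>>> def lif_step(v, x, tau=2.0, v_threshold=1.0, v_reset=0.0):
...     v = v + (x - (v - v_reset)) / tau        # leaky integration of input x
...     spike = (v >= v_threshold).float()       # fire where threshold crossed
...     v = torch.where(spike.bool(), torch.full_like(v, v_reset), v)  # reset
...     return spike, v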
greatx.nn.models.unsupervised
DGI: Deep Graph Infomax (DGI) from the "Deep Graph Infomax" paper (ICLR'19)
GGD: Graph Group Discrimination (GGD) from the "Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination" paper (NeurIPS'22)
GRACE: GRAph Contrastive rEpresentation learning (GRACE) from the "Deep Graph Contrastive Representation Learning" paper (ICML'20)
CCA_SSG: CCA-SSG model from the "From Canonical Correlation Analysis to Self-supervised Graph Neural Networks" paper (NeurIPS'21)
- class DGI(in_channels: int, hids: List[int] = [512], acts: List[str] = ['prelu'], dropout: float = 0.0, bias: bool = True, bn: bool = False)[source]
Deep Graph Infomax (DGI) from the “Deep Graph Infomax” paper (ICLR’19)
- Parameters:
in_channels (int,) – the input dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [512]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘prelu’]
dropout (float, optional) – the dropout ratio of model, by default 0.0
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
Examples
>>> # DGI with one hidden layer
>>> model = DGI(100)
>>> # DGI with two hidden layers
>>> model = DGI(100, hids=[32, 16], acts=['relu', 'elu'])
>>> # DGI with two hidden layers, without first activation
>>> model = DGI(100, hids=[32, 16], acts=[None, 'relu'])
>>> # DGI with deep architectures, each layer has elu activation
>>> model = DGI(100, hids=[16]*8, acts=['elu'])
Reference:
Author’s code: https://github.com/PetarV-/DGI
- class GGD(in_channels: int, hids: List[int] = [512], acts: List[str] = ['prelu'], dropout: float = 0.0, bias: bool = True, bn: bool = False, drop_feat: float = 0.2)[source]
Graph Group Discrimination (GGD) from the “Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination” paper (NeurIPS’22)
- Parameters:
in_channels (int,) – the input dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [512]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘prelu’]
dropout (float, optional) – the dropout ratio of model, by default 0.0
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
drop_feat (float, optional) – the dropout ratio of features for contrasting, by default 0.2
Examples
>>> # GGD with one hidden layer
>>> model = GGD(100)
>>> # GGD with two hidden layers
>>> model = GGD(100, hids=[32, 16], acts=['relu', 'elu'])
>>> # GGD with two hidden layers, without first activation
>>> model = GGD(100, hids=[32, 16], acts=[None, 'relu'])
>>> # GGD with deep architectures, each layer has elu activation
>>> model = GGD(100, hids=[16]*8, acts=['elu'])
Reference:
Author’s code: https://github.com/zyzisastudyreallyhardguy/Graph-Group-Discrimination
- class GRACE(in_channels: int, hids: List[int] = [128], acts: List[str] = ['prelu'], project_hids: List[int] = [128], dropout: float = 0.0, tau: float = 0.5, bias: bool = True, bn: bool = False, drop_edge1: float = 0.8, drop_edge2: float = 0.7, drop_feat1: float = 0.4, drop_feat2: float = 0.3)[source]
GRAph Contrastive rEpresentation learning (GRACE) from the “Deep Graph Contrastive Representation Learning” paper (ICML’20)
- Parameters:
in_channels (int,) – the input dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [128]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘prelu’]
project_hids (List[int], optional) – the projection dimensions of model, by default [128]
tau (float, optional) – the temperature coefficient of softmax, by default 0.5
dropout (float, optional) – the dropout ratio of model, by default 0.0
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
drop_edge1 (float, optional) – the dropout ratio of edges for the first view, by default 0.8
drop_edge2 (float, optional) – the dropout ratio of edges for the second view, by default 0.7
drop_feat1 (float, optional) – the dropout ratio of features for the first view, by default 0.4
drop_feat2 (float, optional) – the dropout ratio of features for the second view, by default 0.3
Examples
>>> # GRACE with one hidden layer
>>> model = GRACE(100)
>>> # GRACE with two hidden layers
>>> model = GRACE(100, hids=[32, 16], acts=['relu', 'elu'])
>>> # GRACE with two hidden layers, without first activation
>>> model = GRACE(100, hids=[32, 16], acts=[None, 'relu'])
>>> # GRACE with deep architectures, each layer has elu activation
>>> model = GRACE(100, hids=[16]*8, acts=['elu'])
Reference:
Author’s code: https://github.com/CRIPAC-DIG/GRACE
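The temperature tau enters GRACE's InfoNCE-style contrastive objective between the two augmented views. A simplified plain-torch sketch (illustrative: the actual GRACE loss also includes intra-view negatives):
>>> import torch
>>> import torch.nn.functional as F
>>> def contrastive_loss(z1, z2, tau=0.5):
...     z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
...     sim = torch.exp(z1 @ z2.t() / tau)   # cross-view similarities
...     pos = sim.diag()                     # same node across the two views
...     return -torch.log(pos / sim.sum(dim=1)).mean()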
- class CCA_SSG(in_channels: int, hids: List[int] = [512, 512], acts: List[str] = ['prelu', 'prelu'], dropout: float = 0.0, lambd: float = 0.001, bias: bool = True, bn: bool = False, drop_edge: float = 0.2, drop_feat: float = 0.2)[source]
CCA-SSG model from the “From Canonical Correlation Analysis to Self-supervised Graph Neural Networks” paper (NeurIPS’21)
- Parameters:
in_channels (int,) – the input dimensions of model
hids (List[int], optional) – the number of hidden units for each hidden layer, by default [512, 512]
acts (List[str], optional) – the activation function for each hidden layer, by default [‘prelu’, ‘prelu’]
lambd (float, optional) – the trade-off of the loss, by default 1e-3
dropout (float, optional) – the dropout ratio of model, by default 0.0
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
drop_edge (float, optional) – the dropout ratio of edges for contrasting, by default 0.2
drop_feat (float, optional) – the dropout ratio of features for contrasting, by default 0.2
Examples
>>> # CCA_SSG with one hidden layer
>>> model = CCA_SSG(100)
>>> # CCA_SSG with two hidden layers
>>> model = CCA_SSG(100, hids=[32, 16], acts=['relu', 'elu'])
>>> # CCA_SSG with two hidden layers, without first activation
>>> model = CCA_SSG(100, hids=[32, 16], acts=[None, 'relu'])
>>> # CCA_SSG with deep architectures, each layer has elu activation
>>> model = CCA_SSG(100, hids=[16]*8, acts=['elu'])
Reference:
Author’s code: https://github.com/hengruizhang98/CCA-SSG
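lambd balances the invariance term against the feature-decorrelation term of the CCA-SSG objective on standardized embeddings; a plain-torch sketch (illustrative, not the library's exact code):
>>> import torch
>>> def cca_ssg_loss(z1, z2, lambd=1e-3):
...     N, D = z1.shape
...     z1 = (z1 - z1.mean(0)) / z1.std(0)   # standardize each dimension
...     z2 = (z2 - z2.mean(0)) / z2.std(0)
...     c1, c2 = (z1.t() @ z1) / N, (z2.t() @ z2) / N
...     eye = torch.eye(D)
...     inv = (z1 - z2).pow(2).sum()         # invariance between the two views
...     dec = (c1 - eye).pow(2).sum() + (c2 - eye).pow(2).sum()  # decorrelation
...     return inv + lambd * dec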