graphwar.nn¶
graphwar.nn.layers¶
- Sequential: A modified torch.nn.Sequential which can accept multiple inputs.
- DropEdge: Sampling edges using a uniform distribution from the "DropEdge: Towards Deep Graph Convolutional Networks on Node Classification" paper (ICLR'20)
- DropNode: Sampling nodes using a uniform distribution from the "Graph Contrastive Learning with Augmentations" paper (NeurIPS'20)
- DropPath: A structured form of graphwar.functional.drop_edge from the "MaskGAE: Masked Graph Modeling Meets Graph Autoencoders" paper (arXiv'22)
- GCNConv: The graph convolutional operator from the "Semi-supervised Classification with Graph Convolutional Networks" paper (ICLR'17)
- SGConv: The simplified graph convolutional operator from the "Simplifying Graph Convolutional Networks" paper (ICML'19)
- SSGConv: The simple spectral graph convolutional operator from the "Simple Spectral Graph Convolution" paper (ICLR'21)
- DAGNNConv: The DAGNN operator from the "Towards Deeper Graph Neural Networks" paper (KDD'20)
- TAGConv: The topological adaptive graph convolutional operator from the "Topological Adaptive Graph Convolutional Networks" paper (arXiv'17)
- MedianConv: The graph convolutional operator with median aggregation from the "Understanding Structural Vulnerability in Graph Convolutional Networks" paper (IJCAI'21)
- RobustConv: The robust graph convolutional operator from the "Robust Graph Convolutional Networks Against Adversarial Attacks" paper (KDD'19)
- AdaptiveConv: The AirGNN operator from the "Graph Neural Networks with Adaptive Residual" paper (NeurIPS'21)
- ElasticConv: The ElasticGNN operator from the "Elastic Graph Neural Networks" paper (ICML'21)
- SoftMedianConv: The graph convolutional operator with soft median aggregation from the "Robustness of Graph Neural Networks at Scale" paper (NeurIPS'21)
- SATConv: The spectral adversarial training operator from the "Spectral Adversarial Training for Robust Graph Neural Network" paper (arXiv'22)
- class Sequential(*args, loc: int = 0)[source]¶
A modified torch.nn.Sequential which can accept multiple inputs.
- Parameters
loc (int, optional) – the location of the feature input x, by default 0
Example
>>> import torch
>>> from graphwar.nn.layers import Sequential, GCNConv
>>> edge_index = torch.LongTensor([[1, 2], [3, 4]])  # size [2, M]
>>> x = torch.randn(5, 20)
>>> conv1 = GCNConv(20, 50)
>>> conv2 = GCNConv(50, 5)
>>> dropout1 = torch.nn.Dropout(0.5)
>>> dropout2 = torch.nn.Dropout(0.6)
>>> # Case 1: standard usage
>>> sequential = Sequential(dropout1, conv1, dropout2, conv2)
>>> sequential(x, edge_index)
tensor([[ 0.6738, -0.9032, -0.9628,  0.0670,  0.0252],
        [ 0.4909, -1.2430, -0.6029,  0.0510,  0.2107],
        [ 0.6338, -0.2760, -0.9112, -0.3197,  0.2689],
        [ 0.4909, -1.2430, -0.6029,  0.0510,  0.2107],
        [ 0.3876, -0.6385, -0.5521, -0.2753,  0.6713]], grad_fn=<AddBackward0>)
>>> # which is equivalent to:
>>> h1 = dropout1(x)
>>> h2 = conv1(h1, edge_index)
>>> h3 = dropout2(h2)
>>> h4 = conv2(h3, edge_index)
>>> # Case 2: with keyword argument
>>> sequential(x, edge_index, edge_weight=torch.ones(20))
tensor([[ 0.6738, -0.9032, -0.9628,  0.0670,  0.0252],
        [ 0.4909, -1.2430, -0.6029,  0.0510,  0.2107],
        [ 0.6338, -0.2760, -0.9112, -0.3197,  0.2689],
        [ 0.4909, -1.2430, -0.6029,  0.0510,  0.2107],
        [ 0.3876, -0.6385, -0.5521, -0.2753,  0.6713]], grad_fn=<AddBackward0>)
>>> # which is equivalent to:
>>> h1 = dropout1(x)
>>> h2 = conv1(h1, edge_index, edge_weight=torch.ones(20))
>>> h3 = dropout2(h2)
>>> h4 = conv2(h3, edge_index, edge_weight=torch.ones(20))
Note
The argument loc specifies the position of the feature input x, which is passed through all the layers. When a layer requires more than one argument, the keyword arguments supplied to Sequential must match what that layer expects.
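A hypothetical sketch of a non-default loc (not part of the original example; it assumes the layers defined above): if the feature tensor is passed at another position, point loc at that position so that only x is threaded through the layers.
>>> sequential = Sequential(dropout1, conv1, dropout2, conv2, loc=1)
>>> sequential(edge_index, x)  # x now sits at position 1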
- forward(*inputs, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class DropEdge(p: float = 0.5)[source]¶
DropEdge: Sampling edges using a uniform distribution from the “DropEdge: Towards Deep Graph Convolutional Networks on Node Classification” paper (ICLR’20)
- Parameters
p (float, optional) – the probability of dropping out on each edge, by default 0.5
- Returns
the output edge index and edge weight
- Return type
Tuple[Tensor, Tensor]
- Raises
ValueError – p is out of range [0,1]
Example
>>> import torch
>>> from graphwar.nn.layers import DropEdge
>>> edge_index = torch.LongTensor([[1, 2], [3, 4]])
>>> DropEdge(p=0.5)(edge_index)
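Since the layer returns both the retained edge indices and the corresponding edge weights, a common pattern (a minimal sketch, not part of the original example, assuming a GCNConv layer and random features) is to unpack the tuple and feed it to a convolution layer:
>>> from graphwar.nn.layers import GCNConv
>>> conv = GCNConv(20, 16)
>>> x = torch.randn(5, 20)
>>> edge_index, edge_weight = DropEdge(p=0.5)(edge_index)  # edge_weight may be None if no weights were given
>>> out = conv(x, edge_index, edge_weight)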
See also
- forward(edge_index: Tensor, edge_weight: Optional[Tensor] = None) Tuple[Tensor, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class DropNode(p: float = 0.5)[source]¶
DropNode: Sampling nodes using a uniform distribution, from the “Graph Contrastive Learning with Augmentations” paper (NeurIPS’20)
- Parameters
p (float, optional) – the probability of dropping out on each node, by default 0.5
- Returns
the output edge index and edge weight
- Return type
Tuple[Tensor, Tensor]
Example
>>> import torch
>>> from graphwar.nn.layers import DropNode
>>> edge_index = torch.LongTensor([[1, 2], [3, 4]])
>>> DropNode(p=0.5)(edge_index)
See also
- forward(edge_index: Tensor, edge_weight: Optional[Tensor] = None) Tuple[Tensor, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class DropPath(r: float = 0.5, walks_per_node: int = 2, walk_length: int = 4, p: float = 1, q: float = 1, num_nodes: Optional[int] = None, by: str = 'degree')[source]¶
DropPath: a structured form of graphwar.functional.drop_edge, from the “MaskGAE: Masked Graph Modeling Meets Graph Autoencoders” paper (arXiv’22)
- Parameters
r (Optional[Union[float, Tensor]], optional) – if r is a float, the percentage of nodes in the graph chosen as root nodes to perform random walks; if r is a torch.Tensor, a set of custom root nodes, by default 0.5
walks_per_node (int, optional) – number of walks per node, by default 2
walk_length (int, optional) – length of each walk, by default 4
p (float, optional) – p in random walks, by default 1
q (float, optional) – q in random walks, by default 1
num_nodes (int, optional) – number of total nodes in the graph, by default None
by (str, optional) – whether to sample root nodes uniformly ('uniform') or by degree distribution ('degree'), by default 'degree'
- Returns
the output edge index and edge weight
- Return type
Tuple[Tensor, Tensor]
- Raises
ImportError – if torch_cluster is not installed.
ValueError – if r is out of range [0,1]
ValueError – if r is neither a float nor a Tensor
Example
>>> import torch
>>> from graphwar.nn.layers import DropPath
>>> edge_index = torch.LongTensor([[1, 2], [3, 4]])
>>> DropPath(r=0.5)(edge_index)
>>> DropPath(r=torch.tensor([1,2]))(edge_index) # specify root nodes
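The root nodes can also be sampled uniformly rather than by degree, and the shape of the walks is controlled by walks_per_node and walk_length (a small sketch using the documented arguments; the values are only illustrative):
>>> DropPath(r=0.3, walks_per_node=4, walk_length=6, by='uniform')(edge_index)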
See also
- forward(edge_index: Tensor, edge_weight: Optional[Tensor] = None) Tuple[Tensor, Tensor] [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class GCNConv(in_channels: int, out_channels: int, improved: bool = False, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]¶
The graph convolutional operator from the “Semi-supervised Classification with Graph Convolutional Networks” paper (ICLR’17)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int) – dimensions of output samples
improved (bool, optional) – whether the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\), by default False
cached (bool, optional (UNUSED)) – whether the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions, by default False
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bias (bool, optional) – whether to use bias in the layers, by default True
Note
Different from that in torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.FloatTensor: dense adjacency matrix with shape [N, N]
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
In addition, the argument cached is unused. We add this argument to be compatible with torch_geometric.
See also
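Example
A minimal sketch of the three accepted edge_index formats listed above (dimensions chosen only for illustration):
>>> import torch
>>> from graphwar.nn.layers import GCNConv
>>> conv = GCNConv(16, 8)
>>> x = torch.randn(4, 16)
>>> edge_index = torch.LongTensor([[0, 1, 2], [1, 2, 3]])    # edge indices, shape [2, M]
>>> out = conv(x, edge_index)                                 # -> shape [4, 8]
>>> adj = torch.zeros(4, 4)                                   # dense adjacency matrix, shape [N, N]
>>> adj[edge_index[0], edge_index[1]] = 1.0
>>> out = conv(x, adj)
>>> from torch_sparse import SparseTensor                     # sparse adjacency, sparse shape [N, N]
>>> adj_t = SparseTensor.from_edge_index(edge_index, sparse_sizes=(4, 4))
>>> out = conv(x, adj_t)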
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class SGConv(in_channels: int, out_channels: int, K: int = 1, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]¶
The simplified graph convolutional operator from the “Simplifying Graph Convolutional Networks” paper (ICML’19)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int) – dimensions of output samples
K (int) – the number of propagation steps, by default 1
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default False
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bias (bool, optional) – whether to use bias in the layers, by default True
Note
Different from that in torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.FloatTensor: dense adjacency matrix with shape [N, N]
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
See also
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class SSGConv(in_channels: int, out_channels: int, K: int = 5, alpha: float = 0.1, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]¶
The simple spectral graph convolutional operator from the “Simple Spectral Graph Convolution” paper (ICLR’21)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int) – dimensions of output samples
K (int) – the number of propagation steps, by default 5
alpha (float) – Teleport probability \(\alpha\), by default 0.1
cached (bool, optional) – whether the layer will cache the K-step aggregation on first execution, and will use the cached version for further executions, by default False
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bias (bool, optional) – whether to use bias in the layers, by default True
Note
Different from that in torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.FloatTensor: dense adjacency matrix with shape [N, N]
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
See also
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class DAGNNConv(in_channels: int, out_channels: int = 1, K: int = 1, add_self_loops: bool = True, bias: bool = True)[source]¶
The DAGNN operator from the “Towards Deeper Graph Neural Networks” paper (KDD’20)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int, optional) – dimensions of output samples, by default 1
K (int, optional) – the number of propagation steps, by default 1
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
bias (bool, optional) – whether to use bias in the layers, by default True
Note
out_channels must be 1 in all cases.
Different from that in torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.FloatTensor: dense adjacency matrix with shape [N, N]
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
See also
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class TAGConv(in_channels: int, out_channels: int, K: int = 2, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]¶
The topological adaptive graph convolutional operator from the “Topological Adaptive Graph Convolutional Networks” paper (arXiv’17)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int) – dimensions of output samples
K (int) – the number of propagation steps, by default 2
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bias (bool, optional) – whether to use bias in the layers, by default True
Note
Different from that in torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.FloatTensor: dense adjacency matrix with shape [N, N]
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
See also
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class MedianConv(in_channels: int, out_channels: int, add_self_loops: bool = True, normalize: bool = False, bias: bool = True)[source]¶
The graph convolutional operator with median aggregation from the “Understanding Structural Vulnerability in Graph Convolutional Networks” paper (IJCAI’21)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int) – dimensions of output samples
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default False
bias (bool, optional) – whether to use bias in the layers, by default True
Note
The same as torch_geometric, our implementation supports:
- edge_index is torch.LongTensor (recommended): edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
In addition, the arguments add_self_loops and normalize work independently: one can set normalize=True while keeping add_self_loops=False, which differs from torch_geometric.
See also
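Example
A short sketch of the behaviour described in the note, constructing the layer with normalization enabled but without adding self-loops (dimensions chosen only for illustration):
>>> import torch
>>> from graphwar.nn.layers import MedianConv
>>> conv = MedianConv(16, 8, add_self_loops=False, normalize=True)
>>> x = torch.randn(4, 16)
>>> edge_index = torch.LongTensor([[0, 1, 2], [1, 2, 3]])
>>> out = conv(x, edge_index)  # median aggregation over each node's neighborhood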
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class RobustConv(in_channels: int, out_channels: int, gamma: float = 1.0, add_self_loops: bool = True, bias: bool = True)[source]¶
The robust graph convolutional operator from the “Robust Graph Convolutional Networks Against Adversarial Attacks” paper (KDD’19)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int) – dimensions of output samples
gamma (float, optional) – the scale of attention on the variances, by default 1.0
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
bias (bool, optional) – whether to use bias in the layers, by default True
Note
Different from that in torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.FloatTensor: dense adjacency matrix with shape [N, N]
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
See also
- forward(x: Union[Tensor, Tuple[Tensor, Optional[Tensor]]], edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class AdaptiveConv(K: int = 3, lambda_amp: float = 0.1, normalize: bool = True, add_self_loops: bool = True)[source]¶
The AirGNN operator from the “Graph Neural Networks with Adaptive Residual” paper (NeurIPS’21)
- Parameters
K (int, optional) – the number of propagation steps during message passing, by default 3
lambda_amp (float, optional) – trade-off for adaptive message passing, by default 0.1
normalize (bool, optional) – Whether to add self-loops and compute symmetric normalization coefficients on the fly, by default True
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
Note
Different from that in torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.FloatTensor: dense adjacency matrix with shape [N, N]
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
See also
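Example
A minimal usage sketch (assuming the feature dimensionality is preserved, since the layer defines no weight parameters of its own):
>>> import torch
>>> from graphwar.nn.layers import AdaptiveConv
>>> conv = AdaptiveConv(K=3, lambda_amp=0.1)
>>> x = torch.randn(4, 16)
>>> edge_index = torch.LongTensor([[0, 1, 2], [1, 2, 3]])
>>> out = conv(x, edge_index)  # adaptive message passing keeps the input dimension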
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- amp_forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
- class ElasticConv(K: int = 3, lambda_amp: float = 0.1, normalize: bool = True, add_self_loops: bool = True, lambda1: float = 3.0, lambda2: float = 3.0, L21: bool = True, cached: bool = True)[source]¶
The ElasticGNN operator from the “Elastic Graph Neural Networks” paper (ICML’21)
- Parameters
K (int, optional) – the number of propagation steps, by default 3
lambda_amp (float, optional) – trade-off of adaptive message passing, by default 0.1
normalize (bool, optional) – Whether to add self-loops and compute symmetric normalization coefficients on the fly, by default True
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
lambda1 (float, optional) – trade-off hyperparameter, by default 3
lambda2 (float, optional) – trade-off hyperparameter, by default 3
L21 (bool, optional) – whether to use row-wise projection on the l2 ball of radius λ1, by default True
cached (bool, optional) – whether to cache the incident matrix, by default True
Note
The same as torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
See also
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- emp_forward(x: Tensor, inc_mat: SparseTensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
- class SoftMedianConv(in_channels: int, out_channels: int, cached: bool = False, add_self_loops: bool = True, normalize: bool = False, row_normalize: bool = True, bias: bool = True)[source]¶
The graph convolutional operator with soft median aggregation from the “Robustness of Graph Neural Networks at Scale” paper (NeurIPS’21)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int) – dimensions of output samples
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})\) and sorted edges on first execution, and will use the cached version for further executions, by default False
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default False
row_normalize (bool, optional) – whether to perform row-normalization on the fly, by default True
bias (bool, optional) – whether to use bias in the layers, by default True
- Raises
RuntimeWarning – if module “glcore” is not properly installed.
Note
The input edges must be sorted for dimmedian_idx() from glcore.
The same as torch_geometric, for the inputs x, edge_index, and edge_weight, our implementation supports:
- edge_index is torch.LongTensor: edge indices with shape [2, M]
- edge_index is torch_sparse.SparseTensor: sparse matrix with sparse shape [N, N]
See also
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) Tensor [source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class SATConv(in_channels: int, out_channels: int, add_self_loops: bool = True, normalize: bool = True, bias: bool = False)[source]¶
The spectral adversarial training operator from the “Spectral Adversarial Training for Robust Graph Neural Network” paper (arXiv’22)
- Parameters
in_channels (int) – dimensions of input samples
out_channels (int) – dimensions of output samples
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bias (bool, optional) – whether to use bias in the layers, by default True
Note
For the inputs
x
,U
, andV
, our implementation supports:U
istorch.LongTensor
: edge indices with shape[2, M]
U
istorch.FloatTensor
andV
isNone
: dense matrix with shape[N, N]
U
andV
aretorch.FloatTensor
: eigenvector and corresponding eigenvalues
See also
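Example
A usage sketch of the three accepted input forms; the eigendecomposition below is only an illustrative assumption, computed with torch.linalg.eigh on a small symmetric adjacency matrix:
>>> import torch
>>> from graphwar.nn.layers import SATConv
>>> conv = SATConv(16, 8)
>>> x = torch.randn(4, 16)
>>> edge_index = torch.LongTensor([[0, 1, 2], [1, 2, 3]])
>>> out = conv(x, edge_index)                   # U as edge indices, V left as None
>>> adj = torch.zeros(4, 4)                     # dense matrix, shape [N, N]
>>> adj[edge_index[0], edge_index[1]] = 1.0
>>> adj[edge_index[1], edge_index[0]] = 1.0     # symmetrize for the eigendecomposition
>>> out = conv(x, adj)                          # U as a dense matrix, V left as None
>>> eigvals, eigvecs = torch.linalg.eigh(adj)   # illustrative (full) eigendecomposition
>>> out = conv(x, eigvecs, eigvals)             # U as eigenvectors, V as eigenvalues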
- forward(x: Tensor, U: Tensor, V: Optional[Tensor] = None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
graphwar.nn.models¶
- GCN: Graph Convolution Network (GCN) from the "Semi-supervised Classification with Graph Convolutional Networks" paper (ICLR'17)
- SGC: The simplified graph convolutional operator from the "Simplifying Graph Convolutional Networks" paper (ICML'19)
- SSGC: The Simple Spectral Graph Convolution Network (SSGC) from the "Simple Spectral Graph Convolution" paper (ICLR'21)
- GAT: Graph Attention Networks (GAT) from the "Graph Attention Networks" paper (ICLR'18)
- APPNP: Implementation of Approximated personalized propagation of neural predictions (APPNP) from the "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" paper (ICLR'19)
- DAGNN: The DAGNN operator from the "Towards Deeper Graph Neural Networks" paper (KDD'20)
- JKNet: Implementation of Graph Convolution Network with Jumping knowledge (JKNet) from the "Representation Learning on Graphs with Jumping Knowledge Networks" paper (ICML'18)
- TAGCN: Topological adaptive graph convolution network (TAGCN) from the "Topological Adaptive Graph Convolutional Networks" paper (arXiv'17)
- MedianGCN: Graph Convolution Network (GCN) with median aggregation (MedianGCN) from the "Understanding Structural Vulnerability in Graph Convolutional Networks" paper (IJCAI'21)
- RobustGCN: Robust graph convolutional network (RobustGCN) from the "Robust Graph Convolutional Networks Against Adversarial Attacks" paper (KDD'19)
- AirGNN: Graph Neural Networks with Adaptive residual (AirGNN) from the "Graph Neural Networks with Adaptive Residual" paper (NeurIPS'21)
- ElasticGNN: Graph Neural Networks with elastic message passing (ElasticGNN) from the "Elastic Graph Neural Networks" paper (ICML'21)
- SoftMedianGCN: Graph Convolution Network (GCN) with soft median aggregation (SoftMedianGCN) from the "Robustness of Graph Neural Networks at Scale" paper (NeurIPS'21)
- SimPGCN: Similarity Preserving Graph Convolution Network (SimPGCN) from the "Node Similarity Preserving Graph Convolutional Networks" paper (WSDM'21)
- GNNGUARD: Graph Convolution Network (GCN) with graphwar.defense.GNNGUARD from the "GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks" paper (NeurIPS'20)
- SAT: Graph Convolution Network with Spectral Adversarial Training (SAT) from the "Spectral Adversarial Training for Robust Graph Neural Network" paper (arXiv'22)
- class GCN(in_channels: int, out_channels: int, hids: list = [16], acts: list = ['relu'], dropout: float = 0.5, bn: bool = False, normalize: bool = True, bias: bool = True)[source]¶
Graph Convolution Network (GCN) from the “Semi-supervised Classification with Graph Convolutional Networks” paper (ICLR’17)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # GCN with one hidden layer
>>> model = GCN(100, 10)
>>> # GCN with two hidden layers
>>> model = GCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # GCN with two hidden layers, without activation at the first layer
>>> model = GCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # GCN with very deep architectures, each layer has elu as activation function
>>> model = GCN(100, 10, hids=[16]*8, acts=['elu'])
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class SGC(in_channels, out_channels, hids: list = [], acts: list = [], K: int = 2, dropout: float = 0.0, bias: bool = True, cached: bool = True, bn: bool = False)[source]¶
The simplified graph convolutional operator from the “Simplifying Graph Convolutional Networks” paper (ICML’19)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default []
acts (list, optional) – the activation function for each hidden layer, by default []
K (int, optional) – the number of propagation steps, by default 2
dropout (float, optional) – the dropout ratio of model, by default 0.
bias (bool, optional) – whether to use bias in the layers, by default True
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
To accept a different graph as inputs, please call cache_clear() first to clear cached results.
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # SGC without hidden layer
>>> model = SGC(100, 10)
>>> # SGC with two hidden layers
>>> model = SGC(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SGC with two hidden layers, without activation at the first layer
>>> model = SGC(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SGC with very deep architectures, each layer has elu as activation function
>>> model = SGC(100, 10, hids=[16]*8, acts=['elu'])
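Because the propagation result is cached by default, a brief sketch of switching between graphs (the tensor names here are placeholders):
>>> out = model(x, edge_index)             # the first call caches the K-step propagation
>>> model.cache_clear()                    # reset the cache before feeding a different graph
>>> out = model(new_x, new_edge_index)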
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class SSGC(in_channels, out_channels, hids: list = [], acts: list = [], dropout: float = 0.0, K: int = 5, alpha: float = 0.1, bias: bool = True, cached: bool = True, bn: bool = False)[source]¶
The Simple Spectral Graph Convolution Network (SSGC) from the “Simple Spectral Graph Convolution” paper (ICLR’21)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default []
acts (list, optional) – the activation function for each hidden layer, by default []
K (int, optional) – the number of propagation steps, by default 5
alpha (float) – Teleport probability \(\alpha\), by default 0.1
dropout (float, optional) – the dropout ratio of model, by default 0.
bias (bool, optional) – whether to use bias in the layers, by default True
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
To accept a different graph as inputs, please call cache_clear() first to clear cached results.
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # SSGC without hidden layer
>>> model = SSGC(100, 10)
>>> # SSGC with two hidden layers
>>> model = SSGC(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SSGC with two hidden layers, without activation at the first layer
>>> model = SSGC(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SSGC with very deep architectures, each layer has elu as activation function
>>> model = SSGC(100, 10, hids=[16]*8, acts=['elu'])
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class GAT(in_channels: int, out_channels: int, hids: list = [8], num_heads: list = [8], acts: list = ['elu'], dropout: float = 0.6, bias: bool = True, bn: bool = False, includes=['num_heads'])[source]¶
Graph Attention Networks (GAT) from the “Graph Attention Networks” paper (ICLR’18)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [8]
num_heads (list, optional) – the number of attention heads for each hidden layer, by default [8]
acts (list, optional) – the activation function for each hidden layer, by default [‘elu’]
dropout (float, optional) – the dropout ratio of model, by default 0.6
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # GAT with one hidden layer
>>> model = GAT(100, 10)
>>> # GAT with two hidden layers
>>> model = GAT(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # GAT with two hidden layers, without activation at the first layer
>>> model = GAT(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # GAT with very deep architectures, each layer has elu as activation function
>>> model = GAT(100, 10, hids=[16]*8, acts=['elu'])
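The number of attention heads can also be set per hidden layer through num_heads (a sketch using the documented argument; the values are only illustrative):
>>> # GAT with two hidden layers and per-layer attention heads
>>> model = GAT(100, 10, hids=[8, 8], num_heads=[8, 1], acts=['elu', 'elu'])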
References
Author’s code: https://github.com/PetarV-/GAT
Pytorch implementation: https://github.com/Diego999/pyGAT
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class APPNP(in_channels: int, out_channels: int, hids: list = [16], acts: list = ['relu'], dropout: float = 0.8, K: int = 10, alpha: float = 0.1, bn: bool = False, bias: bool = True, cached: bool = False)[source]¶
Implementation of Approximated personalized propagation of neural predictions (APPNP) from the “Predict then Propagate: Graph Neural Networks meet Personalized PageRank” paper (ICLR’19)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.8
K (int, optional) – the number of propagation steps, by default 10
alpha (float) – Teleport probability \(\alpha\), by default 0.1
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use BatchNorm1d after the convolution layer, by default False
cached (bool, optional) – whether the layer will cache the computation of propagation on first execution, and will use the cached version for further executions, by default False
Note
To accept a different graph as inputs, please call cache_clear() first to clear cached results.
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # APPNP without hidden layer
>>> model = APPNP(100, 10)
>>> # APPNP with two hidden layers
>>> model = APPNP(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # APPNP with two hidden layers, without activation at the first layer
>>> model = APPNP(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # APPNP with very deep architectures, each layer has elu as activation function
>>> model = APPNP(100, 10, hids=[16]*8, acts=['elu'])
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class DAGNN(in_channels: int, out_channels: int, hids: list = [64], acts: list = ['relu'], dropout: float = 0.5, K: int = 10, bn: bool = False, bias: bool = True)[source]¶
The DAGNN operator from the “Towards Deeper Graph Neural Networks” paper (KDD’20)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [64]
K (int, optional) – the number of propagation steps, by default 10
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # DAGNN with one hidden layer
>>> model = DAGNN(100, 10)
>>> # DAGNN with two hidden layers
>>> model = DAGNN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # DAGNN with two hidden layers, without activation at the first layer
>>> model = DAGNN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # DAGNN with very deep architectures, each layer has elu as activation function
>>> model = DAGNN(100, 10, hids=[16]*8, acts=['elu'])
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class JKNet(in_channels: int, out_channels: int, hids: list = [16, 16, 16], acts: list = ['relu', 'relu', 'relu'], dropout: float = 0.5, mode: str = 'cat', bn: bool = False, bias: bool = True)[source]¶
Implementation of Graph Convolution Network with Jumping knowledge (JKNet) from the “Representation Learning on Graphs with Jumping Knowledge Networks” paper (ICML’18)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16, 16, 16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’, ‘relu’, ‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
mode (str, optional) – the mode of jumping knowledge, including ‘cat’, ‘lstm’, and ‘max’, by default ‘cat’
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
To accept a different graph as inputs, please call cache_clear() first to clear cached results.
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # JKNet with five hidden layers
>>> model = JKNet(100, 10, hids=[16]*5)
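The jumping-knowledge aggregation can also be switched away from the default concatenation (a sketch based on the documented modes ‘cat’, ‘lstm’, and ‘max’):
>>> # JKNet with three hidden layers and max-pooling aggregation
>>> model = JKNet(100, 10, hids=[16, 16, 16], mode='max')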
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class TAGCN(in_channels: int, out_channels: int, hids: list = [16], acts: list = ['relu'], K: int = 2, dropout: float = 0.5, bias: bool = True, normalize: bool = True, bn: bool = False)[source]¶
Topological adaptive graph convolution network (TAGCN) from the “Topological Adaptive Graph Convolutional Networks” paper (arXiv’17)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
K (int) – the number of propagation steps, by default 2
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # TAGCN with one hidden layer
>>> model = TAGCN(100, 10)
>>> # TAGCN with two hidden layers
>>> model = TAGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # TAGCN with two hidden layers, without activation at the first layer
>>> model = TAGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # TAGCN with very deep architectures, each layer has elu as activation function
>>> model = TAGCN(100, 10, hids=[16]*8, acts=['elu'])
See also
graphwar.nn.layers.TAGConv
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class MedianGCN(in_channels: int, out_channels: int, hids: list = [16], acts: list = ['relu'], dropout: float = 0.5, bn: bool = False, normalize: bool = False, bias: bool = True)[source]¶
Graph Convolution Network (GCN) with median aggregation (MedianGCN) from the “Understanding Structural Vulnerability in Graph Convolutional Networks” paper (IJCAI’21)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default False
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # MedianGCN with one hidden layer
>>> model = MedianGCN(100, 10)
>>> # MedianGCN with two hidden layers
>>> model = MedianGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # MedianGCN with two hidden layers, without activation at the first layer
>>> model = MedianGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # MedianGCN with very deep architectures, each layer has elu as activation function
>>> model = MedianGCN(100, 10, hids=[16]*8, acts=['elu'])
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class RobustGCN(in_channels: int, out_channels: int, hids: list = [32], acts: list = ['relu'], dropout: float = 0.5, bias: bool = True, gamma: float = 1.0, bn: bool = False)[source]¶
Robust graph convolutional network (RobustGCN) from the “Robust Graph Convolutional Networks Against Adversarial Attacks” paper (KDD’19)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [32]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
gamma (float, optional) – the scale of attention on the variances, by default 1.0
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # RobustGCN with one hidden layer
>>> model = RobustGCN(100, 10)
>>> # RobustGCN with two hidden layers
>>> model = RobustGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # RobustGCN with two hidden layers, without activation at the first layer
>>> model = RobustGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # RobustGCN with very deep architectures, each layer has elu as activation function
>>> model = RobustGCN(100, 10, hids=[16]*8, acts=['elu'])
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class AirGNN(in_channels: int, out_channels: int, hids: list = [64], acts: list = ['relu'], K: int = 3, lambda_amp: float = 0.5, dropout: float = 0.8, bias: bool = True, bn: bool = False)[source]¶
Graph Neural Networks with Adaptive residual (AirGNN) from the “Graph Neural Networks with Adaptive Residual” paper (NeurIPS’21)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [64]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
K (int, optional) – the number of propagation steps during message passing, by default 3
lambda_amp (float, optional) – trade-off for adaptive message passing, by default 0.5
dropout (float, optional) – the dropout ratio of model, by default 0.8
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # AirGNN with one hidden layer
>>> model = AirGNN(100, 10)
>>> # AirGNN with two hidden layers
>>> model = AirGNN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # AirGNN with two hidden layers, without activation at the first layer
>>> model = AirGNN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # AirGNN with very deep architectures, each layer has elu as activation function
>>> model = AirGNN(100, 10, hids=[16]*8, acts=['elu'])
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class ElasticGNN(in_channels: int, out_channels: int, hids: list = [16], acts: list = ['relu'], K: int = 3, lambda1: float = 3, lambda2: float = 3, cached: bool = True, dropout: float = 0.8, bias: bool = True, bn: bool = False)[source]¶
Graph Neural Networks with elastic message passing (ElasticGNN) from the “Elastic Graph Neural Networks” paper (ICML’21)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
K (int, optional) – the number of propagation steps during message passing, by default 3
lambda1 (float, optional) – trade-off hyperparameter, by default 3
lambda2 (float, optional) – trade-off hyperparameter, by default 3
L21 (bool, optional) – whether to use row-wise projection on the l2 ball of radius λ1., by default True
cached (bool, optional) – whether to cache the incident matrix, by default True
dropout (float, optional) – the dropout ratio of model, by default 0.8
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # ElasticGNN with one hidden layer
>>> model = ElasticGNN(100, 10)
>>> # ElasticGNN with two hidden layers
>>> model = ElasticGNN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # ElasticGNN with two hidden layers, without activation at the first layer
>>> model = ElasticGNN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # ElasticGNN with very deep architectures, each layer has elu as activation function
>>> model = ElasticGNN(100, 10, hids=[16]*8, acts=['elu'])
See also
graphwar.nn.layers.ElasticConv
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class SoftMedianGCN(in_channels: int, out_channels: int, hids: list = [16], acts: list = ['relu'], dropout: float = 0.5, bias: bool = True, normalize: bool = False, row_normalize: bool = False, cached: bool = True, bn: bool = False)[source]¶
Graph Convolution Network (GCN) with soft median aggregation (SoftMedianGCN) from the “Robustness of Graph Neural Networks at Scale” paper (NeurIPS’21)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default False
row_normalize (bool, optional) – whether to perform row-normalization on the fly, by default False
cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})\) and sorted edges on first execution, and will use the cached version for further executions, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # SoftMedianGCN with one hidden layer
>>> model = SoftMedianGCN(100, 10)
>>> # SoftMedianGCN with two hidden layers
>>> model = SoftMedianGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SoftMedianGCN with two hidden layers, without activation at the first layer
>>> model = SoftMedianGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SoftMedianGCN with very deep architectures, each layer has elu as activation function
>>> model = SoftMedianGCN(100, 10, hids=[16]*8, acts=['elu'])
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class SimPGCN(in_channels: int, out_channels: int, hids: list = [64], acts: list = [None], dropout: float = 0.5, bias: bool = True, gamma: float = 0.01, bn: bool = False)[source]¶
Similarity Preserving Graph Convolution Network (SimPGCN) from the “Node Similarity Preserving Graph Convolutional Networks” paper (WSDM’21)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [64]
acts (list, optional) – the activation function for each hidden layer, by default [None]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
gamma (float, optional) – trade-off hyperparameter, by default 0.01
bn (bool, optional (NOT IMPLEMENTED NOW)) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # SimPGCN with one hidden layer
>>> model = SimPGCN(100, 10)
>>> # SimPGCN with two hidden layers
>>> model = SimPGCN(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SimPGCN with two hidden layers, without activation at the first layer
>>> model = SimPGCN(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SimPGCN with very deep architectures, each layer has elu as activation function
>>> model = SimPGCN(100, 10, hids=[16]*8, acts=['elu'])
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class GNNGUARD(in_channels: int, out_channels: int, hids: list = [16], acts: list = ['relu'], dropout: float = 0.5, bn: bool = False, normalize: bool = True, bias: bool = True)[source]¶
Graph Convolution Network (GCN) with graphwar.defense.GNNGUARD from the “GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks” paper (NeurIPS’20)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # GNNGUARD with one hidden layer
>>> model = GNNGUARD(100, 10)
>>> # GNNGUARD with two hidden layers
>>> model = GNNGUARD(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # GNNGUARD with two hidden layers, without activation at the first layer
>>> model = GNNGUARD(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # GNNGUARD with very deep architectures, each layer has elu as activation function
>>> model = GNNGUARD(100, 10, hids=[16]*8, acts=['elu'])
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class SAT(in_channels: int, out_channels: int, hids: list = [16], acts: list = ['relu'], dropout: float = 0.5, bias: bool = False, normalize: bool = True, bn: bool = False)[source]¶
Graph Convolution Network with Spectral Adversarial Training (SAT) from the “Spectral Adversarial Training for Robust Graph Neural Network” paper (arXiv’22)
- Parameters
in_channels (int,) – the input dimensions of model
out_channels (int,) – the output dimensions of model
hids (list, optional) – the number of hidden units for each hidden layer, by default [16]
acts (list, optional) – the activation function for each hidden layer, by default [‘relu’]
dropout (float, optional) – the dropout ratio of model, by default 0.5
bias (bool, optional) – whether to use bias in the layers, by default False
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
bn (bool, optional) – whether to use
BatchNorm1d
after the convolution layer, by default False
Note
It is convenient to extend the number of layers with different or the same hidden units (activation functions) using graphwar.utils.wrapper(). See Examples below:
Examples
>>> # SAT with one hidden layer
>>> model = SAT(100, 10)
>>> # SAT with two hidden layers
>>> model = SAT(100, 10, hids=[32, 16], acts=['relu', 'elu'])
>>> # SAT with two hidden layers, without activation at the first layer
>>> model = SAT(100, 10, hids=[32, 16], acts=[None, 'relu'])
>>> # SAT with very deep architectures, each layer has elu as activation function
>>> model = SAT(100, 10, hids=[16]*8, acts=['elu'])
See also
- forward(x, edge_index, edge_weight=None)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.