greatx.nn.layers

Sequential

A modified torch.nn.Sequential which can accept multiple inputs.

DropEdge

DropEdge: Sampling edge using a uniform distribution from the "DropEdge: Towards Deep Graph Convolutional Networks on Node Classification" paper (ICLR'20)

DropNode

DropNode: Sampling node using a uniform distribution from the "Graph Contrastive Learning with Augmentations" paper (NeurIPS'20)

DropPath

DropPath: a structured form of greatx.functional.drop_edge from the "MaskGAE: Masked Graph Modeling Meets Graph Autoencoders" paper (arXiv'22)

GCNConv

The graph convolutional operator from the "Semi-supervised Classification with Graph Convolutional Networks" paper (ICLR'17)

SGConv

The simplified graph convolutional operator from the "Simplifying Graph Convolutional Networks" paper (ICML'19)

SSGConv

The simple spectral graph convolutional operator from the "Simple Spectral Graph Convolution" paper (ICLR'21)

DGConv

The decoupled graph convolutional operator from the "Dissecting the Diffusion Process in Linear Graph Convolutional Networks" paper (NeurIPS'21)

DAGNNConv

The DAGNN operator from the "Towards Deeper Graph Neural Networks" paper (KDD'20)

TAGConv

The topology adaptive graph convolutional operator from the "Topology Adaptive Graph Convolutional Networks" paper (arXiv'17)

MedianConv

The graph convolutional operator with median aggregation from the "Understanding Structural Vulnerability in Graph Convolutional Networks" paper (IJCAI'21)

RobustConv

The robust graph convolutional operator from the "Robust Graph Convolutional Networks Against Adversarial Attacks" paper (KDD'19)

AdaptiveConv

The AirGNN operator from the "Graph Neural Networks with Adaptive Residual" paper (NeurIPS'21)

ElasticConv

The ElasticGNN operator from the "Elastic Graph Neural Networks" paper (ICML'21)

SoftMedianConv

The graph convolutional operator with soft median aggregation from the "Robustness of Graph Neural Networks at Scale" paper (NeurIPS'21)

SATConv

The spectral adversarial training operator from the "Spectral Adversarial Training for Robust Graph Neural Network" paper (arXiv'22)

TensorGCNConv

The robust tensor graph convolutional operator from the "Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation" paper (KDD'22)

TensorLinear

The tensor linear operator from the "Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation" paper (KDD'22)

PoissonEncoder

IF

The Integrate-and-Fire (IF) neuron for spiking neural networks.

LIF

The Leaky Integrate-and-Fire (LIF) neuron for spiking neural networks.

PLIF

The Parametric Leaky Integrate-and-Fire (PLIF) neuron for spiking neural networks.

SpikingGCNonv

The spiking graph convolutional operator from the "Spiking Graph Convolutional Networks" paper (IJCAI'22)

class Sequential(*args, loc: int = 0)[source]

A modified torch.nn.Sequential which can accept multiple inputs.

Parameters:

loc (int, optional) – the location of feature input x, by default 0

Example

>>> import torch
>>> from greatx.nn.layers import Sequential, GCNConv
>>> edge_index = torch.LongTensor([[1, 2], [3,4]]) # size [2, M]
>>> x = torch.randn(5, 20)
>>> conv1 = GCNConv(20, 50)
>>> conv2 = GCNConv(50, 5)
>>> dropout1 = torch.nn.Dropout(0.5)
>>> dropout2 = torch.nn.Dropout(0.6)
>>> # Case 1: standard usage
>>> sequential = Sequential(dropout1, conv1, dropout2, conv2)
>>> sequential(x, edge_index)
tensor([[ 0.6738, -0.9032, -0.9628,  0.0670,  0.0252],
    [ 0.4909, -1.2430, -0.6029,  0.0510,  0.2107],
    [ 0.6338, -0.2760, -0.9112, -0.3197,  0.2689],
    [ 0.4909, -1.2430, -0.6029,  0.0510,  0.2107],
    [ 0.3876, -0.6385, -0.5521, -0.2753,  0.6713]], grad_fn=<AddBackward0>)
>>> # which is equivalent to:
>>> h1 = dropout1(x)
>>> h2 = conv1(h1, edge_index)
>>> h3 = dropout2(h2)
>>> h4 = conv2(h3, edge_index)
>>> # Case 2: with keyword argument
>>> sequential(x, edge_index, edge_weight=torch.ones(2))
tensor([[ 0.6738, -0.9032, -0.9628,  0.0670,  0.0252],
    [ 0.4909, -1.2430, -0.6029,  0.0510,  0.2107],
    [ 0.6338, -0.2760, -0.9112, -0.3197,  0.2689],
    [ 0.4909, -1.2430, -0.6029,  0.0510,  0.2107],
    [ 0.3876, -0.6385, -0.5521, -0.2753,  0.6713]], grad_fn=<AddBackward0>)
>>> # which is equivalent to:
>>> h1 = dropout1(x)
>>> h2 = conv1(h1, edge_index, edge_weight=torch.ones(2))
>>> h3 = dropout2(h2)
>>> h4 = conv2(h3, edge_index, edge_weight=torch.ones(2))

Note

  • The argument loc must be specified as the position of the feature input x, which is the argument passed through all layers.

  • Keyword arguments are forwarded only to layers that take more than one argument, and their usage must match the signatures of those layers.

forward(*inputs, **kwargs)[source]
reset_parameters()[source]
training: bool
class DropEdge(p: float = 0.5)[source]

DropEdge: Sampling edge using a uniform distribution from the “DropEdge: Towards Deep Graph Convolutional Networks on Node Classification” paper (ICLR’20)

Parameters:

p (float, optional) – the probability of dropping each edge, by default 0.5

Returns:

the output edge index and edge weight

Return type:

Tuple[Tensor, Optional[Tensor]]

Raises:

ValueError – p is out of range [0,1]

Example

import torch
from greatx.nn.layers import DropEdge
edge_index = torch.LongTensor([[1, 2], [3, 4]])
DropEdge(p=0.5)(edge_index)
forward(edge_index: Tensor, edge_weight: Optional[Tensor] = None) → Tuple[Tensor, Optional[Tensor]][source]
training: bool
class DropNode(p: float = 0.5)[source]

DropNode: Sampling node using a uniform distribution from the “Graph Contrastive Learning with Augmentations” paper (NeurIPS’20)

Parameters:

p (float, optional) – the probability of dropping each node, by default 0.5

Returns:

the output edge index and edge weight

Return type:

Tuple[Tensor, Optional[Tensor]]

Example

import torch
from greatx.nn.layers import DropNode
edge_index = torch.LongTensor([[1, 2], [3, 4]])
DropNode(p=0.5)(edge_index)
forward(edge_index: Tensor, edge_weight: Optional[Tensor] = None) → Tuple[Tensor, Optional[Tensor]][source]
training: bool
class DropPath(p: float = 0.5, walks_per_node: int = 1, walk_length: int = 3, num_nodes: Optional[int] = None, start: str = 'node', is_sorted: bool = False)[source]

DropPath: a structured form of greatx.functional.drop_edge from the “MaskGAE: Masked Graph Modeling Meets Graph Autoencoders” paper (arXiv’22)

Parameters:
  • p (Optional[Union[float, Tensor]], optional) – if p is a float, the percentage of nodes in the graph that are chosen as root nodes to perform random walks; if p is a torch.Tensor, a set of custom root nodes. By default, p=0.5.

  • walks_per_node (int, optional) – number of walks per node, by default 1

  • walk_length (int, optional) – number of walk length per node, by default 3

  • num_nodes (int, optional) – number of total nodes in the graph, by default None

  • start (str, optional) – the type of starting node, chosen from ‘node’ or ‘edge’, by default ‘node’

  • is_sorted (bool, optional) – whether the input edge_index is sorted

Returns:

the output edge index and edge weight

Return type:

Tuple[Tensor, Optional[Tensor]]

Example

import torch
from greatx.nn.layers import DropPath
edge_index = torch.LongTensor([[1, 2], [3, 4]])
DropPath(p=0.5)(edge_index)

DropPath(p=torch.tensor([1, 2]))(edge_index) # specify root nodes
forward(edge_index: Tensor, edge_weight: Optional[Tensor] = None) → Tuple[Tensor, Optional[Tensor]][source]
training: bool
class GCNConv(in_channels: int, out_channels: int, improved: bool = False, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]

The graph convolutional operator from the “Semi-supervised Classification with Graph Convolutional Networks” paper (ICLR’17)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • improved (bool, optional) – whether the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\), by default False

  • cached (bool, optional (UNUSED)) – whether the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions, by default False

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.

In addition, the argument cached is unused. We add this argument to be compatible with torch_geometric.
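
Example

A minimal usage sketch (not part of the original docstring), relying only on the constructor and forward signatures documented above; the node count and feature dimensions are illustrative:

>>> import torch
>>> from greatx.nn.layers import GCNConv
>>> x = torch.randn(5, 16)                         # 5 nodes, 16 features
>>> edge_index = torch.LongTensor([[0, 1, 1, 2],
...                                [1, 0, 2, 1]])  # shape [2, M]
>>> conv = GCNConv(16, 8)
>>> out = conv(x, edge_index)                      # shape [5, 8]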

reset_parameters()[source]
forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
training: bool
class SGConv(in_channels: int, out_channels: int, K: int = 1, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]

The simplified graph convolutional operator from the “Simplifying Graph Convolutional Networks” paper (ICML’19)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • K (int) – the number of propagation steps, by default 1

  • cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default False

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.
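
Example

A brief sketch (added here, not from the original docstring) showing the documented K and cached arguments together with cache_clear(); shapes are illustrative:

>>> import torch
>>> from greatx.nn.layers import SGConv
>>> x = torch.randn(5, 16)
>>> edge_index = torch.LongTensor([[0, 1, 1, 2], [1, 0, 2, 1]])
>>> conv = SGConv(16, 8, K=2, cached=True)
>>> out = conv(x, edge_index)   # the first call caches the K-step propagation
>>> conv.cache_clear()          # drop the cache before switching to a different graph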

reset_parameters()[source]
cache_clear()[source]

Clear cached inputs or intermediate results.

forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
class SSGConv(in_channels: int, out_channels: int, K: int = 5, alpha: float = 0.1, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]

The simple spectral graph convolutional operator from the “Simple Spectral Graph Convolution” paper (ICLR’21)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • K (int) – the number of propagation steps, by default 5

  • alpha (float) – Teleport probability \(\alpha\), by default 0.1

  • cached (bool, optional) – whether the layer will cache the K-step aggregation on first execution, and will use the cached version for further executions, by default False

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.

reset_parameters()[source]
cache_clear()[source]

Clear cached inputs or intermediate results.

forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
class DGConv(in_channels: int, out_channels: int, t: float = 5.27, K: int = 2, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]

The decoupled graph convolutional operator from the “Dissecting the Diffusion Process in Linear Graph Convolutional Networks” paper (NeurIPS’21)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • K (int) – the number of propagation steps, by default 2

  • t (float) – Terminal time \(t\), by default 5.27

  • cached (bool, optional) – whether the layer will cache the K-step aggregation on first execution, and will use the cached version for further executions, by default False

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.

reset_parameters()[source]
cache_clear()[source]

Clear cached inputs or intermediate results.

forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
class DAGNNConv(in_channels: int, out_channels: int = 1, K: int = 1, add_self_loops: bool = True, bias: bool = True)[source]

The DAGNN operator from the “Towards Deeper Graph Neural Networks” paper (KDD’20)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int, optional) – dimensions of output samples, must be 1 in all cases, by default 1

  • K (int, optional) – the number of propagation steps, by default 1

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.
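
Example

A short sketch (not from the original docstring) illustrating that out_channels is fixed to 1 while K controls the propagation depth; x is assumed to be the hidden representation produced by a preceding MLP:

>>> import torch
>>> from greatx.nn.layers import DAGNNConv
>>> x = torch.randn(5, 16)
>>> edge_index = torch.LongTensor([[0, 1, 1, 2], [1, 0, 2, 1]])
>>> conv = DAGNNConv(16, out_channels=1, K=10)
>>> out = conv(x, edge_index)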

reset_parameters()[source]
forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
training: bool
class TAGConv(in_channels: int, out_channels: int, K: int = 2, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]

The topology adaptive graph convolutional operator from the “Topology Adaptive Graph Convolutional Networks” paper (arXiv’17)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • K (int) – the number of propagation steps, by default 2

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.

reset_parameters()[source]
forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
training: bool
class MedianConv(in_channels: int, out_channels: int, reduce: str = 'median', add_self_loops: bool = True, normalize: bool = False, bias: bool = True)[source]

The graph convolutional operator with median aggregation from the “Understanding Structural Vulnerability in Graph Convolutional Networks” paper (IJCAI’21)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • reduce (str) – aggregation function, one of {‘median’, ‘sample_median’}, where median uses the exact median as the aggregation function, while sample_median approximates the median with a fixed set of sampled nodes. sample_median is much faster and more scalable than median. By default, median is used.

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default False

  • bias (bool, optional) – whether to use bias in the layers, by default True
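
Example

A minimal sketch (not from the original docstring) of the documented reduce options; ‘sample_median’ trades exactness for speed as described above, and the shapes are illustrative:

>>> import torch
>>> from greatx.nn.layers import MedianConv
>>> x = torch.randn(5, 16)
>>> edge_index = torch.LongTensor([[0, 1, 1, 2], [1, 0, 2, 1]])
>>> conv = MedianConv(16, 8, reduce='median')
>>> out = conv(x, edge_index)                        # shape [5, 8]
>>> fast_conv = MedianConv(16, 8, reduce='sample_median')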

reset_parameters()[source]
forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
training: bool
class RobustConv(in_channels: int, out_channels: int, gamma: float = 1.0, normalize: bool = True, add_self_loops: bool = True, bias: bool = True)[source]

The robust graph convolutional operator from the “Robust Graph Convolutional Networks Against Adversarial Attacks” paper (KDD’19)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • gamma (float, optional) – the scale of attention on the variances, by default 1.0

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to normalize the input graph, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.
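
Example

A minimal call sketch (not from the original docstring) based on the forward signature above; as that signature allows, x may also be a tuple of two tensors (commonly interpreted as a mean/variance pair when stacking RobustConv layers, per the paper):

>>> import torch
>>> from greatx.nn.layers import RobustConv
>>> x = torch.randn(5, 16)
>>> edge_index = torch.LongTensor([[0, 1, 1, 2], [1, 0, 2, 1]])
>>> conv = RobustConv(16, 8, gamma=1.0)
>>> out = conv(x, edge_index)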

reset_parameters()[source]
forward(x: Union[Tensor, Tuple[Tensor, Optional[Tensor]]], edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
training: bool
class AdaptiveConv(K: int = 3, lambda_amp: float = 0.1, normalize: bool = True, add_self_loops: bool = True)[source]

The AirGNN operator from the “Graph Neural Networks with Adaptive Residual” paper (NeurIPS’21)

Parameters:
  • K (int, optional) – the number of propagation steps during message passing, by default 3

  • lambda_amp (float, optional) – trade-off for adaptive message passing, by default 0.1

  • normalize (bool, optional) – Whether to add self-loops and compute symmetric normalization coefficients on the fly, by default True

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.
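
Example

A minimal sketch (not from the original docstring); AdaptiveConv is constructed without channel dimensions, so the propagated output is assumed to keep the feature dimension of x:

>>> import torch
>>> from greatx.nn.layers import AdaptiveConv
>>> x = torch.randn(5, 16)
>>> edge_index = torch.LongTensor([[0, 1, 1, 2], [1, 0, 2, 1]])
>>> prop = AdaptiveConv(K=3, lambda_amp=0.1)
>>> out = prop(x, edge_index)                        # assumed shape [5, 16]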

reset_parameters()[source]
forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
amp_forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
proximal_L21(x: Tensor, lambda_: float) → Tensor[source]
compute_LX(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
training: bool
class ElasticConv(K: int = 3, lambda_amp: float = 0.1, normalize: bool = True, add_self_loops: bool = True, lambda1: float = 3.0, lambda2: float = 3.0, L21: bool = True, cached: bool = True)[source]

The ElasticGNN operator from the “Elastic Graph Neural Networks” paper (ICML’21)

Parameters:
  • K (int, optional) – the number of propagation steps, by default 3

  • lambda_amp (float, optional) – trade-off of adaptive message passing, by default 0.1

  • normalize (bool, optional) – Whether to add self-loops and compute symmetric normalization coefficients on the fly, by default True

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • lambda1 (float, optional) – trade-off hyperparameter, by default 3

  • lambda2 (float, optional) – trade-off hyperparameter, by default 3

  • L21 (bool, optional) – whether to use row-wise projection onto the l2 ball of radius λ1, by default True

  • cached (bool, optional) – whether to cache the incident matrix, by default True
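
Example

A minimal sketch (not from the original docstring); like AdaptiveConv, ElasticConv is configured without channel dimensions and acts as a propagation layer. cache_clear() is shown since cached defaults to True:

>>> import torch
>>> from greatx.nn.layers import ElasticConv
>>> x = torch.randn(5, 16)
>>> edge_index = torch.LongTensor([[0, 1, 1, 2], [1, 0, 2, 1]])
>>> prop = ElasticConv(K=3, lambda1=3.0, lambda2=3.0)
>>> out = prop(x, edge_index)
>>> prop.cache_clear()          # clear the cached incident matrix when the graph changes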

reset_parameters()[source]
cache_clear()[source]

Clear cached inputs or intermediate results.

forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
emp_forward(x: Tensor, inc_mat: SparseTensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
L1_projection(x: Tensor, lambda_: float) → Tensor[source]

Component-wise projection onto the l∞ ball of radius λ1.

L21_projection(x: Tensor, lambda_: float) → Tensor[source]
class SoftMedianConv(in_channels: int, out_channels: int, cached: bool = False, add_self_loops: bool = True, normalize: bool = False, row_normalize: bool = True, bias: bool = True)[source]

The graph convolutional operator with soft median aggregation from the “Robustness of Graph Neural Networks at Scale” paper (NeurIPS’21)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})\) and sorted edges on first execution, and will use the cached version for further executions, by default False

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default False

  • row_normalize (bool, optional) – whether to perform row-normalization on the fly, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Raises:

RuntimeWarning – if module “glcore” is not properly installed.

Note

The input edges must be sorted for dimmedian_idx() from glcore

reset_parameters()[source]
cache_clear()[source]

Clear cached inputs or intermediate results.

forward(x: Tensor, edge_index: Tensor, edge_weight: Optional[Tensor] = None) → Tensor[source]
class SATConv(in_channels: int, out_channels: int, add_self_loops: bool = True, normalize: bool = True, bias: bool = False)[source]

The spectral adversarial training operator from the “Spectral Adversarial Training for Robust Graph Neural Network” paper (arXiv’22)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

For the inputs x, U, and V, our implementation supports: (1) U is torch.LongTensor, denoting edge indices with shape [2, M]; (2) U is torch.FloatTensor and V is None, denoting a dense matrix with shape [N, N]; (3) U and V are torch.FloatTensor, denoting eigenvectors and the corresponding eigenvalues.
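
Example

A minimal sketch (not from the original docstring) of input case (1) from the note above, where U is a LongTensor of edge indices; cases (2) and (3) follow the same call pattern with a dense matrix or an eigenvector/eigenvalue pair instead:

>>> import torch
>>> from greatx.nn.layers import SATConv
>>> x = torch.randn(5, 16)
>>> edge_index = torch.LongTensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # U, case (1)
>>> conv = SATConv(16, 8)
>>> out = conv(x, edge_index)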

reset_parameters()[source]
forward(x: Tensor, U: Tensor, V: Optional[Tensor] = None)[source]
training: bool
class TensorGCNConv(in_channels: int, out_channels: int, num_nodes: int, num_channels: int, bias: bool = True)[source]

The robust tensor graph convolutional operator from the “Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation” paper (KDD’22)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • num_nodes (int) – number of input nodes

  • num_channels (int) – number of input channels (adjacency matrices)

  • bias (bool, optional) – whether to use bias in the layers, by default True

reset_parameters()[source]
forward(x: Tensor, adjs: Tensor) → Tensor[source]
static fft_product(X, Y)[source]
training: bool
class TensorLinear(in_channels: int, num_nodes: int, num_channels: int, bias: bool = True)[source]

The tensor linear operator from the “Robust Tensor Graph Convolutional Networks via T-SVD based Graph Augmentation” paper (KDD’22)

Parameters:
  • in_channels (int) – dimensions of input samples

  • num_nodes (int) – number of input nodes

  • num_channels (int) – number of input channels (adjacency matrices)

  • bias (bool, optional) – whether to use bias in the layers, by default True

reset_parameters()[source]
forward(x)[source]
training: bool
class PoissonEncoder[source]
forward(x: Tensor) → Tensor[source]
training: bool
class IF(v_threshold: float = 1.0, v_reset: float = 0.0, alpha: float = 1.0, gamma: float = 0.0, thresh_decay: float = 1.0, surrogate: str = 'sigmoid')[source]

The Integrate-and-Fire (IF) neuron for spiking neural networks.

Parameters:
  • v_threshold (float, optional) – the threshold for emitting a spike, by default 1.0

  • v_reset (float, optional) – the reset level for neuron, by default 0.

  • alpha (float, optional) – the smooth factor for surrogate function, by default 1.0

  • gamma (float, optional) – the threshold decay factor \(\gamma\), by default 0.

  • thresh_decay (float, optional) – the threshold decay factor, by default 1.0

  • surrogate (str, optional) – the surrogate function for training spiking neurons, can be one of (‘sigmoid’, ‘triangle’, ‘arctan’, ‘mg’, ‘super’), by default ‘sigmoid’
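
Example

A sketch (not from the original docstring) of the stateful call pattern shared by the spiking neurons: the membrane state accumulates across calls until reset() is invoked, and the output is assumed to be a spike tensor with the same shape as the input:

>>> import torch
>>> from greatx.nn.layers import IF
>>> neuron = IF(v_threshold=1.0, surrogate='sigmoid')
>>> dv = torch.rand(4, 8)        # input current at one time step
>>> spikes = neuron(dv)          # first time step
>>> spikes = neuron(dv)          # second time step, state carried over
>>> neuron.reset()               # clear the accumulated state between samples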

reset()[source]

Reset neuron states.

forward(dv: Tensor) → Tensor[source]
training: bool
class LIF(v_threshold: float = 1.0, v_reset: float = 0.0, tau: float = 1.0, alpha: float = 1.0, gamma: float = 0.0, thresh_decay: float = 1.0, surrogate: str = 'sigmoid')[source]

The Leaky Integrate-and-Fire (LIF) neuron for spiking neural networks.

Parameters:
  • v_threshold (float, optional) – the threshold for emitting a spike, by default 1.0

  • v_reset (float, optional) – the reset level for neuron, by default 0.

  • tau (float, optional) – the leaky factor \(\tau\) for LIF-based neuron, by default 1.0

  • alpha (float, optional) – the smooth factor for surrogate function, by default 1.0

  • gamma (float, optional) – the threshold decay factor \(\gamma\), by default 0.

  • thresh_decay (float, optional) – the threshold decay factor, by default 1.0

  • surrogate (str, optional) – the surrogate function for training spiking neurons, can be one of (‘sigmoid’, ‘triangle’, ‘arctan’, ‘mg’, ‘super’), by default ‘sigmoid’

reset()[source]

Reset neuron states.

forward(dv: Tensor) → Tensor[source]
training: bool
class PLIF(v_threshold: float = 1.0, v_reset: float = 0.0, tau: float = 1.0, alpha: float = 1.0, gamma: float = 0.0, thresh_decay: float = 1.0, surrogate: str = 'sigmoid')[source]

The Parametric Leaky Integrate-and-Fire (PLIF) neuron for spiking neural networks. It differs from LIF with a trainable \(\tau\).

Parameters:
  • v_threshold (float, optional) – the threshold for emitting a spike, by default 1.0

  • v_reset (float, optional) – the reset level for neuron, by default 0.

  • tau (float, optional) – the leaky factor \(\tau\) for LIF-based neuron, by default 1.0

  • alpha (float, optional) – the smooth factor for surrogate function, by default 1.0

  • gamma (float, optional) – the threshold decay factor \(\gamma\), by default 0.

  • thresh_decay (float, optional) – the threshold decay factor, by default 1.0

  • surrogate (str, optional) – the surrogate function for training spiking neurons, can be one of (‘sigmoid’, ‘triangle’, ‘arctan’, ‘mg’, ‘super’), by default ‘sigmoid’

reset()[source]

Reset neuron states.

forward(dv: Tensor) → Tensor[source]
training: bool
class SpikingGCNonv(in_channels: int, out_channels: int, K: int = 1, T: int = 20, tau: float = 1.0, v_threshold: float = 1.0, v_reset: float = 0.0, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, bias: bool = True)[source]

The spiking graph convolutional operator from the “Spiking Graph Convolutional Networks” paper (IJCAI’22)

Parameters:
  • in_channels (int) – dimensions of input samples

  • out_channels (int) – dimensions of output samples

  • K (int) – the number of propagation steps, by default 1

  • T (int) – the number of time steps, by default 20

  • tau (float) – the \(\tau\) in LIF neuron, by default 1.0

  • v_threshold (float) – the threshold \(V_{th}\) in LIF neuron, by default 1.0

  • v_reset (float) – the reset level \(V_{reset}\) in LIF neuron, by default 0

  • cached (bool, optional) – whether the layer will cache the computation of \((\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2})^K\) on first execution, and will use the cached version for further executions, by default False

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default True

  • normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True

  • bias (bool, optional) – whether to use bias in the layers, by default True

Note

Different from that in torch_geometric, for the input edge_index, our implementation supports torch.FloatTensor, torch.LongTensor, and torch_sparse.SparseTensor.
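
Example

A minimal call sketch (not from the original docstring); per the parameters above, the layer runs its spiking dynamics over T time steps with the LIF neuron configured by tau, v_threshold, and v_reset:

>>> import torch
>>> from greatx.nn.layers import SpikingGCNonv
>>> x = torch.randn(5, 16)
>>> edge_index = torch.LongTensor([[0, 1, 1, 2], [1, 0, 2, 1]])
>>> conv = SpikingGCNonv(16, 8, K=2, T=20)
>>> out = conv(x, edge_index)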

reset_parameters()[source]
cache_clear()[source]

Clear cached inputs or intermediate results.

forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]