graphwar.defense

  • CosinePurification – Graph purification based on cosine similarity of connected nodes.

  • JaccardPurification – Graph purification based on Jaccard similarity of connected nodes.

  • SVDPurification – Graph purification based on low-rank Singular Value Decomposition (SVD) reconstruction on the adjacency matrix.

  • EigenDecomposition – Graph purification based on low-rank Eigen Decomposition reconstruction on the adjacency matrix.

  • GNNGUARD – Implementation of GNNGUARD from the "GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks" paper (NeurIPS'20)

  • UniversalDefense – Base class for graph universal defense

  • GUARD – Graph Universal Adversarial Defense (GUARD)

  • DegreeGUARD – Graph Universal Defense based on node degrees

  • RandomGUARD – Graph Universal Defense based on random choice
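All of the classes above live in graphwar.defense; a minimal import sketch, assuming they are re-exported at the package level as listed:

>>> from graphwar.defense import JaccardPurification, SVDPurification, GUARD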

class CosinePurification(threshold: float = 0.0, allow_singleton: bool = False)[source]

Graph purification based on cosine similarity of connected nodes.

Note

CosinePurification is an extension of graphwar.defense.JaccardPurification for dealing with continuous node features.

Parameters
  • threshold (float, optional) – threshold to filter edges based on cosine similarity, by default 0.

  • allow_singleton (bool, optional) – whether the defense strategy allows singleton nodes, by default False
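A minimal usage sketch, assuming the purification defenses follow a transform-style call convention on a PyG-like Data instance (the call below is illustrative, not taken from the original docs):

>>> data = ... # PyG-like Data with continuous node features
>>> purified_data = CosinePurification(threshold=0.1)(data)  # filters edges by cosine similarity against the threshold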

class JaccardPurification(threshold: float = 0.0, allow_singleton: bool = False)[source]

Graph purification based on Jaccard similarity of connected nodes. As in “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)

Parameters
  • threshold (float, optional) – threshold to filter edges based on Jaccard similarity, by default 0.

  • allow_singleton (bool, optional) – whether the defense strategy allows singleton nodes, by default False
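A similar sketch for binary node features, under the same assumed call convention:

>>> data = ... # PyG-like Data with binary node features
>>> purifier = JaccardPurification(threshold=0.0, allow_singleton=False)
>>> purified_data = purifier(data)  # filters out low-similarity edges according to the threshold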

class SVDPurification(K: int = 50, threshold: float = 0.01, binaryzation: bool = False, remove_edge_index: bool = True)[source]

Graph purification based on low-rank Singular Value Decomposition (SVD) reconstruction on the adjacency matrix.

Parameters
  • K (int, optional) – the number of largest singular values kept for the low-rank reconstruction, by default 50

  • threshold (float, optional) – threshold to set elements in the reconstructed adjacency matrix as zero, by default 0.01

  • binaryzation (bool, optional) – whether to binarize the reconstructed adjacency matrix, by default False

  • remove_edge_index (bool, optional) – whether to remove the edge_index and edge_weight in the input data after reconstruction, by default True

Note

The reconstructed adjacency matrix is stored as adj_t for compatibility with torch_geometric, where adj_t denotes a torch_sparse.SparseTensor.
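A minimal usage sketch under the same assumed call convention; per the note above, the reconstructed adjacency is exposed as adj_t:

>>> data = ... # PyG-like Data
>>> purified_data = SVDPurification(K=50, threshold=0.01)(data)
>>> purified_data.adj_t  # low-rank reconstruction as a torch_sparse.SparseTensor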

class EigenDecomposition(K: int = 50, normalize: bool = True, remove_edge_index: bool = True)[source]

Graph purification based on low-rank Eigen Decomposition reconstruction on the adjacency matrix.

EigenDecomposition is similar to graphwar.defense.SVDPurification

Parameters
  • K (int, optional) – the number of largest eigenvalues kept for the low-rank reconstruction, by default 50

  • normalize (bool, optional) – whether to normalize the input adjacency matrix, by default True

  • remove_edge_index (bool, optional) – whether to remove the edge_index and edge_weight in the input data after reconstruction, by default True

Note

The reconstructed adjacency matrix is stored as adj_t for compatibility with torch_geometric, where adj_t denotes a torch_sparse.SparseTensor.

class GNNGUARD(threshold: float = 0.1, add_self_loops: bool = False)[source]

Implementation of GNNGUARD from the “GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks” paper (NeurIPS’20)

Parameters
  • threshold (float, optional) – threshold for removing edges based on attention scores, by default 0.1

  • add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default False

forward(x, edge_index)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
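Following the note above, a minimal sketch that calls the module instance rather than forward directly; it assumes, for illustration, that the call returns a pruned (edge_index, edge_weight) pair with edges below the attention threshold removed:

>>> x, edge_index = data.x, data.edge_index  # from a PyG-like Data
>>> defense = GNNGUARD(threshold=0.1)
>>> edge_index, edge_weight = defense(x, edge_index)  # calling the instance also runs registered hooks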

extra_repr() → str[source]

Set the extra representation of the module

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

training: bool
class UniversalDefense(device: str = 'cpu')[source]

Base class for graph universal defense

forward(data: Data, target_nodes: Union[int, Tensor], k: int = 50, symmetric: bool = True) → Data[source]

Return the defended graph with the defensive perturbation applied.

Parameters
  • data (a graph represented as a PyG-like data instance) – the graph on which the defensive perturbation is performed

  • target_nodes (Union[int, Tensor]) – the target nodes on which the defensive perturbation is performed

  • k (int) – the number of anchor nodes in the defensive perturbation, by default 50

  • symmetric (bool) – whether to force the resulting graph to be symmetric, by default True

Returns

the defended graph with the defensive perturbation applied to the target nodes

Return type

PyG-like Data

removed_edges(target_nodes: Union[int, Tensor], k: int = 50) → Tensor[source]

Return the edges to be removed by the defensive perturbation on the target nodes

Parameters
  • target_nodes (Union[int, Tensor]) – the target nodes on which the defensive perturbation is performed

  • k (int) – the number of anchor nodes in the defensive perturbation, by default 50

Returns

the edges to be removed by the defensive perturbation on the target nodes

Return type

Tensor, shape [2, k]

anchors(k: int = 50) → Tensor[source]

Return the top-k anchor nodes

Parameters

k (int, optional) – the number of anchor nodes in the defensive perturbation, by default 50

Returns

the top-k anchor nodes

Return type

Tensor

patch(k=50) → Tensor[source]

Return the universal patch of the defensive perturbation

Parameters

k (int, optional) – the number of anchor nodes in the defensive perturbation, by default 50

Returns

the 0-1 (boolean) universal patch where 1 denotes the edges to be removed.

Return type

Tensor
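A minimal sketch of how these helpers relate, using a concrete subclass (RandomGUARD, documented below); the shapes follow the return types documented above:

>>> data = ... # PyG-like Data
>>> guard = RandomGUARD(data)
>>> anchor_nodes = guard.anchors(k=50)                 # the top-50 anchor nodes
>>> edges = guard.removed_edges(target_nodes=1, k=50)  # shape [2, 50]: the edges to be removed for target node 1
>>> defended_data = guard(data, target_nodes=1, k=50)  # the defended graph with those edges removed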

training: bool
class GUARD(data: Data, alpha: float = 2, batch_size: int = 512, device: str = 'cpu')[source]

Graph Universal Adversarial Defense (GUARD)

Parameters
  • data (Data) – the PyG-like input data

  • alpha (float, optional) – the scale factor for node degree, by default 2

  • batch_size (int, optional) – the batch size for computing node influence, by default 512

  • device (str, optional) – the device on which the method runs, by default “cpu”

Example

>>> surrogate = GCN(dataset.num_features, dataset.num_classes, bias=False, acts=None)
>>> trainer = Trainer(surrogate, device=device)
>>> ckp = ModelCheckpoint('guard.pth', monitor='val_acc')
>>> trainer.fit({'data': data, 'mask': splits.train_nodes},
...             {'data': data, 'mask': splits.val_nodes}, callbacks=[ckp])
>>> trainer.evaluate({'data': data, 'mask': splits.test_nodes})
>>> guard = GUARD(data, device=device)
>>> guard.setup_surrogate(surrogate, data.y[splits.train_nodes])
>>> target_node = 1
>>> perturbed_data = ... # Other PyG-like Data
>>> guard(perturbed_data, target_node, k=50)
setup_surrogate(surrogate: Module, victim_labels: Tensor) → GUARD[source]

Method used to initialize the (trained) surrogate model.

Parameters
  • surrogate (Module) – the input surrogate module

  • victim_labels (Tensor) – the labels of the victim nodes, e.g., data.y[splits.train_nodes] as in the example above

  • eps (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns

the class itself

Return type

Surrogate

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of required

training: bool
class DegreeGUARD(data: Data, descending: bool = False, device: str = 'cpu')[source]

Graph Universal Defense based on node degrees

Parameters
  • data (Data) – the PyG-like input data

  • descending (bool, optional) – whether the chosen nodes are sorted by degree in descending order, by default False

  • device (str, optional) – the device on which the method runs, by default “cpu”

Example

>>> data = ... # PyG-like Data
>>> guard = DegreeGUARD(data)
>>> target_node = 1
>>> perturbed_data = ... # Other PyG-like Data
>>> guard(perturbed_data, target_node, k=50)
training: bool
class RandomGUARD(data: Data, device: str = 'cpu')[source]

Graph Universal Defense based on random choice

Parameters
  • data (Data) – the PyG-like input data

  • device (str, optional) – the device on which the method runs, by default “cpu”

Example

>>> data = ... # PyG-like Data
>>> guard = RandomGUARD(data)
>>> target_node = 1
>>> perturbed_data = ... # Other PyG-like Data
>>> guard(perturbed_data, target_node, k=50)
training: bool