graphwar.defense¶
- CosinePurification – Graph purification based on cosine similarity of connected nodes.
- JaccardPurification – Graph purification based on Jaccard similarity of connected nodes.
- SVDPurification – Graph purification based on low-rank Singular Value Decomposition (SVD) reconstruction on the adjacency matrix.
- EigenDecomposition – Graph purification based on low-rank eigen decomposition reconstruction on the adjacency matrix.
- GNNGUARD – Implementation of GNNGUARD from the “GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks” paper (NeurIPS’20).
- UniversalDefense – Base class for graph universal defense.
- GUARD – Graph Universal Adversarial Defense (GUARD).
- DegreeGUARD – Graph Universal Defense based on node degrees.
- RandomGUARD – Graph Universal Defense based on random choice.
- class CosinePurification(threshold: float = 0.0, allow_singleton: bool = False)[source]¶
Graph purification based on cosine similarity of connected nodes.
Note
CosinePurification is an extension of graphwar.defense.JaccardPurification for dealing with continuous node features.
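The idea can be sketched in a few lines of NumPy. Note this is a simplified illustration, not graphwar's actual implementation; `cosine_purify` is a hypothetical helper that keeps only edges whose endpoint feature vectors have cosine similarity above `threshold`:

```python
import numpy as np

def cosine_purify(x: np.ndarray, edge_index: np.ndarray,
                  threshold: float = 0.0) -> np.ndarray:
    """Drop edges whose endpoint features have cosine similarity
    at or below `threshold`. `edge_index` has shape [2, num_edges]."""
    src, dst = edge_index
    a, b = x[src], x[dst]
    num = (a * b).sum(axis=1)
    denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    # guard against zero-norm feature vectors
    sim = np.divide(num, denom, out=np.zeros_like(num, dtype=float),
                    where=denom > 0)
    return edge_index[:, sim > threshold]
```

Because cosine similarity works on continuous values, this variant applies to real-valued node features where Jaccard similarity would be ill-defined.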
- class JaccardPurification(threshold: float = 0.0, allow_singleton: bool = False)[source]¶
Graph purification based on Jaccard similarity of connected nodes, as in the “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19).
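For binary node features, the purification rule can be sketched as follows (a simplified illustration with a hypothetical `purify_edges` helper, not graphwar's actual code): an edge survives only if its endpoints share enough nonzero features relative to their union.

```python
import numpy as np

def jaccard_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard similarity between two binary feature vectors."""
    intersection = np.count_nonzero(a * b)   # features present in both
    union = np.count_nonzero(a + b)          # features present in either
    return intersection / union if union else 0.0

def purify_edges(x: np.ndarray, edge_index: np.ndarray,
                 threshold: float = 0.0) -> np.ndarray:
    """Keep only edges whose endpoint features have Jaccard
    similarity strictly above `threshold`."""
    keep = np.array([
        jaccard_similarity(x[u], x[v]) > threshold
        for u, v in edge_index.T
    ])
    return edge_index[:, keep]
```

With the default `threshold=0.0`, only edges between nodes sharing no features at all are removed, which matches the intuition that adversarial edges tend to connect dissimilar nodes.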
- class SVDPurification(K: int = 50, threshold: float = 0.01, binaryzation: bool = False, remove_edge_index: bool = True)[source]¶
Graph purification based on low-rank Singular Value Decomposition (SVD) reconstruction on the adjacency matrix.
- Parameters
K (int, optional) – the number of top (largest) singular values used for reconstruction, by default 50
threshold (float, optional) – threshold to set elements in the reconstructed adjacency matrix as zero, by default 0.01
binaryzation (bool, optional) – whether to binarize the reconstructed adjacency matrix, by default False
remove_edge_index (bool, optional) – whether to remove the edge_index and edge_weight in the input data after reconstruction, by default True
Note
We set the reconstructed adjacency matrix as adj_t to be compatible with torch_geometric, where adj_t denotes a torch_sparse.SparseTensor.
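The low-rank reconstruction step can be sketched on a dense adjacency matrix as follows (a simplified sketch; the `svd_purify` name is hypothetical, and graphwar operates on sparse tensors rather than dense arrays):

```python
import numpy as np

def svd_purify(adj: np.ndarray, K: int = 50, threshold: float = 0.01,
               binaryzation: bool = False) -> np.ndarray:
    """Rank-K SVD reconstruction of a (dense) adjacency matrix.
    Entries below `threshold` are zeroed; optionally binarize."""
    U, S, Vt = np.linalg.svd(adj)
    K = min(K, len(S))
    # reconstruct from the K largest singular triplets
    recon = (U[:, :K] * S[:K]) @ Vt[:K]
    recon[recon < threshold] = 0.0
    if binaryzation:
        recon[recon > 0] = 1.0
    return recon
```

The intuition is that adversarial perturbations tend to be high-rank, so truncating to the top-K singular components removes much of the attack while preserving the clean graph's dominant structure.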
- class EigenDecomposition(K: int = 50, normalize: bool = True, remove_edge_index: bool = True)[source]¶
Graph purification based on low-rank Eigen Decomposition reconstruction on the adjacency matrix.
EigenDecomposition is similar to graphwar.defense.SVDPurification.
- Parameters
K (int, optional) – the number of top (largest) eigenvalues used for reconstruction, by default 50
normalize (bool, optional) – whether to normalize the input adjacency matrix, by default True
remove_edge_index (bool, optional) – whether to remove the edge_index and edge_weight in the input data after reconstruction, by default True
Note
We set the reconstructed adjacency matrix as adj_t to be compatible with torch_geometric, where adj_t denotes a torch_sparse.SparseTensor.
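The eigen-decomposition variant can be sketched the same way for a symmetric adjacency matrix (a hypothetical `eigen_purify` helper; here I assume the K eigenpairs largest in magnitude are kept, which may differ from graphwar's selection rule):

```python
import numpy as np

def eigen_purify(adj: np.ndarray, K: int = 50) -> np.ndarray:
    """Rank-K reconstruction of a symmetric adjacency matrix from
    its K largest-magnitude eigenpairs."""
    vals, vecs = np.linalg.eigh(adj)            # eigenvalues ascending
    order = np.argsort(np.abs(vals))[::-1][:K]  # K largest by magnitude
    V = vecs[:, order]
    # V @ diag(vals) @ V.T restricted to the selected eigenpairs
    return (V * vals[order]) @ V.T
```

For symmetric matrices the eigen decomposition and SVD coincide up to signs, so this behaves like SVDPurification while exploiting symmetry via `eigh`.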
- class GNNGUARD(threshold: float = 0.1, add_self_loops: bool = False)[source]¶
Implementation of GNNGUARD from the “GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks” paper (NeurIPS’20)
- Parameters
threshold (float, optional) – similarity threshold below which edges are pruned, by default 0.1
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default False
- forward(x, edge_index)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
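GNNGUARD's core idea, pruning and re-weighting edges by feature similarity before message passing, can be sketched as follows (a hypothetical simplification; the paper additionally learns a layer-wise memory of edge weights):

```python
import numpy as np

def gnnguard_weights(x: np.ndarray, edge_index: np.ndarray,
                     threshold: float = 0.1) -> np.ndarray:
    """Per-edge cosine similarity; edges below `threshold` get weight 0
    (pruned), and surviving weights are normalized per source node."""
    src, dst = edge_index
    a, b = x[src].astype(float), x[dst].astype(float)
    num = (a * b).sum(axis=1)
    denom = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    sim = np.divide(num, denom, out=np.zeros_like(num), where=denom > 0)
    w = np.where(sim < threshold, 0.0, sim)
    # normalize weights over each node's surviving neighbors
    for u in np.unique(src):
        mask = src == u
        total = w[mask].sum()
        if total > 0:
            w[mask] /= total
    return w
```

A GNN layer would then aggregate neighbor messages scaled by these weights, so edges to dissimilar (likely adversarial) neighbors contribute nothing.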
- class UniversalDefense(device: str = 'cpu')[source]¶
Base class for graph universal defense
- forward(data: Data, target_nodes: Union[int, Tensor], k: int = 50, symmetric: bool = True) Data [source]¶
Return the defended graph with the defensive perturbation applied.
- Parameters
data (a graph represented as PyG-like data instance) – the graph on which the defensive perturbation is performed
target_nodes (Union[int, Tensor]) – the target nodes on which the defensive perturbation is performed
k (int) – the number of anchor nodes in the defensive perturbation, by default 50
symmetric (bool) – Determine whether the resulting graph is forcibly symmetric, by default True
- Returns
Data – the defended graph with the defensive perturbation applied to the target nodes
- Return type
PyG-like data
- removed_edges(target_nodes: Union[int, Tensor], k: int = 50) Tensor [source]¶
Return the edges removed by the defensive perturbation on the target nodes
- Parameters
target_nodes (Union[int, Tensor]) – the target nodes on which the defensive perturbation is performed
k (int, optional) – the number of anchor nodes in the defensive perturbation, by default 50
- Returns
the edges removed by the defensive perturbation on the target nodes
- Return type
Tensor, shape [2, k]
- anchors(k: int = 50) Tensor [source]¶
Return the top-k anchor nodes
- Parameters
k (int, optional) – the number of anchor nodes in the defensive perturbation, by default 50
- Returns
the top-k anchor nodes
- Return type
Tensor
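The shape contract of removed_edges can be illustrated with a small sketch (a hypothetical stand-alone `removed_edges` mirroring the method above, under the assumption that the defense deletes the edges linking the target node to the k anchor nodes):

```python
import numpy as np

def removed_edges(target_node: int, anchors: np.ndarray) -> np.ndarray:
    """Edges (target -> anchor) to delete, as a [2, k] array:
    row 0 repeats the target node, row 1 lists the anchor nodes."""
    k = len(anchors)
    return np.stack([np.full(k, target_node), anchors])
```

This makes the [2, k] return shape concrete: one column per anchor node, in the usual edge_index layout.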
- class GUARD(data: Data, alpha: float = 2, batch_size: int = 512, device: str = 'cpu')[source]¶
Graph Universal Adversarial Defense (GUARD)
- Parameters
data (Data) – the PyG-like input data
alpha (float, optional) – the hyperparameter used to scale the node influence score, by default 2
batch_size (int, optional) – the batch size used when computing node influence, by default 512
device (str, optional) – the device on which the method runs, by default “cpu”
Example
>>> surrogate = GCN(dataset.num_features, dataset.num_classes, bias=False, acts=None)
>>> surrogate_trainer = Trainer(surrogate, device=device)
>>> ckp = ModelCheckpoint('guard.pth', monitor='val_acc')
>>> surrogate_trainer.fit({'data': data, 'mask': splits.train_nodes},
...                       {'data': data, 'mask': splits.val_nodes}, callbacks=[ckp])
>>> surrogate_trainer.evaluate({'data': data, 'mask': splits.test_nodes})
>>> guard = GUARD(data, device=device)
>>> guard.setup_surrogate(surrogate, data.y[splits.train_nodes])
>>> target_node = 1
>>> perturbed_data = ...  # Other PyG-like Data
>>> guard(perturbed_data, target_node, k=50)
- setup_surrogate(surrogate: Module, victim_labels: Tensor) GUARD [source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input (trained) surrogate module
victim_labels (Tensor) – the labels of the victim nodes used by the defense
eps (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns
the class itself
- Return type
GUARD
- Raises
RuntimeError – if the surrogate model is not an instance of
torch.nn.Module
RuntimeError – if the surrogate model is not an instance of
required
- class DegreeGUARD(data: Data, descending: bool = False, device: str = 'cpu')[source]¶
Graph Universal Defense based on node degrees
- Parameters
data (Data) – the PyG-like input data
descending (bool, optional) – whether to sort node degrees in descending order, by default False
device (str, optional) – the device on which the method runs, by default “cpu”
Example
>>> data = ...  # PyG-like Data
>>> guard = DegreeGUARD(data)
>>> target_node = 1
>>> perturbed_data = ...  # Other PyG-like Data
>>> guard(perturbed_data, target_node, k=50)
- class RandomGUARD(data: Data, device: str = 'cpu')[source]¶
Graph Universal Defense based on random choice
- Parameters
data (Data) – the PyG-like input data
device (str, optional) – the device on which the method runs, by default “cpu”
Example
>>> data = ...  # PyG-like Data
>>> guard = RandomGUARD(data)
>>> target_node = 1
>>> perturbed_data = ...  # Other PyG-like Data
>>> guard(perturbed_data, target_node, k=50)