greatx.defense

- CosinePurification: Graph purification based on cosine similarity of connected nodes.
- JaccardPurification: Graph purification based on Jaccard similarity of connected nodes.
- SVDPurification: Graph purification based on low-rank Singular Value Decomposition (SVD) reconstruction on the adjacency matrix.
- EigenDecomposition: Graph purification based on low-rank Eigen Decomposition reconstruction on the adjacency matrix.
- TSVD: Graph purification based on low-rank Singular Value Decomposition (SVD) reconstruction on the adjacency matrix.
- GNNGUARD: Implementation of GNNGUARD from the "GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks" paper (NeurIPS'20)
- UniversalDefense: Base class for graph universal defense from the "Graph Universal Adversarial Defense" paper (arXiv'22)
- GUARD: Implementation of Graph Universal Adversarial Defense (GUARD) from the "Graph Universal Adversarial Defense" paper (arXiv'22)
- DegreeGUARD: Implementation of Graph Universal Defense based on node degrees from the "Graph Universal Adversarial Defense" paper (arXiv'22)
- RandomGUARD: Implementation of Graph Universal Defense based on random choices from the "Graph Universal Adversarial Defense" paper (arXiv'22)
- FeaturePropagation: Implementation of FeaturePropagation from the "On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features" paper (LoG'22)
- class CosinePurification(threshold: float = 0.0, allow_singleton: bool = False)[source]
Graph purification based on cosine similarity of connected nodes.
Note
CosinePurification is an extension of greatx.defense.JaccardPurification for dealing with continuous node features.
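The purification step can be sketched in plain Python (a toy illustration of the idea, not the greatx implementation, which operates on PyG Data objects and torch tensors):

```python
import math

def cosine_similarity(x, y):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm if norm > 0 else 0.0

def purify_edges(edge_index, features, threshold=0.0):
    """Keep only edges whose endpoints have cosine similarity above `threshold`."""
    return [(u, v) for u, v in edge_index
            if cosine_similarity(features[u], features[v]) > threshold]

# Toy graph: node 2's features are dissimilar to its neighbor's
features = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
edges = [(0, 1), (1, 2)]
print(purify_edges(edges, features, threshold=0.5))  # [(0, 1)]
```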
- class JaccardPurification(threshold: float = 0.0, allow_singleton: bool = False)[source]
Graph purification based on Jaccard similarity of connected nodes. As in “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)
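The defense from the IJCAI'19 paper removes edges connecting nodes that share too few (binary) features. A minimal sketch in plain Python, not the greatx implementation:

```python
def jaccard_similarity(x, y):
    """Jaccard similarity between two binary feature vectors."""
    intersection = sum(1 for a, b in zip(x, y) if a and b)
    union = sum(1 for a, b in zip(x, y) if a or b)
    return intersection / union if union > 0 else 0.0

def purify_edges(edge_index, features, threshold=0.0):
    """Drop edges whose endpoints' Jaccard similarity is not above `threshold`."""
    return [(u, v) for u, v in edge_index
            if jaccard_similarity(features[u], features[v]) > threshold]

# Nodes 0 and 2 share no features, so edge (0, 2) is removed
features = [[1, 1, 0], [1, 0, 0], [0, 0, 1]]
edges = [(0, 1), (0, 2)]
print(purify_edges(edges, features, threshold=0.0))  # [(0, 1)]
```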
- class SVDPurification(K: int = 50, threshold: float = 0.01, binaryzation: bool = False, remove_edge_index: bool = True)[source]
Graph purification based on low-rank Singular Value Decomposition (SVD) reconstruction on the adjacency matrix.
- Parameters:
K (int, optional) – the top-k largest singular value for reconstruction, by default 50
threshold (float, optional) – threshold to set elements in the reconstructed adjacency matrix as zero, by default 0.01
binaryzation (bool, optional) – whether to binarize the reconstructed adjacency matrix, by default False
remove_edge_index (bool, optional) – whether to remove edge_index and edge_weight in the input data after reconstruction, by default True
Note
We set the reconstructed adjacency matrix as adj_t to be compatible with torch_geometric, whose adj_t denotes the torch_sparse.SparseTensor.
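The reconstruction itself can be sketched with NumPy on a dense matrix (a simplified illustration; the actual class works on sparse PyG graphs):

```python
import numpy as np

def svd_purify(adj, K=50, threshold=0.01, binaryzation=False):
    """Low-rank reconstruction of an adjacency matrix via truncated SVD."""
    U, S, Vt = np.linalg.svd(adj)
    K = min(K, len(S))
    recon = (U[:, :K] * S[:K]) @ Vt[:K]   # rank-K approximation
    recon[recon < threshold] = 0.0        # sparsify small entries
    if binaryzation:
        recon[recon > 0] = 1.0
    return recon

# A small 4-node graph; high-rank "noise" edges are suppressed by the rank-2 approximation
adj = np.array([[0., 1., 1., 1.],
                [1., 0., 1., 0.],
                [1., 1., 0., 0.],
                [1., 0., 0., 0.]])
low_rank = svd_purify(adj, K=2)
```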
- class EigenDecomposition(K: int = 50, normalize: bool = True, remove_edge_index: bool = True)[source]
Graph purification based on low-rank Eigen Decomposition reconstruction on the adjacency matrix.
EigenDecomposition
is similar togreatx.defense.SVDPurification
- Parameters:
K (int, optional) – the top-k largest singular value for reconstruction, by default 50
normalize (bool, optional) – whether to normalize the input adjacency matrix
remove_edge_index (bool, optional) – whether to remove edge_index and edge_weight in the input data after reconstruction, by default True
Note
We set the reconstructed adjacency matrix as adj_t to be compatible with torch_geometric, whose adj_t denotes the torch_sparse.SparseTensor.
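For a symmetric adjacency matrix, the analogous reconstruction uses an eigendecomposition instead of the SVD. A dense NumPy sketch (this toy version keeps the K largest-magnitude eigenpairs, which is an assumption about the ranking criterion, and it omits the normalization option):

```python
import numpy as np

def eigen_purify(adj, K=2):
    """Rank-K reconstruction of a symmetric adjacency matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(adj)        # eigenvalues in ascending order
    idx = np.argsort(np.abs(vals))[-K:]     # K largest-magnitude eigenpairs
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

adj = np.array([[0., 1., 1., 1.],
                [1., 0., 1., 0.],
                [1., 1., 0., 0.],
                [1., 0., 0., 0.]])
low_rank = eigen_purify(adj, K=2)
```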
- class TSVD(K: int = 50, num_channels: int = 5, p: float = 0.1, normalize: bool = True)[source]
Graph purification based on low-rank Singular Value Decomposition (SVD) reconstruction on the adjacency matrix.
- Parameters:
K (int, optional) – the top-k largest singular values for reconstruction, by default 50
num_channels (int, optional) – the number of channels, by default 5
p (float, optional) – by default 0.1
normalize (bool, optional) – whether to normalize the input adjacency matrix, by default True
Note
We set the reconstructed adjacency matrix as adj_t to be compatible with torch_geometric, whose adj_t denotes the torch_sparse.SparseTensor.
- class GNNGUARD(threshold: float = 0.1, add_self_loops: bool = False)[source]
Implementation of GNNGUARD from the “GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks” paper (NeurIPS’20)
- Parameters:
threshold (float, optional) – the threshold for pruning edges based on the computed similarity scores, by default 0.1
add_self_loops (bool, optional) – whether to add self-loops to the input graph, by default False
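The core of GNNGUARD is to down-weight or prune edges whose endpoints have dissimilar features. A toy sketch of this edge-pruning step in plain Python (not the greatx implementation, which computes attention weights layer-wise on torch tensors):

```python
import math

def gnnguard_weights(edge_index, features, threshold=0.1):
    """Per-edge cosine similarities; edges below `threshold` are pruned (weight 0)."""
    def cos(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        nx = math.sqrt(sum(a * a for a in x))
        ny = math.sqrt(sum(b * b for b in y))
        return dot / (nx * ny) if nx and ny else 0.0

    weights = {}
    for u, v in edge_index:
        s = cos(features[u], features[v])
        weights[(u, v)] = s if s >= threshold else 0.0
    return weights

# Edge (0, 2) connects dissimilar nodes and gets weight 0
features = [[1.0, 0.0], [0.8, 0.2], [0.0, 1.0]]
edges = [(0, 1), (0, 2)]
w = gnnguard_weights(edges, features)
```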
- class UniversalDefense(device: str = 'cpu')[source]
Base class for graph universal defense from the “Graph Universal Adversarial Defense” paper (arXiv’22)
- forward(data: Data, target_nodes: Union[int, Tensor], k: int = 50, symmetric: bool = True) Data [source]
Return the defended graph with the defensive perturbation performed on it.
- Parameters:
data (a graph represented as PyG-like data instance) – the graph on which the defensive perturbation is performed
target_nodes (Union[int, Tensor]) – the target nodes on which the defensive perturbation is performed
k (int) – the number of anchor nodes in the defensive perturbation, by default 50
symmetric (bool) – whether the resulting graph is forcibly symmetric, by default True
- Returns:
Data – the defended graph with the defensive perturbation performed on the target nodes
- Return type:
PyG-like data
- removed_edges(target_nodes: Union[int, Tensor], k: int = 50) Tensor [source]
Return the edges to remove, with the defensive perturbation performed on the target nodes.
- Parameters:
target_nodes (Union[int, Tensor]) – the target nodes on which the defensive perturbation is performed
k (int) – the number of anchor nodes in the defensive perturbation, by default 50
- Returns:
the edges to remove, with the defensive perturbation performed on the target nodes
- Return type:
Tensor, shape [2, k]
- anchors(k: int = 50) Tensor [source]
Return the top-k anchor nodes
- Parameters:
k (int, optional) – the number of anchor nodes in the defensive perturbation, by default 50
- Returns:
the top-k anchor nodes
- Return type:
Tensor
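The universal defense amounts to removing edges between the target nodes and a fixed set of anchor nodes; subclasses differ only in how the anchors are ranked. A minimal sketch over a plain edge set, assuming anchors are already ranked (not the greatx implementation, which edits PyG Data objects):

```python
def removed_edges(anchors, target_nodes, k=50):
    """Edges connecting each target node to the top-k anchor nodes (to be removed)."""
    chosen = anchors[:k]
    return [(a, t) for t in target_nodes for a in chosen]

def defend(edge_set, anchors, target_nodes, k=50, symmetric=True):
    """Remove anchor-target edges from a graph represented as a set of edges."""
    for a, t in removed_edges(anchors, target_nodes, k):
        edge_set.discard((a, t))
        if symmetric:
            edge_set.discard((t, a))
    return edge_set

# Target node 3 is disconnected from the two anchor nodes 0 and 1
edges = {(0, 3), (3, 0), (1, 3), (2, 3)}
print(defend(edges, anchors=[0, 1], target_nodes=[3], k=2))  # {(2, 3)}
```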
- class GUARD(data: Data, alpha: float = 2, batch_size: int = 512, device: str = 'cpu')[source]
Implementation of Graph Universal Adversarial Defense (GUARD) from the “Graph Universal Adversarial Defense” paper (arXiv’22)
- Parameters:
data (Data) – the PyG-like input data
alpha (float, optional) – by default 2
batch_size (int, optional) – the batch size during computation, by default 512
device (str, optional) – the device where the method running on, by default “cpu”
Example
surrogate = GCN(num_features, num_classes, bias=False, acts=None)
surrogate_trainer = Trainer(surrogate, device=device)
ckp = ModelCheckpoint('guard.pth', monitor='val_acc')
surrogate_trainer.fit(data, mask=(splits.train_nodes, splits.val_nodes), callbacks=[ckp])
surrogate_trainer.evaluate(data, splits.test_nodes)
guard = GUARD(data, device=device)
guard.setup_surrogate(surrogate, data.y[splits.train_nodes])
target_node = 1
perturbed_data = ...  # Other PyG-like Data
guard(perturbed_data, target_node, k=50)
- setup_surrogate(surrogate: Module, victim_labels: Tensor) GUARD [source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
victim_labels (Tensor) – the labels of the victim nodes
- Returns:
the class itself
- Return type:
GUARD
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
- class DegreeGUARD(data: Data, descending: bool = False, device: str = 'cpu')[source]
Implementation of Graph Universal Defense based on node degrees from the “Graph Universal Adversarial Defense” paper (arXiv’22)
- Parameters:
data (Data) – the PyG-like input data
descending (bool, optional) – whether node degrees are sorted in descending order when selecting anchor nodes, by default False
device (str, optional) – the device where the method running on, by default “cpu”
Example
data = ...  # PyG-like Data
guard = DegreeGUARD(data)
target_node = 1
perturbed_data = ...  # Other PyG-like Data
guard(perturbed_data, target_node, k=50)
- class RandomGUARD(data: Data, device: str = 'cpu')[source]
Implementation of Graph Universal Defense based on random choices from the “Graph Universal Adversarial Defense” paper (arXiv’22)
- Parameters:
data (Data) – the PyG-like input data
device (str, optional) – the device where the method running on, by default “cpu”
Example
data = ...  # PyG-like Data
guard = RandomGUARD(data)
target_node = 1
perturbed_data = ...  # Other PyG-like Data
guard(perturbed_data, target_node, k=50)
- class FeaturePropagation(missing_mask: Optional[Tensor] = None, num_iterations: int = 40, normalize: bool = True)[source]
Implementation of FeaturePropagation from the “On the Unreasonable Effectiveness of Feature propagation in Learning on Graphs with Missing Node Features” paper (Log’22)
- Parameters:
num_iterations (int, optional) – number of iterations to run, by default 40
missing_mask (Optional[Tensor], optional) – mask on missing features, by default None
normalize (bool, optional) – whether to compute symmetric normalization coefficients on the fly, by default True
Example
data = ...  # PyG-like data
data = FeaturePropagation(num_iterations=40)(data)
# missing_mask is a mask of shape `[num_nodes, num_features]`
# indicating where the features are missing
data = FeaturePropagation(missing_mask=missing_mask)(data)
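The algorithm's core loop can be sketched with NumPy on a dense graph (a simplified illustration of the diffuse-then-clamp iteration; the actual transform works on sparse PyG graphs):

```python
import numpy as np

def feature_propagation(adj, x, missing_mask, num_iterations=40):
    """Diffuse features over the graph; known entries are clamped after each step."""
    deg = adj.sum(1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    known = ~missing_mask
    out = np.where(missing_mask, 0.0, x)   # missing entries start at zero
    for _ in range(num_iterations):
        out = a_norm @ out                 # one propagation step
        out[known] = x[known]              # reset known features
    return out

# Path graph 0-1-2; node 1's single feature is missing and is filled
# in from its two neighbors
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[1.], [0.], [1.]])
missing = np.array([[False], [True], [False]])
out = feature_propagation(adj, x, missing)
```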
See also
torch_geometric.transforms.FeaturePropagation