greatx.attack
Base Classes
Attacker – Adversarial attacker for graph data.
FlipAttacker – Adversarial attacker for graph data by flipping edges.
- class Attacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Adversarial attacker for graph data. Note that this is an abstract class.
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Examples
For example, the attacker model should be defined as follows:
from greatx.attack import Attacker

attacker = Attacker(data, device='cuda')
attacker.reset()                   # reset states
attacker.attack(attack_arguments)  # attack
attacker.data()                    # get the attacked graph as PyG-like Data
- reset()[source]
Reset attacker state. Override this method in a subclass to implement specific functionality.
- abstract data() Data [source]
Get the attacked graph denoted as PyG-like Data.
- Raises:
NotImplementedError – The subclass does not implement this interface.
- abstract attack() Attacker [source]
Abstract method. The subclass must override this method to implement its specific attack.
- Raises:
NotImplementedError – The subclass does not implement this interface.
- set_max_perturbations(max_perturbations: Union[float, int] = inf, verbose: bool = True) Attacker [source]
Set the maximum number of allowed perturbations.
- Parameters:
Example
>>> attacker.set_max_perturbations(10)
- class FlipAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Adversarial attacker for graph data by flipping edges.
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Note
greatx.attack.FlipAttacker is a base class for graph modification attacks (GMA).
- reset() FlipAttacker [source]
Reset attacker. This method must be called before attack.
- edge_flips(frac: float = 1.0) BunchDict [source]
Get all the edges to be flipped, including edges to be added and removed.
- Parameters:
frac (float, optional) – the fraction of edge perturbations, i.e., how many perturbed edges are used to construct the perturbed graph, by default 1.0
Example
>>> # Get the edge flips
>>> attacker.edge_flips()

>>> # Get the edge flips, specifying frac
>>> attacker.edge_flips(frac=0.5)
- remove_feat(u: int, v: int, it: Optional[int] = None)[source]
Remove the feature in dimension v from node u. That is, set that dimension of the node's feature vector to zero.
- add_feat(u: int, v: int, it: Optional[int] = None)[source]
Add the feature in dimension v to node u. That is, set that dimension of the node's feature vector to one.
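Both methods amount to elementwise edits on the node feature matrix. A minimal sketch of that semantics in plain PyTorch (toy tensor `x`, illustrative only, not the attacker's internal bookkeeping):

```python
import torch

# Toy binary feature matrix: 3 nodes, 4 feature dimensions.
x = torch.tensor([[1., 0., 1., 0.],
                  [0., 1., 0., 0.],
                  [1., 1., 0., 1.]])

def remove_feat(x, u, v):
    # Zero out dimension v of node u (what remove_feat records).
    x = x.clone()
    x[u, v] = 0.
    return x

def add_feat(x, u, v):
    # Set dimension v of node u to one (what add_feat records).
    x = x.clone()
    x[u, v] = 1.
    return x

x2 = add_feat(remove_feat(x, 0, 0), 1, 3)
assert x2[0, 0] == 0 and x2[1, 3] == 1  # flips applied
assert x[0, 0] == 1                     # original left untouched
```

The actual attacker only records these flips; the perturbed matrix is materialized later by data() / feat_flips().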
- feat_flips(frac: float = 1.0) BunchDict [source]
Get all the features to be flipped, including features to be added and removed.
- Parameters:
frac (float, optional) – the fraction of feature perturbations, i.e., how many perturbed features are used to construct the perturbed graph. by default 1.0
Example
>>> # Get the feature flips
>>> attacker.feat_flips()

>>> # Get the feature flips, specifying frac
>>> attacker.feat_flips(frac=0.5)
- data(edge_ratio: float = 1.0, feat_ratio: float = 1.0, coalesce: bool = True, symmetric: bool = True) Data [source]
Get the attacked graph denoted by a PyG-like data instance. Note that this method uses an LRU cache for efficiency: the computation is only executed on the first call, and subsequent calls with the same parameters return the cached result.
- Parameters:
edge_ratio (float, optional) – the fraction of edge perturbations, i.e., how many perturbed edges are used to construct the perturbed graph. by default 1.0
feat_ratio (float, optional) – the fraction of feature perturbations, i.e., how many perturbed features are used to construct the perturbed graph. by default 1.0
coalesce (bool, optional) – whether to coalesce the output edges, by default True
symmetric (bool, optional) – whether the output graph is symmetric, by default True
Example
>>> # Get the perturbed graph, including
>>> # edge flips and feature flips
>>> attacker.data()

>>> # Get the perturbed graph,
>>> # specifying edge_ratio
>>> attacker.data(edge_ratio=0.5)

>>> # Get the perturbed graph,
>>> # specifying feat_ratio
>>> attacker.data(feat_ratio=0.5)
- Returns:
the attacked graph denoted by PyG-like data instance
- Return type:
Data
- set_allow_singleton(state: bool)[source]
Set whether the attacked graph allows singleton nodes, i.e., nodes with zero degree.
- Parameters:
state (bool) – the flag to set
Example
>>> attacker.set_allow_singleton(True)
- is_singleton_edge(u: int, v: int) bool [source]
Check whether the edge is a singleton edge, i.e., an edge that, if removed, would leave a singleton node in the graph.
- Parameters:
- Returns:
True if the edge is a singleton edge, otherwise False.
- Return type:
bool
Note
Please make sure the edge is the one being removed.
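The check can be sketched with plain degree counting. An illustrative reimplementation on a toy graph (this is an assumption about the rule, not the library's code, which also honors set_allow_singleton):

```python
import numpy as np

# Toy undirected graph: triangle 0-1-2 plus a pendant node 3.
edges = np.array([[0, 1], [1, 2], [2, 0], [2, 3]])
degree = np.bincount(edges.ravel(), minlength=4)

def is_singleton_edge(u, v):
    # Removing (u, v) would strand an endpoint whose degree drops to zero.
    return bool(degree[u] <= 1 or degree[v] <= 1)

assert not is_singleton_edge(0, 1)  # both endpoints keep other edges
assert is_singleton_edge(2, 3)      # node 3 would become a singleton
```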
Targeted Attacks
TargetedAttacker – Base class for adversarial targeted attack.
RandomAttack – Random attacker that randomly chooses edges to flip.
DICEAttack – Implementation of the DICE attack from the “Hiding Individuals and Communities in a Social Network” paper.
FGAttack – Implementation of the FGA attack from the “Fast Gradient Attack on Network Embedding” paper (arXiv’18).
IGAttack – Implementation of the IG-FGSM attack from the “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19).
SGAttack – Implementation of the SGA attack from the “Adversarial Attack on Large Scale Graph” paper (TKDE’21).
Nettack – Implementation of the Nettack attack from the “Adversarial Attacks on Neural Networks for Graph Data” paper (KDD’18).
GFAttack – Implementation of the GFA attack from the “A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models” paper (AAAI’20).
PGDAttack – Implementation of the PGD attack from the “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19).
- class TargetedAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Base class for adversarial targeted attack.
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Note
greatx.attack.targeted.TargetedAttacker is a subclass of greatx.attack.FlipAttacker. It belongs to graph modification attacks (GMA).
- reset() TargetedAttacker [source]
Reset the state of the Attacker
- Returns:
the attacker itself
- Return type:
TargetedAttacker
- attack(target, target_label, num_budgets, direct_attack, structure_attack, feature_attack) TargetedAttacker [source]
Base method that describes the adversarial targeted attack.
- Parameters:
target (int) – the target node to be attacked
target_label (int) – the label of the target node
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
direct_attack (bool) – whether to conduct direct attack or indirect attack
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
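The direct_attack flag determines which edges are candidates for flipping: a direct attack touches edges incident to the target, while an indirect (influence) attack only perturbs edges around the target's neighbors. A toy sketch of that distinction (the candidate_edges helper is hypothetical, not library internals):

```python
# Toy undirected graph as an adjacency set per node.
neighbors = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}
target = 0

def candidate_edges(target, direct_attack=True):
    if direct_attack:
        # Direct attack: flip edges incident to the target itself.
        return {(target, v) for v in neighbors if v != target}
    # Indirect attack: flip edges around the target's neighbors,
    # never touching the target directly.
    return {(u, v) for u in neighbors[target] for v in neighbors
            if v != u and target not in (u, v)}

direct = candidate_edges(0, direct_attack=True)
indirect = candidate_edges(0, direct_attack=False)
assert all(target in e for e in direct)
assert all(target not in e for e in indirect)
```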
- class RandomAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Random attacker that randomly chooses edges to flip.
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                       transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.targeted import RandomAttack

attacker = RandomAttack(data)
attacker.reset()
# attack target node `1` with the default budget (its node degree)
attacker.attack(target=1)
# attack target node `1` with budget set to 1
attacker.attack(target=1, num_budgets=1)

attacker.data()           # get the attacked graph
attacker.edge_flips()     # get edge flips after attack
attacker.added_edges()    # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
Please remember to call reset() before each attack.
- attack(target, *, num_budgets=None, threshold=0.5, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]
Base method that describes the adversarial targeted attack.
- Parameters:
target (int) – the target node to be attacked
target_label (int) – the label of the target node
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
direct_attack (bool) – whether to conduct direct attack or indirect attack
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class DICEAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of the DICE attack from the “Hiding Individuals and Communities in a Social Network” paper
DICE randomly chooses edges to flip based on the principle of “Disconnect Internally, Connect Externally” (DICE): it removes edges between nodes with high correlations and adds edges between nodes with low correlations.
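The principle above can be sketched in a few lines of plain Python. The labels dict and dice_flips helper below are hypothetical; the real attacker works on PyG data and respects budgets and perturbation caps:

```python
import random

random.seed(0)

# Toy setup: node -> community label, plus the current undirected edge set.
labels = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b'}
edges = {(0, 1), (1, 2), (3, 4)}

def dice_flips(num_budgets):
    """Disconnect internally, connect externally."""
    # Candidate removals: existing edges inside a community.
    removals = [e for e in edges if labels[e[0]] == labels[e[1]]]
    # Candidate additions: absent edges across communities.
    nodes = sorted(labels)
    additions = [(u, v) for u in nodes for v in nodes
                 if u < v and labels[u] != labels[v] and (u, v) not in edges]
    random.shuffle(removals)
    random.shuffle(additions)
    half = num_budgets // 2
    return removals[:half], additions[:num_budgets - half]

removed, added = dice_flips(2)
assert all(labels[u] == labels[v] for u, v in removed)  # internal removals
assert all(labels[u] != labels[v] for u, v in added)    # external additions
```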
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                       transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.targeted import DICEAttack

attacker = DICEAttack(data)
attacker.reset()
# attack target node `1` with the default budget (its node degree)
attacker.attack(target=1)
# attack target node `1` with budget set to 1
attacker.attack(target=1, num_budgets=1)

attacker.data()           # get the attacked graph
attacker.edge_flips()     # get edge flips after attack
attacker.added_edges()    # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
Please remember to call reset() before each attack.
- class FGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of the FGA attack from the “Fast Gradient Attack on Network Embedding” paper (arXiv’18)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                       transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.targeted import FGAttack

attacker = FGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attack target node `1` with the default budget (its node degree)
attacker.attack(target=1)

attacker.reset()
# attack target node `1` with budget set to 1
attacker.attack(target=1, num_budgets=1)

attacker.data()           # get the attacked graph
attacker.edge_flips()     # get edge flips after attack
attacker.added_edges()    # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
This is a simple but effective attack that utilizes gradient information of the adjacency matrix. Several works share the same heuristic:
FGSM: “Explaining and Harnessing Adversarial Examples” paper (ICLR’15)
“Link Prediction Adversarial Attack Via Iterative Gradient Attack” paper (IEEE Trans’20)
“Adversarial Attack on Graph Structured Data” paper (ICML’18)
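The heuristic these works share can be sketched as follows: treat the adjacency matrix as a differentiable input, take the gradient of an attack loss with respect to it, and flip the entry with the largest favorable gradient. The toy loss below is a hypothetical stand-in for the surrogate's loss, not the library's pipeline:

```python
import torch

n = 4
adj = torch.zeros(n, n)
adj[0, 1] = adj[1, 0] = 1.
adj[1, 2] = adj[2, 1] = 1.
adj.requires_grad_(True)

# Hypothetical stand-in for the surrogate's attack loss.
w = torch.tensor([0., 3., 1., 2.])
loss = (adj @ w).sum()
loss.backward()

# Adding an edge (0 -> 1) helps when the gradient is positive; removing
# one (1 -> 0) helps when it is negative, hence score = grad * (1 - 2*adj).
with torch.no_grad():
    score = adj.grad * (1 - 2 * adj)
    score.fill_diagonal_(-float('inf'))  # never flip self-loops
    best = divmod(int(score.argmax()), n)
print(best)  # (3, 1): the most beneficial single edge flip here
```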
Also, please remember to call reset() before each attack.
- attack(target, *, target_label=None, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]
Base method that describes the adversarial targeted attack.
- Parameters:
target (int) – the target node to be attacked
target_label (int) – the label of the target node
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
direct_attack (bool) – whether to conduct direct attack or indirect attack
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class IGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of the IG-FGSM attack from the “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                       transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.targeted import IGAttack

attacker = IGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attack target node `1` with the default budget (its node degree)
attacker.attack(target=1)
# attack target node `1` with budget set to 1
attacker.attack(target=1, num_budgets=1)

attacker.data()           # get the attacked graph
attacker.edge_flips()     # get edge flips after attack
attacker.added_edges()    # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
Please remember to call reset() before each attack.
- attack(target, *, target_label=None, num_budgets=None, steps=20, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]
Base method that describes the adversarial targeted attack.
- Parameters:
target (int) – the target node to be attacked
target_label (int) – the label of the target node
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
direct_attack (bool) – whether to conduct direct attack or indirect attack
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class SGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of the SGA attack from the “Adversarial Attack on Large Scale Graph” paper (TKDE’21)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                       transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.targeted import SGAttack

attacker = SGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attack target node `1` with the default budget (its node degree)
attacker.attack(target=1)
# attack target node `1` with budget set to 1
attacker.attack(target=1, num_budgets=1)

attacker.data()           # get the attacked graph
attacker.edge_flips()     # get edge flips after attack
attacker.added_edges()    # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
SGAttack is a scalable attack that can be applied to large-scale graphs.
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, *, tau: float = 5.0, freeze: bool = True)[source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
tau (float, optional) – temperature used for softmax activation, by default 5.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns:
the class itself
- Return type:
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of required
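The tau parameter rescales the surrogate's logits before the softmax; a larger temperature yields a flatter distribution. A standalone sketch (assuming the usual convention of dividing logits by tau):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 1.0, 0.1])  # hypothetical surrogate logits

sharp = F.softmax(logits / 1.0, dim=-1)   # tau = 1: ordinary softmax
smooth = F.softmax(logits / 5.0, dim=-1)  # tau = 5: flatter distribution

# Higher temperature shrinks the gap between the top class and the rest,
# smoothing the surrogate's outputs for gradient-based attacks.
assert sharp.max() > smooth.max()
```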
- attack(target, *, K: int = 2, target_label=None, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]
Base method that describes the adversarial targeted attack.
- Parameters:
target (int) – the target node to be attacked
target_label (int) – the label of the target node
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
direct_attack (bool) – whether to conduct direct attack or indirect attack
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class Nettack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of the Nettack attack from the “Adversarial Attacks on Neural Networks for Graph Data” paper (KDD’18)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                       transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.targeted import Nettack

attacker = Nettack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attack target node `1` with the default budget (its node degree)
attacker.attack(target=1)
# attack target node `1` with budget set to 1
attacker.attack(target=1, num_budgets=1)

attacker.data()           # get the attacked graph
attacker.edge_flips()     # get edge flips after attack
attacker.added_edges()    # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate)[source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
tau (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns:
the class itself
- Return type:
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of required
- attack(target, *, target_label=None, num_budgets=None, n_influencers=5, direct_attack=True, structure_attack=True, feature_attack=False, ll_constraint=True, ll_cutoff=0.004, disable=False)[source]
Base method that describes the adversarial targeted attack.
- Parameters:
target (int) – the target node to be attacked
target_label (int) – the label of the target node
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
direct_attack (bool) – whether to conduct direct attack or indirect attack
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class GFAttack(data: Data, K: int = 2, T: int = 128, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of the GFA attack from the “A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models” paper (AAAI’20)
- Parameters:
data (Data) – PyG-like data denoting the input graph
K (int, optional) – the order of graph filter, by default 2
T (int, optional) – top-T largest eigen-values/vectors selected, by default 128
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                       transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.targeted import GFAttack

attacker = GFAttack(data)
attacker.reset()
# attack target node `1` with the default budget (its node degree)
attacker.attack(target=1)
# attack target node `1` with budget set to 1
attacker.attack(target=1, num_budgets=1)

attacker.data()           # get the attacked graph
attacker.edge_flips()     # get edge flips after attack
attacker.added_edges()    # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
In the paper, the authors mainly consider single-edge perturbations, i.e., num_budgets=1.
Please remember to call reset() before each attack.
To reproduce the results in the paper, use T=128 for Citeseer and Pubmed, and T=num_nodes//2 for Cora.
- attack(target, *, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, ll_constraint=False, ll_cutoff=0.004, disable=False)[source]
Base method that describes the adversarial targeted attack.
- Parameters:
target (int) – the target node to be attacked
target_label (int) – the label of the target node
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
direct_attack (bool) – whether to conduct direct attack or indirect attack
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- static structure_score(A: csr_matrix, x_mean: Tensor, eig_vals: Tensor, eig_vec: Tensor, candidate_edges: ndarray, K: int, T: int, method: str = 'nosum')[source]
Calculate the scores of candidate edges as formulated in the paper.
- Parameters:
A (sp.csr_matrix) – the graph adjacency matrix
x_mean (torch.Tensor) –
eig_vals (torch.Tensor) – the eigenvalues
eig_vec (torch.Tensor) – the eigenvectors
candidate_edges (np.ndarray) – the candidate edges to be selected
K (int) – the order of graph filter K
T (int) – select the top-T largest eigenvalues/eigenvectors
method (str, optional) – “sum” or “nosum”; indicates whether the scores are calculated from the loss in Equation (8) or Equation (12). “nosum” denotes Equation (8), where the loss is derived from Graph Convolutional Networks; “sum” denotes Equation (12), where the loss is derived from sampling-based graph embedding methods. By default “nosum”
- Returns:
Scores for potential edges.
- Return type:
Tensor
- class PGDAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of the PGD attack from the “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                       transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.targeted import PGDAttack

attacker = PGDAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attack target node `1` with the default budget (its node degree)
attacker.attack(target=1)

attacker.reset()
# attack target node `1` with budget set to 1
attacker.attack(target=1, num_budgets=1)

attacker.data()           # get the attacked graph
attacker.edge_flips()     # get edge flips after attack
attacker.added_edges()    # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
- setup_surrogate(surrogate: Module, *, tau: float = 1.0, freeze: bool = True) PGDAttack [source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
tau (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns:
the class itself
- Return type:
PGDAttack
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of required
- reset() PGDAttack [source]
Reset the state of the Attacker
- Returns:
the attacker itself
- Return type:
PGDAttack
- attack(target: int, *, target_label: Optional[int] = None, num_budgets: Optional[Union[float, int]] = None, direct_attack: bool = True, base_lr: float = 0.1, grad_clip: Optional[float] = None, epochs: int = 200, ce_loss: bool = False, sample_epochs: int = 20, structure_attack: bool = True, feature_attack: bool = False, disable: bool = False) PGDAttack [source]
Adversarial attack method for “Projected Gradient Descent attack (PGD)”.
- Parameters:
target (int) – the target node to attack
target_label (Optional[int], optional) – the label of the target node; if None, it defaults to its ground-truth label, by default None
direct_attack (bool, optional) – whether to conduct a direct attack on the target; N/A for this method when direct_attack=False
num_budgets (Union[int, float], optional) – the number of attack budgets, could be a float (ratio) or an int (number); if None, it defaults to the degree of target, by default None
base_lr (float, optional) – the base learning rate for PGD training, by default 0.1
grad_clip (float, optional) – gradient clipping for the computed gradients, by default None
epochs (int, optional) – the number of epochs for PGD training, by default 200
ce_loss (bool, optional) – whether to use cross-entropy loss (True) or margin loss (False), by default False
sample_epochs (int, optional) – the number of sampling epochs for learned perturbations, by default 20
structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges), by default True
feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features; N/A for this method, by default False
disable (bool, optional) – whether to disable the tqdm progress bar, by default False
- Returns:
the attacker itself
- Return type:
PGDAttack
Untargeted Attacks
UntargetedAttacker – Base class for adversarial untargeted attack.
RandomAttack – Random attacker that randomly chooses edges to flip.
DICEAttack – Implementation of the DICE attack from the “Hiding Individuals and Communities in a Social Network” paper.
FGAttack – Implementation of the FGA attack from the “Fast Gradient Attack on Network Embedding” paper (arXiv’18).
IGAttack – Implementation of the IG-FGSM attack from the “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19).
Metattack – Implementation of the Metattack attack from the “Adversarial Attacks on Graph Neural Networks via Meta Learning” paper (ICLR’19).
PGDAttack – Implementation of the PGD attack from the “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19).
- class UntargetedAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Base class for adversarial untargeted attack.
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it defaults to __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Note
greatx.attack.untargeted.UntargetedAttacker is a subclass of greatx.attack.FlipAttacker. It belongs to graph modification attacks (GMA).
- reset() UntargetedAttacker [source]
Reset the state of the Attacker
- Returns:
the attacker itself
- Return type:
UntargetedAttacker
- attack(num_budgets, structure_attack, feature_attack) UntargetedAttacker [source]
Base method that describes the adversarial untargeted attack.
- Parameters:
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features
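The contract above (call reset() before each attack, attack() returns the attacker itself for chaining, and a float budget is read as a fraction of the graph's edges) can be sketched with a toy stand-in; the class below is hypothetical and not part of GreatX:

```python
# Hypothetical stand-in mirroring the UntargetedAttacker interface (no greatx
# dependency): reset() clears state, attack() records flips and returns self.

class ToyUntargetedAttacker:
    def __init__(self, num_edges):
        self.num_edges = num_edges
        self.reset()

    def reset(self):
        # must be called before each attack
        self._flips = []
        return self

    def attack(self, num_budgets=0.05, structure_attack=True,
               feature_attack=False):
        # a float budget is read as a fraction of the existing edges
        budget = (int(num_budgets * self.num_edges)
                  if isinstance(num_budgets, float) else num_budgets)
        self._flips = [(i, i + 1) for i in range(budget)]  # placeholder flips
        return self

    def edge_flips(self):
        return list(self._flips)

attacker = ToyUntargetedAttacker(num_edges=100)
attacker.reset().attack(0.05)        # 5% of 100 edges -> a budget of 5 flips
print(len(attacker.edge_flips()))    # 5
```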
- class RandomAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Random attacker that randomly chooses edges to flip.
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.untargeted import RandomAttack
attacker = RandomAttack(data)
attacker.reset()
attacker.attack(0.05)  # attack with 5% of edge perturbations
attacker.data()  # get attacked graph
attacker.edge_flips()  # get edge flips after attack
attacker.added_edges()  # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
Please remember to call reset() before each attack.
- attack(num_budgets=0.05, *, threshold=0.5, structure_attack=True, feature_attack=False, disable=False)[source]
Base method that describes the adversarial untargeted attack.
- Parameters:
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features
- class DICEAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of DICE attack from the: “Hiding Individuals and Communities in a Social Network” paper
DICE randomly chooses edges to flip following the principle of "Disconnect Internally, Connect Externally" (DICE): it removes edges between nodes with high correlations and adds edges between nodes with low correlations.
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.untargeted import DICEAttack
attacker = DICEAttack(data)
attacker.reset()
attacker.attack(0.05)  # attack with 5% of edge perturbations
attacker.data()  # get attacked graph
attacker.edge_flips()  # get edge flips after attack
attacker.added_edges()  # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
Please remember to call reset() before each attack.
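The "Disconnect Internally, Connect Externally" rule can be sketched as follows; this is a hypothetical toy using node labels as the correlation signal, not the GreatX implementation:

```python
import random

def dice_flips(edges, labels, budget, seed=0):
    """Toy DICE heuristic: spend half of the budget removing edges whose
    endpoints share a label (disconnect internally), and the rest adding
    edges between nodes with different labels (connect externally)."""
    rng = random.Random(seed)
    internal = [e for e in sorted(edges) if labels[e[0]] == labels[e[1]]]
    removed = rng.sample(internal, min(budget // 2, len(internal)))

    added = []
    nodes = list(range(len(labels)))
    while len(added) < budget - len(removed):
        u, v = rng.sample(nodes, 2)
        if labels[u] != labels[v] and (u, v) not in edges and (u, v) not in added:
            added.append((u, v))
    return removed, added

labels = [0, 0, 0, 1, 1, 1]
edges = {(0, 1), (1, 2), (3, 4), (4, 5)}
removed, added = dice_flips(edges, labels, budget=4)
print(all(labels[u] == labels[v] for u, v in removed))  # removals are internal
print(all(labels[u] != labels[v] for u, v in added))    # additions are external
```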
- class FGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of FGA attack from the: “Fast Gradient Attack on Network Embedding” paper (arXiv’18)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.untargeted import FGAttack
attacker = FGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
attacker.attack(0.05)  # attack with 5% of edge perturbations
attacker.data()  # get attacked graph
attacker.edge_flips()  # get edge flips after attack
attacker.added_edges()  # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
This is a simple but effective attack that utilizes the gradient information of the adjacency matrix. Several works share the same heuristic:
FGSM: "Explaining and Harnessing Adversarial Examples" paper (ICLR'15)
"Link Prediction Adversarial Attack Via Iterative Gradient Attack" paper (IEEE Trans'20)
"Adversarial Attack on Graph Structured Data" paper (ICML'18)
Also, please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, victim_nodes: Tensor, victim_labels: Optional[Tensor] = None, *, tau: float = 1.0)[source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
tau (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns:
the class itself
- Return type:
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of required
- attack(num_budgets=0.05, *, structure_attack=True, feature_attack=False, disable=False)[source]
Base method that describes the adversarial untargeted attack.
- Parameters:
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features
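The gradient heuristic shared by FGAttack and the works listed above can be sketched in a few lines: score every candidate flip by the gradient of the attack loss with respect to the adjacency matrix and greedily take the best one. This is a hypothetical toy with a hand-written gradient, not the GreatX implementation:

```python
import numpy as np

def fga_step(adj, grad):
    """One greedy step of the FGA-style heuristic: pick the single edge flip
    whose gradient most increases the attack loss. For an absent edge
    (A_uv = 0) we want a positive gradient; for an existing edge (A_uv = 1),
    a negative one."""
    gain = grad * (1 - 2 * adj)          # +grad for additions, -grad for removals
    np.fill_diagonal(gain, -np.inf)      # never flip self-loops
    u, v = np.unravel_index(np.argmax(gain), gain.shape)
    flipped = adj.copy()
    flipped[u, v] = flipped[v, u] = 1 - flipped[u, v]  # flip symmetrically
    return flipped, (int(u), int(v))

adj = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])
# hypothetical gradient of the attack loss w.r.t. each adjacency entry
grad = np.array([[0.0, -0.9, 0.2],
                 [-0.9, 0.0, 0.1],
                 [0.2, 0.1, 0.0]])
new_adj, flip = fga_step(adj, grad)
print(flip)  # (0, 1): removing this edge yields the largest gain of 0.9
```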
- class IGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of IG-FGSM attack from the: “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.untargeted import IGAttack
attacker = IGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
attacker.attack(0.05)  # attack with 5% of edge perturbations
attacker.data()  # get attacked graph
attacker.edge_flips()  # get edge flips after attack
attacker.added_edges()  # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
In the paper, the IG-FGSM attack was implemented as a targeted attack; we adapt the code for the untargeted attack here.
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, victim_nodes: Tensor, victim_labels: Optional[Tensor] = None, *, tau: float = 1.0)[source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
tau (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns:
the class itself
- Return type:
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of required
- attack(num_budgets=0.05, *, steps=20, structure_attack=True, feature_attack=False, disable=False)[source]
Base method that describes the adversarial untargeted attack.
- Parameters:
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features
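The steps argument controls how finely the straight path from the baseline to the input is discretized when averaging gradients. A toy integrated-gradients score (hypothetical helper, finite-difference gradients rather than autograd) illustrates why this helps on saturating functions:

```python
import math

def integrated_gradient(f, x, baseline=0.0, steps=20):
    """Toy integrated-gradients score: average the gradient of f along the
    straight path from `baseline` to `x`, then scale by (x - baseline).
    Gradients are estimated by central finite differences to keep the
    sketch dependency-free."""
    eps = 1e-5
    total = 0.0
    for k in range(steps):
        w = baseline + (k + 0.5) / steps * (x - baseline)  # midpoint rule
        total += (f(w + eps) - f(w - eps)) / (2 * eps)
    return (x - baseline) * total / steps

# tanh(4w) saturates, so the plain gradient at w = 1 is tiny and would rank
# this entry poorly; integrating along the path recovers f(1) - f(0).
f = lambda w: math.tanh(4 * w)
score = integrated_gradient(f, 1.0, steps=50)
print(abs(score - (f(1.0) - f(0.0))) < 1e-2)  # True
```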
- class Metattack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of Metattack attack from the: “Adversarial Attacks on Graph Neural Networks via Meta Learning” paper (ICLR’19)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.untargeted import Metattack
attacker = Metattack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
attacker.attack(0.05)  # attack with 5% of edge perturbations
attacker.data()  # get attacked graph
attacker.edge_flips()  # get edge flips after attack
attacker.added_edges()  # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, labeled_nodes: Tensor, unlabeled_nodes: Tensor, lr: float = 0.1, epochs: int = 100, momentum: float = 0.9, lambda_: float = 0.0, *, tau: float = 1.0)[source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
tau (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns:
the class itself
- Return type:
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of required
- attack(num_budgets=0.05, *, structure_attack=True, feature_attack=False, disable=False)[source]
Base method that describes the adversarial untargeted attack.
- Parameters:
num_budgets (int or float) – the number/percentage of perturbations allowed to attack
structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features
- class PGDAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of PGD attack from the: “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19)
- Parameters:
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None
kwargs – additional arguments of greatx.attack.Attacker
- Raises:
TypeError – unexpected keyword argument in kwargs
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.untargeted import PGDAttack
attacker = PGDAttack(data)
attacker.setup_surrogate(surrogate_model, victim_nodes=test_nodes)
attacker.reset()
attacker.attack(0.05)  # attack with 5% of edge perturbations
attacker.data()  # get attacked graph
attacker.edge_flips()  # get edge flips after attack
attacker.added_edges()  # get added edges after attack
attacker.removed_edges()  # get removed edges after attack
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, victim_nodes: Tensor, ground_truth: bool = False, *, tau: float = 1.0, freeze: bool = True) PGDAttack [source]
Setup the surrogate model for adversarial attack.
- Parameters:
surrogate (torch.nn.Module) – the surrogate model
victim_nodes (Tensor) – the victim nodes_set
ground_truth (bool, optional) – whether to use ground-truth label for victim nodes, if False, the node labels are estimated by the surrogate model, by default False
tau (float, optional) – the temperature of softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the surrogate model to avoid gradient accumulation, by default True
- Returns:
the attacker itself
- Return type:
- reset() PGDAttack [source]
Reset the state of the Attacker
- Returns:
the attacker itself
- Return type:
- attack(num_budgets: Union[int, float] = 0.05, *, base_lr: float = 0.1, grad_clip: Optional[float] = None, epochs: int = 200, ce_loss: bool = False, sample_epochs: int = 20, structure_attack: bool = True, feature_attack: bool = False, disable: bool = False) PGDAttack [source]
Adversarial attack method for the "Projected Gradient Descent (PGD)" attack
- Parameters:
num_budgets (Union[int, float], optional) – the number of attack budgets, could be a float (ratio) or an int (number), by default 0.05
base_lr (float, optional) – the base learning rate for PGD training, by default 0.1
grad_clip (float, optional) – gradient clipping for the computed gradients, by default None
epochs (int, optional) – the number of epochs for PGD training, by default 200
ce_loss (bool, optional) – whether to use cross-entropy loss (True) or margin loss (False), by default False
sample_epochs (int, optional) – the number of sampling epochs for learned perturbations, by default 20
structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges), by default True
feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features, N/A for this method. by default False
disable (bool, optional) – whether to disable the tqdm progress bar, by default False
- Returns:
the attacker itself
- Return type:
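The optimization behind attack() (gradient ascent with base_lr for epochs steps, each followed by projecting the edge-perturbation probabilities back onto the budget, before sampling discrete flips for sample_epochs rounds) can be sketched as follows; this is a hypothetical toy, not the GreatX implementation:

```python
import numpy as np

def project_budget(p, budget):
    """Project perturbation probabilities onto {p in [0, 1], sum(p) <= budget}
    by bisection on the dual variable, as in the PGD topology attack."""
    p = np.clip(p, 0, 1)
    if p.sum() <= budget:
        return p
    lo, hi = p.min() - 1, p.max()
    for _ in range(60):                       # bisection on the shift mu
        mu = (lo + hi) / 2
        if np.clip(p - mu, 0, 1).sum() > budget:
            lo = mu
        else:
            hi = mu
    return np.clip(p - hi, 0, 1)

# stand-in gradients of the attack loss w.r.t. each candidate edge flip
grad = np.array([2.0, 1.5, 1.0, 0.5, -0.5, -1.0])
p = np.zeros(6)
for _ in range(200):                              # epochs of projected ascent
    p = project_budget(p + 0.1 * grad, budget=2)  # base_lr = 0.1

print(round(float(p.sum()), 4))  # the budget constraint is tight
# discrete attacks are then drawn by sampling flips with probability p over
# several rounds (cf. sample_epochs) and keeping the best one within budget
```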
Injection Attacks
InjectionAttacker – Base class for injection attackers; an inherited attacker should implement the attack method.
RandomInjection – Inject nodes into a graph randomly.
AdvInjection – 2nd place solution of KDD CUP 2020 "Adversarial attack and defense" challenge.
- class InjectionAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Base class for injection attackers; an inherited attacker should implement the attack method.
Example
attacker = InjectionAttacker(data)
attacker.reset()
# inject 10 nodes, where each node has 2 edges
attacker.attack(num_budgets=10, num_edges_local=2)
# inject 10 nodes, with 100 edges in total
attacker.attack(num_budgets=10, num_edges_global=100)
# inject 10 nodes, where each node has 2 edges,
# and the features of injected nodes lie in [0, 1]
attacker.attack(num_budgets=10, num_edges_local=2, feat_limits=(0, 1))
attacker.attack(num_budgets=10, num_edges_local=2, feat_limits={'min': 0, 'max': 1})
# inject 10 nodes, where each node has 2 edges,
# and each injected node's features have 10 nonzero elements
attacker.attack(num_budgets=10, num_edges_local=2, feat_budgets=10)
attacker.injected_nodes()  # get injected nodes
attacker.injected_edges()  # get injected edges
attacker.injected_feats()  # get injected nodes' features
attacker.data()  # get perturbed graph
- reset() InjectionAttacker [source]
Reset the state of the Attacker
- Returns:
the attacker itself
- Return type:
- attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None) InjectionAttacker [source]
Base method that describes the adversarial injection attack
- Parameters:
num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject
targets (Optional[Tensor], optional) – the targeted nodes where injected nodes perturb, if None, it will be all nodes in the graph, by default None
num_edges_global (Optional[int], optional) – the number of total edges in the graph to be injected for all injected nodes, by default None
num_edges_local (Optional[int], optional) – the number of edges allowed to inject for each injected nodes, by default None
feat_limits (Optional[Union[tuple, dict]], optional) – the limitation or allowed budget of injected node features; it can be a tuple, e.g., (0, 1), or a dict, e.g., {'min': 0, 'max': 1}. If None, it is set as (self.feat.min(), self.feat.max()), by default None
feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, denoting that 10 nonzero features can be injected, by default None
- Return type:
the attacker itself
Note
num_edges_local and num_edges_global cannot be used simultaneously.
feat_limits and feat_budgets cannot be used simultaneously.
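The tuple-or-dict contract of feat_limits can be made concrete with a small normalizer; this helper is hypothetical, not part of GreatX:

```python
def resolve_feat_limits(feat_limits, feat_min=0.0, feat_max=1.0):
    """Normalize the feat_limits argument: accept a (min, max) tuple,
    a {'min': ..., 'max': ...} dict, or None (fall back to the observed
    feature range)."""
    if feat_limits is None:
        return (feat_min, feat_max)
    if isinstance(feat_limits, dict):
        return (feat_limits.get('min', feat_min),
                feat_limits.get('max', feat_max))
    lo, hi = feat_limits                 # assume a (min, max) tuple
    return (lo, hi)

print(resolve_feat_limits((0, 1)))                             # (0, 1)
print(resolve_feat_limits({'min': 0, 'max': 1}))               # (0, 1)
print(resolve_feat_limits(None, feat_min=-1.0, feat_max=2.0))  # (-1.0, 2.0)
```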
- edge_flips() BunchDict [source]
Get all the edges to be flipped, including edges to be added and removed.
- inject_edges(edges: Union[Tensor, List])[source]
Inject a set of edges to the graph.
- Parameters:
edges (Union[Tensor, List]) – the newly injected edges
- class RandomInjection(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Inject nodes into a graph randomly.
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.injection import RandomInjection
attacker = RandomInjection(data)

attacker.reset()
# inject 10 nodes with continuous features
attacker.attack(10, feat_limits=(0, 1))

attacker.reset()
# inject 10 nodes with binary features
attacker.attack(10, feat_budgets=10)

attacker.data()  # get attacked graph
attacker.injected_nodes()  # get injected nodes after attack
attacker.injected_edges()  # get injected edges after attack
attacker.injected_feats()  # get injected features after attack
Note
Please remember to call reset() before each attack.
- attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, interconnection: bool = False, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None, disable: bool = False) RandomInjection [source]
Base method that describes the adversarial injection attack
- Parameters:
num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject
targets (Optional[Tensor], optional) – the targeted nodes where injected nodes perturb, if None, it will be all nodes in the graph, by default None
interconnection (bool, optional) – whether the injected nodes can connect to each other, by default False
num_edges_global (Optional[int], optional) – the number of total edges in the graph to be injected for all injected nodes, by default None
num_edges_local (Optional[int], optional) – the number of edges allowed to inject for each injected nodes, by default None
feat_limits (Optional[Union[tuple, dict]], optional) – the limitation or allowed budget of injected node features; it can be a tuple, e.g., (0, 1), or a dict, e.g., {'min': 0, 'max': 1}. If None, it is set as (self.feat.min(), self.feat.max()), by default None
feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, denoting that 10 nonzero features can be injected, by default None
disable (bool, optional) – whether to disable the tqdm progress bar, by default False
- Return type:
the attacker itself
Note
num_edges_local and num_edges_global cannot be used simultaneously.
feat_limits and feat_budgets cannot be used simultaneously.
- class AdvInjection(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
2nd place solution of KDD CUP 2020 “Adversarial attack and defense” challenge.
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.injection import AdvInjection
attacker = AdvInjection(data)
attacker.setup_surrogate(surrogate_model)

attacker.reset()
# inject 10 nodes with continuous features
attacker.attack(10, feat_limits=(0, 1))

attacker.reset()
# inject 10 nodes with binary features
attacker.attack(10, feat_budgets=10)

attacker.data()  # get attacked graph
attacker.injected_nodes()  # get injected nodes after attack
attacker.injected_edges()  # get injected edges after attack
attacker.injected_feats()  # get injected features after attack
Note
Please remember to call reset() before each attack.
- attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, interconnection: bool = False, lr: float = 0.1, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None, disable: bool = False) AdvInjection [source]
Base method that describes the adversarial injection attack
- Parameters:
num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject
targets (Optional[Tensor], optional) – the targeted nodes where injected nodes perturb, if None, it will be all nodes in the graph, by default None
num_edges_global (Optional[int], optional) – the number of total edges in the graph to be injected for all injected nodes, by default None
num_edges_local (Optional[int], optional) – the number of edges allowed to inject for each injected nodes, by default None
feat_limits (Optional[Union[tuple, dict]], optional) – the limitation or allowed budget of injected node features; it can be a tuple, e.g., (0, 1), or a dict, e.g., {'min': 0, 'max': 1}. If None, it is set as (self.feat.min(), self.feat.max()), by default None
feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, denoting that 10 nonzero features can be injected, by default None
- Return type:
the attacker itself
Note
num_edges_local and num_edges_global cannot be used simultaneously.
feat_limits and feat_budgets cannot be used simultaneously.
Backdoor Attacks
BackdoorAttacker – Base class for backdoor attacks.
FGBackdoor – Implementation of GB-FGSM attack from the: "Neighboring Backdoor Attacks on Graph Convolutional Network" paper (arXiv'22)
LGCBackdoor – Implementation of LGCB attack from the: "Neighboring Backdoor Attacks on Graph Convolutional Network" paper (arXiv'22)
- class BackdoorAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Base class for backdoor attacks.
- reset() BackdoorAttacker [source]
Reset the state of the Attacker
- Returns:
the attacker itself
- Return type:
- attack(num_budgets: Union[int, float], target_class: int) BackdoorAttacker [source]
Base method that describes the adversarial backdoor attack
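To make the backdoor setting concrete: the attacker attaches a trigger node with crafted features to a victim, so that the victim is pushed toward target_class while the rest of the graph is untouched. A toy, hypothetical picture with a linear stand-in for a GCN (all values chosen only to illustrate the mechanism):

```python
import numpy as np

# Linear stand-in for a graph classifier: a node's class is the argmax of its
# neighborhood-averaged features times a weight matrix W.
W = np.array([[ 1.0, -1.0],    # feature 0 votes for class 0
              [-1.0,  1.0]])   # feature 1 votes for class 1

def predict(neighborhood_feats):
    """Classify a node from the mean of its own and its neighbors' features."""
    return int(np.argmax(neighborhood_feats.mean(axis=0) @ W))

x_victim = np.array([1.0, 0.0])   # clearly class 0 on its own
trigger = np.array([0.0, 5.0])    # crafted feature vector for target class 1

print(predict(np.stack([x_victim])))           # clean prediction: class 0
print(predict(np.stack([x_victim, trigger])))  # with trigger attached: class 1
```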
- class FGBackdoor(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of GB-FGSM attack from the: “Neighboring Backdoor Attacks on Graph Convolutional Network” paper (arXiv’22)
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.backdoor import FGBackdoor
attacker = FGBackdoor(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
attacker.attack(num_budgets=50, target_class=0)
attacker.data()  # get attacked graph
attacker.trigger()  # get trigger node
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, *, tau: float = 1.0) FGBackdoor [source]
Method used to initialize the (trained) surrogate model.
- Parameters:
surrogate (Module) – the input surrogate module
tau (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None
- Returns:
the class itself
- Return type:
- Raises:
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of required
- class LGCBackdoor(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]
Implementation of LGCB attack from the: “Neighboring Backdoor Attacks on Graph Convolutional Network” paper (arXiv’22)
Example
from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora', transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ...  # train your surrogate model

from greatx.attack.backdoor import LGCBackdoor
attacker = LGCBackdoor(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
attacker.attack(num_budgets=50, target_class=0)
attacker.data()  # get attacked graph
attacker.trigger()  # get trigger node
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module) LGCBackdoor [source]