graphwar.attack¶
Base Classes¶
Attacker – Adversarial attacker for graph data.
FlipAttacker – Adversarial attacker for graph data by flipping edges.
- class Attacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Adversarial attacker for graph data. Note that this is an abstract class.
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Examples
For example, the attacker model should be defined as follows:
>>> from graphwar.attack import Attacker
>>> attacker = Attacker(data, device='cuda')
>>> attacker.reset() # reset states
>>> attacker.attack(attack_arguments) # attack
>>> attacker.data() # get the attacked graph denoted as PyG-like Data
- reset()[source]¶
Reset attacker state. Override this method in subclass to implement specific function.
- abstract data() Data [source]¶
Get the attacked graph denoted as PyG-like Data.
- Raises
NotImplementedError – The subclass does not implement this interface.
- abstract attack() Attacker [source]¶
Abstract method. The subclass must override this method to implement specific attack for itself.
- Raises
NotImplementedError – The subclass does not implement this interface.
- set_max_perturbations(max_perturbations: Union[float, int] = inf, verbose: bool = True) Attacker [source]¶
Set the maximum number of allowed perturbations.
- Parameters
max_perturbations (Union[float, int], optional) – the maximum number of allowed perturbations, by default inf
verbose (bool, optional) – whether to print a message after setting, by default True
Example
>>> attacker.set_max_perturbations(10)
- property feat: Tensor¶
Node features of the original graph.
- property label: Tensor¶
Node labels of the original graph.
- property edge_index: Tensor¶
Edge index of the original graph.
- property edge_weight: Tensor¶
Edge weight of the original graph.
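These properties expose the original (unperturbed) graph. A minimal sketch of reading them, assuming an attacker constructed as in the example above:
>>> attacker.feat.shape        # node features of the original graph
>>> attacker.edge_index.shape  # edge index of the original graph
>>> attacker.label.unique()    # node labels of the original graph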
- class FlipAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Adversarial attacker for graph data by flipping edges.
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Note
graphwar.attack.FlipAttacker is a base class for graph modification attacks (GMA).
- reset() FlipAttacker [source]¶
Reset attacker. This method must be called before attack.
- edge_flips() BunchDict [source]¶
Get all the edges to be flipped, including edges to be added and removed.
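A minimal sketch of inspecting the flips after an attack; added_edges() and removed_edges() are the helpers shown in the examples further below:
>>> flips = attacker.edge_flips()  # all edges to be flipped
>>> attacker.added_edges()         # only edges to be added
>>> attacker.removed_edges()       # only edges to be removed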
- remove_feat(u: int, v: int, it: Optional[int] = None)[source]¶
Remove the feature in dimension v from node u. That is, set that dimension of the specific node to zero.
- add_feat(u: int, v: int, it: Optional[int] = None)[source]¶
Add the feature in dimension v to node u. That is, set that dimension of the specific node to one.
- feat_flips() BunchDict [source]¶
Get all the features to be flipped, including features to be added and removed.
- data(symmetric: bool = True) Data [source]¶
Get the attacked graph denoted by PyG-like data instance.
- Parameters
symmetric (bool, optional) – whether the output graph is symmetric, by default True
- Returns
the attacked graph denoted by PyG-like data instance
- Return type
Data
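A minimal usage sketch (assuming an attack has already been run):
>>> attacked = attacker.data()  # symmetric (undirected) attacked graph
>>> attacked_asym = attacker.data(symmetric=False)  # keep the flipped edges as-is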
- set_allow_singleton(state: bool)[source]¶
Set whether the attacked graph allows singleton nodes, i.e., zero-degree nodes.
- Parameters
state (bool) – the flag to set
Example
>>> attacker.set_allow_singleton(True)
- set_allow_structure_attack(state: bool)[source]¶
Set whether the attacker allows attacks on the topology of the graph.
- Parameters
state (bool) – the flag to set
Example
>>> attacker.set_allow_structure_attack(True)
- set_allow_feature_attack(state: bool)[source]¶
Set whether the attacker allows attacks on the features of nodes in the graph.
- Parameters
state (bool) – the flag to set
Example
>>> attacker.set_allow_feature_attack(True)
- is_singleton_edge(u: int, v: int) bool [source]¶
Check if the edge is a singleton edge that, if removed, would result in a singleton node in the graph.
- Parameters
u (int) – the source node of the edge
v (int) – the destination node of the edge
- Returns
True if the edge is a singleton edge, otherwise False.
- Return type
bool
Note
Please make sure the edge is the one being removed.
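A minimal sketch of guarding an edge removal with this check (u and v are hypothetical node indices):
>>> u, v = 0, 1
>>> if not attacker.is_singleton_edge(u, v):
...     pass  # removing edge (u, v) will not create a zero-degree node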
Targeted Attacks¶
TargetedAttacker – Base class for adversarial targeted attack.
RandomAttack – Random attacker that randomly chooses edges to flip.
DICEAttack – Implementation of DICE attack from the: "Hiding Individuals and Communities in a Social Network" paper
FGAttack – Implementation of FGA attack from the: "Fast Gradient Attack on Network Embedding" paper (arXiv'18)
IGAttack – Implementation of IG-FGSM attack from the: "Adversarial Examples on Graph Data: Deep Insights into Attack and Defense" paper (IJCAI'19)
SGAttack – Implementation of SGA attack from the: "Adversarial Attack on Large Scale Graph" paper (TKDE'21)
Nettack – Implementation of Nettack attack from the: "Adversarial Attacks on Neural Networks for Graph Data" paper (KDD'18)
GFAttack – Implementation of GFA attack from the: "A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models" paper (AAAI'20)
- class TargetedAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Base class for adversarial targeted attack.
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Note
graphwar.attack.targeted.TargetedAttacker is a subclass of graphwar.attack.FlipAttacker. It belongs to graph modification attacks (GMA).
- reset() TargetedAttacker [source]¶
Reset the state of the Attacker
- Returns
the attacker itself
- Return type
TargetedAttacker
- attack(target, target_label, num_budgets, direct_attack, structure_attack, feature_attack) TargetedAttacker [source]¶
Base method that describes the adversarial targeted attack
- Parameters
target (int) – the target node to be attacked
target_label (int) – the label of the target node.
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
direct_attack (bool) – whether to conduct direct attack or indirect attack.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
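As a minimal end-to-end sketch (SGAttack stands in for any concrete targeted attacker below; the surrogate is assumed to be trained elsewhere):
>>> from graphwar.attack.targeted import SGAttack
>>> surrogate_model = ...  # train your surrogate model
>>> attacker = SGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1, direct_attack=True, structure_attack=True)
>>> attacked_data = attacker.data()  # evaluate your GNN on the perturbed graph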
- class RandomAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Random attacker that randomly chooses edges to flip.
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.targeted import RandomAttack
>>> attacker = RandomAttack(data)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
Please remember to call reset() before each attack.
- attack(target, *, num_budgets=None, threshold=0.5, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial targeted attack
- Parameters
target (int) – the target node to be attacked
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
direct_attack (bool) – whether to conduct direct attack or indirect attack.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class DICEAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of DICE attack from the: “Hiding Individuals and Communities in a Social Network” paper
DICE randomly chooses edges to flip following the principle of "Disconnect Internally, Connect Externally" (DICE): it removes edges between nodes with high correlations and adds edges between nodes with low correlations.
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.targeted import DICEAttack
>>> attacker = DICEAttack(data)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
Please remember to call reset() before each attack.
- class FGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of FGA attack from the: “Fast Gradient Attack on Network Embedding” paper (arXiv’18)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.targeted import FGAttack
>>> attacker = FGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
This is a simple yet effective attack that utilizes gradient information of the adjacency matrix. Several works share the same heuristic; we list them as follows:
[1] FGSM: "Explaining and Harnessing Adversarial Examples" paper (ICLR'15)
[2] "Link Prediction Adversarial Attack Via Iterative Gradient Attack" paper (IEEE Trans'20)
[3] "Adversarial Attack on Graph Structured Data" paper (ICML'18)
Note
Please remember to call reset() before each attack.
- attack(target, *, target_label=None, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial targeted attack
- Parameters
target (int) – the target node to be attacked
target_label (int) – the label of the target node.
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
direct_attack (bool) – whether to conduct direct attack or indirect attack.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class IGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of IG-FGSM attack from the: “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.targeted import IGAttack
>>> attacker = IGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
Please remember to call reset() before each attack.
- attack(target, *, target_label=None, num_budgets=None, steps=20, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial targeted attack
- Parameters
target (int) – the target node to be attacked
target_label (int) – the label of the target node.
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
direct_attack (bool) – whether to conduct direct attack or indirect attack.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class SGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of SGA attack from the: “Adversarial Attack on Large Scale Graph” paper (TKDE’21)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.targeted import SGAttack
>>> attacker = SGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
SGAttack is a scalable attack that can be applied to large-scale graphs.
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, eps: float = 5.0, freeze: bool = True, K: int = 2)[source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input surrogate module
eps (float, optional) – temperature used for softmax activation, by default 5.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
K (int, optional) – the order of the graph filter used by the surrogate, by default 2
- Returns
the class itself
- Return type
SGAttack
- Raises
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of the required class(es)
- attack(target, *, target_label=None, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial targeted attack
- Parameters
target (int) – the target node to be attacked
target_label (int) – the label of the target node.
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
direct_attack (bool) – whether to conduct direct attack or indirect attack.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class Nettack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of Nettack attack from the: “Adversarial Attacks on Neural Networks for Graph Data” paper (KDD’18)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.targeted import Nettack
>>> attacker = Nettack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate)[source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input surrogate module
- Returns
the class itself
- Return type
Nettack
- Raises
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of the required class(es)
- attack(target, *, target_label=None, num_budgets=None, n_influencers=5, direct_attack=True, structure_attack=True, feature_attack=False, ll_constraint=True, ll_cutoff=0.004, disable=False)[source]¶
Base method that describes the adversarial targeted attack
- Parameters
target (int) – the target node to be attacked
target_label (int) – the label of the target node.
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
direct_attack (bool) – whether to conduct direct attack or indirect attack.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class GFAttack(data: Data, K: int = 2, T: int = 128, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of GFA attack from the: "A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models" paper (AAAI'20)
- Parameters
data (Data) – PyG-like data denoting the input graph
K (int, optional) – the order of graph filter, by default 2
T (int, optional) – top-T largest eigen-values/vectors selected, by default 128
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.targeted import GFAttack
>>> attacker = GFAttack(data)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
In the paper, the authors mainly consider single-edge perturbations, i.e., num_budgets=1.
Please remember to call reset() before each attack.
Use T=128 for citeseer and pubmed, and T=num_nodes//2 for cora to reproduce the results in the paper.
- attack(target, *, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, ll_constraint=False, ll_cutoff=0.004, disable=False)[source]¶
Base method that describes the adversarial targeted attack
- Parameters
target (int) – the target node to be attacked
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
direct_attack (bool) – whether to conduct direct attack or indirect attack.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- static structure_score(A: csr_matrix, x_mean: Tensor, eig_vals: Tensor, eig_vec: Tensor, candidate_edges: ndarray, K: int, T: int, method: str = 'nosum')[source]¶
Calculate the score of potential edges as formulated in paper.
- Parameters
A (sp.csr_matrix) – the graph adjacency matrix
x_mean (Tensor) – the mean of the node feature matrix
eig_vals (Tensor) – the eigenvalues
eig_vec (Tensor) – the eigenvectors
candidate_edges (np.ndarray) – the candidate edges to be selected
K (int) – the order of graph filter K
T (int) – selecting the top-T largest eigenvalues/eigenvectors
method (str, optional) – "sum" or "nosum"; indicates which loss the score is calculated from, Equation (8) or Equation (12) in the paper. "nosum" denotes Equation (8), where the loss is derived from Graph Convolutional Networks; "sum" denotes Equation (12), where the loss is derived from sampling-based graph embedding methods. By default "nosum"
- Returns
Scores for potential edges.
- Return type
Tensor
Untargeted Attacks¶
UntargetedAttacker – Base class for adversarial non-targeted attack.
RandomAttack – Random attacker that randomly chooses edges to flip.
DICEAttack – Implementation of DICE attack from the: "Hiding Individuals and Communities in a Social Network" paper
FGAttack – Implementation of FGA attack from the: "Fast Gradient Attack on Network Embedding" paper (arXiv'18)
IGAttack – Implementation of IG-FGSM attack from the: "Adversarial Examples on Graph Data: Deep Insights into Attack and Defense" paper (IJCAI'19)
Metattack – Implementation of Metattack attack from the: "Adversarial Attacks on Graph Neural Networks via Meta Learning" paper (ICLR'19)
MinmaxAttack – Implementation of MinMax attack from the: "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective" paper (IJCAI'19)
PGDAttack – Implementation of PGD attack from the: "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective" paper (IJCAI'19)
- class UntargetedAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Base class for adversarial non-targeted attack.
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Note
graphwar.attack.untargeted.UntargetedAttacker is a subclass of graphwar.attack.FlipAttacker. It belongs to graph modification attacks (GMA).
- reset() UntargetedAttacker [source]¶
Reset the state of the Attacker
- Returns
the attacker itself
- Return type
UntargetedAttacker
- attack(num_budgets, structure_attack, feature_attack) UntargetedAttacker [source]¶
Base method that describes the adversarial untargeted attack
- Parameters
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
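As a minimal end-to-end sketch (PGDAttack stands in for any concrete untargeted attacker below; the surrogate and the labeled node indices are placeholders):
>>> from graphwar.attack.untargeted import PGDAttack
>>> surrogate_model = ...  # train your surrogate model
>>> labeled_nodes = ...    # hypothetical tensor of labeled node indices
>>> attacker = PGDAttack(data)
>>> attacker.setup_surrogate(surrogate_model, labeled_nodes=labeled_nodes)
>>> attacker.reset()
>>> attacker.attack(0.05)  # perturb 5% of edges
>>> attacked_data = attacker.data()  # evaluate your GNN on the perturbed graph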
- class RandomAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Random attacker that randomly chooses edges to flip.
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.untargeted import RandomAttack
>>> attacker = RandomAttack(data)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
Please remember to call reset() before each attack.
- attack(num_budgets=0.05, *, threshold=0.5, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial untargeted attack
- Parameters
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class DICEAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of DICE attack from the: “Hiding Individuals and Communities in a Social Network” paper
DICE randomly chooses edges to flip following the principle of "Disconnect Internally, Connect Externally" (DICE): it removes edges between nodes with high correlations and adds edges between nodes with low correlations.
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.untargeted import DICEAttack
>>> attacker = DICEAttack(data)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
Please remember to call reset() before each attack.
- class FGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of FGA attack from the: “Fast Gradient Attack on Network Embedding” paper (arXiv’18)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import FGAttack
>>> attacker = FGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
This is a simple yet effective attack that utilizes gradient information of the adjacency matrix. Several works share the same heuristic; we list them as follows:
[1] FGSM: "Explaining and Harnessing Adversarial Examples" paper (ICLR'15)
[2] "Link Prediction Adversarial Attack Via Iterative Gradient Attack" paper (IEEE Trans'20)
[3] "Adversarial Attack on Graph Structured Data" paper (ICML'18)
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, victim_nodes: Tensor, victim_labels: Optional[Tensor] = None, *, eps: float = 1.0)[source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input surrogate module
victim_nodes (Tensor) – the victim nodes used to compute the attack loss
victim_labels (Optional[Tensor], optional) – the labels of the victim nodes, by default None
eps (float, optional) – temperature used for softmax activation, by default 1.0
- Returns
the class itself
- Return type
FGAttack
- Raises
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of the required class(es)
- attack(num_budgets=0.05, *, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial untargeted attack
- Parameters
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class IGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of IG-FGSM attack from the: “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import IGAttack
>>> attacker = IGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
In the paper, the IG-FGSM attack was implemented as a targeted attack; we adapt the code for the non-targeted attack here.
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, victim_nodes: Tensor, victim_labels: Optional[Tensor] = None, *, eps: float = 1.0)[source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input surrogate module
victim_nodes (Tensor) – the victim nodes used to compute the attack loss
victim_labels (Optional[Tensor], optional) – the labels of the victim nodes, by default None
eps (float, optional) – temperature used for softmax activation, by default 1.0
- Returns
the class itself
- Return type
IGAttack
- Raises
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of the required class(es)
- attack(num_budgets=0.05, *, steps=20, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial untargeted attack
- Parameters
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class Metattack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of Metattack attack from the: “Adversarial Attacks on Graph Neural Networks via Meta Learning” paper (ICLR’19)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import Metattack
>>> attacker = Metattack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, labeled_nodes: Tensor, unlabeled_nodes: Tensor, lr: float = 0.1, epochs: int = 100, momentum: float = 0.9, lambda_: float = 0.0, *, eps: float = 1.0)[source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input surrogate module
labeled_nodes (Tensor) – the labeled nodes used to train the surrogate
unlabeled_nodes (Tensor) – the unlabeled nodes used for the self-training loss
lr (float, optional) – learning rate for training the surrogate, by default 0.1
epochs (int, optional) – the number of training epochs, by default 100
momentum (float, optional) – momentum of the optimizer, by default 0.9
lambda_ (float, optional) – the coefficient balancing the loss on labeled and unlabeled nodes, by default 0.0
eps (float, optional) – temperature used for softmax activation, by default 1.0
- Returns
the class itself
- Return type
Metattack
- Raises
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of the required class(es)
- forward(adj, x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
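A minimal sketch of the recommended call pattern (adj and x are hypothetical adjacency and feature tensors):
>>> out = attacker(adj, x)          # preferred: runs registered hooks
>>> out = attacker.forward(adj, x)  # discouraged: silently skips hooks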
- attack(num_budgets=0.05, *, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial untargeted attack
- Parameters
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class MinmaxAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of MinMax attack from the: “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import MinmaxAttack
>>> attacker = MinmaxAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
MinMax attack is a variant of the graphwar.attack.untargeted.PGDAttack attack.
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, labeled_nodes: Tensor, unlabeled_nodes: Optional[Tensor] = None, *, eps: float = 1.0)[source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input surrogate module
labeled_nodes (Tensor) – the labeled nodes used to compute the attack loss
unlabeled_nodes (Optional[Tensor], optional) – the unlabeled nodes used to compute the attack loss, by default None
eps (float, optional) – temperature used for softmax activation, by default 1.0
- Returns
the class itself
- Return type
MinmaxAttack
- Raises
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of the required class(es)
- attack(num_budgets=0.05, *, C=None, lr=0.001, CW_loss=False, epochs=100, sample_epochs=20, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial untargeted attack
- Parameters
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
- class PGDAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of PGD attack from the: “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19)
- Parameters
data (Data) – PyG-like data denoting the input graph
device (str, optional) – the device of the attack running on, by default “cpu”
seed (Optional[int], optional) – the random seed for reproducing the attack, by default None
name (Optional[str], optional) – name of the attacker; if None, it would be __class__.__name__, by default None
**kwargs – additional arguments of graphwar.attack.Attacker
- Raises
TypeError – unexpected keyword argument in kwargs
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import PGDAttack
>>> attacker = PGDAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack
Note
MinMax attack is a variant of the graphwar.attack.untargeted.PGDAttack attack.
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, labeled_nodes: Tensor, unlabeled_nodes: Optional[Tensor] = None, *, eps: float = 1.0, freeze: bool = True)[source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input surrogate module
labeled_nodes (Tensor) – the labeled nodes used to compute the attack loss
unlabeled_nodes (Optional[Tensor], optional) – the unlabeled nodes used to compute the attack loss, by default None
eps (float, optional) – temperature used for softmax activation, by default 1.0
freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True
- Returns
the class itself
- Return type
PGDAttack
- Raises
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of the required class(es)
- attack(num_budgets=0.05, *, C=None, CW_loss=False, epochs=200, sample_epochs=20, structure_attack=True, feature_attack=False, disable=False)[source]¶
Base method that describes the adversarial untargeted attack
- Parameters
num_budgets (int or float) – the number of attack budgets, i.e., how many edges can be perturbed. If an int, it must satisfy 0 < num_budgets <= max_perturbations; if a float, it must satisfy 0 < num_budgets <= 1 and denotes the ratio of max_perturbations. See max_perturbations.
structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)
feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
Injection Attacks¶
InjectionAttacker – Base class for injection attackers; an inherited attacker should implement the attack method.
RandomInjection – Injects nodes into a graph randomly.
AdvInjection – 2nd place solution of the KDD CUP 2020 "Adversarial attack and defense" challenge.
- class InjectionAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Base class for injection attackers; an inherited attacker should implement the attack method.
Example
>>> attacker = InjectionAttacker(data)
>>> attacker.reset()
>>> # inject 10 nodes, where each node has 2 edges
>>> attacker.attack(num_budgets=10, num_edges_local=2)
>>> # inject 10 nodes, with 100 edges in total
>>> attacker.attack(num_budgets=10, num_edges_global=100)
>>> # inject 10 nodes, where each node has 2 edges,
>>> # and the features of injected nodes lie in [0, 1]
>>> attacker.attack(num_budgets=10, num_edges_local=2, feat_limits=(0, 1))
>>> attacker.attack(num_budgets=10, num_edges_local=2, feat_limits={'min': 0, 'max': 1})
>>> # inject 10 nodes, where each node has 2 edges,
>>> # and each injected node has 10 nonzero features
>>> attacker.attack(num_budgets=10, num_edges_local=2, feat_budgets=10)
>>> attacker.injected_nodes() # get injected nodes
>>> attacker.injected_edges() # get injected edges
>>> attacker.injected_feats() # get injected nodes' features
>>> attacker.data() # get perturbed graph
- reset() InjectionAttacker [source]¶
Reset the state of the Attacker
- Returns
the attacker itself
- Return type
InjectionAttacker
- attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None) InjectionAttacker [source]¶
Base method that describes the adversarial injection attack
- Parameters
num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject
targets (Optional[Tensor], optional) – the targeted nodes where injected nodes perturb, if None, it will be all nodes in the graph, by default None
num_edges_global (Optional[int], optional) – the total number of edges to be injected for all injected nodes, by default None
num_edges_local (Optional[int], optional) – the number of edges allowed to inject for each injected node, by default None
feat_limits (Optional[Union[tuple, dict]], optional) – the limitation or allowed budgets of injected node features; it can be a tuple, e.g., (0, 1), or a dict, e.g., {'min': 0, 'max': 1}. If None, it is set as (self.feat.min(), self.feat.max()), by default None
feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, denoting that 10 nonzero features can be injected, by default None
- Return type
the attacker itself
Note
num_edges_local and num_edges_global cannot be used simultaneously.
feat_limits and feat_budgets cannot be used simultaneously.
- edge_flips() BunchDict [source]¶
Get all the edges to be flipped, including edges to be added and removed.
- inject_edges(edges: Union[Tensor, List])[source]¶
Inject a set of edges to the graph.
- Parameters
edges (Union[Tensor, List]) – the newly injected edges.
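A minimal sketch, assuming edges are given as (u, v) pairs where u is an injected node and v an existing node (all indices here are hypothetical):
>>> new_edges = [(2708, 0), (2708, 42)]  # hypothetical: connect injected node 2708 to nodes 0 and 42
>>> attacker.inject_edges(new_edges)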
- class RandomInjection(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Inject nodes into a graph randomly.
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.injection import RandomInjection
>>> attacker = RandomInjection(data)
>>> attacker.reset()
>>> attacker.attack(10, feat_limits=(0, 1)) # injecting 10 nodes for continuous features
>>> attacker.reset()
>>> attacker.attack(10, feat_budgets=10) # injecting 10 nodes for binary features
>>> attacker.data() # get attacked graph
>>> attacker.injected_nodes() # get injected nodes after attack
>>> attacker.injected_edges() # get injected edges after attack
>>> attacker.injected_feats() # get injected features after attack
Note
Please remember to call reset() before each attack.
- attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, interconnection: bool = False, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None, disable: bool = False) RandomInjection [source]¶
Base method that describes the adversarial injection attack
- Parameters
num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject
targets (Optional[Tensor], optional) – the targeted nodes where injected nodes perturb, if None, it will be all nodes in the graph, by default None
interconnection (bool, optional) – whether the injected nodes can connect to each other, by default False
num_edges_global (Optional[int], optional) – the total number of edges to be injected for all injected nodes, by default None
num_edges_local (Optional[int], optional) – the number of edges allowed to inject for each injected node, by default None
feat_limits (Optional[Union[tuple, dict]], optional) – the limitation or allowed budgets of injected node features; it can be a tuple, e.g., (0, 1), or a dict, e.g., {'min': 0, 'max': 1}. If None, it is set as (self.feat.min(), self.feat.max()), by default None
feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, denoting that 10 nonzero features can be injected, by default None
disable (bool, optional) – whether the tqdm progress bar is to be disabled, by default False
- Return type
the attacker itself
Note
num_edges_local and num_edges_global cannot be used simultaneously.
feat_limits and feat_budgets cannot be used simultaneously.
- class AdvInjection(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
2nd place solution of KDD CUP 2020 “Adversarial attack and defense” challenge.
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.injection import AdvInjection
>>> attacker = AdvInjection(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(10, feat_limits=(0, 1)) # injecting 10 nodes for continuous features
>>> attacker.reset()
>>> attacker.attack(10, feat_budgets=10) # injecting 10 nodes for binary features
>>> attacker.data() # get attacked graph
>>> attacker.injected_nodes() # get injected nodes after attack
>>> attacker.injected_edges() # get injected edges after attack
>>> attacker.injected_feats() # get injected features after attack
Note
Please remember to call reset() before each attack.
- attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, interconnection: bool = False, lr: float = 0.01, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None, disable: bool = False) AdvInjection [source]¶
Base method that describes the adversarial injection attack
- Parameters
num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject
targets (Optional[Tensor], optional) – the targeted nodes where injected nodes perturb, if None, it will be all nodes in the graph, by default None
interconnection (bool, optional) – whether the injected nodes can connect to each other, by default False
lr (float, optional) – the learning rate for the gradient-based optimization, by default 0.01
num_edges_global (Optional[int], optional) – the total number of edges to be injected for all injected nodes, by default None
num_edges_local (Optional[int], optional) – the number of edges allowed to inject for each injected node, by default None
feat_limits (Optional[Union[tuple, dict]], optional) – the limitation or allowed budgets of injected node features; it can be a tuple, e.g., (0, 1), or a dict, e.g., {'min': 0, 'max': 1}. If None, it is set as (self.feat.min(), self.feat.max()), by default None
feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, denoting that 10 nonzero features can be injected, by default None
disable (bool, optional) – whether the tqdm progress bar is to be disabled, by default False
- Return type
the attacker itself
Note
num_edges_local and num_edges_global cannot be used simultaneously.
feat_limits and feat_budgets cannot be used simultaneously.
Backdoor Attacks¶
BackdoorAttacker – Base class for backdoor attacks.
FGBackdoor – Implementation of GB-FGSM attack from the: "Neighboring Backdoor Attacks on Graph Convolutional Network" paper (arXiv'22)
LGCBackdoor – Implementation of LGCB attack from the: "Neighboring Backdoor Attacks on Graph Convolutional Network" paper (arXiv'22)
- class BackdoorAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Base class for backdoor attacks.
- reset() BackdoorAttacker [source]¶
Reset the state of the Attacker
- Returns
the attacker itself
- Return type
BackdoorAttacker
- attack(num_budgets: Union[int, float], targets_class: int) BackdoorAttacker [source]¶
Base method that describes the adversarial backdoor attack
- Parameters
num_budgets (Union[int, float]) – the number/percentage of perturbations allowed for crafting the trigger
targets_class (int) – the class into which the attacker aims to misclassify nodes
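A minimal usage sketch (illustrative values; the concrete subclasses below implement this method, and their examples spell the keyword as target_class):
>>> attacker.reset()
>>> attacker.attack(num_budgets=50, targets_class=0)  # craft a trigger aimed at class 0
>>> attacker.data()  # graph with the trigger attached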
- class FGBackdoor(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of GB-FGSM attack from the: “Neighboring Backdoor Attacks on Graph Convolutional Network” paper (arXiv’22)
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.backdoor import FGBackdoor
>>> attacker = FGBackdoor(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(num_budgets=50, target_class=0)
>>> attacker.data() # get attacked graph
>>> attacker.trigger() # get trigger node
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module, *, eps: float = 1.0) FGBackdoor [source]¶
Method used to initialize the (trained) surrogate model.
- Parameters
surrogate (Module) – the input surrogate module
eps (float, optional) – temperature used for softmax activation, by default 1.0
- Returns
the class itself
- Return type
FGBackdoor
- Raises
RuntimeError – if the surrogate model is not an instance of torch.nn.Module
RuntimeError – if the surrogate model is not an instance of the required class(es)
- class LGCBackdoor(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]¶
Implementation of LGCB attack from the: “Neighboring Backdoor Attacks on Graph Convolutional Network” paper (arXiv’22)
Example
>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora', transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.backdoor import LGCBackdoor
>>> attacker = LGCBackdoor(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(num_budgets=50, target_class=0)
>>> attacker.data() # get attacked graph
>>> attacker.trigger() # get trigger node
Note
Please remember to call reset() before each attack.
- setup_surrogate(surrogate: Module) LGCBackdoor [source]¶