graphwar.attack

Base Classes

Attacker

Adversarial attacker for graph data.

FlipAttacker

Adversarial attacker for graph data by flipping edges.

class Attacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Adversarial attacker for graph data. Note that this is an abstract class.

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Examples

For example, an attacker should be used as follows:

>>> from graphwar.attack import Attacker
>>> attacker = Attacker(data, device='cuda')
>>> attacker.reset() # reset states
>>> attacker.attack(attack_arguments) # attack
>>> attacker.data() # get the attacked graph denoted as PyG-like Data
reset()[source]

Reset the attacker's state. Override this method in a subclass to implement attack-specific behavior.

abstract data() Data[source]

Get the attacked graph denoted as PyG-like Data.

Raises

NotImplementedError – The subclass does not implement this interface.

abstract attack() Attacker[source]

Abstract method. Subclasses must override this method to implement their specific attack.

Raises

NotImplementedError – The subclass does not implement this interface.

set_max_perturbations(max_perturbations: Union[float, int] = inf, verbose: bool = True) Attacker[source]

Set the maximum number of allowed perturbations

Parameters
  • max_perturbations (Union[float, int], optional) – the maximum number of allowed perturbations, by default np.inf

  • verbose (bool, optional) – whether to log the operation, by default True

Example

>>> attacker.set_max_perturbations(10)
property max_perturbations: Union[float, int]

Maximum allowable perturbation size.

Type

float or int

property feat: Tensor

Node features of the original graph.

property label: Tensor

Node labels of the original graph.

property edge_index: Tensor

Edge index of the original graph.

property edge_weight: Tensor

Edge weight of the original graph.

class FlipAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Adversarial attacker for graph data by flipping edges.

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Note

graphwar.attack.FlipAttacker is a base class for graph modification attacks (GMA).
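
As a rough illustration of the flip-based workflow, a subclass typically selects edges inside attack() and registers them via add_edge / remove_edge; data() then materializes the perturbed graph. A minimal sketch under these assumptions (the random edge selection is a placeholder, not graphwar's actual logic):

    import random
    from graphwar.attack import FlipAttacker

    class RandomFlip(FlipAttacker):
        """Toy flip attacker: adds `num_budgets` random edges."""

        def attack(self, num_budgets: int):
            num_nodes = self.feat.size(0)
            for it in range(num_budgets):
                u, v = random.sample(range(num_nodes), 2)
                # a real attacker would also check edge legality, existing
                # edges, and the allow-singleton setting here
                self.add_edge(u, v, it=it)
            return self

    attacker = RandomFlip(data)  # `data` is a PyG Data object
    attacker.reset()
    attacker.attack(num_budgets=10)
    perturbed = attacker.data()  # PyG-like Data with the flips applied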

reset() FlipAttacker[source]

Reset attacker. This method must be called before attack.

remove_edge(u: int, v: int, it: Optional[int] = None)[source]

Remove an edge from the graph.

Parameters
  • u (int) – The source node of the edge

  • v (int) – The destination node of the edge

  • it (Optional[int], optional) – The iteration that indicates the order of the edge being removed, by default None

add_edge(u: int, v: int, it: Optional[int] = None)[source]

Add one edge to the graph.

Parameters
  • u (int) – The source node of the edge

  • v (int) – The destination node of the edge

  • it (Optional[int], optional) – The iteration that indicates the order of the edge being added, by default None

removed_edges() Optional[Tensor][source]

Get all the edges to be removed.

added_edges() Optional[Tensor][source]

Get all the edges to be added.

edge_flips() BunchDict[source]

Get all the edges to be flipped, including edges to be added and removed.

remove_feat(u: int, v: int, it: Optional[int] = None)[source]

Remove the feature in dimension v from node u, i.e., set that dimension of the node's feature vector to zero.

Parameters
  • u (int) – the node whose features are to be removed

  • v (int) – the dimension of the feature to be removed

  • it (Optional[int], optional) – The iteration that indicates the order of the features being removed, by default None

add_feat(u: int, v: int, it: Optional[int] = None)[source]

Add the feature in dimension v to node u, i.e., set that dimension of the node's feature vector to one.

Parameters
  • u (int) – the node whose features are to be added

  • v (int) – the dimension of the feature to be added

  • it (Optional[int], optional) – The iteration that indicates the order of the features being added, by default None

removed_feats() Optional[Tensor][source]

Get all the features to be removed.

added_feats() Optional[Tensor][source]

Get all the features to be added.

feat_flips() BunchDict[source]

Get all the features to be flipped, including features to be added and removed.

data(symmetric: bool = True) Data[source]

Get the attacked graph denoted by PyG-like data instance.

Parameters

symmetric (bool, optional) – whether the output graph is symmetric, by default True

Returns

the attacked graph denoted by PyG-like data instance

Return type

Data

set_allow_singleton(state: bool)[source]

Set whether the attacked graph allows singleton nodes, i.e., zero-degree nodes.

Parameters

state (bool) – the flag to set

Example

>>> attacker.set_allow_singleton(True)
set_allow_structure_attack(state: bool)[source]

Set whether the attacker allows attacks on the topology (edges) of the graph.

Parameters

state (bool) – the flag to set

Example

>>> attacker.set_allow_structure_attack(True)
set_allow_feature_attack(state: bool)[source]

Set whether the attacker allows attacks on the features of nodes in the graph.

Parameters

state (bool) – the flag to set

Example

>>> attacker.set_allow_feature_attack(True)
is_singleton_edge(u: int, v: int) bool[source]

Check whether the edge is a singleton edge, i.e., an edge whose removal would result in a singleton node in the graph.

Parameters
  • u (int) – The source node of the edge

  • v (int) – The destination node of the edge

Returns

True if the edge is a singleton edge, otherwise False.

Return type

bool

Note

This check is only meaningful for an edge that is being removed.

Check whether the edge (u,v) is legal.

An edge (u, v) is legal if u != v and the edge (u, v) has not been selected before.

Parameters
  • u (int) – The source node of the edge

  • v (int) – The destination node of the edge

Returns

True if u != v and neither edge (u, v) nor (v, u) has been selected, otherwise False.

Return type

bool
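
The legality rule above can be mirrored in a few lines. The following standalone sketch (selected is a hypothetical record of previously chosen edges) reflects the documented condition:

    def is_legal_edge(u: int, v: int, selected: set) -> bool:
        # legal iff not a self-loop and not chosen before, in either direction
        return u != v and (u, v) not in selected and (v, u) not in selected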

Targeted Attacks

TargetedAttacker

Base class for adversarial targeted attack.

RandomAttack

Random attacker that randomly chooses edges to flip.

DICEAttack

Implementation of DICE attack from the: "Hiding Individuals and Communities in a Social Network" paper

FGAttack

Implementation of FGA attack from the: "Fast Gradient Attack on Network Embedding" paper (arXiv'18)

IGAttack

Implementation of IG-FGSM attack from the: "Adversarial Examples on Graph Data: Deep Insights into Attack and Defense" paper (IJCAI'19)

SGAttack

Implementation of SGA attack from the: "Adversarial Attack on Large Scale Graph" paper (TKDE'21)

Nettack

Implementation of Nettack attack from the: "Adversarial Attacks on Neural Networks for Graph Data" paper (KDD'18)

GFAttack

Implementation of GFA attack from the: "A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models" paper (AAAI'20)

class TargetedAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Base class for adversarial targeted attack.

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Note

graphwar.attack.targeted.TargetedAttacker is a subclass of graphwar.attack.FlipAttacker. It belongs to the family of graph modification attacks (GMA).

reset() TargetedAttacker[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

TargetedAttacker

attack(target, target_label, num_budgets, direct_attack, structure_attack, feature_attack) TargetedAttacker[source]

Base method that describes the adversarial targeted attack

Parameters
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node.

  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • direct_attack (bool) – whether to conduct direct attack or indirect attack.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
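
The two num_budgets cases can be made concrete with a small helper. This is an illustrative reading of the documented semantics, not graphwar's internal code:

    def resolve_budget(num_budgets, max_perturbations):
        # Case 2: a float in (0, 1] is a ratio of max_perturbations
        if isinstance(num_budgets, float):
            assert 0 < num_budgets <= 1
            return int(max_perturbations * num_budgets)
        # Case 1: an int is used as-is (must not exceed max_perturbations)
        assert 0 < num_budgets <= max_perturbations
        return num_budgets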

Check whether the edge (u,v) is legal.

For a targeted attacker, an edge (u, v) is legal if u != v and the edge (u, v) has not been selected before.

In addition, under the indirect attack setting, the target node is not allowed to be u or v.

Parameters
  • u (int) – src node id

  • v (int) – dst node id

Returns

True if u != v and the edge (u, v) is not selected, otherwise False.

Return type

bool

class RandomAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Random attacker that randomly chooses edges to flip.

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.targeted import RandomAttack
>>> attacker = RandomAttack(data)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

attack(target, *, num_budgets=None, threshold=0.5, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial targeted attack

Parameters
  • target (int) – the target node to be attacked

  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • direct_attack (bool) – whether to conduct direct attack or indirect attack.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

get_added_edge(influence_nodes: list) Optional[tuple][source]
get_removed_edge(influence_nodes: list) Optional[tuple][source]
class DICEAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of DICE attack from the: “Hiding Individuals and Communities in a Social Network” paper

DICE randomly chooses edges to flip based on the principle of “Disconnect Internally, Connect Externally” (DICE): it removes edges between nodes with high correlations and adds edges between nodes with low correlations.
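
The principle is easy to state in code. A self-contained sketch of DICE-style candidate selection (labels stands in for the node-label vector; this is illustrative, not graphwar's implementation):

    import torch

    def dice_removal_candidates(edge_index, labels):
        # "Disconnect Internally": candidate removals are edges whose
        # endpoints share a label (high correlation)
        src, dst = edge_index
        return edge_index[:, labels[src] == labels[dst]]

    def dice_addition_candidate(labels, num_nodes):
        # "Connect Externally": draw node pairs until their labels differ
        while True:
            u, v = torch.randint(num_nodes, (2,)).tolist()
            if u != v and labels[u] != labels[v]:
                return u, v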

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.targeted import DICEAttack
>>> attacker = DICEAttack(data)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

get_added_edge(influence_nodes: list) Optional[tuple][source]
get_removed_edge(influence_nodes: list) Optional[tuple][source]
class FGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of FGA attack from the: “Fast Gradient Attack on Network Embedding” paper (arXiv’18)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.targeted import FGAttack
>>> attacker = FGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

This is a simple but effective attack that utilizes gradient information of the adjacency matrix. Several works share the same heuristic; we list them as follows:

[1] FGSM: “Explaining and Harnessing Adversarial Examples” paper (ICLR’15)

[2] “Link Prediction Adversarial Attack Via Iterative Gradient Attack” paper (IEEE Trans’20)

[3] “Adversarial Attack on Graph Structured Data” paper (ICML’18)
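
In essence, the heuristic scores each potential flip by the gradient of the attack loss with respect to a dense adjacency matrix. A minimal sketch of that scoring (illustrative; adj is assumed to be a dense, differentiable adjacency matrix):

    import torch

    def gradient_flip_scores(adj, loss):
        # gradient of the attack loss w.r.t. the dense adjacency matrix
        adj_grad = torch.autograd.grad(loss, adj)[0]
        # adding an absent edge (0 -> 1) pays off where the gradient is
        # positive; removing an existing edge (1 -> 0) where it is negative
        return adj_grad * (1 - 2 * adj)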

Note

  • Please remember to call reset() before each attack.

reset()[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

TargetedAttacker

attack(target, *, target_label=None, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial targeted attack

Parameters
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node.

  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • direct_attack (bool) – whether to conduct direct attack or indirect attack.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

structure_score(modified_adj, adj_grad, target)[source]
feature_score(modified_feat, feat_grad, target)[source]
compute_gradients(modified_adj, modified_feat, target, target_label)[source]
class IGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of IG-FGSM attack from the: “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.targeted import IGAttack
>>> attacker = IGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.
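
IG-FGSM replaces the raw gradient with integrated gradients, averaging gradients along a path from a baseline (e.g., an all-zero or all-one matrix) to the input; steps controls the resolution. A minimal sketch of the estimator (score_fn is a hypothetical scalar attack loss):

    import torch

    def integrated_gradients(score_fn, x, baseline, steps=20):
        # average gradients at `steps` points along the straight path
        # from the baseline to the input x
        total = torch.zeros_like(x)
        for k in range(1, steps + 1):
            point = (baseline + k / steps * (x - baseline)).detach().requires_grad_(True)
            total = total + torch.autograd.grad(score_fn(point), point)[0]
        return (x - baseline) * total / steps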

attack(target, *, target_label=None, num_budgets=None, steps=20, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial targeted attack

Parameters
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node.

  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • direct_attack (bool) – whether to conduct direct attack or indirect attack.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

get_candidate_edges()[source]
get_candidate_features()[source]
get_feature_importance(candidates, steps, target, target_label, disable=False)[source]
compute_structure_gradients(feat, adj_step, target, target_label)[source]
compute_feature_gradients(feat_step, adj, target, target_label)[source]
class SGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of SGA attack from the: “Adversarial Attack on Large Scale Graph” paper (TKDE’21)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.targeted import SGAttack
>>> attacker = SGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • SGAttack is a scalable attack that can be applied to large scale graphs.

  • Please remember to call reset() before each attack.
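
SGA owes its scalability to locality: for a K-layer linearized (SGC-like) surrogate, only the K-hop subgraph around the target can influence its prediction, so gradients are computed on that subgraph alone. A sketch of the extraction with PyG (illustrative; the get_subgraph method below may also include candidate attacker nodes):

    from torch_geometric.utils import k_hop_subgraph

    def target_khop_subgraph(target: int, K: int, edge_index):
        # nodes/edges within K hops of the target, with compact relabeling
        subset, sub_edge_index, mapping, _ = k_hop_subgraph(
            target, K, edge_index, relabel_nodes=True)
        return subset, sub_edge_index, mapping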

setup_surrogate(surrogate: Module, eps: float = 5.0, freeze: bool = True, K: int = 2)[source]

Method used to initialize the (trained) surrogate model.

Parameters
  • surrogate (Module) – the input surrogate module

  • eps (float, optional) – temperature used for softmax activation, by default 5.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • K (int, optional) – the order of the surrogate's graph convolution (number of propagation hops), by default 2

Returns

the class itself

Return type

Surrogate

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of required

set_normalize(state)[source]
strongest_wrong_class(target, target_label)[source]
get_subgraph(target, target_label, best_wrong_label)[source]
get_top_attackers(subgraph, target, target_label, best_wrong_label, num_attackers)[source]
subgraph_processing(sub_nodes, sub_edges, influencers, attacker_nodes)[source]
attack(target, *, target_label=None, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial targeted attack

Parameters
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node.

  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • direct_attack (bool) – whether to conduct direct attack or indirect attack.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

compute_gradients(subgraph, target, target_label, best_wrong_label)[source]
class Nettack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of Nettack attack from the: “Adversarial Attacks on Neural Networks for Graph Data” paper (KDD’18)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.targeted import Nettack
>>> attacker = Nettack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate)[source]

Method used to initialize the (trained) surrogate model.

Parameters

surrogate (Module) – the input surrogate module

Returns

the class itself

Return type

Surrogate

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of required

reset()[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

TargetedAttacker

compute_cooccurrence_constraint(nodes)[source]
gradient_wrt_x(label)[source]
compute_logits()[source]
strongest_wrong_class(logits)[source]
feature_scores()[source]
structure_score(a_hat_uv, XW)[source]
compute_XW()[source]
get_attacker_nodes(n=5, add_additional_nodes=False)[source]
compute_new_a_hat_uv(candidate_edges)[source]
get_candidate_edges(n_influencers)[source]
attack(target, *, target_label=None, num_budgets=None, n_influencers=5, direct_attack=True, structure_attack=True, feature_attack=False, ll_constraint=True, ll_cutoff=0.004, disable=False)[source]

Base method that describes the adversarial targeted attack

Parameters
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node.

  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • direct_attack (bool) – whether to conduct direct attack or indirect attack.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

class GFAttack(data: Data, K: int = 2, T: int = 128, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of GFA attack from the: “A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models” paper (AAAI’20)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • K (int, optional) – the order of graph filter, by default 2

  • T (int, optional) – top-T largest eigenvalues/eigenvectors selected, by default 128

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.targeted import GFAttack
>>> attacker = GFAttack(data)
>>> attacker.reset()
>>> attacker.attack(target=1) # attacking target node `1` with default budget set as node degree
>>> attacker.reset()
>>> attacker.attack(target=1, num_budgets=1) # attacking target node `1` with budget set as 1
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • In the paper, the authors mainly consider single-edge perturbations, i.e., num_budgets=1.

  • Please remember to call reset() before each attack.

  • Set T=128 for Citeseer and Pubmed, and T=num_nodes//2 for Cora, to reproduce the results in the paper.

get_candidate_edges()[source]
attack(target, *, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, ll_constraint=False, ll_cutoff=0.004, disable=False)[source]

Base method that describes the adversarial targeted attack

Parameters
  • target (int) – the target node to be attacked

  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • direct_attack (bool) – whether to conduct direct attack or indirect attack.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

static structure_score(A: csr_matrix, x_mean: Tensor, eig_vals: Tensor, eig_vec: Tensor, candidate_edges: ndarray, K: int, T: int, method: str = 'nosum')[source]

Calculate the score of potential edges as formulated in the paper.

Parameters
  • A (sp.csr_matrix) – the graph adjacency matrix

  • x_mean (Tensor) – the mean of the node features

  • eig_vals (Tensor) – the eigenvalues

  • eig_vec (Tensor) – the eigenvectors

  • candidate_edges (np.ndarray) – the candidate_edges to be selected

  • K (int) – the order of the graph filter.

  • T (int) – the number of top-T largest eigenvalues/eigenvectors selected.

  • method (str, optional) – “sum” or “nosum”, indicating which loss the scores are calculated from: “nosum” denotes Equation (8) in the paper, where the loss is derived from Graph Convolutional Networks; “sum” denotes Equation (12), where the loss is derived from sampling-based graph embedding methods. By default “nosum”.

Returns

Scores for potential edges.

Return type

Tensor

Untargeted Attacks

UntargetedAttacker

Base class for adversarial non-targeted attack.

RandomAttack

Random attacker that randomly chooses edges to flip.

DICEAttack

Implementation of DICE attack from the: "Hiding Individuals and Communities in a Social Network" paper

FGAttack

Implementation of FGA attack from the: "Fast Gradient Attack on Network Embedding" paper (arXiv'18)

IGAttack

Implementation of IG-FGSM attack from the: "Adversarial Examples on Graph Data: Deep Insights into Attack and Defense" paper (IJCAI'19)

Metattack

Implementation of Metattack attack from the: "Adversarial Attacks on Graph Neural Networks via Meta Learning" paper (ICLR'19)

MinmaxAttack

Implementation of MinMax attack from the: "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective" paper (IJCAI'19)

PGDAttack

Implementation of PGD attack from the: "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective" paper (IJCAI'19)

class UntargetedAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Base class for adversarial non-targeted attack.

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Note

graphwar.attack.untargeted.UntargetedAttacker is a subclass of graphwar.attack.FlipAttacker. It belongs to the family of graph modification attacks (GMA).

reset() UntargetedAttacker[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

UntargetedAttacker

attack(num_budgets, structure_attack, feature_attack) UntargetedAttacker[source]

Base method that describes the adversarial untargeted attack

Parameters
  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

class RandomAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Random attacker that randomly chooses edges to flip.

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.untargeted import RandomAttack
>>> attacker = RandomAttack(data)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

attack(num_budgets=0.05, *, threshold=0.5, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack

Parameters
  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

get_added_edge(influence_nodes: list) Optional[tuple][source]
get_removed_edge(influence_nodes: list) Optional[tuple][source]
class DICEAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of DICE attack from the: “Hiding Individuals and Communities in a Social Network” paper

DICE randomly chooses edges to flip based on the principle of “Disconnect Internally, Connect Externally” (DICE): it removes edges between nodes with high correlations and adds edges between nodes with low correlations.

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.untargeted import DICEAttack
>>> attacker = DICEAttack(data)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

get_added_edge(influence_nodes: list) Optional[tuple][source]
get_removed_edge(influence_nodes: list) Optional[tuple][source]
class FGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of FGA attack from the: “Fast Gradient Attack on Network Embedding” paper (arXiv’18)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import FGAttack
>>> attacker = FGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

This is a simple but effective attack that utilizes gradient information of the adjacency matrix. Several works share the same heuristic; we list them as follows:

[1] FGSM: “Explaining and Harnessing Adversarial Examples” paper (ICLR’15)

[2] “Link Prediction Adversarial Attack Via Iterative Gradient Attack” paper (IEEE Trans’20)

[3] “Adversarial Attack on Graph Structured Data” paper (ICML’18)

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, victim_nodes: Tensor, victim_labels: Optional[Tensor] = None, *, eps: float = 1.0)[source]

Method used to initialize the (trained) surrogate model.

Parameters
  • surrogate (Module) – the input surrogate module

  • eps (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns

the class itself

Return type

Surrogate

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of required

reset()[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

UntargetedAttacker

attack(num_budgets=0.05, *, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack

Parameters
  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

structure_score(modified_adj, adj_grad)[source]
feature_score(modified_feat, feat_grad)[source]
compute_gradients(modified_adj, modified_feat, victim_nodes, victim_labels)[source]
class IGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of IG-FGSM attack from the: “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import IGAttack
>>> attacker = IGAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • In the paper, the IG-FGSM attack was implemented as a targeted attack; we adapt the code for the untargeted attack here.

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, victim_nodes: Tensor, victim_labels: Optional[Tensor] = None, *, eps: float = 1.0)[source]

Method used to initialize the (trained) surrogate model.

Parameters
  • surrogate (Module) – the input surrogate module

  • eps (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns

the class itself

Return type

Surrogate

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of required

attack(num_budgets=0.05, *, steps=20, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack

Parameters
  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

get_feature_importance(steps, victim_nodes, victim_labels, disable=False)[source]
structure_score(adj, adj_grad)[source]
feature_score(feat, feat_grad)[source]
compute_structure_gradients(adj_step, feat, victim_nodes, victim_labels)[source]
compute_feature_gradients(adj, feat_step, victim_nodes, victim_labels)[source]
class Metattack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of Metattack attack from the: “Adversarial Attacks on Graph Neural Networks via Meta Learning” paper (ICLR’19)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import Metattack
>>> attacker = Metattack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.
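
Metattack treats the graph structure as a hyperparameter and attacks it with meta-gradients: the surrogate is trained in a differentiable inner loop (compare the inner_train method below), and the attack loss is backpropagated through the whole training trajectory to the structure perturbations. A heavily simplified sketch (inner_train_fn and attack_loss_fn are hypothetical placeholders):

    import torch

    def meta_gradient(adj_changes, inner_train_fn, attack_loss_fn):
        # differentiable inner training of the surrogate on the perturbed
        # graph (the inner steps must keep the autograd graph alive)
        weights = inner_train_fn(adj_changes)
        # attack objective, e.g., self-training loss on unlabeled nodes
        loss = attack_loss_fn(adj_changes, weights)
        # meta-gradient: backprop through the whole training trajectory
        return torch.autograd.grad(loss, adj_changes)[0]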

setup_surrogate(surrogate: Module, labeled_nodes: Tensor, unlabeled_nodes: Tensor, lr: float = 0.1, epochs: int = 100, momentum: float = 0.9, lambda_: float = 0.0, *, eps: float = 1.0)[source]

Method used to initialize the (trained) surrogate model.

Parameters
  • surrogate (Module) – the input surrogate module

  • eps (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns

the class itself

Return type

Surrogate

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of required

reset()[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

UntargetedAttacker

get_perturbed_adj(adj_changes=None)[source]
get_perturbed_feat(feat_changes=None)[source]
clip(matrix)[source]
reset_parameters()[source]
forward(adj, x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

inner_train(adj, feat)[source]
attack(num_budgets=0.05, *, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack

Parameters
  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

structure_score(modified_adj, adj_grad)[source]
feature_score(modified_feat, feat_grad)[source]
compute_gradients(modified_adj, modified_feat)[source]
class MinmaxAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of MinMax attack from the: “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19)
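
MinMax alternates two optimizations: an inner step retrains (minimizes) the surrogate's weights on the currently perturbed graph, and an outer step maximizes the attack loss over the structure perturbations. A schematic sketch, assuming both tensors are leaves with requires_grad=True and loss_fn is a hypothetical training loss:

    import torch

    def minmax_step(perturbations, weights, loss_fn, lr_w=0.01, lr_p=0.001):
        # inner minimization: one descent step on the surrogate weights
        g_w = torch.autograd.grad(loss_fn(perturbations, weights), weights)[0]
        weights = (weights - lr_w * g_w).detach().requires_grad_(True)
        # outer maximization: one ascent step on the edge perturbations
        g_p = torch.autograd.grad(loss_fn(perturbations, weights), perturbations)[0]
        perturbations = (perturbations + lr_p * g_p).detach().requires_grad_(True)
        return perturbations, weights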

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import MinmaxAttack
>>> attacker = MinmaxAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, labeled_nodes: Tensor, unlabeled_nodes: Optional[Tensor] = None, *, eps: float = 1.0)[source]

Method used to initialize the (trained) surrogate model.

Parameters
  • surrogate (Module) – the input surrogate module

  • eps (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns

the class itself

Return type

Surrogate

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of required

reset()[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

UntargetedAttacker

attack(num_budgets=0.05, *, C=None, lr=0.001, CW_loss=False, epochs=100, sample_epochs=20, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack

Parameters
  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

class PGDAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of PGD attack from the: “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19)

Parameters
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device on which the attack runs, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker; if None, __class__.__name__ is used, by default None

  • kwargs – additional keyword arguments of graphwar.attack.Attacker

Raises

TypeError – unexpected keyword argument in kwargs

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
...                       transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.untargeted import PGDAttack
>>> attacker = PGDAttack(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(0.05) # attack with 5% of edge perturbations
>>> attacker.data() # get attacked graph
>>> attacker.edge_flips() # get edge flips after attack
>>> attacker.added_edges() # get added edges after attack
>>> attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, labeled_nodes: Tensor, unlabeled_nodes: Optional[Tensor] = None, *, eps: float = 1.0, freeze: bool = True)[source]

Method used to initialize the (trained) surrogate model.

Parameters
  • surrogate (Module) – the input surrogate module

  • eps (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns

the class itself

Return type

Surrogate

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of required

reset()[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

UntargetedAttacker

attack(num_budgets=0.05, *, C=None, CW_loss=False, epochs=200, sample_epochs=20, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack

Parameters
  • num_budgets (int (0 < num_budgets <= max_perturbations) or float (0 < num_budgets <= 1)) –

    Case 1: int: the number of attack budgets, i.e., how many edges can be perturbed.

    Case 2: float: the attack budget as a ratio of max_perturbations.

    See max_perturbations.

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

config_C(C=None)[source]
bisection(perturbations, a, b, epsilon)[source]
get_perturbed_adj(perturbations=None)[source]
projection(perturbations)[source]
clip(matrix)[source]
bernoulli_sample(perturbations, sample_epochs=20, disable=False)[source]
compute_loss(perturbations, victim_nodes, victim_labels)[source]
compute_gradients(perturbations, victim_nodes, victim_labels)[source]
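
The helpers above (projection, bisection, bernoulli_sample) implement the standard PGD-on-graphs loop: take gradient steps on a continuous perturbation vector, project it back onto the budget-constrained box, then sample discrete flips. A condensed sketch of the projection step (illustrative, not graphwar's exact code):

    import torch

    def project_onto_budget(p, budget, iters=50):
        # project p onto {0 <= p <= 1, sum(p) <= budget}: bisection finds
        # the shift mu such that clamp(p - mu, 0, 1) sums to the budget
        p = p.clamp(0, 1)
        if p.sum() <= budget:
            return p
        lo, hi = 0.0, p.max().item()
        for _ in range(iters):
            mu = (lo + hi) / 2
            if (p - mu).clamp(0, 1).sum() <= budget:
                hi = mu
            else:
                lo = mu
        return (p - hi).clamp(0, 1)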

Injection Attacks

InjectionAttacker

Base class for injection attackers; a concrete attack should implement the attack method.

RandomInjection

Inject nodes into a graph randomly.

AdvInjection

2nd place solution of KDD CUP 2020 "Adversarial attack and defense" challenge.

class InjectionAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Base class for injection attackers; a concrete attack should implement the attack method.

Example

>>> attacker = InjectionAttacker(data)
>>> attacker.reset()
# inject 10 nodes, where each node has 2 edges
>>> attacker.attack(num_budgets=10, num_edges_local=2)
# inject 10 nodes, with 100 edges in total
>>> attacker.attack(num_budgets=10, num_edges_global=100)
# inject 10 nodes, where each node has 2 edges,
# the features of injected nodes lie in [0, 1]
>>> attacker.attack(num_budgets=10, num_edges_local=2, feat_limits=(0,1))
>>> attacker.attack(num_budgets=10, num_edges_local=2, feat_limits={'min': 0, 'max':1})
# inject 10 nodes, where each node has 2 edges,
# the features of each injected node have 10 nonzero elements
>>> attacker.attack(num_budgets=10, num_edges_local=2, feat_budgets=10)

>>> attacker.injected_nodes() # get injected nodes
>>> attacker.injected_edges() # get injected edges
>>> attacker.injected_feats() # get injected nodes' features
>>> attacker.data() # get perturbed graph

reset() InjectionAttacker[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

InjectionAttacker

attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None) InjectionAttacker[source]

Base method that describes the adversarial injection attack

Parameters
  • num_budgets (Union[int, float]) – the number (int) or ratio (float) of nodes allowed to be injected

  • targets (Optional[Tensor], optional) – the target nodes to be perturbed by the injected nodes; if None, all nodes in the graph are used, by default None

  • num_edges_global (Optional[int], optional) – the total number of edges to be injected across all injected nodes, by default None

  • num_edges_local (Optional[int], optional) – the number of edges allowed to be injected for each injected node, by default None

  • feat_limits (Optional[Union[tuple, dict]], optional) – the limits of the injected node features; either a tuple, e.g., (0, 1), or a dict, e.g., {‘min’: 0, ‘max’: 1}. If None, it is set to (self.feat.min(), self.feat.max()), by default None

  • feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, meaning that 10 nonzero features can be injected for each injected node, by default None

Returns

the attacker itself

Return type

InjectionAttacker

Note

  • num_edges_local and num_edges_global cannot be used simultaneously.

  • feat_limits and feat_budgets cannot be used simultaneously.

injected_nodes() Optional[Tensor][source]

Get all the nodes to be injected.

added_nodes() Optional[Tensor][source]

alias of method injected_nodes

injected_edges() Optional[Tensor][source]

Get all the edges to be injected.

added_edges() Optional[Tensor][source]

alias of method injected_edges

edge_flips() BunchDict[source]

Get all the edges to be flipped, including edges to be added and removed.

injected_feats() Optional[Tensor][source]

Get the features of the injected nodes.

added_feats() Optional[Tensor][source]

alias of method injected_feats

inject_node(node)[source]
inject_edge(u: int, v: int)[source]

Inject an edge into the graph.

Parameters
  • u (int) – The source node of the edge.

  • v (int) – The destination node of the edge.

inject_edges(edges: Union[Tensor, List])[source]

Inject a set of edges into the graph.

Parameters

edges (Union[Tensor, List]) – the newly injected edges.
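
A hedged sketch, assuming edges follow the PyG convention of a 2 x num_edges tensor (the exact accepted layout is an assumption; check it against the implementation):

>>> import torch
>>> new_edges = torch.tensor([[0, 1],
...                           [5, 5]])  # edges (0, 5) and (1, 5); node ids are illustrative
>>> attacker.inject_edges(new_edges)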

inject_feat(feat: Optional[Tensor] = None)[source]

Generate a feature vector for a newly injected node.

Parameters

feat (Optional[Tensor], optional) – the injected feature vector. If None, it is randomly generated, by default None
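
For example (torch and data as in the earlier examples):

>>> attacker.inject_feat()  # a randomly generated feature vector
>>> attacker.inject_feat(torch.zeros(data.num_features))  # a user-supplied feature vector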

data(symmetric: bool = True) Data[source]

Return the attacked graph.

Parameters

symmetric (bool) – whether the resulting graph is forced to be symmetric (undirected), by default True

Returns

the attacked graph represented as PyG-like data

Return type

Data
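
For example:

>>> attacker.data()                 # resulting graph is forced to be symmetric
>>> attacker.data(symmetric=False)  # symmetry is not enforced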

class RandomInjection(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Injects nodes into a graph randomly.

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
                      transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> from graphwar.attack.injection import RandomInjection
>>> attacker = RandomInjection(data)
>>> attacker.reset()
>>> attacker.attack(10, feat_limits=(0, 1))  # inject 10 nodes with continuous features in [0, 1]
>>> attacker.reset()
>>> attacker.attack(10, feat_budgets=10)  # inject 10 nodes with binary features (10 nonzero entries each)
>>> attacker.data() # get attacked graph
>>> attacker.injected_nodes() # get injected nodes after attack
>>> attacker.injected_edges() # get injected edges after attack
>>> attacker.injected_feats() # get injected features after attack

Note

  • Please remember to call reset() before each attack.

attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, interconnection: bool = False, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None, disable: bool = False) RandomInjection[source]

Base method that describes the adversarial injection attack

Parameters
  • num_budgets (Union[int, float]) – the number (int) or ratio (float) of nodes allowed to be injected

  • targets (Optional[Tensor], optional) – the target nodes to be perturbed by the injected nodes; if None, all nodes in the graph are used, by default None

  • interconnection (bool, optional) – whether the injected nodes can connect to each other, by default False

  • num_edges_global (Optional[int], optional) – the total number of edges to be injected across all injected nodes, by default None

  • num_edges_local (Optional[int], optional) – the number of edges allowed to be injected for each injected node, by default None

  • feat_limits (Optional[Union[tuple, dict]], optional) – the limits of the injected node features; either a tuple, e.g., (0, 1), or a dict, e.g., {‘min’: 0, ‘max’: 1}. If None, it is set to (self.feat.min(), self.feat.max()), by default None

  • feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, meaning that 10 nonzero features can be injected for each injected node, by default None

  • disable (bool, optional) – whether to disable the tqdm progress bar, by default False

Returns

the attacker itself

Return type

RandomInjection

Note

  • num_edges_local and num_edges_global cannot be used simultaneously.

  • feat_limits and feat_budgets cannot be used simultaneously.
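
For example, to additionally allow the injected nodes to connect to each other:

>>> attacker.attack(10, num_edges_local=2, interconnection=True)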

class AdvInjection(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

2nd place solution of KDD CUP 2020 “Adversarial attack and defense” challenge.

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
                      transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.injection import AdvInjection
>>> attacker = AdvInjection(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(10, feat_limits=(0, 1))  # inject 10 nodes with continuous features in [0, 1]
>>> attacker.reset()
>>> attacker.attack(10, feat_budgets=10)  # inject 10 nodes with binary features (10 nonzero entries each)
>>> attacker.data() # get attacked graph
>>> attacker.injected_nodes() # get injected nodes after attack
>>> attacker.injected_edges() # get injected edges after attack
>>> attacker.injected_feats() # get injected features after attack

Note

  • Please remember to call reset() before each attack.

attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, interconnection: bool = False, lr: float = 0.01, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None, disable: bool = False) AdvInjection[source]

Base method that describes the adversarial injection attack

Parameters
  • num_budgets (Union[int, float]) – the number (int) or ratio (float) of nodes allowed to be injected

  • targets (Optional[Tensor], optional) – the target nodes to be perturbed by the injected nodes; if None, all nodes in the graph are used, by default None

  • interconnection (bool, optional) – whether the injected nodes can connect to each other, by default False

  • lr (float, optional) – the learning rate for the gradient-based optimization of the injected edges and features, by default 0.01

  • num_edges_global (Optional[int], optional) – the total number of edges to be injected across all injected nodes, by default None

  • num_edges_local (Optional[int], optional) – the number of edges allowed to be injected for each injected node, by default None

  • feat_limits (Optional[Union[tuple, dict]], optional) – the limits of the injected node features; either a tuple, e.g., (0, 1), or a dict, e.g., {‘min’: 0, ‘max’: 1}. If None, it is set to (self.feat.min(), self.feat.max()), by default None

  • feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, meaning that 10 nonzero features can be injected for each injected node, by default None

  • disable (bool, optional) – whether to disable the tqdm progress bar, by default False

Returns

the attacker itself

Return type

AdvInjection

Note

  • num_edges_local and num_edges_global cannot be used simultaneously.

  • feat_limits and feat_budgets cannot be used simultaneously.
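
For example, using a larger step size for the gradient-based optimization of the injected edges and features:

>>> attacker.attack(10, feat_budgets=10, lr=0.1)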

compute_gradients(x, edge_index, edge_weight, injected_feats, injected_edge_index, injected_edge_weight, targets, target_labels)[source]

Backdoor Attacks

BackdoorAttacker

Base class for backdoor attacks.

FGBackdoor

Implementation of the GB-FGSM attack from the "Neighboring Backdoor Attacks on Graph Convolutional Network" paper (arXiv'22)

LGCBackdoor

Implementation of the LGCB attack from the "Neighboring Backdoor Attacks on Graph Convolutional Network" paper (arXiv'22)

class BackdoorAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Base class for backdoor attacks.

reset() BackdoorAttacker[source]

Reset the state of the Attacker

Returns

the attacker itself

Return type

BackdoorAttacker

attack(num_budgets: Union[int, float], target_class: int) BackdoorAttacker[source]

Base method that describes the adversarial backdoor attack

trigger() Tensor[source]
data(target_node: int, symmetric: bool = True) Data[source]

Return the attacked graph.

Parameters
  • target_node (int) – the target node on which the attack is performed

  • symmetric (bool) – whether the resulting graph is forced to be symmetric (undirected), by default True

Returns

the attacked graph with backdoor attack performed on the target node

Return type

Data
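
For example (the node id 42 is illustrative):

>>> attacker.data(target_node=42)  # attacked graph with the backdoor wired to node 42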

class FGBackdoor(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of the GB-FGSM attack from the “Neighboring Backdoor Attacks on Graph Convolutional Network” paper (arXiv’22)

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
                      transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.backdoor import FGBackdoor
>>> attacker = FGBackdoor(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(num_budgets=50, target_class=0)
>>> attacker.data() # get attacked graph
>>> attacker.trigger() # get trigger node

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, *, eps: float = 1.0) FGBackdoor[source]

Method used to initialize the (trained) surrogate model.

Parameters
  • surrogate (Module) – the input surrogate module

  • eps (float, optional) – temperature used for softmax activation, by default 1.0

Returns

the attacker itself

Return type

FGBackdoor

Raises
  • RuntimeError – if the surrogate model is not an instance of torch.nn.Module

  • RuntimeError – if the surrogate model is not an instance of the class(es) required by the attacker

attack(num_budgets: Union[int, float], target_class: int, disable: bool = False) FGBackdoor[source]

Base method that describes the adversarial backdoor attack

class LGCBackdoor(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of the LGCB attack from the “Neighboring Backdoor Attacks on Graph Convolutional Network” paper (arXiv’22)

Example

>>> from graphwar.dataset import GraphWarDataset
>>> import torch_geometric.transforms as T
>>> dataset = GraphWarDataset(root='~/data/pygdata', name='cora',
                      transform=T.LargestConnectedComponents())
>>> data = dataset[0]
>>> surrogate_model = ... # train your surrogate model
>>> from graphwar.attack.backdoor import LGCBackdoor
>>> attacker = LGCBackdoor(data)
>>> attacker.setup_surrogate(surrogate_model)
>>> attacker.reset()
>>> attacker.attack(num_budgets=50, target_class=0)
>>> attacker.data() # get attacked graph
>>> attacker.trigger() # get trigger node

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module) LGCBackdoor[source]
attack(num_budgets: Union[int, float], target_class: int, disable: bool = False) LGCBackdoor[source]

Base method that describes the adversarial backdoor attack

static get_feat_perturbations(W: Tensor, target_class: int, num_budgets: int) Tensor[source]
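
A hedged sketch of calling this helper directly. It assumes W is the product of the surrogate's (linearized) GCN weight matrices, as in the LGCB formulation; the attribute path to the weights below is an assumption that depends on how your surrogate is defined:

>>> W = surrogate_model.conv1.lin.weight.t() @ surrogate_model.conv2.lin.weight.t()  # (num_feats, num_classes), assumed layout
>>> trigger_feats = LGCBackdoor.get_feat_perturbations(W, target_class=0, num_budgets=50)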