greatx.attack

Base Classes

Attacker

Adversarial attacker for graph data.

FlipAttacker

Adversarial attacker for graph data by flipping edges.

class Attacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Adversarial attacker for graph data. Note that this is an abstract class.

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Examples

For example, an attacker is typically used as follows:

from greatx.attack import Attacker
attacker = Attacker(data, device='cuda')
attacker.reset() # reset states
attacker.attack(attack_arguments) # attack
attacker.data() # get the attacked graph denoted as PyG-like Data
reset()[source]

Reset the attacker state. Override this method in a subclass to implement attack-specific behavior.

abstract data() Data[source]

Get the attacked graph denoted as PyG-like Data.

Raises:

NotImplementedError – The subclass does not implement this interface.

abstract attack() Attacker[source]

Abstract method. The subclass must override this method to implement its specific attack.

Raises:

NotImplementedError – The subclass does not implement this interface.
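
As a rough illustration (not part of the library API), a minimal subclass might override these methods along the following lines; the bookkeeping attribute and the dummy perturbation logic are purely hypothetical:

from torch_geometric.data import Data

from greatx.attack import Attacker


class MyAttacker(Attacker):
    # illustrative sketch only; `_flips` is not a library attribute
    def reset(self):
        self._flips = []  # clear the recorded perturbations
        return self

    def attack(self, num_budgets: int):
        # record some dummy perturbations up to the given budget
        for it in range(num_budgets):
            self._flips.append((it, it + 1))
        return self

    def data(self) -> Data:
        # return the perturbed graph; here we simply return the original data
        return self.ori_data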

set_max_perturbations(max_perturbations: Union[float, int] = inf, verbose: bool = True) Attacker[source]

Set the maximum number of allowed perturbations

Parameters:
  • max_perturbations (Union[float, int], optional) – the maximum number of allowed perturbations, by default np.inf

  • verbose (bool, optional) – whether to verbose the operation, by default True

Example

attacker.set_max_perturbations(10)

property max_perturbations: Union[float, int]

Maximum allowable perturbation size.

Type:

float or int

property feat: Tensor

Node features of the original graph.

property label: Tensor

Node labels of the original graph.

property edge_index: Tensor

Edge index of the original graph.

property edge_weight: Tensor

Edge weight of the original graph.

get_dense_adj() Tensor[source]

Returns a dense adjacency matrix denoting the original graph. If self.ori_data has the attribute adj_t, it is used to build the dense matrix; otherwise the matrix is built from the tuple (edge_index, edge_weight).
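
A brief usage sketch (assuming an attacker instance as constructed above):

adj = attacker.get_dense_adj()  # dense tensor of shape [num_nodes, num_nodes]
print(adj.shape, adj.sum())     # graph size and total edge weight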

class FlipAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Adversarial attacker for graph data by flipping edges.

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Note

greatx.attack.FlipAttacker is a base class for graph modification attacks (GMA).

reset() FlipAttacker[source]

Reset attacker. This method must be called before attack.

remove_edge(u: int, v: int, it: Optional[int] = None)[source]

Remove an edge from the graph.

Parameters:
  • u (int) – The source node of the edge

  • v (int) – The destination node of the edge

  • it (Optional[int], optional) – The iteration that indicates the order of the edge being removed, by default None

add_edge(u: int, v: int, it: Optional[int] = None)[source]

Add one edge to the graph.

Parameters:
  • u (int) – The source node of the edge

  • v (int) – The destination node of the edge

  • it (Optional[int], optional) – The iteration that indicates the order of the edge being added, by default None
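
Both methods are typically called from within the attack loop of a FlipAttacker subclass to record the chosen perturbations; a rough sketch (candidate_flips is purely illustrative):

# inside a hypothetical attack(...) of a FlipAttacker subclass
for it, (u, v, remove) in enumerate(candidate_flips):
    if remove:
        self.remove_edge(u, v, it)  # record edge (u, v) for removal
    else:
        self.add_edge(u, v, it)     # record edge (u, v) for addition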

removed_edges() Optional[Tensor][source]

Get all the edges to be removed.

added_edges() Optional[Tensor][source]

Get all the edges to be added.

edge_flips(frac: float = 1.0) BunchDict[source]

Get all the edges to be flipped, including edges to be added and removed.

Parameters:

frac (float, optional) – the fraction of edge perturbations, i.e., how many perturbed edges are used to construct the perturbed graph, by default 1.0

Example

>>> # Get the edge flips
>>> attacker.edge_flips()
>>> # Get the edge flips, with
>>> # specifying frac
>>> attacker.edge_flips(frac=0.5)
remove_feat(u: int, v: int, it: Optional[int] = None)[source]

Remove the feature in dimension v from node u. That is, set that dimension of the specific node to zero.

Parameters:
  • u (int) – the node whose features are to be removed

  • v (int) – the dimension of the feature to be removed

  • it (Optional[int], optional) – The iteration that indicates the order of the features being removed, by default None

add_feat(u: int, v: int, it: Optional[int] = None)[source]

Add the feature in dimension v to node u. That is, set that dimension of the specific node to one.

Parameters:
  • u (int) – the node whose features are to be added

  • v (int) – the dimension of the feature to be added

  • it (Optional[int], optional) – The iteration that indicates the order of the features being added, by default None
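
Analogously to the edge case, a feature-flipping attack would record its perturbations as follows (node and dimension indices are arbitrary):

# inside a hypothetical attack(...) flipping binary node features
self.remove_feat(u=0, v=10, it=0)  # zero out feature dimension 10 of node 0
self.add_feat(u=0, v=42, it=1)     # set feature dimension 42 of node 0 to one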

removed_feats() Optional[Tensor][source]

Get all the features to be removed.

added_feats() Optional[Tensor][source]

Get all the features to be added.

feat_flips(frac: float = 1.0) BunchDict[source]

Get all the features to be flipped, including features to be added and removed.

Parameters:

frac (float, optional) – the fraction of feature perturbations, i.e., how many perturbed features are used to construct the perturbed graph, by default 1.0

Example

>>> # Get the feature flips
>>> attacker.feat_flips()
>>> # Get the feature flips, with
>>> # specifying frac
>>> attacker.feat_flips(frac=0.5)
data(edge_ratio: float = 1.0, feat_ratio: float = 1.0, coalesce: bool = True, symmetric: bool = True) Data[source]

Get the attacked graph denoted by a PyG-like data instance. Note that this method uses an LRU cache for efficiency: the computation is only executed on the first call with the same input parameters.

Parameters:
  • edge_ratio (float, optional) – the fraction of edge perturbations, i.e., how many perturbed edges are used to construct the perturbed graph. by default 1.0

  • feat_ratio (float, optional) – the fraction of feature perturbations, i.e., how many perturbed features are used to construct the perturbed graph. by default 1.0

  • coalesce (bool, optional) – whether to coalesce the output edges, by default True

  • symmetric (bool, optional) – whether the output graph is symmetric, by default True

Example

>>> # Get the perturbed graph, including
>>> # edge flips and feature flips
>>> attacker.data()
>>> # Get the perturbed graph, with
>>> # specifying edge_ratio
>>> attacker.data(edge_ratio=0.5)
>>> # Get the perturbed graph, with
>>> # specifying feat_ratio
>>> attacker.data(feat_ratio=0.5)
Returns:

the attacked graph denoted by PyG-like data instance

Return type:

Data

set_allow_singleton(state: bool)[source]

Set whether the attacked graph allows singleton nodes, i.e., zero-degree nodes.

Parameters:

state (bool) – the flag to set

Example

>>> attacker.set_allow_singleton(True)
is_singleton_edge(u: int, v: int) bool[source]

Check if the edge is a singleton edge, i.e., an edge whose removal would result in a singleton node in the graph.

Parameters:
  • u (int) – The source node of the edge

  • v (int) – The destination node of the edge

Returns:

True if the edge is a singleton edge, otherwise False.

Return type:

bool

Note

Please make sure the edge is the one being removed.

Check whether the edge (u,v) is legal.

An edge (u,v) is legal if u != v and the edge (u,v) has not been selected before.

Parameters:
  • u (int) – The source node of the edge

  • v (int) – The destination node of the edge

Returns:

True if u != v and neither edge (u,v) nor (v,u) has been selected before, otherwise False.

Return type:

bool
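
A possible pattern for combining these checks before removing edges (removal_candidates is purely illustrative):

# inside a hypothetical attack loop of a FlipAttacker subclass
for u, v in removal_candidates:
    if self.is_singleton_edge(u, v):
        continue  # skip edges whose removal would create a zero-degree node
    self.remove_edge(u, v)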

Targeted Attacks

TargetedAttacker

Base class for adversarial targeted attack.

RandomAttack

Random attacker that randomly chooses edges to flip.

DICEAttack

Implementation of DICE attack from the: "Hiding Individuals and Communities in a Social Network" paper

FGAttack

Implementation of FGA attack from the: "Fast Gradient Attack on Network Embedding" paper (arXiv'18)

IGAttack

Implementation of IG-FGSM attack from the: "Adversarial Examples on Graph Data: Deep Insights into Attack and Defense" paper (IJCAI'19)

SGAttack

Implementation of SGA attack from the: "Adversarial Attack on Large Scale Graph" paper (TKDE'21)

Nettack

Implementation of Nettack attack from the: "Adversarial Attacks on Neural Networks for Graph Data" paper (KDD'18)

GFAttack

Implementation of GFA attack from the: "A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models" paper (AAAI'20)

PGDAttack

Implementation of PGD attack from the: "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective" paper (IJCAI'19)

class TargetedAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Base class for adversarial targeted attack.

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Note

greatx.attack.targeted.TargetedAttacker is a subclass of greatx.attack.FlipAttacker. It belongs to graph modification attack (GMA).

reset() TargetedAttacker[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

TargetedAttacker

attack(target, target_label, num_budgets, direct_attack, structure_attack, feature_attack) TargetedAttacker[source]

Base method that describes the adversarial targeted attack.

Parameters:
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node

  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • direct_attack (bool) – whether to conduct direct attack or indirect attack

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features
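
For any concrete targeted attacker below, these flags can be combined as in the following sketch (node ids and budgets are arbitrary):

attacker.reset()
# direct structure attack on node 1 with 5 edge perturbations
attacker.attack(target=1, num_budgets=5, direct_attack=True,
                structure_attack=True, feature_attack=False)

attacker.reset()
# indirect (influence) attack: edges incident to node 1 are not perturbed
attacker.attack(target=1, direct_attack=False)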

Check whether the edge (u,v) is legal.

For targeted attackers, an edge (u,v) is legal if u != v and the edge (u,v) has not been selected before.

In addition, if the setting is indirect attack, the targeted node is not allowed to be u or v.

Parameters:
  • u (int) – src node id

  • v (int) – dst node id

Returns:

True if u != v and edge (u,v) has not been selected before, otherwise False.

Return type:

bool

class RandomAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Random attacker that randomly chooses edges to flip.

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

import os.path as osp

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.targeted import RandomAttack
attacker = RandomAttack(data)
attacker.reset()
# attacking target node `1` with default budget set as node degree
attacker.attack(target=1)

# attacking target node `1` with budget set as 1
attacker.attack(target=1, num_budgets=1)

attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

attack(target, *, num_budgets=None, threshold=0.5, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial targeted attack.

Parameters:
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node

  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • direct_attack (bool) – whether to conduct direct attack or indirect attack

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

get_added_edge(influence_nodes: list) Optional[tuple][source]
get_removed_edge(influence_nodes: list) Optional[tuple][source]
class DICEAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of DICE attack from the: “Hiding Individuals and Communities in a Social Network” paper

DICE randomly chooses edges to flip based on the principle of “Disconnect Internally, Connect Externally” (DICE), which conducts attacks by removing edges between nodes with high correlations and connecting edges with low correlations.

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

import os.path as osp

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.targeted import DICEAttack
attacker = DICEAttack(data)
attacker.reset()
# attacking target node `1` with default budget set as node degree
attacker.attack(target=1)

# attacking target node `1` with budget set as 1
attacker.attack(target=1, num_budgets=1)

attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

get_added_edge(influence_nodes: list) Optional[tuple][source]
get_removed_edge(influence_nodes: list) Optional[tuple][source]
class FGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of FGA attack from the: “Fast Gradient Attack on Network Embedding” paper (arXiv’18)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

import os.path as osp

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.targeted import FGAttack
attacker = FGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attacking target node `1` with default budget set as node degree
attacker.attack(target=1)

attacker.reset()
# attacking target node `1` with budget set as 1
attacker.attack(target=1, num_budgets=1)

attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

This is a simple but effective attack that utilizes the gradient information of the adjacency matrix. There are several works sharing the same heuristic.

Also, please remember to call reset() before each attack.

reset()[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

TargetedAttacker

attack(target, *, target_label=None, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial targeted attack.

Parameters:
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node

  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • direct_attack (bool) – whether to conduct direct attack or indirect attack

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

structure_score(modified_adj, adj_grad, target)[source]
feature_score(modified_feat, feat_grad, target)[source]
compute_gradients(modified_adj, modified_feat, target, target_label)[source]
class IGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of IG-FGSM attack from the: “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

import os.path as osp

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.targeted import IGAttack
attacker = IGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attacking target node `1` with default budget set as node degree
attacker.attack(target=1)

# attacking target node `1` with budget set as 1
attacker.attack(target=1, num_budgets=1)

attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

attack(target, *, target_label=None, num_budgets=None, steps=20, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial targeted attack.

Parameters:
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node

  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • direct_attack (bool) – whether to conduct direct attack or indirect attack

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

get_candidate_edges()[source]
get_candidate_features()[source]
get_feature_importance(candidates, steps, target, target_label, disable=False)[source]
compute_structure_gradients(feat, adj_step, target, target_label)[source]
compute_feature_gradients(feat_step, adj, target, target_label)[source]
class SGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of SGA attack from the: “Adversarial Attack on Large Scale Graph” paper (TKDE’21)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.targeted import SGAttack
attacker = SGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attacking target node `1` with default budget set as node degree
attacker.attack(target=1)

# attacking target node `1` with budget set as 1
attacker.attack(target=1, num_budgets=1)

attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • SGAttack is a scalable attack that can be applied to large-scale graphs

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, *, tau: float = 5.0, freeze: bool = True)[source]

Method used to initialize the (trained) surrogate model.

Parameters:
  • surrogate (Module) – the input surrogate module

  • tau (float, optional) – temperature used for softmax activation, by default 5.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns:

the class itself

Return type:

Surrogate
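
A brief usage sketch (the surrogate is assumed to be an already trained model, e.g., an SGC-like module):

surrogate_model = ...  # train your surrogate model
attacker = SGAttack(data)
attacker.setup_surrogate(surrogate_model, tau=5.0, freeze=True)
attacker.reset()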

set_normalize(state)[source]
strongest_wrong_class(target, target_label)[source]
get_subgraph(target, target_label, best_wrong_label)[source]
get_top_attackers(subgraph, target, target_label, best_wrong_label, num_attackers)[source]
subgraph_processing(sub_nodes, sub_edges, influencers, attacker_nodes)[source]
attack(target, *, K: int = 2, target_label=None, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial targeted attack.

Parameters:
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node

  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • direct_attack (bool) – whether to conduct direct attack or indirect attack

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

compute_gradients(subgraph, target, target_label, best_wrong_label)[source]
class Nettack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of Nettack attack from the: “Adversarial Attacks on Neural Networks for Graph Data” paper (KDD’18)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

import os.path as osp

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.targeted import Nettack
attacker = Nettack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attacking target node `1` with default budget set as node degree
attacker.attack(target=1)

# attacking target node `1` with budget set as 1
attacker.attack(target=1, num_budgets=1)

attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate)[source]

Method used to initialize the (trained) surrogate model.

Parameters:
  • surrogate (Module) – the input surrogate module

  • tau (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns:

the class itself

Return type:

Surrogate

reset()[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

TargetedAttacker

compute_cooccurrence_constraint(nodes)[source]
gradient_wrt_x(label)[source]
compute_logits()[source]
strongest_wrong_class(logits)[source]
feature_scores()[source]
structure_score(a_hat_uv, XW)[source]
compute_XW()[source]
get_attacker_nodes(n=5, add_additional_nodes=False)[source]
compute_new_a_hat_uv(candidate_edges)[source]
get_candidate_edges(n_influencers)[source]
attack(target, *, target_label=None, num_budgets=None, n_influencers=5, direct_attack=True, structure_attack=True, feature_attack=False, ll_constraint=True, ll_cutoff=0.004, disable=False)[source]

Base method that describes the adversarial targeted attack.

Parameters:
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node

  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • direct_attack (bool) – whether to conduct direct attack or indirect attack

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

class GFAttack(data: Data, K: int = 2, T: int = 128, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of GFA attack from the: “A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models” paper (AAAI’20)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • K (int, optional) – the order of graph filter, by default 2

  • T (int, optional) – top-T largest eigen-values/vectors selected, by default 128

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

import os.path as osp

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.targeted import GFAttack
attacker = GFAttack(data)
attacker.reset()
# attacking target node `1` with default budget set as node degree
attacker.attack(target=1)

# attacking target node `1` with budget set as 1
attacker.attack(target=1, num_budgets=1)

attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • In the paper, the authors mainly consider single-edge perturbations, i.e., num_budgets=1.

  • Please remember to call reset() before each attack.

  • T=128 for Citeseer and Pubmed, and T=num_nodes//2 for Cora to reproduce the results in the paper.

get_candidate_edges()[source]
attack(target, *, num_budgets=None, direct_attack=True, structure_attack=True, feature_attack=False, ll_constraint=False, ll_cutoff=0.004, disable=False)[source]

Base method that describes the adversarial targeted attack.

Parameters:
  • target (int) – the target node to be attacked

  • target_label (int) – the label of the target node

  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • direct_attack (bool) – whether to conduct direct attack or indirect attack

  • structure_attack (bool) – whether to conduct structure attack, i.e., modify the graph structure (edges)

  • feature_attack (bool) – whether to conduct feature attack, i.e., modify the node features

static structure_score(A: csr_matrix, x_mean: Tensor, eig_vals: Tensor, eig_vec: Tensor, candidate_edges: ndarray, K: int, T: int, method: str = 'nosum')[source]

Calculate the score of potential edges as formulated in the paper.

Parameters:
  • A (sp.csr_matrix) – the graph adjacency matrix

  • x_mean (torch.Tensor) –

  • eig_vals (torch.Tensor) – the eigenvalues

  • eig_vec (torch.Tensor) – the eigenvectors

  • candidate_edges (np.ndarray) – the candidate_edges to be selected

  • K (int) – The order of graph filter K.

  • T (int) – select the top-T largest eigenvalues/eigenvectors.

  • method (str, optional) – “sum” or “nosum”, indicating which loss the scores are calculated from, as in Equation (8) or Equation (12): “nosum” denotes Equation (8), where the loss is derived from graph convolutional networks; “sum” denotes Equation (12), where the loss is derived from sampling-based graph embedding methods, by default “nosum”

Returns:

Scores for potential edges.

Return type:

Tensor

class PGDAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of PGD attack from the: “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

import os.path as osp

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.targeted import PGDAttack
attacker = PGDAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
# attacking target node `1` with default budget set as node degree
attacker.attack(target=1)

attacker.reset()
# attacking target node `1` with budget set as 1
attacker.attack(target=1, num_budgets=1)

attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack
setup_surrogate(surrogate: Module, *, tau: float = 1.0, freeze: bool = True) PGDAttack[source]

Method used to initialize the (trained) surrogate model.

Parameters:
  • surrogate (Module) – the input surrogate module

  • tau (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns:

the class itself

Return type:

Surrogate

reset() PGDAttack[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

TargetedAttacker

attack(target: int, *, target_label: Optional[int] = None, num_budgets: Optional[Union[float, int]] = None, direct_attack: bool = True, base_lr: float = 0.1, grad_clip: Optional[float] = None, epochs: int = 200, ce_loss: bool = False, sample_epochs: int = 20, structure_attack: bool = True, feature_attack: bool = False, disable: bool = False) PGDAttack[source]

Adversarial attack method for the projected gradient descent (PGD) attack.

Parameters:
  • target (int) – the target node to attack

  • target_label (Optional[int], optional) – the label of the target node, if None, it defaults to its ground truth label, by default None

  • direct_attack (bool, optional) – whether to conduct a direct attack on the target; indirect attack (direct_attack=False) is not applicable for this method, by default True

  • num_budgets (Union[int, float], optional) – the number of attack budgets, could be a float (ratio) or an int (number); if None, it defaults to the degree of the target node, by default None

  • base_lr (float, optional) – the base learning rate for PGD training, by default 0.1

  • grad_clip (float, optional) – gradient clipping for the computed gradients, by default None

  • epochs (int, optional) – the number of epochs for PGD training, by default 200

  • ce_loss (bool, optional) – whether to use cross-entropy loss (True) or margin loss (False), by default False

  • sample_epochs (int, optional) – the number of sampling epochs for learned perturbations, by default 20

  • structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges), by default True

  • feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features, N/A for this method. by default False

  • disable (bool, optional) – whether to disable the tqdm progress bar, by default False

Returns:

the attacker itself

Return type:

PGDAttack
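
A sketch of tuning the optimization-related arguments (the values shown are arbitrary):

attacker.reset()
attacker.attack(target=1, num_budgets=5,
                base_lr=0.1, epochs=200,  # PGD training schedule
                ce_loss=False,            # margin loss instead of cross-entropy
                sample_epochs=20)         # sampling epochs for learned perturbations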

Untargeted Attacks

UntargetedAttacker

Base class for adversarial non-targeted attack.

RandomAttack

Random attacker that randomly chooses edges to flip.

DICEAttack

Implementation of DICE attack from the: "Hiding Individuals and Communities in a Social Network" paper

FGAttack

Implementation of FGA attack from the: "Fast Gradient Attack on Network Embedding" paper (arXiv'18)

IGAttack

Implementation of IG-FGSM attack from the: "Adversarial Examples on Graph Data: Deep Insights into Attack and Defense" paper (IJCAI'19)

Metattack

Implementation of Metattack attack from the: "Adversarial Attacks on Graph Neural Networks via Meta Learning" paper (ICLR'19)

PGDAttack

Implementation of PGD attack from the: "Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective" paper (IJCAI'19)

class UntargetedAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Base class for adversarial non-targeted attack.

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Note

greatx.attack.untargeted.UntargetedAttacker is a subclass of greatx.attack.FlipAttacker. It belongs to graph modification attack (GMA).

reset() UntargetedAttacker[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

UntargetedAttacker

attack(num_budgets, structure_attack, feature_attack) UntargetedAttacker[source]

Base method that describes the adversarial untargeted attack.

Parameters:
  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges),

  • feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features,
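
num_budgets may be given either as a fraction of the perturbation candidates or as an absolute number; for a concrete subclass, the call looks roughly like:

attacker.reset()
attacker.attack(0.05)  # perturb 5% of the edges

attacker.reset()
attacker.attack(100, structure_attack=True)  # perturb exactly 100 edges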

class RandomAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Random attacker that randomly chooses edges to flip.

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.untargeted import RandomAttack
attacker = RandomAttack(data)
attacker.reset()
attacker.attack(0.05) # attack with 5% of edge perturbations
attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

attack(num_budgets=0.05, *, threshold=0.5, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack.

Parameters:
  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges),

  • feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features,

get_added_edge(influence_nodes: list) Optional[tuple][source]
get_removed_edge(influence_nodes: list) Optional[tuple][source]
class DICEAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of DICE attack from the: “Hiding Individuals and Communities in a Social Network” paper

DICE randomly chooses edges to flip based on the principle of “Disconnect Internally, Connect Externally” (DICE), which conducts attacks by removing edges between nodes with high correlations and connecting edges with low correlations.

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.untargeted import DICEAttack
attacker = DICEAttack(data)
attacker.reset()
attacker.attack(0.05) # attack with 5% of edge perturbations
attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

get_added_edge(influence_nodes: list) Optional[tuple][source]
get_removed_edge(influence_nodes: list) Optional[tuple][source]
class FGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of FGA attack from the: “Fast Gradient Attack on Network Embedding” paper (arXiv’18)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.untargeted import FGAttack
attacker = FGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
attacker.attack(0.05) # attack with 5% of edge perturbations
attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

This is a simple but effective attack that utilizes the gradient information of the adjacency matrix. There are several works sharing the same heuristic.

Also, please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, victim_nodes: Tensor, victim_labels: Optional[Tensor] = None, *, tau: float = 1.0)[source]

Method used to initialize the (trained) surrogate model.

Parameters:
  • surrogate (Module) – the input surrogate module

  • tau (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns:

the class itself

Return type:

Surrogate

reset()[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

UntargetedAttacker

attack(num_budgets=0.05, *, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack.

Parameters:
  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges),

  • feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features,

structure_score(modified_adj, adj_grad)[source]
feature_score(modified_feat, feat_grad)[source]
compute_gradients(modified_adj, modified_feat, victim_nodes, victim_labels)[source]
class IGAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of IG-FGSM attack from the: “Adversarial Examples on Graph Data: Deep Insights into Attack and Defense” paper (IJCAI’19)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.untargeted import IGAttack
attacker = IGAttack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
attacker.attack(0.05) # attack with 5% of edge perturbations
attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • In the paper, the IG-FGSM attack was implemented as a targeted attack; we adapt the code for the untargeted attack here.

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, victim_nodes: Tensor, victim_labels: Optional[Tensor] = None, *, tau: float = 1.0)[source]

Method used to initialize the (trained) surrogate model.

Parameters:
  • surrogate (Module) – the input surrogate module

  • tau (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns:

the class itself

Return type:

Surrogate

attack(num_budgets=0.05, *, steps=20, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack.

Parameters:
  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges),

  • feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features,

get_feature_importance(steps, victim_nodes, victim_labels, disable=False)[source]
structure_score(adj, adj_grad)[source]
feature_score(feat, feat_grad)[source]
compute_structure_gradients(adj_step, feat, victim_nodes, victim_labels)[source]
compute_feature_gradients(adj, feat_step, victim_nodes, victim_labels)[source]
class Metattack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of Metattack attack from the: “Adversarial Attacks on Graph Neural Networks via Meta Learning” paper (ICLR’19)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T
dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.untargeted import Metattack
attacker = Metattack(data)
attacker.setup_surrogate(surrogate_model)
attacker.reset()
attacker.attack(0.05) # attack with 5% of edge perturbations
attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, labeled_nodes: Tensor, unlabeled_nodes: Tensor, lr: float = 0.1, epochs: int = 100, momentum: float = 0.9, lambda_: float = 0.0, *, tau: float = 1.0)[source]

Method used to initialize the (trained) surrogate model.

Parameters:
  • surrogate (Module) – the input surrogate module

  • tau (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns:

the class itself

Return type:

Surrogate
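
A hedged sketch of the Metattack-specific setup, which additionally takes the labeled/unlabeled node splits used for the inner training (the boolean masks on data are an assumption about the dataset, not part of the API):

surrogate_model = ...  # train your surrogate model

labeled_nodes = data.train_mask.nonzero().view(-1)   # assumes a boolean train mask
unlabeled_nodes = data.test_mask.nonzero().view(-1)  # assumes a boolean test mask

attacker = Metattack(data)
attacker.setup_surrogate(surrogate_model,
                         labeled_nodes=labeled_nodes,
                         unlabeled_nodes=unlabeled_nodes,
                         lr=0.1, epochs=100)
attacker.reset()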

reset()[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

UntargetedAttacker

get_perturbed_adj(adj_changes=None)[source]
get_perturbed_feat(feat_changes=None)[source]
clip(matrix)[source]
reset_parameters()[source]
forward(adj, x)[source]
inner_train(adj, feat)[source]
attack(num_budgets=0.05, *, structure_attack=True, feature_attack=False, disable=False)[source]

Base method that describes the adversarial untargeted attack.

Parameters:
  • num_budgets (int or float) – the number/percentage of perturbations allowed to attack

  • structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges),

  • feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features,

structure_score(modified_adj, adj_grad)[source]
feature_score(modified_feat, feat_grad)[source]
compute_gradients(modified_adj, modified_feat)[source]
class PGDAttack(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of PGD attack from the: “Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective” paper (IJCAI’19)

Parameters:
  • data (Data) – PyG-like data denoting the input graph

  • device (str, optional) – the device of the attack running on, by default “cpu”

  • seed (Optional[int], optional) – the random seed for reproducing the attack, by default None

  • name (Optional[str], optional) – name of the attacker, if None, it would be __class__.__name__, by default None

  • kwargs (additional arguments of greatx.attack.Attacker,) –

Raises:

TypeError – unexpected keyword argument in kwargs

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.untargeted import PGDAttack
attacker = PGDAttack(data)
test_nodes = ... # the victim nodes, e.g., the test set nodes
attacker.setup_surrogate(surrogate_model,
                         victim_nodes=test_nodes)
attacker.reset()
attacker.attack(0.05) # attack with 5% of edge perturbations
attacker.data() # get attacked graph

attacker.edge_flips() # get edge flips after attack

attacker.added_edges() # get added edges after attack

attacker.removed_edges() # get removed edges after attack

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, victim_nodes: Tensor, ground_truth: bool = False, *, tau: float = 1.0, freeze: bool = True) PGDAttack[source]

Setup the surrogate model for adversarial attack.

Parameters:
  • surrogate (torch.nn.Module) – the surrogate model

  • victim_nodes (Tensor) – the victim nodes_set

  • ground_truth (bool, optional) – whether to use ground-truth label for victim nodes, if False, the node labels are estimated by the surrogate model, by default False

  • tau (float, optional) – the temperature of softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the surrogate model to avoid gradient accumulation, by default True

Returns:

the attacker itself

Return type:

PGDAttack

reset() PGDAttack[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

UntargetedAttacker

attack(num_budgets: Union[int, float] = 0.05, *, base_lr: float = 0.1, grad_clip: Optional[float] = None, epochs: int = 200, ce_loss: bool = False, sample_epochs: int = 20, structure_attack: bool = True, feature_attack: bool = False, disable: bool = False) PGDAttack[source]

Adversarial attack method for the projected gradient descent (PGD) attack.

Parameters:
  • num_budgets (Union[int, float], optional) – the number of attack budgets, could be a float (ratio) or an int (number), by default 0.05

  • base_lr (float, optional) – the base learning rate for PGD training, by default 0.1

  • grad_clip (float, optional) – gradient clipping for the computed gradients, by default None

  • epochs (int, optional) – the number of epochs for PGD training, by default 200

  • ce_loss (bool, optional) – whether to use cross-entropy loss (True) or margin loss (False), by default False

  • sample_epochs (int, optional) – the number of sampling epochs for learned perturbations, by default 20

  • structure_attack (bool, optional) – whether to conduct structure attack, i.e., modify the graph structure (edges), by default True

  • feature_attack (bool, optional) – whether to conduct feature attack, i.e., modify the node features, N/A for this method. by default False

  • disable (bool, optional) – whether to disable the tqdm progress bar, by default False

Returns:

the attacker itself

Return type:

PGDAttack

Injection Attacks

InjectionAttacker

Base class for injection attackers; an inherited attacker should implement the attack method.

RandomInjection

Inject nodes into a graph randomly.

AdvInjection

2nd place solution of KDD CUP 2020 "Adversarial attack and defense" challenge.

class InjectionAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Base class for injection attackers; an inherited attacker should implement the attack method.

Example

attacker = InjectionAttacker(data)
attacker.reset()

# inject 10 nodes, where each node has 2 edges
attacker.attack(num_budgets=10, num_edges_local=2)

# inject 10 nodes, with 100 edges in total
attacker.attack(num_budgets=10, num_edges_global=100)

# inject 10 nodes, where each node has 2 edges,
# and the features of injected nodes lie in [0, 1]
attacker.attack(num_budgets=10, num_edges_local=2, feat_limits=(0,1))
attacker.attack(num_budgets=10, num_edges_local=2,
                feat_limits={'min': 0, 'max':1})

# inject 10 nodes, where each node has 2 edges,
# and the features of each injected node have 10 nonzero elements
attacker.attack(num_budgets=10, num_edges_local=2, feat_budgets=10)

# get injected nodes
attacker.injected_nodes()

# get injected edges
attacker.injected_edges()

# get injected nodes' features
attacker.injected_feats()

# get perturbed graph
attacker.data()
reset() InjectionAttacker[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

InjectionAttacker

attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None) InjectionAttacker[source]

Base method that describes the adversarial injection attack

Parameters:
  • num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject

  • targets (Optional[Tensor], optional) – the target nodes that the injected nodes will perturb; if None, all nodes in the graph are used, by default None

  • num_edges_global (Optional[int], optional) – the total number of edges to be injected for all injected nodes, by default None

  • num_edges_local (Optional[int], optional) – the number of edges allowed to be injected for each injected node, by default None

  • feat_limits (Optional[Union[tuple, dict]], optional) – the limitation or allowed budgets of injected node features, it can be a tuple, e.g., (0, 1) or a dict, e.g., {‘min’:0, ‘max’: 1}. if None, it is set as (self.feat.min(), self.feat.max()), by default None

  • feat_budgets (Optional[int], optional) – the number of nonzero features can be injected for each node, e.g., 10, denoting 10 nonzero features can be injected, by default None

Returns:

the attacker itself

Return type:

InjectionAttacker

Note

  • num_edges_local and num_edges_global cannot be used simultaneously.

  • feat_limits and feat_budgets cannot be used simultaneously.

injected_nodes() Optional[Tensor][source]

Get all the nodes to be injected.

added_nodes() Optional[Tensor][source]

alias of method injected_nodes

injected_edges() Optional[Tensor][source]

Get all the edges to be injected.

added_edges() Optional[Tensor][source]

alias of method injected_edges

edge_flips() BunchDict[source]

Get all the edges to be flipped, including edges to be added and removed.

injected_feats() Optional[Tensor][source]

Get the features of the injected nodes.

added_feats() Optional[Tensor][source]

alias of method injected_feats

inject_node(node)[source]

Inject a node into the graph.

inject_edge(u: int, v: int)[source]

Inject an edge to the graph.

Parameters:
  • u (int) – The source node of the edge.

  • v (int) – The destination node of the edge.

inject_edges(edges: Union[Tensor, List])[source]

Inject a set of edges to the graph.

Parameters:

edges (Union[Tensor, List]) – The newly injected edges.

inject_feat(feat: Optional[Tensor] = None)[source]

Generate a feature vector for a newly injected node.

Parameters:

feat (Optional[Tensor], optional) – the injected feature. If None, it would be randomly generated, by default None
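
The helpers inject_node(), inject_edge() and inject_feat() can also be driven by hand, e.g., inside a custom attack loop. The node-id convention below (injected nodes are numbered after the existing ones) is an assumption; check how your attacker assigns ids to injected nodes.

new_node = data.num_nodes            # assumed id convention: append after existing nodes

attacker.inject_node(new_node)       # register the injected node
attacker.inject_edge(new_node, 0)    # connect it to node 0
attacker.inject_edge(new_node, 7)    # ... and to node 7
attacker.inject_feat()               # None -> a randomly generated feature vector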

data(symmetric: bool = True) Data[source]

Return the attacked graph.

Parameters:

symmetric (bool) – determine whether the resulting graph is forcibly symmetric, by default True

Returns:

the attacked graph represented as PyG-like data

Return type:

Data
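
A quick sanity check after any injection attack: compare the returned graph against the original using the standard PyG num_nodes and num_edges attributes.

perturbed = attacker.data()                    # symmetric=True by default
print(perturbed.num_nodes - data.num_nodes)    # number of injected nodes
print(perturbed.num_edges - data.num_edges)    # number of injected (directed) edges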

class RandomInjection(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Inject nodes into a graph randomly.

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

from greatx.attack.injection import RandomInjection
attacker = RandomInjection(data)

attacker.reset()
# injecting 10 nodes for continuous features
attacker.attack(10, feat_limits=(0, 1))

attacker.reset()
# injecting 10 nodes for binary features
attacker.attack(10, feat_budgets=10)

attacker.data() # get attacked graph

attacker.injected_nodes() # get injected nodes after attack

attacker.injected_edges() # get injected edges after attack

attacker.injected_feats() # get injected features after attack

Note

  • Please remember to call reset() before each attack.

attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, interconnection: bool = False, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None, disable: bool = False) RandomInjection[source]

Base method that describes the adversarial injection attack

Parameters:
  • num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject

  • targets (Optional[Tensor], optional) – the target nodes that the injected nodes aim to perturb; if None, all nodes in the graph are used, by default None

  • interconnection (bool, optional) – whether the injected nodes can connect to each other, by default False

  • num_edges_global (Optional[int], optional) – the total number of edges to be injected for all injected nodes, by default None

  • num_edges_local (Optional[int], optional) – the number of edges allowed to inject for each injected node, by default None

  • feat_limits (Optional[Union[tuple, dict]], optional) – the limits or allowed budgets of injected node features; it can be a tuple, e.g., (0, 1), or a dict, e.g., {'min': 0, 'max': 1}. If None, it is set as (self.feat.min(), self.feat.max()), by default None

  • feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, denoting that 10 nonzero features can be injected, by default None

  • disable (bool, optional) – whether to disable the tqdm progress bar, by default False

Returns:

the attacker itself

Return type:

RandomInjection

Note

  • num_edges_local and num_edges_global cannot be used simultaneously.

  • feat_limits and feat_budgets cannot be used simultaneously.
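
A short sketch of the targets and interconnection arguments above. The test_mask attribute is an assumption about your dataset; substitute any node index tensor.

import torch
from greatx.attack.injection import RandomInjection

attacker = RandomInjection(data)
attacker.reset()

# assumed: `data` carries a boolean `test_mask`
targets = torch.where(data.test_mask)[0]

attacker.attack(0.05,                  # inject 5% as many nodes as the graph has
                targets=targets,       # wire injected nodes only to these targets
                interconnection=True,  # also allow injected-injected edges
                num_edges_local=2,
                feat_limits=(0, 1))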

class AdvInjection(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

2nd place solution of KDD CUP 2020 “Adversarial attack and defense” challenge.

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.injection import AdvInjection
attacker = AdvInjection(data)
attacker.setup_surrogate(surrogate_model)

attacker.reset()
# injecting 10 nodes for continuous features
attacker.attack(10, feat_limits=(0, 1))

attacker.reset()
# injecting 10 nodes for binary features
attacker.attack(10, feat_budgets=10)

attacker.data() # get attacked graph

attacker.injected_nodes() # get injected nodes after attack

attacker.injected_edges() # get injected edges after attack

attacker.injected_feats() # get injected features after attack

Note

  • Please remember to call reset() before each attack.

attack(num_budgets: Union[int, float], *, targets: Optional[Tensor] = None, interconnection: bool = False, lr: float = 0.1, num_edges_global: Optional[int] = None, num_edges_local: Optional[int] = None, feat_limits: Optional[Union[tuple, dict]] = None, feat_budgets: Optional[int] = None, disable: bool = False) AdvInjection[source]

Base method that describes the adversarial injection attack

Parameters:
  • num_budgets (Union[int, float]) – the number/percentage of nodes allowed to inject

  • targets (Optional[Tensor], optional) – the target nodes that the injected nodes aim to perturb; if None, all nodes in the graph are used, by default None

  • interconnection (bool, optional) – whether the injected nodes can connect to each other, by default False

  • num_edges_global (Optional[int], optional) – the total number of edges to be injected for all injected nodes, by default None

  • num_edges_local (Optional[int], optional) – the number of edges allowed to inject for each injected node, by default None

  • feat_limits (Optional[Union[tuple, dict]], optional) – the limits or allowed budgets of injected node features; it can be a tuple, e.g., (0, 1), or a dict, e.g., {'min': 0, 'max': 1}. If None, it is set as (self.feat.min(), self.feat.max()), by default None

  • feat_budgets (Optional[int], optional) – the number of nonzero features that can be injected for each node, e.g., 10, denoting that 10 nonzero features can be injected, by default None

  • disable (bool, optional) – whether to disable the tqdm progress bar, by default False

Returns:

the attacker itself

Return type:

AdvInjection

Note

  • num_edges_local and num_edges_global cannot be used simultaneously.

  • feat_limits and feat_budgets cannot be used simultaneously.

compute_gradients(x, edge_index, edge_weight, injected_feats, injected_edge_index, injected_edge_weight, targets, target_labels)[source]
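
For intuition only, a conceptual sketch of the gradient-based recipe that helpers like compute_gradients feed into: relax the candidate injected-edge weights to be continuous, backpropagate the attack loss, and keep the connections with the largest gradients. This is not AdvInjection's internal code.

import torch

w = torch.rand(20, requires_grad=True)      # stand-in: candidate injected-edge weights
loss = (w * torch.randn(20)).sum()          # stand-in attack loss on the surrogate
grad = torch.autograd.grad(loss, w)[0]      # gradient of the loss w.r.t. each candidate edge
best = torch.topk(grad, k=2).indices        # edges whose activation most increases the loss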

Backdoor Attacks

BackdoorAttacker

Base class for backdoor attacks.

FGBackdoor

Implementation of GB-FGSM attack from the: "Neighboring Backdoor Attacks on Graph Convolutional Network" paper (arXiv'22)

LGCBackdoor

Implementation of LGCB attack from the: "Neighboring Backdoor Attacks on Graph Convolutional Network" paper (arXiv'22)

class BackdoorAttacker(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Base class for backdoor attacks.

reset() BackdoorAttacker[source]

Reset the state of the Attacker

Returns:

the attacker itself

Return type:

BackdoorAttacker

attack(num_budgets: Union[int, float], targets_class: int) BackdoorAttacker[source]

Base method that describes the adversarial backdoor attack

trigger() Tensor[source]
data(target_node: int, symmetric: bool = True) Data[source]

Return the attacked graph.

Parameters:
  • target_node (int) – the target node on which the attack is performed

  • symmetric (bool) – determine whether the resulting graph is forcibly symmetric, by default True

Returns:

the attacked graph with backdoor attack performed on the target node

Return type:

Data
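
A hedged sketch of how the per-node data() call above is typically used after an attack: attach the trigger to one target node and check whether a trained victim model now predicts the target class. The victim_model name and its (x, edge_index) call signature are assumptions.

import torch

target_node = 42                                 # hypothetical target
perturbed = attacker.data(target_node)           # graph with the trigger attached to node 42

victim_model.eval()
with torch.no_grad():
    logits = victim_model(perturbed.x, perturbed.edge_index)  # assumed signature
pred = logits[target_node].argmax().item()
print(f"prediction for node {target_node} after the backdoor attack: {pred}")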

class FGBackdoor(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of GB-FGSM attack from the: “Neighboring Backdoor Attacks on Graph Convolutional Network” paper (arXiv’22)

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.backdoor import FGBackdoor
attacker = FGBackdoor(data)
attacker.setup_surrogate(surrogate_model)

attacker.reset()
attacker.attack(num_budgets=50, target_class=0)

attacker.data() # get attacked graph

attacker.trigger() # get trigger node

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module, *, tau: float = 1.0) FGBackdoor[source]

Method used to initialize the (trained) surrogate model.

Parameters:
  • surrogate (Module) – the input surrogate module

  • tau (float, optional) – temperature used for softmax activation, by default 1.0

  • freeze (bool, optional) – whether to freeze the model’s parameters to save time, by default True

  • required (Union[Module, Tuple[Module]], optional) – which class(es) of the surrogate model are required, by default None

Returns:

the class itself

Return type:

Surrogate

attack(num_budgets: Union[int, float], target_class: int, disable: bool = False) FGBackdoor[source]

Base method that describes the adversarial backdoor attack

class LGCBackdoor(data: Data, device: str = 'cpu', seed: Optional[int] = None, name: Optional[str] = None, **kwargs)[source]

Implementation of LGCB attack from the: “Neighboring Backdoor Attacks on Graph Convolutional Network” paper (arXiv’22)

Example

from greatx.dataset import GraphDataset
import torch_geometric.transforms as T

dataset = GraphDataset(root='.', name='Cora',
                        transform=T.LargestConnectedComponents())
data = dataset[0]

surrogate_model = ... # train your surrogate model

from greatx.attack.backdoor import LGCBackdoor
attacker = LGCBackdoor(data)
attacker.setup_surrogate(surrogate_model)

attacker.reset()
attacker.attack(num_budgets=50, target_class=0)

attacker.data() # get attacked graph

attacker.trigger() # get trigger node

Note

  • Please remember to call reset() before each attack.

setup_surrogate(surrogate: Module) LGCBackdoor[source]
attack(num_budgets: Union[int, float], target_class: int, disable: bool = False) LGCBackdoor[source]

Base method that describes the adversarial backdoor attack

static get_feat_perturbations(W: Tensor, target_class: int, num_budgets: int) Tensor[source]
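
An illustrative approximation of what such a helper can compute under a linearized surrogate: score each feature by how strongly its weight favors the target class over the other classes, and keep the num_budgets highest-scoring indices as the trigger features. This is a sketch for intuition, not the library's exact rule.

import torch

def topk_trigger_feats(W: torch.Tensor, target_class: int, num_budgets: int) -> torch.Tensor:
    # W: (num_feats, num_classes) weight matrix of a linear(ized) surrogate
    num_classes = W.size(1)
    others = (W.sum(dim=1) - W[:, target_class]) / (num_classes - 1)  # mean weight of the other classes
    score = W[:, target_class] - others        # how much each feature favors the target class
    return torch.topk(score, num_budgets).indices

# toy usage with random weights (1433 features, 7 classes, as in Cora-sized settings)
feats = topk_trigger_feats(torch.randn(1433, 7), target_class=0, num_budgets=10)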