# Pruning Graphs by Adversarial Robustness Evaluation to Strengthen GNN Defenses
## Introduction
Graph Neural Networks (GNNs) have emerged as a dominant paradigm for learning on graph-structured data, thanks to their ability to jointly exploit node features and relational information encoded in the graph topology. However, this joint modeling also introduces a critical weakness: perturbations or noise in either the structure or the features can be amplified through message passing, making GNNs highly vulnerable to adversarial attacks and spurious connections.
## Background Problem
The vulnerability of GNNs to perturbations is not easily patched away, because it stems from the core design of the model. GNNs exploit the network topology to improve learning, but the same message-passing mechanism also propagates the effect of spurious or adversarially inserted edges across the graph, compromising the stability of the model.
## Proposal for Solution
In this work, we present a pruning framework that leverages adversarial robustness evaluation to explicitly identify and remove fragile or detrimental components of the graph. By using robustness scores as guidance, our method selectively prunes edges that are most likely to degrade model reliability, thereby yielding cleaner and more resilient graph representations.
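A minimal sketch of the core pruning step, assuming a per-edge robustness score is already available (the scoring function itself, the `prune_ratio` parameter, and the quantile-threshold rule are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def prune_edges_by_robustness(edge_index, robustness_scores, prune_ratio=0.2):
    """Drop the least robust fraction of edges.

    edge_index        : (2, E) array of source/target node indices.
    robustness_scores : (E,) array; higher means more robust
                        (a hypothetical score from an adversarial evaluation).
    prune_ratio       : fraction of edges to remove (assumed hyperparameter).
    Returns the pruned edge index and the boolean keep-mask.
    """
    # Edges scoring below the prune_ratio quantile are treated as fragile.
    threshold = np.quantile(robustness_scores, prune_ratio)
    keep = robustness_scores >= threshold
    return edge_index[:, keep], keep

# Toy graph: 5 directed edges with assumed robustness scores.
edges = np.array([[0, 1, 2, 3, 4],
                  [1, 2, 3, 4, 0]])
scores = np.array([0.9, 0.1, 0.8, 0.3, 0.7])
pruned, mask = prune_edges_by_robustness(edges, scores, prune_ratio=0.4)
# The two lowest-scoring edges (0.1 and 0.3) are removed.
```

The pruned edge index can then be fed back into any standard GNN, which is what makes a score-guided filter like this architecture-agnostic.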
## Experiments
We conducted an in-depth analysis of three representative GNN architectures and performed a series of experiments on benchmarks. The results show that our approach can significantly enhance the defense capability of GNNs in the high-perturbation regime.