Transformer with Sparse Adaptive Mask for Network Dismantling

Abstract

Network dismantling aims to remove the smallest number of critical nodes so as to decompose a network into many small subnetworks. Recent approaches design task-oriented neural models that encode nodes' structural features to predict their importance. Instead of crafting small models, an interesting question is whether and how large models such as Transformers can be exploited for this classic yet NP-hard task in the network science domain. This paper provides an affirmative answer. The key lies in enabling a Transformer to encode node representations based on comparisons of their importance to network integrity. In this paper, we propose to encode node egonet characteristics as well as inter-node spatial dependencies from a global view. Furthermore, for each node encoding, we include peer attention to enable network-wide importance comparison. A new fusion module with a sparse adaptive mask is integrated into the Transformer architecture to encode each node's comparative importance to network integrity. Experiments on real-world and synthetic networks validate the effectiveness of our design against state-of-the-art schemes. The source code and datasets are available at: https://github.com/valyentine/TSAM.
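The abstract does not give implementation details of the sparse adaptive mask; as a rough intuition only, one way to sparsify peer attention is to let each node attend to a limited set of high-scoring peers and mask out the rest before the softmax. The sketch below (all names hypothetical, using a fixed top-k rule in place of the paper's learned adaptive mask) illustrates the general idea:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax; masked (-inf) entries get zero weight.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_masked_attention(Q, K, V, keep_ratio=0.25):
    """Scaled dot-product attention where each query (node) attends only to
    its top-scoring peers; remaining scores are masked out before softmax.

    This is a simplified stand-in for a learned sparse adaptive mask:
    it uses a fixed per-row top-k threshold rather than a trained module.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)              # (n, n) pairwise comparison scores
    k = max(1, int(keep_ratio * K.shape[0]))   # number of peers each node keeps
    # Per-row threshold: the k-th largest score; everything below is masked.
    thresh = np.sort(scores, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= thresh, scores, -np.inf)
    return softmax(masked, axis=-1) @ V
```

With `keep_ratio=0.25` each node aggregates information from only a quarter of its peers, which keeps the network-wide importance comparison while reducing attention to the most relevant nodes.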

Cite

Text

Liu et al. "Transformer with Sparse Adaptive Mask for Network Dismantling." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025. doi:10.1007/978-3-032-06106-5_31

Markdown

[Liu et al. "Transformer with Sparse Adaptive Mask for Network Dismantling." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2025.](https://mlanthology.org/ecmlpkdd/2025/liu2025ecmlpkdd-transformer/) doi:10.1007/978-3-032-06106-5_31

BibTeX

@inproceedings{liu2025ecmlpkdd-transformer,
  title     = {{Transformer with Sparse Adaptive Mask for Network Dismantling}},
  author    = {Liu, Yuhua and Hu, Fanghao and Huang, Haojun and Wang, Bang},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2025},
  pages     = {533--550},
  doi       = {10.1007/978-3-032-06106-5_31},
  url       = {https://mlanthology.org/ecmlpkdd/2025/liu2025ecmlpkdd-transformer/}
}