Lambda: Learning Matchable Prior for Entity Alignment with Unlabeled Dangling Cases

Abstract

We investigate the entity alignment (EA) problem with unlabeled dangling cases: some entities in one knowledge graph (KG) have no counterparts in the other, yet these dangling entities are unlabeled. The problem arises when the source and target graphs differ in scale, and it is much cheaper to label the matchable pairs than the dangling entities. To address this challenge, we propose *Lambda*, a framework for dangling detection and entity alignment. Lambda features a GNN-based encoder called KEESA with a spectral contrastive learning loss for EA, and a positive-unlabeled (PU) learning algorithm called iPULE for dangling detection. Our dangling detection module offers theoretical guarantees of unbiasedness, uniform deviation bounds, and convergence. Experimental results demonstrate that each component contributes to overall performance superior to the baselines, even when the baselines additionally exploit 30% of dangling entities labeled for training.

Cite

Text

Yin et al. "Lambda: Learning Matchable Prior for Entity Alignment with Unlabeled Dangling Cases." Neural Information Processing Systems, 2024. doi:10.52202/079017-2507

Markdown

[Yin et al. "Lambda: Learning Matchable Prior for Entity Alignment with Unlabeled Dangling Cases." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/yin2024neurips-lambda/) doi:10.52202/079017-2507

BibTeX

@inproceedings{yin2024neurips-lambda,
  title     = {{Lambda: Learning Matchable Prior for Entity Alignment with Unlabeled Dangling Cases}},
  author    = {Yin, Hang and Xiang, Liyao and Ding, Dong and He, Yuheng and Wu, Yihan and Chu, Pengzhi and Wang, Xinbing and Zhou, Chenghu},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2507},
  url       = {https://mlanthology.org/neurips/2024/yin2024neurips-lambda/}
}