Welfare Loss in Connected Resource Allocation

Abstract

Explainable artificial intelligence (XAI) is at the core of trustworthy AI. The best-known methods of XAI are sub-symbolic. Unfortunately, these methods do not give guarantees of rigor. Logic-based XAI addresses the lack of rigor of sub-symbolic methods, but in turn it exhibits some drawbacks. These include scalability and explanation size, but also the need to access the details of the machine learning (ML) model. Furthermore, access to the details of an ML model may reveal sensitive information. This paper builds on recent work on symbolic model-agnostic XAI, which is based on explaining samples of behavior of a black-box ML model, and proposes efficient algorithms for the computation of explanations. The experiments confirm the scalability of the novel algorithms.

Cite

Text

Bei et al. "Welfare Loss in Connected Resource Allocation." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/294

Markdown

[Bei et al. "Welfare Loss in Connected Resource Allocation." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/bei2024ijcai-welfare/) doi:10.24963/ijcai.2024/294

BibTeX

@inproceedings{bei2024ijcai-welfare,
  title     = {{Welfare Loss in Connected Resource Allocation}},
  author    = {Bei, Xiaohui and Lam, Alexander and Lu, Xinhang and Suksompong, Warut},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {2660--2668},
  doi       = {10.24963/ijcai.2024/294},
  url       = {https://mlanthology.org/ijcai/2024/bei2024ijcai-welfare/}
}