A Knowledge Distillation-Based Approach to Enhance Transparency of Classifier Models

Cite

Text

Jiang et al. "A Knowledge Distillation-Based Approach to Enhance Transparency of Classifier Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I17.33941

Markdown

[Jiang et al. "A Knowledge Distillation-Based Approach to Enhance Transparency of Classifier Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/jiang2025aaai-knowledge/) doi:10.1609/AAAI.V39I17.33941

BibTeX

@inproceedings{jiang2025aaai-knowledge,
  title     = {{A Knowledge Distillation-Based Approach to Enhance Transparency of Classifier Models}},
  author    = {Jiang, Yuchen and Zhao, Xinyuan and Wu, Yihang and Chaddad, Ahmad},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {17653--17661},
  doi       = {10.1609/AAAI.V39I17.33941},
  url       = {https://mlanthology.org/aaai/2025/jiang2025aaai-knowledge/}
}