Knowledge-Based Residual Learning

Abstract

Small data has been a barrier for many machine learning tasks, especially in scientific domains. Fortunately, domain knowledge can make up for the lack of data. Hence, in this paper, we propose a hybrid model, KRL, that treats a domain-knowledge model as a weak learner and uses a neural network to boost it. We prove that KRL is guaranteed to improve over both the pure domain-knowledge model and the pure neural network under certain loss functions. Extensive experiments show that KRL outperforms the baselines, and several case studies illustrate how domain knowledge assists the prediction.
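The core idea of the abstract, boosting a domain-knowledge model with a neural network trained on its residuals, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the linear "knowledge model", the synthetic quadratic data, and the tiny hand-rolled MLP are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the true relation has a quadratic term, but the
# (hypothetical) domain-knowledge model only captures the linear part.
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 0.5 * X[:, 0] ** 2 + rng.normal(0, 0.05, size=200)

def knowledge_model(X):
    """Stand-in domain-knowledge model: a fixed linear law y = 2x."""
    return 2.0 * X[:, 0]

# Residuals the knowledge model fails to explain.
residuals = y - knowledge_model(X)

# Tiny one-hidden-layer MLP fit to the residuals by gradient descent
# (a stand-in for the neural-net booster; the architecture is an assumption).
W1 = rng.normal(0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - residuals                # gradient of squared error
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = err[:, None] @ W2.T * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def krl_predict(X):
    """Hybrid prediction: knowledge model plus learned residual correction."""
    h = np.tanh(X @ W1 + b1)
    return knowledge_model(X) + (h @ W2 + b2).ravel()

mse_knowledge = np.mean((knowledge_model(X) - y) ** 2)
mse_hybrid = np.mean((krl_predict(X) - y) ** 2)
```

On this toy problem the hybrid's training error drops below the knowledge model's, since the network absorbs the unexplained quadratic residual; the paper's formal guarantee about improving over both base models is proved under specific loss functions, not demonstrated here.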

Cite

Text

Zheng et al. "Knowledge-Based Residual Learning." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/228

Markdown

[Zheng et al. "Knowledge-Based Residual Learning." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/zheng2021ijcai-knowledge/) doi:10.24963/IJCAI.2021/228

BibTeX

@inproceedings{zheng2021ijcai-knowledge,
  title     = {{Knowledge-Based Residual Learning}},
  author    = {Zheng, Guanjie and Liu, Chang and Wei, Hua and Jenkins, Porter and Chen, Chacha and Wen, Tao and Li, Zhenhui},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {1653--1659},
  doi       = {10.24963/IJCAI.2021/228},
  url       = {https://mlanthology.org/ijcai/2021/zheng2021ijcai-knowledge/}
}