RES: An Interpretable Replicability Estimation System for Research Publications

Abstract

Reliable and faithful research is the cornerstone of breakthrough advancements and disruptive innovations. Assessing the credibility of scientific findings and claims in research publications has long been a time-consuming and challenging task for researchers and decision-makers. In this paper, we introduce RES, an intelligent system that helps humans analyze the credibility of scientific findings and claims in social and behavioral science publications by estimating their replicability. The RES pipeline consists of four major modules that perform feature extraction, replicability estimation, result explanation, and sentiment analysis, respectively. Our evaluation, based on human experts' assessments, suggests that RES achieves adequate performance. RES also provides a graphical user interface (GUI) that is publicly accessible at https://tamu-infolab.github.io/RES/.
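
The abstract only names the four pipeline modules; the following is a minimal, hypothetical Python sketch of how such a four-stage pipeline could be chained together. All function names, the placeholder feature set, and the heuristic scoring logic are illustrative assumptions and not the authors' actual implementation.

    from dataclasses import dataclass
    from typing import Dict

    # Hypothetical feature container; the paper does not specify the exact feature set.
    @dataclass
    class PaperFeatures:
        text_features: Dict[str, float]

    def extract_features(paper_text: str) -> PaperFeatures:
        """Module 1: derive simple illustrative features from the raw paper text."""
        tokens = paper_text.split()
        return PaperFeatures(text_features={"num_tokens": float(len(tokens))})

    def estimate_replicability(features: PaperFeatures) -> float:
        """Module 2: map features to a replicability score in [0, 1].

        Placeholder heuristic only; the real system would use a trained model.
        """
        return min(1.0, features.text_features.get("num_tokens", 0.0) / 10_000.0)

    def explain_result(features: PaperFeatures, score: float) -> str:
        """Module 3: produce a human-readable explanation of the estimate."""
        return (f"Estimated replicability {score:.2f} "
                f"based on {len(features.text_features)} text feature(s).")

    def analyze_sentiment(paper_text: str) -> str:
        """Module 4: coarse sentiment label for the paper's claims (placeholder logic)."""
        return "positive" if "significant" in paper_text.lower() else "neutral"

    def run_pipeline(paper_text: str) -> dict:
        """Chain the four modules in the order described in the abstract."""
        features = extract_features(paper_text)
        score = estimate_replicability(features)
        return {
            "score": score,
            "explanation": explain_result(features, score),
            "sentiment": analyze_sentiment(paper_text),
        }

    if __name__ == "__main__":
        print(run_pipeline("We report a statistically significant effect of X on Y."))

This is only meant to show the module boundaries suggested by the abstract; each stage in the real system would be backed by its own model or analysis component.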

Cite

Text

Wang et al. "RES: An Interpretable Replicability Estimation System for Research Publications." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I11.21737

Markdown

[Wang et al. "RES: An Interpretable Replicability Estimation System for Research Publications." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/wang2022aaai-res/) doi:10.1609/AAAI.V36I11.21737

BibTeX

@inproceedings{wang2022aaai-res,
  title     = {{RES: An Interpretable Replicability Estimation System for Research Publications}},
  author    = {Wang, Zhuoer and Feng, Qizhang and Chatterjee, Mohinish and Zhao, Xing and Liu, Yezi and Li, Yuening and Singh, Abhay Kumar and Shipman, III, Frank M. and Hu, Xia and Caverlee, James},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {13230--13232},
  doi       = {10.1609/AAAI.V36I11.21737},
  url       = {https://mlanthology.org/aaai/2022/wang2022aaai-res/}
}