TuringBox: An Experimental Platform for the Evaluation of AI Systems
Abstract
We introduce TuringBox, a platform to democratize the study of AI. On one side of the platform, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks to evaluate and characterize the outputs of those algorithms. We outline the architecture of such a platform and describe two interactive case studies of algorithmic auditing on the platform.
Cite
Text
Epstein et al. "TuringBox: An Experimental Platform for the Evaluation of AI Systems." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/851
Markdown
[Epstein et al. "TuringBox: An Experimental Platform for the Evaluation of AI Systems." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/epstein2018ijcai-turingbox/) doi:10.24963/IJCAI.2018/851
BibTeX
@inproceedings{epstein2018ijcai-turingbox,
title = {{TuringBox: An Experimental Platform for the Evaluation of AI Systems}},
author = {Epstein, Ziv and Payne, Blakeley H. and Shen, Judy Hanwen and Hong, Casey Jisoo and Felbo, Bjarke and Dubey, Abhimanyu and Groh, Matthew and Obradovich, Nick and Cebrián, Manuel and Rahwan, Iyad},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {5826-5828},
doi = {10.24963/IJCAI.2018/851},
url = {https://mlanthology.org/ijcai/2018/epstein2018ijcai-turingbox/}
}