Principles for Responsible AI Consciousness Research
Abstract
Recent research suggests that it may be possible to build conscious AI systems now or in the near future. Conscious AI systems would arguably deserve moral consideration, and it may be the case that large numbers of conscious systems could be created and caused to suffer. Furthermore, AI systems or AI-generated characters may increasingly give the impression of being conscious, leading to debate about their moral status. Organisations involved in AI research must establish principles and policies to guide research and deployment choices and public communication concerning consciousness. Even if an organisation chooses not to study AI consciousness as such, it will still need policies in place, as those developing advanced AI systems risk inadvertently creating conscious entities. Responsible research and deployment practices are essential to address this possibility. We propose five principles for responsible research and argue that research organisations should make voluntary, public commitments to principles along these lines. Our principles concern research objectives and procedures, knowledge sharing and public communications. This article appears in the AI & Society track.
Cite
Text

Butlin and Lappas. "Principles for Responsible AI Consciousness Research." Journal of Artificial Intelligence Research, 2025. doi:10.1613/JAIR.1.17310

Markdown

[Butlin and Lappas. "Principles for Responsible AI Consciousness Research." Journal of Artificial Intelligence Research, 2025.](https://mlanthology.org/jair/2025/butlin2025jair-principles/) doi:10.1613/JAIR.1.17310

BibTeX
@article{butlin2025jair-principles,
title = {{Principles for Responsible AI Consciousness Research}},
author = {Butlin, Patrick and Lappas, Theodoros},
journal = {Journal of Artificial Intelligence Research},
year = {2025},
pages = {1673-1690},
doi = {10.1613/JAIR.1.17310},
volume = {82},
url = {https://mlanthology.org/jair/2025/butlin2025jair-principles/}
}