Human Control: Definitions and Algorithms
Abstract
How can humans stay in control of advanced artificial intelligence systems? One proposal is corrigibility, which requires the agent to follow the instructions of a human overseer, without inappropriately influencing them. In this paper, we formally define a variant of corrigibility called shutdown instructability, and show that it implies appropriate shutdown behavior, retention of human autonomy, and avoidance of user harm. We also analyse the related concepts of non-obstruction and shutdown alignment, three previously proposed algorithms for human control, and one new algorithm.
Cite
Text
Carey and Everitt. "Human Control: Definitions and Algorithms." Uncertainty in Artificial Intelligence, 2023.

Markdown
[Carey and Everitt. "Human Control: Definitions and Algorithms." Uncertainty in Artificial Intelligence, 2023.](https://mlanthology.org/uai/2023/carey2023uai-human/)

BibTeX
@inproceedings{carey2023uai-human,
title = {{Human Control: Definitions and Algorithms}},
author = {Carey, Ryan and Everitt, Tom},
booktitle = {Uncertainty in Artificial Intelligence},
year = {2023},
pages = {271--281},
volume = {216},
url = {https://mlanthology.org/uai/2023/carey2023uai-human/}
}