Usage Governance Advisor: From Intent to AI Governance
Abstract
Bringing a new AI system into a production environment involves multiple stakeholders, such as business owners, risk officers, and ethics officers, approving the AI system for a specific usage. Governance frameworks typically include multiple manual steps, including curating the information needed to assess risks and reviewing outcomes to identify appropriate actions and governance strategies. We demo a human-in-the-loop automation system that takes a natural language description of an intended use case for an AI system, creates semi-structured governance information, recommends the most appropriate model for that use case, prioritises the risks to be evaluated, automatically runs those evaluations, and finally stores the results for auditing, reporting, and future recommendations. As a result, we increase transparency for stakeholders and provide valuable information to aid decision making when assessing the risks associated with an AI solution.
Cite
Text
Daly et al. "Usage Governance Advisor: From Intent to AI Governance." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I28.35348

Markdown
[Daly et al. "Usage Governance Advisor: From Intent to AI Governance." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/daly2025aaai-usage/) doi:10.1609/AAAI.V39I28.35348

BibTeX
@inproceedings{daly2025aaai-usage,
title = {{Usage Governance Advisor: From Intent to AI Governance}},
author = {Daly, Elizabeth M. and Tirupathi, Seshu and Rooney, Sean and Vejsbjerg, Inge and Salwala, Dhaval and Giblin, Christopher and Bagehorn, Frank and Garcés-Erice, Luis and Urbanetz, Peter and Wolf-Bauwens, Mira L.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {29628--29630},
doi = {10.1609/AAAI.V39I28.35348},
url = {https://mlanthology.org/aaai/2025/daly2025aaai-usage/}
}