Negotiative Alignment: An Interactive Approach to Human-AI Co-Adaptation for Clinical Applications
Abstract
We introduce a conceptual framework for ***negotiative alignment*** in high-stakes clinical AI, in which human experts iteratively refine AI outputs rather than issuing binary accept/reject judgments. This approach uses graded feedback---including partial acceptance of useful insights---to systematically flag and severity-score different types of errors in clinical AI outputs. Although we do not present finalized experimental results, we outline a proof-of-concept using a chest radiograph image-report dataset and a multimodal model. These severity-scored errors could then guide targeted model updates. Negotiative alignment grounds each AI-generated report in a continuous, co-adaptive dialogue with clinicians, with the potential to improve trust, transparency, and reliability in medical diagnostics and beyond.
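To make the graded-feedback idea concrete, here is a minimal sketch of what a clinician feedback record and a severity-weighted aggregate score might look like. The paper does not specify an implementation; the error taxonomy (`ErrorType`), the field names, and the scoring formula below are all hypothetical illustrations, not the authors' method.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical error taxonomy; the paper does not define concrete
# categories, so these are illustrative placeholders only.
class ErrorType(Enum):
    FALSE_FINDING = "false_finding"    # hallucinated pathology
    MISSED_FINDING = "missed_finding"  # omitted pathology
    WRONG_LOCATION = "wrong_location"  # finding attributed to wrong region
    STYLE_ISSUE = "style_issue"        # phrasing or report-style problem

@dataclass
class GradedFeedback:
    """One clinician judgment on one sentence of an AI-generated report."""
    sentence: str
    accepted_fraction: float            # 0.0 = reject, 1.0 = accept, between = partial
    error_type: ErrorType | None = None # None when fully accepted
    severity: int = 0                   # e.g. 0 (none) .. 3 (critical); assumed scale
    revision: str | None = None         # clinician's corrected text, if provided

def severity_weighted_score(feedback: list[GradedFeedback]) -> float:
    """Aggregate graded feedback into a single report-level score.

    Higher-severity errors pull the score down more strongly, while fully
    accepted sentences contribute their full weight. Purely illustrative.
    """
    if not feedback:
        return 1.0
    total = sum(fb.accepted_fraction / (1 + fb.severity) for fb in feedback)
    return total / len(feedback)
```

Under this sketch, a report-level score could be tracked across feedback rounds, and sentences with low `accepted_fraction` and high `severity` would be the natural candidates for targeted model updates.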
Cite
Text
Doo et al. "Negotiative Alignment: An Interactive Approach to Human-AI Co-Adaptation for Clinical Applications." ICLR 2025 Workshops: Bi-Align, 2025.
Markdown
[Doo et al. "Negotiative Alignment: An Interactive Approach to Human-AI Co-Adaptation for Clinical Applications." ICLR 2025 Workshops: Bi-Align, 2025.](https://mlanthology.org/iclrw/2025/doo2025iclrw-negotiative/)
BibTeX
@inproceedings{doo2025iclrw-negotiative,
title = {{Negotiative Alignment: An Interactive Approach to Human-AI Co-Adaptation for Clinical Applications}},
author = {Doo, Florence Xini and Shah, Nikhil and Kulkarni, Pranav and Parekh, Vishwa Sanjay and Huang, Heng},
booktitle = {ICLR 2025 Workshops: Bi-Align},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/doo2025iclrw-negotiative/}
}