Enhancing Interpretability and Fairness in Medical Foundation Models: A Generative Approach for Explainable and Bias-Mitigated Medical Image Analysis
Abstract
The advent of large foundation models (FMs) has revolutionized various domains, yet their application in healthcare remains challenging due to the need for strict professional qualifications and high sensitivity to errors. This paper presents an ongoing approach to developing Medical Foundation Models (MFMs) for medical image analysis, addressing key challenges in explainability, fairness, and efficiency. We propose a generative AI framework that leverages autoencoders to learn compressed latent representations of medical images, enabling intuitive interpretation of the model's decision-making process and facilitating bias detection and mitigation. Our approach integrates elements from state-of-the-art vision models, including attention mechanisms and context modeling, to enhance classification accuracy while reducing dependency on labeled data. By focusing on explainability, robustness, and computational efficiency, our work aims to bridge the gap between the potential of AI in healthcare and the stringent requirements of clinical applications. This research contributes to the development of more transparent, fair, and trustworthy AI-driven medical assistants, ultimately improving patient outcomes and streamlining clinical workflows.
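The abstract's core idea, using an autoencoder to compress images into a low-dimensional latent space that can then be inspected, can be illustrated with a minimal sketch. The paper's actual architecture is not described here, so everything below (a linear autoencoder, the dimensions, the training loop) is a hypothetical stand-in for the general technique of learning compressed latent representations by minimizing reconstruction error:

```python
import numpy as np

# Hypothetical sizes: 64-pixel flattened patches compressed to an
# 8-dimensional latent code. Data is synthetic for illustration.
rng = np.random.default_rng(0)
n_pixels, n_latent, n_images = 64, 8, 200
X = rng.normal(size=(n_images, n_pixels))

# Encoder and decoder are single linear maps in this sketch.
W_enc = rng.normal(scale=0.1, size=(n_pixels, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_pixels))

def forward(X, W_enc, W_dec):
    Z = X @ W_enc        # encode: compress image to latent code
    X_hat = Z @ W_dec    # decode: reconstruct image from the code
    return Z, X_hat

def mse(X, X_hat):
    return float(np.mean((X_hat - X) ** 2))

_, X_hat0 = forward(X, W_enc, W_dec)
initial_loss = mse(X, X_hat0)

# Gradient descent on the mean squared reconstruction error.
lr = 1e-2
for _ in range(500):
    Z, X_hat = forward(X, W_enc, W_dec)
    err = (X_hat - X) / n_images
    W_dec -= lr * (Z.T @ err)
    W_enc -= lr * (X.T @ (err @ W_dec.T))

_, X_hat = forward(X, W_enc, W_dec)
final_loss = mse(X, X_hat)
```

After training, `final_loss` is well below `initial_loss`, and each image is summarized by its latent code `Z`; it is this compact code, rather than raw pixels, that the abstract proposes to inspect for interpretation and bias detection.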
Cite
Text
Minutti. "Enhancing Interpretability and Fairness in Medical Foundation Models: A Generative Approach for Explainable and Bias-Mitigated Medical Image Analysis." NeurIPS 2024 Workshops: AIM-FM, 2024.

Markdown

[Minutti. "Enhancing Interpretability and Fairness in Medical Foundation Models: A Generative Approach for Explainable and Bias-Mitigated Medical Image Analysis." NeurIPS 2024 Workshops: AIM-FM, 2024.](https://mlanthology.org/neuripsw/2024/minutti2024neuripsw-enhancing/)

BibTeX
@inproceedings{minutti2024neuripsw-enhancing,
title = {{Enhancing Interpretability and Fairness in Medical Foundation Models: A Generative Approach for Explainable and Bias-Mitigated Medical Image Analysis}},
author = {Minutti, Carlos},
booktitle = {NeurIPS 2024 Workshops: AIM-FM},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/minutti2024neuripsw-enhancing/}
}