Model Plurality: A Taxonomy for Pluralistic AI

Abstract

This position paper argues that the project of pluralistic AI should be expanded from diversifying the values of individual models towards a systemic pluralism that allows for new values to emerge. First, we examine the dangers of homogeneity within the existing landscape of public-facing machine learning models. Beyond uplifting certain values over others, models have the potential to reinforce arbitrary biases and homogenize the very ontologies with which we think. We argue for model plurality—structurally embedding multiplicity into every level of model development and deployment via technical strategies and socioeconomic incentives—as a design method for addressing these dangers and creating models with meaningful difference. Finally, we provide a taxonomy of model plurality that organizes the production pipeline into areas of intervention: data, architecture, fine-tuning, and ecosystem. At each level, we analyze incentives that maintain the status quo of homogeneity, what benefits plurality could produce, and sociotechnical approaches for instantiating a more comprehensive plurality in that domain. Model plurality may not only create less biased and more robust models, but also the conditions for the ongoing evolution of human values.

Cite

Text

Lu and Van Kleek. "Model Plurality: A Taxonomy for Pluralistic AI." NeurIPS 2024 Workshops: Pluralistic-Alignment, 2024.


BibTeX

@inproceedings{lu2024neuripsw-model,
  title     = {{Model Plurality: A Taxonomy for Pluralistic AI}},
  author    = {Lu, Christina and Van Kleek, Max},
  booktitle = {NeurIPS 2024 Workshops: Pluralistic-Alignment},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/lu2024neuripsw-model/}
}