Task-Level Distributionally Robust Optimization for Large Language Model-Based Dense Retrieval

Abstract

Large Language Model-based Dense Retrieval (LLM-DR) optimizes over numerous heterogeneous fine-tuning collections from different domains. However, the composition of its training data distribution remains largely under-explored. Previous studies rely on empirically assigned dataset choices or sampling ratios, which inevitably lead to sub-optimal retrieval performance. In this paper, we propose a new task-level Distributionally Robust Optimization (tDRO) algorithm for LLM-DR fine-tuning, targeted at improving the universal domain generalization ability by end-to-end reweighting the data distribution of each task. The tDRO parameterizes the domain weights and updates them with scaled domain gradients. The optimized weights are then transferred to the LLM-DR fine-tuning to train more robust retrievers. Experiments show consistent improvements on large-scale retrieval benchmarks and a reduction of up to 30% in dataset usage after applying our optimization algorithm with a series of different-sized LLM-DR models.
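The core reweighting idea, parameterizing per-task weights and updating them from scaled per-task losses, can be illustrated with a minimal sketch. This is a generic DRO-style exponentiated-gradient update, not the authors' exact tDRO algorithm; the function name, learning rate, and toy losses below are illustrative assumptions.

```python
import numpy as np

def dro_weight_update(weights, task_losses, eta=0.01):
    """One illustrative DRO-style reweighting step (hypothetical sketch).

    Tasks with higher loss are up-weighted multiplicatively, then the
    weight vector is renormalized to remain a distribution over tasks.
    """
    scaled = np.asarray(task_losses, dtype=float)
    new_w = weights * np.exp(eta * scaled)  # exponentiated-gradient step
    return new_w / new_w.sum()              # project back onto the simplex

# Toy usage: three tasks with fixed per-task losses, uniform start.
w = np.full(3, 1.0 / 3.0)
losses = [0.9, 0.5, 0.2]
for _ in range(10):
    w = dro_weight_update(w, losses)
```

After a few steps the weight mass shifts toward the hardest task (here the first), mimicking how a robust optimizer focuses training on under-performing domains.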

Cite

Text

Ma et al. "Task-Level Distributionally Robust Optimization for Large Language Model-Based Dense Retrieval." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I23.34657

Markdown

[Ma et al. "Task-Level Distributionally Robust Optimization for Large Language Model-Based Dense Retrieval." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/ma2025aaai-task/) doi:10.1609/AAAI.V39I23.34657

BibTeX

@inproceedings{ma2025aaai-task,
  title     = {{Task-Level Distributionally Robust Optimization for Large Language Model-Based Dense Retrieval}},
  author    = {Ma, Guangyuan and Ma, Yongliang and Wu, Xing and Su, Zhenpeng and Zhou, Ming and Hu, Songlin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {24759--24767},
  doi       = {10.1609/AAAI.V39I23.34657},
  url       = {https://mlanthology.org/aaai/2025/ma2025aaai-task/}
}