One Filters All: A Generalist Filter for State Estimation

Abstract

Estimating hidden states in dynamical systems, also known as optimal filtering, is a long-standing problem in various fields of science and engineering. In this paper, we introduce a general filtering framework, $\textbf{LLM-Filter}$, which leverages large language models (LLMs) for state estimation by embedding noisy observations with text prototypes. In experiments on classical dynamical systems, we find, first, that state estimation benefits significantly from the knowledge embedded in pre-trained LLMs. By properly aligning the observation modality with the frozen LLM, LLM-Filter outperforms state-of-the-art learning-based approaches. Second, we carefully design a prompt structure, System-as-Prompt (SaP), which incorporates task instructions that enable the LLM to understand the filtering task and adapt to specific systems. Guided by these prompts, LLM-Filter exhibits exceptional generalization, performing filtering accurately in changed or even unseen environments. We further observe scaling-law behavior in LLM-Filter, where accuracy improves with larger model sizes and longer training times. These findings suggest that LLM-Filter is a promising foundation model for filtering.
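To make the abstract's description concrete, the sketch below illustrates one plausible reading of the LLM-Filter pipeline: noisy observations are embedded via learnable text prototypes into the representation space of a frozen LLM, prefixed with a System-as-Prompt (SaP) embedding, and the LLM's hidden states are projected to state estimates. This is a minimal illustration, not the authors' implementation; the module names, the prototype-attention alignment, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class LLMFilterSketch(nn.Module):
    """Hypothetical sketch of the LLM-Filter idea from the abstract.
    Observations are aligned with a frozen LLM via learnable text prototypes;
    only the small adapter layers are trained."""

    def __init__(self, backbone, d_model, obs_dim, state_dim, n_prototypes=32):
        super().__init__()
        self.backbone = backbone                         # stand-in for a frozen pre-trained LLM
        for p in self.backbone.parameters():
            p.requires_grad = False                      # the LLM stays frozen

        # Learnable prototypes in the LLM embedding space; an observation is
        # represented as an attention-weighted mixture of these prototypes.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_model))
        self.obs_proj = nn.Linear(obs_dim, d_model)      # noisy observation -> query vector
        self.state_head = nn.Linear(d_model, state_dim)  # LLM hidden state -> state estimate

    def forward(self, obs_seq, prompt_embeds):
        # obs_seq: (batch, T, obs_dim); prompt_embeds: (batch, P, d_model) SaP tokens
        q = self.obs_proj(obs_seq)                               # (batch, T, d_model)
        weights = torch.softmax(q @ self.prototypes.T, dim=-1)   # similarity to each prototype
        obs_embeds = weights @ self.prototypes                   # observations in prototype space
        tokens = torch.cat([prompt_embeds, obs_embeds], dim=1)   # prepend the task prompt
        hidden = self.backbone(tokens)                           # frozen backbone forward pass
        return self.state_head(hidden[:, -obs_seq.size(1):])    # one estimate per time step


# Usage with a small Transformer as a stand-in backbone (a real LLM would be loaded instead).
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)
model = LLMFilterSketch(backbone, d_model=64, obs_dim=3, state_dim=4)
est = model(torch.randn(8, 20, 3), torch.randn(8, 5, 64))  # -> (8, 20, 4) state estimates
```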

Cite

Text

Liu et al. "One Filters All: A Generalist Filter for State Estimation." Advances in Neural Information Processing Systems, 2025.

Markdown

[Liu et al. "One Filters All: A Generalist Filter for State Estimation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/liu2025neurips-one/)

BibTeX

@inproceedings{liu2025neurips-one,
  title     = {{One Filters All: A Generalist Filter for State Estimation}},
  author    = {Liu, Shiqi and Cao, Wenhan and Liu, Chang and He, Zeyu and Zhang, Tianyi and Wang, Yinuo and Li, Shengbo Eben},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/liu2025neurips-one/}
}