Prompt Learning with Extended Kalman Filter for Pre-Trained Language Models
Cite
Text
Li et al. "Prompt Learning with Extended Kalman Filter for Pre-Trained Language Models." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/492
Markdown
[Li et al. "Prompt Learning with Extended Kalman Filter for Pre-Trained Language Models." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/li2024ijcai-prompt/) doi:10.24963/ijcai.2024/492
BibTeX
@inproceedings{li2024ijcai-prompt,
title = {{Prompt Learning with Extended Kalman Filter for Pre-Trained Language Models}},
author = {Li, Quan and Xie, Xike and Wang, Chao and Zhou, S. Kevin},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {4452--4460},
doi = {10.24963/ijcai.2024/492},
url = {https://mlanthology.org/ijcai/2024/li2024ijcai-prompt/}
}