MASTER: Multi-Task Pre-Trained Bottlenecked Masked Autoencoders Are Better Dense Retrievers

Cite

Text

Zhou et al. "MASTER: Multi-Task Pre-Trained Bottlenecked Masked Autoencoders Are Better Dense Retrievers." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023. doi:10.1007/978-3-031-43415-0_37

Markdown

[Zhou et al. "MASTER: Multi-Task Pre-Trained Bottlenecked Masked Autoencoders Are Better Dense Retrievers." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023.](https://mlanthology.org/ecmlpkdd/2023/zhou2023ecmlpkdd-master/) doi:10.1007/978-3-031-43415-0_37

BibTeX

@inproceedings{zhou2023ecmlpkdd-master,
  title     = {{MASTER: Multi-Task Pre-Trained Bottlenecked Masked Autoencoders Are Better Dense Retrievers}},
  author    = {Zhou, Kun and Liu, Xiao and Gong, Yeyun and Zhao, Wayne Xin and Jiang, Daxin and Duan, Nan and Wen, Ji-Rong},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2023},
  pages     = {630--647},
  doi       = {10.1007/978-3-031-43415-0_37},
  url       = {https://mlanthology.org/ecmlpkdd/2023/zhou2023ecmlpkdd-master/}
}