Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference

Cite

Text

Liskavets et al. "Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/aaai.v39i23.34639

Markdown

[Liskavets et al. "Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/liskavets2025aaai-prompt/) doi:10.1609/aaai.v39i23.34639

BibTeX

@inproceedings{liskavets2025aaai-prompt,
  title     = {{Prompt Compression with Context-Aware Sentence Encoding for Fast and Improved LLM Inference}},
  author    = {Liskavets, Barys and Ushakov, Maxim and Roy, Shuvendu and Klibanov, Mark and Etemad, Ali and Luke, Shane K.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {24595--24604},
  doi       = {10.1609/aaai.v39i23.34639},
  url       = {https://mlanthology.org/aaai/2025/liskavets2025aaai-prompt/}
}