Exploring the Relative Value of Collaborative Optimisation Pathways (Student Abstract)

Abstract

Compression techniques in machine learning (ML) independently improve a model's inference efficiency by reducing its memory footprint while aiming to maintain its quality. This paper lays the groundwork for questioning the merit of a compression pipeline that applies every technique, as opposed to one that skips a few, through a case study on a keyword spotting model: DS-CNN-S. In addition, it documents improvements to the model's training and dataset infrastructure. For this model, preliminary findings suggest that a full-scale pipeline is not required to achieve a competitive memory footprint and accuracy, but a more comprehensive study is warranted.
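
To make the idea of a "full" versus a partial compression pipeline concrete, the sketch below chains magnitude pruning with full-integer post-training quantisation using TensorFlow and the Model Optimization Toolkit. It is a minimal illustration, not the paper's code: the stand-in network, the 80% sparsity target, and the random fine-tuning data are placeholders for the DS-CNN-S keyword spotting setup the abstract studies. Skipping a stage (e.g. quantising the unpruned model) and comparing size and accuracy is the kind of comparison the abstract questions.

# Minimal sketch (not the paper's code) of a two-stage compression pipeline:
# magnitude pruning followed by full-integer post-training quantisation.
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder network standing in for DS-CNN-S; the 49x10 MFCC-like input and
# 12-way output are assumptions typical of keyword spotting, not taken from the paper.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(12, activation="softmax"),
])

# Stage 1: magnitude pruning, ramping sparsity from 0% to 80% during brief fine-tuning.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=100),
)
pruned.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x = np.random.rand(32, 49, 10, 1).astype("float32")   # dummy fine-tuning inputs
y = np.random.randint(0, 12, size=(32,))               # dummy labels
pruned.fit(x, y, epochs=1,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()], verbose=0)
pruned = tfmot.sparsity.keras.strip_pruning(pruned)

# Stage 2: full-integer post-training quantisation via the TFLite converter.
def representative_data():
    for sample in x[:8]:
        yield [sample[np.newaxis, ...]]

converter = tf.lite.TFLiteConverter.from_keras_model(pruned)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# A partial pipeline would quantise the original, unpruned model instead and
# compare the resulting size and accuracy against the full pipeline's output.
print(f"Quantised model size: {len(tflite_model)} bytes")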

Cite

Text

Sudarshan Sreeram. "Exploring the Relative Value of Collaborative Optimisation Pathways (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.27028

Markdown

[Sudarshan Sreeram. "Exploring the Relative Value of Collaborative Optimisation Pathways (Student Abstract)." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/sreeram2023aaai-exploring/) doi:10.1609/AAAI.V37I13.27028

BibTeX

@inproceedings{sreeram2023aaai-exploring,
  title     = {{Exploring the Relative Value of Collaborative Optimisation Pathways (Student Abstract)}},
  author    = {Sreeram, Sudarshan},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {16336--16337},
  doi       = {10.1609/AAAI.V37I13.27028},
  url       = {https://mlanthology.org/aaai/2023/sreeram2023aaai-exploring/}
}