Star Temporal Classification: Sequence Modeling with Partially Labeled Data

Abstract

We develop an algorithm which can learn from partially labeled and unsegmented sequential data. Most sequential loss functions, such as Connectionist Temporal Classification (CTC), break down when many labels are missing. We address this problem with Star Temporal Classification (STC), which uses a special star token to allow alignments that include all possible tokens wherever a token could be missing. We express STC as the composition of weighted finite-state transducers (WFSTs) and use GTN (a framework for automatic differentiation with WFSTs) to compute gradients. We perform extensive experiments on automatic speech recognition. These experiments show that STC can close the performance gap with the supervised baseline to about 1% WER when up to 70% of the labels are missing. We also perform experiments in handwriting recognition to show that our method easily applies to other temporal classification tasks.
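To make the WFST composition concrete, below is a minimal sketch, not the authors' implementation, of how an STC-style label graph with a star token could be composed with frame emissions in GTN. It assumes emissions are frame-level log-probabilities over the token vocabulary plus one extra column for the star (in the paper the star score is instead a log-sum-exp over tokens and carries a penalty), and it omits CTC blanks and repeated-token handling for brevity; the function name and argument layout are illustrative only.

```python
import gtn


def stc_style_loss(log_probs, target, num_tokens):
    """log_probs: flattened T x (num_tokens + 1) list; last column is the star."""
    T = len(log_probs) // (num_tokens + 1)
    star = num_tokens  # hypothetical index reserved for the star token

    # Emissions graph: a linear graph with one arc per (frame, token) pair.
    emissions = gtn.linear_graph(T, num_tokens + 1, True)
    emissions.set_weights(log_probs)

    # Label graph: the known labels in order, with star self-loops so any
    # number of missing tokens can be absorbed before or after each label.
    labels = gtn.Graph(False)
    prev = labels.add_node(True, False)      # start node
    labels.add_arc(prev, prev, star)         # missing tokens before the first label
    for i, y in enumerate(target):
        cur = labels.add_node(False, i == len(target) - 1)
        labels.add_arc(prev, cur, y)         # emit the known label
        labels.add_arc(cur, cur, star)       # missing tokens after it
        prev = cur

    # Score all alignments the label graph allows against the emissions;
    # the negated forward score plays the role of the loss.
    alignments = gtn.compose(labels, emissions)
    return gtn.negate(gtn.forward_score(alignments))
```

Because both graphs are built with GTN operations, calling gtn.backward on the returned graph propagates gradients back to the emission weights, which is how the abstract's "use GTN to compute gradients" step would look in this simplified setting.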

Cite

Text

Pratap et al. "Star Temporal Classification: Sequence Modeling with Partially Labeled Data." Neural Information Processing Systems, 2022.

Markdown

[Pratap et al. "Star Temporal Classification: Sequence Modeling with Partially Labeled Data." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/pratap2022neurips-star/)

BibTeX

@inproceedings{pratap2022neurips-star,
  title     = {{Star Temporal Classification: Sequence Modeling with Partially Labeled Data}},
  author    = {Pratap, Vineel and Hannun, Awni and Synnaeve, Gabriel and Collobert, Ronan},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/pratap2022neurips-star/}
}