Autoencoder Regularized Network for Driving Style Representation Learning

Abstract

In this paper, we study learning generalized driving style representations from automobile GPS trip data. We propose a novel Autoencoder Regularized deep neural Network (ARNet) and a trip encoding framework, trip2vec, to learn drivers' driving styles directly from GPS records by combining supervised and unsupervised feature learning in a unified architecture. Experiments on a challenging driver number estimation problem and the driver identification problem show that ARNet learns a good generalized driving style representation: it significantly outperforms existing methods and alternative architectures, achieving the lowest average estimation error (0.68, i.e., fewer than one driver) and the highest identification accuracy (at least a 3% improvement over traditional supervised learning methods).
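The abstract describes an architecture that couples a supervised classification objective with an autoencoder reconstruction objective on a shared representation. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the layer sizes, the MSE reconstruction loss, and the weighting hyperparameter alpha are illustrative assumptions.

# Minimal sketch of an autoencoder-regularized classifier in the spirit of ARNet.
# Hidden sizes, loss weight, and layer choices are hypothetical, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoencoderRegularizedNet(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        # Shared encoder produces the driving style representation.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # Decoder reconstructs the input (unsupervised branch).
        self.decoder = nn.Linear(hidden_dim, input_dim)
        # Classifier predicts the driver identity (supervised branch).
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)                       # shared representation
        return self.classifier(z), self.decoder(z), z

def arnet_loss(logits, recon, x, labels, alpha=0.5):
    # Supervised cross-entropy plus reconstruction error as a regularizer;
    # alpha is a hypothetical trade-off hyperparameter for this sketch.
    return F.cross_entropy(logits, labels) + alpha * F.mse_loss(recon, x)

# Usage example with random data (shapes are arbitrary).
model = AutoencoderRegularizedNet(input_dim=64, hidden_dim=32, num_classes=10)
x = torch.randn(8, 64)
y = torch.randint(0, 10, (8,))
logits, recon, z = model(x)
loss = arnet_loss(logits, recon, x, y)
loss.backward()

In this reading, the reconstruction term acts as a regularizer on the shared encoder, encouraging a representation that generalizes beyond the driver labels seen during training.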

Cite

Text

Dong et al. "Autoencoder Regularized Network for Driving Style Representation Learning." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/222

Markdown

[Dong et al. "Autoencoder Regularized Network for Driving Style Representation Learning." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/dong2017ijcai-autoencoder/) doi:10.24963/IJCAI.2017/222

BibTeX

@inproceedings{dong2017ijcai-autoencoder,
  title     = {{Autoencoder Regularized Network for Driving Style Representation Learning}},
  author    = {Dong, Weishan and Yuan, Ting and Yang, Kai and Li, Changsheng and Zhang, Shilei},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {1603--1609},
  doi       = {10.24963/IJCAI.2017/222},
  url       = {https://mlanthology.org/ijcai/2017/dong2017ijcai-autoencoder/}
}