Authorship Attribution Using a Neural Network Language Model

Abstract

In practice, training language models for individual authors is often expensive because data resources are limited. In such cases, Neural Network Language Models (NNLMs) generally outperform traditional non-parametric N-gram models. Here we investigate the performance of a feed-forward NNLM on an authorship attribution problem with a moderate author-set size and relatively limited data. We also consider how text topics impact performance. Compared with a well-constructed N-gram baseline with Kneser-Ney smoothing, the proposed method achieves a nearly 2.5% reduction in perplexity and increases author classification accuracy by 3.43% on average, given as few as 5 test sentences. The performance is very competitive with the state of the art in terms of accuracy and demand on test data.
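The attribution scheme described in the abstract can be sketched as follows: train one language model per candidate author, score the test sentences with each model, and attribute the text to the author whose model yields the lowest perplexity. The paper's models are feed-forward NNLMs (with a Kneser-Ney N-gram baseline); the tiny add-one-smoothed bigram model below is a hypothetical stand-in used only to keep the selection logic self-contained.

```python
import math
from collections import Counter

def train_bigram(sentences):
    """Train a toy add-one-smoothed bigram model (stand-in for an NNLM)."""
    unigrams, bigrams = Counter(), Counter()
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(tokens[:-1])          # history counts
        bigrams.update(zip(tokens, tokens[1:]))
    vocab_size = len(set(unigrams) | {"</s>"})
    return unigrams, bigrams, vocab_size

def perplexity(model, sentence):
    """Per-token perplexity of one sentence under the bigram model."""
    unigrams, bigrams, v = model
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    log_prob, n = 0.0, 0
    for a, b in zip(tokens, tokens[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + v)  # add-one smoothing
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / n)

def attribute(models, test_sentences):
    """Pick the author whose model gives the lowest mean test perplexity."""
    scores = {
        author: sum(perplexity(m, s) for s in test_sentences) / len(test_sentences)
        for author, m in models.items()
    }
    return min(scores, key=scores.get)
```

A usage sketch: with `models = {"austen": train_bigram(austen_sents), "doyle": train_bigram(doyle_sents)}`, calling `attribute(models, five_test_sentences)` mirrors the paper's setting of classifying an author from as few as 5 test sentences, with the NNLM's lower perplexity translating directly into a better-separated minimum.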

Cite

Text

Ge et al. "Authorship Attribution Using a Neural Network Language Model." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.9924

Markdown

[Ge et al. "Authorship Attribution Using a Neural Network Language Model." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/ge2016aaai-authorship/) doi:10.1609/AAAI.V30I1.9924

BibTeX

@inproceedings{ge2016aaai-authorship,
  title     = {{Authorship Attribution Using a Neural Network Language Model}},
  author    = {Ge, Zhenhao and Sun, Yufang and Smith, Mark J. T.},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {4212--4213},
  doi       = {10.1609/AAAI.V30I1.9924},
  url       = {https://mlanthology.org/aaai/2016/ge2016aaai-authorship/}
}