Deep Neural Nets with Interpolating Function as Output Activation

Abstract

We replace the output layer of deep neural nets, typically the softmax function, with a novel interpolating function, and we propose end-to-end training and testing algorithms for this new architecture. Compared to classical neural nets with the softmax function as output activation, the surrogate with an interpolating function as output activation combines the advantages of both deep and manifold learning. The new framework demonstrates two major advantages: first, it is better suited to cases with insufficient training data; second, it significantly improves generalization accuracy on a wide variety of networks. The algorithm is implemented in PyTorch, and the code is available at https://github.com/BaoWangMath/DNN-DataDependentActivation.
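To make the architectural idea concrete, below is a minimal, hypothetical PyTorch sketch of the general scheme the abstract describes: a standard feature-extracting backbone whose usual linear-plus-softmax head is replaced by a classifier that interpolates class scores from a set of labeled training ("template") features. The kernel choice, the `InterpolatingHead` name, the bandwidth parameter, and the toy backbone are all assumptions for illustration; this is not the authors' actual interpolating function (see their repository for the real implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterpolatingHead(nn.Module):
    """Hypothetical output layer: instead of a linear + softmax classifier,
    interpolate class scores from labeled template features.
    A sketch of the general idea only, not the paper's exact method."""

    def __init__(self, num_classes, bandwidth=1.0):
        super().__init__()
        self.num_classes = num_classes
        self.bandwidth = bandwidth
        self.register_buffer("template_feats", torch.empty(0))
        self.register_buffer("template_labels", torch.empty(0, dtype=torch.long))

    def set_templates(self, feats, labels):
        # Store labeled training features used for interpolation at inference.
        self.template_feats = feats.detach()
        self.template_labels = labels.detach()

    def forward(self, feats):
        # Gaussian-kernel weights between query features and template features.
        d2 = torch.cdist(feats, self.template_feats).pow(2)          # (B, N)
        w = torch.exp(-d2 / (2.0 * self.bandwidth ** 2))
        w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-12)
        onehot = F.one_hot(self.template_labels, self.num_classes).float()  # (N, C)
        return w @ onehot   # interpolated class scores, (B, C)


# Usage sketch: a toy backbone followed by the interpolating head.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
head = InterpolatingHead(num_classes=10)

labeled_x = torch.randn(100, 784)            # stand-in labeled training data
labeled_y = torch.randint(0, 10, (100,))
head.set_templates(backbone(labeled_x), labeled_y)

scores = head(backbone(torch.randn(5, 784)))  # (5, 10) interpolated class scores
```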

Cite

Text

Wang et al. "Deep Neural Nets with Interpolating Function as Output Activation." Neural Information Processing Systems, 2018.

Markdown

[Wang et al. "Deep Neural Nets with Interpolating Function as Output Activation." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/wang2018neurips-deep/)

BibTeX

@inproceedings{wang2018neurips-deep,
  title     = {{Deep Neural Nets with Interpolating Function as Output Activation}},
  author    = {Wang, Bao and Luo, Xiyang and Li, Zhen and Zhu, Wei and Shi, Zuoqiang and Osher, Stanley},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {743--753},
  url       = {https://mlanthology.org/neurips/2018/wang2018neurips-deep/}
}