Periodic Activation Functions Induce Stationarity
Abstract
Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that 'know what they do not know' by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection.
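A minimal sketch (not the paper's construction) of the general idea behind the weight-prior/stationary-kernel connection: a single hidden layer with a cosine activation, a Gaussian prior on the input weights, and uniform phase offsets induces a translation-invariant kernel, the classic random Fourier feature approximation to the RBF kernel. The width, lengthscale, and 1-D inputs below are illustrative assumptions.

```python
# Sketch: a periodic (cosine) hidden layer with Gaussian weights induces a
# stationary kernel (random Fourier feature approximation of the RBF kernel).
import numpy as np

rng = np.random.default_rng(0)
num_features = 2000   # hidden-layer width (assumed)
lengthscale = 1.0     # assumed kernel lengthscale

def periodic_features(x, w, b):
    """Hidden layer with a periodic (cosine) activation."""
    return np.sqrt(2.0 / num_features) * np.cos(x @ w.T + b)

# Gaussian prior on input weights, uniform phase offsets.
w = rng.normal(0.0, 1.0 / lengthscale, size=(num_features, 1))
b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)

# Empirical kernel k(x, x') = phi(x) phi(x')^T depends only on x - x'.
x = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)
phi = periodic_features(x, w, b)
K = phi @ phi.T
print(np.round(K, 2))  # close to exp(-0.5 * (x - x')**2 / lengthscale**2)
```

The printed Gram matrix is (approximately) constant along its diagonals, i.e. it depends only on the input difference, which is the stationarity property the paper generalizes beyond sinusoidal activations to triangular wave and periodic ReLU activations.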
Cite
Text
Meronen et al. "Periodic Activation Functions Induce Stationarity." Neural Information Processing Systems, 2021.
Markdown
[Meronen et al. "Periodic Activation Functions Induce Stationarity." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/meronen2021neurips-periodic/)
BibTeX
@inproceedings{meronen2021neurips-periodic,
  title = {{Periodic Activation Functions Induce Stationarity}},
  author = {Meronen, Lassi and Trapp, Martin and Solin, Arno},
  booktitle = {Neural Information Processing Systems},
  year = {2021},
  url = {https://mlanthology.org/neurips/2021/meronen2021neurips-periodic/}
}