Fourier Features in Reinforcement Learning with Neural Networks
Abstract
In classic Reinforcement Learning (RL), the performance of algorithms depends critically on the data representation, i.e., the way the states of the system are represented as features. Choosing appropriate features for a task is an important way of adding prior domain knowledge, since cleverly distributing information across states facilitates appropriate generalization. For linear function approximation, the representation is usually hand-designed for the task at hand and projected into a higher-dimensional space to facilitate linear separation. Notable feature encodings used in RL for linear function approximation include Polynomial Features and Tile Coding. However, the main bottleneck of such feature encodings is that they do not scale to high-dimensional inputs, as their size grows exponentially with the input dimension.
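The exponential blow-up mentioned above can be made concrete with the standard order-n Fourier basis (a common RL feature encoding; this sketch is illustrative and not taken from the paper itself): for a d-dimensional state normalized to [0, 1]^d, one feature cos(π c·s) is built for every coefficient vector c in {0, ..., n}^d, giving (n+1)^d features in total.

```python
import itertools
import numpy as np

def fourier_basis(state, order):
    """Order-n Fourier basis features for a state s in [0, 1]^d:
    one feature cos(pi * c . s) per coefficient vector c in
    {0, ..., order}^d. Yields (order + 1)**d features, i.e.,
    exponential growth in the input dimension d."""
    d = len(state)
    # Enumerate all coefficient vectors c (this list itself is what
    # grows exponentially with d).
    coeffs = np.array(list(itertools.product(range(order + 1), repeat=d)))
    return np.cos(np.pi * coeffs @ np.asarray(state, dtype=float))

# A 2-D state with order 3 already yields 4**2 = 16 features;
# the same order in 10 dimensions would yield 4**10 = 1,048,576.
s = np.array([0.2, 0.7])
phi = fourier_basis(s, order=3)
print(phi.shape)  # (16,)
```

With a linear value function V(s) = w·φ(s), the weight vector w must match this feature count, which is why such encodings are impractical for high-dimensional inputs.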
Cite
Brellmann et al. "Fourier Features in Reinforcement Learning with Neural Networks." Transactions on Machine Learning Research, 2023.
@article{brellmann2023tmlr-fourier,
title = {{Fourier Features in Reinforcement Learning with Neural Networks}},
author = {Brellmann, David and Filliat, David and Frehse, Goran},
journal = {Transactions on Machine Learning Research},
year = {2023},
url = {https://mlanthology.org/tmlr/2023/brellmann2023tmlr-fourier/}
}