A Learnable Radial Basis Positional Embedding for Coordinate-MLPs
Abstract
We propose a novel method to enhance the performance of coordinate-MLPs (also referred to as neural fields) by learning instance-specific positional embeddings. End-to-end optimization of positional embedding parameters along with network weights leads to poor generalization performance. Instead, we develop a generic framework to learn the positional embedding based on the classic graph-Laplacian regularization, which can implicitly balance the trade-off between memorization and generalization. This framework is then used to propose a novel positional embedding scheme, where the hyperparameters are learned per coordinate (i.e., per instance) to deliver optimal performance. We show that the proposed embedding achieves better performance with higher stability compared to the well-established random Fourier features (RFF). Further, we demonstrate that the proposed embedding scheme yields stable gradients, enabling seamless integration into deep architectures as intermediate layers.
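To make the idea concrete, below is a minimal sketch of a Gaussian radial basis positional embedding with per-coordinate bandwidths, in the spirit of the abstract. This is an illustration only, not the paper's implementation: the function name, the fixed grid of centers, and the shared-bandwidth example values are all assumptions for demonstration.

```python
import numpy as np

def rbf_positional_embedding(coords, centers, sigmas):
    """Gaussian radial-basis positional embedding (illustrative sketch).

    coords:  (N,) input coordinates, e.g. in [0, 1]
    centers: (M,) radial-basis centers (assumed fixed here)
    sigmas:  (N,) per-coordinate bandwidths, standing in for the
             instance-specific hyperparameters the abstract describes
             as learned
    Returns: (N, M) embedding matrix fed to the coordinate-MLP.
    """
    diff = coords[:, None] - centers[None, :]              # (N, M) pairwise offsets
    return np.exp(-0.5 * (diff / sigmas[:, None]) ** 2)    # Gaussian bumps

# Example: embed 4 coordinates with 8 radial bases.
coords = np.linspace(0.0, 1.0, 4)
centers = np.linspace(0.0, 1.0, 8)
sigmas = np.full(4, 0.1)   # in the paper these would be learned, not fixed
emb = rbf_positional_embedding(coords, centers, sigmas)
print(emb.shape)  # (4, 8)
```

Intuitively, a smaller bandwidth makes the embedding more local (favoring memorization of fine detail), while a larger one smooths it (favoring generalization), which is the trade-off the learned per-instance hyperparameters are meant to balance.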
Cite
Text
Ramasinghe and Lucey. "A Learnable Radial Basis Positional Embedding for Coordinate-MLPs." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I2.25307
Markdown
[Ramasinghe and Lucey. "A Learnable Radial Basis Positional Embedding for Coordinate-MLPs." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/ramasinghe2023aaai-learnable/) doi:10.1609/AAAI.V37I2.25307
BibTeX
@inproceedings{ramasinghe2023aaai-learnable,
title = {{A Learnable Radial Basis Positional Embedding for Coordinate-MLPs}},
author = {Ramasinghe, Sameera and Lucey, Simon},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {2137--2145},
doi = {10.1609/AAAI.V37I2.25307},
url = {https://mlanthology.org/aaai/2023/ramasinghe2023aaai-learnable/}
}