Learnable Fourier-Based Activations for Implicit Signal Representations

Abstract

Implicit neural representations (INRs) use neural networks to provide continuous, resolution-independent representations of complex signals with a small number of parameters. However, existing INR models often fail to capture important frequency components specific to each task. To address this issue, we propose a Fourier Kolmogorov–Arnold network (FKAN) for INRs. The proposed FKAN uses learnable activation functions modeled as Fourier series in the first layer to effectively control and learn the task-specific frequency components. These activation functions, with their learnable Fourier coefficients, improve the network's ability to capture complex patterns and fine details, which is beneficial for high-resolution and high-dimensional data. Experimental results show that our proposed FKAN model outperforms four state-of-the-art baseline schemes across various tasks, including image representation, 3D occupancy volume representation, and image inpainting.
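To make the core idea concrete, here is a minimal NumPy sketch of a first layer whose activations are truncated Fourier series with learnable coefficients. All names (`FourierActivationLayer`, `num_harmonics`, the coefficient initialization) are illustrative assumptions, not the paper's exact parameterization; it is a sketch of the general technique, not the authors' implementation.

```python
import numpy as np

class FourierActivationLayer:
    """Hypothetical sketch: a linear map followed by per-unit activations
    phi(z) = sum_k a_k * sin(k*z) + b_k * cos(k*z), where the Fourier
    coefficients a_k, b_k are learnable parameters (updated by the
    optimizer alongside the weights in a real training loop)."""

    def __init__(self, in_dim, out_dim, num_harmonics=8, seed=0):
        rng = np.random.default_rng(seed)
        # Standard linear-layer weights.
        self.W = rng.normal(0.0, 1.0 / np.sqrt(in_dim), (in_dim, out_dim))
        # Learnable Fourier coefficients: one sine and one cosine
        # coefficient per harmonic, per output unit (assumed shapes).
        self.a = rng.normal(0.0, 0.1, (num_harmonics, out_dim))
        self.b = rng.normal(0.0, 0.1, (num_harmonics, out_dim))
        self.k = np.arange(1, num_harmonics + 1)  # harmonic indices 1..K

    def forward(self, x):
        z = x @ self.W                                # (batch, out_dim)
        # Broadcast harmonics against the pre-activations:
        # kz has shape (K, batch, out_dim).
        kz = self.k[:, None, None] * z[None, :, :]
        return np.sum(self.a[:, None, :] * np.sin(kz)
                      + self.b[:, None, :] * np.cos(kz), axis=0)

layer = FourierActivationLayer(in_dim=2, out_dim=16)
coords = np.linspace(-1.0, 1.0, 6).reshape(3, 2)      # e.g. 2D pixel coords
features = layer.forward(coords)                      # (3, 16) features
```

Because the coefficients enter the output linearly (only `sin(kz)` and `cos(kz)` are nonlinear in the input), gradients with respect to `a` and `b` are simple to compute, which is part of what makes this parameterization attractive for learning task-specific frequency content.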

Cite

Text

Adi and Mehrabian. "Learnable Fourier-Based Activations for Implicit Signal Representations." NeurIPS 2024 Workshops: Compression, 2024.

Markdown

[Adi and Mehrabian. "Learnable Fourier-Based Activations for Implicit Signal Representations." NeurIPS 2024 Workshops: Compression, 2024.](https://mlanthology.org/neuripsw/2024/adi2024neuripsw-learnable/)

BibTeX

@inproceedings{adi2024neuripsw-learnable,
  title     = {{Learnable Fourier-Based Activations for Implicit Signal Representations}},
  author    = {Adi, Parsa Mojarad and Mehrabian, Ali},
  booktitle = {NeurIPS 2024 Workshops: Compression},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/adi2024neuripsw-learnable/}
}