TorchAdapt: Towards Light-Agnostic Real-Time Visual Perception

Abstract

Low-light conditions significantly degrade the performance of high-level vision tasks. Existing approaches either enhance low-light images without considering normal illumination scenarios, leading to poor generalization, or are tailored to specific tasks. We propose TorchAdapt, a real-time adaptive feature enhancement framework that generalizes robustly across varying illumination conditions without degrading performance in well-lit scenarios. TorchAdapt consists of two complementary modules: the Torch module enhances semantic features beneficial for downstream tasks, while the Adapt module dynamically modulates these enhancements based on input content. Leveraging a novel light-agnostic learning strategy, TorchAdapt aligns feature representations of enhanced and well-lit images to produce powerful illumination-invariant features. Extensive experiments on multiple high-level vision tasks, including object detection, face detection, instance segmentation, semantic segmentation, and video object detection, demonstrate that TorchAdapt consistently outperforms state-of-the-art low-light enhancement and task-specific methods in both low-light and light-agnostic settings. TorchAdapt thus provides a unified, flexible solution for robust visual perception across diverse lighting conditions.
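
The sketch below illustrates the two-module design described in the abstract: a Torch-style residual enhancement branch, an Adapt-style content-dependent gate that modulates how much enhancement is applied, and an alignment loss between enhanced low-light features and well-lit features. This is a minimal illustration of the idea only; the layer choices, shapes, gating mechanism, and loss formulation are assumptions and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch: module names follow the paper's description
# (Torch = feature enhancement, Adapt = content-dependent modulation),
# but all architectural details below are assumptions, not the authors' code.

class TorchModule(nn.Module):
    """Enhances backbone features with a lightweight residual branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.enhance = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # Residual enhancement: adds task-relevant detail to the input features.
        return feat + self.enhance(feat)


class AdaptModule(nn.Module):
    """Predicts a per-channel gate from the input to modulate the enhancement."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor, enhanced: torch.Tensor) -> torch.Tensor:
        # Blend original and enhanced features according to the predicted gate,
        # so well-lit inputs can pass through largely unchanged.
        g = self.gate(feat)
        return g * enhanced + (1.0 - g) * feat


def light_agnostic_alignment(feat_enhanced: torch.Tensor,
                             feat_well_lit: torch.Tensor) -> torch.Tensor:
    """One possible alignment objective: pull enhanced low-light features toward
    the features of the corresponding well-lit image (here, a simple L2 distance)."""
    return F.mse_loss(feat_enhanced, feat_well_lit)


if __name__ == "__main__":
    channels = 64
    torch_mod, adapt_mod = TorchModule(channels), AdaptModule(channels)
    low_light_feat = torch.randn(2, channels, 32, 32)  # features of a low-light image
    well_lit_feat = torch.randn(2, channels, 32, 32)   # features of its well-lit counterpart
    enhanced = adapt_mod(low_light_feat, torch_mod(low_light_feat))
    loss = light_agnostic_alignment(enhanced, well_lit_feat)
    print(loss.item())
```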

Cite

Text

Hashmi et al. "TorchAdapt: Towards Light-Agnostic Real-Time Visual Perception." International Conference on Computer Vision, 2025.

Markdown

[Hashmi et al. "TorchAdapt: Towards Light-Agnostic Real-Time Visual Perception." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/hashmi2025iccv-torchadapt/)

BibTeX

@inproceedings{hashmi2025iccv-torchadapt,
  title     = {{TorchAdapt: Towards Light-Agnostic Real-Time Visual Perception}},
  author    = {Hashmi, Khurram Azeem and Suresh, Karthik Palyakere and Stricker, Didier and Afzal, Muhammad Zeshan},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {5645--5656},
  url       = {https://mlanthology.org/iccv/2025/hashmi2025iccv-torchadapt/}
}