Improving First-Order Optimization Algorithms (Student Abstract)
Abstract
This paper presents a simple and intuitive technique for accelerating the convergence of first-order optimization algorithms. The proposed method modifies the update rule based on the variation of the gradient direction and on the previous step taken during training. Experimental results show that the technique has the potential to significantly improve the performance of existing first-order optimization algorithms.
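The abstract does not spell out the modified update rule, so the sketch below only illustrates the general idea: scale each coordinate's step up when the new gradient direction agrees with the previous step, and down when it flips, a Rprop-style sign-agreement heuristic. The function name sign_agreement_gd, the boost and damping factors, and the quadratic test objective are hypothetical choices for illustration, not the authors' exact method.

import numpy as np

def sign_agreement_gd(grad_fn, x0, lr=0.1, boost=1.2, damp=0.5, steps=100):
    """Toy gradient descent whose step is rescaled element-wise
    according to whether the current descent direction (-gradient)
    agrees with the previous step (hypothetical rule, not the
    paper's exact formula)."""
    x = np.asarray(x0, dtype=float)
    prev_step = np.zeros_like(x)
    scale = np.ones_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        # Gradient descent moves along -g, so the direction is
        # "unchanged" when -g and the previous step share signs.
        agree = np.sign(-g) == np.sign(prev_step)
        scale = np.where(agree, scale * boost, scale * damp)
        step = -lr * scale * g
        x = x + step
        prev_step = step
    return x

# Usage on a simple quadratic f(x) = ||x||^2 / 2, whose gradient is x.
x_star = sign_agreement_gd(lambda x: x, x0=[5.0, -3.0])
print(x_star)  # should be close to [0, 0]

On this toy quadratic, damping kicks in whenever a coordinate overshoots and its gradient direction flips, which matches the kind of "variation of the direction of the gradient" signal the abstract describes.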
Cite
Text
Tato and Nkambou. "Improving First-Order Optimization Algorithms (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I10.7240
Markdown
[Tato and Nkambou. "Improving First-Order Optimization Algorithms (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/tato2020aaai-improving/) doi:10.1609/AAAI.V34I10.7240
BibTeX
@inproceedings{tato2020aaai-improving,
  title     = {{Improving First-Order Optimization Algorithms (Student Abstract)}},
  author    = {Tato, Ange and Nkambou, Roger},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {13935--13936},
  doi       = {10.1609/AAAI.V34I10.7240},
  url       = {https://mlanthology.org/aaai/2020/tato2020aaai-improving/}
}