Real-Time Portrait Stylization on the Edge
Abstract
In this work we demonstrate real-time portrait stylization, specifically translating self-portraits into cartoon or anime styles, on mobile devices. We propose a latency-driven differentiable architecture search method that maintains realistic generative quality. With our framework, we obtain a 10× reduction in the generative model's computation and achieve real-time video stylization on an off-the-shelf smartphone using its mobile GPU.
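The abstract's "latency-driven" differentiable architecture search is commonly realized by adding an expected-latency term to the search objective: each candidate operation gets a measured on-device latency, and the architecture distribution's expected latency is penalized alongside the task loss. The sketch below illustrates that general idea only; the latency values, `lam` weight, and variable names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over architecture parameters.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-op latencies (ms) for one searchable layer,
# e.g. from an on-device latency lookup table (assumed values).
op_latencies = np.array([1.2, 3.5, 0.8, 2.1])

# Learnable architecture parameters; softmax yields a differentiable
# distribution over candidate ops.
alpha = np.array([0.5, 0.1, 0.3, 0.1])
p = softmax(alpha)

# Expected latency of the layer under the current distribution:
# differentiable w.r.t. alpha, so it can be minimized by gradient descent.
expected_latency = float(p @ op_latencies)

lam = 0.1        # latency/quality trade-off weight (illustrative)
task_loss = 0.42 # placeholder for the generative task loss
total_loss = task_loss + lam * expected_latency
```

Because the penalty is a weighted average of per-op latencies, lowering it pushes probability mass toward cheaper operations while the task loss preserves output quality.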
Cite
Text
Li et al. "Real-Time Portrait Stylization on the Edge." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/856
Markdown
[Li et al. "Real-Time Portrait Stylization on the Edge." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/li2022ijcai-real/) doi:10.24963/IJCAI.2022/856
BibTeX
@inproceedings{li2022ijcai-real,
title = {{Real-Time Portrait Stylization on the Edge}},
author = {Li, Yanyu and Shen, Xuan and Yuan, Geng and Guan, Jiexiong and Niu, Wei and Tang, Hao and Ren, Bin and Wang, Yanzhi},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {5928--5931},
doi = {10.24963/IJCAI.2022/856},
url = {https://mlanthology.org/ijcai/2022/li2022ijcai-real/}
}