Scalable and Trustworthy Learning in Heterogeneous Networks
Abstract
To build a responsible data economy and protect data ownership, it is crucial to enable learning models from separate, heterogeneous data sources without centralization. For example, federated learning (FL) aims to train models across massive numbers of remote devices or isolated organizations while keeping user data local. However, federated learning can face critical practical issues such as scalability, noisy samples, biased learning systems or procedures, and privacy leakage. At the intersection of optimization, trustworthy (fair, robust, and private) ML, and learning in heterogeneous environments, my research aims to support scalable and responsible data sharing to collectively build intelligent models.
Cite
Text
Li. "Scalable and Trustworthy Learning in Heterogeneous Networks." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I27.35110
Markdown
[Li. "Scalable and Trustworthy Learning in Heterogeneous Networks." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/li2025aaai-scalable/) doi:10.1609/AAAI.V39I27.35110
BibTeX
@inproceedings{li2025aaai-scalable,
title = {{Scalable and Trustworthy Learning in Heterogeneous Networks}},
author = {Li, Tian},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {28715},
doi = {10.1609/AAAI.V39I27.35110},
url = {https://mlanthology.org/aaai/2025/li2025aaai-scalable/}
}