Our research interests lie at the intersection of optimization and generalization for deep learning:
- leveraging theoretical and empirical understanding (e.g., of the loss landscape and training dynamics)
- to design efficient and robust methods (for both learning and inference)
- for deep learning (centralized) and collaborative deep learning (distributed and/or decentralized),
- in imperfect environments (e.g., noisy, heterogeneous, and hardware-constrained).
We also run a research seminar on Deep Learning and Optimization.
| Date | News |
| --- | --- |
| Sep 22, 2023 | One paper was accepted to NeurIPS 2023. Congratulations to Lin. |
| Jul 14, 2023 | One paper was accepted to ICCV 2023. Congratulations to Zexi. |
| Jun 1, 2023 | We have several open positions: 1-3 Ph.D. students (Fall 2024) and multiple research assistants/interns. |
| Apr 25, 2023 | Three papers were accepted to ICML 2023. Congratulations to Yongxin, Hao, Yuejiang, and Zexi. |
- **ICML 2023** · *On Pitfalls of Test-time Adaptation*. In International Conference on Machine Learning (ICML) 2023; abridged in ICLR Workshop on Trustworthy ML 2023 and ICLR Workshop on DG (spotlight) 2023.
- **ICLR 2023** · *Test-Time Robust Personalization for Federated Learning*. In International Conference on Learning Representations (ICLR) 2023.
- **ICML 2021** · *Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data*. In International Conference on Machine Learning (ICML) 2021.
- **NeurIPS 2020** · *Ensemble Distillation for Robust Model Fusion in Federated Learning*. In Advances in Neural Information Processing Systems (NeurIPS) 2020.
- **ICLR 2020** · *Decentralized Deep Learning with Arbitrary Communication Compression*. In International Conference on Learning Representations (ICLR) 2020.
- **ICLR 2020** · *Don’t Use Large Mini-batches, Use Local SGD*. In International Conference on Learning Representations (ICLR) 2020.