LINs Lab

Welcome to the website of the Learning and INference Systems (LINs) Lab at Westlake University!

The research interests of the lab lie at the intersection of optimization and generalization for deep learning:

  • leveraging theoretical and empirical understanding (e.g., loss landscapes and training dynamics)
  • to design efficient and robust methods (for both learning and inference)
  • for deep learning (centralized) and collaborative deep learning (distributed and/or decentralized),
  • in imperfect environments (e.g., noisy, heterogeneous, and hardware-constrained).

News

Oct 18, 2022 Open positions: We have one joint postdoc position with Dr. Sebastian U. Stich.
Jun 12, 2022 Open positions: We have several open positions: 1–3 postdoc researchers, 1–3 PhD students, and multiple research assistants/interns.

Selected publications

  1. Preprint
    Test-Time Robust Personalization for Federated Learning
    arXiv preprint arXiv:2205.10920, 2022
  2. ICML 2021
    Quasi-global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data
    In International Conference on Machine Learning (ICML), 2021
  3. ICML 2021
    Consensus Control for Decentralized Deep Learning
    In International Conference on Machine Learning (ICML), 2021
  4. NeurIPS 2020
    Ensemble Distillation for Robust Model Fusion in Federated Learning
    In Advances in Neural Information Processing Systems (NeurIPS), 2020
  5. EMNLP 2020
    Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
    In Empirical Methods in Natural Language Processing (EMNLP), 2020
  6. ICML 2020
    Extrapolation for Large-batch Training in Deep Learning
    In International Conference on Machine Learning (ICML), 2020
  7. ICLR 2020
    Dynamic Model Pruning with Feedback
    In International Conference on Learning Representations (ICLR), 2020
  8. ICLR 2020
    Decentralized Deep Learning with Arbitrary Communication Compression
    In International Conference on Learning Representations (ICLR), 2020
  9. ICLR 2020
    Don’t Use Large Mini-batches, Use Local SGD
    In International Conference on Learning Representations (ICLR), 2020