Welcome to the website of the Learning and INference Systems (LINs) Lab at Westlake University!

Our research interests lie at the intersection of optimization and generalization in deep learning:

  • leveraging theoretical/empirical understanding (e.g., loss landscapes and training dynamics)
  • to design efficient & robust methods (for both learning and inference)
  • for deep learning (centralized) and collaborative deep learning (distributed and/or decentralized),
  • under imperfect environments (e.g., noisy, heterogeneous, and hardware-constrained).

News

Feb 21, 2026 Several papers from our group were accepted at CVPR 2026:
  • Rethinking UMM Visual Generation: Masked Modeling for Efficient Image-Only Pre-training
  • Dual-Granularity Memory for Efficient Video Generation
  • Exploring Spatial Intelligence from a Generative Perspective
  • Eliciting Complex Spatial Reasoning in MLLMs through Wide-Baseline Matching
Jan 26, 2026 Several papers from our group were accepted at ICLR 2026. Congratulations to our PhD students Peng SUN, Zhenglin CHENG, and Siyuan LU, and to our intern Fulin LIN.
Dec 9, 2025 We are excited to release TwinFlow (arXiv and code), a simple, flexible, and memory-efficient framework for one-step generation on large-scale models. The project garnered 200+ GitHub stars within days of release!
Sep 18, 2025 Our CPathAgent paper was accepted at NeurIPS 2025. Congratulations to Yuxuan.
Feb 27, 2025 Our paper on a Unified Multimodal Foundation Model for Computational Pathology was accepted at CVPR 2025. Congratulations to Yuxuan.

Selected publications

  1. Preprint
    Unified Continuous Generative Models
    Peng Sun, Yi Jiang, and Tao Lin
    arXiv preprint (arXiv:2505.07447)
  2. ICLR 2026
    TwinFlow: Realizing One-step Generation on Large Models with Self-adversarial Flows
    In International Conference on Learning Representations (ICLR), 2026
  3. ICLR 2025
    DeFT: Decoding with Flash Tree-Attention for Efficient Tree-structured LLM Inference
    In International Conference on Learning Representations (ICLR), Spotlight, abridged in ICLR workshop AGI (Oral), 2025
  4. NeurIPS 2024
    Efficiency for Free: Ideal Data Are Transportable Representations
    Peng Sun, Yi Jiang, and Tao Lin
    In Advances in Neural Information Processing Systems (NeurIPS), 2024
  5. ICLR 2020
    Don’t Use Large Mini-batches, Use Local SGD
    In International Conference on Learning Representations (ICLR), 2020