Haowei Lin


E-mail: linhaowei (at) pku (dot) edu (dot) cn

I am Haowei Lin, a third-year Ph.D. student at the Institute for Artificial Intelligence, Peking University, co-advised by Prof. Yitao Liang, Prof. Di He, and Prof. Jianzhu Ma.

I received my Bachelor’s degree in Artificial Intelligence from Yuanpei College, Peking University, where I was fortunate to work with Prof. Bing Liu and Dr. Zixuan Ke on OOD detection, continual learning, and language models. Our work was the first to propose the task of continual pre-training for LLMs (EMNLP 2022, ICLR 2023) and the first to apply OOD detection methods to continual learning (EMNLP 2023, ICLR 2024).

I work on unified cross-modal generative foundation models (GFMs) for scientific discovery, which is the focus of my current research.

Outside of my professional interests, I enjoy engaging in music-related activities, including singing and playing the guitar.

If you’re interested in working with me on GFMs or AI Scientist research, please contact me via e-mail.

news

Dec 07, 2025 Glad to launch a new blog on Scaling Law Discovery (SLD) with OpenEvolve, in collaboration with Algorithmic SuperIntelligence Labs. We hope our work on SLD helps advance foundation model development and push the boundaries of the AI Scientist. The code, dataset, benchmarks, and leaderboard are all publicly available.
Oct 21, 2025 Excited to be a core contributor to the adapters in Terminal-Bench, which convert agentic benchmarks (e.g., SWE-related ones) into a unified t-bench format! Happy to see OAI, GDM, Anthropic, DeepSeek, and others using T-Bench for model evaluation in their model releases.
Oct 20, 2025 Our paper on AI for scientific discovery was published in Nature Machine Intelligence as a cover paper!

selected publications

  1. Nat. Mach. Intell.
    A Neural Symbolic Model for Space Physics
    Jie Ying*, Haowei Lin*, Chao Yue*, and 7 more authors
    Nature Machine Intelligence (Cover Paper).
  2. ICML Spotlight
    MCU: An Evaluation Framework for Open-Ended Game Agents
    Xinyue Zheng*, Haowei Lin*, Kaichen He, and 5 more authors
    In The Forty-second International Conference on Machine Learning (ICML 2025).
  3. ICLR
    TFG-Flow: Training-free Guidance in Multimodal Generative Flow
    Haowei Lin*, Shanda Li*, Haotian Ye, and 4 more authors
    In The Thirteenth International Conference on Learning Representations (ICLR 2025).
  4. NeurIPS Spotlight
    TFG: Unified Training-Free Guidance for Diffusion Models
    Haotian Ye*, Haowei Lin*, Jiaqi Han*, and 6 more authors
    In Advances in Neural Information Processing Systems 37 (NeurIPS 2024).
  5. ICML
    Selecting Large Language Model to Fine-tune via Rectified Scaling Law
    Haowei Lin*, Baizhou Huang*, Haotian Ye*, and 7 more authors
    In The Forty-first International Conference on Machine Learning (ICML 2024).
  6. ICLR
    Class Incremental Learning via Likelihood Ratio-Based Task Prediction
    Haowei Lin, Yijia Shao, Weinan Qian, and 3 more authors
    In The Twelfth International Conference on Learning Representations (ICLR 2024).
  7. ICLR
    Continual Pre-Training of Language Models
    Zixuan Ke*, Yijia Shao*, Haowei Lin*, and 3 more authors
    In The Eleventh International Conference on Learning Representations (ICLR 2023).