
Sizhuang He

Second-Year Ph.D. Student in Computer Science
Yale University
sizhuang.he (at) yale.edu


About Me

I’m a second-year Computer Science Ph.D. student at Yale University, advised by Dr. David van Dijk. Previously, I completed my Bachelor’s degree at the University of Michigan, Ann Arbor, majoring in Honors Mathematics and minoring in Computer Science.

I work at the intersection of machine learning and biology.

Seeking Summer 2026 Internship Opportunities

I am actively looking for summer 2026 research internship positions for Ph.D. students in generative AI, deep learning, and machine learning more broadly. I have a strong background in machine learning, especially generative AI (see my NeurIPS 2025 paper, CaDDi).
Currently, I work on discrete diffusion models over the finite symmetric group, and I develop LLM multi-agent systems for single-cell perturbation response prediction and DNA methylation data curation.
I will be attending NeurIPS 2025 in San Diego; my poster session is on the afternoon of December 5th. Feel free to stop by and chat!

Check out my CV here: Curriculum Vitae
Connect with me on LinkedIn

News

Selected Publications

  1. Cell2Sentence
    C2S-Scale scales this framework to 27 billion parameters trained on a billion-token multimodal corpus, achieving state-of-the-art predictive and generative performance for complex, multicellular analyses. Visit the project page, and read more about this work in our blog posts. — In Review

  2. CaDDi
    We introduce a novel approach to discrete diffusion models that conditions on the entire generative trajectory, thereby lifting the Markov constraint and allowing the model to revisit and improve past states. CaDDi treats standard causal language models as a special case and permits the direct reuse of pretrained LLM weights with no architectural changes. — NeurIPS 2025

  3. CaLMFlow
    We present Volterra Flow Matching, a novel generative modeling framework that reformulates ODE-based flow matching with Volterra integral equations, thereby avoiding stiffness, a core challenge of ODE-based methods. We show a connection between Volterra integral equations and causal transformers, the backbone of modern large language models, demonstrating that causal language models can be naturally extended to generative modeling over continuous data domains through the lens of Volterra Flow Matching. — arXiv

  4. Intelligence at the Edge of Chaos
    By training LLMs on elementary cellular automata rules of varying complexity, we pinpoint a “sweet spot” of data complexity that maximizes downstream predictive and reasoning abilities. Our findings suggest that exposing models to appropriately complex patterns is key to unlocking emergent intelligence. — ICLR 2025

Services

Journal Reviewer

Conference Reviewer


Powered by Jekyll and Minimal Light theme.