Yang Song

Ph.D. candidate at the Stanford AI Lab.

My name is Yang Song (宋飏, Sòng Yáng). I am a final year Ph.D. student in Computer Science at Stanford University. My advisor is Stefano Ermon. Prior to joining Stanford, I obtained my Bachelor’s degree in Mathematics and Physics from Tsinghua University, where I worked with Jun Zhu, Raquel Urtasun, and Richard Zemel.

I work on deep generative models. My goal is to develop new approaches to generative modeling that allow flexible model architectures, stable training algorithms, high-quality sample generation, and controllable synthesis. I am interested in various applications of generative models, such as solving inverse problems and mitigating security vulnerabilities in machine learning systems.
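For context, the score-based generative models featured in the publications below generate samples by following an estimate of the data score, the gradient of the log data density, with Langevin dynamics. The snippet below is only a minimal illustrative sketch of that sampling idea, not code from any of the papers listed here; the names langevin_sample and score_fn are placeholders, and in practice a trained score network would replace the toy Gaussian score.

    import numpy as np

    # Illustrative sketch (placeholder names): unadjusted Langevin dynamics
    # driven by a score function score_fn(x) ~= grad_x log p(x).
    def langevin_sample(score_fn, x0, step_size=1e-2, n_steps=5000, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            noise = rng.standard_normal(x.shape)
            # x <- x + (eps / 2) * score(x) + sqrt(eps) * z
            x = x + 0.5 * step_size * score_fn(x) + np.sqrt(step_size) * noise
        return x

    # Toy check: the score of a standard Gaussian is -x, so the chain should
    # yield roughly standard-normal samples; a learned score model would
    # stand in for this lambda in a real generative setting.
    print(langevin_sample(lambda x: -x, x0=np.zeros(2)))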

Contact: yangsong [at] cs.stanford.edu

News

Sep 18, 2021 I am on the 2021–2022 job market. Please feel free to contact me if you are interested!
Mar 31, 2021 Our paper Score-Based Generative Modeling through Stochastic Differential Equations received an Outstanding Paper Award at ICLR 2021!
Jul 1, 2020 I received the inaugural Apple Ph.D. Fellowship in AI/ML and the J.P. Morgan AI Research Ph.D. Fellowship. Thank you, Apple! Thank you, J.P. Morgan!

Selected Publications

  1. NeurIPS Spotlight
    Maximum Likelihood Training of Score-Based Diffusion Models
    Yang Song*, Conor Durkan*, Iain Murray, and Stefano Ermon
    The 35th Conference on Neural Information Processing Systems, 2021.
    Spotlight Presentation [top 3%]
  2. ICML
    Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving
    Yang Song, Chenlin Meng, Renjie Liao, and Stefano Ermon
    The 38th International Conference on Machine Learning, 2021.
  3. ICLR Oral
    Score-Based Generative Modeling through Stochastic Differential Equations
    Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole
    The 9th International Conference on Learning Representations, 2021.
    Outstanding Paper Award
  4. NeurIPS
    Improved Techniques for Training Score-Based Generative Models
    Yang Song and Stefano Ermon
    The 34th Conference on Neural Information Processing Systems, 2020.
  5. AISTATS
    Permutation Invariant Graph Generation via Score-Based Generative Modeling
    Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, and Stefano Ermon
    The 23rd International Conference on Artificial Intelligence and Statistics, 2020.
  6. NeurIPS Oral
    Generative Modeling by Estimating Gradients of the Data Distribution
    Yang Song and Stefano Ermon
    The 33rd Conference on Neural Information Processing Systems, 2019.
    Oral Presentation [top 0.5%]
  7. UAI Oral
    Sliced Score Matching: A Scalable Approach to Density and Score Estimation
    Yang Song*, Sahaj Garg*, Jiaxin Shi, and Stefano Ermon
    The 35th Conference on Uncertainty in Artificial Intelligence, 2019.
    Oral Presentation [top 8.7%]
  8. NeurIPS
    Constructing Unrestricted Adversarial Examples with Generative Models
    Yang Song, Rui Shu, Nate Kushman, and Stefano Ermon
    The 32nd Conference on Neural Information Processing Systems, 2018.
  9. ICLR
    PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
    Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman
    The 6th International Conference on Learning Representations, 2018.