My name is Yang Song (宋飏, Sòng Yáng). I am a final-year Ph.D. student in Computer Science at Stanford University. My advisor is Stefano Ermon. Prior to joining Stanford, I obtained my Bachelor's degree in Mathematics and Physics from Tsinghua University, where I worked with Jun Zhu, Raquel Urtasun, and Richard Zemel.
I work on deep generative models. My goal is to develop new approaches to generative modeling that allow flexible model architectures, stable training algorithms, high-quality sample generation, and controllable synthesis. I am interested in various applications of generative models, such as solving inverse problems and mitigating security vulnerabilities of machine learning systems.
Contact: yangsong [at] cs.stanford.edu
- **Sep 18, 2021:** I am on the 2021–2022 job market. Please feel free to contact me if you are interested!
- **Mar 31, 2021:** Our paper *Score-Based Generative Modeling through Stochastic Differential Equations* received an Outstanding Paper Award at ICLR 2021!
- **Jul 1, 2020:** I received the inaugural Apple Ph.D. Fellowship in AI/ML and the J.P. Morgan AI Research Ph.D. Fellowship. Thank you Apple! Thank you J.P. Morgan!
- **[NeurIPS Spotlight]** Maximum Likelihood Training of Score-Based Diffusion Models. The 35th Conference on Neural Information Processing Systems, 2021. Spotlight Presentation [top 3%].
- **[ICML]** Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving. The 38th International Conference on Machine Learning, 2021.
- **[ICLR Oral, Award]** Score-Based Generative Modeling through Stochastic Differential Equations. The 9th International Conference on Learning Representations, 2021. Outstanding Paper Award.
- **[NeurIPS]** Improved Techniques for Training Score-Based Generative Models. The 34th Conference on Neural Information Processing Systems, 2020.
- **[AISTATS]** Permutation Invariant Graph Generation via Score-Based Generative Modeling. The 23rd International Conference on Artificial Intelligence and Statistics, 2020.
- **[NeurIPS Oral]** Generative Modeling by Estimating Gradients of the Data Distribution. The 33rd Conference on Neural Information Processing Systems, 2019. Oral Presentation [top 0.5%].
- **[UAI Oral]** Sliced Score Matching: A Scalable Approach to Density and Score Estimation. The 35th Conference on Uncertainty in Artificial Intelligence, 2019. Oral Presentation [top 8.7%].
- **[NeurIPS]** Constructing Unrestricted Adversarial Examples with Generative Models. The 32nd Conference on Neural Information Processing Systems, 2018.
- **[ICLR]** PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples. The 6th International Conference on Learning Representations, 2018.