
Yuanhe Zhang


My name is Yuanhe Zhang ('Russell' for short). I research the mathematical reasoning of LLMs, with a focus on autoformalization and efficient post-training, under the supervision of Dr. Fanghui Liu, Prof. Chenlei Leng, and Dr. Thomas Berrett. I hold the Chancellor's International Scholarship.

My personal webpage is https://yuanhez.github.io.

Publications:

Zhang, Y., Kuzborskij, I., Lee, J. D., Leng, C., & Liu, F. (2025). DAG-Math: Graph-Guided Mathematical Reasoning in LLMs. arXiv:2510.19842. International Conference on Learning Representations (ICLR), 2026.

Zhang, Y., Liu, F., & Chen, Y. (2025). LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently. arXiv:2502.01235. (ICML 2025 Oral presentation, top 120/12107 ≈ 0.99% of submissions)

Huk, D., Zhang, Y., Dutta, R., & Steel, M. (2024). Quasi-Bayes meets Vines. Conference on Neural Information Processing Systems (NeurIPS), 2024. Available at https://openreview.net/forum?id=gcpeEg88R3.

Services:

Program Associate, Reviewer @ NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability (FITML).

Teaching:

24/25 & 25/26 Term 2: CS416 Optimisation
