I’m Jierui Li, a Ph.D. student in the Department of Computer Science at the University of Texas at Austin, advised by Prof. Raymond Mooney. I obtained my bachelor’s degree in computer science from the University of Electronic Science and Technology of China.

My current research focuses on algorithmic reasoning and code generation for competitive-level programming challenges with large language models. I am broadly interested in Natural Language Processing (NLP).


University of Electronic Science and Technology of China
Bachelor of Engineering in Computer Science and Technology
Sep 2016 ~ Jun 2020

University of Texas at Austin
Ph.D. Student in Computer Science
Sep 2021 ~ present


Center for Future Media @UESTC
Student Oct 2018 - Jun 2019
Math Word Problems: Explored a variety of methods on a single task, using math word problems as a window into neural networks’ capacity for reasoning.

Tencent AI Lab @Shenzhen
Research Intern Sep 2019 - Jun 2020
Research Intern at the NLP Center of Tencent AI Lab, supervised by Dr. Lemao Liu.
Evaluating Explanation Methods for NMT: Proposed a simulation-based automatic evaluation method for explanation methods in Neural Machine Translation.
Attention’s Interpretability: Analyzed the interpretability of the attention mechanism, following a line of prior work, and discussed the seemingly contradictory conclusions drawn in previous papers.

SUTD StatNLP Lab @Singapore
Research Assistant Jan 2021 - Aug 2021
Research Assistant supervised by Prof. Wei Lu at the Singapore University of Technology and Design.
Structured Math Word Problem Solving: Proposed a parser that maps math word problems into specially structured formulations.

Grammarly Inc @San Francisco
Applied Research Intern May 2023 - Aug 2023
Self-Contradiction Detection in Documents: Highlighted the task of detecting self-contradictions in documents and proposed an annotated dataset spanning multiple domains, document lengths, self-contradiction types, and scopes; evaluated SOTA LLMs on the dataset using evaluation metrics designed for LLMs.


Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions In ACL 2019
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian Dai and Dongxiang Zhang

Evaluating Explanation Methods for Neural Machine Translation In ACL 2020
Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang and Shuming Shi

Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction In ACL 2022
Zhanming Jie, Jierui Li and Wei Lu

Explaining Competitive-Level Programming Solutions using LLMs Presented at NLRSE 2023
Jierui Li, Szymon Tworkowski, Yingying Wu and Raymond Mooney

ContraDoc: Understanding Self-Contradictions in Documents with Large Language Models Preprint
Jierui Li, Vipul Raheja and Dhruv Kumar