Biography

I’m Jierui Li, a Ph.D. student in the Department of Computer Science at the University of Texas at Austin, advised by Prof. Raymond Mooney. I obtained my bachelor’s degree in computer science from the University of Electronic Science and Technology of China.

My current research focuses on algorithmic reasoning and code generation for competition-level programming challenges with large language models. More broadly, I’m interested in Natural Language Processing (NLP).

Education

University of Electronic Science and Technology of China
Bachelor of Engineering in Computer Science and Technology
Sep 2016 ~ Jun 2020

University of Texas at Austin
Ph.D. Student in Computer Science
Sep 2021 ~ present

Experience

Meta — Menlo Park

Ph.D. SWE Intern | May 2025 – Aug 2025

  • Internal Tool: Developed an internal tool to improve the productivity of Meta’s content designers and employees working on Meta-to-user content.
    • Achieved 99% recall compared to the original method.
    • Improved pipeline speed by 200×.
    • Adopted by the Central Production Optimization Team.

Salesforce — Singapore

Ph.D. Intern @ AI Research | May 2024 – Aug 2024

  • Code Generation: Improved code generation pipeline with agent-guided tree search.
    • Developed CodeTree, a framework for LLM agents to efficiently explore the search space during code generation.
    • Outperformed the then-SoTA model (o1) by +1.9% while using only 23% of the tokens.
    • Patent pending.

Grammarly — San Francisco

Applied Research Intern | May 2023 – Aug 2023

  • Detecting Self-Contradictions in Documents:
    • Introduced the task of document-level self-contradiction detection.
    • Built an annotated dataset spanning multiple domains, document lengths, and contradiction types.
    • Evaluated state-of-the-art LLMs using newly designed, LLM-oriented evaluation metrics.

SUTD StatNLP Lab — Singapore

Research Assistant | Jan 2021 – Aug 2021

  • Structured Math Word Problem Solving:
    • Proposed a bottom-up solver that deductively reasons through and solves math word problems (MWPs).
    • Parsed MWPs into specially structured formulations to improve deductive reasoning.

Tencent AI Lab — Shenzhen

Research Intern | Sep 2019 – Jun 2020

  • Evaluating Explanation Methods for NMT:
    • Proposed a simulation-based automatic evaluation method for explanation methods in NMT.
  • Attention’s Interpretability:
    • Analyzed the interpretability of attention mechanisms in transformer models.

Papers

  • AlgoSimBench: Identifying Algorithmically Similar Problems for Competitive Programming
    Jierui Li and Raymond Mooney
    Under Review

  • CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models
    Jierui Li, Hung Le, Yingbo Zhou, Caiming Xiong, Silvio Savarese, and Doyen Sahoo
    NAACL 2025

  • Distilling Algorithmic Reasoning from LLMs via Explaining Solution Programs
    Jierui Li and Raymond Mooney
    NLRSE 2024

  • ContraDoc: Understanding Self-Contradictions in Documents with Large Language Models
    Jierui Li, Vipul Raheja, and Dhruv Kumar
    NAACL 2024

  • Explaining Competitive-Level Programming Solutions using LLMs
    Jierui Li, Szymon Tworkowski, Yingying Wu, and Raymond Mooney
    NLRSE 2023

  • Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction
    Zhanming Jie, Jierui Li, and Wei Lu
    ACL 2022

  • Evaluating Explanation Methods for Neural Machine Translation
    Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, and Shuming Shi
    ACL 2020

  • Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions
    Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian Dai, and Dongxiang Zhang
    ACL 2019