Zhengtong Xu 徐政通

Email: xu1703 AT purdue.edu

I'm a third-year PhD candidate at Purdue University, advised by Professor Yu She.

I received my Bachelor's degree in mechanical engineering from Huazhong University of Science and Technology.

Google Scholar  /  Twitter  /  GitHub  /  LinkedIn

News

  • [Mar. 2025] I received the Magoon Graduate Student Research Excellence Award from Purdue University, as the sole awardee from the Edwardson School of Industrial Engineering for the year.
  • [Nov. 2024] I passed my PhD preliminary exam and officially became a PhD candidate.
  • [Sep. 2024] One paper was accepted by IEEE T-RO.

Research

* indicates equal contribution.

My research aims to design learning algorithms for robotic agents, enabling them to perform everyday manipulation tasks with human-level proficiency. To this end, I am currently focusing on hierarchical multimodal robot learning. Specifically, my research explores:

1. Integrating visual, 3D, and tactile modalities for robot learning.
2. Combining differentiable optimization and learning for interpretable, reactive low-level robot policies.
3. Deploying pretrained vision-language models for high-level reasoning and planning.

LeTac-MPC: Learning Model Predictive Control for Tactile-reactive Grasping
Zhengtong Xu, Yu She
IEEE Transactions on Robotics (T-RO), 2024

arXiv / video / code / bibtex

A generalizable end-to-end tactile-reactive grasping controller with differentiable MPC, combining learning and model-based approaches.
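
As a minimal sketch of the general idea of combining learning with a differentiable controller (illustrative only, not the LeTac-MPC implementation; the problem, names, and dimensions below are assumptions), a convex control problem can be embedded as a differentiable layer so that a network shaping its cost is trained end-to-end through the solver:

    # Illustrative differentiable-QP layer (not the LeTac-MPC code).
    # Assumes PyTorch and cvxpylayers are installed.
    import cvxpy as cp
    import torch
    from cvxpylayers.torch import CvxpyLayer

    n = 4                    # hypothetical control dimension
    u = cp.Variable(n)       # control to be optimized
    q = cp.Parameter(n)      # cost term predicted by a neural network
    u_ref = cp.Parameter(n)  # tracking reference

    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref) + q @ u),
                      [u >= -1.0, u <= 1.0])
    layer = CvxpyLayer(prob, parameters=[q, u_ref], variables=[u])

    # The QP solve is differentiable, so the network producing q can be
    # trained end-to-end from a loss on the resulting control u_star.
    q_pred = torch.zeros(n, requires_grad=True)
    u_star, = layer(q_pred, torch.ones(n))
    u_star.sum().backward()  # gradients reach q_pred through the solver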

Safe Human-Robot Collaboration with Risk-tunable Control Barrier Functions
Vipul K. Sharma*, Pokuang Zhou*, Zhengtong Xu*, Yu She, S. Sivaranjani
Under Review, 2025

arXiv (soon) / video (soon)

We consider the problem of guaranteeing safety constraint satisfaction in human-robot collaboration when the human's position is uncertain. We pose this as a chance-constrained problem whose safety constraints are represented by uncertain control barrier functions.
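
As a hedged sketch of such a formulation (notation assumed here for illustration, not taken from the paper): with a control barrier function h(x) encoding the safety constraint for dynamics dx/dt = f(x) + g(x)u, and a risk tolerance \varepsilon, a chance-constrained CBF condition on the control u takes the form

    \Pr\big[\, \nabla h(x)^\top \big(f(x) + g(x)u\big) \ge -\alpha(h(x)) \,\big] \;\ge\; 1 - \varepsilon,

where the probability is taken over the uncertain human position entering h, and \alpha is a class-K function.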

VILP: Imitation Learning with Latent Video Planning
Zhengtong Xu, Qiang Qiu, Yu She
IEEE Robotics and Automation Letters (RA-L), 2025

arXiv / video / code / bibtex

VILP integrates video generation models into robot policies, enabling the representation of multimodal action distributions while reducing reliance on extensive high-quality robot action data.

UniT: Data Efficient Tactile Representation with Generalization to Unseen Objects
Zhengtong Xu, Raghava Uppuluri, Xinwei Zhang, Cael Fitch, Philip Glen Crandall, Wan Shou, Dongyi Wang, Yu She
Under Review, 2024

website / arXiv / video / code / bibtex

UniT learns a tactile representation that generalizes to unseen objects from data collected on a single simple object.

LeTO: Learning Constrained Visuomotor Policy with Differentiable Trajectory Optimization
Zhengtong Xu, Yu She
IEEE Transactions on Automation Science and Engineering (T-ASE), 2024

arXiv / video / code / bibtex

LeTO is a "gray box" method that marries the safety and interpretability of optimization-based approaches with the representational power of neural networks.

VisTac: Toward a Unified Multimodal Sensing Finger for Robotic Manipulation
Sheeraz Athar*, Gaurav Patel*, Zhengtong Xu, Qiang Qiu, Yu She
IEEE Sensors Journal, 2023

paper / video / bibtex

VisTac seamlessly combines high-resolution tactile and visual perception in a single unified device.

Awards

  • Magoon Graduate Student Research Excellence Award (sole awardee from Purdue IE), Purdue University, 2025
  • Dr. Theodore J. and Isabel M. Williams Fellowship, Purdue University, 2022
  • Chinese National Scholarship, Ministry of Education of China, 2017

Reviewer Service

  • IEEE Robotics and Automation Letters (RA-L), 2025
  • IEEE Transactions on Robotics (T-RO), 2024
  • IEEE International Conference on Robotics and Automation (ICRA), 2024

Teaching

  • Vertically Integrated Projects (VIP) - GE Robotics and Autonomous Systems, Graduate Mentor, Spring 2024 / Fall 2023 / Summer 2023
  • IE 474 - Industrial Control Systems, Teaching Assistant, Fall 2022

Website template from Jon Barron's website.