[Mar. 2025] I received the Magoon Graduate Student Research Excellence Award from Purdue University, as the sole awardee from the Edwardson School of Industrial Engineering for the year.
[Nov. 2024] I passed my PhD preliminary exam and officially became a PhD candidate.
[Sep. 2024] One paper was accepted by IEEE T-RO.
Research
* indicates equal contribution.
My research aims to design learning algorithms for robotic agents, enabling them to perform everyday manipulation tasks with human-level proficiency. To this end, I am currently focusing on hierarchical multimodal robot learning.
Specifically, my research explores:
1. Integrating visual, 3D, and tactile modalities for robot learning.
2. Combining differentiable optimization and learning for interpretable, reactive low-level robot policies.
3. Deploying pretrained vision-language models for high-level reasoning and planning.
LeTac-MPC: Learning Model Predictive Control for Tactile-Reactive Grasping
Zhengtong Xu, Yu She
IEEE Transactions on Robotics (T-RO), 2024
@article{xu2024letac,
author={Xu, Zhengtong and She, Yu},
journal={IEEE Transactions on Robotics},
title={{LeTac-MPC}: Learning Model Predictive Control for Tactile-Reactive Grasping},
year={2024},
volume={40},
pages={4376--4395},
doi={10.1109/TRO.2024.3463470}
}
A generalizable end-to-end tactile-reactive grasping controller with differentiable MPC, combining learning and model-based approaches.
UniT: Data Efficient Tactile Representation with Generalization to Unseen Objects
Zhengtong Xu, Raghava Uppuluri, Xinwei Zhang, Cael Fitch, Philip Glen Crandall, Wan Shou, Dongyi Wang, Yu She
IEEE Robotics and Automation Letters (RA-L), 2025
@article{xu2024unit,
title={{UniT}: Unified Tactile Representation for Robot Learning},
author={Xu, Zhengtong and Uppuluri, Raghava and Zhang, Xinwei and Fitch, Cael and Crandall, Philip Glen and Shou, Wan and Wang, Dongyi and She, Yu},
journal={arXiv preprint arXiv:2408.06481},
year={2024}
}
UniT learns a generalizable tactile representation from only a single simple object.
Safe Human-Robot Collaboration with Risk-tunable Control Barrier Functions
Vipul K. Sharma*, Pokuang Zhou*, Zhengtong Xu*, Yu She, S. Sivaranjani
Under Review, 2025
We consider the problem of guaranteeing safety-constraint satisfaction in human-robot collaboration under uncertain human positions. We pose this as a chance-constrained problem with safety constraints represented by uncertain control barrier functions.
VILP: Imitation Learning with Latent Video Planning
Zhengtong Xu, Qiang Qiu, Yu She
IEEE Robotics and Automation Letters (RA-L), 2025
VILP integrates a video generation model into the policy, enabling the representation of multimodal action distributions while reducing reliance on extensive high-quality robot action data.
LeTO: Learning Constrained Visuomotor Policy with Differentiable Trajectory Optimization
Zhengtong Xu, Yu She
IEEE Transactions on Automation Science and Engineering (T-ASE), 2024
@article{athar2023vistac,
title={{VisTac} Towards a Unified Multi-Modal Sensing Finger for Robotic Manipulation},
author={Athar, Sheeraz and Patel, Gaurav and Xu, Zhengtong and Qiu, Qiang and She, Yu},
journal={IEEE Sensors Journal},
year={2023},
publisher={IEEE}
}