DiffOG: Differentiable Policy Trajectory Optimization with Generalizability

Purdue University

Imitation learning-based visuomotor policies excel at manipulation tasks but often produce suboptimal action trajectories compared to model-based methods. Directly mapping camera data to actions via neural networks can result in jerky motions and difficulty in satisfying critical constraints, compromising safety and robustness in real-world deployment. For tasks that require high robustness or strict adherence to constraints, ensuring trajectory quality is crucial. However, the lack of interpretability in neural networks makes it challenging to generate constraint-compliant actions in a controlled manner. This paper introduces differentiable policy trajectory optimization with generalizability (DiffOG), a learning-based trajectory optimization framework designed to enhance visuomotor policies. By leveraging the proposed differentiable formulation of trajectory optimization with a transformer, DiffOG seamlessly integrates policies with a generalizable optimization layer. DiffOG refines action trajectories to be smoother and more constraint-compliant while maintaining alignment with the original demonstration distribution, thus avoiding degradation in policy performance.
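As a rough illustration of what a differentiable trajectory-optimization layer looks like, the sketch below refines a policy's raw action chunk by solving a small quadratic smoothing problem in closed form, so gradients flow through the refinement back to the policy. This is a simplified stand-in, not the DiffOG implementation: the function name smooth_trajectory, the single scalar weight, and the omission of hard constraints and the transformer-predicted cost are assumptions made for brevity.

      import torch

      def smooth_trajectory(actions: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
          # Illustrative differentiable smoothing layer (NOT the DiffOG implementation).
          # actions: (T, D) raw action chunk predicted by the policy.
          # weight:  scalar trade-off between fidelity and smoothness; in DiffOG this
          #          role is played by learned, transformer-parameterized cost terms
          #          inside a constrained optimization, which we omit here.
          # Solves  min_x ||x - actions||^2 + weight * ||D x||^2  in closed form,
          # where D is the first-order finite-difference operator, so gradients
          # flow through the refinement to both `actions` and `weight`.
          T = actions.shape[0]
          D = torch.zeros(T - 1, T, dtype=actions.dtype, device=actions.device)
          idx = torch.arange(T - 1)
          D[idx, idx] = -1.0
          D[idx, idx + 1] = 1.0
          A = torch.eye(T, dtype=actions.dtype, device=actions.device) + weight * D.T @ D
          return torch.linalg.solve(A, actions)

      # Usage: refine a 16-step, 7-DoF action chunk; gradients reach the policy output.
      raw = torch.randn(16, 7, requires_grad=True)
      w = torch.tensor(5.0, requires_grad=True)
      refined = smooth_trajectory(raw, w)
      refined.sum().backward()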

Simulation Benchmark

We benchmarked DiffOG and several baselines across 11 simulation tasks. DiffOG consistently improves action trajectories, making them smoother and more constrained, while preserving alignment with the original demonstration distribution, thereby preventing policy performance degradation.

Tasks: Lift, Can, Square, Tool Hang, Transport, Push-T, Disassemble, Pick Place Wall, Shelf Place, Stick Pull, Stick Push.

Real-World Move the Stack Task

The Move the Stack task requires the robot to stably grasp, transport, and place both a cup and a spoon. This task is highly sensitive to trajectory quality: jerky or unstable motions can easily cause the spoon to fall during transit.

Illustration of the Task

Rollouts of DiffOG

Compared to the policy with DiffOG, the base policy without trajectory optimization exhibits less constrained motions, leading to a lower task success rate. The baseline methods (constraint clipping and penalty-based optimization) do enforce motion constraints during trajectory optimization, but they often push the optimized trajectories away from the original human demonstrations, which in turn degrades policy performance.
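For concreteness, here is a rough sketch of what these two baseline refinements could look like for a single action chunk. The function names, the per-step bound max_step, and the gradient-descent penalty solver are illustrative assumptions, not the paper's actual implementations.

      import torch

      def clip_increments(actions: torch.Tensor, max_step: float) -> torch.Tensor:
          # Constraint-clipping baseline (illustrative sketch): clamp each per-step
          # increment to the bound, then re-accumulate from the first action.
          # The bound is satisfied, but the result can drift from the demonstration.
          deltas = torch.diff(actions, dim=0).clamp(-max_step, max_step)
          return torch.cat([actions[:1], actions[:1] + torch.cumsum(deltas, dim=0)], dim=0)

      def penalty_refine(actions: torch.Tensor, max_step: float,
                         penalty: float = 10.0, iters: int = 200, lr: float = 1e-2) -> torch.Tensor:
          # Penalty-based baseline (illustrative sketch): trade off closeness to the
          # raw trajectory against a soft penalty on increments exceeding the bound.
          # A fixed, hand-tuned penalty can over-smooth and shift the trajectory
          # away from the demonstrated motion.
          x = actions.detach().clone().requires_grad_(True)
          opt = torch.optim.Adam([x], lr=lr)
          for _ in range(iters):
              opt.zero_grad()
              violation = (torch.diff(x, dim=0).abs() - max_step).clamp(min=0.0)
              loss = (x - actions.detach()).pow(2).sum() + penalty * violation.pow(2).sum()
              loss.backward()
              opt.step()
          return x.detach()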

Comparison with the Base Policy

Typical Failure Cases of Baselines

Real-World Arrange Desk Task

Arrange Desk is a dual-arm, long-horizon task. DiffOG effectively optimizes such complex trajectories by producing smoother and more constrained action sequences, while preserving distributional alignment between the policy outputs and the human demonstration data. This ensures that the policy performance remains unaffected by the optimization process.

Illustration of the Task

Rollouts of DiffOG

Baseline methods (constraint clipping and penalty-based optimization) enforce motion constraints during trajectory optimization. However, this often leads to deviations from the original human demonstrations, ultimately degrading policy performance. In this task, such degradation commonly manifests as failure cases where the robot fails to successfully grasp the bowl.

Typical Failure Cases of Constraint Clipping

Typical Failure Cases of Penalty-Based Optimization

BibTeX


      @misc{xu2025diffog,
        title={{DiffOG}: Differentiable Policy Trajectory Optimization with Generalizability},
        author={Zhengtong Xu and Zichen Miao and Qiang Qiu and Zhe Zhang and Yu She},
        year={2025},
        eprint={2504.13807},
        archivePrefix={arXiv},
        primaryClass={cs.RO},
        url={https://arxiv.org/abs/2504.13807},
      }