InsActor: Instruction-driven Physics-based Characters

¹S-Lab, Nanyang Technological University · ²National University of Singapore · ³Dyson Robot Learning Lab


We present InsActor, a principled generative framework that leverages recent advances in diffusion-based human motion models to produce instruction-driven animations of physics-based characters. InsActor captures the complex relationships between high-level human instructions and character motions by employing diffusion policies for flexibly conditioned motion planning. To overcome invalid states and infeasible state transitions in diffusion plans, InsActor discovers low-level skills and maps plans to sequences of latent skills in a compact latent space.
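The two-level design above can be sketched in a few lines. All names below are illustrative stand-ins, not the paper's actual API: a "diffusion planner" maps an instruction to a state plan, each planned transition is encoded into a compact latent skill, and a low-level policy decodes skills into physically valid next states.

```python
import random

def diffusion_planner(instruction, horizon=4, state_dim=3):
    """Stand-in for the diffusion model: instruction -> state plan."""
    rng = random.Random(len(instruction))  # toy determinism, not real conditioning
    return [[rng.uniform(-1.0, 1.0) for _ in range(state_dim)]
            for _ in range(horizon)]

def encode_skill(state, target):
    """Compress a planned transition into a latent skill (illustrative)."""
    return [t - s for s, t in zip(state, target)]

def low_level_policy(state, skill):
    """Decode a skill into the next state; clamping stands in for the
    physics simulator rejecting infeasible transitions."""
    return [max(-1.0, min(1.0, s + z)) for s, z in zip(state, skill)]

def insactor_rollout(instruction):
    """High-level plan -> latent skills -> executed trajectory."""
    plan = diffusion_planner(instruction)
    state, trajectory = plan[0], [plan[0]]
    for target in plan[1:]:
        state = low_level_policy(state, encode_skill(state, target))
        trajectory.append(state)
    return trajectory

traj = insactor_rollout("walk forward then turn left")
```

The key design choice this mirrors is that the planner never has to produce physically feasible states itself; feasibility is enforced by the low-level stage.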

Comparative Study

In contrast to DReCon, which fails to comprehend high-level instructions, and PADL, which struggles to generate reliable control, InsActor successfully executes the given commands.

Waypoint Heading

Compared with DReCon, InsActor reaches the waypoint as planned without falling, demonstrating its flexibility and robustness.

Multiple Waypoints

InsActor is also capable of following multiple waypoints, which is essential for downstream tasks.
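Multi-waypoint following reduces to consuming targets in sequence. A hedged sketch of the idea, with illustrative names rather than InsActor's actual interface: the character advances a fixed fraction toward the current waypoint until within a tolerance, then switches to the next.

```python
def follow_waypoints(start, waypoints, step=0.25, tol=0.05):
    """Greedily steer toward each waypoint in order (toy controller)."""
    pos = list(start)
    path = [tuple(pos)]
    for wp in waypoints:
        # Move a fixed fraction of the remaining error each step;
        # the error shrinks geometrically, so the loop terminates.
        while max(abs(w - p) for w, p in zip(wp, pos)) > tol:
            pos = [p + step * (w - p) for p, w in zip(pos, wp)]
            path.append(tuple(pos))
    return path

path = follow_waypoints((0.0, 0.0), [(1.0, 0.0), (1.0, 1.0)])
```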

Real-time Demo

InsActor can be interacted with in real time.

Random Sampling of Low-level Skills

The low-level policy learns a compact skill space through differentiable physics; random sampling in this skill space yields natural motions.
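A minimal sketch of what sampling the skill space looks like. We assume a unit Gaussian prior over latent skills, which is common for VAE-style skill embeddings; the decoder here is a toy stand-in, not the paper's trained network.

```python
import random

def sample_skill(dim=8, seed=None):
    """Draw a latent skill from an assumed unit Gaussian prior."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def decode_skill(skill, frames=3):
    """Toy decoder: scale the latent over a few frames of 'motion'."""
    return [[0.1 * (i + 1) * z for z in skill] for i in range(frames)]

motion = decode_skill(sample_skill(seed=0))
```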


Robustness to Perturbations

InsActor is robust to external perturbations, demonstrating its adaptability and resilience under varying conditions. We launch randomly generated boxes at the humanoid characters, simulating the effect of unexpected external forces on their movements.

InsActor in a City Scene

We hope InsActor can serve as a general baseline that can be extended to human-scene and human-object interactions.


Citation

@article{ren2023insactor,
  author    = {Ren, Jiawei and Zhang, Mingyuan and Yu, Cunjun and Ma, Xiao and Pan, Liang and Liu, Ziwei},
  title     = {InsActor: Instruction-driven Physics-based Characters},
  journal   = {NeurIPS},
  year      = {2023},
}