In contrast to DReCon, which fails to follow high-level instructions, and PADL, which struggles to generate reliable control, InsActor successfully executes the given commands.
Compared with DReCon, InsActor reaches the waypoint as planned without falling, demonstrating its flexibility and robustness.
InsActor is also capable of following multiple waypoints, which is essential for downstream tasks.
InsActor can be interacted with in real time.
The low-level policy learns a compact skill space with differentiable physics; random samples from the skill space decode into natural motions.
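The idea of sampling in a learned skill space can be sketched as follows. This is a minimal toy illustration, not InsActor's actual implementation: the decoder weights, dimensions, and function names are all assumptions standing in for a trained low-level policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
STATE_DIM, SKILL_DIM, ACTION_DIM = 8, 4, 6

# Stand-in for a trained low-level decoder: maps (state, skill latent) -> action.
W = rng.normal(scale=0.1, size=(STATE_DIM + SKILL_DIM, ACTION_DIM))

def decode(state, z):
    """Decode a skill latent z into a bounded joint-space action for `state`."""
    return np.tanh(np.concatenate([state, z]) @ W)

state = np.zeros(STATE_DIM)
z = rng.standard_normal(SKILL_DIM)  # random sample in the skill space
action = decode(state, z)           # a plausible low-level action
```

In this sketch, drawing `z` from a standard normal prior and decoding it plays the role of "random sampling in the skill space"; in the real system the decoder is trained so that such samples yield natural motions.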
InsActor is robust to external perturbations, showcasing its adaptability and resilience under varying conditions. We launch randomly generated boxes at the humanoid character to simulate the effect of unexpected external forces on its movements.
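The perturbation test above can be sketched schematically. The snippet below is a hedged toy version under assumed values: a random impulse stands in for a box strike, and the character is reduced to a point mass with an assumed weight; the real setup simulates rigid boxes in the physics engine.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_impulse(max_magnitude=50.0):
    """A random-direction impulse (N*s) standing in for a box strike."""
    direction = rng.standard_normal(3)
    direction /= np.linalg.norm(direction)
    return direction * rng.uniform(0.0, max_magnitude)

# Apply the impulse to the character's root-link velocity.
# The 45 kg mass is an illustrative assumption.
mass = 45.0
root_velocity = np.zeros(3)
root_velocity += random_impulse() / mass
```

A robust controller must recover after such velocity perturbations are injected at random times during an episode.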
We hope InsActor can serve as a general baseline that can be extended to human-scene and human-object interactions.
@inproceedings{ren2023insactor,
  author    = {Ren, Jiawei and Zhang, Mingyuan and Yu, Cunjun and Ma, Xiao and Pan, Liang and Liu, Ziwei},
  title     = {InsActor: Instruction-driven Physics-based Characters},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2023},
}