DreamGaussian4D: Generative 4D Gaussian Splatting

arXiv 2023

Jiawei Ren*1, Liang Pan*2, Jiaxiang Tang1,3, Chi Zhang1, Ang Cao4, Gang Zeng3, Ziwei Liu1

1 S-Lab, Nanyang Technological University   2 Shanghai AI Laboratory   3 Peking University   4 University of Michigan  

Abstract

Remarkable progress has been made in 4D content generation recently. However, existing methods suffer from long optimization times, a lack of motion controllability, and a low level of detail. In this paper, we introduce DreamGaussian4D, an efficient 4D generation framework that builds on the 4D Gaussian Splatting representation. Our key insight is that the explicit modeling of spatial transformations in Gaussian Splatting makes it more suitable for the 4D generation setting than implicit representations. DreamGaussian4D reduces the optimization time from several hours to just a few minutes, allows flexible control of the generated 3D motion, and produces animated meshes that can be efficiently rendered in 3D engines.
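To illustrate the key insight above, the sketch below shows one common way to realize explicit, per-Gaussian spatial transformations: a small time-conditioned network predicts position, rotation, and scale offsets that deform a static set of canonical Gaussians to a given timestamp. This is a minimal, hypothetical PyTorch sketch for intuition only, not the authors' released implementation; the class name DeformationField, the network size, and the additive update rule are all assumptions.

import torch
import torch.nn as nn

class DeformationField(nn.Module):
    # Toy time-conditioned deformation field (illustrative, not the paper's code):
    # for each canonical Gaussian center xyz and timestamp t, predict explicit
    # offsets for position (3), rotation quaternion (4), and scale (3).
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4 + 3),
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor):
        # xyz: (N, 3) canonical Gaussian centers; t: (1,) normalized time in [0, 1]
        t_col = t.expand(xyz.shape[0], 1)
        out = self.mlp(torch.cat([xyz, t_col], dim=-1))
        d_xyz, d_rot, d_scale = out.split([3, 4, 3], dim=-1)
        return d_xyz, d_rot, d_scale

# Usage: deform a static (canonical) set of Gaussians to one timestamp.
N = 1024
canonical_xyz = torch.randn(N, 3)                      # static Gaussian centers
canonical_rot = torch.zeros(N, 4); canonical_rot[:, 0] = 1.0  # identity quaternions
canonical_scale = torch.full((N, 3), 0.01)

field = DeformationField()
t = torch.tensor([0.5])                                # normalized frame time
d_xyz, d_rot, d_scale = field(canonical_xyz, t)

xyz_t = canonical_xyz + d_xyz                          # explicit spatial transformation
rot_t = canonical_rot + d_rot
scale_t = canonical_scale + d_scale
# xyz_t / rot_t / scale_t would then be passed to a Gaussian splatting rasterizer.

Because the transformations are explicit per-Gaussian offsets rather than an implicit field queried through volume rendering, each frame can be deformed and rasterized cheaply, which is consistent with the short optimization times reported above.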

Convergence Speed

The dynamic optimization converges in 4.5 minutes.

Image-to-4D

Diverse Motions

Diverse 3D motions can be generated for the same static model.

Exported Meshes

Citation

@article{ren2023dreamgaussian4d,
  title={DreamGaussian4D: Generative 4D Gaussian Splatting},
  author={Ren, Jiawei and Pan, Liang and Tang, Jiaxiang and Zhang, Chi and Cao, Ang and Zeng, Gang and Liu, Ziwei},
  journal={arXiv preprint arXiv:2312.17142},
  year={2023}
}