Cafe-Talk: Generating 3D Talking Face Animation with Multimodal Coarse- and Fine-grained Control

[ICLR 2025]

State Key Laboratory of Virtual Reality Technology and Systems, Beihang University
Kuaishou Technology
Zhongguancun Laboratory

* Equal contribution
** Corresponding author
[Teaser figure]

Abstract

Speech-driven 3D talking face methods should offer both accurate lip synchronization and controllable expressions. Previous methods rely solely on discrete emotion labels to control expressions globally across an entire sequence, which precludes flexible fine-grained facial control in the spatiotemporal domain. We propose Cafe-Talk, a diffusion-transformer-based 3D talking face generation model that simultaneously incorporates coarse- and fine-grained multimodal control conditions. However, the entanglement of multiple conditions makes it difficult to achieve satisfactory performance. To disentangle speech audio from fine-grained conditions, we employ a two-stage training pipeline: Cafe-Talk is first trained using only speech audio and coarse-grained conditions, and a proposed fine-grained control adapter then gradually introduces fine-grained instructions represented by action units (AUs), preventing degradation of speech-lip synchronization. To disentangle coarse- and fine-grained conditions, we design a swap-label training mechanism that enables the fine-grained conditions to dominate. We also devise a mask-based classifier-free guidance (CFG) technique to regulate the occurrence and intensity of fine-grained control. In addition, a text-based detector with text-AU alignment enables natural-language user input and further supports multimodal control. Extensive experiments show that Cafe-Talk achieves state-of-the-art lip synchronization and expressiveness, and that its fine-grained control is widely accepted in user studies.
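
To make the mask-based CFG idea concrete, below is a minimal PyTorch-style sketch of how a per-frame mask can regulate where fine-grained guidance occurs and how strongly it applies. The function name masked_cfg, the model interface, the argument names (audio, coarse_label, au_cond, au_mask), and the weighting scheme are illustrative assumptions for this sketch, not the paper's released implementation.

import torch

def masked_cfg(model, x_t, t, audio, coarse_label, au_cond, au_mask, w=2.0):
    # NOTE: hypothetical interface; `model` is any denoiser that accepts a
    # null (None) fine-grained condition, as in standard classifier-free guidance.
    # Conditional pass: audio + coarse emotion label + fine-grained AU conditions.
    eps_cond = model(x_t, t, audio, coarse_label, au_cond)
    # Pass with the fine-grained branch dropped: audio and the coarse label
    # alone drive the prediction.
    eps_coarse = model(x_t, t, audio, coarse_label, None)
    # au_mask: (frames, 1) in [0, 1], broadcast over feature dims. It gates
    # the occurrence of fine-grained guidance and scales its intensity.
    return eps_coarse + (w * au_mask) * (eps_cond - eps_coarse)

# Toy check with a stand-in denoiser: masked frames must reduce to the
# coarse-only prediction, unmasked frames receive full fine-grained guidance.
model = lambda x, t, a, c, au: x + (0.0 if au is None else 1.0)
mask = torch.tensor([[0.0], [0.0], [1.0], [1.0]])
out = masked_cfg(model, torch.zeros(4, 3), 0, None, None, au_cond=1, au_mask=mask)

Setting au_mask to zero on a frame recovers the purely coarse-grained prediction for that frame, while intermediate values blend the fine-grained condition in at reduced intensity.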

Demo video

BibTeX


@inproceedings{chen2025Cafe,
  title={{Cafe-Talk}: Generating 3D Talking Face Animation with Multimodal Coarse- and Fine-grained Control},
  author={Chen, Hejia and Zhang, Haoxian and Zhang, Shoulong and Liu, Xiaoqiang and Zhuang, Sisi and Zhang, Yuan and Wan, Pengfei and Zhang, Di and Li, Shuai},
  booktitle={ICLR},
  year={2025},
}

Acknowledgements

This research is supported by the National Natural Science Foundation of China (No. 62441201).