SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation

Wenxuan Zhang, Xiaodong Cun, Xuan Wang, Yong Zhang, Xi Shen, Yu Guo, Ying Shan, Fei Wang
CVPR 2023
TL;DR: single portrait image 🙎♂️ + audio 🎤 = talking head video 🎞.
🔥 Highlight
- 🔥 The stable-diffusion-webui extension is online. Just install it via Extensions -> Install from URL -> https://github.com/Winfredy/SadTalker, and check out more details here.
https://user-images.githubusercontent.com/4397546/222513483-89161f58-83d0-40e4-8e41-96c32b47bd4e.mp4
- 🔥 Full image mode is online! Check out here for more details.
still + enhancer in v0.0.1 | still + enhancer in v0.0.2 | input image @bagbag1815 |
---|---|---|
(demo video) | (demo video) | (input image) |
- 🔥 Several new modes, e.g., still mode, reference mode, and resize mode, are online for better and more customized applications.
- 🔥 Happy to see our method being used in various talking and singing avatars; check out these wonderful demos at bilibili and twitter #sadtalker.
📋 Changelog (The previous changelog can be found here)
- [2023.04.08]: In v0.0.2, we add a logo watermark to the generated video to prevent abuse, since the results are very realistic.
- [2023.04.08]: v0.0.2: full image animation, add a Baidu Netdisk link for downloading checkpoints, and optimize the enhancer logic.
- [2023.04.06]: The stable-diffusion-webui extension is released.
- [2023.04.03]: Enable TTS in the Hugging Face and local gradio demos.
- [2023.03.30]: Launch the beta version of the full body mode.
- [2023.03.30]: Launch new feature: by using reference videos, our algorithm can generate videos with more natural eye blinking and some eyebrow movement.
- [2023.03.29]: resize mode is online via python inference.py --preprocess resize! It produces a larger crop of the image, as discussed in https://github.com/Winfredy/SadTalker/issues/35.
- [2023.03.29]: The local gradio demo is online! Run python app.py to start the demo. A new requirements.txt is used to avoid bugs in librosa.
🎼 Pipeline
Our method uses the coefficients of a 3DMM as the intermediate motion representation. To this end, we first generate realistic 3D motion coefficients (facial expression β, head pose ρ) from audio, and then use these coefficients to implicitly modulate a 3D-aware face renderer for final video generation.
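Schematically (our own shorthand here, not the exact formulation in the paper), the two stages can be written as below, with ExpNet and PoseVAE being the audio-to-coefficient networks listed in the checkpoint table further down and the renderer being the modulated face-vid2vid generator:

$$\beta = \mathrm{ExpNet}(a), \qquad \rho = \mathrm{PoseVAE}(a), \qquad V = \mathcal{R}(I, \beta, \rho)$$

where $a$ is the driving audio, $I$ the source portrait, and $V$ the generated video.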
🚧 TODO
Previous TODOs
- Generating a 2D face from a single image.
- Generating a 3D face from audio.
- Generating 4D free-view talking examples from audio and a single image.
- Gradio/Colab demo.
- Full body/image generation.
- Training code of each component.
- Audio-driven anime avatar.
- Integrate ChatGPT for a conversation demo 🤔
- Integrate with stable-diffusion-webui. (stay tuned!)
⚙️ Installation (Chinese tutorial: 中文教程)
Installing SadTalker on Linux:
git clone https://github.com/Winfredy/SadTalker.git
cd SadTalker
conda create -n sadtalker python=3.8
conda activate sadtalker
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
conda install ffmpeg
pip install -r requirements.txt
### tts is optional for gradio demo.
### pip install TTS
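The 3D visualization mode (--face3dvis, see the configuration table below) also needs extra packages. A sketch of that optional install, assuming the repo's requirements3d.txt lists exactly those dependencies:

### optional: extra dependencies for the 3D visualization mode (--face3dvis)
pip install -r requirements3d.txt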
More tips about installation on Windows and the Docker file can be found here.
Sd-Webui-Extension:
CLICK ME
Install the latest version of stable-diffusion-webui and install SadTalker via the extension.
Then restart stable-diffusion-webui and set some command line args. The models will be downloaded automatically to the right place. Alternatively, you can add the path of pre-downloaded SadTalker checkpoints to SADTALKER_CHECKPOINTS in webui_user.sh (Linux) or webui_user.bat (Windows):
# windows (webui_user.bat)
set COMMANDLINE_ARGS=--no-gradio-queue --disable-safe-unpickle
set SADTALKER_CHECKPOINTS=D:\SadTalker\checkpoints
# linux (webui_user.sh)
export COMMANDLINE_ARGS=--no-gradio-queue --disable-safe-unpickle
export SADTALKER_CHECKPOINTS=/path/to/SadTalker/checkpoints
After installation, SadTalker can be used directly in stable-diffusion-webui.

Download Trained Models
CLICK ME
You can run the following script to put all the models in the right place.
bash scripts/download_models.sh
OR download our pre-trained models from Google Drive or our GitHub release page, and then put them in ./checkpoints.
OR download the models from Baidu Netdisk (百度云盘), extraction code: sadt.
Model | Description |
---|---|
checkpoints/auido2exp_00300-model.pth | Pre-trained ExpNet in SadTalker. |
checkpoints/auido2pose_00140-model.pth | Pre-trained PoseVAE in SadTalker. |
checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker. |
checkpoints/mapping_00109-model.pth.tar | Pre-trained MappingNet in SadTalker. |
checkpoints/facevid2vid_00189-model.pth.tar | Pre-trained face-vid2vid model from the unofficial reproduction of face-vid2vid. |
checkpoints/epoch_20.pth | Pre-trained 3DMM extractor from Deep3DFaceReconstruction. |
checkpoints/wav2lip.pth | Highly accurate lip-sync model from Wav2lip. |
checkpoints/shape_predictor_68_face_landmarks.dat | Face landmark model used in dlib. |
checkpoints/BFM | 3DMM library files. |
checkpoints/hub | Face detection models used in face alignment. |
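Before running inference, a quick sanity check can confirm that the expected files are in place. This is just a small shell sketch; the names are copied from the table above:

# check that every checkpoint listed above exists under ./checkpoints
for f in auido2exp_00300-model.pth auido2pose_00140-model.pth \
         mapping_00229-model.pth.tar mapping_00109-model.pth.tar \
         facevid2vid_00189-model.pth.tar epoch_20.pth wav2lip.pth \
         shape_predictor_68_face_landmarks.dat BFM hub; do
    [ -e "./checkpoints/$f" ] || echo "missing: ./checkpoints/$f"
done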
🔮 Quick Start
Generate a 2D talking face from a single image with the default config:
python inference.py --driven_audio <audio.wav> --source_image <video.mp4 or picture.png>
The results will be saved in results/$SOME_TIMESTAMP/*.mp4.
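For instance, a concrete call might look like the following (the audio and image paths are placeholders for your own files; the flags are described in the Advanced Configuration table below):

python inference.py --driven_audio ./my_audio.wav \
                    --source_image ./my_portrait.png \
                    --result_dir ./results \
                    --enhancer gfpgan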
Or a local gradio demo similar to our Hugging Face demo can be run by:
## you need to manually install TTS (https://github.com/coqui-ai/TTS) via `pip install TTS` in advance.
python app.py
Advanced Configuration
Click Me
Name | Configuration | Default | Explanation |
---|---|---|---|
Enhance Mode | --enhancer | None | Use gfpgan or RestoreFormer to enhance the generated face via a face restoration network. |
Background Enhancer | --background_enhancer | None | Use realesrgan to enhance the full video. |
Still Mode | --still | False | Use the same pose parameters as the original image, with less head motion. |
Expressive Mode | --expression_scale | 1.0 | A larger value makes the expression motion stronger. |
Save Path | --result_dir | ./results | The files will be saved in this location. |
Preprocess | --preprocess | crop | Run and produce the results on the cropped input image. Other choices: resize, where the image is resized to a specific resolution; full, which runs the full image animation (use with --still for better results). |
Ref Mode (eye) | --ref_eyeblink | None | A video path; we borrow the eye blinks from this reference video to provide more natural eyebrow movement. |
Ref Mode (pose) | --ref_pose | None | A video path; we borrow the head pose from this reference video. |
3D Mode | --face3dvis | False | Needs additional installation. More details about generating the 3D face can be found here. |
Free-view Mode | --input_yaw, --input_pitch, --input_roll | None | Generate a novel-view or free-view 4D talking head from a single image. More details can be found here. |
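As a sketch of how these options combine (flags taken from the table above; the input paths are placeholders), a full-image run in still mode with face enhancement and slightly stronger expressions could look like:

python inference.py --driven_audio <audio.wav> \
                    --source_image <picture.png> \
                    --preprocess full \
                    --still \
                    --expression_scale 1.2 \
                    --enhancer gfpgan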
Examples
basic | w/ still mode | w/ exp_scale 1.3 | w/ gfpgan |
---|---|---|---|
(demo videos) | | | |
Kindly turn on the audio manually, since audio does not autoplay in videos embedded on GitHub.
Input | w/ reference video | reference video |
---|---|---|
(demo videos) | | |
If the reference video is shorter than the input audio, we will loop the reference video.
Generating 3D face from Audio
Input | Animated 3D face |
---|---|
(demo video) | |
Kindly turn on the audio manually, since audio does not autoplay in videos embedded on GitHub.
Generating 4D free-view talking examples from audio and a single image
We use input_yaw, input_pitch, and input_roll to control the head pose. For example, --input_yaw -20 30 10 means the input head yaw degree changes from -20 to 30 and then from 30 to 10.
python inference.py --driven_audio <audio.wav> \
--source_image <video.mp4 or picture.png> \
--result_dir <a folder to store results> \
--input_yaw -20 30 10
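Pitch and roll follow the same convention, so a hypothetical run sweeping all three angles (the values here are chosen only for illustration) would be:

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --input_yaw -20 30 10 \
                    --input_pitch -10 0 10 \
                    --input_roll -5 0 5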
Results, Free-view results, Novel view results |
---|
(demo videos) |
[Beta Application] Full body/image Generation
Now, you can use --still to generate a natural full-body video. You can add enhancer or full_img_enhancer to improve the quality of the generated video. However, if you add other modes, such as ref_eyeblinking or ref_pose, the results will be bad. We are still trying to fix this problem.
python inference.py --driven_audio <audio.wav> \
--source_image <video.mp4 or picture.png> \
--result_dir <a folder to store results> \
--still \
--preprocess full \
--enhancer gfpgan
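If the whole frame (not only the face) also needs enhancement, the background enhancer from the configuration table can be added on top; here we assume --background_enhancer realesrgan is the option referred to above as full_img_enhancer:

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --still \
                    --preprocess full \
                    --enhancer gfpgan \
                    --background_enhancer realesrgan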
🛎 Citation
If you find our work useful in your research, please consider citing:
@article{zhang2022sadtalker,
title={SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation},
author={Zhang, Wenxuan and Cun, Xiaodong and Wang, Xuan and Zhang, Yong and Shen, Xi and Guo, Yu and Shan, Ying and Wang, Fei},
journal={arXiv preprint arXiv:2211.12194},
year={2022}
}
💗 Acknowledgements
The Facerender code borrows heavily from zhanglonghao's reproduction of face-vid2vid and PIRender. We thank the authors for sharing their wonderful code. In the training process, we also use models from Deep3DFaceReconstruction and Wav2lip. We thank them for their wonderful work.
🥂 Related Works
- StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN (ECCV 2022)
- CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior (CVPR 2023)
- VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild (SIGGRAPH Asia 2022)
- DPE: Disentanglement of Pose and Expression for General Video Portrait Editing (CVPR 2023)
- 3D GAN Inversion with Facial Symmetry Prior (CVPR 2023)
- T2M-GPT: Generating Human Motion from Textual Descriptions with Discrete Representations (CVPR 2023)
📢 Disclaimer
This is not an official product of Tencent. This repository can only be used for personal/research/non-commercial purposes.
LOGO: color and font suggestions by ChatGPT; logo font: Montserrat Alternates.
All copyrighted demo images are from community users or generated by Stable Diffusion. Feel free to contact us if you feel uncomfortable.