ByteDance has unveiled OmniHuman-1, an advanced AI model that generates lifelike full-body deepfake videos from a single image. This technology not only animates faces but synchronizes gestures and expressions with speech or music, significantly enhancing realism. Test videos include recreations of TED Talks and simulations featuring a talking Albert Einstein. The model, trained on 19,000 hours of human motion data, surpasses existing animation tools in both realism and versatility, attracting attention from the tech community and underscoring the ongoing race in AI deepfake capabilities.
Unlike some deepfake models that can only animate faces or upper bodies, ByteDance's OmniHuman-1 generates realistic full-body animations that sync gestures and facial expressions with speech or music.
OmniHuman-1 is the latest AI model from a Chinese tech company to grab the attention of researchers following the release of DeepSeek's market-shaking R1 last month.