China's new AI video tools close the uncanny valley for good
Briefly

"The Academy Award-nominated filmmaker, creator of lyrical, surreal, and deeply human movies like Black Swan, The Whale, Mother!, and Pi, has released an AI-generated series called On This Day . . . 1776 to commemorate the semiquincentennial of the American Revolution. Though the series has garnered millions of views, commentators everywhere call it 'a horror,' slamming Aronofsky's work for how stiff the faces look and how everything morphs unrealistically. Although calling it a 'requiem for a filmmaker' seems excessive, they are not wrong about these faults."
"The series, created using real human voice-overs and Google's generative video AI, does suffer from 'uncanny valley syndrome': our brains easily detect what's off about faces, so we don't buy them as real and feel an automatic repulsion. But this month, two new generative AI models from China have closed the valley's gap: Kling 3.0 and Seedance 2.0. For the first time, AI is generating video content that is truly indistinguishable from film."
Darren Aronofsky released an AI-generated series that, despite millions of views, drew criticism for stiff, morphing faces and uncanny valley effects. Two new Chinese generative video models, Kling 3.0 and Seedance 2.0, have narrowed that gap, producing video with coherent timing and subject continuity that can be indistinguishable from film. Seedance 2.0, developed by ByteDance and released in beta in China on February 9, is framed as a tool offering director-level control: rather than relying solely on text prompts, it accepts images, video, audio, and text simultaneously as multimodal inputs. Analysts cite this multimodal interface as the key to its more coherent, controllable outputs.
Read at Fast Company