"The initial results were horrifying. Will Smith looked nothing like his movie star self. Instead, he looked like a bad animation, complete with caricatured features that would be more at home on a tourist trap boardwalk. In some videos, he never actually consumed the spaghetti, failing to meet even the most basic premise of the test. The failures highlighted the early limitations of AI-generated video and images, which sometimes produced people with 8 fingers or other anatomical imperfections."
"In 2024, MiniMax, a Chinese AI model, made a much more accurate representation, but AI Smith's chewing was still off. And at the very end of the clip, the noodles appear to levitate. In May, a user posted on X that he used Google's Veo 3 to generate a new video. The problem with this one is that the noodles AI Smith is chewing sound way too crunchy. A later Veo 3.1-generated video looks even more realistic."
AI video generation advanced from crude, caricatured outputs to much more lifelike videos within about two and a half years. Early model outputs included distorted faces, anatomical errors and failures to perform simple actions, exemplified by Will Smith spaghetti clips. Newer models such as MiniMax and Google’s Veo series improved visual realism but still showed artifacts like odd chewing sounds and levitating noodles. OpenAI’s Sora achieved high-quality results, prompting additional guardrails around third-party likeness use. The trajectory reflects rapid technical improvement alongside growing concerns about authenticity, misuse, and the need for regulatory or platform controls.
Read at Business Insider