OpenAI's Sora 2 launches with insanely realistic video and an iPhone app
Briefly

"For example, OpenAI said in a blog post that the model was trained to be less overly optimistic, a characteristic that can be observed in instances where a Sora-generated video shows the player missing the shot but still making it into the hoop. With Sora 2, OpenAI claims the player would miss the shot, and the ball would rebound off the backboard."
"If you thought OpenAI's first video-generating model, Sora, was realistic, wait until you see what Sora 2 can do on both the video and audio front. OpenAI finally launched the highly anticipated next-generation flagship video and audio generation model, Sora 2, on Tuesday. The new model is meant to be significantly more capable, tackling typically difficult tasks for video generators, which OpenAI equates to the jump from GPT-1 to GPT-3.5."
Sora 2 is a next-generation video and audio generation model released by OpenAI and positioned as a major capability leap over the original Sora. The model was trained to reduce overly optimistic outputs, improve adherence to physics, and increase controllability so it can follow complex instructions and produce more realistic results. Sora 2 can also generate synchronized sound, including effects, background soundscapes, and human speech. The release coincides with a free iOS social-style app and invites comparisons to other multimodal generators, such as Google's Veo 2 and work from Luma AI. A disclosure notes that Ziff Davis, ZDNET's parent company, has filed a lawsuit against OpenAI alleging copyright infringement in the training of its AI systems.
Read at ZDNET