Luma releases a new AI model that lets users generate a video from a start and end frame | TechCrunch

"Luma, the a16z-backed AI video and 3D model company, released a new model called Ray3 Modify that allows users to modify existing footage by providing character reference images that preserve the performance of the original footage. Users can also provide a start and an end frame to guide the model to generate transitional footage. The company said Thursday the Ray3 Modify model solves the problems of preserving human performance while editing or generating effects using AI for creative studios."
""Generative video models are incredibly expressive but also hard to control. Today, we are excited to introduce Ray3 Modify that blends the real-world with the expressivity of AI while giving full control to creatives. This means creative teams can capture performances with a camera and then immediately modify them to be in any location imaginable, change costumes, or even go back and reshoot the scene with AI, without recreating the physical shoot," Amit Jain, co-founder and CEO of Luma AI, said in a statement."
"The startup said the model follows the input footage better, allowing studios to use human actors for creative or brand footage. Luma mentioned the new model retains the actor's original motion, timing, eye line, and emotional delivery while transforming the scene. With Ray3 Modify, users can provide a character reference for transformation to the original footage and convert the human actor's appearance into that character. This reference also allows creators to retain information like costumes, likeness, and identity across the shoot."
Ray3 Modify is a generative-video model that modifies existing footage while preserving an actor's original performance: motion, timing, eye line, and emotional delivery. The model accepts character reference images to transform an actor's appearance while retaining costumes, likeness, and identity across a shoot, and accepts start and end frames to guide transitional footage, controlling character movement and behavior while maintaining continuity. The model is available through Luma's Dream Machine platform and aims to let creative teams alter locations, change costumes, or reshoot scenes with AI without redoing physical shoots. Luma positions the model for studios that need precise, performance-preserving edits.