
"GWM Worlds offers an interface for digital environment exploration with real-time user input that affects the generation of coming frames, which Runway suggests can remain consistent and coherent "across long sequences of movement." Users can define the nature of the world-what it contains and how it appears-as well as rules like physics. They can give it actions or changes that will be reflected in real-time, like camera movements or descriptions of changes to the environment or the objects in it."
"AI company Runway has announced what it calls its first world model, GWM-1. It's a significant step in a new direction for a company that has made its name primarily on video generation, and it's part of a wider gold rush to build a new frontier of models as large language models and image and video generation move into a refinement phase, no longer an untapped frontier."
Runway released GWM-1, a suite of three autoregressive world models built atop the Gen-4.5 text-to-video generation model and post-trained with domain-specific data. One model, GWM Worlds, offers an interface for digital environment exploration in which real-time user input affects frame generation, with output that can remain consistent and coherent across long sequences of movement. Users can define world content, appearance, and rules like physics, and enact camera movements or environmental changes that are reflected in real time. Potential applications include pre-visualization for game development, VR environment generation, and educational explorations of historical spaces. World models can also be used to train AI agents, including robots: GWM Robotics can generate synthetic training data that augments existing robotics datasets across multiple dimensions, including novel objects.
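For readers unfamiliar with the term, "autoregressive" here means each new frame is predicted from previously generated frames plus the latest user action. The Python sketch below illustrates that general rollout loop in schematic form; every name in it (WorldModel, predict_next_frame, rollout) is hypothetical and stands in for a learned model, since Runway has not published an API for GWM-1.

from dataclasses import dataclass, field

# Schematic sketch of a generic autoregressive, action-conditioned
# world-model rollout. All names are hypothetical illustrations,
# not Runway's actual interface.

@dataclass
class WorldModel:
    """Toy stand-in for a learned next-frame predictor."""
    context: list = field(default_factory=list)  # recent frames the model conditions on

    def predict_next_frame(self, action: str) -> str:
        # A real model would run a neural forward pass over the context
        # and the action; here we just build a label so the loop
        # structure is visible.
        frame = f"frame_{len(self.context)}<{action}>"
        self.context.append(frame)
        return frame

def rollout(model: WorldModel, actions: list[str]) -> list[str]:
    """Generate frames one at a time, conditioning on each user action."""
    return [model.predict_next_frame(a) for a in actions]

if __name__ == "__main__":
    model = WorldModel(context=["frame_init"])
    # Real-time inputs such as camera moves or environment edits,
    # per the article's description of GWM Worlds.
    print(rollout(model, ["pan_left", "move_forward", "add_fog"]))

The structure also hints at why "consistent and coherent across long sequences" is the hard claim: because each frame feeds the next, small errors can compound over a long rollout.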
Read at Ars Technica