Incredible new AI 3D model generator could transform character design
Briefly

The paper, titled 'VFusion3D: Learning Scalable 3D Generative Models from Video Diffusion Models' and written by researchers Junlin Han, Filippos Kokkinos and Philip Torr, describes a new approach to 'building scalable 3D generative models utilising pre-trained video diffusion models'.
The team 'fine-tuned an existing video AI model to produce multi-view video sequences', effectively teaching it to see objects from multiple angles. The paper includes examples of single still images transformed into 3D objects with remarkable precision.
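To give a sense of the underlying idea, the rough sketch below uses Hugging Face's diffusers library with the publicly available Stable Video Diffusion model as a stand-in image-to-video pipeline. Treating the generated frames as pseudo multi-view observations of an object is an illustrative assumption here, not the authors' actual VFusion3D code or released model.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load a pre-trained image-to-video diffusion pipeline (a stand-in for the
# kind of video model the researchers fine-tune; not VFusion3D itself).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# A single still image of an object, e.g. a character design reference
# (hypothetical filename).
image = load_image("character.png").resize((1024, 576))

# Generate a short video; its frames serve as pseudo multi-view
# observations of the object seen from shifting angles.
frames = pipe(image, decode_chunk_size=8, num_frames=25).frames[0]

# Save the frames; a downstream 3D reconstruction step, as described in
# the paper, would consume views like these to produce a 3D asset.
export_to_video(frames, "orbit_views.mp4", fps=7)
```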
"The primary obstacle in developing foundation 3D generative models is the limited availability of 3D data. Unlike images, texts, or videos, 3D data are not readily accessible and are difficult to acquire."
Read at Creative Bloq