Wonder3D: 3D Generative Models and Multi-View Diffusion Models | HackerNoon
Briefly

The adoption of 2D representations in our method allows us to leverage pretrained 2D diffusion models, enhancing zero-shot generalization and overcoming the limitations posed by scarce public 3D asset datasets.
Existing multi-view diffusion models primarily emphasize generating consistent multi-view images but are not designed with 3D reconstruction in mind, so reconstructions that rely on inaccurately estimated depth maps tend to be of lower quality.