To extract explicit 3D geometry from the generated 2D normal maps and color images, we optimize a neural implicit signed distance field (SDF) that fuses all of the 2D generated data. This contrasts with existing SDF-based reconstruction methods, which require dense input views; our sparse-view generation setting instead introduces challenges such as distorted geometry and omitted details during optimization. We propose a novel geometry-aware optimization scheme to address these issues effectively.
SDF representations are compact and differentiable, both of which are crucial for stable optimization. However, existing methods designed for real images struggle with the inaccuracies present in our generated normal maps and color images; left unhandled, these inaccuracies distort the reconstructed surfaces and yield incomplete 3D models. Our method aims to improve the fidelity of the recovered geometry.
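To make the supervision concrete, the following is a minimal, self-contained sketch of SDF-based geometry fitting. It is not the paper's neural SDF: as a hypothetical stand-in, it fits a parametric sphere SDF (center and radius) to noisy surface points by gradient descent on a surface loss, illustrating how observed geometry constrains a differentiable signed distance function.

```python
import numpy as np

# Toy illustration (hypothetical setup): fit a sphere SDF to noisy
# "observed" surface points, mimicking how an implicit SDF is supervised
# by geometry derived from generated views. The real method optimizes a
# neural SDF; a sphere keeps the sketch self-contained.

rng = np.random.default_rng(0)

# Ground-truth sphere and noisy observations sampled on its surface.
c_true, r_true = np.array([0.2, -0.1, 0.3]), 0.8
dirs = rng.normal(size=(256, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = c_true + r_true * dirs + 0.01 * rng.normal(size=(256, 3))

# Parameters to optimize: sphere center c and radius r.
c = np.zeros(3)
r = 0.5

for _ in range(500):
    d = points - c
    dist = np.linalg.norm(d, axis=1)     # |p - c|
    sdf = dist - r                       # signed distance at observed points
    grad = d / dist[:, None]             # analytic SDF gradient (unit normal)
    # Surface loss: mean(sdf^2) should vanish on observed surface points.
    # Gradient descent on the loss (factor 2 absorbed into the step size);
    # for a sphere the SDF gradient already equals the outward normal, so a
    # separate normal-alignment term is omitted here for brevity.
    c += 0.1 * (sdf[:, None] * grad).mean(axis=0)
    r += 0.1 * sdf.mean()

print(np.round(c, 2), round(r, 2))
```

In the full neural setting, the same surface loss is complemented by color and normal rendering losses against the generated views, with the SDF network replacing the two sphere parameters.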