ZeroShape: Here's How We Did Our Data Curation | HackerNoon
Briefly

We use all 55 categories of ShapeNetCore.v2, totaling about 52K meshes, together with over 1,000 categories from the Objaverse-LVIS subset.
Pooling these two data sources yields over 90K 3D object meshes spanning more than 1,000 categories, and our training set consists of slightly less than 1.1M images.
Rendering synthetic images from the 3D meshes in Blender lets us extract ground-truth annotations such as depth maps and camera intrinsics.
We render images with focal lengths varying from 30mm to 70mm, so the dataset covers diverse object-camera geometry.
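To illustrate how a focal length in millimeters relates to the camera intrinsics recorded with each render, here is a minimal sketch under a simple pinhole model. The sensor width and image resolution below are illustrative assumptions, not the paper's actual rendering settings.

```python
import numpy as np

def intrinsics_from_focal_mm(focal_mm, sensor_width_mm=36.0,
                             width_px=512, height_px=512):
    """Build a pinhole intrinsics matrix K for a given focal length (mm).

    Assumes a 36mm (full-frame-style) sensor width, square pixels, and a
    centered principal point -- illustrative defaults, not the paper's
    settings.
    """
    fx = focal_mm / sensor_width_mm * width_px  # focal length in pixels
    fy = fx  # square pixels assumed
    return np.array([
        [fx,  0.0, width_px / 2.0],
        [0.0, fy,  height_px / 2.0],
        [0.0, 0.0, 1.0],
    ])

# Sample focal lengths spanning the 30-70mm range used for rendering.
for f in (30.0, 50.0, 70.0):
    K = intrinsics_from_focal_mm(f)
    print(f"{f:.0f}mm -> fx = {K[0, 0]:.1f} px")
```

Varying the focal length this way changes perspective distortion: short lenses exaggerate foreshortening while long lenses flatten it, which is the object-camera diversity the rendering setup aims for.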