Dataset Design Explorer

How does your method perform if you train it on more object-centric vs. scene-centric images? (Clicking the links will update the widget below!) Does a model trained on wide-field-of-view images generalize to narrow-field-of-view frames? And what's the effect of using a dataset where many parts of the scene are occluded, versus one where everything is in plain view? These are just some of the questions that are difficult to answer with static datasets, since it is often expensive or impossible to sample new data with the desired properties.

The Omnidata annotator reduces the barrier to answering these questions: it makes it simple to generate datasets from 3D scans and lets researchers choose exactly how images are sampled. In this interactive demo, you can investigate the effect of different generation parameters on an example building (Replica: apartment_1).
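To make the idea of "choosing how images are sampled" concrete, here is a minimal sketch of what a sampling configuration might look like. The parameter names and the `SamplingConfig` structure are hypothetical, used only to illustrate the kinds of knobs involved (field of view, camera pitch, views per point, etc.); the actual interface is documented in the Omnidata annotator repository.

```python
# Hypothetical sketch: these names are illustrative, not the actual Omnidata annotator API.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class SamplingConfig:
    mesh_path: str                  # path to the 3D scan (e.g. Replica: apartment_1)
    num_camera_locations: int       # how many camera positions to scatter through the space
    views_per_point: int            # how many cameras look at each sampled 3D point
    fov_range_deg: Tuple[int, int]  # min/max field of view sampled per image
    pitch_range_deg: Tuple[int, int]  # min/max camera pitch
    min_obstacle_distance_m: float  # keep cameras away from walls and objects


# An "object-centric, narrow-FoV" dataset from the scan ...
object_centric = SamplingConfig(
    mesh_path="replica/apartment_1/mesh.ply",
    num_camera_locations=200,
    views_per_point=8,
    fov_range_deg=(30, 45),
    pitch_range_deg=(-30, 10),
    min_obstacle_distance_m=0.3,
)

# ... versus a "scene-centric, wide-FoV" one from the same scan.
scene_centric = SamplingConfig(
    mesh_path="replica/apartment_1/mesh.ply",
    num_camera_locations=50,
    views_per_point=3,
    fov_range_deg=(75, 110),
    pitch_range_deg=(-10, 10),
    min_obstacle_distance_m=1.0,
)
```

Two such configurations applied to the same mesh yield two datasets that differ only in their sampling statistics, which is exactly the kind of comparison the explorer below is meant to make tangible.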


The widget reports the following per-image statistics for each sampled view:

Walkability score (% of walkable pixels)
Objectness score (% of object pixels)
Clutter score (# of semantic instances)
Occlusion score (% of occlusion-edge pixels)
Distance at center pixel (m)
Mean distance (m)
Nearest point (m)
Farthest point (m)
Field of view (°)
Camera pitch (°)
Camera roll (°)
Obliqueness at center pixel (°)

Each view can be displayed in any of the annotator's output modalities: RGB, depth (Euclidean), surface normals, semantics, curvature, reshading, occlusion edges, texture edges, 2D keypoints, and 3D keypoints.
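As a rough illustration of how statistics like these can be derived from the annotator's label maps, the sketch below computes walkability, objectness, and clutter scores from a semantic segmentation and an instance map. The label IDs, the notion of "background" classes, and the random stand-in data are assumptions for this example, not the actual Replica/Omnidata label conventions.

```python
# Illustrative only: label IDs and array layout are assumptions for this sketch.
import numpy as np

WALKABLE_LABELS = {1, 2}       # e.g. floor, rug (hypothetical IDs)
BACKGROUND_LABELS = {0, 3, 4}  # e.g. void, wall, ceiling (hypothetical IDs)


def walkability_score(semantics: np.ndarray) -> float:
    """Percentage of pixels whose semantic label is a walkable surface."""
    return 100.0 * np.isin(semantics, list(WALKABLE_LABELS)).mean()


def objectness_score(semantics: np.ndarray) -> float:
    """Percentage of pixels belonging to objects rather than background structure."""
    return 100.0 * (~np.isin(semantics, list(BACKGROUND_LABELS))).mean()


def clutter_score(instances: np.ndarray) -> int:
    """Number of distinct semantic instances visible in the view (0 = no instance)."""
    return int(np.unique(instances[instances > 0]).size)


# Example with random label maps standing in for a rendered view.
rng = np.random.default_rng(0)
fake_semantics = rng.integers(0, 8, size=(512, 512))
fake_instances = rng.integers(0, 20, size=(512, 512))
print(walkability_score(fake_semantics),
      objectness_score(fake_semantics),
      clutter_score(fake_instances))
```

Computing such per-image statistics over a generated dataset is what makes it possible to compare, say, an object-centric split against a scene-centric one, or a wide-FoV split against a narrow-FoV one.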