Ayush Gaggar


Research Paper

Data Augmentation for NeRFs in the Low Data Limit

Authors

A. Gaggar, T. Murphey

Published at

2025 IEEE International Conference on Robotics and Automation (ICRA)

Description

TLDR:

Although NeRFs have taken the CV field by storm, they struggle in the low-data limit and often fail horrendously on incomplete scene data. We present an objective function that combines both in-distribution and out-of-distribution uncertainty. Further, we show that rejection sampling a set of views far outperforms current next-best-view techniques. On average, our method achieves 39.9% better performance with 87.5% less variability compared to SOTA methods.
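
As a rough illustration of what such a combined per-view score could look like (the weighting term, the function name, and the form of the two uncertainty inputs are assumptions for illustration, not the paper's actual objective):

```python
import numpy as np

def combined_uncertainty(in_dist_unc, out_dist_unc, alpha=0.5):
    """Hypothetical per-view acquisition score: a weighted sum of an
    in-distribution term (e.g. volumetric/rendering uncertainty) and an
    out-of-distribution term (e.g. spatial coverage). The names and the
    weight `alpha` are illustrative assumptions."""
    in_dist_unc = np.asarray(in_dist_unc, dtype=float)
    out_dist_unc = np.asarray(out_dist_unc, dtype=float)
    return alpha * in_dist_unc + (1.0 - alpha) * out_dist_unc
```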

Background:

Current methods based on Neural Radiance Fields fail in the low-data limit, particularly when training on incomplete scene data. Prior works augment training data only in next-best-view applications, an approach that leads to hallucinations and model collapse with sparse data. In contrast, we propose adding a set of views during training by rejection sampling from a posterior uncertainty distribution, generated by combining a volumetric uncertainty estimator with spatial coverage. We validate our results on partially observed scenes; on average, our method performs 39.9% better with 87.5% less variability across established scene reconstruction benchmarks, as compared to state-of-the-art baselines. We further demonstrate that augmenting the training set by sampling from any distribution leads to better, more consistent scene reconstruction in sparse environments. This work is foundational for robotic tasks where augmenting a dataset with informative data is critical in resource-constrained, a priori unknown environments.
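
A minimal sketch of the view-selection step, assuming a discrete set of candidate poses scored by a combined uncertainty term like the one above; the uniform proposal, the envelope constant, and all names are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample_views(candidate_poses, scores, n_views):
    """Accept candidate views with probability proportional to their
    (unnormalized) posterior uncertainty score, using a uniform proposal.
    Duplicates are not filtered, to keep the sketch short."""
    scores = np.asarray(scores, dtype=float)
    envelope = scores.max()                       # bound on the unnormalized density
    accepted = []
    while len(accepted) < n_views:
        i = rng.integers(len(candidate_poses))    # draw a candidate from the uniform proposal
        if rng.uniform() < scores[i] / envelope:  # accept with prob. score / envelope
            accepted.append(candidate_poses[i])
    return accepted
```

In the full pipeline the accepted poses would then be rendered (or captured) and appended to the NeRF training set; here they are simply returned.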

We've also begun hardware experiments using a 6DoF robot arm.