We propose a neural lens model that is versatile, easy to use, and integrates directly into gradient-based optimization pipelines for point projection and rendering. We also introduce a new dataset for evaluating lens and calibration models, along with a new strategy for creating calibration marker boards.
We propose a novel approach for 3D video synthesis that represents multi-view video recordings of a dynamic real-world scene in a compact yet expressive form, enabling high-quality view synthesis and motion interpolation.
We propose a novel approach for learning a representation of the geometry, appearance, and motion of a class of articulated objects given only a set of color images as input.
We propose joint training of a neural radiance field and a deformation field, which enables 4D reconstruction over time, even from a single camera.