Christoph Lassner


I am currently working on something new at the intersection of Machine Learning, Computer Vision and Computer Graphics and will share more details here in a few weeks.

I am deeply curious about how we can build virtual representations of the real world that can be optimized and rendered efficiently to faithfully match our perception (visual and beyond; see for example Neural Assets, VEO, Neural 3D Video, NR-NeRF). A lot of my work focuses on perception and rendering systems for humans; see for example TAVA, HVH, ARCH, NBF, UP. I created the human pose estimation system for Amazon Halo, part of a system that creates a 3D model of your body using your smartphone. My team’s work at Meta on reconstructing and rendering radiance fields interactively was featured at Meta Connect 2022 and on CNET (co-presented with Zhao Dong et al.’s work on inverse rendering).

At the same time, I am very interested in the engineering challenges such systems create and was awarded an Honorable Mention at the ACM Multimedia Open-Source Software Competition 2016 for my work on decision forests. In 2021, I wrote the Pulsar renderer (now the sphere-based rendering backend for PyTorch3D) and would love to find better ways to use low-level autodifferentiation on GPUs.

Previously, I led research teams at Epic Games and Meta Reality Labs Research; before joining Meta, I worked at a startup that was acquired by Amazon. I completed my PhD at the Bernstein Center for Computational Neuroscience and the Max Planck Institute for Intelligent Systems in Tübingen.

If you are working on human representations, neural or differentiable rendering, or autodifferentiation and would like to collaborate (research internship, research collaboration, position), please reach out!

Recent Publications


SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes

We propose a novel method to reconstruct and render dynamic scenes, including dense scene flow, establishing correspondences over time.

Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient

We propose the current best-performing method for faithfully reconstructing objects under all possible lighting environments.

Neural Lens Modeling

We propose a neural lens model that is versatile, easy to use, and readily integrated into gradient-based optimization pipelines for point projection and rendering. We also introduce a new dataset for evaluating lens and calibration models, along with a new strategy for creating calibration marker boards.

NeuWigs: A Neural Dynamic Model for Volumetric Hair Capture and Animation

We create a drivable and accurate hair representation using neural rendering primitives and temporal models.

Neural Assets: Volumetric Object Capture and Rendering for Interactive Environments

We present a new radiance field representation for creating photorealistic assets from a simple smartphone video; these assets can be rendered volumetrically in real time in state-of-the-art game engines. Rendering speed is comparable to mesh rendering, but includes volumetric effects and is suitable for fur, hair, …