I spent the summer of 2024 interning at Google in San Francisco, continuing in London until April 2025.
Before my PhD, I worked at the Microsoft Mixed Reality Lab (2021-2022) in Cambridge and received an MEng with Distinction
from the University of Cambridge (2017-2021).
Bolt3D enables feed-forward 3D scene generation in 6.25s on a single GPU.
Our latent diffusion model directly outputs 3D Gaussians and is trained on a large-scale dataset of reconstructed 3D scenes.
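For intuition, here is a minimal sketch of what feed-forward sampling of this kind looks like: a fixed number of latent denoising steps followed by a single decode into Gaussian parameters. The `denoiser` and `decoder` modules and the update rule are placeholders, not Bolt3D's actual sampler.

```python
import torch

@torch.no_grad()
def generate_scene(denoiser, decoder, steps, latent_shape, device="cuda"):
    # Start from pure noise in latent space.
    z = torch.randn(latent_shape, device=device)
    # A fixed number of denoising updates gives a predictable, fast runtime.
    for t in reversed(range(steps)):
        z = denoiser(z, t)  # placeholder update; a real sampler follows a noise schedule
    # One decode from the clean latent directly to 3D Gaussian parameters.
    return decoder(z)
```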
We design a fast (38 FPS), simple 2D network for single-view 3D reconstruction that represents shapes with 3D Gaussians.
As a result, it can leverage Gaussian Splatting for rendering (588 FPS), achieves state-of-the-art quality in several cases, and trains on just a single GPU.
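To illustrate the representation, here is a minimal sketch of a 2D head that predicts per-pixel 3D Gaussian parameters; the channel layout and activations are assumptions for illustration, not the network's exact design.

```python
import torch
import torch.nn as nn

class PerPixelGaussianHead(nn.Module):
    """Maps 2D feature maps to per-pixel 3D Gaussian parameters
    (position, opacity, scale, rotation, colour). Illustrative only."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # 3 (xyz) + 1 (opacity) + 3 (scale) + 4 (quaternion) + 3 (rgb) = 14
        self.head = nn.Conv2d(feat_dim, 14, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> dict:
        out = self.head(feats)  # (B, 14, H, W): one Gaussian per pixel
        xyz, opacity, scale, rot, rgb = torch.split(out, [3, 1, 3, 4, 3], dim=1)
        return {
            "xyz": xyz,                                        # 3D means
            "opacity": torch.sigmoid(opacity),                 # keep in [0, 1]
            "scale": torch.exp(scale),                         # positive scales
            "rotation": nn.functional.normalize(rot, dim=1),   # unit quaternions
            "rgb": torch.sigmoid(rgb),
        }
```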
A novel formulation of the denoising function in diffusion models lets us train 3D generative models from 2D data only. Our models can perform both few-view 3D reconstruction and 3D generation.
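The core idea can be sketched as a denoiser that reconstructs a 3D representation and is supervised only through re-rendered 2D views; the function names below are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

def denoising_loss(model, render, x_noisy, x_clean, t, cameras):
    # The network maps noisy 2D views (plus timestep) to a 3D reconstruction...
    scene_3d = model(x_noisy, t)
    # ...which is rendered back to the input cameras.
    x_denoised = render(scene_3d, cameras)
    # Supervision is purely 2D: no 3D ground truth is ever needed.
    return F.mse_loss(x_denoised, x_clean)
```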
We unlock training NeRFs of 360° human heads from mobile phone captures.
A synthetically trained face keypoint detector provides a rough camera pose estimate, which is then refined during optimization.
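A minimal sketch of this kind of pose initialization, assuming a canonical 3D keypoint template and known intrinsics (both assumptions for illustration), using OpenCV's PnP solver rather than the paper's exact method:

```python
import cv2
import numpy as np

def rough_head_pose(kpts_2d, template_3d, K):
    # Align detected 2D keypoints to a canonical 3D keypoint template via PnP.
    ok, rvec, tvec = cv2.solvePnP(
        template_3d.astype(np.float64),  # (N, 3) template points (assumed given)
        kpts_2d.astype(np.float64),      # (N, 2) detected keypoints
        K,                               # camera intrinsics (assumed known)
        None,                            # no lens distortion model
        flags=cv2.SOLVEPNP_EPNP,
    )
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> 3x3 matrix
    return R, tvec                       # rough pose, refined later in the fit
```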
We can controllably animate Neural Radiance Fields by deforming tetrahedral cages. Tetrahedral connectivity allows us to run the animations in real time.
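The reason this runs in real time: each sample point is bound once to a containing tetrahedron with fixed barycentric coordinates, so per-frame deformation reduces to a small weighted sum. A minimal NumPy sketch under those assumptions (variable names are illustrative):

```python
import numpy as np

def deform_samples(bary, tet_ids, tets, deformed_verts):
    # bary: (N, 4) barycentric coords, tet_ids: (N,) containing-tet indices,
    # tets: (T, 4) vertex indices, deformed_verts: (V, 3) moved cage vertices.
    corners = deformed_verts[tets[tet_ids]]        # (N, 4, 3) moved tet corners
    # A deformed sample is just a barycentric mix of its tet's four corners.
    return np.einsum("nk,nkc->nc", bary, corners)  # (N, 3) deformed positions
```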