1:30pm - 2:30pm

Thursday 19 October 2023

Geometry Guided Image-Based Lighting and Relighting

CVSSP External Seminar - Dr Julien Philip

Free

21BA02 or via Teams
University of Surrey
Guildford
Surrey
GU2 7XH

This event has passed

Meeting ID: 387 565 171 359 
Passcode: MdeyQr 

Speakers

  • Dr Julien Philip

Short bio:

Julien Philip is a research scientist at Adobe Research London. Before that, he received his PhD in 2020 from Inria Sophia Antipolis, where he worked under the supervision of George Drettakis. He interned at Adobe in 2019, working with Michaël Gharbi. Before Inria, Julien completed his undergraduate studies in France with a joint degree from Télécom Paris, majoring in Computer Science, and ENS Paris-Saclay, majoring in Applied Mathematics. His research interests lie at the crossroads of computer graphics, vision, and deep learning. During his PhD, he focused on providing editability for multi-view datasets, with an emphasis on relighting. Since then, his research has focused on neural rendering, 3D capture, and image relighting.


Abstract:
We acquire more images than ever before. While smartphones can now take beautiful pictures and videos, we often capture content under constrained conditions: lighting and viewpoint, for instance, are notably hard to control. In this talk, we will discuss how we can leverage geometry from either single- or multi-view inputs to enable novel view synthesis and relighting. The key idea in the work we will discuss is that providing a good representation of geometric information to neural networks allows us to tackle these complex synthesis and editing tasks. We will discuss both lighting and relighting applications, starting with multi-view inputs and showing how traditional computer graphics buffers can be combined with a neural network to produce realistic-looking relit images, both outdoors and indoors. We will then see how the ideas developed for the multi-view case can be transferred to the single-view case, using a neural network to perform image-space ray-casting. Finally, we'll dig into combining recent neural rendering approaches with pre-trained diffusion models in the context of lighting-based generation and relighting.