Computational lighting in video

This studentship will work alongside the BBC to bring artificial intelligence (AI) technologies into the production process for broadcast content. The student will be integral to creating an “AI editor” that can automatically enforce directorial/editorial rules in the cutting room, including postprocessing data to improve perceived visual quality.

Start date

1 October 2019

Duration

4 years

Application deadline

Funding source

Research Councils UK via the BBC

Funding information

Funding for this project is available to UK citizens only. The stipend is approximately £25,000 per annum.

About

Project

This project will explore how advances in generative networks and deep-learning approaches to beautification and style transfer can be combined to artificially recreate the appearance of professional lighting. The system will take recordings of a scene from one or more viewpoints under an unknown lighting setup, and will generate a video of the same scene that a viewer would judge to have been recorded under “three-point lighting”. The developed technology is of interest as a potential extension to the capabilities of the Ed system currently being developed by BBC R&D.
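For intuition, the target appearance can be thought of as re-rendering the scene under a key, fill and back light. The following is a minimal sketch of that rendering model, not the project’s method: it assumes per-pixel albedo and unit surface normals have already been estimated from the footage (itself a hard monocular scene-understanding problem), and every name and light parameter is illustrative.

    import numpy as np

    def relight_three_point(albedo, normals):
        """Toy Lambertian re-shading of a frame under a three-point rig.

        albedo:  (H, W, 3) per-pixel reflectance in [0, 1]
        normals: (H, W, 3) unit surface normals, assumed given
        """
        # Hypothetical directions/intensities for key, fill and back lights.
        lights = [
            (np.array([ 0.5, -0.5, 1.0]), 1.0),   # key: main, brightest source
            (np.array([-0.7, -0.2, 0.7]), 0.4),   # fill: softens the key's shadows
            (np.array([ 0.0,  0.8, -0.6]), 0.6),  # back: separates subject from set
        ]
        shading = np.zeros(normals.shape[:2])
        for direction, intensity in lights:
            d = direction / np.linalg.norm(direction)
            # Lambertian term; surfaces facing away from a light receive nothing.
            shading += intensity * np.clip(normals @ d, 0.0, None)
        return np.clip(albedo * shading[..., None], 0.0, 1.0)

In practice the project would aim to learn this mapping end-to-end with generative networks, precisely because reliable albedo and normal estimates are difficult to recover from ordinary video.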

The Ed system is an automated system for creating edited coverage of live events. Its inputs are high-resolution locked-off cameras deployed at venues with live audiences. The system creates “virtual camera” views by cropping these raw camera feeds, and cuts between them to produce the output programme. In real-world deployments, the lighting configuration is often constrained to a suboptimal setup. The ability to synthesise more appealing lighting than can practically be achieved on site would improve the output of the Ed system.
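To make the “virtual camera” idea concrete, the cropping step can be sketched as below. This is not BBC code: the function, parameters and resolutions are assumptions for illustration only.

    import cv2  # OpenCV, assumed available

    def virtual_camera(frame, centre, zoom, out_size=(1280, 720)):
        """Cut a 'virtual camera' shot from a high-resolution locked-off frame.

        frame:  full-resolution image from a fixed camera
        centre: (x, y) pixel the virtual camera points at
        zoom:   1.0 keeps the full frame; 2.0 crops to half width/height
        """
        h, w = frame.shape[:2]
        cw, ch = int(w / zoom), int(h / zoom)
        # Clamp the crop window so it stays inside the source frame.
        x = max(0, min(int(centre[0]) - cw // 2, w - cw))
        y = max(0, min(int(centre[1]) - ch // 2, h - ch))
        crop = frame[y:y + ch, x:x + cw]
        # Rescale the crop to the broadcast output resolution.
        return cv2.resize(crop, out_size, interpolation=cv2.INTER_AREA)

Cutting between several such crops of the same locked-off feed is what gives Ed multi-shot coverage without physical camera moves.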

Programme of research

The first 15 months will be based at the University of Surrey, focusing on integrating deep style transfer with computational lighting and monocular scene understanding. The initial research will concentrate on static environments captured at Surrey’s Audio-Visual Lab. This period will also include extensive personal development and training opportunities covering technical and professional skills.

The following 18 months will be undertaken at the BBC North site in Manchester, and will explore the temporal aspects of the problem, including temporal lighting consistency and correct shadowing for dynamic objects. This stage will use dynamic footage captured at the BBC site, and consequently the majority of the data collection will also take place during this period, with radio theatre as a likely use case. Difficulties in capturing multiple lighting conditions for a dynamic scene may necessitate an exploration of unsupervised/semi-supervised learning, or of domain transfer from simulation or between frequency spectra.

The remainder of the PhD is expected to be based primarily at BBC North, but some flexibility may be possible, subject to the requirements of the project and the student. The research during this period will focus on preliminary feasibility studies for potential follow-on research, such as conditional generation and structure/lighting disentanglement to allow the simulated lighting setup to be varied dynamically by Ed or a human user.

Key highlights of this Studentship

This studentship should appeal particularly to scientists and engineers who are interested in working with the creative industries, or who have a particular interest in cutting-edge and next-generation vision technologies. The project will be highly interdisciplinary, with exposure to both academic and industrial research centres. Furthermore, this studentship could potentially benefit from the University of Surrey’s Centre for Doctoral Training in Audio-Visual Machine Perception. This would constitute a rigorous, fully funded programme of personal development, including professional research and software skills, interdisciplinary hackathon events, and a co-located peer-support network of approximately 100 other PhD students in related areas. Students receiving their doctorates through this scheme are expected to be exceptionally well qualified for a subsequent career in research.

Eligibility criteria

Candidates are expected to have (by October 2019) a First Class or 2:1 Honours undergraduate degree, or a master’s degree, in one of the areas highlighted below.

The ideal candidate will have a strong academic background in software development from studying computer science, electronic engineering or a related subject, together with a keen interest in AI/machine learning. Prior experience in these areas (including formal study, project work or MOOCs) is advantageous but not required.

Non-native speakers of English will normally be required to have IELTS 6.5 or above (or equivalent), with no sub-test below 6.0.

Due to the nature of the funding, the studentship is available to UK applicants only. Overseas applications will not be considered.

How to apply

Applications must specify Dr Hadfield as the point of contact, and should be made through the online portal on the Vision, Speech and Signal Processing PhD course page.

You must also attach the following:

  • A CV
  • Certified copies of degree certificates and transcripts
  • A personal statement describing relevant experience (maximum two pages)
  • Two references
  • Proof of eligibility (e.g. passport or residence permit)

Shortlisted applicants will be contacted directly to arrange a suitable time for an interview.

Studentship FAQs

Read our studentship FAQs to find out more about applying and funding.

Contact details

Simon Hadfield
11 BA 00
Telephone: +44 (0)1483 689856
E-mail: s.hadfield@surrey.ac.uk