Generating virtual camera views using generative networks

This studentship will work alongside the BBC to bring AI technologies into the production process for broadcast content. The student will be integral to creating an “AI editor” that can automatically enforce directorial and editorial rules in the cutting room, including mixing in virtual camera viewpoints that were not actually recorded.

Start date

1 October 2019

Duration

3.5 years

Application deadline

Funding source

Research Councils UK via the BBC

Funding information

Approximately £21,000 per annum, with an enhanced stipend of £20,000 per annum. This is a directly funded project available to UK students only.

About

This project will explore how advances in generative networks and deep-learning approaches to inpainting can be combined to create virtual camera views. A unified generative deep-learning framework for viewpoint interpolation and inpainting should blend the observed viewpoints with hallucinated information to produce a virtual camera feed from otherwise impractical vantage points. The developed technology is of interest as a potential extension to the capabilities of the Ed system currently being developed by BBC R&D.
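As a rough intuition for the blending described above (not the project's actual method, and all names here are illustrative assumptions), a virtual view can be composited from a geometry-based interpolated image and a generatively inpainted image via a per-pixel confidence mask:

```python
import numpy as np

def blend_virtual_view(interpolated, inpainted, confidence):
    """Blend two H x W x 3 images using a per-pixel confidence in [0, 1].

    Pixels the interpolation observes reliably (confidence near 1) come
    from `interpolated`; disoccluded or unobserved pixels (confidence
    near 0) fall back to the hallucinated `inpainted` content.
    """
    confidence = confidence[..., np.newaxis]  # broadcast over RGB channels
    return confidence * interpolated + (1.0 - confidence) * inpainted

# Toy usage: a 2 x 2 frame whose left column is observed and whose
# right column must be hallucinated.
interp = np.full((2, 2, 3), 0.8)
inpaint = np.full((2, 2, 3), 0.2)
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])
out = blend_virtual_view(interp, inpaint, mask)
```

In a learned framework the mask itself would typically be predicted rather than hand-specified, but the compositing principle is the same.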

The Ed system is an automated system for creating edited coverage of live events. Its inputs are high-resolution locked-off cameras deployed at venues with live audiences. The system creates “virtual camera” views by cropping these raw camera feeds, and cuts between them to produce its output. In real-world deployments, cameras are commonly constrained to suboptimal positions. The ability to synthesise more favourable views from physically untenable camera positions could improve the output of the Ed system.
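The crop-based idea can be sketched in a few lines: a reframed shot is just a window cut from a locked-off high-resolution feed. The function and frame sizes below are illustrative assumptions, not the Ed system's actual API.

```python
import numpy as np

def virtual_camera(frame, top, left, height, width):
    """Return a crop of an H x W x 3 frame as a virtual camera view."""
    return frame[top:top + height, left:left + width]

# A 2160p (3840 x 2160) locked-off feed can yield a 1080p "close-up"
# framed anywhere inside it.
feed = np.zeros((2160, 3840, 3), dtype=np.uint8)
shot = virtual_camera(feed, top=540, left=960, height=1080, width=1920)
```

This is why only views lying inside an existing feed are possible today; vantage points outside all the physical cameras are exactly what the generative framework would add.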

Program of research

The first 15 months will be based at the University of Surrey, and will focus on integrating deep generative viewpoint interpolation with Surrey’s existing research on deep inpainting. The initial research will focus primarily on static environments. This period will also include extensive personal development and training opportunities covering technical and professional skills.

The following 18 months will be undertaken at the BBC North site in Manchester, and will explore problem-specific extensions to the developed framework, such as enforcing temporal consistency and inserting artificial motion blur for dynamic objects. The majority of the data collection will also happen during this period, with radio theatre as a likely use case.

The remainder of the PhD is expected to be based primarily at BBC North, but some flexibility may be possible, subject to the requirements of the project and the student. The research during this period will focus on preliminary feasibility studies for potential follow-on research, such as initial explorations of developing (moving) shots from virtual cameras, and the interaction of lighting with the virtual camera.

Key highlights of this studentship

This studentship should appeal particularly to scientists and engineers who are interested in working with the creative industries, or who have a particular interest in cutting-edge, next-generation vision technologies. The project will be highly interdisciplinary, with exposure to both academic and industrial research centres. Furthermore, this studentship could potentially benefit from the University of Surrey’s Centre for Doctoral Training in Audio-Visual Machine Perception. This would constitute a rigorous, fully funded programme of personal development, including professional research and software skills, interdisciplinary hackathon events, and a co-located peer-support network of approximately 100 other PhD students in related areas. Students receiving their doctorate through this scheme are expected to be exceptionally well qualified for a subsequent career in research.

Eligibility criteria

The ideal candidate for this studentship should have a strong academic background in software development from studying Computer Science, Electronic Engineering or a related subject, as well as a keen interest in AI/machine learning. Prior experience of AI (including formal study, project work or MOOCs) is advantageous but not required. Candidates are expected to have, by October 2019, either a 2:1 or First-class honours undergraduate degree, or a Masters degree, in one of the areas highlighted above.

Due to the nature of the funding, the studentship is available to UK citizens only. Overseas applications will not be considered.

The PhD scholarship is offered for the duration of three and a half years, starting on 1 October 2019, and will cover full university fees and a stipend. No external funds are required from the student.

Non-native speakers of English will normally be required to have IELTS 6.5 or above (or equivalent) with no sub-test of less than 6.

How to apply

Applications should specify the point of contact as Dr. Hadfield, and should be made through our Vision Speech and Signal Processing PhD programme page. In your application you must mention this studentship in order to be considered.

You must also attach a CV, certified copies of degree certificates and transcripts, a personal statement describing relevant experience (maximum two pages), two references, and proof of eligibility (e.g. passport or residence permit). Shortlisted applicants will be contacted directly to arrange a suitable time for an interview.

Vision Speech and Signal Processing PhD

Studentship FAQs

Read our studentship FAQs to find out more about applying and funding.

Contact details

Simon Hadfield
11 BA 00
Telephone: +44 (0)1483 689856
E-mail: s.hadfield@surrey.ac.uk
