
A-Lab: Machine audition
Technologies for machine perception of sound, and for the augmentation and reproduction of sound scenes.

Current activities
- Spatial audio production and reproduction
- Audio source separation, localisation and tracking
- Detection and classification of audio scenes and events
- Audio-visual speech processing.
Recognition and achievements
- 2017: Ranked 1st in Google-sponsored DCASE Challenge Task 4 “Audio Tagging”
- 2017: S3A/BBC “The Turning Forest” finalist for Best Google Play VR Experience
- 2016: S3A and BBC Research won the TVB Europe Award for Best Achievement in Sound for “The Turning Forest” VR sound experience.
Current projects
- EPSRC “Making Sense of Sounds” (£1.27M, 2016-2019)
- EU H2020 “Audio Commons” (2016-2019)
- EPSRC Platform Grant “Audio-Visual Media Research Platform” (2017-2022)
- EPSRC MARuSS “Musical Audio Repurposing using Source Separation” (£850k, 2015-2018)
- EU H2020 MSCA ITN “MacSeNet: Machine Sensing Training Network” (2015-2018, Coordinator)
- EU FP7 Marie Curie ITN “Sparse Representations and Compressed Sensing” (SpaRTaN, 2014-2018, Coordinator)
- EPSRC/dstl “Signal processing solutions for the networked battlespace” (2013-2018)
- EPSRC Programme Grant “S3A: Future Spatial Audio for an Immersive Listener Experience at Home” (2013-2019)
- EU H2020 CONTENT4ALL “Personalised Content Accessibilities for hearing-impaired people for a connected digital single market” (2017-2020).
Internal collaborations
- Institute of Sound Recording (IoSR) - Human audio perception
- Digital World Research Centre (DWRC) - Design of digital technologies
- Centre for Digital Economy (CoDE) - Digital audio business models.
Future focus
- Object-based audio
- Analysis of large-scale datasets
- Audio-visual sensing: Combining audio and visual perception.