Brief details of recently completed CVSSP (Centre for Vision, Speech and Signal Processing) research projects, funded through the EPSRC, European Commission (EC), industry and governmental sources, are listed here. Further details can be obtained either by visiting the relevant websites or by contacting those involved in the research. This list is not exhaustive.
The EPSRC has provided strategic long-term support for visual media research within CVSSP for the period 2003-2013, through the platform grant scheme.
A strategic partnership for collaboration in audio-visual research with the BBC and other companies.
The project is concerned with 3D face analysis. See the Faces of the British Isles project page for more information.
Dicta-Sign is a three-year EU-funded research project that aims to make online communication more accessible to deaf sign language users. See the Dicta-Sign project page for more information.
Video analysis to enable through-the-lens analysis of athlete performance.
Investigating body shape measurement in the home for online clothing retail. A collaboration with the London College of Fashion, bodymetrics, guided collective.
Developing the UK’s first cross-collection online portal to explore 100 years of archival dance performance. Visit the DDA website for more information.
This project uses both the audio and visual modalities to separate target speech from multiple competing speakers and sound sources in room environments, for use in a robotic system.
Human beings have developed a unique ability to communicate within a noisy environment, such as at a cocktail party. This skill is dependent upon the use of both the aural and visual senses together with sophisticated processing within the brain. To mimic this ability within a machine is very challenging, particularly if the humans are moving. This project attempts to address major challenges in audio-visual speaker localization, tracking and separation.
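One classical cue for the speaker-localization step described above is the time delay of arrival between a pair of microphones, commonly estimated with GCC-PHAT cross-correlation. The following is a minimal, self-contained sketch of that idea (an illustrative standard technique, not necessarily the method developed in this project; the sample rate and signal names are assumptions):

```python
import numpy as np

def gcc_phat(sig, ref, fs=16000):
    """Estimate the time delay (seconds) of `sig` relative to `ref`
    using GCC-PHAT cross-correlation, a standard localization cue."""
    n = sig.shape[0] + ref.shape[0]
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    # Re-centre the circular correlation so lag 0 sits in the middle.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

# Synthetic check: delay a noise burst by 5 samples between two "mics".
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
delayed = np.concatenate((np.zeros(5), x))[:1024]
tau = gcc_phat(delayed, x)                  # expected: 5 / 16000 seconds
```

Given delays from two or more microphone pairs, the speaker direction can then be triangulated and fused with visual detections for tracking.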
Investigating the optimal adoption of digital imaging techniques and technology for the UK breast screening programme. This involves simulation of imaging systems, lesion simulation and the generation of synthetic mammograms.
The MI3 project is developing the largest rad-hard CMOS imaging sensor for biomedical applications. The device is being used at Surrey for electrophoresis imaging applications.
The project applies principal component analysis (PCA), particle filters (PFs) and kernel density estimates (KDEs) to model, correct and predict the respiratory motion present in medical images, for application in therapeutic radiotherapy.
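As a rough illustration of the PCA part of such a motion model (a minimal sketch of the standard technique, not this project's implementation; the array shapes and function names are assumptions), displacement fields observed over a breathing cycle can be decomposed into a mean field plus a few principal motion modes, and new fields reconstructed from a handful of mode coefficients:

```python
import numpy as np

def fit_motion_model(displacements, n_modes=2):
    """Fit a PCA motion model to observed displacement fields.

    displacements: (n_frames, n_points) array, each row the flattened
    displacement field at one breathing phase.
    Returns the mean field and the leading principal motion modes.
    """
    mean = displacements.mean(axis=0)
    centred = displacements - mean
    # SVD of the centred data yields the principal components in vt.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_modes]               # modes: (n_modes, n_points)

def predict_field(mean, modes, coeffs):
    """Reconstruct a displacement field from mode coefficients."""
    return mean + np.asarray(coeffs) @ modes

# Synthetic example: a 1-D sinusoidal "breathing" motion at 10 points.
phases = np.linspace(0, 2 * np.pi, 20, endpoint=False)
fields = np.outer(np.sin(phases), np.ones(10))   # (20 frames, 10 points)
mean, modes = fit_motion_model(fields, n_modes=1)
coeffs = (fields[3] - mean) @ modes.T            # project one frame
recon = predict_field(mean, modes, coeffs)       # reconstruct it
```

In practice the low-dimensional coefficients, rather than the full fields, are what a particle filter would track and predict forward in time.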
Developing visual search technology to enable the detection of visual plagiarism in the arts. A collaboration with the University of the Creative Arts (UCA) and the Visual Arts Data Service (VADS).
3D capture of digital doubles of actors and integration in the film production pipeline.
Production of interactive animated 3D characters in conjunction with conventional broadcast production using multiple view video acquisition and 3D video reconstruction.
Collaborators: BBC, Vicon, Artefacto, Fraunhofer HHI, INRIA.
3D acquisition and representation of real-world scenes from video+depth capture for film production.
Collaborators: Technicolor, Intel Visual Computing Institute, ARRI, Brainstorm Multimedia, 3DLIZED, BarcelonaMedia, Fraunhofer HHI, IBBT.
Improved on-set processing of multimodal data sources (video, images, 3D, high dynamic range, user annotation) in film production.
Collaborators: Double Negative, FilmLight.