Interfaces / visual interaction

Adaptive summarisation of large video repositories

The conventional paradigm for bridging the semantic gap between the low-level information extracted from digital videos and the user's need to interact meaningfully and intuitively with large multimedia databases is to learn and model the way different users link perceived stimuli with their meaning. This widespread approach attempts to uncover the underpinning processes of human visual understanding and thus often fails to achieve reliable results unless it targets a narrow application context or a specific type of video content. The work presented here shifts towards more user-centred summarisation and browsing of large video collections by augmenting the user's interaction with the content rather than learning the way users create the related semantics.

To create an effortless and intuitive interaction with the overwhelming amount of information embedded in video archives, we are studying two systems for generating compact video summaries in two different scenarios. The first system targets high-end users such as broadcast production professionals, exploiting the universally familiar narrative structure of comics to generate easily readable visual summaries.
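As a purely illustrative sketch (not the production system described above), a comic-style layout can be approximated by giving each keyframe a panel whose width is proportional to a hypothetical shot-importance score and packing the panels row by row across a fixed-width page:

# Illustrative sketch only: lay out keyframes as comic-style panels,
# sizing each panel in proportion to an importance score and packing
# panels into rows of a fixed page width. The scores and parameters
# are hypothetical placeholders, not the system described above.
from dataclasses import dataclass

@dataclass
class Panel:
    shot_id: int
    x: int
    y: int
    w: int
    h: int

def comic_layout(importances, page_width=800, row_height=180,
                 min_w=120, max_w=400):
    """Map per-shot importance scores to panel rectangles on the page."""
    lo, hi = min(importances), max(importances)
    span = (hi - lo) or 1.0
    panels, x, y = [], 0, 0
    for shot_id, score in enumerate(importances):
        # More important shots get wider panels.
        w = int(min_w + (max_w - min_w) * (score - lo) / span)
        if x + w > page_width:          # start a new row
            x, y = 0, y + row_height
        panels.append(Panel(shot_id, x, y, w, row_height))
        x += w
    return panels

if __name__ == "__main__":
    scores = [0.9, 0.2, 0.55, 0.7, 0.1, 0.95]   # hypothetical shot importances
    for p in comic_layout(scores):
        print(p)

In the real system the importance scores would come from content analysis of the footage; here they are simply supplied by hand to show how scores translate into a readable page layout.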

For browsing video archives in a mobile application scenario, the visual summary is generated using a model of human visual attention. The salient information extracted by the attention model is used to lay out an optimal presentation of the content on a device with a small display, whether it is a mobile phone, handheld PC or PDA.
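As a minimal sketch, and assuming a crude gradient-magnitude map can stand in for the visual attention model, the small-display problem can be illustrated as choosing the crop window of the target display size that covers the most salient area of a keyframe:

# Minimal sketch, not the attention model described above: use a crude
# gradient-magnitude map as a stand-in for visual saliency, then pick the
# crop window of the target display size that covers the most salient area.
import numpy as np

def saliency_map(gray):
    """Crude attention proxy: local gradient magnitude of a grayscale frame."""
    gy, gx = np.gradient(gray.astype(np.float32))
    return np.hypot(gx, gy)

def best_crop(gray, crop_h, crop_w):
    """Return (top, left) of the crop window maximising summed saliency."""
    sal = saliency_map(gray)
    # An integral image lets us score every window position in O(1) each.
    integ = np.pad(sal, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    H, W = sal.shape
    best, best_pos = -1.0, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            s = (integ[top + crop_h, left + crop_w] - integ[top, left + crop_w]
                 - integ[top + crop_h, left] + integ[top, left])
            if s > best:
                best, best_pos = s, (top, left)
    return best_pos

if __name__ == "__main__":
    frame = np.random.rand(240, 320)          # stand-in for a video keyframe
    top, left = best_crop(frame, 120, 160)    # e.g. a 160x120 handheld display
    print("crop at", top, left)

A full attention model would combine several feature channels (colour, motion, faces) rather than raw gradients, but the principle of scoring candidate presentations against the salient regions is the same.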

SP-ARK: Access to a large-scale film archive

The SP-ARK project set out to develop web access to the world's only resource comprising the full range of assets from the whole film production process: from initial sketches to the launch and film festival presentations. This collaboration with Adventure Pictures Ltd, the film production house of renowned British director Sally Potter, showcases a synergy of the state-of-the-art content management technology developed at I-Lab, Centre for Vision, Speech and Signal Processing, University of Surrey, with this unique film archive.

The collaboration delivered a globally accessible digital asset management platform and the core media processing engines, enabling intuitive access to this vast archive for a wide range of users. To develop a platform that would facilitate commercial exploitation and extend the research testbed, the SP-ARK KTA project focused on delivering effective interaction with the SP-ARK archive through a web interface. This activity enabled further research and development by both partners, attracted potential users, and maximised the licensing potential of the platform.

Interaction with 3D video content

One of the main emerging challenges for future multimedia platforms is the development of three-dimensional (3D) display technology, which has prompted a plethora of research activity in the video community. This emerging technology can bring a whole new experience to the end user by offering a truly immersive 3D viewing experience. However, research towards meaningful user interaction with real 3D content is still in its early stages.

With this in mind, the main aim of this activity is to develop a comprehensive understanding of how to build an interactive 3D video platform that supports intuitive interaction with 3D video content. The key elements of the proposed platform are effective interaction with the content and the design of an appropriate graphical user interface. Moreover, in order to specify the requirements for these designs, a number of studies into the implications of the 3D content delivery mechanism as well as best user practices are being conducted.

Contact us

Find us

Address
Centre for Vision Speech and Signal Processing
Alan Turing Building (BB)
University of Surrey
Guildford
Surrey
GU2 7XH