REVEAL: Recovering evidence from video by fusing video evidence thesaurus and video meta-data
Start date: 01 December 2004
End date: 30 June 2008
During any major crime investigation, establishing the identity of the vehicles and individuals involved requires the extremely time-consuming process of manually annotating all available CCTV tapes and digital archives.
Therefore, any technology for recovering intelligence automatically from video footage must be a priority for development of the evidence-gathering capability of our police forces. This project aims to advance research in the recovery of evidence from video footage.
There are two key crime-oriented applications that will directly benefit from this research. First, video summarisation of CCTV archives, i.e. the automatic generation of a gallery of mugshots and number plates for all moving objects. Such a gallery represents the most effective method of enlisting the knowledge of local police officers and the general public.
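As an illustration only, the summarisation step — detecting moving objects and cropping them into a gallery — can be sketched with simple frame differencing. This is a hypothetical simplification, not the project's method: a real system would decode video and use a learned background model rather than raw pixel grids.

```python
# Minimal sketch of gallery extraction via frame differencing.
# Frames are grey-level grids (lists of lists); a real system would use
# a video decoder and a statistical background-subtraction model.

def moving_pixels(prev, curr, threshold=30):
    """Return (row, col) positions whose intensity changed by > threshold."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, v in enumerate(row)
            if abs(v - prev[r][c]) > threshold]

def bounding_box(pixels):
    """Smallest box (top, left, bottom, right) covering the changed pixels."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    return (min(rows), min(cols), max(rows), max(cols))

def gallery_crops(frames, threshold=30):
    """One cropped 'mugshot' per frame in which motion was detected."""
    crops = []
    for prev, curr in zip(frames, frames[1:]):
        pixels = moving_pixels(prev, curr, threshold)
        if pixels:
            top, left, bottom, right = bounding_box(pixels)
            crops.append([row[left:right + 1]
                          for row in curr[top:bottom + 1]])
    return crops

# A 4x4 static background, then a bright "object" entering in frame 2.
bg = [[0] * 4 for _ in range(4)]
f2 = [row[:] for row in bg]
f2[1][1] = f2[1][2] = 255          # the moving object occupies two pixels
crops = gallery_crops([bg, f2])
print(crops)                        # [[[255, 255]]]
```

Each crop would then be rendered as a thumbnail in the gallery; in practice the bounding box would be tracked across frames so each object yields one representative image rather than one per frame.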
Second, automatic annotation of video footage, so that all evidence can be entered automatically into HOLMES 2, the investigation management system used by police forces to collect, manage and analyse intelligence data.
Two novel areas of investigation are proposed. First, methods for representing and analysing crowds are to be developed to process typically crowded scenes. Second, multimodal data fusion will couple the linguistic structure of current police annotation practice with the metadata structure of the video interpretation process, generating a rich homogeneous data representation that can drive the annotation process.