Dr Helen Cooper


Project Officer and Facilities Manager
+44 (0)1483 689851
34 BA 00

Affiliations and memberships

British Machine Vision Association (BMVA)
I am part of the BMVA Executive Committee and am responsible for memberships and meeting organisation.

Publications

Cooper H, Ong E-J, Bowden R (2011) Give Me a Sign: A Person Independent Interactive Sign Dictionary, VSSP-TR-1/2011
This paper presents a method for person-independent sign recognition, achieved using generalising features based on sign linguistics. These are combined using two methods: the first, traditional Markov models, is shown to lack the required generalisation; the second, a discriminative approach called Sequential Pattern Boosting, combines feature selection with learning. The resulting system is presented as a dictionary application, allowing signers to query by performing a sign in front of a Kinect. Two data sets are used and results shown for both, with the query-return rate reaching 99.9% on a 20-sign multi-user dataset and 85.1% on a more challenging and realistic subject-independent, 40-sign test set.
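The contrast between the two combination methods is easiest to see for Sequential Pattern Boosting, where each weak classifier tests whether a short, ordered pattern of discrete feature events occurs anywhere within the query sequence. Below is a minimal Python sketch of that idea; the feature names, patterns, and weights are invented for illustration and do not reproduce the paper's implementation.

    # Hedged sketch of sequential-pattern weak classifiers; all names,
    # patterns and weights here are hypothetical.

    def contains_pattern(sequence, pattern):
        # True if `pattern` occurs as an ordered (not necessarily contiguous)
        # subsequence of `sequence`; each frame is a set of active feature events.
        i = 0
        for frame in sequence:
            if pattern[i] <= frame:  # every event in this pattern step fires
                i += 1
                if i == len(pattern):
                    return True
        return False

    def ensemble_score(sequence, weak_learners):
        # Weighted vote of pattern detectors, as in a boosted ensemble.
        return sum(alpha * (1 if contains_pattern(sequence, pattern) else -1)
                   for alpha, pattern in weak_learners)

    # Hypothetical query: per-frame sets of discrete linguistic feature events.
    query = [{"right_hand_up"}, {"hands_together", "right_hand_up"}, {"hands_apart"}]
    learners = [(0.8, [{"right_hand_up"}, {"hands_apart"}]),
                (0.3, [{"hands_together"}])]
    print(ensemble_score(query, learners))  # positive score: the sign matches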
Cooper HM (2010) Sign Language Recognition: Generalising to More Complex Corpora, PhD thesis
The aim of this thesis is to find new approaches to Sign Language Recognition (SLR) which are suited to working with the limited corpora currently available. Data available for SLR is of limited quality; low resolution and frame rates make the task of recognition even more complex. The content is rarely natural, concentrating on isolated signs and filmed under laboratory conditions. In addition, the amount of accurately labelled data is minimal. To this end, several contributions are made: tracking the hands is eschewed in favour of detection-based techniques more robust to noise; classifiers for both whole signs and for linguistically-motivated sign sub-units are investigated, to make best use of limited data sets. Finally, an algorithm is proposed to learn signs from the inset signers on TV, with the aid of the accompanying subtitles, thus increasing the corpus of data available.
Tracking fast-moving hands under laboratory conditions is a complex task; with real-world data the challenge is even greater. When using tracked data as a base for SLR, errors in the tracking are compounded at the classification stage. Proposed instead is a novel sign detection method, which views space-time as a 3D volume and the sign within it as an object to be located. Features are combined into strong classifiers using a novel boosting implementation designed to create optimal classifiers over sparse datasets. Using boosted volumetric features on a robust frame-differenced input, average classification rates reach 71% on seen signers and 66% on a mixture of seen and unseen signers, with individual sign classification rates reaching 95%.
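As a rough illustration of the robust frame-differenced input mentioned above, the sketch below builds a binary space-time motion volume from a grey-scale clip; the threshold and clip dimensions are arbitrary assumptions, not values taken from the thesis.

    import numpy as np

    def difference_volume(frames, threshold=15):
        # Stack absolute inter-frame differences into a (t, y, x) volume,
        # suppressing small differences to reduce sensor noise.
        frames = np.asarray(frames, dtype=np.int16)   # (T, H, W) grey-scale
        diffs = np.abs(np.diff(frames, axis=0))       # (T-1, H, W)
        return (diffs > threshold).astype(np.uint8)   # binary motion volume

    # Random frames stand in for a real sign clip in this toy example.
    clip = np.random.randint(0, 256, size=(16, 120, 160))
    print(difference_volume(clip).shape)              # (15, 120, 160)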
Using a classifier-per-sign approach to SLR means that data sets need to contain numerous examples of the signs to be learnt. Instead, this thesis proposes learning classifiers to detect the common sub-units of sign, whose responses can then be combined for recognition at the sign level. This approach requires fewer examples per sign, since the sub-unit detectors are trained on data from multiple signs. It is also faster at detection time, since there are fewer classifiers to consult, their number being limited by the linguistics of sign rather than by the number of signs being detected. For this method, appearance-based boosted classifiers are introduced to distinguish the sub-units of sign. Results show that, when combined with temporal models, these novel sub-unit classifiers can outperform similar classifiers.
Cooper H, Bowden R (2010) Sign Language Recognition using Linguistically Derived Sub-Units, Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, pp. 57-61, European Language Resources Association (ELRA)
This work proposes to learn linguistically-derived sub-unit classifiers for sign language. The responses of these classifiers can be combined by Markov models, producing efficient sign-level recognition. Tracking is used to create vectors of hand positions per frame as inputs for sub-unit classifiers learnt using AdaBoost. Grid-like classifiers are built around specific elements of the tracking vector to model the placement of the hands. Comparative classifiers encode the positional relationship between the hands. Finally, binary-pattern classifiers are applied over the tracking vectors of multiple frames to describe the motion of the hands. Results for the sub-unit classifiers in isolation are presented, reaching averages over 90%. Using a simple Markov model to combine the sub-unit classifiers allows sign-level classification, giving an average of 63% over a 164-sign lexicon, with no grammatical constraints.
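To make the combination step concrete, the toy sketch below scores a sequence of discrete sub-unit labels under a first-order Markov chain per sign and returns the best-scoring sign; the lexicon, sub-unit alphabet, and probabilities are invented for illustration, not taken from the paper.

    import numpy as np

    def chain_log_likelihood(labels, start_p, trans_p):
        # Log-probability of a discrete sub-unit label sequence under a
        # first-order Markov chain (log domain avoids numerical underflow).
        score = np.log(start_p[labels[0]])
        for a, b in zip(labels, labels[1:]):
            score += np.log(trans_p[a][b])
        return score

    def classify(labels, models):
        # Pick the sign whose Markov chain explains the sequence best.
        return max(models, key=lambda s: chain_log_likelihood(labels, *models[s]))

    # Hypothetical two-sign lexicon over a three-symbol sub-unit alphabet {0, 1, 2}.
    models = {
        "sign_A": ({0: 0.8, 1: 0.1, 2: 0.1},
                   {0: {0: 0.2, 1: 0.7, 2: 0.1},
                    1: {0: 0.1, 1: 0.2, 2: 0.7},
                    2: {0: 0.7, 1: 0.1, 2: 0.2}}),
        "sign_B": ({0: 0.1, 1: 0.1, 2: 0.8},
                   {0: {0: 0.2, 1: 0.1, 2: 0.7},
                    1: {0: 0.7, 1: 0.2, 2: 0.1},
                    2: {0: 0.1, 1: 0.7, 2: 0.2}}),
    }
    print(classify([0, 1, 2, 0], models))  # "sign_A"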
Cooper H, Bowden R (2007) Large Lexicon Detection of Sign Language, Human-Computer Interaction, Proceedings, vol. 4796, pp. 88-97, Springer-Verlag Berlin
Cooper H, Bowden R (2009) Learning Signs From Subtitles: A Weakly Supervised Approach To Sign Language Recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2568-2574
This paper introduces a fully-automated, unsupervised method to recognise signs from subtitles. It does this by using data mining to align correspondences in sections of videos. Based on head and hand tracking, a novel temporally constrained adaptation of apriori mining is used to extract similar regions of video, with the aid of a proposed contextual negative selection method. These regions are refined in the temporal domain to isolate the occurrences of similar signs in each example. The system is shown to automatically identify and segment signs from standard news broadcasts containing a variety of topics.
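The mining step can be pictured as frequent-pattern counting with contextual negative selection: keep feature patterns that recur in clips whose subtitles contain the target word, and discard those also common in clips that do not. The Python sketch below shows only that support intuition on invented data; the paper's temporally constrained apriori variant additionally grows multi-frame patterns level by level.

    from collections import Counter

    def frequent_patterns(clips, min_support):
        # Count single-frame feature sets; an apriori-style miner would grow
        # these into longer temporal patterns one level at a time.
        counts = Counter(frozenset(frame) for clip in clips for frame in clip)
        return {p for p, c in counts.items() if c >= min_support}

    def mine_sign_candidates(positives, negatives, min_support, max_negative):
        pos = frequent_patterns(positives, min_support)
        neg = frequent_patterns(negatives, max_negative)
        return pos - neg  # frequent in positives, rare in negatives

    # Invented per-frame feature events for clips with/without the subtitle word.
    positives = [[{"right_high", "left_low"}, {"right_high"}],
                 [{"right_high"}, {"hands_cross"}]]
    negatives = [[{"left_low"}], [{"hands_cross"}]]
    print(mine_sign_candidates(positives, negatives, min_support=2, max_negative=1))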
Cooper HM, Pugeault N, Bowden R (2011) Reading the Signs: A Video Based Sign Dictionary, 2011 International Conference on Computer Vision: 2nd IEEE Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams (ARTEMIS 2011), pp. 914-919, IEEE
This article presents a dictionary for Sign Language using visual sign recognition based on linguistic subcomponents. We demonstrate a system where the user makes a query, receiving in response a ranked selection of similar results. The approach uses concepts from linguistics to provide sign sub-unit features and classifiers based on motion, sign-location and handshape. These sub-units are combined using Markov Models for sign level recognition. Results are shown for a video dataset of 984 isolated signs performed by a native signer. Recognition rates reach 71.4% for the first candidate and 85.9% for retrieval within the top 10 ranked signs.
Cooper H, Bowden R (2009) Sign Language Recognition: Working with Limited Corpora, Universal Access in Human-Computer Interaction: Applications and Services, Part III, vol. 5616, pp. 472-481, Springer-Verlag Berlin
Elliott R, Cooper HM, Ong EJ, Glauert J, Bowden R, Lefebvre-Albaret F (2011) Search-By-Example in Multilingual Sign Language Databases
We describe a prototype Search-by-Example or look-up tool for signs, based on a newly developed 1000-concept sign lexicon for four national sign languages (GSL, DGS, LSF, BSL), which includes a spoken language gloss, a HamNoSys description, and a video for each sign. The look-up tool combines an interactive sign recognition system, supported by Kinect technology, with a real-time sign synthesis system, using a virtual human signer, to present results to the user. The user performs a sign to the system and is presented with animations of signs recognised as similar. The user also has the option to view any of these signs performed in the other three sign languages. We describe the supporting technology and architecture for this system, and present some preliminary evaluation results.
Holt B, Ong EJ, Cooper H, Bowden R (2011) Putting the Pieces Together: Connected Poselets for Human Pose Estimation, Proceedings of the IEEE International Conference on Computer Vision, pp. 1196-1201
We propose a novel hybrid approach to static pose estimation called Connected Poselets. This representation combines the best aspects of part-based and example-based estimation. Our method first detects poselets extracted from the training data, then applies a modified Random Decision Forest to identify poselet activations. By combining keypoint predictions from poselet activations within a graphical model, we can infer the marginal distribution over each keypoint without any kinematic constraints. Our approach is demonstrated on a new publicly available dataset with promising results.
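As a loose illustration of the voting idea, each poselet activation in the sketch below casts offset votes for keypoint positions, which are then averaged; the offsets are invented, and the paper's graphical-model inference over keypoint marginals is reduced here to a simple mean.

    import numpy as np
    from collections import defaultdict

    def aggregate_keypoint_votes(activations):
        # activations: list of (detection_position, {keypoint: offset}) pairs.
        # Returns the mean voted position per keypoint, a crude stand-in for
        # the per-keypoint marginal distribution inferred in the paper.
        votes = defaultdict(list)
        for position, offsets in activations:
            for keypoint, offset in offsets.items():
                votes[keypoint].append(np.asarray(position) + np.asarray(offset))
        return {k: np.mean(v, axis=0) for k, v in votes.items()}

    # Two hypothetical poselet activations voting for head and shoulder.
    acts = [((50, 40), {"head": (0, -10), "r_shoulder": (8, 5)}),
            ((54, 42), {"head": (-2, -12)})]
    print(aggregate_keypoint_votes(acts))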
Cooper H, Bowden R (2007) Sign Language Recognition Using Boosted Volumetric Features, Proceedings of the IAPR Conference on Machine Vision Applications, pp. 359-362, MVA Organisation
This paper proposes a method for sign language recognition that bypasses the need for tracking by classifying the motion directly. The method uses the natural extension of Haar-like features into the temporal domain, computed efficiently using an integral volume. These volumetric features are assembled into spatio-temporal classifiers using boosting. Results are presented for a fast feature extraction method and two different types of boosting. These configurations have been tested on a data set consisting of both seen and unseen signers performing five signs, producing competitive results.
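An integral volume is the 3D analogue of the integral image: after one cumulative-sum pass, any cuboid of the space-time volume can be summed with eight lookups, which is what makes temporal Haar-like features cheap to evaluate. A minimal sketch follows, with an example two-box temporal feature; the volume and feature geometry are arbitrary assumptions rather than the paper's configuration.

    import numpy as np

    def integral_volume(vol):
        # Cumulative sums along t, y and x, zero-padded so cuboid sums
        # need no boundary special cases.
        return np.pad(vol.cumsum(0).cumsum(1).cumsum(2), ((1, 0), (1, 0), (1, 0)))

    def cuboid_sum(iv, t0, t1, y0, y1, x0, x1):
        # Sum of vol[t0:t1, y0:y1, x0:x1] by 3D inclusion-exclusion (8 lookups).
        return (iv[t1, y1, x1] - iv[t0, y1, x1] - iv[t1, y0, x1] - iv[t1, y1, x0]
                + iv[t0, y0, x1] + iv[t0, y1, x0] + iv[t1, y0, x0] - iv[t0, y0, x0])

    def temporal_haar(iv, t0, t1, y0, y1, x0, x1):
        # Two-box feature: motion in the second half minus the first half.
        tm = (t0 + t1) // 2
        return (cuboid_sum(iv, tm, t1, y0, y1, x0, x1)
                - cuboid_sum(iv, t0, tm, y0, y1, x0, x1))

    # Toy binary motion volume standing in for frame-differenced input.
    vol = np.random.randint(0, 2, size=(16, 32, 32))
    iv = integral_volume(vol)
    print(temporal_haar(iv, 0, 16, 8, 24, 8, 24))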
Cooper HM, Kinect Sign Recognition, University of Surrey
Ong EJ, Cooper H, Pugeault N, Bowden R (2012) Sign Language Recognition using Sequential Pattern Trees, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), pp. 2200-2207
Cooper HM, Holt B, Bowden R (2011) Sign Language Recognition, In: Moeslund TB, Hilton A, Krüger V, Sigal L (eds.), Visual Analysis of Humans: Looking at People, pp. 539-562, Springer Verlag
This chapter covers the key aspects of sign-language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a précis of sign linguistics and their impact on the field. The types of data available and their relative merits are explored, allowing examination of the features which can be extracted. Classifying the manual aspects of sign (similar to gestures) is then discussed from a tracking and non-tracking viewpoint, before summarising some of the approaches to the non-manual aspects of sign languages. Methods for combining the sign classification results into full SLR are given, showing the progression towards speech recognition techniques and the further adaptations required for the sign-specific case. Finally, the current frontiers are discussed and the recent research presented: the task of continuous sign recognition, the work towards true signer independence, how to effectively combine the different modalities of sign, making use of the current linguistic research, and adapting to larger, more noisy data sets.
Cooper H, Ong E-J, Pugeault N, Bowden R (2017) Sign Language Recognition Using Sub-units, In: Escalera S, Guyon I, Athitsos V (eds.), Gesture Recognition, pp. 89-118, Springer International Publishing
This chapter discusses sign language recognition using linguistic sub-units. It presents three types of sub-units for consideration: those learnt from appearance data, and those inferred from either 2D or 3D tracking data. These sub-units are then combined using a sign-level classifier; here, two options are presented. The first uses Markov Models to encode the temporal changes between sub-units. The second makes use of Sequential Pattern Boosting to apply discriminative feature selection at the same time as encoding temporal information. This approach is more robust to noise and performs well in signer-independent tests, improving results from the 54% achieved by the Markov Chains to 76%.