Marcel S, McCool C, Matějka P, Ahonen T, Černocký J, Chakraborty S, Balasubramanian V, Panchanathan S, Chan CH, Kittler J, Poh N, Fauve B, Glembek O, Plchot O, Jančík Z, Larcher A, Lévy C, Matrouf D, Bonastre J-F, Lee P-H, Hung J-Y, Wu S-W, Hung Y-P, Machlica L, Mason J, Mau S, Sanderson C, Monzo D, Albiol A, Nguyen HV, Bai L, Wang Y, Niskanen M, Turtinen M, Nolazco-Flores JA, Garcia-Perera LP, Aceves-Lopez R, Villegas M, Paredes R (2010) On the Results of the First Mobile Biometry (MOBIO) Face and Speaker Verification Evaluation, ICPR Contests
Poh N, Chan CH, Kittler J (2014) Fusion of Face Recognition Classifiers under Adverse Conditions, In: De Marsico M, Nappi M, Tistarelli M (eds.), Face Recognition in Adverse Conditions IGI Global
Pang G, Kwan T, Liu H, Chan C-H (1999) Optical wireless based on high brightness visible LEDs, Industry Applications Conference, 1999. Thirty-Fourth IAS Annual Meeting. Conference Record of the 1999 IEEE 3 pp. 1693-1699 vol.3
Poh N, Chan CH (2015) Generalizing DET Curves Across Application Scenarios, IEEE Transactions on Information Forensics and Security 10 (10) pp. 2171-2181 IEEE
Pang G, Kwan T, Chan C-H, Liu H (1999) LED traffic light as a communications device, Intelligent Transportation Systems, 1999. Proceedings. 1999 IEEE/IEEJ/JSAI International Conference on pp. 788-793
Chan CH, Kittler J (2010) Sparse Representation of (Multiscale) Histograms for Face Recognition Robust to Registration and Illumination Problems, ICIP
Hu G, Chan CH, Kittler J, Christmas W (2012) Resolution-Aware 3D Morphable Model, Proceedings of the British Machine Vision Conference pp. 109.1-109.10 BMVA Press
Chan CH, Goswami B, Kittler J, Christmas W (2011) Kernel-based Speaker Verification Using Spatiotemporal Lip Information, Proceedings of MVA 2011 - IAPR Conference on Machine Vision Applications pp. 422-425 MVA Organization
Poh N, Chan CH, Kittler J, Marcel S, McCool C, Argones Rua E, Alba Castro JL, Villegas M, Paredes R, Struc V, Pavesic N, Salah AA, Fang H, Costen N (2009) Face Video Competition, Advances in Biometrics 5558 pp. 715-724 Springer-Verlag Berlin
Koppen WP, Chan CH, Christmas WJ, Kittler J (2012) An intrinsic coordinate system for 3D face registration, Proceedings - International Conference on Pattern Recognition pp. 2740-2743
We present a method to estimate, based on the horizontal symmetry, an intrinsic coordinate system of faces scanned in 3D. We show that this coordinate system provides an excellent basis for subsequent landmark positioning and model-based refinement such as Active Shape Models, outperforming other explicit landmark localisation methods including the commonly used ICP+ASM approach. © 2012 ICPR Org Committee.
Chan C-H, Kittler J, Messer K (2007) Multi-scale local binary pattern histograms for face recognition, Advances in Biometrics, Proceedings 4642 pp. 809-818 SPRINGER-VERLAG BERLIN
Tahir MA, Chan CH, Kittler J, Bouridane A (2011) Face Recognition using Multi-Scale Local Phase Quantisation and Linear Regression Classifier, ICIP
Zou X, Kittler J, Messer K (2007) Illumination invariant face recognition: A survey, pp. 113-120 IEEE
Vazquez HM, Kittler J, Chan C-H, Reyes EBG (2010) On Combining Local DCT with Preprocessing Sequence for Face Recognition under Varying Lighting Conditions, CIARP pp. 410-417
© 2014 IEEE. Large pose and illumination variations are very challenging for face recognition. The 3D Morphable Model (3DMM) approach is one of the effective methods for pose- and illumination-invariant face recognition. However, it is very difficult for the 3DMM to recover the illumination of the 2D input image because the ratio of the albedo and illumination contributions in a pixel intensity is ambiguous. Unlike the traditional idea of separating the albedo and illumination contributions using a 3DMM, we propose a novel Albedo Based 3D Morphable Model (AB3DMM), which removes the illumination component from the images using illumination normalisation in a preprocessing step. A comparative study of different illumination normalisation methods for this step is conducted on the PIE and Multi-PIE databases. The results show that our method outperforms state-of-the-art methods overall.
Mendez-Vázquez H, Kittler J, Chan CH, García-Reyes E (2013) Photometric normalization for face recognition using local discrete cosine transform, International Journal of Pattern Recognition and Artificial Intelligence 27 (3)
Variation in illumination is one of the major limiting factors of face recognition system performance. The effect of changes in the incident light on face images is analyzed, as well as its influence on the low-frequency components of the image. Starting from this analysis, a new photometric normalization method for illumination-invariant face recognition is presented. Low-frequency Discrete Cosine Transform coefficients in the logarithmic domain are used in a local way to reconstruct a slowly varying component of the face image which is caused by illumination. After smoothing, this component is subtracted from the original logarithmic image to compensate for illumination variations. Compared to other preprocessing algorithms, our method achieved very good performance, with a total error rate very similar to that produced by the best-performing state-of-the-art algorithm. An in-depth analysis of the two preprocessing methods revealed notable differences in their behavior, which is exploited in a multiple classifier fusion framework to achieve further performance improvement. The superiority of the proposal is demonstrated in both face verification and identification experiments. © 2013 World Scientific Publishing Company.
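As a rough illustration of the local DCT idea described in this abstract, the sketch below reconstructs a low-frequency illumination component per block of the log image and subtracts it. The block size, the number of retained coefficients, and the omission of the smoothing step are simplifying assumptions, not details taken from the paper:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= np.sqrt(1.0 / n)
    m[1:, :] *= np.sqrt(2.0 / n)
    return m

def local_dct_normalise(img, block=8, keep=3, eps=1.0):
    """Estimate a slowly varying illumination field from the low-frequency
    DCT coefficients of each block of the log image, then subtract it."""
    log_img = np.log(np.asarray(img, dtype=np.float64) + eps)
    h = log_img.shape[0] - log_img.shape[0] % block
    w = log_img.shape[1] - log_img.shape[1] % block
    log_img = log_img[:h, :w]              # crop to whole blocks
    D = dct_matrix(block)
    illum = np.empty_like(log_img)
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = log_img[r:r + block, c:c + block]
            coef = D @ patch @ D.T         # 2D DCT of the block
            coef[keep:, :] = 0.0           # discard high frequencies
            coef[:, keep:] = 0.0
            illum[r:r + block, c:c + block] = D.T @ coef @ D
    return log_img - illum                 # reflectance-like residual
```

On a uniformly lit (constant) region the low-frequency reconstruction matches the block exactly, so the residual is zero, which matches the intuition that the subtracted component captures slowly varying illumination.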
Chan CH, Kittler J, Poh N (2013) State-of-the-Art LBP Descriptor for Face Recognition, In: Brahnam S, Jain LC, Nanni L, Lumini A (eds.), Local Binary Patterns: New Variants and Applications 506 Springer Berlin / Heidelberg
Lyons MJ, Chan C-H, Tetsutani N (2004) MouthType: text entry by hand and mouth, CHI Extended Abstracts pp. 1383-1386
Goswami B, Chan C, Kittler J, Christmas W (2011) Speaker Authentication Using Video-Based Lip Information, 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing pp. 1908-1911 IEEE
Chan CH, Kittler J (2012) Blur kernel estimation to improve recognition of blurred faces, Image Processing (ICIP), 2012 19th IEEE International Conference on pp. 1989-1992
Chan CH, Tahir MA, Kittler J, Pietikäinen M (2013) Multiscale Local Phase Quantization for Robust Component-Based Face Recognition Using Kernel Fusion of Multiple Descriptors, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (5) pp. 1164-1177
Chan CH, Goswami B, Kittler J, Christmas W (2010) Local Ordinal Contrast Pattern Histograms for Spatiotemporal, Lip-Based Speaker Authentication, IEEE Transactions on Information Forensics and Security 7 pp. 602-612
Lip region deformation during speech contains biometric information and is termed visual speech. This biometric information can be interpreted as being genetic or behavioral depending on whether static or dynamic features are extracted. In this paper, we use a texture descriptor called local ordinal contrast pattern (LOCP) with a dynamic texture representation called three orthogonal planes to represent both the appearance and dynamics features observed in visual speech. This feature representation, when used in standard speaker verification engines, is shown to improve the performance of the lip-biometric trait compared to the state-of-the-art. The best baseline state-of-the-art performance was a half total error rate (HTER) of 13.35% for the XM2VTS database. We obtained an HTER of less than 1%. The resilience of the LOCP texture descriptor to random image noise is also investigated. Finally, the effect of the amount of video information on speaker verification performance suggests that with the proposed approach, speaker identity can be verified with a much shorter biometric trait record than the length normally required for voice-based biometrics. In summary, the performance obtained is remarkable and suggests that there is enough discriminative information in the mouth-region to enable its use as a primary biometric trait.
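The LOCP descriptor itself is defined in the paper; as a loose, hypothetical sketch of an ordinal contrast code, one might compare each circularly sampled neighbour with its predecessor on the circle (rather than with the centre pixel, as LBP does) and pack the signs into a binary code. The exact pairing and bit order here are assumptions, not the paper's definition:

```python
import numpy as np

def locp_code(samples):
    """Hypothetical ordinal contrast code over circularly sampled
    neighbours: bit i is 1 iff sample i exceeds its predecessor."""
    s = np.asarray(samples)
    bits = (s > np.roll(s, 1)).astype(int)   # compare with previous sample
    return int("".join(map(str, bits)), 2)

# An increasing run around the circle followed by a drop:
code = locp_code([1, 2, 3, 0])   # bits 1,1,1,0 -> 14
```

Because only the ordering of samples matters, the code is unchanged by monotonic intensity transformations, consistent with the robustness to noise reported above.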
© 2014 Elsevier Ltd. All rights reserved. In this paper we propose to use the full ranking of a set of pixels as a local descriptor. In contrast to existing methods which use only partial ranking information, the full ranking encodes the complete comparative information among the pixels, while retaining invariance to monotonic photometric transformations. The descriptor is used within the bag-of-visual-words paradigm for visual recognition. We demonstrate that the choice of distance metric for assigning the descriptors to visual words is crucial to the performance, and provide an extensive evaluation of eight distance metrics for the permutation group Sn on four widely used face verification and texture classification benchmarks. The results demonstrate that (1) full ranking of pixels encodes more information than partial ranking, consistently leading to better performance; (2) the full ranking descriptor can be trivially made rotation invariant; (3) the proposed descriptor applies to both image intensities and filter responses, and is capable of producing state-of-the-art performance.
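The core of the full-ranking descriptor can be sketched in a few lines: the descriptor of a patch is the permutation giving each pixel's rank, which any strictly monotonic photometric transformation leaves unchanged. This is an illustrative sketch of the ranking step only, not the full bag-of-visual-words pipeline:

```python
import numpy as np

def full_ranking(patch):
    """Full-ranking local descriptor: the rank of every pixel within the
    patch (0 = smallest). Invariant to strictly monotonic photometric
    transformations, which preserve the ordering of intensities."""
    flat = np.ravel(patch)
    order = np.argsort(flat, kind="stable")
    ranks = np.empty_like(order)
    ranks[order] = np.arange(flat.size)    # invert the sorting permutation
    return ranks

patch = np.array([[30, 10], [20, 40]])
r1 = full_ranking(patch)                       # [2, 0, 1, 3]
r2 = full_ranking(patch.astype(float) ** 2)    # monotonic map on positives
assert (r1 == r2).all()                        # same permutation
```

The descriptor lives in the permutation group Sn, which is why the choice of distance metric on Sn (evaluated in the paper) matters for visual-word assignment.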
Chan CH, Goswami B, Kittler J, Christmas W (2011) Non-linear Speaker Verification Using Spatiotemporal Lip Information, MVA
Chan CH, Kittler J, Tahir MA (2010) Kernel Fusion of Multiple Histogram Descriptors for Robust Face Recognition, SSPR/SPR
Poh N, Chan CH, Kittler J, Marcel S, McCool C, Argones Rua E, Alba Castro JL, Villegas M, Paredes R, Struc V, Pavesic N, Salah AA, Fang H, Costen N (2010) An Evaluation of Video-to-Video Face Verification, IEEE Transactions on Information Forensics and Security 5 (4) pp. 781-801 IEEE
Performing facial recognition between Near Infrared (NIR) and visible-light (VIS) images has been established as a common method of countering illumination variation problems in face recognition. In this paper we present a new database to enable the evaluation of cross-spectral face recognition. A series of preprocessing algorithms, followed by Local Binary Pattern Histogram (LBPH) representation and combinations with Linear Discriminant Analysis (LDA), are used for recognition. These experiments are conducted on both NIR-VIS and the less common VIS-NIR protocols, with permutations of uni-modal training sets. 12 individual baseline algorithms are presented. In addition, the best performing fusion approaches involving a subset of 12 algorithms are also described. © 2011 IEEE.
Lyons MJ, Kluender D, Chan C-H, Tetsutani N (2003) Vital signs: exploring novel forms of body language, SIGGRAPH
Chan C-H, Lyons MJ, Tetsutani N (2003) Mouthbrush: drawing and painting by hand and mouth, ICMI pp. 277-280
Chan CH, Yan F, Kittler J, Mikolajczyk K (2014) Full ranking as local descriptor for visual recognition: A comparison of distance metrics on Sn, Pattern Recognition PR-D-14-00330R
3D face reconstruction of shape and skin texture from a single 2D image can be performed using a 3D Morphable Model (3DMM) in an analysis-by-synthesis approach. However, performing this reconstruction (fitting) efficiently and accurately in a general imaging scenario is a challenge. Such a scenario would involve a perspective camera to describe the geometric projection from 3D to 2D, and the Phong model to characterise illumination. Under these imaging assumptions the reconstruction problem is nonlinear and, consequently, computationally very demanding. In this work, we present an efficient stepwise 3DMM-to-2D image-fitting procedure, which sequentially optimises the pose, shape, light direction, light strength and skin texture parameters in separate steps. By linearising each step of the fitting process we derive closed-form solutions for the recovery of the respective parameters, leading to efficient fitting. The proposed optimisation process involves all the pixels of the input image, rather than randomly selected subsets, which enhances the accuracy of the fitting. It is referred to as Efficient Stepwise Optimisation (ESO). The proposed fitting strategy is evaluated using reconstruction error as a performance measure. In addition, we demonstrate its merits in the context of a 3D-assisted 2D face recognition system which detects landmarks automatically and extracts both holistic and local features using a 3DMM. This contrasts with most other methods which only report results that use manual face landmarking to initialise the fitting. Our method is tested on the public CMU-PIE and Multi-PIE face databases, as well as one internal database. The experimental results show that the face reconstruction using ESO is significantly faster, and its accuracy is at least as good as that achieved by the existing 3DMM fitting algorithms. 
A face recognition system integrating ESO to provide a pose and illumination invariant solution compares favourably with other state-of-the-art methods. In particular, it outperforms deep learning methods when tested on the Multi-PIE database.
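ESO's key structural idea, fixing all but one parameter group so that each step has a closed-form linearised solution, can be illustrated on a toy problem. This is not the authors' code: the model and data below are invented stand-ins for the pose/shape/light/texture groups of the paper:

```python
import numpy as np

# Toy stepwise optimisation: fit y ~ a*x + b by alternating closed-form
# solves, treating {a} and {b} as two "parameter groups". With one group
# fixed, the residual is linear in the other, so each step is exact.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + 0.01 * rng.normal(size=200)   # ground truth a=2, b=1

a, b = 0.0, 0.0
for _ in range(10):                       # a few alternating sweeps
    a = x @ (y - b) / (x @ x)             # closed-form step for a (b fixed)
    b = float(np.mean(y - a * x))         # closed-form step for b (a fixed)
```

The sequential-closed-form structure is what makes ESO efficient; the real fitting linearises pose, shape, light direction, light strength and texture in separate steps over all image pixels.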
The 3D Morphable Model (3DMM) is currently receiving considerable attention for human face analysis. Most existing work focuses on fitting a 3DMM to high-resolution images. However, in many applications, fitting a 3DMM to low-resolution images is also important. In this paper, we propose a Resolution-Aware 3DMM (RA-3DMM), which consists of three 3DMMs at different resolutions: a High-Resolution 3DMM (HR-3DMM), a Medium-Resolution 3DMM (MR-3DMM) and a Low-Resolution 3DMM (LR-3DMM). RA-3DMM can automatically select the best model to fit input images of different resolutions. The multi-resolution model was evaluated in experiments conducted on the PIE and XM2VTS databases. The experimental results verified that HR-3DMM achieves the best performance for high-resolution input images, while MR-3DMM and LR-3DMM work best for medium- and low-resolution input images, respectively. A model selection strategy incorporated in the RA-3DMM is proposed based on these results. The RA-3DMM has been applied to pose correction of face images ranging from high to low resolution. The face verification results obtained with the pose-corrected images show considerable performance improvement over the results without pose correction at all resolutions.
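Schematically, the resolution-based model selection described above reduces to choosing the 3DMM trained at the resolution band matching the input face. The pixel thresholds below are invented placeholders, not values from the paper:

```python
def select_3dmm(face_height_px, hi_thresh=200, mid_thresh=80):
    """Pick the 3DMM trained at the resolution band nearest the input
    face. Thresholds are hypothetical, for illustration only."""
    if face_height_px >= hi_thresh:
        return "HR-3DMM"
    if face_height_px >= mid_thresh:
        return "MR-3DMM"
    return "LR-3DMM"
```

In RA-3DMM the selection is learned from fitting performance at each resolution rather than from fixed cut-offs, but the dispatch structure is the same.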