Dr Terry Windeatt


Associate Tutor
+44 (0)1483 689286
43 BA 01

About

Teaching

Publications

K Ahmad, M Casey, B Vrusias, P Saragiotis (2003)Combining multiple modes of information using unsupervised neural classifiers, In: T Windeatt, F Roli (eds.), MULTIPLE CLASSIFIER SYSTEMS, PROCEEDINGS2709pp. 236-245
K Dias, T Windeatt (2014)Dynamic Ensemble Selection and Instantaneous Pruning for Regression Used in Signal Calibration., In: S Wermter, C Weber, W Duch, T Honkela, PD Koprinkova-Hristova, S Magg, G Palm, AEP Villa (eds.), ICANN8681pp. 475-482
T Windeatt, K Dias (2008)Ensemble Approaches to Facial Action Unit Classification, In: J RuizShulcloper, WG Kropatsch (eds.), PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS AND APPLICATIONS, PROCEEDINGS5197pp. 551-559

Facial action unit (au) classification is an approach to face expression recognition that decouples the recognition of expression from individual actions. In this paper, upper face aus are classified using an ensemble of MLP (Multi-layer perceptron) base classifiers with feature ranking based on PCA components. This approach is compared experimentally with other popular feature-ranking methods applied to Gabor features. Experimental results on the Cohn-Kanade database demonstrate that the MLP ensemble is relatively insensitive to the feature-ranking method but optimized PCA features achieve the lowest error rate. When posed as a multi-class problem using Error-Correcting-Output-Coding (ECOC), error rates are comparable to two-class problems (one-versus-rest) when the number of features and the base classifier are optimized.

T Windeatt, C Zor (2012)Low Training Strength High Capacity Classifiers for Accurate Ensembles Using Walsh Coefficients, In: G Gimelfarb, E Hancock, A Imiya, A Kuijper, M Kudo, S Omachi, T Windeatt, K Yamada (eds.), STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION7626pp. 701-709 SPRINGER-VERLAG BERLIN
RS Smith, T Windeatt (2010)Facial Expression Detection using Filtered Local Binary Pattern Features with ECOC Classifiers and Platt Scaling., In: Journal of Machine Learning ResearchTrackpp. 111-118 Microtome Publishing

We outline a design for a FACS-based facial expression recognition system and describe in more detail the implementation of two of its main components. Firstly we look at how features that are useful from a pattern analysis point of view can be extracted from a raw input image. We show that good results can be obtained by using the method of local binary patterns (LBP) to generate a large number of candidate features and then selecting from them using fast correlation-based filtering (FCBF). Secondly we show how Platt scaling can be used to improve the performance of an error-correcting output code (ECOC) classifier.
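As a concrete illustration of the Platt-scaling step described above (a minimal sketch on assumed toy data, not the paper's implementation), the sigmoid p = 1/(1 + exp(A·s + B)) is fitted to raw classifier scores by gradient descent on the negative log-likelihood:

```python
import math

def platt_scale(scores, labels, lr=0.01, iters=5000):
    """Fit p = 1 / (1 + exp(A*s + B)) to (score, label) pairs by
    gradient descent on the negative log-likelihood."""
    A, B = 0.0, 0.0
    for _ in range(iters):
        gA = gB = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * s + B))
            gA += (y - p) * s   # dNLL/dA
            gB += (y - p)       # dNLL/dB
        A -= lr * gA
        B -= lr * gB
    return A, B

def calibrated(s, A, B):
    """Map a raw score to a calibrated probability of class 1."""
    return 1.0 / (1.0 + math.exp(A * s + B))
```

With scores such as `[-2, -1, 1, 2]` and labels `[0, 0, 1, 1]`, the fitted sigmoid maps positive scores to probabilities above 0.5 and negative scores below it. (Platt's original formulation also smooths the 0/1 targets to avoid overfitting on small validation sets; that refinement is omitted here.)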

AJ STODDART, J ILLINGWORTH, T WINDEATT (1995)OPTIMAL PARAMETER SELECTION FOR DERIVATIVE ESTIMATION FROM RANGE IMAGES, In: IMAGE AND VISION COMPUTING13(8)pp. 629-635 BUTTERWORTH-HEINEMANN LTD
A Hilton, AJ Stoddart, J Illingworth, T Windeatt (1996)Marching Triangles: Range Image Fusion for Complex Object Modelling, In: ICIPpp. 381-384

A new surface based approach to implicit surface polygonisation is introduced. This is applied to the reconstruction of 3D surface models of complex objects from multiple range images. Geometric fusion of multiple range images into an implicit surface representation was presented in previous work. This paper introduces an efficient algorithm to reconstruct a triangulated model of a manifold implicit surface. A local 3D constraint is derived which defines the Delaunay surface triangulation of a set of points on a manifold surface in 3D space. The `marching triangles' algorithm uses the local 3D constraint to reconstruct a Delaunay triangulation of an arbitrary topology manifold surface. Computational and representational costs are both a factor of 3-5 lower than for previous volumetric approaches such as marching cubes.

C Zor, B Yanikoglu, T Windeatt, E Alpaydin (2010)FLIP-ECOC: A greedy optimization of the ECOC matrix, In: Lecture Notes in Electrical Engineering: Computer and Information Sciences62(5)pp. 149-154

Error Correcting Output Coding (ECOC) is a multiclass classification technique, in which multiple base classifiers (dichotomizers) are trained using subsets of the training data, determined by a preset code matrix. While it is one of the best solutions to multiclass problems, ECOC is suboptimal, as the code matrix and the base classifiers are not learned simultaneously. In this paper, we show an iterative update algorithm that reduces this decoupling. We compare the algorithm with the standard ECOC approach, using Neural Networks (NNs) as the base classifiers, and show that it improves the accuracy for some well-known data sets under different settings.
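For readers unfamiliar with ECOC, the scheme described above can be sketched in a few lines: each column of a preset ±1 code matrix relabels the classes into a two-class dichotomy, one base classifier is trained per column, and decoding assigns a test point to the class whose code row is nearest in Hamming distance to the classifier outputs. The sketch below uses a nearest-mean stub in place of a trained dichotomizer; the data, matrix, and names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class data: well-separated Gaussian blobs
centers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])
X = np.vstack([c + rng.normal(0, 0.5, (30, 2)) for c in centers])
y = np.repeat(np.arange(3), 30)

# Preset code matrix: rows = classes, columns = dichotomies (+1/-1)
M = np.array([[ 1,  1, -1,  1],
              [-1,  1,  1, -1],
              [ 1, -1,  1,  1]])

# "Train" one dichotomizer per column: here a nearest-mean stub
col_means = []
for j in range(M.shape[1]):
    lab = M[y, j]                      # relabel each sample as +1/-1
    mu_pos = X[lab == 1].mean(axis=0)
    mu_neg = X[lab == -1].mean(axis=0)
    col_means.append((mu_pos, mu_neg))

def predict(x):
    """Run all dichotomizers, then decode by Hamming distance to code rows."""
    out = np.array([1 if np.linalg.norm(x - mp) < np.linalg.norm(x - mn) else -1
                    for mp, mn in col_means])
    return int(np.argmin((M != out).sum(axis=1)))
```

The decoupling criticised in the abstract is visible here: `M` is fixed before the column classifiers are fitted, so no information flows from the trained dichotomizers back into the code matrix.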

J Kittler, A Ahmadyfard, D Windridge (2003)Serial multiple classifier systems exploiting a coarse to fine output coding, In: T Windeatt, F Roli (eds.), Multiple Classifier Systemspp. 106-114
C Zor, T Windeatt (2009)Upper Facial Action Unit Recognition, In: M Tistarelli, MS Nixon (eds.), ADVANCES IN BIOMETRICS5558pp. 239-248

This paper concentrates on the comparisons of systems that are used for the recognition of expressions generated by six upper face action units (AUs) by using the Facial Action Coding System (FACS). Haar wavelet, Haar-Like and Gabor wavelet coefficients are compared, using Adaboost for feature selection. The binary classification results using Support Vector Machines (SVM) for the upper face AUs have been observed to be better than the current results in the literature, for example 96.5% for AU2 and 97.6% for AU5. In the multi-class classification case, Error Correcting Output Coding (ECOC) has been applied. Although for a large number of classes the results are not as accurate as the binary case, ECOC has the advantage of solving all problems simultaneously; and for large numbers of training samples and small numbers of classes, error rates are improved.

RS Smith, M Bober, T Windeatt (2011)A comparison of random forest with ECOC-based classifiers, In: Lecture Notes in Computer Science: Multiple Classifier Systems6713pp. 207-216

We compare experimentally the performance of three approaches to ensemble-based classification on general multi-class datasets. These are the methods of random forest, error-correcting output codes (ECOC) and ECOC enhanced by the use of bootstrapping and class-separability weighting (ECOC-BW). These experiments suggest that ECOC-BW yields better generalisation performance than either random forest or unmodified ECOC. A bias-variance analysis indicates that ECOC benefits from reduced bias, when compared to random forest, and that ECOC-BW benefits additionally from reduced variance. One disadvantage of ECOC-based algorithms, however, when compared with random forest, is that they impose a greater computational demand leading to longer training times.

GL Gimel'Farb, E Hancock, A Imiya, M Kudo, A Kuijper, S Omachi, T Windeatt, K Yamada (2012)Preface, In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)7626
Cemre Zor, Berrin Yanikoglu, Erinc Merdivan, Terry Windeatt, Josef Kittler, Ethem Alpaydin (2017)BeamECOC: A local search for the optimization of the ECOC matrix, In: ICPR 2016 Proceedingspp. 198-203 IEEE

Error Correcting Output Coding (ECOC) is a multi-class classification technique in which multiple binary classifiers are trained according to a preset code matrix such that each one learns a separate dichotomy of the classes. While ECOC is one of the best solutions for multi-class problems, one issue which makes it suboptimal is that the training of the base classifiers is done independently of the generation of the code matrix. In this paper, we propose to modify a given ECOC matrix to improve its performance by reducing this decoupling. The proposed algorithm uses beam search to iteratively modify the original matrix, using validation accuracy as a guide. It does not involve further training of the classifiers and can be applied to any ECOC matrix. We evaluate the accuracy of the proposed algorithm (BeamECOC) using 10-fold cross-validation experiments on 6 UCI datasets, using random code matrices of different sizes, and base classifiers of different strengths. Compared to the random ECOC approach, BeamECOC increases the average cross-validation accuracy in 83.3% of the experimental settings involving all datasets, and gives better results than the state-of-the-art in 75% of the scenarios. By employing BeamECOC, it is also possible to reduce the number of columns of a random matrix down to 13% and still obtain comparable or even better results at times.

A Hilton, AJ Stoddart, J Illingworth, T Windeatt (1994)Automatic inspection of loaded PCBs using 3D range data, In: SPIE Machine Vision Application in Industrial Inspection II, International Symposium on Electronic Imaging: Science and Technology, San Jose, CA, Volume 2183
Georgy Gimel'farb, Edwin Hancock, Atsushi Imiya, Arjan Kuijper, Mineichi Kudo, Shinichiro Omachi, Terry Windeatt, Keiji Yamada (2012)Structural, syntactic, and statistical pattern recognition. Joint IAPR International Workshop, SSPR & SPR 2012: Hiroshima, Japan, November 7-9, 2012; proceedings Springer, Berlin

This volume constitutes the refereed proceedings of the Joint IAPR International Workshops on Structural and Syntactic Pattern Recognition (SSPR 2012) and Statistical Techniques in Pattern Recognition (SPR 2012), held in Hiroshima, Japan, in November 2012 as a satellite event of the 21st International Conference on Pattern Recognition, ICPR 2012. The 80 revised full papers presented together with 1 invited paper and the Pierre Devijver award lecture were carefully reviewed and selected from more than 120 initial submissions. The papers are organized in topical sections on structural, syntactical, and statistical pattern recognition, graph and tree methods, randomized methods and image analysis, kernel methods in structural and syntactical pattern recognition, applications of structural and syntactical pattern recognition, clustering, learning, kernel methods in statistical pattern recognition, as well as applications of structural, syntactical, and statistical methods.

R Duangsoithong, T Windeatt (2011)Hybrid correlation and causal feature selection for ensemble classifiers, In: Studies in Computational Intelligence373pp. 97-115 Springer

PC and TPDA algorithms are robust and well known prototype algorithms, incorporating constraint-based approaches for causal discovery. However, both algorithms cannot scale up to deal with high dimensional data, that is, more than a few hundred features. This chapter presents hybrid correlation and causal feature selection for ensemble classifiers to deal with this problem. Redundant features are removed by correlation-based feature selection and then irrelevant features are eliminated by causal feature selection. The number of eliminated features, accuracy, the area under the receiver operating characteristic curve (AUC) and false negative rate (FNR) of the proposed algorithms are compared with correlation-based feature selection (FCBF and CFS) and causal based feature selection algorithms (PC, TPDA, GS, IAMB).

R Duangsoithong, T Windeatt (2009)Relevance and Redundancy Analysis for Ensemble Classifiers, In: P Perner (eds.), MACHINE LEARNING AND DATA MINING IN PATTERN RECOGNITION5632pp. 206-220
RS Smith, T Windeatt (2009)The Bias Variance Trade-Off in Bootstrapped Error Correcting Output Code Ensembles, In: JA Benediktsson, J Kittler, F Roli (eds.), MULTIPLE CLASSIFIER SYSTEMS, PROCEEDINGS5519pp. 1-10

By performing experiments on publicly available multi-class datasets we examine the effect of bootstrapping on the bias/variance behaviour of error-correcting output code ensembles. We present evidence to show that the general trend is for bootstrapping to reduce variance but to slightly increase bias error. This generally leads to an improvement in the lowest attainable ensemble error, however this is not always the case and bootstrapping appears to be most useful on datasets where the non-bootstrapped ensemble classifier is prone to overfitting.

D Windridge (2003)The practical performance characteristics of tomographically filtered multiple classifier fusion, In: T Windeatt, F Roli (eds.), Multiple Classifier Systemspp. 186-195
R S Smith, T Windeatt (2010)A Bias-Variance Analysis of Bootstrapped Class-Separability Weighting for Error-Correcting Output Code Ensembles, In: 2010 20th International Conference on Pattern Recognitionpp. 61-64 IEEE

We investigate the effects, in terms of a bias-variance decomposition of error, of applying class-separability weighting plus bootstrapping in the construction of error-correcting output code ensembles of binary classifiers. Evidence is presented to show that bias tends to be reduced at low training strength values whilst variance tends to be reduced across the full range. The relative importance of these effects, however, varies depending on the stability of the base classifier type.

A HILTON, J ILLINGWORTH, T WINDEATT (1994)STATISTICS OF SURFACE CURVATURE ESTIMATES, In: PROCEEDINGS OF THE 12TH IAPR INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION - CONFERENCE A: COMPUTER VISION & IMAGE PROCESSINGpp. 37-41
Terry Windeatt (2005)Diversity measures for multiple classifier system analysis and design, In: Information fusion6(1)pp. 21-36 Elsevier B.V

In the context of Multiple Classifier Systems, diversity among base classifiers is known to be a necessary condition for improvement in ensemble performance. In this paper the ability of several pair-wise diversity measures to predict generalisation error is compared. A new pair-wise measure, which is computed between pairs of patterns rather than pairs of classifiers, is also proposed for two-class problems. It is shown experimentally that the proposed measure is well correlated with base classifier test error as base classifier complexity is systematically varied. However, correlation with unity-weighted sum and vote is shown to be weaker, demonstrating the difficulty in choosing base classifier complexity for optimal fusion. An alternative strategy based on weighted combination is also investigated and shown to be less sensitive to number of training epochs.
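A standard pair-wise measure of the kind compared in this paper is Yule's Q statistic, computed from the joint correct/incorrect counts of two classifiers over a labelled set; a minimal sketch using toy oracle outputs (illustrative only, not the paper's code):

```python
def q_statistic(correct_a, correct_b):
    """Yule's Q for two classifiers, given per-sample correctness (bools).
    Q = 1 for identical behaviour, Q = -1 for maximally diverse behaviour."""
    n11 = n00 = n10 = n01 = 0
    for a, b in zip(correct_a, correct_b):
        if a and b:
            n11 += 1          # both correct
        elif a and not b:
            n10 += 1          # only A correct
        elif b and not a:
            n01 += 1          # only B correct
        else:
            n00 += 1          # both wrong
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

a = [True, True, False, True, False]
print(q_statistic(a, a))  # identical classifiers -> 1.0
```

Note the denominator vanishes for degenerate oracle patterns (e.g. when one classifier is always correct), so practical implementations guard that case.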

Rakkrit Duangsoithong, Terry Windeatt (2010)Correlation-Based and Causal Feature Selection Analysis for Ensemble Classifiers, In: F Schwenker, N ElGayar (eds.), ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, PROCEEDINGS5998pp. 25-36 Springer Nature

High dimensional feature spaces with relatively few samples usually lead to poor classifier performance for machine learning, neural networks and data mining systems. This paper presents a comparison analysis between correlation-based and causal feature selection for ensemble classifiers. MLP and SVM are used as base classifiers and compared with Naive Bayes and Decision Tree. According to the results, the correlation-based feature selection algorithm can eliminate more redundant and irrelevant features, and provides slightly better accuracy and less complexity than causal feature selection. An ensemble using the Bagging algorithm can improve accuracy with both correlation-based and causal feature selection.

S Jitaree, Terry Windeatt, P Boonyapiphat, P Phukpattaranont (2017)Classifying Breast Cancer Microscopic Images using Fractal Dimension and Ensemble Classifier, In: Biomedical Engineering International Conference (BMEiCON-2017) Proceedings IEEE

To improve the performance of computer-aided systems for breast cancer diagnosis, an ensemble classifier is proposed for classifying the histological structures in breast cancer microscopic images into three region types: positive cancer cells, negative cancer cells and non-cancer cells (stromal and lymphocyte cells). The bagging and boosting ensemble techniques are used with a decision tree (DT) learner and compared with a single DT classifier. The classifier input features are fractal dimensions (FD) computed over 12 color channels, extracted from manually prepared cropped images at three window sizes: 128×128, 192×192 and 256×256 pixels. The results show that the boosting ensemble gives the best accuracy, about 80% at the 256-pixel window size, whereas the single DT performs worst there. These results indicate that the ensemble method improves classification accuracy compared to a single classifier. The classification model using FD and the ensemble classifier could in future be applied in computer-aided systems for breast cancer diagnosis.

M Prior, T Windeatt (2009)Improved Uniformity Enforcement in Stochastic Discrimination, In: JA Benediktsson, J Kittler, F Roli (eds.), MULTIPLE CLASSIFIER SYSTEMS, PROCEEDINGS5519pp. 335-343

There are a variety of methods for inducing predictive systems from observed data. Many of these methods fall into the field of study of machine learning. Some of the most effective algorithms in this domain succeed by combining a number of distinct predictive elements to form what can be described as a type of committee. Well known examples of such algorithms are AdaBoost, bagging and random forests. Stochastic discrimination is a committee-forming algorithm that attempts to combine a large number of relatively simple predictive elements in an effort to achieve a high degree of accuracy. A key element of the success of this technique is that its coverage of the observed feature space should be uniform in nature. We introduce a new uniformity enforcement method, which on benchmark datasets, leads to greater predictive efficiency than the currently published method.

Terry Windeatt (2018)Optimising Ensemble of Two-Class classifiers using Spectral Analysis, In: ICPR 2018 Proceedings IEEE

An approach to approximating the decision boundary of an ensemble of two-class classifiers is proposed. Spectral coefficients are used to approximate the discrete probability density function of a Boolean Function. It is shown that the difference between first and third order coefficient approximation is a good indicator of optimal base classifier complexity. A theoretical analysis is supported by experimental results on a variety of Artificial and Real two-class problems.
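The spectral coefficients referred to above are Walsh coefficients of a Boolean function, which can be computed with the fast Walsh-Hadamard transform. Below is a minimal sketch on a 3-input majority function (an assumed example for illustration, not the paper's code):

```python
def walsh_transform(truth_table):
    """Fast Walsh-Hadamard transform of a Boolean function's truth table,
    given as a list of +1/-1 outputs whose length is a power of two.
    Returns the (unnormalised) Walsh coefficients, indexed by input mask."""
    f = list(truth_table)
    h = 1
    while h < len(f):
        for i in range(0, len(f), h * 2):
            for j in range(i, i + h):
                # Butterfly: combine values whose indices differ in one bit
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

# 3-input majority vote, encoded +1/-1, truth table indexed by input bits
maj = [1 if bin(i).count("1") >= 2 else -1 for i in range(8)]
print(walsh_transform(maj))  # [0, -4, -4, 0, -4, 0, 0, 4]
```

The spectrum shows the structure the paper exploits: for majority vote, only the first order (single-bit) and highest order coefficients are non-zero, so comparing low order approximations against higher order ones reveals how far an ensemble's combining rule departs from simple voting.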

A Hilton, J Illingworth, T Windeatt (1995)Statistics of Surface Curvature Estimates, In: Pattern Recognition28(8)

Within the context of face expression classification using the facial action coding system (FACS), we address the problem of detecting facial action units (AUs). The method adopted is to train a single error-correcting output code (ECOC) multiclass classifier to estimate the probabilities that each one of several commonly occurring AU groups is present in the probe image. Platt scaling is used to calibrate the ECOC outputs to probabilities and appropriate sums of these probabilities are taken to obtain a separate probability for each AU individually. Feature extraction is performed by generating a large number of local binary pattern (LBP) features and then selecting from these using fast correlation-based filtering (FCBF). The bias and variance properties of the classifier are measured and we show that both these sources of error can be reduced by enhancing ECOC through the application of bootstrapping and class-separability weighting.

D Windridge, R Patenall, J Kittler (2004)The relationship between classifier factorisation and performance in stochastic vector quantisation, In: F Roli, J Kittler, T Windeatt (eds.), Multiple Classifier Systemspp. 194-203
T Windeatt, RS Smith, K Dias (2011)Weighted Decoding ECOC for Facial Action Unit Classification, In: Applications of Supervised and Unsupervised Ensemble Methods
Terry Windeatt, Kaushala Dias (2008)Feature ranking ensembles for facial action unit classification, In: L Prevost, S Marinai, F Schwenker (eds.), ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, PROCEEDINGS5064pp. 267-279 Springer Nature

Recursive Feature Elimination RFE combined with feature-ranking is an effective technique for eliminating irrelevant features. In this paper, an ensemble of MLP base classifiers with feature-ranking based on the magnitude of MLP weights is proposed. This approach is compared experimentally with other popular feature-ranking methods, and with a Support Vector Classifier SVC. Experimental results on natural benchmark data and on a problem in facial action unit classification demonstrate that the MLP ensemble is relatively insensitive to the feature-ranking method, and simple ranking methods perform as well as more sophisticated schemes. The results are interpreted with the assistance of bias/variance of 0/1 loss function.

A Hilton, AJ Stoddart, J Illingworth, T Windeatt (1998)Implicit Surface based Geometric Fusion, In: Computer Vision and Image Understanding69(3)pp. 273-291
Matthew Prior, Terry Windeatt (2005)Over-Fitting in Ensembles of Neural Network Classifiers Within ECOC Frameworks, In: Multiple Classifier Systemspp. 286-295 Springer Berlin Heidelberg

We have investigated the performance of a generalisation error predictor, Gest, in the context of error correcting output coding ensembles based on multi-layer perceptrons. An experimental evaluation on benchmark datasets with added classification noise shows that over-fitting can be detected and a comparison is made with the Q measure of ensemble diversity. Each dichotomy associated with a column of an ECOC code matrix is presented with a bootstrap sample of the training set. Gest uses the out-of-bootstrap samples to efficiently estimate the mean column error for the independent test set and hence the test error. This estimate can then be used to select a suitable complexity for the base classifiers in the ensemble.

K Dias, T Windeatt (2014)Dynamic ensemble selection and instantaneous pruning for regression used in signal calibration, In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)8681pp. 475-482

A dynamic method of selecting a pruned ensemble of predictors for regression problems is described. The proposed method enhances the prediction accuracy and generalization ability of pruning methods that change the order in which ensemble members are combined. Ordering heuristics attempt to combine accurate yet complementary regressors. The proposed method enhances the performance by modifying the order of aggregation through distributing the regressor selection over the entire dataset. This paper compares four static ensemble pruning approaches with the proposed dynamic method. The experimental comparison is made using MLP regressors on benchmark datasets and on an industrial application of radio frequency source calibration.

M Prior, T Windeatt (2007)An ensemble dependence measure, In: J MarquesDeSa, LA Alexandre, W Duch, DP Mandic (eds.), Artificial Neural Networks - ICANN 2007, Pt 1, Proceedings4668pp. 329-338

Bias and variance (B&V) decomposition is frequently used as a tool for analysing classification performance. However, the standard B&V terminologies were originally defined for the regression setting and their extensions to classification have led to several different models/definitions in the literature. Although the relation between some of these models has previously been explored, their links to the standard terminology in terms of the Bayesian statistics has not been established. In this paper, we aim to provide this missing link via employing the frameworks of Tumer & Ghosh (T&G) and James. By unifying the two approaches, we relate the classification B&V defined for the 0/1 loss to the standard B&V of the boundary distributions given for the squared error loss. The closed form relationships derived in this study provide deeper understanding of the classification performance, and their example uses on predictor design and analysis are demonstrated in two case studies.

A spectral approximation of a Boolean function is proposed for approximating the decision boundary of an ensemble of Deep Neural Networks (DNNs) solving two-class pattern recognition problems. The Walsh combination of relatively weak DNN classifiers is shown experimentally to be capable of detecting adversarial attacks. By observing the difference in Walsh coefficient approximation between clean and adversarial images, it appears that transferability of attack may be used for detection. Approximating the decision boundary may also aid in understanding the learning and transferability properties of DNNs. While the experiments here use images, the proposed approach of modelling two-class ensemble decision boundaries could in principle be applied to any application area.

T Windeatt, M Prior (2007)Stopping criteria for ensemble-based feature selection, In: M Haindl, J Kittler, F Roli (eds.), Multiple Classifier Systems, Proceedings4472pp. 271-281
T Windeatt, M Prior, N Effron, N Intrator (2007)Ensemble-based Feature Selection Criteria., In: P Perner (eds.), MLDM Posterspp. 168-182
RS Smith, T Windeatt (2015)Facial action unit recognition using multi-class classification, In: Neurocomputing150(PB)pp. 440-448

Within the context of facial expression classification using the facial action coding system (FACS), we address the problem of detecting facial action units (AUs). Feature extraction is performed by generating a large number of multi-resolution local binary pattern (MLBP) features and then selecting from these using fast correlation-based filtering (FCBF). The need for a classifier per AU is avoided by training a single error-correcting output code (ECOC) multi-class classifier to generate occurrence scores for each of several AU groups. A novel weighted decoding scheme is proposed with the weights computed using first order Walsh coefficients. Platt scaling is used to calibrate the ECOC scores to probabilities and appropriate sums are taken to obtain separate probability estimates for each AU individually. The bias and variance properties of the classifier are measured and we show that both these sources of error can be reduced by enhancing ECOC through bootstrapping and weighted decoding.

J Kittler, R Ghaderi, T Windeatt, J Matas (2001)Face identification and verification via ECOC, In: J Bigun, F Smeraldi (eds.), AUDIO- AND VIDEO-BASED BIOMETRIC PERSON AUTHENTICATION, PROCEEDINGS2091pp. 1-13
A Hilton, AJ Stoddart, J Illingworth, T Windeatt (1996)Reliable Surface Reconstruction from Multiple Range Images, In: 4th European Conference on Computer Vision1064

This paper addresses the problem of reconstructing an integrated 3D model from multiple 2.5D range images. A novel integration algorithm is presented based on a continuous implicit surface representation. This is the first reconstruction algorithm to use operations in 3D space only. The algorithm is guaranteed to reconstruct the correct topology of surface features larger than the range image sampling resolution. Reconstruction of triangulated models from multi-image data sets is demonstrated for complex objects. Performance characterization of existing range image integration algorithms is addressed in the second part of this paper. This comparison defines the relative computational complexity and geometric limitations of existing integration algorithms.

T Windeatt, C Zor (2013)Ensemble Pruning Using Spectral Coefficients, In: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS24(4)pp. 673-678 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
K Dias, T Windeatt (2015)Hybrid Dynamic Learning Systems for Regression, In: I Rojas, G Joya, A Catala (eds.), ADVANCES IN COMPUTATIONAL INTELLIGENCE, PT II9095pp. 464-476
Terry Windeatt (2000)Classifier Instability and Partitioning, In: Multiple Classifier Systemspp. 260-269 Springer Berlin Heidelberg

Various methods exist for reducing correlation between classifiers in a multiple classifier framework. The expectation is that the composite classifier will exhibit improved performance and/or be simpler to automate compared with a single classifier. In this paper we investigate how generalisation is affected by varying complexity of unstable base classifiers, implemented as identical single hidden layer MLP networks with fixed parameters. A technique that uses recursive partitioning for selectively perturbing the training set is also introduced, and shown to improve performance and reduce sensitivity to base classifier complexity. Benchmark experiments include artificial and real data with optimal error rates greater than eighteen percent.

T Windeatt, K Dias (2008)Feature ranking ensembles for facial action unit classification, In: L Prevost, S Marinai, F Schwenker (eds.), ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, PROCEEDINGS5064pp. 267-279
Terry Windeatt, Robert Tebbs (1997)Spectral technique for hidden layer neural network training, In: Pattern recognition letters18(8)pp. 723-731 Elsevier B.V

We propose a new constructive algorithm for learning binary-to-binary mappings. Weight constraints derived from a spectral summation are used to check separability during the partitioning phase, and to limit hyperplane movement during training.

T Windeatt (2009)Weighted decoding ECOC for facial action unit classification, In: Studies in Computational Intelligence245pp. 59-77 Springer

There are two approaches to automating the task of facial expression recognition, the first concentrating on what meaning is conveyed by facial expression and the second on categorising deformation and motion into visual classes. The latter approach has the advantage that the interpretation of facial expression is decoupled from individual actions as in FACS (Facial Action Coding System). In this chapter, upper face action units (aus) are classified using an ensemble of MLP base classifiers with feature ranking based on PCA components. When posed as a multi-class problem using Error-Correcting-Output-Coding (ECOC), experimental results on the Cohn-Kanade database demonstrate that error rates comparable to two-class problems (one-versus-rest) may be obtained. The ECOC coding and decoding strategies are discussed in detail, and a novel weighted decoding approach is shown to outperform conventional ECOC decoding. Furthermore, base classifiers are tuned using the ensemble Out-of-Bootstrap estimate, for which purpose, ECOC decoding is modified. The error rates obtained for six upper face aus around the eyes are believed to be among the best for this database.

E Hancock, R Wilson, T Windeatt, I Ulusoy, F Escolano (2010)Preface: Lecture Notes in Computer Science: Structural, Syntactic and Statistical Pattern Recognition, In: Lecture Notes in Computer Science: Structural, Syntactic and Statistical Pattern Recognition6218pp. v-vi
Terry Windeatt, Kaushala Dias (2008)Ensemble Approaches to Facial Action Unit Classification, In: J RuizShulcloper, W G Kropatsch (eds.), PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS AND APPLICATIONS, PROCEEDINGS5197pp. 551-559 Springer Nature

Facial action unit (au) classification is an approach to face expression recognition that decouples the recognition of expression from individual actions. In this paper, upper face aus are classified using an ensemble of MLP (Multi-layer perceptron) base classifiers with feature ranking based on PCA components. This approach is compared experimentally with other popular feature-ranking methods applied to Gabor features. Experimental results on the Cohn-Kanade database demonstrate that the MLP ensemble is relatively insensitive to the feature-ranking method but optimized PCA features achieve the lowest error rate. When posed as a multi-class problem using Error-Correcting-Output-Coding (ECOC), error rates are comparable to two-class problems (one-versus-rest) when the number of features and base classifier are optimized.

Terry Windeatt (2004)Spectral measure for multi-class problems, In: Lecture notes in computer science3077pp. 184-193 Springer
C Zor, T Windeatt, B Yanikoglu (2011)Bias-variance analysis of ECOC and bagging using neural nets, In: O Okun, G Valentini, M Re (eds.), ENSEMBLES IN MACHINE LEARNING APPLICATIONS373pp. 59-73 Springer

One of the methods used to evaluate the performance of ensemble classifiers is bias and variance analysis. In this chapter, we analyse bootstrap aggregating (bagging) and Error Correcting Output Coding (ECOC) ensembles using a bias-variance framework, and make comparisons with single classifiers, while having Neural Networks (NNs) as base classifiers. As the performance of the ensembles depends on the individual base classifiers, it is important to understand the overall trends when the parameters of the base classifiers (nodes and epochs for NNs) are changed. We show experimentally on 5 artificial and 4 UCI MLR datasets that there are some clear trends in the analysis that should be taken into consideration while designing NN classifier systems.

A Hilton, AJ Stoddart, J Illingworth, T Windeatt (1996)Implicit Surface based Geometric Fusion, In: Leeds 16th Annual Statistics Workshoppp. 1-8
Terry Windeatt, Cemre Zor, Necati Cihan Camgöz (2018)Approximation of Ensemble Boundary using Spectral Coefficients, In: IEEE Transactions on Neural Networks and Learning Systems IEEE

A spectral analysis of a Boolean function is proposed for approximating the decision boundary of an ensemble of classifiers, and an intuitive explanation of computing Walsh coefficients for the functional approximation is provided. It is shown that the difference between first and third order coefficient approximation is a good indicator of optimal base classifier complexity. When combining Neural Networks, experimental results on a variety of artificial and real two-class problems demonstrate under what circumstances ensemble performance can be improved. For tuned base classifiers, first order coefficients provide performance similar to majority vote. However, for weak/fast base classifiers, higher order coefficient approximation may give better performance. It is also shown that higher order coefficient approximation is superior to the Adaboost logarithmic weighting rule when boosting weak Decision Tree base classifiers.
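As an illustrative aside (a minimal sketch of the general spectral idea, not the paper's specific algorithm), the combined decision of a two-class ensemble can be treated as a Boolean function of the base-classifier outputs, and its Walsh coefficients computed by exhaustive enumeration. The function `walsh_coefficients` and the 3-classifier majority-vote example below are hypothetical illustrations; for majority vote the first-order coefficients come out equal, matching the equal weighting majority vote gives each classifier.

```python
# Sketch: Walsh spectrum of a small Boolean combining function.
# Inputs/outputs in {0,1} are mapped to +/-1 before correlating with
# the Walsh characters chi_mask(x) = (-1)^(mask . x).
from itertools import product

def walsh_coefficients(f, n):
    """Walsh spectrum of f: {0,1}^n -> {0,1}, by exhaustive enumeration."""
    coeffs = {}
    for mask in range(2 ** n):
        bits = format(mask, f"0{n}b")
        s = 0
        for x in product([0, 1], repeat=n):
            parity = sum(b for b, m in zip(x, bits) if m == "1") % 2
            s += (1 - 2 * f(x)) * (1 if parity == 0 else -1)
        coeffs[mask] = s / 2 ** n
    return coeffs

# Majority vote over 3 base classifiers.
maj = lambda x: int(sum(x) >= 2)
spec = walsh_coefficients(maj, 3)
# First-order coefficients (masks 1, 2, 4) are equal for majority vote.
print(spec[1], spec[2], spec[4])  # -> 0.5 0.5 0.5
```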

RS Smith, T Windeatt (2010)Facial Expression Detection using Filtered Local Binary Pattern Features with ECOC Classifiers and Platt Scaling, In: Journal of Machine Learning Research Proceedings

We outline a design for a FACS-based facial expression recognition system and describe in more detail the implementation of two of its main components. Firstly we look at how features that are useful from a pattern analysis point of view can be extracted from a raw input image. We show that good results can be obtained by using the method of local binary patterns (LBP) to generate a large number of candidate features and then selecting from them using fast correlation-based filtering (FCBF). Secondly we show how Platt scaling can be used to improve the performance of an error-correcting output code (ECOC) classifier.

Reza Ghaderi, Terry Windeatt (2001)Least squares and estimation measures via Error Correcting Output Code, In: Lecture notes in computer science2096pp. 148-157 Springer
Terry Windeatt, Reza Ghaderi (2003)Coding and decoding strategies for multi-class learning problems, In: Information fusion4(1)pp. 11-21 Elsevier B.V

It is known that the error correcting output code (ECOC) technique, when applied to multi-class learning problems, can improve generalisation performance. One reason for the improvement is its ability to decompose the original problem into complementary two-class problems. Binary classifiers trained on the sub-problems are diverse and can benefit from combining using a simple distance-based strategy. However there is some discussion about why ECOC performs as well as it does, particularly with respect to the significance of the coding/decoding strategy. In this paper we consider the binary (0,1) code matrix conditions necessary for reduction of error in the ECOC framework, and demonstrate the desirability of equidistant codes. It is shown that equidistant codes can be generated by using properties related to the number of 1’s in each row and between any pair of rows. Experimental results on synthetic data and a few popular benchmark problems show how performance deteriorates as code length is reduced for six decoding strategies.
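As an illustrative sketch of the general ECOC idea described above (not the paper's specific codes or decoding strategies), each class is assigned a row of a binary code matrix, each column defines a two-class sub-problem, and decoding assigns the class whose code word is nearest in Hamming distance to the vector of binary predictions. The code matrix below is hypothetical.

```python
# Hypothetical 4-class code matrix with 6 columns (two-class problems).
CODE = [
    [0, 0, 0, 1, 1, 1],  # class 0
    [0, 1, 1, 0, 0, 1],  # class 1
    [1, 0, 1, 0, 1, 0],  # class 2
    [1, 1, 0, 1, 0, 0],  # class 3
]

def hamming(a, b):
    """Number of positions in which two bit vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Return the class whose code word is nearest to the predicted bits."""
    return min(range(len(CODE)), key=lambda c: hamming(CODE[c], bits))

# A prediction with one bit flipped from class 2's code word still
# decodes to class 2, illustrating the error-correcting property.
print(decode([1, 0, 1, 0, 1, 1]))  # -> 2
```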

Terry Windeatt, Reza Ghaderi Binary Strings and multiclass learning problems, In: Pattern Recognition and String Matchingpp. 741-763 Springer US

The Output Coding technique for solving multi-class learning problems was originally proposed with rows of an error-correcting code matrix acting as code words to represent the classes. We summarise the requirements on the design of the binary strings in the code matrix and consider alternative combining strategies. For shorter codes, it is shown that both code design and combining strategy can affect generalisation. Examples are presented on synthetic data, on natural benchmark data and on an application in face verification.

T Windeatt, R Duangsoithong, R Smith (2011)Embedded feature ranking for ensemble MLP classifiers, In: IEEE Transactions on Neural Networks22(6)pp. 988-994 IEEE

A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems, feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.

Terry Windeatt, Reza Ghaderi (2001)Binary labelling and decision-level fusion, In: Information fusion2(2)pp. 103-112 Elsevier B.V

Two binary labelling techniques for decision-level fusion are considered for reducing correlation in the context of multiple classifier systems. First, we describe a method based on error correcting coding that uses binary code words to decompose a multi-class problem into a set of complementary two-class problems. We look at the conditions necessary for reduction of error and introduce a modified version that is less sensitive to code word selection. Second, we describe a partitioning method for two-class problems that transforms each training pattern into a vertex of the binary hypercube. A constructive algorithm for binary-to-binary mappings identifies a set of inconsistently classified patterns, random subsets of which are used to perturb base classifier training sets. Experimental results on artificial and real data, using a combination of simple neural network classifiers, demonstrate improvement in performance for these techniques, the first suitable for k-class problems, k>2 and the second for k=2.

Terry Windeatt, R Ghaderi, G Ardeshir (2003)Spectral coefficients and classifier correlation, In: Lecture notes in computer science2709pp. 276-285 Springer
R Duangsoithong, P Phukpattaranont, T Windeatt (2013)Bootstrap Causal Feature Selection for irrelevant feature elimination, In: BMEiCON 2013 - 6th Biomedical Engineering International Conference

Irrelevant features may lead to degradation in the accuracy and efficiency of classifier performance. In this paper, the Bootstrap Causal Feature Selection (BCFS) algorithm is proposed. BCFS uses bootstrapping with a causal discovery algorithm to remove irrelevant features. The results are evaluated by the number of selected features and classification accuracy. According to the experimental results, BCFS is able to remove irrelevant features and provides slightly higher average accuracy than using the original features or causal feature selection. Moreover, BCFS also reduces complexity in causal graphs, which makes the causal discovery system more comprehensible.

A Hilton, AJ Stoddart, J Illingworth, T Windeatt (1996)Reconstruction of 3D Delaunay Surface Models of Complex Objects, In: IEEE International Conference on Systems, Man and Cyberneticspp. 2445-2450
RS Smith, T Windeatt (2011)Facial Action Unit Recognition using Filtered Local Binary Pattern Features with Bootstrapped and Weighted ECOC Classifiers, In: Studies in Computational Intelligence373pp. 1-20 Springer

Within the context of face expression classification using the facial action coding system (FACS), we address the problem of detecting facial action units (AUs). The method adopted is to train a single error-correcting output code (ECOC) multiclass classifier to estimate the probabilities that each one of several commonly occurring AU groups is present in the probe image. Platt scaling is used to calibrate the ECOC outputs to probabilities and appropriate sums of these probabilities are taken to obtain a separate probability for each AU individually. Feature extraction is performed by generating a large number of local binary pattern (LBP) features and then selecting from these using fast correlation-based filtering (FCBF). The bias and variance properties of the classifier are measured and we show that both these sources of error can be reduced by enhancing ECOC through the application of bootstrapping and class-separability weighting.

C Zor, T Windeatt, B Yanikoglu, (2012)Bias-Variance Analysis of ECOC and Bagging Using Neural Nets, In: Proceedings of the the Third Workshop on Supervised and Unsupervised Ensemble Methods and Their Applications, European Conference on Machine Learningpp. 109-118
J Kittler, Y Yusoff, W Christmas, T Windeatt, D Windridge (2001)Boosting multiple experts by joint optimisation of decision thresholds, In: Pattern Recognition and Image Analysis113pp. 529-541

We consider a multiple classifier system which combines the hard decisions of experts by voting. We argue that the individual experts should not set their own decision thresholds. The respective thresholds should be selected jointly as this will allow compensation of the weaknesses of some experts by the relative strengths of the others. We perform the joint optimization of decision thresholds for a multiple expert system by a systematic sampling of the multidimensional decision threshold space. We show the effectiveness of this approach on the important practical application of video shot cut detection.

M Prior, T Windeatt (2006)Parameter mining using the out-of-bootstrap generalisation error estimate for Stochastic Discrimination and Random Forests, In: YY Tang, SP Wang, G Lorette, DS Yeung, H Yan (eds.), 18th International Conference on Pattern Recognition, Vol 2, Proceedingspp. 498-501
S Akyuz, T Windeatt, R Smith (2017)Ensemble Pruning via DC Programming

Ensemble learning is a method of combining learners; however, ensemble sizes are sometimes unnecessarily large, which causes extra memory usage and decreased effectiveness. Error Correcting Output Code (ECOC) is one of the well-known ensemble techniques for multiclass classification, which combines the outputs of binary base learners to predict the classes for multiclass data. We formulate the ECOC ensemble selection problem using difference of convex functions (DC) programming and a zero-norm approximation to the cardinality constraint. Experiments show that it outperforms standard ECOC.

Terry Windeatt, Gholamreza Ardeshir (2001)An empirical comparison of Pruning methods for ensemble classifiers, In: Lecture notes in computer science2189pp. 208-217 Springer
T Windeatt (2006)Accuracy/diversity and ensemble MLP classifier design, In: IEEE TRANSACTIONS ON NEURAL NETWORKS17(5)pp. 1194-1211 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Asavaron Limshuebchuey, Rakkrit Duangsoithong, Terry Windeatt (2016)Redundant feature identification and redundancy analysis for causal feature selection, In: 2015 8th Biomedical Engineering International Conference (BMEiCON) IEEE

High dimensional data can lead to low classification accuracy and long computation times because it contains irrelevant and redundant features. To overcome this problem, the dimension of the data has to be reduced. Causal feature selection is one method of feature reduction, but it cannot identify redundant features. This paper presents the Parent-Children based Causal Redundant Feature Identification (PCRF) algorithm to identify and remove redundant features. The classification accuracy and number of features removed by the PCRF algorithm are compared with correlation feature selection. According to the results, the PCRF algorithm can identify redundant features but has lower classification accuracy than correlation feature selection.

K Dias, T Windeatt (2014)Dynamic Ensemble Selection and Instantaneous Pruning for Regression Used in Signal Calibration, In: ICANN8681pp. 475-482 Springer

A dynamic method of selecting a pruned ensemble of predictors for regression problems is described. The proposed method enhances the prediction accuracy and generalization ability of pruning methods that change the order in which ensemble members are combined. Ordering heuristics attempt to combine accurate yet complementary regressors. The proposed method enhances performance by modifying the order of aggregation, distributing the regressor selection over the entire dataset. This paper compares four static ensemble pruning approaches with the proposed dynamic method. The experimental comparison is made using MLP regressors on benchmark datasets and on an industrial application of radio frequency source calibration.

Terry Windeatt, Cemre Zor (2011)Minimising added classification error using walsh coefficients, In: IEEE Transactions on Neural Networks22(8)pp. 1334-1339 IEEE

Two-class supervised learning in the context of a classifier ensemble may be formulated as learning an incompletely specified Boolean function, and the associated Walsh coefficients can be estimated without knowledge of the unspecified patterns. Using an extended version of the Tumer-Ghosh model, the relationship between Added Classification Error and second order Walsh coefficients is established. In this paper, the ensemble is composed of Multi-layer Perceptron (MLP) base classifiers, with the number of hidden nodes and epochs systematically varied. Experiments demonstrate that the mean second order coefficients peak at the same number of training epochs as ensemble test error reaches a minimum.

T Windeatt (2007)Ensemble neural classifier design for face recognition., In: ESANNpp. 373-378
AJ STODDART, J ILLINGWORTH, T WINDEATT (1994)OPTIMAL PARAMETER SELECTION FOR DERIVATIVE ESTIMATION FROM RANGE IMAGES, In: ER Hancock (eds.), BMVC94 - PROCEEDINGS OF THE 5TH BRITISH MACHINE VISION CONFERENCE, VOLS 1 AND 2pp. 165-174
D Duangsoithong, T Windeatt (2012)Hybrid Correlation and Causal Feature Selection for Ensemble Classifiers, In: Proceedings of the the Third Workshop on Supervised and Unsupervised Ensemble Methods and Their Applications, European Conference on Machine Learningpp. 23-32

PC and TPDA are robust and well-known prototype algorithms, incorporating constraint-based approaches for causal discovery. However, both algorithms cannot scale up to deal with high dimensional data, that is, more than a few hundred features. This paper presents hybrid correlation and causal feature selection for ensemble classifiers to deal with this problem. The number of eliminated features, accuracy, area under the receiver operating characteristic curve (AUC) and false negative rate (FNR) of the proposed algorithms are compared with correlation-based feature selection (FCBF and CFS) and causal feature selection algorithms (PC, TPDA, GS, IAMB).

R Duangsoithong, T Windeatt, F Schwenker, N ElGayar (2010)Correlation-Based and Causal Feature Selection Analysis for Ensemble Classifiers, In: ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION, PROCEEDINGS5998pp. 25-36
RS Smith, T Windeatt (2010)Class-Separability Weighting and Bootstrapping in Error Correcting Output Code Ensembles, In: N El Gayar, J Kittler, F Roli (eds.), Multiple Classifier Systems, Proceedings5997pp. 185-194

A method for applying weighted decoding to error-correcting output code ensembles of binary classifiers is presented. This method is sensitive to the target class in that a separate weight is computed for each base classifier and target class combination. Experiments on 11 UCI datasets show that the method tends to improve classification accuracy when using neural network or support vector machine base classifiers. It is further shown that weighted decoding combines well with the technique of bootstrapping to improve classification accuracy still further.

S Özöğür-Akyüz, T Windeatt, R Smith (2014)Pruning of Error Correcting Output Codes by optimization of accuracy–diversity trade off, In: Machine Learning

Ensemble learning is a method of combining learners to obtain more reliable and accurate predictions in supervised and unsupervised learning. However, ensemble sizes are sometimes unnecessarily large, which leads to additional memory usage, computational overhead and decreased effectiveness. To overcome such side effects, pruning algorithms have been developed; since this is a combinatorial problem, finding the exact subset of ensembles is computationally infeasible. Different types of heuristic algorithms have been developed to obtain an approximate solution, but they lack a theoretical guarantee. Error Correcting Output Code (ECOC) is one of the well-known ensemble techniques for multiclass classification, which combines the outputs of binary base learners to predict the classes for multiclass data. In this paper, we propose a novel approach for pruning the ECOC matrix by utilizing accuracy and diversity information simultaneously. All existing pruning methods need the size of the ensemble as a parameter, so their performance depends on the size of the ensemble. Our pruning method is novel in being independent of the ensemble size. Experimental results show that our pruning method is mostly better than other existing approaches.

R Duangsoithong, T Windeatt (2010)Bootstrap feature selection for ensemble classifiers, In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)6171 Lpp. 28-41
A Hilton, J Illingworth, T Windeatt (1994)Surface Curvature Estimation, In: 12th IAPR International Conference on Pattern Recognition
RS Smith, T Windeatt (2005)Decoding rules for error correcting output code ensembles, In: NC Oza, R Polikar, J Kittler, F Roli (eds.), MULTIPLE CLASSIFIER SYSTEMS3541pp. 53-63
R Duangsoithong, T Windeatt (2009)Relevant and Redundant Feature Analysis with Ensemble Classification, In: B Chanda (eds.), ICAPR 2009: SEVENTH INTERNATIONAL CONFERENCE ON ADVANCES IN PATTERN RECOGNITION, PROCEEDINGSpp. 247-250
Cemre Zor, Terry Windeatt, Josef Kittler (2013)ECOC Pruning using Accuracy, Diversity and Hamming Distance Information, In: 2013 21ST SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU)

Existing ensemble pruning algorithms in the literature have mainly been defined for unweighted or weighted voting ensembles, whose extensions to the Error Correcting Output Coding (ECOC) framework are not successful. This paper presents a novel pruning algorithm for ECOC, using a new accuracy measure together with diversity and Hamming distance information. The results show that the novel method outperforms existing state-of-the-art methods.

Additional publications