
Professor Ferrante Neri
Academic and research departments
Nature Inspired Computing and Engineering Research Group, Department of Computer Science, Surrey Institute for People-Centred Artificial Intelligence.
About
Biography
I am an all-round academic, equally passionate about teaching and research. My teaching expertise is in mathematical subjects for Computer Science, while my research expertise lies at the intersection of Optimisation, Explainable AI, and Machine Learning. I am also happy to serve the community I work for by undertaking managerial duties.
Before joining the faculty at Surrey I was at the University of Nottingham, and before that at De Montfort University and the University of Jyväskylä (Finland). I am a scholar in Optimisation and Computational Modelling, working at the boundary between mathematical theory and practical engineering solutions. I am the author of Linear Algebra for Computational Sciences and Engineering, and an active editor for multiple journals, including Information Sciences, Memetic Computing, and Integrated Computer-Aided Engineering.
Research
Research interests
My research expertise is in optimisation and machine learning. I am currently interested in algorithmic design for black-box optimisation problems informed by fitness landscape analysis. This topic lies at the intersection of optimisation, both exact and heuristic, and machine learning. I am also interested in applications of data science.
Historically, my main topic has been (and still is) the design and understanding of meta-heuristic algorithms. The two main subareas I have contributed to are Memetic Computing and Differential Evolution. Since 2010 I have been Chair/Vice-Chair of the IEEE Task Force on Memetic Computing.
Indicators of esteem
Listed among the top 2% of scientists in the Stanford World Ranking of Scientists.
Supervision
Postgraduate research supervision
Current PhD students and starting month
- Aisha E S E Saeid Aug 2022
If you are looking for a PhD in a topic related to Artificial Intelligence, send me an email to arrange an informal chat. Please include in your email:
- your CV
- a short description of the topic (maximum 2 pages) you would like to investigate
- a statement about how you plan to fund your PhD studies, e.g., whether you are self-funded, have a scholarship, or are seeking financial support
Be aware that PhD scholarships are available only at certain points of the year through competitive processes (deadlines are usually in January for an October start).
Graduated PhD students
Hoang Lam Le, University of Nottingham, “Novel Strategies to Accelerate Search Algorithms in Data Reduction”, 2022
Shouyong Jiang, De Montfort University, “Dynamic Multi-Objective Optimisation”, 2017
Michael Cochez, University of Jyväskylä, “Taming Big Knowledge Evolution”, 2016
Fabio Caraffini, De Montfort University, “Novel Memetic Structures for Continuous Optimisation”, 2014
Ilpo Poikolainen, University of Jyväskylä, “Simple Memetic Computing Structures for Global Optimization”, 2014
Annemari Soranto, University of Jyväskylä, “Interest-based topology management in unstructured peer-to-peer networks”, 2012
Ernesto Mininno, University of Jyväskylä, “Advanced Optimization Algorithms for Applications in Control Engineering”, 2011
Giovanni Iacca, University of Jyväskylä, “Memory-saving Optimization Algorithms for Systems with Limited Hardware”, 2011
Matthieu Weber, University of Jyväskylä, “Parallel Global Optimization, Structuring Populations in Differential Evolution”, 2010
Ville Tirronen, University of Jyväskylä, “Global Optimization using Memetic Differential Evolution with Applications to Low-Level Machine Vision”, 2008
Teaching
Happiness is to be understood – Georgi Polonsky (We’ll Live Till Monday)
Current Teaching
Computational Intelligence (Lectures, Weeks 1-11, Semester 1)
Foundations of Computing II (Linear Algebra Section, Weeks 1-6, Semester 2)
Past Teaching
Undergraduate Modules
- 2020-2022 Mathematics for Computer Scientists 2, University of Nottingham, UK
- 2019-2022 Languages and Computation, University of Nottingham, UK
- 2018-2019 Abstract Algebra I and II, De Montfort University, UK
- 2015-2018 Linear Algebra and Discrete Mathematics, De Montfort University, UK
- 2014-2018 Foundations and Algebra, De Montfort University, UK
- 2012-2013 Introduction to Artificial Intelligence and Mobile Robotics, De Montfort University, UK
- 2012-2013 Connected Devices, De Montfort University, UK
Postgraduate Modules
- 2013-2015 Computational Intelligence Optimisation, De Montfort University, UK
- 2007-2012 Research Projects, University of Jyväskylä, Finland
- 2007 Evolutionary Computational Intelligence, University of Jyväskylä, Finland
International Research/Postgraduate Modules
- 2020 Automatic Design of Optimisation Algorithms in the Continuous Domain, NATCOR Course Heuristics & Approximation Algorithms, Nottingham, UK
- 2008 Medicine+Computer Science=Computational Medicine; Computational Intelligence for Multi-drug Therapies, Winter School on Mathematical and Computational Biology, University of Newcastle, Australia
- 2007 Recent Advances in Evolutionary Computing, 17th Jyväskylä Summer School, University of Jyväskylä, Finland
Publications
Deep convolutional neural networks (CNNs) are widely used for image classification. Deep CNNs often require large amounts of memory and computation, limiting their usability in embedded or mobile devices. To overcome this limitation, several pruning methods have been proposed. However, most of the existing methods focus on pruning parameters and cannot efficiently address the computation costs of deep CNNs. Additionally, these methods ignore the connections between the feature maps of different layers. This paper proposes a multi-objective pruning method based on feature map selection (MOP-FMS). Unlike previous studies, we use the number of floating point operations (FLOPs) as a pruning objective in addition to the accuracy of the pruned network. First, we propose an encoding method based on feature map selection with a compact and efficient search space. Second, novel domain-specific crossover and mutation operators with reparation are designed to generate new individuals and ensure they satisfy the constraint rules. Then, decoding and pruning methods are proposed to prune networks based on the results of feature map selection. Finally, multi-objective optimisation is used for evaluation and individual selection. Our method has been tested with commonly used network structures. Numerical results demonstrate that the proposed method achieves better results than other state-of-the-art methods in terms of pruning rate without significantly decreasing accuracy.
- Considering the relation between the feature maps from different layers, the pruning problem is formulated as a bi-objective optimisation problem with feature map selection, and the accuracy rate and computation cost are simultaneously optimised.
- A novel feature map-based encoding method and a unique decoding method are proposed for pruning common structures or networks with additive aggregation.
- Special initialisation, crossover and mutation operators are designed with a quick reparation method to satisfy the encoding constraints of this specific problem.
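The bi-objective comparison underlying this kind of pruning (maximise accuracy, minimise FLOPs) can be illustrated with a generic Pareto-dominance check; this is a minimal sketch of the standard definition, not code from the paper:

```python
def dominates(a, b):
    """Return True if network summary a Pareto-dominates b, where each
    summary is an (accuracy, flops) pair: a must be no worse in both
    objectives (higher accuracy, lower FLOPs) and strictly better in
    at least one."""
    acc_a, flops_a = a
    acc_b, flops_b = b
    no_worse = acc_a >= acc_b and flops_a <= flops_b
    strictly_better = acc_a > acc_b or flops_a < flops_b
    return no_worse and strictly_better
```

For example, a pruned network at (0.91 accuracy, 120M FLOPs) dominates one at (0.90, 150M), while two networks that trade accuracy against FLOPs are mutually non-dominated and both survive selection.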
A Generative Adversarial Network (GAN) can learn the relationship between two image domains and achieve unpaired image-to-image translation. One of the breakthroughs was Cycle-consistent Generative Adversarial Networks (CycleGAN), a popular method for transferring content representations from the source domain to the target domain. Existing studies have gradually improved the performance of CycleGAN models by modifying the network structure or loss function of CycleGAN. However, these methods tend to suffer from training instability, and the generators lack the ability to acquire the most discriminating features between the source and target domains, resulting in generated images with low fidelity and few texture details. To overcome these issues, the present paper proposes a new method that combines Evolutionary Algorithms (EAs) and Attention Mechanisms to train GANs. Specifically, starting from an initial CycleGAN, binary vectors indicating the activation of the weights of the generators are progressively improved by means of an EA. At the end of this process, the best-performing configurations of generators can be retained for image generation. In addition, to address the issues of low fidelity and lack of texture detail in generated images, we make use of the channel attention mechanism. The latter component allows the candidate generators to learn important features of real images and thus generate images of higher quality. The experiments demonstrate qualitatively and quantitatively that the proposed method, namely Attention evolutionary GAN (AevoGAN), alleviates the training instability problems of CycleGAN training. In the test results, the proposed method can generate higher-quality images and obtain better results than the CycleGAN training methods present in the literature, in terms of Inception Score (IS), Fréchet Inception Distance (FID) and Kernel Inception Distance (KID).
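The core idea of evolving binary activation vectors can be sketched with a toy (1+1) evolutionary loop; the fitness function below is a placeholder (a real setup would score generated images, e.g. by FID), and nothing here is taken from the AevoGAN implementation:

```python
import random

def evolve_mask(fitness, length, generations=500, seed=0):
    """(1+1)-EA over a binary mask: flip each bit with probability
    1/length, and keep the child if it is at least as fit as the
    parent (ties accepted to allow neutral drift)."""
    rng = random.Random(seed)
    mask = [rng.randint(0, 1) for _ in range(length)]
    best = fitness(mask)
    for _ in range(generations):
        # Bit-flip mutation: each position toggles with probability 1/length.
        child = [bit ^ (rng.random() < 1.0 / length) for bit in mask]
        score = fitness(child)
        if score >= best:
            mask, best = child, score
    return mask, best
```

With `fitness=sum` (count of active weights) the loop quickly drives the mask toward all ones; in the paper's setting the evaluated quantity would instead be the quality of images produced by the masked generator.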
In this paper, a hybrid feature selection algorithm based on a multi-objective algorithm with ReliefF (MOFS-RFGA) is proposed. Combining the advantages of filter and wrapper methods, the two types of algorithm are hybridised to improve the capability to solve feature selection problems. First, the ReliefF algorithm is used to score the features according to their importance to the instance class. The feature scoring information is then used to initialise the population. New crossover and mutation operators are also designed to guide the crossover and mutation process based on the feature scores, improving the search direction of MOFS-RFGA in the search space and enhancing convergence. In the experiments, MOFS-RFGA is compared with seven advanced multi-objective feature selection algorithms on 20 datasets. The results show that MOFS-RFGA fully exploits the advantages of filter and wrapper methods, outperforming the comparison algorithms on a large number of datasets and maintaining good classification performance while removing a large number of features.
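The score-guided initialisation idea can be sketched as follows; the scores, scaling constant, and function name are illustrative assumptions standing in for ReliefF output, not the MOFS-RFGA implementation:

```python
import random

def score_guided_init(scores, pop_size, seed=0):
    """Build an initial population of binary feature masks in which a
    feature's chance of being selected grows with its importance score
    (standing in here for ReliefF scores). Sketch only."""
    rng = random.Random(seed)
    total = sum(scores)
    # Scale probabilities so that roughly half the features are
    # selected on average; cap each probability at 1.
    probs = [min(1.0, (s / total) * 0.5 * len(scores)) for s in scores]
    return [[1 if rng.random() < p else 0 for p in probs]
            for _ in range(pop_size)]
```

The effect is that highly scored features appear in most initial individuals while weakly scored ones appear rarely, biasing the search toward informative subsets from the first generation.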
Biological brains have a natural capacity for resolving certain classification tasks. Studies on biologically plausible spiking neurons, architectures and mechanisms of artificial neural systems that closely match biological observations while giving high classification performance are gaining momentum. Spiking neural P systems (SN P systems) are a class of membrane computing models and third-generation neural networks that are based on the behavior of biological neural cells and have been used in various engineering applications. Furthermore, SN P systems are characterized by a highly flexible structure that enables the design of a machine learning algorithm by mimicking the structure and behavior of biological cells without the over-simplification present in neural networks. Based on this aspect, this paper proposes a novel type of SN P system, namely, layered SN P system (LSN P system), to solve classification problems by supervised learning. The proposed LSN P system consists of a multi-layer network containing multiple weighted fuzzy SN P systems with adaptive weight adjustment rules. The proposed system employs specific ascending dimension techniques and a selection method of output neurons for classification problems. The experimental results obtained using benchmark datasets from the UCI machine learning repository and MNIST dataset demonstrated the feasibility and effectiveness of the proposed LSN P system. More importantly, the proposed LSN P system presents the first SN P system that demonstrates sufficient performance for use in addressing real-world classification problems.
Spiking neural P systems (SN P systems), inspired by biological neurons, are introduced as symbolical neural-like computing models that encode information with multisets of symbolized spikes in neurons and process information by using spike-based rewriting rules. Inspired by neuronal activities affected by enzymes, numerical variants of SN P systems called enzymatic numerical spiking neural P systems (ENSNP systems) are proposed wherein each neuron has a set of variables with real values and a set of enzymatic activation-production spiking rules, and each synapse has an assigned weight. By using spiking rules, ENSNP systems can directly implement mathematical methods based on real numbers and continuous functions. Furthermore, ENSNP systems are used to model ENSNP membrane controllers for robots implementing wall following. The trajectories, distances from the wall, and wheel speeds of robots with ENSNP membrane controllers for wall following are compared with those of a robot with a membrane controller for wall following. The average error values of the designed ENSNP membrane controllers are compared with those of three recent fuzzy logic controllers tuned with optimization algorithms for wall following. The experimental results showed that the designed ENSNP membrane controllers are candidates for efficient controllers of robots performing the task of wall following.
Spiking neural P systems (abbreviated as SNP systems) are models of computation that mimic the behavior of biological neurons. The spiking neural P systems with communication on request (abbreviated as SNQP systems) are a recently developed class of SNP system, where a neuron actively requests spikes from the neighboring neurons instead of passively receiving spikes. It is already known that small SNQP systems, with four unbounded neurons, can achieve Turing universality. In this context, ‘unbounded’ means that the number of spikes in a neuron is not capped. This work investigates the dependency of the number of unbounded neurons on the computation capability of SNQP systems. Specifically, we prove that (1) SNQP systems composed entirely of bounded neurons can characterize the family of finite sets of numbers; (2) SNQP systems containing two unbounded neurons are capable of generating the family of semilinear sets of numbers; (3) SNQP systems containing three unbounded neurons are capable of generating nonsemilinear sets of numbers. Moreover, it is obtained in a constructive way that SNQP systems with two unbounded neurons compute the operations of Boolean logic gates, i.e., OR, AND, NOT, and XOR gates. These theoretical findings demonstrate that the number of unbounded neurons is a key parameter that influences the computation capability of SNQP systems.
Generative adversarial networks have made remarkable achievements in generative tasks. However, instability and mode collapse are still frequent problems. We improve the framework of evolutionary generative adversarial networks (E-GANs), calling it phased evolutionary generative adversarial networks (PEGANs), and adopt a self-attention module to compensate for the limitations of convolutional operations. During the training process, the discriminator plays against multiple generators simultaneously, where each generator adopts a different objective function as a mutation operation. After every specified number of training iterations, the generator individuals are evaluated and the best-performing generator offspring are retained for the next round of evolution. Based on this, the generator can continuously adjust its training strategy during training, and the self-attention module also enables the model to capture long-range dependencies. Experiments on two datasets showed that PEGANs improve training stability and are competitive in generating high-quality samples.
Adam is an adaptive gradient descent approach that is commonly used in back-propagation (BP) algorithms for training feed-forward neural networks (FFNNs). However, it can easily become trapped in local optima. To solve this problem, some metaheuristic approaches have been proposed to train FFNNs. While these approaches have stronger global search capabilities, enabling them to more readily escape from local optima, their convergence performance is not as good as that of Adam. The proposed algorithm is an ensemble of differential evolution and Adam (EDEAdam), which integrates a modern version of the differential evolution algorithm with Adam, using two different sub-algorithms to evolve two sub-populations in parallel and thereby achieving good results in both global and local search. Compared with traditional algorithms, the integration of the two algorithms endows EDEAdam with powerful capabilities to handle different classification problems. Experimental results prove that EDEAdam not only exhibits improved global and local search capabilities, but also achieves a fast convergence speed.
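The split-population idea, global search on one half and gradient refinement on the other, can be sketched on a toy objective; plain gradient descent stands in for Adam, and the rates, population sizes, and simplified DE/rand/1 rule (no crossover) are arbitrary choices, not those of EDEAdam:

```python
import random

def ensemble_de_gradient(f, grad, dim, pop_size=10, iters=100, seed=1):
    """Evolve one population with two update rules: DE-style mutation on
    the first half (global search) and gradient descent on the second
    half (local refinement, standing in for Adam). Illustrative only."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    half = pop_size // 2
    lr = 0.1
    for _ in range(iters):
        # DE/rand/1 mutation with greedy selection (crossover omitted).
        for i in range(half):
            a, b, c = rng.sample(range(pop_size), 3)
            trial = [pop[a][d] + 0.5 * (pop[b][d] - pop[c][d])
                     for d in range(dim)]
            if f(trial) < f(pop[i]):
                pop[i] = trial
        # Gradient descent steps on the second half.
        for i in range(half, pop_size):
            g = grad(pop[i])
            pop[i] = [x - lr * gx for x, gx in zip(pop[i], g)]
    return min(pop, key=f)
```

On a convex function the gradient half converges quickly, while on multimodal landscapes the DE half keeps exploring; in EDEAdam the exchange between the two sub-populations is what combines the two strengths.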
Biped robots must walk stably to guarantee their locomotion, which requires high-performance joint control. This article addresses this issue by proposing a novel human-simulated fuzzy (HF) membrane control system for the joint angles. The proposed control system, the human-simulated fuzzy membrane controller (HFMC), contains several key elements. The first is an HF algorithm based on human-simulated intelligent control (HSIC). This HF algorithm incorporates elements of both multi-mode proportional-derivative (PD) and fuzzy control, aiming to solve the chattering problem of multi-mode switching while improving control accuracy. The second is a membrane architecture that makes use of the natural parallelisation potential of membrane computing to improve the real-time performance of the controller. The proposed HFMC is utilised as the joint controller for a biped robot. Numerical tests in simulation are carried out with the planar and slope walking of a five-link biped robot, and the effectiveness of the HFMC is verified by comparing and evaluating the results of the designed HFMC, HSIC and PD. Experimental results demonstrate that the proposed HFMC not only retains the advantages of traditional PD control but also improves control accuracy, real-time performance and stability.
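The PD component such a controller builds on follows the classical discrete proportional-derivative law; the function and gains below are a generic textbook sketch, not the HFMC itself:

```python
def pd_step(error, prev_error, dt, kp, kd):
    """One step of a discrete PD control law: the control output is a
    weighted sum of the current tracking error and its finite-difference
    rate of change over the timestep dt."""
    return kp * error + kd * (error - prev_error) / dt
```

A multi-mode scheme like HSIC switches between several such laws (or gain sets) depending on the state, and the fuzzy component smooths those switches to avoid chattering.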
Feature selection (FS) is an important data pre-processing technique in classification. In most cases, FS can improve classification accuracy and reduce feature dimension, so it can be regarded as a multi-objective optimization problem. Many evolutionary computation (EC) techniques have been applied to FS problems and achieved good results. However, an increase in data dimension means that search difficulty also greatly increases, and EC algorithms with insufficient search ability may, with high probability, find only sub-optimal solutions. Moreover, an improper initial population may negatively affect the convergence speed of algorithms. To solve the problems highlighted above, this paper proposes MOEA-ISa: a multi-objective evolutionary algorithm with interval based initialization and a self-adaptive crossover operator for large-scale FS. The proposed interval based initialization can limit the number of selected features for each solution to improve the distribution of the initial population in the objective space and reduce the similarity of the initial population in the decision space. The proposed self-adaptive crossover operator can determine the number of nonzero genes in offspring according to the similarity of the parents, and it is combined with the feature weights obtained by the ReliefF method to improve the quality of offspring. In the experiments, the proposed algorithm was compared with six other algorithms on 13 benchmark UCI datasets and two benchmark LIBSVM datasets, and an ablation experiment was performed on MOEA-ISa. The results show that MOEA-ISa's performance is better than that of the six other algorithms for solving large-scale FS problems, and the proposed interval based initialization and self-adaptive crossover operator can effectively improve the performance of MOEA-ISa. The source code of MOEA-ISa is available on GitHub at https://github.com/xueyunuist/MOEA-ISa.
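The interval-based idea, spreading the initial population along the "number of selected features" axis, can be sketched as follows; this is an illustrative reading of the concept, not the MOEA-ISa source code:

```python
import random

def interval_init(n_features, pop_size, seed=0):
    """Divide [1, n_features] into pop_size intervals and give the i-th
    individual a feature count drawn from the i-th interval, so initial
    solutions cover the whole range of subset sizes. Sketch only."""
    rng = random.Random(seed)
    pop = []
    step = n_features / pop_size
    for i in range(pop_size):
        lo = int(i * step) + 1
        hi = max(lo, int((i + 1) * step))
        k = rng.randint(lo, hi)          # subset size for individual i
        mask = [0] * n_features
        for j in rng.sample(range(n_features), k):
            mask[j] = 1                  # select k distinct features
        pop.append(mask)
    return pop
```

Compared with uniform random masks (which concentrate around n_features/2 selected features), this spreads the first generation across sparse and dense subsets, matching the paper's goal of a well-distributed initial population in the objective space.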