Professor Ferrante Neri

Professor of Machine Learning and Artificial Intelligence, Head of the Nature Inspired Computing and Engineering (NICE) Research Group
Docent (University of Jyväskylä), PhD (University of Jyväskylä), PhD (Polytechnic University of Bari), MEng (Polytechnic University of Bari)
+44 (0)1483 682646
16 BB 02


My qualifications

Docent in Mathematical Information Technology (Computational Intelligence)
University of Jyväskylä, Finland
PhD in Mathematical Information Technology (Scientific Computing and Optimisation)
University of Jyväskylä, Finland
PhD in Electrotechnical Engineering
Polytechnic University of Bari, Italy
Laurea Degree in Electrical Engineering
Polytechnic University of Bari, Italy

Previous roles

01 April 2019 - 31 March 2022
Associate Professor (School of Computer Science)
University of Nottingham
24 May 2013 - 31 March 2019
Professor of Computational Intelligence Optimisation
De Montfort University
15 March 2012 - 23 May 2013
Reader in Computational Intelligence
De Montfort University
01 September 2009 - 31 August 2014
Academy Research Fellow (hosted at the Department of Mathematical Information Technology, University of Jyväskylä, Finland)
Academy of Finland
15 November 2007 - 31 August 2012
Senior Assistant in Simulation and Optimisation (on study leave from 01/09/2009 until the end of the contract)
Department of Mathematical Information Technology, University of Jyväskylä, Finland

Affiliations and memberships

IEEE Senior Member
Senior Fellow Higher Education Academy


Research interests

Indicators of esteem

  • Listed among the top 2% of scientists in the Stanford World Ranking of Scientists


    Postgraduate research supervision



    Gexiang Zhang, Xihai Zhang, Haina Rong, Prithwineel Paul, Ming Zhu, Ferrante Neri, Yew-Soon Ong (2022) A Layered Spiking Neural System for Classification Problems, In: International Journal of Neural Systems 32(8) 2250023, World Scientific Publishing

    Biological brains have a natural capacity for resolving certain classification tasks. Studies on biologically plausible spiking neurons, architectures and mechanisms of artificial neural systems that closely match biological observations while giving high classification performance are gaining momentum. Spiking neural P systems (SN P systems) are a class of membrane computing models and third-generation neural networks that are based on the behavior of biological neural cells and have been used in various engineering applications. Furthermore, SN P systems are characterized by a highly flexible structure that enables the design of a machine learning algorithm by mimicking the structure and behavior of biological cells without the over-simplification present in neural networks. Based on this aspect, this paper proposes a novel type of SN P system, namely, layered SN P system (LSN P system), to solve classification problems by supervised learning. The proposed LSN P system consists of a multi-layer network containing multiple weighted fuzzy SN P systems with adaptive weight adjustment rules. The proposed system employs specific ascending dimension techniques and a selection method of output neurons for classification problems. The experimental results obtained using benchmark datasets from the UCI machine learning repository and MNIST dataset demonstrated the feasibility and effectiveness of the proposed LSN P system. More importantly, the proposed LSN P system presents the first SN P system that demonstrates sufficient performance for use in addressing real-world classification problems.

    Tingfang Wu, Ferrante Neri, Linqiang Pan (2022) On the Tuning of the Computation Capability of Spiking Neural Membrane Systems with Communication on Request, In: International Journal of Neural Systems 32(8) 2250037, World Scientific Publishing

    Spiking neural P systems (abbreviated as SNP systems) are models of computation that mimic the behavior of biological neurons. The spiking neural P systems with communication on request (abbreviated as SNQP systems) are a recently developed class of SNP system, where a neuron actively requests spikes from the neighboring neurons instead of passively receiving spikes. It is already known that small SNQP systems, with four unbounded neurons, can achieve Turing universality. In this context, ‘unbounded’ means that the number of spikes in a neuron is not capped. This work investigates the dependency of the computation capability of SNQP systems on the number of unbounded neurons. Specifically, we prove that (1) SNQP systems composed entirely of bounded neurons can characterize the family of finite sets of numbers; (2) SNQP systems containing two unbounded neurons are capable of generating the family of semilinear sets of numbers; (3) SNQP systems containing three unbounded neurons are capable of generating non-semilinear sets of numbers. Moreover, it is shown constructively that SNQP systems with two unbounded neurons can compute the Boolean logic gates OR, AND, NOT, and XOR. These theoretical findings demonstrate that the number of unbounded neurons is a key parameter that influences the computation capability of SNQP systems.

    Yu Xue, Yiling Tong, Ferrante Neri (2022) An ensemble of differential evolution and Adam for training feed-forward neural networks, In: Information Sciences 608, pp. 453-471, Elsevier

    Adam is an adaptive gradient descent approach that is commonly used in back-propagation (BP) algorithms for training feed-forward neural networks (FFNNs). However, it can easily become trapped in local optima. To address this problem, several metaheuristic approaches have been proposed for training FFNNs. While these approaches have stronger global search capabilities, enabling them to escape from local optima more readily, their convergence performance is not as good as that of Adam. The proposed algorithm is an ensemble of differential evolution and Adam (EDEAdam), which integrates a modern version of the differential evolution algorithm with Adam, using the two sub-algorithms to evolve two sub-populations in parallel and thereby achieving good results in both global and local search. Compared with traditional algorithms, the integration of the two algorithms endows EDEAdam with powerful capabilities for handling a variety of classification problems. Experimental results show that EDEAdam not only exhibits improved global and local search capabilities, but also achieves a fast convergence speed.
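The two-population idea behind EDEAdam can be illustrated with a toy sketch (this is an illustrative simplification on a surrogate quadratic loss, not the authors' implementation; all names and parameter values are assumptions): a differential-evolution sub-population performs global search while an Adam-driven point performs local search, and the Adam point migrates into the DE population whenever it beats the worst DE individual.

```python
import numpy as np

def loss(w):
    # toy convex surrogate for a network training loss, optimum at w = 3
    return float(np.sum((w - 3.0) ** 2))

def grad(w):
    return 2.0 * (w - 3.0)

def adam_step(w, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # one standard Adam update with bias correction
    g = grad(w)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def de_step(pop, rng, f=0.8, cr=0.9):
    # classic DE/rand/1/bin generation with greedy replacement
    new = pop.copy()
    n = len(pop)
    for i in range(n):
        idx = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + f * (b - c)
        mask = rng.random(pop.shape[1]) < cr
        trial = np.where(mask, mutant, pop[i])
        if loss(trial) < loss(pop[i]):
            new[i] = trial
    return new

def edeadam_sketch(dim=5, pop_size=8, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    de_pop = rng.uniform(-5, 5, (pop_size, dim))   # global-search sub-population
    w = rng.uniform(-5, 5, dim)                    # Adam-driven local searcher
    m, v = np.zeros(dim), np.zeros(dim)
    for t in range(1, iters + 1):
        de_pop = de_step(de_pop, rng)
        w, m, v = adam_step(w, m, v, t)
        # migration: the Adam point replaces the worst DE individual if better
        worst = max(range(pop_size), key=lambda i: loss(de_pop[i]))
        if loss(w) < loss(de_pop[worst]):
            de_pop[worst] = w
    return min(list(de_pop) + [w], key=loss)
```

The migration step is one plausible coupling between the two sub-populations; the published algorithm may exchange information differently.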

    Feature selection (FS) is an important data pre-processing technique in classification. In most cases, FS can improve classification accuracy and reduce feature dimension, so it can be regarded as a multi-objective optimization problem. Many evolutionary computation (EC) techniques have been applied to FS problems and achieved good results. However, an increase in data dimension means that search difficulty also greatly increases, and EC algorithms with insufficient search ability may find only sub-optimal solutions with high probability. Moreover, an improper initial population may negatively affect the convergence speed of algorithms. To solve the problems highlighted above, this paper proposes MOEA-ISa: a multi-objective evolutionary algorithm with interval-based initialization and a self-adaptive crossover operator for large-scale FS. The proposed interval-based initialization limits the number of selected features per solution, improving the distribution of the initial population in the objective space and reducing its similarity in the decision space. The proposed self-adaptive crossover operator determines the number of nonzero genes in offspring according to the similarity of the parents, and combines this with feature weights obtained by the ReliefF method to improve the quality of offspring. In the experiments, the proposed algorithm was compared with six other algorithms on 13 benchmark UCI datasets and two benchmark LIBSVM datasets, and an ablation experiment was performed on MOEA-ISa. The results show that MOEA-ISa outperforms the six other algorithms on large-scale FS problems, and that the proposed interval-based initialization and self-adaptive crossover operator effectively improve its performance. The source code of MOEA-ISa is available on GitHub at
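The interval-based initialization can be sketched as follows (a hypothetical simplification, not the authors' code; the interval count and encoding are assumptions): the range of possible subset sizes is split into intervals, and each initial individual draws its number of selected features from one interval, which spreads the population along the "number of features" objective and keeps initial individuals dissimilar.

```python
import numpy as np

def interval_init(pop_size, n_features, n_intervals=5, seed=0):
    """Binary-encoded population where individual i draws its count of
    selected features from interval i % n_intervals of [1, n_features)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(1, n_features, n_intervals + 1, dtype=int)
    pop = np.zeros((pop_size, n_features), dtype=int)
    for i in range(pop_size):
        lo = edges[i % n_intervals]
        hi = edges[i % n_intervals + 1]
        k = int(rng.integers(lo, max(lo + 1, hi)))     # subset size in [lo, hi)
        chosen = rng.choice(n_features, size=k, replace=False)
        pop[i, chosen] = 1                              # mark selected features
    return pop
```

Cycling individuals through the intervals guarantees that small, medium, and large feature subsets are all represented before the first generation.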

    Luping Zhang, Fei Xu, Dongyang Xiao, Jianping Dong, Gexiang Zhang, Ferrante Neri (2022) Enzymatic Numerical Spiking Neural Membrane Systems and Their Application in Designing Membrane Controllers, In: International Journal of Neural Systems

    Spiking neural P systems (SN P systems), inspired by biological neurons, are symbolic neural-like computing models that encode information as multisets of symbolized spikes in neurons and process information using spike-based rewriting rules. Inspired by neuronal activities affected by enzymes, numerical variants of SN P systems called enzymatic numerical spiking neural P systems (ENSNP systems) are proposed, wherein each neuron has a set of real-valued variables and a set of enzymatic activation-production spiking rules, and each synapse has an assigned weight. By using spiking rules, ENSNP systems can directly implement mathematical methods based on real numbers and continuous functions. Furthermore, ENSNP systems are used to build ENSNP membrane controllers for wall-following robots. The trajectories, distances from the wall, and wheel speeds of robots with ENSNP membrane controllers are compared with those of a robot using an existing membrane controller for wall following. The average error values of the designed ENSNP membrane controllers are also compared with three recent fuzzy logic controllers with optimization algorithms for wall following. The experimental results show that the designed ENSNP membrane controllers are promising candidates for efficiently controlling robots performing wall following.

    Y Xue, W Tong, Ferrante Neri, Yixia Zhang (2022) PEGANs: Phased Evolutionary Generative Adversarial Networks with Self-Attention Module, In: Mathematics 10(15) 2792, MDPI

    Generative adversarial networks have made remarkable achievements in generative tasks. However, instability and mode collapse are still frequent problems. We improve the framework of evolutionary generative adversarial networks (E-GANs), calling the result phased evolutionary generative adversarial networks (PEGANs), and adopt a self-attention module to mitigate the limitations of convolutional operations. During the training process, the discriminator plays against multiple generators simultaneously, where each generator adopts a different objective function as a mutation operation. After each specified number of training iterations, the generator individuals are evaluated and the best-performing generator offspring is retained for the next round of evolution. On this basis, the generator can continuously adjust its training strategy during training, and the self-attention module also gives the model the ability to capture long-range dependencies. Experiments on two datasets showed that PEGANs improve training stability and are competitive in generating high-quality samples.
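The evolutionary loop of PEGANs can be caricatured in a few lines (a schematic only; the real system trains neural generators against a discriminator, and the mutation operators and fitness function here are stand-ins): several offspring of the current generator are produced by different mutation operators, and after each phase only the fittest survives.

```python
import numpy as np

# Stand-ins for training the generator under different objective functions
# (in PEGANs these correspond to different GAN loss variants).
def mutate_a(w, rng): return w + 0.1 * rng.standard_normal(w.shape)
def mutate_b(w, rng): return w + 0.3 * rng.standard_normal(w.shape)
def mutate_c(w, rng): return w * 0.95

def fitness(w):
    # stand-in for a discriminator-based quality score (higher is better)
    return -float(np.sum(w ** 2))

def evolve(generations=50, dim=4, seed=0):
    rng = np.random.default_rng(seed)
    parent = rng.standard_normal(dim)
    for _ in range(generations):
        # each phase: produce one offspring per mutation operator ...
        offspring = [m(parent, rng) for m in (mutate_a, mutate_b, mutate_c)]
        # ... then keep only the best-performing offspring
        parent = max(offspring, key=fitness)
    return parent
```

The point of the selection step is that the effective training objective can change from phase to phase, depending on which mutation currently produces the fittest generator.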