Publications

Mukunthan Tharmakulasingam, Brian Gardner, Roberto La Ragione, Anil Fernando (2022) Explainable Deep Learning Approach for Multilabel Classification of Antimicrobial Resistance with Missing Labels, In: IEEE Access 10, Institute of Electrical and Electronics Engineers (IEEE)

Predicting Antimicrobial Resistance (AMR) from genomic sequence data has become a significant component of overcoming the AMR challenge, especially given its potential for facilitating more rapid diagnostics and personalised antibiotic treatments. With recent advances in sequencing technologies and computing power, deep learning models for genomic sequence data have been widely adopted to predict AMR more reliably and accurately. There are more than 30 different types of AMR; therefore, any practical AMR prediction system must be able to identify multiple AMRs present in a genomic sequence. Unfortunately, most genomic sequence datasets do not have all the labels marked, making a deep learning modelling approach challenging owing to its reliance on labels for reliability and accuracy. This paper addresses this issue by presenting an effective deep learning solution, the Mask-Loss 1D convolutional neural network (ML-ConvNet), for AMR prediction on datasets with many missing labels. The core component of ML-ConvNet is a masked loss function that overcomes the effect of missing labels in predicting AMR. The proposed ML-ConvNet is demonstrated to outperform state-of-the-art methods in the literature by 10.5% in F1 score. The model's performance is evaluated under different degrees of missing labels and is found to outperform the conventional approach by 76% in F1 score when 86.68% of labels are missing. Furthermore, the proposed ML-ConvNet is integrated with an explainable artificial intelligence (XAI) pipeline, making it well suited to hospitals and healthcare settings where model interpretability is an essential requirement.
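The paper's implementation is not reproduced here, but the masked-loss idea can be illustrated with a minimal sketch: a binary cross-entropy loss evaluated only over observed labels, so that missing entries contribute neither gradient nor dilution of the average. The function name, tensor shapes, and normalisation below are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of a masked multi-label loss, in the spirit of the
# Mask-Loss idea described above; names and shapes are illustrative only.
import torch
import torch.nn.functional as F

def masked_bce_loss(logits, targets, label_mask):
    """Binary cross-entropy over labelled entries only.

    logits:     (batch, n_labels) raw model outputs
    targets:    (batch, n_labels) 0/1 resistance labels (float; arbitrary where missing)
    label_mask: (batch, n_labels) 1.0 where the label is known, 0.0 where missing
    """
    per_label = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    masked = per_label * label_mask
    # Normalise by the number of observed labels so missing entries
    # neither contribute gradient nor skew the average loss.
    return masked.sum() / label_mask.sum().clamp(min=1.0)
```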

Mukunthan Tharmakulasingam, Brian Gardner, Roberto La Ragione, Anil Fernando (2022) Rectified Classifier Chains for Prediction of Antibiotic Resistance from Multi-labelled Data with Missing Labels, In: IEEE/ACM Transactions on Computational Biology and Bioinformatics PP(1), pp. 1-1, IEEE

Predicting Antimicrobial Resistance (AMR) from genomic data has important implications for human and animal healthcare, especially given its potential for more rapid diagnostics and informed treatment choices. With recent advances in sequencing technologies, applying machine learning techniques to AMR prediction has shown promising results. Despite this, there are shortcomings in the literature concerning methodologies suitable for multi-drug AMR prediction, especially where samples with missing labels exist. To address this shortcoming, we introduce a Rectified Classifier Chain (RCC) method for predicting multi-drug resistance. The RCC method was tested using annotated features of genomic sequences and compared with similar multi-label classification methodologies. We found that applying the eXtreme Gradient Boosting (XGBoost) base model to our RCC model outperformed the second-best model, an XGBoost-based binary relevance model, by 3.3% in Hamming accuracy and 7.8% in F1-score. Additionally, machine learning models applied to AMR prediction in the literature are typically unsuitable for identifying the biomarkers that inform their decisions; in this study, we show that biomarkers contributing to AMR prediction can also be identified using the proposed RCC method. We expect this can facilitate genome annotation and pave the way towards identifying new biomarkers indicative of AMR.
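As a rough illustration of the classifier-chain idea discussed above, the sketch below trains one XGBoost classifier per antibiotic and feeds earlier predictions forward as extra features, skipping samples whose label for the current antibiotic is missing. This is an assumption about how such a chain could be arranged (function names, the missing-label sentinel, and hyperparameters are invented for illustration), not the authors' RCC implementation.

```python
# Illustrative classifier chain over multi-drug labels with per-label
# masking of missing entries; not the authors' RCC code.
import numpy as np
from xgboost import XGBClassifier

def fit_chain(X, Y, missing=-1):
    """Train one XGBoost classifier per drug, chaining predictions forward.

    X: (n_samples, n_features) genomic features
    Y: (n_samples, n_drugs) 0/1 labels, with `missing` marking unknown entries
    """
    models, X_aug = [], X
    for j in range(Y.shape[1]):
        known = Y[:, j] != missing                 # drop rows missing label j
        clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
        clf.fit(X_aug[known], Y[known, j])
        models.append(clf)
        preds = clf.predict(X_aug).reshape(-1, 1)  # feed prediction to next link
        X_aug = np.hstack([X_aug, preds])
    return models

def predict_chain(models, X):
    """Predict all drug labels for X using the fitted chain."""
    X_aug, out = X, []
    for clf in models:
        p = clf.predict(X_aug).reshape(-1, 1)
        out.append(p)
        X_aug = np.hstack([X_aug, p])
    return np.hstack(out)
```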

B Gardner, I Sporea, A Grüning (2014) Classifying spike patterns by reward-modulated STDP, In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8681 LNCS, pp. 749-756

Reward-modulated learning rules for spiking neural networks have emerged that have been demonstrated to solve a wide range of reinforcement learning tasks. Despite this, little work has aimed to classify spike patterns by the timing of output spikes. Here, we apply a reward-maximising learning rule to teach a spiking neural network to classify input patterns by the latency of output spikes. Furthermore, we compare the performance of two escape rate functions that drive output spiking activity: the Arrhenius & Current (A&C) model and the Exponential (EXP) model. We find that A&C consistently outperforms EXP, especially in terms of the time taken to converge in learning. We also show that jittering input patterns with a low noise amplitude improves learning by reducing the variation in performance. © 2014 Springer International Publishing Switzerland.
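For context on the escape-rate noise models compared above, the sketch below shows how an exponential escape rate can turn a membrane-potential trace into stochastic output spikes. The parameter values are placeholders rather than the paper's, and the Arrhenius & Current variant is not reproduced here.

```python
# Minimal sketch of an exponential escape-rate (stochastic threshold) neuron,
# one of the two noise models compared above; parameters are illustrative.
import numpy as np

def exponential_escape_rate(u, theta=15.0, delta_u=1.0, rho_0=0.01):
    """Instantaneous firing rate as a function of membrane potential u (mV)."""
    return rho_0 * np.exp((u - theta) / delta_u)

def spikes_from_potential(u_trace, dt=0.1, rng=np.random.default_rng(0)):
    """Sample output spikes from a membrane-potential trace via the escape rate."""
    rho = exponential_escape_rate(u_trace)
    p_spike = 1.0 - np.exp(-rho * dt)   # probability of a spike in each time bin
    return rng.random(u_trace.shape) < p_spike
```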

H Ziaeepour, B Gardner (2011) Broad band simulation of Gamma Ray Bursts (GRB) prompt emission in presence of an external magnetic field, In: Journal of Cosmology and Astroparticle Physics (12), Article 001, IOP Publishing Ltd
B Gardner, A Gruening, V Mladenov, P Koprinkova-Hristova, G Palm, AEP Villa, B Appollini, N Kasabov (2013) Learning Temporally Precise Spiking Patterns through Reward Modulated Spike-Timing-Dependent Plasticity, In: Artificial Neural Networks and Machine Learning - ICANN 2013, 8131, pp. 256-263
Brian Gardner, Andre Gruning (2016) Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding, In: PLoS One 11(8), e0161335, Public Library of Science (PLoS)

Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that have a theoretical basis and yet can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relying on an instantaneous error signal to modify synaptic weights in a network (INST rule), and the other relying on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
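To make the INST/FILT distinction concrete, here is a minimal, hypothetical sketch of a single-synapse weight update driven either by the raw (instantaneous) spike-train error or by an exponentially filtered version of it. Variable names, the kernel time constant, and the learning rate are illustrative assumptions rather than the paper's formulation.

```python
# Hedged sketch contrasting an instantaneous vs. a filtered error signal
# for spike-based supervised learning; parameters are placeholders.
import numpy as np

def exp_filter(signal, tau, dt):
    """Causally filter a time series with an exponential kernel of width tau."""
    out, acc, decay = np.zeros_like(signal), 0.0, np.exp(-dt / tau)
    for t, x in enumerate(signal):
        acc = acc * decay + x
        out[t] = acc
    return out

def weight_update(target_spikes, output_spikes, psp_trace, dt=0.1,
                  eta=1e-3, tau_filter=10.0, filtered=True):
    """Return dw for one synapse given binary spike trains and its PSP trace.

    INST-style: use the raw error (target - output) at each time step.
    FILT-style: smooth that error with an exponential filter first.
    """
    error = target_spikes.astype(float) - output_spikes.astype(float)
    if filtered:
        error = exp_filter(error, tau_filter, dt)
    return eta * np.sum(error * psp_trace) * dt
```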

B Gardner, I Sporea, A Grüning (2015) Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks, In: Neural Computation 27(12), pp. 2548-2586, MIT Press

Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and generalizes well on an example data set. Our approach contributes both to a systematic understanding of how computations might take place in the nervous system and to a learning rule with strong technical capability.

B Gardner, A Grüning (2014) Classifying Patterns in a Spiking Neural Network, In: Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN)