Publications

B Gardner, I Sporea, A Grüning (2015) Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks, In: Neural Computation 27(12), pp. 2548-2586

Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes both to a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.

B Gardner, A Grüning (2013) Learning Temporally Precise Spiking Patterns through Reward Modulated Spike-Timing-Dependent Plasticity, In: V Mladenov, P Koprinkova-Hristova, G Palm, AEP Villa, B Apolloni, N Kasabov (eds) Artificial Neural Networks and Machine Learning – ICANN 2013, LNCS 8131, pp. 256-263
B Gardner, I Sporea, A Grüning (2014) Classifying spike patterns by reward-modulated STDP, In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 8681, pp. 749-756

Reward-modulated learning rules for spiking neural networks have emerged that have been demonstrated to solve a wide range of reinforcement learning tasks. Despite this, little work has aimed to classify spike patterns by the timing of output spikes. Here, we apply a reward-maximising learning rule to teach a spiking neural network to classify input patterns by the latency of output spikes. Furthermore, we compare the performance of two escape rate functions that drive output spiking activity: the Arrhenius & Current (A&C) model and the Exponential (EXP) model. We find that A&C consistently outperforms EXP, especially in terms of the time taken to converge in learning. We also show that jittering input patterns with a low noise amplitude leads to an improvement in learning, by reducing the variation in performance.
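For orientation only, a minimal sketch of the Exponential escape-rate model named above, in its standard escape-noise form; all parameter values here are illustrative assumptions, and the Arrhenius & Current model is not reproduced since its exact expression involves additional terms not stated in this abstract:

```python
import math
import random

# Exponential (EXP) escape-rate model in its standard escape-noise form.
# All parameter values below are illustrative assumptions.
RHO_0 = 0.01      # firing rate at threshold (spikes/ms), assumed
THETA = 15.0      # firing threshold (mV), assumed
DELTA_U = 1.0     # noise width (mV), assumed

def exp_escape_rate(u):
    """Instantaneous firing rate for membrane potential u (mV):
    rho(u) = rho_0 * exp((u - theta) / delta_u)."""
    return RHO_0 * math.exp((u - THETA) / DELTA_U)

def spike(u, dt=1.0, rng=random.Random(1)):
    """Stochastic spike decision over one time step of length dt (ms):
    a spike occurs with probability 1 - exp(-rho(u) * dt)."""
    return rng.random() < 1.0 - math.exp(-exp_escape_rate(u) * dt)
```

Far below threshold the rate is negligible, at threshold it equals RHO_0, and above threshold it grows exponentially; stochastic escape noise of this kind is what lets reward-modulated rules relate output spike timing to synaptic weights in a graded way.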

H Ziaeepour, B Gardner (2011) Broad band simulation of Gamma Ray Bursts (GRB) prompt emission in presence of an external magnetic field, In: Journal of Cosmology and Astroparticle Physics (12), art. 001, IOP Publishing
B Gardner, A Grüning (2014) Classifying Patterns in a Spiking Neural Network, In: Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning – ESANN
Brian Gardner, André Grüning (2016) Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding, In: PLoS One 11(8), e0161335, Public Library of Science (PLoS)

Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes in that input features can be processed on a much shorter time-scale. For these reasons, much recent attention has focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one that relies on an instantaneous error signal to modify synaptic weights in a network (INST rule), and one that relies on a filtered error signal for smoother synaptic weight modifications (FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes, as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that the FILT rule is also implementable as an online method for increased biological realism.
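As a rough illustration of the distinction the abstract draws between the two rules, the toy update below compares a weight driven by the raw instantaneous spike-train error with one driven by an exponentially filtered version of the same error; the spike trains, time constants, and the update form itself are illustrative assumptions, not the paper's derivation:

```python
import numpy as np

# Toy spike trains on a 1 ms grid (all values assumed for illustration).
dt, T = 1.0, 100          # time step (ms) and number of steps
tau = 10.0                # error-filter time constant (ms), assumed
eta = 0.01                # learning rate, assumed

rng = np.random.default_rng(0)
target = (rng.random(T) < 0.05).astype(float)   # desired output spikes
actual = (rng.random(T) < 0.05).astype(float)   # network output spikes
presyn = (rng.random(T) < 0.10).astype(float)   # one presynaptic train

w_inst = 0.0              # weight trained on the instantaneous error
w_filt = 0.0              # weight trained on the filtered error
err_trace = 0.0

for t in range(T):
    err = target[t] - actual[t]
    # INST-style update: the weight changes only at error events,
    # so updates are sparse and abrupt.
    w_inst += eta * err * presyn[t]
    # FILT-style update: low-pass filtering spreads each error event
    # over roughly tau ms, giving smoother weight modifications.
    err_trace += (dt / tau) * (err - err_trace)
    w_filt += eta * err_trace * presyn[t]

print(w_inst, w_filt)
```

The filtered trace decays gradually after each target or actual spike, which is the smoothing effect the abstract credits for the FILT rule's stable convergence.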