
Dr Brian Gardner


Research Fellow
+44 (0)1483 682627
23 BB 02

Biography

I completed an undergraduate master's degree in Physics at the University of Exeter, and received my PhD in Computer Science in 2016 for work on the theoretical aspects of spiking neural networks. I am currently working in the Department as a Research Fellow on the European project SPIKEFRAME: a framework for learning rules in networks of spiking neurons, under the supervision of Dr André Grüning, who was also my PhD supervisor.

Research interests

  • Neural networks
  • Synaptic plasticity
  • Reinforcement learning

Publications

Gardner B, Sporea I, Grüning A (2014) Classifying spike patterns by reward-modulated STDP. Lecture Notes in Computer Science 8681, pp. 749-756
Reward-modulated learning rules for spiking neural networks have emerged that have been demonstrated to solve a wide range of reinforcement learning tasks. Despite this, little work has aimed to classify spike patterns by the timing of output spikes. Here, we apply a reward-maximising learning rule to teach a spiking neural network to classify input patterns by the latency of output spikes. Furthermore, we compare the performance of two escape rate functions that drive output spiking activity: the Arrhenius & Current (A&C) model and the Exponential (EXP) model. We find that A&C consistently outperforms EXP, especially in terms of the time taken to converge in learning. We also show that jittering input patterns with a low noise amplitude leads to an improvement in learning, by reducing the variation in the performance.
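As background to the escape rate comparison above: in escape-noise neuron models, output spikes are drawn stochastically, with an instantaneous firing rate set by an escape rate function of the membrane potential. The short Python sketch below illustrates the idea for the EXP model only, using a simple exponential escape rate rho(u) = rho_0 * exp((u - theta) / delta_u); all parameter values here are illustrative assumptions rather than those used in the paper, and the A&C model additionally makes the rate depend on the rate of change of the potential.

    import numpy as np

    # Assumed parameters for an exponential escape-rate (EXP) model.
    RHO_0 = 10.0    # firing rate at threshold (Hz)
    THETA = -55.0   # spike threshold (mV)
    DELTA_U = 1.0   # sharpness of the soft threshold (mV)
    DT = 1e-3       # simulation time step (s)

    def exp_escape_rate(u):
        """Instantaneous firing rate for membrane potential u (mV)."""
        return RHO_0 * np.exp((u - THETA) / DELTA_U)

    def draw_spikes(u_trace, rng=np.random.default_rng(0)):
        """Draw stochastic spikes along a membrane-potential trace.

        In each step the neuron fires with probability
        p = 1 - exp(-rho(u) * dt), i.e. an inhomogeneous Poisson process.
        """
        p = 1.0 - np.exp(-exp_escape_rate(u_trace) * DT)
        return rng.random(u_trace.shape) < p

    # Example: a potential ramping towards threshold spikes mostly near the end.
    u = np.linspace(-70.0, -54.0, 1000)  # 1 s ramp
    print(int(draw_spikes(u).sum()), "spikes in 1 s")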
Gardner B, Sporea I, Grüning A (2015) Learning Spatiotemporally Encoded Pattern Transformations in Structured Spiking Neural Networks. Neural Computation 27 (12), pp. 2548-2586
Information encoding in the nervous system is supported through the precise spike timings of neurons; however, an understanding of the underlying processes by which such representations are formed in the first place remains an open question. Here we examine how multilayered networks of spiking neurons can learn to encode for input patterns using a fully temporal coding scheme. To this end, we introduce a new supervised learning rule, MultilayerSpiker, that can train spiking networks containing hidden layer neurons to perform transformations between spatiotemporal input and output spike patterns. The performance of the proposed learning rule is demonstrated in terms of the number of pattern mappings it can learn, the complexity of network structures it can be used on, and its classification accuracy when using multispike-based encodings. In particular, the learning rule displays robustness against input noise and can generalize well on an example data set. Our approach contributes to both a systematic understanding of how computations might take place in the nervous system and a learning rule that displays strong technical capability.
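One standard way to quantify how closely a trained network's output spike pattern matches a spatiotemporal target is a spike-train metric such as the van Rossum distance, in which each train is convolved with a causal exponential kernel before taking an L2 difference. The Python sketch below is a minimal illustration of that metric; the time constant, time step, and duration are assumed values, not parameters taken from the paper.

    import numpy as np

    def van_rossum_distance(train_a, train_b, tau=0.01, dt=1e-4, t_max=0.5):
        """van Rossum distance between two spike trains (times in seconds).

        Each train is convolved with a causal exponential kernel
        exp(-t / tau); the distance is the L2 norm of the difference,
        normalised by the kernel time constant.
        """
        t = np.arange(0.0, t_max, dt)

        def filtered(spike_times):
            trace = np.zeros_like(t)
            for s in spike_times:
                trace += np.where(t >= s, np.exp(-(t - s) / tau), 0.0)
            return trace

        diff = filtered(train_a) - filtered(train_b)
        return np.sqrt(np.sum(diff ** 2) * dt / tau)

    # Identical trains give 0; a 5 ms shift gives a small positive distance.
    print(van_rossum_distance([0.1, 0.2], [0.1, 0.2]))    # 0.0
    print(van_rossum_distance([0.1, 0.2], [0.1, 0.205]))  # > 0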
Gardner B, Grüning A (2013) Learning Temporally Precise Spiking Patterns through Reward Modulated Spike-Timing-Dependent Plasticity. ICANN, Lecture Notes in Computer Science 8131, pp. 256-263, Springer
Ziaeepour H, Gardner B (2011) Broad band simulation of Gamma Ray Bursts (GRB) prompt emission in presence of an external magnetic field. Journal of Cosmology and Astroparticle Physics (12), article 001, IOP Publishing
Gardner B, Grüning A (2016) Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding. PLoS One 11 (8), e0161335, Public Library of Science
Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relying on an instantaneous error signal to modify synaptic weights in a network (the INST rule), and the other relying on a filtered error signal for smoother synaptic weight modifications (the FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
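To make the INST/FILT distinction concrete, the Python sketch below contrasts a weight update driven by a raw, instantaneous spike-train error with one driven by an exponentially low-pass filtered version of the same error. It is a schematic illustration under assumed parameter values and a hypothetical eligibility trace, not the actual INST and FILT update equations derived in the paper.

    import numpy as np

    DT = 1e-4        # simulation time step (s); assumed value
    TAU_ERR = 0.01   # error-filter time constant (s); assumed value
    ETA = 0.05       # learning rate; assumed value

    def weight_updates(target, actual, trace):
        """Contrast instantaneous and filtered error-driven weight updates.

        target, actual: boolean arrays marking desired and observed output
        spike times; trace: presynaptic activity (eligibility) trace of the
        synapse. Returns the total weight change under each scheme.
        """
        error = target.astype(float) - actual.astype(float)

        # INST-style update: driven directly by the raw error at each step.
        dw_inst = ETA * np.sum(error * trace)

        # FILT-style update: the error is first exponentially low-pass
        # filtered, which smooths the resulting weight modifications.
        filt_err = np.zeros_like(error)
        decay = np.exp(-DT / TAU_ERR)
        for i in range(1, len(error)):
            filt_err[i] = filt_err[i - 1] * decay + error[i]
        dw_filt = ETA * (DT / TAU_ERR) * np.sum(filt_err * trace)

        return dw_inst, dw_filt

    # Example: the actual spike arrives 2 ms after the 50 ms target spike.
    n = 2000
    target = np.zeros(n, dtype=bool); target[500] = True
    actual = np.zeros(n, dtype=bool); actual[520] = True
    trace = np.exp(-np.arange(n) * DT / 0.005)  # hypothetical decaying trace
    print(weight_updates(target, actual, trace))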