Explainable and trustworthy AI

We research privacy-preserving artificial intelligence, including adversarial approaches to local anonymisation that reduce the risk of re-identification through linkage with other identifiable datasets.
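To illustrate what local anonymisation means in practice, the sketch below implements randomized response, a standard local differential privacy mechanism. It is a generic textbook example, not the group's specific adversarial algorithm: each user perturbs a binary attribute on their own device, and the server can only recover the population-level statistic by debiasing the aggregate.

```python
import math
import random


def randomized_response(value: int, epsilon: float, rng: random.Random) -> int:
    """Locally anonymise one binary attribute before it leaves the device.

    With probability p = e^eps / (e^eps + 1) the true value is reported;
    otherwise it is flipped. This gives epsilon-local differential privacy,
    so no single report reveals the user's true value with certainty.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return value if rng.random() < p else 1 - value


def estimate_mean(reports: list, epsilon: float) -> float:
    """Debias the aggregated noisy reports to estimate the population mean.

    If the true mean is m, the observed mean of reports is
    m * p + (1 - m) * (1 - p), which we invert here.
    """
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

With epsilon = 1.0 and tens of thousands of simulated users, the debiased estimate lands close to the true population mean even though every individual report may have been flipped.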



Federated learning seeks to make data ownership and provenance first-class concepts in learning and analytics systems by applying the principle of data minimisation at the aggregation step: raw data stays with its owner, and only aggregated model updates are shared. Our research into multi-objective evolutionary federated learning aims to minimise the communication cost of those model updates.
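To make the data-minimisation principle concrete, here is a minimal federated averaging sketch. This is the generic FedAvg-style pattern, not the group's multi-objective evolutionary method: each client trains on its private examples locally, and the server only ever sees the averaged weights, never the raw data.

```python
import statistics


def local_update(weights, data, lr=0.1):
    """One local training pass on a client's private data.

    Fits a toy 1-D linear model y = w * x + b by per-example gradient
    descent. The raw (x, y) examples never leave the client; only the
    updated weights are returned for aggregation.
    """
    w, b = weights
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err
    return [w, b]


def federated_average(client_weights):
    """Server-side aggregation: keep only the element-wise mean of the
    client updates, applying data minimisation at the aggregation step."""
    return [statistics.fmean(col) for col in zip(*client_weights)]
```

In each round the server broadcasts the global weights, every client runs local_update on its own data, and federated_average combines the results into the next global model.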

Our research

We are pushing the boundaries in this emerging field, with an emphasis on how privacy technologies may be combined in real-world systems. Our members also research the interpretability of deep neural networks, using feature-level interpretability to inform experiment design.
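As a generic example of feature-level interpretability (an illustrative sketch, not the group's specific method), permutation importance scores a feature by how much shuffling its values degrades a model's accuracy: features the model relies on produce a large drop, ignored features produce none.

```python
import random


def permutation_importance(predict, X, y, feature_idx, rng, n_repeats=10):
    """Score one feature by the mean accuracy drop when it is shuffled.

    predict: callable mapping a feature row to a predicted label.
    X, y: list of feature rows and the true labels.
    """
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        # Shuffle only the chosen column, breaking its link to the labels.
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats
```

Running this on a model that uses only its first input feature gives a large importance for that feature and an importance of zero for the unused one.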

Get in touch

Contact us at nicequery@surrey.ac.uk if you'd like to find out more about our research in explainable and trustworthy AI.

Find us


School of Computer Science and Electronic Engineering
University of Surrey