Explainable and trustworthy AI
We research privacy-preserving artificial intelligence, including adversarial approaches to local anonymisation algorithms that reduce the risk of re-identification through linkage with other identifiable datasets.
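As an illustrative sketch only (not our group's specific algorithms), randomised response is a classic local anonymisation mechanism: each individual perturbs their own answer before it leaves their device, so no single raw record can be linked to other datasets, while population-level statistics remain recoverable by debiasing:

```python
import math
import random

def randomized_response(true_bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (1 + e^eps); otherwise flip it.

    The curator only ever sees the perturbed bit, which satisfies
    epsilon-local differential privacy.
    """
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return true_bit if random.random() < p_truth else not true_bit

def estimate_proportion(reports, epsilon: float) -> float:
    """Debias the noisy reports to recover the true population proportion."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

With many respondents the debiased estimate converges to the true proportion, yet any individual report remains deniable.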
Federated learning seeks to make data ownership and provenance first-class concepts in learning and analytics systems through the principle of data minimisation: only aggregated model updates are shared, never the underlying data. Our research into multi-objective evolutionary federated learning aims to minimise the communication cost of these model updates.
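As a minimal sketch of the aggregation principle (our multi-objective evolutionary methods are considerably more involved), federated averaging combines client parameter vectors weighted by local dataset size; only these vectors cross the network:

```python
def federated_average(client_weights, client_sizes):
    """Aggregate client parameter vectors, weighted by local dataset size.

    Only the parameter vectors leave each client; the raw training data
    never does -- data minimisation applied to aggregation.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

Minimising communication cost then amounts to reducing how often, and how compactly, these vectors are exchanged.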
We are pushing the boundaries of this emerging field, with an emphasis on how privacy technologies can be combined in real-world systems. Our members also research the interpretability of deep neural networks, using feature-level interpretability to inform experiment design.
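One common model-agnostic route to feature-level interpretability (an illustrative technique, not necessarily the one our members use) is permutation importance: shuffle one feature's values and measure how much a quality metric degrades, revealing how strongly the model relies on that feature:

```python
import random

def permutation_importance(predict, X, y, feature, metric):
    """Return the drop in `metric` after shuffling one feature column of X.

    `predict` maps a list of rows to predictions; a large drop means the
    model leans heavily on that feature.
    """
    baseline = metric(predict(X), y)
    X_perm = [row[:] for row in X]            # copy rows so X is untouched
    column = [row[feature] for row in X_perm]
    random.shuffle(column)
    for row, value in zip(X_perm, column):
        row[feature] = value
    return baseline - metric(predict(X_perm), y)

def accuracy(preds, targets):
    """Fraction of exact matches between predictions and targets."""
    return sum(p == t for p, t in zip(preds, targets)) / len(targets)
```

Features whose permutation barely moves the metric can often be dropped from the next round of experiments, which is where this feeds into experiment design.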
Get in touch
Contact us at email@example.com if you'd like to find out more about our research in explainable and trustworthy AI.