Robust, certified, and scalable federated machine unlearning for privacy-preserving AI

This project aims to develop the next generation of federated machine unlearning algorithms that can efficiently and verifiably erase the influence of specific data in distributed models, while preserving both model performance and participant privacy.

Start date

1 October 2026

Duration

3.5 years

Application deadline

Funding information

Fully funded studentship covering home (UK) university fees, additional research training, travel funds, and a stipend at the UKRI standard rate (£21,805 for the 2026/27 academic year).

About

As federated learning systems become increasingly embedded in high-stakes domains such as healthcare and finance, where data cannot be shared directly, the ability to selectively remove the influence of specific data from trained models becomes critical. Yet, despite the EU’s General Data Protection Regulation (GDPR) enshrining a “right to be forgotten”, current federated learning practice offers no robust, scalable, or provable mechanism to guarantee this right once a model has been trained.

The distributed nature of federated settings introduces unique challenges for unlearning: the central server never directly accesses raw data, information encoded in aggregated models can persist across participants, and retraining from scratch is often computationally infeasible at scale.

In this project, you will develop the next generation of federated machine unlearning algorithms—methods that can efficiently deliver genuine, verifiable, and robust erasure without sacrificing model performance or participant privacy.

The most active research frontiers in this field include:

  • Certified unlearning with formal guarantees — developing methods with provable erasure bounds, connecting to differential privacy and statistical divergence frameworks
  • Robustness to adversarial relearning — designing unlearning protocols that remain stable under fine-tuning attacks, jailbreak-style probing, and multi-turn adversarial interaction
  • Evaluation and verification — building federated-specific benchmarks and auditing tools
  • Scalable unlearning for foundation models — extending federated unlearning to large pretrained models, where parameter-efficient methods must balance erasure guarantees against model utility
  • Unlearning as an AI safety primitive — exploring how federated unlearning can contribute to removing hazardous or harmful knowledge from collaboratively trained models, positioning the work within the broader trustworthy AI agenda.
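For context, one widely used formalisation of certified unlearning (adapted from the differential-privacy literature; this is an illustrative definition, not a description of this project's specific methods) requires that the unlearned model be statistically indistinguishable from a model retrained from scratch without the forgotten data:

```latex
% (\varepsilon,\delta)-certified unlearning: for a (randomised) learning
% algorithm A, dataset D, forget set D_f \subseteq D, and unlearning
% mechanism U, require for every measurable set of models S:
\[
  \Pr\!\big[\, U(A(D), D, D_f) \in S \,\big]
  \;\le\; e^{\varepsilon}\, \Pr\!\big[\, A(D \setminus D_f) \in S \,\big] + \delta,
\]
\[
  \Pr\!\big[\, A(D \setminus D_f) \in S \,\big]
  \;\le\; e^{\varepsilon}\, \Pr\!\big[\, U(A(D), D, D_f) \in S \,\big] + \delta.
\]
```

Smaller values of \(\varepsilon\) and \(\delta\) give stronger erasure guarantees; the federated setting makes certifying such bounds harder because no single party ever holds the raw data.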

The project sits at the intersection of privacy-preserving machine learning, distributed systems, and trustworthy AI, with implications for regulatory compliance and real-world deployment of federated systems.

Eligibility criteria

We are looking for a motivated and intellectually curious researcher with:

  • A first-class or strong upper-second-class undergraduate degree (or a Master's degree) in Computer Science, Mathematics, Statistics, or a closely related field
  • Solid grounding in machine learning and/or probability/statistics
  • Programming proficiency in Python and familiarity with ML frameworks (PyTorch, JAX, or similar)
  • Strong analytical and problem-solving skills
  • Good written and verbal communication skills in English.

Experience in any of the following is desirable but not required: federated learning, differential privacy, adversarial robustness, distributed systems, or LLM fine-tuning.

You will need to meet the minimum entry requirements for our Computer Science PhD programme.

Open to candidates who pay UK/home rate fees. See UKCISA for further information.

How to apply

Applications should be submitted via the Computer Science PhD programme page. In place of a research proposal, you should upload a document stating the title of the project that you wish to apply for and the name of the relevant supervisor.

Studentship FAQs

Read our studentship FAQs to find out more about applying and funding.

Contact details

Pedro Porto Buarque de Gusmão
05 BB 02
E-mail: p.gusmao@surrey.ac.uk

Studentships at Surrey

We have a wide range of studentship opportunities available.