Press release
Published: 26 April 2022

New tools could stop the deluge of WhatsApp spam messages and strengthen user privacy

Cybersecurity experts have unveiled new privacy tools that could put a stop to the growing number of spam messages found in public groups on instant messaging platforms such as WhatsApp and Signal.   

The innovation comes after researchers joined thousands of politically focused WhatsApp groups to study how spam messages were distributed. The researchers found that 1 in 10 messages was spam, and that these messages bore a striking resemblance to their email counterparts, with clickbait, adult content, and hidden URLs being most prevalent.

Concerned about the scale of this problem, the researchers developed a new set of privacy-preserving tools that can automatically detect and filter spam locally without ever needing to allow third parties, such as Facebook, to inspect message content.  
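The key idea described above, filtering spam on the user's own device so that message content never has to be shared with a third party, can be illustrated with a minimal sketch. The keyword list, URL-shortener list, scoring weights, and threshold below are all hypothetical choices for illustration; they are not the researchers' actual model.

```python
import re

# Illustrative on-device spam filter: the message is scored locally
# using simple features (clickbait keywords, shortened/obscured URLs),
# so no content ever leaves the device. All lists and weights here are
# hypothetical, chosen only to demonstrate the local-filtering idea.
CLICKBAIT_TERMS = {"free", "winner", "click here", "limited offer"}
URL_SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}

def spam_score(message: str) -> float:
    """Compute a spam score from locally extracted features."""
    text = message.lower()
    score = 0.0
    # Each clickbait phrase found adds to the score.
    score += sum(0.4 for term in CLICKBAIT_TERMS if term in text)
    # URLs behind known shorteners (often used to hide destinations)
    # add more weight.
    hosts = re.findall(r"https?://([^/\s]+)", text)
    score += sum(0.5 for host in hosts if host in URL_SHORTENERS)
    return score

def is_spam(message: str, threshold: float = 0.5) -> bool:
    """Flag a message as spam once its local score crosses a threshold."""
    return spam_score(message) >= threshold
```

Because the classification runs entirely on the device, the platform operator never inspects the plaintext, which is the property the researchers' tools aim to preserve under end-to-end encryption.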

Professor Nishanth Sastry, co-author of the study from the University of Surrey, said: 

"Messaging platforms such as WhatsApp and Signal are increasingly important because they are fast becoming the norm for how the world communicates with family and friends, as well as keeping up to date with the news. So, it is important that users have access to tools that will safeguard their privacy and stop them being vulnerable to malicious actors wanting to cause harm or spread misinformation.

"In light of recent developments, such as the UK's Online Harms Bill, our innovation offers a technological advance that can help deliver on user-safety goals while protecting user privacy." 

Dr Gareth Tyson, co-author of the study from Queen Mary University of London, said:  

"End-to-end encryption is a vital technology for ensuring user privacy. However, its introduction has meant that certain beneficial services, such as spam filtering, become much harder to implement. This project is one of the first to offer robust techniques to help users moderate their social messaging feeds, without having to compromise their privacy. Considering the amount of data we generate online, keeping it safe from prying eyes will become ever more important in the future."  

The research team is now looking to apply their detection technology to hate speech on messaging platforms and other digital spaces.

The research, which will be presented at this year's Web Conference, was conducted by researchers from the University of Surrey, King's College London, Telefonica Research, Queen Mary University of London, Hong Kong University of Science and Technology, and Rutgers University.

Note to editors  

·         Read the full study here.  

·         Professor Nishanth Sastry is available for interview upon request.   

·         Contact the University of Surrey's Press Office using the details below.

Media Contacts

External Communications and PR team
Phone: +44 (0)1483 684380 / 688914 / 684378
Out of hours: +44 (0)7773 479911