
Trustworthy and responsible AI

How can we ensure AI is of benefit to all?

Trustworthy AI that ensures fairness, inclusion and benefit for all members of society is central to the future acceptance and adoption of AI technologies in areas from healthcare to education. Realising trusted AI technologies requires cross-cutting collaboration in AI governance (law, regulation, ethics), AI technology (explainability, uncertainty, fairness/bias) and end-user application domains (health, business, entertainment).

Responsible AI must be embedded throughout research, design, development and deployment of AI technologies.


In emerging technologies such as self-driving cars, medical diagnostics and financial advice systems, AI makes decisions which have a major influence on our lives, health and rights – decisions that would normally be the responsibility of a human. With these technologies advancing at an unprecedented rate, it is vital that the ethical, legal and regulatory framework keeps pace in order to protect people, businesses and society from negative impacts.

Surrey’s Department of Politics draws on its knowledge of armed conflict and intervention, political science and EU studies to research the implications of AI across a range of fields including automation in the military (led by the Centre for International Intervention), democratic politics, the processing of electoral datasets, and the use of AI-driven analytical tools in public policy.

The legal implications of AI are also being explored at Surrey, with the School of Law undertaking pioneering research into how AI will change legal standards in intellectual property, tax and criminal law. A forthcoming book by Professor of Law and Health Sciences Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law, seeks to answer questions such as ‘can computers have rights?’, ‘who is responsible for a crime committed by a machine?’ and ‘are killer robots more humane than conventional soldiers?’

The University has led fundamental advances in AI and machine perception for over three decades through its Centre for Vision, Speech and Signal Processing (CVSSP), established in 1987, and, over the past decade, the Nature Inspired Computing and Engineering Group (NICE).

CVSSP is an internationally recognised leader in AI and audio-visual machine perception research, and has pioneered technology and award-winning spin-out companies in the biometric, communication, medical and creative industries.

The NICE group focuses on the development of computational models and algorithms inspired by systems found in the natural world to solve practical problems in sectors such as health, security, energy and the environment.

Globally, Surrey is taking a leading role in developing the fundamental principles which will underpin effective ways to characterise ‘information semantics’ for machine perception. It has joined a group of universities across the UK and US for the Multidisciplinary University Research Initiative (MURI), which aims to enable future machine perception systems to extract meaningful and actionable information from sensors mounted on autonomous vehicles, installed in smart cities, or supporting assisted living.

A major focus for the Institute will be ensuring that AI systems are fair and transparent. Professor Hilton explains: “If an algorithm uses data which is based on the status quo – for example an existing workforce where there are more men than women – the technology will replicate that bias. We have to consciously design AI systems which are inclusive.

“The other issue is that consumers need to be able to understand why certain decisions are made, so that we avoid the ‘black box’ situation where you feed in a question and the ‘computer says no’, with no explanation.”
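These two concerns can be made concrete with a small worked example. The sketch below is purely illustrative (synthetic data, scikit-learn, invented numbers; it is not a Surrey system): a classifier trained on historical hiring decisions that favoured one group reproduces the same selection-rate gap, and inspecting the learned weight on the protected attribute is one simple way of answering why the ‘computer said no’.

```python
# Toy illustration only: synthetic data and scikit-learn, not a Surrey system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical workforce": a protected attribute (0 = group A, 1 = group B)
# and a job-relevant skill score with the same distribution in both groups.
group = rng.integers(0, 2, n)
skill = rng.normal(0.0, 1.0, n)

# Historical hiring favoured group A even at equal skill: the status quo bias in the labels.
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 1.0, n) > 0.5).astype(int)

# Train on the biased labels, then audit the model's own decisions.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"selection rate, group A: {rate_a:.2f}")
print(f"selection rate, group B: {rate_b:.2f}")  # lower: the historical bias is replicated
print(f"weight on the protected attribute: {model.coef_[0][0]:.2f}")  # one simple 'why'
```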

There are also myriad challenges around the governance and regulation of AI, partly because the legal framework has not caught up with the technology. One big question Surrey’s law researchers are wrestling with is responsibility: if a piece of AI-based technology such as a self-driving car goes wrong, who is to blame? The inventor, the driver… or the machine itself? Tesla’s fatal crash was a sobering reminder that an unexpected situation – such as a broken safety barrier – can cause AI to fail, with devastating consequences.

However, the potential of AI to improve lives is enormous. In the field of healthcare, it promises to revolutionise diagnosis and treatment through data analysis and smart devices. It is enabling educational tools which are tailored to the way individuals learn, as well as new hyper-personalised experiences in entertainment. A five-year ‘Prosperity Partnership’ has seen Surrey teaming up with the BBC to develop technologies which will enable storytelling and news content to adapt based on users’ individual interests, location, devices and accessibility needs – with the aim of helping the UK creative industry to become a world leader in personalised media experiences.

Despite the challenges, it seems that AI is predominantly a force for good, with the potential to help businesses create value, improve our health and wellbeing, and reduce the problems faced by our ageing population.

“To me it’s not about machines taking over,” says Professor Hilton. “Rather than taking jobs away, I think AI will be the driver for upskilling the labour force, automating tedious tasks and freeing people to use their skills in a more creative way. At its best, AI will be about enriching lives and helping to create a safer, fairer society. The People-Centred AI Institute will lead research and training to ensure that future AI-enabled solutions are of benefit to all.”


AI poses a huge challenge in terms of cybersecurity because it involves vast amounts of data which may be highly sensitive (such as medical records). However, AI is also improving our security online through blockchain, which enables tamper-proof data to be stored without relying on a single person or organisation.
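The tamper-evidence property mentioned above comes from hash chaining: each record’s hash covers the previous record’s hash, so editing any entry invalidates everything after it. The minimal sketch below illustrates only that principle; it is not a production blockchain or any Surrey system, and the record contents are invented.

```python
# Minimal sketch of tamper-evident, hash-chained records (illustration only).
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link in the chain

def add_record(ledger, record):
    """Append a record whose hash covers both its content and the previous hash."""
    prev_hash = ledger[-1][1] if ledger else GENESIS
    payload = prev_hash + json.dumps(record, sort_keys=True)
    ledger.append((record, hashlib.sha256(payload.encode()).hexdigest()))

def verify(ledger):
    """Recompute every hash; any edited record breaks the chain from that point on."""
    prev_hash = GENESIS
    for record, stored_hash in ledger:
        payload = prev_hash + json.dumps(record, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != stored_hash:
            return False
        prev_hash = stored_hash
    return True

ledger = []
add_record(ledger, {"patient": "anon-1", "event": "scan booked"})
add_record(ledger, {"patient": "anon-2", "event": "referral"})
print(verify(ledger))                      # True: chain is intact
ledger[0][0]["event"] = "record deleted"   # tamper with the first entry
print(verify(ledger))                      # False: tampering is detected
```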

The University’s Surrey Centre for Cyber Security, an Academic Centre of Excellence in Cyber Security Research recognised by the UK government’s National Cyber Security Centre, is finding innovative ways to protect the privacy of individuals and organisations.

One example is its use of AI techniques to tackle online safety issues such as predatory grooming or production of first-generation indecent images of children. The Centre’s spin-out company Securium Ltd has developed ways of picking up nuances in chatroom and livestreamed communications, separating in real time what would otherwise appear to be a normal online conversation from one in which a child is being groomed.
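For illustration only, the general pattern behind this kind of real-time screening can be sketched as a streaming classifier that scores each incoming message and escalates a conversation once its running risk score crosses a threshold. The code below is a generic, hypothetical sketch: it is not Securium’s technology, and the scorer, threshold and messages are invented placeholders.

```python
# Generic sketch of real-time conversation screening (not Securium's system).
from collections import defaultdict

RISK_THRESHOLD = 0.4  # hypothetical operating point for the toy scorer below

def screen_stream(messages, score_message):
    """messages: iterable of (conversation_id, text); score_message: model giving a 0..1 risk."""
    running = defaultdict(float)
    flagged = set()
    for conv_id, text in messages:
        # Exponential moving average keeps the score responsive to recent messages.
        running[conv_id] = 0.7 * running[conv_id] + 0.3 * score_message(text)
        if running[conv_id] > RISK_THRESHOLD and conv_id not in flagged:
            flagged.add(conv_id)
            yield conv_id  # escalate this conversation to a human moderator

# Usage with a placeholder scorer; a real system would use a trained language model.
toy_scorer = lambda text: 0.9 if "secret" in text else 0.05
stream = [("c1", "hi there"), ("c1", "keep this a secret"), ("c1", "it's our secret")]
print(list(screen_stream(stream, toy_scorer)))   # ['c1']
```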