Firebreaks involving human decision making are crucial to safe AI systems
Commenting on this morning’s Reith Lecture by Professor Stuart Russell, the University of Surrey’s Professor Adrian Hilton, Director of the Surrey Institute for People-Centred Artificial Intelligence, said:
"There is a common approach to general-purpose AI which conceptualises it as an autonomous, end-to-end solution that comes up with ideas and implements those it classifies as best. Thankfully, the technology still has a long way to go, so there's a chance to reframe our thoughts and ambitions so that we can benefit safely from AI.
"Rather than lazily leaving AI to make our decisions, AI needs to be designed to work in partnership with humans. We must build firebreaks into AI systems where machines check in with real people to determine the best way forward. And because humans are flawed, those firebreaks should be designed to challenge us and encourage good decision making, rather than letting us plump for the easy option.
"Take the example Professor Russell gave of using AI machines to look after our children. What working parent facing the possibility of Covid school closures wouldn't welcome some robot help with childcare? An intelligent educator that adapts to our children's learning needs? But we must set parental limits on how much responsibility that machine has, and we must oversee the values it imparts to our children, just as we would oversee what a private tutor taught.
“Decision-making firebreaks are a way of ensuring humans stay in control. We need more robust controls than the three-pronged approach proposed by Professor Russell. With AI currently being pushed forward by private corporations, this is a priority we must address before the technology advances much further.”
The Surrey Institute for People-Centred AI brings together over 30 years of leading expertise in the foundations and practice of AI and machine learning with domain expertise across the humanities, law and regulation, ethics, politics, business, and the physical and health sciences, to inform future AI policy and ensure that people are at the heart of future AI.
External Communications and PR team
Phone: +44 (0)1483 684380 / 688914 / 684378
Out of hours: +44 (0)7773 479911