Ethics, law and regulation

Developing ethical, legal and regulatory frameworks to protect people, businesses and society from the negative impacts of AI.

In emerging technologies such as self-driving cars, medical diagnostics and financial advice systems, AI makes decisions which have a major influence on our lives, health and rights – decisions that would normally be the responsibility of a human. With these technologies advancing at an unprecedented rate, it is vital that the ethical, legal and regulatory framework keeps pace in order to protect people, businesses and society from negative impacts.

Surrey’s Department of Politics draws on its knowledge of armed conflict and intervention, political science and EU studies to research the implications of AI across a range of fields including automation in the military (led by the Centre for International Intervention), democratic politics, the processing of electoral datasets, and the use of AI-driven analytical tools in public policy.

The legal implications of AI are also being explored at Surrey, with the School of Law undertaking pioneering research into how AI will change legal standards in intellectual property, tax and criminal law. A forthcoming book by Professor of Law and Health Sciences Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law, seeks to answer questions such as ‘can computers have rights?’, ‘who is responsible for a crime committed by a machine?’ and ‘are killer robots more humane than conventional soldiers?’

An AI system’s sense of fairness can only be as good as the data its algorithm is trained on, and sometimes these algorithms are so complex that even the people who built them cannot identify why certain decisions are made. A bank’s refusal to give a customer a mortgage, for example, may be based on an intricate web of knowledge about their income, spending patterns and internet history.

Surrey’s Centre for Vision, Speech and Signal Processing (CVSSP) has identified ‘counterfactual explainability’ as a method of analysing a complex algorithm and discovering why it behaves as it does, empowering individuals and businesses to challenge the ‘computer says no’ conundrum.
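
To make the idea concrete, below is a minimal, hypothetical sketch of a counterfactual explanation. It assumes a simple scikit-learn logistic regression standing in for a bank’s far more complex model, and the applicant features, search ranges and cost weights are invented for illustration; none of this is CVSSP’s actual method. The sketch searches for the smallest change to a refused applicant’s inputs that would flip the model’s decision.

```python
# A minimal, hypothetical sketch of a counterfactual explanation.
# A scikit-learn logistic regression stands in for the bank's (far more
# complex) model; features, thresholds and costs are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: [annual income (£k), monthly spending (£k)].
X = rng.uniform([10, 0.5], [120, 5.0], size=(500, 2))
y = (X[:, 0] - 15 * X[:, 1] > 20).astype(int)  # hidden approval rule

model = LogisticRegression().fit(X, y)

applicant = np.array([40.0, 2.5])  # an applicant the model refuses
print("Decision:", "approved" if model.predict([applicant])[0] else "refused")

# Grid-search for the cheapest change that flips 'refused' to 'approved'.
best = None
for d_income in np.arange(0.0, 80.0, 0.5):     # raise income by up to £80k
    for d_spend in np.arange(0.0, 2.5, 0.05):  # cut spending by up to £2.5k
        candidate = applicant + np.array([d_income, -d_spend])
        if model.predict([candidate])[0] == 1:
            cost = d_income + 10.0 * d_spend   # crude measure of effort
            if best is None or cost < best[0]:
                best = (cost, d_income, d_spend)

if best is not None:
    _, d_income, d_spend = best
    print(f"Counterfactual: income +£{d_income:.1f}k, "
          f"spending -£{d_spend:.2f}k would flip the decision.")
```

Because the search only probes the model’s predictions, an explanation of this shape can be produced without exposing the model’s internals, which is what makes the approach useful for challenging opaque decisions.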

This concept has now been included in one of the world’s most widely used machine learning toolkits, Google’s TensorFlow, and has also been referenced in guidance on GDPR in a House of Commons Select Committee report.
