Trustworthy AI for socially compliant autonomous driving
This PhD research aims to develop an intelligent driving decision-making system for autonomous vehicles, addressing the challenges of navigating complex driving situations while adhering to ethical and legal principles using symbolic AI and reinforcement learning.
Start date: 1 January 2024
Funding source: Project Funded - AI Institute
A stipend of £18,622 for 23/24, which will increase each year in line with the UK Research and Innovation (UKRI) rate, plus Home rate fee allowance of £4,712 (with automatic increase to UKRI rate each year). The studentship is offered for 3.5 years. For exceptional international candidates, there is the possibility of obtaining a scholarship to cover overseas fees.
Dr Alireza Tamaddoni Nezhad, Reader (Associate Professor) in Machine Learning and Computational Intelligence
Driving is highly dynamic and takes place in varied environments, so for an autonomous vehicle (AV) to behave reasonably and safely it must weigh many variables that go well beyond simple rules. As a result, there are concerns about AV behaviour in complex and uncertain driving situations, where the vehicle must navigate the roadways while balancing factors such as safety, legality, ethics and comfort in the same way, functionally, as a responsible human driver.
To drive responsibly, AVs need the capability to adhere to a complex set of legal and ethical requirements in addition to technical design requirements. There are laws and regulations specific to AVs that vary across jurisdictions, but AVs are also generally required to conform to human-centric laws and regulations. The former category is likely simpler from a compliance standpoint, but human-centric laws and regulations are not always so clear cut. For instance, if a speed limit is 50 miles per hour (MPH) in a particular area, it seems clear enough that the AV cannot exceed 50 MPH. What happens, however, in an overtaking situation where the AV needs to speed up to safely pass a slow vehicle and avoid a collision?
Speeding to avoid an injury is often justified as a matter of law, but how that is determined is a very fact-specific inquiry. Similarly, many liability cases turn on whether a driver behaved “reasonably” according to the standard of what a hypothetical “responsible person” would have done in the same circumstance. AVs will need to be designed in such a way that they at least behave reasonably, although people disagree about what that entails.
How the AV responds in these sorts of circumstances must be guided first and foremost by an appropriate normative framework aligned with predefined legal rules and societal values. One challenge for AV design is therefore to create a framework of what is generally acceptable, because even lawyers and ethicists can and do debate what this framework should look like. A more substantial challenge is translating a human-centric framework into a format and system of rules interpretable by an AV, and deciding how to integrate that framework into a Driving Decision-Making (DDM) system for responsible behaviour.
The aim of this PhD research is to address these challenges by designing an intelligent DDM system which integrates higher-level reasoning, building upon the recent progress in symbolic AI and reinforcement learning, while respecting a range of ethical and legal principles.
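To make the aim concrete, one common way of combining symbolic rules with a learned policy is action "shielding": the reinforcement learning policy proposes candidate actions, and hard legal and ethical rules filter out non-compliant ones before execution. The following is a minimal sketch of that idea, not the system this project will build; all class names, thresholds (e.g. a 5 MPH overtaking margin), and rules are illustrative assumptions.

```python
# Sketch: symbolic rules acting as a "shield" over an RL driving policy.
# All rules and numbers below are illustrative, not a legal standard.

from dataclasses import dataclass

@dataclass
class State:
    speed_mph: float    # current AV speed
    limit_mph: float    # posted speed limit
    overtaking: bool    # currently committed to an overtake
    gap_ahead_m: float  # distance to the vehicle ahead

@dataclass
class Action:
    target_speed_mph: float

def legal_ok(state: State, action: Action) -> bool:
    """Symbolic legal rule: never exceed the limit, except by a small
    margin while completing an overtake (itself a debated exception)."""
    margin = 5.0 if state.overtaking else 0.0
    return action.target_speed_mph <= state.limit_mph + margin

def safety_ok(state: State, action: Action) -> bool:
    """Symbolic safety rule: do not accelerate into a closing gap."""
    return not (state.gap_ahead_m < 20.0
                and action.target_speed_mph > state.speed_mph)

def shield(state: State, candidates: list[Action]) -> Action:
    """Return the most-preferred candidate that satisfies every rule;
    candidates are assumed sorted by the RL policy's preference.
    Fall back to holding a compliant speed if none pass."""
    for a in candidates:
        if legal_ok(state, a) and safety_ok(state, a):
            return a
    return Action(target_speed_mph=min(state.speed_mph, state.limit_mph))

# The policy proposes speeding up to finish an overtake quickly;
# the shield rejects 60 MPH (beyond limit + margin) but allows 54 MPH.
state = State(speed_mph=48, limit_mph=50, overtaking=True, gap_ahead_m=60)
proposals = [Action(60), Action(54), Action(50)]
chosen = shield(state, proposals)  # Action(target_speed_mph=54)
```

In a real DDM system the rule layer would be far richer (e.g. expressed in a logic formalism learned or authored from legislation), but the separation shown here, with a learned proposer constrained by an auditable symbolic layer, is one plausible architecture for the kind of integration this project investigates.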
Open to any UK or international candidates.