Moral legitimisation in science, technology and AI innovation policies

This research project aims to show how policymakers use discursive strategies to construct a new, morally oriented social contract on Artificial Intelligence.

Overview

European institutions formulating national Artificial Intelligence (AI) strategies endeavour to reconcile the aspiration of harnessing its potential with the need to safeguard communities against its perceived ills.

In this research, we show how policymakers use discursive strategies to construct a new, morally oriented social contract on AI. We identify three main strategies for creating this new meaning of AI: outlining its moral domain, stabilising meaning through identification with that moral domain, and enacting meaning via an emotional tension between fear of and hope in AI. We contribute to an understanding of the moral legitimation of Science, Technology and Innovation (STI) policy through discursive strategies that combine cognitive references, moral domains and emotions. In doing so, we establish the importance of analysing STI policy beyond its constructs and the creation of ethical frames.

Team