Eleanor Mill

Postgraduate Research Student

Academic and research departments

Department of Business Analytics and Operations


My research project

University roles and responsibilities

  • University Ethics Committee

My qualifications

BSc Mathematics
University of East Anglia
MSc Business Analytics
University of Surrey


My publications

Eleanor Mill, Wolfgang Garn, Nick Ryman-Tubb (2022) Managing Sustainability Tensions in Artificial Intelligence: Insights from Paradox Theory, In: AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, pp. 491-498, Association for Computing Machinery (ACM)

This paper offers preliminary reflections on the sustainability tensions present in Artificial Intelligence (AI) and suggests that Paradox Theory, an approach borrowed from the strategic management literature, may help guide scholars towards innovative solutions. The benefits of AI to our society are well documented. Yet those benefits come at environmental and sociological cost, a fact which is often overlooked by mainstream scholars and practitioners. After examining the nascent corpus of literature on the sustainability tensions present in AI, this paper introduces the Accuracy-Energy Paradox and suggests how the principles of paradox theory can guide the AI community to a more sustainable solution.

Eleanor Ruth Mill, Wolfgang Garn, Nicholas F Ryman-Tubb, Christopher Turner (2024) The SAGE Framework for Explaining Context in Explainable Artificial Intelligence, In: Applied Artificial Intelligence (AAI) 38(1), 2318670

Scholars often recommend incorporating context into the design of an explainable artificial intelligence (XAI) model in order to deliver the successful integration of an explainable agent into a real-world operating environment. However, few in the field of XAI have expanded upon the meaning of context, or provided clarification as to what they consider its constituent parts. This paper addresses that gap by providing a thematic review of the extant literature, revealing an interaction between the contextual elements of Setting, Audience, Goals and Ethics (SAGE). This paper therefore proposes SAGE as a conceptual framework that enables researchers to build audience-centric and context-sensitive XAI, thereby strengthening the prospects for successful adoption of an XAI solution.

Eleanor Ruth Mill, Wolfgang Garn, Nicholas F Ryman-Tubb, Christopher Turner (2023) Opportunities in Real Time Fraud Detection: An Explainable Artificial Intelligence (XAI) Research Agenda, In: International Journal of Advanced Computer Science and Applications 14(5), pp. 1172-1186, SAI Organization

Regulatory and technological changes have recently transformed the digital footprint of credit card transactions, providing at least ten times more data for fraud detection than was previously available for analysis. This newly enhanced dataset challenges the scalability of traditional rule-based fraud detection methods and creates an opportunity for wider adoption of artificial intelligence (AI) techniques. However, the opacity of AI models, combined with the high stakes involved in the finance industry, means practitioners have been slow to adopt them. In response, this paper argues for more researchers to engage with investigations into the use of Explainable Artificial Intelligence (XAI) techniques for credit card fraud detection. Firstly, it sheds light on recent regulatory changes which are pivotal in driving the adoption of new machine learning (ML) techniques. Secondly, it examines the operating environment for credit card transactions, an understanding of which is crucial to operationalising solutions. Finally, it proposes a research agenda comprising four key areas of investigation for XAI, arguing that further work would contribute towards a step-change in fraud detection practices.

(2020) Interpretable Machine Learning, In: Eleanor Ruth Mill (eds.), POSTnote (633), The Parliamentary Office of Science and Technology

Machine learning (ML, a type of artificial intelligence) is increasingly being used to support decision making in a variety of applications, including recruitment and clinical diagnoses. While ML has many advantages, there are concerns that in some cases it may not be possible to explain completely how its outputs have been produced. This POSTnote gives an overview of ML and its role in decision-making. It examines the challenges of understanding how a complex ML system has reached its output, and some of the technical approaches to making ML easier to interpret. It also gives a brief overview of some of the proposed tools for making ML systems more accountable.