Justice and Artificial Intelligence
Hardly a day passes in which we do not hear about the alleged benefits that increased reliance upon technology will bring: self-driving cars and self-monitoring prescription drugs will free us from some of life’s daily burdens, while drones and robots will help manage and solve pollution and related environmental problems. Yet, alongside this optimistic, ‘solutionist’ view of technological development, there exists a less sanguine picture in which the difficulties and challenges of our technological present and future are central. This Major Project seeks to moderate and combine both strands, its starting point being an examination of some of the many current intersections between justice and artificial intelligence, intersections that will become not only more pronounced but also more numerous as reliance upon technology increases.
The project focusses in particular upon some of the specific legal and ethical issues raised by reliance upon deep neural networks (DNNs) and cognate technologies as decision-making-cum-surveillance tools in many areas of contemporary society. The project will do three things. First, it will address the apparent justice concerns which arise within contexts such as (i) the implementation of automated decision-making in the immigration, citizenship, welfare, public and criminal law spheres; and (ii) the use of facial recognition and related technologies for surveillance, crowd control and numerous other purposes by police services and many other related public and private bodies worldwide. The initial research idea here is a truism: the notion of justice is complex and has numerous facets. Since not every facet need be in play in each area of technological concern, the aim is to determine which aspects of justice are in play when, matching particular areas of concern with particular facets of (in)justice.
Second, and in addition to articulating and evaluating those justice concerns, the project will examine technological means of alleviating those concerns. The potential technical means of redress that will be evaluated include recent developments in explainable machine learning. The project investigates how explainable outputs of DNNs could be utilised by human decision-making experts with a view to conducting ‘justice audits’ of the outcomes of both surveillance-cum-recognition and automated decision processes.
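By way of illustration only, the sketch below shows one way in which per-feature attributions from an explainability toolkit (here Captum’s Integrated Gradients) might be surfaced to a human reviewer conducting such an audit. The model, the feature names and the audit threshold are all hypothetical assumptions made for this example, not the project’s own tooling or method.

```python
# Minimal sketch: surfacing DNN feature attributions for a human 'justice audit'.
# The model, feature names and threshold are illustrative assumptions only.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

FEATURES = ["age", "income", "years_resident", "postcode_risk"]  # hypothetical inputs

# Stand-in for a trained automated decision model (e.g. a benefit or visa score).
model = nn.Sequential(
    nn.Linear(len(FEATURES), 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)
model.eval()

applicant = torch.rand(1, len(FEATURES))  # one hypothetical case

# Integrated Gradients attributes the model's output to each input feature.
ig = IntegratedGradients(model)
attributions = ig.attribute(applicant, baselines=torch.zeros_like(applicant))

# Flag features whose influence exceeds an (assumed) audit threshold, so a
# reviewer can ask whether the decision leans on a normatively suspect attribute.
THRESHOLD = 0.1
for name, score in zip(FEATURES, attributions.squeeze(0).tolist()):
    flag = "REVIEW" if abs(score) > THRESHOLD else "ok"
    print(f"{name:>15}: {score:+.3f}  [{flag}]")
```

The design choice here is simply that the explanation is routed to a human expert rather than acted upon automatically; which attributes count as suspect, and what threshold warrants review, are exactly the justice questions the project would interrogate.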
Third, the project examines the broad spectrum of changes in personal, social and institutional life that the technologies underpinning automated decision-making, facial recognition, surveillance and the like – what some have called ubiquitous computing or ‘everyware’ – both portend and have already brought about. It will focus in particular on three apparent changes: first, upon the ongoing shift in legal-regulatory mindset – from rule-based regulation to technological management – and its implications for human agency; second, upon how the presentation and understanding of self are mediated by these technologies and how, if at all, those processes connect with the quantification and management of data by agents and others; and, third, upon the ethical and political implications of the continuing entanglement of algorithms and agents’ data attributes.
The project is organised around three workshops during Epiphany term 2023-24. The first draws upon the participants’ expertise in the areas of bias in AI, facial recognition and surveillance, and automated decision-making to examine the issues of justice that these fields present; the second workshop considers the broader issues about regulatory style, agency and digital life which constitute the context in which the developments examined in the first workshop exist; and the third evaluates various technical means by which the justice issues identified in the first might be redressed and mitigated, and algorithmic justice achieved. Overall, the project seeks not just to identify some of the steps we need to take to ensure that our technological future is less unjust than our present, but also to chart and evaluate some of the changes – with regard to regulation, agency and self – that we will encounter there.
Dr Noura Al Moubayed (Noura.al-moubayed@durham.ac.uk)
Professor William Lucy (w.n.lucy@durham.ac.uk)