The Participatory Harm Auditing Workbenches and Methodologies (PHAWM) Project

The challenge of developing trustworthy and safe AI

A significant barrier to reaping the benefits of predictive and generative AI is their largely unassessed potential for harm. Hence, AI auditing has become imperative, in line with existing and impending regulatory frameworks. Yet, AI auditing has been haphazard, unsystematic, and left solely in the hands of experts. Our project, bringing together a consortium of 7 academic institutions and 23 partner organisations, aims to address this fundamental challenge through the novel concept of participatory AI auditing, in which a diverse set of stakeholders without a background in AI, such as domain experts, regulators, decision subjects and end-users, undertake audits of predictive and generative AI, either individually or collectively. Our research is grounded in four use cases: Health, Media Content, Cultural Heritage and Collaborative Content Generation. To enable stakeholders to carry out an audit, our project will produce workbenches that support them in assessing the quality and potential harms of AI. The participatory audits will be embedded in methodologies which guide how, when, and by whom these audits are carried out. We will train stakeholders in carrying out participatory audits and work towards a certification framework for AI solutions.

PHAWM brings together 20 academics from seven academic institutions.

We are involving 24 external partner organisations.

Our research is grounded in four use cases, developed with partners who will provide datasets and facilitate access to stakeholders: NHS NSS will contribute to the Health use case, Istella will contribute to the Media Content use case, the National Library of Scotland, the Museum Data Service and the David Livingstone Birthplace Trust will participate in the Cultural Heritage use case, and Wikimedia and Full Fact will be involved in the Collaborative Content Generation use case. Companies already actively involved in responsible AI (RAI) research, such as Nokia Bell Labs, Microsoft Research NYC and Microsoft Research UK, Meta and Fujitsu, will contribute directly to project activities. We also have partners who will help us reach stakeholders: Cybersalon, Datasparq, Open Data Institute, Research Data Scotland, ScotlandIS, Scottish AI Alliance, Scottish Informatics and Computer Science Alliance (SICSA), and SoBigData.

Individuals from Women’s Enterprise Scotland, the Scottish Government, the National Institute of Standards and Technology (USA) and IBM will also participate in the Advisory Board, joining Gina Neff (University of Cambridge), Reuben Binns (University of Oxford), Ute Schmid (University of Bamberg, Germany), Sherry Tongshuang Wu (Carnegie Mellon University, USA), and Abeba Birhane (Trinity College Dublin, Ireland).