Michael Sierra - “Auditing Bias under the EU AI Act: Notified Bodies as Regulatory Intermediaries in Algorithmic Governance”

Abstract

The European Union’s Artificial Intelligence Act assigns notified bodies a pivotal role in the conformity assessment of high-risk AI systems. While Article 31(4) mandates their independence, the capacity of these intermediaries to detect and mitigate algorithmic bias remains empirically uncertain. Drawing on Regulatory Intermediary Theory (Abbott et al., 2017a), this paper investigates the institutional and epistemic conditions of intermediary effectiveness through a comparative analysis of DEKRA and TÜV SÜD. Using document analysis and policy mapping, the study examines how DEKRA’s proactive investment in interdisciplinary expertise and standardization enables the operationalization of ethical principles into concrete audit tools. Conversely, TÜV SÜD exemplifies the risks of private polycentric governance: historical failures such as the PIP breast-implant scandal and ongoing challenges in medical-device certification expose structural dependencies, limited expertise, and opaque methodologies. Ultimately, the analysis demonstrates how institutional design, epistemic capacity, and independence condition regulatory success in the digital realm.