Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Risking lives: Smart borders, private interests and AI policy in Europe

Metcalfe, Philippa, Dencik, Lina, Chelioudakis, Eleftherios and van Eerd, Boudewijn 2023. Risking lives: Smart borders, private interests and AI policy in Europe. [Project Report]. Cardiff: Data Justice Lab.



Recent years have seen huge investment in, and advancement of, technologically aided border controls, from biometric databases for identification to unmanned drones for external border surveillance. Data infrastructures and Artificial Intelligence (AI), often supplied by private providers, play an increasingly pivotal role in attempts to predict, prevent and control often illegalised mobility into and across Europe. At the same time, the European Union is in the final stages of negotiating and adopting the text of the proposed AI Act, the first EU legislation designed to establish comprehensive protections and safeguards with regard to the development, application and use of AI technology.

This report explores and interrogates the interplay between smart borders, private interests and AI policy within Europe. It does so to make apparent how the concept of ‘risk’ is integral to the advancement of smart border controls, while concurrently providing the framework for the governance of data infrastructures and AI. This highlights how AI is both embedded within, and entrenches, particular approaches to migration control. To understand the relationship between smart borders, private interests and AI policy, we explore four components of smart borders in Europe: the development of ‘Fortress Europe’ in terms of securitisation, militarisation and externalisation; the technology used in smart borders; funding and profits; and AI policy.

The report demonstrates that the concept of ‘risk’ in the context of migration and AI is used as both a legitimising and a regulatory tool. On the one hand, risk is used to legitimise ongoing investment in, and development of, hi-tech surveillance and AI at the border to prevent illegalised migrants from reaching European territory. Here, illegalised migrants are portrayed as a security issue and a threat to Europe. On the other hand, the language of risk is also adopted as a regulatory tool to categorise AI applications within the AI Act. Within these policy developments, we maintain that it is essential to examine the role of private defence and security companies and, as we investigate, their lobbying activities throughout the development of the AI Act. These companies stand to make huge profits from the development of smart, securitised borders, presented as the answer to the problem of ‘risky’ migrants. We end by considering the extent to which the AI Act fails to benefit and protect those most affected by the harmful effects of smart borders.

Item Type: Monograph (Project Report)
Date Type: Publication
Status: Published
Schools: Journalism, Media and Culture
Subjects: H Social Sciences > H Social Sciences (General)
Publisher: Data Justice Lab
Funders: European Research Council
Date of First Compliant Deposit: 6 September 2023
Last Modified: 06 Sep 2023 09:00
