Privatised border regulation, AI, MigTech and public procurement

Migration, Mobilities and Digital Technologies – a special series published in association with the ESRC Centre for Sociodigital Futures.

By Albert Sanchez-Graells.

Moving across borders used to involve direct contact with the State. Moving people faced border agents attached to some police corps or the army. Moving things were inspected by customs agents. Entry was granted or denied according to rules and regulations set by the State, interpreted and enforced by State agents, and potentially reviewed by State courts. Borders were controlled by the State, for better or worse.

As States pivot to ‘technology-enhanced’ border control and experiment with artificial intelligence (AI) – let’s call this the ‘MigTech’ shift – this is no longer the full story, or even an accurate one.

More and more, crossing borders involves interactions with machines such as eGates, with increasing levels of automated facial recognition ‘capabilities’ (see Travis Van Isacker’s blogpost in this series). Face-to-face interviews are progressively being replaced – or are planned to be replaced – by AI ‘solutions’ such as ‘lie detectors’ or ‘emotion recognition’ tests. The pervasiveness of AI touches moving people’s lives before they start to move – such as when visa and travel permits are granted or denied through algorithmically supported or automated decision systems that raise red flags or draw inferences from increasingly dense and opaque data thickets (see Kuba Jablonowski’s discussion of the UK’s shift from border documentation to computation). The movement of things is similarly exposed to all sorts of technological deployments, such as ‘smart sensors’ or drone-supported surveillance.

(Image by Markus Spiske on Unsplash)

We could think that borders are now controlled by technology. But that would, of course, conflate the tool with the agent. To understand the implications of this paradigm shift towards MigTech we need to focus on control over these technologies. Control rarely rests with the State that purports to use the technology. Control mostly lies with the technology providers. Digitalisation thus goes hand in hand with the privatisation of border regulation. Entry is granted or denied as a result of ‘technical’ choices embedded in the tools, over which technology providers hold almost absolute control. Technology providers increasingly control borders, mostly for the worse.

There is a rich body of research on the impacts of the digitalisation and automation of border control on people and communities, and on the injustices it produces. There are also increasing calls to reconsider this approach in view of its harms. At first sight, it could even seem that new legislation such as the EU AI Act addresses the risks and harms arising from digital borders. After all, the EU AI Act classes the use of AI for ‘Migration, asylum and border control management’ as ‘high-risk’. High-risk classification entails a long list of obligations, including pre-deployment fundamental rights impact assessments. By subjecting the technology to a series of ‘assurances’, the Act seeks to ensure that its deployment is ‘safe’. This regulatory approach can create the illusion that the State has regained control over the technology by tying the hands of the technology provider. Indirectly, the State would also have regained control of the borders, for better or worse.

My research challenges this understanding. It highlights how the regulatory tools that are being put in place – such as the EU AI Act – will not sufficiently address the issue of ‘tech-mediated privatisation’ of ‘core’ State functions because, in themselves, these tools transfer regulatory power back to technology providers. By focusing on how the technology reaches the State, and who holds control over the technology and how, I highlight important gaps in law and regulation.

The State rarely develops its own AI or other digital technologies. On most occasions, the State buys technology from the market. This involves public contracts that are meant to set the relevant requirements and to complement regulatory frameworks through tailor-made obligations. To put it simply, my research shows that public contracts are not an effective mechanism to impose specific obligations. Take the example of a State buying an ‘AI lie detector’. The ‘accuracy’ and the ‘explainability’ of the AI will be crucial to its adequate use. However, the EU AI Act does not contain any explicit requirement or minimum benchmark in relation to either of them. Let’s take accuracy.

The EU AI Act establishes only that ‘High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy’ (Art 15(1)). Likewise, the model contractual clauses for the procurement of AI that support the operationalisation of the EU AI Act contain no specific requirements. They simply state an obligation for the technology provider to ensure that the AI system is ‘developed following the principle of security by design and by default … it should achieve an appropriate level of accuracy’ (Clause 8.1). The specific levels of accuracy and the relevant accuracy metrics of the AI system are meant to be described in Annex G. But Annex G is blank!

It will be for the public buyer and the technology provider to contractually agree the applicable level of accuracy. This will most likely be done either by reference to the ‘state-of-the-art’ (which privatises the ‘art of the possible’), or by reference to industry-led technical standards (which are poor tools for socio-technical regulation and entirely alien to fundamental rights norms). Or, perhaps even more likely, accuracy will be set at levels that work for the technology provider, which will tend to have superior digital and commercial skills to those of the public buyer. After all, there are many ways to measure and report an AI system’s accuracy, and they can be gamed.
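To see why a bare ‘accuracy’ figure settles so little, consider a deliberately simplified illustration (the numbers and the toy ‘lie detector’ below are my own hypothetical sketch, not drawn from any real system or from the contractual clauses discussed above). If only a small minority of travellers are actually deceptive, a system that never flags anyone can report a higher headline ‘accuracy’ than a system that catches half of them:

# A hypothetical, illustrative calculation: how headline 'accuracy' can mislead
# when the underlying classes are imbalanced (here, 5% of travellers are deceptive).

ground_truth = [1] * 50 + [0] * 950          # 1 = deceptive, 0 = truthful

# System A: never flags anyone.
predictions_a = [0] * 1000

# System B: catches half of the deceptive travellers, at the cost of some false flags.
predictions_b = [1] * 25 + [0] * 25 + [1] * 50 + [0] * 900

def report(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, precision, recall

for name, preds in [("System A (flags nobody)", predictions_a),
                    ("System B (flags some)", predictions_b)]:
    acc, prec, rec = report(ground_truth, preds)
    print(f"{name}: accuracy={acc:.1%}, precision={prec:.1%}, recall={rec:.1%}")

# Output: System A reports 95.0% accuracy while detecting nothing (recall 0%);
# System B reports a lower 92.5% accuracy while actually catching half the cases.

Which metric counts as ‘appropriate’, and on which population it is measured, is precisely the kind of choice that, under the current framework, is left for the contract – and, in practice, for the technology provider – to determine.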

In most cases, the operationalisation of the EU AI Act will leave the specific interpretation of what is ‘an appropriate level of accuracy’ in the hands of the technology provider. The same goes for explainability, and for any other ‘technical’ issue with large operational implications. This does not significantly change the current situation, and it certainly does not mitigate the effects (risks and harms) of the privatisation of AI regulation – or, in the context of MigTech, the privatisation of border regulation. The EU AI Act – and other approaches to ‘AI regulation by contract’, including in the UK under the ‘pro-innovation approach’ to AI and the recently announced AI Opportunities Action Plan – creates a funnel of regulatory power that dangerously exposes the public sector to risks of regulatory capture and commercial determination. And, ultimately, it exposes all of us to the ensuing risks and harms. A different regulatory approach is necessary.

Albert Sanchez-Graells (he/him) is a Professor of Economic Law at the University of Bristol Law School. He is also an affiliate of the Bristol Digital Futures Institute. Albert’s current research focuses on the role of procurement in regulating public sector AI use, on which he recently published the monograph Digital Technologies and Public Procurement: Gatekeeping and Experimentation in Digital Public Governance (Oxford University Press, 2024).