From surveillance to solidarity: the political implications of digital monitoring in conservation and migration

Migration, Mobilities and Digital Technologies – a special series published in association with the ESRC Centre for Sociodigital Futures.

By Naomi Millner.

During the mobilisation of migrant solidarity and anti-racist protests in the UK last summer, digital technologies – particularly smartphones – played a pivotal role. These devices became lifelines, with group chats buzzing with updates about far-right activities, tactical advice and legal guidance. The same platforms were also used by far-right groups, with live TikTok feeds and other social media channels attempting to galvanise ‘protests’ and reinforce anti-migrant narratives. Such use of digital technologies is clearly mediating new and contrasting forms of collective political action in relation to migration and mobilities.  

The mobilisations were part of a broader wave of unrest across the UK in late July and early August 2024, following the tragic killing of three girls in Southport. Misinformation quickly spread online, including claims that the perpetrator was a Muslim immigrant. This sparked violent riots in which mosques, migrant centres and hotels housing asylum seekers were targeted.

The events illustrate how the internet and social media are transforming the way people receive information, form opinions and engage with others. Some describe these new forms of connectivity as a series of interlocking echo chambers in which opinions travel rapidly along social networks without entering meaningful dialogue, fostering polarisation and extremism. The rise of artificial intelligence (AI) in this context, and the incorporation into social media of algorithms that prioritise engagement over accuracy, have further intensified the spread of sensationalist and often false content.

On the other hand, notions of echo chambers and even polarisation have been criticised as simplistic, relying as they do on conventional ideas of the political left and right, a glossing over of what information individuals are actually exposed to, and an easy dismissal of populist views as naïve and/or manipulated.

Emerging technologies in migration and conservation surveillance

The impact of new digital technologies is not limited to social media reporting on migration. As this blog series has already explored, governments and corporations increasingly employ advanced technologies such as satellite monitoring and AI-driven predictive policing to control migration flows and enforce border security.

The world of environmental conservation is sometimes imagined as being separate from migration, the former configured by a geography of protected areas and wildlife reserves, the latter linked with state borders, administrative processes for legislating il/legal movement and asylum seeking, and the configuration of associated rights. But today, the two are far more entangled, especially where biodiversity is a major subject of international geopolitics. As I have shown in my own work (Millner 2020; Millner et al. 2024), the definition of the borders of protected areas is increasingly used to control groups considered ‘risky’ by the state – including Indigenous groups and ethnic minorities – while conservation technologies such as drones can be turned on people for surveillance purposes as well as wildlife. In these areas, migrants, Indigenous peoples, ethnic minorities and political activists are frequently collapsed together to portray an abstract threat of ‘global terrorism’ and thus authorise special military powers and actions.

Drones are used to maintain a regime of fear in Corbett Tiger Reserve, India (image: A. Pawar and T. Simlai, 2022)

In the world of global conservation, we have seen a trend of what political ecologists call ‘green securitisation’, where agendas of historical racism and Indigenous genocide are greenwashed to justify them to a global audience. In countries from Guatemala and Colombia to India and Tanzania, state exceptional powers have been enabled for protected areas, based on claims that national biodiversity is threatened by poachers or ‘potential nature destroyers’.

In South Africa’s iconic Kruger national park, for example, militarised methods and technologies have been used to control illegal migration from Mozambique by coding migrant groups as potential poachers. State-of-the-art military technologies (including helicopters and drones) combine with media interventions that cultivate hysteria over the peril of individual animal species (rhinos, in Kruger) to legitimise heavy-handed interventions.

Meanwhile, satellite technologies, especially GIS systems, have long been celebrated for their potential in environmental conservation, enabling the early monitoring of forest fires and the quantification of forest loss. However, such imagery has been significantly co-opted for state control in contested places such as the Amazon rainforest. In Brazil, satellite data intended to monitor deforestation is often manipulated to favour corporate interests and counter activism, with enforcement actions disproportionately targeting small-scale farmers and Indigenous communities.

Social movements and digital resistance

Despite the potential for digital technologies to be abused, social movements and activists have found innovative ways to use these same tools for resistance. In the Amazon, Indigenous groups from across Peru, Ecuador and Colombia have harnessed GPS devices, drones and smartphones to document illegal logging and land grabs, providing tangible evidence to support their claims and rally international support. Through such forms of monitoring, Indigenous Peoples and Local Communities (IPLCs) can create counter-narratives to official state reports and highlight the agency of marginalised groups.

In my own research in the Maya Biosphere Reserve, Guatemala, I have seen the forest-based organisation ACOFOP foster a new kind of expertise centred on RGB sensors mounted on drones. Used in conjunction with tablets or smartphones, the images collected by drones help to monitor forest cover and potential fires, and have also been used to evidence effective management by rural communities in the face of false claims.
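As a rough illustration of the kind of analysis such monitoring involves, the sketch below computes a simple RGB vegetation index (Excess Green) from two drone surveys of the same plot and flags apparent canopy loss between them. The index choice, thresholds and synthetic images are assumptions for illustration only, not ACOFOP’s actual workflow.

```python
# Illustrative sketch only: a simple RGB vegetation index (Excess Green)
# used to compare two drone surveys of the same plot. Thresholds and the
# synthetic data are hypothetical; this is not ACOFOP's actual workflow.
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """Excess Green index (2G - R - B) for an RGB array scaled to [0, 1].
    Higher values suggest denser vegetation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2 * g - r - b

def canopy_loss_fraction(before: np.ndarray, after: np.ndarray,
                         veg_threshold: float = 0.1) -> float:
    """Fraction of pixels that were vegetated in the earlier survey
    but fall below the vegetation threshold in the later one."""
    was_veg = excess_green(before) > veg_threshold
    now_bare = excess_green(after) <= veg_threshold
    lost = was_veg & now_bare
    return float(lost.sum()) / max(int(was_veg.sum()), 1)

if __name__ == "__main__":
    # Synthetic 100x100 'images' standing in for orthomosaics from two flights.
    rng = np.random.default_rng(0)
    before = rng.uniform(0, 1, (100, 100, 3))
    after = before.copy()
    after[:20, :, 1] *= 0.2  # simulate a cleared strip (reduced green channel)
    print(f"Estimated canopy loss: {canopy_loss_fraction(before, after):.1%}")
```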

Community drone users from across Latin America share new skills to manage and protect forests in a workshop in Guatemala led by Naomi in 2023 (image: C. Doviaza)

Historical continuity or new power relations?

The deployment of digital technologies, particularly AI and drones, clearly represents a significant shift in the global exercise of power, especially in the governance of migration. These technologies enable states and other actors to predict, monitor and control migration flows with unprecedented precision. This capability not only strengthens traditional border enforcement mechanisms but also introduces new actors – such as private tech companies – into the complex power dynamics surrounding migration. As monitoring technologies move into contested conservation spaces, private firms often collaborate with states to develop and deploy surveillance systems, blurring the line between public and private interests in migration governance.

Yet digital technologies also offer avenues for resistance. Marginalised communities and social movements are leveraging digital tools, from smartphones to drones, to challenge dominant narratives and expose the underlying agendas of state and corporate actors. Just as Indigenous groups in the Amazon have utilised GPS devices, drones and satellite data to document illegal deforestation and land grabs, so NGOs and activist networks use social media and real-time communication tools to assist migrants in distress and document human rights abuses at borders, thereby contesting the biopolitical control imposed by states.

In this sense, the example of drones in conservation resonates with the ‘autonomy of migration’ perspective, which has long been active in critical migration studies and critical geographies. To assert the autonomy of migration is to assert its primacy: migration is an originary and creative force that precedes all forms of state-based citizenship and state-making. Seen this way, anticipatory technologies of migrant tracking reveal the imperative on states to keep up with migrant movements in order to maintain a sense of sovereignty. But digital technologies also have the potential to mediate infrastructures of connectivity, affective cooperation, mutual support and care by people on the move. This shows the possibility of (re)claiming digital technologies for alternative productions of knowledge and more-than-human creativity.

Naomi Millner is Associate Professor in Human Geography at the University of Bristol. She works at the intersection between political geography and political ecology, focusing on rural environments undergoing transformation in Latin America. She has recently been awarded an ERC Consolidator Grant to develop PEATSENSE: Diverse Knowledges and Sensing Practices in Peatland for Inclusive Climate Futures.

‘El Carrusel’: digitising the US-Mexico border with(out) the CBP One app

Migration, Mobilities and Digital Technologies – a special series published in association with the ESRC Centre for Sociodigital Futures.

By Martin Rogard.

Many people had been waiting in Mexico for months to make their asylum claim legally in the US when, at midday on 20th January 2025, all CBP One appointments with US Customs and Border Protection were suddenly cancelled. The CBP One app, which was the only legal land asylum route for people arriving at the US’s southern border, was discontinued the moment Donald Trump’s inauguration began.

The MAGA wing of the US Republican Party had long campaigned to shut down Biden’s controversial CBP One app, claiming it had become a back door facilitating undocumented immigration into the US. I spent two years researching this app – and the various claims made about it – for a chapter of my PhD thesis on the digitalisation of bordering practices, only to realise that it would be discontinued overnight. But how did this app compare to the COVID-era asylum ban that Trump has now effectively reinstated?

The US-Mexico border fence at Tijuana, 2021 (image: Barbara Zandoval on Unsplash)

Far from the open-border policy its detractors portrayed it as, the CBP One app – which had previously been used to automate commercial travel processing – became a mandatory pre-registration step for all non-Mexican asylum seekers arriving in the US by land. This new protocol, which included a 5-year asylum ban penalty for non-compliance, made:

… people who traveled through a third country but failed to seek asylum or other protections in those countries ineligible for asylum in the United States… [except for those people who can reach central and northern Mexico and make an appointment]… through a DHS [Department of Homeland Security] scheduling system (AIC, 2023; see also Federal Register, 2023: 31399).

Since the app was the only such ‘scheduling system’, the protocol effectively forced asylum-seeking individuals and families to wait in Mexico for months by making their asylum eligibility contingent on securing an appointment through a glitchy, geofenced and data-harvesting lottery system.

CBP One has therefore been part of a shadowy binational bordering scheme colloquially known as ‘El Carrusel’ or ‘the merry-go-round’. The majority of people who made their long journeys to the restricted locations in Mexico where the app could function were swiftly targeted by the heavily militarised Mexican migration governance regime, including parastatal security agencies such as the ‘grupo enlace’, which claims to enforce government contracts. Migrants reported being forcibly bussed back down to Mexico’s border with Guatemala before they had a chance to pre-book or attend their CBP One appointments. Others who evaded ‘El Carrusel’ became highly visible targets for extortion, abduction, theft, exploitation and torture. As a Human Rights Watch report (2024: 4) states, ‘The more difficult it is for migrants to cross the US-Mexico border, the more money cartels make, whether from smuggling operations or from kidnapping and extortion.’

Migrants who did manage to attend their appointment on time, after clearing the app’s highly data-extractive preliminary security checks, were subjected to a ‘credible-fear’ interview. Those deemed convincingly fearful of persecution were granted admission into the US under a temporary, criminalised and precarious status known as ‘humanitarian parole’ while they waited for their asylum decisions – the majority of which were denials, expeditiously followed by detention and eventual deportation.

The CBP One policy has recently been replaced with Trump’s renewed ‘Remain in Mexico’ asylum ban (an indiscriminate policy officially known as ‘Migrant Protection Protocols’ or MPP). In the US, as elsewhere, election cycles tend to be punctuated with big promises of ‘fixing’ the broken asylum system and/or finally ‘securing’ or ‘taking back control’ of national borders. Beneath the rhetoric, however, MPP, CBP One and Trump’s recent flurry of ‘emergency’ executive orders only maintain the status quo: they subject racialised people fleeing persecution and violence to further suffering and containment, failing to meet the standards of international law or provide truly accessible, safe and legal routes for asylum.

Despite the recent termination of the CBP One app as an asylum tool, much of my research remains relevant because it speaks to broader patterns of border digitisation that are expanding states’ reach far beyond pre-existing democratic and legal limits (see also Albert Sanchez-Graells’ post on AI and MigTech in this series). CBP One was repurposed for asylum processing during the COVID-19 pandemic with little public attention. At the time, humanitarian shelter workers in Mexico were tasked with filling out questionnaires on the app on behalf of asylum seekers under a deceptive US government promise to expedite their claims; in reality the app was introduced alongside restrictive immigration policies ‘that sought to increase penalties for crossing the border unlawfully, even to request asylum, and greatly reduce the number of migrants eligible for asylum’ (Kocher, 2023: 6).

But the CBP One app was never just the efficacious ‘scheduling tool’ that the DHS claimed it to be. It was principally a mass-scale data-gathering experiment that exploited undocumented migrants in order to extract a large-scale, non-cooperative dataset featuring biographic, biometric and live-location information. These data were avowedly shared across an equivocal ‘law enforcement community’, likely to train risk-predictive policing algorithms (AIC, 2025: 6; Longo, 2017: 150-153).

As Matthew Longo explains in The Politics of Borders, contemporary ‘smart’ borders have become increasingly reliant on risk-predictive algorithms in order to ensure that ‘the good [are] let in quickly, and only the risky are slowed down… a process that depends heavily on data’ (Longo, 2017: 141; see also Travis Van Isacker’s post in this series, ‘Who’s in the fast lane?’). These systems draw on machine learning models – including the ‘convolutional neural networks’ used for facial recognition – that combine users’ biometric data (facial recognition, iris scans, liveness checks) with biographical data (travel history, gender, age, recent contacts) to build adaptive and multi-layered ‘risk profiles’. The more data they are fed, the better these systems allegedly become at predicting and flagging potential ‘criminals’, ‘terrorists’ and ‘impostors’ prior to any crime, attack or threat having taken place.
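To make the general logic concrete, here is a deliberately simplified sketch of how a risk-scoring pipeline might combine a biometric match score with biographic features and a threshold to decide who is flagged for secondary inspection. It illustrates the approach Longo describes in the abstract; the features, weights and threshold are invented and do not reproduce any agency’s actual system.

```python
# Deliberately simplified illustration of risk-based profiling: a weighted
# combination of biometric and biographic features produces a score, and a
# threshold decides who is flagged. All features, weights and thresholds
# are invented for illustration; no real system is reproduced here.
from dataclasses import dataclass
import math

@dataclass
class TravellerRecord:
    face_match_score: float      # similarity to enrolled photo, 0..1
    prior_visa_refusals: int
    days_since_last_travel: int
    flagged_contacts: int        # contacts appearing on a watchlist

WEIGHTS = {
    "face_mismatch": 3.0,
    "prior_visa_refusals": 0.8,
    "recent_travel": 0.5,
    "flagged_contacts": 1.5,
}
BIAS = -2.0
FLAG_THRESHOLD = 0.5

def risk_score(t: TravellerRecord) -> float:
    """Map a traveller record to a 0..1 'risk' score via a logistic function."""
    z = (BIAS
         + WEIGHTS["face_mismatch"] * (1.0 - t.face_match_score)
         + WEIGHTS["prior_visa_refusals"] * t.prior_visa_refusals
         + WEIGHTS["recent_travel"] * max(0, 365 - t.days_since_last_travel) / 365
         + WEIGHTS["flagged_contacts"] * t.flagged_contacts)
    return 1 / (1 + math.exp(-z))

traveller = TravellerRecord(face_match_score=0.93, prior_visa_refusals=1,
                            days_since_last_travel=40, flagged_contacts=0)
score = risk_score(traveller)
print(f"risk={score:.2f} -> {'flag for inspection' if score > FLAG_THRESHOLD else 'fast lane'}")
```

The point of the sketch is that the sorting itself is mechanically simple; the politics lie in who chooses the features, weights and threshold, and in the data used to tune them.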

Before the CBP One app, for legal and practical reasons, the US’s physical borders were its primary site of personal data accumulation and surveillance. There are restrictions on the private information states can non-cooperatively capture from non-citizens outside their jurisdictions. By requiring prospective asylum seekers to book an appointment via a smartphone while they waited in Mexico, the US decentralised and expanded its surveillance capacity far beyond pre-existing limits. Conveniently, the geofenced app – requiring live location and prohibiting VPNs – forced its users to remain in Mexico. And since CBP One users were outside its jurisdiction, the US could shed accountability for the human rights abuses asylum seekers faced while waiting across the border.
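The geofencing mentioned above is technically trivial: an app reads the device’s live GPS position and refuses to proceed outside a permitted zone. The sketch below is a generic illustration of such a check, with an invented centre point and radius; it is not the CBP One implementation.

```python
# Generic illustration of a geofence check of the kind described above.
# The centre point and radius are invented; this is not CBP One's code.
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# Hypothetical permitted zone: a circle around a point in central Mexico.
ZONE_CENTRE = (19.43, -99.13)   # latitude, longitude
ZONE_RADIUS_KM = 500.0

def may_request_appointment(device_lat: float, device_lon: float) -> bool:
    """Allow the booking flow only if the device reports a position inside the zone."""
    return haversine_km(device_lat, device_lon, *ZONE_CENTRE) <= ZONE_RADIUS_KM

print(may_request_appointment(19.50, -99.00))   # inside the hypothetical zone -> True
print(may_request_appointment(14.63, -90.51))   # e.g. Guatemala City -> False
```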

The app required the latest phone technology, updated software and stable broadband, excluding anyone unable to meet these expensive requirements. Its limit of four language options also disempowered those who did not read English, Spanish, French or Haitian Creole, as well as anyone without good literacy skills. Similarly, the app’s so-called ‘glitches’ and design choices prevented users from correcting mistakes, contacting support or speaking to a human. The app automatically deleted profiles flagged as spurious, disadvantaging families and/or people with similar names and/or facial features (see, for example, Kocher, 2023: 7-8).

While this ‘smart’ digitised asylum system promised efficiency, its black box design prevented accountability and transparency for the harms it caused. Furthermore, as Human Rights Watch (2024: 26) explains, the insufficient number of appointments available on the app was presented as being due to ‘limited capacity’. Yet, this limited capacity largely reflected the US government’s prioritising of removal proceedings and hyper-securitised bordering over humanely processing asylum seekers.

By amassing vast archives of personal data from non-citizens, which were then shared domestically and internationally without their informed consent, the CBP One app fuelled a new wave of discriminatory and abusive bordering practices. Sold as a ‘technological fix’, this digitisation only perpetuated cycles of violence and disempowerment while expanding the US’s imperial reach beyond existing democratic, physical and legal limits.

Martin Rogard is a doctoral candidate in political theory at the University of Bristol. His research explores how artefactually mediated practices of memory-making and forgetting constitute and unsettle (b)ordering processes in the North American borderlands.

Privatised border regulation, AI, MigTech and public procurement

Migration, Mobilities and Digital Technologies – a special series published in association with the ESRC Centre for Sociodigital Futures.

By Albert Sanchez-Graells.

Moving across borders used to involve direct contact with the State. Moving people faced border agents attached to some police corps or the army. Moving things were inspected by customs agents. Entry was granted or denied according to rules and regulations set by the State, interpreted and enforced by State agents, and potentially reviewed by State courts. Borders were controlled by the State, for better or worse.

As States pivot to ‘technology-enhanced’ border control and experiment with artificial intelligence (AI) – let’s call this the ‘MigTech’ shift – this is no longer the full story, or even an accurate one.

More and more, crossing borders involves interactions with machines such as eGates, with increasing levels of automated facial recognition ‘capabilities’ (see Travis Van Isacker’s blogpost in this series). Face-to-face interviews are progressively (planned to be) replaced by AI ‘solutions’ such as ‘lie detectors’ or ‘emotion recognition’ tests. The pervasiveness of AI touches moving people’s lives before they start to move – such as when visa and travel permits are granted or denied through algorithmically-supported or automated decision systems that raise red flags or draw inferences from increasingly dense and opaque data thickets (see Kuba Jablonowski’s discussion of the UK’s shift from border documentation to computation). The movement of things is similarly exposed to all sorts of technological deployments, such as ‘smart sensors’ or drone-supported surveillance.

(Image by Markus Spiske on Unsplash)

We could think that borders are now controlled by technology. But that would, of course, conflate the tool with the agent. To understand the implications of this paradigm shift towards MigTech we need to focus on control over these technologies. Control rarely rests with the State that purports to use the technology. Control mostly lies with the technology providers. Digitalisation thus goes hand in hand with the privatisation of border regulation. Entry is granted or denied as a result of ‘technical’ embeddings over which technology providers hold almost absolute control. Technology providers increasingly control borders, mostly for the worse.

There is a rich body of research on the impacts of the digitalisation and automation of border control on people, communities and injustice, and there are increasing calls for a reconsideration of this approach in view of its harms. At first sight, it could even seem that new legislation such as the EU AI Act addresses the risks and harms arising from digital borders. After all, the EU AI Act classes as ‘high-risk’ the use of AI for ‘Migration, asylum and border control management’. High-risk classification entails a long list of obligations, including pre-deployment fundamental rights impact assessments. By subjecting the technology to a series of ‘assurances’, the Act seeks to ensure that its deployment is ‘safe’. This regulatory approach can create the illusion that the State has regained control over the technology by tying the hands of the technology provider. Indirectly, the State would also have regained control of the borders, for better or worse.

My research challenges this understanding. It highlights how the regulatory tools that are being put in place – such as the EU AI Act – will not sufficiently address the issue of ‘tech-mediated privatisation’ of ‘core’ State functions because, in themselves, these tools transfer regulatory power back to technology providers. By focusing on how the technology reaches the State, and who holds control over the technology and how, I highlight important gaps in law and regulation.

The State rarely develops its own AI or other digital technologies. On most occasions, the State buys technology from the market. This involves public contracts that are meant to set the relevant requirements and to complement regulatory frameworks through tailor-made obligations. To put it simply, my research shows that public contracts are not an effective mechanism to impose specific obligations. Take the example of a State buying an ‘AI lie detector’. The ‘accuracy’ and the ‘explainability’ of the AI will be crucial to its adequate use. However, the EU AI Act does not contain any explicit requirement or minimum benchmark in relation to either of them. Let’s take accuracy.

The EU AI Act solely establishes that ‘High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy’ (Art 15(1)). Likewise, the model contractual clauses for the procurement of AI that support the operationalisation of the EU AI Act do not contain specific requirements either. They simply state an obligation for the technology provider to ensure that the AI system is ‘developed following the principle of security by design and by default … it should achieve an appropriate level of accuracy’ (Clause 8.1). The specific levels of accuracy and the relevant accuracy metrics of the AI system are meant to be described in Annex G. But Annex G is blank!

It will be for the public buyer and the technology provider to contractually agree the applicable level of accuracy. This will most likely be done either by reference to the ‘state of the art’ (which privatises the ‘art of the possible’), or by reference to industry-led technical standards (which are poor tools for socio-technical regulation and entirely alien to fundamental rights norms). Or, perhaps even more likely, accuracy will be set at levels that work for the technology provider, which will most likely have greater digital and commercial expertise than the public buyer. After all, there are many ways to measure and report an AI system’s accuracy, and they can be gamed.
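That final point is easy to demonstrate with a toy example (the numbers are invented): a ‘lie detector’ that simply clears everyone can report 95% accuracy on a test set where only 5% of cases are labelled ‘deceptive’, while detecting none of them.

```python
# Toy illustration (invented data) of how a headline 'accuracy' figure can
# mislead: a classifier that never flags anyone looks highly accurate
# whenever the flagged class is rare, yet it detects nothing at all.
labels = [1] * 5 + [0] * 95          # 5% 'deceptive', 95% 'truthful'
predictions = [0] * 100               # the system clears everyone

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

true_pos = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall_deceptive = true_pos / sum(labels)                 # sensitivity on the rare class
recall_truthful = sum(p == 0 and y == 0 for p, y in zip(predictions, labels)) / labels.count(0)
balanced_accuracy = (recall_deceptive + recall_truthful) / 2

print(f"reported accuracy:  {accuracy:.0%}")           # 95%
print(f"recall (deceptive): {recall_deceptive:.0%}")   # 0%
print(f"balanced accuracy:  {balanced_accuracy:.0%}")  # 50%
```

A contract that merely requires ‘an appropriate level of accuracy’ says nothing about which of these figures must be reported, on what data, or against which baseline.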

In most cases, the operationalisation of the EU AI Act will leave the specific interpretation of what is ‘an appropriate level of accuracy’ in the hands of the technology provider. The same goes for explainability, and for any other ‘technical’ issue with large operational implications. This does not significantly change the current situation, and it certainly does not mitigate the effects (risks and harms) of the privatisation of AI regulation – or, in the context of MigTech, the privatisation of border regulation. The EU AI Act – and other approaches to ‘AI regulation by contract’, including in the UK under the ‘pro-innovation approach’ to AI and the recently announced AI Opportunities Action Plan – creates a funnel of regulatory power that dangerously exposes the public sector to risks of regulatory capture and commercial determination. And, ultimately, it exposes all of us to the ensuing risks and harms. A different regulatory approach is necessary.

Albert Sanchez-Graells (he/him) is a Professor of Economic Law at the University of Bristol Law School. He is also an affiliate of the Bristol Digital Futures Institute. Albert’s current research focuses on the role of procurement in regulating public sector AI use, on which he recently published the monograph Digital Technologies and Public Procurement: Gatekeeping and Experimentation in Digital Public Governance (Oxford University Press, 2024).

From documentation to computation: the shifting logic of UK border control

Migration, Mobilities and Digital Technologies – a special series published in association with the ESRC Centre for Sociodigital Futures.

By Kuba Jablonowski.

UK immigration status is going online. Tangible documents issued by the Home Office are set to expire at midnight on 31st December 2024, as the department has been short-dating them for years. From 1st January 2025, status holders will transact through a set of websites called View and Prove to access their status, which is now called an eVisa, and to evidence it to others using share codes. Status checkers will transact through Home Office websites too, verifying people’s right to work or right to rent online as part of the British government’s ‘hostile environment’ policy. Carriers, such as airlines, will rely on automated status checks as part of their check-in procedures. Should these fail, they can resort to View and Prove as well. This unassuming portal warns users it is in the beta phase: feature-complete but not bug-free. And yet it controls access to a vast network of casework systems and data stores holding the information used to generate an immigration subject and their immigration status.

Screenshot from the UK Government’s View and Prove website (accessed by the author on 5th November 2024)

Borders, once firmly on the ground and often imagined as cliffs and rivers, walls and fences, are about to be governed entirely through online computing. It is hard to overstate the significance of this seemingly technical change. It does not just transform who enacts borders and how. It also transforms the way the subject of immigration control is administratively constructed by the border bureaucracy.

Immigration status was traditionally inscribed into a token that would represent the person as a subject of immigration control: a visa sticker, a biometric residence permit, a permanent residence card, and so on. What makes these into tokens is not their material quality but the inscription of multiple types of information – biographic, biometric and immigration records – into a single and stable medium. This medium then remains in the hands of the person who holds the immigration status inscribed into it. There are also digital tokens of status, such as the machine-readable codes used in boarding passes and vaccine passports. These give their holder a similar level of autonomy to residence cards or visa stickers: they can be downloaded onto a personal device or printed on a physical medium, and they grant access as long as they remain valid.

The online system designed by the Home Office replaces such stable tokens with online transactions. Each time the holders want to check or evidence their status, they must transact through the View and Prove portal. They first log on with the document they used to create their online account. They are then sent a verification code to the email address or phone number held for that account. Once logged on, users have the option to view their status or generate a share code. This code, which is valid for 90 days but which can glitch for a number of reasons, then needs to be shared with the status checker – the landlord, the employer, the airline, and so on – who in turn enters it into the relevant status checking website along with their own details to verify the holder’s right to work, rent, travel, and so on. Status holders and checkers use different portals but they are all hosted on the gov.uk domain.
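In outline, the transactional flow described above resembles the sketch below: the holder authenticates, requests a time-limited share code, and the checker redeems that code against the live record held by the Home Office. This is a schematic reconstruction from the description in this post, not Home Office code; the class, method and identifier names are invented.

```python
# Schematic reconstruction of the share-code flow described above. All names
# and interfaces are invented for illustration; this is not Home Office code.
import secrets
from datetime import datetime, timedelta, timezone

SHARE_CODE_VALIDITY = timedelta(days=90)

class StatusService:
    """Stands in for the back-end systems that hold live immigration records."""

    def __init__(self):
        self._records = {}        # holder_id -> current status string
        self._share_codes = {}    # code -> (holder_id, expiry)

    def set_status(self, holder_id: str, status: str) -> None:
        self._records[holder_id] = status

    def generate_share_code(self, holder_id: str) -> str:
        """Issue a time-limited code the holder can pass to a checker."""
        code = secrets.token_hex(4).upper()
        expiry = datetime.now(timezone.utc) + SHARE_CODE_VALIDITY
        self._share_codes[code] = (holder_id, expiry)
        return code

    def check(self, code: str) -> str | None:
        """Checker-side lookup: returns the *current* status, or None if the
        code is unknown or expired. No durable token is left with the checker."""
        entry = self._share_codes.get(code)
        if entry is None:
            return None
        holder_id, expiry = entry
        if datetime.now(timezone.utc) > expiry:
            return None
        return self._records.get(holder_id)

service = StatusService()
service.set_status("holder-123", "eVisa: indefinite leave to remain")
code = service.generate_share_code("holder-123")
print(service.check(code))          # the live record at the moment of the check
print(service.check("WRONGCODE"))   # None: nothing exists outside the system
```

The crucial design feature is visible even in this toy version: the proof of status exists only as the result of a live lookup, so any failure in the lookup chain leaves the holder with nothing to show.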

Screenshot from the UK Government’s View and Prove website (accessed by the author on 5th November 2024)

However, the View and Prove portal is merely the front end of a complex network of upstream services that store and compute data at the back end. This network includes legacy services and novel systems developed incrementally as part of the Immigration Platform Technologies programme since 2014, at the cost of around GBP 500 million to date. In total, there are more than 90 different casework systems that feed data into this network. Two central components amongst them are the Person Centric Data Platform, which holds historic records from legacy systems and live records from new applications, and the Immigration and Asylum Biometric System, which holds the facial image and finger scans.

As my colleague Monique Hawkins and I show in a paper recently published in the Journal of Immigration, Asylum and Nationality Law, this design proved to be prone to glitching when originally rolled out for the European Union Settlement Scheme. Our paper argues that glitching, albeit marginal in the sense that it affects a minority of users, is nonetheless systemic because it results from the design and configuration of digital status services. This argument is built on hundreds of cases reported by status holders and legal representatives to the3million, a civil society organisation and a strategic research partner on the Algorithmic Politics and Administrative Justice project.

Based on that evidence, we outline a typology of glitches. They include problems with service availability or user login, as well as errors with profile maintenance or status sharing. In the most serious cases user profiles can become entangled with each other due to problems with data linking. When viewing or sharing status after login, such users see someone else’s photo, name or visa in their own profile. A whistleblower report earlier this year suggested this type of glitch, which the Home Office refers to as a merged identity, was affecting more than 76,000 people in early 2024. The Home Office later disclosed that it had identified around 46,000 ‘records with an identity issue’ and managed to fix some of them, but not others. And that was earlier this year, before the estimated four million users with expiring biometric residence permits were added to the millions of those who have to rely on digital services to prove their status.
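One plausible way such ‘merged identities’ can arise is through over-eager record linkage across the many casework systems feeding the platform: if records are matched on coarse fields such as name and date of birth, two different people can be collapsed into one profile. The sketch below illustrates that failure mode in the abstract; the matching rule is invented and is not based on the Home Office’s actual linkage logic.

```python
# Abstract illustration of how naive record linkage can 'merge' two people.
# The matching rule here is invented; it is not the Home Office's.
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    system: str
    full_name: str
    date_of_birth: str
    nationality: str

def naive_match(a: Record, b: Record) -> bool:
    """Link records that share a normalised name and date of birth --
    a rule that looks reasonable but conflates genuinely distinct people."""
    return (a.full_name.strip().lower() == b.full_name.strip().lower()
            and a.date_of_birth == b.date_of_birth)

# Two different people who happen to share a common name and birth date.
person_a = Record("legacy_caseworking", "Maria Silva", "1990-04-12", "BR")
person_b = Record("new_applications",   "Maria Silva", "1990-04-12", "PT")

if naive_match(person_a, person_b):
    # In a live system this would attach both histories to a single profile,
    # so one person could be shown the other's photo, visa or refusal.
    print("Records linked: one merged profile created for two distinct people")
```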

Fundamentally, these problems stem from the specific design of digital status services. The Home Office insists the system must reflect the current immigration status of the status holder. In 2023 the department reaffirmed its commitment ‘to a digital system of real-time checks’ and said it ‘will not compromise on this principle’. This necessitates ongoing computation of identity and immigration data processed on different systems that handle immigration transactions: applications for grants and upgrades of status, reviews and appeals of caseworker decisions, updates of images and documents linked to the user’s account, and so on. There is always a risk this computation will go wrong – and that if it does, the holder is locked out of their status as they are trying to evidence it.

This is why View and Prove should not be seen as a digital immigration document. Rather, it is the online interface of a transactional system set to replace immigration documents. This system does not swap tangible tokens of status – residence cards or visa stickers – for digital tokens. Instead, it mandates online checks of immigration status in real time. This system does not come with any document that can be stored on a personal device or reproduced on a physical medium. The proof of immigration status is produced on the screen in the moment of the check – and it vanishes into the cloud of Home Office servers as soon as the check is done.

Kuba Jablonowski (he/him) is a Lecturer in Digital Sociology at the School of Sociology, Politics and International Studies at the University of Bristol. His research investigates the design and operation of digital identity systems in the context of governance, and he approaches the border as a site of identity production rather than a device of mobility control. To generate and disseminate research findings, Kuba collaborates with civil society, the civil service, private actors and the media.


Who’s in the fast lane? Will new border tech deliver seamless travel for all?

Migration, Mobilities and Digital Technologies – a special series published in association with the ESRC Centre for Sociodigital Futures.

By Travis Van Isacker.

As part of my research on digitised borders for the ESRC Centre for Sociodigital Futures, I have spent the past year attending border industry conferences to understand the claims the industry makes about the future. Listening to keynotes and speaking with industry professionals, I have learned that the border crossing of the future is imagined as a ‘seamless’ one, devoid of gates, booths, even border officers. In their place will be ‘biometric corridors’ lined with cameras observing people as they move. Captured images will be fed back to computer systems matching facial biometrics ‘on the fly’ with those held in a database of expected arrivals. Passports will not be needed as travellers will digitally share all necessary information – biographical details, images of faces and fingerprints, travel permissions – with states and carriers before a trip. There will be no need to stop and question people at the border as they will have been vetted and pre-authorised before leaving their homes. Any people considered ‘risky’ will not even be able to book tickets. The goal: ‘good’ travellers won’t feel the cold gaze of the border’s scrutiny, nor be slowed down.
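Stripped of the marketing, matching ‘on the fly’ is a nearest-neighbour search: each captured face is converted into a numerical embedding and compared against the embeddings of expected arrivals, with a similarity threshold deciding whether the traveller is waved through or diverted to manual checks. The sketch below shows that bare logic with made-up vectors; real systems use learned face encoders and far larger galleries.

```python
# Bare-bones illustration of 'on the fly' face matching: cosine similarity
# between a captured embedding and a gallery of expected arrivals. The
# embeddings here are made up; real systems use learned face encoders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.9  # tuning this trades false matches against missed ones

# Gallery of embeddings for travellers expected on today's arrivals.
expected_arrivals = {
    "passenger_001": np.array([0.12, 0.88, 0.47, 0.05]),
    "passenger_002": np.array([0.91, 0.10, 0.33, 0.72]),
}

def identify(captured_embedding: np.ndarray) -> str | None:
    """Return the best-matching expected traveller, or None if nobody clears
    the threshold -- in which case the person is diverted to manual checks."""
    best_id, best_score = None, -1.0
    for traveller_id, gallery_embedding in expected_arrivals.items():
        score = cosine_similarity(captured_embedding, gallery_embedding)
        if score > best_score:
            best_id, best_score = traveller_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None

camera_frame_embedding = np.array([0.13, 0.86, 0.50, 0.06])  # close to passenger_001
print(identify(camera_frame_embedding))                      # 'passenger_001'
print(identify(np.array([0.5, 0.5, 0.5, 0.5])))              # None: no confident match
```

The failure modes discussed later in this post follow directly from this logic: anyone without a usable gallery image, or whose capture falls below the threshold, drops out of the ‘seamless’ path.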

‘PREPARING FOR THE FUTURE: Seamless Traveller Journey’. World Travel & Tourism Council, April 2019 (image: original photograph cropped, from the World Travel & Tourism Council Flickr account)

This seamless vision is mostly shared by governments who do not want to give visitors a bad first impression and consider that most travellers do not pose any risk. However, there are still differing levels of enthusiasm for the seamless border depending upon, for example, a state’s economic dependence on tourism, or its concern about a terrorist attack. Curaçao recently launched its Express Pass to allow travellers to provide passport information and a selfie from home to then (hopefully) be quickly and automatically verified at the border. By contrast, the United States still does not use automated ePassport gates despite already biometrically verifying all travellers upon entry and exit.

‘Faster meets more secure’. Biometric Facial Comparison advertisement, Miami International Airport, June 2024 (image: author’s photograph)

There has long been a trade-off between speed and security at the border. Screening, searching and questioning people all take time, and that means disgruntled passengers, travel delays, economic loss and press headlines that can make a country less appealing for travel and investment. New digital border technologies promise to resolve this tension. Automated identification through biometrics claims to work with greater speed, accuracy and consistency than humans, allowing for increased security and expedited checks. Advanced data analytics makes a similar claim: knowing more about people earlier, and ‘risking’ them algorithmically, allows digital borders to automatically deflect those it considers undesirable at an earlier stage. By combining these systems, the border industry promises to reduce to seconds the time it takes average business travellers to clear immigration.

The seamless vision of the future border is sold to us all, but is it actually for everyone? A facial recognition system I observed in Miami, which allowed passengers disembarking cruise ships to bypass showing their passports to a border officer, failed to deliver for families with small children and for people who did not have a passport: there were no suitable images of their faces in the government’s database to match against. Later, I explained to border experts my difficulties in getting airlines to allow me to board flights to the UK with my European identity card. I was told by the vice-president of a facial biometrics company that it sounded as if I was ‘trying to make a point’. Why didn’t I just always carry my passport to avail myself of the eGates? For him it wasn’t only illogical that I wasn’t always able to adopt the most ‘seamless’ path, it was suspicious.

Technologies claiming to offer a seamless border crossing for everyone in fact create a two-tiered system. True, some experience an accelerated border crossing, but those who, for whatever reason, cannot satisfy the tech’s requirements are held up, if allowed to cross at all. The fact that border professionals are all in the first group means they usually don’t experience their systems not working for them.

Edited photograph of the arrivals hall of the Miami Cruise Terminal, June 2024. To the left are exit lanes with facial recognition tablets. To the right is the long queue of people waiting to see an officer (image: author’s photograph)

At industry events there is limited recognition of the fact that seamlessness is not necessarily what it claims to be. A senior manager for one of the world’s largest systems integrators (an IT and management consultancy contracted by states to implement border tech) admitted that, actually, a ‘frictionless’ border was not the end goal. Instead, the future border would be one that applies ‘variable friction’, easily speeding up or slowing down movement depending on who or what is crossing it.

Despite the hype around tech-enabled seamless crossings, there is nothing to guarantee that the widespread adoption of new digital border tech will necessarily take us towards that future. Just recently, the UK Home Office took Jordan off its list of countries allowed to use its new Electronic Travel Authorisation (which uses an app to biometrically enrol people’s faces at home) due to an increase in the number of Jordanian nationals travelling to the UK to claim asylum, and for purposes other than what is permitted under visitor rules. Perhaps the clearest example is the European Union’s new Entry/Exit System (EES), which will require third-country nationals to provide face and fingerprint biometrics against which their entry to and exit from the Schengen bloc will be verified. This system promises eventually to allow faster automated crossings but is primarily intended to identify overstayers and strengthen border checks, especially against criminal records databases, which are centred on fingerprints. Despite state-of-the-art biometric enrolment kiosks and tablets, there have been nightmarish predictions of the disruption that will be caused when suddenly everyone entering or leaving the EU has to stop to provide their biometrics at the border, especially at the Port of Dover. The fact that the implementation of EES has been delayed for years and was recently postponed again, without a new timeline, shows just how contingent the future border is.

Sign announcing works in the Port of Dover for Entry/Exit System’s enrolment zone, August 2024 (image: author’s photograph)

State initiatives to increase friction at the border often frustrate those working to develop and sell new border technologies. Some I spoke with believe quite wholeheartedly that everyone will soon be able to cross borders without even realising it, in large part thanks to their inventions. They like to think our modern, globalised world has moved on from the need for severe mobility restrictions and heavy-handed border controls, and that we would all be more prosperous if everyone could travel more easily. Unfortunately, the opposite appears to be true. With anti-immigration rhetoric increasing in the world’s richer countries, the EU is currently facing the collapse of restriction-free travel within the Schengen area (itself enabled by immense databases of people considered risky and/or foreign, built in the early 2000s). However, luckily for the borders’ builders, their products – originally developed for applications in defence, security and policing, and designed to better identify and surveil individuals – are just as suited to a future in which states are walled off from one another and movement between them is heavily monitored and restricted. Whether and when this future vision is promoted instead of seamlessness largely depends on the political moment and the intended audience.

Travis Van Isacker is a Senior Research Associate at the University of Bristol working in the Moving Domain of the ESRC Centre for Sociodigital Futures. His research focuses on the transformation of border infrastructures through the application of new digital technologies. Travis has written previously for the MMB blog on ‘Environmental racism in the borderland: the case of Calais‘.

Find out more about the ESRC Centre for Sociodigital Futures here.
