Background

Terrorist risks and threats are increasingly identified and countered through new forms of data analytics made possible by rapid advances in machine learning (ML) and artificial intelligence (AI). Private actors, including social media platforms, airlines and financial institutions, now actively collaborate with states and international organisations (IOs) to implement ambitious data-led security projects in support of global counter-terrorism efforts. For example, Passenger Name Record (PNR) data from the aviation industry is routinely analysed to identify suspicious ‘patterns of behaviour’ and control the movements of ‘risky’ travellers.
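
To make the kind of pattern analysis at stake concrete, the sketch below shows how a simple risk-scoring routine over PNR-style fields might work. It is purely illustrative: the field names, weights and cut-off are hypothetical inventions for this example, and the criteria used in real PNR systems are neither this simple nor publicly known.

```python
# Illustrative sketch only: a toy rule-based risk scorer over PNR-style fields.
# All field names, weights and thresholds are hypothetical; the criteria of
# real PNR analytics systems are classified and far more complex.
from dataclasses import dataclass


@dataclass
class PNRRecord:
    one_way_ticket: bool
    paid_in_cash: bool
    booked_days_before_departure: int
    route_flagged: bool  # hypothetical indicator for a watched route


def risk_score(record: PNRRecord) -> float:
    """Combine weighted indicators into a single 'risk' score in [0, 1]."""
    score = 0.0
    if record.one_way_ticket:
        score += 0.3
    if record.paid_in_cash:
        score += 0.3
    if record.booked_days_before_departure < 2:  # last-minute booking
        score += 0.2
    if record.route_flagged:
        score += 0.2
    return score


# A traveller is flagged for secondary screening above a chosen cut-off.
traveller = PNRRecord(one_way_ticket=True, paid_in_cash=True,
                      booked_days_before_departure=1, route_flagged=False)
if risk_score(traveller) >= 0.5:
    print("flag for secondary screening")
```

Even in this toy form, the weights and cut-off embody contestable judgements about what counts as ‘suspicious’ – judgements that become far harder to scrutinise once they are learned from data rather than written down.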

Data collated into terrorist watchlists and databases are subjected to ML techniques in innovative and experimental ways to predict ‘candidates of interest’ and identify ‘future terrorists’ in advance of travel. Internet platforms are deploying ML – augmented by teams of human content moderators and a complex array of private rules, technical protocols and infrastructures – to identify and remove online ‘terrorist’ and ‘extremist’ content that is being created and disseminated at an unprecedented global scale. These data-driven techniques are gaining the status of ‘best practice’, with the UN Security Council recently calling on all states to ‘intensify and accelerate the exchange of operational information’ about suspected terrorists and to develop systems for collecting and sharing biometric data to counter potential threats (UNSCR 2178; UNSCR 2396).
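
A minimal sketch of the predictive step described above, assuming a generic supervised-learning pipeline (scikit-learn’s LogisticRegression) and entirely synthetic features and labels – it represents the general technique of scoring ‘candidates of interest’, not any actual watchlist system:

```python
# Illustrative sketch only: training a classifier on watchlist-style records
# to score new 'candidates of interest'. Features, data and labels are all
# synthetic stand-ins; no real system's inputs are represented.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per record: [prior_watchlist_hits, flagged_contacts,
# cross_border_trips_last_year]; label 1 = previously designated 'of interest'.
X_train = [
    [0, 0, 1],
    [2, 3, 4],
    [0, 1, 0],
    [3, 5, 6],
]
y_train = [0, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# The model assigns a probability to an unseen individual -- a prediction made
# in advance of any travel or conduct, which is what raises the accountability
# questions discussed below.
new_record = [[1, 2, 3]]
print(model.predict_proba(new_record)[0][1])  # P('candidate of interest')
```

The point of the sketch is the shape of the practice: once trained, the model produces scores for people who have done nothing yet, on the basis of correlations in past data.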

These practices rely upon and foster the development of new and far-reaching global information infrastructure projects – with states, IOs and private actors collaborating across borders and jurisdictions to extract, exchange and interconnect vast amounts of data, using sophisticated techniques of predictive analytics with the aim of pre-emptively identifying and countering potential threats before they can materialise. Yet the broader implications of these shifts – for how international law and global governance are practised, how human rights are protected, and how powerful actors are held accountable – remain deeply uncertain.

How is global security governance by data transforming the world of actors and reshaping relations between states, IOs, platforms and individuals? What novel governance techniques and knowledge practices are being enacted in the rapid turn to AI systems in global security law and governance? The question of how ‘the law’ should respond to technological change is often asked. But more pressing questions – about how legal frameworks and security practices are themselves being put ‘into motion’ and reconfigured through AI systems and datafication processes – remain open. And whilst the potential problems that AI-based security systems pose (discrimination, bias and privacy violations) are becoming clearer, the solutions to these issues remain elusive – especially in the security domain, where secrecy is paramount and the inner workings of algorithms are ‘black-boxed’ even more than usual.
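
One of the named problems – discrimination and bias – can be made concrete with a small sketch. Assuming historical screening labels that were themselves skewed against one (synthetic) group, a standard classifier simply learns and reproduces that skew:

```python
# Illustrative sketch of how bias enters a trained system: if past screening
# disproportionately flagged one group, a model trained on those labels
# reproduces the disparity. Groups, features and labels are entirely synthetic.
from sklearn.tree import DecisionTreeClassifier

# Each record: [group_indicator, benign_behavioural_feature]
X = [[0, 1], [0, 2], [0, 3], [1, 1], [1, 2], [1, 3]]
# Historical labels: members of group 1 were flagged regardless of behaviour.
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier().fit(X, y)

# The tree splits on the group indicator alone: the 'prediction' encodes the
# historical practice it was trained on, not anything about behaviour.
print(model.predict([[1, 2]]))  # -> [1]
```

When such a model sits inside a classified security pipeline, this circularity is precisely what affected individuals cannot see, let alone contest.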

Little is known about how individuals, groups and populations come to be classified as ‘risky’ or dangerous, let alone whether they can meaningfully challenge targeting decisions, especially those made via automated ML processes and predictive analytics. Can rights protections be designed into AI-based systems and global security infrastructures, or do human rights frameworks themselves need to be reconfigured in ways capable of taking into account our changing socio-technical architecture?