Interoperability, multi-agency sensemaking and the potential of AI for more politically feasible & effective strategies and operations

Project Live

PROJECT TEAM

Professor Christopher Baber

University of Birmingham

Contact: c.baber@bham.ac.uk

Professor Chris Baber is Chair of Pervasive and Ubiquitous Computing in the School of Computer Science at the University of Birmingham. He joined the University of Birmingham in 1990 and, after working in several Engineering schools, joined the School of Computer Science in 2018. His research concerns human interaction with technology, specifically how people form teams with intelligent technology and sensor-based human-technology interaction. He has published over 100 papers in international journals, as well as over 400 conference contributions and half a dozen books. His research has been funded by the UK Ministry of Defence, RCUK, the European Union and various industries. He is a member of the University’s new Institute for Interdisciplinary Data Science and AI and the Institute for Global Innovation.

 
Professor Andrew Howes

University of Birmingham

Contact: a.howes@bham.ac.uk

Professor Andrew Howes is Chair of Human-Computer Interaction in the School of Computer Science at the University of Birmingham. He is interested in the application of computational thinking to explaining human behaviour and how to design tools that help people make better decisions. He is a member of the University’s new Institute for Interdisciplinary Data Science and AI.

 
Professor Heather Marquette

University of Birmingham

Contact: h.a.marquette@bham.ac.uk

Professor Heather Marquette is the Director of the Serious Organised Crime & Anti-Corruption Evidence (SOC ACE) research programme. She is Professor of Development Politics in the International Development Department at the University of Birmingham and is seconded part-time to FCDO’s Research and Evidence Directorate as Senior Research Fellow (Governance and Conflict). In addition, she is an Expert Member of the Global Initiative Against Transnational Organised Crime’s expert network, a member of the RUSI State Threats Task Force and a Lead Advisor and founding member of the global Thinking & Working Politically Community of Practice. Her research, which has been funded by the British Academy/Global Challenges Research Fund, DFID/FCDO, DFAT and the EU, focuses on transnational threats, particularly corruption and organised crime, as well as aid and foreign policy, governance and political analysis. She is a member of the University’s Institute for Global Innovation.

 

PROJECT SUMMARY

This project aims to better understand some of the interoperability challenges for improved multi-agency thinking and working. 

While a growing body of evidence suggests we need to develop more problem-driven, politically feasible strategies and operations on organised crime and corruption, there are often differences of opinion in multi-agency teams on what this means and what is needed. Some may translate this as a need to develop responses that reflect contextual realities on the ground and are politically feasible within that context, while others may believe it means that we need more political influence to convince or press local counterparts to focus on our priorities. 

Because of this, we are also likely to find framing challenges among the different agencies and teams involved, in terms of:

  • how they define the problem (for example, one of security, politics, society, economics and so on);

  • what they think of as the right starting point or solution (for example, military, law enforcement, conflict prevention, diplomacy, aid, civil society, social policy, psychology and so on);

  • where they see the ethical and moral parameters for strategy and action;

  • what assumptions and typical mental models they bring;

  • what the primary purpose of analytical products is (for example, to inform a short-term operational response versus to develop a longer-term approach to tackling the underlying causes and drivers of particular threats); and so on.

In other words, there appears to be a typical interoperability challenge at play here.

There are well-known problems in human decision-making at both individual and group levels, and sensemaking is useful as a first pass in framing the situation. As our previous research has shown, ‘Sense-making happens when you experience a 'gap', or contradiction, in your understanding of the context in which you are currently acting; it is a means by which uncertainty or discomfort can be dealt with through the recruitment of prior experiences or new information’. We will apply Klein’s Data-Frame Model as a lens to explore how people working in multi-agency teams on SOC-related issues select ‘data’ (the evidence available to them) and combine them into a ‘frame’ (an explanatory model) in terms of their prior knowledge, beliefs and expectations. As new data become available, the frame can be elaborated or questioned. This offers a reasonable description of how experts deal with ambiguous and uncertain data.
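
To make the lens concrete, the following is a minimal illustrative sketch in Python (our own simplification, not project software or Klein’s formal specification) of the data-frame cycle the analysis will look for: new evidence either elaborates the current frame or triggers its questioning.

from dataclasses import dataclass, field

# Illustrative structures only; the names Datum, Frame and fits() are our
# own simplifications of Klein's Data-Frame Model, not part of it.

@dataclass
class Datum:
    source: str    # e.g. the agency supplying the evidence
    content: str   # the piece of evidence itself

@dataclass
class Frame:
    explanation: str                                  # current explanatory model
    expectations: set = field(default_factory=set)    # what the frame predicts
    supporting: list = field(default_factory=list)    # data absorbed so far

    def fits(self, datum: Datum) -> bool:
        # Placeholder test: does the new evidence match an expectation?
        return datum.content in self.expectations

def sensemaking_step(frame: Frame, datum: Datum) -> Frame:
    # One pass of the cycle: elaborate the frame if the datum fits it,
    # otherwise question the frame and construct an alternative.
    if frame.fits(datum):
        frame.supporting.append(datum)
        return frame
    # The 'gap' or contradiction that triggers sense-making: here we simply
    # replace the frame; in practice, prior experience and new information
    # are recruited to build the replacement.
    return Frame(explanation=f"reframed after conflicting report from {datum.source}")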

Drawing on semi-structured interviews and focus group discussions with SOC policymakers and practitioners from a range of agencies, we will adapt a model originally developed by Professor Baber for the National Cyber Security Centre, which focused on single-agency work on intelligence and AI. The team will design a workshop in which participants in multi-agency groups reflect on 'critical incidents' (that is, situations from their experience in which things did not go as planned or required adaptation). Once groups have a 'scenario' to work from, the next stage involves producing a timeline of events within the scenario, including mapping the different actors, groups, organisations and so on.

Following this preliminary stage, participants will reflect on which organisations are responsible for specific events and information, and on how these organisations share information. The result will be a set of high-level questions relating to information sharing, managing conflict or competition, and the different interpretations that organisations might apply to the same information. The final phase of the workshop will involve critically reviewing the scenarios, particularly in terms of bottlenecks in communication and the implications of missing or ambiguous information, and proposing potential solutions to these problems.
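
Purely by way of illustration, one hypothetical way the output of such a workshop could be captured for later analysis (the field names, agencies and events below are invented, not the project's actual coding scheme):

# Hypothetical encoding of one workshop scenario: a timeline of events,
# each tagged with the responsible organisation and what was shared.
# All names and events are invented for illustration.
scenario = {
    "title": "Illustrative critical incident",
    "timeline": [
        {"event": "Initial intelligence report received",
         "responsible": "Agency A",
         "shared_with": ["Agency B"],
         "gaps": ["source reliability unconfirmed"]},
        {"event": "Operational response planned",
         "responsible": "Agency B",
         "shared_with": [],
         "gaps": ["Agency A's longer-term analysis not requested"]},
    ],
}

# High-level questions can then be generated per event, for example by
# flagging points where information stopped moving between organisations.
for step in scenario["timeline"]:
    if not step["shared_with"]:
        print(f"Possible bottleneck: '{step['event']}' stayed with {step['responsible']}")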

Expected impact 

While generating useful insights in its own right, this scoping phase will also inform the potential development of a Cooperative AI system that draws on multiple possible versions of the workshop scenarios to create a 'many-worlds' perspective on a problem: from a 'world' in which all information is correct and unambiguous but organisations have competing aims, to others in which aims are aligned or information is ambiguous. The ambition for this Cooperative AI system is to bring multi-agency teams together more effectively and to improve decision-making for SOC strategies and interventions in the future.
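
As a rough illustration of the 'many-worlds' idea (the dimensions and labels below are our own assumptions, not a design for the system), scenario variants could be enumerated along the two dimensions mentioned above:

from itertools import product

# Illustrative only: enumerate variants ('worlds') of a workshop scenario
# along two dimensions, information quality and alignment of aims.
information_quality = ["correct and unambiguous", "ambiguous", "partly missing"]
organisational_aims = ["aligned", "competing"]

worlds = [{"information": info, "aims": aims}
          for info, aims in product(information_quality, organisational_aims)]

for world in worlds:
    print(f"World: information is {world['information']}; aims are {world['aims']}")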


PUBLICATIONS

