Risks and rewards of AI for organised crime in Africa
Artificial intelligence is developing rapidly, presenting new threats from criminals and new opportunities for law enforcement.
In 2019, criminals used artificial intelligence (AI) to impersonate a chief executive’s voice and defraud a British company of US$243 000. In South Africa, impersonation fraud increased by 356% between April 2022 and April 2023, according to the Southern African Fraud Prevention Service. The organisation’s Executive Director, Manie van Schalkwyk, said a 2021 Interpol report ranked South Africa first in Africa for cyber threats.
Technological advances are making these crimes easier to commit and making fakes harder to distinguish from the real thing. The widespread personal use of AI applications such as ChatGPT and resemble.ai heightens concerns about potential abuse.
While AI is still nascent in Africa, it has potentially positive and negative implications for transnational organised crime. AI technologies help organised criminal groups to commit crimes that are more complex, at a greater distance, and involve less physical risk. Researchers say criminals use AI in the same ways as legitimate companies, for ‘supply chain management, risk assessment and mitigation, personnel vetting, social media data mining, and various types of analysis and problem-solving.’
Organised criminals in Africa use drones for intelligence, surveillance and reconnaissance. Drug cartels in Mexico already use autonomous attack drones under AI control, providing flexibility and coordination during physical attacks on human, supply chain or infrastructure targets.
Satellite imagery can help criminals plot and manage smuggling routes with AI systems
Satellite imagery can help criminals plot and manage smuggling routes with AI systems that draw on Earth observation data, which provides accurate and near real-time local terrain information. Organised criminals can also attack AI systems to evade detection (e.g. biometric screening processes), circumvent security systems at banks, warehouses, airports, ports and borders, or wreak havoc with private sector and government networks and economic infrastructure.
AI-enabled attacks on confidential personal databases and applications allow criminals to extort or blackmail victims to generate income or political leverage. Criminals can use deepfake technology to access money by impersonating account holders, to gain entry to secure systems, and to manipulate political support through fake videos of public figures or politicians speaking or acting reprehensibly.
However, AI also provides new ways for law enforcement to police crime. It can map movements, identify patterns and anticipate, investigate and prevent crime. Predictive policing uses AI algorithms to estimate where crime is likely to occur. But this comes with associated harms, including discriminatory policing patterns.
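To see the basic mechanics, and the pitfall, consider a deliberately simplified sketch of grid-based hotspot prediction in Python. The incident coordinates, grid size and ranking method are invented for illustration; real predictive policing products are far more elaborate, but they share the core step of projecting past incident patterns forward.

```python
# A minimal, hypothetical sketch of grid-based 'hotspot' prediction:
# count past incidents per map cell and rank cells by frequency.
# The incident data and grid size are fabricated for illustration.
from collections import Counter

# (latitude, longitude) of past incidents -- invented sample data
incidents = [(-26.204, 28.047), (-26.205, 28.046), (-26.195, 28.034),
             (-26.204, 28.048), (-26.190, 28.030)]

CELL = 0.01  # grid cell size in degrees (roughly 1 km)

def cell_of(lat, lon):
    """Snap a coordinate to its grid cell."""
    return (round(lat / CELL), round(lon / CELL))

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# Cells ranked by historical incident count: the 'predicted' hotspots
for cell, n in counts.most_common(3):
    print(f"cell {cell}: {n} past incidents")
```

Because such a model can only replay where incidents were previously recorded, over-policed areas tend to remain ‘high risk’, which is one mechanism behind the discriminatory patterns noted above.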
In Africa, private security companies are often more technologically advanced than police forces, many of which lack basic internet access, technology resources and capacity. But private sector digital systems can be linked to police databases or used to prosecute suspects.
AI can be used against organised crime in remote areas, collecting data from ranger patrols and spatial sources
For example, VumaCam’s licence plate recognition system in Johannesburg uses over 2 000 cameras and is connected to the South African Police Service’s national database of suspicious or stolen vehicles. Bidvest Protea Coin’s Scarface in South Africa uses facial recognition for the real-time detection of potential suspects. Its data can be used as evidence in criminal cases.
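The matching step at the heart of such systems is conceptually simple, as this Python sketch suggests. The plate numbers and watchlist are invented, and the camera feed, optical character recognition and the actual SAPS database integration are omitted.

```python
# A toy sketch of the database-matching step behind a system like
# VumaCam's: compare plates read by cameras against a watchlist of
# flagged vehicles. All plate values here are hypothetical.
WATCHLIST = {"ABC123GP", "XYZ789GP"}  # invented flagged plates

def check_plate(plate: str) -> bool:
    """Return True if a camera-read plate matches the watchlist."""
    return plate.strip().upper() in WATCHLIST

# Simulated stream of plate readings from roadside cameras
for reading in ["abc123gp", "DEF456GP", "XYZ789GP"]:
    if check_plate(reading):
        print(f"ALERT: flagged vehicle {reading.upper()} detected")
```

In practice, the hard engineering lies in reading plates accurately from moving vehicles in variable conditions, not in the database lookup itself.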
AI can also be used against organised crime in remote areas. EarthRanger uses AI and predictive analytics to combine historical and real-time data from wildlife, ranger patrols, spatial sources and observed threats in protected environmental areas. The technology has helped dismantle poaching rings in Grumeti Game Reserve in Tanzania and encouraged local communities to coexist with protected wildlife in Liwonde National Park, Malawi.
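One signal such a platform could derive from wildlife collar data can be sketched in a few lines of Python. EarthRanger’s actual analytics are not reproduced here; the coordinates, time intervals and alert threshold below are invented to show the idea of flagging an animal that suddenly stops moving, which may indicate a poaching incident.

```python
# A toy illustration of anomaly detection on animal GPS tracks:
# flag a collared animal whose fixes stop moving. All values invented.
import math

def dist_km(p, q):
    """Rough distance between two (lat, lon) points in km."""
    dlat = (q[0] - p[0]) * 111          # ~111 km per degree latitude
    dlon = (q[1] - p[1]) * 111 * math.cos(math.radians(p[0]))
    return math.hypot(dlat, dlon)

# Hourly GPS fixes for one collared animal (fabricated)
fixes = [(-14.80, 35.30), (-14.79, 35.32), (-14.78, 35.33),
         (-14.78, 35.33), (-14.78, 35.33), (-14.78, 35.33)]

moves = [dist_km(a, b) for a, b in zip(fixes, fixes[1:])]
# Alert if the animal has barely moved over the last three intervals
if sum(moves[-3:]) < 0.1:
    print("ALERT: tracked animal stationary -- dispatch ranger patrol")
```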
The Africa Regional Data Cube is an AI system that layers 17 years of satellite imagery and Earth observation data for five African countries (Kenya, Senegal, Sierra Leone, Tanzania and Ghana). It stacks 8 000 scenes across a time series and makes the compressed, geocoded and analysis-ready data accessible via an online user interface. By comparing land changes over time, its data can, for example, help identify and track illegal mining operations.
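The kind of change detection a time-stacked cube enables can be illustrated with a short Python sketch. The array below is random stand-in data rather than real imagery, and the vegetation-loss threshold is made up, but the core operation is the same: difference time slices of a (time, y, x) cube and flag cells that changed sharply.

```python
# A toy illustration of data-cube change detection: compare a
# vegetation index between the earliest and latest scenes and flag
# cells with heavy vegetation loss, which can indicate land clearing
# such as illegal mining. The data here is random, not real imagery.
import numpy as np

rng = np.random.default_rng(0)
# Cube shape (time, y, x): e.g. yearly vegetation-index composites
cube = rng.uniform(0.2, 0.9, size=(17, 50, 50))
cube[-1, 10:15, 20:25] = 0.05  # simulate a cleared patch in the final year

loss = cube[0] - cube[-1]          # index drop over the full period
flagged = np.argwhere(loss > 0.5)  # cells with severe vegetation loss

print(f"{len(flagged)} cells flagged for inspection")
```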
Although AI can help tackle organised crime in Africa, it comes with several limitations – and risks. AI needs access to an uninterrupted power supply, a stable internet connection and the ability to store and process vast quantities of data. That means its rollout across Africa will be uneven, depending on countries’ resources, law enforcement capabilities and willingness to work through public-private partnerships.
The Africa Regional Data Cube can, for example, help identify and track illegal mining operations
Another risk is over-reliance on AI in addressing organised crime, since AI systems can be attacked or can fail. The technologies can also give law enforcement agencies enormous powers, which could violate citizens’ rights to privacy, freedom of assembly and association.
Because AI continuously evolves, legal frameworks are always trying to catch up. Private companies and even governments may capitalise on this to circumvent privacy concerns. VumaCam has come under scrutiny for collecting potentially sensitive data on individuals with no links to crime.
Authoritarian governments could use legitimate AI systems to monitor political opponents or suppress criticism from civil society. Human rights advocates in Zimbabwe worry about the government’s implementation of Chinese-developed facial recognition software and the ownership and potential use(s) of this data.
In September 2021, then United Nations (UN) High Commissioner for Human Rights, Michelle Bachelet, called for a moratorium on governments’ use of AI that threatens or violates human rights. In March this year, major AI developers called for a pause on giant AI experiments to allow for rigorous safety protocols and oversight mechanisms to be developed.
But AI is developing fast, both in scope and reach. International bodies, governments and civil society must keep pace in establishing responsible and ethical AI principles. And laws need to be developed to investigate, prosecute and punish those using AI for criminal and violent ends.
Romi Sigsworth, Research Consultant, ENACT, ISS
This article was first published by ENACT.
Image: © Alamy Stock Photo
Exclusive rights to re-publish ISS Today articles have been given to Daily Maverick in South Africa and Premium Times in Nigeria. For media based outside South Africa and Nigeria that want to re-publish articles, or for queries about our re-publishing policy, email us.
Development partners
ENACT is funded by the European Union and implemented by the Institute for Security Studies in partnership with INTERPOL and the Global Initiative against Transnational Organized Crime. The ISS is also grateful for support from the members of the ISS Partnership Forum: the Hanns Seidel Foundation, the European Union, the Open Society Foundations and the governments of Denmark, Ireland, the Netherlands, Norway and Sweden.