Putting tech in its place
As human relations become digitised, the controls and human values that underpin tech must be preserved.
Published on 03 December 2019 in ISS Today
By Karen Allen
When J Robert Oppenheimer, the man hailed as the father of the first atomic bomb in the 1940s, realised that the tool he had created was being put to devastating effect in Japan, he committed his later years to developing controls on nuclear proliferation. This serves as a useful reminder of the power of tech and the need to ‘tame’ technology, or at least retain a human dimension to its development and application.
Representatives from South Africa and 22 other states, including the P5, are scheduled to meet this month to mull over these moral dilemmas as part of the United Nations General Assembly’s Group of Governmental Experts on Advancing responsible State behaviour in cyberspace in the context of international security.
Their task is to help devise norms for states on how they apply existing international law principles to developments in information and communications technology. They must also determine whether the global community needs new rules of engagement or whether existing legal frameworks are sufficient. It’s a tough task, not least because of divergent opinions among states on how to balance human rights, data security and privacy.
Given the growing digitisation of human relations, and people’s seeming inability to ‘opt out’ of this new digital ecosystem, there is much talk about ‘putting tech in its place’. The current debate is about how to ensure that the human safeguards, controls and values that underpin the tech are not lost.
How do you hold a killer robot or lethal autonomous weapon to account when things go wrong?
Our human-centred international order is designed to keep power in check and hold states and individuals to account. Autonomous technologies are disrupting that. For example, how do you hold a so-called killer robot or lethal autonomous weapon to account when things go wrong?
During the 2003 Iraq War, people watched in horror as a pre-programmed United States Patriot missile battery, part of a missile defence shield, shot down a British Royal Air Force Tornado jet as it returned to the air base where I was embedded. It killed the aircraft’s pilot and navigator, with whom I had shared a coffee just hours earlier. It was a dreadful ‘accident’.
The plane was mistaken for an incoming missile and the pre-programmed machine reacted. This was before the days of artificial intelligence as we know it, and serves as a salutary reminder that machines cannot always distinguish the nuances that shape us as human beings.
In this digital age, the need to consider revising the rules of the global game, and how interactions take place, is no longer confined to state-on-state behaviour. It also touches on how industry, militaries, governments, armed groups, civil society and the media interact with one another.
The growing use of drones and increased autonomy could lead to perceptions of casualty-free warfare
A recent conference in Stockholm was dominated by the question of how to ensure that at a time of increased machine autonomy, human control and decision making retains primacy in policymaking. Among the vexing questions is what balance to strike between human and artificial intelligence. Specifically, how can we audit artificial intelligence, or subject it to the rule of law, when we increasingly rely on it to help in vital decision making?
The debate about putting the ‘human dimension’ back into tech focuses on questions of control. Surrendering complete control to machines in an era of artificial intelligence also affects the way government policy is conducted in peacetime. New autonomous technologies may shape policing or counter-terrorism strategies that decide, based on our ‘score’, whether we are considered a threat, and determine who should be detained or targeted and who should not.
The algorithms already being used in decision making, including in decisions that help save lives, have a flip side which arguably intrudes on our privacy. They can help determine whether we are likely to reoffend, or display personality traits that mean we can be easily radicalised.
But machines can’t read between the lines or operate in the grey zone of uncertainty. The international community is confronted with the challenge of setting limits on the amount of autonomy society is prepared to cede to machines while protecting human security. In terms of checks and balances, how do we define privacy at both the international and domestic levels?
Autonomous technologies are disrupting our human-centred international order
Experts remind us that as technology has developed, so too have the legal definitions of what constitutes public and private space. For African countries like South Africa that seek to centre human rights in their policy, there is a case for them to assert themselves at a time when states with divergent opinions on privacy and security are deepening their business interests in Africa.
At the Stockholm conference, UN High Representative for Disarmament Affairs Izumi Nakamitsu warned that the growing use of unmanned aerial vehicles, or drones, and increased autonomy could lead to perceptions of casualty-free warfare. She also cautioned that ‘the possibility of third parties with malicious intent interfering in control systems to incite conflict cannot be discounted.’
Without human controls, Nakamitsu said, artificial intelligence in the digital space threatens to ‘exacerbate political divisions … even in the most benign of international environments’. So emerging technologies in the digital sphere may act as an accelerant to simmering tensions, leaving governments unable to react as quickly as machines.
A recent report by the International Committee of the Red Cross warns that, ‘[Artificial intelligence] and machine-learning systems remain tools that must be used to serve human actors, and augment human decision makers, not replace them.’
In a world where the internet of things could enable a refrigerator or any other wireless domestic appliance to be remotely captured, weaponised and used to cause mass destruction, the need for human-centred technology grows more pressing.
Karen Allen, Senior Research Adviser, Emerging Threats in Africa, ISS Pretoria