Law and the Future of War
Through conversation with experts in technology, law and military affairs, this series explores how new military technology and international law interact. Edited and produced by Dr Lauren Sanders, the podcast is published by the Asia-Pacific Institute for Law and Security. Until July 2024, the podcast was published by the University of Queensland School of Law.
Law and the Future of War
AWS, the Alignment problem and regulation - Brendan Walker-Munro and Sam Hartridge
In this interview, we are continuing our series on legal review of AWS, and speaking with two of the Law and Future of war research team, about an issue that impacts the design approaches to AWS: the alignment problem. In May 2023, there were reports of an AWS being tested, that turned upon its operator, and eventually cut its communications links so it could go after its originally planned mission... this prompted discussion about the alignment problem with AWS, impacting future TEVV strategies and regulatory approaches to this technology.
The conference referred to in the episode can be found in the notes to the attached link, with relevant excerpts extracted below: - Highlights from the RAeS Future Combat Air & Space Capabilities Summit (aerosociety.com):
'Could an AI-enabled UCAV turn on its creators to accomplish its mission? (USAF)
[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". ]
Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, ... cautioned against relying too much on AI noting how easy it is to trick and deceive.
... Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
Dr Brendan Walker-Munro is a Senior Research Fellow with the University of Queensland's Law and the Future of War research group. Brendan's research focus is on criminal and civil aspects of national security law, and the role played by intelligence agencies, law enforcement and the military in investigating and responding to critical incidents. He is also interested in the national security impacts of law on topics such as privacy, identity crime and digital security.
Dr Sam Hartridge is a post-doctoral researcher at the University of Queensland. His research is currently examining the interplay between technical questions of AI safety, AI risk management frameworks and standards, and foundational international and domestic legal doctrine.
Additional Resources:
- Autonomy in weapons systems: playing catch up with technology - Humanitarian Law & Policy Blog (icrc.org)
- Striking Blind | The Forge (defence.gov.au)Concrete Problems in AI Safety (arxiv.org)
- The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents (researchgate.net)
- The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities (arxiv.org)