Inside Geneva

New wars, new weapons and the Geneva Conventions

April 30, 2024 SWI swissinfo.ch

In the wars in Ukraine and in the Middle East, new, autonomous weapons are being used. Our Inside Geneva podcast asks whether we’re losing the race to control them – and the artificial intelligence systems that run them.  

 “Autonomous weapons systems raise significant moral, ethical, and legal problems challenging human control over the use of force and handing over life-and-death decision-making to machines,” says Sai Bourothu, specialist in automated decision research with the Campaign to Stop Killer Robots.  

How can we be sure an autonomous weapon will do what we humans originally intended? Who’s in control? 

Jean-Marc Rickli from the Geneva Centre for Security Policy adds: “AI and machine learning basically lead to a situation where the machine is able to learn. And so now, if you talk to specialists, to scientists, they will tell you that it's a black box, we don't understand, it's very difficult to backtrack.” 

Our listeners asked: could an autonomous weapon show empathy? Could it differentiate between a fighter and a child? Last year, an experiment asked patients to rate chatbot doctors versus human doctors.

“Medical chatbots ranked much better in the quality. But they also asked them to rank empathy. And on the empathy dimension they also ranked better. If that is the case, then you opened up a Pandora’s box that will be completely transformative for disinformation,” explains Rickli.  

Are we going to lose our humanity because we think machines are not only more reliable, but also kinder? 

“I think it's going to be an incredibly immense task to code something such as empathy.  I think almost as close to the question of whether machines can love,” says Bourothu.  

Join host Imogen Foulkes on the Inside Geneva podcast to learn more about this topic.  

Please listen and subscribe to our science podcast -- the Swiss Connection. 

Get in touch!

Thank you for listening! If you like what we do, please leave a review or subscribe to our newsletter.

For more stories on international Geneva, please visit www.swissinfo.ch/

Host: Imogen Foulkes
Production assistant: Claire-Marie Germain
Distribution: Sara Pasino
Marketing: Xin Zhang

Transcript


Speaker 2:

This is Inside Geneva. I'm your host, Imogen Foulkes, and this is a production from Swissinfo, the international public media company of Switzerland.

Speaker 3:

In today's program, For the first time, the US Army tested a new radio target plane fitted with remote control.

Speaker 4:

You're ready for landing position? Any system that identifies human targets is concerning. It brings into question compliance with international humanitarian law principles such as precaution, distinction and proportionality.

Speaker 1:

The same technology that is making your life easier is being weaponized. That feature that unlocks your phone with your face? Here it is, attached to a self-learning machine gun.

Speaker 5:

The question is: is it ethical to be killed by a machine? And this is something to which there is no right or wrong answer.

Speaker 2:

Hello and welcome again to Inside Geneva. I'm Imogen Foulkes. Now, loyal listeners may remember that way back, when this podcast was in its infancy, we produced an episode looking at lethal autonomous weapons, or killer robots as some call them. What are they exactly? Who controls them? Who takes responsibility if something goes wrong?

Speaker 5:

Four years on, with new and terrible conflicts underway, and Russian forces deploying small, inexpensive, night-vision-equipped drones, we have seen at least a glimpse of what this new technology could mean for warfare.

Speaker 3:

There are screams of desperation and the sounds of silence as the Israeli drone circles overhead.

Speaker 2:

Today, we're going to take another look at all this and ask whether we have already lost meaningful control of the weapons we have developed. First, let's hear from Jean-Marc Rickli, professor of Global and Emerging Risks at the Geneva Centre for Security Policy, who told me that in many cases modern warfare is not as modern as we might think.

Speaker 5:

If you remember, the weapon that was responsible for the worst massacre after the Second World War was the machete, in Rwanda. So you know, high tech, but also very much low tech. And what we are seeing now is a combination of, yes, off-the-shelf technology, especially digital technologies, and especially in the field of artificial intelligence and in the field of cybersecurity. What we are witnessing now is that some of these technologies are being used and weaponized. So, for instance, if you take drones: drones have been there for a while, it's nothing new. Drones first appeared on the battlefield in the Vietnam War. But what is different is what happened at the turn of 2010.

Speaker 5:

You have this wave of democratization with portable drones, commercial drones that you can buy in any electronics shop in any country, and suddenly you had non-state actors that started to weaponize these drones. For instance, ISIS, during the Battle of Mosul, used DJI drones and replaced the camera with a small pod that could contain a hand grenade. And during the Battle of Mosul, up to 30 Iraqi soldiers lost their lives on a weekly basis. This was the first time that a non-state actor group managed to win tactical air supremacy over an adversary for a specific time. Ukraine is demonstrating this too: at the same time that you have the use of new and very advanced weapons, you also see scenes that you could have seen during the First World War, with trench warfare based on basically trying to wear down your adversary.

Speaker 3:

The war in Ukraine might look like something from the 20th century. You see a sort of trench-like landscape that would be familiar to soldiers.

Speaker 2:

So in some ways, wars have stayed the same. The age-old struggle for territory is being fought in age-old ways, in trenches and with mortars. But at the same time, cheap and cheerful technology is being weaponised, not just by warring states but also by non-state armed groups, some regarded as terrorists. The drones being used on the battlefields are not yet fully autonomous, but they soon could be. And right now in Vienna, an international conference is underway. It's called Humanity at the Crossroads, and its goal is to take a long, hard look at how these new weapons fit, if at all, into international law. I caught up with Sai Bourothu, specialist in automated decision research with the Campaign to Stop Killer Robots.

Speaker 4:

Autonomous weapons don't only threaten people in conflict. Think about future protests, border control, policing and surveillance, or even other types of technologies we use. What would it say about a society, and what impact would it have on the fight for ethical technology, if we let ultimate life-and-death decisions be made by machines? Whether autonomous weapons could ever be used in compliance with international humanitarian law is a question we are hoping that a new legally binding instrument on this issue, and global multilateral efforts to regulate these technologies, would answer in much more depth. There need to be restrictions and prohibitions in place that make sure certain decisions are taken with appropriate and meaningful human control, without which it would be really difficult to say that any weapon system could be used in compliance with international humanitarian law.

Speaker 3:

Killer robots, machines that hunt down and terminate humans, have caught the public's imagination.

Speaker 2:

So what about meaningful human control? What about compliance with international law, with the Geneva Conventions? Those are questions some of you, our listeners, sent in when we let you know we were going to discuss this subject again. So I asked Jean-Marc Rickli: where exactly does the control of these weapons lie?

Speaker 5:

Here you have the issue of accountability. If you have a soldier that fires a gun at another soldier, or at a civilian, the responsibility is easy to attribute. Now, when it comes to autonomous weapon systems, if we end up having these weapons, what you have to understand is that, unlike landmines, which are pre-programmed to do a specific task (explode when the weight on the mine changes), AI and machine learning basically lead to a situation where the machine is able to learn. So yes, you can program algorithms, but you're not programming the final outcome. You're programming the process, and that process then also changes. And so now, if you talk to specialists and scientists in AI, they will tell you that it's a black box. We don't understand it, and it's very difficult to backtrack the decision-making process.

Speaker 5:

So some people now are working on responsible artificial intelligence, where you can audit algorithms, but it's absolutely not easy to do that. So here the issue is: if you engage a weapon that is autonomous, who would be responsible? Would that be the manufacturer? Would that be the scientist who developed the algorithm? Would that be the commander in the field, or someone else? So here there are huge questions from a legal perspective, but also from an ethical perspective. AI will fundamentally alter every aspect of human life.

Speaker 3:

We do not fully know what it is that we do not know about AI.

Speaker 1:

It is as though we are building engines without understanding the science of combustion.

Speaker 2:

So I already feel we're on new territory from just four years ago, when Inside Geneva first looked at autonomous weapons. It's not just the weapons themselves, it's the systems, autonomous or even semi-autonomous, that run them. As Jean-Marc said, even the scientists developing them don't quite understand how machine learning works, which leads me to the uncomfortable conclusion that we really aren't in control. That's why Sai Bourothu believes even the tentative proposals to control these weapons risk becoming out of date.

Speaker 4:

Technology, and the understanding of how these technologies are to be regulated, have also changed quite drastically.

Speaker 4:

Certain benchmarks that were previously used to talk about these technologies as being ethical might no longer be valid at some point, so it's possible that we'll have to continuously evolve the ways in which we talk about regulating these technologies, or making these technologies more ethical. Hence we have continuously stressed that it is important that we act on this sooner rather than later, and that we create an open-ended conversation around the regulation of these technologies, to make sure it remains compatible with their future progress.

Speaker 5:

Artificial intelligence as an Israeli weapon of war and the real life and death consequences for Palestinians.

Speaker 3:

In a system dubbed Where's Daddy, bombings often took place when targets had been tracked to their family homes. At its peak, Lavender allegedly picked out 37,000 possible targets.

Speaker 2:

Perhaps that conversation Sai wants has been given new impetus by the revelations that Israel has been deploying a sophisticated system called Lavender, which uses artificial intelligence to select targets in Gaza. Critics say the collateral damage factored into hitting those targets, up to 100 civilians for one senior Hamas operative, it's claimed, is unacceptable and violates the Geneva Conventions' principles of distinction and proportionality. Jean-Marc Rickli again.

Speaker 5:

Israel, because of its peculiar geostrategic situation (it is a small country with no strategic depth, unlike Russia, for instance), has to compensate for that, and the way to do that is through technology. Israel has invested a lot in AI.

Speaker 5:

In 2020, there was an operation against Hamas, and the Jerusalem Post at the time titled an article "The First AI War", because they used algorithms to identify tunnels and operatives. Now, what has come out in this war is the use of that kind of target identification algorithm to a level we hadn't seen before, because basically what the Israelis have done is use an algorithm to identify Hamas operatives, up to 35,000 of them. So here again, what we are witnessing is technology helping in target identification and boosting the intelligence cycle. Obviously, with this algorithm, when military commanders identify a specific target, they can, according to the seniority of the target, ascribe in the parameters a level of collateral damage that would be deemed acceptable, and this level of collateral damage ranges from 10 to 130 or more people being killed for a single target. And obviously that also raises some ethical issues.

Speaker 1:

I love that I can unlock my phone with my face and that Google can predict what I'm thinking. But what if all this technology was trying to kill me?

Speaker 2:

These are the ethical questions that Sai and the Campaign to Stop Killer Robots are raising at that conference in Vienna. And the questions are not just about what these weapons do to the victims of war; it's also about what they do to those doing the fighting.

Speaker 4:

The use of target recommendation systems in the Gaza Strip is deeply concerning from a legal, moral and humanitarian perspective. Although the Lavender system is not an autonomous weapon, both raise serious concerns about the increasing use of artificial intelligence in conflict. These concerns include automation bias, digital dehumanization and the loss of human control in the use of force. Reports that Lavender has been used by the IDF to generate human targets are deeply troubling. The system reportedly makes targeting recommendations based on behavioral features, including communication patterns, social media connections and changing addresses frequently.

Speaker 2:

I mean there could be all sorts of reasons for that, not linked to terrorism.

Speaker 4:

Yes, and any system that identifies human targets is concerning. The major issue of concern is that there is a dilution of human control over target selection, and hence it brings into question compliance with international humanitarian law principles such as precaution, distinction and proportionality. But at the same time, it is also concerning because it dehumanizes people to a certain extent. Digital dehumanization is very real, and using data points to identify whether a person is a legitimate target in an attack might aggravate it further.

Speaker 2:

That brings me to another quite interesting question that we had from one of our listeners: could a machine show empathy?

Speaker 4:

Empathy is extremely difficult to define, culturally, spatially and temporally, because definitions of empathy have changed drastically over time, and they change drastically between cultures. So even at present we are not able, as humanity, to come to a uniform, universal definition of concepts that are so innate to human behavior, like empathy. I think it's going to be an incredibly immense task to code something such as empathy. It's, I think, almost as close to the question of whether machines can love.

Speaker 2:

So, if I understand this, we're talking about technology which was originally programmed by a human who may have his or her own bias, a bias which may impact negatively on those other humans whom the machine is attacking or tracking, or even looking after. And after that, the machine, with that inbuilt bias, starts learning on its own. Can a machine be programmed to have empathy, or can it learn it? Again, lots of our listeners asked about this. Jean-Marc Rickli told me something which I think might make you rather uneasy.

Speaker 5:

So, empathy. There was an experiment conducted last year where patients were asked to rate the quality of medical advice received from medical doctors and from medical chatbots. The chatbots ranked much better on quality. But they also asked them to rank empathy, and on the empathy dimension the chatbots also ranked better. The study would need confirmation with further experiments, but if it is validated, it would mean that a machine now has a better ability than a human being to relate to other human beings. If that is the case, then you open up a Pandora's box that will be completely transformative for disinformation.

Speaker 5:

Now, some people, when I make this argument, tell me: well, you know, a machine cannot be empathetic, because it works on different principles. And that might be right; it's like saying a machine cannot have a conscience. But then it's about how you define empathy, how you define conscience. What is more important, though, is the perception, the perception by human beings that a machine is more empathetic than a medical doctor. Basically, when humans perceive that machines are more empathetic, it doesn't really matter how you define empathy. And so this result would tend to prove that, yes, we are witnessing machines that are now better able to relate to human beings than we are, and that, obviously, in the field of disinformation, is completely transformative. It's also very concerning, because it would mean that we can engineer a machine that will be able to manipulate human beings without the human beings being aware that they are being manipulated.

Speaker 3:

The sci-fi of the past is starting to become today's reality. AI is becoming a key weapon in the arsenal of 21st century warfare.

Speaker 2:

What Jean-Marc describes there goes so much deeper than the AI deep fakes we've been talking about in relation to, for example, all the elections taking place this year. It's about a machine making us feel that it cares about us, that it's empathetic to us in such a way that we lose our instinctive human understanding of what real empathy is or, as Sai said, what real love is. That idea opens up so many terrifying possibilities. We're not there yet, but, as Sai points out, the new weapons are already a cause of what he calls digital dehumanization, because the distance between the killer and those being killed has become so great.

Speaker 4:

I think it takes human agency away from the use of force. And as soon as that happens, even if there are two or three additional steps or layers added onto it, for example prompts on a computer screen for somebody to decide whether to kill or not, whether to use force or not.

Speaker 4:

But even that additional layering creates a dissonance with the human contact that it once took to use force. And because of the dissonance that is being created right now, it's extremely difficult, as we've been saying, to maintain compliance with principles such as distinction, which have been pivotal in the formulation of international humanitarian law. So, yes, it is extremely concerning and scary that such a thing is possible.

Speaker 4:

But I think with concentrated efforts globally and in forums that are relevant for regulation of weapon systems, it is possible to create a situation where we are able to keep humans in control.

Speaker 2:

I'm heartened by Sai's optimism. Not that having humans in control has led to a perfect world, obviously not, but the idea of taking accountability for good or bad deeds away from humans and giving it to basically unaccountable machines is, for me at least, chilling. I had one final question for Jean-Marc Rickli: when it comes to warfare, can machines distinguish between good and bad, between, on a simple level, combatants and civilians?

Speaker 5:

You can develop algorithms in the lab, but warfare is a very complex environment. So even though you could run an experiment where you can distinguish between human beings and objects, what we call the fog of war makes things much more complicated. Now, the issue of distinguishing between combatants and non-combatants is a very difficult problem, even for human beings. If you think that a combatant is someone in military fatigues, then yes, okay, fine, you can make a point. But increasingly, what we're witnessing are people blending into the civilian population. Even for human beings, it's difficult to distinguish. So then the question is, and this is again an ethical debate that we have, is it ethical to be killed by a machine? And this is something to which there is no right or wrong answer. It comes down to your basic assumptions about what you think is ethical in terms of dying, and that's why it would be very difficult to answer these questions, because there are no right or wrong answers.

Speaker 2:

Is it ethical to be killed by a machine? Do we need international regulations to determine much more precisely and rigorously what autonomous weapons, and weapons using AI, can do? That's the question we leave you with this week. As we heard from our expert guests, Sai Bourothu and Jean-Marc Rickli, there are no easy answers. Many thanks to them both for their time and analysis. In the next episode of Inside Geneva:

Speaker 6:

The political urgency and the willingness of political leaders to think differently have almost completely evaporated, and that has made it difficult to reach agreement. If we don't make it across the finish line now, for this World Health Assembly in 2024, it's just going to get harder. The clock is ticking down towards the ambitious deadline set by the World Health Organization for member states to agree a pandemic treaty.

Speaker 2:

No one wants the lockdowns and loneliness of 2020 again, but can we agree measures to make sure it never happens again? Join us on May 14th for that. A reminder: you've been listening to Inside Geneva, from Swissinfo, the international public media company of Switzerland, available in many languages as well as English. Check out our other content at www.swissinfo.ch. You can find Inside Geneva, review us and subscribe to us wherever you get your podcasts. I'm Imogen Foulkes. Thanks again for listening, and do join us next time on Inside Geneva.

Chapter markers:
The Ethics of Autonomous Weapons
The Rise of Empathetic Machines