Bullaki Science Podcast

12. On the Future (Existential Threats) | Prof. Lord Martin Rees

Bullaki Season 1 Episode 12

In this Bullaki Science Podcast we cover some of the aspects illustrated by Professor Lord Martin Rees in his latest book “On the Future”, where he talks about existential risks including nuclear warfare, climate change, biotech, and artificial intelligence.

Prof. Martin Rees is a leading cosmologist and astrophysicist. He has conducted influential work on galaxy formation, cosmic jets, black holes, gamma-ray bursts, and speculative aspects of cosmology. Prof. Rees is the Astronomer Royal and a Past President of the Royal Society. He is a prominent scientific spokesperson and author of popular science books. In 2012 he co-founded the Centre for the Study of Existential Risk, a research centre at the University of Cambridge intended to study possible extinction-level threats posed by present and future technology.



The following is a conversation with Professor Lord Martin Rees, a leading cosmologist and astrophysicist. He has conducted influential work on galaxy formation, cosmic jets, black holes, gamma-ray bursts, and speculative aspects of cosmology. Prof. Rees is the Astronomer Royal and a Past President of the Royal Society. He is a prominent scientific spokesperson and author of popular science books.

In 2012 he co-founded the Centre for the Study of Existential Risk, a research centre at the University of Cambridge intended to study possible extinction-level threats posed by present and future technology.

In this podcast we cover some of the aspects illustrated by Professor Rees in his latest book “On the Future”, where he talks about existential risks including nuclear warfare, climate change, biotech, and artificial intelligence. Due to these challenges, he believes we will face a growing tension between three things we want to preserve: privacy, security, and freedom.

I know that this is a polarizing topic, so please consider that this is not a debate but an opportunity to hear what one of the most prominent scientists and policymakers has to say about global threats, including the current situation.

Samuele Lilliu (SL). Good morning Professor Martin Rees.

Martin Rees (MR). Good morning. 

SL. I’d like to start our discussion with… maybe we could discuss a bit… what has been your major scientific contribution? And if you had to pick one paper among your 500+ publications, which one would you choose?

MR. Well, I find it difficult. I like to say, really, that my contribution has been to be involved in various debates over the last 40 or more years which have clarified how our universe began, and also clarified the nature of some of the extreme objects in it. That obviously involves neutron stars, black holes, cosmic explosions, etc. So I’ve written a lot on all those themes.

SL. I was checking some projects online and I found the IllustrisTNG project, which is a fascinating attempt to simulate galaxy formation and evolution. These simulations basically start from collections of particles and the laws of physics, such as gravitational and magnetohydrodynamic interactions. So my question is, and I don’t know if this makes sense, but what would it take to simulate the emergence of the laws of physics from the Big Bang?

MR. Well, we don’t understand how the basic laws originate, but we know that the expanding universe is governed by those laws.

We can trace the history of the Big Bang right back to when the universe had been expanding for about a nanosecond. Before a nanosecond it’s hard, because the energy per particle is higher than the energy we can achieve in the Large Hadron Collider, the biggest accelerator. So the physics of the first nanosecond is not well understood.

But from that time onwards, we can understand how the universe expanded and cooled and the different processes that took place. We can understand how helium and deuterium were formed when the universe was a few minutes old. We can understand how the thermal radiation was produced in the early universe and how it cooled down.

These obey the laws of physics that we understand. 

It’s been quite easy to understand how a homogeneous universe, this uniform gas, would behave. But, of course, what we want to do is to understand how structures emerge. We know that the early universe wasn’t completely smooth; there were some places slightly denser than average, some less dense than average. As the universe expanded, the overdense regions lagged behind the expansion and eventually condensed out. This is how we believe galaxies formed.

One of the great triumphs of science in the last 25 years has been to observe the fluctuations in the universe at an early epoch. This is done by looking at the background radiation over the sky, seeing these non-uniformities, putting them in as initial conditions in a computer program, and working forward. The good news is that, working forward, you end up with a universe rather like the one we live in. This is done by putting gravity and gas dynamics, and maybe some other things, into the calculation.

There has been great progress in the last 20 years because this needs serious computing power; until 20 years ago it was not possible. More generally, astronomy has benefited hugely from computer simulations. A particle physicist can actually crash particles together and see how they behave, whereas we can’t crash real galaxies together. But we can, in the virtual world of our computer, see what happens if two galaxies collide, when stars explode, etc. That’s how we have been able to see if our models do represent the real world.

We now have a picture for how galaxies evolve, which comes from simulations, but which is tested. In fact, we can test it very well, because we have an advantage over paleontologists in that we can actually see the past. We can compare our simulations with the universe now, the universe a billion years ago, two billion years ago, three billion years ago, etc., by looking at more and more distant galaxies, where the light set out earlier. So we have quite good evidence that we understand the broad outlines of how cosmic structures evolved from an amorphous Big Bang, which started out very hot and dense, with small fluctuations in it.

But of course, all progress in science leads to new questions, new mysteries. What we’d like to know is the crucial processes that happened right in the very first nanosecond, when the physics is uncertain, because it’s those processes which determine how the universe expands, and the fact that it contains atoms, dark matter, and radiation. That’s still a challenge.
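As an illustrative aside: the recipe Prof. Rees describes, seeding a box of particles with small density fluctuations and letting gravity amplify them, can be sketched in a few lines of Python. This is a toy direct-summation model with made-up parameters, nothing like IllustrisTNG (which adds cosmic expansion, dark matter, gas dynamics, and far more sophisticated solvers); it only illustrates gravitational instability.

```python
import numpy as np

# Toy structure formation: N particles in a periodic 2-D box, started from
# near-uniform positions with small seed perturbations. Softened Newtonian
# gravity amplifies the overdense regions (gravitational instability).
rng = np.random.default_rng(0)
N, L, G, soft, dt, steps = 400, 1.0, 1e-4, 0.02, 0.1, 200

pos = rng.uniform(0, L, (N, 2))            # near-uniform initial conditions
pos += 0.01 * np.sin(2 * np.pi * pos / L)  # small seed fluctuations
vel = np.zeros((N, 2))

def accel(pos):
    # Pairwise attractions, softened to avoid singularities at close range.
    d = pos[:, None, :] - pos[None, :, :]
    inv_r3 = ((d ** 2).sum(-1) + soft ** 2) ** -1.5
    np.fill_diagonal(inv_r3, 0.0)
    return -G * (d * inv_r3[..., None]).sum(axis=1)

for _ in range(steps):                     # leapfrog (kick-drift-kick)
    vel += 0.5 * dt * accel(pos)
    pos = (pos + dt * vel) % L             # periodic box
    vel += 0.5 * dt * accel(pos)

# Crude clustering diagnostic: the spread of counts-in-cells grows as
# overdense regions collapse (it would stay near Poisson otherwise).
counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=8, range=[[0, L]] * 2)
print("density contrast (std/mean of cell counts):", counts.std() / counts.mean())
```

Real cosmological codes replace the O(N²) force loop with tree or particle-mesh methods and evolve the box in expanding coordinates, but the qualitative behaviour, small fluctuations condensing into clumps, is the same.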

SL. In your books, you spoke about having a sort of “cosmic perspective” towards the problems of the world. You became interested in topics related to existential threats. You discuss these things in your book “On the Future” and I’ve also read your other book “Our Final Hour”, where you talk about these issues. When did you become interested in these topics?

MR. Well, of course, that’s quite separate; there’s a huge gulf between the cosmic scale and the terrestrial scale. But I’ve always been interested in politics, going back to student demonstrations for disarmament and against the Vietnam War when I was a student. Later in my career, I got involved more in disarmament through things like the Pugwash Conferences, which were links between scientists in the East and the West during the Cold War. Later in life I had more opportunity, and more obligation in fact, to engage in policy and politics, because I spent five years as President of the Royal Society, which is the UK’s national academy of sciences, and that of course covers all the sciences and their policy implications. So during that period, and since then, I’ve spent a fraction of my time on things which are not astronomy but on more general policy related to what’s happening here on Earth now.

SL. You’re also the Astronomer Royal. What’s the role of the Astronomer Royal? 

MR. It’s just an old title. It used to be the person who ran the National Observatory at Greenwich, near London. But that became a museum when it became possible to build telescopes in higher and drier locations, in Hawaii, the Canary Islands, etc. They kept the title, but it’s just an honorary title.

SL. And you’re also the co-founder of the Centre for the Study of Existential Risk here in Cambridge. What’s the purpose of this centre?

MR. Well, this is something which I have become more committed to over the years. In my earlier book, “Our Final Hour”, which you mentioned, I drew attention to the fact that we in the world were becoming vulnerable to new classes of threats that weren’t important in the past. In my new book “On the Future”, I explain this in more detail. The main point I make in these books is that this century is special. The Earth’s been around for 45 million centuries, but this is the first in which one species, the human species, is numerous enough and empowered enough to actually determine the fate of the planet. We can affect the Earth’s climate and the flora and fauna on it, and of course we are interconnected through air travel, the internet, and all that. So for the first time, we run the risk of catastrophes which could cascade globally. This is something new, and these threats are less studied than more local, conventional risks. I wrote quite a bit about that.

Our centre in Cambridge has a focus on these extreme low-probability, high-consequence threats, which are becoming more important and looming larger. The COVID-19 pandemic we are having this year has, of course, made people take these risks seriously, because it shows how an unexpected threat can have global consequences. We are clearly learning that we need to prepare more, to cope with catastrophes of that kind and to minimize the chances that they occur.

Pandemics are just one category. There are other things, like a breakdown of the electric power grid in a large region, either accidentally or [from] a cyber-attack, and other global catastrophes which could come about. So what our centre is doing is collecting a group of people who will focus on these, and drawing on the convening power of a great university like Cambridge to bring more expertise to bear on these questions.

SL. In terms of politics, does it cover the full spectrum of politics? Is it a bipartisan organization?

MR. It’s not party politics, but it does include social scientists and economists, which is very important. We are experts who advise politicians; we do engage with them, participate in parliamentary affairs, and try to raise the profile of these issues on the agenda, compared to where it was. We think this is important because politicians obviously have a crowded agenda; they tend to focus on the immediate and the local, on what happens before the next election.

The COVID-19 pandemic is a wake-up call, because this is a crisis which they never prepared for properly, and it indicates what can happen. What we need to do is to make sure that governments do prioritize these things. There’s an important maxim I like to quote: “the unfamiliar is not the same as the improbable”. Just because something has never happened before doesn’t mean it won’t happen. A pandemic, in retrospect, wasn’t all that improbable, and there have been others a bit like it in the last 20 years. So we need to prepare more for these things.

Then, of course, there is the class of threats which we cause collectively through our heavy footprint on the planet, climate change among them. We know that it is going to have severe consequences. But the political problem of dealing with it, of prioritizing it, is that the main effects are a few decades in the future; they are not immediate. It’s not like COVID-19, which has immediate effects. So we’re rather like the frog in the pot of water that’s being heated, which doesn’t realize it’s going to boil until it’s too late to save itself. We’ve got to think further ahead. The problem with politicians is that they tend to focus on more immediate concerns, and on the concerns of their constituents. Climate change is going to have a bigger effect in parts of Africa than it does here, and getting politicians to prioritize what happens in Africa 30 years from now is hard. The only way this can be done is if the public cares, because politicians respond to public pressure. That’s why raising the profile of these long-term issues among the wider public is very important.

SL. Do you guys also collaborate with SAGE, which is the Scientific Advisory Group for Emergencies here in the UK?

MR. Not directly; there’s some overlap, obviously, and there is a parliamentary committee at the moment which is going to discuss better ways in which this country can prepare itself for these different kinds of threats. Obviously, in a university, as in all of science, you try to be useful where you can, and clearly these extreme threats are now being taken more seriously.

SL. When we talk about threats from space, we have threats from comets, asteroids, solar flares. For those things there isn’t much we can do at the moment; maybe in the future we could think about changing the path of an asteroid or a comet. So I guess the only option now is building underground bunkers, or Faraday cages. Have you built one under your home?

MR. No, because asteroid impacts don’t keep me awake at night. The reason is that we understand their probability, and we know it is low. We know that the probability is no higher now than it was 10,000 years ago or 10 million years ago. It’s very small, and indeed, as you say, we might be able to do something to protect against some of them within the next century, so it’s important to try and map out the orbits of all the asteroids that come our way. But it’s not a top priority.

What is more worrying are the classes of threats which are getting more probable year by year, like a pandemic, cyber-attacks, and things like that. I’d worry far more about those, and about engineered pandemics, things of that kind. They’re the big worry.

Solar flares are in a slightly different category, because they’re natural phenomena, but they had no effect on life on Earth until 150 years ago, when we had electricity. Now they do have potentially serious effects. So they are something we want to predict a bit better, by observing the Sun carefully, and whose effects we want to mitigate by the way we construct the electricity grid and the way we harden the electronics of spacecraft.

SL. And we also have plans in place in case something happens, which involve taking the government, or Her Majesty the Queen, to bunkers. They have similar programs in the US, emergency plans dating from the Cold War, I guess, when there was the threat of nuclear war.

MR. Yes, well, certainly there have been those threats. The key question is to decide what the appropriate response is to any kind of catastrophe. We know the problem of hospitals being overwhelmed by a severe pandemic; we know that if electricity failed for a few days over a large area, that would lead to complete social breakdown. The lights going out would be the least of our problems: all the electronics and everything else would fail. So we need to do our utmost to minimize the likelihood of those things happening, and also have emergency plans in place to deal with them.

SL. Let’s talk about artificial intelligence, because there are so many people talking about super AI and rogue AI and all these things. On page 104 of your book “On the Future”, you wrote, “would a powerful futuristic AI […] go rogue? […] would it learn enough ethics and common sense that it knew when this should override its other motives?” So you talk about ethics; you say that this hypothetical super AI should be equipped with ethics. But my question is: which ethics? Is it Christian ethics, communist ethics, anarchist ethics, or new atheist ethics?

MR. Well, I think that is a big challenge for those who control it. Let me say that I worry less than some people about these science-fiction scenarios. I think AI has important positives and negatives in the short term. The positives are that it enables us to better control things like traffic flows in a city, or electric power grids, where you have to process information and respond very quickly. Indeed, it would be of huge benefit to, say, China, if they wanted to have a planned economy. They could have a planned economy of the kind that Marx and Stalin could only dream of, because they can now monitor every transaction, the inventory in every shop, etc. So they could use it for that purpose.

Also, in the short term, there are concerns about leaving decisions to AI, because the AI learns from large data sets, and it incorporates all the biases that went into those data sets. I think we should be reluctant to leave to a machine a decision on whether we are sent to prison, whether we are recommended for surgery, or indeed whether we have a good credit rating at the bank. Even if a machine can be shown to make better decisions than a human on average, we’d like to be able to contest and query a decision we don’t like and get an answer we can understand, because there’s always uncertainty about whether there are bugs in the program which aren’t being taken account of. That’s already been found in face recognition: it turns out that you can confuse an AI doing face recognition in a very subtle way. And so that’s a real danger.
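As an aside, the subtle confusion Prof. Rees mentions is the adversarial-example phenomenon. Here is a minimal, hypothetical sketch in Python against a toy linear classifier rather than a real face-recognition network; the attack direction is the fast gradient sign method (FGSM), and everything in it is illustrative.

```python
import numpy as np

# A tiny, targeted perturbation flips a classifier's decision even though
# the input barely changes. Real attacks target deep networks, but the core
# idea is the same: nudge each input dimension along the sign of the loss
# gradient. For a linear score w.x + b, that gradient is just w.
rng = np.random.default_rng(1)
d = 784                                   # pretend 28x28 "image"
w = rng.normal(size=d)                    # weights of a "trained" model
b = 0.0

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = rng.normal(size=d)
x -= (x @ w + b - 1.0) * w / (w @ w)      # place x firmly in class 1

eps = 0.05                                # tiny per-pixel budget
x_adv = x - eps * np.sign(w)              # FGSM step toward the other class

print("original prediction:   ", predict(x))       # 1
print("adversarial prediction:", predict(x_adv))   # 0
print("max per-pixel change:  ", np.abs(x_adv - x).max())  # 0.05
```

With hundreds of input dimensions, a 0.05-per-pixel nudge shifts the score by roughly eps times the sum of the weight magnitudes, easily enough to cross the decision boundary. That disproportion between visible change and effect is what makes these attacks hard to defend against.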

SL. If you wear those masks…

MR. Even a small change in an image can have a big effect on the response of many of these programs. So I tend to think that it will be quite a long time before some of these more futuristic things come about. Take, for instance, fully driverless cars, what’s called level five, where you can sit in the back as if you had a chauffeur. People used to say that would come in five years. I don’t think anyone thinks that now. We could have driverless cars in controlled environments, on a grid of city streets for instance, but not in all weather or traffic conditions. That’s further in the future.

Going back to machine learning, the problem is that machines can absorb data at huge rates, but they can really only get a feel for common sense by watching real human beings. The trouble is that everything we do is very slow for them; it’s like us watching trees grow. So the data input a machine gets from watching actual human beings doing real things, in real environments, is too slow to be very effective. That’s why there is a problem in getting machines to have what we would call common sense. This sets a limit to the amount of discretion we should give them, and we should beware of situations where machines are too much in control.

SL. I think we’re also having a hard time understanding what common sense is, and what intelligence is. These are big questions in philosophy.

MR. Yes, of course, and as you say, ethics is a matter of dispute and is very culture-dependent. So yes, that’s a very big problem.

SL. I had a discussion with a senior director from DARPA. At DARPA they work on very advanced technologies, and one of them is artificial intelligence. Recently they came up with a thing called the AlphaDogfight Trials, where an AI system was piloting a simulated F-16 fighter jet. It managed to beat one of the best pilots in the US. All the media were scared; they said, “Oh, AI is going to take over, we’re going to have the next Terminator” and all these things. But I think the lesson there was that AI can be part of the human endeavour, in terms of collaboration. AI can help humans with tasks that are repetitive and boring.

MR. That’s right, and it can process information much faster and do calculations faster; those are the advantages. But of course, robots are still clumsy compared to real people. A machine can beat the world chess champion, but a robot can’t move the pieces on a real chessboard as adeptly as a child can. So there’s a long way to go before a robot is as good as we are at actually handling things, or as good as a squirrel is at jumping from one tree to another. There’s a long way to go before the interaction of robots with the real world is good. I think it’s going to come, and there’s great value in it.

Since I’m interested in space: the main location where robots can be far better than humans is in space, because humans are ill-adapted to being there and it’s very expensive and risky to send them. So my personal view is that the practical case for sending people into space is getting weaker all the time as the robots get better. They can explore for us, and they can assemble structures in space in zero gravity. That will be a very exciting development for robots.

SL. Since you spoke about robots, I want to ask you a question about autonomous weapons. We have defensive autonomous weapons; for example, in Israel there is the Iron Dome, which is a sort of defensive autonomous weapon. Then, more recently, there was the issue of drone swarms: lots of drones that you can give a task to, they go and attack an objective, and it’s going to be very difficult to defend against them. Do you think these should be classified as weapons of mass destruction?

MR. Well, certainly [we should] try and inhibit their development, just like we try to inhibit the development of biological and chemical weapons. In the same way, it’s going to be very dangerous if we do have robots which fight each other, or which use facial recognition to identify an individual and then kill them. This is the kind of thing people talk about. I think we need to do all we can to slow down those developments, just as we’ve tried to with chemical and biological weapons.

SL. You also mentioned another thing: you said that when things become possible, people will do them, and you gave the example of tax and drug laws. Those things are illegal, but people break them anyway. How can you catch them if you don’t know where to look?

MR. Well, indeed, and that’s why I’m pessimistic. If you ask me what my biggest worry is in the next 20 years, it is pandemics which are engineered artificially. As I mentioned in my book, about nine years ago two groups of scientists showed they could make the influenza virus more virulent and more transmissible by engineering it; people will be able to do the same thing with the coronavirus, and they’ll be able to synthesize the smallpox virus and all that. Obviously there will be internationally agreed regulations against all these things, and the academies of different countries are already discussing them. But the point I made, and the reason I’m scared, is just what you say: we can’t enforce these laws globally any more than the tax laws or the drug laws, because they don’t require large, special-purpose facilities the way making an atom bomb does. We have the International Atomic Energy Agency, which can, with reasonable success, monitor what countries are doing in the nuclear arena. Israel has made a pretty good job of evading it, but it’s pretty hard for countries to actually get away with too much. In the context of biological weapons, though, the technology is present in many university labs and industrial labs, it’s dual use, etc., and we can’t monitor all those labs all the time.

I think one of the problems is that there’s going to be a growing tension between three things we want to preserve: privacy, security, and freedom. Because even if a few people, mavericks, try to build a weapon, that’s too many. So I think we’ve got to accept that there will have to be more surveillance. The way the balance is struck between those three things will differ between countries. I guess the Chinese will be happier to give up their privacy in order to have security. The Americans, at the other extreme, may be reluctant, and they’ll have to accept greater insecurity. But I think that’s a problem simply because we can’t monitor everyone all the time.

SL. Maybe the Chinese Communist Party would be happy to give up freedom; maybe not the Chinese people. It’s a different thing, I think. The Chinese people would probably want more freedom, but the Chinese Communist Party is not going to give it to them.

MR. Well, I don’t know, I think they owe a lot to the Communist Party.

SL. Now, going back to the threats from super AI and these kinds of weapons: what do you think about the concept of Mutual Assured Destruction, the MAD concept we had during the Cold War? Is that effective? Is it more effective than regulations?

MR. I think it’s very dangerous, because, going back to what we were saying earlier, there could be false alarms, misunderstandings, etc. Think back to the Cold War; I’m old enough to have been alive during much of it, and if we read what was said by the people who were in power at the time, the memos of Kennedy and McNamara and people like that, it’s fairly clear that they thought the risks were rather high. In fact, at the time of the Cuba crisis in 1962, Kennedy said the threat of a nuclear exchange was between one in three and evens. McNamara, who was America’s Secretary of Defense at the time, said they were lucky rather than wise to survive the Cuba crisis. And there has been documentation, subsequently, of various false alarms which could have triggered an exchange.

So if we look back at the Cold War we can say it worked: there was no exchange, and mutually assured destruction did work, because game theory suggests that if people are rational and well informed, then they might avoid that outcome. But I think, and I say in my book, that it was an unwise policy, because if the chance was one in three, or even one in six, it’s like playing Russian roulette with one or two bullets in the cylinder. You don’t do that unless the stakes are very high, or unless you value your life very low. That’s the situation we were placed in. I myself, if I’d been asked, would not have risked a one in three, or even one in six, chance of the destruction of the fabric of European civilization, even if the alternative was a certain takeover of Western Europe by Russia.
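To put rough numbers on the roulette analogy (an editorial aside: the per-crisis odds are the Kennedy-era estimates quoted above, while the number of repetitions is hypothetical), with $k$ bullets in a six-chamber cylinder the chance of catastrophe on one spin is $p = k/6$, and the chance of surviving $n$ independent spins is

$$(1 - p)^n = \Bigl(1 - \frac{k}{6}\Bigr)^{n}, \qquad \Bigl(\frac{5}{6}\Bigr)^{10} \approx 0.16, \qquad \Bigl(\frac{2}{3}\Bigr)^{10} \approx 0.02.$$

Even at the milder odds, ten repeated crises would leave only about a one-in-six chance of coming through unscathed, which is the sense in which survival was luck rather than wisdom.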

SL. “Better red than dead”, which is what Bertrand Russell said.

MR. Indeed, yes. He was quite right, and of course we have been lucky since. But that’s like the guy who plays Russian roulette: you may be lucky, but that doesn’t mean it was a sensible gamble.

SL. And then in the US, after a few years someone said “better dead than red”…

MR. That’s right, I don’t have much sympathy with them.

SL. So we spoke about the Cold War. During the Cold War it was mainly the West against the East, say capitalists against communists. The good thing then was that we were sort of independent; we could make our goods in our own countries, in the UK, Europe, the US, etc. But now the situation is different. If we say there is a Cold War between, let’s say, the US and China, these two countries are strongly interdependent, so in case there is a war, there will be a huge disruption in the supply chains, which is what happened with the pandemic. What do you think would be the solution? Of course, the solution would be peace, but that’s not always what happens. Do you think we should reduce globalization and move towards producing locally, where things are needed?

MR. I think globalisation on the whole is probably good, because it’s an extra disincentive to having some sort of serious conflict. But on the other hand, I think what we’ve learned is that there is another trade-off, between efficiency and resilience, and we’ve seen examples of that. If you have a factory and you depend on parts coming from supply chains all over the world, and one link in a chain that leads somewhere to the Far East is broken, then that stops production, and of course you’re vulnerable. I think people have learned that it’s probably worth the extra cost, and in a sense the inefficiency, of having multiple supply chains and keeping an inventory in stock, rather than depending on just-in-time delivery, in order to be more resilient if there is some unexpected crisis. So that’s certainly a lesson we’re going to learn from this pandemic: efficiency is not the only thing that matters.

To take a quite different example, Germany had a policy of keeping a larger fraction of its hospital intensive-care beds empty, except in a crisis, so they could cope with this crisis better than we could, where we pride ourselves on 95% occupancy. That is, in a sense, more efficient, but it doesn’t give you the headroom if there is a crisis. That’s another example where it’s worth accepting inefficiency in order to be more resilient when a crisis comes.

SL. Yeah. And then, of course, trying to reduce globalization would also help with other things, like pollution, because supply chains that stretch far away, to China and so on, have a huge carbon footprint. There is also the issue of labour rights there. Plenty of issues.

MR. Yes, although I’m not sure I agree with you on wanting to reduce globalisation. But one thing I do think very strongly is that we need to do something to help Africa, because Africa is lagging behind in terms of its economic development, and it’s also the area of the world where population is still growing fastest. Sub-Saharan Africa hasn’t been through the demographic transition towards a lower birth rate, so its population is growing fast.

There’s a risk that Africa will be trapped in poverty, partly because it can’t develop the way the East Asian countries did, by cheap manufacturing undercutting the wage levels of Europe and North America, because robots are doing that now. We have what’s called reshoring of manufacturing in rich countries, because it doesn’t need so much labour. So that’s a ladder that has been kicked away from African countries, and it’s not so obvious how they are going to develop. The other point is that the one thing they do have is information and the ability to travel. Through mobile phones they know what they’re missing; they know the injustice of their fate. If this huge inequality persists between the lifestyle people have in Africa and our lifestyles, it is going to be a recipe for embitterment, mass migration, and conflict. So I think there’s a very strong motive for the northern countries, especially those in Europe, to mount a sort of mega Marshall Plan to ensure that Africa doesn’t fall further behind. This is not just for altruistic reasons; it is to make a safer and more stable world.

SL. Yeah, but the problem is that colonialism never really ended in Africa, right? We never stopped exploiting Africa in one way or another. In the past we used to send soldiers; now maybe we pay some corrupt politician to get resources from those places.

MR. And I think there are positives as well as negatives in the Chinese Belt and Road Initiative.

SL. And there are also the wars we started in North Africa that created all sorts of problems. For example, in Libya…

MR. Indeed, yes. The westerners, especially the Americans, have a great deal to answer for.

SL. One thing I want to ask: you basically predicted the pandemic in this book. Actually, in this book [Our Final Hour] you bet $1,000 with [Dr Steven] Pinker. Did you win that?

MR. Well, it’s interesting, because I bet that there would be a pandemic with a million casualties, and of course the actual COVID-19 pandemic has been far worse than that. But I may lose the bet, because I said in my bet that I thought the pandemic would be not a natural one but an engineered one. Although some people think the virus leaked out of the lab in Wuhan, I don’t think most believe that. So technically I’m going to lose my bet with Steven. On the other hand, I think he has learned the lesson that we can’t be so optimistic about the future, now that something far worse than he envisaged has actually happened naturally.

SL. Yeah, the investigation is going to be complicated there. But do you think it really matters? Is it important to investigate what really happened, whether it came from animals or from a lab? Is it relevant at all?

MR. Well, I certainly think it is important. The most likely thing is that it did come from animals, and if that’s the case, then it’s hugely important to minimize the sort of conditions found in those live-animal markets. Most of the experts think that as the world gets more crowded, there’s going to be more interaction between animals and humans, and the risk of viruses transferring between species is going to grow. So I think it’s very important to do all we can to minimize that sort of transfer, because most of the experts think that if we don’t, the probability of these pandemics is going to get higher and higher. And don’t forget that they could be far worse than the present one.

SL. I guess if it came from a wet market, then the solution is simply more hygiene: just avoid those bad conditions, stacking the animals on top of each other, and so on.

MR. Yes. Or maybe something more drastic. Yeah.

SL. Or close them down… Earlier we spoke about the “short-term perspective” of politicians, and probably of the media too. One thing I want to discuss is this: I was looking at the statistics of deaths worldwide. From the pandemic, at least as reported, about 1.8 million people died with COVID. But there were also 1 million suicides, and 8 million deaths due to cancer. So why do you think we don’t give enough attention to “long-term” problems like cancer? Because that seems a far bigger problem, I think.

MR. Well, I disagree. There is a huge amount of research on trying to deal with cancer; it’s a very intractable problem, and they haven’t made tremendous progress. Partly, too, cancer deaths are high because people live longer for other reasons. I would actually say that, if you think of the balance of medical research, not enough is being done on tropical diseases, the diseases of the poor, because the number of children who die from infectious diseases in Africa, I don’t know what the numbers are, but it’s in the millions per year. I think we certainly need to deal with all these diseases, including cancer, but some are much more difficult to deal with, and the effort of medical research is already pretty strongly focused on cancer, isn’t it, at the moment.

SL. Okay, yeah. And then there is the problem of suicides, which is another issue; suicides are increasing, especially now, with all the psychological problems this pandemic has created.

MR. That’s fully understandable, because the pandemic has not hugely affected people who can work from home, etc. But those who are forced to go out and risk danger, and who are living in cramped apartments, have had a really, really bad time, and if they’ve got children, even more so. This is just an example of how mental health and greater equality are two things we don’t prioritize enough. We need, in particular, to reduce inequalities.

In fact, one of the lessons that will be learned from the pandemic is that the people who really matter in a crisis are delivery drivers, health workers, and people like that. They are people whose jobs are insecure and low status, and they ought to be raised. So I think the inequalities, which are far too great within countries as well as between countries, need to be reduced. I worry that in this country we are not moving fast enough in that direction; of course, America is even worse. I think we should realize that we in this country can learn far more from the Scandinavian countries than we can from the United States in terms of a better social system. For mental health, people have to feel secure and valued. That’s not happening in this country or in America, but it is happening more in the Scandinavian countries.

When there are attempts to measure general happiness and contentment on the basis of opinion polls, I think Denmark comes out as number one. Britain and America are much lower; the Scandinavian countries come out on top. That’s because they have good public services, they accept high tax rates to provide a good welfare state with high-quality services for everyone, and they avoid excessive inequalities. That’s the way we need to go.

SL. And do you think we should have a universal basic income?

MR. Something in effect like that. I think everyone who can work should have to work, but what we should have is very large numbers of dignified, well-paid jobs, funded by the public sector: for instance, carers in old people’s homes, assistants for teachers, and things like that. The people who are going to be put out of work by AI in the short term are people working in Amazon warehouses, in call centres, etc., which are fairly mind-numbing jobs anyway. What’s needed is that the companies that own those robots should be heavily taxed, so that the government can fund large numbers of properly paid jobs for carers, gardeners in public parks, teaching assistants, the kinds of jobs where being human is important and where there’s a huge undersupply at the moment. That’s the kind of thing that needs to happen. So this would not be giving people an automatic wage; it would be giving people the opportunity of jobs which are socially beneficial and satisfying, and it would also produce a welfare state that kept everyone at a much higher level than our rather vestigial welfare state does at the moment.

This means higher taxation. So the mantra of low-tax economies is, in my view, a very damaging one. We should accept that we need higher taxation.

SL. You said that the wages of so-called “essential workers” should increase. But during the pandemic we also had a huge problem, which is the destruction of small businesses in favour of big corporations like Amazon, for example.

MR. Absolutely, yes. And of course there is a problem with big multinational corporations: they famously don’t pay anything like their fair share of taxes. So it’s the tax from them which could make a huge difference, in helping small businesses but also in producing enough jobs to meet the need in care homes and in schools.

SL. I wanted to go back a little bit to policies and prevention and all these things. What do you think we could have done in order to avoid this pandemic?

MR. It’s not clear... I mean, it would have been worth spending a lot of money if we could, because over the next few years the pandemic is going to have a total cost to the world which people estimate at at least $20 trillion. Even if you think the probability was only once per 50 years, if you work out the insurance premium by multiplying probability by impact, it would have been worth spending several hundred billion dollars to reduce or eliminate the risk. That wasn’t done, and that’s a lesson we’ve learned.
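Spelling out the insurance-premium arithmetic with the figures Prof. Rees quotes (a cost of roughly \$20 trillion, and a probability of about once per 50 years):

$$\text{expected annual loss} \approx \frac{\$20\ \text{trillion}}{50\ \text{years}} = \$400\ \text{billion per year},$$

so an annual prevention budget of several hundred billion dollars would have been justified on actuarial grounds alone.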

I suppose what we could have done is be better prepared to produce vaccines for the coronavirus, as we had done, incidentally, for the influenza virus; the British had stocked up on vaccines for a flu pandemic, not for this kind. They also hadn’t taken into account that this kind of pandemic would require lots of protective clothing for medical staff, and that it might be hard to develop a vaccine. Things like that.

But also, of course, think about what the World Health Organization could do if one had a more effective monitoring program, as I say in my book, so that the Vietnamese farmer immediately notices and reports any strange disease in his flock, with ways of monitoring these newly emergent things to catch them before they spread to humans or spread very far. That would be better, and it would clearly be worth having a far more scaled-up program to do that sort of thing.

These are fairly obvious suggestions, but it’s clear that, given the costs of a pandemic, it’s worth spending a lot more on trying to mitigate or prevent it; and given that pandemics often start in remote places, on farms, etc., it’s worth having a more efficient way of ensuring that they are stopped near their sources, as it were, before they spread. That’s just one example.

SL. Essentially you’re suggesting more transfer of power from countries to global organizations. On page 32, you wrote that “we need to ask whether nations need to give more sovereignty to new organizations [like] United Nations, […] International Atomic Energy Agency, World Health Organization”. But one of the reasons why, at least here, people voted for Brexit was sovereignty, and disagreement with policies developed by unelected Eurocrats and technocrats. People wanted to take back sovereignty…

MR. As someone who was very strongly against Brexit, who thinks it was a major mistake, achieved by a very dishonest campaign by [the Prime Minister Boris] Johnson and his colleagues, and who believes people will regret it in the next few years, I strongly disagree. Do you think President Macron is worried about losing sovereignty because he’s in the EU? Certainly not. And are we gaining sovereignty? We are losing authority; we will be more marginalized. Just take one example: we say that we are sovereign, but our nuclear power stations are being built by French and Chinese organizations. Our railways are, we’re told, privatized, but they’re partly owned by foreign governments. Our car factories are mainly owned by Japanese or German companies. So it’s complete nonsense to say that we are an economically independent nation. We are in an interdependent world where many of our facilities are foreign-owned, and this is not necessarily a bad thing. But it means it’s absurd to try and be a little England again.

SL. For scientific research, it’s really bad…

MR. That’s one downside, but there are more important things than that for the general public. The latest opinion polls show, I think, that a majority is now against Brexit. And of course, when people realize what it will actually involve over the next six months, I predict that more and more of them will regret that we ever went through with it.

But going back to what we were saying earlier: in that debate, the people who voted for Brexit were often the people who did not feel empowered; they felt left behind. They thought this was the consequence of being in the EU. In fact, it wasn’t. It was the fault of the present government’s austerity policies, which were weakening public services, etc.

Similarly in America, there was a feeling that workers had been left behind, because of course there’s even less of a welfare state in America than here. So there was good reason for many people in a very inegalitarian society to feel aggrieved. But they were entirely mistaken to believe that either Mr. Trump in the US or Mr. Johnson in this country was going to do anything to improve their situation, because the mantra of the present government is low taxation, etc., not improving the welfare state. So I think it was a sort of con trick, and I just think it will end up badly for the people who were taken in by it.

SL. Of course, the UK… the US has a very different economy.

MR. We’ve got to realize just how different the US is from Europe. Seventy million people voted for Trump even a second time, and more than 70 million people want to carry guns; it seems alien to Northern Europeans. That’s why I say we can learn more from the example of the Scandinavian countries than from America.

SL. Gun rights are something that comes from the history of the United States, and there were reasons for that. But here we have a different past.

MR. Indeed, and we’re lucky for that. 

SL. So do you think global governance is the best approach towards global risks? Or do you think each country can work out its deals with partners separately, without using a global organization? What do you think?

MR. Well, I think we’ve got to bear in mind that the reality now is that there are global companies, the big internet companies, Amazon and all these, which in a sense provide a huge service but are not under the legislative control of any single country. That is why, as we know, they pay far less of their profit in tax than a small business has to, and this is a plain injustice. This certainly needs some sort of international regulation. People talk about the regulation of AI and the internet and all that. We certainly need it, because otherwise there’s no way that regulation or control of these huge global conglomerates can be implemented. So that’s one example.

I think there may be other needs for organizations rather like the WHO. If we have a deal for countries to cut carbon emissions to mitigate climate change, there may need to be a separate organization, rather like the WHO, to actually monitor progress and see whether countries are implementing what they have pledged in cutting CO2 emissions. There could be an international commission for dealing with climate change and energy policy generally, because that’s becoming more international; aviation obviously is international. So even though we may want more regionalism in some contexts, to go to smaller scales, there are some areas, consequences of the way technology has developed, which do need more international regulation than the current system allows.

SL. How would you ensure that organizations like the WHO, and there are many organizations like that, serve the interests of the citizens of the world rather than particular interests, corporate interests, or specific countries’ interests? This is going to be very difficult...

MR. It’s difficult, but I would have far, far more hope that they will be altruistic than that Amazon will be.

SL. For sure, yeah. The reason why I’m saying that is that, if we consider what happened with the WHO, recently the United States stopped funding it. So now the WHO at the moment is mainly…

MR. Quite disgraceful decision that was. 

SL. …It’s mainly funded by the Bill and Melinda Gates Foundation; they are among the first on the list. And it looks like the director has strong links with China. So there is a huge influence of China in the WHO, and maybe less influence from other countries.

MR. Yes, but that’s not actually a bad thing, especially if you compare what China has done for the WHO with what Trump did to it; the Americans don’t have any high ground in that area. And Bill Gates’s foundation has been a definite plus. Thank goodness there are some billionaires with his concern about these problems, especially Africa. I’ve just been reading a book he’s publishing next month on climate change and how to deal with it.

SL. Okay. Another thing the Americans pulled out of was the Paris Agreement on carbon emissions. But if you look at the graphs and the statistics, carbon emissions have been steadily going down for America, the United Kingdom, the European Union, and so on; for the West in general, they’ve been going down since the early 2000s. So I think we’ve been doing a good job, irrespective of who the President was. The problem then becomes: how do you make sure that countries like India and China don’t pollute too much? How do you enforce that?

MR. Yes, yes. Well, first, I think we shouldn’t be too smug, because two things have happened in the last 30 years. One is going from coal to gas. The other is that a lot of our emissions have, in effect, been transferred to China, because they manufacture lots of the stuff we buy. You’ve got to allow for that sort of thing to interpret these figures properly. So I think we have a long way to go.

If we look at the figures, China is producing CO2 at about half the level per capita that we are, and India is far, far lower. One of the things I think is very important is that India, which needs to develop, needs more energy per capita. They need a grid, and not just smoky stoves burning wood and dung in people’s homes. The important thing is to arrange for them to leapfrog directly to clean energy, without being forced to build more coal-fired power stations because those are cheaper.

That, in my view, is why it’s very important for countries like ours to accelerate research and development into clean energy in all its forms, and I would include nuclear in this, and also into energy storage, batteries, smart grids, and all these things, so that the cost comes down as the technology advances and India can then leapfrog to it, just as, never having had landline phones, they leapfrogged directly to mobile phones. So we should work with India, and it’s not neo-colonialism, on an expanded research program, so that they can expand their energy use, which they clearly need to do, for air conditioning if nothing else, without producing more CO2. If you look at the projections, it’s true that we could reduce our per capita emissions still more through energy efficiency in various ways, but India is probably going to need to increase its energy consumption by quite a big factor if its people are all to reach middle-class lives and have air conditioning. So the more important thing is how those countries get their energy 20 years from now, and we’ve got to do all we can to ensure that they can get that energy in carbon-free ways.

SL. Yeah, that’s going to be difficult. We spoke about the sort of duality between liberty and safety. Some people value safety more than freedom, and other people value freedom more than safety; we spoke about “better red than dead” and “better dead than red”. So do you think adjusting the balance between freedom and safety should be a government prerogative? Or do you think the government should give citizens the option to choose what they prefer, whether freedom or safety?

MR. Well, it’s not as stark a trade-off as you imply, but I think most people would say that the prime object of government is to keep its citizens safe, and this obviously involves some constraints on the more aggressive among those citizens. Everyone agrees that ensuring the rule of law and the safety of the public is the prime goal of any nation and government.

SL. Okay. And have you had a look at the Great Barrington Declaration, on the concept of focused protection?

MR. Yes, I have, yes. 

SL. What do you think about that?

MR. Well, it was supported by some of the usual suspects, but by very few of the genuine experts, so I give little weight to it. I think we’ve seen in this pandemic that the people who said it was not serious have been proved completely wrong, and indeed their interventions have been damaging. I think the Great Barrington Declaration was another example of that.

SL. Basically, what the declaration says is that we should protect people who are at risk but then open the economy, because otherwise it’s going to be a disaster.

MR. That’s what they said, but there have been endless debates which, I think, suggest that that’s not feasible, because children carry the disease back to their grandparents and all that; there were all kinds of reasons it was a very unhelpful initiative. It came from an American think tank which had been against dealing with climate change in the same way. These are people who object to any kind of regulation. You should look at the provenance of that particular document before you take it too seriously.

SL. We’ve got so many different vaccines now, and different types of vaccines. What do you think about the way the vaccine campaign has been run?

MR. So far as I can tell, it’s been done better than some people expected, because it wasn’t clear that a vaccine could be developed quickly. For instance, AIDS is a 40-year-old disease and there is still no proper vaccine for it, and some people were saying it would be very difficult to have any kind of vaccine for this. To that extent, things have gone better than people expected. What went badly, certainly in this country, was test and trace, which was inefficient compared to what happened in the countries of Southeast Asia; Taiwan was the biggest success there. We did very badly, partly because it was contracted out to a private-sector company without the right expertise or commitment, and I think the US also did badly. But the vaccine program has achieved its successes rather quicker than some people expected, and that’s been a bit of good news, I think.

SL. Yeah. And the thing is that there are different types of vaccines: there are the more traditional ones, based on deactivated viruses, but there are also new ones based on mRNA. There are some risks with the mRNA vaccines, and I don’t know if these are being discussed enough. That’s why I was asking.

MR. Well, they’re clearly being discussed a huge amount, and some may work better against mutations than others; we don’t know. All I can say is that I think it’s good that we have this variety, and huge expertise around the world in developing them. So I think that’s been good news. But obviously it’s a complicated science, and they’re learning all the time.

SL. And it’s not as easy as physics. In physics we deal with much simpler samples…

MR. Well indeed, I always make the point that physics is an easy science compared to anything involving biology.

SL. Do you think there is any risk that the policies being implemented now, in terms of lockdowns, will persist over the next 20 years? I’m drawing a parallel with the war on terror, a parallel that was also drawn by Klaus Schwab of the World Economic Forum, who spoke about the parallels between COVID and 9/11. There are many parallels. After 9/11 we saw an intensification of monitoring, with cameras everywhere and all these restrictions. Do you think there is any risk that the lockdowns will continue in the future?

MR. Well, I think there is. And you’re right in saying the response hasn’t always been proportionate; some of the measures involved in current airport security, for instance, may be excessive. As for CCTV cameras, I think most people now welcome those. Obviously there’s got to be a balance, but we have to be prepared for the possibility of something even worse than COVID, and we should be ready for that.

SL. The last thing I wanted to discuss with you is the future. You spoke a lot about post-humanism and trans-humanism. Would you like to explain this concept of trans-humanism?

MR. Well, it’s really a set of ideas which are very widely known: that within 50 years it may be possible to produce designer babies, as it were. We know that you can already do genetic modification that reduces the risk of some single-gene diseases like Huntington’s, but it may be possible 50 years from now, using AI to analyse large samples, to see which combinations of genes optimize certain desirable characteristics in humans, and then to synthesize a genome with those. So it may be possible by the end of the century to have designed humans, as it were.

Now, the question is whether this is something we should try to regulate against on ethical grounds. There’s a debate there; some people like this idea. I have two thoughts on this. One is that I think it would lead to a more fundamental kind of inequality than we now have, if only a few people could afford it and could enhance themselves in this fundamental way. So I would like to see it strongly regulated on Earth. On the other hand, I think that by the end of the century there’ll be a few people living on Mars; Mr Elon Musk et al. may be there. And I think those are the people who should be allowed to do this if they want, because they are adapting to a hostile environment. We are pretty well adapted to living on the Earth, but they are badly adapted to living on Mars. So I think we should cheer them on in using all these techniques of genetic modification and cyborg technology to adapt themselves or their progeny to living in those hostile environments. Good luck to them. But I feel we should try to restrain these technologies here on Earth.

SL. Because then there is a risk of creating two sorts of races, a superior race and an inferior race. And then what’s going to happen? Are they going to put the inferior race in a zoo or something…

MR. That’s why I’m against this being allowed here on Earth.

SL. What’s your view on the interfacing between humans and machines? Because you mentioned Elon Musk...

MR. Yes. Well, of course, we do interface with our mobile phones in a way. I know some people like him are talking about the idea of plugging in additional memories and things like that, linking them to our brains. Maybe this could be done one day; I don’t know, it’s possible. But I don’t go along with the idea that we can download our brains completely.

SL. As Ray Kurzweil said...

MR. Right, yes. Because it’s not clear in what sense that could still be you. I think our bodies are essential to being us. If you could download your brain into some machine, it might have some features of your brain, maybe some memories; and once it could be done once, you could make lots of duplicates of it. Which one would be you?

SL. They’re going to fight against each other? Who’s going to win?

MR. That’s right. It’s not clear what possibilities there are, and whether that will stay science fiction; I rather hope it will. But it is interesting that the problem of “personal identity” is something which has been discussed by philosophers for generations: what is the essence of you? It could be that some of those issues will become part of practical ethics, not just academic philosophy, if any of these ideas become feasible.

SL. Yeah. One thing I want to point out: you said we already interface with our mobile phones. Well, mobile phones are basically tracking you; they check what you do and what your preferences are, and they use that for commercial purposes. They sell your data. Now think about what happens if the thing is connected to your brain: they might read your thoughts, or maybe plant ideas into your mind and make you buy things… then you’d be transformed into a zombie or something like that.

MR. Yes. That’s why I don’t want that. Leave it for people on Mars to do that. 

SL. What’s your next prediction? 

MR. I worry that we are going to have a rather bumpy ride through the coming decades, because governing society is going to become harder. That’s partly because inequalities are far too great and people have very good reasons for feeling embittered or angry about the system they’re living under. That’s one reason. Also, the point which I made in my first book in 2003 is that we are in a world where a few people, if they have the motive, can produce a catastrophe that cascades globally. This is only possible because of cyber, the internet, and biotech, so it is a challenge to governance which has never been so severe until this century. The question is to what extent governments can cope and minimize these risks of global catastrophe and social breakdown. I think it’s going to be a big challenge. I just hope for the best.

SL. Okay, Professor Martin Rees, thank you very much for doing this and have a great day.

MR. Okay, same to you.

Thank you for listening to this conversation with Professor Martin Rees.

If you enjoyed it, please leave a comment below, like, subscribe, turn on the notification bell, and support this channel on Patreon. This podcast is also available on Spotify and Apple Podcasts. You can find links in the description below.
