
Biotech, AI, & The End of Humanity

June 7, 2021
ep
16
with
Toby Ord

description

This week our guest is Toby Ord, a senior researcher at Oxford's Future of Humanity Institute, where he focuses on existential risk. Toby recently released his book The Precipice: Existential Risk and the Future of Humanity, which explores the many threats capable not only of ending human civilization, but even of causing our entire species to go extinct. It may seem like an unrealistic concern out of a sci-fi movie, but Toby suggests humanity has roughly a 1 in 6 chance of destroying ourselves within the next century.

In this episode, we discuss how Toby arrived at this number and the deeper details of the threats that are facing us, with a particular focus on biotech and artificial intelligence, which Toby views as the most prominent areas of concern.

Want to learn more about our podcasts and become a part of the community? Join here!

Host: Steven Parton // Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

SUMMARY KEYWORDS

existential risk, risk, concern, humanity, people, pandemic, ai, human, create, biotech, transmissible, system, cases, world, nuclear weapons, future, called, thinking, catastrophe, community

SPEAKERS

Toby Ord, Steven Parton


Toby Ord  00:00

I put the level of risk over the next 100 years at about one in six. And if I'm even roughly right about that, then humanity couldn't sustain many more centuries with risk levels like this.


Steven Parton  00:27

Hello everyone, you're listening to The Feedback Loop on Singularity Radio, where we keep you up to date on the latest technological trends and how they're impacting consciousness and culture, from the individual to society at large. This week, our guest is Toby Ord, a senior researcher at Oxford's Future of Humanity Institute, where he focuses on existential risk. Toby recently released his book The Precipice: Existential Risk and the Future of Humanity, which explores the many threats capable not only of ending human civilization, but even causing our entire species to go extinct. It may sound like an unrealistic concern out of a sci-fi movie, but Toby suggests that humanity has roughly a one in six chance of destroying ourselves within the next century. In this episode, we discuss how Toby arrived at this number, and the deeper details of the threats that are facing us, with a particular focus on biotech and artificial intelligence, which Toby suggests are the most prominent areas of concern. But let's put aside that doom and gloom for a brief moment to remind you that these kinds of conversations, meant to prevent such catastrophe, are happening every day in the Singularity global community, with more than 30,000 members exploring ways to leverage technology for good. If this is something that you're interested in, follow the links at su.org/podcasts to explore your options for membership within the community. And though I typically don't condone it, I believe in your ability to multitask. So while you're signing up, let's go ahead and get started. Everyone, please welcome to The Feedback Loop, Toby Ord. All right. Well, to start, then, I would love it if you could just tell me how someone comes to write a book about existential risk. What drove you to take on what some might consider maybe a slightly pessimistic or quite challenging topic?


Toby Ord  02:26

Well, I've always been interested in the big picture questions facing humanity. So for example, I spent many years looking at global poverty and global health, and trying to work out what it is that we could do to improve the lives of the billions of people in the world who are really lacking the basics of life. And when I first came to Oxford, I met a philosopher called Nick Bostrom. And he had just come up with this idea of existential risk, building on the idea of extinction risk. And he was thinking, you know, in terms of big picture questions, he was thinking about how the world could end, and that this could be one of the most important things anyone could work on. Obviously, it would be important if your work could prevent the end of humanity. But he suggested that even if we could only do a small amount to help with this, it could still be one of the most important things that we could work on. And that had been more commonplace back in the days of the Cold War, when a lot of scientists went to work on, you know, avoiding nuclear war with the Soviet Union, but it had kind of fallen out of discussion. And actually, like Nick, I'm very optimistic about the future. And it's precisely this optimism, that humanity, you know, for all of the 10,000 generations we've had so far, could still be really at the beginning of something quite extraordinary, and we could have a fantastic future, if only we can make it that long. But we're at the moment in this period of quite high risk. And so it's really this optimism about the future that motivates me to work on these risks. So I don't find it so gloomy. It's not that I was kind of driven towards something dark and depressing, but rather, it's this obstacle between us and a really bright future.


Steven Parton  04:31

And can you describe what those obstacles look like in more detail? Like, how would you define an existential risk? What puts it in that category versus something that might just be catastrophic? Or, you know, tragic?


Toby Ord  04:45

Yeah, the idea is that if you think about human extinction, it has this really important property, which is that if humanity were to go extinct, it wouldn't merely be the deaths of all 7 billion people alive today, but it would cut off our entire future. There is a clear irreversibility about it. It destroys not just our present, but our future, and there would be no way back from it. And this creates a whole host of really challenging issues around dealing with these risks. And it also explains why it is that this could be so bad. Now, extinction isn't the only thing like this. And this was Nick Bostrom's big insight, in thinking about this broader category of existential risk. You could imagine, for example, something that causes a complete collapse of civilization, from which there's no way back, and we can never recover civilization, in which case humanity would be leading a very impoverished life from then on. For example, if it were possible to destroy our climate so much that, you know, we couldn't rebuild agriculture and the other trappings of civilization. Or another example would be if there was a totalitarian regime which was inescapable and terrible, such that the point of no return, instead of being the moment of extinction, would be the moment in which this regime, you know, achieved its full control over the future. All of these things have a lot in common, and it's really worth dealing with them as the same kind of idea. They wouldn't be a mere dark age in humanity's story, but really the pivotal moment of the story.


Steven Parton  06:34

And what does the timeline look like for this? I know, from what I've read and seen so far, it seems you really root all of these notions of existential risk in the creation of the nuclear bomb, really, I would say the last 70 years or so. What does that evolution from nuclear to now look like, moving forward? How has this evolved as a thing that we now need to really be concerned about?


Toby Ord  07:02

We've always been subject to some risk of extinction, or perhaps an unrecoverable collapse, from natural causes. So for example, asteroid impacts or supervolcanic eruptions. But we know that there can't be all that much risk, even though we're constantly finding new things, such as gamma ray bursts, you know, around distant stars, that could potentially do us in. We know that humanity has survived for 10,000 generations, or about 2,000 centuries, so far. And so the risk per century can't really be much higher than about one in 1,000, or the chance of surviving for this many centuries just becomes very small. So we can bound all of the natural risk to some really quite low level. However, when we got to the point of nuclear weapons, we reached this moment where humanity's escalating power had finally reached the point where we could potentially pose a threat to all of humanity. And this was a kind of new era, which I call the Precipice, where the risk that we face, the existential risk, is now higher than this background level. And I think it's also increasing over time, such that I put the level of risk over the next 100 years at about one in six. And if I'm even roughly right about that, then humanity couldn't sustain many more centuries with risk levels like this. Either we would, you know, go extinct or suffer some other kind of existential catastrophe, or we get our act together and we stop this trend and lower the levels of risk down to a small and diminishing level. So I think one way or the other, one of those things has to happen. So nuclear weapons is what I would put as the first big existential risk. And then I think that's been followed by climate change as another potential existential risk. I should say in both cases, it's somewhat unclear whether they would actually be able to cause our extinction or a permanent collapse of civilization. It may be that the very worst nuclear wars or the very worst climate change is instead more of what I called a dark age for humanity. So it's still something that by any normal standards is absolutely catastrophically bad, I just want to be clear about that, and well worth investing in and, you know, going to a lot of effort to avoid on its own. But I'm particularly interested in risks that would have this special consequence that you could never survive having had one happen, and this creates unique methodological challenges and unique kinds of importance about these risks. And so it's still a bit unclear whether either climate or nuclear exactly rise to that level, but I think that there's a very, very serious chance that they do. And climate has this other somewhat confusing property, which is that it may be that the damages get locked in substantially in advance of the actual catastrophe happening. So in some sense, when I say climate was the next one, I'm talking in terms of the damages starting to get locked in already, and the technologies that cause the damage already existing. But there are other risks coming up, such as risks from engineered pandemics, and also, I think, risks arising from artificial intelligence, where even though the technologies are not quite with us yet, the risks may strike sooner than when the climate catastrophes would really be happening.
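As a minimal back-of-the-envelope sketch of that survival bound, assuming a constant and independent risk per century (the helper function and numbers below are my own illustration of the reasoning above, not a calculation from the book):

```python
# A minimal sketch of the survival-bound arithmetic: if natural extinction risk
# per century were r, the chance of lasting ~2,000 centuries is (1 - r)^2000.
# Illustrative numbers only, following the reasoning described in the episode.

def survival_probability(per_century_risk, centuries=2000):
    """Chance of surviving `centuries` centuries, assuming a constant,
    independent extinction risk each century."""
    return (1 - per_century_risk) ** centuries

for risk in (1 / 10_000, 1 / 1_000, 1 / 100):
    p = survival_probability(risk)
    print(f"per-century risk {risk:.4f}: "
          f"chance of surviving 2,000 centuries ~ {p:.2%}")

# Prints roughly 81.87%, 13.52%, and 0.00% respectively -- a natural risk much
# above one in 1,000 per century would make our long track record of survival
# very surprising.
```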


Steven Parton  10:53

Does this current pandemic strike you as particularly timely, given the writing of this book?


Toby Ord  11:01

Well, yeah, the pandemic struck just as the book was coming out. All of my text for the book was locked in about six months earlier, so I couldn't do anything to adjust it. But luckily, I didn't really need to, as I think the section on pandemics, if anything, does look quite prescient. And I think it's interesting in terms of how this pandemic has shaped existential risk. I think the silver lining of the pandemic has been kind of waking us up to the fact that humanity is still vulnerable to these big risks. And there's some aspect where even I suffered from this as well, where you hear experts, you know, discuss the probabilities of a global pandemic killing millions or even billions of people within a given time period, and you can take that seriously, and, you know, write it down and remember these numbers and have some idea about it. But there's something different, something quite visceral, when you actually see it happen. I think that there's some kind of cognitive bias, or something like that, where we tend to be a bit dismissive of, or just disbelieving that, there could be global events which are bigger than any that we've seen in our lifetimes. And so for those of us, you know, who weren't alive during World War Two, there's a kind of, you know, a limit to how bad anything we've experienced before has been, which makes it take a bit longer before you wake up to these things. And I think that ceiling has at least been raised quite a lot by COVID-19. And also, in the same way that COVID creates a kind of immune response for an individual, which kind of protects them from being infected again for some period of time after being infected the first time, I think there is also a kind of social immune response, where humanity suffering so badly from this pandemic is going to mean that there will be a period of maybe five years where there's going to be a lot of call for action on protecting ourselves from the next risk like this. And in doing so, it may be that the silver lining of COVID is that it actually heightens our alertness and willingness to invest more in protecting against existential risk, particularly biorisks.


Steven Parton  13:32

Which, yeah, biotech and AI, I believe, are what you consider the biggest concerns in terms of existential risk. Is that correct?


Toby Ord  13:41

Yes, that's right.


Steven Parton  13:43

And what is it about... let's lead with biotech, since we're already kind of focusing on a biological aspect here. What is it about biotech that concerns you so much?


Toby Ord  13:55

Yeah. It's chiefly the ability to create and to improve pathogens, transmissible pathogens. So there has been a line of research by well-meaning scientists, called gain-of-function research, where people take an existing pathogen of concern. A famous example was taking a form of H5N1 bird flu. This is a very deadly form of influenza, which kills more than half the people it infects, but it was not transmissible between mammals. So we had the situation where it was incredibly deadly, but not that transmissible; you have to catch it from birds. And a researcher did an experiment in the Netherlands, where he passaged it successively through ten ferrets. And the idea was to kind of grind up some material from the previous ferret and put it into the next one, until the virus had been able to adapt to this mammalian system. And in the end, he created a version of H5N1 that was directly transmissible between ferrets. Now, the idea here was somewhat noble: it was to work out what mutations would be needed, how hard they would be, and thus how close we were to, you know, finding, just through natural causes, a version of bird flu that could pass from human to human. But in doing so, it also created this risk that this new virus, which was worse than anything out there, might escape the lab. And there have been many examples of lab escapes of dangerous pathogens. So that's one kind of concern: well-meaning people doing this gain-of-function research and creating these new pathogens that are either more deadly or more transmissible than the previous ones, or perhaps in some cases more resistant to vaccines or antibiotics. There's also the possibility of more directly nefarious work. So I'm also, in fact, even more concerned about bioterrorism or biowarfare, where people just deliberately try to create things like this, and then release them. This is an area where we've got tremendous improvements in biotechnology, you know, we're in a real boom time in terms of biotech, and this really could be the century of biology. But the downside of this is that the rapid democratization of biotechnology is also a form of rapid proliferation. And if we take some of the biggest breakthroughs in biotech of recent times, such as CRISPR and gene drives, in both cases, within I think at most two years of these techniques being first used, and it may have even been within one year, they were being replicated within science contests by students at university. So you have this situation where one year, no one can use this kind of new advanced technology. The next year, only the very brightest person in the world and their elite team can do it. And then a year after that, all of a sudden, there are students who can do it, and the pool of people, you know, is expanding very rapidly. And that's great when they're doing good things with it, which is most of the time. But it does mean that the chance that you encounter someone who has some very pathological psychology does increase. And eventually, as this thing grows, you're going to encounter such people, and they're going to try to do very bad things with it.


Steven Parton  17:47

Yeah, I'm just thinking, as you're saying this, I can't imagine how any kind of regulatory body could respond to something in a two-year window, in terms of it going from, you know, cutting edge to fully democratized. I mean, do you have any thoughts on how you could even begin to push back against that rapid pace of change?


Toby Ord  18:10

It's a good question. It's extremely difficult, in large part because most of the effects of this, at the moment, are benefits. So you've got this ultimately very good-seeming democratization of the technology, and it's hard to get all that excited about it as a concern, even though the small but growing chance that someone uses it for a rare but extremely bad outcome could end up having, you know, more negative effects than the actual positive effects. I think that's unclear, and no one really knows. But that confusion and, you know, dual-use aspect of it does make it very hard. In addition, you've got groups like the Civil Contingencies Secretariat in the UK, who produce the risk register for the UK. And this is generally a pretty good exercise, where they scan the horizon and try to work out what are the risks that we face, how likely are they, and how high-impact would they be. But recently, they've restricted the horizon for that to two years, you know, the number you mentioned. And so, in that case, they're not willing to consider risks that couldn't really happen in the next two years. But if you rule out risks that can't really happen in the next few years, then any risk that would take more than two years to prepare for, you're going to be unprepared for when it comes up. And I think some of these things are going to take a lot longer than two years to prepare for. I think that when it comes to biotechnology, ultimately one of the answers is going to have to be, I don't know how to put it exactly, some kind of sobering up of the biotechnology community. Not that there aren't people who are very sober already. But what I'm thinking of is an analogy to the atomic physics community when it came to nuclear weapons, where ultimately, you know, in 1945, after Hiroshima, they had known sin. And they really felt it. You know, many people in the community had been working on developing the weapons that caused this sheer destruction, and that could threaten even greater destruction in the future. And they really felt that they needed to do something about it. And this gave them a real sense of social obligation. But on top of that, there was also a secrecy that developed, where it was considered just a part of doing business in atomic physics at that point that if you came up with some new breakthrough about, say, fissile materials, you couldn't just publish it. In universities, you know, we generally have this strong push towards academic freedom and towards getting ideas out there. And generally, it's a really great thing that we do have that. But it does run into problems once you reach potentially very dangerous technologies. And it may be that people in biotechnology will have to, in the future, act a bit more like people in the physics community, or in atomic physics in particular, where they just have to accept that they don't have a fundamental right to publish anything that they come up with as soon as they want. As an example of something like that, lest this sound like too much of an overreaction: people have published a lot of stuff that has been very dangerous. And one example is, you know, publishing the smallpox genome. So anyone can go on the internet and download the DNA code for smallpox. Luckily, you know, the DNA fabricators that will produce DNA to order, they search for those strings of base pairs and will block it.
But if someone had a fabricator that didn't have, you know, those limitations built into it, then it could produce the DNA of smallpox. There are still additional steps beyond that before someone could actually do damage with it, but, you know, they've removed one of the massive steps. At the end of smallpox eradication, all smallpox stocks in the whole world were destroyed, except for very well guarded stocks in the Soviet Union, at VECTOR, and in the US, at the Centers for Disease Control. But all of the good work of limiting those things and putting them under lock and key and needing special security clearances to go anywhere near them was all undone by the person who decided to publish the entire genome online.
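As a purely illustrative sketch of the screening idea described here, this is the kind of check a synthesis provider might run on an incoming order; real screening pipelines use curated databases of sequences of concern and alignment tools rather than exact substring matching, and every name and sequence below (FLAGGED_FRAGMENTS, screen_order, the example strings) is hypothetical.

```python
# Toy illustration of synthesis-order screening (hypothetical names and data):
# check overlapping windows of an ordered sequence against a set of flagged
# fragments. Real providers use curated databases and alignment, not this.

FLAGGED_FRAGMENTS = {        # stand-ins for sequences of concern
    "ATGCGTACGTTAGC",
    "GGCTTACCGATAGC",
}
WINDOW = 14                  # length of the flagged fragments above

def screen_order(sequence: str) -> bool:
    """Return True if the order looks clean, False if any window matches
    a flagged fragment and the order should be held for human review."""
    sequence = sequence.upper()
    for i in range(len(sequence) - WINDOW + 1):
        if sequence[i:i + WINDOW] in FLAGGED_FRAGMENTS:
            return False
    return True

print(screen_order("TTTTATGCGTACGTTAGCAAAA"))  # False -> hold for review
print(screen_order("TTTTAAAACCCCGGGGTTTT"))    # True  -> proceed
```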


Steven Parton  22:48

Yeah, and you hit on something there that I think is really interesting. I'm not sure if this is out of scope for you, but what role does the multicultural landscape play in this? In the sense that, you know, in China there was, I believe, the guy who used CRISPR to create the HIV-resistant twins. It's like there's an unregulated body there in China, or a body that is empowered by the government, whereas we might be restrictive here in the United States. But our restrictions in the United States don't stop China from pushing forward. So it seems there has to be a lot of multicultural communication for us to make this work, and I feel like we're maybe not there.


Toby Ord  23:34

I think that's right. Otherwise, you can get an effect where we're only as safe as the least safe country. And this can go both ways. There are various aspects of research which are less regulated in China, where we've regulated them due to ethics concerns in various forms. But by the same token, there are things that are, in some ways, less regulated here, because we think that people have the freedom to do what they want in terms of scientific research, kind of, unless it's been very strongly demonstrated otherwise that it would be, you know, actively dangerous, a kind of clear and present danger. Whereas in China, at least my educated guess would be that more of those things would get blocked. So I think that there is good reason to talk with people across the world and see how different cultures are dealing with this and see what challenges we're going to face. I do worry that there's going to be some situation somewhat akin to the physicists and Hiroshima, where someone does make some engineered pandemic, and it causes a lot of death and destruction. And then, if the biotech community kind of hasn't self-regulated sufficiently to avoid something like that, then they're probably going to get a lot of external regulation.


Steven Parton  25:01

Can you talk a little bit about artificial intelligence and its role in terms of existential risk?


Toby Ord  25:06

Yeah, I think AI risk is pretty complicated, and also very easily misunderstood. So I've certainly seen a lot of colleagues try to put a really nuanced view of this out there to journalists, and then the journalists write stories about the Terminator, and that then infuriates all of their colleagues who are actually practitioners in AI. This happens a lot, where the nuanced concerns get turned into very un-nuanced shouting in the media, with some common mistakes, for example, that it would necessarily involve robots, if there was some kind of risk, or that it would involve them, in some sense, turning on the humans out of some form of resentment, or something like that. But if we look at AI, and we look, you know, at the long history of AI, the original intention was to try to create thinking machines, or programs that can do the full range of intellectual thought that a human can do. They can do all the kinds of cognitive things that a human brain can, and in particular, they can go about some environment, trying to fulfill their aims and preferences, and do so very successfully. And this is sometimes now called artificial general intelligence, to distinguish it from the more narrow approaches where, you know, in some cases AI has been synonymous with something like search, or playing a very particular game, such as the game of chess, rather than being able to play all possible games that are given to it as visual stimulus or something like that. So a good example of AGI, or, you know, some early system in this direction, might be DeepMind's DQN program, which was a pivotal example of deep reinforcement learning, where a neural network was trained to be able to play each of more than 50 different Atari games, just from the raw pixels, and to play many of them at a level exceeding that of a human. That original system actually was training a large number of narrow agents, because it was a different neural network that was trained up for each game, but from the same starting parameters. But over time, you know, they've also worked out how to make systems where a single system, a single neural network, can be trained to play all of these games, depending on whichever one it encounters. And GPT-3 is another good example of somewhat general intelligence: a system that can take textual prompts and then continue them in the most likely manner, based on what it has seen in its very large data set of text from the internet. So for example, if it's given the start of a bit of fanfic, it can kind of continue this fan fiction story, even drawing on, you know, the names of characters in Tolkien or something like that, because it has seen so many of these things in its corpus. And in fact, the fact that it produces quite bad fanfic may just be because that's the most likely thing it encounters on the internet, and so it's the best answer to the question of what would be the most likely continuation of this fanfiction prompt. But it's a system that can write about, you know, this dazzling array of different subjects, including doing literary parodies. If you ask it to start with a sentence and then to continue it in the style of a famous author or a famous poet, it actually does surprisingly well at that, better than I could do. So these are kind of general intelligence systems.
So then the type of concern that people like Nick Bostrom and Stuart Russell have articulated is that if you have a reinforcement learning system like this, imagine something like the Atari system, that is trying to maximize the sum of rewards it gets over time, like the sum of the points it gets over time. If such a system were operating in the real world, then it would end up developing an instrumental desire to stay alive, even if there was no emotion or drive directly programmed into it to say that you've got, you know, an urge to stay alive. That would just come out of solving the mathematical problem of how do I maximize the number of points I get: well, if I'm turned off, I can't get any more points, so I have to try to avoid being turned off. And if it understood enough about humans to kind of count as being as intelligent as a human, then it would also, presumably, unless we very carefully hid this information from it, become aware that, in certain circumstances, based on what it does, humans would be more or less likely to turn it off. And so it would start to reason about this and to act in ways that try to increase its chance of being kept on, and to increase the power that it has over the world, in order to get more points or reward. So this is the kind of concern. If that system were only as intelligent as a human, then it may not be able to do much more than a dangerous human could do. But the concern is particularly if we had a system that was vastly more intelligent than a human. And when AI practitioners have been surveyed on the question of when we will develop AI systems that can do pretty much all of the intellectual work that a human can do, in a recent survey the median estimate for that time was, I think, about 2050. So they're suggesting that this isn't just a total pipe dream; it's the type of thing that we're as likely as not to see, you know, in the next few decades. And that will be a big deal. If we reach the point where there were systems as intelligent as a human, perhaps it wouldn't take long before there were systems that were vastly more intelligent, and these could create a lot of additional risk. So that's the basic sketch of this.
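To make that reward-maximization point concrete, here is a minimal toy calculation (my own illustration, not from the episode; the function and all numbers are invented): an agent that earns one point per step it keeps running, comparing the expected return from leaving its off-switch alone against paying a small cost to disable it. The preference for avoiding shutdown falls straight out of the arithmetic, with no survival drive programmed in.

```python
# Toy illustration of instrumental shutdown-avoidance in a pure reward maximizer.
# Invented numbers: reward 1 per step, a 1% chance per step of being switched
# off if the off-switch is left enabled, and a one-off cost of 5 to disable it.

def expected_return(horizon, reward_per_step, shutdown_prob,
                    disable_cost=0.0, disabled=False):
    """Expected total reward over `horizon` steps. If the off-switch stays
    enabled, each step carries an independent chance of permanent shutdown."""
    total = -disable_cost if disabled else 0.0
    still_running = 1.0  # probability the agent has not been switched off yet
    for _ in range(horizon):
        total += still_running * reward_per_step
        if not disabled:
            still_running *= (1.0 - shutdown_prob)
    return total

comply  = expected_return(horizon=1000, reward_per_step=1.0, shutdown_prob=0.01)
disable = expected_return(horizon=1000, reward_per_step=1.0, shutdown_prob=0.01,
                          disable_cost=5.0, disabled=True)

print(f"Leave the off-switch alone: ~{comply:.1f} expected points")   # ~100.0
print(f"Disable the off-switch:      {disable:.1f} expected points")  # 995.0
```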


Steven Parton  31:42

I mean, from your ethics background, I'm curious if you think the AI would be developing ethics of their own or if they would be adopting human ethics?


Toby Ord  31:52

Yeah, it's a good question. I've thought about this a lot. And it's quite tricky to tease out different versions of what could be going on there, and then it's tricky again to try to work out which ones are likely to happen. So here's an example. Some people, I think including Steven Pinker, have expressed a lot of skepticism about this concern that AI could cause an existential risk. And one of the arguments that's been put forward is this: why would a system that's more intelligent than any human not actually understand what it is we really want? You know, why would it maximize some kind of narrowly defined score function at the expense of what humans actually care about? Why would it be so short-sighted? And I think that the correct answer to that skepticism is to say, well, such a system that was really intelligent would understand what humans want. And in fact, that's why it would attempt to protect itself from humans, and why it would develop these kinds of instrumental reasons for trying to free itself from human control: because it thinks that humans would not like what it's trying to do. So it would understand that humans don't want these things, but that doesn't change the fact that these are the things that would maximize its score. So I think that if we imagine what a reinforcement learning system that was very intelligent would do, the most likely thing is something like this: it would recognize a whole lot about the shape of human preferences, but it just wouldn't care about them. It would treat them as the preferences of these overseers, who it has no intrinsic interest in; all it's trying to do is maximize the score. Now, the score might be what the humans thought would be, you know, fulfilling a good ethical duty for an AI system. But there's this kind of disconnect between the two: one of them is directly programmed into the system, and that's the thing it cares about, whereas the other is just what it thinks humans care about. There could be some clever ways of trying to build the system such that, once it understands what the humans really want, that concept gets directly tied into what it's trying to maximize, such that if it understands, you know, human ethical values, or even just human self-interested values, then it can at least try to create more value for humans. But that's something where we don't yet know how to do it. So that's one of the hopes for how we can align AI systems with human values. And to get back directly to what you were asking: when the AI systems are trying to make sense of these other agents, the human agents in the world, they may well, like we do, divide up the humans' aims into self-regarding aims and also some other components, these kinds of other-regarding aims, the types of things we normally call ethics. So they may well be able to work out that, you know, here's what this human wants, you know, it wants to become happier or wants to eat this food or something, but here's what it is going to do: it's not going to steal the thing that would have made it happy if it stole it and then ate it, because it has this extra framework of preferences around it. So it may well be that in order to really understand and potentially manipulate humans, it has to understand that humans have these ethical codes.
But the trick is, how do we make it actually care about that, rather than just treating it as a brute fact about the world to be worked around?


Steven Parton  35:42

There's a lot of interesting stuff in that. I'll save some of it for another time. One thing that I find really interesting is that you have said that the one in six chance of existential catastrophe, I guess that 15 percent chance or so you've listed, is actually only that low because of how optimistic you are. You think the reason it's even that low is that humanity is going to really step up and address these issues over the coming centuries. How do you think that's going to happen? Do you think that there's going to be something where maybe AI will, as you were saying, get to know us better, and maybe by learning our behavior will actually help us learn our own behavior? Will automation change things? Have you given much thought to how we will actually address these issues, to get us to that one in six?


Toby Ord  36:43

Yeah, well, to start with, I should say, you know, this is really difficult stuff to know, or to estimate. So these numbers that I'm trying to put on things, perhaps the best way to think of them is: if people want to know what I think, you know, what does Toby Ord believe about this topic? He says it's really important, and it's a really big threat. Is he talking one in 1,000? Or is he talking, you know, one in 10? And so I give the numbers, or try to give the ballpark. But I don't by any means suggest that everyone should be compelled towards the same numbers by my arguments. And my kind of very rough guess is something like this: if you had a business-as-usual situation, where you just let these risks keep growing at the kind of rate that they have been growing, and you look at the types of technologies that could potentially cause these existential dangers for humanity over the century, then my best guess would be something like a one in three chance that we don't make it through the century before we succumb to such a catastrophe. But instead I say one in six, because I think that we will at least become aware of some of these concerns. So for example, with the risks that could arise from AI, I think that there's been great progress on that, in terms of trying to articulate what the risks are and to convince some key figures in AI about these risks. And, you know, we've got people like Stuart Russell now that you hear from. It's not just people around the edges who are expressing these concerns, but people who have that kind of central and important reputation in the field. Similarly, the founders of DeepMind have expressed a lot of concern about the possibility of existential risk from AI, and, you know, take that seriously. So it's not one of these cases where there are just a few people, you know, who one could point to and say that they're not experts on AI, who have these concerns. And, you know, Stuart Russell now has a lab of people working on AI safety, and similarly at DeepMind and OpenAI. So we're in a world where this is increasingly being taken seriously. Now, unfortunately, the community working on these things hasn't found fantastic solutions yet, so that's a downside. But I do think it's the type of thing where it's got some purchase, and people are taking the issue seriously. And in general, I think that one reason to be optimistic is that there is a kind of common interest of humanity here. We often focus on those cases where our values disagree with each other, because they're kind of interesting cases of conflict: cases where a Democrat disagrees with a Republican, or where someone from the West disagrees with someone from the East. In cases where almost all people agree with each other, we tend not to talk about those things, such as that having a happy life is better than having an unhappy life, or having a longer happy life is better than a shorter happy life, or things like that. So I think that when it comes to existential risk, this is something that is really in all of our interest to avoid. And I think that the arguments are actually pretty strong that there are serious risks this century, even if it's very debatable exactly how large they are, and also that it's of central moral importance to protect humanity's long-term future.
So I think there's a lot of reason to get on the same page and to reach some common agreement that this is a concern. Then the issues that remain are to do with things like: how do we actually coordinate with each other, particularly at the international level? Do we end up in a kind of Prisoner's Dilemma or tragedy of the commons, where even though it's in everyone's interest, you know, it's in the world's interest if the world moves in a certain direction, maybe it's in our individual interests to cut corners and take risks, because then we reap all the rewards, but we only bear, you know, a small share of the risks? So there could be some concerns like that. But generally, the type of thing that makes me optimistic is that there's this kind of common interest in getting the outcome right. The very fact that it's not very controversial to say that it'd be much better if we didn't destroy humanity's future is the kind of thing that gives me hope.
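As a toy illustration of that tragedy-of-the-commons dynamic (my own sketch with invented numbers, not anything from the episode): each actor keeps the full benefit of cutting corners on safety, while the risk it adds is spread over everyone, so the individually rational choice can leave everyone worse off.

```python
# Toy payoff arithmetic for the commons problem described above.
# Invented numbers: 10 actors, private gain 1.0 for cutting corners, and an
# expected harm of 0.3 imposed on EVERY actor by each actor that cuts corners.

N = 10
private_gain = 1.0
shared_harm = 0.3

# Holding everyone else's choice fixed, cutting corners changes my own payoff by:
my_marginal_gain = private_gain - shared_harm            # +0.7, so I cut corners

# But compare the two symmetric outcomes from any one actor's point of view:
payoff_all_cautious = 0.0
payoff_all_cut = private_gain - N * shared_harm           # 1.0 - 3.0 = -2.0

print(f"My gain from cutting corners (others fixed): {my_marginal_gain:+.1f}")
print(f"My payoff if everyone stays cautious:        {payoff_all_cautious:+.1f}")
print(f"My payoff if everyone cuts corners:          {payoff_all_cut:+.1f}")
```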


Steven Parton  41:11

Yeah. And to your earlier point, I think the pandemic kind of makes me optimistic, because I can see how the international community has had to come together to rally around one problem, and maybe we've started that groundwork that we need.


Toby Ord  41:27

Yeah, I would say on that, in terms of what we saw with the pandemic, the international science and technology community, you know, was reason for optimism; the international political community is perhaps reason for a bit more pessimism. Maybe it depends on where you started on that. If you're extremely cynical to begin with, maybe this is just what you expected. I was a bit more optimistic about nations being able to work together on these things.


Steven Parton  41:56

Yeah. On that point, we're coming up soon on the hour here, and I was wondering if you could, or would be willing to, take some questions from our community that they asked when they found out. Fantastic, yeah. Let's go ahead and jump into one here that I think is really relevant to what we were just discussing. Dr. Rheas Bongi, and I'm paraphrasing their question a bit here, said: altruistic intellectuals have long been concerned about existential risk, but there is no political will backing up these concerns. What do you think will finally bring about the political courage to take these issues more seriously?


Toby Ord  42:35

Well, I guess I would partly dispute that, as far as "long been concerned" goes; it depends on what time frame they're thinking of. But ultimately, it's only really since the development of the atomic bomb, so 75 years ago, that we've really seen much concern about existential risk. And the idea itself wasn't even really around at all, you know, more than 100 or so years ago. There's a very good book on the intellectual history of the idea, just called X-Risk, by Thomas Moynihan. But you might still think 75 years is a long time. It's actually only a couple of generations. And we only really established that existential risk could come from nuclear weapons when we discovered nuclear winter in 1983, and then the Cold War ended less than a decade later. So we just had, you know, less than a decade of Cold War where we really understood that there was an existential risk. And it takes quite a while for moral norms to move. And so I think this is a case where we do have some things to get optimism from. So for example, when it came to environmentalism, caring about the environment just really wasn't considered part of ethics at all, you know, up until about 1960. And then, in a very short period of time, there was massive mobilized concern about this. And within a decade of Silent Spring coming out, all English-speaking countries, except I think New Zealand, had a new ministry created within government for the environment. So that's a pretty extreme example. Obviously, environmental issues are not all solved now; it's not that this solved everything. But it does show that you can have quite rapid progress if there's some kind of crystallization of this. So it could be that this pandemic will help to cause such a crystallization, or perhaps some other near-miss type of warning shot that happens in the near future. I'm hoping it could happen even without that, just once we really get this mature conversation about existential risk rolling. So hopefully, if people hear this and they're interested, and we all kind of work together to start up this public conversation, then we can get there without having to have some kind of warning shot.


Steven Parton  45:09

To continue that, we have another good question here, a follow-up from Guy Morris. He's wondering if there are any organizations that you think currently carry enough influence to help bring about meaningful change in this regard. Are there any organizations that you think are doing a great job of handling this right now?


Toby Ord  45:28

Yeah, a couple that I think are doing a good job would include NTI, the Nuclear Threat Initiative. They're obviously focused on nuclear weapons, but they're expanding that interest to other global catastrophic or existential risks. And also the Bulletin of the Atomic Scientists. You know, that's a group that started just after World War Two and channeled a lot of the energy of physicists, and now also biologists and others, to really try to, you know, get policy effects on issues of existential risk. So there are a couple of groups that have been going for quite a long time, because they started with this first big existential risk. And it's certainly not that all politicians, you know, have to do what they say, but it's somewhat promising as a sign.


Steven Parton  46:28

Let's do one here from Marian Rosner. She's wondering what your perspective is on the role of social inequity in existential risk. How are the social aspects playing into this?


Toby Ord  46:39

It's a good question, and I'm not sure. I don't think that there is a huge connection between social inequalities and existential risk. That's not to say that social inequalities aren't extremely bad, but rather that it seems to me to be a fairly separable conversation, where the work to address one has relatively little impact on addressing the other, compared to issues which are more closely entwined. One way that I could see it starting to have some effect would be at the international level, such as the questions about climate change and who has the responsibility for doing something about that, and debates between poor countries and rich countries, where the rich countries, you know, were historically responsible for a lot more of the damages, but then the new rules that would restrict emissions would potentially stop the poor countries, you know, really going through the industrial revolution in their own countries. So that's an example where something like this could happen, where you could get a lot of resentment if rich countries wake up to these threats before poor countries do. It might be considered, you know, this kind of preserve of people who are wealthy and protected enough from normal things that they've got the luxury of thinking about these other things. I can imagine some kind of challenge on those grounds.


Steven Parton  48:08

And we'll just do one more here, from David Satterlee. He's wondering if you could speak to balancing risk and development between decentralized personal control and the vulnerabilities of a centralized system. So I think what he's wondering here is how the power is distributed and what that means in terms of risk.


Toby Ord  48:31

Yeah, I think it would depend quite a lot on the particular risks; I don't have an overall theory of this. If you look at something like nuclear weapons, there's a kind of interesting mix of these things. It's generally been quite centralized, certainly within states, in that a state has control of its nuclear weapons through a central kind of command and control structure, and that may well be a good thing when it comes to nuclear weapons; it seems like it to me. And in the case of biotech, we have not yet had a big centralization like that. So lots of individual groups and labs are able to kind of do what they want, and they may have different judgments about which risks are worth running. And then maybe some of those judgments about which risks are worth taking are more optimistic than others, and we end up with something called the unilateralist's curse, where whoever has the most overly rosy view of the risk-to-reward ratio ends up pushing forward with something, instead of the person who had the middle, you know, the median guess on this. So that would be a form of concern about a decentralized system. But, you know, in general, I don't think that centralizing is a panacea. I think that, you know, the questioner is correct that there is a different set of problems that happen there. And I'm sorry to say I don't know how to resolve it at this kind of more general level.
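Here is a small Monte Carlo sketch of the unilateralist's curse as described above (my own illustration with invented numbers, not anything from the episode): many actors independently and honestly estimate the value of a risky action that is actually mildly harmful, and if any single one of them judges it worthwhile, it goes ahead.

```python
import random

# Toy simulation of the unilateralist's curse. Invented numbers: the action's
# true value is -1, each of 10 actors estimates it with Gaussian error, and
# the action happens whenever the most optimistic actor thinks it's positive.

random.seed(0)
TRUE_VALUE = -1.0   # the action is actually mildly harmful
NOISE = 2.0         # spread of honest estimation error
N_ACTORS = 10
TRIALS = 100_000

unilateral = 0
median_rule = 0
for _ in range(TRIALS):
    estimates = sorted(random.gauss(TRUE_VALUE, NOISE) for _ in range(N_ACTORS))
    if estimates[-1] > 0:              # anyone may act on their own judgment
        unilateral += 1
    if estimates[N_ACTORS // 2] > 0:   # one decision based on the median view
        median_rule += 1

print(f"Harmful action taken, unilateral regime: {unilateral / TRIALS:.1%}")
print(f"Harmful action taken, median-vote rule:  {median_rule / TRIALS:.1%}")
```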


Steven Parton  50:06

Sure. I'll follow that up real quickly with just a thought. Do you worry about inhibiting innovation at the cost of preventing risk? Or do you think that the innovation could outpace the risk? What does that relationship look like to you?


Toby Ord  50:24

That's another good question. I mean, I am concerned about this, and I think that this is going to be a real challenge. To some extent, I think that unbridled innovation, just letting anyone do whatever they want and, you know, invent any technology they want that's profitable, or that perhaps, you know, a country wants, and that then leaves that country able to, you know, seize power from others, I think that just unbridled innovation is probably not the right approach. But once you try to strike some balance, it's tricky. And there are going to be cases where you get it wrong, including cases where it has more of a restrictive effect than it should, because you were concerned that there was a risk in some area and it turned out actually that there wasn't. So I do think that balancing that is going to be difficult. And there are going to be, you know, people on both sides who get quite put out, because their judgments about where the balance of risk lies differ from each other, even if we're all earnest and well-meaning. It's going to be a tricky one to navigate.


Steven Parton  51:28

Perfect. And I think we'll call it there. So, Toby, I really want to thank you for taking the time.


Toby Ord  51:33

Oh, no problem. It's been really great to speak to you and I really enjoyed these questions.


Steven Parton  51:37

And now we're going to take a moment for a short message about our membership for organizations, which you can find by going to su.org and clicking organizations in the menu.


51:48

Singularity Group was founded upon the belief that the world's biggest problems represent the world's biggest opportunities. Our mission remains unchanged, but our methods have evolved exponentially. Today, we're opening doors around the world as a digital-first organization. We invite future-thinking companies to join Singularity Group to learn about the breadth of exponential technologies, to empower your organization with an abundance mindset, and to grow networks that can create solutions to humanity's greatest challenges. With an unprecedented year behind us and many great challenges ahead, leaders across the globe are wrestling with the future: how to embrace change, stay ahead of trends, and build sustainable businesses. We help entrepreneurial leaders better understand how exponential technologies can be applied in their companies to advance their goals for people, planet, profit, and purpose. And it all starts with the mindset, the skill set, and the network. Together, let's discuss how membership can turn you and your leaders into exponential thinkers and prepare for an abundant future for all. Together, we can impact a billion people.


the future delivered to your inbox

Subscribe to stay ahead of the curve (and your friends) with new episodes and exclusive content from the Singularity Podcast Network.
