description
This week our guest is futurist and speaker, Gerd Leonhard, who authored the 2016 book Technology vs Humanity: The Coming Clash Between Man and Machine.
In this episode, we explore the ideas Gerd puts forth in his book, with a heavy emphasis on how the humanities are a necessary part of what he calls a "Good Future." This takes us on a tour of the failings of transhumanism, the need to update our regulatory systems, questions around our economic models, the impact of culture and storytelling on how we shape our future, and much more.
Find out more about Gerd and his impressive work at futuristgerd.com
Learn more about Singularity: su.org
Host: Steven Parton - LinkedIn / Twitter
Music by: Amine el Filali
transcript
Gerd Leonhard [00:00:01] We need to embrace technology but not become technology. Because becoming technology will seem like it has lots of advantages. And there will be, of course, borderline cases of being half technology and half not, like cyborgs. But generally speaking, that is not a good idea, because it is not our biggest advantage to be like technology; it's actually our biggest advantage to be the opposite of technology.
Steven Parton [00:00:38] Welcome, everyone. My name is Steven Parton, and you are listening to the Feedback Loop by Singularity. This week, our guest is futurist and speaker Gerd Leonhard, who authored the 2016 book Technology vs Humanity: The Coming Clash Between Man and Machine. In this episode, we explore the ideas that Gerd puts forth in his book, with a heavy emphasis on how the humanities are a necessary part of what he calls a good future. This takes us on a tour of the failings of transhumanism, the need to update our regulatory systems, questions around our current economic models and incentives, the impacts of culture and storytelling on how we shape our future, and much, much more. So without further ado, everyone, please welcome to the Feedback Loop, Gerd Leonhard. To start, I want to talk about your background a little bit, because in an interesting life path, you started in the humanities with theology and music, and now you're doing futurism, which feels like a very far cry from that early beginning. So what was it that brought you from that unique path in the humanities to where you are now, doing the work of a futurist?
Gerd Leonhard [00:02:03] It's a long and winding road. I mean, I've always been interested in the future, really, since Star Trek when I was a kid and Blade Runner when I was a young adult. And at the same time I was also already in the humanities as a musician. I'd been a musician since the age of ten, and I played professional music for a long time. I had a short stint with, uh, evangelical questions when I was a teenager. But, you know, none of that really stuck with me. In Germany, where I lived back then, you know, studying theology is more like high-level social work: you end up working for the Lutheran Church and you do what you want. And that kind of seemed like an interesting job to me. And, you know, I was a Green Party activist back then, I was a musician, and so that became something I did. But I realized very quickly that I really wasn't interested in religion as such. But it was an interesting course of study, because I did learn all of the major works of philosophy, and that gave me a long background in all of the theories of philosophy and how to look at life, all the way to the Koran and Buddhist teachings and everything. And that was a good background. I was still very young then, you know, like 22 years old. And then I moved to the US and started my musical career and went to Berklee College of Music in Boston. And I realized very early that I was an unconventional musician, that I could play and do things, but I wasn't really trained very well. I could not write very well; I couldn't do the academic stuff. And for me, going to Berklee was a little bit like, okay, it's nice to have that formal education, but it didn't really do much for me as a musician in terms of actually playing music. And then when I got out, I realized I was playing with really amazing people in California, and some of them couldn't even read the newspaper. I mean, they were completely illiterate, and they were tons better than I was as a musician. So that whole process got me thinking about what I'm good at. And I realized I was quite good at seeing scenarios, not, I would say, seeing the future, but understanding the context of what's happening, and extrapolating and telling stories about it. And inadvertently, when I was a musician, I made many records, I met many people. And then in the mid-nineties, one famous lawyer came up to me and said, let's do music on the Internet, because that's the hottest thing. And I was in Germany then, but then I moved to New York and then to San Francisco, and I became an Internet entrepreneur in digital music. And that got me thinking about the future, because for me, the future was there, you know, music in the cloud, like we have with Spotify. I was like ten years too early with my startup concepts. And I wrote a book in 2004 called The Future of Music, and that became the blueprint for the music business, including Spotify, because in it we put forward this idea of music like water, and that was an idea we got from David Bowie. But, you know, it was freely appropriated and then given to the world, and that became Spotify. So I realized very quickly I was good at understanding this stuff and having intuition about it, but not so good at building companies around it. And so that quickly became my story: I could help people to understand what was coming and to make their own businesses from it.
So that became sort of my path, using my skill. And now it's really something where, I think, if you practice it long enough, eventually you learn how to understand the future.
Steven Parton [00:05:53] Yeah, well, when many people think of futurists, they think of transhumanists, and you were talking there about your philosophy and theology background. It seems like you're not very fond of transhumanism as an idea; in fact, I think you say it's a lemming-like rush into the unknown that is the scariest possible approach we can take. How was that idea formed for you? Do you view transhumanists as religious in their fervor, or is there something about that approach that is informed by your early years?
Gerd Leonhard [00:06:35] You know, it's been a long discussion, humanism and how I got there, but I have no religious leanings whatsoever. That's not a topic for me at all. For me, the topic is about, you know, what is a life worth living and what makes us human. And I fear a lot of the discussions about merging with technology, which is the proposal of transhumanism, to essentially become superhuman. And while everybody wants to be superhuman, I'd love to be superhuman, I'd be the first one to admit it, I don't think it's going to be a considerable advantage over the things that are already superhuman, which are machines, because the things that make us human are very hard to scale and to change without losing anything. Every time we add something, we'll lose something else. And there's, of course, Marshall McLuhan's theory of how we evolve and so on. But basically, I think improving humanity and curing diseases and living longer and all of that is good within reason. Taken to the extreme, though, it becomes essentially an experiment, a modification, a sort of free-for-all, no matter how much I'd have to gear myself up, and what is the ultimate destination? We are already seeing the loss of humanity in social media, with this aspect of manipulation and the fake world, as I call it. And now we have synthetic worlds coming up with AI, the Apple headset. And, you know, I'm a geek at heart. I love to play with that stuff. I'll be the first one to buy the Apple headset and all that. But I have to say, it's put the fear into me when I think about that becoming normal, you know, so that if I want to have a phone call, I've got to put on this nerd helmet. If I can't function without technology, without connecting to the Internet, if I can't boot in the morning, so to speak, without a connection to the Internet, I would be very worried. And that is what transhumanism is proposing: that we can be superior by becoming one with technology. And I think the path is the opposite. We should let technology do all the heavy lifting, solve all of our major practical problems, but not create a black box or a super entity that runs our lives, where it becomes basically impossible to understand what happens. It's already kind of happening with GPT, that we don't know what happens inside, you know. And it's not just a question of control; it's a question of values. I think we need to go back to our offline luxury as much as we can. We need to connect with ourselves and not just with our devices. And so this is not going to be an easy path, because we want the progress, we want the advancement, we want the good stuff. But the bad stuff could be worse than anything we've ever imagined.
Steven Parton [00:09:52] Yeah. How do you think we're doing in terms of moving down that path? You mentioned social media, but you also mentioned several other things where it sounds like you don't think we're doing very well. How do you feel we are going on this trajectory? Do you feel like we're on a good trajectory right now?
Gerd Leonhard [00:10:12] On the one hand, we're doing great in scientific and technological leaps. We're approaching quantum computing, we're approaching nuclear fusion, we're approaching preventing cancer. I mean, these things are just mind-boggling. I'm really totally excited about that part of it. On the other hand, we are not spending enough resources on alignment, as Stuart Russell calls it, the alignment of what we want, and on the discussion of who is in charge, what I call mission control for humanity, and on collaboration. Because, you know, this is a free market driving all of that, and it's a total gold rush right now. It's driven by the military, as it always has been, but even more so now. It's driven by power, you know, by people and companies and states that want this power. And AI isn't the only kingmaker. There's quantum computing, there's nuclear fusion, there's genetic engineering, there's geoengineering, there is synthetic biology. So whoever rules those domains rules the world, right? So that is something that worries me: how we can collaborate to use it for the good. And just a couple days ago, I was reading Marc Andreessen's pamphlet on how AI will save the world: AI can and should and will save the world, in defense of AI and AGI, basically saying, just get off it and let us do what we have to do, because that's how we're going to save the world. And it was appalling. I mean, the magnitude of all the BS. And Marc is an amazing guy, right? But this is the first time I have to say, I don't get where you're coming from. It sounds like a total recipe for world domination, basically an ultra-capitalist approach. And so I tweeted a little about it. He blocked me on Twitter. Thanks, Marc. But in any case, that got me thinking about us essentially heading for a conflict between the singularity, transhumanist, neoliberal money system and the humanist world, which is of course led by Europe in many ways, and to some degree also in the US. But there's a huge conflict there about who's in charge. And to that, my answer is, well, you know, it's very hard to talk about "we" when we talk about the world, but we're going to have to agree on the agenda here, because this is the same agenda that we had to solve with nuclear power, nuclear weapons, genetic engineering, the ozone layer. And we did solve it. If we can't get together on these top-level issues and find at least a bottom line, you know, then in 20 years it's the singularity and we're gone as a species.
Steven Parton [00:13:10] Yet you mentioned that it's a gold mine right now. And I. Do you think that that desire for the money that exists in there is making us blind, blindly optimistic? It's making us kind of pushed to the side. The ethics and morals and the human humanity is really in general in pursuit of capitalism, which I think at one point you say is, quote, unfit for the future.
Gerd Leonhard [00:13:37] Yeah. I mean, it's very human to pursue power and money and all that. So it's probably inadvertent in many cases. It just kind of happens to you. And I think this happens when you get co-opted. You know, if you hang out with the rich and famous and you're jetting around the world in private planes, that becomes a new normal for you. And I think this is what's happening with a lot of tech companies. You know, the world is our oyster, the problem is not any of this, the problem is the humans, and anything can be solved with tech. And so this really is worrisome, because it's this underlying belief that we can solve everything with technology and that that's what we have to do. So yeah, that is very concerning to me, because we find this really outmoded obsession with profit, growth, and power, and the combination thereof. You know, how many Internet companies, tech companies, are very closely intertwined with the military? The answer is most. And how many futurists work very closely with three-letter agencies and the military? Many. And there is a reason for that, and that's power and money. And so I have rules where I don't work with three-letter agencies or the military, any military, and I try to not support that agenda. And, I mean, I think it's very hard to put aside the commercial thing. You know, Stuart Russell said the other day in a conversation that 10x GDP is the possible outcome of working with AI in every part of our society. 10x GDP. Okay, so that's $13.6 quadrillion. And I believe that is possible. And this is, of course, the irony: we can do many good things with that, you know, solve climate change, solve cancer, solve food, solve water. Yes. But this is the biggest thing ever, really, like the printing press, like electricity, probably bigger than that, and much bigger than the Internet itself. And so that gets a lot of people thinking, you know, that's within reach and they should just grab it. It's like the guys from the Audi and VW diesel affair: they had the choice to make a billion every quarter or 300 million, and so they chose to make a billion, because, you know, that's obviously better. And I think that is fucked up. I think that OpenAI openly saying they want to pursue an artificial general intelligence, an AGI, is just not good, because using AI and pursuing AI as a business is one thing; pursuing an intelligent entity and building one, because that is the grand prize of beating humans, essentially, is just a really bad idea. And that should also be subject to regulation. Clearly, if your agenda is to build a superintelligent entity, you're going to change the world. Why do you get to change the world? Because you got 10 billion from Microsoft? You know.
Steven Parton [00:17:06] Yeah, that's the typical problem, though, right? If you try to regulate in one country, other countries are going to race ahead, and then you're going to lose power in the geopolitical landscape. So how do you approach legislating or regulating these things in a way that doesn't make you fall behind economically on the global scale, but also ensures that you have some ethical movement forward with your technology?
Gerd Leonhard [00:17:33] Well, I think, and it has been said many times, we probably need to create a global council like the International Atomic Energy Agency, an international agency for artificial intelligence, an IAEA for AI or whatever, to create a mechanism of public discussion, kind of debates that move towards definitive rules, like we did with nuclear. It took 14 years for that to happen. We don't have that time here. But, you know, I think every country that is in this race is looking for benefit that is beyond just a pure power grab. We may have the power grab in North Korea, or maybe even Iran, which I doubt. But we do have some sensible people in almost every country. You know, let's not talk about Russia right now; they've got me seriously worried there. But in our short history of 30,000 years, we have managed to come together when the shit hits the fan, so to speak, and we will do the same again. I don't believe that we have to say, well, the Chinese will develop an AGI weapon and then we won't have one. You know, this is the Oppenheimer problem; the movie is coming out right now. He didn't want that bomb to happen. He knew there was a chance it would happen. He wanted to demonstrate it in the water outside of Tokyo Bay, and the government said, no, no, no, we have to show that it actually really works and kills lots of people. And because of that, we got the agreements, of course. So this is kind of the irony. But we can't do the same with AI, because it's much easier to build AI, to spread AI, and to use AI for nefarious means. So the pressure is going to explode in the next year for people to get together and say, what are we doing here? What are we doing on the lowest level, like automation and potential unemployment and potential distortion of truth, bad narratives, you know, all the practical things? This is not existential, but it's close, because if we can't trust anything that we see or hear, that's kind of a problem. And then there's the next level, the control level: building general entities that are generally intelligent. That is going to require global collaboration and will be subject to global regulation. I think it's not unthinkable that we will do that. We may have an incident first, a Hiroshima-type incident of AI, which I don't want to see, but it could happen, like an air traffic control collapse or a stock market collapse by mistake. Because, you know, it's not that AI has an agenda. It's not going to sit down and say, I want to take all these guys out. But with bad alignment, the AI says, I've got to take all these airplanes offline, because that's the command. So that's what I'm worried about.
Steven Parton [00:20:29] Do you think there's a chance we'll get ahead of it or do you think there will be some kind of forcing function where mass unemployment puts pressure on the government or, you know, the deepfakes become so commonplace that we no longer can rely on the information ecosystem and these things are going to force us to take action? Or do you think we actually will maybe do something preventative ahead of time?
Gerd Leonhard [00:20:52] Well, on the employment side, I'm not that worried. I think it will be a saving grace if we can all work less because machines do the commodity work. That's not a bad thing. It will reorganize the entire employment market, but there'll also be hundreds of millions of new jobs in all of those segments. So, yes, it'll be tough: if your work is routine, 80% routine, you are in trouble. But which job is 80% routine? Not even the call center. So that isn't the biggest argument. The really big argument is, point number one, the societal narratives that generative AI can create that are completely fake and believable, that will destroy democracy and that will create war, right? Point number two: we create an entity that is uncontrollable. It's a black box. It does all of those things that I mentioned before, but also has a failed agenda on alignment, so that we create something we can't control, and it won't even have bad intent. It will just do whatever it thinks is the best thing to do, and it runs out of control. So I think we are not too late to make this happen, to collaborate on this and to also create a council that supervises companies like OpenAI and many others that are going in a similar vein. And that goes back a little bit to this idea that it's all about monetization. Basically, what it comes down to, and this is of course the real challenge also of singularity: if we have a technological singularity, we also need an economic singularity. An economic singularity means all that money that's being generated has to be put into the hopper in a larger way, not just going to the people owning the means of production, the means of technology. So that means people, planet, purpose, prosperity, as I keep saying, the four Ps. It means a different nature of stock markets and a global redistribution of money, which we have to do anyway to solve climate change. But anyway, we have to reorganize the economics. If we reorganize technology, we have to reorganize economics. And Calum Chace wrote a book about the economic singularity that is well worth reading. He's a transhumanist, singularity person. And of course Abundance, you know, Peter Diamandis, a lot of the stuff in that book is right on. But I just don't believe that humanity will be saved by technology. You know, technology is a tool; we are the ones who have to do the saving.
Steven Parton [00:23:31] In that regard, do you think a cultural singularity is necessary then, some kind of shift in the zeitgeist that makes us change our values as a species, really, not just as cultures and nations, but as a species worldwide?
Gerd Leonhard [00:23:44] Yeah, that absolutely goes together. And to a certain degree, of course, it's up to all individuals to decide how superhuman they want to be. But in a world where it's the black pill or the red pill, you know, what are the real choices? That's like saying, today you want to leave Facebook? If you're a publisher, you can't, because your traffic goes down by 70%. So those are not choices that we have. We can't live without googling. Sure, we can use Bing, and okay, maybe Bing is getting better now, but those are not real choices, you know? And so if we're then going to be forced to wear the Apple Vision and use AI just to be alive, no, that's not a very good outcome. We should have enough liberty to create a world that still allows us to be human in various degrees, not necessarily outlawing the alternatives, but finding a way forward that has some sort of common standard, you know.
Steven Parton [00:24:43] Yeah, in chapter four of your book, Technology vs Humanity, you ask the question: are you ready for the biggest loss of free will and human control that humanity has ever seen? And you just kind of mentioned it there a little bit as well. Could you elaborate on what that means, why you see this paradigm shift as a threat to our autonomy?
Gerd Leonhard [00:25:07] You know, I think every time we use more tools that are more sophisticated and we give up thinking on our own and doing things ourselves, we may create some fabulous shortcuts and have amazing results. But part of the result is the effort. So, for example, you can get an iPad and download a music app and start playing music, and maybe 20 hours later you have a gig as a DJ. And good luck to you if that's what you can do; that's amazing, right? But to study an instrument is 10,000 hours. And do you really think that the quality of what you create after 10,000 hours is the same as 20 hours on the iPad? And then you could say, well, maybe it's not, and who cares? And I would agree with you. I just don't want to call it the same. And I think this is one of the issues: we are removing our own authority of thinking by using more and more tools that do it for us. So in India, where there are lots of matched marriages, you know, arranged marriages, I think 70% are arranged, it's people doing the arranging. And that's weird enough from a Western perspective, though they seem to do pretty well with it in India. However, now it's the AI doing it. There's already an AI system that arranges marriages based on what the AI says. To which I say, I'd rather have the person in the local village making lots of mistakes than the AI running our lives in such a way. Or if you take things like Replika, you know, replicas of people I can talk to, and other perversions like this. Yeah, you can laugh about it, it's no big deal, and nobody actually uses that very much. And we could say, okay, there are aberrations that will always exist. But we shouldn't make aberrations normal. It's just like the guy who lost his legs, you know, when he was hiking. He lost two legs and he got himself bionic legs built, and now he can hike better than with his real legs. And that's good for him, because he lost his legs. But now people are saying, I should have the right to remove my legs and get bionic legs so I can also hike better. Hmm. No, no, no. That's not the same thing. And that, you know, will lead us on the road to dehumanization.
Steven Parton [00:27:23] Yeah. And you mentioned the generative side of that as well. One of the concerns I've heard a lot lately, and I'm surprised, really, at how much I have heard about it, is that even if something like automation did come along, and we did something like universal basic income or some redistribution of wealth that allowed people to pursue meaning, pursue personal growth, work on cultivating these skills through 10,000 hours of training, as you mentioned, it wouldn't really be worth it, or there wouldn't be the motivation for it, because you would know that a machine could do the same thing in seconds, and people wouldn't feel as motivated to do the hard version, to learn the skill, when they could just do it easily with digital tools. Do you worry about that?
Gerd Leonhard [00:28:11] Yeah. I mean, the main thing that bothers me there is the promise of a great simulation being so easy and instant. It's like Blade Runner 2049, you know, the scene where he has the girl, the hologram, hanging from the ceiling. And then he gets a fancy device and she pops out and she's free from the device, but she's still a hologram. And then the power goes out and she's gone, and he's the loneliest man ever. That is because the simulation is just that: a simulation. A simulation of love, a simulation of empathy, isn't love and empathy. It is a simulation. And if we're tempted into simulations, whether it is with an AI or a love bot or a hologram or the Apple Vision, I could say, well, that's pretty interesting if I'm doing it for learning or for whatever practical purpose. But when I use it to live in the simulation, then to me that's a perversion of the purpose of it, because a simulation isn't the real thing; it's just putting me into a fake world. And so this whole discussion of AI and virtuality and everything, it's really about the synthetic world that we could create with this. Like, you know, Zuckerberg said about the metaverse, there is a time coming where we're going to spend the majority of our lives in the metaverse. Well, good luck with that, Mark. It didn't work, and I sure hope it doesn't work. That's just ridiculous, because real life is what we have between humans, between the planet and people and nature and, you know, other species, maybe. It is not what we create as a hallucination.
Steven Parton [00:29:58] Do you think that becomes normal?
Gerd Leonhard [00:30:00] Huh?
Steven Parton [00:30:01] Do you think that will stay true into the future? I mean, a lot of people in the short term, I think, would agree with you. But in the long term, as virtual reality gets more advanced and lifelike, you know, some people would argue that if the brain can't tell the difference, then we'd be just as happy to live in a synthetic or fake world as in a real one.
Gerd Leonhard [00:30:22] Well, that might be true. It's just like, you know, Yuval Harari, I think, said that organisms are algorithms, right? Mm hmm. I don't think that's true. And if it was true, I wouldn't want to know about it. I mean, maybe in 2050 we'll find out it's true. That is possible. I just think that, from the current point of view, there are many things that we don't know about ourselves. We don't know how that works. And when we start messing with this complicated formula in a large way, my fear is that we're losing something. As Marshall McLuhan said, first we make technology, and then technology makes us. And so there has to be a balance between the reality check and what we are trying to do and what we're trying to get to and what the goal is. And I think ultimately it's all about that goal. If our goal is to become as gods, whoever that is, or become superhuman, if that's the goal, then, you know, the singularity is on the right track. But for the majority of people around the world, and I've done a lot of work on this idea of what people want, the good future is about not dying, not starving, having children, having basic rights, having self-realization. They don't want to be God. Sure, everybody wants to be God in some way or the other, but, you know, they want to have a good future. It's much simpler than that. And I think that is the first goal. And then, if we play our cards right, we may eventually get to a kind of Star Trek society, you know, where the material things are no longer that relevant and where there are different people doing different things. But the Star Trek society has a strong consensus on what it is doing and what the purpose is. And I think this is really the ultimate debate of singularity and transhumanism. And right now, to me, it's kind of a fringe thing, you know? And I like to look at it and I like to talk about it. But I think for most people around the planet, we have much more pressing issues than becoming transhuman.
Steven Parton [00:32:35] Yeah, well, you mentioned the good future there a few times, and I know that's a narrative that you're very fond of. I believe you put a video out somewhat recently about that as well, and it feels like that's a narrative that isn't well captured in the media. For instance, you mentioned Blade Runner, but I'm also thinking of things like Ex Machina and Black Mirror, and a lot of the narratives that we have don't seem like good future narratives as much as they seem like warnings of apocalyptic doom. I would not want to live in that landscape.
Gerd Leonhard [00:33:10] It's been totally terrible the last 20 years, really. I mean, everything that comes from Hollywood, Nollywood, or Bollywood, or Netflix, about the future is basically negative. The narrative of the future is terrible. And if you ask the millennials, Gen Y, everybody between 20 and 45, about the future, including my own kids, so I should know, they say: the future will be terrible; my future is going to be worse than the present of my parents. That is the answer. That is because they're watching Black Mirror, they're watching Utopia, they're watching all the other dystopias, and they're watching all the debate about Don't Look Up, and on and on and on. Pretty much all the media we watch about the future is negative. And then social media on top of that is also amplifying the negative, you know, six to eight times as much. And this is why my pitch, for three years now, has been: we need a good narrative, a new narrative. We have to do a rebranding of the future. And that's what I'm setting out to do with the Good Future Project and with my new film that's coming out in six weeks, called Look Up Now. It's a film about AI that says exactly this: you know, this could be heaven if we do it right, or it could be hell if we just don't look up. And my idea is, you know, rather than the asteroid hitting the planet, we harvest the asteroid. We put it into orbit and slowly take its energy to improve things, so it doesn't hit the planet; it makes the planet light up and we can use it for energy. That's kind of the concept. And so the good future is entirely possible, but the narrative is just terrible. This is like New York City in the seventies: nobody wanted to go there, and then they came up with "I Love New York." Yeah, I would like to do a campaign that says "I Love the Future," so that kids, especially kids, can get off this belief that the future is terrible because nobody is collaborating and everybody's bad.
Steven Parton [00:35:09] Do you think that shift in narrative is crucial for having what you call a future-ready mindset and for us to really step up to the problems? Because it feels like the human animal doesn't respond well to stress and fear and anxiety, and if we have all of these narratives that make us feel those negative emotions, then it feels like we're going to make worse decisions and steer technology in a worse way. So it feels like we need some optimism if we're going to meet the challenges that the future brings.
Gerd Leonhard [00:35:37] We can't go into the future based on fear. Barbara Marx Hubbard, the famous futurist who died, I think, two years ago, rest in peace, she said: your mindset contains your future. And to that I would say, you are what you eat, you are what you watch. And you're looking at this stuff like Ex Machina and Black Mirror and saying, oh shit, you know, I knew it. Or The Social Network, right? Great film. But, I mean, it ends badly for us every time. Every single time. So then you come out as a 17-year-old saying, the future is going to end badly for us. Right? And this is your mindset, and these are the people who are going to vote now. And who do they vote for? They vote for some fool who's promising stuff from the fifties.
Steven Parton [00:36:20] Yeah, a demagogue usually.
Gerd Leonhard [00:36:22] Yeah. Yeah. You know, somebody was completely, utterly clueless about the future. So. And this has to change. And this is why I think the rest of my career I would dedicate the good future idea because, you know, we can make this work when we get off this idea that we can't make anything work. And, you know, there's been lots of proof shown that humans aren't evil by nature. We do a lot of really bad things, but it has been proved that we can collaborate. We're actually very good at solving emergencies. So all of us is basically a question of decision making by saying I refuse to buy into the despondency and to into this kind of doom rhythm, you know, And that's why I say, you know, AI isn't about doom, Singularity isn't about doom. There's good components in all of these things. We just need to find better governance and more collaboration and get off this either the the semi religious technology cult that Marc Andreessen personifies or this kind of extreme capitalism that says, you know, whatever makes money is fine, you know, and we end up with Aramco. The Saudi oil company is the most richest company in the world. I mean, there are paradoxes like this where I'm like, no, that that is not the future. That is not fit for the future. So then that's the other thing. We have to get off the old world's words that people are using. Like they say, okay, this is socialism, this is communism, this is fascism, whatever. Those words are utterly useless. The only thing that matters for the future is, is this concept fit to create the good future. That's the only. And I don't care what Marx and Engels said and what happened with Russia, you know, basically that's all passe now, because what we need to do is come together and figure out how we can use all that stuff to create a good future.
Steven Parton [00:38:10] Right. Are there any technologies, movements, or organizations that you feel are cultivating or championing this idea of a good future, that are pointing us in a good direction and moving in a direction that you would like to see us go?
Gerd Leonhard [00:38:27] I think there are many. You know, most of the clients that I give speeches for are technology companies, so I know them really well. I know that many people in technology are positive about the good future; they're looking to do the right thing. There's a lot of temptation with money, a lot of temptation with power, a lot of pressure on them from the markets to perform. I think companies like Microsoft have a good chance of bringing about something more purposeful than just a race. However, you know, dethroning Google is very tempting. And generally speaking, I think what's happening is we have mostly people with good intent. The Future of Life Institute, for example, is doing great work on bringing this agenda home. It's not as simple as they propose, I mean, I did sign the letter, but it's not that simple. However, they have good intent. But is there going to be a sort of overarching group or supervisory board or something? You know, in ancient Greece, we had philosophers that were debating day in and day out about whether we should do A or B, and that influenced the public agenda. We don't have that. We have Noam Chomsky in America, we have public intellectuals in France, and we have Douglas Rushkoff, you know, interesting people. But do we have leaders that show us what is right or wrong, not in a political sense, but in an ethical sense? Silicon Valley really needs to get on that agenda. And, you know, I would say we have to invest as much in humanity as we invest in technology. Right now we're investing 99% in technology and 1% in humanity and the humanities. So, well, everybody should program, but nobody should know what right or wrong is? That's not going to end well.
Steven Parton [00:40:28] How do you invest in humanity? I'd love to hear you unpack that a bit more. Is that just in the sense of, maybe, these think tanks or consortiums or organizations? Or is it shifting educational priorities? How do we focus on humanity more?
Gerd Leonhard [00:40:44] Well, of course, education is a huge thing. You know, it's being underfunded, so we're cutting sports, we're cutting music, we're cutting ethics, we're cutting everything but programming. And everybody should be a damn programmer? No, the machines program themselves; AI is already doing that now. So, yes, we need programmers, we need scientists, but we have to bring the humanities back. The only jobs that we'll have in the future are the jobs that machines can't do, and that is getting abundantly clear. And to do that, we have to do the things that only humans can do, and we have to learn how to do that: compassion, empathy, imagination, intuition. We need to bring all that back, bring the human factor back into school. Right. And then we have to redo the stock market to focus at least partly on the human goal, people, planet, purpose, prosperity, not just on the money goal. That's not human; that's just one part of the human mind. So, yeah, that has to be brought back. And then I think focusing on humanity also means having more debate and more room for this. And you know what technology companies are very good at, and I know this really well because I work with them a lot, is essentially saying: our job is to make just the perfect product and make sure it works safely and nicely, and side effects? We're not interested. The side effects of the Internet of Things? No, no, no. Total surveillance? No, that's not our topic; some government should do that. Unemployment? That's not our concern. Face recognition, terrorism? No, not our concern. But that's not how it works. So I have told many of my clients, and also governments: we should get tech companies to take a technocratic oath, like a Hippocratic oath, that says, I will use my power and my money to do right by humanity first and foremost, to not kill people, to not do bad things with what I do. Well, if we did that, they'd have to think twice about what they're doing and how they're doing it, and have a more balanced approach to this. So that would be my wish, that we can have this kind of oath and supervision. And, you know, to be fair, everything is regulated but technology. Technology is the holy cow of American business, and Chinese business also, and to some degree Europe as well. Nobody wants to touch it. And what do we have from this? We have many good things, but we have many aberrations that are piling up, you know, basically making a dysfunctional society. And this is exponential: if it's a little bit dysfunctional now, in ten years it'll be massively dysfunctional.
Steven Parton [00:43:24] Ironically, given everything we've said, do you think that one of the solutions is having more technocrats, a kind of movement towards technocracy, where we have more scientific experts, people with technological training, in our governments? Because right now it feels like that's one of the issues: we don't regulate technology because the people who are trained as lawyers and politicians, I don't know if they just don't have the right incentives, or if they have perverse reasons for the decisions they make, but it feels like they also don't understand the technology in a very real way.
Gerd Leonhard [00:44:06] Oh, absolutely. I want a driver's license for politicians and leaders, a driver's license for the future. They have to prove that they get the future before they go into office. They have to be trained. I call this the future mindset. There's no miracle to it, you know. Like Bill Gates's five-hour rule: five hours a week on the future. And then I want to see what you know before I vote for you. And, of course, the voters have to ask for that, too. We need to make sure that our politicians and leaders know about what's happening in technology and have opinions, not just like the Facebook hearings, you know, from before. That was pathetic, right? It made you cry. And in Europe, we have the same problem. Everybody spends their time on the next reelection. And, you know, the European Commission is doing great with many, many things, but it is an utterly bureaucratic organization. It has great leaders, most of them female, by the way, which is a good sign, and they're doing a great job, in my view. Not a perfect job, but okay. So these things could be brought forward, and I think we have to put this on the public agenda: to look at the future is part of the job, to understand the future and to talk about the future and to develop visions, you know, not just to be practical, but in a real-life way. I think what is happening now is that the good future agenda, the agenda of making the future right, isn't going to be led by CEOs or by politicians. Because for them to go out and say, I want a carbon tax on meat or on flying because we have to solve climate change, well, good luck to a politician who says that, right? Especially in America. People like to fly and eat meat. That would be a death wish. So first, I think it needs to be all of the millennials and Gen Y and everybody, not my generation, because we're a little bit too late for that, to essentially create a huge upheaval about all of these issues and say, we want you to address this, we want you to say something, and to start a movement. And when that movement gets going, you find leaders to go in front of it, because that's what politicians do: they jump in front of something that is gathering steam and say, hey, that's my agenda, too. And then there they are. Like Jacinda Ardern, you know. Well, of course, that's New Zealand, also kind of a fringe case, but still a really good case. I admire her work there. So if we have that, I think then we can start a movement that basically says, okay, this is what we want, and then everybody has to get behind it. And I think this is the kind of thing we're seeing now. I predict, well, I don't do predictions, I observe, that as soon as the Russia-Ukraine conflict is somewhat resolved, even if it's fake, there'll be so much pent-up energy in Europe, there'll be a groundswell of discussions about AI, about automation, about climate change. And we are at a point in time, I always say, like 1968, when I was seven years old, where the world changed in five years. From '68 to '73, we had a different world economically, socially, culturally, in five years. And this is the same time, '23 to '25. Everything will change. Yeah, and it's got to change in the right direction.
Steven Parton [00:47:37] Well, on that note, as we start to come to a close in our time here, do you have any words of wisdom or closing thoughts, things you'd like to point people's attention to that might help them embrace that future-ready mindset, whether you're a CEO, whether you're a politician, or whether you're just an average person trying to figure out how to start that movement or maybe find a bit more optimism in how you're approaching the future?
Gerd Leonhard [00:48:03] Yeah. I mean, first, maybe three of my mantras. My first one is: the future is better than we think. Start paying attention to all the good stuff that's happening. As Gramsci, the Italian philosopher, said: pessimism of the intellect, optimism of the will. That's the agenda. So let's look at the good things that are happening. Kevin Kelly says we should not be optimistic because we have fewer problems; we should be optimistic because we have more capacity to solve them. So let's look at that: the future is better than we think. The other thing is, as far as technology goes, technology is not what we seek; it's how we seek. We should never forget that. What we seek is, for every human, different things, but by and large happiness, self-realization, the good future. And, you know, technology does not create any of this. None. Technology is a tool that we use to create the embodiment of what we want. And we are not going to count on AI to fix the future for us. That is just utterly ridiculous. Okay, the last point, which I made in my book: we need to embrace technology but not become technology. Because becoming technology will seem like it has lots of advantages, and there will be, of course, borderline cases of being half technology and half not, like cyborgs. But generally speaking, that is not a good idea, because it is not our biggest advantage to be like technology; it's actually our biggest advantage to be the opposite of technology. So that's inefficient, arbitrary, experimental, emotional, unorganized, you know, all the things that make us human. And that is what we need to protect when we think about the future. And lastly, in terms of actually building the future mindset: spend one hour a day in the future. And I'm not talking about watching films here; I'm talking about reading the best books you can find. Reading books, whether printed or electronic. I read three or four books a month, and I find that by reading the best books, I can develop my future mindset. Then I can start thinking about this, and when the time comes, somehow I'll know what I should think about it and what is possible, because I've stashed up this kind of background, and it's something that's already there; all I have to do is pull it out. So this is basically what I think we need to do: spend an hour a day in the future to get ready for what's coming.
Steven Parton [00:50:44] Aside from your book, do you have a book recommendation, maybe, that you could tell people to start with?
Gerd Leonhard [00:50:50] Well, unfortunately, my book, Technology vs Humanity, is seven years old now. Like most things in my life, this book was early, about seven years too early. Right now, this is the topic. You know, I started the debate about digital ethics in the book, when people were saying, what the hell is digital ethics? That sounds like sand in the gearbox. And today it's like, yeah, that's it. That's the word, right? Anyway, it's still a very good book to read. And then, by far my favorite book: The Ministry for the Future by Kim Stanley Robinson. He's a brilliant writer. The book reads like it's about today; it's about 2030. It takes place here in Zurich, actually, 500 feet from here, by accident, I don't know. It's a brilliant book, and it tells you everything you need to know about how to solve the problem, because the book ends on a positive note, which is another good thing about it. It ends on things being solved, right? It doesn't end like Don't Look Up. So that's a great book, and I have a long book list on my website. Christiana Figueres, the climate change leader, The Future We Choose, also a great book. Peter Diamandis, Abundance; it's very techno-centric, but a good read. I also still like Ray Kurzweil's book about the coming singularity, an old book, but a real classic. Alvin Toffler, of course. And best of all the classics, Buckminster Fuller, Operating Manual for Spaceship Earth. I mean, a lot of these books make you think, you know, they wrote that stuff 50 years ago. Pretty mind-boggling.