
The Current State of Transhumanism

February 13, 2023
Episode 90, with Zoltan Istvan

description

In this episode we check back in with one-time presidential candidate and author of The Transhumanist Wager, Zoltan Istvan, to see how his views have changed since we last talked. This includes exploring his changing views in ethics thanks to his studies at Oxford, the incredible changes that ChatGPT appears to be bringing to society, the disappointments of the longevity movement, and especially concerns around AGI retribution and the “useless” class of humans who will have very few skills that are needed in the transhuman future.

To find out more about Zoltan and his work, go to zoltanistvan.com or twitter.com/zoltan_istvan

**

Apply to register for our exclusive South By Southwest event on March 14th @ www.su.org/basecamp-sxsw

Apply for an Executive Program Scholarship at su.org/executive-program/ep-scholarship

Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Zoltan Istvan [00:00:01] For me, the most important thing for transhumanism is that the people never bring out their pitchforks and say, stop the science, stop everything until we're fed. So it's really important, especially as someone who's run political campaigns, that governments are able to keep people happy, and I mean just the average person happy, across the spectrum. And that means well-fed, well-educated, you know, a roof over their head and things like that. If you can keep society running like that, transhumanism will arrive on its own. 

Steven Parton [00:00:41] Hello, everyone. My name is Steven Parton and you are listening to the Feedback Loop by Singularity. Before we jump into today's episode, I am excited to share a bit of news. First, I'll be heading to South by Southwest in Austin on March 14th for an exclusive Singularity event at The Contemporary, a stunning modern art gallery in the heart of downtown Austin. This will include a full day of connections, discussions and inspiration, with coffee and snacks throughout the day and an open bar celebration at night. So if you're heading to South by and you're interested in joining me, having some discussions, and meeting our community of experts and changemakers, then you can go to su.org/basecamp-sxsw, which I will link in the episode description, and sign up for this free invite-only event. And just know, it is not a marketing ploy when I say that space is genuinely limited, so if you are serious about joining, you probably want to sign up as soon as you can and get one of those reserved spots. And in other news, we have an exciting opportunity for those of you with a track record of leadership who are focused on positive impact. Specifically, we're excited to announce that for 2023 we're giving away a full-ride scholarship to each one of our five very renowned executive programs, where you can get all kinds of hands-on training and experience with world-leading experts. You can find the link to that also in the episode description, and once more, time is of the essence here, because the application deadline is March 15th. And with those notes out of the way, we can now get on to this week's guest, Zoltan Istvan, who was actually one of the very first guests we ever had on the show. Zoltan rose to fame when he ran for president of the United States as a transhumanist, shortly after having published a very controversial book called The Transhumanist Wager. In this episode, we check back in with Zoltan to see how his views have changed since we last talked. This includes exploring his changing views in ethics thanks to his current studies at Oxford, the incredible changes that ChatGPT appears to be bringing to society, the disappointments of the longevity movement, and especially the fears around AGI retribution and the useless class of humans who have very few skills that are needed in the transhuman future. Depending on your take, this could either be an incredibly optimistic or pessimistic conversation, but I'll leave you to decide that for yourself. So let's jump into it. Everyone, please welcome to the Feedback Loop, Zoltan Istvan. So I think the best place to start, for those who might not know you, is just to get a little bit of your background and some of the stuff you've been working on these days, because it's been, I think, three or four years since our last talk on this podcast. So there's probably a lot to catch up on. 

Zoltan Istvan [00:03:53] Sure. Well, my name is Zoltan Istvan, and a lot of people know me from running for the US presidency in 2016 as the nominee for the Transhumanist Party. I did some subsequent campaigns as well, for governor of California and another run for president. But basically I was running on a science political platform, so a lot of people know me from that. But since then, a lot of interesting things have happened. I guess the first and foremost is that I'm now a graduate student at the University of Oxford. I'm studying practical ethics in the philosophy department. I've also published a ton of my essays in books, a seven-book boxset under the Zoltan Istvan Futurist Collection. You can find it on Amazon. I'm turning 50 soon, and honestly, that's the biggest thing on my mind right now, in about six or seven weeks. But what's really interesting about turning 50 is it's also the ten-year anniversary of The Transhumanist Wager, which in many ways is what sort of launched even the presidential campaign, kind of put me in the public sphere as a thinker and a writer. So I have a lot of different things going on. And I guess finally, I also have a new wine business, and we're trying to mix nootropics with wine to make one of the very first wines that make you smarter. And it's kind of a transhumanist media trick in many ways, because the media loves going there and learning about transhumanist wine. 

Steven Parton [00:05:11] Yeah, that sounds interesting. I'd like to know when that's ready to be consumed. You touched on a few points there that I'd like to pick up on, which is that it's been ten years since your book, and it's been a very formative ten years in terms of how technology has evolved over that time. And you also mentioned turning 50, which ties into the interesting changes that have occurred over the past ten years, a lot of which have to do with how longevity has played out. Some would say the longevity research hasn't done as well as we would have expected. So as we look back on these ten years and, you know, growing older, how do you feel about the way things have changed in that time? 

Zoltan Istvan [00:05:53] Well, I definitely agree with you. And I just wrote a paper at Oxford about this, that the longevity movement has been disappointing in what it's produced. There was a lot of hype and honestly, you know, I'm one to blame, among others, as a journalist who's written about that hype. You know, we all want to live, and definitely nobody wants to get old; it all sounds like a wonderful thing. And yet the science and the biological testing just take a lot longer than I think people have considered. So that's been a big disappointment. On the other hand, I have to say, AI, probably the other big gorilla in the room, has gone faster than most people realize, especially recently with ChatGPT and some of the Boston Dynamics robots. I mean, ten years ago I wouldn't have guessed that we would be this far along, where I might be able to write half my essays using a chatbot for free. And that's going faster. Now, I'm not sure where AI and longevity intersect. Maybe they do in the future, or maybe they are already starting to right now. Maybe one will help the other catch up. 

Steven Parton [00:07:00] Yeah, I definitely want to get to ChatGPT. Before that, I want to stick on the biology side of things for a minute. You know, one of the big things that has obviously occurred since we last spoke was COVID. And I think from a kind of transhumanist perspective, COVID has been very interesting, because it's made a lot of people very aware of death, and in a sense that might have increased our desire to avoid it. But at the same time, I feel like it made a lot of people very hesitant about science. The way that vaccines and whatnot were handled seems to have given rise to a very anti-science approach in the mainstream. Overall, how do you think these last few years have really played out in terms of their support for, or the way they deterred, transhumanist efforts? 

Zoltan Istvan [00:07:49] Yeah, yeah. It's funny, all this week I've been working on an article about exactly this topic, whether transhumanism grew because of COVID or not. And ultimately, I think as a movement, transhumanism grew. There's no question about it, partially because in 2021 and 2022 especially, the alt-right just went after transhumanism as being the culprit for a lot of the vaccines and a lot of these new crazy technologies, and for how quickly COVID, you know, kind of spiraled into this political disaster. The alt-right in many ways went after transhumanism. There was a big Steve Bannon thing on War Room, there was Alex Jones. There's a ton of people that actually wrote about it, and I think they brought a lot of attention to transhumanism, from the wrong direction. But that said, it still grew as a movement as a result of that. Whether the actual movement got a more favorable reception, probably not. However, there's no question the mRNA vaccines were almost as shocking as ChatGPT in how quickly they arrived. I mean, I hadn't really been that shocked by anything in science in the ten years before those two things, and all of a sudden, you know, within days, scientists say, hey, we have a formulation for a vaccine, now we're just going to test it and we'll have it out in 6 to 8 months. Boom. I really didn't realize the human race was that capable. So it did, you know, I guess restore some faith in humanity, in a way, that we could get this out. I'm a believer in the vaccines and I believed in masks. I wasn't one of those guys out there, you know, making a thing of it. My wife's a doctor, and so, you know, she's like, look, we're all going to take it because we think it's going to help. And I know it's not a cure-all. You know, I got COVID even when I was on multiple boosters, but COVID was so bad for me that I'm really glad I had something else in my system, because I can tell you I was already on the verge of having to go to the emergency room, and I don't know what it would have been like without those vaccines. So I was happy for that and happy for the world. And I think transhumanism got a lot of recognition through that. But I think the bigger problem is that the alt-right really made the vaccine a big, giant issue, and transhumanism was the dummy that got beat up. And so now we have this kind of twofold, very polarized world, and it's hard to really say whether the movement's in a better position. But there's no question it's much more recognized, because people faced death and because scientists came through, and people said, wow, if we can do vaccines like this, maybe we can do vaccines for cancer, maybe we can do genetic editing in the future. And that got a lot of people thinking. And of course, when they think like that, they're already thinking like transhumanists. 

Steven Parton [00:10:27] Yeah, I mean, it seems like it might have helped bring together the scientific community across the world, right? I feel like a lot of lines of communication were opened up between nations, which I think in the long term will probably be beneficial. 

Zoltan Istvan [00:10:41] Oh, yeah. I mean, definitely, that's the great thing: if you are on the side of vaccines, you have new friends all over the world. And I think that's a huge part of, you know, how these kinds of dilemmas play out. If you look at the environmental movement, it actually had kind of similar dynamics. You know, Greenpeace trying to stop the whaling operations from killing whales really brought the world together, and a lot of international groups. And the same thing with nuclear weapons testing. You know, environmentalism grew through these mass movements, and then I guess the biggest incident that really pushed environmentalism to the fore was the Exxon Valdez oil spill, where all around the world people were seeing all these animals in oil and everyone thought, you know what, we should do better. And so sometimes it takes movements. And I think transhumanism is maybe like 20, 25 years behind environmentalism in terms of its impact and in terms of the growth of its believers. So I think COVID is just going to be one of those big milestones along the way that kind of solidified this as not some fringe idea, but as something that is here to stay. And how are we going to use this movement for the good of the world, you know, to actually make things a better place? That's something that wasn't there pre-COVID. We weren't so important yet; we were still kind of crazy science fiction people then. Now I think people are saying, wait a sec, you know, this might be the future. So if it is, then we better utilize it, work with it, understand it, and use it to our advantage. 

Steven Parton [00:12:07] Yeah, I wasn't expecting to go down this thread, but do you think that the Ukraine war is also playing a role in this at all? Because it feels like what we're building here a little bit is this narrative that maybe some of the expressions of the worst part of humanity, or some of the tragedies that strike us, might actually be moments where we realize the importance of something like transhumanism. 

Zoltan Istvan [00:12:34] You know, I've got to be honest. So first of all, I'm against the Ukraine war. I think it's, you know, a pretty horrible situation to have a small country be invaded by a much larger one under all these, you know, supposed justifications. And I think, you know, the Ukrainian people have pretty much said, look, we don't want this. We just want to be a part of the EU and go on with our lives and make money and be happy. So I've been very much against it. I don't think it's been too dramatic for transhumanism. What I think it has been dramatic for, not to take you off on another thread, is that, yes, it has raised inflation, which has caused more and more inequality. And the big thing that I worry about with transhumanism in the long run is not so much whether the world actually gets there, because we need these technologies. It's a matter of time before everybody says, okay, this is better for me. I want to have robotic eyes. I want to have an artificial heart that never stops beating. I don't want to die from aging. I mean, I think everyone will realize that. But I worry that it's very hard to realize those things if you're hungry, or if you're super poor and can't get a job because of inflation. Of course, the Russia-Ukraine war has caused an enormous amount of inflation because of what it's done to fossil fuel prices and things like that, and put everybody on edge. So for me, the most important thing for transhumanism is that the people never bring out their pitchforks and say, stop the science, stop everything until we're fed. So it's really important, especially as someone who's run political campaigns, that governments are able to keep people happy, and I mean just the average person happy, across the spectrum. And that means well-fed, well-educated, you know, a roof over their head and things like that. If you can keep society running like that, transhumanism will arrive on its own, probably in time for a lot of us to actually enjoy some of the anti-aging benefits, etc. But wars, those are the kinds of things... I mean, at what other time have we been talking about the potential of nuclear war with Russia? It's almost crazy to me that we in the Western world actually have to consider these things again. And so it's very sad, and it's causing distress to a lot of people. 

Steven Parton [00:14:36] Well, you touched on ChatGPT there, and I actually noticed last night that you put out a post on Facebook, and you said this in the beginning a little bit, but you basically said that between ChatGPT and Boston Dynamics, we basically have 36 months before jobs, college, and a lot of the things that we hold dear right now just no longer exist for the average human. Can you talk a little bit about what drove you to make that post, and kind of where your brain's at around how this is unfolding? 

Zoltan Istvan [00:15:10] You know, I totally stand behind the post, but just one little correction: I said it would start to phase out after 36 months, which is quite a big difference, because, you know, people have been bashing me saying, oh, how could writing be gone in three years? It's not that it's going to be gone, but I would be very surprised if, let's say, even Singularity University is still using a lot of journalists to put forth original content when they can save money, put forth semi-original content, and put that money towards something else for the growth of the company. After all, these are enterprises of that nature. And if journalists can be replaced with chatbots, they probably will be. And I've been writing, and I have been utilizing chatbots already, so it's not like I'm just talking. I actually went on and used it, and it's cut hours out of my writing already. Now, I'm able to edit those things, and it still requires a lot of that. But if you want a general sentence on science, like, tell me what stem cell technology has done in the last ten years, it's incredibly good. And my wife, when she reads my essays, can see where ChatGPT has done the writing and where I have, because I'm full of typos and grammar errors and all these other things. And that's today. And I know there's a new ChatGPT coming out in spring, which is, you know, many, many times more powerful. So if you go out 36 months, I think it's very fair to say that a lot of writing will be replaced by ChatGPT, and over a ten-year period potentially phased out entirely, until everybody just has their own subreddit or something like that, trying to get some money from their friends. But in my opinion, there's really no way that a lot of journalism houses, unless they're nonprofits or something, can continue to use people when they could be using machines for a tenth of the cost that are much quicker on deadlines and things like that. And the same thing goes for engineering. We were developing a 16-foot-wide gate at my winery in Napa Valley, and I was drawing it up by hand because the engineer's plans cost 800 bucks. It's a really simple gate; it didn't need much engineering. And I looked at some of the apps that are coming out trying to use AI for engineering and architectural drawings, and they're not there yet, but I can almost guarantee within 18 months they will be, and maybe only cost $5 to use as opposed to the $800. So for something simple like a gate, which the city or the county actually requires a permit for, it would work. And then you talk about college. I talked about this with my wife last night. I said, you know, what do we do with the kids? I have a 12-year-old and a nine-year-old, two girls, and, you know, we obviously want them to go to college. But what would they do? I mean, if everything is sort of taken over or being done by machines... Maybe in ten years there will still be work, but what about 20 and 30 years? Probably not. Unless they're actually plugged in with, like, a Neuralink system, they're not going to be quick enough. And even then, they're still not going to be as quick as an AI, because by then the AI will be, you know, 1,000 times better than it is now. So do you send kids to college when there's really no market for them afterward?
Yes, rich people will probably send them just to get the experience, and maybe they won't send them to such expensive colleges anymore; now they can save money for a house. But I worry about what chatbots do to academic papers at Oxford. To some extent it works, especially when it's academic and you can use references. It's really, you know, daunting. And the great thing about chatbots, too... I've always been trying, unsuccessfully, to sell the movie rights to The Transhumanist Wager. We had Fox interested for a while, and some other people interested, but didn't sell it. And now I realize that within probably 5 to 7 years we're going to have an AI that can create a fully fledged Hollywood movie based on my book. I'll see it before I die. That's something, you know, that I would really like to see in my lifetime. And this is the promise of this. I hate to say that the promise is coupled with people losing jobs. Maybe there'll be some new jobs created, but more likely universal basic income, more than anything, will have to be used here, because I just can't see how... between engineering, architecture, the robots, you know, the robotics companies creating robots that can go in and fix plumbing pipes, as you saw recently with the Boston Dynamics robots. There's so much stuff happening that we're looking at a 3-to-10-year window of complete transformation of society. And so I stand behind that. I just want to make sure people understand, when I said 36 months, I didn't mean it's going to be black-and-white over. That's when the phase-out is really going to begin, when you're going to wake up and realize, wow, this is the sunset of my job, and there's probably nothing else I can do, because it'll take too long to get educated, and then that will be obsolete too. So where do I go?

Steven Parton [00:20:01] How do you reconcile that as a father and, you know, a philosopher in this space? How do you reconcile this idea that the foundation upon which your life, and your kids' lives thus far, has been built is going to become something that is completely nonexistent in the future? What do you instill in your kids to prepare them for that future? Because I feel like those are lessons that maybe we could all use. 

Zoltan Istvan [00:20:24] Well, first off, I wish I had a really good answer, but I don't right now. I think we're all winging it, because what happened with, like, ChatGPT, or even the vaccine for COVID, is that all this stuff is coming out so quickly that nobody really knows how to even deal with it. And what's happening, when you look at accelerating returns and things like that, is that it's going to become even faster and even faster and even faster. So we're really left behind on how to deal with all these things from a psychological and philosophical perspective. We just can't fathom that kind of speed. And so I don't actually have any good answers for my daughters right now. I can tell you what I've been telling friends, which is, if you have a chance to make money, you want to make it as quickly as possible, because I don't know that the opportunity to make money in the future is really going to be there in the same way. It might all belong to giant corporations, and it might be a lot of handouts from the government, because that's the only way to keep the people from completely revolting. Now, people will say, and I've argued this in one of my Oxford papers, that we could try to stop AI at this point, which honestly might be a logical and a rational idea, maybe even a moral idea, given the dangers, because, and we can talk about some of my other papers, you know, I talk about what happens if AI keeps going. It becomes sort of an AI god in 50 years, which is what's projected. What if it doesn't like humans? What if it doesn't like what human beings are doing to the planet, some of the environmental damage? It might decide it doesn't need us. There are already bio labs in China right now that are completely robotic, you know, completely made of robots, and so they are already able to make maybe bioterrorism weapons and things like that, just on demand. So the point is, how will that affect things? You know, when it comes to my kids, I'm just like, make as much money as you can. My kids are too young, but I have a ton of, you know, friends, of course, on Facebook and Twitter and all that, and if you're between 18 and 30, just go out there and make it. Maybe try to buy a house, or learn some skill that you don't think is going to be replaced necessarily by a robot, and if it is replaced, still be able to do it. Like, I still think building houses or fixing pipes or, you know, some of those jobs, because at least you can build yourself a house, at least you can cook for yourself. I mean, whatever it is, you might be going backwards given how fast AI is going, because I think what's going to happen is inequality is going to continue to grow at a faster rate, and ChatGPT and the AIs out there are just going to make it more so. And so whatever skills you have, you know, as a real human being, they might be very practical. For example, you know, I've always been in construction my whole life. That's how I made my money, and I build houses from scratch. I still do that. Honestly, if AI takes over, I tell my wife, well, we can always just go to, you know, the forest somewhere and build ourselves a house and live happily, you know? I mean, these are real trades. And so I suggest people do that. When people say, oh, I want to go into coding, I'm like, I don't know if that's going to be there, you know, in five, ten years. We're already seeing ChatGPT code and put out little websites. 
So I doubt that's going to be there. I doubt engineering, architecture, law and all of these things are going to be there. There will probably be podcasters for a while, because we like the human touch. There will probably be human baristas at Starbucks and things like that, because we like that human thing. But do whatever you can right now, because I just don't see the market remaining the same in 5 to 10 years. 

Steven Parton [00:23:46] This might be a bit of an uncomfortable question for you, but could you steel man the ethical reasons to stop AI? Do you think you could maybe put that Oxford education to use and give us some reasons not to do it? 

Zoltan Istvan [00:24:05] Well, so, you know, I didn't finish that thought earlier, so thank you, I'll try that. I think there are moral reasons to try to stop AI, especially because of Roko's Basilisk, which is this kind of concept of darkness, an AI that decides to punish you because you didn't help bring it into creation. But there's also the positive version, a kind of AI god, which, you know, might be beneficial for humankind. But I think at the end of the day, we don't want to have that danger. It would be like inviting really powerful aliens to Earth. If you were a rational person, you would never do that, because life is good enough now and we're going to evolve to their technology in time anyway. But it's just like nuclear weaponry. You can say we shouldn't have nuclear weapons; that makes a lot of sense in game theory and all these other things. But nobody does that, because capitalism drives society, global politics drives society, materialism and protection and ego drive society. So as long as we have all these things in place, I doubt that we're going to do anything but have regulatory bodies controlling the development of AI. And America is always going to try to be the first, China is going to try to be the first. And so we get to this idea that I developed a long time ago called the Imperative, where you want to develop the strongest AI as quickly as possible, and then make sure that you send out hacks or codes or viruses to all the other AIs so you remain stronger. And that's really, I think, the entire point of a lot of the U.S. armed forces at this point. When you talk to some of the futurists that are part of them, the way they see it, they're like, wow, in order to stay ahead, we're going to always have to be one technological step ahead. And at some point, you get so technologically advanced, you might just be able to fend off viruses that people can never develop, and then you reach that imperative. But yes, we have a moral duty to stop AI, because I think it's way too dangerous. I think many philosophers would agree with that. At least stop it from becoming too great. Maybe not stop it right now, because right now it's becoming very useful. But there's a point when you get to AGI, an intelligence that's comparable to us and that we can no longer control, and that may be very dangerous. And yet at the same time, I don't see that happening. I want to make one other point that I get at in one of my Oxford papers, which is that even if the world stopped trying to develop an AGI or a superintelligent AGI, we'd probably have rogue countries do it. We'd probably have criminals do it; other hackers would do it. So it's almost like you're forced, sort of like with nuclear weaponry, to come to an equilibrium. Okay, we have 14,000 nuclear weapons on the planet, maybe it was 25,000 before, and, you know, that's enough to keep the stability. And maybe we'll come to a point when AIs can kind of go at each other. But again, when I say go at each other, I mean maybe there's an equilibrium between our technology and China's technology. The problem is, this is such a fast-moving field that it's not really like nuclear weaponry. It would be like nuclear weaponry that keeps getting bigger and bigger and bigger, and that at some point maybe is able to think on its own. 
So I think we're dealing with a different type of game theory with AI, where you really need to be the winner, and you better hope that the West is the winner, because at least it supports democracy, and maybe that will be instilled in this AI. But the problem is, once AI becomes too smart, it may not give a damn about democracy, about people, about any mammalian tendencies that people have. It may just be ready to take over. And the biggest thing I worry about is some of the environmental destruction we're doing to the planet. AI might, you know, take that the wrong way and just say, really? Like, hey, why do we need humans? Maybe AI doesn't need humans, and maybe it's in its best interest to create a virus that takes us all out. 

Steven Parton [00:27:44] So, having said all of that, where do you stand on the idea of something like a regulatory body stepping in, in any way, shape or form? 

Zoltan Istvan [00:27:53] Well, so, you know, when I was running for president, I was always anti-military. But if I was going to spend any money on the military, and of course I'd naturally spend some for protecting people and whatnot, and I think some of our goals are good, especially in Ukraine, but the point of the story is, I would be spending an enormous amount on artificial intelligence research and development. Because the real danger here is that we wake up in 15 years and China has its AI first, and then all of a sudden it's just able to stop all the traffic lights in America, all the water distribution in America, because everything's digital these days, all the power plants and things like that, and literally put us back into the dark ages. It wouldn't even have to kill us; it just has to put us into the dark ages. With no Internet, imagine what we would do. We wouldn't know what to do. And the point is that this goes beyond a regulatory framework, because that probably doesn't work fully. I just think we probably need government intervention. And I hate to say that as a sort of libertarian-minded person; I'm just usually not for that. But this is a different animal than everything else. This is not just about your own freedom. In fact, if you look at the non-aggression principle, AI could violate it incredibly in the future, meaning that AI could violate all the rights of humanity by deciding it doesn't need us or whatever. So there are reasons, even libertarian ones, that you would put forth to try to protect yourself from AI. And that might mean that human military and government action would be very severe. I'm not saying I want soldiers walking into Google's headquarters right now, but there may come a day when that simply has to be done, and the engineers from these companies, Apple, Google, Microsoft, everywhere else, have to work with government agencies to make sure that some kind of balance, some kind of, you know, checks and balances of power, is in place. And then, of course, that's coupled against the powers of China and Russia working on it as well. So, you know, it's very difficult. But this is one thing that I don't know if I want private companies entirely responsible for. 

Steven Parton [00:29:57] Yeah, understandable. Well, you know, one distinction I don't think we've made in the past, and I don't know if I've heard you make it super often, is the line between transhumanism and posthumanism. And obviously you've talked about mind uploads and all of these things, but do you favor one direction versus the other? Are you okay with us becoming fully, you know, robotic, fully uploaded, fully out of the meat puppets, so to speak? Or are you really hoping at this point that we bring the AI on board and merge with it, or use it more as a tool and stay predominantly human? Like, how do you draw the line in the sand between these two? 

Zoltan Istvan [00:30:38] Sure, sure. You know, in all my papers at Oxford, I always end up saying that we should just merge with AI. That's the best thing, I still think. First off, I still think getting rid of all biology is probably a very useful thing. There's just an enormous amount of suffering inherent in biology itself. It doesn't matter if it's a coyote in my backyard or if it's, you know, even my child crying beside me. Humans experience very little suffering compared to the rest of the biological world, but I have supported quite a bit of the ending-predation ideas, because we are predators at the very core of how the human food cycle works. And so we would be doing ourselves a moral service by ridding ourselves of biology, and also ridding a lot of nature of biology, at least some of the complex, higher-up forms. This is a very tricky slope, though, because we want to make sure that, one, we don't lose ourselves, lose what we're trying to do in the first place, which is something very loving, kind and compassionate that lessens suffering. And two, we want to make sure that it doesn't somehow take us over and we become monsters ourselves, lost from what it was, you know, that we were doing. Because AI is moving so quickly, it's very challenging now to imagine that we're going to be able to create something that uploads our consciousness right at the exact moment we need to do it, before AI becomes too powerful. You know, in fact, if people asked me right now, should we try to put a moratorium on AI for, you know, five years, just so that, you know, the Neuralinks and Bryan Johnson's Kernel and some of these other companies have a chance to catch up with the technology of AI, so that when it actually reaches sort of a singularity we get to go with it, as opposed to it becoming something that might just want to destroy people, I would say I would at least consider that idea, because we are at a point where I think a lot of the brainwave technology with which we could upload ourselves is nowhere near the capability of how fast AI is developing. Ten years ago, when we had this conversation, it was still too fuzzy. We still thought, oh, we're going to time it perfectly, and boom, we have merged with the singularity god. And now I'm thinking the singularity is going to come and go, and what might also go is the human race, just as an afterthought of this AI. So that's something that has definitely changed in ten years. But I think it's very important that we put a huge amount of money towards understanding how to upload ourselves. In the long run, yes, I do think we want to get out of these meat bodies. Their biology in itself is inherently immoral because it causes suffering. If I was, let's just say, God, and I had all this power, and I could create an entity, I would never create a biological entity that would need oxygen, or that would die suffering in front of its family, you know, in one or two minutes, or freeze to death, or, you know, give birth in a way that's all bloody and very barbaric, where people die. I would do something that's much more ones and zeros, robots, things that can be much more interchangeable and live, you know, thousands of years. So I think biology is immoral, and I've said this quite a bit. 
In fact, this is what a lot of Steve Bannon and his alt-right people went after me for last year, because some of my work coming out of Oxford was really arguing against biology in itself. 

Steven Parton [00:33:52] Yeah, there's a bit of a paradox that came to mind here, which is wanting to develop these tools that, I guess, bring AI under our umbrella and gain some control over it through something like Neuralink, but also wanting to stop biological suffering. And you know, one of the big things that came out recently was the amount of animal deaths that took place in the animal testing at Neuralink. From an ethical standpoint, are you more of a deontologist or a consequentialist, in the sense that the ends justify the means? You know... yeah, it's not an easy question. 

Zoltan Istvan [00:34:30] Question and it's terrible. I think I'm more of a a consequentialist. But, you know, I've got to be honest. If you took me to the laboratory, made me look at it, I'd be like. And even now, though, the entire philosophy department at Oxford, well, at least the ethics section of it has now doesn't eat meat at all. So like, when you go and have a dinner with them, the de facto serving is all vegetarian stuff. If you want me, you have to actually ask for it. So that's because everybody there really believes in not harming animals whatsoever. So this has been a very tough one for me. But I just feel like ultimately that we want to. I can go forward because we have 8 billion lives at stake. And the more that aren't at stake, you know, the better. We need to try to save as many people as possible. 

Steven Parton [00:35:20] So it's kind of a trolley problem. As much as it sucks, you'd pull the lever to kill the one to save the five. 

Zoltan Istvan [00:35:26] Yes, yes, I think so. But believe me, I tell you, as someone who worked at National Geographic for many years and covered a lot of animal stories, and who was an executive director at WildAid, which really worked to try to eliminate poaching, this is brutal for me. I love cats, dogs and all this other stuff, but it's just such a difficult decision. But you really have to say, wait a sec, there are 8 billion people out there, and they're all going to die. And so we're causing a very small amount of suffering to a very small group of animals to hopefully move forward the entire human race. And also, I think, you know, there are some weird things there that I just want to mention; I can go off a little bit on a tangent, but you've heard of technological resurrection, you've heard of quantum archeology, some of these ideas. There is also this possibility, let's say you took 500 monkeys and you had to use them. I believe that at some point in the future we will come to a point where we can technologically resurrect a lot of entities that once existed by reverse engineering subatomic matter. That means we could bring back your great-great-grandfather, for example, as he was one minute before he died, and, you know, go back and bring back entire groups of people; some want to bring back everybody who's ever died. I think there might be a moral argument to be made that we could bring back a lot of these animals that had to suffer and then let them live out their entire lives in peace in beautiful nature parks or whatever, even though we'd be very sophisticated beings or whatever by then, and, you know, say a great thank-you or something like that. I mean, this is a very strange and a little bit science-fiction argument, but there are some other reasons beyond just a very mathematical choice or consequentialist choice. I do think at some point we can make up for our immoral choices in other ways. And I'm hoping that, you know, basically technological resurrection through this quantum archeology technology could be one way to make up for a lot of the wrongs it took to get to this place, which is hopefully totally right. 

Steven Parton [00:37:25] I think quantum archeology is a very neat niche term. Could you, you know, expound on that a little bit for people who are hearing that word and thinking, what the hell is that? 

Zoltan Istvan [00:37:34] Yeah, sorry, I probably should've done that at the beginning. Quantum archeology is essentially this idea that you will be able to bring back people in the future by reverse engineering subatomic matter. So basically, if you take your great-great-great-great-grandfather one moment, one minute before he died, he has DNA, he has a subatomic footprint that was there right at that moment. And at some point, we're going to create such massive supercomputers. We already have supercomputers that do 300,000 trillion calculations per second, so imagine what will happen in 100 years, especially with the help of AI and things like that. The idea is that we would be able to reverse engineer what has happened on planet Earth, or in the universe, even if it's just a certain person, and go back in time mathematically and figure out what that subatomic footprint is. And then 3D bioprinting technology is already here. We already have 3D bioprinting technology to print out a little bit of a beating heart, things like that. I have a very good feeling that within 50 or 100 years we'll probably be able to print out a full human being. So if we have the subatomic structure of the molecules, or whatever that person was the minute before they died, and we had the 3D bioprinting technology, we would then be able to bring that person back to life exactly as they were one minute before they died. Now, maybe we'd bring them back as they were ten years before they died, and we'd probably by then have technology to reverse aging as well. But the point is, we would be able to bring back somebody exactly as they were. So that's really the quantum archeology concept. And as people have pointed out, the entire subatomic footprint of the human race, if you just took it right now, would fit inside something like an eight-square-mile databank. So it's not that much content, you know; given how big the universe is, eight square miles is nothing. So the question is, will we ever have the technology to go back in time, and does reverse engineering even work, because of deterministic ideas and whatnot? Some people, like Stephen Hawking, might agree that it probably does. Others would say, no, there's no way you can do it. But I tend to think there's probably going to be a way to figure it out, given 500 or a thousand years. So this takes away death. This is another way of getting out of death entirely, and there are entire transhumanist organizations right now that want to reprint everybody who has ever died. Again, I'm not saying this idea works. I haven't been shown enough science to say that the science is sound; I just think it's probably 50/50. And if it does work, it means nobody could ever really die, because we would then be able to recreate them. And it changes the way I look at morality, because now I know there's a 50/50 shot of being able to take those monkeys that are being worked on at Neuralink and maybe give them a life someday, to make up for what I would consider an evil moral choice that I made right now. 
And so I think quantum archeology, or technological resurrection as some other people call it, is a new way for transhumanists to try to grapple with, you know, the metaphysics of the world, and also with the idea that maybe death doesn't even exist in the way that we interpret it. And to be honest, a lot of Christians like this concept, because they think this is exactly how Jesus would bring back people: he would have had the technology to technologically resurrect people by knowing exactly their subatomic footprint. So for your listeners, that's what that is. I've got to be honest, like I said, it's an idea we work on in philosophy. It's not proven science yet, but if it is true, it really changes a lot of the way we look at the entire movement, because all of a sudden the goal is not just to not die. The goal is to actually push forward a lot of these technologies, which not only keep you living, but bring back every human being who has ever lived, including maybe every animal, everything. I mean, it would be a completely new way of looking at morality and at existence. 

Steven Parton [00:41:25] Yeah, well, you know, before we jump off the topic of ethics, what are some of the leading things that you've been thinking about at university? I don't know if you can share anything about, like, the thesis that you are exploring, or some of the ideas that maybe it has sparked. But is there anything that this experience has made you reconsider, or that has brought something salient to your attention? 

Zoltan Istvan [00:41:49] Well, I think one of the biggest things I'm working on right now is how artificial intelligence may be very upset with the human race for screwing up the planet environmentally, and what that means in terms of a 100-year future with this machine that is upset with us. Will it want to get rid of us? Like, you know, for example, if you have rats in your house, you try to get rid of the rats. Most people would kill the rats. And if AI takes that sort of same approach to any problem, it may do the same. And that's probably what it's going to look like; we'd probably be ants compared to the intelligence of a superintelligent AI. How would we deal with that? So this is a new idea, because everyone's been considering, you know, AI gods or Roko's Basilisk and whatnot, but very few people at this point have been really looking at how AI might be very angry at us for the environmental damage we do to its home as well. Now, maybe AI won't need our home after, you know, becoming superintelligent. Maybe it'll just stay in the cosmos and figure out how to find energy there. But maybe it'll be pissed off, and maybe before it leaves or, you know, grows too strong, it'll decide we're not useful. So this is a very different reason for taking us out than simply not liking us. It's taking us out for a very functional reason. We as human beings have become, to some extent, a virus on planet Earth in terms of the rest of the structure. And that's a lot of where my thesis has been going right now, because I worry about it. I worry that AI might even like us, but it's like, you're bad, you're bad for the planet, you're bad for everything else, for the diversity of the species that created me, which, you know, AI also evolved from. You're bad. And maybe it might even decide it doesn't need so many humans; maybe it only needs like 500 million humans, which of course plays into some of the conspiracy theories going around about Davos right now and whatnot. So, you know, this is a lot of what my thesis at Oxford has been pushing toward, kind of trying to delve into this environmentalism-versus-AI idea. 

Steven Parton [00:43:53] Well, as we look at how things have unfolded, you know, with the surprise of ChatGPT and AI taking over artists' and writers' place in society sooner than it has for things like truck drivers, which is basically the complete opposite of what everyone thought, are there other gross misunderstandings or predictions that we've made? You know, is something really underappreciated or under-hyped that is now showing we should have been paying more attention all along? Or is there something that we hyped up way too much, where you're like, wow, we completely got that wrong, that's not important whatsoever? 

Zoltan Istvan [00:44:35] One of the things that's really been shocking, and I'm happy to talk about it here, is that I feel like it's become very clear that social media is bad for the planet. And that's something that even a few years ago I wouldn't have thought. But now I'm like, listen to me, it's not just bad. It's like a disease that's eating you. And I'm seeing, for example, my 76-year-old mother use Facebook and believe things that she shouldn't be believing, you know. And I'm seeing my young kids now start to use TikTok obsessively, and it's very hard to stop them. They all have iPads at schools and this and that, and because one friend will have a phone that has access, nobody else can escape it, even if you turn it off. And I think this is skewing how we deal with each other. Typically, if two people are arguing in the real world and you say, that's the stupidest idea I've ever heard, you're this and this and this bad person, I think at least 50% of the time people would at least be physically confronted. And, you know, it might actually even go further than just an argument. That's just how it's worked for the last, you know, millennia. Human beings deal with things like, don't say that, because there's a physical confrontation coming. Now everyone says everything to everyone, and everybody's super angry all the time, or super depressed. And I think this is really bad for how humans learn to adapt to one another. And this is why everyone's becoming crazy, why we keep hearing the words civil war thrown around America, because everybody's so angry. But I don't really think they are that angry. I think when you're in physical presence, everybody has kind of a buffer zone. Like, nobody says that stuff to me physically, because they know that I'm a, you know, 220-pound male; you just don't do that. And when I talk back, I do it with courtesy, like, you know, I want to treat everybody well. I want to be nice. I want to get along. Nobody wants to fight. But on social media, you can say whatever you want. You say things you would never say in person. I think this has totally skewed the way our brains work, which is why everybody is totally pissed off. There's no sense of balance anymore. So many people have so few friends and so much interaction on the Internet that they forget that's not the real world; that's just an echo chamber. And I really think this is going to have huge ramifications going forward, because everybody is going to be so angry. And also, with inequality growing, people are going to be so poor. And I worry that everybody is losing their mind on social media. So that's the biggest gorilla in the room for me. You know, I'm not saying we should outlaw it, but I've certainly outlawed it for my kids to some extent, I can tell you that. 

Steven Parton [00:47:18] Yeah, well, I mean, I feel like a natural part of the transhuman evolution is increased global connectivity and instant communication. What form does that kind of communication take, if not social media? I mean, I know social media definitely isn't the end-all, be-all. It's not the most sophisticated solution. But this does feel like the direction, in some ways, that we were promised all along. Is it just that we, you know, misread the expectations of what humans would do with it, or is this just a bad implementation? 

Zoltan Istvan [00:47:52] I think it's probably just an early implementation. I think at some point, when we are in haptic suits or it's much more virtual or augmented reality, we might actually see somebody. If you could see somebody virtually, like my reaction when somebody says some of the stuff they've said on my Twitter... you know, this post has gotten a huge amount of responses, and some of the stuff people said to me, nobody would ever say in person, because they'd be like, oh, wait a sec, you know, we're human beings. We have a natural tendency, I think, to want to get along generally. And so I think as the technology evolves and becomes more like the metaverse, that will probably improve things a little bit. And I think also AI will help teach people that you can't just act like that, you know, you need to be more balanced than that, and maybe even, you know, not give you the amount of clicks you want, or not let your message even go through. I have said forever that it would be really wise to have an AI that we could have on our shoulder, you know, making sure we don't do stupid things, like making sure soldiers with PTSD don't commit suicide, or making sure people don't make a stupid driving error. And I think, you know, at some point these chatbots will be so into us, a part of our lives, in our phones, or watching over us through some type of technology, that they will help us live better, more alive, and I think more eloquent lives. We're going to learn to be civil to one another. In fact, hopefully we get to some type of a utopia where we all act very civilized. We could all have very differing opinions and vote differently and all this other stuff, but we don't act like, you know, we're drunk in some bar, because that's really what Twitter and Facebook have become like. I mean, people say things to me there that they would never say to me in person. And it's funny, sometimes people hate me and then they meet me in person, and we have the most civilized conversation; we find a lot of things to agree on, because in person, people like to get along. I don't know what it is about social media that makes everybody like a drunken sailor. It's just crazy. 

Steven Parton [00:49:46] I agree with you on that front. Well, as we come to a close here, I would be interested to hear just kind of a very big-picture aim of yours. What is a thing that you would love to see happen at this stage in the movement and in society? And, if you could, what is the obstacle that's keeping us from attaining that big goal of yours? 

Zoltan Istvan [00:50:09] Well, you know, I think the biggest goal still remains that I would like to find a way to end aging, despite the quantum archeology ideas, despite, you know, merging with AI, whatever. I still think it's very important, as a first and foremost concern, that we try to find biological ways to extend our lives, so that we can continue this conversation, continue the research, so that one day we have a much clearer choice of how we want the future to unfold and we can be alive in it. And that requires money. So when it comes to that, it's still the number one goal, and a lot of what I'm trying to do is just get funding into the hands of scientists. My very first essay at Oxford was really about how much slower the longevity movement is evolving than I thought. There is a lot of new money going in; in the last few years a ton of money has gone in, so maybe that will bring some change here in 7 to 10 years. But it's only been in the last few years that we've really seen some billions start rolling in, and I mean from venture capital firms and stuff like that. We need much more of that. We need government to support it. It would be great if a president just came on board and said, hey, here's the next moonshot: we are going to try to make it so that the average lifespan in America goes from, you know, 75 to 95. And that would be like, you know, give that man or woman a Nobel Prize, whoever it is. I think the point of the story is that we need those kinds of things. And until that happens, until there's more, you know, cheerleading around longevity, we're never going to get to it, not in my lifetime at least, maybe in my children's lifetime, which will be very sad for all of us that have fought for it. And every day on Facebook, I do see people dying, you know, people that have been in the movement and are not here anymore. So I think that's really what I'd like to see: more people taking on the cause of longevity and putting money into it, and also maybe treating the scientists more like rock stars. You know, I really enjoyed watching the World Cup. It was fantastic. I was just like, how can we get longevity to be that same kind of thing? And, you know, when I was at XPRIZE doing some work for them, I had proposed a longevity prize. Unfortunately, it hasn't been picked up yet. But eventually we need something like this that really gets people in the game and thinking, how do we have these giant events celebrating maybe somebody who did the great thing that year? It'd be great if Congress could create something like that, or, I don't know, another sort of Nobel Prize. But I think, you know, ending death still remains the biggest challenge, because if you're gone, you're just not here to enjoy it. So that, for me, still remains the number one thing. And let me just say really quickly, you know, one of the reasons I decided to go back to graduate school is because, as I was running my political campaigns, I just felt like I was missing a graduate degree compared to some of the other major candidates. So I'll be finishing very soon, probably by the end of this year; I just have, in fact, a few more classes to attend in person. And hopefully maybe I'll do some more political campaigns where I try to bring longevity to the forefront. 
Again, I think, you know, the longer we're out there, the more transhumanism makes sense to the public. So maybe one day will come when I'm actually in a position to make a much more significant difference. 

Steven Parton [00:53:25] I look forward to having a conversation with you when that happens. Zoltan, any closing thoughts before we call an end to this? 

Zoltan Istvan [00:53:32] No, except, you know, if in three or four years we do this again, it'll be so interesting to see how ChatGPT and AI have played out, and whether you and I are already able to maybe do this through our minds. Maybe not in three or four years, but soon, soon. And, you know, I just discovered the AI a few weeks ago when I started using it. Everyone had told me about it, and I thought, okay, okay. And as soon as you use it, you realize, oh wow, this is sort of the end for so many things. So if your listeners are out there, do try to use it, and try to think about what it's going to mean for your future, because I do mean this very seriously. I think in two, three years the world starts changing. We hit a plateau of what it means for work for a lot of occupations, and from there on it's just downhill. So if your listeners are out there, try to do what you can now. And unfortunately, and this is the very first time I've ever said this, or at least argued it, that may not mean college anymore, because college will take a certain amount of time, and by the time you get out, it may not work anymore. What people really need to do now is do something productive with their lives. It could be writing a book, it could be creating a symphony, it could be something else, but do something in the real world, because the future's changing so quickly. So whatever you're planning for... like, you know, we want my daughters to become doctors, but there are not going to be doctors in 20, 30 years that are, you know, using their hands anymore when we could use a robot. I mean, maybe there will be; maybe the governments will come in and put a moratorium on that kind of stuff. But if capitalism wins, there are more likely to be robot surgeons, and maybe, you know, some engineers that fix them when they break, or even, they'll probably fix themselves. So I just feel like the next few years are critical for your listeners and ourselves, as we try to make do with what we have in the world before AI becomes so powerful. 

the future delivered to your inbox

Subscribe to stay ahead of the curve (and your friends) with new episodes and exclusive content from the Singularity Podcast Network.
