
How AI Limits Our Choices & Shapes Bad Habits

August 15, 2022
Jacob Ward


This week our guest is NBC technology correspondent, Jacob Ward, who recently released his book, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back.

In this episode we focus broadly on the ways in which technology and AI are learning from the worst instincts of human beings, and then using those bad behaviors to shape our future choices. As a result, Jacob suggests this creates feedback loops of increasingly limited and increasingly short-sighted behavior. This conversation includes exploring topics such as big data, bad incentives for programmers, profit motives, historical bias reflected in data, system 1 vs system 2 thinking, and much more.

Find out more about Jacob at jacobward.com or follow him on Twitter at twitter.com/byjacobward


Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali


The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Jacob Ward [00:00:00] And so my worry is not so much that some robot overlord is going to, you know, enslave us. It's that we will be enslaved by our worst characteristics, or at least our most instinctive characteristics, as amplified by A.I.

Steven Parton [00:00:30] My name is Steven Parton, and you are listening to the Feedback Loop on Singularity Radio. This week our guest is NBC technology correspondent Jacob Ward, who recently released his book The Loop: How Technology Is Creating a World Without Choices and How to Fight Back. In this episode, we focus broadly on the ways in which technology, and AI specifically, are learning from the worst instincts of human beings and then using those bad behaviors to shape our future choices. As a result, Jacob suggests, this creates a feedback loop of increasingly limited and increasingly short-sighted behavior. This conversation includes an exploration of topics such as big data, bad incentives for programmers, profit motives, historical biases that are reflected in data, system one versus system two brain systems, and a whole lot more. So without getting into any more details, let's just go ahead and jump into it. Everyone, please welcome to the Feedback Loop, Jacob Ward. Jacob, I think the best place to start is, obviously, with your recent book, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back. As someone who has written books myself, one of my favorite questions to ask authors is: why did you lock yourself in a room with a laptop, shut yourself off from social experiences and the world at large, and decide to focus on this one subject?

Jacob Ward [00:02:04] The process of writing a book, actually, I would say, did not agree with me. And I grew up in a family of writers. My dad is a writer; he's got nearly a dozen books to his name. I've got an uncle who's a writer, I've got cousins who are writers, and it never seemed to bother any of them. But for me, especially with young kids at home, I found it tremendously painful to tell everybody, no, I cannot hang with you this weekend; no, I cannot, you know, play with you after dinner. So I did not enjoy it. And I have tremendous respect for, and I would say a little bit of alarm about, the kind of people who gravitate toward that. Maybe later in my life, when I'm retired or something, I'll want to shut myself away and, you know, be more that way. But it did not agree with me. That said, I also knew that this was a moment where I sort of had to act. And I went on this sort of personal journey that was inspired in part by a documentary series that I got to work on, and in part by the election of President Trump, which was sort of a signal that many of the themes about human behavior that we had been observing and making this documentary series about were really coming to fruition. And at the same time I was experiencing all of these things in my work as a technology correspondent, where I was seeing a lot of really alarming trends coming together. And I just thought, you know, this is really the moment that I've got to go for it and try and make this happen. And so I did sort of transform my life. I quit drinking so that I could wake up early in the mornings, get a good sleep at night, and, you know, survive a brutal schedule. And I sort of made this bargain with my wife about it, and we got through it. But yeah, I don't recommend writing books. When people say, God, it must have been great, I say, I have no idea. And no, it was not great, but I'm very glad it found me, and I'm very grateful to you for taking the time to talk about it.

Steven Parton [00:04:08] Yeah, of course, man. So on that note, then, what is this idea of the loop that you put forth in the book? Can you kind of lay out a foundation for us to have a better understanding?

Jacob Ward [00:04:19] So it's funny to be talking to you, because it was actually Ray Kurzweil's idea of the singularity that was one of the ways I got started on this idea. I had heard about it for a long time, when I was the editor of Popular Science and when I wrote for Wired, and I had sort of moved in the circles where Kurzweil's name is pretty much a household name. You know, this idea that computerized intelligence is going to overtake human intelligence, right, is going to exceed human intelligence, and what would the implications of that be? I had lived with that as a sort of background hum in my life for quite a while. But then I did this documentary series called Hacking Your Mind for PBS, which was basically a crash course for me, and is a crash course for the viewer, in the very automatic ways that human beings make decisions. It turns out that if you throw decisions at us under pressure, when we don't have enough resources to make them properly, either enough information or enough calories or what have you, the mistakes we tend to make are very automatic and very systematic and predictable. And I went all over the world and got to meet all these incredible researchers who specialize in studying that. At the same time, I was also noticing all of this work going on in the private sector around artificial intelligence, where people are trying to create an automatic, systematic way for AI to jump in and make decisions that we don't like to make for ourselves: who gets a job, who gets bail, who gets a loan. And I sort of thought to myself, well, wait a minute. Okay, the profit motive of private industry is throwing, you know, all of the greatest minds at this question, and we know on the other side of the fence, in academia, that so many of our decisions are systematically predictable and, as a result, sort of manipulable.
What's going to happen when we throw AI at these fundamental forms of human decision making? And I started to think, you know, maybe the threat here is not what Kurzweil and others have talked about, right, this idea that we're going to be kind of overseen by a general artificial intelligence down the road. You know, the Terminator idea that robot overlords will, you know, take over, which had been sort of this background idea for so long. I thought to myself, you know, before we get to that phase, I think we're going to hit this other phase, in which an automated process begins: AI analyzes human behavior, shapes it a little bit, and winnows down the choices; humans make choices from that more limited menu; and then that gets refined again by AI, and we get an even smaller menu at that point. And to me, that forms this idea of the loop, this loop of shrinking choices. It's a sort of downward spiral away from some of our most hard-won decision-making abilities, our sort of modern cognitive abilities as humans, and really an amplification, and I argue in the book an oversimplification, of our most instinctive decision-making patterns. And so my worry is not so much that some robot overlord is going to, you know, enslave us; it's that we will be enslaved by our worst characteristics, or at least our most instinctive characteristics, as amplified by A.I. And that idea, being caught in the loop, is what sort of inspired this book.
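The loop Ward describes, analyze behavior, shrink the menu, choose from the shrunken menu, repeat, can be sketched as a toy simulation. Everything here is illustrative (the item counts, the ranking rule, the random user) and is not from the book:

```python
import random

def simulate_loop(n_items=20, menu_size=10, rounds=100, seed=0):
    """Toy 'shrinking menu' loop: a recommender surfaces the
    most-clicked items, the user can only pick from what is
    surfaced, and the clicks feed the next round's ranking."""
    rng = random.Random(seed)
    clicks = {i: 0 for i in range(n_items)}
    for _ in range(rounds):
        # Rank all items by past clicks and surface only the top slice.
        menu = sorted(clicks, key=clicks.get, reverse=True)[:menu_size]
        # The user chooses, with some randomness, from the menu alone.
        clicks[rng.choice(menu)] += 1
    # Count how many of the original options were ever chosen at all.
    return sum(1 for c in clicks.values() if c > 0)

# Half the catalog never surfaces after round one, however long this runs.
print(simulate_loop())
```

Because early clicks decide what is shown next, items outside the first menu never get a chance to be chosen no matter how many rounds run; that concentration is the "smaller and smaller menu" in miniature.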

Steven Parton [00:08:05] Yeah. And in the book you talk about the two kinds of brains, and you talk about it in the documentary, Hacking Your Mind, as well. Can you talk a bit about those two kinds of brains and how artificial intelligence is using one of them versus the other?

Jacob Ward [00:08:21] Yeah. So, you know, the concept of dual process theory, which is what the researchers who study this stuff call it, has been around for a long time. And it's the idea that we have these two minds, right? One is an instinctive, automatic mind, and the other is a much more contemplative, cautious, reasoning, rational mind. This idea has been best popularized by Daniel Kahneman in his book Thinking, Fast and Slow: fast-thinking brain and slow-thinking brain, automatic brain and rational brain. And he and his research colleague of decades, Amos Tversky, did a lot of experimentation that kind of led him to thinking about this. But it goes back to all sorts of other research. We found, and when I say we, it's not me, I'm a journalist just reporting this stuff, but the researchers who study this have shown things like the fact that our visual system is in fact a dual process system. There's one famous experiment involving, you know, those funhouse illusions where a face is carved into the wall, and when you walk past, it looks like it's looking after you, following you. I remember that illusion used to scare the bejesus out of me when I was a kid. Well, it turns out that even though your conscious mind is fooled by that, in experiments these researchers showed that if you put a little plastic fly inside the hollow face, just glue it there for a second, and ask somebody to reach in and flick it off, even though their conscious mind thinks that thing is a protruding face, their unconscious mind will reach in and perfectly gauge the spot it needs to hit to flick that fly off. And the suggestion that results is that we have a conscious form of seeing and an unconscious form of seeing. I think their phrase was something like vision for seeing and vision for action, something like that.
And it turns out hearing has that same kind of thing, touch has that same kind of thing: we have these dual processes, and the two buckets are an instinctive one and a thinking-it-through one. So what Kahneman basically writes about in his book is this broad idea that we have this ancient, instinctive system that made most of our decisions for us in this very automatic way, and then this much more modern system. And all sorts of evolutionary scientists have, you know, theorized that this system grew out of our sort of newer brains, that it originated when we stopped just living for survival and began having bigger questions about death and the afterlife, how are we going to domesticate these cattle, all those kinds of things. That newer brain is your slow-thinking brain; it's cautious and creative and thinks things through, and it's why we have laws and math and all of this amazing art, you know? The problem that Kahneman identifies, and that I'm trying to think about in this book, is that we like to think we are using that cautious, creative brain, the slow-thinking brain, most of the time, because that's who we like to think of ourselves as: creative, rational, thinking it all through. Well, what Kahneman and so many others have shown in their research is that no, in fact, most of the time, maybe more or less all the time except in rare circumstances, your instinctive brain is in charge. It's driving the car, it's making the coffee. It turns out it may also be casting your vote for you. It may be, you know, making decisions about who you friend and trust, right? Even though we think we're navigating the modern landscape with this newer, more rational brain, it turns out that we're using our system one, our fast-thinking brain, probably most of the time.
And when you throw that against things like social media, right, or addictive gameplay, or drugs, or, you know, any number of the sort of vices and social ills that we've invented, you know, using the slow-thinking brain, that's the great irony of it, right? We've used the best parts of ourselves to, in many ways, make money off the worst parts of ourselves. And for me, that is the crux of the dual process theory: this idea that we're living in a world right now that pretends to be all about our slow-thinking brain, but is in fact built to make money off our fast brain. And to me, you throw AI into that mix, and technology in general, and you're, I think, really playing with fire.

Steven Parton [00:13:40] Yeah. And so how does this, I guess, fast-thinking brain become something that becomes systematized? Is this a case where we have bad programmers with bad incentives who are purposely creating bad systems? Or is it a case of the artificial intelligence just picking up on the bad behavior that we typically exhibit when we go about our lives? Or is it both?

Jacob Ward [00:14:05] Well, you know, I think this is a really good question. I think it's a combination of factors that play into this. So one of them, first of all, is the grand illusion that we are making our own choices, right? I think that modern politics, and the modern landscape of sort of popular thought about ourselves, says that we make our own choices, and that our instincts are often the best possible way to make a decision. And that's because we just like that idea. Certainly the Western ideal of a kind of rugged individualist making her own decisions has always been this romantic idea. So, like, Han Solo in Star Wars is always saying, you know, leave me alone, C-3PO, get away from me, nerd, don't tell me the odds, I'm going with my gut, right? And then he's proven right: he gets through the asteroid field. Or Neo goes back into the Matrix and gets Trinity out. You know, we love those stories of people saying, against all logic, I'm going to do this thing. And what's funny is that gets people thinking that they're making good choices, right? And the truth of the matter is that the statisticians and the economists and the behavioral people would all say, you should listen to C-3PO, you know, you should not go into that asteroid field. Those movies would be very boring and very short as a result, but they would really make sense, you know. So I think there's a whole social backdrop of delusion when it comes to how we make choices, and that is an important first thing to understand. And then that, I think, infects how we build things. We start to think, well, you know what, we're all making our own choices, we're in charge of ourselves, it's all fine, so I can just make whatever seems to move the needle in any way.
And that seems perfectly ethical, because people are making their own choices; you know, I'm not responsible for what other people do, I can only sort of offer suggestions, right? But what we're learning, and I think some of the top thinkers in the kind of reformist circles would say this, is: oh no, no, no, the data is not neutral. When you grab data on the history of loan making in this country and use it to train a system to decide who should get loans, it's going to make deeply racist decisions about who gets loans. And that's because the systems of loan making in this country have always been stacked against Black and brown communities. You know, you look at the history of redlining and how that set Black and brown communities back: the inability to buy a house where they wanted set them back, in terms of generational wealth, you know, by 100 years or more. You could fool yourself into thinking, oh, this data is neutral, and if I just feed it into this system, it's going to kick out a good, neutral response. But it turns out, no, of course, that's not the case, right? And yet I think a lot of engineers are quite, what's the right word, reluctant to mess with the data, or to put their finger on the scale at all to correct historical inequity. They'd rather just follow the data where it leads. That's how a lot of data scientists are trained: they're trained to account for confounding patterns and various things, but they're not trained to account for these things that, as a country, we're only just beginning to grapple with when it comes to inequality and the rest of it. So there's that. And then I think there is also the profit motive itself, which has led many people to simply say, you know, if I can make a buck off a thing, then that thing is worth making a buck off of.
And I've bumped into people. What's interesting is that, you know, there's a whole category of very addictive user interface design. And I know some of your listeners, I can hear their eyes rolling now, right? Because I know that people are allergic to the idea that there is such a thing as a digital addiction. People like to say that, you know, addictive behavior is certainly not the fault of the designer; it's the fault of the user, if anything. Well, okay, maybe. Except that whole industries are being developed right now on the addictive potential of things like, I mean, conventional gambling, sports gambling and the rest of it. Not only is that having one of the most explosive growth periods it's ever had in its history; in November, in California, where I live, we're going to vote on the idea of legalizing online sports betting. I mean, that's going to create this vast industry in what is essentially its own nation of 40 million people, right? We've already seen that. But you also see gambling on stuff that isn't even real. Gambling on sports is one thing, and we can argue about whether or not that's a legit thing. But, you know, you've got a whole category of things called social casino apps that just simulate the slot machine experience, simulate the poker experience. And I've interviewed dozens of people who've lost their shirts, lost their entire life savings, to a $4.99-at-a-time game in which you bet on nothing and you can win nothing, and yet you get hopelessly addicted. And to my mind, the same design principles are built into something like that: the tribalism of playing with your friends, and the gamification, and the, you know, coming back for a surprise, and don't lose your streak.
And all of the stuff that people have learned over time is really, really compelling in user interface design, and the onboarding of people, and all that stuff. Well, that also gets deployed in the name of good, right? We see that in Peloton, right? You join the tribe: welcome to the Peloton family. Peloton, of course, as I'm speaking, is taking a bath right now and, you know, having a really hard time financially, but that's sort of beside the point. Whether it's Peloton or Noom, you know, the weight loss program, it's very similar design practices, and they're just as effective as the ones that have been deployed for gambling and the ones that have been deployed for social casino gaming. So to me, you know, there are these tools that I think have been developed that we don't really understand why they work yet, though we're starting to home in on it. They're entirely unregulated, and I think they're having a huge effect on human behavior. And so I think it's a combination: this cultural individualism, this, you know, broad allergy to believing that we could ever be gullible enough to fall for something like this, the illusion that the data is neutral, and then just the blanket profit motive, the idea that if you can make money off something, it's worth making money off of. I think those things are all working together in this case.
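Ward's point that "the data is not neutral" can be illustrated with a tiny hypothetical. The numbers, group labels, and the frequency-matching "model" below are all made up for illustration; real lending models are far more complex, but the failure mode is the same: a system trained to reproduce historical decisions reproduces their disparities.

```python
# Hypothetical loan history: applicants with identical credit scores,
# but group "B" was historically approved far less often.
history = [
    # (group, credit_score, approved)
    ("A", 700, True), ("A", 650, True), ("A", 600, False), ("A", 700, True),
    ("B", 700, False), ("B", 650, False), ("B", 600, False), ("B", 700, True),
]

def approval_rate(group):
    """Historical approval rate for one group."""
    rows = [r for r in history if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def naive_model(group, score):
    """'Follow the data where it leads': approve whenever the
    applicant's group was historically approved most of the time.
    The credit score never even gets consulted."""
    return approval_rate(group) >= 0.5

print(approval_rate("A"), approval_rate("B"))        # 0.75 0.25
print(naive_model("A", 700), naive_model("B", 700))  # True False
```

Two applicants with the same score get opposite answers purely because of the group label; the "neutral" data carries the old bias straight into the new system.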

Steven Parton [00:21:15] And that seems like what's leading us to what you would say is a realm of fewer choices, right? Because as we get addicted to these things, and as they, let's just say, hijack our attention and bring our focus very narrowly into this one realm, we kind of become blind to other options, right? Is that kind of the idea?

Jacob Ward [00:21:33] Yeah, well, that is the argument I'm making. I mean, it's beyond just the sort of attention economy stuff, which to me is the easiest to grok. I'm also trying to articulate in this book that there is this broader tendency. And that tendency is essentially, I mean, what all of the people who study the dual process theory stuff, the fast-thinking, slow-thinking stuff, have tried to explain to me, is that the human brain is built to outsource its decisions and take shortcuts whenever it possibly can. And that is because we survived that way. Finding an efficient way of making decisions about which food to eat, or who to trust, or where to lay down for the night was the difference between survival and death when we lived on the open plains of what is now Africa, whatever that was, 120,000 years ago, 100,000 years ago. And the ability to outsource decision making to this instinctive system was, in fact, good, because it kept us alive. I use this example all the time, as it was explained to me: if you and I are sitting together in a cafe and a snake comes into the room with us, we don't take time to talk together about what kind of snake that is. Or if a fire licks across the ceiling, we don't take a second to say, geez, how hot do you think that fire is? We both recoil. And in fact, there's a whole world of incredible research about how our faces transmit horror between each other faster than we could ever explain how dangerous the situation is. If I make a face of horror, you catch it, you pick it up and transmit it to the next person, and that's how you get a whole crowd running out of a room. You know, there's a famous routine from Cedric the Entertainer, the comic, about how, when he sees other Black people running, he can't help but run.
And it's a really funny routine, but it's also very dark, and it's absolutely spot on for the behavioral science stuff, because it turns out that when you see other people, especially other people who resemble you, doing a thing, you can't help but do it. And all of that is because we learned to survive with our tribe. That was really, really useful once upon a time. The problem is, now it means we are primed, we are built, to outsource tough decisions to an automatic system. And if that automatic system comes in the guise of a neutral-seeming, very sleek piece of technology that miraculously pumps out great decisions for us, like who to hire, right, which is one of the hardest things to do right now, figuring out who to hire for a job, you know, we love to hand that stuff over. We are built to hand that stuff over. And so not only do we do that in things like the attention economy, where, you know, I am very much guilty of spending so long on TikTok that TikTok actually has to tell me to go to bed, videos pop up that say, okay, you've watched enough, go to bed, you know, but that's also true in all sorts of other systems we're seeing. You know, the Transportation Security Administration spent, I think it was, almost $2 billion on a system that they thought could automatically spot suspicious people in line at the airport; it was called the SPOT program. They wanted it to work. They sank so much money into it. And in the end, when they investigated, they figured out it was a total wash. It didn't work at all. But man, do we want that. You know, we're built for that. And so from the attention economy to major pieces of government infrastructure, I think we're going to be tempted to rely on the shiny automated system over and over again when we don't want to make a decision for ourselves.

Steven Parton [00:25:25] Yeah. In all of the examples that you explored in the book, and through the TV show, and just through your own explorations, have you seen any cases where they saw this bias arise in the first version and then used that to adjust? Like, is there a possibility that we can take this information and, rather than just running with it and assuming that it's right and ignoring the bias, we can then say, okay, we know loans are disenfranchising people of color, so let's make the next data set that we train our AI on get rid of that? You mentioned that they don't want to put their thumb on the scale, but are there any cases where you've seen that maybe be used in a positive way, to make some progress against these biases?

Jacob Ward [00:26:12] Yes, I think there are ways to do it. I mean, one of the great horrors for me over the last year or so, as this book came out, is that just as I was finishing it, in fact, I had just finished it, I was just putting it away, Daniel Kahneman came out with a book in which he basically argues the opposite of what I'm arguing. He argues that AI could be used to compensate for our flaws, the flaws that he has spent his career identifying. And he makes a very, you know, rational argument that you could use a bias-neutral system to compensate for our tendencies to be racist and sexist and the rest of it, or, you know, compensate for our bad choices with money, right? You could program a thing to compensate for that. And I think he's right in theory. The thing that he does not seem to work into his argument, and this is the part that makes me so crazy, especially because he's an economist, oh, sorry, he won the Nobel Prize in economics, he's a psychologist by training, but he's moved in the realms of economics for so long, is that he just doesn't think about the profit motive. And my argument is, nobody's making money making us smarter about our decisions. Empowering people to make better choices very often causes us to spend less money, and so you just don't see a lot of people making money doing that. And to me, the profit motive is the thing that we really have to keep our eye on. I think that there are extraordinary opportunities in the public sector, in nonprofit work, in the art world, in all sorts of amazing places, where you could really use AI to do extraordinary things. There was just a gathering recently of antiquities experts who are really, really hopeful about using AI to fill in the gaps in the archaeological record of certain things.
Because it turns out that if you show AI the history of, let's say, Etruscan art, and you show it all the pottery that you've ever found in the world, but, you know, you've got these big gaps in the record, because pottery is so fragile it doesn't last, it's really hard to find unbroken pieces of pottery, it turns out the AI can say, oh yeah, this is probably what came in between, right? You can use it to do all sorts of things. But again, who's making money off missing pottery? They're making money off getting us to gamble on imaginary stuff. And so that's one problem. But yes, to your point, can we compensate for those biases? Some very smart people, you know, have gotten into a lot of trouble arguing this very thing. Timnit Gebru is a researcher at Google who was fired, very famously, after making a whole argument about how the way that language was being parsed by AI was picking up all kinds of, you know, bad and potentially racist patterns. And other members of her team wound up being fired as well. You've got people basically saying that, yes, we can work on this, we can compensate for it in some way. The problem is, it's much harder to do that. And as you mentioned, I think, you know, some classically trained data scientists do not like to be asked to put their finger on the scale. I talked to one guy who, I had heard, had experimented with it, and I asked him about that. He worked at a big loan-making company, and he told me, no, you know what, we played around with that for a moment, but it would not be ethical for me to get involved like that. He was talking in this case about correcting racist patterns in loan making, and he basically said, you know, I've got to go where the data leads.
And to my mind, you know, that is an abdication of responsibility in the guise of being neutral, and it's going to sort of perpetuate this stuff. But I think the final thing that I'll say on this particular topic is that the place I am encouraged is in law. And I know that liability law gets a bad rap, and people always like to talk about spurious lawsuits and so forth, but I consider legal shenanigans to be one of the great backstops of American society, because it really is the correction engine for things like a national addiction to cancer-causing cigarettes. The only reason you and I are not sitting here smoking right now is lawsuits. And I think that lawsuits are going to start piling up, and legal liability is going to emerge, when it comes out that people making loans in these ways were using AI; suddenly the company is going to be culpable for that. And I think that's going to push people to sharpen up their skill set around this stuff and the compensation stuff.

Steven Parton [00:31:33] Do you think it's going to be a case where those lawsuits basically come to the conclusion that you just need to throw it out? Because when does putting the thumb on the scale just become a second form of bias? You know, you're just going to say, hey, we're simply just not emotionally mature enough or intelligent enough humans yet to do any of this automation and we should just do away with it. Or are they going to say, you know what, we could probably come up with something and they'll just push forward, assuming that they don't have any biases that they're baking into the new design. 

Jacob Ward [00:32:08] Right, right. I think it's going to slow things down. And I know that in tech circles that's the worst thing you could possibly say, right? But I think that what we kind of need to do is pare back the use cases a little bit. Right now people are just so excited. I can't tell you how many presentations I've been to where somebody says, you know, it's based on human cognition, and you go, oh my God. First of all, that's not good. And second of all, no, it's not; nobody's built that yet, you know. What are you talking about? So the overpromising of what these things can do is, I think, something that has to be corrected. And the other part of it is the modes we need to invent when it comes to investigating how these systems make decisions. I think anybody who listens to this podcast is probably at least somewhat familiar with how A.I. works. But your classic flavors of AI generally just take a big, disorganized data set, draw patterns out of it, and kick out these answers. It gets a little bit of guidance from human operators as to whether those answers are correct, and then pretty soon it's getting it right, the human operators are satisfied, and it's just set loose. But at no point in the classic formulation of how this is put together do you ever get to look inside and say, but how did you come to that conclusion? Why is this a picture of a dog and this a picture of a cat, you know? And it turns out that if you build the most efficient form of this kind of decision-making system, it's a black box. At one point I was considering calling this book The Black Box, because that was one of the big problems. You know, explainability is a big problem.
So I think the other thing that's going to have to happen is we're not only going to have to limit the use cases a little bit and not overpromise what it can do, but just use it to do worker scheduling, let's say, and not necessarily let it choose who we hire. And I think we're going to have to make it show its work, and make it legally necessary, you know, make it a legal responsibility of deploying this stuff at all, that you have to be able to look under the hood and say, oh, this is how it made this choice. And, oh, jeez, no wonder these results are so racist or whatever else it ends up being. 
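The loop Ward describes, fit to labeled data, corrected by human feedback, then asked to show its work, can be illustrated with a toy perceptron. Everything below is invented for the sake of illustration: the feature names, the data, and the tiny linear model. A real deployed system has thousands of opaque weights, which is exactly the black-box problem, but even a simple per-feature attribution like `explain` is the kind of "show your work" readout he's arguing should be required.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit weights from labeled examples; labels are +1 or -1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # the "human operator" correction signal
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def explain(w, x, names):
    """Show the model's work: each feature's push toward the decision."""
    return {n: wi * xi for n, wi, xi in zip(names, w, x)}

# Invented toy features: [ear_pointiness, snout_length]
X = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]]
y = [-1, -1, 1, 1]  # -1 = "cat", +1 = "dog"
w, b = train_perceptron(X, y)

query = [0.15, 0.85]
score = sum(wi * xi for wi, xi in zip(w, query)) + b
print("dog" if score > 0 else "cat")  # prints "dog"
print(explain(w, query, ["ear_pointiness", "snout_length"]))
```

Note that nothing in the training loop itself produces the explanation; the attribution has to be bolted on deliberately, which is the point Ward is making about legal requirements.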

Steven Parton [00:34:38] Yeah. Do you think that we're going to just ultimately accept these decisions, though, as they come down the line? Because when I look at some of the research on, like, ultimatum games where people are playing a game against a machine, or where they lose against a machine, or even therapy sessions where they know they're talking to an AI rather than an individual, there are some studies, as far as I know, where people aren't as upset if they lose money to a machine, and they're more open with the AI therapist than they would be with the human therapist. There seems to be some kind of odd tendency towards trusting machines and maybe just kind of enjoying their impartial decision making, as opposed to the humans'. Like, have you had any thoughts on that? 

Jacob Ward [00:35:31] Yeah. So, you know, we were talking earlier about this idea of the human brain being geared to hand off decisions to automated systems. Well, not only do we have that, we also have this tendency to ascribe sophistication to systems we don't understand, to an extent that I think is going to be especially dangerous in the realm of A.I.. So there's the famous story of Joseph Weizenbaum, the German-born computer scientist, basically. In the 1960s, working in the United States, he came up with this system that could spit back these sort of scripted sentences: it would absorb a little bit of information from you and basically more or less repeat back what you had said, but kind of keep the conversation moving forward. And he was trying to figure out a way to make it, you know, useful. He was trying to figure out, how am I going to dress this thing up as something that people will actually play with? And he dressed it up as a Rogerian therapist, a psychotherapist that would repeat your stuff back at you, in the style of the day. So back then, when you sat down with a therapist, you'd say, I'm feeling very sad today. And he'd say, why do you think you're feeling sad? And then you'd say, I think it's maybe something to do with my mother. Tell me about your mother. And that kind of thing turned out to be perfect for what Weizenbaum was building, the system that he called ELIZA, which he deployed at first on his secretary, just as an experiment. And the secretary famously turned around within the first 5 minutes and said to Weizenbaum, I need you to leave the room, because she was so taken with this thing, she was about to reveal so much of herself to it. Within a couple of years, the American Psychological Association was predicting the end of human therapy. The end of the story is that Weizenbaum quit the field. 
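The reflect-it-back trick Ward describes can be sketched in a few lines. The rules below are invented stand-ins, not Weizenbaum's actual DOCTOR script, but they reproduce the style of the exchange quoted above: match a pattern, swap pronouns, and hand the user's own words back as a question.

```python
import re

# Pronoun swaps so "my future" comes back as "your future".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "i'm": "you're"}

# Invented rules: (pattern, response template), tried in order.
RULES = [
    (re.compile(r"i'?m feeling (.*)", re.I), "Why do you think you're feeling {0}?"),
    (re.compile(r".*\bmy (mother|father)\b.*", re.I), "Tell me about your {0}."),
    (re.compile(r"i think (.*)", re.I), "What makes you think {0}?"),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza(line):
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please, go on."  # default keeps the conversation moving

print(eliza("I'm feeling very sad today"))
print(eliza("I think it's maybe something to do with my mother"))
```

There is no model and no understanding here at all, just string matching, which is what made the secretary's reaction so alarming to Weizenbaum.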
He was so appalled by what had happened that he quit and walked off, you know, because he just said, human beings are not prepared for this; we're playing with fire when we play with this. This is too dangerous a thing. And, you know, for me, this very well established tendency of human beings to believe in the decisions of things they don't understand, you know, it's why we think that putting on a jersey and showing up at your sports team's game is going to somehow cause them to win. Or, you know, I've fallen prey to the fallacy that somehow my watching a game on TV is going to affect the outcome. Right. It's the basis of superstition. It's why people throw salt over their shoulders. We don't know why that works; it just seems to work, so I'm just going to keep doing it. And I think that tendency is a huge one here. And so, you know, there was a case a few years ago where the first piece of generative art came out, where somebody used AI to basically generate a painting, this kind of ever-shifting set of paintings. I spent time with one of these artists; really fascinating, beautiful work. But basically, he gave this piece of AI, like, a thousand or so examples of his favorite 14th-century oil paintings from the Louvre. And then he had it just kick out his favorites; it was essentially trained on what his favorites would be. And so it began to kick out some of the greatest hits of 14th-century oil portraiture. And this philosophy professor at, I think it was, MIT wrote in Tech Review this very angry piece about how it's not really art when a computer does that. It's not really art. And in my case, you know, I remember reading it and thinking, okay, I get that in an academic sense, but from a human sense, and a sort of societal sense, who cares whether you think it's art or not? 
This dude that I interviewed sold that generative portrait thing for $50,000 at Sotheby's, you know. And right now, generative A.I., generative art, is sweeping the art world. You know, like, blockchain-based art transactions, you know, are all about generative art. And now you're looking at these new systems, DALL-E and, what's the new one, Midjourney or whatever? 

Steven Parton [00:40:06] Yeah, I think so. Yeah. Midjourney, right? 

Jacob Ward [00:40:07] Yeah. Midjourney. Thank you. You know, I've got an architect friend who's kicking out these incredible renderings from Midjourney. And this is a guy who spent, you know, decades refining his craft. And now all he has to do is say, you know, bubble greenhouse, and out come these things he could never have thought of. And so it doesn't matter whether we think that's real architecture or real art; it's acceptable to the human brain in the same way that that therapy was acceptable to Weizenbaum's secretary. And so I think that's the difficulty we're facing. It's like we're not going to know the difference before long, especially when a full generation goes by. You know, I don't want to go too off topic here, but I was just watching this incredible interview with Keanu Reeves, who was describing The Matrix to, I think, his niece, who had never heard of it, never watched it. And he's describing this story of, you know, this guy who discovers he's living in a computer simulation, and he can either take the red pill and be freed from the illusion of it, or the blue pill and remain inside the Matrix. And the niece is shocked and says, why would you take the red pill? Why wouldn't you just take the blue pill? What were you thinking? You know? And I, in fact, had a conversation with several teenagers recently about this. When it comes to Instagram and the rest of it, they just say, I don't care if it's addictive. I don't care if it's made by a for-profit company. I just like it. Leave me alone, you know? And I think that's a tendency we're moving toward: it works on my brain, it feels good to my brain, that's good enough, you know? 

Steven Parton [00:41:49] Yeah. I think that's one of the things that scares me about a lot of this, because I've done quite a lot of behavioral research myself. And one of the things that you see is that the tendency to go towards the fast-thinking brain is drastically increased when you're in heightened levels of stress or fear or anxiety. And as we've been talking about, there's a massive profit motive that takes place in society. And if you have a lot of people who feel disenfranchised, like they don't have jobs they enjoy, or they're not financially stable (you know, what is it, something like 70% of Americans don't have $500 in savings?), you're going to grasp and hold on as much as you can to these things that make you feel good. And that whole time, they're just kind of taking you down that vicious spiral to the point where you cling to these decisions made for you by the AI. 

Jacob Ward [00:42:47] Yeah, I think that's right, and I think you're absolutely right about that. I worry that, you know, society is not built right now to reward good long-term thinking. Right. And you're seeing it reflected in, you know, savings rates. I mean, the pandemic forced a bunch of people to save, but that was just kind of accidental; the tendency is not to do that, and those savings have gone away for most of those families. You know, climate policy, right, is the classic version of this. We're just terrible at thinking ahead and making the hard choices now to make life easier down the road. Right. I mean, it's the marshmallow test, right? It's the inability to sit in front of one marshmallow long enough to earn a second marshmallow. And I think you're obviously right. And that is where I think a system like AI could, in theory, do good work, in the same way that government regulation, I think, could in theory do good work. It's just not built into our systems right now. And if we're going to build those systems on our instinctive brain, then it's really not going to get us anywhere. And so the more that the attention economy and the rest of this stuff makes us think that it's okay to just, like, you know, sort of sit in an automatic place, the more vulnerable we're going to be, I think, to this kind of stuff. And so, yeah, I don't know, man. I get pessimistic sometimes. 

Steven Parton [00:44:24] Yeah. Do you think that the best approach to address some of these concerns is to go from the top, with the policies, with the lawsuits? Or do you think that there's something to be said for limiting the data collection? Like, we haven't talked much about data privacy and data issues here, but these systems don't really do anything if there's no data for them to train on. Like, do you think maybe attacking the collection of data is a better approach, or valuable? 

Jacob Ward [00:44:55] I certainly think it's worth playing around with. I mean, right now, right, there's no data privacy at all, essentially, in the United States. I mean, there's some legislation eventually moving forward, but there's almost nothing. And, you know, in terms of any kind of, like, standards as a society around what is or is not acceptable when it comes to this stuff, it's moving so fast. You know, you've got people voluntarily buying surveillance devices and putting them into their homes. Right. You know, audio devices and doorbells with cameras on them, all that stuff. And so I think that we just, as a society, have not reckoned with any of this in any kind of forward-looking way. It's just being defined by whatever is newest and being developed fastest. So that's one thing. I think you're right. Yeah, like, some data privacy would be great. Or at least making it expensive for companies to use your data, or at least maybe giving you some share in, you know, the benefits that those companies accrue from your data. Right. Maybe if you could profit from it, you would get some sense of just how valuable it is, in totality, to these companies. Maybe, you know, I don't know. Yeah, I think that reframing the data economy and data privacy would be a really good move. But I also think that as a society we've got to, like, take a really hard look at what we want our world to look like and what we want things to be like. And there are a couple of instances I cite in the book in which we've sort of come to a collective understanding, or at least, sort of, you know, a bipartisan understanding, of what we think is right and wrong. One of them is backup cameras. So backup cameras were a solution to a very small problem, which was that about 60 Americans were dying every year from being backed over by a car. 
But the horror of it was that most of those cases were kids, and most of the people doing the backing up were parents. And so it's the most horrible thing anyone can imagine, and anyone from any political background, I think, can get together on that one and say, well, that's totally unacceptable. We cannot have that in this society. And as a result, a bipartisan effort came together, and it's now mandatory that if you buy a new car in the United States, it has to come with a backup camera. And that adds to the price of the car, you know, it adds to the money that you have to pay. It creates all this stuff. But from the top down, as a society, we said, we need this to be the case, because that is an evil thing happening, and we needed it to stop. You know, and I also think about vaccines. There's a whole separate legal system, in effect, for handling the rare instances in which a kid has an allergic reaction to a routine vaccine. This is long before COVID or any of this other stuff. And it's not, you know, this is not vaccine craziness. This is just the fact that when you deploy a medication, you know, a chemical, across as many people as we do with vaccines every year, there is this very small number of cases in which people have an allergic reaction; they typically have a Guillain-Barre syndrome response. And for that handful of cases every year, we needed a way to deal with it. And so there's a thing, the vaccine court, in Washington, D.C., where you can go, and you get paid out millions of dollars for the suffering of your child, possibly the death of your child. And it happens almost instantaneously; it's like a weeks-long process. It doesn't take very long. There's a special master, an appointed judge, who makes the decisions. 
And it basically happens outside the normal bounds of liability law in the United States. Those vaccine companies, the manufacturers, pay into, or really, they charge you and you pay into, a fund that covers all of those cases. It's like 100 cases a year, I think, something like that. And those vaccine makers, in exchange, can't be taken to court. It's a no-fault system; they don't have any legal liability. They just have to pay into the system to keep the system going, because as a society, we've decided we need vaccines; they're one of the greatest life-saving inventions in the history of humankind. And so we keep those going. So to my mind, like backup cameras, like the vaccine court, we know how to create special rules that govern special circumstances when society recognizes that something needs to be corrected. I think we can all sense that something weird is happening right now and needs to be corrected. It's just moving so fast, there are so many different pieces, and there's so much money to be made off of it, that it's making it difficult for us to get our arms around it in advance. 

Steven Parton [00:50:09] Well, with those beacons of hope that you just kind of exhibited there, you know, as we kind of get close to the end here: are you optimistic? Do you feel that this is a moment of growing pains and that we're just going to mature out of it? Or are you kind of thinking, actually, this might be a special circumstance, where it might be a feedback loop that goes in a negative direction? Like, after exploring this, what's your felt sense of the future? 

Jacob Ward [00:50:41] Well, I am pessimistic in the short term, because I do think that it's going to be very difficult to shake anyone free, you know, at a time when we've all just had these incredible economic jitters from a global pandemic that shut down society. You know, I'm speaking to you today on a day when new jobs numbers have come out and everyone's super excited that they are more than everybody had originally forecast. I think economic indicators are really catching a lot of attention these days, because we're worried about the ability of the United States to continue to grow and be a hugely, you know, productive and resource-filled society. And so I worry that in the short term, we're going to prioritize that over having to make some tough, and in some cases money-limiting, decisions around stuff like technology. But I am encouraged in the long term by humankind's history of being able to correct for bad stuff. Whether it's in just, like, raw innovation, you know, medicine and seatbelts, the ways in which we have invented ways to protect ourselves from danger, or whether it's regulatory, you know, whether in the form of lawsuits or people coming together across the political divide and making a choice, I do think we've been pretty good at that stuff in the past. I hope that in a generation I'm sitting with a grandchild. My kids are eight and ten, or maybe eleven, right now, so that's a ways off. But I'm thinking, you know, of my grandkids. I hope that my grandkids sit with me someday at the end of my life, you know, we're having lunch, and she'll ask me, why did you guys think it was okay to give phones to kids? You know, in the same way that I asked my grandfather, why did you smoke? And I smoked cigarettes, too, but still, you know, why did everyone think that was okay? 
So, like, I think there's going to be this reckoning with this stuff, where we recognize that there was quick money to be made off of, you know, a technology that we knew in some form was dangerous, but we weren't quite sure how. You know, I think we're going to get to that place where we're going to figure it out, figure out the real costs, and slap a sticker price on it, such that we can really make some hard decisions about it. So I'm confident about it in the next couple of generations. But in this generation, I think it may be too much for me to hope that we're going to turn it around. 

Steven Parton [00:53:21] Yeah, fair enough. I mean, it's still an optimistic note in my book, so I'll take it. 

Jacob Ward [00:53:26] Okay. 

Steven Parton [00:53:28] Well, as we come to the end here, man, I want to just give you a chance, you know, if there's anything you want to talk about, or if you have any new things coming up that you want to share. 

Jacob Ward [00:53:38] Yeah. Thank you. God knows when I'll write another book, but I do have this dream of writing a version of this for kids, like a crash course in behavioral science and technology for kids, because I think of that as a really missing element in society right now. So if anybody out there is interested in teaming up on that, I'd be very happy to speak with you. I work for NBC News, and so I urge you to tune into us. I work as one member of a really big team of really great journalists who cover all kinds of stuff around misinformation, the gig economy, and all sorts of things around technology, and I really have deep respect for what NBC News does. But yeah, I just really hope that we can all continue to look at this as, you know, not just a technological issue, but as a social and ethical issue, because I think that's really what's coming up. And, you know, I really hope mine isn't, and I know it won't be, the last book written on a subject like this, and that people get even smarter about stuff that I've only just begun to think about. 

Steven Parton [00:54:42] Yeah, well, thanks for driving the conversation forward, man, and thank you for your time. 

Jacob Ward [00:54:45] Appreciate it. Thank you so much. 
