description
This week our guest is psychologist and professor at Northwestern University, Adam Waytz, who specializes in the study of technology, ethics, and how people think about the minds of others.
In this episode, we take a wide tour across many topics as we explore Adam's different areas of interest and focus. These often center on how our demonization of technology blinds us to the real source of our societal struggles: the people using the technology. This leads to discussions around meaning, religion, echo chambers, ethical dilemmas around AI advancement, the differences between in-person and online interactions, and more.
Find out more about Adam and his work at adamwaytz.com
Learn more about Singularity: su.org
Host: Steven Parton - LinkedIn / Twitter
transcript
Adam Waytz [00:00:01] Sometimes, to me, technology is like a boogeyman. If you want to, like, talk about the causes of people failing to consider other minds or treat others as real human beings, it's easy to say, oh, that's social media. For all the concerns about the threats of technology, you should be more concerned about the threats of human beings.
Steven Parton [00:00:39] Hello everyone, my name is Steven Parton and you're listening to the Feedback Loop by Singularity. This week our guest is psychologist and professor at Northwestern University, Adam Waytz, who specializes in the study of technology, ethics, and how people think about the minds of others. In this episode, we take a wide tour across these many topics as we explore Adam's different areas of interest and focus. This often centers around the notion of demonizing technology in a way that blinds us to the real source of our societal struggles, which is simply human beings. This leads to discussions around meaning, religion, echo chambers, ethical dilemmas around AI advancement, the differences between in-person and online interactions, and a whole lot more. And so without further ado, let's dive into it. Everyone, please welcome to the Feedback Loop, Adam Waytz. All right, Adam. Well, you have your hands in a lot of different subjects, and I think it would be disingenuous for me to try to start anywhere other than to simply ask you what your research focuses on, what you're thinking about these days, and the way that you see the many different topics that you play in.
Adam Waytz [00:02:06] Yeah, I'll actually answer that by going back. So about 19 years ago, I started grad school, and the first thing that I studied with my mentors, John Cacioppo and Nick Epley, was: why do people treat non-human things as humanlike? So why do people treat animals as humanlike? Why do people treat God as humanlike? And of course, why do people treat technology as humanlike? Why do we anthropomorphize? So my research got started with this question of how social are we? Are we so social that we even treat things like technology as human beings? And, you know, given that that was 2004, that was before humanlike technology was everywhere in our pockets, in our homes, etc. So whereas the initial work looked at why we humanize technology, as humanlike technology became more prevalent in daily life, that sort of shifted into questions of, well, what are the consequences of humanized technology, or technology that appears to behave like a human being? So that's always been a theme of my work, I would say, the psychology of technology. And then, you know, outside of that, I've studied things related to ethics and morality, things related to, you know, intergroup processes and conflict. And although the main theme of my work has always been how does technology affect us socially and psychologically, those issues of intergroup processes and ethics have contributed to the work on technology as well. Sure.
Steven Parton [00:04:00] Well, since you're speaking of, you know, anthropomorphizing things there, do you have a good argument for why we do that? Is there a leading theory, for instance? I believe there's a thing called the hyperactive agency detector. I don't know if that's a term you're familiar with, but I've always found that to be an interesting way we kind of project agency onto things in the world. Is that something similar?
Adam Waytz [00:04:24] Yeah. So that was a theory that we were familiar with when we were doing this initial work. I think people like Justin Barrett and others kind of brought that into popularity, and that's, if I recall correctly, kind of an evolutionary theory: it would have made sense for us to over-impute agency onto the environment for our survival. You know, the classic example is if you see an ambiguous figure that might look like a stick, it's better to treat it as something that might have a mind that might attack us, like a poisonous snake, rather than just to assume it's something like an inert object. So that's a more evolutionary explanation. Our theory of anthropomorphism, I think, dealt with more proximal explanations, and we basically said there are three reasons why people might anthropomorphize. One is that the concept of human is just very top of mind for us. We walk around consumed by thoughts of our self all the time, the self as the prototypical human. And so this concept is just readily available. When we need to make sense of the world, we think human. And of course we don't do that all the time, but to the extent that something resembles a human or brings the concept human to mind, so, you know, a chimpanzee, which looks much more humanlike than, say, a bumblebee, that's going to be more likely to be anthropomorphized simply because it's helping bring this concept of human to the forefront of our minds. So that's one cognitive factor, what we call, well, it's kind of a jargony term, we call that elicited agent knowledge: to the extent that a stimulus in the environment elicits the concept human, we're going to humanize it. The two other factors, I think, are simpler to understand. One, we tend to anthropomorphize things more when we want to make sense of them. So when we want to predict, explain, or understand something that we don't understand, we treat it like something we are familiar with, again, the human form. So I'll just call that sensemaking motivation. And then the third one is, to the extent that we're motivated to have social connection, when we feel that social connection with humans is lacking, we might try to find that in other sources by humanizing non-human entities, including technology. So those are kind of the three main reasons: when humans are brought to mind, we anthropomorphize; when we're motivated to make sense of something, we anthropomorphize; and when we're motivated to have social connection, we tend to anthropomorphize more.
Steven Parton [00:07:17] That idea of something kind of having a stimulus association that makes us think of humans brings to mind social media interactions a little bit. And I wonder, have you done any work or seen any research on whether a profile, because we know it represents a human, ends up evoking that same response, or if there's some distance that's maintained because we think this is just text on a screen and not really something that feels very humanlike?
Adam Waytz [00:07:49] Yeah, I don't know of any work off the top of my head, but I do know that there is work on how people communicate differently over text versus over video. This is work by Juliana Schroeder and others. And essentially we humanize people more to the extent that we see their online profile as human. So, you know, text only goes so far; when you have the real live human there on the screen, that evokes a human response. Which kind of raises an interesting research question: if you make your avatar on social media more humanlike, if you give it a face and a voice and maybe some movement, do people treat you more like a human being online? Potentially.
Steven Parton [00:08:40] Well, you mentioned in your 2018 book that the unprecedented access to other humans that technology provides has ironically freed us from engaging with them. Could you unpack that a little bit, and maybe expand on how this overwhelming world of agents that we're anthropomorphizing might be affecting us, and how, ironically, we're distancing from them at the same time?
Adam Waytz [00:09:03] You know, what I meant by that is simply that the capacity to interact with people at scale through technology, all around the world, has meant that, you know, in an instant we could have a surface-level interaction with anyone. I mean, that's what social media does, and I think it even goes beyond surface-level interactions, because with a lot of online communities, you know, you get a lot of deep connections from those. But I think, whether it's a cause or just a correlate, what that means is those interactions have replaced deep, in-real-life social interactions, which seem to be more satiating for social connection. So, you know, contrary to some narratives out there, interacting with people online is not necessarily, like, the cause of loneliness, and for many people it can be the solution to loneliness, especially if, you know, you're part of a very small community, or you want to connect with people who like the exact same, you know, comic book artist as you do, or if you're someone who's suffering from a rare disease. I mean, wow, does technology bring you closer to people in your exact circumstances, or at least it has the capacity to. But at the end of the day, nothing really replaces in-real-life interaction. And I think that's where we get some disconnection, that interacting in a mediated way through machines means we have more disconnect from other people broadly.
Steven Parton [00:10:55] Is there something here that has to do with maybe the way theory of mind or other forms of social connection psychologically manifest as a result of being in person? Like, is there something in the body language, the facial features, maybe oxytocin, or something happening at a physiological level that might differ in person from something that's technologically mediated?
Adam Waytz [00:11:25] Yeah, I haven't, you know, checked in on that research in a while, but my sense is that's definitely the case, that there are triggers to theory of mind. And, you know, the human voice is one of them, or seeing a human face is another one. And to the extent that those stimuli get degraded online, it's going to be activating less mentalizing, less mental engagement with another person.
Steven Parton [00:11:56] Yeah. Can you talk maybe about some of the work you have done in this realm, then? Maybe something with how we explore other people's minds through technology? Or maybe not.
Adam Waytz [00:12:07] Yeah. So in the space that I work the most in, there are these various terms that all kind of get talked about as though they're the same thing. People might disagree with my definitions on this, but theory of mind, in my view, is really the ability to see or conceptualize another person's mental states, their intentions, their emotions. Broadly, what we call agency and experience, where agency is kind of the capacity for thinking, planning, and intending, etc., and experience is emotion and feeling. So theory of mind, in my view, is the capacity to perceive that in the first place. Then there's mentalizing, which is the process of perspective taking, like understanding, in particular, does Steven prefer mushrooms on his pizza or, you know, whatever, or cheddar. And then there's mind perception, which is really more the question of does this entity have a mind or not? And it's not as simple as a binary switch; it's to what extent am I perceiving this entity as having a mind or not. And that's really where my work has been primarily situated, because that's how we measure anthropomorphism, basically: we measure anthropomorphism through the extent to which people perceive a non-human entity, like a robot, as having a mind. And so we'll put people in different situations where, you know, you're going to experience some unpredictability with this robot, or you're going to experience the robot as behaving in a very predictable manner. When you see the robot as unpredictable, does that increase your tendency to perceive, oh, I think this thing has intentions and consciousness and experience? In terms of my work on mind perception toward other people, I guess what might be interesting is that there's any variability at all. We might think that when you ask, you know, to what extent does this other person have intentions, have feelings, have conscious experiences, everyone should just say ten out of ten. But, you know, there's some variability there. And we show that, you know, people are more willing to perceive mind in their in-group members than their outgroup members. People are more willing to perceive mind in themselves when they see themselves as ethical versus unethical. And, you know, there's a variety of other places where that research has gone. So that's my space in the whole domain of theory of mind, mentalizing, mind perception, perspective taking. I'm sort of in the mind perception corner of that.
Steven Parton [00:15:21] Yeah. Well, you talked there about interaction with robots. Could you say what you mean by robots? Is this something that's like a Boston Dynamics humanlike machine, or is this something more, like, what kind of interactions are taking place when we're talking about that?
Adam Waytz [00:15:38] Yeah. So sometimes we're describing or giving people a video of a specific robot. Like, back in the day we did some studies showing people a video of Clocky. Clocky was this alarm clock developed by someone at MIT; it looks like an alarm clock, and you set it on the nightstand, and when you press snooze in the morning, it actually rolls off the table, and it's mobile and it rolls around the floor and it beeps. And sometimes it's something like that. Sometimes we're describing to people an algorithm, so something that's more nameless and faceless. Sometimes we're having people sit in an autonomous car simulator. So I use that term robots quite loosely. In fact, I've been using an even looser term lately, which is just automation, which in my view is kind of a catchall for both physical robots and then artificial intelligence that may manifest in machine learning algorithms and so on and so forth. But, you know, other people have done a better job at breaking down some of the distinctions between how those different types of technology affect us. I haven't done as much of that, except to say the more humanlike something looks, the more we're going to humanize it.
Steven Parton [00:17:14] Yeah. I mean, does your work suggest, or not even your work, really just your opinion, how do you think it's looking in terms of the relationships we build with our machines? Because a lot of people have, you know, concerns about when you talk to Siri or something and you use an aggressive, angry, kind of antisocial approach, that that's something that you're kind of cultivating within yourself. And it says a lot about who you are and how you treat people, because we kind of say, hey, even though we know that's not a person, the fact that you're socially engaged in that way with that technology tells me that, like, I have questions about your ethics, about your morality, about who you are. Are there thoughts that you have on how we're exploring that relationship currently?
Adam Waytz [00:17:59] Yeah, it's fun to speculate. I mean, I think about, you know, the other day I saw someone just really yell at their dog, and, you know, in my view that tells you something about that person. It's interesting to think about: is this a way to implicitly look at people's personalities, how they treat these bots? You know, I was just experimenting with ChatGPT earlier today, and I was thinking, oh, you know, I'm asking it to do something, to summarize something. Should I say please? You know, does that elicit a different response? What does it say that I don't naturally say please? You know, I think that's an area that's ripe for research. I know some people have started to experiment with that along the lines of, do people treat bots with different genders differently? So, you know, there's a lot there, and I think it does say something about who you are, how you treat non-human entities as well.
Steven Parton [00:19:09] Well, let's maybe take the inverse of that, in a sense. What are some of the ways you're maybe thinking about dehumanizing? You know, I think you talk about the dehumanized worker in your book specifically, and, you know, maybe this is another angle: as we become more technological, as we become more associated with people being represented by technology, are we taking away some of the humanness of the people that we interact with?
Adam Waytz [00:19:35] Yeah. You know, I've kind of paused my work on dehumanization, although it's not really, like, in my control what I pause or don't pause; it's more like an idea stops being generative for me. I mean, I think when I was writing the book, the idea was that the more separated we become from each other, and I mean psychologically separated, the easier it is to dehumanize another person. So I'm sort of taking this as the premise: people tend to think of the self as the prototypical human. So the less you are like me, the prototypical human, the less human you are. And when you have the forces of technology and other things like income inequality, or anything that kind of segments different parts of society, I think what that means is that you get in-groups where everyone feels very self-like, and that might even enhance the humanization of people in our social circles. But for anyone on the outside of that, it might increase dehumanization. So that's kind of my broad thought on that.
Steven Parton [00:21:03] Do you think that technology can change how we engage with consciousness because of that? And by that I mean, if we're maybe seeing people go into echo chambers on things like social media, if they're creating their newsfeed, if they're having these self-affirming worlds where, like you said, the prototypical person that they are is basically what they decide to surround themselves with, how does this then affect our ability to mentalize, to engage in theory of mind, to take the perspective of a person who we might consider an other or a member of the outgroup?
Adam Waytz [00:21:46] Yeah, that's a tough one. I was just thinking about echo chambers right before we got on this call, because, one, I've become increasingly skeptical of the idea of echo chambers to begin with. And I'm not the expert on this, I know there's some debate in the field, but there's a lot of recent research that suggests that echo chambers are real. But on the other hand, the social media that's ostensibly swept us into these echo chambers actually exposes us to a ton of information outside of our echo chambers. And, you know, one school of thought is that the people most in echo chambers are the people who are not on social media, people, you know, actually watching cable news. There's some work by Chris Bail on that. So, you know, I'm just kind of thinking through this in real time. I've had this back and forth in my mind about whether this whole idea about echo chambers is overblown. You know, as much as social media segregates us, what really segregates us is real life. Like, maybe the only time I'm really exposed to people who are not like me ideologically is when I'm online. Or I should say the primary time where I interact with those people. Then, just before we recorded this, just today, a massive study came out in the journal Nature, which is maybe the best journal around, studying about 24,000 people on Facebook around the 2020 election, and it basically showed that, yes, echo chambers are real, that most people kind of consume ideologically consistent content, but it doesn't have much effect on things like ideological extremity or what they call affective polarization. And so I just think about technology, and I'm going back on some of the things that I was thinking a few years ago. Sometimes, to me, technology is like a boogeyman: if you want to, like, talk about the causes of people failing to consider other minds or treat others as real human beings, it's easy to say, oh, that's social media. But first of all, what's behind social media? Human beings. And what if we take social media just out of the equation? You know, who's prompting people to get into echo chambers? Who's prompting people to consider those outside of your echo chamber as other human beings? And so I think, you know, the short answer to your question is: do echo chambers affect the extent to which we see minds, see humans, outside of our echo chambers? Probably. But the longer answer is it's complex, and I think there are so many other forces at play driving people to see those outside of their immediate networks as less than human or less worthy of humanness.
Steven Parton [00:25:39] Yeah, well, you know, stepping away then maybe from that boogeyman aspect, are there some benefits, in your mind, to the way that current technology is mediating or influencing, maybe amplifying, the positives of social dynamics?
Adam Waytz [00:25:59] Yeah, I never thought that I would be like a technology apologist, and I should disclose I'm, you know, doing some consulting work with Google right now, on the positive side, trying to, you know, roll out some of their technology responsibly. The major thought I've had, and I've written about this in a forthcoming article with my colleague Moran Cerf for the Bulletin of the Atomic Scientists, the argument that we make is that for all of the concerns about the threats of technology, you should be more concerned about the threats of human beings. Like, everything you want to blame technology for, like, I just gave the example of polarization from social media. You know, when you look at these big statements about what people are concerned about in terms of AI, well, it's spreading misinformation, it's exacerbating bias, it's, you know, breaching cybersecurity protocols. Humans, in my view, and this is what we argue in the article, are the bigger threat in those domains. So to the extent that I see technology as a positive, I mean, if we want to talk about technology broadly, I think technology can help people make better decisions. So, like, you know, I have a colleague, Matt Groh, who has done some work on detection of deepfakes. And basically, long story short, humans alone are not totally great at detecting what is a deepfake and what isn't a deepfake. And of course, by deepfake I just mean, like, you know, a fake video of Vladimir Putin singing Chuck Berry songs or something like that. So humans working with AI, this is what Matt's research shows, outperform humans alone in detecting whether a video is a deepfake or not. So I think there are a lot of those examples, and there are other examples in medical decision-making where humans working with AI can make better decisions, more accurate decisions, than, you know, humans on their own. So I've actually come around to the point that we're a little too harsh on machines. And I think some of that, by some actors, is intentional; they would like us to think of technology as the problem rather than humans. And if anything, I'm seeing a lot more promise as to what technology can do.
Steven Parton [00:29:07] Yeah, well, in that regard, as we kind of empower, I guess, AI to work with us and trust in it more, what are your thoughts around the ethics of technology in terms of, you know, blame, or maybe ethical alignment, or how we treat the AIs that we're working with? Any of these topics, really. Just in general, what are you thinking about ethics these days?
Adam Waytz [00:29:31] Yeah, it's going to be disappointing. I feel like I don't have anything interesting to say. So I just finished teaching three weeks of a nine-week class broadly about ethics and AI. And, you know, I'm talking about privacy, I'm talking about algorithmic bias, I'm talking about blame and responsibility. And I feel like I'm kind of just saying things that have been said before. And so what I've been doing, I'm calling it my listening tour. I feel like when we talk about ethical risks, the ones that we should be really concerned about haven't been talked about yet. You know, everyone's kind of talking about the same things: risks to privacy, risks of spreading disinformation, risks of exacerbating various social biases. And I just feel like there's something there that we just haven't anticipated. So that's why I'm like, I don't know if I have anything interesting to say that hasn't been said a million times already. And, you know, I'm literally an ethics researcher. I teach two classes on ethics, one specifically on ethics. This should be my wheelhouse, but I'm more in a let's-wait-and-see mode. I'm, you know, talking to people who work for these companies, like, what are the risks that you're talking about? Because the public just kind of hears the same thing in every New York Times article, or it jumps immediately to these things are all going to kill us. And I'm not there yet either. So that's why I've been in this kind of meta-risk mode, where we just finally wrote this article that said, you know what I think a risk of AI is? It's that it's sucking up all of our attention, and we're talking about the risks of AI, and we're not talking about the fact that, hey, do you know that actually human beings are spreading more misinformation than bots are? You know, what's up with that? So that's kind of where my head is these days.
Steven Parton [00:32:07] Well, I want to delve into that last point you made momentarily. But one thing I do want to see if you've thought about at all is the way that technology might force us to answer ethical questions. Again, this is definitely a thing that's been said before, and I'll be a little bit cliche despite your previous response there. But, you know, with things like the trolley problem and self-driving cars, there's an impetus that we have now to actually answer some of these questions that were great thought experiments in the ethical domain previously, but now really pragmatically need to be answered. And I'm wondering if you see some tension there, or some examples of that taking place.
Adam Waytz [00:32:57] Yeah. You know, this is an interesting one, an interesting question for me as well, where my thinking has really changed over the years. Like, I read this article by, I want to get this person's name right, Julian De Freitas and his coauthors Sam Anthony, Andrea Censi, and George Alvarez. And the gist of the paper, as I remember, was: there's all this work looking at how trolley problems might inform how we program autonomous vehicles, and, if I remember correctly, that's not really how autonomous cars work. It's not really how the technology works, where it's able to assess whether it's ethical to kill the driver to spare five pedestrians. But putting that aside, in response to your question, I just have other questions, and this is something I talk about with my students. So the trolley problem is essentially this question: if a runaway trolley is screaming down the tracks and it's going to kill five people working on the track with certainty, you know, there's five people working on the track who just can't get out of the way, is it ethical to do something? And there are various versions of this, like flip a switch, where instead of the trolley killing those five workers, it kills the guy who's just standing on the bridge overhead, and you're going to drop him through a trap door and he's going to die, but then you spare the lives of the five workers. And most humans will say no, don't kill the guy on the footbridge. And then there are variations where people are more or less accepting of killing the guy on the footbridge. The gist of it is that people in that dilemma don't think like a utilitarian, or some people do, but they don't think like a pure utilitarian, which would be to say, oh yes, one is less than five, so you should always kill one person to save five. That's just doing what's best for the greater good, you know, if you want to maximize happiness or maximize human life, that's what you should do. Bringing this back to AI, I think people started studying this dilemma in the context of, well, soon machines will be making these decisions, and the paradigmatic example is that an autonomous car might have to make this decision: it's going down the road, there are five, let's say, children who are jaywalking, and the car has to make a decision. Does it kill the five children, or does it swerve and hit a pole, which will kill its driver? And, you know, I can't remember exactly what the results of these studies are, but these great studies by the Moral Machine people, Iyad Rahwan and Azim Shariff and others, have shown that, well, it depends; they kind of figured out what the factors are that inform people's decisions. But when you look at what the car companies are actually doing and how they're thinking about this, I think there's just a variety of thoughts. I may be misspeaking here, but my reading, you know, what you can read in the news, is that some companies are just training their cars to do what human drivers do. Like, forget about putting in a moral rule that says always kill one to save five, or always, you know, kill five if the five are ducks crossing the road and not people. A lot of companies are just saying, well, how do humans drive, how do humans make that decision?
We should train our cars to make the decisions humans would make. I think I read somewhere as well that Mercedes basically was going to implement a rule in their self-driving technology that said, always save the driver, no matter what. That's a different way of looking at it. And then, you know, Tesla probably has different tweaks to their algorithm; we've been able to see some of that in action, you know, both the successes and the failures on that front. And so I think, in terms of what AI can do, AI is very good at kind of figuring out, whatever company I'm referring to, well, what do most people do in that situation? Like, when I think about what ChatGPT or something is good at, it's not always great at giving me answers or predicting things, but it can consolidate information reasonably well, in my experience. And so that could be helpful: what do people typically do? And then, on the other hand, we might say, well, we don't want our cars to do what people typically do, because there are way too many car accidents right now. So what if we tried something better? And that's where my kind of limited understanding of the technology limits me from saying, well, how would that work?
Steven Parton [00:39:00] Right. Do you ever get too deep into the philosophical waters in this regard and think about consequentialism versus deontology? Like, do we want to program into these machines that certain actions are bad, or do we want them focusing on outcomes?
Adam Waytz [00:39:15] Yeah, you know, I think that would probably be a better use of my time, just to kind of think about ideas. I mean, my work is too filled with just emails and meetings and, like, admin, you know, where do we put people in this conference room, to even get into the world of ideas these days. But if I had unlimited time just to think, that's what I would want to be spending more time on. I do kind of pose that question. I teach an ethics class to MBA students, and we do talk about this distinction between deontological thinking, you know, this idea that, no, harming someone is just wrong even if it's for the greater good, versus, I guess, consequentialism: it doesn't matter what the action is, whatever saves more lives is the right thing to do. And we have a little bit of a discussion around that. Yeah, I think it's interesting, because eventually someone has to make the decision: how do we want the machines to think? Do we want to have them think like us? Do we want them to kind of optimize based on what the machines think? Do we want to just choose the moral rule? I mean, there, I think, you know, when you get into issues of people using hiring algorithms and finding out, like, oh, my hiring algorithm is biased against women, that's where I think people have stepped in and said, oh, I want to implement a moral rule. And the moral rule is, yeah, don't take this factor into account when determining whom to hire, because we know that that factor, for all sorts of strange reasons, is correlated with gender, and that's going to end up discriminating, and we don't want to do that. So, you know, in less life-or-death circumstances, people feel comfortable making those decisions. But, yeah, when we get into the big existential questions, that's where I wish I just had more time to think about these things.
Steven Parton [00:41:36] Yeah, well, speaking of existential, I think you had a recent paper about automation and religious decline, and we've been talking about the kind of offline impacts and the human side of this. Could you talk about that work, maybe, and maybe some of these ideas about how the stuff taking place offline is part of the dialog here that we often forget?
Adam Waytz [00:42:02] Yeah, the work on religion is interesting; it should be published any day now. We basically show that exposure to automation, that is, advanced robotics and AI, whether that's living in an area where robots have become more commonplace or starting to work in a job where you have to interact with AI, basically reduces people's likelihood of saying religion is important to me, or belief in God is something that I practice. That's kind of the gist of the findings. And then we've had a harder time explaining why that happens. You know, there are a few, I think not even competing hypotheses, hypotheses that make sense alongside each other. And in the paper we have preliminary evidence for one hypothesis, which is that exposure to advanced technology just gives human beings more godlike powers. Like, we can predict the weather now better than ever before. We can, you know, know what's happening on the other side of the world better than ever before. We have these sort of capacities for foresight and omnipotence and omnipresence, these things that have traditionally been associated with a higher power, God. And as we become more superhuman-like through the use of and exposure to this advanced technology, that maybe lessens our need to say religion and God are so important. And then, you know, whatever you think about religion, there are some big positives to religion psychologically. Some people would argue religion is the cause of all the wars in the world; I'm not that person. What I know psychologically is it also gives people a lot of meaning, it also gives people a lot of social connection, it gives you a community. And so the interesting thing to think about is, if the advance of automation and AI is reducing the importance that people place on religion, does that have other downstream consequences that are not so good for people's mental health? Or maybe there are consequences that are good, if you're someone who thinks that religion is, you know, not a great thing in the world. I'm sort of like, well, religion does a lot of things, because then you have to wonder, right? So one thing we know is obviously religion gives people a sense of meaning. If AI is reducing the importance of religion and thereby reducing people's sense of meaning, does AI give people meaning in a different way, or does it not satisfy that thing that religion gave us? That's the question.
Steven Parton [00:45:23] Do you think that some of that might explain why we're seeing some of the negative manifestations on social media, for instance? I mean, you know, there's Nietzsche's idea that God is dead, and then we have people like John Vervaeke from the University of Toronto talking about us being in a meaning crisis. And then we hear about your work, hear about people potentially losing access to meaning through religion as they get exposed to technology. Do you think that maybe some of what is happening here, as we kind of place ourselves in that godlike position, is that we're starting to use technology in some of these more negative ways? And so when we talk about technology being bad, maybe what we're really talking about is people just not having access to meaning-making systems in their life.
Adam Waytz [00:46:12] Yeah. I mean, my answer is I would want to know, right? I mean, wouldn't that be interesting? Like, say we're in a meaning crisis, and I'm not sure we are, but it would make a lot of sense. You know, we've had a lot of unexplained events happen over the past few years, a lot of massive surprises, you know, geopolitical upheaval, COVID, natural disasters, financial shocks. All of those things would tend to make people, you know, have a crisis of meaning. It would be interesting if technology, which seeks to explain and predict things and make sense of things, rather than giving us meaning, what if it just takes meaning from us further? And what if, by, you know, becoming more superhuman, because now we can predict the weather, and we can know what someone in Ukraine is thinking immediately without having to travel for weeks to get there, or, you know, we can basically consolidate the teachings of an entire thousand-word book in 15 seconds, what if that makes us experience less meaning as we become more and more superhuman? Like, these are the things that are fun to think about that I don't have much time to think about.
Steven Parton [00:47:43] And we're here to speculate a little bit, I mean, as long as you're comfortable with that, so that's okay. I mean, I do wonder if it's a bit of an inverted U in terms of its benefits, because I've had a lot of conversations lately with people about this very thing, about our increasing power, but also that now that things are becoming so possible to automate through things like large language models, people are losing some of their motivation. Like, I know people who want to write books or create paintings and stuff, and I'm like, I kind of don't want to, because it doesn't feel meaningful to do those things anymore when I know somebody can just type a prompt. And it feels like we've already maybe passed that phase of the benefits of our godlike power and are now kind of like, well, shit, now I'm inferior again.
Adam Waytz [00:48:30] Yeah, yeah, I know, this is a big topic for me, because in my book I also write about how people derive meaning from things that they believe were created by other humans. Like, even something as simple as a coffee mug: when you know the mug was created by a human, you value it more than when you learn it was created by a machine. And so as everything becomes more automated, even our ability to create art, do we start devaluing those things? Do we start seeing less meaning in them? And does that contribute to a meaning crisis? My guess is yes. Yeah.
Steven Parton [00:49:20] Well, what I'm taking away from this conversation is we need to get you out of admin and get you playing with some ideas.
Adam Waytz [00:49:26] Yeah. Yeah. That's, you know, what I thought academia would be, and, you know, it'll come back, I'm sure.
Steven Parton [00:49:33] Fair enough. Well, as we come to a close here then, you know, letting your mind free a little bit, what are some of the things that maybe looking forward you're most excited to think about or maybe technologies that you think hold great promise or maybe great peril? Like, what is your kind of big picture looking forward these days?
Adam Waytz [00:49:55] I think what I've been most interested in is how AI has kind of broader societal effects, like our work on how the rise of AI reduces religiosity around the world. That's interesting to me because it's like, here's a consequence that goes beyond just the immediate implications of, oh, when I interact with this technology, I might be receiving false information, or I might be receiving better information or quicker information. These kind of more downstream consequences are what I'm more interested in, because they feel a little bit more untapped. And then, in terms of a concern, something I've been talking about with various people that I would love to study in some form is whether the use of and exposure to AI is making culture more homogenous. Like, I was talking to a guy the other day who was using, you know, one of these bots for planning a trip to Seattle, and we were having this conversation. A lot of it's kind of out there that, oh, you can use AI, or companies are already using AI, to say, hey, plan me a perfect day in Seattle. But is the use of AI, just like the use of, say, Yelp, going to drive everyone to the same place in Seattle? Because if AI learns, hey, everyone loves this ice cream shop in Seattle, and then it spits out that answer, you should go to this ice cream shop, and then everyone goes to the ice cream shop, and then it gets more information, hey, this seems to be a popular place. Are we going to get that everywhere? You know, forget about the ice cream shop in Seattle. Companies already have been, for years, using algorithms to say, what do people want to watch on Netflix? Okay, we've learned that this was highly streamed, so we should just produce more stuff like that. Does the use of AI, given that it is sort of inherently learning from past behavior, just perpetuate that past behavior? And if we're already in a, say, film ecosystem where everything is a sequel or existing IP or a remake, and then AI is learning what was popular as a film in the past ten years, are we just going to get increasingly narrow offerings for culture and art and film? So, beyond the existential stuff, that's my concern.
Steven Parton [00:53:00] Yeah, the recycled cliche phenomenon in media definitely seems like a thing these days.
Adam Waytz [00:53:04] Yeah. Yeah.
Steven Parton [00:53:05] Well, how about any closing thoughts before we let you go, man? Anything at all: a study you need recruits for, a thank-you, one of the articles I think you said you had coming out. Anything at all?
Adam Waytz [00:53:17] Definitely. I always need help with research in the following way: if anyone has access to a lot of data that I could analyze and give you insights on, that could be a win-win; that'd be great. And if you work for an organization and you'd be willing to let me survey your organization, I'd be happy to do that. So data is my joy. So if you can help me get some, that'd be great.
Steven Parton [00:53:50] And what's the best way for them to contact you in that regard?
Adam Waytz [00:53:54] Yeah, just email me. I'll just give my Gmail, which is Adam Waytz at gmail.com.
Steven Parton [00:54:04] Perfect. All right, Adam, thank you so much for your time, man.
Adam Waytz [00:54:07] All right. Thanks very much.