
How AI is Shaping You and Your Life

May 15, 2023 | ep 101 | with Anne Scherer

description

This week my guest is Anne Scherer, a professor of marketing at the University of Zurich who specializes in the psychological and societal impacts that result from the increased automation and digitization of the consumer-company relationship.

In this episode we focus on the details Anne covers in You and AI, a book she co-authored with Cindy Candrian to bring an accessible understanding of the ways in which AI is shaping our lives. This takes us on a tour of topics such as our symbiotic relationship with AI, manipulation, regulation, the proposed six-month pause on AI development, the business advantages of better data policies around AI, the difference between artificial intelligence and human intelligence, and more.

Find out more about Anne and her book at annescherer.me


Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Anne Scherer [00:00:01] The key here, apart from informing society, is also creating more of a sense of urgency. So we know technology is evolving very, very fast. It always has. But in this case, I think we need to have more of a sense of urgency that even regulators need to be, you know, working a lot faster on this topic. Otherwise it will outpace us.

Steven Parton [00:00:37] Hello, everyone. My name is Steven Parton and you are listening to the Feedback Loop by Singularity. This week my guest is Anne Scherer, a professor of marketing at the University of Zurich who specializes in the psychological and societal impacts that result from the increased automation and digitization of the consumer-company relationship. In this episode, we focus on the details that Anne covers in You and AI, a new book that she co-authored with Cindy Candrian to bring an accessible understanding of the ways in which AI is shaping our lives. This takes us on a tour of topics such as our symbiotic relationship with AI, manipulation, regulation, the proposed six-month pause on AI development, the business advantages of better data policies around AI, the differences between artificial intelligence and human intelligence, and much more. So with all of that being said, let's go ahead and jump into it. Everyone, please welcome to the Feedback Loop, Anne Scherer. I think the best place to start, and where I want to start with you specifically, is the book that you recently co-authored, You and AI, a guide to understanding how artificial intelligence is shaping our lives. Can you just give me a bit of an overview of that book and what the motivation was for writing it? Mm hmm.

Anne Scherer [00:02:09] Yeah, it's actually a very exciting project. And we just published the book. A very exciting process in itself. The idea actually started a couple of years back when I did my TEDx talk, and that sparked a lot of interest and lots of discussions. And people came up to me saying that, you know, that would be interesting as a book, to hear and read more about this. But I never kind of found the time to, you know, sit down and write so much. But yeah, now my career in academia is about to end. And I felt like, you know, this is a good time to wrap things up, to summarize everything. And the big motivation was also to have something that my parents would understand. So whenever somebody asked them, you know, what's your daughter doing? It's like, oh, she's in robotics or AI, and, you know, it's like all kinds of big buzzwords. But they couldn't explain what I'm doing. And I felt like I wanted to have a book that explains this research in very simple terms, in a very engaging way. So the idea basically is to explain AI to the general public, to increase AI literacy among, you know, everybody in our society so we can have an open discussion. So we start very basic with basic concepts, you know, what is AI, how is it different from human intelligence? We try to understand in very simple terms what all these buzzwords mean, with machine learning and all that, and why do we need that at all? And then we dive more into the research on the psychology of technology. You know, how do we interact with these systems? How do different design concepts affect the way we interact with or trust these systems? And also a little bit more about, you know, where can we benefit from this technology? Where do we need to pay attention? But also address these big topics that are flying around the media at the moment, you know, will AI take over our jobs, and all these big fears and discussions that are going on, to kind of, you know, bring it down a little bit and explain, you know, AI as a tool. It's helpful, but it also has its limitations. And, you know, try to bring that out a little bit more into the public so we can have an open discussion about it, an informed discussion, and not just, you know, this kind of Hollywood fiction narrative that a lot of people have in their minds, which I think is not very helpful for a good AI discussion.

Steven Parton [00:04:37] Yeah. Do you think the current conversation about artificial intelligence is overhyped? Do you think a lot of people have a bad idea of what it is actually capable of? 

Anne Scherer [00:04:47] Yeah, I think actually, well, it goes both ways. I think especially at the moment with, you know, the introduction of ChatGPT and DALL-E, it's now like, oh, you know, AI can be creative. There's a lot of hype and big concerns now that, you know, now that it's mainstream and everybody's reading about AI, I think there's lots of concerns and a lot of misunderstandings, you know, about what it can do. And on the other hand, I have the feeling that there's lots of things where people do not understand that, you know, in the background there is AI going on, and it can do a lot of things with your data. For instance, I have lots of discussions also with my students at the university. You know, sharing your Facebook likes, most people don't worry about it. You know, it's not very interesting if somebody knows I like curly fries, you know, that very common example. But they don't have an understanding that, you know, if you gather enough data about a person, very sophisticated AI models can infer a lot more about you than, you know, you liking curly fries. So, for instance, on this end, there's a lot of things where people are completely unaware of what AI can do. And on the other side, we have this whole discussion about ChatGPT, which in my understanding involves a lot of, you know, misunderstandings, where people think it can do way more than it actually can. So it's actually going both ways. And this is a very scary kind of development, I think, and not very helpful.

Steven Parton [00:06:16] Yeah. Let's address both of those points that you made. To start with, let's maybe look at the creative side of things, where a lot of people are concerned about how much it's capable of. These large language models, ChatGPT, DALL-E, Midjourney, do you feel like these technologies are capable of truly innovative work and are going to maybe replace a lot of jobs and do a lot of work that people have been, you know, that they've made a career out of for the past several decades? Mm hmm.

Anne Scherer [00:06:52] Very good question. And I mean, this is what's currently dominating a lot of the discussions. And yes, these models can be impressive. I mean, if you look at the pictures that DALL-E creates or, you know, the text that ChatGPT can produce, it's amazing what these tools can do. And a lot of times the results are indistinguishable from human art or from human text. Yes. But I think the important aspect to consider is they still need human input. Right. So even if you work with DALL-E, you need to prompt it, and you need to be very good at prompting and giving, you know, that input, what it should be doing. The model in itself doesn't do anything. It's not, you know, it's not smart in the way of initiating any creative process. So I see it more as a collaboration. It's always an AI and human collaboration. And yes, I think ultimately it will change, you know, the kind of work we do and the skills that are important. So even for an artist, maybe we have to learn more, you know, writing good prompts and, you know, knowing how to make a good input and working together with these models and then adapting whatever the output is. So the creative process in itself might be very, very different. And just the skill sets that we need and the jobs that we do will shift, definitely. But this is actually nothing new. If you look at the technological development over history, that's always what's happened with new technologies, right? So AI is no exception. So yes, there will be new jobs, and yes, some people will be out of jobs because, you know, AI can do it better, AI can do it faster. And that will also change creative industries and so on. But that does not mean that we can sit back and do nothing, as a lot of people at the moment fear. Because again, it's a collaborative process. It is a tool; it makes us and our work a lot more effective and efficient, and it changes the way we work. But still we have a lot to do. So hopefully it can help us solve big tasks or complex problems a lot faster and better. And that's the idea of any tool that we have seen over the course of history.

Steven Parton [00:09:12] And let's look at the other side of things. Then you have a marketing background, and I would say one of the more common creative industries that we can expect this technology to take over very quickly is that of marketing. And specifically, what I fear is that marketing, when combined with AI, looks a lot like manipulation, looks like a very powerful tool to persuade people and use that data you were talking about before to maybe get them to buy things or think ways that they normally wouldn't. How much of a concern is that for you, and is that realistic?

Anne Scherer [00:09:51] Well, there's definitely a big concern here, and there's a lot of research showing that AI is very good at persuading people. And again, I think the biggest problem here, I mean, humans have also tried to persuade before, right? We had human salespeople that had similar techniques in trying to get people to buy things. I think the big problem now is AI can do it at a much larger scale. And the other aspect that I mentioned before is this lack of algorithmic awareness, that people do not understand that these algorithms are running in the background. And, you know, as they're profiling them, they're trying to give them the right products, the right message at a time that will be most convincing to them. And the big problem here, I think, is that people do not understand that that is happening. You know, if they knew, and this is where I come in saying that AI literacy is so important. Also, if you look at the political domain and trying to persuade people to vote for a particular campaign, if whatever opinion they have is all they see and hear online, they feel like, you know, it kind of reinforces their worldview, and they're not, you know, aware that they're in this huge bubble. And I think this is the critical aspect here, the lack of awareness in the general public and knowing what's happening behind the scenes and understanding how far and how well these algorithms work in profiling them. And again, I mean, there's tons of examples that pop up in the media, like Target and the pregnancy score, that are funny. But I think the important point here is to consider: at the moment, companies are just trying out, you know, what their AI algorithms can do, how far they can go, how well they work in predicting certain things like a pregnancy. And I think once consumers are more aware of what's happening, we can have an open discussion on, you know, how far can they go and should they go, to have more of a discussion on ethical AI. So at the moment, the development is completely, you know, left up to some tech experts and, you know, the Silicon Valley startups and all that. And I think we need to have a broader discussion. And for that to happen, people need to be more informed about AI and what it can do, where it's working in the background. So ultimately we can all decide where we want to go with this, and it's not just, you know, some elite that understands the technology and applies it to, you know, get more profits or whatever they want to do with it. Yeah.

Steven Parton [00:12:26] Do you think that our wisdom as a species can outpace our innovative capacities? I feel like in a way, I want to believe what you're saying, but I am genuinely concerned that when you compare cultural zeitgeist shifts with the speed at which GPT-3.5 or 4 is going to come out, I don't feel like a person can change quickly enough, you know, or grow quickly enough to outpace these changes. So I don't know. Do you think that that's going to be a struggle that we have to deal with, and that we might not actually be able to do it?

Anne Scherer [00:13:07] Well, at least I hope we are able to do it. Otherwise, I don't know where we're headed. I mean, I surely hope. 

Steven Parton [00:13:14] To be more specific, maybe: do you think we need more? Is there more than information needed here? Do we need laws and regulations, you know, or are there other things we need to do to help culture and society move forward, to keep ourselves in that race?

Anne Scherer [00:13:30] Yes, definitely. I'm definitely for regulation. And, you know, having a clearer picture on, you know, where this can go, where are the limits, what do we want to do with this technology? Because there's lots of potential. And if we don't discuss and have a clear picture on where we're going, this is a problem, right? So you need to have a destination, what you want to do with this technology, and then try to, you know, steer it in that direction. The problem usually with regulation is it's also a lot slower than technological development. So this is a problem. And I think the key here, apart from informing society, is also creating more of a sense of urgency. So we know technology is evolving very, very fast. It always has. But in this case, I think we need to have more of a sense of urgency that even regulators need to be, you know, working a lot faster on this topic, otherwise it will outpace us. So, yeah, I hope that we will manage to do that. If you look at history again, there's always been technology that ultimately could harm, you know, a human. If you think about a car, you know, the engine, it got faster, it got better, and there were no rules on, you know, traffic lights or, you know, how you should regulate that. And ultimately, humans did generate these regulations; they made the rules so everything was clear. But at first you saw that technological development. Right. And technology was moving fast. And my hope is that, well, history will repeat itself on that; hopefully with AI we will have a very similar development, to where we understand, okay, now there's so much happening, it has so much impact. It's in everybody's lives. It's influencing so many of our decisions, and very critical decisions. Right. So in this case, we need to work out regulations, how we can apply it, where, etc. This is why I'm actually quite positive, because at least what ChatGPT and, you know, DALL-E and all these tools have done is move AI more into that public consciousness, to, you know, where people were amazed at what AI can do. And for me that was a good development where it would spark these discussions. So hopefully we will use this development at the moment to kind of push further towards, you know, where do we want to go, what are the rules and regulations that we need to have to see ultimately a positive development from this technology and have a benefit for all, and not just, you know, an elite that is developing them while a lot of other people are left behind.

Steven Parton [00:16:11] Yeah. Have you seen the open letter that was put forth, I think, by Max Tegmark? What are your thoughts on this six-month pause that he's proposing for society to take a breather and think about what they're doing? Do you think this is going to work? Do you think it's a good idea?

Anne Scherer [00:16:30] I see you smiling. I am, too. I mean, the idea is nice. I just think it's not working. This is, you know, in theory, yes, that would be a nice idea, to, you know, sit back and, you know, take the time. And again, I think it is important to me to have these discussions. I just think it's not a very practical solution. And considering the way humans work, probably, you know, maybe some people will do that. But there are lots of other countries who say, well, we won't stop the development. So it's not like we will have a global solution. Somebody will think, you know, this is a competitive advantage, so I will go ahead and develop. And then others, you know, they don't want to stop either. So you have a huge, I think, global race at the moment. So I don't think you can realistically get everybody to sit together and say, yes, we'll stop that now and discuss where to go first. And I think for that to make sense and to work, we would have to have a global solution where everybody sits together. And considering the ways the human race has worked, I think that is not something that can be realized, unfortunately. Yeah. So yes, in theory it's a great idea, but practically I think it's just not possible.

Steven Parton [00:17:52] A similar topic. I think it might even be another thing that Max Tegmark has talked about as well. But what do you think about open source? Do you think that it is a good idea to open-source the code for these large language models? Because he, for instance, has likened it to giving away the way to create nuclear bombs. Basically, he's saying this is literally, you know, I am become death, and we are giving it to everyone. Do you think that that is something that you agree with, or do you think that we should open-source these models?

Anne Scherer [00:18:31] Oh, I think this is a really hard question, because there are so many elements that play into this. On the one hand, you know, will this in any way harm innovation that we don't want to see? On the other hand, yes, who will use that and in what manner? So actually, I don't have a very one-sided view of it, so I honestly wouldn't know which way I would lean at the moment. I think it has positive sides and negative sides. Again, I think that this is a very theoretical discussion as well. So even if we say that this is something we should be doing, I wonder if everybody's willing to do that. And if they don't, again, internationally or globally, that can be a big disadvantage or advantage considering which side you're on. So this is something we should consider as well. And a lot of companies will enter innovative processes. So yeah, again, I think this is often a very theoretical discussion where, if we want to have a solution, all governments worldwide sit together and decide on what to do, which I think is really...

Steven Parton [00:20:01] Unrealistic. 

Anne Scherer [00:20:02] And again, very hard to accomplish, I don't know, if not impossible.

Steven Parton [00:20:08] Mm hmm. Well, how about the imbalance, I guess, in our social institutions specifically? You come from academia. And one thing I've been thinking about lately is the fact that, you know, I recently went through academia, you know, a grad program myself, and going through review boards and boards of ethics and the research guidelines, there's a lot to do just to make sure that you're allowed to even ask somebody questions on a survey. And yet corporations can basically do live experimentation on millions of people with private data at any point without talking to anybody. How do we, I mean, do you feel like there's some reconciliation or some kind of change in social institutions that we need to come to grips with? Does academia need to speed up, or maybe government needs to speed up, or businesses need to slow down? Like, it feels like there's a gross imbalance there.

Anne Scherer [00:21:08] Oh, yeah, I totally agree with your observation there. Mm hmm. On the one hand, I think it's actually a very good development what we're seeing in academia, to kind of at least sit back and think about, you know, is this experiment worthwhile? So do we have to use this manipulation or that treatment? Does that hurt anybody in any way? To kind of at least have these thoughts. And I think this is something that companies should push in their own projects a lot more. But to be very frank, I think what we're seeing, at least what I'm seeing with industry partners, is that there is a move and an increasing awareness among companies that, you know, they need to rethink their procedures as well. So here in Switzerland, there was recently a big data scandal at one of the biggest telecommunications providers here. They had used customers' voice data whenever you called in and trained a model to do automatic caller identification. And the idea was, you know, this is very convenient for customers. You don't have to answer these boring questions at the beginning of the call; you will be automatically identified and the call will be a lot shorter. Obviously this is also very interesting for them from a cost perspective. When they presented it at a conference, which was very interesting, they were really, you know, very enthusiastic about their cool AI model. And then there was this huge outcry, and people were like, what did you do with our customer data? We never gave consent that you use it in that way. They had simply said, whenever you call, you know, we record the call for quality purposes. Nobody thought they'd be training an AI model. So there was this huge outcry here in the media for this case, and I really liked what they did afterwards. So they set up an ethics board, they set up an ethics framework. And from now on, you know, every new data science project that's coming in has to go through that ethics framework. And for them, I mean, the whole scandal was like a huge brand reputation crisis, and I think they took it as a big chance to, you know, show that they've learned from that. And they are very proactive now about the way they handle your data, what they're doing with it. And ultimately, I think that this is the way to go for many corporations. Regulations are, again, very slow, as always. But I think for companies, if they proactively set up these frameworks, proactively communicate, that can actually be a competitive advantage, because increasingly customers are concerned about these things. There's more and more, you know, incidents in the media, like Target in the US and what they're doing with AI models. So ultimately I think a lot of companies can benefit when they proactively decide to have their own ethics frameworks, decide that, you know, everything has to go through a review before they do these data science projects. And if they communicate that very openly to their customers, this can be a very big benefit, I think, and hopefully this is the way they're moving. But I agree with you that, considering the regulations, there is a big discrepancy still at the moment. In our universities, you know, everything is very clear. You have to do this step and that step, and it's taking a very long time, whereas companies are still, you know, trying to figure out which way to go and, you know, can decide very voluntarily if they want to set up, you know, this more time-consuming process or not.

Steven Parton [00:24:50] Yeah. Are you seeing a difference between countries? Specifically, I'm thinking like the EU obviously has a lot better data protection currently than the U.S. And I wonder if the U.S. would be as mature, or would worry as much, as maybe they did in Switzerland when something like that happened. I feel like in the United States we might just not care, honestly, that they were using our data to do voice recognition. So I wonder, do you see a difference there?

Anne Scherer [00:25:22] Well, I think, again, it's also something about publicity and, you know, what's reported in the media and what makes the headlines. And I think ever since GDPR, or even before, when they had the discussions and when it was introduced, this whole topic of data privacy, data protection, you know, what companies are doing with your data, has gained a lot of attention here in the media. And I think this helps, because now people are more concerned about what's happening with their data and they're more aware of the value of their data. There's lots of platforms popping up where, you know, companies can post a certain project they want to do and they need certain data from customers, and they will pay you for that. And that also kind of helps to give you that sense of, you know, my data, even if it's just my transaction data, you know, what I'm shopping for at the grocery store, that is still valuable. And I think that, you know, this whole GDPR development and the regulation has helped spark that awareness, because there was more media attention, and then consumers paid more attention to what's done with their data, how is it used, and that it is valuable for a lot of things. So I think that really helped. And I'm not sure about the media landscape in the US, but it seems like, you know, there have been some cases where it did pop up, and in California it was a bigger topic. But as long as it's not in this constant kind of media landscape and reporting, soon, I guess, you know, it's just in the background and you're no longer worried about it at all. So I think it kind of has to rise up a little bit higher, be more top of mind, to where consumers will, you know, also push this more and pay more attention to which companies they want to engage with. You know, if they're actually telling you, you know, we worry about your data, we care about that, this is a big issue for us, your data is stored in Switzerland, it's safe and all that. And, you know, if this is important to you and you've heard so much about it, you will maybe even pay a higher price for that service at that company. But, you know, if it's not top of mind, you probably don't care about it as much and probably won't even, you know, pay more for a particular service in that case.

Steven Parton [00:27:51] It doesn't. Yeah, this might be a weird question, but my first thought when you just said what you said was, do people actually care? Do you think that the average consumer is going to be mindful of this? I don't know if you have any research data on this in terms of people's opinions or what, but do you feel like the average person is really paying much attention to what's happening with their data or their privacy?

Anne Scherer [00:28:18] I think they are paying attention. So I've been working for the World Economic Forum for a year on this topic, and we did a large survey in Switzerland and in the US. And the interesting aspect there was that people generally said, well, I'm more concerned now. So concerns have gone up in the last year, especially also during COVID, you know, how is government using my data, that could be a critical development here. So people are concerned. I think the big problem is, does it lead to any behavior changes? Right. Because changing behavior or paying more for a service is, you know, costing you something, is taking effort. And this is often where it stops. Right. So, yes, I tell you, I care a lot about this, this is important to me. But if you have the choice, you know, to get a free service somewhere online or you have to pay for it, most of us will go for the free service. Right? And this is, in academia, this huge problem, this kind of privacy paradox: people always tell you that they worry about it, but they don't behave in that way. And this is a big issue, trying to figure out, you know, well, if you do worry, why do you not act in a certain way? But at least the development here, I think, is if you have competitors and it's a similar service, similar price, then we do see behavior changes, right? People will say, okay, I'd rather go with company X if they tell me that my data is safe and, you know, I have a clear picture of how they're using it, compared to a competitor. So at least in some instances, we can see that it is affecting behaviors. In others, obviously, people still like free stuff, especially online. They like, you know, easy behaviors. We don't like to read the privacy statements; they take so long to go through and it's legal wording, who does that in their free time? Right. So this is still a big, big issue. Definitely.

Steven Parton [00:30:24] Well, I'm thinking of your TED talk, Why We're More Honest With Machines Than People. And it makes me wonder if AI could be its own worst enemy in this case, and maybe in a sense, do you feel we might be able to maybe outsource some of our decision making to AI, or have it basically tell us that we should care? Like, do you see a way in which AI might be like, hey, here's how your data is being used, here's why it's bad for you. And because we trust machines, we will actually start taking AI more seriously, because the AI is telling us to.

Anne Scherer [00:31:03] That's an interesting, very complex thought. Yes.

Steven Parton [00:31:06] I know. Sorry for the confusion. 

Anne Scherer [00:31:09] No, that's fine. So first off, when we did the research on why we were more honest with machines, one of the concerns we had was also, you know, and you see that on social media, people are oversharing in the digital age. And when we talk with machines, it's like, oh, I can share this, I can share that, and people give up, you know, very sensitive information that, if you think about having a face to face discussion with a stranger, you probably wouldn't tell them, you know, these things. And obviously, that can be misused as well, where you engage or encourage people to share more and very sensitive information that they shouldn't be sharing. On the other hand, I really like your thought, you know, that AI could ultimately help us understand what is done with your data and help you make better privacy decisions. Now, a lot of years back, there was a project in New York called the Data Selfie, which I really, really liked. It was a master's student project, and what she did was basically simply just visualize what Facebook and their algorithm could infer about your personality based on your likes. So basically, she just got access to, you know, the Facebook profile of people that were using her service, and then she visualized it. And I thought this was a very, very smart idea of trying to get people to understand, or get this algorithmic awareness, right, to understand, you know, Facebook does not only know that you like curly fries; if you have like 200 likes or more, they can infer so much about your personality. And she was directly showing that to them. And I thought this was a very, very smart idea where AI can actually help people in the way that, you know, it shows transparently what it's doing ultimately. And then people can decide, oh, you know, this is a little bit too much. I wouldn't want to have, you know, Facebook know my political orientation, for instance. Yeah. And I think that was a very powerful way. And I think it's also a development we're seeing with a lot of companies, at least, where they try to give you this algorithmic awareness. So Google does that, too. If you go to their privacy settings, right, they tell you, you know, this is your profile, this is who we think you are, and sometimes it's even wrong. So it can also help you to kind of, you know, not have this over-enthusiasm about what AI can do, because it can also show you, you know, sometimes it is very wrong. But, you know, showing that to people and then giving them the option and more control in a way to say, you know, I don't want you to personalize my advertisement based on this, this is not characteristic of me. So this is a very good way to go. And we're starting to see that development. And again, this is where AI can help, because it makes it more transparent, showing, you know, what it's basically doing in the background. And then we can decide, is this okay? Is this something where we want to have personalized advertisement, or is this something we don't want them to, you know, profile us on? So I think this is a very good development.

Steven Parton [00:34:19] Yeah, I would love to see that. I think that would be great if we all had on our phones our data profile that is out there in the world and could just say, you know, take that off the list, I don't want anyone to know that any longer.

Anne Scherer [00:34:31] Yeah. Yeah. Well, it's interesting to see. So I can definitely encourage everybody to do that, to kind of experience firsthand what these algorithms are doing. It's very, very helpful.

Steven Parton [00:34:45] Yeah, well, we've talked a lot so far about a lot of societal things and haven't focused too deeply on the AI quite yet. So maybe one obvious question that we can ask here is, what kind of intelligence is AI? What are we seeing right now with these GPT models, with, you know, Midjourney? Are we talking about something that is going to be kind of human-like, or is this going to be something that's very different from human intelligence?

Anne Scherer [00:35:15] Mm hmm. Oh, it's a very good question. And I like the way you introduce it, because this is like the most common understanding: AI is, you know, simulating human intelligence. And that definition itself already has a problem, because it kind of has this assumption that, you know, it's very similar, because we're imitating human intelligence. And I think basically what we need to understand is artificial intelligence is very different. I mean, just thinking about it, our brain is, you know, biology, in itself very different from a computer and, you know, the hardware that we have there. So ultimately, all these differences lead to very different tasks that AI is good at and that humans are good at. And this is important to understand. So, for instance, at the moment when we talk about AI, even ChatGPT or DALL-E, this is all narrow AI, meaning they are very good at one particular task. So ChatGPT can write text really, really well. DALL-E can make, you know, these nice pictures, but they cannot drive your car or give you a weather forecast that's accurate. So AI systems, at the moment at least, are very, very specialized. They have a training set and are very good at processing huge amounts of data in a very, very efficient manner. And then they are very good at that specialized task, but they're not good at anything else. And this is something we often overlook in humans. Humans learn from, at times, very small sets of data, and we're very good at applying whatever we have learned to completely new situations. And we often talk about, you know, we have this common sense, and common sense, like, by itself the name already implies it's so common, it's nothing special, but it's very cool what we're doing, right? So we have an understanding of, you know, the basic laws of physics, gravity. So we kind of have an understanding, when we have a huge rock and we place it on a glass table, ooh, that might not be a good idea, right? So things like that we automatically understand, even if we haven't ever been in that situation. And this is something that's totally not possible with AI at the moment. It's trained on one task, and it can do it really well. But you see so many incidents where an AI encounters these so-called corner cases or uncertain situations and it fails; it's really bad at it. So any time there's a decision context that's more uncertain, more complex, you know, that might involve, how do you say, applying your knowledge to a new situation, to something uncertain, this is something humans are still very good at and AI cannot do. And also, typically, the other example that you get: we are social animals. So we're very good at, you know, kind of getting the feeling of, you know, our counterpart. You're nodding at the moment, so I kind of feel, you know, you like what I'm saying. Things like that are helpful. Right. And this is something where, with AI, we can train it to kind of imitate and simulate social behaviors, but it doesn't understand in the same way and kind of convey that empathy. So humans are still very good at these social interactions and social cognition, etc. And as we discussed before, creativity: as I said, even though DALL-E or ChatGPT might seem creative and their outputs are truly amazing, if you look at them, the process always starts with the human thinking outside the box, going beyond the current existing rules, and then also kind of persuading others that this is a great idea.
It's all part of a creative process that an AI model cannot do. It can support you in these processes, in all of them, but it cannot take over the tasks all by itself. So in my sense, I think this is why it's important to understand AI as a tool that can do certain tasks really well and can support you in, you know, finding new ideas and exploring something, finding patterns in medical pictures, all that. But in the end, you know, the way we work best is a hybrid kind of model, in collaboration, because we both have very distinct types of intelligence. And I think this, again, coming back to your initial point, is the most important thing to understand. AI, or artificial intelligence, is very different from our intelligence, and we kind of need to see it not like an imitation of human intelligence, but rather, I think of it like an alien coming to earth. It's very different from us. It has very different strengths than we do. But this is good. This is actually good for us, because it means that when we work together, it can be extremely beneficial. We don't want something that uses exactly the same process. And this is why it's important to understand how we are different, to understand where AI is good and you can trust it, and where the limitations are, where you need more of a human in the loop, I guess.

Steven Parton [00:40:47] Yeah. So I'm going to play devil's advocate a little bit here and challenge that last sentiment, which is: is it necessarily good that we are becoming more symbiotic with these technologies? And in that sense, I'm asking because I wonder if interacting with AI could be, like, flattening our affect, if maybe, you know, dealing with machines that are kind of focused on intelligence in the intellectual sense and not in the emotional sense could, you know, dehumanize us in a way where we might become more machine-like in our attempt to make them more human-like. Do you feel like that is something that could potentially happen here?

Anne Scherer [00:41:30] Mhm. Oh, it's definitely a concern. So at the moment, what we do see is that a lot of the social rules that we have learned in our social interactions are also applied to machine interactions. But ultimately, the more often we, you know, interact with these systems instead of other people, we might learn kind of new rules. There were these discussions with, you know, kids interacting with Alexa, and they were really rude and impolite because it's a machine, right? So you interact differently. And I guess this is more of a question, you know, ultimately, if we interact with these systems so often that we kind of learn new kinds of machine rules for how we interact with them, that could also, you know, kind of color the way we interact with each other. Obviously, yes, that is definitely a concern and could be happening. But my hope is that ultimately we still, as humans, when we grow up as kids, learn social interactions. So we need these social interactions, and hopefully we will not lose them altogether, even though these systems become more common in our daily lives. So hopefully, the more we interact with them, we understand that there are different rules that apply to these interactions.

Steven Parton [00:42:54] Well, looking forward, I'm going to pressure you to make some bold predictions. What are we... I don't know. You don't have to make anything too predictive, I guess. But what are you thinking is going to happen in the near term? I know you've done work with how GPT is affecting education. And, you know, we've talked about how it is potentially going to affect other industries like marketing. Do you see a lot of social upheaval in the near term as institutions have to adjust to the fact that this technology now exists? Or is this going to be kind of like a hype that quickly dies off and we get back to life as we know it? Mm hmm.

Anne Scherer [00:43:45] I'm actually not a big fan of making predictions, because you can only be wrong. So, okay, let me at least put it like this, what my hope would be for where we are going. So I definitely think that in the next year we will see some big developments in these models, because there's so much data available today, you know, there's just so much available to train new models, so ultimately we will see a big development. So I don't think this is a far stretch to make, at least as a prediction. My hope is that ultimately these tools can be... I know there are some downsides, like I said, it could be dehumanizing. Yes, there's the potential of polarizing, of discriminating against people. But it also has the potential to democratize a lot of things, like education. For instance, if you think about ChatGPT, especially in academia, you have to be able to write well, especially in English. And there's lots of researchers, including me, whose native language is not English. So writing a good scientific paper has always been a struggle, and writing it well is important to publish well. And my hope, especially if we have these language tools, is that, you know, it gives access to people who are not as well off, who can't afford, you know, somebody editing their texts, etc. Now we have these tools available and, you know, they can write really well, which would be nice, so they can communicate their ideas a lot better across the globe. So I think tools like that can also give access and opportunity to a lot of people who did not have that in the past, and also give access to more knowledge and education to everybody. So hopefully we will see this development, rather than kind of having a few who benefit and profit from these technologies and leaving everybody else behind without jobs. So my prediction is that jobs will change and the way we work will change, but hopefully for the better. If we use these tools, yeah, we can use them to be more effective, more efficient, and focus on the things that are really important, like these social connections, thinking about a doctor who can, you know, show more empathy to their patients, have more time for these things. So hopefully this is the development that we see, rather than, you know, having more discrimination with machine bias and polarization. I know, I know, these are big topics and these are important, which is why we need to talk about, you know, the way we go with these technologies. But ultimately, they could be a force for good. And hopefully this is the way we're headed with it.

Steven Parton [00:46:44] Well, looking forward, are there any aspects of AI that you are either really concerned about or really optimistic about, that you would love to see us pay more attention to, that you think maybe isn't getting enough attention or you just think is really important?

Anne Scherer [00:47:00] Mm hmm. At the moment, what I'm most concerned about is, honestly, AI literacy, having people understand. So it's not just a discussion of, you know, researchers, academics, an elite deciding what to do. I think if we want to have a technology that is democratizing, helping everybody, everybody should be involved in the discussion. And so we need to be able to communicate it in a way that's a lot easier, in a way that everybody can understand. So when I sent my book to my cousin last week, she was like, ooh, you know, it's an AI book, I don't know if I can read it. You know, it's already stigmatized: this is a techie book, I can't read it, I won't understand it anyway. And my hope is that ultimately people will open up more and understand, you know, if we try to communicate it in a lot more accessible way, that they can understand these things as well. And it's important that they understand, so they can join that discussion and they don't feel left behind, because at the moment this is my concern. Like, there's one group that, you know, talks about AI a lot, and the discussion is going, where are we going, you know, what are the risks? And there's another group just, you know, waiting to see what happens. And I think this is a little bit concerning at the moment. We need to have a broader discussion and information on that.

Steven Parton [00:48:30] Well, hopefully this conversation will help. And with that in mind, any closing thoughts? Obviously, we'll share a link to your book and let everyone know how to find it. But is there anything you'd like to promote or talk about, or just leave the audience with, before we come to a close here?

Anne Scherer [00:48:45] Yeah, just a final thought: please, everybody who's working in that area, try to find a way to communicate it in a very easy way to a broader audience, not just thinking about, you know, the academic conferences, etc. Try to think about it a little bit broader. And if there's one way, you can share our book and that as an approach; I'd be very happy also to get feedback on that.

the future delivered to your inbox

Subscribe to stay ahead of the curve (and your friends) with new episodes and exclusive content from the Singularity Podcast Network.
