
The Adoption of Embodied Computing

August 22, 2023
Episode 113, with Isabel Pedersen

description

Our guest is professor and director of the Digital Life Institute at Ontario Tech University, Isabel Pedersen, who specializes in the study of wearables, embodied computing, and similar technologies.

In this episode, we take a tour through what Isabel calls the continuum of embodiment, starting with the defining characteristics of the field, exploring its many manifestations and advancements over the decades, and even looking into the future, when we may see applications such as brain-computer interfaces. Along the way we discuss the impacts of embodied technology, including the impact of rhetoric on the design and adoption of technology, the societal impacts, and much, much more.


Follow Isabel and her work at twitter.com/isabel_pedersen

Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Isabel Pedersen [00:00:01] We have a model where we remediate old computing styles and then propose the future based on what we already know. But we have trouble asking those difficult questions about what our lives will be like. We understand the technology and its continuum, but we don't place social harms and social goods on that continuum and begin to predict those outcomes.

Steven Parton [00:00:41] Hello, everyone. My name is Steven Parton and you are listening to the Feedback Loop by Singularity. This week our guest is Isabel Pedersen, professor and director of the Digital Life Institute at Ontario Tech University, who specializes in the study of wearables, embodied computing, and similar technologies. In this episode, we take a tour through what Isabel calls the continuum of embodiment, starting with the defining characteristics of the field, exploring its many manifestations and advancements over the previous decades, and even looking into the future, when we may see applications such as brain-computer interfaces becoming the norm. Along the way, we discuss the impacts of embodied technology, some of which I don't think I've really heard discussed elsewhere. This includes, but is certainly not limited to, topics like the rhetorical and linguistic impacts of how we talk about technology, how that impacts design, how such dynamics impact the likelihood of mass adoption, the societal impacts of these technologies and their rhetoric, and much, much more. And so, with all that being said, let's dive into it. Everyone, please welcome to the Feedback Loop, Isabel Pedersen. The first place I want to start, if we could, is really just to kind of get a big-picture view of what you do. What is your research focus? What are the things that you're exploring these days, and why are they interesting to you?

Isabel Pedersen [00:02:18] Well, my research focus has always been future tech, and strategizing future tech for a human-centric, or sort of society-centric, or ethical purpose. So ethically aligned design and technology would be my broad research area. I started at the University of Waterloo a long time ago studying the future idea that we were going to wear technology on the body, that we were going to wear computers, and that wearable computers were going to happen. But at that time it was very much a tech imaginary. It was used by militaries around the world. It was used by specialized groups in specialized domains. So I had to travel around the world to ever see an example of it. I had to go to ETH Zurich; I had to go to Georgia Tech or MIT just to see an example of my own research area. And what has sustained over time is embodied computing. I've always researched embodied computing.

Steven Parton [00:03:28] Yeah. And what is embodied computing? If you could kind of give us the most succinct definition, or accessible definition for that matter.

Isabel Pedersen [00:03:37] For me, embodied computing is a human-centered design model that helps me research how computers are carried. So to start, you carry computers, like mobile phones and laptops. Then there's the progression towards wearing computers on the body, whether they're on your wrist or you're wearing a heads-up display for augmented reality, then implanting components in the body, so implantables, and then the next phase, which is ambient. So the research area covers components that are on, in, and around the body, and all of the, let's say, consequences that come from interacting with computer components in that way is what I study.
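To make that taxonomy concrete, here is a minimal sketch in Python (our illustration, not from Pedersen's work) that models the four stages she names as an ordered type; the device examples are assumptions for illustration.

```python
from enum import IntEnum

class Embodiment(IntEnum):
    """Pedersen's continuum of embodiment, ordered from carried to ambient.
    The stage names come from the interview; the class itself is illustrative."""
    CARRIED = 1    # mobile phones, laptops
    WORN = 2       # smartwatches, heads-up displays
    IMPLANTED = 3  # implantable components, brain-computer interfaces
    AMBIENT = 4    # smart houses, smart cities: computing around the body

# Hypothetical example devices placed on the continuum
devices = {
    "smartphone": Embodiment.CARRIED,
    "AR headset": Embodiment.WORN,
    "neural implant": Embodiment.IMPLANTED,
    "smart home": Embodiment.AMBIENT,
}

# Because the stages are ordered, we can ask how far along a device sits
for name, stage in devices.items():
    print(f"{name}: stage {stage.value} ({stage.name.lower()})")
```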

Steven Parton [00:04:35] Okay. Yeah. And you talked about the progression there. I know in your 2013 book Ready to Wear, you talk about the continuum of embodiment. Is that kind of what you're speaking to here, this progression?

Isabel Pedersen [00:04:48] Right. I actually started that with my own Ph.D. dissertation at the University of Waterloo, as I said, in 2004, when I saw that we hadn't yet embraced anything wearable or implantable, but that it was very much written into the discourse that the things we did with our desktops at the time were proving to be inadequate. They weren't meeting the kinds of, let's say, ambitions that people had for what their computers could and should do. And I saw that the motives for designing these kinds of technologies were only going to be met through much more immersive types of computing. And so I don't think it's that useful to look at any one of these types of platforms in isolation, as if they're static; they're dynamic, and prototypes and research development move us through these sort of phases, whether we agree to adopt or adapt to these technologies or not. This is a continuum, the continuum of embodiment, that's occurring.

Steven Parton [00:05:59] And how have you seen this continuum develop over, let's say, the last 20 years? I mean, in terms of maybe pace and funding, is it something that has been a steady linear increase? Is it something that's been exponentially adopted? Is it something that's had ebbs and flows? What have you seen in this paradigm?

Isabel Pedersen [00:06:21] Well, absolutely ebbs and flows. There certainly has been sort of a winter where people were only interested in their mobile phones, and that took a decade, even though most embodied computing paradigms were proposed and even strategized before mobile, before we were carrying our phones around. So there's been a long progression of, let's say, development and cycles that draw on embodied computing, and they haven't necessarily even emerged yet. I think what has happened, though, is most people have already adopted the idea that you carry a computer on you, that you are going to socialize or work or entertain yourself using a computer that you're carrying with you. And what's happened, I think, is what I call these discourses or rhetorics of persuasion: people want to do that more efficiently, or they want to do it more autonomously, and we're beginning to run up against the limits of mobile. Your smartphone is no longer going to fulfill those expectations for the next phase of computing. And so people are beginning already: many people wear a smartwatch, many people wore Fitbits, people got used to wearing cameras on the body through sports. So as for the notion of socializing the public to the idea of wearing technology, we've sort of moved past that. And many people are talking about, and experiencing, the next phase of ambient technology, for example smart houses or smart cities, technologies that are around the body, which are contributing to all of this transformation.

Steven Parton [00:08:13] Yeah, this is probably going to be more speculative and maybe even out of scope a bit for you. But what do you think is propelling us to adopt these technologies? Is it something that's purely the curiosity of the human animal? Is it the desire for status and the way that materialistic goods are attached to status? Is it this persuasive language that's used by capitalism to make lots of money by selling gadgets? Do you have an idea of what really is driving this?

Isabel Pedersen [00:08:44] Well, you kind of got to the heart of some of the big research questions, I think, with all forms of technoculture. Right? How and why do we adopt? For me, adoption is where you decide to purchase a device or use it or try it at a trade show; you're going to begin the process of adopting. But adapting to technology can take decades, if not centuries, for us to fully sort of adapt. And that process of persuasion goes on through all of these stages. And I argue the one that we're not paying enough attention to is design, right? How are inventors persuaded to design something? How were they enabled to design, and in what ways? You know, why is it that we have been inventing wearable technologies for 20 years, implantable technologies for that long? And how are they persuading us, as those future adoptees? Like, how is it that we are being convinced to listen to them, or to consider their science legitimate, for example, or their technology legitimate? And I always think, when you're talking about motive or rhetoric or why people are persuaded, it always goes two ways. We're contributing too; we have agency; we as societies are contributing to technocultural adoption. But we're also being persuaded by other writers, or let's say large tech companies where the motive is profit, convincing us to adopt something so that they can make money. But it's two-way. I don't believe in technological determinism, that we're only persuaded by powerful others. I think we have to always see it as a two-way negotiation, or dialogic, when it comes to design, adoption, and adaptation to technology.

Steven Parton [00:10:51] Yeah. You talk about semiotics and rhetoric in a lot of the things that I see around your work. Could you expand on that idea a little bit more, about the rhetoric there and how semiotics plays into this? Because it's definitely not like a common thing you hear when somebody is talking about these things. Those words stand out as kind of unique.

Isabel Pedersen [00:11:09] Yeah. Well, semiotics, in brief, is the study of meaning: making meanings, using meanings. And it's beyond simply language. I mean, we make meaning with gestures, we make meanings with imagery, with video. So it's a science of how we exchange meanings, and misuse meanings. And at the heart of any human-computer interaction is an exchange of meaning, a semiotic exchange. I started in human-computer interaction trying to understand what that next phase will be: how will we exchange meanings in an embodied-technology future that will involve carried, worn, implanted, and ambient interfaces? And cutting through that was always a recognition that language is motivated. It might be politically motivated, or it might be ideologically motivated. In my writing, in my theory, there is no neutral exchange of meaning; all language is motivated according to motives. So, to bring it back to rhetoric: that helped me understand, if an inventor is proposing that we wear a heads-up display and use augmented reality, all of those questions you asked. Why does the inventor suggest that a user in the future should use it or not? What are the ideal scenarios being proposed through that usage? And, you know, the counter-motive: what kinds of harms are embedded in this early design strategy that is being communicated to us as users or as members of society? That's how, in my view, you address or interrogate the idea that technology is neutral, when it isn't. And that's been going on for 20 years. It might be new to some people to realize that technology and science are not neutral, but in rhetorical studies we've always talked about how meaning exchange could be politically motivated, could be motivated for all different reasons.

Steven Parton [00:13:42] Do you feel like, and I don't know if this is a fair question, but on a scale from like 0 to 1 or 0 to 10, use whatever measure you'd like, how are we doing right now in terms of letting it be more neutral? You know, acknowledging that, as you just said, it can't ever really be neutral. Do you feel like our design and the way we're building these gadgets and technologies have a lot of forceful meaning baked into them, or are they more loose and open to interpretation?

Isabel Pedersen [00:14:16] That's such a good question, and an important one. I teach a graduate class in computer science. I teach ethics, global ethics, to computer science students, and they come from various research areas. Some of them are in game design; some of them are in computer science and are interested in health domains, like how AI can help restore aspects of people's physicality that they would like to have restored, or assistive technology, some say. And what I like about this class is that we begin to ask the question that you're asking: what is ethically aligned design? If there is a proposal to create a certain piece of software, how is it motivated, and how can computer science students ask those questions of themselves as they invent? I give them many different case studies to try to reveal the ways that language isn't neutral, and neither are software programs. The use of software programs operates in and through this sort of motivated, sometimes biased, sometimes not biased milieu. How can they address these concerns, and how can they deal with it? They are extremely enthusiastic about the course, and they are, I think, heading into this problem of neutral tech, the assumption that science and technology are neutral, head on.

Steven Parton [00:16:13] Mm hmm. Did everything change, like just get turned upside down, for you in the past year with large language models? Like, how has that kind of changed your approach to that course?

Isabel Pedersen [00:16:26] Well, yeah. So you mentioned my first book in 2013. You can imagine, I was publishing that book as Google Glass was announced. I had written about heads-up displays, see-through augmented reality, visual augmented reality, for ten years, and had no idea that it was going to be released. So likewise, in November of 2022, when OpenAI released that more rigorous version of GPT, that was kind of a moment I'm going to remember. One of the reasons is I had written a book called Writing Futures, and Writing Futures was released in the summer of 2021, and it talked about the future when professional writers would be collaborating with AI writing agents. And it argued that our reticence to prepare for this writing future was one of the problems with my field, or writing studies. It was coauthored with Dr. Ann Hill Duin from the University of Minnesota, who has spent her whole career in writing studies programs, and she's a full professor in that field. We also predicted, and there was enough evidence, I would say, five years before that, how these large language models were going to disrupt and even revolutionize writing studies. And I'm going to say it largely wasn't embraced as a sort of essential research area in social science and the humanities, even though it's going to change the profession of the very people who teach writers and teach writing students. Very few of those people were talking about it in their main conferences, their main international conferences, and it was kind of crickets. So, crickets around when we really spoke during the pandemic, and it didn't have much uptake. So when it happened, I mean, it was exciting. It was also somewhat terrifying. I thought adoption would be slower; I didn't think it would happen that quickly, 5 million people in five days, or, if I'm wrong, I apologize, but I think that's what it was. And people were completely stunned by what had happened. And there were many journalists writing about it; the New York Times was writing about it, Cade Metz was writing about it. So if you had been following the journalism, at least, or tech journalism, then this wouldn't have been news, really. But if you had sort of said, well, that can't ever happen, and decided that AI was still a far-off future, you would be surprised.

Steven Parton [00:19:39] But do you worry about being blindsided by the continuum of wearables that we've been talking about here, being blindsided by them in the same way we were blindsided by large language models? Are we potentially, for example, going to take this life-optimization, life-hacking, you know, on-body sensors that people use to be really healthy, and it seems really innocuous at first, and then six months later it's like every single person in the world is giving data about their biomarkers to everyone in the world, and we don't even have any legislation for it? Like, is that something that concerns you at all?

Isabel Pedersen [00:20:22] I've always been concerned about this future that I write about. I've never been an optimist. I've always said, if we don't strategize future design, and ten years isn't enough, it should be more like 20, we should be trying now to understand what our lives will be like in 20 years, and try, as best we can, to look at the state of our inventions and ask, okay, well, no one's going to use this now, but they will in 20 years, so what can we do to prevent all of these sorts of harms that we're learning about now? One example is brain-computer interaction. And one of the interesting things for me is that brain-computer interactivity moves across the continuum, right? So there have been, for at least ten years, wearables that can read brainwaves or read affect, or there's biofeedback. But then we're proposing that sort of implanted tech based on our experiences with wearables. So you can often hear those inventors talking about, oh, it's going to be like my phone or an Apple Watch. Neuralink, for example, talks about its brain implants as the Apple Watch or the phone. So we have a model where we remediate old computing styles and then propose the future based on what we already know. But we have trouble asking those difficult questions about what our lives will be like. We understand the technology and its continuum, but we don't place social harms and social goods on that continuum and begin to predict those outcomes, even though we have many examples of how emergence happens. So I am obviously always very concerned, and I think that we need to not be afraid to make these projections.

Steven Parton [00:22:32] Yeah. Well, some of the things you mention in your work that could be diminished, or maybe are impacts that should be considered, are things like creativity, privacy, and our sense of self. Could you maybe speak to some of your speculations, or ways that you see this unfolding, if you were forecasting into that future?

Isabel Pedersen [00:22:55] Yeah, well, I can tell you about a case study I worked on called I-Mind, the I-Mind device. Five years ago, around 2017 and 2018, I started to base some of my case studies on brain sensors. These were sensors, for example, that sensed EEG, which became very much consumer devices; anyone could buy certain headsets. They would sense our biofeedback, and people would use them to help themselves relax, or for mindfulness, those kinds of things. But every time I see a biofeedback sensor, I think about how it's going to become much more invasive. So, for example, if it's sensing emotions, sort of affective computing, what happens when it can really interact with our private thoughts or our private feelings or our memories? To go through that, the case study was a media arts study, and we created a brain interface that let us choose digitized paintings based on our biofeedback. But what it did was reverse the paradigm: instead of simply being sensed, humans were choosing, using agency. How can we select a painting based on our so-called happiness, or our affect? One way that I work towards understanding future technology is to always try to imagine what the next step would be. And it was a really interesting media arts experiment, because people reacted with everything: they reacted with delight, they reacted with fear. We used a collection of paintings by Paul Klee, a large set of digitized paintings from the Met that were open to scholarly access. With the I-Mind interface, you'd put it on, you'd wear the headset, and suddenly paintings would appear matching your emotion, how you were feeling. We curated them by tagging them with four categories of emotion. And people finally got the argument I was making: this can be reductive, right? Can we really reduce people's emotions to these categories? Isn't that problematic? Aren't you reading my... you know, this device has determined, or is characterizing, how I'm feeling at this moment in time, which I saw as an invasion of privacy, of digital privacy. But the point of having this media arts project, and we showed it in Japan and we showed it in the United States, was to get people to begin to think: okay, what is this tech? It might only be a sensor now, a sensing technology, but what happens when it becomes controlling, when I can use it to control something about my environment, in my creative context? And that led me to a conclusion. For example, if you're wearing that sensor in your own house and you're able to change the paintings in your room according to your mood, that might be sort of liberating, a wonderful experience, and one that would help people see paintings they don't get to see. All of these paintings aren't even on view; that's the other thing. Some art is digitized because we're only able to access artworks that are actually hanging in a museum, so much of that content is stored in warehouses, storehouses. So what I liked about the project was that we could enable people to have a different cultural experience. But at the same time, I-Mind was very much an ironic play on words: I mind; you were reading my mind. I mind that my embodiment has been... you know, you're negotiating with my brainwaves without my permission, without me really understanding what that relationship with technology is. And it was very much a precursor to AI, which will probably now enable those kinds of technologies to actually happen.
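As a rough illustration of the pipeline I-Mind describes, here is a minimal sketch assuming a toy valence/arousal rule; the interview mentions four emotion categories but does not name them, so the labels, painting names, and classifier logic below are all placeholders, not the project's actual model.

```python
import random

# Placeholder labels and collection: stand-ins for the project's four
# emotion categories and the tagged Klee paintings.
PAINTINGS = {
    "calm": ["painting_01", "painting_02"],
    "joy": ["painting_03"],
    "sadness": ["painting_04"],
    "fear": ["painting_05"],
}

def classify_affect(valence: float, arousal: float) -> str:
    """Toy valence/arousal quadrant rule standing in for the headset's
    classifier -- the reductive step the project was designed to question."""
    if arousal >= 0.5:
        return "joy" if valence >= 0.5 else "fear"
    return "calm" if valence >= 0.5 else "sadness"

def select_painting(valence: float, arousal: float) -> str:
    """Display a painting tagged with the sensed emotion category."""
    label = classify_affect(valence, arousal)
    return random.choice(PAINTINGS[label])

print(select_painting(valence=0.8, arousal=0.2))  # likely a "calm" painting
```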

Steven Parton [00:27:25] Yeah. I mean, you're touching on something here I've been thinking a fair bit about lately. I recently talked to Chris Ryan, who wrote a book called Civilized to Death, and he was talking about how we're in zoos of our own creation. But the thing that worries me about being in a zoo of our own creation is that I don't think humans usually know what's good for them, right? I think we design worlds around us that maybe aren't always the healthiest. So I wonder if you see this as maybe a step in that direction, so to speak, where maybe it would be good for us to be exposed to paintings that don't match our emotional resonance, because that would make us more emotionally robust? Or, you know, I don't know. Is that something that you think about, that relationship with novelty and kind of being pigeonholed into something?

Isabel Pedersen [00:28:17] That's very interesting, and I haven't read that book, but I really like that metaphor, and I agree with it. For I-Mind, we changed the algorithm at one point and did exactly that; we actually made it the opposite. And the reason we did that is we found that most people were calm, and seeing very calm paintings rather than anything else, right? People come to your trade show booth and look at your invention, and they're all in that sort of lighthearted mood. But yes, absolutely, we weren't able to tap the next frontier, which would be the full range of human emotion. And I mean, that was also the point: these technologies are reductive. To classify something as phenomenal as human emotion into these basic categories is so problematic. And we finally did have some experiences with that when we showed it to people who are not necessarily Luddites, but who don't want to see our techno-futures involve technology to this extent. We would visualize the paintings on several screens in the room so other people could see the emotions of one person, and finally we did see it firing what would be paintings inspired by wars or inspired by tragedy, those kinds of things, and realized that if people were experiencing this in their own home, the full breadth of their emotional output would be quite different than it was in the settings we had. And we started to work on other emotional output, like fear. We had a project called Fearmonger and imagined: what if an AI chose horror films? What if an AI chose to modulate how much fear someone was experiencing in a film situation? We had variations on this using biofeedback, galvanic skin response, and heart rate, with participants who were part of this performance, to sort of increase and decrease fear for the person wearing a virtual reality headset. And again, what we were trying to get at is, you mentioned agency earlier, what if you take agency out of the hands of the person in a cinematic situation, and they lose control over deciding to go to a specific film or watch it in a specific way? What if an AI decides how we'll experience a horror film? And it was interesting. I personally don't know much about horror film, but we had a curator from Trent University whose Ph.D. was in horror, and he chose the kinds of clips that we used to do this.
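Here is a hedged sketch of the Fearmonger idea as described: a control loop where the system, not the viewer, picks what comes next from biofeedback. The fear score, signal normalization, weights, and threshold below are invented for illustration; the interview does not describe the project's actual model.

```python
from dataclasses import dataclass

@dataclass
class Biofeedback:
    heart_rate_bpm: float     # heart-rate sensor
    skin_conductance: float   # galvanic skin response, arbitrary 0-10 scale

def fear_estimate(signal: Biofeedback) -> float:
    """Collapse the two signals into a rough 0..1 fear score. The equal
    weighting is an assumption made for this sketch."""
    hr = min(max((signal.heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    gsr = min(max(signal.skin_conductance / 10.0, 0.0), 1.0)
    return 0.5 * hr + 0.5 * gsr

def next_clip(signal: Biofeedback, target_fear: float = 0.6) -> str:
    """The AI, not the viewer, decides: escalate when the person is calmer
    than the target level, back off when they are more afraid than it."""
    return "more intense clip" if fear_estimate(signal) < target_fear else "milder clip"

print(next_clip(Biofeedback(heart_rate_bpm=72, skin_conductance=2.0)))   # escalates
print(next_clip(Biofeedback(heart_rate_bpm=130, skin_conductance=9.0)))  # backs off
```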

Steven Parton [00:31:21] Given what you've seen, do you feel like people would actually be willing to take the next step in this continuum? Because, you know, we probably don't want to go too deep into the political realm, but vaccines, for example, are, I mean, so minimally invasive in terms of what they do to the body, I mean, there's a big argument there. But my point is that people were not excited about vaccines. People are very distrusting of science these days. They don't like the idea of anyone putting anything in their body, even if it potentially means keeping them healthier or giving them some advantages. Do you think people are going to be willing to actually take on something like an implantable or an interface and let this technology get access to those parts of themselves? Because this is much different than carrying something around, right, like a phone. How do you feel like humanity is going to respond to that level of advancement?

Isabel Pedersen [00:32:21] Yeah, you know, so many answers come to mind, because I think this could be answered in a three-volume set of books. This is the question: will people adopt? I mean, if you keep framing implanted brain-computer interaction as if it is a phone, then people might. These inventors are big tech inventors. Elon Musk, as I said, often refers to the future of brain-computer interaction, or sort of neurotechnologies, as on that continuum, and says, well, this is an inevitable and imminent future, which it isn't. But using that rhetoric helps people; it helps them not be as fearful for the future, right? They'll say, well, if it's just the same as carrying a phone, and it's the same companies I trust as legitimate, then they might. I mean, the paradox is always that even though people understand, or are starting to question, their privacy or their concerns over it, they typically still adopt many of the technologies despite those concerns. But I also think the other rhetoric that you hear is the counter-fear, right? That if we don't adopt, then AI will be superior to us and we will fall behind; we will become obsolete. And this is, for me, a transhumanist rhetoric that obfuscates our ability to question some of these technologies and ask difficult questions at this point in design, so that we don't have to adopt without understanding what we're adopting. Having said that, one of the other big issues in the medical space is that there are always implications; I think they're called dual-use technologies, right? You could adopt a technology that will help people in a very specific way. For example, if people have difficulties or challenges and we need an assistive technology that might help them, say, regain a mobility that they choose to regain, then if you suggest that it would also be good for a mainstream audience, or say everyday users, it could become adopted before we have a chance to ask questions about harms for that other audience. So what is good for one group might be very bad for another, in very reductive terms.

Steven Parton [00:35:10] I mean, speaking of the medical uses there and the ways that we could aid people, I think both of us would agree that there are benefits here. There are things that we can see that would be really beneficial, especially medically speaking. So could you maybe talk a little bit about the ways that embodied computing could improve health, or maybe even stuff that is more focused on productivity, like efficiency, focus, creativity, and these things?

Isabel Pedersen [00:35:38] Yeah. So I would always start by saying that the determination of whether you need or should adopt a technology for a medical reason should be the person's decision, right? The agency should always lie with the person who's choosing to adopt it or adapt to it. I think sometimes technologies are proposed for a group and sort of foisted on them. So I would start by saying it has to be whether the person wants to adopt it. But in Embodied Computing, which was co-edited, not co-authored, by Dr. Andrew Iliadis and me, I wrote a chapter on body area networks, and body area networks are the next phase of wi-fi. What's fascinating to me is that it would combine carried, worn, and implanted technologies, and it would provide a sort of safer, more secure wi-fi, far beyond what we would have with Bluetooth, and enable, in very simplistic terms, your implanted devices to talk to each other and send information to external actors, doctors, hospitals, and maybe help with diagnosis, or help with remote patient monitoring, or enable us to finally let some of these technologies converge in a way that would be beneficial to us as patients. However, at the same time, you can imagine the double-edged sword: we could lose agency, right? We wouldn't be able to decide. What if you don't want one of those external actors to know about your heart condition, I'm using this just anecdotally, or to know that your energy level is significantly diminished? Could this affect you as a worker, affect you in your job? But this technology was proposed more than ten years ago, and it hasn't happened yet. So it's very much a historic design, a historic idea; people are using the standard, or trying to use the standard, but we haven't seen it emerge for us. We don't use this. And, you know, it's fascinating that it moves through the body, not just over the body. And so it could be very secure, right? If you're passing information through the body, it could be harder to steal; it could be a safer way to exchange information. And it could connect. I mean, I think of my own invention, the Fearmonger device. What if, you know, affect and those kinds of signals were connected to heart rate, all of it connected within the body, exchanging information? You could then really have more personalized and enjoyable experiences. Having said that, it's ripe for corruption by actors who would manipulate that.
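To illustrate the agency question in a body area network, here is a minimal consent-gated sketch. It is a conceptual toy under our own assumptions, not an implementation of any actual BAN standard, and the device and recipient names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BodyAreaNetwork:
    """Toy model: devices on, in, and around one body publish readings,
    and nothing leaves the body without the wearer's explicit consent."""
    readings: dict = field(default_factory=dict)
    consented: set = field(default_factory=set)

    def report(self, device: str, value: float) -> None:
        # e.g. an implanted cardiac sensor publishing onto the in-body network
        self.readings[device] = value

    def share_with(self, recipient: str) -> dict:
        # The agency check: the wearer, not the network, decides who sees data
        if recipient not in self.consented:
            raise PermissionError(f"{recipient} not authorized by the wearer")
        return dict(self.readings)

ban = BodyAreaNetwork(consented={"my_cardiologist"})
ban.report("cardiac_sensor", 72.0)
print(ban.share_with("my_cardiologist"))  # allowed: remote patient monitoring
# ban.share_with("my_employer")           # would raise PermissionError
```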

Steven Parton [00:38:53] When you say medical, then my brain immediately goes to regulatory, and I guess I feel like I know the answer to this question before I ask it, but have there been any legislative policies, regulations, anything you've seen at all, that has made an attempt to have some kind of restriction or limitation or ethical consideration around these technologies? Like, is there anything on the table right now to really help us not shoot ourselves in the foot?

Isabel Pedersen [00:39:27] There are. In Augmentation Technologies and Artificial Intelligence in Technical Communication, the current book, we dedicate a chapter to starting to look at governance, regulations, policy, and standards, to chart what is happening internationally and locally. Augmentation technologies are really incredibly popular, I think for all the reasons we've talked about: people are starting to think, could I live longer? Could I remember everything? Could I remember more? Could I be better at it? That is the discourse that is driving it, and the question. There are certain jurisdictions, like Illinois, that have done more work in trying to protect people's privacy and biofeedback. And bioethics obviously overlaps with AI ethics, right? If you're regulating AI, you should also be regulating there; you should make sure that they're overlapping, because the technologies will overlap. I'm not an expert in legal studies by any stretch, but I find generally that it comes out after the fact, right? It's difficult to talk about regulation for a technology that hasn't been adopted yet, if people aren't using it. And I actually think where we need to begin, to a certain extent, is university-level research ethics, because so much of our research, it does happen in government research enterprises, but it also happens at universities, and many of the inventors I've followed for the past 20 years, I started following them in the universities. If we could get this as part of a research ethics agenda, asking, okay, well, what are the social harms? What happens if there's discrimination, or could be potential discrimination, in what you're inventing? Then I think there would be better integration between social science and the humanities and science and technology. And that divide, which keeps getting worse, I think this is one of the reasons for it.

Steven Parton [00:42:00] Yeah, it's funny you say that. I'm laughing just because I feel like I hear that more and more no matter who I talk to: one of the big complaints is the separation between the humanities and the STEM people. It seems to be a growing problem.

Isabel Pedersen [00:42:14] Yeah. 

Steven Parton [00:42:15] Well, given your research, given your understanding of the cultural influences of language and our stories, I'm going to ask you kind of a big question here as we near the end, which is: what things are you looking forward to? Is there something that really stands out as promising? Is there something that stands out as really negative? Or is there maybe something in the gray area in between that you feel is simply not getting the attention it deserves? You can tackle each of those or one of those, take your choice. But how do you feel about these paths forward?

Isabel Pedersen [00:43:01] Yes, that is a big question. I mean, I see myself as a researcher who stands back and looks at technology for better or worse. I wouldn't say that I'm ever a cheerleader for technology. I'm always a person who's going to look at it and analyze it and maybe predict what's going to happen, but not as an enthusiast, not celebrating it. I'm saying that because, if brain implants do occur, a lot of my research, 20 years of research publications, has been talking about it. And that would be a complete shift, right? A paradigm shift in everything: the way that we use technologies. It'll change work, entertainment, medicine, and also, you know, could change harms against humanity, all those things. So I'm curious to know if that will actually happen in my time as a researcher, and if I will have the opportunity to weigh in on it when it actually happens. I would be very excited for that, because I've invested so much in researching around it. It would fascinate me, and I think I would have a lot to contribute to that complete paradigm shift in computing.

Steven Parton [00:44:46] Yeah. Let's throw out a random number here. By 2030, what's the likelihood that there's at least a subset of people who adopt a BCI, a brain-computer interface?

Isabel Pedersen [00:44:59] So, non-wearable, right? Because wearables will, I think, be very popular BCI devices, since you can take them off, right? Not being able to take it off is the huge separator between the two. But having said that, I mean, the only way to achieve some of these really weighty, ambitious claims would be to actually implant it. Remarkably, I always think these kinds of things will take longer than they do, and that's probably because proprietary trade secrets are not really revealed in any way that I could follow. But yeah, I mean, I think I would be comfortable saying that by 2030 there will be users. Yeah.

Steven Parton [00:45:46] Yeah, fair enough. Well, Isabel, any closing thoughts? I want to give you a last chance here, before we officially shut down the conversation, to speak to anything that came up that you would like to mention again, or something that you would like to promote, maybe a study you're working on that you need volunteers for, anything at all that you'd like to talk about here at the end.

Isabel Pedersen [00:46:10] Well, we did release our book, Augmentation Technologies and Artificial Intelligence in Technical Communication: Designing Ethical Futures. That was with Routledge, coauthored with Ann Hill Duin. And I'm always plugging my research institute, the Digital Life Institute, which clusters all these different researchers, multidisciplinary collaborations amongst people, to help with our digital life futures and the ethical deployment of these technologies. So that would be the only thing. And just to thank you.

the future delivered to your inbox

Subscribe to stay ahead of the curve (and your friends) with new episodes and exclusive content from the Singularity Podcast Network.
