
Thinking Freely in the Age of Neurotechnology

May 22, 2023 | Episode 102 | with Nita Farahany

description

This week our guest is Nita Farahany, a Distinguished Professor at Duke University where she heads the Science, Law, and Policy Lab. The research she conducts in her lab specifically focuses on the implications of emerging neuroscience, genomics, and artificial intelligence; and, as a testament to her expertise, there is a long, long list of awards and influential positions she can lay claim to, including an appointment by Obama to the Presidential Commission for the Study of Bioethical Issues.

In this episode, we explore Nita’s recent publication, provocatively entitled, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. This takes us on a tour of the current neurotechnology that exists, the upcoming ways in which this tech will be integrated into our daily products, how it will shape our decision making, the profound list of ethical considerations surrounding cognitive liberty, and much more.


See more about Nita at nitafarahany.com or follow her at twitter.com/NitaFarahany


Learn more about Singularity: su.org

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

transcript

The following transcription was created automatically. Please be aware that there may be spelling or grammatical errors.

Nita Farahany [00:00:00] In a world in which even your thoughts can be accessed by corporations, can be bartered and sold, mined and used, and can be subpoenaed and accessed by governments, it's a world that I think will be very difficult. And so I think the risks are profound. I think they are more profound than any other technology that we've ever encountered, because it really does get to the core of what it means to be human. 

Steven Parton [00:00:37] Hello, everyone. My name is Steven Parton, and you're listening to The Feedback Loop by Singularity. This week our guest is Nita Farahany, a Distinguished Professor at Duke University, where she heads the Science, Law, and Policy Lab. The research that Nita conducts in her lab specifically focuses on the implications of emerging neuroscience, genomics, and artificial intelligence. And as a testament to her expertise, there is a long, long list of awards and influential positions that she can lay claim to, including an appointment by Obama to the Presidential Commission for the Study of Bioethical Issues. In this episode, we explore Nita's recent publication, provocatively entitled The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. This takes us on a tour of the current neurotechnology that's available, the upcoming ways in which this tech will be integrated into our daily products, how it will shape our daily decision making, the profound list of ethical considerations surrounding cognitive liberty, and many, many more topics. I'll boldly state now that if this is a subject that you're interested in, you're going to be hard pressed to find someone with more expertise and who is more articulate than Nita. So without further ado, please welcome to The Feedback Loop, Nita Farahany. So I think to start, the best thing to do is to be a good philosopher and start with the heart of any discussion. How do you define neurotechnology? What is it? 

Nita Farahany [00:02:21] Yeah, it's a great question because, I mean, a lot of things could be considered neurotechnology. If I were to define it as broadly as anything that could access or change a brain, that would be pretty much the entire universe. So we'll narrow it to the universe of technologies that are really designed to image and decode the brain, or to change it more directly through stimulation and otherwise. And I'll give examples of those. Functional magnetic resonance imaging is the biggest and bulkiest of the lot: a giant MRI machine in which a person goes into the machine and their brain is imaged to look at blood flow changes as they engage in different activities of thought and communication and movement and everything else. Oxygen is consumed in different portions of the brain, and that can be detected and imaged. There's also functional near-infrared spectroscopy, which also looks at hemodynamic, or blood flow, changes in the brain using near-infrared light, and that can even be worn in a wearable device. There's also EEG technology, which is electroencephalography. As we engage in any kind of brain activity, neurons are firing in our brain, and as hundreds of thousands or millions of neurons fire, they give off electrical discharges that can be picked up, based on their amplitude and frequency, in what are called brainwaves. And using sensors that are applied to the scalp or inside the ears, that activity can be picked up. All of this can be decoded using software that is powered by artificial intelligence classifiers, machine learning, or generative AI. And the last is EMG, or electromyography, which is something that can be used to detect just muscle twitches and changes throughout the body, or it can be used in ways that pick up electrical activity from the brain; for example, worn on the wrist, it picks up motor neuron activity. As a signal is sent from the brain down the arm to the wrist, that motor neuron activity produces tiny electrical changes at the muscular junctions, and those changes can be used to interpret things like your intention to move or to type or to swipe. So those are the broad classes. It also includes implanted neurotechnology, since all of those sensors can be implanted, and it includes things like transcranial direct current stimulation that can change the brain. That's not the full universe of all of the technologies, but it gives a sense of the classes of technologies which are designed to image or directly change brain activity. 
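To make the decoding step Nita describes a bit more concrete, here is a minimal sketch, in Python, of the kind of pipeline an EEG product might run: slice the signal into windows, compute band-power features, and train an ordinary machine-learning classifier on labeled examples. The sampling rate, channel count, and the two "states" are illustrative assumptions, not details of any real device; the synthetic random data stands in for actual recordings.

```python
# A hedged sketch of EEG state classification: band-power features + classifier.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate in Hz, typical for consumer EEG
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(window: np.ndarray) -> np.ndarray:
    """Mean spectral power per frequency band, per channel."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)  # psd: (channels, n_freqs)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)

# Synthetic stand-in for labeled recordings: 200 two-second, 4-channel windows,
# each labeled 0 ("mind wandering") or 1 ("focused") purely for illustration.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 4, 2 * FS))
labels = rng.integers(0, 2, size=200)

X = np.stack([band_power_features(w) for w in windows])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
# On random data any fit here is overfitting, not real decoding.
print("training accuracy:", clf.score(X, labels))
```

As Nita notes later in the conversation, classifiers like this are typically calibrated on each wearer's own recordings before they can decode anything useful.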

Steven Parton [00:05:07] Yeah, I mean, you gave us a nice range there, but I think when a lot of people think about neurotech right now, they're probably thinking of something like an MRI, something bulky and heavy, something that requires a lot of electrodes on the brain. How much of this is actually becoming small enough or reliable enough to be on a person, on an individual, kind of through their day-to-day life? Is anything like that really taking place that you could say is reliable? 

Nita Farahany [00:05:38] I think it depends on reliable for what. So the answer is there is a lot of neurotechnology which has been miniaturized at this point that can be worn in the form of fNIRS or EEG or even EMG, and it can pick up electrical activity in the brain, it can pick up the hemodynamic changes in the brain. There are still signal degradation problems. And by that I mean, you know, if you're wearing dry electrodes and you're wearing them in the form of something that's movable, you may pick up muscle twitches or eye blinks rather than just brain activity. There can be substantial interference with the signal as well. That's getting better and better, both in terms of the devices that have been developed, but also the filtering, using AI to be able to tell the difference between a muscle twitch versus an eye blink versus actual brain activity. You also have the problem of degradation of the signal through the very thick skull to get information that can be reliably decoded. So all of that is to say it's not perfect by any stretch of the imagination. Implanted electrodes are, of course, still going to give you a better signal than a wearable device, but there are very good wearable devices already on the market that can pick up at least averages of brain activity rather than precise regional brain activity. And those averages of brain activity can be reliably decoded to pick up things like attention or mind wandering, whether you're bored or engaged, happy or sad, your positive versus negative reaction to something, and your automatic reactions, like a P300 signal, which is a recognition-based signal in the brain. For, like, real-time thought decoding using something like EEG, I haven't seen any studies that can do that yet. But at the time that we're recording, a couple of weeks ago was the first time that there was a very robust noninvasive application of fMRI and fNIRS to be able to do real-time thought decoding. And the difference in that study versus other research to date was applying generative AI, GPT-1, to be able to do that decoding. In short order, I would expect the same kind of change in classification and application of generative AI to EEG data. I don't know what will be discoverable as a result, but I suspect it will be leaps and bounds beyond what we can do today, and that it won't take that long before we start to see it. 
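As a rough illustration of the artifact problem described above, here is a hedged sketch of the two crudest cleanup steps: band-pass filtering to keep the frequency range where most EEG rhythms live, and rejecting any window whose amplitude looks like a blink or muscle twitch. The cutoffs and threshold are assumptions for illustration; real pipelines use far more sophisticated methods, such as ICA or the learned artifact classifiers Nita mentions.

```python
# A minimal sketch of EEG cleanup: band-pass filter + amplitude-based rejection.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed sampling rate in Hz

def bandpass(signal: np.ndarray, lo: float = 1.0, hi: float = 40.0) -> np.ndarray:
    """Keep roughly 1-40 Hz; scalp muscle (EMG) energy sits mostly above this."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, signal)

def reject_artifacts(windows, threshold_uv: float = 100.0):
    """Drop windows whose peak amplitude suggests a blink or muscle twitch."""
    return [w for w in windows if np.abs(w).max() < threshold_uv]

# Demo on synthetic data: five clean-ish windows plus one with a blink-sized spike.
rng = np.random.default_rng(1)
wins = [bandpass(rng.normal(0, 10, 2 * FS)) for _ in range(5)]
wins.append(wins[0] + 300.0)  # inject an artifact-scale offset
print(f"kept {len(reject_artifacts(wins))} of {len(wins)} windows")
```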

Steven Parton [00:08:21] Yeah, well, I mean, obviously the impetus for this conversation then is your latest book, The Battle for Your Brain. Are these nascent and rapidly emerging technologies kind of what sparked that? Was there anything in particular that motivated you to think, I should probably write this book, this is getting serious or getting real quickly? 

Nita Farahany [00:08:42] Yeah, it's a good question. So I had been tracking neurotechnology for a long time, and its implications both for positive use cases, for enhancement, for, you know, improved communication, for the expansion of human capabilities. I'd also been studying the risks of interference with freedom of thought and mental privacy for a long time, and the particular application of it in the criminal justice system. But I wasn't that worried and was not motivated to write a book on it yet, even though I had written some law review articles, because the truth was I didn't see how it would become a society-wide issue; I thought it would still be limited in context. And then I saw this presentation in 2018 by somebody who was at CTRL-labs, and he was demonstrating a device that was different in form and different in kind. In form, it came as a watch, like a bulky smartwatch. And he started his presentation by saying, you know, we are phenomenal input devices as humans, our brains have extraordinary capabilities, but we are very inefficient output devices. It's very difficult for us to get what's in our brain out into the rest of the world. And he held up his hands and he said, you know, we're limited by these clumsy output devices that are like sledgehammers at the ends of our arms. And wouldn't it be better if we could operate tentacles, like octopus tentacles, instead of these clumsy hands? And then, you know, rather than, like, going backwards in time, which is what we're doing by typing with two thumbs on our smartphones, wouldn't it be better if we could type on virtual keyboards with our minds just by thinking about doing so, speeding up our rate of input using technology rather than slowing down our rate of thinking? And he was like, and this device is going to enable us to do so. It's going to enable us to do so by picking up brain activity as it goes from your brain down your arm to your wrist, picking up first, you know, typing on an actual keyboard, then typing on a virtual keyboard, then just thinking about typing or swiping, and decoding that information in real time. So it was different in application, in that most neurotech devices that I had seen to date had niche applications, to do things like meditate or, you know, simple neurofeedback. But it hadn't been described as an interface to other technology in the past. And the second is most of it looked dumb. It was, like, you know, a forehead band made of stiff plastic, a futuristic-looking device that nobody was going to go around their everyday lives wearing or using. And it was yet another thing you had to wear, as opposed to something that would be multi-functional. And so as he described it, you know, my thought immediately went to, of course Apple's going to buy this, and it's going to go into, like, the Apple Watch, and this is how it's actually going to go mainstream: not through these niche applications, but through the way we interface with other technology, and then the issues become profound across society. So I dove into learning everything I could about them. In 2019, just less than a year later, Meta acquired them for, I mean, half a billion dollars to a billion; somewhere in that range is all we know. And I was like, okay, this is getting real. 
Like, I've got to get on this, right? And as I looked into it, I realized, you know, that was just the tip of the iceberg. Like, there's, you know, embedding of brain sensors into earbuds and embedding of brain sensors into headphones. And, you know, the newest game really is the multi-functional devices that have brain sensors integrated into them, just like heart rate sensors are integrated into watches, just like the sensors in people's rings pick up sleep patterns and, you know, other activity. And brain sensors, of course, just make sense, both to pick up brain activity, but especially for the power to be able to navigate and interact with other kinds of technology. So that was my big aha moment that led to the impetus to write it. And then as I'm writing it, AI is getting better and better and better and better, which becomes a critical part of the story, right? It's not just that the sensors are getting better; it's that the interpretation of what you can do with them is infinitely better. And then I launched the book, you know, right as generative AI is launching. And it's like everybody who, you know, disbelieved anything in this space, I think, is over that now. They understand the implications of the moment that we're in, where AI can even decode what you're thinking. 

Steven Parton [00:13:26] That's a pleasant thought. On that note, I mean, what are the ethical considerations? Because you touched on how this is going to kind of sneak into the mainstream through these seemingly innocent consumer devices. But obviously, as you just said, we're talking about something here that can read your thoughts, potentially in a very serious way that might open the door for a lot of manipulation. So what are some of the ethical considerations that in your mind are most important? 

Nita Farahany [00:14:03] Yeah. So, you know, I would say, first of all, they're not going to just sneak in. I think people are going to willingly embrace them. You know, the risks will sneak in, and they'll be invisible to many people. But the choice is one that I think, just as people have chosen to integrate heart rate sensors and other smart sensors into their everyday lives, they'll make to integrate brain sensors. And I think many people approach the ability to finally peer into their own minds with fascination and excitement: you know, the ability to see their focus and distraction and boredom and engagement, and to track their brain health in the same way that they've tracked the rest of their physical health. I think they'll willingly embrace it, but I do fear that they'll be unaware of the significant risks of doing so. And that is, you know, a heart rate sensor picks up not a lot of information about you, right? It might pick up your heart rate. You could use it to interpret, like, how often am I exercising? You might be able to figure out things like heart conditions that I may be suffering from, but not, like, what I'm thinking or feeling, or, you know, literally the thoughts and images in my mind. The thing that is being decoded is fundamentally different in kind from other kinds of data that have been collected. And the risks of that are everything from, you know, it becoming the productivity measure, your brain and your attention used for surveillance in the workplace, where surveillance is already massively increasing, to, you know, risks of significant chilling of freedom of thought. So, you know, there are already all kinds of ways in which governments have interfered with freedom of speech across the world, all kinds of repression that people are facing. But at least they have the ability to entertain private thoughts, to be able to think freely despite whatever kinds of constraints they have otherwise. And I think that ability to think freely is so fundamental to human flourishing, so fundamental to self-identity, to intimacy and relationships between individuals and choosing what you share, and to the safety and security in society of knowing that you can have bad thoughts pop into your head, you can think dissident thoughts, you can think thoughts of freedom or thoughts of resistance, you can think about unionizing, or you can think about demonstrating against the government or protesting. In a world in which even your thoughts can be accessed by corporations, can be bartered and sold, mined and used, and can be subpoenaed and accessed by governments, I think it will be very difficult to be able to continue to be human. And so I think the risks are profound. I think they are more profound than any other technology that we've ever encountered. To the extent that people are having existential worries about AI, this particular application of existential risk should be one that they're paying particular attention to, because it really does get to the core of what it means to be human. 

Steven Parton [00:17:04] Yeah. And when we talk about reading thoughts, you know, when we say that phrase, how specific are we being? Because right now, at least, and maybe in the short term, I think of things like seeing if your sympathetic or parasympathetic nervous system is active, seeing if you feel disgust, seeing if you're relaxed, or, like you said, if your mind is wandering or you have high attention. How much more beyond that are you thinking we're going to get, and how quickly are we going to get to a point where it is something more tangible than maybe these more surface-level sensory interpretations, and actually down to, like, you are actually thinking about this specific sentence or activity that you want to do? 

Nita Farahany [00:17:51] It might be useful to differentiate here between voluntariness versus involuntariness. And I say that because, down to the specific thoughts, right now, based on the most recent research that has come out, if I want to go into an MRI machine with a researcher who wants to decode what I'm thinking, you can get down to, like, continuous language and imagined thoughts in my brain. And we're not talking about, like, this is my basic brain state, or I'm happy or I'm sad. It's literally, you know, you want to know what I've just imagined as my thought, and decoding the semantic content of that, maybe not word for word, but pretty close, because you get the meaning of it even if you don't get every word of it. Now, that needs to happen voluntarily, because you can effectively use countermeasures against it. And brain activity is unique in that you have to have a classifier that's actually trained on your brain activity for the classifier to be able to decode what you're thinking. So we can start there, which is: if I want to get into an MRI, it can happen today. If I want to use fNIRS, according to the researchers, it can too, though I don't know how portable the fNIRS devices they're talking about are, or how sophisticated the devices they used were; I haven't been able to get to the bottom of that yet. But I have a portable fNIRS device, and I wonder the extent to which the classifier could be used with that device, which I can use right now to track my focus level and prefrontal cortex activity and the extent to which I am more focused or less focused, mind wandering or not mind wandering. That's not thought, right, but that is a brain state. And those brain states can be pretty accurately measured today. Maybe a year ago I would have said, I don't think that EEG will ever get there, to thought decoding. I'm not going to say never anymore, because we just don't know what's possible with what can be detected, for example, through in-ear EEG, and the extent to which it may be possible to train a classifier using advances in generative AI to predict far more about thought. And if I'm right and people adopt these technologies in their everyday devices, the countermeasure of it having to be voluntary goes away, because you will train the classifier on your own brain activity, because it's useful to do so. It's useful to be able to do brain-to-text communication. It's useful to be able to type or swipe or turn on your lights or interact with the rest of your technology, and have your thoughts decoded to be able to do so. And one thing that was really interesting about that paper was they found that there is significant redundancy of language across the brain. It's not that you had to record from the entire brain in order to decipher continuous thought; they could pick just a region, and a different region, and a different region, and it was redundantly represented in each of those different regions. And so, if we go back to this idea of whether a portable device can decode, given that portable devices usually record activity from far less of the brain: if language is redundantly represented across the brain and you're just targeting a particular region, it's kind of like pick your region at random and you can still decode information. 
The possibility at least exists that in the near term it will be possible using portable devices to decode far more than they currently can, which is basic brain states. Right now, they can effectively be used, and have been effectively used, to decode brain states from attention and mind wandering, boredom and engagement, happy or sad, how you react to political messaging. Even PIN numbers and things like that have been decoded from the brain. 

Steven Parton [00:21:30] Wow. That's kind of incredible to believe, because it feels like the big gap between that sci-fi future and where we're at now is the fact that these headphones, for instance, wouldn't have the resolution to really grab that kind of data. But if that's something that you feel is possible, that makes it much more real, much more quickly, for most people. 

Nita Farahany [00:21:52] I'll say the dangers are here regardless of whether or not thought can be decoded. One of the myths I wanted to dispel in my book was when people say, okay, but you can't decode that much; you can't decode what I'm thinking. And I'm like, yeah, it's an even scarier world when you can decode what a person's thinking, but you can get pretty good approximations of thought even today with these technologies. You know, just as an example, I mentioned P300. This is an automatic reaction in the brain, like a recognition signal, that can be used to interrogate a person, and it has already been used to interrogate people, like criminal suspects, to see if they recognize crime scene details. So you show them different images that only a person who was either present at the crime scene or somehow involved with it would recognize, and the P300 signal can be used to determine whether or not they recognize them. Likewise, it can be used to determine whether you recognize a whole bunch of other information, you know, secret information held within you, that reveals things about you. And then your reaction to things can reveal biases. So, for example, in China, a number of workers are required to wear EEG devices, like a hard hat or a baseball cap, throughout their workday in order to track their brain activity for attention or fatigue. They also are being probed for their reaction to political messaging. So they're shown a series of things, like communist messaging, and then: are they positively reacting to it, or are they negatively reacting to it? By showing that information and then recording the automatic reactions that a person registers, you can start to determine with a significant degree of accuracy what their inward feelings are. Everything from the kind of dystopian workplace scenarios that are already playing out, to this dystopian thought-control idea where it's your inward feelings about political messaging that you're being punished for: that's a scary world that already exists right now. Go to the level of actually interrogating or decoding thoughts themselves? Even scarier world, right? But we should already be worried about cognitive liberty in a world in which all of that is already possible based on existing and very limited technological capabilities today. 
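For readers curious how a P300 "recognition" probe works mechanically, here is a minimal sketch: EEG epochs time-locked to each stimulus are averaged so random noise cancels out, and the mean amplitude in the window roughly 250 to 500 milliseconds after the stimulus is compared between probe items and irrelevant items. The timing window, amplitudes, and synthetic data are illustrative assumptions, not parameters from any study Nita cites.

```python
# A hedged sketch of P300 detection by epoch averaging, on synthetic data.
import numpy as np

FS = 256  # assumed sampling rate in Hz

def p300_score(epochs: np.ndarray) -> float:
    """Average the epochs, then measure mean amplitude 250-500 ms post-stimulus,
    where a P300 deflection would appear for a recognized item."""
    erp = epochs.mean(axis=0)  # averaging cancels activity not locked to the stimulus
    lo, hi = int(0.25 * FS), int(0.50 * FS)
    return float(erp[lo:hi].mean())

rng = np.random.default_rng(2)
t = np.arange(int(0.8 * FS)) / FS                # 800 ms epochs
bump = 5.0 * np.exp(-((t - 0.35) ** 2) / 0.002)  # synthetic P300-like deflection
probe = rng.normal(0, 2, (40, t.size)) + bump    # "recognized" stimulus epochs
irrelevant = rng.normal(0, 2, (40, t.size))      # unfamiliar stimulus epochs
print("probe score:", round(p300_score(probe), 2))            # clearly elevated
print("irrelevant score:", round(p300_score(irrelevant), 2))  # near zero
```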

Steven Parton [00:24:20] Yeah. Do you see this as a tool that's going to become a part of things like hiring or college applications, where they'll start having people basically wear these headsets or hand their brain data over as part of the application process, to see if they're a good fit for the company or the school? 

Nita Farahany [00:24:39] Yeah. So I think that it's already in some ways being used, not with headsets, but cognitive and personality testing has already become a mainstay in hiring. You know, if you look at a number of the companies that have been built based on neuroscience, they have built cognitive and personality testing tools that are integrated into AI-based hiring tools, whether that is through, you know, what was previously called Pymetrics, which has been purchased by Harver (and Harver does a whole lot of front-end AI-based hiring tools for employers), or HireVue, which is an AI-based hiring system that also has cognitive and personality testing. And when you integrate that not just with the battery of tests that are deployed, but also looking at things like micro facial changes and, you know, attention and other metrics that are being used, you realize that neurotechnology doesn't exist in a vacuum. It exists in relationship to a whole lot of other information that's being gathered about people. And in that world, where all of this information is being gathered to try to create very precise profiles of what a person is thinking and feeling, and in many instances to change what a person is thinking and feeling, I see it being a kind of critical part of hiring. Education, maybe, right? I mean, there seem to be greater protections both around children and around educational settings, so I think there'll be greater resistance to doing that kind of cognitive and personality testing there, although in some ways we already do it, right? If you think about the SAT and other standardized tests, they're designed to get at particular cognitive processes or cognitive capabilities. And so could you imagine, you know, integrating AI more into that, or trying to realign those different types of standardized tests to say, well, really what we're interested in is the following things, and maybe rather than the traditional pen-and-paper or computer-based test that is standardized in this way, we ought to update it with generative AI and these kinds of tools? I could imagine, you know, seemingly well-meaning approaches to do that. 

Steven Parton [00:26:47] Yeah. Well, I know you have a lot of ways in which you're involved in trying to get some policy changes, and we can talk about that in more detail a little later. But for now, what are some of the applications, I guess, that you are most concerned about? Is there a medium or an aspect of society or any particular niche, even though this is very wide-ranging, that you are most concerned about right now? 

Nita Farahany [00:27:17] All of it. 

Steven Parton [00:27:18] Yeah. That's kind of what I was picking up. 

Nita Farahany [00:27:20] I mean, I don't think that this really should be a piecemeal approach. I think we need to move toward recognizing cognitive freedom for people generally, and we need to move quickly in that regard; then we can take the piecemeal approach from there of applying it in context, in specific settings like employment, for example, where I think there's an urgent need to act quickly. But I would urge us not to wait, and not to take a piecemeal approach that does, you know, context-by-context whittling away, but really to take a more holistic approach to adopting a right to cognitive liberty. 

Steven Parton [00:27:56] So this is something I remember: I recently talked to another lawyer and AI researcher, Elizabeth Renieris; she was on the podcast. And she said one of her issues with how we're doing law is the fact that we're trying to create new digital versions of laws, where she says we should really just use the ones that are already there and maybe update the language to include these technologies. We already have the laws; we don't need new ones, we just need to update them. Do you agree with that approach? 

Nita Farahany [00:28:26] Yes and no. So I'll back up and say, you know, we have liberty as a recognized concept already, and many of our institutions are built on a concept of liberty. It's built on a pretty outdated concept of liberty, though, one that doesn't really consider how the digital age has fundamentally transformed a lot of things, including what it even means to be human anymore. Right? I mean, people are so much more integrated with technology these days, in ways that mean there is, you know, kind of a beyond-human world that we live in now. And so as we think about our existing laws, which all predated that world, many of them are antiquated. And yeah, we could go in and we could update them. But I'm proposing that we rethink what liberty means in the digital age, that we recognize that cognitive liberty is fundamental to human flourishing. And that means, like, a positive right to self-determination over our brains and mental experiences: to access them, to have self-ownership over them, and to change them if we want to do so. And there are a lot of laws that stand in the way of that. You know, take a simple one: in the US, the FDA is pretty good at being able to figure out risk-benefit for therapeutic classes (for example, you know, if this is the disease state, does this drug give more benefit than risk?), but enhancements they're terrible at. Right? Because how do you think about risk-benefit for enhancing your brain? It's very difficult to quantify at a societal level, and my value in enhancing my brain, and the risks that I may be willing to take on, may be very different from what another person's value is in that context. Psychedelics, right? Risk-benefit for everything within that space is a category that the FDA is just not well equipped to analyze. That's not what they were set up for. Most regulatory agencies weren't set up for a world in which we can expand beyond human, in which we can expand our brains and mental experiences well beyond what was possible before. So just updating existing laws that are stuck in a mentality of risk-benefit based on therapeutics, and are not designed to be able to think about what enhancement means, I don't think is going to be enough. I don't think that will solve the problem. I do think that in some ways, with what I'm proposing with respect to cognitive liberty, which is both the right to and the right from, we don't have to invent new laws in that space. We can take the existing human right to privacy and make explicit that mental privacy should be included within it. We can take the existing right to freedom of thought, which has primarily been applied to religion and belief, and ensure that interception, manipulation, and punishment of thoughts are included within the way that we think about it. We can take the collective right to self-determination, the political right of peoples, and say individuals have an individual right to self-determination as well. To do that, though, you need to recognize a right to cognitive liberty as an umbrella concept that gives you a holistic reason to update each of those existing rights. They're not going to just happen by themselves, in concert with each other; you need something that's an organizing principle. And that organizing principle, I believe, is recognizing a new right. 
And that new right is the right to cognitive liberty, as an update to liberty in the digital age. So, you know, I agree with Elizabeth. I think she's incredibly smart in the space. It's a mantra that a lot of lawyers say, like, we have all the laws we need. But not quite, right? I mean, we have the laws, but we don't have the organizing principles. And sometimes those organizing principles are crucial to getting the laws updated in a way that's consistent, across an umbrella concept that actually leads to that principle being echoed in all of the different laws. 

Steven Parton [00:32:29] So with that in mind, I mean, as a researcher, you understand the benefits that come from data and having access to this kind of information. But as this conversation emphasizes, there are a lot of risks. So what do you think the reconciliation or the balance there is in how we allow the collecting of this data? Should we just say, don't collect this data if you're a company or a government, or should we say it's allowed only in very specific cases? 

Nita Farahany [00:32:58] So, yes, but I think we need to flip the terms of service. I think that we have just gotten it wrong in the aggregation of data and the commodification and instrumentalization of people, and that is especially true when it comes to brain data, which is not yet collected at scale. I mean, there's commodification by the early players that has already occurred. One company is Entertech, a Hangzhou-based company that has collected millions of instances of brain activity data from people using their brain-based devices to do mind-controlled car racing or to, you know, meditate or engage in neurofeedback. And they've already entered into partnerships with other entities to be able to share that data. So, you know, I think we should recognize that there will be a strong temptation to commodify, use, and share the data, not just to improve products but to make all kinds of insights that violate our cognitive liberty, if we allow the same terms of service to apply. So we need to change it, and this is a good moment to do so. Which is to say: self-determination over our brains and mental experiences is rooted in a concept of self-ownership, going all the way back to John Locke, and self-ownership, as we understand it in this context, means I own my data. Now, I own my data, but the truth is I'm not actually going to do anything valuable with my data on my own, right? I may gain insights about myself, but the real value in data comes from the work of transforming that data through aggregation and gaining insights from it. And if we want to solve neurological disease and suffering, everything from neurodegenerative disorders to mental health and drug use disorders to, you know, depression, we need to aggregate data; we need aggregate data of people engaged in long-term, real-world, everyday activities while their brains are being monitored. If we design the system differently, that can still happen, right? Which is: I have my data, it's my data, I get to choose to share it, and I get to choose with whom and when I want to share it. And so there can't be a terms of service that says, to use my service, you have to give me your brain data. But there can be a mental vault that is created where I keep my brain data, and there can be third-party fiduciary organizations with whom I can share that data for specified purposes. Like, I want researchers and scientists to have access to this data, or I even want to license back my data to the company that is, you know, aggregating the data to improve their products or to customize products for me. It's a difference of putting people in control of the use of the data and recognizing that they own the data, but that the value from the data comes from sharing it, and that there are going to be good reasons we will want to share that data. But not everybody will, and they should have the choice not to do so. And I may not want to share all my data, right? I don't want to share my data while my brain is being monitored while I'm hanging out with my kids; just not interested, I want to have that as private time to myself. Or I want to have a nice moment of self-reflection about, you know, the evils of a government that I think is deeply problematic, and I want that to just be my data that I don't share. So it's more than just, like, here's all my data; there need to be ways of having more nuanced control over the data that is ultimately being shared. 
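The "mental vault" idea can be made concrete as a data structure. Here is a hedged sketch, in Python, of a default-deny permission model in which the person holds the records and grants narrow, revocable, purpose-limited licenses; every name in it (MentalVault, Grant, Purpose) is hypothetical, not an API from any real system Nita describes.

```python
# An illustrative sketch of purpose-limited, revocable sharing of brain data.
from dataclasses import dataclass, field
from enum import Enum, auto

class Purpose(Enum):
    MEDICAL_RESEARCH = auto()
    PRODUCT_IMPROVEMENT = auto()
    ADVERTISING = auto()

@dataclass
class Grant:
    grantee: str
    purpose: Purpose
    revoked: bool = False

@dataclass
class MentalVault:
    owner: str
    records: list = field(default_factory=list)  # the raw brain-data blobs
    grants: list = field(default_factory=list)

    def share(self, grantee: str, purpose: Purpose) -> Grant:
        """Record a narrow license; the owner can revoke it at any time."""
        grant = Grant(grantee, purpose)
        self.grants.append(grant)
        return grant

    def can_access(self, grantee: str, purpose: Purpose) -> bool:
        """Default-deny: access exists only under a live, purpose-matched grant."""
        return any(g.grantee == grantee and g.purpose == purpose and not g.revoked
                   for g in self.grants)

vault = MentalVault(owner="me")
license_ = vault.share("university_lab", Purpose.MEDICAL_RESEARCH)
print(vault.can_access("university_lab", Purpose.MEDICAL_RESEARCH))  # True
print(vault.can_access("device_maker", Purpose.ADVERTISING))         # False
license_.revoked = True
print(vault.can_access("university_lab", Purpose.MEDICAL_RESEARCH))  # False
```

The design choice that matters here is the default: under flipped terms of service, no access exists until the owner affirmatively creates a grant, rather than access existing until the user opts out.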

Steven Parton [00:36:14] Yes. And given that we've dragged our feet so severely on digital already in terms of our legal frameworks, how much... 

Nita Farahany [00:36:23] ...of a chance is there that it's different here right now? I mean, this is like one of the great things: everybody realizes we've dragged our feet, we've done it wrong, it's been terrible, and we have no momentum to do it otherwise. But this is a brand new category of data. It doesn't have to be that way, right? We don't even have to claw back all of our rights in every other context. We could just say: new category, special category, we're going to treat it differently. 

Steven Parton [00:36:46] But where does the pressure come from, I guess, is my question. Because, as we talked about before a little bit when I said "sneak," maybe using loose language about the technology sneaking in, what I was really alluding to is the fact that I don't think most people care, or are in a place, maybe socioeconomically or emotionally, to care right now. Where does the pressure come from to really change this, if a lot of people just think, hey, whatever, if you're going to give me a free product, you can read my brain, I don't care, I like free, free is good? Where do you feel the pressure is going to come from to make that zeitgeist shift? 

Nita Farahany [00:37:23] Yeah, it's a great question. So, you know, it's not going to be me alone, right? I can't convince everybody through a book, through talks, through engagement with other people that we should care about our brain data. We need collective action. And, you know, I'm hoping that I'm helping to spark conversations around that collective action. And I'm certainly not alone: there are efforts underway at the UN, UNESCO, other organizations, other scholars, nations worldwide who are starting to make movements toward this idea of special rights that might apply when it comes to the brain. They're not organized across each other. They're not working together; they're working, in some cases, at odds with each other, with conflicting ideas. But I think the next step is building the political will, right? And that's some of the work that I'm trying to engage in, which is to bring like-minded people together and to say, hey, there is an umbrella concept. We don't have to do it piecemeal. We don't have to do Whac-A-Mole with, like, you know, here's AI, and here's the metaverse, and here's immersive tech in general, and here's every other XR technology. I mean, we're just going to keep playing Whac-A-Mole; we're never going to win. But we could collectively, where there is political will in these other spaces, try to shape those conversations in ways that move more quickly. It seems like political will to regulate AI is gaining steam. Plugging this conversation into that one, to say, hey, wait, AI can also decode your brain, you know, is a good place to ride the wave of people recognizing that there are some unique risks that have arisen as a result of rapid advances in generative AI. 

Steven Parton [00:39:00] Yeah. Is there a specific policy or guardrail that you think we should focus on first to kind of get the ball rolling? Is it really just updating that cognitive liberty aspect? I know you're doing UN human rights work, I think, trying to get that. 

Nita Farahany [00:39:15] So, I mean, ideally, I think the right answer is to work at it at the UN level, to recognize it as an update to existing rights and update the concept of liberty in the digital age. I think working with companies to act as if we have cognitive liberty in product design is, you know, a promising approach; I've had a lot of companies express interest in trying to develop a kind of global collective around this idea, which I really like. A lot of VCs are interested in figuring out how to invest in cognitive liberty, which I like as a potential way forward. And I'd say probably the easiest place is mental privacy. Just simply recognizing a right to mental privacy is probably the lowest-hanging fruit that can be incorporated, because there is, I think, a growing recognition of the need for it. It's something that, you know, is almost a super easy fix. Like, yeah, we have a human right to privacy; obviously that must include mental privacy, right? And then that would already have a significant effect on, for example, employer use of these technologies, because the default rule would be: you can't be accessing brain activity data from employees other than for a bona fide legal exception, which has to be narrowly tailored to the purpose for which they're gathering the data. In which case, maybe you could do something like fatigue monitoring, but you can't also be mining the employee's brain activity for how they feel about the raise that you've just offered them. 

Steven Parton [00:40:44] Right. So have exceptions, but update the rule broadly at the start. 

Nita Farahany [00:40:49] Yeah, I think that's right. Exceptions are built in, right? Mental privacy, or privacy generally, is a relative right, so you'd have to build in exceptions. You just have to recognize that those exceptions would have to be sought by the employer, for example; they would have to prove that there is a reason to grant an exception in a particular case, and they would have to prove what that bona fide legal reason is. And there's a mechanism already in existing law for how you would define those kinds of exceptions. But the point is that would immediately give a default rule change in favor of individuals. 

Steven Parton [00:41:21] Yeah. And that would make those companies or any entity that wants to abuse this actually have to do paperwork and bureaucracy and leave a paper trail that basically says what they're trying to do. So it puts the power, I feel like, back in the hands of people in that regard. 

Nita Farahany [00:41:34] Exactly. Yep. 

Steven Parton [00:41:35] Well, we have a little bit more time here. I want to try to bring a little bit of optimism into it, I guess. Do you think... 

Nita Farahany [00:41:42] Have we not been optimistic? I'm trying to be optimistic. 

Steven Parton [00:41:45] Well, I think we have. I mean, it's very optimistic in a lot of ways, but maybe, like, extra optimistic: like brain enhancement kinds of things. What are some of the ways that you see us using this just in general to make ourselves better people? You talked a little bit about neurofeedback and some of the things we might do to streamline our interfacing with technology, but are there certain ways that you're really excited about how this could be used? 

Nita Farahany [00:42:12] Yeah, I mean, there's the simple to the more complex, right? And the simplest is we can better change our brains in ways that are enhancing. And I mean that through drugs that can improve the rate of processing of information, through drugs that make people more relaxed and happy and, you know, see the world through broader worldviews and lenses, and expand their capabilities and expand their mindset. That's exciting. We can also start to reveal to ourselves our own biases that get in our way. You know, people believe that they are not racially biased or not biased based on a person's sex or gender, and you can reveal to yourself, not just through, like, an implicit association test, but really, like, here's what your brain's response looks like in reacting to these different faces or these different images or these different messages. And that kind of self-discovery can be transformational for people, in ways that I think could help people really start to address and grapple with their own biases that get in their way. We can empower people to realize that, you know, it's incredibly costly to live in a distracted world. People think that they multitask. They don't; they just context switch. And context switching is deeply, deeply troubling for your brain. It slows you down, and people know that intuitively, but they can't see it. If you could literally see, like, this is how much it costs you in terms of your brain focus and attention to pay attention to that notification that just popped up on your phone, you would probably turn off most notifications on your phone and decide that when you're going to switch tasks, you will intentionally switch tasks in a more concerted way, rather than context switching constantly between different tasks. You can also do things like, you know, use some brain training platforms that actually speed up the rate of processing of information, executive functioning, reasoning, and memory over time. Those are all exciting, and the more validated they are, the more we can target and focus on ways in which doing so would be useful for individuals. I'm really excited about some of the ways that people can address not just ordinary mental health issues through these, everything from depression to other kinds of mental health disorders, but also potentially being able to address causes of suffering. So, for example, you know, I personally, through a traumatic loss in our lives, suffered from PTSD, and I used neurofeedback in order to be able to work through it. What wasn't available to me at the time is a very promising platform called decoded neurofeedback, where literally the specific brain activity that is activated when you are suffering from the most traumatic memories can be tracked, and then a game or positive associations can be used to retrain that brain activity without reactivating the memory. Traditional therapy for PTSD is exposure therapy, in which it is very difficult to have to confront the same memories over and over and over again; versus literally just playing a fun game that retrains your brain activity and enables you to overcome suffering. That, to me, is extraordinary and could be so incredibly helpful for people who are experiencing any kind of suffering. It doesn't have to be, you know, PTSD; it doesn't have to be at that level. You know, I think there's value to working through suffering. 
I also think there's value to not having to work through too much suffering, and to enabling people to live happier, healthier lives. I'm also excited about some of the futuristic possibilities, like brain-to-brain communication. So, for example, there was a study led out of the University of Washington where three different people in three different rooms played a Tetris-like game with each other. There were two senders in two different rooms who were wearing EEG devices, and they were able to visualize the entire game screen, including the falling pieces, and they had to decide if the pieces needed to be rotated or not to reach the bottom of the screen. A third person, the receiver, was sitting in a different room wearing an EEG headset and a transcranial magnetic stimulation device, and would receive the senders' thoughts about whether to rotate it or not in the form of a flash of light that would appear in their visual cortex, and then would make choices about whether to rotate the piece. The receiver couldn't see the entire game screen; they could only see the piece that was falling, and had to make a choice as to whether to rotate it or not. And in 16 different trials with three different groups of participants, they were able to achieve a very high rate of accuracy, I think in, like, the low eighties percent, which is much greater than chance. And that's, like, the simplest version of brain-to-brain communication. I think a world in which we could actually communicate with each other brain to brain would be incredible, both because we might actually be able to send, like, a full-resolution thought to another person, and because you might truly be able to empathize with another person. How many times have you tried to be like, okay, I'm trying to describe this to you, but I can't fully put it into words to help you understand what I'm thinking or feeling about this? To really be able to communicate with each other in that kind of full resolution would be incredible. Those are just some of the things I'm excited about. 

Steven Parton [00:47:33] Yeah, I see a whole new suite of party games opening up from this. Do you also think, and these are loaded terms, that this could help people, like, self-actualize or find some path to meaning? Right? Because I would imagine, if you can detect when someone has a higher level of attention or engagement, that perhaps people would be able to see what things they were more intrinsically motivated toward. Does that seem reasonable as well? 

Nita Farahany [00:48:04] Yeah, I mean, yes and no. I worry a little bit about kind of reductionist views of self, and so I'm sort of of a mixed mind about this. On the one hand, a neat application of this that has already occurred is giving somebody a neural headset while they view art, and then sharing with them, like, which ones were you most engaged with, or which did you seem to love the most? But is that really what you enjoyed the most? Kind of reducing it to these feelings, and then judging beauty and self-actualization based on external data, could shortcut that process of trying to identify what you're really motivated by. On the other hand, maybe it does reveal to you, you know, something important that leads to that important self-updating and self-actualization. So, you know, I think data can be incredibly valuable. Here's an example. I was on a long trip recently, and I got home and I was feeling really winded. Like, I thought, oh my gosh, I'm very short of breath, this is so strange. And, you know, I wondered, am I actually breathing shallower, or is there something going on? So I got a pulse oximeter out and I measured it, and I was at 100%, and I was like, okay, I guess I'm not short of breath, right? Maybe that's not what's going on here. And that allowed me to update my own thinking and to realize actually I was physically exhausted, right? This trip had actually just depleted me, but I couldn't place what the feeling was, and I was trying to rationalize what the feeling was with, you know, information just from my internal software. So external data can help you update and replace your thinking, and help you, you know, really check whatever internal software you're using to make those decisions. But we also have to be careful not to end up with, like, a reductionist view of self. So it depends on how we use it and what we do with it. 

Steven Parton [00:50:08] Yeah, there's going to be a lot of nuance in this conversation. And with that in mind, I think the best place to end here is just any closing thoughts. I want to give you a chance to leave us with any words of wisdom, any projects you're working on, anything you want people to pay more attention to, anything at all. 

Nita Farahany [00:50:27] Sure. So, I mean, this last question of, you know, self-actualization: this is where my work is going next with cognitive liberty. On the one hand, it's trying to work collectively and collaboratively with others to realize this right as a human right. But I think the second part of that is really understanding it as a theory of human flourishing in the digital age, and what that means and how we actualize it in our own lives. Because reclaiming cognitive liberty, in an increasingly distracted world where technologies can both enhance us and, in many ways, are designed to distract us from critical thinking, from deep self-reflection, from the ability to truly develop the kind of resilience and common sense and gut instincts that we otherwise historically have developed, is in many ways the missing piece of existing theories of human flourishing. You know, concepts of eudaimonia or happiness are really built on an assumption of having freedom of thought. And, you know, if you don't have that, if you aren't a self-actualized human being, if you don't have cognitive freedom, what does happiness mean? What is manufactured happiness, where you get it from little dopamine hits of like buttons, as opposed to through the true ability to have cognitive freedom to reach that level of happiness or enjoyment? And so that's where my work is going next: trying to build out cognitive liberty beyond the concept of the legal rights that we need, to what it means to cultivate it in ourselves. 

Steven Parton [00:52:00] Well, I think that's a perfect way to end. Nita, thank you so much for your time. 

Nita Farahany [00:52:04] Thanks for having me. I really appreciate it. 
