Mind-Reading Tech is Here with Nita Farahany [S4 Ep.09]
Hip-hop has always called out inequalities in America,
but even within it, nothing's ever been equal.
This season, the Louder Than a Riot podcast is tackling sexism, homophobia,
and all the unwritten rules that hold the entire culture back.
Listen to the Louder Than a Riot podcast from NPR Music,
or wherever you get your podcasts.
Have you ever wondered why we call French fries French fries?
Or why something is the greatest thing since sliced bread?
There are answers to those questions.
Everything Everywhere Daily is a podcast for curious people
who want to learn more about the world around them.
Every day, you'll learn something new about things you never knew you didn't know.
Subjects include history, science, geography, mathematics, and culture.
If you're a curious person and want to learn more about the world you live in,
just subscribe to Everything Everywhere Daily, wherever you cast your pod.
♪♪♪
But I still outplayed all the cards I was dealt
And when the going got tough, it's the man in the mirror who I called when I fell
And when the Judgment Day's comin', there's no place you can run, either heaven or the noose is gonna happen
So get your alibi straight when the soul's on fire, when the soul's on fire
A couple people said I should probably stay in my lane, stick to where I'm good, but I'm good at too many things
Coleman got more lanes than LA highways, so like Sinatra had to have it my way
The track you just heard is an excerpt from my brand new album, Amor Fati
You may remember the music videos I put out last year: Blasphemy, Straight A's, and Forward
This new album features all three of those singles plus seven brand new songs
Now I put my all into this project and it's a real representation of my passion for music
So if you want to listen to the whole thing, click the link in the description
Or search COLDXMAN on Spotify, Apple Music, or wherever you listen to music
Now back to the podcast
Welcome to another episode of Conversations with Coleman
My guest today is Nita Farahany
Nita is a professor of Law and Philosophy at Duke Law School
She's the founding director of the Duke Initiative for Science & Society
She is the faculty chair of the Duke MA in Bioethics and Science Policy and Principal Investigator at SLAP Lab
In 2010, she was appointed by President Obama to the Presidential Commission for the Study of Bioethical Issues
where she served until 2017
She's an appointed member of the National Advisory Council for the National Institute of Neurological Disorders and Stroke
And she is a past president of the International Neuroethics Society
And that is only a small slice of her bio
The topic of this conversation is mind reading
And I don't mean trying to guess what's in somebody's head
I mean actual technology that scans your brain and reliably conveys what you are thinking or feeling
Now this seemed like science fiction to me, but Nita convinced me in this conversation that this technology is already here
And there are a host of ethical questions relating to privacy and other things
So we talk about how EEG scans can give us information about our mind
We talk about the relationship between EEG scans and classical questions in the philosophy of mind
Such as consciousness as well as free will
We talk about uses of mind reading technology in criminal investigations which has already happened
We talk about current uses of mind reading tech in Chinese factories
And yes, that is already happening too
We talk about tattoos that can pick up your brain activity
And once again, that already exists
We talk about the combination of artificial intelligence and mind reading tech and what that promises for the future
We talk about how biometric and facial recognition tech is being used by airport security
We talk about whether excellent liars would be able to pass mind reading technology
And we talk about how mind reading tech has even been used to tell whether couples are in love
Like I said, I went into this conversation skeptical that we had ever developed great mind reading tech
And left basically convinced that mind reading tech is already here in some ways
And is certainly one of the most important and potentially dangerous areas of innovation in the world right now
So I really hope you enjoy this one. Without further ado, Nita Farahany
Okay, Nita Farahany, so good to be with you. Thanks for doing my show
Delighted to be here. Thank you for having me
So we were just talking about your background a little bit and how you avoided philosophy
As a younger person and later came to it in your PhD
And you have a neuroscience and genetics background too
So before we get to your book The Battle for Your Brain
Which will be out by the time this podcast comes out
Can you summarize your background a little bit: the science, philosophy, neuroscience, genetics, etc.?
Sure, so I started with a real passion for science and thought that I was going to go to medical school,
largely because I thought that if you were interested in science, that's what you do: you go to medical school.
So I was pre-med as an undergrad at Dartmouth, and I actually had a government minor
While I was there because I was very much interested in policy issues
I was also on the policy debate team but I intentionally avoided philosophy
Mostly because everyone on the debate team was into it, and I thought, no, no, no, I'm in, like, the real area
I'm in the science like the hardcore area
And yet every internship I took, everything that I did,
suggested I was much more interested in the philosophical issues that arise with scientific developments.
So I took some time off after college and did some strategy consulting, as one does after one of these liberal arts colleges
when you don't know what you want to be when you grow up.
And it was during that time that I started to do masters work figuring out what I was going to do for grad school
So I did a masters in biology focusing on neuroscience and behavioral genetics
And then decided to apply to grad school after that
And then did a combined JD and, ultimately, a PhD in philosophy,
Largely because I went to school to study the intersection of behavioral genetics, neuroscience and criminal law
So went very much with the intention in place that that's really what I wanted to explore and then here we are
Alright so the topic of this conversation is essentially mind reading
And the two aspects of your background neatly parallel the two major components that I want to discuss
One: is mind reading possible in principle?
Are we going to develop the technology to get there in our lifetime?
And if so what paths actually promise to get to that end zone in terms of having technology that can actually read your mind?
And then second half is assuming we can what are the ethical implications of real mind reading technology?
So I want to start on the first half. Do you think it's possible in principle to have mind reading technology?
What would that actually look like? And do you think we're going to get there anytime soon?
This as a fellow philosopher raises questions about what is the mind? Right? And what counts as mind reading?
So can we already decode some of what's in your brain? Yes.
And does that include things that people would say is their mind? Like if they're happy or sad or frustrated or angry or tired?
The answer is yes, we can decode most of those brain states already.
If what you mean by mind reading is literally like the inner monologue that you're having or the inner visual images that you have when you're in a moment of self contemplation and self reflection.
I think the answer is probably in our lifetime. We're going to be able to do that.
But the question is with what technology? And what the book really focuses on is technology that's here today,
which is consumer technology, wearable devices for mind reading, and those wearable devices are probably never going to get to the level of depth of being able to really decode your inner monologue.
But they're going to get some pretty good approximations of mind reading and certainly already can get some decent approximations of basic things that are in your mind.
So what is an EEG? When you and I think, when anyone thinks, they have neurons that are firing in their brain. These are the cells that fire and they give off tiny electrical discharges when they do so.
When we have different emotional states or cognitive states like you're tired, you're happy, you're excited, you're bored.
You have hundreds of thousands of neurons that are firing at once, and those happen in characteristic patterns and they give off characteristic patterns of electrical discharge.
And you have different, what we call brain waves, different rates at which those electrical activity can be picked up.
EEG, electroencephalography, is a way of picking up that electrical activity from your brain.
So they're basically just sensors. Most people who think about EEG, think about these big medical caps that are in hospitals where there's 128 sensors and it's like this giant cap you put on with a bunch of gel over your head with all these wires coming off of it.
That's clinical-grade, kind of old-school EEG. When I talk about EEG for the masses, what I'm talking about are little brain sensors.
So they're tiny electrodes that are embedded in things like earbuds or headphones, or tiny wearable tattoos that are worn behind the ear, unlike the previous generation of devices.
So would something that small be able to detect the EEG signal of your whole brain?
It can pick up enough to do what it is that you're trying to do task specific, right?
So the more electrodes and different spots on your skull, the more signal you're going to be able to pick up from different regions of your brain.
Most of these headsets, when people are wearing them, you're averaging information from different spots across the brain.
So it's not necessarily the case that more is always better, but having different placement across the skull can give you more information than you would get if you just have like a couple behind your ear or one in each ear, for example.
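To make that concrete, here is a minimal sketch, in Python, of the kind of band-power computation that underlies these attention and relaxation readouts. Everything here is an illustrative assumption rather than any vendor's pipeline: the signal is synthetic, and the sampling rate and band edges are just conventional values.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz; a common value for consumer EEG

# Synthetic "relaxed" signal: a strong 10 Hz alpha rhythm buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Estimate the power spectral density, then sum power inside each band.
freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
df = freqs[1] - freqs[0]
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}
for name, (lo, hi) in BANDS.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name:>5}: {psd[mask].sum() * df:.3f}")  # alpha dominates here
```

A real headset would run something like this per electrode and then combine the results across its handful of sensor sites, which is the averaging across the skull described above.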
So there's an old debate in philosophy. And I remember Hilary Putnam had some seminal paper on this. This is reaching a few years back in my memory, about whether mind states are synonymous with brain states.
This is something that used to be a very live debate in philosophy, which is to say, is a state of my mind just synonymous with a state of my brain?
So listeners to this podcast might take that for granted as being obvious. Like, of course, everything going on in my mind has a direct one-to-one correlation with something going on in my brain, and the mind is what the brain is doing.
But historically, most human beings throughout history haven't taken that for granted or believed that. That's a very modern notion.
Many have believed that a mind can be an effect of the soul or of some other immaterial substance. So in the past, this has been a big question in philosophy, and in analytic philosophy.
With the advent of EEG technology, is this going to move from the realm of a question for philosophers to debate to a question for scientists to just test?
Is this going to be answerable by empirical analysis?
So there's so many different levels on which we could take that issue. We could take it from the philosophical perspective of: is it possible to have a non-material soul that acts on a physical body, a non-physical emergent property that somehow is able to act on matter?
So far, we haven't found any way in which those things could happen. And maybe, right, even if every thought you have has a physical correlate in the brain that we can detect and map that doesn't exclude the possibility that there is also some metaphysical soul that exists and that
departs when a person dies, for example, or something like that. These things are not mutually exclusive. I think the question from a neuroscientific perspective is twofold. One is, can you find neural correlates of consciousness, for example, consciousness being the kind of mental experience of being and being awake and being able to self-reflect.
And then the second is your thoughts, the images in your mind, your feelings, your experiences, can those be decoded from the physical firing of neurons in your brain or the blood flow in your brain, right? It doesn't have to be just through EEG. There are other ways of picking up brain activity.
And so far, the answer seems to be yes, right? Even when somebody's dreaming, for example, the ability to use more complicated neurotechnology, kind of bigger technology that can look more deeply into the brain.
Even the visual imagery in the brain as a person has been dreaming has been able to be decoded and kind of recreated in visual images that we can look at on a computer screen.
Is that the full experience of being? Is that the full experience of thought that we've decoded? Maybe not, right? But is it enough that it gets at a whole lot of what a person would think of as their mind and their experience itself? I think so.
So let's dwell on that neural correlates of consciousness point. This is something that's very interesting to me. From my point of view, we have this fundamental mystery of why it is that a brain, which we believe is a set of cells arranged a certain way, which we believe is at bottom a set of atoms and subatomic
particles arranged a certain way, produces this fact of conscious experience in addition to the mere processing it does, right? A computer can play chess, my brain can play chess, a computer can speak words; increasingly, ChatGPT can sort of understand and create sentences.
So can my brain. But my brain has this extra thing, which is that it feels like something to be the brain. There's something that it's like to inhabit this body, as I assume there is for you and for everyone else listening.
So there's long been this idea in the philosophy of consciousness that there may be neural correlates of consciousness, which is to say things in your brain that correlate with the fact of conscious experience. So with an EEG, presumably you could find what the brain looks like when it's conscious and what it looks like when it's not, or you could look at the processes that are conscious such as me speaking right now
and differentiate those from the processes that are unconscious, like my experience of my food digesting right now. I have no idea whether that's happening. You have as much idea as I do about what's going on in my spleen right now, for example.
Yet those processes are also governed by my brain at some level. So there's been this idea that we can learn something about what causes consciousness by looking at the neural correlates of consciousness, which is interesting, but I've always been skeptical of it on one level, because even if we found the perfect pattern of, you know, what brain regions light up and what they look like when I'm conscious, that wouldn't really feel like an explanation
of like why certain atoms produce experience and certain firings don't. So would you agree with that or do you think we're going to find something deep about consciousness from this tech?
You know, I said a moment ago that it's not mutually exclusive, right? In the sense that, you know, in philosophy, there's been this idea of qualia, right? There's something about consciousness that's different.
That consciousness is an emergent property, right, that you can't reduce it just to the firing of neurons in your brain that there's something that is bigger than the pieces, right, that it comes together to form something that might be much more difficult to quantify.
So let me go to a different, related area where this has been vexing for me on a couple of different fronts. So one is, right now I'm a commissioner to something called the Uniform Law Commission, and the Uniform Law Commission has delegates from all of the states, and they take on projects to try to develop uniform laws across the United States, model laws that can then be adopted.
One of its more successful laws is the Uniform Determination of Death Act, and there's two parts to that. The second part of it is death by neurologic criteria, or brain death, as it's commonly known.
And there's been a very long debate about brain death since it was adopted in the United States and continues today tied up in this question of what does it actually mean, right? What does brain death mean?
And how would you even measure it if you could truly find it, right? Because maybe, as you said, you can see patterns of it in firings, in EEG, or patterns of it in fMRI, functional magnetic resonance imaging, which are the big machines that we've used to try to test consciousness.
We've seen it also in the creation of these little brain organoids that are human brain cells that have been developed and have started to show patterns that look like consciousness. We have the same question of, like, would we know if AI was conscious if we saw it?
EEG and consumer wearables is not going to solve this problem for us, right? These are big questions about the universe and the meaning of brains and the meaning of human life and experience and existence.
What we will see are the kinds of patterns that correlate with a state called consciousness, and with the loss of it, and it's much easier to see the loss of it in a being that we know has had consciousness.
So humans have had consciousness, they lose it, they have no firing of neurons in their brain anymore. You see the patterns that correlate with the loss of consciousness, but that still doesn't answer for us what consciousness is, why we have it, how it emerged, how we lose it, what the meaning of it is.
So let's talk about how realistic it looks to have this kind of mind reading, whatever that means, tech. I'm always astounded by the way in which certain problems seem easy to solve but are extremely difficult.
And other problems seem extremely difficult and end up being rather easy. So the example I've used before is putting a man on the moon as opposed to solving the common cough.
So, like, last year, I've told the story once before, I think, but last year I just had a cough that just would not go away for like six weeks.
And I went to the doctor, I was like, look, give me anything modern medicine has. I have a simple cough, I'm a podcaster, I need my voice, right? And he puts me in an x-ray machine, asked me a bunch of questions, and he said, well, I diagnosed
you with the common cough; go home and drink tea until it goes away. And I was like, no, give me every drug that works. He gave me every drug; it did nothing, right? And I just drank tea, and it went away in the end, right?
Which led me to the true fact that we have not reliably cured the common cough. And even things like dextromethorphan, like Robitussin and stuff, they don't actually work as well as advertised for a lot of people.
At least for certain coughs. For certain coughs, yeah. And yet we put a man on the moon and built ChatGPT and any number of insane technologies that I can't explain to you how they work because they're so complicated.
So, this always interests me because when I think of technology involving the human body, it turns out some problems are just way easier than they seem and some are way harder than they seem.
And problems involving the human brain, for whatever reason, and this is just a gut feeling for me, strike me as, like, wicked problems, like hard problems, and like things that might take 500 more years to solve than we thought, kind of like curing the common cough.
So, at first blush, I'm kind of skeptical of the notion that like we humans will figure out the brain well enough to know like this thought means X or this firing means he's thinking this.
You clearly have never tried one of these. Have you? No, I never have. I never have. So, correct assumption.
I should have brought you one to try out. Well, so first when we say, yeah, I think there are some aspects like there's so much happening in the human brain, right?
And there are some problems that we're not going to solve probably anytime soon. But there are some problems that we're going to solve.
Like it's another organ in our body, just like our heart, right? Just like other organs that we quantify all day every day.
And the idea that there's something so special and so unique about the organ that we can never figure out where speech comes from, where movement comes from, where your feelings are represented in your brain.
That, to me, takes a kind of hopelessness about the brain that we don't need to take, because it turns out that all those other advances that you were talking about, including generative AI and ChatGPT, have led to seismic advances in how much we can actually decode from the human brain.
There will still be some things we can't do for a while, maybe not even within our lifetime, right? So we may not solve Alzheimer's and Parkinson's; even if we have some better treatments, we may not solve all depression.
We may not understand every cause of mental illness and mental suffering in the human brain in our lifetime.
But that doesn't mean there isn't a lot we can already decode, and reliably decode.
And with something like generative AI, which can be trained on your unique brain patterns and then generate unique experiences for you: you're feeling joy,
here's your favorite song. You're feeling stressed, here's the song that generally chills you out.
And then it can see those changes in your brain, as your brain waves that indicate stress come down and your brain waves that indicate relaxation and meditation come up. Like, that's already possible today.
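As a rough sketch of that closed loop, here is what the logic might look like in Python. The two functions are hypothetical stand-ins, not a real SDK: read_stress_level() is assumed to return a stress estimate from a headset, and play() is assumed to hand a track to a music player.

```python
import random
import time

def read_stress_level() -> float:
    """Stand-in for a headset SDK call; 0.0 is calm, 1.0 is stressed."""
    return random.random()

def play(track: str) -> None:
    """Stand-in for a music player."""
    print(f"now playing: {track}")

CALMING = "the song that generally chills you out"
UPBEAT = "your favorite song"

for _ in range(3):  # a few passes through the sense-act-measure loop
    before = read_stress_level()
    play(CALMING if before > 0.6 else UPBEAT)
    time.sleep(1)  # let the intervention act on the listener
    print(f"stress before: {before:.2f}, after: {read_stress_level():.2f}")
```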
And so I think it's, you know, there are hard problems. Maybe you're imagining the hardest problems in brain and mind reading, but we can already do a lot today.
And even the fact that there's now miniaturization of brain sensors that can pick up a whole bunch of brain activity, and really sophisticated algorithms that have been trained on millions of data sets of brain activity and can actually decode what much of that means.
You know, one of the things that most people who have read the book so far, even neuroscientists are startled by is how many concrete real world examples I use about what we are already doing, what we are already capable of doing and the accuracy of those things.
I'm not claiming that your inner monologue and your complex thoughts can be decoded reliably with a couple of surface electrodes, but your intention to communicate speech, like your intention to type and your intention to speak,
that's getting better every day, the ability to pick that up. And that's a little different, right? So, there's lots of things that happen in our brain.
Some of them are intentionally communicated speech. Some of them are inner monologues that we have. Some of them are kind of fantasies and biases.
Some of them are emotions. Will we decode absolutely everything about the human experience within the next 10 years? No.
Will we decode a whole lot more from the human brain just like we have discovered so much more from other organs? Yes.
A perfect match? That's not just a thing in online dating; it's a thing in online business too.
A perfect match? Shopify.
Whether global player or small business: Shopify is the all-in-one commerce platform that lets you create an online shop in no time.
Shopify covers all sales channels and helps you reach new buyers,
with more possibilities and easier selling.
Test Shopify for free and bring your business on course for success.
Learn more at shopify.com.
I'm Kevin Miller, former pro-athlete, author, father of nine and self-help guide.
I broadcast the Self Helpful podcast from high up in the Colorado Rockies.
I'm a fan and critic of self-help,
and I invite today's most important influencers to grapple with their own wisdom
in an authentic conversation about self-help and what drives them.
With each guest, I conduct a four-part series, and these candid explorations
have been downloaded over 60 million times.
I invite you to join me on the Self Helpful podcast
as we elevate our personal experience of life
and the way we show up for others.
So your comment about being able to detect intentions
reminds me of the Libet experiment.
The Benjamin Libet experiment, which some people have criticized,
but can you describe what that experiment classically was seen to have found
and how it relates to free will and how this tech in general relates to our picture of ourselves
as conscious creatures that choose to do things when we choose them?
Sure, so you'll correct me if I get some of it wrong,
because now you're making me reach back and jog my memory of this.
I did write about Benjamin Libet's experiments a number of years ago,
and it's relevant to some of this.
No, it's been a few years for me too, actually.
So we'll piece it together.
Yeah, but the basic idea was this, which was he had in front of people
a left click button and a right click button,
and he said, whenever you feel the urge to press the left or right click button,
take note of the time, right, and then go ahead and click the left and right click button.
All the while he was measuring brain activity through, I think it was EEG that he was using to pick up,
and he was looking for something called the event-related potential, the ERP, which is the moment at which your brain registers the intention to move
versus when you consciously recognize the intention to move.
And what he found consistently, I think it was 200 to 400 milliseconds before people
recognized the intention to press the left click button or the right click button,
before the conscious experience of choosing, the brain had already registered the choice.
And the idea was that the conscious brain, I mean,
there's been lots of interpretations of this, right, which is like we backward rationalize the choice
the brain has already made, and conscious experience is really just a kind of trick,
an evolutionary trick where we think or experience real free choice,
but actually the choice has already been made.
Other arguments are, you know, there's a process and that process is like an automatic process
that then gives rise to consciousness and there's a gap between them.
But a lot of that experiment, that kind of pre-conscious signaling, has been used,
like when I say we can decode a lot of what you're thinking: that recognition memory, or, you know,
what's now called the P300 or the N400, these early pre-conscious signals, can be picked up reliably from the brain.
They're used for all kinds of things, from criminal interrogations to trying to figure out, you know,
whether a person experiences congruence between two facts, like if I, you know, were to say,
listing all of the prior guests you've had on the show and looked at your brain as I was naming each of them
to see if there was recognition of the name without you telling me whether or not they'd been on the show,
but I'm just testing for recognition, and I'm only using names, you know, of people who are not in the general public,
whom you don't otherwise know; they're just people that you might recognize from having had them on the show.
I could look at your brain activity to figure out recognition of those names, like: yes, that is, or no, that isn't.
And the N400 would give you confidence.
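For a picture of how signals like the P300 or N400 are pulled out of very noisy raw EEG, here is a toy Python sketch of epoch averaging on synthetic data; the waveform shape, noise level, and trial count are invented for illustration.

```python
import numpy as np

FS = 256  # sampling rate in Hz
rng = np.random.default_rng(1)

def simulate_trial() -> np.ndarray:
    """One second of fake EEG with a positive bump ~300 ms after the stimulus."""
    t = np.arange(0, 1, 1 / FS)
    p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))  # Gaussian bump
    return p300 + 3.0 * rng.standard_normal(t.size)          # heavy noise

# Single trials are unreadable; averaging 200 stimulus-locked epochs
# cancels the random noise and leaves the event-related potential.
epochs = np.stack([simulate_trial() for _ in range(200)])
erp = epochs.mean(axis=0)
print(f"ERP peaks at ~{1000 * np.argmax(erp) / FS:.0f} ms post-stimulus")
```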
Would that be like a lie detector test?
Kind of. I mean, people have called it a lie detector test.
So the significance would be a mismatch between my brain's recognition and what I say about the name?
Well, you might not want to tell me, right? If I want to interrogate your brain... now, I mean, I've given you an example where, like, this is public information, right?
I've given you an example of like you have, this is public information, right?
Oh, I see, right.
But, like, with criminals, that's different.
So it'd be the episode I recorded with Jeffrey Epstein that I never released.
Yes. And then I find out.
Yes.
All of that, right.
Exactly.
So you find that out, you know. Or even that, well, you just said you did a friendly interview with Weinstein, right?
Right.
Those two things in your brain, I'm hoping, that's your intent, register as incongruent.
No, that's right. I am, so, like...
There's no, there's no level of obvious joke that I can make.
You're right.
A recent presentation of mine went absolutely viral, which was a dystopian portion of the introduction to my book,
which, taken out of context,
made people really think that I was advocating for, like, neural surveillance of the masses.
No, quite the opposite.
It was like, here's the dystopia.
Right.
But yes, things can be taken out of context quite easily.
But no, so that idea would undoubtedly register in your brain as incongruent:
Weinstein and, like, good guy, right?
And you could do that.
I could test a whole bunch of pairings like that in front of you to see what your brain registers as being congruent.
Like those two things go together or those two things do not go together.
And then look at how your brain reacts to different pieces of information to even find out, for example, like, where did you bury the body?
Right.
In the lake, in the woods.
And right.
I mean, if you're a criminal suspect.
So it's a, it's not quite lie detection.
Right.
But what it is is probing your brain for memories and information.
I see.
So, so how would that practically be used in a criminal situation?
It already has been.
So the P 300, not the N 400, the N 400, as far as I'm aware is largely under development by places like DARPA, the Defense of the Bachelors Research Projects Agency.
So, the P300: there's a guy by the name of Larry Farwell, whom I write about in The Battle for Your Brain, and how more than a couple of decades ago he took early research like the Libet experiment that we were talking about.
And he was curious as to whether or not those kind of pre conscious signals could be used to interrogate a criminal suspect or, you know, a national security interest, you know, kind of interrogation for information in their brain.
And so what he wanted to do was to test for recognition of information that only the person who did the crime would know.
So, like, he goes through police files and finds facts that haven't been released to the public,
facts that nobody should know about except somebody who actually investigated the crime, or somebody who committed the crime, or was somehow involved in the crime, right? It wouldn't tell you they committed it.
They could have been a witness to the crime, for example.
And so he developed a series of probes, questions that would be based on those files, and then would set a person up with an EEG helmet or hat or, you know, sensors, and then show them a series of images or words on a computer screen, and then measure to see whether or not they showed
recognition through the P300 wave, whether you would detect that early pre-conscious signal in the brain.
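As a toy sketch of how scoring in a protocol like this might work, under the simplifying assumption that recognition shows up as a larger average amplitude in the P300 window for probe items than for irrelevant items; the epoch arrays and threshold below are synthetic placeholders, not Farwell's actual method.

```python
import numpy as np

FS = 256
P300_WINDOW = slice(int(0.25 * FS), int(0.45 * FS))  # ~250-450 ms post-stimulus

def mean_p300(epochs: np.ndarray) -> float:
    """Average amplitude in the P300 window over all trials (trials x samples)."""
    return float(epochs[:, P300_WINDOW].mean())

rng = np.random.default_rng(2)
# Pretend recognition adds a positive deflection to the probe trials.
probe_epochs = rng.standard_normal((60, FS)) + 1.0
irrelevant_epochs = rng.standard_normal((60, FS))

difference = mean_p300(probe_epochs) - mean_p300(irrelevant_epochs)
print("recognition inferred" if difference > 0.5 else "no recognition inferred")
```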
And he was, like, on the cover of Time Magazine; the CIA and a whole bunch of others, you know, the DOD, put huge amounts of investment into it.
It was used in a number of criminal cases.
And since then he sold the technology to a company called Brainwave Sciences.
Michael Flynn was on the board of it together with, I think, an ex-Soviet spy, like a KGB spy or something like that.
Anyway, they sold the technology and are still selling the technology as far as I'm aware to a bunch of governments around the world who are reportedly using it.
India does brain-based interrogation; so does Dubai.
So how reliable is that recognition? Like this is not quite a lie detector test, but it can imply that you're lying because if you recognize a picture from a crime scene, then if you had no other way of being there, how reliable is that recognition signal?
Is it a hundred percent? Is it 90%? Is it 50%?
So there's a lot of controversy over that very question, not about how reliable the signal is, but about how easily you can recreate or validate what he's doing.
So the science, which is that there is recognition memory, is very reliable, like high-90s reliable, in that it is an accurate signal of recognition.
The question is the creation of the probe, so that's more art than it is science. And so a lot of people, when they've tried to scientifically reproduce the results that someone like Farwell has done, have a harder time reproducing the results because it's hard to accurately make sure that there's no other reason that you recognize the information.
So for example, if I were to try to see if you recognize past people that you've hosted, you would recognize a bunch of names, not because you've had them on your show, but just because you recognize the names.
So what does that really tell me that I see recognition in your brain? But if you really get an excellent probe and you get down to the place where what you have is facts that truly nobody else should know, it's an accurate signal of recognition.
What you do with that, whether or not you could actually kind of take that to the bank, I think that's more controversial.
Yeah, so I mean, we have this principle in law of letting 10 guilty people go free, rather than putting one innocent person in prison.
I don't think that principle has necessarily actually been lived up to. And there are things like the Innocence Project, which are often finding out just how many innocent people have been put behind bars, especially for crimes like rape.
But yeah, it seems like, presumably, if this technology were in the high 90s of percent accuracy, it would really call on us to potentially make that kind of evidence admissible in court.
And it could actually potentially solve, especially these crimes where there often isn't evidence outside of what's known by the two people involved, the criminal and the alleged victim, right?
So would this have implications for say rape cases?
It could. You know, brain-based interrogation, the idea that there may be information in your brain, especially from kind of pre-conscious signals, things that you can't necessarily control or manipulate or change.
You know, as I said, they're already being used by police departments worldwide. There's, you know, even in Japan, a widely used and tested technique called the Concealed Information Test, the idea that you really can probe a person's brain and get some accuracy, some truth,
especially in cases where you have, you know, he said, she said, as in rape cases, where it can be so incredibly difficult to base it on credibility, and those credibility determinations are oftentimes flawed and biased, and juror determinations of credibility don't track the truth necessarily, right?
They may track the pre-existing biases or commitments that they have or beliefs that they have.
And so maybe if we got to a point where it was incredibly accurate, you may be able to resolve some of the hard cases.
Whether or not that's a good thing to do, whether that would be ethical, whether the benefit to society of being able to adjudicate those cases outweighs the risk of giving government access to brains to interrogate, that's, I think, a different question.
And, you know, I'd say for me personally, I think it's a dangerous road to go down, which is to give government access to brain interrogation.
And I think that the effect on freedom of thought and the chilling risks when used by authoritarian governments makes it a problematic technology to say that's a good way to get evidence.
Right. I mean, I think in wartime, we often find, even in America, which is a country with notoriously, famously strong protections for individual rights,
in wartime, you know, we found that our phones were being tapped by the government.
And when the government feels, even an American government, a government with constitutional disinclinations to do this, when it feels it's in an emergency,
it will try to seize whatever technology is available to intrude on people's privacy, get more information, and do things under the table that whistleblowers will then leak to journalists years after the fact.
So I would predict, even if even in a country like America, that this kind of technology could be used.
At the same time, you've given examples of it already being used in other countries, which have weaker protections for individual rights and individual privacy.
You've also given the example of it being used in a factory in China.
I think it was an electric car factory in China where...
Is that real? Yeah. Yeah, several, where you have basically hundreds of workers hooked up to EEGs to monitor their productivity, presumably so that if their brain signals correlate with whatever laziness looks like, that information is fed to the boss.
And this is like truly dystopian stuff.
So I guess my question is, what do you think... What conversation do you think we need to be having in order to minimize or prevent the worst possible uses of this technology by authoritarian regimes?
And I would say major corporations as well.
Definitely major corporations as well.
I mean, I worry about the commodification of the brain. I worry about the misuse of this in the workplace. I worry about the misuse of it in schools and in everyday settings and the normalization of neural surveillance.
I think a starting place is people have to start to set aside their skepticism about the technology.
And I don't mean that to say, set it aside and embrace the technology.
I mean, I think most people's initial defense...
You mean, accept that it's coming.
Yeah, I think most people's initial defense to it is to say, that's not possible.
But instead, we have a moment, right? Not only is it possible, but pretty much all of the major tech companies have huge investments and acquisitions that they've made in neurotechnology in the past few years.
You know, whether it's Meta or Apple or Microsoft or Google, they all have made major acquisitions and investments into developing brain sensors that can be embedded into everyday technologies.
Later this year, there are several multifunctional devices launching, from earbuds to headphones, that have pretty high accuracy for the things that they're measuring: attention, focus, engagement, these kinds of things.
And so I think the starting place is for people to recognize that the technology has arrived, and it can do more than they think it can do.
And the second is, once you accept that jarring fact, to realize we have a moment: it has not gone fully mainstream yet.
It's being embedded into everyday technology, but rather than needing to wait five years from now and say, I'll worry about it when it's widespread.
We have an opportunity right now to set the terms of service as to how these technologies are used to make a decision about what the default rules will be.
And I've proposed a default rule, which is a right to cognitive liberty, a right to self determination over our brains and mental experiences.
And I spell that out in the book, both in terms of what the right looks like and what the updating of existing human rights looks like on the international stage, but also what it means to operate as if that right already exists in a corporate setting.
When governments are starting to use the technology for everything from biometrics to interrogations, what that means for the coming age of cognitive warfare, both informational warfare, but also intentionally trying to assault brains on battlefields and in ordinary interactions.
And so I think it's really, kind of: realize it's here, be part of the solution, and then start to interrogate the terms of service, demanding we have a say and a decision to make about how the technology is used.
So like pretty much every technology, this is going to have horrible uses and great uses. And we, I guess we've sort of talked about both of them, but I mean some of the great uses I can imagine are, for instance, therapy.
Yeah.
Right. Like, if you're a person who struggles with self-awareness and self-knowledge, which we all are to one degree or another.
Like, how much do you actually know about what you're feeling in every moment and why you're feeling it? And where do you work best? What time of day do you work best? When are you the most focused? When are you most creative?
These questions are obvious because you're experiencing you, but actually you can be wrong about what you're experiencing and certainly about why.
And knowing and having just like a readout of what you're actually experiencing can be very useful.
We also would end up treating brain health in the same way that we treat all the rest of our health, right? I mean, right now, you know, I'm a chronic migraineur.
You know, so I've had brain scanning done, particularly when I have, like, a really unusual pattern of chronic migraine that suddenly, you know, ends up looking like something different. Like, is it a brain tumor, or is it something else that's happening?
Most people have not. Most people have never had a brain scan. Most people won't. Many times, you know, the neurodegenerative disorders aren't picked up until too late. Mental illnesses aren't picked up until much later depression oftentimes is picked up once a person is suffering from severe depression.
You know, cognitive decline, sharpness, enhancement, all of these things. We treat it as if, you know, there's like a black box. You don't get to know anything that's happening with your brain activity and health.
And if you had devices in your own hands that both helped you to know a whole lot more about yourself, your biases, your work habits, a lot of things that you think you know, but you're wrong about.
And you could track your own brain health, just like you track your heart rate and the number of steps you take per day. I think that could be revolutionary. I think it could be revolutionary for mental health.
I think it could be revolutionary for wellness if it's used as a tool of empowerment for people and not as a tool of surveillance by corporations and by governments.
I'm Richard Syrett. Join me on Strange Planet for in-depth conversations with the world's top paranormal investigators, alien abductees, Bigfoot trackers, monster hunters, time travelers, and all that.
The handler one day told her this whole thing about how they've been terraforming on Mars and they're building a colony and they're recruiting specific people or specific bloodlines and specific talents and skill sets to go onto the planet.
On Richard Syrett's Strange Planet, we're redefining reality. Listen now wherever you get your podcasts.
So finding that line would be tough, because presumably, when this tech gets really good, you want people to be able to choose to use it for the better. But once people can choose to use it, they can also choose to ask other people whether they're willing to use it to, say, work for my business, right?
And I can say, listen, we're both adults over 18. Here's the deal. I don't have to employ you. You don't have to work for me. But if you choose to work for me, you choose to also submit to certain rules, one of which is that you have to use this, you know, futuristic EEG machine, which reliably tells me what's going on inside your mind during the workday or at periodic review times.
And I will, like, review your chart for the last three months and notice that you have actually been paying 20% less attention at work lately, and here's the proof.
And certain people, whether desperate enough or simply because of their preferences, care less about their own privacy. Some people may be willing to submit to that kind of a thing.
Most people will probably be creeped out by it, but one can imagine this may change between cultures. In cultures which are already heavily surveilled, such as China, people are kind of used to being surveilled by the government.
They've been conditioned to think of that as normal, and they may be able to put up less resistance, or may be inclined to put up less resistance, because it's a cultural difference.
Would your conception of like cognitive liberty, would it say to the corporation owner? No, you actually can't ask people to do that.
Yes.
Even if they consent to it.
Yes, but.
Yes, but.
So, so first of all, let me just say one thing, which is that's already happening.
Right. I mean, what you describe as some futuristic scenario, there are already workplaces in China, there are workplaces across Asia, not just in China, where that's already happening, where people are being told,
like, we're using this neurotechnology in the workplace and we're using it to track your metrics.
There was a company that came up to me to talk to me after my recent talk about your brain in the workplace to say like, we've been using that technology and testing it and we're starting to sell it to our customers worldwide as an enterprise solution.
It's given us really great insights about things like work-from-home policies versus workers in the office.
Yes.
It goes back to me saying like, this is happening, right? This isn't something that is futuristic. I'm not describing some science fiction scenario. This is happening to workers worldwide already.
The question about whether my right to cognitive liberty as I define it would prevent workplaces from doing so. I say yes, but because if we recognize a right to cognitive liberty, that includes the right to mental privacy.
A right to mental privacy means there has to be a legal, bona fide exception to be able to get access to brain data.
So an employer could say, I'm giving you this as a tool of empowerment, but if they want to get any of the data from that, the right to mental privacy would require they have a legally justifiable reason that is the least intrusive means to do so.
So what does that mean in practice? A commercial driver, you know, right now, if they're a truck driver, they have in-cab cameras that are trained on their face, assessing whether or not they're falling asleep.
There's driver assist technology and there's driver alert technology that's built in to pick up whether or not they're fatigued or falling asleep.
There's a company called SmartCap that has sold thousands of these headsets worldwide that measure fatigue levels from the brain, just giving a score from one to five, right, about whether or not the person is wide awake or falling asleep and the levels in between.
No other brain data is extracted, and all of it is kept on the device and overwritten, and the only thing a manager would see is that score, one to five.
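That data-minimization design, raw signal kept on the device and only a coarse score exposed, can be sketched in a few lines. The class and scoring rule below are hypothetical illustrations of the pattern, not SmartCap's implementation.

```python
class OnDeviceFatigueMonitor:
    """Raw samples never leave this object; callers only ever see a 1-5 score."""

    def __init__(self) -> None:
        self._buffer: list[float] = []  # raw EEG window, on-device only

    def ingest(self, samples: list[float]) -> None:
        self._buffer = samples  # overwrite: nothing is retained long-term

    def score(self) -> int:
        """1 means wide awake, 5 means falling asleep (made-up heuristic)."""
        if not self._buffer:
            return 1
        drowsiness = sum(abs(s) for s in self._buffer) / len(self._buffer)
        return max(1, min(5, 1 + round(drowsiness * 4)))

monitor = OnDeviceFatigueMonitor()
monitor.ingest([0.2, 0.4, 0.9, 0.7])  # fake signal window from the headset
print(monitor.score())                 # the manager's dashboard sees only this
```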
The question is, could you come up with a legally justified exception in a case of a commercial driver to say, we're only going to test fatigue levels, we're not going to get any other data from the brain, and it's justified because they are a commercial driver where many lives are at stake when they are on the road.
I think possibly, I think as a society, we may decide that cognitive liberty and the right to mental privacy in limited circumstances with limited data that's collected can be justified in a situation like that.
We might decide, no, we might decide that's not a legally justified exception, but that's the kind of thing that cognitive liberty would do because mental privacy isn't absolute.
It's a relative interest between societal interests and individual interests, but it does change the default rules about what it is that an employer can do.
The default rule is no, unless.
It's interesting, because in some ways we've already broached all of these questions with Facebook and Google selling our data.
Yes.
I mean, we know that Facebook and Google and all of these big tech companies, certainly TikTok... and this is apparent from the very fact of how good TikTok's algorithm is at quickly finding out which content you respond to, just by how long you take scrolling on something,
right? Millisecond differences in your attention can create a pretty good profile of what you want, and at some level, how you're feeling, or at least how that relates to what products you're most likely to buy.
And Facebook has been able to do that for years, to a degree that was very surprising when it first came out: it can tell if you are going into a depressive episode, or if you're someone with bipolar disorder and you have mood swings and mania,
they can potentially tell when you're going into an upswing or downswing, and that's without any direct access to your brain signals, right?
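A minimal sketch of that kind of dwell-time profiling, with no brain data at all, just milliseconds of attention aggregated into a preference profile; the event format and topics are made up for illustration.

```python
from collections import defaultdict

# Each event: which item the user lingered on, and for how long.
events = [
    {"topic": "cooking", "dwell_ms": 420},
    {"topic": "politics", "dwell_ms": 4800},
    {"topic": "cooking", "dwell_ms": 650},
    {"topic": "politics", "dwell_ms": 5200},
]

dwell_by_topic: dict[str, int] = defaultdict(int)
for event in events:
    dwell_by_topic[event["topic"]] += event["dwell_ms"]

total = sum(dwell_by_topic.values())
profile = {topic: ms / total for topic, ms in dwell_by_topic.items()}
print(profile)  # e.g. {'cooking': 0.10, 'politics': 0.90}
```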
And largely in my opinion, society's reaction to that, and even my own reaction to that, has been somewhat apathetic in the sense that we are not as outraged by that as probably we should be.
And that's an interesting fact to consider our own apathy about that, and how that may translate to a world with a widespread frequent use of EEG technology.
Do you think the fact that it's more physical, and it's sort of potentially touching your head, will trigger people's instinctive, like, get-off-of-my-body reaction, and that we won't be apathetic? Or do you fear that we'll have pretty much the same attitude that we have toward big tech knowing all about us and selling our data?
It's a little bit of both. So, first of all, I'd say, as I conceive of it, the right to cognitive liberty should cover more than just neurotech touching your forehead, right?
It should include all the ways in which people are trying to both get into your brain and manipulate your brain.
And we ought to figure out which of those practices, really precise algorithms are permissible and impermissible.
We ought to claw back some of those rights and include a broader definition of cognitive liberty.
But when it comes to brain data, I think it depends on how well people understand it and how well it becomes normalized.
So, when I've done studies through my lab to try to understand whether people would react differently to brain data, people regularly rated their Social Security number as more sensitive than their brain data.
And I think that's because people don't really understand what you can decode from that information. But when I shared the dystopian video about the future of work where brain data could be collected, you know, I'd say the reaction was much stronger than I ever expected it would be,
which I actually find incredibly encouraging in some ways, even though I got a lot of death threats and, you know, other misdirected anger, I think, at me for revealing to people what's happening.
Were the death threats just a misunderstanding?
Yeah, they were just a misunderstanding.
They thought you wanted this, rather than wanting to stop it.
They thought I was, like, advocating that that's what should be happening at workplaces, instead of sounding the alarm of, like, this is what could be coming and, you know, we need to safeguard against it.
And so I'd say the death threats weren't encouraging, but what was encouraging was how strongly people reacted to it, that visceral reaction to it.
That, I hope, carries through. What I worry about is that part of what makes normalization so likely is that people become comfortable. Even if, in the beginning, they're startled that the algorithm is so precise.
And then they just stay on the platform.
So they buy the products that are precisely targeted to them.
The reason consumer neurotechnology has grown up now and is going mainstream is because the sensors are embedded into everyday devices: into watches, into the earbuds that you're already listening to music with,
and into the headphones that you're already taking conference calls from.
So maybe at the first moment, people notice and react to it, but then they're taking their conference calls and they're listening to their music and it's just like the heart rate sensor on the watch.
They're already wearing the devices.
It's not something new they have to put onto their body.
It's just one more sensor that's tracking activity.
That's what I worry about: the blind spot people develop, where they initially are worried about it,
but then it becomes so normalized, embedded in their everyday life and hidden from their view, that they forget about it and give up so much of their privacy as a result.
Well, that will be the first thing that big tech does when this technology, if and when this technology is sufficiently advanced to be put in an iPhone app or something.
Well, we already have these apps; it is an iPhone app.
So I'm just going to call you out again on your skepticism, right?
Like, you know, we have these apps where I put my phone on my bed at night and it's going to give me as much information as possible about my sleep that I didn't know about.
Yes.
So do we have the app, slash, will we have the app, that is just on all day in the background, like the equivalent of the health app that's looking at my steps, that is just looking at my EEG waves, my electrical signals, and giving me all this information that can be mapped and patterned
across, like, 16 hours of waking life every day, for years?
Yeah, I mean, so to the extent that people start using the wearable tattoos, for example, these are tiny, they go right behind the ear, right? I mean, that's continuous monitoring, and it enables you to interact with the rest of your technology seamlessly, but also, just all day, every day, it picks up your brain activity.
Some companies have even correlated the pattern of heart rate activity through the ECG monitor. They have apps that then also test your engagement and immersion from a brain activity perspective through that already.
That's already being worn by many people 24 hours a day, but there's no remote monitoring of your brain. Right.
So, like, with your phone, it has an accelerometer in it. If you have it with you, even if it's not touching you, right, if it's in your purse or something, it can go along with you and track your GPS location.
There's nothing like that for the brain where it will remotely monitor brain activity without your awareness or choosing something to have on you, whether that's a headband or, you know, a sensor behind your ear, inside your ear, over your ears.
But there are many people like me, I've got earbuds or headphones on most of my day at work. The idea that most of my day, my brain activity could be monitored through those is already a reality.
So, what role does AI play in this? Because it seems to me that what we've discovered over the past few years is that with machine learning and deep learning based on big data, if you can feed millions of data points into modern-day AI, which is trained to find patterns between any set of variables, you can achieve, in many cases, a superhuman
level of competence at understanding that function. That's been true for chess for decades.
It's now true for language. It's probably going to be true for music soon, I think, with MusicLM, which is at an earlier stage but on the same path, in my view.
It's true for art with DALL-E 2 and Midjourney. So, I'm imagining really complex EEG data, so not just, like, the little tattoo or whatever, but the real, full, complex EEG data,
linked up to exactly what is going on for the person during that data. So, I guess just to take one example, EEG data of me while I'm speaking right now.
And now multiply that by millions of different humans speaking while collecting complex EEG data. And you feed that into a machine learning and deep learning AI, which is now going to work its black box magic
to understand and correlate brain states with human speech. Is it possible that on the other side of that, there is just an AI that understands like basically the architecture of human thought simply by looking at the EEG data.
It's kind of a terrifying thought in some ways, right? I'm not sure in principle why it wouldn't be possible.
Yeah, so first let me say, what you've just described is how we have gotten to the place where these little headsets can do as much as they can,
which is: hundreds of thousands of recordings using medical-grade EEG, which have then been correlated with the data from consumer-grade EEG. So you take whole-brain readings, and whole-brain meaning whole-surface-electrode brain readings from EEG,
and then try to figure out what are you picking up on the earbuds and the headphones and how does it correlate? Which is low resolution.
Which is low resolution with fewer electrodes, but that's how they've trained it: they've done huge machine learning data sets on brain readings.
And then they've done the same activities, for example, with headsets and with earbuds. And then the same patterns could then be correlated into these smaller devices.
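A rough sketch of that correlation step, caricatured as ordinary supervised learning: labels scored from dense, medical-grade recordings train a decoder that only ever sees the couple of channels a consumer earbud can pick up. All of the data below is synthetic, and the linear model is a deliberate oversimplification.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 500

dense = rng.standard_normal((n_trials, 64))   # 64-channel lab EEG features
labels = dense[:, :4].sum(axis=1) > 0         # states scored from the lab data
earbud = dense[:, :2] + 0.5 * rng.standard_normal((n_trials, 2))  # 2 noisy channels

# Least-squares linear decoder trained on the consumer channels alone.
X = np.hstack([earbud, np.ones((n_trials, 1))])  # add a bias column
w, *_ = np.linalg.lstsq(X, labels.astype(float), rcond=None)
accuracy = ((X @ w > 0.5) == labels).mean()
print(f"consumer-channel decoder accuracy: {accuracy:.0%}")
```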
But that's not what you were asking. So let me go to what you're asking, which is the bigger question. The whole of human experience, I don't think is going to be captured by EEG.
And I say that because, yes, without a doubt, as consumer brain wearables go mainstream, we are going to have millions to billions of brain recordings of people who have healthy brains, in their everyday lives, going about their everyday activities.
And if that data is all used as training data for AI, along with all of the rest of the information that's already being commodified, like what you're looking at on your phone at the time the headset is being used,
whether you're walking or not, which your accelerometer and your GPS location are picking up from your iPhone, and all of that's correlated and fed into AI,
then it can learn a whole lot about the human experience that isn't reduced to writing, which is what's currently being fed into AI to train it, to come up with things like ChatGPT and generative AI, and all the rest of the models you mentioned in art and music and every other domain.
And so much of the way human thought and experience is captured in the brain will become training data eventually for AI. Can that lead to even more powerful generative AI models? Without a doubt.
And will that be some of the training data that leads to that AI? Without a doubt.
Neural networks are part of how generative AI and ML have been developed, and those are based on how the brain works, on theories of how the brain works.
So yes is the short answer, but I think no matter how powerful surface electrodes are, they don't get at the full depth of the human inner monologue and inner experience.
So not all of that becomes training data. Now, implants today... I mean implanted EEG, maybe, and implanted electrodes will also become much more mainstream.
But I guess presumably there's a lot of what goes on in the human mind that has no clear behavior associated with it.
It's just what you're feeling, just what you're thinking, and all those things have associated brain states.
But if there's no Y to correlate with the X, then it's like, well...
But I don't think that's so, right?
Because suppose what you're experiencing is like you're having an inner monologue, right?
But your outer monologue of speech creation is something that has been decoded through AI, right?
Can that be used to accurately correlate what your inner monologue is?
Maybe.
Maybe, especially if the parts of your brain, like the motor cortex, which you use to form speech, are also used in imagining speech.
Or your visual cortex where you see images when you're dreaming, those images show up in the visual cortex too.
That's part of how a lot of dreaming and dream states have been decoded.
It's using the same structures in the brain.
So when you're having a purely inner monologue, is that really beyond the realm of decoding, when you can decode everything else that looks the same once it's been expressed?
I don't think so. I think it makes all that accessible too.
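A toy sketch of that transfer idea: train a word decoder on epochs recorded during overt speech, then apply it to noisier epochs of imagined speech. The word set, feature dimensions, and attenuation factors are all invented for illustration; real inner-speech decoding is far harder than this suggests.

```python
# Sketch: train on overt-speech EEG epochs, test on imagined-speech epochs.
# Synthetic features stand in for motor-cortex activity patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
WORDS = ["yes", "no", "hungry", "steak"]

def epochs(word_idx, n_trials, overt):
    # Pretend each word evokes a characteristic 32-dim motor-cortex pattern;
    # the pattern is weaker and noisier when the word is merely imagined.
    template = np.repeat(np.eye(len(WORDS))[word_idx], 8)        # shape (32,)
    gain, noise = (1.0, 0.3) if overt else (0.5, 0.6)
    return gain * template + noise * rng.standard_normal((n_trials, 32))

X_overt = np.vstack([epochs(i, 50, overt=True) for i in range(len(WORDS))])
X_inner = np.vstack([epochs(i, 50, overt=False) for i in range(len(WORDS))])
y = np.repeat(np.arange(len(WORDS)), 50)

# Train on the outer monologue, decode the inner one
clf = LogisticRegression(max_iter=1000).fit(X_overt, y)
acc = (clf.predict(X_inner) == y).mean()
print(f"imagined-speech decoding accuracy: {acc:.2f}")
print("decoded word:", WORDS[clf.predict(epochs(3, 1, overt=False))[0]])
```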
So what you're describing there is true deep mind reading, right?
Yes.
That's true mind reading: when you can have, you know, 99.9% confidence, after hooking you up, that you're thinking, right now, "Oh, I'm hungry, I'd love to get a steak after this podcast," or something like that, right?
Yeah, as opposed to decoding right now, hunger, right?
That's right.
And so already I could tell hunger.
Mm-hmm.
The "maybe I want a steak" part of it, you know, you can't.
Right.
Now, you might be able to pick that up from my prior preferences and all the rest of my buying behavior.
And if you've commodified all the rest of my data, you know, I like steak.
And so you're able to tell I'm hungry, know what my prior preferences are, and precisely market steak to me.
But you can already do the basic mind reading of hunger.
The more complex, what you call true mind reading, of "I'm hungry, I want a steak," the complete sentence, you can't do right now.
At least not with consumer based EEG, right?
My intention to speak, if I have implanted electrodes, that has become remarkably accurate,
where people can have real-time speech decoded.
They've lost the ability to speak. They have electrodes that are deeper in the brain.
There are thousands of electrodes rather than a few.
Those are being used to allow people who have lost the ability to communicate otherwise to accurately communicate their thoughts.
Does this have, I mean, this is, I'm now imagining completely, well, you tell me if it's completely science fiction.
Every time I tell you something's completely sci-fi, you say we're already doing it.
So, does this have a security theater TSA application?
Is it like, can we?
Sure. And yes, we're doing it.
You want me to tell you what it is?
Okay.
All right.
So actually, more than a decade ago, the TSA worked with what used to be Northwest Airlines to try to figure out if they could
pick up neural signatures and do any kind of remote decoding as part of TSA screening.
There's too much noise and too much interference to do that, meaning if you're trying to pick up electromagnetic
signals or electrical signals from the brain and you're in an airport with a huge number of other signals,
it's very difficult to pick up.
But it turns out that biometrics, as you know, are a huge part of what the TSA and other places use to authenticate people.
I was startled. I went to Switzerland recently, and I walked up to the gate, and they said,
oh, no, we don't need your passport or your boarding pass; just stand in front of the camera.
And the camera scanned my face and popped up my seat number and information and a check mark that I could board, because they have all of your facial recognition and other biometric information, and that's what's being used.
Well, a lot of static biometrics, like your face, even your thumbprint, can be faked.
And so it's concerning as a security measure.
So there's been this move to look at functional biometrics.
Like when you are doing CAPTCHA puzzles, it's really about how you move.
It's not what you answer. There's a pattern of how you move that is more predictive of whether you're a human or a robot than the answer itself.
So you're saying, when it asks me to identify every picture with a traffic light in it or whatever, it doesn't actually care whether I'm getting it right.
It cares how I'm moving.
It also cares if you get it right, but CAPTCHA is generally looking at movements, at human functional movements, as opposed to the way in which a robot or an automated system solves things.
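A toy illustration of that movement-based approach: classify human versus bot from trajectory statistics, such as speed variability and curvature, rather than from the answer itself. The simulated paths and features are assumptions for the sketch, not how any real CAPTCHA vendor implements it.

```python
# Sketch: tell human from bot by *how* the cursor moves, not what it clicks.
# Trajectories are simulated; real systems use many more behavioral signals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

def trajectory_features(path):
    # path: (n_points, 2) cursor positions sampled over time
    v = np.diff(path, axis=0)                   # per-step velocities
    speed = np.linalg.norm(v, axis=1)
    angles = np.arctan2(v[:, 1], v[:, 0])
    return [speed.mean(), speed.std(),          # humans vary their speed...
            np.abs(np.diff(angles)).mean()]     # ...and curve toward targets

def human_path(n=50):
    t = np.linspace(0, 1, n)[:, None]
    wobble = np.cumsum(rng.standard_normal((n, 2)), axis=0) * 0.02
    return t * [300, 200] + wobble              # jittery, curved approach

def bot_path(n=50):
    t = np.linspace(0, 1, n)[:, None]
    return t * [300, 200]                       # perfectly straight line

X = [trajectory_features(human_path()) for _ in range(200)] + \
    [trajectory_features(bot_path()) for _ in range(200)]
y = [0] * 200 + [1] * 200

clf = RandomForestClassifier().fit(X, y)
print("flagged as bot:", clf.predict([trajectory_features(bot_path())])[0] == 1)
```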
And functional biometrics look for something similar. So if you sing... like, think of your favorite song. What's your favorite song?
I don't know.
You're not going to get in trouble for saying your favorite song. So think of it. You can think of it in your head.
I won't force you to say it out loud.
God's plan by Drake is a great song.
Okay. So if I sing God's Plan by Drake in my head and you sing it in your head, your unique neural signature of singing your favorite song, even if it's exactly the same words and we start in the same place,
it's not going to be the same.
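A sketch of what such a functional biometric could look like in code: enroll a template from repeated brain responses to the imagined song, then authenticate by correlating new recordings against that template. The signal model, dimensions, and threshold are illustrative assumptions; real brainprint systems use filtered, repeated epochs and calibrated thresholds.

```python
# Sketch: "brainprint" authentication from the neural signature of singing
# a song in your head. Signals are simulated.
import numpy as np

rng = np.random.default_rng(3)

def brain_response(user_seed, trial_noise=0.4):
    # Each person has a stable signature; each trial adds noise on top
    user_rng = np.random.default_rng(user_seed)
    signature = user_rng.standard_normal(256)
    return signature + trial_noise * rng.standard_normal(256)

# Enrollment: average several trials into a per-user template
template = np.mean([brain_response(user_seed=42) for _ in range(10)], axis=0)

def authenticate(sample, template, threshold=0.8):
    r = np.corrcoef(sample, template)[0, 1]     # similarity to the template
    return r >= threshold, round(r, 2)

print(authenticate(brain_response(42), template))  # same person: high r, accepted
print(authenticate(brain_response(99), template))  # impostor: near-zero r, rejected
```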
I'm pretty sure that I did biometrics at my interview for that as well.
I think there were fingerprints and facial recognition at least.
You can imagine a future system where giving the man, the system, whatever, a lot of EEG data
is the ticket into whatever level.
We haven't talked about one of the kinds of-
Which will be very difficult for a lot of people to resist.
It will be.
Of course, there's the risk in more authoritarian governments that it's the ticket to being
free at all.
A lot of studies have been-
Ticket for citizenship maybe?
Well, worse than that.
It's loyalty.
Go back to the old McCarthy-era loyalty oaths.
A lot of studies have been done to see if you can figure out the neural correlates
through EEG of political ideology.
Who are you going to vote for?
Are you more likely to be conservative or liberal?
Supposedly even whether you believe in the political ideology of the Communist Party or not.
And probing people's brainwave activity, when they have mandatory headsets on, to see
whether or not they truly believe or are adherents to the Communist Party.
Those are the kinds of frightening applications that reportedly are already happening.
Reportedly where?
Newspapers, right?
It's hard to get accurate information out of China, but if you look at some of the news
reports, including the ones that talk about workers using the technology in the workplace,
or the Wall Street Journal reporting on students in classrooms being required to
wear them,
the same kinds of investigative reporting have shown that at least some of that kind
of political ideology testing may be happening as well.
Interesting.
You're on On the Edge with Andrew Gold, and that was true crime legend Amanda Knox.
If you like interviews with the most controversial and extreme people, then you'll love On the Edge
with Andrew Gold.
Here's me talking to a real-life psychopath.
How would you feel if somebody just started strangling me?
Well, I do not feel connected to you, Andrew.
I take you inside cults, such as extreme Islam.
I felt I was going to die out there.
Before we delve into the darkest minds...
Was there violence?
Yes.
Find it wherever you get your podcasts; that's On the Edge with Andrew Gold.
So I wonder how self-deception plays into this, because presumably some people are able to
actually deceive themselves into believing things that they don't really believe, or into passing
lie detector tests, because temporarily they have assumed the state of someone who believes
it.
And obviously the most effective liars and manipulators are the people that believe
their own bullshit.
And I think, also, to some extent everyone can get caught up a little bit in believing their
own bullshit, but there are certain people that really seem to be able to snap into
that state, and I call them politicians.
But I mean, obviously, that is a big question.
If you can really be made to feel like something is true, then you can pass our intuitive lie
detector sort of systems.
Presumably, would you also be able to pass any kind of next-generation mind reading tech?
So I'm going to give you a sideways answer to that in part.
So, fMRI, functional magnetic resonance imaging, which can peer more deeply into the brain
but looks at blood flow levels, for a while, maybe a decade ago, was getting all the hype
as the thing that was going to be the new lie detection.
And the theory was that it's more cognitive load to tell a lie than it is to tell the
truth.
Turned out, for the best liars, no more blood flow is required to tell a lie than to tell
the truth.
I was put into an fMRI as part of a research study out of Stanford where they were trying
to test for recognition.
So, for images that I had seen previously, they wanted to test to see whether they
could pick up that recognition memory from my brain.
And then I was told to actively try to deceive the system, to see whether or not, despite
my actively trying to deceive it, like, no, no, I've never seen that,
I could bring down the accuracy of how well it could detect recognition.
And I could bring down the accuracy, by trying to think
about a pink elephant, or, you know, by using active mind-distraction techniques when I
knew that my brain was being probed.
So if you know that the test is being undertaken, it may be possible to beat a lie detector, in
the same way that people beat lie detectors that use other physiological responses.
But I think you have to think about it a slightly different way.
So recently, in a few criminal cases, Fitbit data was used to try to undercut an alibi,
or to prove that when a person said that they were asleep, they were actually
physically active at the time.
And if we're passively wearing neurotechnology throughout the day, and you can do things like
flash up images on a computer screen without a person realizing it, to probe and get information
from a person's brain without their awareness,
then I think those kinds of bypasses, which work when you know that you're being
interrogated, aren't going to apply.
You're going to have passive collection of data from the brain that you could use against
a person, as a more accurate way of doing so.
That's still not lie detection the way you're describing it, right?
Which is like asking a person yes no questions and seeing whether or not they're lying or
telling the truth.
But it is intercepting information that can be used to convict them, or to discover
what the truth of the situation is.
Very interesting.
So I'm curious whether you think there would be any applications of this technology
to, say, prove that you love somebody.
Right?
Yeah.
I mean, the age-old joke is, prove to me scientifically that you love your wife,
and it can't be done.
And this is a point in service of the idea that what is measured by science
is not all that there is,
not all that matters.
What happens if a couple gets in a dispute and maybe one thinks one cheated on the other?
Is there a way that this tech gets so good that I say, look, submit me to the mind reading:
I love you, I did not cheat on you?
Do you think that would be an application and there would be a service for that?
Well, I hate to tell you, but that's already been done.
Oh fucking god.
Alright.
So first let me say, more than a decade ago now, there was a researcher who was really
interested in this and ran an experiment.
Don't tell my girlfriend this.
Alright.
I'm not letting her listen to this part.
Yes.
No, she's not allowed to listen.
There was an experiment that she ran to put couples into an fMRI and show
them images of their partner, and to be able to tell from the neural correlates in the
fMRI signal whether there was real love or lust, based on what
we see as the distinctions between them.
And it was a really interesting emotional experience for people when, you know, their
brains did not show love, and their brains showed lust, or, you know, no love at all, or
something like that, or love for an ex-boyfriend and not for, you know, the current person.
So, you know, that kind of thing is possible, and then on the...
Somehow that's scarier than any other feature of the dystopias you've described so far.
Well, I'm gonna make it scarier. Can I do that for you?
So, the "I will submit to the test" scenario, that's how the P300 test has been used in the US,
where a criminal defendant will say, no, no, I really wasn't there,
you can show me all the images you want, I wasn't at the crime scene, my alibi is
true, and then probes are used to try to prove that. And so incongruence, the N400 that
we're talking about, there are ways to do this kind of interrogation of truth or
not with appropriately created probes. It's already happened in the criminal justice
system, and you can already do distinctions between love and lust and other feelings that
have neural correlates that can be picked up through EEG and other modalities, fNIRS, we
haven't even talked about fMRI and the others. Yeah, I mean, we can do that.
Jesus Christ.
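For readers who want the gist of how a P300-style probe works, here is a toy version: average EEG epochs time-locked to probe images (things only a guilty person would recognize) versus irrelevant images, and compare amplitudes in the P300 window. The simulated signal, window, and decision threshold are assumptions; real concealed-information tests involve careful stimulus design and statistics this sketch omits.

```python
# Sketch: P300 concealed-information logic. Epochs are simulated EEG traces
# time-locked to stimulus onset; recognition evokes a bump near 400 ms.
import numpy as np

rng = np.random.default_rng(4)
times = np.linspace(0, 0.8, 200)                 # seconds after stimulus
p300_shape = np.exp(-((times - 0.4) ** 2) / 0.005)  # bump centered at 400 ms

def epoch(recognized):
    signal = 5.0 * p300_shape if recognized else np.zeros_like(times)
    return signal + rng.standard_normal(times.size)  # noisy single trial

# Average many trials per condition to pull the ERP out of the noise
probes = np.mean([epoch(recognized=True) for _ in range(40)], axis=0)
irrelevants = np.mean([epoch(recognized=False) for _ in range(40)], axis=0)

window = (times > 0.3) & (times < 0.6)           # typical P300 window
diff = probes[window].mean() - irrelevants[window].mean()
print(f"probe-minus-irrelevant amplitude: {diff:.2f} (a.u.)")
print("recognition inferred" if diff > 1.0 else "no recognition inferred")
```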
Now, should we do it? Right? I mean, no. That's not how we should define our relationships. That's
not how, you know, people should work through their breakups or their relationships.
Part of what I think is so important for people to realize is that this technology really is going
to fundamentally change what it means to be human, what it means to relate to one another,
right? The questions that you're asking, about how much a person knows about themselves,
how much you can learn about another person: most of that has been through what we've chosen
to share with other people. Now, some of it not, right? Some of it is, you can make inferences, and,
you know, your data has been commodified, so that corporations can figure out things about
you that you wouldn't want to share with other people. But your most intimate relationships
are based on what you choose to share with other people, and on the development of that kind of
intimate sharing and vulnerability with another person. And this could redefine a lot of that, and
really be transformational, in ways that we should be paying attention to.
This has really been a wake-up call for me, of how much is already going on and how much is likely to
go on, in terms of inching towards deep mind reading technology and all the implications of that.
I think people will really enjoy this conversation. Thank you for your time. The book, once more, is
called The Battle for Your Brain, and I encourage you all to pick up a copy. And if there's anything
else you want to plug, a Twitter handle, a website, now would be the time.
They can follow me at @NitaFarahany, and my website is nitafarahany.com. Keep it simple. All right, thanks, Nita. Thank you.
Thanks for listening to this episode of Conversations with Coleman. If you enjoyed it, be
sure to follow me on social media and subscribe to my podcast to stay up to date on all my latest
content. If you really want to support me, consider becoming a member of Coleman Unfiltered for
exclusive access to subscriber-only content. Thanks again for listening, and see you next time.
If you had the right information, could you create a better life? I'm Chris Stemp, the host of Smart
People Podcast, one of the best podcasts you've probably never heard of. My producer Jon and I
started the podcast 12 years ago, when we got burnt out on corporate America and wanted to figure out
a better way. We've asked top global experts, including Brené Brown, Simon Sinek, Neil deGrasse
Tyson, and hundreds more, how to create a better life. With over 10 million downloads, this
is one you don't want to miss. Subscribe and follow Smart People Podcast wherever you listen.
Are you fascinated by the twisted minds that commit criminal acts? I'm Dr. Shiloh, and I'm Dr.
Scott. We're both forensic psychologists working in Southern California. In each episode of our
podcast, L.A. Not So Confidential, we dissect the nexus where true crime, forensic psychology, and
entertainment meet, all delivered with our signature gallows humor, while examining the actual diagnosis
and dishing on the media portrayal of the events. Subscribe to L.A. Not So Confidential anywhere you
go for podcasts. Trust us, we're doctors.