
Nita Farahany is a professor at Duke Law School and one of the nation’s leading scholars on the social, ethical and legal implications of emerging technologies. Preet speaks with Farahany about how advances in brain monitoring devices will impact the workplace and courtroom, whether a “gene for violence” exists, and how A.I. is influencing bail and sentencing decisions.      

Plus, Preet discusses Steve Bannon’s potential criminal sentence and the latest on the January 6th Committee hearings, including new testimony from former White House Counsel Pat Cipollone. 

In the bonus for CAFE Insiders, Professor Farahany discusses how A.I. is already being used in place of judges, and the influence of machine learning in the realm of dating and matchmaking. To listen, try the membership for just $1 for one month: cafe.com/insider.

Tweet your questions to @PreetBharara with hashtag #askpreet, email us at staytuned@cafe.com, or call 669-247-7338 to leave a voicemail.

Stay Tuned with Preet is brought to you by CAFE and the Vox Media Podcast Network.

Executive Producer: Tamara Sepper; Senior Editorial Producer: Adam Waller; Technical Director: David Tatasciore; Audio Producer: Matthew Billy; Editorial Producers: Noa Azulai, Sam Ozer-Staton.

 

REFERENCES & SUPPLEMENTAL MATERIALS

QUESTION & ANSWER:

THE INTERVIEW:

  • Nita Farahany, The Battle for Your Brain, Macmillan, 2023
  • Farahany’s Ted Talk, “When technology can read minds, how will we protect our privacy?” November 2018
  • “Forensic Brain-Reading and Mental Privacy in European Human Rights Law: Foundations and Challenges,” Neuroethics, June 2020
  • “Facebook acquires neural interface startup CTRL-Labs for its mind-reading wristband,” The Verge, 9/23/19
  • “Criminal defendants still cite a ‘gene for violence.’ It doesn’t exist,” The Washington Post, 3/18/21
  • Presidential Commission for the Study of Bioethical Issues
  • “‘I want a beer’: Paralyzed man communicates first words in months using brain implant,” Yahoo!, 3/24/22 
  • “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, 5/23/16
  • “The accuracy of pulse oximeters shouldn’t depend on a person’s skin color,” STAT News, 7/5/22
  • “Cheating-detection companies made millions during the pandemic. Now students are fighting back,” The Washington Post, 11/20/20
  • “Wearable Tech That Tells Drowsy Truckers It’s Time to Pull Over,” The New York Times, 2/11/20
  • “‘They Were Spying On Us’: Amazon, Walmart, Use Surveillance Technology to Bust Unions,” Newsweek, 12/13/21
  • “Data scientists have access to your sensitive data. That’s driving more schools to teach ethics,” Fortune, 2/22/22
  • “The extent of employee surveillance is greater than you know,” Duke Magazine, 8/24/21

FARAHANY ACADEMIC ARTICLES:

BUTTON:

  • “Senate confirms Dettelbach to head firearms agency as gun violence grows,” NPR, 7/12/22

Preet Bharara:
From CAFE and the Vox Media Podcast Network, welcome to Stay Tuned. I’m Preet Bharara.

Nita Farahany:
The one space that you truly had for privacy, the one refuge that existed was your brain, was your thoughts. And suddenly you’re going to be opening that up to Facebook and to Google and to governments worldwide.

Preet Bharara:
That’s Nita Farahany. She’s a professor at Duke Law School and a nationally recognized scholar on the social, ethical, and legal implications of emerging technologies. Advances in neuroscience and brain monitoring devices are changing the way we live and work. They’re also raising questions about what it means to have freedom over our own thoughts and what legal protections might look like. Farahany and I discussed the ramifications of employers tracking brain activity in the workplace, the application of artificial intelligence in bail and sentencing decisions, and why she believes there should be a codified right to cognitive liberty. That’s coming up, stay tuned.

QUESTION & ANSWER:

Preet Bharara:
Now let’s get to your questions. This question comes in a tweet from Twitter user @MuellerSheWrote, who I think is on a streak. I think this is the second question I’ve answered from Mueller, She Wrote this month. The question is, “I’d like to know if Bannon’s criminal past will impact his sentencing guidelines since he was pardoned. Thank you. #askpreet.” So as everyone I’m sure is aware and has been following the news, Steve Bannon was indicted for contempt of Congress by the Department of Justice. He faces trial in just a few days. He has indicated that he has some willingness to come and talk to the committee, probably in a gambit to help himself at trial or get an adjournment of the trial. But that gambit was shot down by the judge this past week.

Preet Bharara:
Now the questioner is assuming something that hasn’t happened yet, and that is a criminal conviction of Steve Bannon. Now, if he is convicted, I guess the question goes to what the guidelines would be for Steve Bannon based on “his criminal past.” Well, as a technical matter, he does not have a criminal past. He was arrested, as you may remember, and an indictment was brought by the Southern District of New York, my old office, in connection with a scheme to get donors to give money to a nonprofit that Bannon was part of, for the purpose of building a wall at the southern border. But here’s what the allegations were in the criminal indictment: that Steve Bannon and three co-defendants had made representations that they would not pocket a single penny of the money donors were giving to build the wall. But the allegations go on to say they, in fact, pocketed hundreds and hundreds of thousands of dollars.

Preet Bharara:
Now Steve Bannon was never convicted in that case because he never went to trial in that case because he was pardoned at the last possible moment by the former president, Donald Trump, when he had the pardon power. As I noted at the time, it was kind of ironic and a bit ridiculous for many reasons. But one reason was that only Steve Bannon was pardoned and the other three defendants were left in the case and had to go to trial. In the federal system, one of the ways in which the guidelines work is that if you have a prior criminal history, that gives you a certain number of points and it places your sentence in a range that’s determined by a chart in the sentencing guidelines as every federal prosecutor and every federal judge knows. But that criminal history is not governed by arrests. It’s governed by convictions. So if you have a number of prior felony convictions, your score goes up and your sentencing guidelines range also goes up.

Preet Bharara:
Steve Bannon was never convicted of anything to my knowledge. So he has zero criminal history points. I suppose the spirit of the question is, since for some people accepting a pardon amounts to a confession to the crime or an admission to the crime, does that constitute a criminal history? I don’t think so. I’m not aware of any court that has considered that question and decided that it does constitute criminal history. It might be different in the case of someone who had been convicted of a crime and later pardoned; the pardon doesn’t necessarily extinguish the conviction for purposes of a future conviction and the guidelines range. But in a case like this with Steve Bannon, whether you like it or not, the fact that he was never convicted and nonetheless got a pardon, I don’t think, brings him up in the sentencing guidelines. That’s my answer.

Preet Bharara:
So of course this past Tuesday was another blockbuster hearing by the January 6th Committee. And it did not disappoint, as prior hearings have not disappointed. There was a lot of information. Some of it was stitched together from prior things that we knew and some of it was actually fresh and we had not known. And we heard from people that we hadn’t heard from before, including former White House Counsel Pat Cipollone and some people associated with the Oath Keepers. And there was again the intermittent broadcasting of behind-the-scenes depositions, for the committee’s purposes, in making its case and trying to explain what happened in the days and weeks leading up to January 6th.

Preet Bharara:
There are kind of two categories of information and evidence that I think were brought to bear at the hearing. The first category I think advanced the ball on the question of whether or not there is legal liability, legal responsibility, culpability on the part of people around Trump and Trump himself. I’ll give you two examples. Example number one is the amount of evidence that they pieced together to suggest, more than we realized before, how much Donald Trump was connected to the march on the Capitol on January 6th. We heard previous testimony from Cassidy Hutchinson and others that Trump really wanted to go to the Capitol. But we learned further this week that he had a draft tweet that never was ultimately sent, but indicated his interest in going to the Capitol.

Preet Bharara:
We heard from people who said that they expected Donald Trump to go to the Capitol. We heard about the things that he ad-libbed in his speech on January 6th, relating to the strong action people must take and the fight they must take to the Capitol. And there were other examples as well, connecting Trump to the need and desire for action to be taken, obstructive action to be taken at the Capitol. I think that goes some way toward increasing the potential liability on the part of Donald Trump.

Preet Bharara:
And then second, we heard more testimony and more information about that fateful, crazy, many-hours-long December 18th meeting in the White House that ranged from the Oval Office to the Residence, to other rooms in the White House as well. That’s the meeting where there was a draft executive order that was circulated, which would’ve given the power to the Department of Defense to seize voting machines, where Donald Trump talked about giving Sidney Powell, the Kraken lawyer, a security clearance and making her a special counsel, and where some of the main-line lawyers for the White House and the White House counsel’s office were arguing strenuously with and cursing at some of the Kraken lawyers who were advocating extreme action and the declaration of martial law.

Preet Bharara:
So the evidence relating to Trump in connection with the march on the Capitol and weeks earlier, this meeting where lots of things were discussed, extreme things were discussed that Donald Trump seemed to favor, I think all go to the legal question of what liability there might be and how this might impact the Justice Department’s decision about an investigation and a potential prosecution.

Preet Bharara:
The second category of information I think is very, very important as well. It doesn’t necessarily go straight to the legal question of criminal culpability, but it’s a little bit more in the nature of emotional testimony, human interest testimony, that may have a persuasive effect on some of Trump’s base and some of the people who have not thought the insurrection was in fact an insurrection, who have not thought that it was as bad as I think normal people and reasonable people think it was. I’m thinking in particular of the testimony of former Oath Keepers, one of whom said we are lucky that there was not more bloodshed at that event, who apologized to the Capitol Police officers, and who seems to have awoken from a cultish haze of extremism. And you hope that someone like that, who was caught up in this violent extremism but who has awakened from it and is counseling people that that’s not the way to go, that it’s not good for the country, not good for our harmony, not good for unity, has some effect on people who used to think like he did.

Preet Bharara:
The other example is this very stunning text exchange between two people very high up in the Trump community. These are statements made by Brad Parscale, the Trump campaign manager. He was making them in texts to Katrina Pierson, another high-up official. And what does Brad Parscale say in the wake of January 6th? “A sitting president asking for civil war. This week, I feel guilty for helping him win.” He goes on in the text exchange with Katrina Pierson to say he feels guilty about that and also said he believed that Trump’s rhetoric led to a woman being killed. And Katrina Pierson says it wasn’t the rhetoric. And Parscale responds, “Katrina. Yes, it was.” Again, that text exchange does not have a lot of direct bearing on legal responsibility, but maybe it does have some bearing on the degree of personal responsibility folks will feel having contributed to Trump’s rise and his strident persistence in the big lie.

Preet Bharara:
Now at the end of the hearing on Tuesday, Liz Cheney, as she has before, dropped something of a bombshell. She said, “After our last hearing, President Trump tried to call a witness in our investigation, a witness you have not seen yet in these hearings.” She goes on to say, “That person declined to answer or respond to President Trump’s call and instead alerted their lawyer to the call. Their lawyer alerted us.” And then she went on to say that they have referred the matter to the Department of Justice.

Preet Bharara:
So that was a significant moment. It was a serious moment. It was a serious thing that needs to be looked at. It was certainly unethical. It’s certainly problematic. It certainly suggests something, based on the track record of Donald Trump, that everyone should be worried about and concerned about. The committee was right in bringing it up. The committee was right in referring it to the Justice Department. But the question that I’ve gotten a lot in the last couple of days is, does this constitute criminal witness tampering, obstruction of justice? And there’s a statute that talks about witness tampering and lots of people have been commenting on it.

Preet Bharara:
So to repeat, I think it’s serious. I think it’s something that needs to be taken seriously. I think it’s right for Liz Cheney to mention it, if for no other reason than to make sure that Donald Trump and people around him are on notice that they know about this conduct and they take it seriously and it looks bad. And maybe they’ll have the intelligence and the pragmatism not to try to do that kind of thing again.

Preet Bharara:
However, I disagree with some of the commentators who are kind of blithely stating that this is an easy criminal case to bring or a clear criminal case to bring by itself without additional evidence. Now, it is certainly true that attempts to commit a crime can constitute a crime themselves, but here there’s not really much to go on for purposes of proving a violation of a criminal statute beyond a reasonable doubt to a unanimous jury. Now, most importantly, no conversation ever happened. This witness did the right thing, refused the call and reported it to the authorities. So there’s no way of knowing or being able to prove, certainly not beyond a reasonable doubt based on the evidence here alone, what the purpose of Trump’s call was.

Preet Bharara:
You and I know that it was almost certainly nefarious. You and I know that he’s had a track record of doing this. In fact, I know, especially from my own experience, that Donald Trump once called me, I think, for an unethical and possibly bad purpose the day before I got asked to resign. I too didn’t take the call. I too reported it to the Justice Department and made a statement about it after the fact. But I can’t prove because I didn’t take the call. I didn’t connect with the former president. I can’t prove that he was going to ask me to do something inappropriate, unethical, or illegal. And I think that’s the same situation you have here.

Preet Bharara:
Now, I guess it’s possible if there are other witnesses who can say, “Well, Donald Trump told me that he was calling a witness for the purpose of telling the witness not to testify.” That would be helpful to a criminal prosecution. But based on what we have here alone, without more and without a further investigation, I think a criminal prosecution based on an incomplete call is difficult. We’ll be right back with my conversation with Nita Farahany.

THE INTERVIEW:

Preet Bharara:
Does a gene for violence exist? Can we trust machines to decide if someone should go to jail or not? What is the proper use of neuroscientific evidence in the courtroom? Nita Farahany is a leading voice on these and other legal and ethical questions associated with emerging technologies.

Preet Bharara:
Nita Farahany, welcome to the show.

Nita Farahany:
Thanks for having me.

Preet Bharara:
So before we get to the various things we want to talk about, can I ask you something about your education?

Nita Farahany:
Sure.

Preet Bharara:
How many degrees do you have? It’s a lot of degrees.

Nita Farahany:
Yeah.

Preet Bharara:
I got exhausted just reading about your degrees. You have PhDs and masters and bachelor’s degrees. Why do you have so many degrees?

Nita Farahany:
There’s a story about it, Preet.

Preet Bharara:
Okay. Can you explain please?

Nita Farahany:
Well so first of all, I’m first generation. My parents grew up in Iran. Education has always been incredibly important to my family. And so my dad, I’m the youngest of three kids, told us, “As long as you are in school, I will support you.” And being the last child and being in school as long as I was, eventually he had to say, “No, it’s enough.” No, but seriously, I would say it looks like a big grand master plan to bring together the things that I focus on, which is science and philosophy and technology and law. But in truth, I just kept getting degrees in the areas that I was really interested in and ended up in this place.

Preet Bharara:
What was the hardest degree to earn?

Nita Farahany:
Probably the PhD, just because you have to really bring together a lot of different disciplinary thought into a dissertation. And it was the first and longest sustained writing project that I had undertaken.

Preet Bharara:
And when you describe what you are, do you have to use six words always? Or do you just say lawyer? Or do you say bioethicist? Or do you say philosopher? What do you say?

Nita Farahany:
So my newest is to say I’m a futurist and legal ethicist.

Preet Bharara:
A futurist?

Nita Farahany:
Yeah.

Preet Bharara:
Interesting. Do you write science fiction also?

Nita Farahany:
Well, I mean, I’m trying it on. I’m trying it on because really what I’m doing is I’m looking at developments in science and technology and trying to figure out what the ethical and legal implications are in the future. And that seems to me like what a futurist does, but I don’t know, maybe I’m wrong about the use of the term.

Preet Bharara:
Yeah. No, look, it’s better than being a presentist.

Nita Farahany:
It is, right? Because a presentist misses everything that’s coming. So anyway, I’m trying that on right now. And for a while I was just a legal ethicist, and then I felt like that didn’t really capture it. Or somebody doing emerging tech and law and philosophy, that seemed like too many words, so I’m trying on futurist.

Preet Bharara:
Okay. Futurist is good. Nita the futurist. So speaking of the future, let’s talk about the brain. And I’ll just tell folks that I saw you give a presentation at a conference recently and I was blown away, and I think many people will be, because I think that folks aren’t thinking about the future in a particular way. And that is, to begin with, this issue of brain monitoring. It is not currently the case that if we attach electrodes to my head, that you’d be able to read my thoughts, is that correct?

Nita Farahany:
That is correct.

Preet Bharara:
But you can tell some stuff.

Nita Farahany:
Yeah.

Preet Bharara:
And so my question is, what is the current state of brain monitoring? What is the ability of third parties to tell what you or I are thinking or feeling?

Nita Farahany:
Well, first of all, it depends on what kind of technology I attached to your head. So let’s limit it to consumer technology, the kind of thing that you could easily put on, not in a hospital or in a controlled setting, and something that would have electrodes that are inside of a baseball cap or in a headband or in something like your AirPods that have sensors in them that detect what’s called brainwave activity. So let’s just start with kind of the simple, which is every time you have a thought or a mental state, attention, fatigue, anything like that, your brain is firing what are called neurons in the brain. And when you have a particular thought or a particular brain state like fatigue or attention, there are patterns of neurons that are firing at the same time, giving off tiny little electrical discharges. And that’s what the electrodes can detect, is those discharges.

Nita Farahany:
As artificial intelligence has gotten better and better, we can start to decode what those patterns look like, because it turns out that the patterns look pretty similar across different people for different things that you’re doing. And we can do things like, are you tired? Are you falling asleep? Are you paying attention? We can even decode some simple things like a number that you’re thinking of or words-

Preet Bharara:
Right. So stop there. I saw that you have mentioned that in your writing. How is that possible?

Nita Farahany:
Well, so it’s possible in a number of different ways. It could be something as complicated as a program that has been developed where you can navigate around a screen. Like, suppose I’m looking at a screen and it has all of the different letters on the screen, A through Z, and I think up, down until I locate and kind of focus on a particular letter. The algorithm could trace that kind of sense of up and down to pick letters. But you could also do it in other ways, which is you could use technology that Facebook, or Meta now, has been developing. They bought a company called CTRL-Labs, where they are decoding through a wristband your intention to type. So it’s picking up signals from your brain, sent to your arm, sent to your wrist, which would then be used to tell your hands to type on a keyboard. And it can decode what those firings mean. And then you don’t have to type on a keyboard. You could just think about typing and it could still pick up those same signals and decode the letters in that way.

Nita Farahany:
So there’s a lot of different ways to get at it. It’s also possible to picture a letter in your mind or picture a number in your mind, or to have a prompt on a computer screen that you’re reacting to. Like if I think of 1 and then you flash a bunch of numbers up and 1 is one of the numbers that flashes, there’s a brain signal that could pick up that one. So there’s a lot of different ways to get at it, but some of them require you doing so really intentionally and some of them can get at your unconscious brain processes to decode numbers or letters that you’re thinking of.
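The flashed-number example Farahany gives is, in spirit, how P300-style “speller” interfaces work: the item a person is attending to evokes a slightly larger brain response shortly after it flashes, and averaging over repeated flashes makes that difference detectable. A minimal, purely illustrative sketch of the selection step follows; the data is synthetic, the 256 Hz sampling rate and time window are assumptions, and this is not any real product’s decoder.

```python
# Toy sketch of the "flash items, watch the evoked response" idea (P300-style speller).
# Entirely illustrative; the epochs here are synthetic, not real recordings.
import numpy as np

def pick_target(epochs_by_item):
    """epochs_by_item maps each candidate item (e.g. '0'..'9') to an array of
    EEG epochs of shape (n_flashes, n_samples) recorded after that item flashed.
    The attended item tends to evoke the largest average deflection roughly
    300 ms after the flash, so each item is scored by the mean amplitude of its
    averaged epoch in that window."""
    scores = {}
    for item, epochs in epochs_by_item.items():
        avg = epochs.mean(axis=0)   # average over repeated flashes to beat noise
        window = avg[75:110]        # ~300-430 ms at an assumed 256 Hz
        scores[item] = window.mean()
    return max(scores, key=scores.get)

# Synthetic demo: item '7' gets an artificial evoked bump added on top of noise.
rng = np.random.default_rng(1)
epochs = {str(d): rng.normal(size=(20, 256)) for d in range(10)}
epochs["7"][:, 75:110] += 1.5
print(pick_target(epochs))  # should print '7'
```

The design point is that nothing here "reads a thought" directly; it detects which external stimulus the brain reacted to most strongly, which is why repeated flashes and averaging are needed.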

Preet Bharara:
Now, is this a beautiful and amazing thing or is it a scary and frightening thing?

Nita Farahany:
Yes and yes.

Preet Bharara:
All right. Let’s talk about the good first.

Nita Farahany:
All right.

Preet Bharara:
I would imagine that this has practical applications like for people who are immobile or paralyzed in some way. They can communicate through this method. Is that true?

Nita Farahany:
Yeah. So many consumer technologies actually start as assistive technologies, like things that are meant to be accessibility technology for people who have disabilities in communication. A really great breakthrough happened recently. There was a patient with pretty far progressed ALS, to the point where he could no longer even move his eyes. So he couldn’t communicate in any way with other people. And through a series of electrodes that had been implanted in his brain with his permission before he lost any ability to speak, a few months later, after he could not even move his eyes or anything else, he was able to use those electrodes to, kind of one character at a time, type out his thoughts. His first thought was, “I want a beer,” which is pretty funny.

Preet Bharara:
That would’ve been my thought too.

Nita Farahany:
Yeah. I mean, you’re locked in and you’re just like, “Can nobody hear me? I want a beer.”

Preet Bharara:
“I need a beer.”

Nita Farahany:
Exactly. But since then he’s been able to communicate things with his four year old son to tell him that he loves him and to ask for particular food that he wants and to be able to ask for his needs to be met where he couldn’t do so before. And so, amazing that you could take just brain signals for somebody who has no ability to communicate with the outside world in any way, shape, or form and allow them to be able to communicate again. That’s amazing.

Preet Bharara:
That is amazing. Okay. So what’s the problem then?

Nita Farahany:
Well, so as I said, let’s back up, which is, anytime that we develop technology, a lot of these technologies start as incredibly expensive and start as technologies that are really designed to be therapeutic in purpose. Eventually even things like speech-to-text, that was originally designed as assistive technology. And now it’s a really helpful technology that people use in their phone to type while they’re driving or to have a memo or other things like that.

Nita Farahany:
The same kind of technology that is used to communicate brain to text or to be able to operate your lights or play a video game or ultimately to be able to replace, for example, your keyboard and your mouse will be really enticing for the average person because it’ll enable us to do things more quickly, more seamlessly. Imagine driving your car, and instead of even speaking out loud, I don’t know about you, but Siri gets it wrong about half the time for me still. So imagine just being able to think about it and have your thoughts being able to communicate very easily. That’s exciting again, but the implication of doing so is that suddenly the one space that you truly had for privacy, the one refuge that existed was your brain, was your thoughts. And suddenly, you’re going to be opening that up to Facebook and to Google and to governments worldwide. And I worry that when that last refuge of privacy falls, that the implications for society will be profoundly bad unless we put into place some safeguards around it.

Preet Bharara:
Was it better not to go down the road at all?

Nita Farahany:
Well, I’m also a bit of a tech inevitablelist. I think it’s going to happen.

Preet Bharara:
Inevitablelist?

Nita Farahany:
Yes, I made that up. Do you like it?

Preet Bharara:
Maybe that’s better than futurist.

Nita Farahany:
Yeah.

Preet Bharara:
Maybe you’re an inevitablelist.

Nita Farahany:
Maybe I’m a tech inevitablelist and legal… Yeah. I’ll try that one on. Yeah.

Preet Bharara:
Okay.

Nita Farahany:
Tech inevitablelist and futurist and… No, [inaudible 00:22:24] again.

Preet Bharara:
No, I get… Your point is there’s no stopping it.

Nita Farahany:
I think there’s no stopping it. And I think there are reasons that we’re going to want it. It can make society better. It can make our lives better. It can make it easier. It can make our interaction with technology more seamless. It can make us more productive, give us ways to enhance and improve ourselves, decrease our suffering. There’s a lot of promise. And I think it’s already here and the rest of it’s coming. And so the question from my perspective isn’t prevent it from happening. It’s, how do we maximize the benefits of it and minimize the downside risk?

Preet Bharara:
Let’s talk about some other practical applications. So you mentioned tech companies; largely, people can choose and volunteer to be plugged in or not plugged in. But there are some workplaces where this technology is manifesting itself. And one example I remember you gave when I heard your talk was long haul truckers and the ability of employers to figure out if the truck driver is alert or tired to prevent accidents. And that seems like a very good and useful purpose to protect not only the driver, but also the cargo and make sure everyone is safe. Is that good or bad?

Nita Farahany:
I think that’s good. And again, I think it depends on the safeguards we put into place. So as a starting place, truck drivers, long haul drivers, pilots, other people who are operators of commercial vehicles, but especially long haul truck drivers, already have a number of devices that have been put into place to monitor their fatigue, whether it’s technology that’s built into the car to see how they’re driving. Like, are you making micro changes to the steering wheel? Even some cabs of trucks have already put cameras in place that are reading what’s happening in the truck. And I think in some ways that’s more intrusive into the privacy of the truck driver than having a baseball cap on that has electrodes that just detects their fatigue levels.

Nita Farahany:
There is a company called SmartCap that has been selling this technology for a number of years to track fatigue levels and then present on a scale from one to five, from kind of wide awake to falling asleep, where the person is for fatigue levels and give real time alerts to both the person who’s driving but also to their manager or kind of control center to be able to see the fatigue status of their employees. And that’s good because the leading cause of accidents is drowsy driving and the implications of those accidents are profound. People are dying, it’s causing a tremendous amount of suffering. It’s causing huge amounts of economic loss. We should want to opt into the best possible way to be able to detect and to find out that a person is falling asleep.

Nita Farahany:
The problem I see is that in order to detect whether or not a person is falling asleep, you’re collecting potentially a whole lot more information from their brain than just whether or not they’re asleep, and then you’re processing it through an algorithm. And so the question is, can we limit what the company is collecting, not just the employer, but the company who is operating the device? Can we limit what they’re collecting and have it overwritten on the device? Can we limit what the company receives so that they can’t mine it to find out like, “Is this person suffering from cognitive decline, or are they daydreaming about their coworker when there’s a no intraoffice romance policy?” What are the things that they’re going to look for and how do we prevent them from looking for those other things? But I think if it’s limited to like, “Here’s your fatigue score,” the minor intrusion into a person’s mental privacy to get that information seems well worth the societal benefit of having people not barreling down a highway while they’re asleep.
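The safeguard Farahany describes, scoring fatigue on the device and letting nothing else leave it, can be made concrete with a small sketch. The code below is purely illustrative: it is not SmartCap’s or any vendor’s actual pipeline, the theta/beta power ratio is just one commonly cited drowsiness correlate standing in for a real model, and the thresholds are invented.

```python
# Illustrative sketch only: on-device fatigue scoring with data minimization.
# Not any vendor's real algorithm; thresholds and the theta/beta heuristic are
# placeholders for a proper drowsiness model.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed headset sampling rate in Hz

def band_power(eeg, low, high):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    return psd[(freqs >= low) & (freqs < high)].mean()

def fatigue_level(eeg):
    """Map a raw EEG window to a 1 (wide awake) .. 5 (falling asleep) score.
    Rising theta (4-8 Hz) relative to beta (13-30 Hz) is a widely reported
    drowsiness correlate; the cutoffs here are made up for illustration."""
    ratio = band_power(eeg, 4, 8) / band_power(eeg, 13, 30)
    cutoffs = [0.5, 1.0, 2.0, 4.0]
    return 1 + int(sum(ratio > c for c in cutoffs))

def report_to_employer(driver_id, eeg_buffer):
    """Data minimization: only the score leaves the device; raw EEG is dropped."""
    level = fatigue_level(np.asarray(eeg_buffer))
    eeg_buffer.clear()  # overwrite the raw brain data on the device itself
    return {"driver": driver_id, "fatigue_level": level}

# Demo with synthetic noise standing in for 10 seconds of one channel:
buffer = list(np.random.default_rng(0).normal(size=FS * 10))
print(report_to_employer("driver-42", buffer))
```

The point of the sketch is architectural, not algorithmic: whether the raw signal is summarized and discarded on the device or streamed to the company is exactly the design choice Farahany argues should be regulated.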

Preet Bharara:
Wait. So are you saying that the cap with electrodes can detect amorousness? Feelings of love?

Nita Farahany:
Potentially. There are neural signatures that suggest kind of love versus lust. And you can pick up things like amorous feelings and even distinguish amorous feelings of love versus lust that a person may be experiencing.

Preet Bharara:
Well, is that good or bad for couples? I’m going to go back to my earlier question. Should we not go down this road at all?

Nita Farahany:
Well, I met your lovely wife and it seems like it would be all good for the two of you in all of the wonderful things that she would decipher. But I tell people to think about it this way, which is, think about all the little white lies we tell throughout the day, like, “No, no, that dress looks fabulous on you.” Or your friend gets a new couch and you’re like, “Oh yeah, I love that mustard yellow that you chose. It’s really amazing.” You tell these white lies because they don’t really matter and you know that they can help smooth out relationships between individuals. But if you’re wearing a headset that can be decoded at all times that reveals what you really think, I think it could be problematic. Now, chances are, even if you could detect amorous feelings or not, your wife isn’t going to have access to that information unless you give it to her.

Preet Bharara:
No, just your employer.

Nita Farahany:
Just your employer. Just your employer, which could be problematic. No. I mean, I worry about it in the employment setting because especially in the employment setting, and part of the reason you heard me talk about that, that’s one of the chapters in my book that’s coming out, because I think it’s such an interesting setting to think about these issues. People might say like, “Oh, well you can just quit and go work somewhere else.” Except surveillance tech is in almost every workplace now, whether that’s cameras or keystroke-monitoring bossware that has been integrated into most people’s computers during the pandemic. Something like 87% of companies said that they’ve started to implement bossware technology to track what people are doing, even turning on their cameras to see if they’re at their desk.

Nita Farahany:
And there are very few, if any, protections for people other than the idea that the workplace is at will. But if there’s nowhere else to go because every worker is required to have some kind of technology that’s in place, it can be problematic. And there are other ways that it’s not just tracking fatigue levels. There are other kinds of things that workplaces will start to track using people’s brains.

Preet Bharara:
There’s one example of this that I just learned about recently, because I’m dumb. My kids were telling me that for standardized tests, which often now have to be taken in the home because of COVID, I think you’re supposed to download software that can tell if you’re looking away from the computer screen or you’re not focused enough on the test, such that it’s possible you’re cheating. So it’s not just in the workplace. It’s testing as well.

Nita Farahany:
Yeah, that’s creepy. So during the pandemic, Zoom actually rolled out a feature that was incredibly unpopular that basically did the same thing and was offering it to employers, which was to find out how often the person minimized the Zoom screen on their computer to do something else during a Zoom meeting.

Preet Bharara:
Oh yeah.

Nita Farahany:
People were outraged by this. They were like, “We definitely do not want that.” And I certainly wouldn’t have wanted that or still don’t want that for other people to know every time I minimize the Zoom screen that I’m in a conversation with them. So you’re right, I mean it’s in every setting. I find it particularly pernicious when these things start happening with children, because I think there’s a thing I worry about, which is normalization of surveillance. And I feel like when you start with children such that their everyday experience is having their computer monitored, their eye tracking monitored when they take tests or ultimately their brains being monitored while they’re in the classroom, that I think starts to create from the earliest ages the acceptance of the technology in a way that then makes us blind to the risk.

Preet Bharara:
Well, that’s interesting that you say that because that I think is universal and not recent. My oldest is 21. I think one of the first things we bought was a baby monitor so we could surveil at all times. I know lots and lots of people who have so-called nanny cams. So that’s a bad thing, you think?

Nita Farahany:
I mean, it’s not bad or good, right? I don’t think technology is bad or good. It’s not technology itself that’s evil. It’s how we use it and what kinds of misuses we allow for it.

Preet Bharara:
Was that usage bad or good?

Nita Farahany:
It’s normalizing. Whether that’s bad or good, I don’t know. I mean, we do the same thing. We have cameras in our kids’ bedrooms. Our oldest now, who’s seven, she would never cry when she woke up. She would just sit up and look at the camera and wave. You know? It feels like an expectation.

Preet Bharara:
They have a future in broadcasting.

Nita Farahany:
Yeah. There you go. But you know, the idea that from the youngest age, like preverbal she could connect up like, “I’m being watched. And if I just wave, my parents will come and find me,” it definitely-

Preet Bharara:
Yeah. Everyone is the star of The Truman Show now in a way.

Nita Farahany:
Yes. Yes they are. It’s unsurprising that she makes fake YouTube videos anytime I loan her my phone because they grow up normalized with it. Now, is it a bad thing? What it does is it can desensitize people to the risk. And I think if we counteract that with vigilance of surfacing what the risks are and trying to put into play safeguards against it, it doesn’t have to be a bad thing. But it does require more work, because the more it’s normalized, the more we have to make ourselves aware of what it is that we’re opting into.

Preet Bharara:
Yeah. I mean, another example of that, that actually we had a debate about in my house some time ago, was that function on the iPhone you can have where another person, like a parent, can know exactly where the child’s iPhone is. And presuming it’s on the person, on the child, you can know based on the GPS function where your child is at all times, which, I know this is not quite technically correct, but I think is a 4th Amendment violation. And I told my children they could object to that. So I don’t do that.

Nita Farahany:
Well, I mean maybe in your prior role it would’ve been a 4th Amendment objection.

Preet Bharara:
Yes.

Nita Farahany:
Right? But I think parents tracking their kids… I mean, as you know, the 4th Amendment turns on reasonable expectations of privacy and-

Preet Bharara:
And state action.

Nita Farahany:
… and state action, which is why I said your prior role, right? But the question really, once you normalize technology, is, what is the reasonable expectation of privacy? And if kids from one year old are waving at the cameras that are watching them at all times and realize that their every movement is being tracked on GPS, do they have a reasonable expectation of privacy? Is there some inherent concept of reasonableness or is it something that shifts as we shift our use of technology?

Preet Bharara:
I want to get to the legal implications of all this in a second, but just for a moment to go back to the workplace, you said something interesting about how there was this function that was offered with Zoom and people hated it and they rebelled against it. In the workplace, do you think that workers will have enough power and authority to rebel against and reject certain kinds of surveillance? So for example, the cap for the long haul trucker seems to make reasonable sense. But other technology usage and surveillance that’s invasive, for an office worker, to figure out how alert they are as they’re doing data entry or things like that, are workers really going to have the ability to rebel against that? Or do you worry about that a little, or a lot, or not at all?

Nita Farahany:
I worry about it a lot. So the only instance in which the long haul truckers or miners were able to keep the use of that technology out of the workplace was when there was a preexisting union, where a lot of preexisting unions negotiate for the right to review surveillance technology before it’s implemented in the workplace. But the question is, could the use of this technology prevent a union from ever forming to begin with? So a different possible usage of this technology, and a number of companies are trying to sell this to enterprise as an enterprise solution, is to track people’s attention and focus during the workday. And rather than just looking at eye tracking or keystrokes, or what’s in your Microsoft 365 environment, literally using electrodes on the brain to see whether or not your mind is wandering or focused or paying attention, and even being able to discriminate between the kinds of tasks that you’re involved in. Like, are you focused on social media or are you focused on coding or documentation for code?

Nita Farahany:
And if you’re monitoring for attention at all times, which means that it’s not just the factory worker or the long haul driver but also the person who’s sitting at their desk and writing and working on whatever’s on their online environment, you can also start to pick up other things from the brain. One of the things that’s a potentially interesting application is to see brain synchronization between people. So as you work on a problem with another person, your brain starts to show patterns of alignment or synchronization between them. And so when they’ve studied, for example, children in the classroom who are working collectively on a problem, you can see which of the kids are working together and which of the kids are not part of that group working together by looking at synchronized patterns in their brains.

Nita Farahany:
So you can imagine, you start looking for patterns of synchronization in the brain for people who should not have synchronization in the brain because they’re not working with each other in the workplace. And you start to see this kind of collective synchronization. And you suspect that they’re starting to try to develop some kind of collective action to prevent workplace surveillance or demand better terms of the working environment. There are already a lot of examples of using surveillance to try to break up or prevent unionization from happening in the workplace. This is another more powerful and more discreet tool that could be used to try to figure that out. So I worry that it will be hard to stand up against this technology if we don’t put safeguards in place before it becomes widespread in the workplace.
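The “synchronization” Farahany describes is, at its simplest, a statistical similarity between two people’s recorded brain signals over time. The sketch below is only meant to make that idea concrete: it scores synchrony as a sliding-window correlation between two signals, whereas real inter-brain studies use more careful measures such as phase-locking, and the signals here are synthetic.

```python
# Naive sketch of "brain synchronization": sliding-window correlation between
# two people's EEG traces. Illustrative only; not how production systems work.
import numpy as np

def synchrony(sig_a, sig_b, fs=256, window_s=2):
    """Return per-window Pearson correlations between two equal-rate signals."""
    win = fs * window_s
    scores = []
    for start in range(0, min(len(sig_a), len(sig_b)) - win + 1, win):
        a = sig_a[start:start + win]
        b = sig_b[start:start + win]
        scores.append(float(np.corrcoef(a, b)[0, 1]))
    return scores

# Two synthetic "workers" whose signals share a common driving component,
# standing in for people engaged in the same task.
rng = np.random.default_rng(2)
shared = rng.normal(size=256 * 10)
worker_1 = shared + 0.5 * rng.normal(size=shared.size)
worker_2 = shared + 0.5 * rng.normal(size=shared.size)
print(np.mean(synchrony(worker_1, worker_2)))  # noticeably above zero
```

Even this crude version shows why the scenario worries her: the measurement does not need to decode anyone’s thoughts, only to notice that two signals co-vary more than chance would suggest.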

Preet Bharara:
We’ll be right back with more of my conversation with Nita Farahany after this.

Preet Bharara:
So I want to talk about some of the legal implications of this. There’s been an intervening legal event since you and I met and I heard you present.

Nita Farahany:
Yes.

Preet Bharara:
And at that event you were asked a question about whether there is, or should be found, a right to mental privacy in the Constitution. Since that day, the Supreme Court has struck down Roe v. Wade, which seems to put into doubt the right to privacy in the Constitution. You have, on a number of occasions, written about the need for a right to cognitive liberty. What’s the fate of all that?

Nita Farahany:
So I’ve advocated for a right to cognitive liberty, which is really a bundle of rights. It’s the right to self-determination over your brain and mental experiences, the right to mental privacy and the right to freedom of thought. I’ve advocated that in order to recognize that, we update existing international human rights law related to each of those three to reflect the right to cognitive liberty. Mental privacy is, from my perspective, included in at least a couple of places in the UN Declaration of Human Rights. First, while it isn’t explicitly recognized in the general comment that attaches to the right to privacy, it is one that easily could be updated.

Nita Farahany:
Freedom of thought, which is included within the Universal Declaration of Human Rights and I think the precursor of freedom of speech even in the US constitution, includes a right to not have your thoughts accessed or used against you. And while it has traditionally been understood to be limited to religious liberties, only in the way it’s been interpreted not in the way it’s written, recently the Special Rapporteur for freedom of thought has been investigating and presented to the general assembly last year about the need to update that right in light of emerging technologies, including neuro technologies.

Nita Farahany:
I think it would be fair to say that Dobbs runs counter to existing international human rights law. And in fact, there was I think a letter that was recently written by the Special Rapporteur in response to Dobbs saying that. And so the question is, if it is not located in the Constitution according to the existing Supreme Court but it is located within international human rights law, that is, the right to decisional privacy and autonomy, how does it get recognized in the US and how does it get enforced in the US? And that’s an open question. I think it is possible to read a right to mental privacy distinct from Dobbs, to say that Dobbs may apply only in the instance in which there is a conflict between rights holders, that is, the pregnant person and the fetus, whereas you don’t have that kind of conflict when it’s mental privacy that’s at issue. But it’s definitely going to be an evolving area of law. I think Dobbs throws into question a lot about decisional autonomy and privacy.

Preet Bharara:
Is the main line of defense on this going to be the law and the constitution? Or is it going to be sort of in the workplace and in society, people who have a strong sense of ethics and establish norms? Isn’t that going to be the first place?

Nita Farahany:
I think it’ll be both, right? I think from my perspective, we need an update to international human rights law to make clear what the right is and what the set of rights are, which then in every particular context is going to require a set of explicit laws and norms to make it happen, right? So in the workplace, we’re going to have to recognize that mental privacy applies to the workplace and what that specifically means in context, right? And so from my perspective, that means even if you could recognize that mental privacy, like other privacy interests, is weighed against societal interests. That is, the interest of society in not having a person barrel down the road asleep in a 40-ton truck outweighs the intrusion on the individual’s mental privacy, but if and only if the only thing that you’re collecting is the data about their fatigue levels. There’s no reason that an employer would have a justification for collecting all of the other data and mining all the other data.

Nita Farahany:
So I think it’s going to be setting up those norms and recognizing what they mean in the education setting, in the workplace setting, when the government uses it to interrogate a person’s brain. All of those settings are going to require really specific applications of norms.

Preet Bharara:
So outside of the legal arena, there are other downsides to this kind of surveillance technology that I think you’ve talked about. So for example, if your employer decides that it’s very important to find out if you, as an office worker, are focused on the task at hand, but your job otherwise requires some amount of creativity and freedom of thought, a wandering mind is actually a good thing, right?

Nita Farahany:
No, that’s exactly right. So I mean, interestingly, many of the greatest strokes of genius and insight that happen, happen when people’s minds are wandering, not when they’re focused and paying attention. And so, if what you’re trying to do is have people focus for longer stretches of time to have greater concentration, and they’re getting a little haptic feedback, a buzz, every time their brain starts to wander, what you may actually be doing is preventing people from having the big thoughts and big insights that would really be the extraordinary breakthroughs that we need in society. So you may get a higher quantity of focus time, but you may get lower quality in the work that results. And so it’s not altogether clear that that’s what we should be doing at all times.

Nita Farahany:
We’re talking about it in the workplace. This is something that’s been trialed in educational settings too, to have children wearing these headsets and having their brains monitored while they’re in the classroom with little red, yellow, and green lights that light up to tell you whether or not they are focused and paying attention to the teacher or to whatever the assignment is at hand or their minds are wandering. And if you start from the youngest age trying to stifle people from having their mind wander and having creative thoughts, I worry about what that means for what society looks like, what kinds of creativity we can expect, what kinds of cultivation of personality and self and identity will occur as a result.

Preet Bharara:
And what about another issue, a larger issue of the role that this kind of technology can play in stopping dissidents in China or Iran or anywhere else?

Nita Farahany:
Yeah, I mean, in fact, that’s one of the things that really kind of brought me to the issue to begin with. At the time that I first started really writing about this, there were protests that were happening in Iran, and trying to talk with any family members there was very difficult because they really censored everything that they said for fear of being listened to on the phone or on FaceTime or any other medium. And so I started to think about this technology and wondered, “What happens when even your thoughts can be monitored?” And you can imagine the results would be, people would be chilled from trying to think dissident thoughts. They’d be chilled from trying to rise against it. Or worse, those thoughts could be detected and people persecuted as a result of their thoughts.

Nita Farahany:
Most people say, “Well, okay, well, I just won’t wear one. I will just stay away from neural devices.” But what I see coming is a future in which neural interfaces are the way that you interface with other technology. The kind of big push from a lot of companies, the big tech companies, is to replace our keyboard and our mouse with neural interfaces as the way that you interface with your computer and other technology. And there will be reasons and conveniences and outdated technology that it’ll replace that will lead us down that path where we’re wearing it all the time. If governments can already do things like get your Fitbit data and your GPS data and use that information against you, what’s to stop governments from getting your mental activity data from these devices to use it as well?

Preet Bharara:
Okay. So enough about actual brain. Let’s talk about artificial brains and-

Nita Farahany:
Let’s do it. Yeah.

Preet Bharara:
… artificial intelligence, which is another area that presents-

Nita Farahany:
Let me give you an easy segue there.

Preet Bharara:
Yeah, please.

Nita Farahany:
Can I give you an easy segue?

Preet Bharara:
Yes, please.

Nita Farahany:
Which is, there are these transhumanists. And transhumanists, they really hope and believe that one way we could get to general artificial intelligence is using something like brain computer interface, which maps all of our brain activity and then eventually uploads kind of structurally and functionally what a human brain looks like into a computer, such that you have a digital artificial intelligence.

Preet Bharara:
Okay. Is that going to happen?

Nita Farahany:
It might.

Preet Bharara:
Okay. But like with everything else, there are many benefits that present themselves and there are also downsides. And you have talked about and written about, as have others, the use of AI in the law, in healthcare, in the provision of healthcare. One example you and I were talking about just before we started taping was with respect to pulse oximeters.

Nita Farahany:
Yes.

Preet Bharara:
Tell folks what you were telling me.

Nita Farahany:
I mean, a kind of starting place that is important for artificial intelligence is that behind most artificial intelligence are machine learning algorithms. And these are basically just kind of big software programs that take large data sets and then are able to find patterns and make inferences from those patterns that we might not easily be able to make, or that would be very difficult to do without these algorithms because there are hundreds of thousands to millions of data points that are being used. But if the data isn’t very good or if it’s biased, you may end up with very biased patterns and results.

Nita Farahany:
So a colleague of mine who is a critical care pulmonologist, he works in the ICU here at Duke, approached me a few months ago because they had a study that they had recently published and he was interested in trying to share the results more broadly and was trying to think about what the implications for tech companies would be. And what they found, and the FDA has issued guidance on this very issue, is that pulse oximeters, the thing that people have been using at home to measure the amount of oxygen in their blood and that hospitals use as a really quick piece of information to tell them how a patient is faring and whether or not they need additional oxygen, that those are very bad with people with darker skin tones. That most of the data sets that had been used to train the algorithms to make the predictions, like, do you have 94, or do you have 96, or do you have 98, or, problematically, do you have 88% or below, have been very bad with darker skin because the device can’t see through the melanin.

Nita Farahany:
And so the result is that people are getting readings that are inaccurate if they have darker skin. And it’s inaccurate in a particular direction, which is they’re being told that their pulse oxygenation is better than it is. And if you would go into the hospital, for example, if you got down to 92 or 90, but it’s consistently telling you it’s 94 or 96 and you don’t go into the hospital, you may be missing what the critical piece of information is to get the care you need.
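The failure mode Farahany describes, readings that are not just noisy but systematically too optimistic for one group, is exactly the kind of error that only shows up when evaluation is stratified by subgroup. The toy simulation below illustrates that point with made-up numbers; it is not the Duke study’s data or method, and the 3-point offset is purely illustrative.

```python
# Toy illustration: a device whose error is systematic (not random) for one
# subgroup, revealed only by evaluating each subgroup separately.
# Synthetic numbers only; not the actual pulse oximeter study.
import numpy as np

rng = np.random.default_rng(3)

def simulate(n, offset):
    """True SpO2 values plus a device reading biased upward by `offset` points."""
    true = rng.uniform(85, 100, n)
    reading = true + offset + rng.normal(0, 1, n)
    return true, reading

true_light, read_light = simulate(1000, offset=0.0)  # well represented in training
true_dark, read_dark = simulate(1000, offset=3.0)    # systematically overestimated

for label, true, read in [("lighter skin", true_light, read_light),
                          ("darker skin", true_dark, read_dark)]:
    bias = np.mean(read - true)                      # average over-reading
    missed = np.mean((true < 92) & (read >= 92))     # low oxygen the reading hides
    print(f"{label}: mean bias {bias:+.1f} pts, hidden low-oxygen rate {missed:.1%}")
```

The overall average error across both groups can look acceptable while one group’s readings consistently hide clinically important low oxygen, which is why aggregate accuracy numbers are not enough.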

Preet Bharara:
That seems to be a very basic error.

Nita Farahany:
It is a very, very basic error.

Preet Bharara:
How was that only recently discovered?

Nita Farahany:
Well, I think part of the problem with AI is that a lot of times people are taking the technology as if it is infallible or objective truth. They assume because it’s trained on huge data sets that it must be accurate. But what they miss about that is that humans are putting together the data sets. And if humans have biases, those biases will be reflected in the data sets and will be replicated by the algorithms, and then we may not know how it’s doing what it’s doing; the result is kind of bad errors.

Nita Farahany:
He wrote an op-ed that he put in STAT News recently. It talks about a patient that he’d had in the ICU whose pulse ox readings looked consistently normal, but the patient was doing very poorly. And so they ordered a blood gas level to measure what the actual amount of blood oxygenation was. It came back with a significant discrepancy, and that patient died later that day despite doing everything they could to save him. There are profound consequences to relying on bad data. And when we’re using artificial intelligence, which in many ways has made our lives better in the healthcare system, we have to realize that the risk of biased data can cost lives.

Preet Bharara:
And when people say this phrase, they say, “Trust the science. Believe in the science,” do you find that phrase to be inadequate?

Nita Farahany:
Yes and no. I mean, so I’d say trust, but verify, right?

Preet Bharara:
So the Reagan version? You’re doing the Reagan version.

Nita Farahany:
Yes, the Reagan version of it. I believe in science and I believe that scientific progress is critical to the advancement of humanity and to alleviating so much of our suffering and diseases, and even helping us get out of this pandemic. But there’s a big but, which is, we also shouldn’t blindly trust science without realizing that there are human biases that go into any institution, including a scientific one. You asked me about my degrees earlier on. My degree in philosophy was on philosophy of biology. Philosophy of biology critically looks at the assumptions in science and evaluates those assumptions and recognizes that science and technology are also human made institutions. So I might be a little more biased than others.

Preet Bharara:
Well, is one way of looking at that that it’s just the application of more science to… In other words, you have a scientific process?

Nita Farahany:
So I think it’s not just science, right? I mean, because yes, the scientific process helps us to be able to figure out where there are problems by testing it, but we have to bring other disciplines to do so, right? If you go at it without thinking about the sociology of science or kind of the institution of science as an institution that has biases baked into it, biases that are driving it, biases that are informing its data sets, you may not look for that and test for that.

Preet Bharara:
I want to talk about AI in the courtroom because that’s obviously fascinating to me and among the things that I want to discuss with you. And so it is actually being used, the use of machine-driven algorithms, in a couple of places and maybe more. But the two places in which I know that it is being used, and there’s some controversy, are with respect to the granting of bail, where one of the central worries in connection with the question of detention pending trial is whether the person will abscond and not show up, engage in flight; and the other is in sentencing, where one of the factors is the likelihood of recidivism. For most of our history as human beings that have had tribunals, those things have been determined by human beings, who are fallible. What’s the role of technology here and is it working or not?

Nita Farahany:
So technology is being worked into the courtroom in a lot of ways, and AI in particular, right? There are systems like COMPAS, which has been used in the United States to assist decision making by giving risk scores to judges, and in New Jersey there’s a system being used for bail determinations. I think, first, it’s important to realize that anytime you’re using artificial intelligence to augment or replace human decision making, it’s really important that we know how the system works and what it’s doing. The reason I say that is because if you can’t see it, if there’s no transparency into how it’s coming up with a risk score, for example, then it might be doing so based on factors that would worry us if we knew about them explicitly. If it was explicitly taking race into account, or explicitly using proxies for race, and the result is that it’s making a recommendation against bail every time you have an African American male between the ages of 20 and 35, but not for the same white male, then we should start to be worried about it.

Nita Farahany:
And it turns out that’s what’s happening with some of these systems: they are replicating some of our existing biases and maybe making them worse, because we can’t see into what’s happening. So, one, it’s important to realize that. Two, we have to ask, what is it that they’re replacing? Because if a system is better than the existing biased way of making bail or sentencing determinations, then even if it has bias in it, it still might be better than our existing bias. And so you have to ask both questions. We can’t expect technology to be perfect. Humans aren’t perfect. The question is, is it better or is it worse? And one way to make it better, much better, is to have transparency into how it’s making its decisions so that we can evaluate it and check those decisions.
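
To make the proxy problem Farahany describes concrete, here is a purely hypothetical sketch, not COMPAS or any real system: a model that never sees race can still produce disparate risk scores if it is trained on a feature, like recorded arrest history, that is itself shaped by biased enforcement. All of the names, numbers, and distributions below are invented for illustration.

```python
# Hypothetical illustration only. A risk model that never sees "group"
# (the protected attribute) can still score one group higher if it is
# trained on a proxy feature shaped by biased enforcement.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B; never given to the model

offend_rate = 0.3                          # identical underlying behavior in both groups
p_arrest = np.where(group == 1, 0.9, 0.4)  # biased enforcement: group B is arrested more often

prior_arrests = rng.binomial(5, offend_rate * p_arrest)   # recorded arrest history (the proxy)
rearrested = rng.random(n) < offend_rate * p_arrest       # label is re-ARREST, not true re-offense

model = LogisticRegression().fit(prior_arrests.reshape(-1, 1), rearrested)
risk = model.predict_proba(prior_arrests.reshape(-1, 1))[:, 1]

# Despite identical underlying behavior, the proxy pushes group B's scores higher.
print("mean risk score, group A:", round(risk[group == 0].mean(), 3))
print("mean risk score, group B:", round(risk[group == 1].mean(), 3))
```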

Preet Bharara:
Isn’t it the case, and maybe I’m wrong about this, that with respect to some of these determinations, lawyers for defendants who are subjected to them have wanted to understand transparently all the data that’s being used, the way it’s being manipulated, and what the algorithms are, but that is being resisted by both the government and the companies. Is that true?

Nita Farahany:
Yeah. So I mean that’s one of the big problems with a lot of the companies who are building these systems. They claim that the data, or the source code for how it’s making its determinations, are trade secrets. And so they resist making it open or transparent. One of the researchers, a computer scientist at Duke, has gained a lot of acclaim, her name is Cynthia Rudin, because her argument has been that we should be building these systems transparently from the ground up. She actually did a really close look at COMPAS. ProPublica had written a piece about how COMPAS was incredibly biased. And because they couldn’t crack it open to look at everything that was inside, because the governments and the companies are resisting making that data available, they tried to test and reverse engineer it to see if they could show that it was explicitly taking race into account. And their conclusion was that it had to be somehow basing its determinations on race.

Nita Farahany:
She replicated those studies and found that it probably wasn’t race itself. It was probably more likely things like prior arrests and sociodemographic factors, which reflect existing bias in the system. But she has also been able to show that if you build these algorithms from the ground up in a way that is explainable to begin with, rather than having to fight with the companies or with the government to get the information, we would be able to know from the get-go what they’re taking into account. And she’s had success, in every one of the systems that she’s replicated, building an explainable model that is just as accurate, if not more accurate, at predicting the things we’re trying to predict, like who should get a blood gas test or who is likely to recidivate.
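
As one illustration of what "explainable from the ground up" can mean, here is a toy points-based scorecard, a minimal sketch in the spirit of interpretable models rather than Rudin's actual work or any deployed system. Every factor and weight is visible, so the basis for a score can be audited and contested; the factors and point values are invented for the example.

```python
# Illustrative only: a transparent, points-based risk score where every
# factor and weight is visible. The factors and points are made up for
# this example, not taken from any real system.
from dataclasses import dataclass

@dataclass
class Defendant:
    age: int
    prior_convictions: int
    failed_to_appear_before: bool

def risk_points(d: Defendant) -> int:
    points = 0
    points += 2 if d.prior_convictions >= 3 else 0
    points += 1 if d.age < 25 else 0
    points += 2 if d.failed_to_appear_before else 0
    return points  # 0 (lowest risk) .. 5 (highest risk)

def explain(d: Defendant) -> str:
    reasons = []
    if d.prior_convictions >= 3:
        reasons.append("+2: three or more prior convictions")
    if d.age < 25:
        reasons.append("+1: under 25")
    if d.failed_to_appear_before:
        reasons.append("+2: prior failure to appear")
    return "; ".join(reasons) or "no risk factors"

d = Defendant(age=22, prior_convictions=1, failed_to_appear_before=False)
print(risk_points(d), "-", explain(d))  # prints: 1 - +1: under 25
```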

Preet Bharara:
I guess the question is how do you measure if the algorithm is doing a better or worse job than the human being? Some things maybe are measurable more easily than other things. Is that true?

Nita Farahany:
Yeah. So, I mean, recidivism is something that you can tell over time, right?

Preet Bharara:
Yeah.

Nita Farahany:
You can tell if somebody is re-offending over time, and you can look at how long we’ve been using these assistive technologies like COMPAS or the New Jersey system, and then compare them to how judges have done and what the rate of recidivism is. You can do that. You can similarly look at the historic data sets that we have for healthcare decisions, see how those historic outcomes compare to the models where you’ve implemented the newer technologies, and then compare the two and see which one does better.

Nita Farahany:
Some people are doing controlled studies where they’ll implement a system to predict who gets additional testing, who gets a greater workup when you come into the hospital because there’s a suspicion of a heart attack, versus who doesn’t, and use the algorithm to test two different groups, one based just on doctor recommendation and one based on the algorithmic recommendation, and see which group fares better. So you can do some of that, and that’s what people are doing to try to compare the two. I think the question, when you have really high stakes decisions like healthcare or jail, sentencing, or bail determinations, is, do you have to do it blindly? Or can you do it with a model that requires and forces it to be transparent so we know how it’s working?
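
The retrospective comparison described here can be boiled down to a simple computation once historical outcomes are in hand: compare how often each decision process was wrong. The sketch below uses fabricated data and ignores an important real-world complication, namely that recidivism is only observable for people who were actually released.

```python
# Fabricated data for illustration: compare two decision processes against
# observed outcomes. A real study would also have to handle selection effects
# (outcomes are only observed for people who were actually released).
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
reoffended = rng.random(n) < 0.25     # observed outcome
judge_detain = rng.random(n) < 0.35   # historical human decisions
model_detain = rng.random(n) < 0.30   # the algorithm's recommendations

def error_rates(detain, reoffended):
    # False positive rate: detained among those who would not have re-offended.
    fpr = (detain & ~reoffended).sum() / (~reoffended).sum()
    # False negative rate: released among those who did re-offend.
    fnr = (~detain & reoffended).sum() / reoffended.sum()
    return fpr, fnr

for name, decisions in (("judges", judge_detain), ("model", model_detain)):
    fpr, fnr = error_rates(decisions, reoffended)
    print(f"{name}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```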

Preet Bharara:
There’s another issue in the courtroom not quite related to this precise topic, but I know you’ve addressed it before. Is there a gene for violence?

Nita Farahany:
Yeah.

Preet Bharara:
I mean, if there is, that has enormous implications. There’s a lot of debate about this.

Nita Farahany:
The short answer is no, there isn’t a gene for violence. There’s an outdated assumption that has persisted about what was called the warrior gene. The media loved the warrior gene for a very long time. It was the monoamine oxidase A gene, which is on the X chromosome, so if you’re male and have a low-expressing variant, you’re not compensated by a second copy; if you’re female, you have two copies, and so you wouldn’t have the same effects. The idea was that if you have low expression of monoamine oxidase A together with childhood exposure to violence, you’re much more likely to be a violent offender as an adult. But it turns out the more we learn about genetics, the more we realize that any particular gene contributes only a tiny amount to complex behaviors. And so it’s unlikely that we’re going to find a gene for violence, especially because the concept of violence itself is so broad. Like, are you a really good lacrosse player or are you a homicidal killer? Both of them are violent.

Preet Bharara:
It’s the Venn diagram on that. That’s [inaudible 01:00:09] interesting.

Nita Farahany:
Right. I mean, that’s the question. That’s the question. But I mean, there’s-

Preet Bharara:
But it comes up because people are trying to make the argument that they’re not responsible, right?

Nita Farahany:
Yeah. For a long time I’ve been studying this: people coming into the courtroom and claiming, “My brain made me do it,” or, “My genes made me do it.” They point to abnormalities in their brain or abnormalities in their genes and try to say, “This isn’t me, my bad character or my bad choices making these decisions. It’s my bad genes or my bad brain making the decisions.” And so they’ve brought in this evidence over time. Every time there’s a scientific breakthrough, they bring it in and try to say, “Blame my genes. Blame my brain.” And sometimes it works. For the most part, it doesn’t. But sometimes it helps people avoid the death penalty by being able to show that they’re not the worst of the worst kind of offender.

Preet Bharara:
Right. You served on something called Presidential Commission for the Study of Bioethical Issues.

Nita Farahany:
Yes.

Preet Bharara:
How was that? And did that accomplish something?

Nita Farahany:
It was great. Probably one of the best experiences of my life. It was under the Obama administration, seven years, and there were 13 of us. We worked on a number of different issues that were the cutting-edge issues of the day and made specific recommendations about some of the issues that we took on, like whole genome sequencing and even the BRAIN Initiative. So some of the precursor to the work that I’ve been doing came through that commission. And it did gain traction; we definitely saw a number of our recommendations implemented. And for my own personal thinking, I think it helped take me from an academic who says things like, “Oh, that’s a really hard and interesting question,” to having really clear perspectives on issues, along with very pragmatic recommendations about what needs to happen next to enable the responsible progress of science and technology.

Preet Bharara:
How’s the book going?

Nita Farahany:
The book is done.

Preet Bharara:
The Battle… Can I say the title?

Nita Farahany:
Yes, please.

Preet Bharara:
The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. So it’s done. That’s great. But then in the world of publishing, it’ll take another 10 months for it to come out, right?

Nita Farahany:
Yeah. It comes out March 14, 2023. It is-

Preet Bharara:
But can it be ordered in advance?

Nita Farahany:
It can be pre-ordered. It’s already on Amazon and Barnes & Noble and everywhere else.

Preet Bharara:
Do you have a cover?

Nita Farahany:
I have a cover. You can check it out at nitafarahany-

Preet Bharara:
Is it a big fat brain?

Nita Farahany:
No. So you can check it out at nitafarahany.com. It really emphasizes the brain. They went with… Because it’s a kind of big-think book, apparently there’s a particular style that they think big-think books reflect well. And so it’s a pretty minimalist cover, but with a kind of beautiful set of dots and shimmeriness that comes out of the brain.

Preet Bharara:
Shimmeriness. It’s not like a picture of like a Rodin sculpture?

Nita Farahany:
No, no, it’s not a picture of a Rodin sculpture. They did have the thinker on the front in one iteration. And I said, “Look, I don’t really want a thinking male on the front of my book. That just doesn’t work for me.”

Preet Bharara:
No, that’s an endangered species.

Nita Farahany:
Yes. Yes.

Preet Bharara:
A thinking male.

Nita Farahany:
Oh, come on. Not true. Not true. But I do want to preserve their ability to think freely too. I just don’t want that reflected on the cover of a book written by a woman. So, okay.

Preet Bharara:
Well, people should go and order the book and then read it and maybe we’ll have you back when it’s out.

Nita Farahany:
Well, I’d love that.

Preet Bharara:
Nita Farahany, thank you so much for being on the show. Thank you for your work and thank you for your time.

Nita Farahany:
Thank you. It was a pleasure.

Preet Bharara:
My conversation with Nita Farahany continues for members of the CAFE insider community. To try out the membership for just $1 for a month, head to cafe.com/insider. Again, that’s cafe.com/insider.

THE BUTTON:

Preet Bharara:
I want to end the show this week with a little bit of news about a friend of mine. My friend’s name is Steve Dettelbach, and you may have heard his name recently because he’s been in the news. Why? Because on Tuesday, the Senate confirmed him as President Joe Biden’s nominee to lead the Bureau of Alcohol, Tobacco, Firearms and Explosives, commonly referred to as the ATF. He happens to be the first Senate-confirmed nominee since 2015 to lead the ATF, which is charged with regulating the firearms industry, among many other important responsibilities. The 48 to 46 vote was close, but it was a win for President Biden. It’s also a win for the country, and it could not be more important or timely in the wake of the recent cascade of horrible mass shootings across the country.

Preet Bharara:
Steve is someone I’ve come to know and respect deeply over the years. He’s a fellow former US attorney. When I served in the Southern District of New York, he was the US attorney for the Northern District of Ohio throughout the Obama administration. He has spent nearly his entire career in public service, from working as an assistant US attorney in Maryland, to the US Justice Department’s Civil Rights Division, and later as an assistant US attorney in Cleveland. I also supported him a few years ago when he ran for attorney general of Ohio. I know him well. I know he will do his job well.

Preet Bharara:
During his recent confirmation hearings, Steve emphasized that he has never been swayed by political considerations and noted, “People need to have confidence that people in law enforcement’s only agenda is to enforce the law. And if you’re at the ATF, to catch the bad guys and protect the public.” At this moment, there is no one better qualified to lead the ATF and help mitigate the gun crisis we have in this country. So I just wanted to say: Steve, congratulations. Thank you for your service. Get to work.

Preet Bharara:
Well, that’s it for this episode of Stay Tuned. Thanks again to my guest, Nita Farahany. If you like what we do, rate and review the show on Apple Podcasts or wherever you listen. Every positive review helps new listeners find the show. Send me your questions about news, politics and justice. Tweet them to me @PreetBharara with the hashtag #askpreet. Or you can call and leave me a message at 669-247-7338. That’s 669-24-PREET. Or you can send an email to letters@cafe.com.

Preet Bharara:
Stay Tuned is presented by CAFE and the Vox Media Podcast Network. The executive producer is Tamara Sepper. The technical director is David Tatasciore. The senior producers are Adam Waller and Matthew Billy. The CAFE team is David Kurlander, Sam Ozer-Staton, Noa Azulai, Nat Weiner, Jake Kaplan, Sean Walsh, Namita Shah and Claudia Hernandez. Our music is by Andrew Dost. I’m your host, Preet Bharara. Stay tuned.