Talking Threads With the Facebook Whistleblower Frances Haugen
Hey, it's Eric Newcomer, welcome to the Newcomer Podcast, a really exciting episode this week.
We're talking with a whistleblower.
Frances Haugen, the Facebook whistleblower, is on the show.
I feel like we've been building to this for literal years. I've been talking about Facebook's content moderation, the implications of her release of the Facebook Files to the Wall Street Journal, and her decision to go public.
And we really got into the substance of it.
I feel like so much of the discussion around social media moderation is sort of built for simplistic public consumption.
And I really tried to get into the meat of these issues and what potential policy solutions there are.
So give it a listen.
Frances Haugen, the Facebook whistleblower and author of The Power of One, welcome to the Newcomer Podcast.
Thanks for joining me.
Thank you for inviting me.
Happy to be here.
I have followed your story, the Wall Street Journal reporting, and now the book, super closely, so really excited to have the conversation.
You know, the Facebook files series and sort of the complaints about Facebook can be so sprawling.
Like there's so much to it.
So I just wanted to start off the conversation.
If you had to sum up the top Facebook sin, your main core objection, what is your top Facebook objection?
So I would say the through line that goes through all of the different kind of shocking, you know, headline exclamation mark type things that came out of the Facebook files.
And to remind people who maybe don't remember exactly what happened about two years ago, you know, it reached everything from Facebook knew there were human traffickers on the platform and worried more about offending the buyers than the people being sold,
to Facebook had lots of research on their products making kids unhappy, making them depressed, facilitating eating disorders, and refused to hand it over to the Senate when asked, and on and on and on.
But the thing that was the common through line, I would say, through all those things is we live in a world where Facebook has to report publicly its profit and loss numbers, its expenses, like how it got to that profit and loss.
It doesn't have to report a social balance sheet, you know, like what's the social costs of making those dollars.
And for most other industries, those public costs, those externalities are very obvious, you know, it's pollution in a river, it's pollution in the air, it's forced labor, you know, we can see what's going on.
In digital products, you can take from the social side of the balance sheet and make your economic balance sheet look a lot better.
And that's the core problem that has repeated over and over again across all of these, all these harms.
Yeah. And I mean, that really sort of comes through in the book. I mean, you make the point that like, you know, Apple, it would be sort of visible with a hardware device, whereas with a social media company, like Facebook, it's less visible.
Is it a matter of the fact that Facebook is customizing feeds from person to person, or is it more than that?
What makes Facebook's products so much like less visible than something more tangible?
The point I make in the book is if you go search for Apple whistleblower, you don't find a lot, like there are occasionally Apple whistleblowers, but they're not like Facebook whistleblowers.
Like, I always kind of bristle when people describe me as the Facebook whistleblower, because there's a new Facebook whistleblower every two weeks, you know, often with really big revelations.
It's not like there's one person at the company that's feeding all that out there. It's a whole range of conscientious people who know the public needs to be brought in.
So you can ask the question, like, why is that? Like, why is it Apple isn't producing the same volume or intensity of whistleblowers?
I think it's that when Apple launches a new iPhone, you know, within a couple of hours of that iPhone going on sale, on socials there will be YouTube videos live, like already live, where people have taken apart those phones and confirmed, like, yes, that processor is in there.
Right. They'll be like, oh, they added a hard drive, and now the hard drive is slightly slower because of it. Yeah, exactly. It's very visible to people.
But when it comes to what I call opaque systems, and more and more of our economy is going to be run by opaque systems, things like a large language model that lives in a data center, all you see is the outputs. You don't see the inputs, you don't see how they're being manipulated.
Or a social media site. So, like you said before, is the problem that we all see something slightly different? That's definitely a huge part of it.
So if we were talking about Google. Have you ever programmed before? No, I have not.
I know you're not going to believe me when I say this, but if you and I sat down for three weeks, I could teach you enough programming that you could ask real meaningful questions about how Google works.
What's getting distributed, what's not getting distributed, how prominent are different kinds of things? What kinds of answers does Google give? If we want to do the same kind of accountability 101 for Facebook, for Instagram, don't even mention TikTok.
TikTok is way harder because it's video, right? It's bandwidth intensive. You know how much harder it is to do a video podcast than an audio podcast. We'd have to recruit 20,000 volunteers and convince them to install software on their phones that would send back what they saw on these platforms.
It's an entirely different order of magnitude of difficulty and Facebook knows that with TikTok.
TikTok I sort of really understand, right, where it feels so tailored to the human being and it's sort of machine learning driven, where it feels like, is there even someone over at TikTok who really gets exactly why I'm getting what I am?
I mean, Facebook, at least in the beginning, was supposed to be more network based and friend based, which felt like, yeah, easier to trace. Do you think it's totally moved away from that, and why? Why is Facebook so much harder to follow than Google?
So by the time I got to Facebook in 2019, 60% of everything people in the United States viewed came from a Facebook group. It wasn't content from your family and friends anymore.
It's one of these things where one of the projects my nonprofit is going to work on, it's called Beyond the Screen, because we help people see beyond their screens, is doing a simulated social network.
Because, you know, it's like you brought up before a lot of people if you were to stop them on the street and say what is Facebook for? They would say it's about connecting with my family and friends.
But if you were to actually go and look under the hood.
That product, you know, that product where the only things you saw in your feed were things that your family and friends posted, that world is long gone.
And the reason why it changed was Facebook is an advertising supported business, you know, as you scroll, you see an ad, they make money. You click on that ad, they get money.
They have a motivation to sell more and more higher-value ads every single month.
Now, the more time you spend on the platform, the more content you view, the more ad dollars come in.
They had a huge motivation to figure out how can they make sure your feed is always full of really stimulating content.
What's interesting about TikTok is TikTok said, hey, we don't want to have to wait to get critical mass, you know, the way you would in the case of Facebook or creating a Facebook competitor.
When the expectation from the user is the things that I see come from my family and friends, you have like a chicken and egg problem where you need to get all of the person's friends before the person wants to join.
Right, like how do you make that happen?
Well, something like TikTok, or now Threads, which has a very similar model, that's Facebook's new Twitter competitor.
There's much less of a user promise of what you're going to see.
So when you show up, they don't have to wait for critical mass, they just need to entertain you.
But at the same time, now you're putting yourself in the hands of an algorithm, you know, you better understand what the biases are in that algorithm.
What it shows you and doesn't show you because now the algorithm is in control, not your friends and family.
You brought up Threads, and I too see the similarity with TikTok, where it's, you know, bite-size, you know, it's not even as tied to a follower network as Twitter is, because it needed to work right away.
So it feels like, oh, man, this is going to be really powered by machine learning pretty quickly.
Like as a reporter, it's funny to see sort of the reporter class sort of embracing threads at the moment when I feel like, you know, two years ago or more than that, they would have been so negative.
And apprehensive about trusting Facebook. I'm just curious like watching the sort of pretty upbeat response to threads.
What do you take from that? And like, are you surprised that there seems to be some media trust of Facebook right now?
Well, I think there's probably a sense of the trauma the Twitter community has faced in the last year.
It's pretty intense. You know, Twitter never really went down that often before, and now it goes down regularly.
It used to be, you know, as Jack Dorsey, the former CEO, has pointed out recently, that basically when you posted, things went live instantaneously.
And now, you know, he flagged that there are now a minute or two of delay between when you post and when it goes out to people.
I think Elon Musk really gave himself an incredibly hard hill to climb when he bought the company using so much debt.
Because when that happened, it meant that he put a clock on himself. You know, he has to make a billion dollars of profit a year.
And for a product that was maybe breaking even before he bought it, that's really hard.
And he's seen lots of advertisers flee.
So I think part of the positive reporting on Threads is that people really like having a space to discuss ideas, to discuss issues.
And the idea that they could have a space again feels really good.
Just to say it another way, it's like people were so frustrated with Twitter that there's some openness to trust Facebook again, or Meta, whatever we're calling it these days.
Anyway, continue.
It's interesting, like, all of this is still true, but as people were reporting, you know, in the first day or two of Threads,
you weren't even given the option to just receive content from people you chose.
Like it really is like TikTok in that way, where you have to put yourself in the hands of the algorithm and be like, algorithm, entertain me.
Yeah, there's no follower feed.
I mean, have you downloaded it? Have you personally?
I have not.
I'm not, it's not that I'm unwilling to download it. I just haven't gotten around to downloading it.
And I worked on Google Plus. So, for your listeners who may not remember, you know, tech 12 years ago.
Yeah, Google said, hey, there are flaws with Facebook.
Like we should make a version of Facebook that doesn't have those issues.
And in reality, you know, they were trying to go after the personal social media market.
That is, one you can actually use with people you know. But because they wanted to grow so fast, they basically made something very similar to Twitter.
And you know, growing too fast is actually a problem because it means that when people go and take that chance on you, they don't land in a functioning community.
They land in chaos.
And they don't understand what the point is. They don't understand what the point of this place is. It doesn't feel like a place.
And they move on.
And at least in some of the initial data around engagement, it seems like that's a very similar problem to what Facebook is facing right now with Threads, with the people willing to download it.
Yeah.
I mean, one thing Facebook did differently is they like pre-populated it with all these celebrities.
Yeah.
So they had that.
And they had obviously the Instagram sort of connection to flow people in.
But I agree there's a lot of similarity, especially.
And you know, I mean, what was that one?
Was it Google Plus?
Man, I can't even remember the name. You just said it.
Google Plus.
Yeah.
They like, I mean, they got a ton of users in the beginning.
I, you know, I was on it for a second.
And then it sort of faded.
So it's very possible.
I mean, Threads could take that same arc.
I don't think that's my personal view.
I mean, is that a prediction?
Or do you think this will be like a Google Plus situation?
I think one of the things that's interesting about Twitter is the way I sometimes like to frame it.
Twitter is fueled by content from a relatively small fraction of its users.
Who are most interested in the thoughts of that other small fraction of users.
So I talked to a law professor who is regularly cited on tech issues.
And she was like, I use Twitter to hear from 300 other people.
And what happens is that those people talking to each other and caring about what each other says,
everyone else gets to kind of be a fly on the wall and follow along with the conversation.
There are people like Elon Musk who love, like, raising up their followers and having direct conversations with a wide swath of people.
But I would say the fuel, the real core of Twitter is those, you know, communities of connection.
And it'll be really interesting to see if threads can maintain that.
Like, is it just going to end up kind of brand safe? Brand safe, I mean, for, you know, even the people who sell Tide washing detergent and Clorox, you know, bleach.
Yeah, they're very pro-brands, definitely.
Yeah.
Is it going to be just a space that is like innocuous or is it going to be a space that really does cultivate community the way Twitter did?
I mean, it also, you know, it feels like you could have one community on Threads and one on Twitter, especially with the partisanship.
You know, like, I mean, you're seeing Elon right now, he's making these payouts to Twitter personalities.
It feels extremely conservative.
I mean, you could just see sort of more left-wing, liberal, sort of, you know, people on Threads and sort of a right-wing Twitter world.
It's very possible.
I can totally imagine that happening.
It makes me nervous, because, like, when we do move into a space where we are entirely dependent on an algorithm.
Like, I have no idea, if I post on Threads, will my post even get distributed, you know. Like, my publisher couldn't even tag it to sell the book.
Right.
So it's things like that.
Wait, you think Facebook is, like, actively suppressing your book on their platforms, or what?
I know that for events I've done, they couldn't use the word Facebook to describe the event.
So I can't be the Facebook whistleblower.
I can be a whistleblower.
Like, you know, they'll have to sit there and try a bunch of different variations of ads to run an ad.
Wow.
But my publisher, you know, they sell lots of books.
They post about lots of books on Instagram.
They went through and tried to tag my book.
And they got back an email saying that violated the commerce policy for Instagram.
And they pinged their, like, concierge, because they spend enough on ads that they get, you know, a human customer service contact.
And the person was like, I don't know why this would violate the commerce policy.
I mean, there's nothing objectionable.
There's no violence, you know, whatever.
And yeah, so they don't know why.
I mean, do you think Facebook just has a blanket ban on people using the word Facebook?
Oh, I don't know.
I've had other experiences like the Nobel Peace Center had an event that I spoke at.
And they had never had an ad blocked for being a political ad until they advertise my event.
And so think about that.
Like, do you think the events at the Nobel Peace Center are political?
I think most similar, right?
So it is what it is.
Have you interacted much with Facebook otherwise?
Or like, have you had any direct engagements since you left?
I think if I was more of a troll, you know, I have a little bit of troll, not like a huge amount of troll in my heart.
I have, like, enough troll for a spark, right?
I would totally provoke them more, because they so aggressively will not acknowledge that I exist.
So like, for example, if you ever get to be in a Q&A session with a Facebook executive,
like at a conference, ask them about me because they will not use my name.
So that's the kind of thing where, like, if I was a bigger troll, I would seed more questions at their events.
But, you know, I have other things to do.
Back to sort of the core disclosures, you know, in the book. I mean, clearly you talk about sort of them lying.
And I'm just curious, like, what you think sort of the core or, like, great examples of Facebook deceit have been.
Because I mean, obviously they have such an information advantage where it's like they can just run circles around anybody trying to scrutinize them because they understand the platform so much more.
But the cases where they were, you know, flagrantly lying.
So I'll give you an example.
This is one of the core parts of my complaint. Back in 2018, Facebook faced a business problem.
You know, they could see over time people were making less and less content.
This is a normal phenomenon on social networks.
Like some people get really into it.
Most people slowly start to self-edit, self-censor.
And they tried a bunch of experiments on you and me to see if they could induce us to produce more content.
And one of the things to know is, like, guess what, experiments are run on you every day, every time you're on one of these platforms.
So it turns out if they artificially get you twice as many likes.
And the way they do this is they just keep showing your post to more and more people until you get to that number of likes.
If they can get you up to, you know, twice as many likes, you produce more content.
You know, it's a very clear, like, dose response: if you feel more social rewards, more little hits of dopamine, you make another post.
So they came out and said, we don't want people mindlessly scrolling, you know, mindlessly scrolling.
That's bad for people.
So we're no longer going to prioritize content on Facebook based on how much time you're going to spend on the platform.
That's, like, you know, it could be a proxy for, like, how much you enjoy consuming this.
Instead, we're going to reward meaningful social interactions.
Yeah, I remember.
So think about a phrase like that.
It's like, what does that mean to you?
What is a meaningful social interaction?
Something that, you know, like a day later, I'd reflect on and say, oh, that was like a good thing.
I'm like glad it happened.
Or maybe, like, your friend shared something really revealing.
You wrote a comment saying, you know, that's hard.
I'm really glad you shared that.
You're like, that sounds like a meaningful social interaction.
In reality, it was just, was there any social interaction?
Right.
So you could put bullying or hate speech in a comment.
And that would be considered meaningful social interaction.
And within six months of this, political parties across Europe, on the left and the right,
local parties on the left and the right,
were telling Facebook researchers, we know you changed the algorithm.
It used to be we could post like a white paper on our agricultural policy.
And we get it, it's, like, not the most thrilling thing in the world, it didn't get a lot of comments.
But we could see from the stats that people spend time with it.
They read it.
Now if we post that same kind of a bread and butter content, nothing.
It's crickets.
Like we're being forced to run positions that are so extreme that we know our constituents don't support them.
But like our job is to run social media.
Like we have to like put stuff out there and it has to work or we'll lose our jobs.
And what's crazy about this is, in some ways, Facebook implicitly acknowledged they had a problem,
because they put out, excuse me, Mark Zuckerberg put out a white paper, probably written by one of his employees.
It's like 5,000 words long.
I really doubt Mark wrote it.
It said, hey, this engagement based ranking thing we just launched.
There's a problem, which is people are drawn to engage with extreme content.
But don't worry.
Don't worry.
And if you ask people afterwards, did you like it, they say no.
We're going to protect you from the most extreme content by taking it down.
We're going to have these magical AI systems.
And even their solution to their first lie was another lie.
Because those systems that they claimed we take down all the bad stuff, they were only successfully
are moving three to five percent of things like hate speech were less than one percent of violence.
Right.
It's kind of crazy when you think about it.
But they told everyone from Congress to, you know, teachers unions, like, we're protecting people.
But in fact, it was all kind of smoke and mirrors.
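A minimal sketch of the ranking flaw Haugen describes above, with purely hypothetical weights and field names rather than Facebook's actual meaningful-social-interactions formula: the score counts any comment as an "interaction" without looking at its content, so a heated argument can outrank a post people quietly read and liked.

```python
# Hypothetical sketch of an engagement-weighted ranking score.
# Weights and fields are illustrative, not Facebook's real MSI formula.
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    comments: int      # counted regardless of what the comments say
    reshares: int

def msi_score(post: Post) -> float:
    # Comments and reshares are weighted heavily because they are
    # "interactions" -- the score never looks at the content itself,
    # so a bullying comment counts the same as a supportive one.
    return 1.0 * post.likes + 5.0 * post.comments + 10.0 * post.reshares

heated_argument = Post(likes=3, comments=200, reshares=40)
thoughtful_essay = Post(likes=150, comments=5, reshares=2)
print(msi_score(heated_argument) > msi_score(thoughtful_essay))  # True
```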
I think this is the policy solution you want, or at least the one that would seem to sort of come from what you're saying is like,
okay, some sort of disclosure where, you know, outsiders can sort of track how this is all happening.
I mean, how possible is that?
I mean, every social media network is so different.
Totally.
You know, I'm a capitalist.
I want companies to be able to change and react.
And like, you don't want a bunch of laws that like say, oh, you need to disclose things this way.
And so now you're forced, like, to stick with a certain type of, you know, the same network you've had instead of evolving.
Like, how would you see a disclosure regime working that still allows companies like Facebook to be flexible and to change?
Let's actually unpack for a second something you just said, which was, you know,
I think a lot of people don't sit and think about, you know, what's the menu of options when it comes to intervening in a problem as complicated as this.
Right.
I'm really glad that you brought up the idea that these companies grow and change, where, you know,
the next one to come along might not fit the exact same mold as this one.
Right.
One of the ways the European Union handles that flexibility, and to be really clear, like,
this kind of way of doing regulation, of saying disclosure and transparency, is instead of something like what's happening in, say, Utah,
where Utah is coming in and saying, this is how you will run your company.
If people are under 18, they have to have parental supervision, no privacy for kids, their parents can see everything,
or like Montana coming out and just flat out banning TikTok.
Those are kind of, we'll call them building fences type rules, where we're like, oh, this is the fence you can't cross.
And the thing about technology is that it moves and changes and they're very good at running around fences.
Right.
So the alternative is something like what the European Union passed last year, which is called the Digital Services Act.
And the Digital Services Act says, hey, if the core problem, you know, the one we started with at the beginning of this conversation,
if the core problem is a power imbalance, right?
Like the fact that you can know what's going on and I can't know.
Let's address that core problem because a lot of other things will flow downstream from it.
So they say, hey, if you know there's a risk of your product, you need to tell us about it.
You know, if you discover one, if you can imagine one, you need to tell us your plan for mitigating it,
because it's going to be different for every platform.
We want to unleash innovation.
And you need to give us enough data that we can see that there is progress being made on meeting that goal.
And if we ask you a question, we deserve to get an answer, which sounds really basic, but is not true today.
You know, I've been asked questions by governments around the world that are very basic, like how many moderators speak German?
I've gotten that question, but for different languages all around the world.
It's because Facebook doesn't have to answer.
They don't even have to answer things like how many users are there in your country.
This is extremely frustrating, given that Facebook's stance has always been, their move has been, like, regulate us.
Like, you know, not all these decisions should be made by a private company.
And then it's like, oh, well, then why don't you at least give the governments around the world the information they would need to write good regulations.
So I mean, obviously you're talking to a journalist.
So disclosure is always going to be something I'm going to cheer for, but certainly a frustrating situation.
Let me provide a roadmap, though. So my nonprofit is called Beyond the Screen.
One of our core projects is called Standard of Care, where we are working to build out a wiki that people contribute to around identifying problems of social media.
What are the surface areas where we call the levers for preventing or mitigating harm?
And then what are the strategies for pulling those levers?
So just to give context on that, you know, there are a lot of problems around kids.
And the ask that is common across them is, let's keep under-13-year-olds off these systems.
But when the kids advocates talk about technology, they often don't know what's possible.
And so they'll settle on solutions that might seem obvious, but have problems.
So, for example, in the case of the lever of keeping under-13-year-olds off the platform, they'll say, let's check IDs, which I don't think you want.
I don't think I want it, and it also just doesn't work.
We've gone to a technologist and said, hey, I have this lever.
Let's keep under 13 year olds off the platform.
That technologist can come back and say, here's 10 or 15 different ways to find under 13 year olds.
You know, some of it's really basic, like kids will say, I'm a fourth grader.
Or things like, I learned this one from my principal.
Kids report each other, like to punish each other.
It's like, you're mean to me on the playground and I'll report your Instagram account.
And as soon as you find 10,000 kids, you can find all the other ones.
And so the way I think this could tie into transparency is once we have a menu saying, these are the harms.
These are the surface areas, the levers for addressing each of those harms.
You can come in and say, okay, here's what we need at minimum.
Like, I think there should be raw data access for researchers.
But if you don't want to go that far, at a minimum, you can say, we need data on how bad each of those harms is.
And how hard you're pulling the levers to try to reduce those problems.
You can figure out lots of different strategies to pull those levers.
But you need to show us data on things like under 13 year olds.
How late are they on the platforms at night?
That kind of thing.
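As a rough illustration of the "find some kids, then find the rest" idea Haugen describes above, here is a toy label-propagation sketch over a friend graph. The data structures, seed set, and threshold are illustrative assumptions, not any platform's actual system: start from accounts that self-reported or were reported as under 13, then flag accounts whose friends are mostly already flagged.

```python
# Toy sketch of propagating "likely under 13" labels across a friend graph.
# The graph, seed set, and threshold are illustrative assumptions only.
from collections import defaultdict

friends = defaultdict(set)  # user_id -> set of friend user_ids

def add_friendship(a: str, b: str) -> None:
    friends[a].add(b)
    friends[b].add(a)

def propagate_labels(seed_under_13: set[str], threshold: float = 0.5,
                     rounds: int = 3) -> set[str]:
    """Flag accounts whose friends are mostly already-flagged accounts."""
    flagged = set(seed_under_13)
    for _ in range(rounds):
        newly_flagged = set()
        for user, user_friends in friends.items():
            if user in flagged or not user_friends:
                continue
            share = len(user_friends & flagged) / len(user_friends)
            if share >= threshold:
                newly_flagged.add(user)
        if not newly_flagged:
            break
        flagged |= newly_flagged
    return flagged

add_friendship("seed_kid", "friend_a")
add_friendship("friend_a", "friend_b")
print(propagate_labels({"seed_kid"}))  # flags friend_a, then friend_b
```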
I mean, a core response that Facebook would give in this situation would just be,
they might not say this outright, because it's probably not politically wise.
But like, you know, some of these problems that you've identified are just human problems.
Right. If you talk about sort of the Instagram critique, with it potentially making sort of young teenage women,
sort of some segment of them unhappy.
I mean, you could say like, was that so different from Vogue?
Is this really an algorithmic problem?
Or is this just like how humans are?
I mean, and I would probably add that a lot of the sort of Democratic, liberal sort of anger at Facebook was about
just what Trump supporters are like and, like, their views, and the fact that there's, like, an audience for them,
and not always the fact that Facebook would give them distribution.
So I guess across a lot of these categories and we can get into the particulars of what I mentioned.
Totally.
But what would you say about the just like some of these things people are mad about are just things that human beings do
that happen to happen on Facebook, but it's not necessarily their levers that are moving people to do those things.
So I think one way to think about this is technology can either amplify and bring out the worst in us, or it can act as a bridge that helps us seek our best selves.
So I totally agree with you that, you know, there have always been teen girls that weren't happy about their bodies or how nice their clothes were.
But there are a limited number of pages of Vogue every month.
You know, the second time you read Vogue, it's going to have a different impact on you than the third time you read Vogue, or you're going to get bored of it.
And in the case of something like Instagram, you know Instagram progressively pushes you towards more and more extreme content.
So I'll give you an example.
I had a journalist reach out to me for an interview and he explained to me that he had just had a new baby.
So there's a healthy happy baby boy.
He's a modern father.
He made an Instagram account for the baby.
That baby had maybe five or six baby friends.
Everyone here is a healthy, happy, cute baby.
He's only ever clicked on or commented on happy, cute, healthy babies.
And yet about 10% of his feed was children who were visibly suffering.
So kids who'd gotten horrible accidents and were just figured disabilities and deformities that looked really painful.
Kids die of cancer in hospital beds with tubes going out of them.
And he was like, Francis, how did we get from healthy happy babies to this?
Like what happened?
I've only ever clicked on the happy ones. And I was like, well, the AI knows very clearly you like babies.
You know, you've made this whole little baby centric world.
So it's showing you content that it knows people who like babies have trouble not engaging with.
And I want to be honest, even if you're not clicking like on that content, or sad, or whatever,
you probably are lingering.
And a lot of these AIs have dwell time as a signal.
You know, just the fact that you paused is a signal that you like that content.
And so, you know, he's old enough and you know cognitively mature enough to see something weird is happening.
With a 13 year old girl, you know, she might start out by looking for something like healthy recipes.
And just by clicking on content, she gets pushed over time towards more and more extreme materials.
And we've seen these reports from things like child psychologists who say,
I have these kids in my practice.
They come into their appointments and say, I'm trying to make better choices like I'm trying to follow the program.
But it follows me around Instagram.
And right now we live in a world where we don't have consumer rights to really basic things.
Like, should you be allowed to reset an algorithm without losing all your past?
So, like, those kids are being forced to choose between their past,
you know, all their memories, their friends, and their futures,
because an algorithm won't let them walk away from something that was hurting them.
I'm sure you saw, you know, if you want to delete Threads, you have to delete Instagram too, which is just, like, insanity.
I mean, it's like, oh, we know you love this other thing.
I'm much more supportive of the we should regulate tech through laws that like correct it rather than the sort of huge antitrust push.
I mean, to me, like, things as simple as just banning push notification hacking.
Like, we need to escape a world where tech companies are allowed to use sort of little badges that psychologically drive us crazy and that don't show, like, actual alerts.
But the problem is you write that rule.
Like, I'm sitting here, like, I'm frustrated with, like, Facebook.
And I'm like, why is Facebook showing me this group that I never used but it knows I'd love? OK, clearly, algorithms, we should ban them.
But then, you know, for the next startup that's trying to build and doesn't know anything about you, it just creates a huge regulatory burden that could end up helping Facebook.
This is part of why we want to do our standard of care projects, right?
Like, right now, Facebook has a really huge advantage in that they have done research in this space for, you know, 15 years, 17 years, somewhere in there.
Actually, they're coming up on 20 years, because it's 2023 and they were founded in 2004.
But if you want to talk about real deep cuts, do you remember Friendster?
Or are you not quite old enough? Yeah, so Facebook was my first real social media.
I graduated from school in 2009.
So I'm aware of sort of the history.
The first web crawler, so that's a thing for downloading data off the internet so you can analyze it, that I ever wrote,
I wrote for Friendster, because I was using really early graph mapping algorithms, because I guess I've always loved graph-based problems.
But a part of why we're doing Standard of Care is we know we want to make sure that the next generation of social platforms, and social platforms take lots and lots of different forms,
things like Roblox, Roblox is a social network.
You know, we're going to always have new social platforms.
How do we make sure that people have the best head start, you know, the best platform to build off of, to be like, oh, interesting.
We can get a very robust education on what we should be worrying about very quickly and an understanding of like what options exist for being able to move forward on those problems.
Is privacy a big part of your advocacy? Or, like, how optimistic are you? To me, I'm sort of like, oh man, privacy is sort of lost in this world once you're on these platforms.
Like, I'm just not a privacy person.
I'm much more aligned on this sort of, like, oh man, them hacking our brains and, like, using actual, like, psychological tricks to get us to engage with them.
That's really troubling. But, like, yeah, I don't know, how much time are you spending on sort of privacy advocacy and what's your sort of view on that part of it?
I'd say it's not one of my tentpole issues, but I'm very open to it as a technique, in terms of, you know, people emotionally connect with privacy as an issue.
And it is a way of decreasing the ability of algorithms to act on you.
Like if they're not allowed to record information about you, it's harder to manipulate you.
At the same time, like, you know, we're developing AIs that are getting better and better at doing implicit inference.
And so it's this question of, you know, how little data do you really need in order to actually still see a lot of these phenomena.
And also, I think it's one of those things where, like, I don't know that you can write privacy laws that are going to strip out enough data to actually neutralize the problems that we're talking about here.
This is a really basic example. One of the big ways you reduce misinformation on something like WhatsApp, which is end-to-end encrypted private chat, is by saying, hey, you know, as something gets reshared.
So there's a chain of reshares. So I got it. I reshared it. My friend reshared it, on and on.
And as a chain of reshares, once it gets five hops away from the person who created it, you say to them, hey, you can totally spread this further, but you have to copy and paste it.
You have to take an affirmative action to continue to move it along.
That kind of change, you know, allowing or not allowing that, is not really a privacy topic, but it's one of the most impactful things for safety.
So I'm very pro privacy legislation. If people want to push it, I think it can have a really positive impact on a lot of the problems I talk about.
But it's not going to easily solve all the things that we're talking about here.
No, that makes a lot of sense.
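A small sketch of the reshare-hop friction Haugen describes for WhatsApp-style apps. The message shape, the five-hop threshold, and the error handling are illustrative assumptions, not the real implementation: each forwarded copy carries a hop count, and past the limit the client refuses one-tap forwarding and requires copy-and-paste.

```python
# Minimal sketch of hop-limited resharing. The Message shape and the
# five-hop threshold are illustrative, not any platform's real code.
from dataclasses import dataclass

MAX_ONE_TAP_HOPS = 5

@dataclass
class Message:
    text: str
    hops_from_origin: int = 0  # how many reshares since the original author

def can_forward_with_one_tap(msg: Message) -> bool:
    return msg.hops_from_origin < MAX_ONE_TAP_HOPS

def forward(msg: Message) -> Message:
    if not can_forward_with_one_tap(msg):
        # Past the hop limit the user must copy and paste the text,
        # which creates a brand-new message with no forwarding history.
        raise PermissionError("Hop limit reached: copy and paste to share further.")
    return Message(text=msg.text, hops_from_origin=msg.hops_from_origin + 1)

msg = Message("breaking news!!")
for _ in range(5):
    msg = forward(msg)  # hops 1 through 5 succeed
# the sixth one-tap forward would raise; the user has to copy and paste instead
```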
I sort of alluded to the political piece of this. Sort of digging into that section, I mean,
the political asymmetry in the United States and, like, how that affects conversations about Facebook.
To me, it's, like, pretty obvious that, like, Trump supporters, Republicans, are behaving in sort of different ways than Democrats and spreading generally more false information.
And, like, it's very awkward for Facebook, certainly, and for politicians, and, well, I don't know, journalists can sometimes sort of have trouble sort of highlighting that asymmetry.
I don't know. What's your view on it? Do you agree with me that there's, like, a sort of partisan asymmetry there, and that that is sort of creating some of the problems with what Facebook is able to do?
Facebook doesn't want to target Republicans, because then they're going to get a lot of heat from Republicans. And so then they're not willing to do sort of things that would have an asymmetric outcome.
I guess, do you agree with that, is sort of the question. Facebook has spent a lot of money trying to frame the issue of what can be done about any of these online safety rules in terms of freedom of speech versus safety.
And they get up there and this is a real quote Mark Zuckerberg went on a podcast and he was like, you know, I've really grown a lot in the last year.
Because I've realized sometimes if you stand up for what you believe in, you're going to be unpopular and I'm a defender of freedom of speech.
And what I found so aggravating about this is, like, you know, that thing we just talked about with WhatsApp, you know, when you cut the reshare chain at five.
You know, if you do that same thing on Facebook, cut the reshare chain at two, it has the same impact on misinformation as the entire third-party fact-checking program, and you're not picking and choosing what are the good ideas and the bad ideas.
And I think right now, you know, a pattern we see across the world is if you are a political party that is not in power, you have more of a motivation to figure out new technologies new ways of reaching people.
Because your party that's out of power, you know, you don't have the advantages of being an incumbent.
And so if we were to roll back in time to, say, the Obama presidency, like the first one, you know, when Obama got elected, he used a lot of techniques with technology that no one else was using.
You know, they were doing A/B testing on emails.
So they would say, you know, if Obama holds a puppy, how much money do we make? If Obama is there with his wife, how much money do we make?
You know, do people sign up for XYZ then?
He had a motivation to do that because he was the scrappy challenger.
And so I think when we think about these things as like there is a partisan lean that, you know, one side is maybe playing the game a little harder than the other side.
The way I think of it is that it's just a question of, like, who is or isn't in power.
And so what I always try to say when I speak with right-leaning voters or right-leaning podcasters, and if you have any suggestions for ones I should go on, I'm always trying to reach out to their audiences more and more.
If your listeners can think of great ones for me to go on, email me at frances@franceshaugen.com.
What I would say is if you care about freedom of speech, you should be demanding transparency about these censorship systems.
You know, when I talk to women's rights advocates around the world, all of them have been kicked off Facebook because these AI systems are so crude that if you talk about violence against women, the AI thinks you're committing violence against women.
Like if you really care about freedom of speech, you should be marching on the streets for transparency.
And I think that's a space that we should all be willing to work.
I totally get sort of the messaging there and that you want everyone to be sort of on board with these issues.
I mean, you look at even the threads roll out.
When they launched it, they had some, I think, tools in there early on that said, oh, you know, this person, I forget what the exact language was.
It was like, oh, you know, I forget if it was that they'd posted false things in the past, I'm not going to get it totally right.
There were warnings before people reshared them, which then, like, can have a sort of perceived partisan focus.
And obviously, I mean, you know, Musk's whole campaign on Twitter was built on the idea that Twitter was sort of shadow banning, you know, people in the replies and all that.
Yeah, yeah, I don't know.
I guess the question there is, like, clearly what we're seeing is there is some sort of right-wing backlash to cases where
social media companies try to do what you're saying, which is sort of either flag or not emphasize things that violate their policies.
I think this again, it's funny.
So I actually worry about this a lot for AI safety.
If you already feel distressed, you know, you feel like you have been left behind, or you're marginalized, and, you know, I grew up in Iowa.
Like, I have a lot of empathy for Republican voters today. You know, Iowa has been left behind economically in a pretty dramatic way.
That if you already are feeling a little of that anxiety about the idea that people with power don't really care about you,
it's very easy to read into it, when a moderation system takes action, that you're being singled out.
But I think this is also true for say African American or less affluent people who participate online.
They have seen very similar things where like African Americans will get sanctioned for hate speech because these systems are not very well done.
And so I think there's a fear and this is a fear that could be addressed by being more transparent of saying, hey, we're going to actually let you see what we're doing.
Because in the case of AI safety, you're already seeing people come out with calls saying, I don't want systems that have been aligned with the public good.
I want what's true or real. And I think that in both cases, whether it's what conservatives are saying about content moderation on Threads or about AI safety,
You know, when people feel like it's out of their control when they feel like something's being done behind the scenes, they object to it.
So giving the user control basically or having enough transparency that you can build trust on these things, right?
Like it should be possible for researchers to come out and say like, hey, actually, or if you yourself, so right now, if your content is taken down on Facebook, the only people you can appeal to or you can send your thing off to is the oversight board.
Imagine if you could say, I'd like to be put in a public research data set, like, I'd like reporters to be able to look at this and say, oh, interesting.
You know, this whole political candidate is getting taken down at a much higher rate than the other one.
Like that would be a very different world. Facebook would have to work harder to make sure that systems were objective and effective.
In the book, The Power of One, which everyone should go read, you talk about, like, the rise of COVID, and, like, I mean, COVID sort of is a great way to get into the misinformation question.
I think, like, how misinformation, like, the view on sort of that word and sort of the ideas there. What is your view on, like, how Facebook handled COVID?
Like, are you supportive of, you know, tamping down on people who were, like, you know, skeptical about the origins of the virus or skeptical about the vaccine?
Because that fits right into this sort of partisan thing, where there end up being sort of sides that align with parties on these questions.
So yeah, how do you score Facebook's handling of COVID misinformation?
Oh, great question.
God, you know, and this is just to say, my criticisms of Facebook are much more... like, the part of the book that I think you're alluding to is, you know, when they went and divided up the United States into 600 communities.
So think about community as you enjoy the same kind of groups, you follow the same kind of pages, you post on same kinds of topics, you click on same kinds of topics, you know, imagine you put people into communities that were between 500,000 people and 3 million people.
If you went and asked, you know, how many of these communities does it take to make up 80% of all the received COVID misinformation?
It turned out that 4% of the US population fell into communities that got 80% of the COVID misinformation.
Right. So you and me, the average person, gets maybe one piece of COVID misinformation every couple of days, once a week.
For a small fraction of people, they were getting whole streams of it.
It's what happened with the January 6 people, but for different groups, different ideas.
And part of that was because the way Facebook was designed was if you have a post that is really controversial, you know, has a big fight in the comments.
Every new comment makes that post new and can show up at the top of your home feed again.
And so there were a few communities where, you know, COVID had a really intense emotional valence.
Those communities actively censored out voices that said anything different.
And they became these kind of echo chambers.
And so I think there's interesting conversations to be had around like how do phenomena like that occur?
And like what other contexts are they happening in?
And so like, let's imagine you're seeing something like that where, you know, the algorithm and the product design
are pushing people towards these kind of parallel realities.
And, like, that's why you had people showing up and, like, you know, threatening teachers with guns,
or, like, people showing up at school board meetings, screaming, because literally, when they looked at their Facebook feeds,
all they saw was stuff about how teachers were trying to kill their kids.
So that's kind of the environment we're entering into when we have to say like, what is Facebook's role or what should Facebook's policy be?
And I think it's really complicated.
Like, I think they did a bad job in that they had, you know, these black lists.
They had the concepts that you couldn't talk about, but they never told anyone.
You know, they never let us see how well these systems performed.
And it meant that you had people feel like there was a hidden truth.
And hidden truths are very blurry.
And so, you know, effective social software should be designed such that if I write a thoughtful reply,
if I go do research, if I come back to your allegation with, you know, something meaningful for a conversation,
I should get similar amounts of distribution to your inflammatory statement.
And that just doesn't happen today.
Like, the systems are designed to reward extreme ideas.
They're not designed to reward thoughtful moderate ideas.
So, your solution there would be just to give more distribution to sort of counter messaging basically.
So, I'll give you an example.
So, like, they've done research inside of Facebook.
And one thing that's kind of exciting about the next few months is, like, Harvard has an archive of most of the Facebook Files.
And they're starting to open up access to those documents for academic researchers.
One of the papers in there talks about how if you are good at writing such that people
diverse from you, different from you, can engage with that content or those ideas.
You're doing a very complicated thing, right?
You know, if I can write a political post where Republicans and Democrats can, like,
constructively have a conversation in the comments, you know, people are giving each other a thumbs up.
It's like a positive conversational template.
I should be rewarded for that, because it's not obvious.
Like, I might not get the most comments, I might not get the most likes.
But if I can get a diverse community of people, a diverse audience to engage with it, that should be rewarded.
If you come in there and say, hey, we're going to boost content from people who can successfully reach across the aisle.
You end up getting less hate speech, less violence for free, less misinformation.
So, it's a lot of these things where if you start making transparent what is distributed versus what is created on these platforms.
And you start saying, hey, like right now what you're giving distribution to is very different than what's being created.
Can we get those a little bit more in line?
I have a feeling what would end up happening is you'd start finding more techniques like that, where you could come in and say, hey, people who are fear-mongering, you know,
they shouldn't be the only ones who get to stand on the stage.
And that's kind of what's happening today.
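A toy illustration of rewarding posts that engage a diverse audience, along the lines Haugen sketches above. The group labels, the entropy-based bonus, and the weights are hypothetical, not a documented Facebook formula: a post whose engagement comes from several communities gets a multiplier that a single-community post does not.

```python
# Toy sketch of an audience-diversity bonus for ranking. Group labels and
# weights are hypothetical; this is not any platform's documented formula.
from collections import Counter
from math import log2

def diversity_bonus(engaging_users_by_group: list[str]) -> float:
    """Shannon entropy of the engaging audience's group mix.

    0.0 when everyone comes from one community; higher when engagement
    is spread across communities (e.g. across the political aisle).
    """
    counts = Counter(engaging_users_by_group)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def ranking_score(base_engagement: float, engaging_users_by_group: list[str]) -> float:
    return base_engagement * (1.0 + diversity_bonus(engaging_users_by_group))

# A post engaged with by only one community gets no bonus; a post engaged
# with evenly by two communities gets a 2x multiplier (entropy = 1 bit).
print(ranking_score(100, ["left"] * 50))                   # 100.0
print(ranking_score(100, ["left"] * 25 + ["right"] * 25))  # 200.0
```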
So, it's still a world where, like, there's, like, sort of a hand on the tiller in terms of what's getting reach.
And, like, it's just sort of, in some ways, just doing things that are less oriented around sort of their business interests, necessarily, which are max engagement.
I mean, it's certainly not saying like don't sort of weigh certain things differently than others.
Because if you just tried to create this sort of like neutral algorithm, you would just then be deferring to sort of negative aspects of humanity potentially.
Am I understanding that correctly?
Yeah, algorithms can only capture the level of complexity that you put in the algorithm.
So, if you come in there and say, hey, people developing compulsive behaviors, right?
So, in the case of kids, you know, some, like, 8% of kids say that they can't control their usage and it's hurting either their employment, their school, or their health.
Think about that 8%.
How self-aware are 14-year-olds?
Like not super self-aware. How honest are they with themselves?
Not super honest. It's probably a lot more than 8%.
Right? That are suffering like that.
You know, if you don't have a system that says, hey, sleep deprived kids are a long-term harm to society.
You know, they do worse in school. They are at higher risk for developing mental health issues that will last them throughout their life or put them at higher risk of recurrence.
They're more likely to use drugs, both uppers because they're tired and downers because they're depressed.
You know, if you have an algorithm that just is, like, how many clicks can we get, like, how much ad revenue can we get,
it doesn't capture those kinds of social costs.
And so I think there's a huge opportunity where if you just come in and say like, hey, with cars, if you have a car accident, everyone gets to see the car accident.
You know, everyone gets to see the body on the ground and see that your seat belt didn't work.
We don't have a similar feedback cycle on these social platforms.
Now, they can keep kind of having their problems and they don't have anything that brings them back to center.
For the last part of the conversation, just talk about, you know, the decision to release information and to go to the Wall Street Journal.
And yeah, the reporter, it's Jeff Horwitz, right? He reached out to you on LinkedIn? Tell the story a little bit about how this happened.
Like did you see yourself, like as a whistleblower before you heard from him?
I had been contemplating for quite a while, like would I have to go forward at some point?
Like, I had a chance to talk to my parents about it a large number of times, just because, like, what I was seeing on a day-to-day basis while I lived with them during COVID was just so different than what Facebook's public narrative was on these issues.
But the moment where I was like, okay, I have no other options was right after the 2020 election.
So this is in December, it's like less than 30 days after the election.
And they pulled us all together on Zoom and they said, hey, you know how for the last four years, the only part of Facebook that was growing, for Facebook.com at least, was the civic integrity team?
So civic integrity's job was to make sure that Facebook was a positive force in the world, like a positive social force in the world.
You know, it wasn't going to disrupt elections, it wasn't going to cause any more genocides, because by that point there had been two.
You know, it was going to be a positive force.
And they said, hey, you are so important.
We're going to dissolve your team and integrate it into the rest of Facebook.
And when they did that, that was kind of the moment where I realized Facebook had given up.
That the only way Facebook was going to save itself was if the public got involved.
That the public had to come in and save Facebook.
And so, by chance.
So that day I went and opened up LinkedIn, because, you know, that's kind of what you do when you have instability at work.
You know, you open up LinkedIn.
And I saw that I had a message from this guy Jeff Horowitz.
And he had done a lot of reporting for the Wall Street Journal about the violence that Facebook had facilitated in India, particularly Hindu-Muslim violence.
And you know, he said, do you want to talk?
And I was like, oh, interesting.
Like of all the places that I would want to work with, I wanted to work with the Wall Street Journal.
Because I really view all of these topics as bipartisan.
You know, they're not left. They're not right.
They're like basic rules of the road.
And I knew that if the reporting had come from the New York Times,
there would be a large swath of right-leaning voters that would never be able to trust it.
But if it came from the Wall Street Journal, it was likely that they would at least be able to consider it.
And I think that's part of why the Senate hearing was so bipartisan.
Because it was something that came out of, you know, it was trusted.
Like a lot of the things that were said in those articles sounded crazy.
They sounded super... like, there's no way, there's no way the company could be this bad.
Like, human trafficking, or the Facebook employees being worried about offending buyers over the people being trafficked.
But because it came from a very, you know, center-of-the-road, cautious publication, people were willing to go,
Maybe this is actually true.
And yeah, I mean, I think the Journal is so credible and sort of careful in how they present things.
Initially, you were not going to come out, right?
What motivated your decision to go public?
So, the reason why the disclosures are so large is because I wanted people to be able to understand them on their own.
Like I always expected to do like closed door briefings with like governments to be able to explain them to them.
But I never intended to be like part of the story.
And right before I came out, so maybe a couple of months before, my lawyers started really applying pressure on me,
where they were saying, you know, Facebook knows it's you. You know, they can look at all the different documents.
Right? They know it's you.
Yeah.
The reporting is going to start.
They're going to figure it out real fast.
Or as soon as the reporters start going to them for comment, they're going to figure it out real fast.
And the Wall Street Journal, they said, here's the deal.
Like, either you can wait for the rest of your life for Facebook to present you to the public.
You know, every day you're like open, you know, Google news, you're open to your newspaper and be like,
is today the day that Facebook is going to introduce me in the worst possible way?
Or you can take responsibility for what you did.
You know, I know that you don't want to be out.
You don't want people looking at you.
You don't want to be out there.
Like, you know, go and take responsibility.
Because if you do closed-door briefings, every single briefing you do will expand the circle of trust.
And you're going to be like the juiciest story for some reporter to break.
Like, if they could find me.
You know, a reporter or somebody would eventually just break it.
And one of my friends joked, like, after I came out, they were like,
I don't understand how the story stayed quiet for so long.
I thought everyone in San Francisco knew Frances was doing this.
And I was like, everyone in our circle of friends knew I was doing this.
Not everyone in San Francisco.
But still it's one of those things.
Like, someone brags to their friend about how they know who the Facebook whistleblower is.
And, you know, it just gets out.
And so I decided to step out into the light.
And it's actually been a really transformative experience.
You know, it's one of the things where I spent a lot of my life like really trying to avoid being seen.
You know, like, I've gotten married twice.
I eloped both times.
The idea of standing in front of a crowd, having them all just, like, staring at me for my wedding, sounds very stressful.
You know, I've had two birthday parties in the last like 20 years.
Like, it's just not my jam.
But it's been really interesting having to like stand up and try to educate people on these issues.
Because, you know, I really believe in democracy.
And the way the world changes is we change it.
You know, we get out there and we say, hey, you know, I'm going to keep repeating this until I see something different in the world.
And having to show up in my own life has been a huge blessing.
And that's something I never expected, that it would be one of the things where I would walk away,
you know, two years later, and say, I'm so grateful for this.
Awesome. That is a great ending to the podcast.
Thank you so much for coming on.
And I really appreciate talking to you.
My pleasure.
The last thing I'll leave people with is,
if you think all these things are too complicated, you know, like, I couldn't understand a technical book,
the whole point of The Power of One is that democracy requires an informed population.
And so it is written so that if you could follow along today, I guarantee you'll be entertained and able to follow along with The Power of One.
And there's lots of fun stories and, you know, crazy hijinks along the way.
Because I have always gotten into trouble, you know, when I didn't mean to.
And so I hope you come along on this journey too.
Thank you so much.
That's our episode.
Thanks so much to Tommy Heron, our audio editor.
I want to shout out Annie Wen, our intern for the summer, who's been helping me prep for the podcast and working on punching up the show.
Shout out to Riley Concello, my chief of staff.
And young Chomsky for the theme music.
This has been The Newcomer Podcast.
Please like, comment, and subscribe on YouTube, give us a review on Apple Podcasts, and most important, most simple:
subscribe to the Substack, newcomer.co.
Becoming a paid subscriber today makes this all possible.
Thank you so much.
See you next week.