400 pages, Hardcover
First published September 6, 2022
When you see a post expressing moral outrage, 250,000 years of evolution kick in. It impels you to join in. It makes you forget your moral senses and defer to the group's. And it makes inflicting harm on the target of the outrage feel necessary - even intensely pleasurable.
The platforms also remove many of the checks that normally restrain us from taking things too far. From behind a screen, far from our victims, there is no pang of guilt from seeing pain on the face of someone we've harmed. Nor is there shame at realising our anger has visibly crossed into cruelty. In the real world, if you scream expletives at someone for wearing a baseball cap in an expensive restaurant, you'll be shunned yourself, punished for violating norms against excessive displays of anger and for disrupting your fellow restaurant-goers. Online, if others take note of your outburst at all, it will likely be to join in.
But as the Valley expanded its reach, this culture of optimisation at all costs took on second-order effects. Uber optimising for the quickest ride-share pickups engineered labour protections out of the global taxi market. Airbnb optimising for short-term rental income made long-term housing scarcer and more expensive. The social networks, by optimising for how many users they could draw in and how long they could keep them there, may have had the greatest impact of all. "It was a great way to build a startup," Chaslot said. "You focus on one metric, and everybody's on board [for] this one metric. And it's really efficient for growth. But it's a disaster for a lot of other things."
Even in its most rudimentary form, the very structure of social media encourages polarisation. [...] Facebook groups amplify this effect even further. By putting users in a homogeneous social space, studies find, groups heighten their sensitivity to social cues and conformity. This overpowers their ability to judge false claims and increases their attraction to identity-affirming falsehoods, making them likelier to share misinformation and conspiracies. "When we encounter opposing views in the age and context of social media, it's not like reading the newspaper when sitting alone," the sociologist Zeynep Tufekci has written. "It's like hearing them from the opposing team while sitting with our fellow fans in a football stadium... We bond with our team by yelling at the fans of the other one."
The social platforms had arrived, however unintentionally, at a recruitment strategy embraced by generations of extremists. The scholar J.M. Berger calls it 'the crisis-solution construct'. When people feel destabilised, they often reach for a strong group identity to regain a sense of control. It can be as broad as nationality or as narrow as a church group. Identities that promise to recontextualise individual hardships into a wider conflict hold special appeal. You're not unhappy because of your struggle to contend with personal circumstances; you're unhappy because of Them and their persecution of Us. It makes those hardships feel comprehensible and, because you're no longer facing them alone, a lot less scary.
The problem, in this experiment [on Facebook misinformation], wasn't ignorance or lack of news literacy. Social media, by bombarding users with fast-moving social stimuli, pushed them to rely on a quick-twitch social intuition over deliberate reason. All people contain the capacity for both, as well as the potential for the former to overwhelm the latter, which is often how misinformation spreads. And platforms compound the effect by framing all news and information within high-stakes contexts.
[In 2018] Zuckerberg [...] riffed on the nature of free speech: "I'm Jewish, and there's a set of people who deny the Holocaust happened. I find that deeply offensive. But at the end of the day, I don't believe that our platform should take that down, because I think there are things different people get wrong. I don't think that they're intentionally getting it wrong."
It was vintage Silicon Valley. If Zuckerberg was willing to sacrifice historical consensus on the attempted extermination of his forebears for the sake of a techno-libertarian free-speech ideal, then so should everybody else. And, like many of the Valley's leaders, he seemed to be living in an alternate universe where platforms are neutral vessels with no role in shaping users' experiences, where the only real-world consequence is that somebody might get offended, and where society would appreciate the wisdom of allowing Holocaust denial to flourish.
When asked what would most effectively reform both the platforms and the companies overseeing them, Haugen had a simple answer: turn off the algorithm. "I think we don't want computers deciding what we focus on," she said. She also suggested that if Congress curtailed liability protections, making the companies legally responsible for the consequences of anything their systems promoted, "they would get rid of engagement-based ranking." Platforms would roll back to the 2000s, when they simply displayed your friends' posts from newest to oldest. No AI to swarm you with attention-maximising content or route you down rabbit holes.
Her response followed a reliable pattern that has emerged in the years I've spent covering social media.
Stage two in social media’s distorting influence, according to the MAD model, is something called internalization. Users who chased the platforms’ incentives received immediate, high-volume social rewards: likes and shares. As psychologists have known since Pavlov, when you are repeatedly rewarded for a behavior, you learn a compulsion to repeat it. As you are trained to turn all discussions into matters of high outrage, to express disgust with out-groups, to assert the superiority of your in-group, you will eventually shift from doing it for external rewards to doing it simply because you want to do it. The drive comes from within. Your nature has been changed.
But later, near the end of a technical explanation, as he stumbled into a reference to YouTube, his voice rose again. “YouTube is the worst,” he said. Of what he considered the four leading web companies—Google/YouTube, Facebook, Twitter, and Microsoft—the best at managing what he’d called “the poison” was, he believed, Microsoft. “And it makes sense, right? It’s not a social media company,” he said. “But YouTube is the worst on these issues,” he repeated.
“Its search and recommender algorithms are misinformation engines.” She later called it “one of the most powerful radicalizing instruments of the twenty-first century.” Danah Boyd, the founder of a tech-focused think tank, agreed, telling my colleague Amanda, “YouTube is perhaps the most troubling platform we have out there right now.”
There’s a term for the process Pauli described, of online jokes gradually internalized as sincere. It’s called irony poisoning. Heavy social media users often call themselves “irony poisoned,” a joke on the dulling of the senses that comes from a lifetime engrossed in social media subcultures, where ironic detachment, algorithmic overstimulation, and dare-to-offend humor prevail. In more extreme forms, sustained exposure to objectionable content, spent going down Facebook or YouTube rabbit holes, can lower people’s defenses against it. Desensitization makes the ideas seem less taboo or extreme, which in turn makes them easier to adopt.
Showing subjects lots of social media posts from peers that expressed outrage made them more outrage-prone themselves. Regular scrolls through your anger-filled feed are all it takes not only to make you feel angrier while you’re online, but also to make you an angrier person.
...his team had concluded that social networks, especially Facebook, had played a “determining role” in the genocide. The platforms, he said, “substantively contributed” to the hate destroying an entire population.
If social media were built to activate majoritarian identity panic, then America’s shrinking white majority—and especially the non-college-graduate or working-class whites who tend to hold their racial identity most closely and who became the bulk of the Trump coalition—would be dangerously susceptible to the same pattern I’d seen in Sri Lanka. Status threat and digital deindividuation on a national scale. By 2018, that tribe had, with a handful of exceptions like the rally in Charlottesville, not yet worked itself up to outright mob violence. But I wondered whether this sort of social media influence might be coming out in other forms, priming people for racial violence in less obvious but still consequential ways.
The changes were dramatic. People who deleted Facebook became happier, more satisfied with their life, and less anxious. The emotional change was equivalent to 25 to 40 percent of the effect of going to therapy—a stunning drop for a four-week break. Four in five said afterward that deactivating had been good for them. Facebook quitters also spent 15 percent less time consuming the news. They became, as a result, less knowledgeable about current events—the only negative effect. But much of the knowledge they had lost seemed to be from polarizing content: information packaged in a way to indulge tribal antagonisms. Overall, the economists wrote, deactivation “significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.”