Why The Path To Fake News Was Paved With Facebook’s Good Intentions

April 10, 2018

In 2018 the concept of credibility itself is under siege. No one put it better than Alexander Nix, the then-CEO of Cambridge Analytica, in a recorded conversation with undercover reporters released on March 19. After bragging about how his firm can entrap political candidates in bribery schemes and prostitution rings, Nix said something that is relatively obvious to anyone who has been following election news over the past year: “these are things that don’t necessarily need to be true as long as they are believed.”

Nix’s words refer to scurrilous political rumor, but they could apply to any of the fake news that passes daily through Facebook users’ newsfeeds, spreading spurious accounts of nonexistent terror threats, long-discredited “scientific” studies, and pedophile sex rings in pizzeria basements. This kind of misinformation is poisonous not just because it is incorrect; it is poisonous because it erodes our confidence in the institutions that have helped us separate fact from fiction and distinguish truth from lies. And on Facebook it spreads at lightning speed, with no way for informed or reasonable people to stop or slow it.

That poison is in part the downstream consequence of a decision made with the best of intentions by top Facebook brass. When asked in a 2014 Q&A why Facebook still didn’t have a dislike button after a decade of user requests, Mark Zuckerberg implied that the ability to label posts “good” or “bad” would foster too much negativity on the site. “I don’t think that’s socially very valuable or good for the community to help people share the important moments in their lives,” he said.

Unfortunately, by trying to restrain negativity, Facebook ended up fostering it. When Zuckerberg decided not to include a dislike button on Facebook, he could not have foreseen the destructive forces he was unleashing on our culture. Yet that decision to forgo a check on negative thoughts, facts, and ideas sits at the heart of a global crisis that has damaged our ability to judge what is true and what is right. A dislike button would unquestionably be “socially very valuable.” In fact, at the present moment it might be exactly what is needed to pull our society back from the brink.

As easy as it is to point fingers, however, it’s not only Zuckerberg who deserves blame. Most of us in tech and venture capital have aided and abetted the damage wrought by the Facebook machine. Until we as an industry recognize the harm done by imagining a utopian world where bad ideas don’t exist, we will continue down a road to devastation.

A marketplace of echoes

In a functioning marketplace of ideas, people share the things they agree with and reject or discredit those that they don’t. Democratic societies have operated this way for hundreds of years. It hasn’t been a perfect system, but for the most part it has been an effective way to put the brakes on bad information, ugly concepts, and other toxic noise that pollutes our ideological landscape.

By dispensing with the dislike button, Facebook has created a marketplace of ideas with an accelerator but no brake. Things that users find enjoyable, entertaining, and pleasing are passed along effortlessly, at the speed of light. But when a user encounters the aforementioned toxic noise, there is no way to push back. Commenting on an offensive post simply pushes it higher in friends’ newsfeeds, giving it more attention and credibility; ignoring it lets it accelerate into virality unabated. As more and more of life is lived online, this design choice has come to have an outsize and overwhelmingly destructive impact.
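
To make the mechanism concrete, here is a minimal sketch in Python of how a purely engagement-driven feed ranker behaves. The fields, weights, and engagement_score function are hypothetical illustrations, not Facebook’s actual model; the point is only that every interaction, including a comment written to rebut a post, adds to its score, and nothing ever subtracts.

```python
# A minimal, hypothetical sketch of engagement-only feed ranking.
# Field names and weights are illustrative, not Facebook's real model.

from dataclasses import dataclass


@dataclass
class Post:
    likes: int
    comments: int
    shares: int


def engagement_score(post: Post) -> float:
    """Every interaction adds to the score; none subtracts.

    A comment written to rebut a false post still counts as
    engagement, so "pushing back" promotes the post instead of
    demoting it. With no dislike signal, there is no negative term.
    """
    return 1.0 * post.likes + 2.0 * post.comments + 3.0 * post.shares


# An outraged rebuttal in the comments raises the score just as a
# supportive comment would:
hoax = Post(likes=10, comments=0, shares=2)
print(engagement_score(hoax))  # 16.0
hoax.comments += 5             # five users argue against the hoax
print(engagement_score(hoax))  # 26.0 -- the hoax now ranks higher
```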

If we tell a lie offline, “in the real world,” there is an immediate opportunity to contradict that untruth through a broad range of social mechanisms. We call out these moments of dishonesty in our conversations with others, in our business dealings, in our press and media, in our courts, and through our instruments of government. Artists shine a spotlight on lies and injustices, as do social activists and victims. Facebook, however, has gagged these activities, stripping us and our institutions of the ability to contradict bad ideas in the public square.

The only way to push back on a bad idea on Facebook is to launch a competing opinion as a new meme into the social media sphere. For those who wish to hear it, social networks speed these ideas to the like-minded. But those who most acutely need to be exposed to the truth are insulated from the counterattack, thanks to the echo chambers that are another byproduct of Facebook’s lack of ideological brakes. Instead of engaging with and evaluating disparate ideas, we hear what we want to hear and are sheltered from the rest. The like button is an accelerant, fanning the passions of the like-minded and giving remarkable velocity to both the signal and the noise we seek.

A platform for propaganda

Why did Zuckerberg create an online world where it’s almost impossible to push back on bad ideas? I suspect he may have been motivated by the liberal, politically correct impulse to be careful of other people’s sensitivities. In that same 2014 Q&A, Zuckerberg said that if Facebook added a broader spectrum of emotions, it would do so in a way that didn’t involve “demeaning the posts people put out there.” Indeed, when Facebook rolled out an expanded suite of reaction buttons in 2016, a dislike option was nowhere to be found. Even the reactions that involve negative emotion, like anger or sadness, are today generally used in ways that imply empathy with the poster rather than disagreement. An angry emoji on a post can mean “your opinion makes me angry,” but more often it means “I share your righteous anger.” After more than a decade, Facebook still doesn’t give users an unambiguous way to push back on bad ideas or fake facts.

If you’re afraid of “demeaning the posts people put out there,” herding users into echo chambers is a great way to ensure no one ever gets their feelings hurt. The Facebook platform might be filled with vile right-wing invective and, in Myanmar at least, incitements to actual genocide. But it’s also in another sense a safe space where everyone can find their own supportive audience—the only problem being that “everyone” necessarily includes racists and mass murderers.

There’s a second, more pragmatic reason for the absence of a dislike button on Facebook: Zuckerberg knows that advertisers don’t want it.

And they don’t want it for the same reasons propagandists such as Cambridge Analytica don’t want it. Advertisers are fundamentally in the business of getting people to believe somewhat questionable “facts,” whether claims about their products, “needs” we didn’t know we had, or insecurities they teach us to carry (and then build products to relieve). They want these facts delivered quickly and inexpensively through word of mouth, without dissent or pushback. This is also the mission of the propagandist. By catering to advertisers, Zuckerberg has created the most effective platform for disseminating propaganda the world has ever seen.

And this is why we are left with the increasingly shrill noise of the left and the right shouting past each other online, on television, and increasingly in our cities, schools, churches, and family gatherings. Everyone is armed with their own “facts” to support their point of view, and we have been trained not to consider an alternative perspective, because counterpoint has been whitewashed out of the network that delivers an ever-increasing share of the ideas we encounter.

I don’t pretend to be the first person in tech or venture capital to think about this, or to deeply regret my own culpability in the situation. But this crisis of our social networks needs to be called out clearly and consistently from within our industry’s ranks if we are going to address the collateral damage it has created.

The private sector can’t meet this crisis alone. The two congressional hearings at which Zuckerberg will testify this week are a crucial opportunity for the government to show it is serious about reining in social media’s propagandistic tendencies. Leaders of the Senate Judiciary Committee have hinted that after their joint hearing with the Commerce Committee today, they will lay down new “rules of the road” for Facebook and similar companies. Both the Senate and the House should consider legislation to impede the spread of propaganda and poisonous ideas online, and to help society as a whole take a step back from the abyss.
