Why Do We Care If Facebook Is Biased?

The controversy over whether Facebook’s trending topics show a liberal bias lays bare the tension between Facebook’s desire to be a utility and its reality as a media company that publishes and curates material.

The “trending” section of Facebook—the space on the top right-hand side of the page where you can read about what’s happening in the world—functions sort of like the front page of a newspaper. But if you compare it to the front page of an actual national newspaper, you won’t see much overlap. On Tuesday morning, the top articles on the front page of the print edition of the Times were about Donald Trump’s uncomfortable attempts to court the Republican establishment and about the legal battle over a North Carolina law regulating which bathrooms transgender people can use. When I opened Facebook that afternoon, the top trending topics included the news that Eva Mendes and Ryan Gosling had, according to TMZ, welcomed a child (Amada) and that the “Guinness Book of World Records” had anointed the oldest living cat (thirty years old, Siamese).

The difference comes down to a distinction in how newspapers and Facebook judge what’s important to their readers. Newspaper editors have long operated under the principle that they will determine what’s worth reading, and, by God, you’ll read it. Facebook and other Internet platforms—Twitter, Google—take a more populist approach. Facebook has emphasized in the past that trending topics are mostly the product of an algorithm built to suss out what’s most popular, though human editors approve the topics and write headlines to describe them. This work involves some fairly clerical duties—keeping “lunch” off the trending topics during the lunch hour, for instance—but at times editorial judgment is involved. For instance, curators can suppress articles that are deemed to come from unreliable or biased sources. This aspect of the job appears to be at the center of a Gizmodo report on Monday, in which an unnamed former curator for Facebook, who happens to lean to the political right, alleges that other curators regularly kept stories that might be of interest to conservative readers from appearing in the “trending” section—even if they were trending among Facebook users.
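Facebook has never published the mechanics of that algorithm, but the standard way to detect a “trending” topic is to flag terms whose current mention rate spikes well above their historical baseline. The sketch below is purely illustrative; the function names, thresholds, and numbers are invented here, not drawn from anything Facebook has disclosed.

```python
from collections import Counter

def trending_topics(current_mentions, baseline_mentions, top_n=10, min_lift=3.0):
    """Rank topics by how far their current mention count exceeds a
    historical baseline. All names and thresholds are illustrative."""
    scores = {}
    for topic, count in current_mentions.items():
        baseline = baseline_mentions.get(topic, 1)  # avoid dividing by zero
        lift = count / baseline
        if lift >= min_lift:  # keep only topics spiking well above normal
            scores[topic] = lift
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A perennially popular term ("lunch" at noon) barely exceeds its own
# baseline, so it never spikes -- roughly the filtering the curators
# otherwise do by hand.
current = Counter({"lunch": 9000, "oldest living cat": 4000})
baseline = {"lunch": 8500, "oldest living cat": 300}
print(trending_topics(current, baseline))  # ['oldest living cat']
```

Notice that, even in this toy version, someone had to choose the baseline window and the threshold; the human judgment is baked into the code rather than absent from it.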

On Monday afternoon, Tom Stocky, a Facebook executive, wrote that Facebook has found no evidence that the allegations of bias are accurate. “There are rigorous guidelines in place for the review team to ensure consistency and neutrality,” Stocky wrote. “These guidelines do not permit the suppression of political perspectives. Nor do they permit the prioritization of one viewpoint over another or one news outlet over another.” He added that “we’ve designed our tools” to make ideological discrimination “technically not feasible.”

Stocky didn’t elaborate on how that was possible, and his statement did nothing to douse the ire of conservatives. On Tuesday, the Senate Committee on Commerce, chaired by Republican Senator John Thune, wrote to Mark Zuckerberg, the C.E.O. of Facebook, asking for details about the company’s curation process. “If Facebook presents its Trending Topics section as the result of a neutral, objective algorithm, but it is in fact subjective and filtered to support or suppress particular political viewpoints,” Thune wrote, it “misleads the public.” Facebook doesn’t appear to have claimed that its algorithm is neutral or objective, but Thune’s basic point—that Facebook positions itself as a passive transmitter of information, rather than an active participant in that transmission—seems fair. Stocky, in his post, wrote that Facebook’s trending topics are surfaced by an algorithm and simply “audited” by human editors to avoid mistakes.

The Gizmodo report, and the stir it raised, has been a reminder of the extent to which we look to Facebook to learn about what’s happening in our world. Last year, the Pew Research Center found that sixty-three per cent of Facebook users in the U.S. get their news from the platform. This doesn’t happen just through the “trending” section; articles also appear, of course, in our news feeds, when they’re posted by our friends or by the news organizations we follow, and in Facebook’s Paper app. When you’re standing in line at the grocery store scrolling through Facebook, part of what you’re doing—consciously or not—is absorbing Facebook’s version of the news. What’s more, Facebook makes decisions all the time about which kinds of stories to privilege—just not in ways that we typically recognize as biased. An example: when Facebook recently made available a feature, Facebook Live, that lets publishers transmit real-time video clips to their followers, it announced that live videos would be prioritized over non-live ones in people’s news feeds. And, according to BuzzFeed, Facebook is paying news organizations and celebrities to produce live videos for its platform. None of this should be surprising. Facebook generates revenue when people spend a lot of time using it, and people spend time using it when they find interesting material there. And publishers remain an important source of that material. Facebook is playing editor all the time—it’s just that we don’t recognize it, because the editorial influence takes a different form than it would at a traditional news organization.
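Facebook hasn’t disclosed how much of a boost live video actually gets in the news-feed ranking, but mechanically a “prioritization” of that kind can be as simple as a multiplier on a post’s score. This is a toy sketch; every field and weight in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often the viewer interacts with this source
    engagement: float       # likes, comments, shares so far
    is_live_video: bool

# The multiplier is invented for illustration; Facebook has never
# published the weight it gives live video.
LIVE_BOOST = 3.0

def rank_score(post: Post) -> float:
    score = post.author_affinity * post.engagement
    if post.is_live_video:
        score *= LIVE_BOOST  # live content jumps ahead of comparable posts
    return score

posts = [
    Post(author_affinity=0.9, engagement=120.0, is_live_video=False),
    Post(author_affinity=0.5, engagement=80.0, is_live_video=True),
]
feed = sorted(posts, key=rank_score, reverse=True)
print([p.is_live_video for p in feed])  # [True, False] -- the live video wins
```

A single constant like that is an editorial decision, whether or not anyone at the company calls it one.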

It’s interesting to consider why it so troubles us, on a gut level, to learn that human editors are involved in Facebook’s editorial process. It can’t be that it disturbs us to have our information mediated by humans, who are fallible and biased. After all, we’re allowing this anytime we read what is called news. In one 2010 research paper, two economists used a novel method of quantifying the bias in newspapers—counting up the frequency with which they printed phrases typically used by liberals (“minimum wage”) versus conservatives (“tax relief”). Their approach found that the Daily Oklahoman had a conservative bias, whereas the San Francisco Chronicle had a liberal one. The Wall Street Journal was slightly right of center, and the Times slightly left. The most obvious difference between Facebook and the Times, of course, is that Facebook began as a tech firm, with its faith in code. The notion that algorithms are better at this stuff than people are was popularized before Facebook even existed—by a tech company with its own big ambitions. When Google created Google News, in 2002, the fine print at the bottom of the site read, “This page was generated entirely by computer algorithms without human editors. No humans were harmed or even used in the creation of this page.”
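The counting method from that 2010 paper (by the economists Matthew Gentzkow and Jesse Shapiro) boils down to tallying partisan phrases and comparing the totals. A stripped-down sketch of the counting step might look like the following; the phrase lists here are tiny stand-ins for the roughly one thousand phrases the study derived statistically from congressional speech, and the score is a crude ratio rather than the paper’s actual index.

```python
import re

# Tiny, illustrative phrase lists; the real study selected its phrases
# statistically from the Congressional Record.
LIBERAL_PHRASES = ["minimum wage", "civil rights"]
CONSERVATIVE_PHRASES = ["tax relief", "illegal aliens"]

def count_phrase(text: str, phrase: str) -> int:
    return len(re.findall(re.escape(phrase), text.lower()))

def slant(text: str) -> float:
    """Positive means more conservative language, negative more liberal.
    A crude ratio, not the study's regression-based measure."""
    lib = sum(count_phrase(text, p) for p in LIBERAL_PHRASES)
    con = sum(count_phrase(text, p) for p in CONSERVATIVE_PHRASES)
    total = lib + con
    return 0.0 if total == 0 else (con - lib) / total

article = "The bill pairs tax relief for small business with a minimum wage study."
print(slant(article))  # 0.0 -- one phrase from each list
```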

There’s something about how we experience Facebook that sets it apart from traditional publishers. On theatlantic.com, Derek Thompson wrote, “Facebook is a media company, but more than that, it is a utility, an integral piece of information infrastructure upon which hundreds of publishers and media companies rely to reach their audience.” The point isn’t just that so many people use Facebook (though that’s part of the point). It’s that it has been positioned as part of the Internet “infrastructure,” not so different from the electric grid or the telecommunications system.

Mark Zuckerberg has been among the most enthusiastic proponents of this idea. Talking with an Atlantic editor in 2013, he said, “Maybe electricity was cool when it first came out, but pretty quickly people stopped talking about it, because it’s not the new thing. The real question you want to track at that point is, Are fewer people turning on their lights because it’s less cool?” The answer, of course, is no; electricity has become too indispensable for that. Facebook wants to be just as indispensable.

The thing about utilities, though, is that, at least in the U.S., we expect a kind of dumb passivity from them. The phone companies don’t put through some calls and block others; Internet providers—which are being treated more and more like utilities—can’t decide which Web sites we visit. Facebook doesn’t show us every news item published in any given hour—that would be too overwhelming—which means it must have some internal system to make choices. But imagining that a machine is behind this system, even if the machine’s software is human-built, feels more comfortable than picturing humans making these decisions.

People and their judgment do come into the picture, however. Despite being ubiquitous, Facebook and Google are not actual utilities. People depend on utilities to fulfill basic needs; Facebook and Google provide perfectly useful services, but, in a pinch, most of us could make do with a differently structured service like Twitter or Wikipedia instead. Utilities, because of their importance, are supposed to be regulated in order to ensure fairness and consistency of service; Facebook and Google are not. Utilities bring in revenue by charging for their services; Facebook and Google do so by selling ads.

In other words, Facebook and Google have come to resemble a different old-school type of business—one that, like the Internet business, is highly competitive and relatively unrestricted by the U.S. government. They look like media companies. As Facebook becomes increasingly active in determining which news its users see, it would do well to acknowledge the use of humans—including their potential for errors and bias. This might even give the company some freedom to be more discerning in deciding which headlines we should see. If an algorithm believes that the anointment of the oldest living cat is more important than the Presidential election, mightn’t it be useful for a human to step in and make some adjustments?