There’s So Much More Garbage On Facebook Than You Even Realize, And It’s Impossible To Keep Up

There are 1.56 billion daily active users on Facebook, and the platform disabled 2.19 billion fake accounts in the first quarter of 2019. Plus, more disturbing stats from the company's latest Community Standards Enforcement Report.

On Thursday, Facebook published its third Community Standards Enforcement Report, which includes, for the first time, data on appeals and content restored, plus data on regulated goods on the platform. In total, the company is now tracking metrics for nine policies across the vast amount of content on its website: adult nudity and sexual activity, bullying and harassment, child nudity and sexual exploitation of children, fake accounts, hate speech, regulated goods, spam, global terrorist propaganda, and violence and graphic content.

"We have a responsibility to protect people’s freedom of expression in everything we do," Facebook CEO Mark Zuckerberg said on a call with reporters shortly after the Community Standards update published. "But at the same time, we also have a responsibility to keep people safe on Facebook and prevent harm from playing out across our services."

In short, the company says it's catching huge volumes of harmful content, but its numbers also show there is so much more of it than you probably ever realized.

Here are some more details about all the garbage on Facebook, according to its Community Standards update:

  • The site has 1.56 billion daily active users and 2.38 billion monthly active users. The company disabled 2.19 billion fake accounts in Q1 2019.
  • The social network estimates about 5% of monthly active accounts are fake.
  • Facebook took down 4 million hate speech posts in the same quarter. It says it cannot share the “prevalence” of hate speech yet.
  • Justin Osofsky, Facebook’s vice president of global operations, said on a press call that Facebook will run a pilot program in which some of its content reviewers specialize in hate speech.
  • There were at least 21 million instances of child nudity and sexual exploitation on Facebook during the last three quarters.
  • Facebook took action on an estimated 900,000 pieces of drug sale content, 83.3% of which it said it detected proactively. In the same period, it took action on 670,000 pieces of firearm sale content, 69.9% of which it detected proactively.
  • Facebook says that when it takes action on a piece of content, it notifies the person who posted it and in most cases offers them a way to say they think the company made a mistake, though it also admits its "enforcement isn't perfect."
  • “Our budget [for Facebook’s safety and security systems] in 2019 is greater than the whole revenue of our company in the year before we went public, in 2012,” Zuckerberg told reporters Thursday. A Facebook spokesperson had earlier clarified, in a Variety report, that Zuckerberg was referring to the company’s 2011 financials; that year, Facebook pulled in $3.7 billion in revenue.


In its blog post, Facebook said its two key metrics are "prevalence" and "content actioned." Prevalence is meant to capture how much harmful content is still on Facebook for people to see, including content the company hasn't yet identified; "content actioned" counts how many pieces of violating content Facebook took down or otherwise acted on. Facebook says it estimates prevalence "by periodically sampling content viewed on Facebook and then reviewing it to see what percent violates our standards." Of course, there's no way to know for sure that Facebook's statistics are accurate, because, well, only Facebook has unfettered access to its own trove of content. (The company made a point of saying it has established a Data Transparency Advisory Group to independently check its work, but that group doesn't have access to all of Facebook's data across its platform.)
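For a sense of how a sampled-views estimate like that could work, here's a minimal sketch. The function, names, and numbers below are hypothetical; Facebook hasn't published its methodology beyond the description quoted above.

```python
import random

def estimate_prevalence(view_log, sample_size, looks_violating):
    """Estimate what share of content *views* violate policy.

    view_log        -- one entry per view, so heavily viewed posts are
                       naturally sampled more often (prevalence is about
                       what people actually see, not a raw post count)
    sample_size     -- number of views to pull for review
    looks_violating -- stand-in for the human review step: item -> bool
    """
    sample = random.sample(view_log, sample_size)
    violations = sum(1 for item in sample if looks_violating(item))
    return violations / sample_size

# Hypothetical traffic: four posts, weighted by how often each is viewed.
views = ["post_a"] * 700 + ["post_b"] * 200 + ["post_c"] * 80 + ["post_d"] * 20
flagged = {"post_d"}  # pretend reviewers judge post_d to be violating

print(f"Estimated prevalence: {estimate_prevalence(views, 200, flagged.__contains__):.1%}")
```

The key design point, per Facebook's own description, is sampling views rather than posts: a violating post seen by millions weighs more in the estimate than one nobody saw.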

Facebook also touts a "Proactive Rate" metric, which is supposed to indicate how much (garbage) content in a particular policy area was detected by its AI before a human reported it at all. On this front, Facebook said it now proactively detects 65% of the hate speech content it removes, up from 24% just over a year ago.
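Read that way, the proactive rate is straightforward arithmetic: of everything actioned under a policy, what fraction did Facebook's systems flag before any user report? A quick sketch using the firearm-sales figures from the list above; the split below is our back-calculation for illustration, not a breakdown Facebook published.

```python
def proactive_rate(actioned_by_ai_first, actioned_after_user_report):
    """Fraction of actioned content flagged before any user report."""
    total = actioned_by_ai_first + actioned_after_user_report
    return actioned_by_ai_first / total if total else 0.0

# 670,000 pieces of firearm-sale content actioned, 69.9% proactively:
# roughly 468,330 flagged by Facebook's systems first and 201,670
# surfaced by user reports (back-calculated, hypothetical split).
print(f"{proactive_rate(468_330, 201_670):.1%}")  # -> 69.9%
```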

On Twitter, Facebook's former chief security officer, Alex Stamos, resurfaced a tweet suggesting that the billions of disabled accounts the company reported reflect both a dramatic increase in attacks on Facebook and effective company action in policing them.

The intentional misuse of this statistic by some folks in the media is why the companies don't give us transparency. This is an attacker-influenced metric, you have to be extremely careful in how you use it. https://t.co/hY5z3ZezLX

Zuckerberg often offers AI as a panacea for Facebook's massive content problem — and did so dozens of times during his testimony before the Senate Judiciary and Commerce committees last year. Still, AI is never going to be a 100% perfect solution for Facebook — as its own chief technology officer admits. “It’s never going to go to zero,” Mike Schroepfer said of problematic posts on Facebook in a recent interview with the New York Times. "AI is not a silver bullet," Guy Rosen, Facebook's vice president of integrity, said in a call with reporters Thursday. ●
