Facebook offers up first-ever estimate of hate speech prevalence on its platform

The world's largest social media company, under scrutiny over its policing of abuses, particularly around November's U.S. presidential election, released the estimate in its quarterly content moderation report.

November 20, 2020 11:07 am | Updated November 28, 2021 01:49 pm IST


Facebook Inc for the first time on Thursday disclosed numbers on the prevalence of hate speech on its platform, saying that out of every 10,000 content views in the third quarter, 10 to 11 included hate speech.


Facebook said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which was proactively identified, compared to 22.5 million in the previous quarter.

The company defines 'taking action' as removing content, covering it with a warning, disabling accounts, or escalating it to external agencies.

This summer, civil rights groups organized a widespread advertising boycott to try to pressure Facebook to act against hate speech.


The company agreed to disclose the hate speech metric, calculated by examining a representative sample of content seen on Facebook, and submit itself to an independent audit of its enforcement record.

On a call with reporters, Facebook's head of safety and integrity Guy Rosen said the audit would be completed “over the course of 2021.”

The Anti-Defamation League, one of the groups behind the boycott, said Facebook's new metric still lacked sufficient context for a full assessment of its performance.

“We still don't know from this report exactly how many pieces of content users are flagging to Facebook whether or not action was taken,” said ADL spokesman Todd Gutnick. That data matters, he said, as “there are many forms of hate speech that are not being removed, even after they're flagged.”
Rivals Twitter and YouTube, owned by Alphabet Inc's Google, do not disclose comparable prevalence metrics.

Facebook's Rosen also said that from March 1 to the Nov. 3 election, the company removed more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies.

In October, Facebook said it was updating its hate speech policy to ban content that denies or distorts the Holocaust, a turnaround from public comments Facebook's Chief Executive Mark Zuckerberg had made about what should be allowed.

Facebook said it took action on 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content.

Earlier this week, Zuckerberg and Twitter Inc CEO Jack Dorsey were grilled by Congress on their companies' content moderation practices, from Republican allegations of political bias to decisions about violent speech.

Last week, Reuters reported that Zuckerberg told an all-staff meeting that former Trump White House adviser Steve Bannon had not violated enough of the company's policies to justify suspension when he urged the beheading of two U.S. officials.


The company has also been criticized in recent months for allowing large Facebook groups sharing false election claims and violent rhetoric to gain traction.

Facebook said its rates for finding rule-breaking content before users reported it were up in most areas, attributing the gains to improvements in its artificial intelligence tools and the expansion of its detection technology to more languages.

In a blog post, Facebook said the COVID-19 pandemic continued to disrupt its content-review workforce, though some enforcement metrics were returning to pre-pandemic levels.

An open letter (https://www.foxglove.org.uk/news/open-letter-from-content-moderators-re-pandemic) from more than 200 Facebook content moderators, published on Wednesday, accused the company of forcing these workers back to the office and 'needlessly risking' lives during the pandemic.

“The facilities meet or exceed the guidance on a safe workspace,” said Facebook's Rosen.

