Internet companies write their own rules for determining hate speech, which gives them wide latitude in interpreting what constitutes offensive material in different countries
For Google last week, the decision was clear. An anti-Islamic video that provoked violence worldwide was not hate speech under its rules because it did not specifically incite violence against Muslims, even if it mocked their faith.
The White House was not so sure, and it asked Google to reconsider the determination, a request the company rebuffed.
Although the administration’s request was unusual, for Google, it represented the kind of delicate balancing act that Internet companies confront every day.
These companies, which include social media services like Facebook and Twitter, write their own rules about what kinds of expression are allowed, covering material as diverse as pointed political criticism, nudity and notions as murky as hate speech. And their employees work around the clock to determine when users run afoul of those rules.
Google is not the only Internet company to grapple in recent days with questions involving the anti-Islamic video, which appeared on YouTube, the video site Google owns. Facebook on Friday confirmed that it had blocked links to the video in Pakistan, where it violates the country’s blasphemy law. A spokeswoman said Facebook had also removed a post containing a threat to a United States ambassador after receiving a report from the State Department; Facebook declined to say in which country the ambassador worked.
“Because these speech platforms are so important, the decisions they take become jurisprudence,” said Andrew McLaughlin, who has worked for both Google and the White House.

Most vexing among those decisions are ones that involve whether a form of expression is hate speech. Hate speech has no universally accepted definition, legal experts say. And countries, including democratic ones, have widely divergent legal approaches to regulating speech they consider to be offensive or inflammatory.

Europe bans neo-Nazi speech, for instance, but courts there have also banned material that offends the religious sensibilities of one group or another. Indian law frowns on speech that could threaten public order. Turkey can shut down a Web site that insults its founding President, Kemal Ataturk.

Like the countries, the Internet companies have their own positions, which give them wide latitude on how to interpret expression in different countries.
Although Google says the anti-Islamic video, “Innocence of Muslims,” was not hate speech, it restricted access to the video in Libya and Egypt because of the extraordinarily delicate situation on the ground and out of respect for cultural norms.
Google has not yet explained why its cultural-norms exception applied to only those two countries and not to others where Muslim sensitivities have been demonstrably offended.
Google’s fine parsing led to a debate in the blogosphere about whether the video constituted hateful or offensive speech.
Peter J. Spiro, a law professor at Temple University, said Google was justified in restricting access to the video in certain places, if for no other reason than to stanch the violence.
“Maybe the hate speech/offensive speech distinction can be elided by the smart folks in Google’s foreign ministry,” Mr. Spiro wrote on the blog Opinio Juris. “If material is literally setting off global firestorms through its dissemination online, Google will strategically pull the plug.”
Every company that does business globally makes a point of obeying the laws of the countries in which it operates. Google has already said that it took down links to the incendiary video in India and Indonesia because it violated local statutes.
But even as companies set their own rules, sometimes capriciously and without the due process that binds most countries, legal experts say they must be flexible enough to strike the right balance between democratic values and the law.
“Companies are benevolent rulers trying to approximate the kinds of decisions they think would be respectful of free speech as a value and also human safety,” said Jonathan Zittrain, a law professor at Harvard.
Unlike Google, Twitter does not explicitly address hate speech, but it says in its rule book that “users are allowed to post content, including potentially inflammatory content, provided they do not violate the Twitter Terms of Service and Rules.” Those include a prohibition against “direct, specific threats of violence against others.”
That wide margin for speech sometimes lands Twitter in feuds with governments and lobbyists. Twitter was pressed this summer to take down several accounts that the Indian government considered offensive. Company officials agreed to remove only those that blatantly impersonated others; impersonation violates company rules unless the user makes clear that the account is satirical.
Facebook has some of the industry’s strictest rules. Terrorist organisations are not permitted on the social network, according to the company’s terms of service. In recent years, the company has repeatedly shut down fan pages set up by Hezbollah.
In a statement after the killings of United States Embassy employees in Libya, the company said, “Facebook’s policy prohibits content that threatens or organises violence, or praises violent organisations.”
Facebook also explicitly prohibits what it calls “hate speech,” which it defines as attacking a person. In addition, it allows users to report content they find objectionable, which Facebook employees then vet. Facebook’s algorithms also pick up certain words that are then sent to human inspectors to review; the company declined to provide details on what kinds of words set off that kind of review. — New York Times News