New research conducted by Mozilla has found that YouTube’s algorithm is recommending videos containing misinformation, violent content, hate speech and scams.
The 10-month-long crowdsourced investigation, which Mozilla claims is the largest of its kind, revealed that people in non-English-speaking countries were 60% more likely to encounter disturbing videos.
The company conducted the research using RegretsReporter, an open-source browser extension through which people voluntarily donate their data, giving researchers access to a pool of YouTube’s recommendation data.
According to the data analysed, 71% of the videos that project volunteers reported as regrettable were actively recommended by YouTube’s own algorithm.
About 200 of these videos have since been removed by YouTube; they had drawn a collective 160 million views before being taken offline.
“YouTube needs to admit their algorithm is designed in a way that harms and misinforms people,” Brandi Geurkink, Mozilla’s Senior Manager of Advocacy, said in a statement.
“Our research confirms that YouTube not only hosts, but actively recommends videos that violate its very own policies.”
Stressing the point further, Geurkink noted that one person who watched videos about the U.S. military was then recommended a misogynistic video titled “Man humiliates feminist in viral video,” while another person who watched a video about software rights was recommended a video about gun rights.
In another instance, a person who had watched an Art Garfunkel music video was recommended a highly sensationalised political video titled “Trump Debate Moderator EXPOSED as having Deep Democrat Ties, Media Bias Reaches BREAKING Point.”
Mozilla found that recommended videos were 40% more likely to be regretted than videos the volunteers searched for. In 43.6% of cases where Mozilla had data about the videos a volunteer watched before a regret, the recommendation was completely unrelated to those previous videos.
To address the issues, Geurkink suggests that common-sense transparency laws, better oversight, and consumer pressure can help rein in the algorithm. Other recommendations include publishing frequent and thorough transparency reports with information about recommendation algorithms, giving people the option to opt out of personalised recommendations, and enacting laws that mandate AI system transparency and protect independent researchers.
Published - July 09, 2021 01:02 pm IST