Facebook’s misinformation and violence problems are worse in India
Facebook whistleblower Frances Haugen’s leaks suggest its problems with extremism are particularly dire in some areas. Documents Haugen provided to the New York Times, Wall Street Journal and other outlets suggest Facebook is aware it fostered severe misinformation and violence in India. The social network apparently didn’t have nearly enough resources to deal with the spread of harmful material in the populous country, and didn’t respond with enough action when tensions flared.
A case study from early 2021 indicated that much of the harmful content from groups like Rashtriya Swayamsevak Sangh and Bajrang Dal wasn’t flagged on Facebook or WhatsApp because the company lacked the technical know-how to spot content written in Bengali and Hindi. At the same time, Facebook reportedly declined to mark the RSS for removal due to “political sensitivities,” and Bajrang Dal (linked to Prime Minister Modi’s party) hadn’t been touched despite an internal Facebook call to take down its material. The company also had a whitelist of politicians exempt from fact-checking.
Facebook was struggling to fight hate speech as recently as five months ago, according to the leaked data. And like an earlier test in the US, the research showed just how quickly Facebook’s recommendation engine suggested toxic content. A dummy account following Facebook’s recommendations for three weeks was subjected to a “near constant barrage” of divisive nationalism, misinformation and violence.
As with earlier scoops, Facebook said the leaks didn’t tell the whole story. Spokesman Andy Stone argued the data was incomplete and didn’t account for third-party fact checkers used heavily outside the US. He added that Facebook had invested heavily in hate speech detection technology in languages like Bengali and Hindi, and that the company was continuing to improve that tech.
The social media firm followed this by posting a lengthier defense of its practices. It argued that it had an “industry-leading process” for reviewing and prioritizing countries with a high risk of violence every six months. It noted that teams considered long-term issues and history alongside current events and dependence on its apps. The company added it was engaging with local communities, improving technology and continuously “refining” policies.
The response didn’t directly address some of the concerns, however. India is Facebook’s largest individual market, with 340 million people using its services, yet 87 percent of Facebook’s misinformation budget is focused on the US. Even with third-party fact checkers at work, that suggests India isn’t getting a proportionate amount of attention. Facebook also didn’t follow up on worries that it was tiptoeing around certain people and groups beyond a previous statement that it enforced its policies without regard for position or association. In other words, it’s not clear Facebook’s problems with misinformation and violence will improve in the near future.