TOKYO — American social media platforms are behind the curve in dealing with conspiracy theories and other dubious content in Japan compared with their efforts in the U.S.
This struggle is reflected in internal documents leaked by Frances Haugen, a former employee of Facebook, now known as Meta. The papers were disclosed to the U.S. Securities and Exchange Commission and provided to Congress in redacted form by Haugen’s legal counsel. The redacted versions received by Congress were reviewed by a consortium of news organizations, including Nikkei.
In one internal exchange recorded in the documents, a member of the content review team asked in January whether there were proactive investigations into QAnon content on Instagram, the photo-sharing app, given its popularity in Japan.
“I think I can say, somewhat confidently yet sadly, that we haven’t done so,” said an employee involved in that area.
“We took a ton of action in Q4 against QAnon entities but that was mostly focused on the US and English/Spanish accounts,” the employee wrote.
This response is in keeping with a slew of reports underscoring Facebook’s inability to effectively moderate content outside its home country, even as the platform gained a global audience. It has been particularly criticized for gaps in its coverage of Arabic and Indian languages.
In response to questions about its handling of Japanese QAnon content, Meta told Nikkei that “artificial intelligence and human employees work in tandem” to handle posts that violate its community rules and guidelines.
Followers of QAnon believe that the world is controlled by a shadowy satanic cabal of child-trafficking elites. Many were among the supporters of former U.S. President Donald Trump who attacked the Capitol on Jan. 6. The movement has also promoted misinformation about the coronavirus pandemic and COVID-19 vaccines.
Meta said last December it would remove “false claims about the safety, efficacy, ingredients or side effects” of coronavirus vaccines.
But Japanese posts that likely run afoul of this policy — suggesting, for example, that the virus and vaccines are part of a depopulation plan — have stayed up, including some from influential figures.
A member of a prefectural assembly has repeatedly spoken out against coronavirus vaccines on Facebook, and claims that he has never had a post deleted. As recently as Friday, he made a post asserting that vaccine recipients can be tracked remotely.
Asked about this specific account, Meta declined to comment on individual cases, but acknowledged that it “cannot identify all harmful content.”
Dubious Japanese content is a problem on Twitter as well. QAnon-related Japanese tweets began picking up around April 2020, according to Fujio Toriumi, a professor at the University of Tokyo and expert on social media analysis. While the pace slowed down after the Capitol assault, there are still about 1,000 such posts per day.
Twitter announced in January that it had suspended more than 70,000 accounts in a QAnon crackdown. A study by the Digital Forensics Research Lab at the Atlantic Council, an American think tank, found that English-language QAnon content has “all but evaporated” from major social media platforms.
Twitter’s response to Japanese content has lagged behind. The company told Nikkei that it “deals with accounts that violate our rules around the world.”
Meta said it has about 40,000 people working on safety and security and monitoring content worldwide, but neither it nor Twitter has specified how many employees are assigned exclusively to Japan. Yahoo, which has a much stronger presence in Japan than in the U.S., has released concrete details and data about its content moderation practices.
“You can’t say overseas platform operators have been doing enough,” said Hosei University professor and media expert Hiroyuki Fujishiro.
If social media platforms fail to take effective measures against problematic content, tailored to each country’s individual circumstances, these services can contribute to social instability.
Meta has been criticized in the past for a corporate culture that puts profits ahead of safety. Reporting spurred by Haugen’s leaks has highlighted the company’s struggles to get hate speech and misinformation under control.
The leaks, and Haugen’s testimony before Congress, could encourage stronger regulation of social media companies around the world. The Cambridge Analytica scandal of 2018, in which a British consulting firm was found to have collected data from millions of Facebook users without consent, sparked a similar push to rein in these services.