To combat disinformation, Japan could draw lessons from Australia and Europe

Japan is moving to strengthen its resilience to disinformation, though so far it’s only in the preparatory stage.

The EU and some countries have introduced content moderation requirements for digital platforms. By contrast, Japan has so far proceeded only with expert discussion on countermeasures against disinformation. While that progress is welcome, Tokyo needs to consider establishing its own standards and joining a growing global consensus on countering disinformation, including foreign information manipulation linked to malign state actors.

2024 was a tough year for Japan in countering disinformation campaigns. Immediately after the Noto earthquake in January, false rescue requests spread widely on social media, diverting scarce emergency-service resources away from people who genuinely needed help. After record-breaking rainfall hit the Tohoku region in July, more than 100,000 spam posts disguised as disaster information appeared on social media. And ahead of the September election for the Liberal Democratic Party presidency and Japan’s October general election, the Japan Fact-check Center identified the spread of false and misleading information about political parties and candidates.

Japan is in a delicate situation. It is at the forefront of Chinese hybrid threats because of its proximity to Taiwan and its principled stance upholding the rules-based order. But Japanese society, accustomed to little political division and to passively receiving information, may be less resilient to disinformation than countries such as the United States or South Korea.

Now, about 67 million Japanese are active users of X, more than half the population. X has become an important news and information source for a segment of Japanese society that is less inclined to confirm the accuracy of news items via more mainstream sources.

In response, the government has taken steps to combat disinformation and misinformation. In April 2023, a specialised unit was established within the Cabinet Secretariat to collect and analyse disinformation spread by foreign actors. As president of the G7, Japan introduced the Hiroshima AI Process in 2023 to address AI-enabled disinformation. Furthermore, the Ministry of Foreign Affairs produced solid evidence to effectively counter disinformation campaigns relating to the release of treated wastewater from the Fukushima Daiichi nuclear power plant. This disinformation may have come from China. The ministry’s effort should be applauded and serve as a model for future responses.

But simply responding to every incident may not be sustainable. Countering the proliferation of disinformation also requires content moderation, which must be balanced to protect freedom of expression and avoid placing an undue burden on digital platforms. Thankfully, international partners provide some good examples for reference.

The EU’s Digital Services Act, in full effect since 2024, requires digital platforms to disclose the reasoning behind content moderation decisions and introduces mechanisms for reporting illicit content. In Australia, the Combatting Misinformation and Disinformation Bill (2024) was intended to give the Australian Communications and Media Authority powers to force digital platforms to take proactive steps to manage the risk of disinformation. The bill was abandoned in late November, and its failure offers Japan lessons on how to avoid a similar outcome.

Japan’s government has commissioned various study groups but so far has taken no legislative action to combat misinformation and disinformation. The present reliance on voluntary efforts by digital platforms is insufficient, especially given the growing likelihood and sophistication of disinformation threats. Concrete measures are needed.

The Japanese government should engage multiple stakeholder communities, including digital platforms such as X and fact-checking organisations, to collectively set minimum standards for content moderation by digital platforms. While the specifics of moderation can be left to the discretion of each platform, minimum standards could include, for example, labelling trusted media and government agencies and assigning them a higher algorithmic priority for display. If minimum standards were not met, the platform would be subject to guidance or advice from a government authority. But the authority would not have the power to remove or reorder individual content.

Setting standards in this way would respect existing limits on freedom of expression while reducing users’ exposure to disinformation that could cause serious harm. It would, however, require verifiable criteria for determining trusted accounts, and the establishment of a contact point for complaints within digital platforms or trusted private fact-checkers.

Regulating digital platforms will not be enough. It’s also important to call out malicious actors and strengthen public awareness and media literacy. The proliferation of disinformation with political intent by foreign actors is a global problem, so Japan should cooperate with partners that share its democratic values, such as Australia. Tokyo should be prepared to be more proactive in joining public attributions of malicious state-sponsored campaigns, as it did with the advisory on the cyber threat actor APT40, initially prepared by Australia.

Japan’s resilience to disinformation is still developing. Given its prominent role in the regional and global order and its proven commitment to a rules-based international order, a higher level of urgency is required.