Last week, X’s head of product, Nikita Bier, made an allegation of enormous consequence with remarkable casualness: China controls millions of spam accounts used to censor the platform during periods of political unrest.
Responding publicly to user complaints that Chinese-language search on X had become unusable, Bier alleged that the Chinese government flooded the platform with pornographic advertising content during periods of political unrest, deliberately drowning out real-time information. He added that X believed Beijing controlled a pool of 5 million to 10 million accounts, created before the company tightened sign-up controls. Such a pool would give Beijing a ready-made capability to manipulate search results at scale.
It was a striking claim, not only because of its scale but because of how it arrived—not via a transparency report, technical briefing, or regulatory filing, but in a reply post. For many Chinese-language users, it marked the moment when a daily experience of interference, long suspected to be state-directed, was acknowledged by the platform itself.
We should not have to rely on the occasional tweet from X executives, however candid, to understand how one of the world’s most important information ecosystems is being manipulated. Casual executive pronouncements are replacing formal transparency mechanisms—a dangerous regression that demands urgent attention. If state-level interference on this scale is occurring, it requires structured disclosure, independent verification and regulatory scrutiny, not off-hand acknowledgments delivered via social media.
Despite its many flaws, X still plays an outsized role in several global information communities. For those tracking developments in AI, for instance, it remains close to indispensable. Another group that relies heavily on it is the Chinese dissident and diaspora community, which has carved out a fragile refuge beyond the Great Firewall, even as party-state propagandists and nationalist networks pursue them across it.
For years, X has been one of the few major platforms where Chinese netizens and activists could discuss sensitive political and social issues beyond the direct grasp of Beijing’s censorship system. Over time, Chinese-language political discourse migrated there in force, forming a large, transnational ecosystem linking people inside China with journalists, researchers and policymakers outside it.
That ecosystem proved critical during China’s Covid-zero period. Wang Zhi’an, a former CCTV journalist who has spent years reporting from Japan, observed that identical protest slogans emerged almost simultaneously in cities across China. That, he argued, could only have happened because demonstrators were drawing on Chinese-language Twitter as a shared information space beyond Beijing’s censorship system, allowing the slogans to spread rapidly nationwide despite the absence of domestic media coverage.
For more than a decade, Chinese-language Twitter has been a contested space rather than a neutral one. A substantial body of research—including several detailed investigations by ASPI—has documented state-linked information operations, coordinated harassment of dissidents, and large-scale inauthentic activity originating from China.
During the 2022 protests against Covid-zero restrictions, Chinese-language users on Twitter repeatedly complained that searches for city names and protest-related terms were being overwhelmed by pornographic and gambling spam, burying footage and first-hand accounts. At the time, it was unclear whether this reflected a coordinated censorship tactic or opportunistic spam exploiting trending keywords.
In 2023, journalist Wenhao Ma documented a related pattern of sextortion and pornographic spam targeting politically sensitive Chinese-language accounts. While activists he interviewed didn’t believe the activity was directly state-run, its effects were functionally repressive: silencing prominent voices, driving users off the platform, and degrading information flows.
Bier’s remarks suggest that X now interprets this activity as deliberate state-level information denial: censorship by saturation rather than removal. What stands out is the platform’s response. When users and researchers raised detailed questions at the time, Twitter offered no public explanation. Years later, X has instead opted for an executive tweet asserting intent and scale without evidence, context, or structured disclosure.
Bier’s intervention matters because it marks a rare public attribution by a senior platform executive. It also raises significant questions. He was explicit about intent, framing the spam as a censorship tactic, and about scale, citing the number of accounts. He was not explicit about evidence. X has not released data, explained how the figure was calculated, identified the accounts involved or outlined what countermeasures are being taken beyond the assertion that the company is ‘aware and working on it’.
That lack of detail is notable because it departs from past practice. Before its acquisition by Elon Musk and renaming as X, Twitter routinely published detailed transparency reports and worked with external researchers, including ASPI, to document state-linked information operations. Those disclosures allowed independent scrutiny and informed policymaking. Today, users and researchers are left with sporadic social media posts from executives.
This opacity isn’t just a transparency failure; it’s potentially a legal one. Under the European Union’s Digital Services Act (DSA), X, as a designated very large online platform, must assess and mitigate systemic risks, including those arising from intentional manipulation through inauthentic or automated activity.
The European Commission is already investigating X’s recommender systems and AI deployments under this framework. If X believes a foreign government can mobilise millions of accounts to degrade search functionality, that claim should be examined as part of the same systemic risk profile the DSA is designed to surface. X has published DSA risk assessments, but there is still no public, evidenced disclosure substantiating Bier’s claim about scale, methodology, or the mechanics of the alleged search degradation.
The episode also highlights what is missing in Australia. The government’s abandoned misinformation and disinformation reforms—shelved in late 2024 after Senate opposition—would have strengthened the Australian Communications and Media Authority’s ability to require more structured transparency and reporting from major platforms, including in relation to foreign interference risks. Instead, Australia relies on a mix of voluntary transparency reporting and regulator commentary that has repeatedly fallen short of timely, granular disclosure, leaving users and researchers to piece together key episodes from screenshots and occasional executive remarks.
This is not sustainable, particularly when these platforms play a critical role in monitoring events in closed or authoritarian environments.
Whether Bier’s claim ultimately proves accurate or not, the underlying issue is clear. Platforms such as X have long occupied a central position in geopolitical information flows. When they allege state-level interference at massive scale, transparency is not optional; it is the minimum requirement for public trust.
If platforms are unwilling or unable to meet that standard voluntarily, then transparency cannot remain discretionary. It must be compelled through regulation, rather than left to occasional, context-free assertions delivered by executives on social media.
