Beijing raised eyebrows when it decided not to sign an international declaration this month to keep humans, not artificial intelligence, in control of decision-making on nuclear weapons.
It is unclear why China opted out of the non-binding joint statement – endorsed by over 60 countries, including the US and Ukraine – at the end of the second Responsible AI in the Military Domain (REAIM) conference, hosted by South Korea.
Observers say it underlines Beijing’s dilemma as it tries to balance its wariness of making nuclear-related commitments amid its rivalry with the US over military AI against its desire for a bigger say in global governance of the rapidly evolving technology.
They also say it is an example of how the worsening feud between the two powers holds back global efforts to regulate AI, especially its expanding military use.
“At this stage, China’s approach is to engage in international discussions while being extremely cautious about making specific commitments that might tie its hands in the future,” said Tong Zhao, a nuclear expert and senior fellow at the Carnegie Endowment for International Peace.
“[China] is also interested in delaying its participation in international regulations to protest US export control policies against China’s access to advanced chips, which are crucial for its competition with the US over AI technologies.”
The two-day summit in Seoul, which involved nearly 100 countries, ended on September 10 with a “blueprint for action” declaration that said it was essential to “maintain human control and involvement for all actions … concerning nuclear weapons employment”.
“We stress the need to prevent AI technologies from being used to contribute to the proliferation of weapons of mass destruction, and emphasise that AI technologies support and do not hinder disarmament, arms control and non-proliferation efforts,” the declaration said, according to Yonhap.
“AI applications should be ethical and human-centric” and AI capabilities in the military domain “must be applied in accordance with applicable national and international law”.
This year’s declaration was seen as more “action-oriented” than the modest “call to action” document adopted at the first REAIM meeting in The Hague last year, which was endorsed by some 60 nations, including China.
Russia was not invited to the talks in Seoul or The Hague due to its invasion of Ukraine.
Although China’s nuclear policy experts have largely supported the principle of not allowing AI to make nuclear authorisation decisions, Beijing seems to have reservations, according to Zhao.
“China may worry that committing to such a principle could lead to greater pressure to provide transparency over its nuclear weapons and nuclear command, control and communication systems,” he said.
Zhao said other possible explanations included that Beijing may want to avoid lending support to a mostly Western-led event organised by a US ally, or to a proposition it knows Russia opposes.
“That said, it’s hard to tell if China’s decision not to support the blueprint for action is primarily due to concerns about limiting AI’s incorporation into nuclear systems,” he said. “The blueprint also includes many other general commitments about regulating the military application of AI that China is not ready to make, fearing it could constrain its future options.”
Seong-Hyon Lee, an associate with the Harvard University Asia Centre, said that while AI improves militaries’ operational capabilities, it also poses the risk of misuse, making it a “double-edged sword”.
He said broader tensions with Washington also influenced Beijing’s stance, making it hesitant to join US-led initiatives.
“The military use of AI has become a key component of the US-China strategic rivalry, with both nations heavily investing in AI development to gain a future military edge,” he said.
He described Beijing’s decision as “another example of how US-China competition impedes global AI regulation efforts, which require cooperation between the two powers”.
Pointing to growing concerns about an AI arms race and potential conflict escalation, he said the US worried about China’s misuse of AI, while China opposed US restrictions on AI technology.
“Reaching an agreement on military AI between the two countries is challenging due to strategic competition, differing values, and deep-rooted mistrust,” he said.
In October last year, China put forward a short policy statement titled Global AI Governance Initiative, which highlighted its focus on “the well-being of humanity” – rather than the West’s emphasis on protecting human rights and the rule of law.
Beijing’s initiative called for ensuring that “AI always remains under human control”. It opposed “drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI” and “creating barriers and disrupting the global AI supply chain through technological monopolies and unilateral coercive measures”.
The following month, China attended the first AI safety summit at Bletchley Park, England, signing a joint declaration along with the US and the European Union calling for global cooperation to mitigate risks from the technology.
And in July this year, the ruling Communist Party issued a call to establish “regulatory systems to ensure the safety of AI” in its policy document after the third plenum.
Lee said China might also be concerned that a public commitment to keeping AI out of nuclear-related decisions could constrain its future strategic options, given the differences between Beijing and Washington over military uses of AI.
“Furthermore, China may prefer to develop its own … AI governance frameworks rather than adopting Western-led proposals, reflecting its desire for greater autonomy in this area,” he said.
Manoj Harjani, a research fellow and coordinator of the military transformations programme at Singapore’s S. Rajaratnam School of International Studies, agreed. He said China was likely focused on its efforts to lead a resolution at the UN General Assembly on military AI governance.
In March, the UN General Assembly adopted its first resolution on AI, drafted by the US and co-sponsored by over 120 countries including China, by consensus without a vote, calling for “safe, secure and trustworthy” development of the technology.
However, China chose to abstain last November on a UN resolution seeking to address the use of AI in autonomous weapons systems, also known as “killer robots”, which was endorsed by 164 countries, including the US. Five countries, including Russia and India, voted against it while eight countries abstained, including Israel and Iran.
Harjani said the US was not a significant factor in Beijing’s move to opt out of the Seoul declaration, which he said did not “necessarily signal that China absolutely disagrees with its content and the REAIM process”.
“Differences with the US will certainly be one factor affecting China’s general approach towards global military AI governance, but the REAIM process is not led by the US, so I don’t think that it weighs very significantly in this instance,” he said.
Instead, he pointed to the bilateral dialogue between the two countries as an effective way to “improve mutual understanding while reducing risks from miscommunication”.
Beijing and Washington held their first dialogue on AI in Geneva in May, which US deputy secretary of state Kurt Campbell described as a sign that China “may be prepared to talk about other issues around nuclear issues”.
China is believed to be expanding its nuclear forces “faster than any other country”, according to the Stockholm International Peace Research Institute. However, it has been criticised for a lack of transparency and for dodging strategic dialogue with the US on nuclear matters.
Beijing described the Geneva talks as an “in-depth, professional and constructive” exchange of views. US officials raised concerns about China’s “misuse of AI” while Beijing “expressed a stern stance on the US restrictions and suppression in the field of AI”.
China’s embassy in Washington said the two sides discussed the application of artificial intelligence in managing and deploying nuclear weapons.
Harjani said a recently announced second round of talks on AI was “a positive sign”.
“I’m not sure it will lead to a legally binding agreement, but the fact that there is a platform for both countries to build consensus and discuss areas of disagreement is valuable,” he said.
“We shouldn’t assume that a legally binding agreement is needed to solve every aspect of military AI governance – a platform to clarify positions and intentions may be sufficient for some issues.”
With a comprehensive global agreement unlikely, Lee of the Harvard University Asia Centre said it would be realistic for the world to work towards limited agreements on specific AI applications, focusing on safety and ethics.
“Globally, achieving consensus on AI regulation, particularly in military applications, is equally difficult amid geopolitical tensions, especially between the US-led West and the China-Russia coalition,” he said.
Lee added that China should strive for more transparency, especially on its approach to AI and its military use.
“It would have been beneficial, actually, for China to explain its reservations and what it perceives as problematic or unfair,” he said. “As China attempts to position itself as a responsible superpower, its ability to articulate its views to the international community will become increasingly important in terms of gaining global support for its positions.”