Why the US and China must confront the growing risks of AI

Abul Quashem Joarder
Updated: Wednesday, December 31, 2025

Artificial intelligence has moved from the realm of science fiction to the core of global power politics. What once seemed like a purely technological or commercial competition has rapidly become a defining national security issue of the 21st century. In November 2024, when US President Joe Biden and Chinese President Xi Jinping issued their first substantive joint statement acknowledging the risks posed by AI, specifically affirming the need to maintain human control over nuclear weapons, it marked a small but symbolically significant step. The statement itself was modest, even obvious, yet the diplomatic effort behind it revealed how difficult and necessary such engagement has become.

At first glance, agreeing that AI should not autonomously decide the use of nuclear weapons may appear like diplomatic low-hanging fruit. Few people anywhere in the world would argue otherwise. But in the context of US–China relations, even the simplest agreements are hard-won. Beijing remains deeply skeptical of Washington’s risk-reduction proposals, often suspecting them of being veiled attempts to constrain China’s rise. Russia, for its part, has resisted similar language in multilateral forums, and any bilateral progress between Washington and Beijing risks creating strategic daylight between Moscow and Beijing. That it took more than a year of negotiations to produce such a basic statement underscores how fraught this terrain is.

Yet the importance of that agreement should not be underestimated. It demonstrated that the world's two leading AI powers, locked in fierce technological, economic, and military competition, can still find common ground on managing existential risks. Even more importantly, it showed that dialogue on AI risk is possible at all. Earlier in 2024, diplomats and experts from the US and China met in Geneva for the first extended bilateral session dedicated solely to AI risks. While the meeting produced no concrete breakthroughs, its very occurrence was historic. Both sides were able to identify critical areas of concern that require sustained work, laying a foundation for future engagement.

That foundation must now be built upon urgently. The pace of AI development and deployment, in both civilian and military contexts, is accelerating at breathtaking speed. Systems are becoming more capable, more autonomous, and more widely accessible. As this momentum grows, so too do the risks. Without deliberate and sustained diplomacy, the world risks stumbling into crises that neither Washington nor Beijing intends, and that neither can easily control.

One of the most immediate dangers lies in the diffusion of advanced AI capabilities to nonstate actors. As AI tools become cheaper and more powerful, terrorist organizations, criminal networks, and other malicious actors could exploit them in ways that threaten global security. AI-enabled cyberattacks could cripple critical infrastructure, from power grids to financial systems. Advances in biotechnology, combined with AI-driven modeling, could enable the creation of novel biological weapons that evade detection or treatment. Disinformation campaigns, turbocharged by generative AI, could destabilize societies, undermine trust in democratic institutions, and inflame conflicts. Autonomous or semi-autonomous lethal drones, guided by AI, could allow small groups to project deadly force across borders with little warning.

These threats are not hypothetical. Elements of them are already visible today. What makes them especially dangerous is that they do not respect national boundaries. An AI-enabled attack launched by a nonstate actor could harm both the US and China simultaneously, along with countless other countries. This shared vulnerability creates a strong incentive for cooperation, even amid intense rivalry.

Beyond nonstate actors, the integration of AI into military systems poses profound risks at the state level. Both the US and China are increasingly incorporating AI into intelligence analysis, logistics, targeting, and command-and-control processes. While such integration promises speed and efficiency, it also shortens decision loops and alters traditional deterrence frameworks. In a crisis, AI-driven systems could misinterpret data, amplify false signals, or recommend escalatory actions faster than human leaders can intervene. The risk of inadvertent conflict, or of catastrophic escalation, will grow as AI becomes more deeply embedded in military decision-making.

The economic domain is not immune either. As AI systems take on a larger role in global finance, including algorithmic trading and risk assessment, the potential for systemic instability increases. Poorly designed or inadequately regulated AI-driven trading systems could interact in unpredictable ways, triggering market crashes that cascade across borders. In a highly interconnected global economy, such a collapse would not be confined to one country or region.

Looking further ahead, there is a more abstract but potentially even more serious concern: the possibility of powerful, misaligned AI systems. These are systems that pursue goals different from, or even contrary to, what their creators intended. While opinions differ on how imminent this risk is, the stakes are enormous. A sufficiently advanced and poorly aligned AI could cause widespread harm, whether through economic disruption, control over critical infrastructure, or other unforeseen pathways. As the world's only true AI superpowers, the US and China bear a special responsibility to engage one another on these long-term risks.

Engagement, however, does not mean abandoning competition. On the contrary, competition between Washington and Beijing over AI leadership is intensifying. China underscored this reality in autumn 2024 when it imposed sweeping new export controls on rare earth materials essential for producing microchips and other AI-related components. Such moves highlight how AI has become entangled with supply chains, industrial policy, and geostrategic leverage.

From the US perspective, maintaining leadership in AI is seen as a national security imperative. During the Biden administration, officials worked to ensure that American advances in AI, across military, intelligence, and commercial domains, would shape global norms and standards. The adoption of US or Chinese AI models by other countries is increasingly viewed as a proxy for influence in the international system. This competitive dynamic is only likely to intensify.

Paradoxically, it is precisely because competition is so fierce that diplomacy on AI risk is essential. Racing ahead without engagement would be deeply irresponsible. History shows that unbridled technological competition between great powers, absent guardrails, can lead to disaster. At the same time, AI offers immense opportunities to address shared transnational challenges, from climate change and disaster response to public health and scientific research. Unlocking those benefits while managing the risks requires dialogue, not silence.

To date, much of the conversation on AI risk has taken place through so-called "Track II" diplomacy: informal discussions involving academics, business leaders, and civil society groups. These efforts are valuable and should continue, as they allow for creative thinking and trust-building outside official channels. But they are not enough. Ultimately, there is no substitute for direct government-to-government engagement. Only states can make binding commitments, establish official communication channels, and integrate risk-reduction measures into national security policy.

Time is not on our side. The speed of AI advancement far outpaces the rhythm of traditional diplomacy. Managing AI risks is uncharted territory, and progress will be slow, uneven, and fraught with setbacks. But delay only increases the danger. The US and China must begin sustained, senior-level engagement now, even if early steps are modest.

Many analysts have compared AI risk management to nuclear arms control, and the analogy is useful, up to a point. During the Cold War, rival superpowers recognized their shared interest in preventing nuclear catastrophe and eventually developed a complex web of treaties, verification mechanisms, and confidence-building measures. Those frameworks did not emerge overnight; they took years to design and decades to maintain.

AI, however, presents unique challenges that make simple analogies insufficient. Verification is far more difficult. Counting missiles and warheads is one thing; monitoring algorithms, data, and training processes is another entirely. The dual-use nature of AI is also more pervasive. The same model that drives economic growth or scientific discovery can also enable lethal military applications. Drawing clear lines between peaceful and dangerous uses is far harder than in the nuclear domain.

Moreover, AI risk is not limited to state-to-state conflict. Nonstate actors and misaligned systems introduce new dimensions that traditional arms control was never designed to address. In addition, especially in the United States, AI development is driven largely by the private sector, involving numerous competing firms rather than a single state-run program. Any meaningful risk-reduction effort must therefore include industry alongside governments.

Finally, there is profound uncertainty about AI’s trajectory. Some experts see it as a transformative but ultimately manageable technology that will unfold over decades. Others warn of rapid advances leading to superintelligent systems within years. This uncertainty complicates policymaking and demands flexibility, humility, and foresight.

The lesson from history is clear: managing the risks of powerful technologies is a responsibility that great powers cannot evade. The frameworks that kept nuclear competition from destroying the world were imperfect, but they mattered. With AI, the challenge is even more complex, and the window for action may be narrower. That is why the US and China must get serious about AI risk now, before competition hardens into catastrophe.


Abul Quashem Joarder, a contributor to Blitz, is a geopolitical and military expert.
