An existential risk can be defined as “a risk that threatens to destroy the long-term potential of humanity”. To put it bluntly, it is a risk that could conceivably lead to human extinction, or inflict irreversible damage on human civilization’s ability to recover. The “terminal impacts” of existential risks – that is, their threats to our very existence – need not manifest in the short term, which is why they are often overlooked. The research communities focused on existential risks (x-risks) remain divided over the exact boundaries and constituents of the set, although most agree that the following count as canonical examples: a global nuclear winter (resulting from the deployment of nuclear weapons or other sources of fallout), a (manufactured) pandemic that infects the entire population of Earth, or an artificial intelligence (AI) that destroys humanity. Although these risks are often dismissed as overly speculative or exaggerated, they deserve our attention not necessarily because of the probabilities with which they occur, but because of the sheer scale and intensity of the devastation they would inflict upon humanity.
Many could well mean the end of humanity. Despite this, most discussion of x-risk has remained within the realms of moral and applied philosophy – most notably, the effective altruism and longtermist movements have been instrumental in popularizing the concept. Yet the fact remains that the subject is not sufficiently taken up in the international relations (IR) community, with the notable exception of the joint research of Jordan Schneider and Pradyumna Prasad, who pointed out the risks arising from a potential war between the United States and China, two major nuclear powers with extremely strained relations. Indeed, the longtermism/x-risk and IR communities have remained, by and large, fundamentally disjointed. The following seeks to outline some conceptually grounded arguments for why the field of IR and IR researchers must take the possibility of existential risks seriously, in order to fully address the issues and challenges we face today.
Picture this: a series of explosions takes the world by storm in quick succession, incinerating large swaths of Earth’s population and killing many more through the smoke emissions and environmental damage that immediately ensue. Radioactive traces from the detonations permeate the thickest walls of aboveground buildings, afflicting the billions left behind. The gargantuan volume of particles emitted by the detonations fills the sky with fog and smoke so dense that it would take years, if not decades, to clear. Darkness reigns.
The image above is of a global nuclear winter. As Coupe et al. note in a 2019 paper, a nuclear winter following a hot war between the United States and Russia could lead to a “10°C reduction in global average surface temperatures and extreme changes in precipitation.” On the face of it, there are enough safety mechanisms to make this worst-case scenario unlikely: military commanders aware of the risks of escalation; the existence of bunkers in which individuals can take refuge; mutually assured destruction imposing sufficient deterrence on key decision-makers.
Yet this possibility cannot be so easily ruled out. It has been more than two hundred days since the Russian army invaded Ukraine. Recent battlefield setbacks and growing Russian public discontent have sharply increased the likelihood that Putin will consider deploying a tactical nuclear weapon on the battlefield. Without going into specific quantitative estimates (although examples of these can be found here), the underlying rationale is relatively straightforward: seeking an increasingly unlikely victory over the territories in Ukraine officially claimed by Russia, preserving his national credibility and political position, and forcing the hands of NATO and Ukraine to come to the negotiating table, Putin may feel he has run out of viable options.
The nuclear option is almost certainly undesirable even for Putin given the potential repercussions, but it could be seen as preferable to perceived capitulation and the possibility of being overthrown by a genuine opposition – whose chances of success are currently rather limited. More broadly, full-fledged military conflict between any two nuclear powers – Russia, China, or the United States; Pakistan and India – could escalate through the security dilemma, inadvertently precipitating a nuclear confrontation between them.
Nuclear winters are by no means the only x-risks. Take, for example, the vaunted AI “arms race” – as AI advances towards higher speeds and greater precision, and cultivates deeper capacities for adjustment and course correction through self-driven calibration and imitation, it is evident that it, too, could equip countries with significantly greater capabilities to do harm. While research and development itself may generate relatively innocuous results – such as programs that track and monitor individuals’ behaviors and speech patterns, or AIs guiding lethal autonomous weapons in choosing their targets – it is the drive for competition and victory that poses a fundamental threat to global security.
We have seen leading powers such as China and the United States seek to outdo each other with punitive and preemptive measures over chips and semiconductors. As a result, the level of coordination and communication on sensitive issues – such as AI and drone deployment – has dropped significantly, reflecting the broader mistrust and skepticism that underpin the bilateral relationship. In the absence of clearly agreed frameworks for regulation and the alignment of expectations, it would not be surprising if the AI race between the world’s two largest economies ended in a vicious race to the bottom: a trough in human well-being, as AI is wielded by antagonistic powers to achieve geopolitical goals and, in doing so, causes substantial disruption and irrevocable destruction to our digital and data infrastructure.
Aside from the potential dangers of a clash between world powers, there is a further, positive argument for genuine international cooperation. Existential risks require the coordination of resources, strategies, and broader governance frameworks to be properly addressed. The risks arising from a powerful and unaligned artificial intelligence – i.e. a self-aware, self-improving, and truly autonomous AI whose preferences diverge from human interests – could well culminate in human extinction. Such risks require careful management and the installation of safeguards and responsive programs that could mitigate possible misalignment and/or the premature arrival of powerful AI. In theory, countries at the forefront of technology and innovation should allocate substantial resources to the design of a shared and transparent AI regulatory framework, as well as to forward-looking research planning for the various scenarios and possible trajectories AI might take. In practice, government cynicism and the strategic importance attached to accelerating national AI development have made such long-term initiatives incredibly difficult. Even the EU’s AI legislation – arguably the most advanced among its counterparts – remains vulnerable to internal divergence and misalignment. Greater interlocution across continents and geopolitical alliances is therefore essential to developing regulations, laws, and decision-making principles that can guide action in the face of AI risks.
An alternative concern looms over the stability of food supplies in the face of extreme weather and other geopolitical disruptions. Consider the current global food crisis, the result of a combination of the ongoing war in Ukraine and regional droughts and floods stemming from a prolonged La Niña (attributed by some to climate change). Resolving such crises requires both targeted and comprehensive agreements on food production and distribution, and a fundamental structural push for a faster green transition. Without global coordination, much of this would be extremely difficult – food supply chains cannot be optimized if trade barriers and border skirmishes continually disrupt cross-border flows. Attempts to reduce carbon emissions and move away from non-renewable energy would require countries to commit to, and meet, strict but essential targets for reducing their carbon footprints. Without a genuine division of labor and collaboration – on the production of solar panels and renewable energy, for example – we would be set dangerously on a path of no return.
Some argue that climate change is not, in fact, an existential risk; that its effects are unevenly distributed around the world and could be overcome through adaptive technologies. Yet this underestimates the extent to which disruptions in food production and supply can provoke or exacerbate pre-existing geopolitical and cultural tensions, thus precipitating conflicts that could eventually escalate into all-out or nuclear war. The probability may be objectively low, but the damage is significant enough to warrant serious attention.
The academic community would thus gain from taking seriously the magnitude of the impact that international conflict and collaboration have on humanity’s existential challenges. Much remains to be explored at the intersection of longtermism and IR – the quantification and modeling of causal processes, the design and evaluation of prospective solutions. Fundamentally, it is imperative that IR theory be able to account not only for probable and proximate threats, but also for structural threats that could undermine the continuity and survival of the human species.
Further Reading on E-International Relations