The Nuclear Governance Model Won’t Work For AI

AI is increasingly discussed as an existential threat on the same scale as nuclear weapons and climate change. This parallel is distorting the conversation about regulation.

By Yasmin Afina and Dr Patricia Lewis



As AI technologies are developed and deployed at scale, concern is growing around the risks they pose. In May, some industry leaders and scientists went as far as to claim AI is as great a threat to humanity as nuclear war.  

The analogy between the two fields is gaining traction, and influential figures, including OpenAI’s CEO Sam Altman and UN Secretary-General António Guterres, have proposed establishing an international agency akin to the International Atomic Energy Agency (IAEA).

But nuclear weapons and AI are very different technologies, and the nuclear governance model would not work well for AI.

What The IAEA Is

The IAEA was established in 1957 to promote the peaceful use of nuclear technology. US President Eisenhower proposed the agency in his 1953 ‘Atoms for Peace’ speech, expressing the hope that ‘… the splitting of the atom may lead to the unifying of the entire divided world.’

The agency is charged by its statute with promoting nuclear energy for peace, health and prosperity, and with ensuring – as far as possible – that nuclear technology is not used to further military purposes. The IAEA conducts safeguards inspections in civil nuclear facilities such as nuclear power plants and research reactors to ensure that nuclear materials in non-nuclear weapons states are not transferred to military programmes.

The agency has been extraordinarily successful in its safeguarding, with the exception of Iraq in the late 1980s. It has discovered several instances of non-compliance and, except in the case of North Korea, has contributed significantly to the reversal of behaviour and prevention of proliferation, including thus far in Iran. 

Existential Fear Of Nuclear War

From early in their development, nuclear weapons posed a known, quantifiable existential risk. The nuclear bombings of Hiroshima and Nagasaki in August 1945 attested to the destructive, indiscriminate, and uncontainable nature of these weapons. One of the key motivations for founding the IAEA, and for arms control treaties such as the Nuclear Non-Proliferation Treaty (NPT), was the deep fear of nuclear war. These fears were well founded: at the height of the Cold War, the US and the Soviet Union were said to have enough nuclear weaponry to ‘destroy humanity as we know it’.

Recent calculations reveal that the number of nuclear weapons required to destroy conditions for human habitation is fewer than 100.  

The risks posed by nuclear weapons’ very existence, and the threat of their use, are therefore existential; and the profound humanitarian consequences that would result from their use were a driving force behind the 2017 adoption of the Treaty on the Prohibition of Nuclear Weapons.

Fear Of Catastrophe Is Distracting Efforts Away From Known Risks

Many of the existential concerns about AI remain hypothetical, and they are derailing public attention from the already-pressing ethical and legal risks stemming from AI and the harms that follow from them. This is not to say that AI risks do not exist: they do. A growing body of evidence documents the harm these technologies can do, especially to those most at risk, such as ethnic minorities, populations in developing countries, and other vulnerable groups.

Over-dependency on AI, especially for critical national infrastructure (CNI), could be a source of significant vulnerability – but it would not be catastrophic for the species. Concerns over wider, existential AI risks do need to be considered, carefully and step by step, as the evidence is gathered and analysed. But moving too fast to impose controls could also do harm.

AI Is Difficult, If Not Impossible, To Contain

The technical characteristics of nuclear weapons are inherently different from those of AI. Nuclear weapons development faces physical bottlenecks: manufacture requires specific materials in specific forms, such as plutonium, highly enriched (above 90 per cent) uranium, and tritium.

These materials produce unique, measurable signatures. The tiniest of traces can be discovered in routine inspections, and clandestine activities exposed.

Nuclear weapons cannot be made without these special materials, so controlling access to them physically prevents proscribed countries from acquiring weapons. This is very different from AI, which is essentially software-based and general-purpose.

Although developing and training AI can require heavy investment and supercomputers with tremendous processing power, its applications are widespread and increasingly designed for mass use across all segments of society. AI is, in that sense, the very opposite of nuclear weapons.

The intangible nature of AI would make it difficult, if not impossible, to contain – especially with the rise of open-source AI.

Safeguarding measures and verification methods akin to those employed by the IAEA would therefore not work for AI due to these inherent technical differences.

What Could Work?

Policy responses are needed to address the risks of developing and deploying AI technologies. But governance models from outside the nuclear field offer better inspiration.

A solution similar to the US Food and Drug Administration (FDA) might provide a sensible approach to overseeing the release and commercialization of AI products. This would consist of a staged release model, alongside robust auditing requirements and comprehensive risk assessments evaluating both the direct and indirect implications of the product in question.

The EU’s Reference Laboratory for Genetically Modified Food and Feed (EURL GMFF) also provides a useful way to think about some AI controls and regulation. National and international attempts to control and regulate human gene editing and human embryo research are likewise worth studying, as attempts to regulate an amorphous technology across very different cultural contexts.

AI could benefit from an international agency, but one that draws inspiration from the Intergovernmental Panel on Climate Change (IPCC) and takes the UN Secretary-General’s recommendation for a high-level advisory body on AI one step further. Such an agency would help provide the international community with diversified and complete data in the field, ensuring that subsequent deliberations are holistic, evidence-based, and inclusive.

This international agency could be newly established, or it could expand the work of existing specialized agencies such as the International Telecommunication Union (ITU), which combines standards and regulations, revised regularly through the World Radiocommunication Conference (WRC). That structure could work well for AI, given its dynamic nature and fast-paced technological progress.
 
The agency’s activities would promote participation by all stakeholders; assist negotiations and continuing efforts to curb AI risks; and carry out more in-depth, long-term research.

Governance models for AI could also be discussed through forums including the Internet Governance Forum (IGF), meetings at the UN including the Global Digital Compact and the Summit of the Future, and the work of the Secretary-General’s Envoy on Technology. In 2025, the World Summit on the Information Society (WSIS) will provide an important moment to agree ways forward for AI governance.

These agencies and forums represent better models for AI regulation. They would also more effectively leverage AI’s full potential to benefit all.

Yasmin Afina is Research Fellow, Digital Society Initiative at Chatham House

Dr Patricia Lewis is Research Director; Director, International Security Programme at Chatham House

