Global AI Safety: Scientists Can Move The Needle

Amid global uncertainty about the state of AI safety, divergence between national frameworks, and strains on international cooperation, scientist-to-scientist dialogue can depoliticize global conversations, improve their inclusivity, and advance a shared understanding of the science itself.

AI Safety Is Political

AI demands global governance. However, there is no global consensus on AI safety: how to define it, how to benchmark it, or how to achieve it. As states develop and implement their AI governance frameworks, it is increasingly evident that national definitions of ‘safety’ are diverse, reflecting distinct political values and priorities.

There have been bold efforts to improve convergence and interoperability between varied approaches to AI, such as the recently announced international network of AI safety institutes. But these efforts still have a long way to go, both among like-minded democracies and between states with different political systems.

For example, Canada, the US, the UK and the EU share risk-based, human-centric governance models for AI, rooted in rights and democratic values. They benefit from existing mechanisms for improved coherence. For instance, Canada’s AI governance model will have interoperability built in, having borrowed language from the EU’s Digital Services Act.

Despite important similarities in their risk-based approaches, these jurisdictions still differ in how they define levels of risk and in the types of obligations they place on AI model developers.
  
Different still is China’s approach to AI, which more explicitly defines risks in terms of sovereignty, social stability and national security. The recent Shanghai Declaration sets out its vision for global AI cooperation. However, different approaches to AI safety do not preclude cooperation: for instance, China has participated in both AI safety summits and in bilateral meetings with the US in April.

States will never fully align on a single definition of AI safety, not only because of political differences but also because safety, as a practice and an objective, is no monolith.

Benchmarks and standards for safe, responsible AI shift as the science develops. Technical standards for managing risk must be updated as the technology advances, as should socio-technical safety evaluations.

In addition, binding and non-binding national frameworks alone are insufficient to tackle risks with cross-border impacts, like misuse. Harmonization between governance models and improved interoperability between standards and benchmarks are needed. Achieving this is a challenge, but not an insurmountable one.

State-led efforts for global governance cannot escape politicization, even when working towards global, shared objectives. In contrast, scientist-led exchanges in AI and other fields have demonstrated their power to depoliticize global safety discussions, improve global inclusivity, and move the needle on collaboration.

Scientific Consensus Has Power

Scientist-led venues have a long track record of working across borders to make progress on thorny, collective global problems. This is largely because these venues are evidence-driven, and because scientists are more comfortable expressing uncertainty and handling scrutiny than political leaders, who face greater pressure to project certainty.

The Intergovernmental Panel on Climate Change (IPCC) is a strong example, offering states regular, evidence-based scientific information that is used to develop policy. Recent scientist-led work by the National Academies of Sciences, Engineering, and Medicine has assessed the global risks of nuclear war and terrorism, and the methods used to analyse those risks.

One of AI’s most promising scientific exchanges is ongoing: led by the prominent AI scientist Professor Yoshua Bengio, scientists from around the world worked together on the inaugural (interim) International Scientific Report on the Safety of Advanced AI, released alongside May’s AI Summit in Seoul.

This was an unprecedented, historic step towards developing a realistic, evidence-based and internationally shared scientific understanding of AI safety. The report welcomed contributions from scientists from 30 countries, ranging from Japan and the UK to China and Saudi Arabia. It was apparently developed by consensus and does not shy away from highlighting uncertainties about the state of AI capabilities, risks and risk mitigations. The report underscores how the complexity of general-purpose AI systems makes it difficult to conduct thorough evaluations, and it does not push a single definition of AI safety.
 
Scientist-to-scientist collaboration has the potential not only to depoliticize global AI safety conversations, but also to improve their global inclusivity.

However, looking ahead, the potential of these exchanges will be contingent on how their findings are channelled into concrete policymaking.

Looking Ahead

International institutional arrangements for global collaboration on AI continue to take shape. From high-level gatherings to science-to-policy mechanisms, there are several opportunities for global AI policymakers to benefit from inclusive, scientist-led efforts. However, policymakers must remain clearheaded: as proposals for digital technical standards have shown, scientific venues are not inherently free from political influence. What’s more, some scientists carry their own political biases into their work, and policymakers can co-opt, re-frame or even blame ‘independent’ expertise to serve varied political agendas.

AI demands globally inclusive governance. Summit organizers (including those of the forthcoming Paris AI Summit) must commit to meaningful inclusivity, recognizing that global problems demand collective responses based on diverse inputs.
 
High-level gatherings are still dominated by a handful of states, companies, and technology thought leaders. Opening the doors to diverse scientific inputs will improve not only inclusivity but also buy-in. The International Scientific Report’s launch at May’s summit in Seoul is a strong example of building these channels.

Scientist-led and scientist-inclusive exchanges can enable dialogue even amid geopolitical rivalry, as evidenced by a Beijing-hosted high-level dialogue on AI safety in March, which brought together Western and Chinese AI scientists to discuss ‘red lines’ and called for global cooperation.

However, high-level gatherings are not the full picture. There are already several promising mechanisms for science-policy exchanges, like the Global Partnership on AI and the OECD AI Policy Observatory. Policymakers should draw from this repository of expertise; for example, on where functional equivalence is possible between risk management approaches.

Looking ahead, the UN’s Global Digital Compact also commits to launching an International Scientific Panel on AI, tasked with conducting multi-disciplinary, evidence-based impact and risk assessments.

As the International Scientific Report continues its work and defines its future institutional ‘home’, bolder steps are needed to include a more diverse range of scientific expertise (from under-represented disciplines, such as climate science) and of states (from the Global Majority, or those without a defined AI framework).

Similar actions should be mirrored by the AI safety institute network as it advances the science of AI safety and improves interoperability and information-sharing. As it develops, the network must also grow, advancing a reliable, shared understanding of the state of risk by drawing on diverse inputs and by protecting the independence and resourcing of scientist-led exchanges.

Isabella Wilkinson is Research Fellow, Digital Society Initiative at Chatham House

Image: Philip Oroni
