Guidelines For The ‘Catastrophic Risks’ Of AI
Governments are waking up to the transformative challenge that Artificial Intelligence (AI) presents, and there has been rapid movement internationally, including in the USA, the European Union and China, to contain and manage both the threats and the opportunities of this fast-developing technology.
Now, OpenAI, the creator of ChatGPT, has taken the initiative and published its own guidelines for assessing the possible “catastrophic risks” posed by AI models. The announcement follows the firm’s CEO, Sam Altman, being fired by the board and then re-hired after staff and investors rebelled.
The company’s statement, titled “Preparedness Framework,” reads: “We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be.”
According to TechXplore, the framework is meant to help address this gap. A monitoring and evaluation team will focus on “frontier models” - the extremely high-capability models currently in development. The team will assess each model individually and assign it a risk level, from “low” to “critical,” in four main categories.
Only models with a risk score of “medium” or below will be approved for deployment. The four risk categories are as follows:
- The first category concerns cyber security and the model’s ability to carry out large-scale cyberattacks.
- The second category will measure the model’s inclination to help create things that could harm humans, such as a chemical mixture, an organism like a virus, or a nuclear weapon.
- The third category concerns the model’s power of persuasion and the extent to which it is able to influence human behaviour.
- The fourth category concerns the model’s potential autonomy, and more specifically whether it can escape the control of its creators.
Then, once the risks have been identified, the team will submit its findings to OpenAI’s Safety Advisory Group, a new body tasked with making recommendations either to the CEO or to a person appointed by him, who will then decide on the changes the model in question requires to reduce the risks.
Sources: OpenAI | The Hill | I-HLS | TechXplore | NDTV | Hindustan Times | Xinhua
Image: Maria Shalabaieva