Google Joins With Microsoft, OpenAI & Anthropic To Regulate AI Development
Leading Artificial Intelligence (AI) firms have assembled the Frontier Model Forum to regulate the development of cutting-edge AI technology.
In a significant move towards ensuring the safe and responsible development of frontier AI models, four of the most influential companies in AI have announced the formation of an industry body, the Frontier Model Forum, to oversee the safe development of the most advanced models.
The Frontier Model Forum has been formed by ChatGPT developer OpenAI, Anthropic, Microsoft and Google, the owner of UK-based DeepMind.
The group said it would focus on the “safe and responsible” development of frontier AI models, referring to AI technology even more advanced than the examples currently available.
The main goal of the Frontier Model Forum is to advance AI safety research in support of responsible frontier model development and risk reduction. Frontier models are presently the most sophisticated AI systems, surpassing current capabilities across a wide variety of tasks.
First, the forum will identify best practices and promote knowledge sharing among industry, governments, civil society and academia, focusing on safety standards and procedures to mitigate potential risks.
Second, it will advance AI safety research by identifying the most important open research questions on AI safety.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, the president of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
The forum’s members said their main objectives were to promote research in AI safety, such as developing standards for evaluating models; encouraging responsible deployment of advanced AI models; discussing trust and safety risks in AI with politicians and academics; and helping develop positive uses for AI such as combating the climate crisis and detecting cancer.
They added that membership of the group was open to organisations that develop frontier models, defined as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks”.
The announcement comes as moves to regulate the technology gather pace. Recently, tech companies, including the founding members of the Frontier Model Forum, agreed to new AI safeguards after a White House meeting with Joe Biden.
Commitments from the meeting included watermarking AI content to make it easier to spot misleading material, such as deepfakes, and allowing independent experts to test AI models.
The White House announcement was met with scepticism by some campaigners who said the tech industry had a history of failing to adhere to pledges on self-regulation.
Recently, Meta said it was releasing an AI model to the public, a move one expert described as being “a bit like giving people a template to build a nuclear bomb”.
Sources: The Guardian | Search Engine Journal | Euronews | Google | Tech Times | Almayadeen