Global Guidelines For Artificial Intelligence Agreed
The British National Cyber Security Centre (NCSC) has announced a new set of global guidelines on the security considerations of developing Artificial Intelligence (AI) systems. These guidelines are the first to be agreed globally, with the aim of ensuring AI systems are created, developed, and used securely.
They are described by the NCSC as “Guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others”.
The NCSC guidelines have been endorsed by agencies from 18 countries, including all members of the G7, which have agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.
These recommendations apply to anyone developing systems that use AI, whether they are building a new AI tool, or improving an existing system.
They will help developers of any systems that use AI make informed cyber security decisions at every stage of the development process, whether those systems have been created from scratch or built on top of tools and services provided by others.
The NCSC also wants developers to assess whether the service they are looking to create is “most appropriately addressed using AI”, and if so, whether they should choose to train a new model, use an existing model (and whether this will need fine-tuning), or work with an external model provider.
The guidelines cover four key areas of an AI system’s development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance.
The guidance on secure development covers how developers can secure their supply chains, ensuring any software not produced in-house adheres to their organisation’s security standards. Secure development also includes generating appropriate documentation of data, models, and prompts, as well as managing technical debt throughout the development process.
The NCSC’s advice on secure deployment outlines the measures developers should take to protect their infrastructure and models against compromise, threat, or loss. The guidance also calls for applying robust infrastructure security principles across the system’s life cycle, such as access controls for APIs, models, data, and model training pipelines.
The guidelines are intended as a global, multi-stakeholder effort to address these risks, following the Bletchley Declaration on sustained international cooperation on managing AI risks, agreed at the AI Safety Summit hosted by the UK Government.