Guidelines For AI Systems Development
On 26th November 2023, the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) jointly released the Guidelines for Secure AI System Development.
These guidelines mark the direction in which the industry and its regulators are moving, and they reflect best practice that entities within the AI supply chain should adhere to, for the benefit and protection of the end user.
For many entities in the space, however, this will mean an increased workload as they implement Secure Design, Development, and Deployment practices in their workflows.
Fresh Challenges For Developers
Ensuring best practice often involves changing the way we work, which is always a challenge in an already rapidly evolving space. That is why it is vital to establish the tone at the top, so that the message of “Security First” permeates the teams responsible for developing AI systems. Once that tone is set and there is sufficient awareness, it is time to build best practice into the development lifecycle. Begin by weighing the risks associated with the AI models in use against the minimum functionality the application actually requires. This is a key step, and one that likely represents a shift in mindset for many developers.
Enabling transparency, a characteristic encouraged by CISA and the NCSC alike, is also key. This means sharing information on known vulnerabilities, and the general risks associated with the use of AI, for the benefit of the entire industry and its users. That information might take the form of software bills of materials (SBOMs) or internal policies governing how vulnerabilities should be disclosed. Consider, too, the technical knowledge of many end users of AI applications: the language needs to be tailored to the audience so that they can make well-informed decisions about how they interact with, and input data into, AI applications.
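As a rough illustration of what that machine-readable transparency can look like, the Python sketch below assembles a minimal SBOM-style record for an AI model dependency, loosely following the CycloneDX structure. The model name, version, supplier, and disclosed risks are hypothetical placeholders; a production SBOM would follow a full specification such as CycloneDX or SPDX in its entirety.

import json

# Minimal sketch of an SBOM-style record for an AI application.
# All names, versions, and suppliers below are hypothetical placeholders;
# a real SBOM would conform fully to a spec such as CycloneDX or SPDX.
sbom = {
    "bomFormat": "CycloneDX",      # declared format of the document
    "specVersion": "1.5",          # assumed spec version for illustration
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-sentiment-model",        # hypothetical model
            "version": "2.1.0",
            "supplier": {"name": "Example AI Vendor Ltd"},
            # Known weaknesses or risks disclosed to downstream users
            "properties": [
                {"name": "known-risk", "value": "susceptible to prompt injection"},
                {"name": "training-data-cutoff", "value": "2023-04"},
            ],
        }
    ],
}

print(json.dumps(sbom, indent=2))

Publishing even a small record like this alongside each release gives downstream integrators and end users something concrete to assess, rather than leaving them to guess which models and known risks sit inside the product.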
Supply Chain Challenges
It is also worth noting that the AI supply chain can be very complex: delineating who is responsible for what becomes increasingly unclear when white-labelled AI services are used to build a product into which end users will input sensitive information. The guidelines suggest that every entity in the supply chain of an AI application should assess the risks arising from its specific activities and mitigate them. Where a risk cannot be effectively mitigated, the entity should inform users further down the supply chain of the residual risk they will be shouldering as a result, and advise them on how to use that component of the end product securely.
Relieving The Burden On Users
As with all best practice guidelines, there is an end goal in sight. As the NCSC and CISA state, these guidelines represent a further opportunity to shift the burden of insecure development practices away from the end user. Doing so ultimately increases trust in the industry. Given that a large portion of the population remains hesitant or sceptical about AI, increasing confidence and dispelling myths through secure development and radical transparency will benefit the industry as a whole.
The guidelines also acknowledge that the types of sensitive information AI supply chains are becoming custodians of will increase their value as targets for malicious attack.
Adopting guidelines like these is an opportunity to start bolstering defences against such attacks by ensuring AI products are Secure by Default. The cost of not doing so is significant: loss of revenue, reputational damage, and potential harm to the end users of such systems.
Where Next For Government Oversight?
These guidelines are the first of their kind, and they certainly won’t be the last. Whether your view of an “AI-enabled” future is utopian or dystopian, it is not unreasonable to expect AI tools to become an everyday part of our economy and society in the future.
As of right now, AI tools and techniques are something of a black box to the vast majority of the population. Combine this with the rate of growth in AI this year alone, and it is clear that global regulators have a responsibility to implement requirements that AI companies must adhere to in order to protect end users and enable them to make informed decisions about how they interact with AI tools.
Over time, as more regulatory frameworks are created around AI, the result should be an ecosystem that protects consumers while allowing AI to continue growing and delivering benefits to end users in a controlled manner.
Martin Davies is Audit Alliance Manager at Drata