Strengthen Software Supply Chain & Governance For Better AI System Cybersecurity
As the range of AI solutions within a company grows, so does its AI attack surface, along with the sophistication of the AI tools available to bad actors. Governments, regional legislators and the private sector are taking these threats seriously.
A few months ago at the Aspen Security Forum, a group of leading technology companies launched the Coalition for Secure AI (CoSAI) to address key AI security issues, including software supply chain security for AI systems, preparing defenders for a changing security landscape, and AI risk governance.
AI security is more important than ever, as hackers increasingly use AI to make their phishing emails and deepfake attacks more sophisticated.
At the Black Hat security conference a few years ago, Singapore’s Government Technology Agency (GovTech) presented the results of an experiment in which a security team sent simulated spear phishing emails to internal users. More people clicked the links in the AI-generated phishing emails than in the human-written ones, by a significant margin.
And earlier this year, a finance worker at a multinational firm was tricked into paying $25 million to fraudsters who used deepfake technology to pose as the company’s chief financial officer in a video conference call.
As such, the launch of CoSAI is to be welcomed. As noted above, one of the key workstreams it will focus on is software supply chain security for AI systems. The AI supply chain spans the entire lifecycle of AI systems, from data collection and model training to deployment and maintenance. Due to the complexity and interconnectedness of this ecosystem, vulnerabilities at any stage can affect the entire system.
AI systems often depend on third-party libraries, frameworks, and components, which, while speeding up development, can introduce potential vulnerabilities. Therefore, it’s crucial to use automated tools to regularly check and address security issues related to these dependencies.
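As an illustration, consider a build step that fails when a known-vulnerable dependency is found. The sketch below assumes a Python project with a requirements.txt and the open-source pip-audit scanner installed; the exact JSON output shape varies between pip-audit versions, so treat the parsing as indicative rather than definitive.

```python
import json
import subprocess
import sys

# Run pip-audit against the project's pinned dependencies and capture
# machine-readable output (assumes pip-audit is installed and on PATH).
result = subprocess.run(
    ["pip-audit", "-r", "requirements.txt", "--format", "json"],
    capture_output=True,
    text=True,
)

# Recent pip-audit versions wrap results in an object; older ones emit a list.
data = json.loads(result.stdout)
deps = data.get("dependencies", []) if isinstance(data, dict) else data
vulnerable = [d for d in deps if d.get("vulns")]

for dep in vulnerable:
    ids = ", ".join(v["id"] for v in dep["vulns"])
    print(f"{dep['name']} {dep['version']}: {ids}")

# Fail the build if anything vulnerable was reported.
sys.exit(1 if vulnerable else 0)
```

Run on a schedule or on every commit, a check like this turns dependency hygiene from a periodic audit into a continuous control.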
Additionally, the widespread availability of open-source large language models (LLMs) necessitates robust provenance tracking to verify the origin and integrity of models and datasets. Automated security tools should also be used to scan these models and datasets for vulnerabilities and malware. Relatedly, on-device LLMs can offer enhanced data security, as compute is performed locally without needing a connection to the cloud.
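For example, a minimal integrity check, assuming the model publisher ships a SHA-256 digest alongside each artifact (the file names and digests below are hypothetical placeholders), could look like this:

```python
import hashlib
from pathlib import Path

# Hypothetical manifest: artifact name -> SHA-256 digest published by the provider.
EXPECTED = {
    "model.safetensors": "9b2f6c...",  # placeholder digest, not a real value
    "tokenizer.json": "4c1a8d...",     # placeholder digest, not a real value
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-gigabyte model weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in EXPECTED.items():
    actual = sha256_of(Path(name))
    status = "OK" if actual == expected else "MISMATCH - do not load"
    print(f"{name}: {status}")
```

Checksums only establish that the artifact is unchanged; full provenance tracking would add signed attestations of who built the model and from what data.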
With closed-source models, their proprietary nature may provide security through obscurity, making it harder for malicious actors to find vulnerabilities to exploit. However, it also means that identifying and addressing security issues can be a prolonged process.
With open source, security gains come from the collaborative efforts of the community: the scrutiny of many eyes on the code facilitates swift detection and resolution of vulnerabilities. Nevertheless, the same public exposure can reveal potential weaknesses to attackers as well as defenders.
CoSAI’s focus on AI security governance is also timely. For example, this year the National Institute of Standards and Technology (NIST) published a paper outlining four types of machine learning attacks against predictive and generative AI systems: poisoning, abuse, privacy, and evasion attacks.
The EU’s AI Act likewise highlights the need for cybersecurity measures to prevent, detect, respond to, resolve and control for attacks that try to manipulate a training data set (data poisoning) or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), and confidentiality attacks or model flaws.
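To make the model-evasion case concrete, the sketch below uses a toy logistic-regression “model” in plain numpy (the weights, input and perturbation budget are invented for illustration) and applies the fast gradient sign method, where a small input perturbation flips the prediction:

```python
import numpy as np

# Toy logistic-regression "model": weights and input are illustrative only.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x: np.ndarray) -> float:
    return float(1 / (1 + np.exp(-(w @ x + b))))  # probability of class 1

x = np.array([0.4, -0.3, 0.8])
eps = 0.5  # perturbation budget per feature

# For a linear model the gradient of the class-1 score w.r.t. the input is
# simply w, so the FGSM step is eps * sign(w); subtracting it pushes the
# score toward class 0.
x_adv = x - eps * np.sign(w)

print(f"clean score:       {predict(x):.3f}")      # ~0.85 -> class 1
print(f"adversarial score: {predict(x_adv):.3f}")  # ~0.43 -> flips to class 0
```

Real attacks target deep networks and constrain perturbations to stay imperceptible, but the mechanism, following the gradient to induce a misclassification, is the same.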
Companies can share their expertise by participating in the regulatory process and through research conducted in cooperation with customers, partners, industry associations and research institutions. A shared commitment to innovation requires that AI be secure.
The governance of AI security necessitates specialised resources to address the unique challenges and risks associated with AI. Developing a standard library for risk and control mapping helps in achieving consistent AI security practices across the industry.
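As a sketch of what such a mapping could look like (the risk names and control identifiers below are invented for illustration, not drawn from any published catalogue), consider:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    control_id: str   # hypothetical identifier, not from a published standard
    description: str

# Illustrative risk -> control mapping; a real shared library would align
# entries with a recognised framework such as the NIST AI RMF.
RISK_CONTROLS: dict[str, list[Control]] = {
    "data_poisoning": [
        Control("AI-SC-01", "Validate provenance of all training data sources"),
        Control("AI-SC-02", "Monitor training metrics for anomalous shifts"),
    ],
    "model_evasion": [
        Control("AI-RT-01", "Adversarially test models before release"),
    ],
    "privacy_attack": [
        Control("AI-PR-01", "Apply differential privacy or output filtering"),
    ],
}

for risk, controls in RISK_CONTROLS.items():
    print(risk, "->", ", ".join(c.control_id for c in controls))
```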
Additionally, an AI security maturity assessment checklist and a standardised scoring mechanism would enable organisations to conduct self-assessments of their AI security measures. This process can provide customers with assurance about the security of AI products, and it parallels the secure software development lifecycle (SDLC) practices organisations already follow through Software Assurance Maturity Model (SAMM) assessments.
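A minimal self-assessment scorer, assuming a hypothetical checklist in which each practice is rated 0-3 and weighted by importance (the practices and weights here are invented for illustration), might look like this:

```python
# Hypothetical checklist: practice -> (weight, maturity rating 0-3).
ASSESSMENT = {
    "dependency scanning automated": (3, 2),
    "model provenance tracked":      (3, 1),
    "adversarial testing in CI":     (2, 0),
    "incident response covers AI":   (2, 3),
    "staff trained on AI security":  (1, 2),
}

max_score = sum(weight * 3 for weight, _ in ASSESSMENT.values())
score = sum(weight * rating for weight, rating in ASSESSMENT.values())

# Normalise to 0-100, analogous to how SAMM expresses maturity levels.
print(f"AI security maturity: {100 * score / max_score:.0f}/100")  # -> 52/100
```

A standardised rubric of this kind would let results be compared across organisations in the way SAMM scores are today.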
Products and solutions can then be used in applications that help organisations comply with HIPAA, PCI-DSS and GDPR, and that carry FIPS 140 validation and Common Criteria certification. Organisations should look to leverage their technology partners’ AI enablers, software development kits, APIs and developer tools to build secure, scalable digital services with ease and speed.
Technology companies can commit to developing secure AI solutions that improve worker productivity and edge deployment by integrating multiple layers of protection and focusing on security that is easy to deploy without hindering performance.
Much like companies do for cybersecurity and other initiatives that require company-wide coordination, they should continue to evolve AI processes, principles, tools and training while ensuring consistency and compliance through an internal hub-and-spoke governance model.
Srikrishna Shankavaram is Principal Cyber Security Architect, CTO Office at Zebra Technologies
Image: IvelinRadkov