GenAI Is The Biggest Cyber Security Risk
The leading ethical hacking platform, HackerOne, has found that 48% of security professionals believe AI is the most significant security risk to their organisation. Ahead of the launch of its annual Hacker-Powered Security Report, HackerOne has revealed early findings, which include data from a survey of 500 security professionals.
Respondents were most concerned with the leaking of training data (35%), unauthorised usage of AI within their organisations (33%), and the hacking of AI models by outsiders (32%).
When asked about handling the challenges that AI safety and security issues present, 68% said that an external and unbiased review of AI implementations is the most effective way to identify AI safety and security issues.
AI Red Teaming offers this type of external review through the global security researcher community, who help to safeguard AI models from risks, biases, malicious exploits, and harmful outputs. “While we’re still reaching industry consensus around AI security and safety best practices, there are some clear tactics where organizations have found success,” said Michiel Prins, co-founder at HackerOne. “Anthropic, Adobe, Snap, and other leading organisations all trust the global security researcher community to give expert third-party perspective on their AI deployments.” he said
Further research from a HackerOne-sponsored SANS Institute Report explored the impact of AI on cybersecurity and found that over half (58%) of respondents predict AI may contribute to an “arms race” between the tactics and techniques used by security teams and cybercriminals.
The research also found optimism around the use of AI for security team productivity, with 71% reporting satisfaction from implementing AI to automate tedious tasks. However, respondents believed AI productivity gains have benefited adversaries and were most concerned with AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%). “Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations - or risk creating more work for themselves,” said Matt Bromiley, Analyst at The SANS Institute. “Our research suggests AI should be viewed as an enabler, rather than a threat to jobs. Automating routine tasks empowers security teams to focus on more strategic activities.” Bromiley said.
HackerOne’s AI-powered co-pilot Hai can help security teams by automating tasks and saving security teams an average of five hours of work per week. Indeed, AI-focused products continue to drive HackerOne’s business, with AI Red Teaming growing 200% quarter over quarter in Q2 and a 171% increase in security programs adding AI assets into scope.
Test your AI risk readiness with this HackerOne interactive quiz HERE
HackerOne | HackerOne | SANS Institute |
Image: Allison Saeng
You Might Also Read:
The Crucial Role Of AI Red Teaming In Safeguarding Systems & Data:
DIRECTORY OF SUPPLIERS - AI Security & Governance:
If you like this website and use the comprehensive 6,500-plus service supplier Directory, you can get unrestricted access, including the exclusive in-depth Directors Report series, by signing up for a Premium Subscription.
- Individual £5 per month or £50 per year. Sign Up
- Multi-User, Corporate & Library Accounts Available on Request
- Inquiries: Contact Cyber Security Intelligence
Cyber Security Intelligence: Captured Organised & Accessible