AI & Cloud Are At The Intersection Of Cyber Security
A recent survey by Check Point and Cybersecurity Insiders asked hundreds of professionals from across different industries how they’ve been using AI so far, how much of a priority it is for their companies, and how it has impacted their workforces.
Check Point’s 2024 Cloud Security Report highlights how technological advances are breeding stronger cloud threats, with 91% of those surveyed expressing concern over emerging risks and zero-day attacks. The report also exposes a critical surge in cloud security incidents: the share of organisations reporting them rose from 24% in 2023 to 61% in 2024 (a 154% increase), underlining the escalating complexity and frequency of cloud threats.
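As a quick arithmetic check, assuming those percentages refer to the share of organisations reporting cloud security incidents, the 154% figure is simply the relative increase between the two reported rates:

$$\frac{61 - 24}{24} \approx 1.54 \;=\; \text{a } 154\% \text{ increase}$$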
Furthermore, Check Point identifies an urgent need for AI and 'Safety First' preventive security measures. In particular, the survey results reveal a lack of awareness about the crucial role of internal controls and governance policies when AI is involved.
- 91% view the adoption of Artificial Intelligence (AI) as a priority, highlighting vulnerability assessment and threat detection as key benefits
- Nevertheless, only 61% of respondents acknowledged that their organisation is in the planning or development phases of adopting AI and ML for cyber security
Artificial Intelligence and Machine Learning (AI and ML) are recognised as important parts of the future of cyber security and cloud security. But how integrated are these technologies in cyber security functions currently?
Where Does AI In Cyber Security Stand?
Several questions on the survey asked respondents about the state of AI in their organisations’ cyber security plans as of today, including how fully implemented it is and how that implementation is going. Their responses paint a picture of an industry that is moving slowly and cautiously, and perhaps hasn’t gone as “all-in” on AI as some may expect.
Organisations still seem to be evaluating the benefits and risks associated with AI and ML tools, and businesses are moving carefully to establish firm best practices that comply with relevant regulations.
When asked to describe their organisation’s adoption of AI and ML in cyber security, 61% of respondents described it as being either in the “planning” or “development” stages, significantly more than the 24% who categorised it as “maturing” or “advanced.” Additionally, 15% of those surveyed said that their organisations haven’t implemented AI and ML into their cyber security efforts at all.
Clearly, while the selling points of AI for cyber security are persuading many businesses to start exploring its potential, few have fully embraced these tools at this point.
Another question on the survey got more specific, asking respondents “Which cyber security (cloud) functions in your organisation are currently enhanced by AI and ML?” The answers are illuminating: malware detection leads the way at 35%, with user behaviour analysis and supply chain security following close behind. Towards the bottom of the list, fewer organisations appear to be using AI for security posture management or adversarial AI research. Taken together with the responses to the previously discussed question about the overall state of AI, the data shows that individual applications of AI and ML in cyber security are still far from universal.
One reason AI adoption hasn’t moved at a faster pace is the challenge of navigating a rapidly shifting regulatory landscape. In these early days, laws and government guidance around AI and cyber security are still evolving. Businesses can’t afford to take risks when it comes to compliance, and keeping up with these rapid changes can be complex and resource intensive.
How Are Organisations Approaching AI For Cyber Security?
Despite the slow and cautious adoption of AI in cyber security so far, it is almost universally regarded as an important priority going forward: 91% ranked it as a priority for their organisation, while only 9% of those surveyed said it is a low priority or not a priority at all.
Respondents clearly see the promise of AI to automate repetitive tasks and improve the detection of anomalies and malware, with 48% identifying that as the area with the most potential. Additionally, 41% see promise in using reinforcement learning for dynamic security posture management, a figure that is especially striking when compared with the mere 18% who are currently using AI for this function. The excitement is obvious, but there are challenges in the way of realising this potential.
Beyond specific applications, respondents were asked to identify what they see as the biggest benefits of incorporating AI into cyber security operations. The most popular answers included vulnerability assessment and threat detection, while cost efficiency was the least popular, chosen by just 21%. Likely because of the cost of regulatory compliance and implementation, AI is not currently viewed as a significant money-saving tool by most who answered.
Concerns & Conflicting Attitudes Around AI In Cyber Security
Additional questions on the survey provided insight into professional concerns and a lack of clarity about some of the fundamentals of AI and cyber security.
- On the subject of the impact of AI on the cyber security workforce, it’s apparent that this is still an open question without clear answers yet: 49% said AI requires new skills, and 35% noted that job roles are being redefined.
- While 33% said that their workforce size has been reduced as a result of AI, 29% said it has actually increased.
- Implementing AI into cyber security is clearly a work in progress, and while greater efficiency is a promise that might be realised in the future, for now many businesses are actually having to hire more people to integrate the new tech.
Notably, there was a significant split in the answers when respondents were asked whether they agreed with the statement: “Our organisation would be comfortable using Generative AI without implementing any internal controls for data quality and governance policies.” While 44% disagreed or strongly disagreed with the statement, 37% agreed or strongly agreed.
It’s very rare to see such a substantial split on a question like this in a professional survey, and it seems to indicate a lack of consensus, or perhaps simply a lack of awareness, regarding the importance of internal controls and governance policies when AI is involved.