Using AI To Its Full Cybersecurity Potential
As 2024 closes out as the “Year of AI,” the technology’s transformative impact on industries worldwide shows no signs of slowing down into 2025. A recent report by the UK government revealed that 68% of businesses are already using at least one AI technology, with a further 32% planning to adopt one.
From cutting financial costs to speeding up processes, artificial intelligence brings a wealth of benefits to the cybersecurity space.
Its role in this industry is now critical, offering enhanced threat detection, proactive defence, and adaptive learning capabilities. By analysing vast datasets, AI can identify risks such as phishing and malware faster than traditional methods, enhancing cyber resilience. However, the same capabilities that make AI a powerful protection tool can also be exploited by cybercriminals. AI-generated deepfakes, large-scale bot attacks, and advanced hacking techniques are becoming increasingly sophisticated, highlighting the two-sided nature of AI in cybersecurity.
We spoke to four industry experts to find out their predictions for 2025 when it comes to AI and its pros and cons.
AI: Friend Or Foe?
AI is set to play a pivotal role in the cybersecurity landscape, acting as both friend and foe. Geoff Barlow, Product and Strategy Director at Node4, notes that “AI is playing a dual role in the cybersecurity arena, both enhancing and challenging it.” While AI empowers cybercriminals by increasing “the speed, volume, and sophistication of cyber-attacks,” it also offers powerful tools for defence, enabling organisations to “anticipate and respond to threats.” Node4’s Mid-Market Research reveals that 30% of IT decision-makers view AI as a top cybersecurity threat, with 28% concerned it could expose businesses to new risks, and 25% worried it could inadvertently leak sensitive data.
Barlow highlights that organisations must focus on “improving threat detection, hunting, and intelligence capabilities using AI” while addressing the AI skills gap through education and third-party support.
Similarly, Moshe Weis, CISO at Aqua Security, highlights that GenAI continues to empower attackers by enabling “complex, targeted phishing, deepfakes, and adaptive malware.” However, it also supports defence through “cloud-native security solutions [that] leverage GenAI to automate threat detection and response across distributed environments,” providing real-time analysis and predictive defences.
By 2025, Weis emphasises that “using AI within cloud-native frameworks will be essential for maintaining the agility needed to counter increasingly adaptive threats.”
The Rules & Regulations of AI
Another focus in cybersecurity for 2025 is the need for a more integrated approach to governance, risk, and compliance (GRC). Matt Hillary, CISO at Drata, highlights that “security, privacy, and compliance will become increasingly intertwined,” driven by “increasing cyber threats, stricter regulations, and a heightened public awareness of privacy issues.” The rise of AI further complicates this landscape, as organisations must navigate “the ethical and privacy implications of the use of AI in GRC processes” while balancing its potential with maintaining high privacy standards.
Simultaneously, advancements in cloud-native solutions could enhance security across the data lifecycle.
Aqua Security’s Weis emphasises that these solutions “provide dynamic protection across data lifecycles, securing data at rest, in motion, and in use,” which will be critical as “stricter compliance standards and more data-centric attacks demand robust, consistent security.”
Dane Sherrets, Staff Innovations Architect at HackerOne, summarises that 2025 will bring “greater industry adoption of AI security and safety standards” to improve transparency in processes. Businesses will increasingly focus on “responsible AI adoption” and employ methods like “AI red teaming” to uncover safety and security vulnerabilities in generative AI systems.
Training For The AI-Driven Cybersecurity Era
However, due to AI’s rapid rise in popularity, a significant skills gap has also emerged. A survey from AI Quest found that 75% of employees lack an understanding of how to use AI effectively in their roles. As businesses navigate the complexities of AI, training employees to harness its potential while mitigating its risks will be essential to fully benefit from its capabilities over the next year.
Sherrets highlights the importance of “benchmarks that improve AI transparency,” such as the adoption of AI model cards. These model cards function “much like nutrition labels on packaged goods,” providing users with essential information, including the model’s intended use, “performance evaluation procedures, and metadata about the datasets” involved. This transparency will be crucial for fostering trust and accountability in AI-driven systems both internally and externally.
Equally vital is equipping employees with the skills to navigate and manage AI tools effectively. Node4’s Barlow emphasises that “regardless of the tools or service chosen to help, all organisations should be implementing some form of training to support in-house employees with the surge in AI adoption.”
By investing in comprehensive training, organisations can ensure their workforce is prepared to leverage AI responsibly and effectively, enabling them to “tackle whatever AI developments and threats [emerge] in 2025.”