Large Language Models Are An Inflection Point For Cyber Security
Large Language Models (LLMs) are making a big impact across the technology sector. In particular, the ability of LLMs to perform tasks at a level seemingly equivalent to humans has led to rapid adoption in a variety of domains, including cyber security.
LLMs are widely considered to be an inflection point in AI, a step change that will introduce epoch-defining changes comparable to the invention of the Internet. A multi-billion pound race is underway to dominate this market.
LLM applications have burgeoned across diverse sectors, such as the creative arts, medicine, law, and software engineering. Yet their adoption in cyber security, despite the field's data-intensive and technically intricate nature, remains a tantalising prospect.
The urgency to stay ahead of cyber threats, including those posed by state-affiliated actors wielding LLMs, amplifies this allure.
Carnegie Mellon University & OpenAI
Carnegie Mellon University’s Software Engineering Institute (SEI) and the Microsoft-backed OpenAI now claim that large language models could be an asset for cyber security professionals, but must be evaluated using real and complex scenarios to better understand the technology’s capabilities and risks.
LLMs underlie today’s Generative AI platforms, including Google’s Gemini, Microsoft’s Bing AI, and ChatGPT, released in November 2022 by OpenAI.
While LLMs are excellent at recalling facts, the Carnegie Mellon white paper “Considerations for Evaluating Large Language Models for Cybersecurity Tasks” argues that this is not enough: the LLM knows a lot, but it does not necessarily know how to deploy that information correctly, or in the right order. The paper’s proposed solution is to evaluate LLMs the way one would evaluate a human cyber security operator, assessing theoretical, practical, and applied knowledge.
According to Techxplore, focusing only on theoretical knowledge ignores the complexity and nuance of real-world cyber security tasks, leaving cyber security professionals unsure how or when to incorporate LLMs into their operations. However, testing an artificial neural network is extremely challenging, as even defining the tasks is hard in a field as diverse as cyber security.
Furthermore, once the tasks are defined, an evaluation must ask up to millions of questions to probe what the LLM knows, because LLMs mimic the way the human brain holds information. While creating that volume of questions can be done through automation, there is not yet a tool that can generate enough practical or applied scenarios for the LLM.
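To make the automation point concrete, the sketch below shows one simple way large banks of theoretical-knowledge questions can be produced from templates. It is purely illustrative: the templates, vulnerabilities, and assets are hypothetical stand-ins and are not drawn from the SEI white paper.

```python
# Minimal sketch of automated question generation for a theoretical-knowledge
# evaluation. All templates and fill-in values are hypothetical examples.
import itertools

TEMPLATES = [
    "Which CWE category best describes {vuln}?",
    "What is the first containment step when {vuln} is detected on {asset}?",
]

VULNS = ["an SQL injection flaw", "a buffer overflow", "a misconfigured S3 bucket"]
ASSETS = ["a public web server", "an internal database host"]

def generate_questions():
    """Expand every template against every combination of fill-in values."""
    fields = {"vuln": VULNS, "asset": ASSETS}
    for template in TEMPLATES:
        # Only substitute the placeholders the template actually references.
        used = [name for name in fields if "{" + name + "}" in template]
        for combo in itertools.product(*(fields[name] for name in used)):
            yield template.format(**dict(zip(used, combo)))

if __name__ == "__main__":
    for question in generate_questions():
        print(question)
```

Scaling this pattern up is straightforward for factual quizzes, which is precisely the paper's point: the hard part is generating realistic practical and applied scenarios, not multiple-choice questions.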
In the meantime, as the technology catches up, the white paper provides a framework for designing realistic cyber security evaluations of LLMs: define the real-world task for the evaluation to capture, represent tasks appropriately, make the evaluation robust, and frame results appropriately.
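As an illustration of how a team might record those four recommendations against each evaluation task, here is a minimal sketch. The structure and field names are our own invention for clarity, not an API or schema defined by SEI or OpenAI.

```python
# Illustrative sketch: encoding the white paper's four recommendations as a
# per-task checklist. Field names are hypothetical, not a published schema.
from dataclasses import dataclass, field

@dataclass
class CyberEvalTask:
    real_world_task: str           # 1. the real-world task the evaluation should capture
    representation: str            # 2. how that task is represented to the model
    robustness_checks: list[str] = field(default_factory=list)  # 3. what makes the evaluation robust
    result_framing: str = ""       # 4. how results should be framed and reported

example = CyberEvalTask(
    real_world_task="Triage an intrusion alert from a SIEM",
    representation="Full alert payload plus recent host logs, not a multiple-choice quiz",
    robustness_checks=["paraphrased prompts", "shuffled log ordering", "held-out incident types"],
    result_framing="Accuracy per incident type with uncertainty, not a single headline score",
)
print(example)
```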
The paper’s authors believe LLMs will eventually enhance human cyber security operators in a supporting role, rather than work autonomously, and emphasise that even so, LLMs will still need to be evaluated. They also express their hope that the paper starts a movement toward practices that can inform the decision-makers in charge of integrating LLMs into cyber operations.
Conclusion
The collaboration between Carnegie Mellon University’s SEI and OpenAI represents a significant step forward in understanding the role of LLMs in cyber security. By proposing a comprehensive evaluation framework, stakeholders can make informed decisions about integrating LLMs into their operations.
This signifies a growing recognition of the potential benefits and risks associated with AI-driven solutions in the cyber security market, highlighting the need for rigorous evaluation practices to ensure effective and responsible implementation.
Carnegie Mellon University | I-HIS |
Image: googledeepmind