Advanced AI For Cyber Operations
The cyber security landscape is evolving at breakneck speed, with threat levels, the volume of data being stored, and computing power all increasing significantly. Added to this, the diversity of Internet- and network-connected technologies is growing along an even steeper curve. There are some hard truths that many organisations ignore at their peril.
Most security departments will acknowledge that their resources are already spread too thinly. Now there is an expectation to do much more with even less. Could AI be the answer to extending the value and efficacy of cyber security?
Now the US Defense Advanced Research Projects Agency (DARPA), in conjunction with the Pentagon's Joint Artificial Intelligence Center (JAIC), is setting its sights on the rapidly expanding intersection of Artificial Intelligence (AI) with cyber security and cyber warfare operations. Development of AI tools and applications for use in the cyber realm is one of several focus areas that DARPA plans to delve further into as part of the agency's long-term strategy.
One of the agency’s flagship efforts, the Harnessing Autonomy for Countering Cyber Adversary Systems (HACCS) programme, is making strides in integrating AI-enhanced technologies into cyber operations.
The overall goal of the HACCS programme is the development of “autonomous software agents” capable of countering targeted network attacks by botnet implants, as well as large-scale malware campaigns, according to an agency fact sheet.
The HACCS programme "will develop the techniques and algorithms necessary to measure the accuracy of identifying botnet-infected networks, the accuracy of identifying the type of devices residing in a network, and the stability of potential access vectors," it said.
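Measuring that kind of detection accuracy ultimately comes down to comparing a detector's verdicts against ground truth. The sketch below is not part of HACCS; it is a minimal, hypothetical illustration of how accuracy, precision, and recall for a botnet-infection detector could be scored, with all labels and detector output invented for the example.

```python
# Minimal sketch: scoring how accurately a (hypothetical) detector
# flags botnet-infected networks. The labels and detector verdicts
# below are invented for illustration only.

def score_detector(truth, predicted):
    """Compare ground-truth infection labels with detector verdicts."""
    tp = sum(1 for t, p in zip(truth, predicted) if t and p)
    fp = sum(1 for t, p in zip(truth, predicted) if not t and p)
    fn = sum(1 for t, p in zip(truth, predicted) if t and not p)
    tn = sum(1 for t, p in zip(truth, predicted) if not t and not p)
    accuracy = (tp + tn) / len(truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Hypothetical example: 8 networks, True = actually infected.
truth     = [True, True, False, False, True, False, False, True]
predicted = [True, False, False, True, True, False, False, True]

acc, prec, rec = score_detector(truth, predicted)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f}")
# → accuracy=0.75 precision=0.75 recall=0.75
```

Precision and recall matter here as much as raw accuracy: a detector that flags benign networks (false positives) wastes analyst time, while one that misses infections (false negatives) leaves botnet implants in place.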
DARPA is investing more than $2 billion in new and existing programmes under the "AI Next" campaign. Key areas of the campaign include automating critical DoD business processes, such as security clearance vetting or accrediting software systems for operational deployment; improving the robustness and reliability of AI systems by enhancing the security and resiliency of machine learning and AI technologies; reducing power, data, and performance inefficiencies; and pioneering the next generation of AI algorithms and applications, such as "explainability" and common-sense reasoning.
DARPA says AI technologies have demonstrated great value to missions as diverse as space-based imagery analysis, cyber attack warning, supply chain logistics and analysis of microbiological systems. At the same time, the failure modes of AI technologies are poorly understood. DARPA is working to address this shortfall with focused R&D, both analytic and empirical. DARPA's success is essential for the Department of Defense to deploy AI technologies, particularly at the tactical edge, where reliable performance is required.
The most powerful AI tool today is machine learning (ML). ML systems can be easily duped by changes to inputs that would never fool a human.
The data used to train such systems can be corrupted, and the software itself is vulnerable to cyber attack. These areas, and more, must be addressed at scale as more AI-enabled systems are operationally deployed. DARPA research aims to enable AI systems to explain their actions, and to acquire and reason with common sense knowledge. The irony of artificial intelligence is how much human brainpower is required to build it.
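The claim that ML systems can be duped by input changes that would never fool a human can be shown on a toy model. The sketch below uses an invented linear classifier, not any DARPA system: nudging each feature by a tiny amount in the direction of the model's weights (the intuition behind gradient-based adversarial attacks) flips the model's decision even though the input barely changes.

```python
# Toy illustration of an adversarial input: a small perturbation,
# aligned with a linear model's weights, flips its decision.
# The model weights and input are invented for illustration only.

def predict(weights, x):
    """Linear classifier: positive score -> class 1, else class 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

weights = [0.5, -0.25, 1.0, -0.5]
x = [0.2, 0.4, -0.1, 0.3]        # score is negative -> class 0

# Perturb each feature by a small eps in the direction of the
# corresponding weight's sign, maximising the score shift.
eps = 0.2
x_adv = [xi + eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

print(predict(weights, x))       # original input        → 0
print(predict(weights, x_adv))   # barely-changed input  → 1
```

A human comparing `x` and `x_adv` would see two near-identical inputs; the model sees two different classes. Hardening models against exactly this kind of manipulation is one of the robustness goals the article describes.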
DARPA is now creating the next wave of AI technologies that will enable the United States to maintain its technological edge in this critical area.
Recently, researchers at the University of Texas at Dallas (UT Dallas) received a grant from DARPA to simulate dynamic and unexpected events that can be used to train AI systems, computer systems that emulate human cognition, to adapt to the unpredictable. The researchers use Polycraft World, a modification of the video game Minecraft originally developed at UT Dallas to teach chemistry and engineering. Now the game, which allows players to build virtual worlds, is serving as the foundation for federal research to develop smarter AI technology. The simulated scenarios could include changing weather or unfamiliar terrain. In response to the COVID-19 pandemic, researchers have added the threat of an infectious disease outbreak.
AI security technologies still require the human component, but the transition is moving security professionals away from extensive manual checking and configuration into roles of oversight and strategy. The biggest problem in the future is likely to be how to prevent hackers from using variations of the same AI capabilities to perform intrusions and exploits.
Sources: DARPA | Federal News Network | Janes | Infosecurity Magazine | University of Texas