Future Threats Are Growing Closer
Future generations may look back at our time as one of intense change. In a few short decades, we have morphed from a machine-based society into an information-based one, and society has been forced to develop a new and intimate familiarity with data-driven and algorithmic systems. Two of the most discussed topics on the Internet today are Artificial Intelligence (AI) and Cyber Security.
Organisations often focus on the positive aspects of AI, such as the way it can predict what customers need from their data. When the security aspects of AI are discussed, the conversation often centres on data privacy. But an important emerging trend is an increase in AI-enabled cyber attacks.
A report from Forrester Consulting found that 88% of decision-makers in the security industry believe offensive AI is coming. Half of the respondents expect an increase in attacks, and two-thirds of those surveyed expect AI to lead new attacks. Deloitte’s report, Smart Cyber: How AI Can Help Manage Cyber Risk, describes smart cyber as a spectrum: it starts with robotic process automation, moves to cognitive automation and then evolves into AI. “In the digital age, artificial intelligence technologies are starting to have the same kind of game-changing impact that factories and assembly lines had on manufacturing at the dawn of the industrial age, dramatically improving efficiency and enabling new products, services, and business models that simply weren’t possible before”, says the report.
Previously, cyber attacks sat at the lower end of that spectrum, simply mimicking human actions. Now that cyber criminals have moved to fully using AI, their attacks mimic human intelligence.
The underlying concept of AI security, using data to become smarter and more accurate, is what makes the trend so dangerous. Because the attacks become smarter with each success and each failure, they are harder to predict and stop. Once the threats outpace defenders’ expertise and tools, the attacks quickly become much harder to control. Given the nature of AI security, the industry must react quickly to the rise in AI attacks before it is too late in the game to catch up.
Increased speed and reliability provide businesses with many benefits, such as the ability to process large amounts of data in near real-time. Cyber criminals are now benefiting from this speed as well, most notably thanks to expanding 5G coverage.
Cyber attacks can now learn from themselves much more quickly, and can use swarm attacks to gain access fast. Faster speeds also mean that threat actors can work more quickly, often going undetected by technology or humans until it is too late to stop them. The core problem with protecting against AI attacks is the pace of change. Defensive technology is lagging behind, which means that in the very near future attackers may truly have the upper hand. Given the nature of AI security, once that happens it will be challenging, if not impossible, for defenders to regain control.
Perhaps the most attractive aspect of AI security is the way it can understand context and combine speed with that context, something earlier automated cyber attacks could not do.
Cyber Criminals Are Using AI
Threat actors weaponise AI in two main ways: first to design the attack, and then to conduct it. The predictive nature of the technology lends itself to both. AI can mimic trusted actors: attackers learn about a real person and then use bots to copy that person’s actions and language.
While many businesses use AI to predict customers’ needs, threat actors use the same concept to increase the odds of an attack’s success.
By using data collected from other similar users, or even from the exact user targeted, cyber criminals can design an attack likely to work on that specific person. For example, if an employee receives emails from their children’s school in their work email, a bot can launch a phishing attack designed to mimic a school email or link.
- By using AI, attackers can spot openings more quickly, such as an unprotected network or a disabled firewall, which means even a very short window can be used for an attack. AI can find vulnerabilities a human couldn’t detect, since a bot can use data from previous attacks to spot very slight changes.
- By using AI, threat actors can design attacks that create new mutations based on the type of defence mounted against them.
Security professionals must defend against constantly changing bots, which are very hard to stop: as soon as they get close to blocking one attack, a new one emerges, and AI can make it harder for defenders to detect the specific bot or attack behind it.
AI Security Can Stop Attacks
As attacks become smarter, the industry must increase its use of sophisticated techniques. Somewhat ironically, the most effective way to defend against AI attacks is to use AI against them; as the World Economic Forum put it, only AI can play AI at its own game. By using AI security to protect and defend, your systems become smarter and more effective with each attack. Just as threat actors use AI to predict actions and risks, defenders can use it to predict the attackers.
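As a purely illustrative sketch of this idea, a defensive system can keep a behavioural baseline for each user and score new activity against it, folding confirmed-benign observations back in so the model sharpens over time. The feature (requests per minute) and all numbers below are hypothetical:

```python
from statistics import mean, stdev

def anomaly_score(baseline, value):
    """Z-score of a new observation against the learned baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(value - mu) / sigma if sigma else 0.0

# Hypothetical baseline: a user's typical requests per minute.
baseline = [4, 6, 5, 7, 5, 6, 4, 5]

print(anomaly_score(baseline, 5))    # ordinary activity -> low score
print(anomaly_score(baseline, 120))  # automated burst -> very high score

# The defence "gets smarter" by folding confirmed-benign
# observations back into the baseline.
baseline.append(5)
```

In practice such baselines span many features (timing, geography, device, language), but the principle is the same: the defence does not need to recognise the specific bot, only the deviation from learned behaviour.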
In the past, cyber security revolved around protecting the infrastructure and then reacting to threats. With AI, security moves from reactive to predictive.
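The difference can be sketched in a few lines of Python. A reactive control only recognises indicators it has already seen (a blocklist), while a predictive control scores risk from behavioural features before any damage is done. The feature names and weights here are hypothetical, not drawn from any real model:

```python
# Hypothetical feature weights, standing in for what a real model
# would learn from past incident data.
WEIGHTS = {
    "new_device": 0.4,
    "unusual_hour": 0.2,
    "impossible_travel": 0.5,
    "failed_attempts": 0.3,
}

def risk_score(event):
    """Predictive: estimate risk from behaviour, before any damage is seen."""
    return sum(w for feat, w in WEIGHTS.items() if event.get(feat))

def reactive_check(event, blocklist):
    """Reactive: only catches indicators that were already seen and listed."""
    return event.get("source_ip") in blocklist

login = {"source_ip": "203.0.113.7", "new_device": True, "impossible_travel": True}

print(reactive_check(login, blocklist={"198.51.100.9"}))  # False: IP never seen before
print(risk_score(login))  # flagged on behaviour alone
```

A real system would learn the weights from incident data rather than hard-coding them, but the contrast holds: the reactive check misses the novel attacker entirely, while the predictive score flags it.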
AI has rapidly become a cornerstone of many systems and platforms, from retail to marketing to finance. Soon, if not already, it will be considered a standard feature rather than a bonus or a differentiator among tools. The same trend is likely to follow with AI in cyber security: predictive technology will become the standard. By combining AI security with zero trust, organisations increase their likelihood of preventing many attacks and quickly defusing any that make it through.
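One way to picture that combination: an AI model supplies a risk score, and a zero-trust policy decides how much verification to demand, trusting nothing by default. The thresholds and action names below are illustrative assumptions, not any particular product's policy:

```python
def zero_trust_decision(risk, threshold_block=0.8, threshold_mfa=0.4):
    """Map an AI-derived risk score to a zero-trust action:
    never trust implicitly, escalate verification as risk grows."""
    if risk >= threshold_block:
        return "deny"
    if risk >= threshold_mfa:
        return "step-up-mfa"
    return "allow-least-privilege"

print(zero_trust_decision(0.9))   # deny
print(zero_trust_decision(0.5))   # step-up-mfa
print(zero_trust_decision(0.1))   # allow-least-privilege
```

The key design choice is that even the lowest-risk outcome grants only least privilege; the AI score decides how much extra friction to apply, never whether to trust implicitly.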
By upgrading systems and processes to use AI now, cyber security teams have a much better chance of catching up with threat actors. An advanced artificial intelligence system that tracks users, not machines, could be the goal every CISO strives towards to reduce risk and keep the business running smoothly.
Sources: World Economic Forum | CISO Magazine | RAND | ReadWrite | Deloitte | Darktrace | Security Intelligence