The New Battlefield
Technology is radically changing the nature of warfare: the principal risk is shifting from physical disruption to a far more complex and unpredictable cyber risk. The pace of technological development has ushered in the next phase of cyber warfare.
The effects of cyber warfare are not limited to the digital domain and can have real-world consequences: an attack on a hospital or a nuclear facility can cause injury or even loss of life. Navigating this new threat landscape means seeking control of cyberspace just as allied forces seek control of the traditional domains. Even voice imitation now matters, giving hackers a way to issue credible orders and trigger transfers of funds.
In one case last year, a hacking group that had been trailing the CEO of a company taped a television interview he gave, then trained a computer to imitate the CEO's voice almost perfectly, so that it could issue credible instructions for a wire transfer of funds to a third party.
This “voice phishing” hack brought to light the growing ability of artificial intelligence-based technologies to perpetrate cyber-attacks and cyber-crime.
Using new AI-based software, hackers have imitated the voices of a number of senior company officials around the world and used them to issue instructions for transactions such as money transfers. The software can learn to imitate a voice convincingly after listening to it for just 20 minutes, and can then say, in that voice, whatever the hacker types into it.
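To make that workflow concrete, the sketch below lays out the stages in skeletal form. The names (SpeakerProfile, learn_voice, synthesise) are hypothetical and the function bodies are deliberate stubs; real voice-cloning systems rely on a neural speaker encoder, a text-to-speech model and a vocoder, none of which is reproduced here.

```python
# Structural sketch only: hypothetical names, stubbed components.
from dataclasses import dataclass


@dataclass
class SpeakerProfile:
    """A fixed-length 'voiceprint' distilled from reference recordings."""
    embedding: list[float]


def learn_voice(reference_clips: list[bytes]) -> SpeakerProfile:
    """Step 1: learn the target voice from roughly 20 minutes of recorded speech.
    A real system would run a neural speaker encoder; this stub returns a dummy vector."""
    return SpeakerProfile(embedding=[0.0] * 256)


def synthesise(text: str, profile: SpeakerProfile) -> bytes:
    """Step 2: turn typed text into speech in the target voice.
    A real system conditions a text-to-speech model on the speaker embedding and then
    runs a vocoder; this stub returns empty audio."""
    return b""


if __name__ == "__main__":
    profile = learn_voice(reference_clips=[])  # e.g. audio taped from a TV interview
    audio = synthesise("Please approve the wire transfer today.", profile)
    print(f"synthesised {len(audio)} bytes of audio")
```

The point of the sketch is only the shape of the attack: a modest amount of reference audio goes in, and arbitrary typed sentences come out in the target's voice.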
Some of these attempts were foiled, but in other cases the hackers succeeded in getting their hands on money. Leading officials at a recent cybersecurity conference in Tel Aviv warned of the growing threat of hackers using AI tools to open new attack surfaces and create new classes of threat.
Artificial intelligence is a field that gives computers the ability to think and learn, and although the concept has been around since the 1950s it is only now enjoying a resurgence made possible by chips’ higher computational power. The artificial intelligence market is expected to expand almost 37% annually and reach $191 billion by 2025, according to research firm MarketsandMarkets.
Artificial intelligence and machine learning are used today for a wide range of applications, from facial recognition to detection of diseases in medical images to global competitions in games such as chess and Go.
As our world becomes more and more digitalised, with everything from home appliances to hospital equipment connected to the internet, the opportunity for hackers to disrupt our lives grows ever greater. Whereas human hackers once spent considerable time poring over lines of code in search of a weak point they could penetrate, today AI tools can find vulnerabilities far faster, warned Yaniv Balmas, head of cyber research at Israel’s largest cybersecurity firm, Check Point Software Technologies.
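As a rough, self-contained illustration of that speed argument (not a representation of Check Point's tooling or of real AI-driven discovery, which goes far beyond pattern matching), the sketch below sweeps a directory tree for a few invented "risky pattern" rules. Even this trivial script can cover a large codebase far faster than a human reader could.

```python
# Minimal sketch of automated code scanning; the pattern list is illustrative only.
import re
import sys
from pathlib import Path

# Hypothetical rules: a real scanner would use far richer analysis or a trained model.
RISKY_PATTERNS = {
    "use of eval": re.compile(r"\beval\s*\("),
    "hard-coded password": re.compile(r"password\s*=\s*['\"]"),
    "unsafe C string copy": re.compile(r"\bstrcpy\s*\("),
}


def scan_file(path: Path) -> list[tuple[int, str]]:
    """Return (line number, finding) pairs for one source file."""
    findings: list[tuple[int, str]] = []
    try:
        lines = path.read_text(errors="ignore").splitlines()
    except OSError:
        return findings
    for lineno, line in enumerate(lines, start=1):
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings


if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for source in sorted(root.rglob("*")):
        if source.is_file():
            for lineno, label in scan_file(source):
                print(f"{source}:{lineno}: {label}")
```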
Spear-Phishing
Artificial intelligence tools are also already being used to create extremely sophisticated phishing campaigns, said Hudi Zack, chief executive director of the Technology Unit at the Israel National Cyber Directorate, the body in charge of the nation’s civilian cybersecurity.
Traditional phishing campaigns use emails or messages to get people to click on a link that infects their device with a virus, or to get them to perform certain actions. Users today can generally identify these campaigns and avoid responding to them, because the phishing emails come from unfamiliar people or addresses and their content is generic or irrelevant to the recipient.
Now, however, sophisticated AI systems create “very sophisticated spear-phishing campaigns” against “high-value” people, such as company CEOs or high-ranking officials, sending emails that address them directly, sometimes ostensibly from someone they know personally, and often with highly relevant content, such as a CV for a position they are looking to staff.
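The contrast can be made concrete with a toy heuristic. The sketch below scores an email on exactly the generic tells described above: an unknown sender, a boilerplate greeting, an urgent money request. The contact list and phrase lists are invented for illustration; the point is that a personally tailored spear-phishing message avoids all three signals and scores as low as legitimate mail.

```python
# Toy phishing heuristic: illustrative lists only, not a real filter.
KNOWN_CONTACTS = {"colleague@example.com", "boss@example.com"}  # invented for illustration

GENERIC_GREETINGS = ("dear customer", "dear user", "dear sir/madam")
URGENT_PHRASES = ("wire transfer", "urgent payment", "verify your account")


def phishing_score(sender: str, body: str) -> int:
    """Crude score: higher means more phishing-like. Personalised spear-phishing
    emails tend to score low because they avoid these generic tells."""
    text = body.lower()
    score = 0
    if sender.lower() not in KNOWN_CONTACTS:
        score += 1
    if any(greeting in text[:60] for greeting in GENERIC_GREETINGS):
        score += 1
    if any(phrase in text for phrase in URGENT_PHRASES):
        score += 1
    return score


if __name__ == "__main__":
    generic = phishing_score(
        "random@unknown.biz",
        "Dear customer, urgent payment required to verify your account.",
    )
    tailored = phishing_score(
        "boss@example.com",
        "Hi Dana, here is the CV for the analyst role we discussed.",
    )
    print(generic, tailored)  # the tailored message scores lower despite being malicious
```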
A sophisticated AI system would enable an attacker to “perform most of these actions for any target in a matter of seconds,” and thus spear phishing campaigns could aim at “thousands or even millions of targets,” Zack said.
These tools are mainly in the hands of well-funded state hackers, Zack said, declining to name them, though he foresaw them spreading in time to less sophisticated groups.
Perhaps the greatest AI-based threat that lurks ahead is the ability to interfere with the integrity of products embedded with AI technologies that support critical processes in fields such as finance, energy and transportation. As attacks grow more sophisticated, the ensuing cyber-battles will move from “human-to-human mind games to machine-to-machine battles.”
The stakes are too high to refrain from pursuing challenging conversations on AI safety and security. On such vital issues, pragmatic engagement means pursuing courses of action that can be productive and mutually beneficial while mitigating the risks. In the absence of trust, the great powers should exercise greater agency in shaping the future of AI and in responding to the dilemmas it poses for global security and stability.
Sources: Times of Israel, Australian Defence Magazine, Defense One