Combating The Threat Of Malicious AI
A group of academics and researchers from leading universities, think tanks and research labs, including Oxford, Yale, Cambridge and OpenAI, recently published a chilling report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
The report raised alarm bells about the rising possibility that rogue states, criminals, terrorists and other malefactors could soon exploit AI capabilities to cause widespread harm.
These risks are weighty and disturbing, albeit not surprising. Several politicians and humanitarians have repeatedly called for AI to be regulated, with some describing it as humanity’s most plausible existential threat.
For instance, back in 2016, Barack Obama, then President of the United States, publicly admitted his fears that an AI algorithm could be unleashed against US nuclear weapons. “There could be an algorithm that said, ‘Go penetrate the nuclear codes and figure out how to launch some missiles,’” Obama cautioned.
A year later, in August 2017, the charismatic Tesla and SpaceX CEO, Elon Musk, teamed up with 116 executives and scholars to sign an open letter urging the UN, as the world’s governing body, to enact statutes banning the global use of lethal autonomous weapons, or so-called “killer robots.”
While AI’s ability to boost fraud detection and cyber defense is unquestionable, this defensive advantage could soon prove to be a zero-sum game.
The same technology could be exploited by malefactors to develop superior and elusive AI programs that will unleash advanced persistent threats against critical systems, manipulate stock markets, perpetrate high-value fraud or steal intellectual property.
What makes this new report particularly significant is its emphasis on the immediacy of the threat. It predicts that widespread malicious uses of AI, such as repurposed autonomous weapons, automated hacking, target impersonation and highly tuned phishing attacks, could all materialize as early as the next decade.
So, why has this malicious AI threat escalated from Hollywood fantasy to potential reality far more rapidly than many pundits anticipated?
There are three primary drivers:
- First, cyber-threat actors are increasingly agile and inventive, flush with financial resources and free of the regulation that often stifles innovation for legitimate enterprises.
- Secondly, and perhaps most important, the rapid intersection between cyber-crime and politics, combined with deep suspicions that adversarial nations are using advanced programs to manipulate elections, spy on military programs or debilitate critical infrastructure, has further dented prospects of meaningful international cooperation.
- Thirdly, advanced AI-based programs developed by nation-states may inadvertently fall into the wrong hands.
An unsettling example is the 2016 incident in which a shadowy group of hackers, going by the moniker “The Shadow Brokers,” reportedly infiltrated the US National Security Agency (NSA) and stole advanced cyber weapons that were allegedly used to unleash the WannaCry ransomware in May 2017.
As these weapons become more powerful and autonomous, the associated risks will invariably grow. The prospect of an autonomous drone equipped with Hellfire missiles falling into the wrong hands, for instance, would be disconcerting to us all.
It’s clear that addressing this grave threat will be complex and costly, but the task is pressing. As report co-author Dr. Seán Ó hÉigeartaigh stressed, “We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems, because the risks are real.” Several strategic measures are required, but the following two are urgent:
- There is a need for deeper, transparent and well-intentioned collaboration between academics, professional associations, the private sector, regulators and world governing bodies. This threat transcends the boundaries of any single enterprise or nation. Strategic collaboration will be more impactful than unilateral responses.
- As the report highlighted, we can learn from disciplines such as cybersecurity that have a credible history of developing best practices for handling dual-use risks.
Again, while this is an important step, much more is required. As Musk and his co-signatories wrote to the UN, addressing this risk requires binding international laws. After all, regulations and standards are only as good as their enforcement.
This is an old story; history is repeating itself. As Craig Timberg wrote in The Threatened Net: How the Web Became a Perilous Place, “When they [Internet designers] thought about security, they foresaw the need to protect the network against potential intruders and military threats, but they didn’t anticipate that the Internet’s own users would someday use the Internet to attack one another.”
The Internet’s rapid transformation from a safe collaboration tool into a dangerous place provides an important lesson. If we discount this looming threat, AI’s capabilities, which hold so much promise, will similarly be exploited by those with bad intentions.
Absent a coherent international response, the same technology that is being used to derive deep customer insights, tackle complex and chronic ailments, alleviate poverty and advance human development could be misappropriated and lead to grave consequences.