Artificial Intelligence Could Be As Powerful As A Nuclear Weapon
AI has the potential to play a significantly positive role in our future, but developments in Artificial Intelligence (AI) also carry the potential for harm. Like nuclear weapons, AI weaponry could inflict mass damage. The open-source development of AI, however, can enable democratic supervision of the technology, a sharp contrast with the secrecy that concealed the development of nuclear weapons.
The connection between AI and nuclear weaponry is not new. In fact, AI has been part of the nuclear deterrence architecture for decades.
AI refers to systems able to perform tasks that normally require human intelligence, such as visual and speech recognition, decision-making and, perhaps one day, thinking. As early as the 1960s, the United States and the Soviet Union saw that the nascent field of AI could play a role in the development and maintenance of their retaliatory capability, that is, the capability to respond to a nuclear attack, even one launched by surprise. “AI can be more dangerous than nuclear weapons in the future. People have suffered from the 1945 trauma and been constantly trying to ensure the regulations against nuclear weapons and the expansion of nuclear power.... the lack of regulations and difficulty to set rules for the development of artificial intelligence render AI to be more dangerous than nuclear weapons. Thus new rules and means of supervision are needed for AI development to make sure it is for the betterment of human race, rather than endangering us,” says academic researcher Xuanbing Cheng in a recent paper published by Stanford University.
Nuclear weapons ultimately led to laws and agreements to prevent their spread and use, partly through arms control, and many experts now believe AI will require a similar regulatory regime.
Progress in computing is making it possible for machines to accomplish many tasks that once required human effort or were considered impossible altogether, and ethical questions are now emerging globally in the field of AI.
Self-driving cars. Automated medical diagnoses. Algorithms that decide where to deploy police officers. Robotic soldiers. AI has the unsettling potential to transform modern life over the next few decades.
These new capabilities could spur arms races or increase the likelihood of states escalating to nuclear use, whether intentionally or accidentally, during a crisis. On the other hand, incorporating AI into early warning systems could create time efficiencies in nuclear crises: AI could improve the speed and quality of information processing, giving decision-makers more time to react.
Albert Einstein described the universe as “finite but unbounded.” That definition could fit AI’s future applications. Perhaps the only comparable disruptive technology was nuclear weapons. These weapons irreversibly disrupted and changed the nature, conduct and character of the politics of war.
The reason is that there are no winners: only victims and losers would emerge after a thermonuclear holocaust killed the combatants.
Nuclear weapons provoked heated debate over their moral and legal implications, and over when and how they could or should be employed, from a counterforce first strike against military targets to “tactical” use intended to limit escalation or rectify conventional arms imbalances.
- Nuclear weapons affected national security; AI will almost certainly affect the broader sweep of society, much as the industrial and information revolutions did, with both positive and negative consequences. It was the destructive power of nuclear weapons that made them so significant.
- AI needs an intermediary link to exercise its full disruptive power. As societies became more advanced, however, those two earlier revolutions had the unintended consequence of creating greater vulnerabilities, weaknesses and dependencies subject to major and even catastrophic disruption.
COVID-19, massive storms, fires, droughts and cyber attacks are unmistakable symptoms of the power of the new MAD: Massive Attacks of Disruption. AI is a potential multiplier, capable of exploiting inherent societal weaknesses and vulnerabilities and creating new ones, but also of helping to prevent their harmful effects.
Unlike nuclear weapons, AI, if used properly, will have enormous and even revolutionary benefits.
A permanent 'AI Oversight Council' with substantial research funding to examine AI’s societal implications should be created. Membership should be drawn from the public and from the legislative and executive branches of government. Funding should go to the best research institutions, another parallel with nuclear weapons. This council would also coordinate, liaise and consult with the international community, including China, Russia, allies, friends and others, to widen the intellectual aperture and serve as a confidence-building measure.
By applying the lessons learned from studying the nuclear balance, AI’s potentially destructive consequences can be mitigated. More importantly, if properly used, AI offers a nearly unbounded opportunity to advance the public good.
In the near future, AI could be used to conduct remote sensing operations in areas that were previously hardly accessible to manned and remotely controlled systems, such as the deep sea. Autonomous unmanned systems such as aerial drones or unmanned underwater vehicles could also be seen by nuclear weapon states as an alternative to intercontinental ballistic missiles, manned bombers and submarines for nuclear weapon delivery.
According to Elon Musk, AI represents a serious danger to the public and needs regulatory oversight from a public body. Touching on the nuclear weapon analogy, he said: "The danger of AI is much greater than the danger of nuclear warheads, by a lot, and nobody would suggest that we allow anyone to just build nuclear warheads if they want - that would be insane."