Artificial Intelligence & The Ethics Of War
The United States Department of Defense (DoD) has recently published its AI ethical use principles, recognising that AI will transform war and national security.
If the military manages to adopt, implement, and follow the guidelines, it would leap into an increasingly rare position as a leader in establishing standards for the wider tech world.
The DoD recommends five AI ethics principles:
- Responsible is the most straightforward: “Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of AI systems.” This is similar to the Defense Department's 2012 doctrine.
- Equitable: avoiding “unintended bias” in algorithms that might be used in combat or elsewhere. This principle prescribes care in constructing the datasets used in machine learning, so that AI systems do not, for example, reflect or contribute to racial bias.
- Traceable: technicians in the Defense Department must be able to go back through any output or AI process and be able to see how the software reached its conclusion.
- Reliable: defining an “explicit, well-defined domain of use” and rigorously testing the system against that domain.
- Governable: built with the “ability to detect and avoid unintended harm or disruption.” The software has to be able to stop itself if it sees that it might be causing problems.
The policy document emulates similar efforts in the private sector, where tech companies such as Google, Microsoft, and Facebook have published their own ethical principles for AI. In the case of Google, a number of its engineers protested the company's involvement with the US Defense Department in developing AI applications for intelligence gathering using drones, known as Project Maven.
The DoD's document advocates that the discussion of AI ethics should happen simultaneously with technological advances and innovation.
“Ethics cannot be ‘bolted on’ after a widget is built or considered only once a deployed process unfolds, and policy cannot wait for scientists and engineers to figure out particular technology problems. Rather, there must be an integrated, iterative development of technology with ethics, law, and policy considerations happening alongside technological development.”
Although these efforts are at an early stage, if the Pentagon ultimately does use AI in a dangerous way in combat, it will not be because it has failed to think carefully about guidelines for the US military to follow.
DefenseOne: US Defense Innovation Board: Image: Nick Youngson