
U.S. Department of Defense Unveils Principles of Ethics for Artificial Intelligence

by Glenn Moore

The United States continues to invest in and research artificial intelligence, a technology the Pentagon believes will be pivotal to its global strength. Because of this, the military has formally announced the safeguards it will apply going forward.

Defense Secretary Mark Esper signed off on a five-point AI ethics memorandum that covers everything from research and development of the technology to the data used to explain how AI is implemented.

These five principles, listed below, are based on recommendations from a 15-month study and consultation by the Defense Innovation Board, a panel of science and technology experts drawn from academia, industry, government, and the American public. That rigorous process of analysis and feedback concluded with the adoption of the AI ethical principles, which align with the DoD AI Strategy's objective that the United States military lead in AI ethics and in the lawful use of AI technology.

These new principles will guide both combat and non-combat AI applications, including surveillance and the prevention of mechanical failures.

All Government-Issued Artificial Intelligence Must Obey The Rule of Five

The department’s AI ethical principles cover five major areas:

  1. Responsible. DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedures and documentation.
  4. Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

These principles align closely with ongoing Trump Administration efforts to advance trustworthy AI technologies. Last year, President Trump launched the American AI Initiative, the United States' national strategy for leadership in artificial intelligence, which promotes innovative uses of AI while protecting privacy, civil liberties, and American values.

“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order,” explained Secretary Esper.

“AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior. The adoption of AI ethical principles will enhance the department’s commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations.”

A "Need To Know" Basis

How these principles will shape the use and development of AI within the DoD remains to be seen. DoD Chief Information Officer Dana Deasy said the way the military responds to these principles and guidelines will evolve alongside its use of AI. Deasy added that there is no such thing as an end state; rather, the DoD will continue to learn.

One intriguing principle is No. 5, which requires AI to be “governable,” meaning an automated system can be stopped if unintended behavior occurs. The department also promises to develop AI that is reliable and traceable, with easily auditable systems so problems can be quickly identified and corrected.

“The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior,” the department added in a press release.
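The memorandum describes a goal rather than an implementation, but the idea behind “governable” AI maps onto a familiar engineering pattern: wrap a model in a runtime monitor that records its decisions and can disengage it when outputs fall outside expected bounds. The sketch below is a hypothetical illustration of that pattern only; the class name, the confidence threshold, and the assumed `model.predict(x)` interface are inventions for the example, not anything published by the department.

```python
from dataclasses import dataclass, field

@dataclass
class GovernableWrapper:
    """Hypothetical runtime monitor around an AI model (illustrative only).

    Mirrors the 'governable' principle: detect out-of-bounds behavior and
    disengage the system rather than let it keep acting autonomously.
    """
    model: object                 # assumed to expose predict(x) -> (label, confidence)
    min_confidence: float = 0.9   # assumed threshold for acceptable behavior
    engaged: bool = True
    audit_log: list = field(default_factory=list)

    def decide(self, x):
        if not self.engaged:
            return None           # system deactivated; defer to a human operator
        label, confidence = self.model.predict(x)
        # Traceability: keep an auditable record of every decision.
        self.audit_log.append((x, label, confidence))
        if confidence < self.min_confidence:
            self.disengage(reason="low-confidence output")
            return None
        return label

    def disengage(self, reason: str):
        # Governability: the deployed system can be switched off when it misbehaves.
        self.engaged = False
        self.audit_log.append(("DISENGAGED", reason))
```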

As for the future of AI after the release of these guidelines, look for applications ranging from autonomous vehicles to computer-assisted decision-making. And the United States is not the only country looking to the future of AI: Russia, China, and a range of other countries view AI as a critical emerging technology.

Let the AI technology race begin.



1 comment

SheZero September 5, 2020 - 3:09 pm

IDK about military applications (it makes me think of Robot Wars), but it occurred to me while reading this that AI might make very good judges, as they would be able to see the situation from many different viewpoints and then calculate the most equitable and fair judgement.

