The U.S. Department of Defense has articulated an ambitious vision and strategy for artificial intelligence (AI) with the Joint Artificial Intelligence Center (JAIC) as the focal point, but the DoD has yet to provide the JAIC with the visibility, authorities, and resource commitments needed to scale AI and its impact across the department, according to a new RAND Corporation report. The DoD’s AI strategy also lacks baselines and metrics to meaningfully assess progress, researchers concluded.
“The DoD recognizes that AI could be a game-changer and has set up organizational structures focusing on AI,” said Danielle C. Tarraf, lead author of the report and a senior information scientist at RAND, a nonprofit, nonpartisan research organization. “But currently the JAIC doesn’t have the authorities or resources it needs to carry out its mission. The authorities and resources of the AI organizations within the Services are also unclear.”
If the Pentagon wants to get the maximum benefit from artificial intelligence-enhanced systems, it will need to improve its posture along multiple dimensions, according to the report. The study assesses how well the Defense Department is positioned to build or acquire, test, and sustain—on a large scale—technologies falling under the broad umbrella of AI.
The study frames its assessment in terms of three categories of DoD AI applications: enterprise AI, such as AI-enabled financial or personnel management systems; operational AI, such as AI-enabled targeting capabilities that might be embedded within an air defense system such as PATRIOT; and mission-support AI applications, such as Project Maven, which aims to use machine learning to assist humans in analyzing large quantities of imagery from full-motion video data collected by drones.
The field is evolving quickly, with the algorithms driving the current push in AI optimized for commercial, rather than Defense Department, use. Moreover, the current state of AI verification, validation, and testing is nowhere close to ensuring the performance and safety of AI applications, particularly where safety-critical systems are concerned, researchers found.
“Many different technologies underpin AI,” Tarraf said. “The current excitement, and hype, are due to leap-ahead advances in Deep Learning approaches. However, these approaches remain brittle and artisanal—they are not ready yet for prime time in safety-critical systems.”
The department lacks clear mechanisms for growing, tracking, and cultivating personnel with AI skills, even as it faces a tight job market for such talent. The department also faces multiple data challenges, including a lack of data. “The success of Deep Learning is currently predicated on the availability of large, labeled data sets. Pursuing AI on a department-wide scale will require DoD to fundamentally transform its culture into a data-enabled one,” Tarraf said.
Tarraf and her colleagues offer a set of 11 strategic and tactical recommendations. Among them: The department should adapt its AI governance structures to align authorities and resources with the mission of scaling AI. Also, the JAIC should develop a five-year strategic roadmap—backed by baseline measurements—to execute the mission of scaling AI and its impact.
DoD also should advance the science and practice of verification, validation, and testing of AI systems, working in close partnership with industry and academia. The department should also recognize data as a critical resource, continue to create practices for data collection and curation, and increase data sharing while resolving issues related to protecting data after sharing and during analysis and use.
The report recommends that DoD pursue opportunities to leverage new advances in AI, with particular attention to verification, validation, testing and evaluation, and in line with ethical principles. However, it is important for the department to maintain realistic expectations for both performance and timelines in going from demonstrations of the art of the possible to deployments at scale, researchers said.
Other authors of the report, “The Department of Defense Posture for Artificial Intelligence: Assessment and Recommendations,” are William Shelton, Edward Parker, Brien Alkire, Diana Gehlhaus Carew, Justin Grana, Alexis Levedahl, Jasmin Leveille, Jared Mondschein, James Ryseff, Ali Wyne, Daniel Elinoff, Ed Geist, Benjamin Harris, Eric Hui, Cedrick Kenney, Sydne Newberry, Chandler Sachs, Peter Schirmer, Danielle Schlang, Victoria Smith, Abbie Tingstad, Padmaja Vedula and Kristin Warren.
Research for the congressionally mandated report was sponsored by the Department of Defense Joint Artificial Intelligence Center and was conducted within the Acquisition and Technology Policy Center of the RAND National Defense Research Institute. RAND NDRI is a federally funded research and development center sponsored by the Office of the Secretary of Defense, the Joint Staff, the Unified Combatant Commands, the Navy, the Marine Corps, the defense agencies and the defense intelligence community.