Artificial Intelligence: Protecting the Oncoming ‘Flywheel Effect’

Published: June 13, 2019



Intriguing artificial intelligence projects will grab news headlines. But the area where the public sector will first realize the greatest impact won’t make the news, because it’s back-office work.

Smart automation, or Robotic Process Automation (RPA), is helping to eliminate manual tasks and allow employees to tackle more important responsibilities. As is often the case, many officials are wary of investing their precious dollars in technologies that have so recently emerged from the basement of tech labs. But, experts say, once agencies recognize that AI and smart automation in fact work, their acceptance will gain steam.
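To make the idea concrete, here is a purely illustrative sketch of the kind of manual back-office task RPA targets: checking each incoming form for missing fields and routing it to the right queue, so a clerk no longer has to triage submissions by hand. Every name and field label below is hypothetical and not drawn from any agency system.

```python
# Hypothetical sketch of RPA-style back-office triage.
# Each submitted "form" is checked for required fields and routed
# to a queue, replacing a manual review step. All field names are invented.

REQUIRED_FIELDS = ("applicant", "case_id", "office")

def route_form(form):
    """Return (queue_name, missing_fields) for a submitted form."""
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        # Incomplete forms go back for correction instead of to a clerk.
        return ("needs_correction", missing)
    return ("ready_for_review", [])

if __name__ == "__main__":
    forms = [
        {"applicant": "J. Doe", "case_id": "A-100", "office": "Denver"},
        {"applicant": "R. Roe", "case_id": "", "office": "Omaha"},
    ]
    for form in forms:
        queue, missing = route_form(form)
        print(form.get("case_id") or "(no id)", "->", queue, missing)
```

Trivial as it looks, this is the pattern behind much of the back-office automation the article describes: a rule that a person used to apply by hand, run automatically over every incoming record.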

“I call it the ‘Flywheel Effect.’ Once a few people begin to understand what smart automation does, it will catch on,” said U.S. Air Force Lieutenant General John Shanahan, director of the Joint Artificial Intelligence Center (JAIC) at the Department of Defense. He told Senate Armed Services Committee members, “We have to have people believe it’s real and not just science fiction.”

Beyond the Back Office

Along with prioritizing AI in the FY2020 budget proposal and in military leaders’ discussions before congressional committees, the White House issued an Executive Order in February 2019 to drive AI adoption across agencies.

The initiative tackles AI from several different angles. First, the Trump Administration wants agencies to prioritize AI research and development. For example, the Department of Defense established the JAIC under the FY2019 National Defense Authorization Act. Its aim is to set a common vision and focus for department-wide AI capability delivery. JAIC operates across the full AI lifecycle to meet current needs, emphasizing near-term prototyping, execution, and operational adoption.

In the Executive Order, the Administration intends to make federal data and computing resources available for R&D, set governance standards, build the AI workforce, and protect the United States’ AI advantage.

Within three months of receiving their annual appropriations, the President expects officials to identify programs to which they can apply the AI R&D priority.

The White House also ordered officials to connect with AI R&D organizations outside of the Federal government. The goal is to listen to the AI research community and find ways to improve data and model inventory documentation to enable discovery and usability.

“As the pace of AI innovation increases around the world, we cannot sit idly by and presume that our leadership is guaranteed,” the White House announced with the order’s release.

Faster than the Speed of Law

To be frank, technology and law work at vastly different paces. Technology is a quick-footed, forward-looking innovator. Law often reacts to what’s already happening or attempts to curb foreseen problems.

For example, advanced autonomy and AI are poised to change the international battlefield in the near future, and Air Force officials intend to push into the lead as the modern battlefield changes.

The Air Force’s Aerospace Systems Directorate is conducting market research and concept analysis to prototype autonomous Unmanned Combat Air Vehicles (UCAVs) as an Early Operational Capability (EOC). The program is named “Skyborg.”

“Investments in this technology by our adversaries, combined with evolving anti-access, area denial strategies, threaten to erode the United States’ current technological and operational advantages,” according to a recent request for information.

Military officials and experts need to consider what that future looks like, Acting Secretary of Defense Patrick Shanahan told the House Armed Services Committee.

“We really need to think what does ‘machine-on-machine’ mean, as we take humans out of the loop,” he said.

On the civilian side, cars and trucks are ready to hit the road with their AI brains. Looking ahead, developers and AI leaders expect them to aid in reducing traffic deaths and to provide enhanced mobility. However, absolute assurance of safety is unattainable because no one—human or non-human—can be prepared for all situations.

“Put simply, the automated vehicle will be confronted with a situation in which there is no good outcome, a situation it cannot be trained to foresee, or which is outside of its capabilities,” GAO wrote in a report on AI.

The Department of Transportation’s Volpe National Transportation Systems Center is prepping for this and other coming problems. In January 2019, it released a Sources Sought notice to research various platforms’ capabilities and their susceptibility to being modified to operate autonomously in an unsafe manner on roadways, waterways, and in the air.

Experts told GAO that regulators should consider safety ratings based on areas of interest, including protection from hackers. At the same time, regulators should not set policies prematurely; instead, they should allow the technology to evolve and then adapt to those changes. Similarly, policies cannot be benchmarked against perfection; the key is trade-offs, among them accuracy, speed of computation, transparency, fairness, and security.

Policy Options and Congressional Legislation

Standing on the brink of AI advances, policy-makers find themselves in a conundrum: a quickly evolving field with unknown boundaries; competing nations jostling to be the leader; talent and expertise at their fingertips; and ethical questions demanding attention.

Both the House and Senate have AI caucuses that intend to balance AI’s risks and rewards to ensure the competitiveness of the U.S. economy, while maintaining important ethical standards.

“AI is a mix of promise and pitfall, which means, as policymakers, we need to be clear-eyed about its potential,” Sen. Rob Portman (R-Ohio) said in March. Congress needs “substantive conversations necessary to make responsible policy about emerging technology and ensure AI works for, and not against, American citizens and U.S. competitiveness.”

Likewise, Acting Defense Secretary Shanahan said AI and similar advanced weapons systems will need to be added to arms agreements, particularly with countries with which the U.S. has no agreements.

The underlying key to success is “top-down pressure and bottom-up innovation,” Lt. Gen. Shanahan said. The bottom-up innovation already exists; it needs an outlet. At the same time, senior officials need to supply resources, authorities, and policies. That balance can allow for evolution without calamity.