The House IT Subcommittee Seeks Understanding and Solutions for AI
Published: March 14, 2018
In a three-part hearing series, the House Information Technology (IT) Subcommittee, led by Chairman Rep. William Hurd (R-TX), will tackle how the government, academia, and the private sector are using artificial intelligence (AI) and what regulations should be put in place to ensure the effectiveness of AI within U.S. technology.
The legislative branch is seeking to understand AI and its uses, challenges, and potential advantages and hindrances within the government. The House IT Subcommittee is holding a three-part hearing series titled, “GAME CHANGERS: ARTIFICIAL INTELLIGENCE.” Part I took place on February 14th, with leaders in academia and industry describing the foundational elements of AI and its use in those market sectors. Part II took place on March 7th, with testimony from DoD, GSA, NSF and DHS on how the government is leveraging the emerging technology and the challenges of implementing AI throughout the federal world. Chairman Hurd envisions the hearings leading to more federal use of AI, stating, “I want to get to a point where we can be making decisions within the government on where to spend dollars or resources based on the analysis of large volumes of data.”
Members of industry testifying at the first hearing stated that AI is positioned to revolutionize innovation and spur improved decision making and processes across multiple disciplines. For instance, Dr. Amir Khosrowshahi, Vice President and Chief Technology Officer of Intel Corporation’s Artificial Intelligence Products Group, stated that the company is working on AI solutions for the healthcare, agricultural and law enforcement sectors. Dr. Ian Buck, Vice President and General Manager of Accelerated Computing at NVIDIA, stated that the company is working with industry partners to develop faster cybersecurity systems and to train federal employees through the use of AI.
Industry and academia called on government to encourage and enable the use of AI without placing burdensome restrictions on the maturing technology. Likewise, industry urged government to be more transparent and open with its data so that private sector AI solutions can be applied to large federal data sets for research and innovation. Moreover, the government was called upon to set the stage for more AI workers by encouraging STEM programs and research programs within NSF that work closely with academia. Finally, witnesses requested that government set guidelines to help society adapt to AI and to discourage its undesirable uses and impacts, such as risks stemming from a lack of transparency, security and accountability.
Witnesses at the second hearing explained how government is involved with AI. For instance, Mr. Keith Nakasone, Deputy Assistant Commissioner of Acquisition Operations, Office of Information Technology Category (ITC) at GSA, informed policymakers of the various acquisition methods the agency has set up for government to leverage this new technology. In addition to Schedule 70 and associated programs such as FAStlane and Startup Springboard, the EIS, Alliant GWACs, STARS II and VETS 2 make AI available under their respective vehicles. GSA is also utilizing AI within its own processes. For instance, GSA has piloted the Solicitation Review Tool (SRT), which uses natural language processing, text mining and machine learning algorithms to check publicly posted solicitations for compliance with Section 508. GSA anticipates expanding the tool to predict noncompliance with other federal regulations, such as cybersecurity requirements.
Moreover, Dr. Douglas Maughan, Division Director of the Cyber Security Division within the DHS Science & Technology Directorate, informed committee members that his office has been applying AI for cyber protection purposes. DHS is using AI and machine learning techniques for purposes including “predictive analysis for malware evolution; enabling defensive techniques to be established ahead of a future malware variant; detecting anomalous network traffic and behaviors to inform cyber defensive decision making; and helping identify, categorize and score various adversarial Telephony Denial of Service (TDoS) techniques.” As an example of application, DHS deployed TDoS protection at a large U.S. bank to screen and block calls, including robocalls and fraudulent calls, based on voice firewall policies. The technology has since blocked more than 120,000 calls per month.
Dr. John Everett, Deputy Director of the Information Innovation Office at DARPA, believes that the next wave of AI will build on the technology’s advances over the past several decades to produce systems that are more aware of context and that interact more effectively with people through common sense reasoning and natural language processing.
Challenges in AI, according to government witnesses, include the acceptance and adoption of the emerging technology by agencies; Congress can help set the mindset for enhanced use of AI. Harnessing government data in order to make sense of the information is an additional hindrance to implementing AI across the federal spectrum. Moreover, government must watch for a potential widening gap between rapidly changing technology capabilities and the government’s ability to adopt them.
Stay tuned. According to Hurd, the third part of the subcommittee’s three-part series on AI is scheduled for April 2018. There, lawmakers will discuss regulatory and non-regulatory solutions to ensure AI is both maturing and being used correctly, keeping the United States “number one in this technology.”