AI in the Federal Health Sector

Published: September 13, 2024

Tags: Federal Market Analysis, Artificial Intelligence/Machine Learning, DHA, HHS, Health IT, Information Technology, Policy and Legislation, Spending Trends, VA

As the federal artificial intelligence market continues to evolve, so does AI in the federal healthcare environment.

Artificial intelligence technologies have the potential to transform the healthcare sector, with applications in informing medical decisions, reducing administrative workloads, optimizing user experience, identifying and reducing fraud, and strengthening early and predictive diagnosis. Federal agencies such as HHS, VA and DHA are exploring AI/ML in these applications, while also navigating AI risk and bias, developing policies to ensure safe and responsible use of AI, and preparing and integrating data (including synthetic data) for AI use.

The following provides a small snapshot of considerations when it comes to AI and the federal health IT environment. 

FY 2025 Budget

The FY 2025 federal budget request does not provide a specific breakout of AI funding for federal agencies; however, there are several mentions of funds for specific AI-related programs at federal health agencies. For example, HHS requests $6M to increase the oversight of the department’s Office of the Chief AI Officer to coordinate HHS AI efforts and mitigate risk. Recent activities from the office include coordinating 156 AI use cases across the department, deploying a testbed Large Language Model (LLM) for internal operations, and developing guidance for internal use of third-party AI solutions.

At the VA, the department is requesting $20M to pursue four main AI workstreams: establish AI governance, prototype high-priority AI use cases, create infrastructure to develop and integrate AI across senior leadership, and establish continuous evaluation of AI solutions along with guidelines addressing improper use of AI.

Policy and Legislation 

The release of the AI executive order (EO) last year set in motion federal health agency actions to address AI safety and risk concerns. The EO calls on federal health agencies to prioritize AI advancement in developing personalized immune-response profiles for patients, improving healthcare-data quality, and improving the quality of veterans’ healthcare.

Among the EO’s other health-related directives is the creation of an HHS task force to develop and implement policies centered on AI adoption in the health sector within 365 days of the task force’s creation. Since then, the HHS task force has published guidelines to address the bias and risks present in the use of healthcare algorithms. 

Furthering the pursuit of managing AI risk, the administration announced in December 2023 voluntary commitments from more than 25 healthcare companies to develop AI solutions to optimize healthcare, ensure AI outcomes are fair and appropriate, increase transparency on AI-generated content, adhere to a risk management framework, and develop AI responsibly.

In January 2024, the Office of the National Coordinator for Health Information Technology at HHS published a finalized rule, called Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, or HTI-1. This rule requires developers to improve transparency about AI and algorithms used in decision-making as part of the certification process.

AI Contract Spending at HHS, VA and DHA

A look at obligations that Deltek has identified as AI-related among three federal health mission agencies reveals an increase in spending at HHS, VA and DHA from FY 2021 to FY 2023. Interestingly, spending dipped at all three agencies in FY 2022. One possible explanation is that COVID-19 accelerated AI use within the agencies in FY 2021, followed by renewed interest in AI, particularly with the public emergence of generative AI, in FY 2023. Keep in mind that the spending below reflects AI transactions across each entire agency and is not broken out by those directly related to health.

Sources: FPDS, Deltek

Opportunities and Challenges of GenAI in Health 

In a recent report, the GAO examined the impact of generative AI (GenAI) in the healthcare industry. The federal government watchdog found that GenAI can be applied to several use cases in health, including drug development, clinical documentation, clinical trials, and medical imaging.

Nonetheless, several challenges exist with GenAI in healthcare, many of which apply to AI and AI-enabled technologies broadly. Challenges include falsified information when GenAI models “hallucinate” and produce inaccurate outputs. Bias in the data may also lead to erroneous outputs for underrepresented populations. Another challenge is data privacy and navigating laws surrounding the protection of healthcare data to avoid exploitation of patient information. Lastly, GenAI requires large volumes of data for accurate results, yet the fragmentation of health data across systems and formats, even among federal health entities, makes it difficult to obtain the data needed for the models.

The report concludes with three policy questions for consideration: 

  • What safeguards are needed to address false information and bias in generative AI in health care settings?
  • What steps should be taken to protect patient data used to train generative AI models from unauthorized disclosures and comply with relevant privacy laws?
  • What steps should be taken to improve access to health care datasets for training generative AI models?

Sample Programs

Though many federal agencies are not yet scaling AI technologies, several are piloting AI, including federal health entities. The following is a sampling of some of those programs: 

  • GenAI at ARPA-H: The agency is piloting GenAI technologies to process volumes of data related to healthcare program areas to help shape and launch research programs. 
  • TrialGPT at NLM: NIH’s National Library of Medicine is piloting a large language model prototype called TrialGPT to help predict patient eligibility for clinical trials based on patient notes.
  • Interoperability at NAIRR: NIH is working alongside Energy and the NSF to provide datasets to the newly established National Artificial Intelligence Research Resource (NAIRR) to contribute to the AI research ecosystem. Recently proposed legislation in the Senate, called the GUIDE AI Act, looks to further these efforts by establishing a data exchange center for biomedical data through NIH, NLM and NAIRR.
  • VA Tech Sprints: Mandated by the AI EO, the department instituted a competition to develop AI solutions to address healthcare worker burnout. The sprint contains two tracks: speech-to-text solutions to take notes for clinical staff and a tool that can reduce time in integrating non-VA medical records into patients’ VA records. 

Federal health agencies are slated to continue exploring AI technologies in health care, particularly with the prompt of policies such as the AI EO. Earlier this week, CMS issued a Request for Information inviting interested parties in the healthcare community to submit information on existing AI technologies supporting health services and delivery. From there, CMS plans to invite organizations to a “demo day” to help the agency better understand AI use cases in the health sector.