CISA’s Tips on Securely Integrating AI in OT Point the Way for Contractors

Published: December 05, 2025

Federal Market Analysis | Artificial Intelligence/Machine Learning | Critical Infrastructure Protection | Cybersecurity | CISA | Internet of Things | Smart Infrastructure

Federal AI solutions providers should proactively build security into their offerings and prepare for specific contract requirements.

The Cybersecurity and Infrastructure Security Agency (CISA), in collaboration with international cybersecurity partner agencies, recently published guidance that provides principles for critical infrastructure owners and operators to securely integrate AI into Operational Technology (OT) environments. The guidance also offers ideas that contractors and AI solutions providers can use to design and deploy their solutions with security in mind and to remain competitive in the federal marketplace.

Key Cybersecurity Principles for Integrating AI

CISA’s guidance outlines four key principles critical infrastructure owners and operators can follow to leverage the benefits of AI in OT systems while reducing risk.

  1. Understand AI: Educate personnel on AI risks, impacts, and secure development lifecycles.
  2. Assess AI Use in OT: Evaluate business cases, manage OT data security risks, and address immediate and long-term integration challenges.
  3. Establish AI Governance: Implement governance frameworks, test AI models continuously, and ensure regulatory compliance.
  4. Embed Safety and Security: Maintain oversight, ensure transparency, and integrate AI into incident response plans.

AI Vendor Management Provisions

To address the unique vulnerabilities that come with implementing AI, in addition to tailored data protections and network security controls, CISA suggests that OT owners procuring AI solutions pursue the following vendor arrangements:

  • Require a Software Bill of Materials (SBOM) for AI components
  • Demand transparency on where AI models are hosted
  • Negotiate contractual agreements detailing AI features
  • Establish explicit data usage policies
  • Require vendor notification of AI vulnerabilities or improper outputs

Implications for Federal Contractors and Solution Providers

The guidance carries implications for federal contractors and solution providers – especially around creating and deploying secure AI solutions – that may require them to adapt their business practices and solution development processes to remain competitive in the federal marketplace and beyond.

  • Compliance and Standards Alignment – Design solutions that align with the NIST AI Risk Management Framework, follow CISA's Secure by Design principles, and comply with emerging AI technical standards (ETSI TR 104 128, TS 104 223, TR 104 048). Solutions providers should implement the CISA Cybersecurity Performance Goals (CPGs), particularly 2.F for network controls.
  • Product Development Requirements – Build AI systems following secure development lifecycle principles. Implement explainable AI (XAI) features where it is feasible. Solutions should be designed to allow for graceful failure without disrupting critical operations and enable operators to disable AI features when needed. Finally, solutions should provide clear audit trails of AI-driven decisions.
  • Documentation and Transparency – Producers should provide comprehensive SBOMs including AI components, document their AI model supply chain (development, hosting, dependencies), disclose any AI-specific vulnerabilities and risks, maintain detailed logs of AI system operations, and offer clear documentation on security responsibilities in shared models.
  • Deployment Considerations – Providers should support on-premises deployment options to reduce internet dependency, enable push-based data architectures that don't require persistent OT access, implement proper segmentation that prevents AI from becoming a persistent attack path, provide tools for monitoring AI model drift and performance degradation, and include failsafe mechanisms that revert to non-AI operations.
  • Risk Management – Address AI-specific attack vectors (adversarial inputs, data poisoning, model theft), implement controls against prompt injection attacks, provide mechanisms to detect and respond to AI hallucinations, consider reliability issues when AI makes safety-critical decisions, and account for increased system complexity and attack surface.
  • Operational Support – Provide training materials on AI fundamentals and threat modeling, support integration with existing security frameworks and tools, enable continuous model validation and refinement, offer monitoring dashboards integrated with existing HMI views, and support human-in-the-loop decision-making architectures.
  • Data Handling – To bolster security, implement strict data residency and sovereignty controls, protect intellectual property contained in process data, and support encryption and access controls for all AI-related data.
  • Incident Response – Include AI-specific scenarios in incident response plans, provide mechanisms for rapid AI system bypass or rollback, and support forensic analysis of AI decision-making.
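Several bullets above converge on one pattern: AI features that operators can disable, that fail gracefully to a non-AI baseline, and that leave an audit trail of every decision. The sketch below shows that pattern under illustrative assumptions; the setpoint logic, sanity bounds, and log format are hypothetical, not drawn from the CISA guidance.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_decision_audit")

@dataclass
class Decision:
    value: float
    source: str  # "ai" or "fallback"

def rule_based_setpoint(sensor_reading: float) -> float:
    """Deterministic non-AI baseline the system can always revert to."""
    return min(max(sensor_reading * 0.95, 10.0), 90.0)

def recommend_setpoint(sensor_reading: float,
                       ai_model,          # any callable; may be None
                       ai_enabled: bool) -> Decision:
    """Prefer the AI model, but fail gracefully to the rule-based path."""
    if ai_enabled and ai_model is not None:
        try:
            value = float(ai_model(sensor_reading))
            # Sanity bound: out-of-range outputs are treated as failures.
            if 0.0 <= value <= 100.0:
                audit_log.info("source=ai input=%s output=%s",
                               sensor_reading, value)
                return Decision(value, "ai")
        except Exception as exc:
            audit_log.warning("ai failure, reverting: %s", exc)
    value = rule_based_setpoint(sensor_reading)
    audit_log.info("source=fallback input=%s output=%s",
                   sensor_reading, value)
    return Decision(value, "fallback")

# Operators can flip ai_enabled=False to disable AI features entirely.
print(recommend_setpoint(50.0, lambda x: x + 1.0, ai_enabled=True).source)  # → ai
print(recommend_setpoint(50.0, None, ai_enabled=True).source)               # → fallback
```

The audit log doubles as the "clear audit trail of AI-driven decisions" the guidance asks for, and the try/except path is the rapid-bypass mechanism an incident response plan can invoke.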
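The deployment bullet also calls for tools to monitor AI model drift and performance degradation. One minimal approach, sketched below under illustrative assumptions, compares a rolling window of prediction errors against a commissioning-time baseline; the window size and tolerance multiplier are hypothetical tuning choices, not values from the guidance.

```python
from collections import deque

class DriftMonitor:
    """Flag degradation when recent error exceeds the baseline."""

    def __init__(self, baseline_mae: float, window: int = 100,
                 tolerance: float = 2.0):
        self.baseline_mae = baseline_mae   # error measured at commissioning
        self.tolerance = tolerance          # alert multiplier over baseline
        self.errors = deque(maxlen=window)  # rolling window of recent errors

    def record(self, predicted: float, actual: float) -> None:
        self.errors.append(abs(predicted - actual))

    def drifted(self) -> bool:
        if not self.errors:
            return False
        recent_mae = sum(self.errors) / len(self.errors)
        return recent_mae > self.tolerance * self.baseline_mae

monitor = DriftMonitor(baseline_mae=1.0, window=10)
for _ in range(10):
    monitor.record(predicted=100.0, actual=100.5)  # small errors: healthy
print(monitor.drifted())  # → False
for _ in range(10):
    monitor.record(predicted=100.0, actual=105.0)  # large errors fill window
print(monitor.drifted())  # → True
```

A drifted() signal like this is what would feed the monitoring dashboards and continuous model validation called out under Operational Support.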

Sustaining Competitiveness

CISA’s guidance signals that federal agencies expect AI solutions for OT environments to prioritize security and safety equally with functionality. Contractors should view AI security not as an add-on but as fundamental to product design, with particular emphasis on maintaining the reliability and safety of critical infrastructure systems.

Federal contractors who proactively address these principles will be better positioned for critical infrastructure contracts requiring AI integration, including integration with Department of Defense and civilian agency OT systems. Further, these companies will ensure they remain compliant with emerging federal AI regulations and meet agency "Secure by Design" procurement requirements.