Trading places: ‘AI in the loop’ takes center stage in cyber defense strategies
A critical change of view
Until recently, the prevailing view treated artificial intelligence (AI)-specific cyber risk, and the use of AI for cyber defense, as a generic ‘emerging tech’ add-on. However, the new National Institute of Standards and Technology (NIST) Cyber AI Profile (a draft of which is out for comment until June 2026) pushes Chief Information Security Officers (CISOs) to treat AI risk and AI in cyber defense as a first‑level design consideration, expressed inside the NIST Cybersecurity Framework (CSF) 2.0.
Past guidance on AI from many industry followers emphasized high‑level enablement, ‘human-in-the-loop’ security and a loosely defined governance environment. In stark contrast, the NIST profile now gives AI and cybersecurity executives a more prescriptive, deterministic control‑level blueprint that should force changes in strategies, priorities, investment plans and operating models.
The draft Cyber AI Profile applies CSF 2.0’s ‘Govern–Identify–Protect–Detect–Respond–Recover’ functions directly to AI use cases, and specifies AI‑tailored cybersecurity outcomes. This is a step change, explicitly positioning AI as a leading component of defensive measures, while not replacing the CSF or NIST’s AI Risk Management Framework (RMF).
The draft has three focus areas:
- Governing AI systems.
- Using AI to enhance cybersecurity or cyber defense at ‘machine speed’ (faster than humans alone can achieve).
- Managing AI‑enabled threats.
We also expect to see, in short order, mappings to the AI RMF and other NIST resources to anchor AI cyber controls.
Implications for CISOs
Too often, industry guidance on the implications of AI for CISOs is tied solely to strategic themes (around AI trust and information security management, for example). Likewise, recent predictions around cyber threats stress deepfake attacks, insecure AI‑generated code and adaptive human protection. While these are important, they stop short of a specific CSF‑aligned set of control requirements for AI assets, and of prescribing CSF‑based control profiles for AI systems (both proprietary systems and those provided by third parties).
Also, analyst reports have generally framed AI for security as a technology‑led enhancement (such as in extended detection and response [XDR] with AI analytics, or AI‑driven security operations center [SOC] automation), and warned about ‘bringing your own AI’ and embedded AI in software as a service (SaaS) offerings. Again, these have neglected to include a structured mapping to NIST CSF 2.0 or AI RMF outcomes. The draft profile closes this gap by treating AI models, data pipelines and AI‑enabled attacks as first‑order CSF objects that need explicit governance, asset identification, protection, detection and response patterns.
Chartis’ advice: make the shift
Although the NIST Cyber AI Profile is still in draft form and out for public comment, Chartis recommends that CISOs begin aligning their cyber defense strategies with the principles of this guidance in early 2026, in the following ways:
- Recast AI guidance from 2025 into CSF 2.0 language, using the Cyber AI Profile as the canonical reference for AI‑specific outcomes, control objectives and mappings to AI RMF and other NIST artifacts. This means rewriting security strategy decks, roadmaps and risk registers so that ‘AI risk’ is expressed through CSF functions with clear ownership, metrics and planned maturity targets.
- Elevate AI governance, resilience and compliance. In its recent study of AI model governance, Chartis observed that AI is no longer peripheral. These technologies have moved from experimental pilots to core differentiators in enterprise governance, risk and compliance (GRC) and cybersecurity technology stacks. ‘AI in the loop’ is switching places with ‘human in the loop’ in several areas, including regulatory, risk and open-source intelligence, continuous compliance, cyber threat management, and financial crime and entity resolution. This is helping organizations improve efficiency by automating repetitive work, enhancing consistency and surfacing actionable insights from fractured data landscapes. The emergence of AI-assisted risk and compliance co-pilots will serve to differentiate platforms not by bolted-on features but by deep integration within enterprise software, scalable analytics and process automation.
Chartis’ many conversations in 2025 revealed that good AI governance is an accelerant for trusted enterprise AI deployment, adoption and control. AI governance programs and technologies are essential for enabling systematic expansion of AI capabilities, ensuring safe and efficient operations that evolve with business workflows. Without governance frameworks, data and workflow integrations that focus on compliance, data quality, ethical use and continuous monitoring, AI deployments will remain siloed and suboptimal overall, and will fall short of achieving a return on investment that justifies the bets placed on these transformative technologies.
- Treat AI components as first‑class assets. Market guidance has often lumped AI under ‘applications’ or ‘data’. But NIST now expects explicit identification and classification of AI systems, models, training data, prompts and external AI services as distinct assets with their own threats and dependencies. This means that CISOs must treat classic GRC, machine learning operations (MLOps), model risk management (MRM) and cyber risk quantification (CRQ) as composable systems that combine configuration management databases (CMDBs), risk and control libraries, asset inventories, performance monitors and data catalogs to tag AI models and AI providers. They must also link these systems into third‑party risk workflows and supply‑chain risk assessments that earlier playbooks only addressed generically.
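To make the idea of AI components as first-class inventory objects concrete, the record below sketches what a distinct asset entry for an AI model, dataset, prompt or external service might look like, with a feed into third-party risk review. The field names, values and helper function are illustrative assumptions, not taken from the NIST draft.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class AIAsset:
    """One inventory record treating an AI component as a first-class asset.
    All field names here are hypothetical, chosen for illustration."""
    asset_id: str
    kind: str                 # e.g. "model", "training_data", "prompt", "external_service"
    owner: str                # accountable team or role
    provider: str             # "internal" or the name of a third party
    criticality: str          # e.g. "high", "medium", "low"
    depends_on: List[str] = field(default_factory=list)  # dependency edges to other assets

def third_party_assets(inventory: List[AIAsset]) -> List[AIAsset]:
    """Feed for TPRM and supply-chain review: every externally provided AI asset."""
    return [a for a in inventory if a.provider != "internal"]
```

In practice such records would live in a CMDB or data catalog rather than in code; the point is that each AI component carries its own ownership, classification and dependency data, so it can be routed into the third-party and supply-chain workflows the profile expects.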
- Move from generic controls to AI‑specific ‘protect/detect’ design. While advice from the classic analyst community calls out GenAI‑driven phishing, deepfakes and insecure AI code, the Cyber AI Profile adds expectations for controls around model integrity, data poisoning, prompt injection and misuse of AI outputs, integrated with traditional security tooling. CISOs will need to specify new safeguards (e.g., model access control, content filters, robust logging of AI interactions, input/output validation and human‑in‑the‑loop checks) rather than assuming that existing data loss prevention, identity and access management (IAM) and endpoint detection and response (EDR) controls are sufficient.
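The safeguards listed above (input validation, output filtering, robust logging of AI interactions) can be pictured as a thin guardrail layer wrapped around a model call. The sketch below is a minimal illustration under stated assumptions: the injection patterns, redaction rule, log format and function names are all hypothetical, and a production deployment would use maintained detection tooling rather than a static regex list.

```python
import re
from datetime import datetime, timezone

# Hypothetical prompt-injection patterns -- illustrative only.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]

AUDIT_LOG = []  # stand-in for an append-only audit store

def _log(event: str, detail: dict) -> None:
    """Robust logging of AI interactions (supports CSF 'Detect' outcomes)."""
    AUDIT_LOG.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "event": event, **detail})

def validate_input(prompt: str, max_len: int = 4000) -> None:
    """Input validation: size limit plus naive injection screening."""
    if len(prompt) > max_len:
        raise ValueError("prompt exceeds size limit")
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            _log("blocked_input", {"reason": pat.pattern})
            raise ValueError("possible prompt injection detected")

def filter_output(text: str) -> str:
    """Output filtering: redact anything resembling a credential."""
    return re.sub(r"(?i)(api[_-]?key\s*[:=]\s*)\S+", r"\1[REDACTED]", text)

def guarded_call(model_fn, prompt: str) -> str:
    """Wrap a model call as validate -> invoke -> filter -> log."""
    validate_input(prompt)
    safe = filter_output(model_fn(prompt))
    _log("completed", {"prompt_chars": len(prompt), "output_chars": len(safe)})
    return safe
```

The design point is that every AI interaction passes through the same validate/filter/log chokepoint, so the controls are enforced and evidenced centrally rather than re-implemented per application.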
- Operationalize ‘AI for cyber’ with risk guardrails. Analysts have encouraged firms to leverage AI in SOCs and code security, but largely for efficiency. NIST’s profile frames AI‑enabled cyber defense as a mission‑assurance and proactive risk‑management capability (which Chartis labels as ‘AI in the loop’) that must itself be governed and monitored for unintended consequences. CISOs will need to define explicit risk criteria, metrics and oversight mechanisms for AI‑driven detection and response (e.g., thresholds for AI‑suggested actions, rollback procedures and model performance/drift monitoring in security analytics).
- Integrate AI‑enabled threats into threat intel and incident response. The Cyber AI Profile expects deepfake attacks, AI-enabled phishing and other methods to be integrated into standard threat intelligence. CISOs should revise playbooks to cover AI‑specific incidents (e.g., model exfiltration, prompt‑based data leakage, compromised training data, manipulated AI agents, etc.) and align them with CSF respond/recover expectations and AI RMF mitigation activities.
- Align AI cyber controls with regulatory and assurance expectations. Analyst guidance often flags regulatory change and legal exposure, but NIST’s AI profile is likely to become a reference point for regulators, auditors and due diligence teams when they assess AI‑related cyber controls. CISOs will need to map existing AI governance frameworks and internal policies to the Cyber AI Profile to demonstrate alignment in audits, vendor assessments and board reporting. This is especially true in regulated sectors where other NIST community profiles already matter.
A practical step forward
Crucially, for CISOs to achieve this shift successfully, Chartis believes that their talent, plans and metrics must also evolve in several important ways:
- Program architecture and portfolios. Security program roadmaps that previously listed ‘GenAI risk’ as a single initiative under innovation or threat management will need to be decomposed into CSF‑aligned workstreams for each AI focus area (securing AI, AI for security, AI‑enabled threats), each with defined deliverables and owners. This will change portfolio planning, funding allocations and sequencing, moving some AI work out of generic ‘emerging tech’ buckets and into core identity, data, application and resilience programs.
- Metrics and reporting. Instead of only tracking AI‑related incidents or pilot counts, CISOs will be expected to report CSF‑style AI maturity and coverage metrics (e.g., percentage of AI systems with completed threat models, proportion of AI providers assessed under third-party risk management [TPRM], coverage of AI activity logging, etc.). This will complement, and in some cases replace, the high‑level adoption and risk narratives that some industry experts currently emphasize in scorecards to boards and regulators.
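The CSF-style coverage metrics mentioned above are straightforward to compute once AI assets are inventoried with control flags. The sketch below assumes a flat inventory of dictionaries with hypothetical flag names (`threat_model_done`, `tprm_assessed`, `activity_logging`); real metrics would be drawn from the CMDB and GRC tooling.

```python
from typing import Dict, List

def coverage_metrics(ai_assets: List[Dict]) -> Dict[str, float]:
    """CSF-style coverage percentages over an illustrative AI asset inventory."""
    total = len(ai_assets)
    if total == 0:
        return {}

    def pct(flag: str) -> float:
        # Share of assets where the given control flag is set, as a percentage.
        return round(100 * sum(1 for a in ai_assets if a.get(flag)) / total, 1)

    return {
        "pct_threat_modeled": pct("threat_model_done"),
        "pct_tprm_assessed": pct("tprm_assessed"),
        "pct_activity_logged": pct("activity_logging"),
    }
```

Reported quarterly, figures like these give boards and regulators a coverage trend per CSF function, in place of purely narrative adoption updates.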
- Talent and operating model. Analyst research has already stressed greater human‑centric and adaptive protection skills, but the Cyber AI Profile implies new blended roles that combine AI engineering, data science and cybersecurity with knowledge of CSF and AI RMF. CISOs will likely need to reshape and re-skill security engineering (SecEng), application security (AppSec) and GRC teams. These will have to include dedicated AI security architects and AI risk leads responsible for implementing Cyber AI Profile controls across the AI lifecycle.
AI is no longer a peripheral add-on to cybersecurity, and CISOs that move early to align with the NIST Cyber AI Profile will be better positioned to secure AI systems, harness AI for cyber defense and withstand AI-enabled threats. By recasting strategies, portfolios and metrics in CSF 2.0 terms and treating AI components as first-class assets, organizations will pivot governance and control into accelerants for scalable AI adoption, rather than making them a brake on innovation.
Copyright Infopro Digital Limited. All rights reserved.