Up & Out: Intelligence, AI, and the Judgement Advantage
Thought Leadership
July 30, 2025
How human–machine partnerships can support decision confidence.
In previous editions of Up & Out, we explored actionable steps intelligence teams can use to earn executive trust, embed themselves in enterprise workflows, and shift from simply delivering insight to gaining strategic influence. But how can they retain this position as advances in artificial intelligence (AI) force them to adapt their ways of working?
AI is no longer an emerging capability. It is already embedded in the workflows of many security, threat, and risk teams. From automated sentiment analysis to geopolitical forecasting and anomaly detection, AI offers speed and scale. But those attributes are not synonymous with value. To remain relevant and authoritative in this new era, intelligence professionals must not only use AI—they must shape its role in decision-making.
So, for this iteration of Up & Out, we’ve considered how intelligence teams can position themselves as stewards of ethical, high-trust, AI-augmented decision support. By following the principles discussed here, we believe intelligence teams can help organisations move faster without sacrificing clarity, context, or credibility.
I. Don’t Just Deploy AI—Define the Collaboration Contract
AI systems are tools, but they are also collaborators. And collaborators require contracts.
While AI can process signals at scale, it cannot grasp narrative, nuance, or the strategic implications of a decision. That remains firmly within the remit of the intelligence professional. However, as AI becomes more integrated into analytic workflows, it begins to shape what is surfaced, what is emphasised, and what is omitted. If intelligence teams do not define the terms of that collaboration, someone (or something) else will. Rather than treating AI as a black box, treat it as a junior analyst with extraordinary processing power: excellent at recognising patterns, less effective at recognising blind spots. It is an intelligence analyst’s responsibility to establish when AI should lead, when it should support, and when it requires human override.
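To make the idea concrete, here is a minimal sketch of what such a contract might look like when reduced to routing logic. The confidence thresholds and stakes categories are illustrative assumptions, not a standard:

```python
# A toy "collaboration contract" expressed as routing logic. The thresholds
# and categories below are illustrative assumptions for discussion only.
def route_signal(model_confidence: float, decision_stakes: str) -> str:
    """Decide who leads on a given signal: the model or the analyst."""
    if decision_stakes == "high":
        return "analyst leads; AI supports with pattern detection"
    if model_confidence >= 0.90:
        return "AI leads; analyst spot-checks a sample"
    if model_confidence >= 0.60:
        return "AI drafts; analyst reviews before anything is briefed"
    return "human override: analyst works the signal from first principles"

for conf, stakes in [(0.95, "low"), (0.70, "low"), (0.95, "high"), (0.40, "low")]:
    print(f"confidence={conf:.2f}, stakes={stakes}: {route_signal(conf, stakes)}")
```

However your own contract is drawn, the point is that it is written down and owned by the intelligence team rather than implied by the tooling.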
What this might look like: You work in global risk for a consumer brand, and your team has developed a model to detect reputational threats on social media. You notice that the model disproportionately flags discourse in North America whilst missing growing backlash in Latin America regarding sourcing practices. This prompts you to adjust the model’s inputs, expand linguistic coverage, and include regional experts in the review loop. Your executive sponsors can therefore still appreciate the AI’s efficiency without losing trust in your team’s judgement to interpret the signal.
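A disparity like this is often easiest to surface with a simple coverage audit. The sketch below is illustrative only; the region labels, counts, and review threshold are hypothetical stand-ins for your model’s actual logs:

```python
from collections import Counter

# Hypothetical sample: volume of posts monitored and flagged, by region.
# Real inputs would come from your reputational-threat model's logs.
monitored = Counter({"north_america": 52_000, "latin_america": 48_000})
flagged = Counter({"north_america": 1_820, "latin_america": 96})

def flag_rates(flagged: Counter, monitored: Counter) -> dict[str, float]:
    """Flags per monitored post, by region, so disparities are visible at a glance."""
    return {region: flagged[region] / monitored[region] for region in monitored}

rates = flag_rates(flagged, monitored)
baseline = max(rates.values())
for region, rate in sorted(rates.items()):
    # Queue any region surfacing threats at under a quarter of the busiest
    # region's rate for human review -- this threshold is an assumption.
    status = "REVIEW COVERAGE" if rate < 0.25 * baseline else "ok"
    print(f"{region:15} {rate:.4%}  {status}")
```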
II. Build a Framework for Epistemic Value
Accuracy is necessary but not sufficient. The real question is: does it make us smarter?
AI can generate outputs that are technically accurate but strategically inconsequential. Intelligence teams must construct frameworks to evaluate not only whether AI is correct, but whether it is useful. This is where the concept of epistemic value becomes essential—the degree to which an insight enhances foresight, reduces noise, or expands the decision space.
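One way to make that evaluation repeatable is a lightweight rubric. The sketch below is a hypothetical illustration: the three criteria mirror the definition above, but the weights and briefing cut-off are assumptions you would calibrate with your own stakeholders:

```python
from dataclasses import dataclass

@dataclass
class EpistemicScore:
    """Analyst ratings (0-5) for a single AI-generated insight."""
    foresight: int        # does it improve our view of what happens next?
    noise_reduction: int  # does it cut through volume we'd otherwise wade through?
    decision_space: int   # does it open or close options for decision-makers?

    def value(self, weights=(0.4, 0.3, 0.3)) -> float:
        """Weighted average on a 0-5 scale. The weights are illustrative."""
        scores = (self.foresight, self.noise_reduction, self.decision_space)
        return sum(w * s for w, s in zip(weights, scores))

# An output can be accurate yet score low: correct but inconsequential.
insight = EpistemicScore(foresight=1, noise_reduction=2, decision_space=0)
print(f"Epistemic value: {insight.value():.1f} / 5.0")
if insight.value() < 2.5:  # hypothetical cut-off for inclusion in briefings
    print("Accurate, perhaps -- but it does not make us smarter. Hold it back.")
```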
Intelligence adds lasting value not by blindly accepting machine-generated outputs, but by auditing their assumptions, understanding their implications, and contextualising their relevance.
What this might look like: You support the global security and intelligence function for a multinational energy company. An AI model flags escalating social media activity in a North African country where your firm is planning to expand operations. The volume and sentiment data suggest instability, but the output lacks context. Your team conducts a deeper review, identifying that whilst protests are indeed increasing, they are hyper-localised and primarily centred around a regional labour dispute unrelated to foreign investment. You brief the executive risk committee with a nuanced assessment: whilst the AI correctly identified unrest, your analysis determines that the expansion plan remains viable with a few risk mitigation adjustments. The AI model spotted disruption, whilst you delivered the insight.
III. Make Transparency a Leadership Standard
Trust in AI is not built through technical explanations, but through accountable decisions.
Executives rarely ask for AI transparency in the form of code. They are more likely to ask: “Can I trust this?”
Intelligence professionals are uniquely positioned to provide the answer. This is not because they are experts in the technical complexities of AI, but because they excel at assessing information for reliability and accuracy before contextualising it for an intelligence estimate.
As with any conventional form of intelligence work, this requires a keen understanding of data validity and careful curation of source inputs. This is especially critical when AI models are involved, as their outputs are only as strong as the data they are trained or prompted with. Poor-quality inputs—biased, irrelevant, or outdated data—can easily generate misleading insights at scale. The old adage applies: rubbish in, rubbish out.
For intelligence teams, curating high-quality data is not just a backend function—it’s a strategic one. By building structured, validated datasets (such as a curated geopolitical risk dataset), analysts can ensure that AI systems are learning from accurate, diverse, and relevant information.
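In practice, curation can start with something as simple as rejecting records that fail basic validity checks before they ever reach a model. A minimal sketch, with a hypothetical schema and source list for a curated geopolitical risk dataset:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical schema for one record in a curated geopolitical risk dataset.
@dataclass
class RiskRecord:
    country: str
    event_type: str
    source: str
    reported: date

TRUSTED_SOURCES = {"reuters", "afp", "local_stringer_network"}  # illustrative
MAX_AGE = timedelta(days=365)  # discard stale records; cut-off is an assumption

def is_valid(record: RiskRecord, today: date) -> bool:
    """Keep only sourced, recent, fully populated records for model input."""
    return (
        bool(record.country and record.event_type)
        and record.source in TRUSTED_SOURCES
        and today - record.reported <= MAX_AGE
    )

records = [
    RiskRecord("EG", "labour_protest", "reuters", date(2025, 6, 2)),
    RiskRecord("EG", "labour_protest", "unverified_blog", date(2025, 6, 3)),
]
curated = [r for r in records if is_valid(r, date(2025, 7, 30))]
print(f"{len(curated)} of {len(records)} records pass curation")
```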
To maintain trust, intelligence teams must be capable of explaining how AI-informed insights were generated, what data informed them, and how factors like risk and bias were considered.
Data validity is a foundational pillar of that transparency. When executives know that your assessments are grounded in curated, trustworthy data, your insights become not only more credible—but more actionable.
By maintaining this transparency in the workflow, you can protect the credibility of your function as scrutiny of the use of AI increases. More likely than not, it also stands the intelligence team in good stead for aligning with enterprise-level policies on the use of AI, which many firms are still developing.
What this might look like: Your team develops a daily geopolitical risk signal dashboard, powered by AI and open-source intelligence. To promote trust and usability, you introduce a “confidence layer” that clearly explains how each score is derived, what assumptions underlie it, and when human review is applied. Executives begin referencing it in briefings with the board—not because it is automated, but because it is trusted.
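A confidence layer need not be elaborate; attaching structured provenance to every score often does the job. The sketch below assumes a hypothetical scoring pipeline and shows one way each score could carry its sources, assumptions, and review status:

```python
from dataclasses import dataclass

@dataclass
class ScoredSignal:
    """A dashboard risk score plus the 'confidence layer' explaining it."""
    region: str
    score: float                # 0-1 risk score from the model
    derived_from: list[str]     # data sources behind the score
    assumptions: list[str]      # caveats an executive should know about
    human_reviewed: bool = False  # has an analyst vetted this score?

    def briefing_line(self) -> str:
        review = "analyst-reviewed" if self.human_reviewed else "model-only"
        return (f"{self.region}: {self.score:.2f} ({review}; "
                f"sources: {', '.join(self.derived_from)})")

signal = ScoredSignal(
    region="Sahel",
    score=0.72,
    derived_from=["open-source event data", "curated local media list"],
    assumptions=["sentiment model undertrained on French-language sources"],
    human_reviewed=True,
)
print(signal.briefing_line())
for caveat in signal.assumptions:
    print(f"  caveat: {caveat}")
```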
IV. Lead with Layman’s Language
If decision-makers cannot understand how the system works, they will not rely on it.
AI can be technically sophisticated, but your communication should not be. Intelligence professionals must learn to translate complex systems into simpler terms. This means replacing jargon (“large language model inference”) with clear framing (“we used a tool trained to detect tone shifts in diplomatic language”).
By doing so, AI becomes part of the leadership conversation. It is not a mystery to be feared, but a capability to be understood and directed.
What this might look like: You are briefing the Chief Risk Officer (CRO) on regional protest risk. Rather than describe your AI model’s natural language processing capability, you say: “We’re using a tool that scans local media sources for tone changes and emerging activist language. It’s like having 500 analysts reading the news with a highlighter.” The CRO gets it and trusts the approach.
V. Use Ethics as a Strategic Differentiator
Ethics is not just about avoiding risk—it’s about earning trust.
When your intelligence workflows are not only fast and insightful, but also auditable, interpretable and justifiable, you become a strategic asset. Ethics then becomes a foundation for credibility, not just a checkbox for compliance.
By articulating the human–AI collaboration contract, assessing epistemic value, and enforcing transparency, your team can become a central, trusted partner to leadership, who will see you as a steward of good decision-making.
What this might look like: You lead a corporate security team at a multinational, and your CEO is considering using AI-generated incident summaries across the enterprise. You recommend a hybrid model: AI drafts the alert, but an analyst adds context and vetting before release. You explain why trust, timing, and accurate information are critical in crisis communications. The CEO not only agrees but invites you to help shape the company’s internal AI policy.
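The hybrid model described here reduces to a simple gate in the release pipeline: nothing the AI drafts ships until an analyst has added context and signed off. A sketch under that assumption, with a stub standing in for whatever summarisation model is used:

```python
from dataclasses import dataclass

@dataclass
class IncidentAlert:
    incident_id: str
    ai_draft: str
    analyst_context: str = ""
    approved: bool = False

def draft_alert(incident_id: str, raw_report: str) -> IncidentAlert:
    # Stub: in practice this would call your summarisation model of choice.
    return IncidentAlert(incident_id, ai_draft=f"AUTO-SUMMARY: {raw_report[:80]}")

def analyst_review(alert: IncidentAlert, context: str) -> IncidentAlert:
    """Human gate: adds context and flips the approval bit. Nothing ships without it."""
    alert.analyst_context = context
    alert.approved = True
    return alert

def release(alert: IncidentAlert) -> str:
    if not alert.approved:
        raise PermissionError("Alert has not been vetted by an analyst.")
    return f"{alert.ai_draft}\nContext: {alert.analyst_context}"

alert = draft_alert("INC-0142", "Power outage reported at the Monterrey facility...")
alert = analyst_review(alert, "Outage is grid-wide, not site-specific; ops unaffected.")
print(release(alert))
```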
AI is already reshaping how intelligence is produced, consumed, and trusted. But human judgement remains at the centre. Intelligence teams that embrace AI with clarity, caution, and creativity will not only secure future investment in their mission; they will lead the next era of enterprise decision support.