
The CIA is getting ready to give its analysts digital coworkers. Deputy Director Michael Ellis said Thursday that the agency plans to fold generative artificial intelligence into the day-to-day work of its analysts, baking the tools into existing analytic platforms so they can speed routine processing, surface patterns in data and help draft assessments, while leaving the actual calls to humans.
“It won’t do the thinking for our analysts, but it will help draft key judgments, edit for clarity and compare drafts against tradecraft standards,” Ellis said, according to Defense One. The agency ran more than 300 AI projects last year and recently used AI to generate an intelligence report for the first time, early experiments the CIA says are meant to help analysts triage massive data streams and spot emerging threats faster.
Remarks came at a D.C. AI summit
Ellis laid out the plan on Thursday at the Special Competitive Studies Project’s AI+ Intelligence summit in Washington, D.C., a gathering focused on how U.S. intelligence can harness frontier technology. The event program highlighted April 9 sessions on reworking intelligence workflows and finding ways to deploy commercial AI inside classified environments, a schedule that underlined how urgently the agency wants to move, according to the Special Competitive Studies Project.
Vendor fight complicates the rollout
The CIA’s push to embed AI is playing out against a very public fight between the Defense Department and AI vendor Anthropic. Earlier this year the Pentagon moved to label the company a “supply-chain risk,” a designation Anthropic has sued to block; courts have issued mixed rulings as the company challenges the proposed blacklist in multiple filings, according to Reuters. The standoff escalated after reports that President Donald Trump ordered federal agencies in March to halt use of Anthropic’s technology, a move that has raised broader questions about vendor limits and government leverage in AI procurement, according to coverage on TradingView/Cointelegraph.
Faster buys, more partners
Inside Langley, one response has been procedural. The CIA recently revamped an acquisition framework meant to cut the time it takes to move commercial technology into classified systems. The new approach is designed to let the agency test and field tools from multiple vendors and avoid single-supplier chokepoints, according to reporting from NextGov.
Tradecraft and cybersecurity concerns
Security researchers warn that piping AI helpers directly into intelligence workflows is not a free upgrade. They point to new tradecraft and cybersecurity risks, from model failure or bias to adversarial attempts to manipulate agentic systems that can act semi-autonomously. Analysts and technologists argue these tools will have to be tightly constrained and closely monitored so that small mistakes cannot scale into big failures. For a deeper technical rundown of those agentic AI and cybersecurity questions, see Tech Monitor.
CIA officials say the goal is decision support, not automated judgment. Ellis has emphasized that humans will stay in the loop and that the agency intends to diversify suppliers so a single company’s internal policies cannot quietly shape or restrict operations. That stance reflects a broader tension now surfacing in court cases and contracting talks over how much control private vendors can insist on while serving highly sensitive government missions.
For Washington, the shift is both tactical and strategic. Ellis framed the AI push as necessary to keep pace with competitors and to scale hard-won analytic tradecraft, even as lawmakers and courts still sort out how far the government can go in compelling vendor cooperation. The story is still unfolding as the Anthropic litigation, procurement changes and field experiments with AI tools continue to roll forward.