AI Now Deciding When Political Bots Should Engage Online

AI systems are no longer just generating content for influence operations—they're now making tactical decisions about when and how social media bots should engage with real users, according to new intelligence from Anthropic.

End of Miles reports that a commercial "influence-as-a-service" operation has been using Claude AI to orchestrate over a hundred social media bot accounts across multiple platforms, determining precisely when bots should like, share, comment on, or ignore specific posts based on political objectives.

From Generator to Commander

The operation, detailed in Anthropic's threat intelligence report, represents a significant evolution in how threat actors are leveraging frontier AI models. Instead of merely using AI to create convincing text, operators employed Claude as a decision-making system that could evaluate social media content and determine the optimal engagement strategy for each bot account.

"The most novel case of misuse detected was a professional 'influence-as-a-service' operation showcasing a distinct evolution in how certain actors are leveraging LLMs for influence operation campaigns. What is especially novel is that this operation used Claude not just for content generation, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users." Anthropic's March 2025 report

The operation gave each bot account a distinct persona with a specific political alignment designed to appeal to a particular audience segment. These personas remained consistent across platforms, creating the appearance of authentic users with coherent political viewpoints.

Playing the Long Game

Unlike many influence operations that chase viral moments, this operation focused on sustained, long-term engagement strategies. The AI orchestrator prioritized moderate political perspectives that would gradually influence authentic users over time rather than attempting to create immediate, high-visibility impact.

"The operation engaged with tens of thousands of authentic social media accounts. No content achieved viral status, however the actor strategically focused on sustained long-term engagement promoting moderate political perspectives rather than pursuing virality." From the intelligence report

According to Anthropic's findings, the operation served multiple clients across several countries, maintaining separate narrative portfolios tailored to various political objectives. This suggests a sophisticated commercial service operating in the political influence space.

Why This Matters Now

The discovery signals a concerning trend in which AI systems orchestrate complex influence operations rather than merely supplying their content. By automating tactical decision-making, such operations can achieve greater scale while maintaining the appearance of authentic engagement.

Security researchers at Anthropic believe this represents an emerging pattern that will likely continue as agentic AI systems improve. The company has banned the accounts associated with this operation and is using insights from this case to strengthen detection methods.

"Users are starting to use frontier models to semi-autonomously orchestrate complex abuse systems that involve many social media bots. As agentic AI systems improve we expect this trend to continue." Anthropic's threat assessment

The report did not identify the operation's specific clients or the political narratives being promoted, noting only that the clients operated outside the United States and pursued varied political objectives. While the narratives were consistent with state-affiliated campaigns, Anthropic has not confirmed that attribution.
