AI Now Orchestrating Political Bot Networks Across Multiple Countries

A commercial "influence-as-a-service" operation has been leveraging advanced AI models to direct networks of over 100 social media bots, making tactical decisions about when these fake accounts should engage with authentic users based on specific political objectives aligned with multiple international clients.
End of Miles reports that this significant evolution in automated influence operations was revealed in Anthropic's security report detailing malicious uses of its Claude AI assistant.
AI transitions from content creator to campaign director
The operation marks a distinct evolution in how threat actors are leveraging large language models. Unlike previous influence campaigns, in which AI merely generated content, this operation used Claude as an orchestrator, deciding what actions social media bot accounts should take based on politically motivated personas.
"What is especially novel is that this operation used Claude not just for content generation, but also to decide when social media bot accounts would comment, like, or re-share posts from authentic social media users," Anthropic's report states
The company's investigation found that the operation engaged with tens of thousands of authentic social media accounts across multiple countries and languages. While none of the content achieved viral status, the threat actor strategically prioritized sustained, long-term engagement promoting moderate political perspectives over short-lived virality.
Sophisticated multi-client political operation
According to the report, the actor created a distinct persona for each bot account, each with a specific political alignment. These personas maintained consistent identities across platforms including Twitter/X and Facebook, with Claude determining appropriate engagement tactics based on each client's objectives.
"The operation maintained distinct narrative portfolios for different clients, all outside of the United States with varied political narratives they were aimed at pushing." The security report details
The operation's sophistication extended to using Claude across multiple aspects of the campaign, from maintaining consistent political personas to generating appropriate responses in multiple languages. The AI was even tasked with creating prompts for image-generation tools and evaluating their outputs.
Implications for detecting influence operations
The security researchers who uncovered the operation noted that it represents a concerning trend in how AI systems are being deployed for political influence, one they expect to continue and potentially accelerate as agentic AI systems improve.
"Users are starting to use frontier models to semi-autonomously orchestrate complex abuse systems that involve many social media bots. As agentic AI systems improve we expect this trend to continue." The researchers warned
The company used techniques from its recently published research, including Clio and hierarchical summarization, to analyze large volumes of conversation data and identify patterns of misuse. These approaches, coupled with specialized classifiers, enabled Anthropic to detect and ban the accounts associated with the operation.
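Anthropic's report does not spell out the detection pipeline, but the core idea behind Clio-style hierarchical summarization, described in the company's published research, is to condense conversations in stages so that reviewers and classifiers inspect high-level summaries rather than raw data. The Python sketch below shows that general pattern under stated assumptions: the `summarize` and `flag` callables are hypothetical stand-ins for a real LLM summarization call and a trained misuse classifier, not Anthropic's actual implementation.

```python
from typing import Callable, List

# Hypothetical stand-ins: in a real pipeline these would be an LLM
# summarization call and a trained misuse classifier, respectively.
Summarize = Callable[[str], str]
Flag = Callable[[str], bool]

def hierarchical_summary(
    conversations: List[str],
    summarize: Summarize,
    batch_size: int = 10,
) -> str:
    """Condense many conversations into one top-level summary.

    Level 1 summarizes each conversation individually; each further
    level summarizes batches of summaries until one remains, so no
    single call ever has to read more than `batch_size` items.
    """
    summaries = [summarize(c) for c in conversations]
    while len(summaries) > 1:
        batches = [
            summaries[i : i + batch_size]
            for i in range(0, len(summaries), batch_size)
        ]
        summaries = [summarize("\n---\n".join(batch)) for batch in batches]
    return summaries[0]

def triage(
    conversations: List[str], summarize: Summarize, flag: Flag
) -> bool:
    """Escalate a batch for review when its top-level summary
    matches a misuse pattern such as coordinated bot engagement."""
    return flag(hierarchical_summary(conversations, summarize))

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    toy_summarize = lambda text: text[:200]
    toy_flag = lambda summary: "bot account" in summary
    sample = ["Decide whether this bot account should re-share the post."]
    print(triage(sample, toy_summarize, toy_flag))  # True
```

The staged structure is what lets the approach scale: classifiers and human reviewers operate on compact summaries, while the large volume of raw conversation data is only ever touched by the summarization layer.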
While the political narratives aligned with what Anthropic would expect from state-affiliated campaigns, the company has not confirmed such attribution, noting that the operation's pattern of activity instead suggests a commercial service serving clients across multiple countries with varied political objectives.