Advances in artificial intelligence (AI) raise the prospect of manipulating beliefs and behaviors at population scale (1). Large language models (LLMs) and autonomous agents (2) give influence campaigns unprecedented scale and precision. Generative tools can expand propaganda output without sacrificing credibility (3) and can inexpensively produce falsehoods that are rated as more human-like than those written by humans (3, 4). Techniques designed to refine AI reasoning, such as chain-of-thought prompting, can also be repurposed to generate more convincing falsehoods. Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents. By fusing LLM reasoning with multiagent architectures (2), such systems can coordinate autonomously, infiltrate communities, and fabricate consensus at low cost. By adaptively mimicking human social dynamics, they threaten democracy. Because the resulting harms stem from design choices, commercial incentives, and gaps in governance, we prioritize interventions at multiple leverage points, favoring pragmatic mechanisms over voluntary compliance.
DANIEL THILO SCHROEDER, MEEYOUNG CHA, ANDREA BARONCHELLI, NICK BOSTROM, NICHOLAS A. CHRISTAKIS, DAVID GARCIA, AMIT GOLDENBERG, YARA KYRYCHENKO, KEVIN LEYTON-BROWN, NINA LUTZ, GARY MARCUS, FILIPPO MENCZER, GORDON PENNYCOOK, DAVID G. RAND, MARIA RESSA, FRANK SCHWEITZER, DAWN SONG, CHRISTOPHER SUMMERFIELD, AUDREY TANG, JAY J. VAN BAVEL, SANDER VAN DER LINDEN, AND JONAS R. KUNST
SCIENCE 22 Jan 2026 Vol 391, Issue 6783 pp. 354-357
Read the full article at: www.science.org