Author: cxdig

When Maxwell’s Demon leaves the room

P.G. Tello, S. Kauffman

BioSystems Volume 258, December 2025, 105618

This work revisits the Maxwell Demon paradigm to explore its implications for evolutionary dynamics from an information-theoretic perspective. By removing the Demon as an intentional agent, we reinterpret the emergence of order as a natural outcome of physical laws combined with stochastic processes. Using models inspired by information theory, such as binary and Z-channels, we show how random fluctuations (e.g., stochastic resonance) can decrease entropy, generate mutual information, and induce non-ergodicity. These dynamics highlight the role of memory and correlation as emergent features of purely physical interactions, without recourse to purposeful agency. In this framework, evolutionary exaptations, rather than adaptations alone, emerge as key drivers of biological evolution. Finally, we connect our analysis with recent contributions on agency and memory, underscoring the relevance of informational concepts for understanding the purposeless yet structured dynamics of evolutionary processes.
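The Z-channel the abstract mentions is the standard asymmetric binary channel: a 0 is always received correctly, while a 1 flips to 0 with some probability. Its mutual information has a simple closed form, sketched below in Python (the function names and parameters are mine, not the paper's notation):

```python
import math

def entropy(ps):
    """Shannon entropy in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def z_channel_mutual_info(q, eps):
    """Mutual information I(X;Y) of a Z-channel.

    q   : P(X=1), probability of sending a 1
    eps : P(Y=0 | X=1), probability a 1 flips to 0 (0s always pass cleanly)
    """
    p_y1 = q * (1 - eps)                   # Y=1 only if X=1 and no flip occurs
    h_y = entropy([1 - p_y1, p_y1])        # output entropy H(Y)
    h_y_given_x = q * entropy([eps, 1 - eps])  # H(Y|X=0)=0, so only X=1 contributes
    return h_y - h_y_given_x

# A noiseless channel transmits the full input entropy
print(z_channel_mutual_info(0.5, 0.0))  # → 1.0 bit
# Noise on the 1-symbol alone still destroys information
print(z_channel_mutual_info(0.5, 0.1))  # less than 1 bit
```

The asymmetry is the interesting part for the paper's argument: only one symbol is corrupted, so correlations between input and output survive noise in a direction-dependent way.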

Read the full article at: www.sciencedirect.com

Origins of life: the possible and the actual [Special Issue]

compiled and edited by Ricard Solé, Chris Kempes and Susan Stepney

What is life, and how does it begin? This theme issue explores one of science’s deepest questions: how life can emerge from non-living matter. Researchers from many fields — from physics and chemistry to biology and artificial life — are working to uncover the basic principles that make life possible. Key themes include the role of energy and information in early cells, the plausibility of alternative forms of life, and efforts to recreate life-like systems in the lab. By bringing together diverse perspectives, this issue offers a fresh look at both the limits and possibilities for how life may arise, on Earth and beyond.

Read the full articles at: royalsocietypublishing.org

Engineering Emergence

Abel Jansma, Erik Hoel

One of the reasons complex systems are complex is that they have multiscale structure. How does this multiscale structure come about? We argue that it reflects an emergent hierarchy of scales that contribute to the system’s causal workings. An example is how a computer can be described at the level of its hardware circuitry but also its software. But we show that many systems, even simple ones, have such an emergent hierarchy, built from a small subset of all their possible scales of description. Formally, we extend the theory of causal emergence (2.0) to analyze the causal contributions across the full multiscale structure of a system rather than just over a single path that traverses the system’s scales. Our methods reveal that systems can be classified as causally top-heavy or bottom-heavy, or their emergent hierarchies can be highly complex. We argue that this provides a more specific notion of scale-freeness (here, when causation is spread equally across the scales of a system) than the standard network science terminology. More broadly, we provide the mathematical tools to quantify this complexity and give diverse examples of the taxonomy of emergent hierarchies. Finally, we demonstrate the ability to engineer not just the degree of emergence in a system but also how that emergence is distributed across the multiscale structure.
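The core quantity behind causal emergence is effective information (EI): the mutual information between states and their successors under a uniform intervention on the starting state. A coarse-graining "emerges" when the macro-level transition matrix has higher EI than the micro level. The sketch below uses the original single-pair EI comparison (causal emergence 1.0), not the full multiscale analysis of the 2.0 framework the paper extends; the toy transition matrix is my own illustration:

```python
import numpy as np

def effective_information(tpm):
    """EI of a row-stochastic transition matrix: I(X;Y) under a
    uniform (maximum-entropy) intervention over starting states X."""
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    p_y = tpm.mean(axis=0)  # output distribution under the uniform intervention
    return h(p_y) - np.mean([h(row) for row in tpm])

def coarse_grain(tpm, groups):
    """Lump micro-states into macro-states (uniform mixture within a group)."""
    k = len(groups)
    macro = np.zeros((k, k))
    for i, gi in enumerate(groups):
        row = tpm[gi].mean(axis=0)          # average the grouped micro rows
        for j, gj in enumerate(groups):
            macro[i, j] = row[gj].sum()     # sum probability into target groups
    return macro

# Three noisy micro-states that wander among themselves, plus one fixed point
micro = np.array([[1/3, 1/3, 1/3, 0]] * 3 + [[0, 0, 0, 1]])
macro = coarse_grain(micro, [[0, 1, 2], [3]])
print(effective_information(micro))  # ≈ 0.81 bits
print(effective_information(macro))  # 1.0 bit: the macro scale is more informative
```

Lumping the three noisy states into one macro-state yields a deterministic 2-state system whose EI exceeds the micro level's, which is the minimal signature of emergence the paper's multiscale machinery generalizes.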

Read the full article at: arxiv.org

Artificially intelligent agents in the social and behavioral sciences: A history and outlook

Petter Holme, Milena Tsvetkova

We review the historical development and current trends of artificially intelligent agents (agentic AI) in the social and behavioral sciences: from the first programmable computers, and social simulations soon thereafter, to today’s experiments with large language models. This overview emphasizes the role of AI in the scientific process and the changes brought about, both through technological advancements and the broader evolution of science from around 1950 to the present. Some of the specific points we cover include: the challenges of presenting the first social simulation studies to a world unaware of computers, the rise of social systems science, intelligent game theoretic agents, the age of big data and the epistemic upheaval in its wake, and the current enthusiasm around applications of generative AI, and many other topics. A pervasive theme is how deeply entwined we are with the technologies we use to understand ourselves.

Read the full article at: arxiv.org

Emergent Coordination in Multi-Agent Language Models

Christoph Riedl

When are multi-agent LLM systems merely a collection of individual agents, and when are they an integrated collective with higher-order structure? We introduce an information-theoretic framework to test — in a purely data-driven way — whether multi-agent systems show signs of higher-order structure. This information decomposition lets us measure whether dynamical emergence is present in multi-agent LLM systems, localize it, and distinguish spurious temporal coupling from performance-relevant cross-agent synergy. We implement both a practical criterion and an emergence-capacity criterion, operationalized as a partial information decomposition of time-delayed mutual information (TDMI). We apply our framework to experiments using a simple guessing game with no direct agent communication and only minimal group-level feedback, under three randomized interventions. Groups in the control condition exhibit strong temporal synergy but little coordinated alignment across agents. Assigning a persona to each agent introduces stable identity-linked differentiation. Combining personas with an instruction to “think about what other agents might do” shows identity-linked differentiation and goal-directed complementarity across agents. Taken together, our framework establishes that multi-agent LLM systems can be steered with prompt design from mere aggregates to higher-order collectives. Our results are robust across emergence measures and entropy estimators, and not explained by coordination-free baselines or temporal dynamics alone. Without attributing human-like cognition to the agents, the patterns of interaction we observe mirror well-established principles of collective intelligence in human groups: effective performance requires both alignment on shared objectives and complementary contributions across members.
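The emergence-capacity criterion decomposes time-delayed mutual information (TDMI). The partial information decomposition itself requires a redundancy measure and is beyond a few lines, but the TDMI ingredient is easy to estimate from discrete data. A minimal plug-in estimator, with function names of my own choosing:

```python
import math
import random
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in (maximum-likelihood) estimate of I(X;Y) in bits
    from paired discrete samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), with counts: c/n * log2(c*n / (cx*cy))
    return sum(c / n * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def tdmi(series, lag=1):
    """Time-delayed mutual information I(S_t ; S_{t+lag}) of one sequence."""
    return mutual_info(series[:-lag], series[lag:])

# A periodic sequence is fully predictable one step ahead
print(tdmi([0, 1] * 500))  # ≈ 1 bit

# An i.i.d. sequence tells you (almost) nothing about its own future
random.seed(0)
iid = [random.randint(0, 1) for _ in range(2000)]
print(tdmi(iid))           # ≈ 0 bits (small positive value from estimator bias)
```

In the paper's setting the series are per-agent behavior over rounds, and the decomposition then asks how much of the group's TDMI is synergy across agents rather than redundancy or per-agent memory.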

https://arxiv.org/abs/2510.05174