Month: January 2026

Comparing Different Physics Fields Using Statistical Linguistics

María Fernanda Sánchez-Puig, Carlos Gershenson, Carlos Pineda

The large digital archives of the American Physical Society (APS) offer an opportunity to quantitatively analyze the structure and evolution of scientific communication. In this paper, we perform a comparative analysis of the language used in eight APS journals (Phys. Rev. A, B, C, D, E, Lett., X, Rev. Mod. Phys.) using methods from statistical linguistics. We study word rank distributions (from monograms to hexagrams), finding that they are consistent with Zipf’s law. We also analyze rank diversity over time, which follows a characteristic sigmoid shape. To quantify the linguistic similarity between journals, we use the rank-biased overlap (RBO) distance, comparing the journals not only to each other, but also to corpora from Google Books and Twitter. This analysis reveals that the most significant differences emerge when focusing on content words rather than the full vocabulary. By identifying the unique and common content words for each specialized journal, we develop an article classifier that predicts a paper’s journal of origin based on its unique word distribution. This classifier uses a proposed “importance factor” to weigh the significance of each word. Finally, we analyze the frequency of mention of prominent physicists and compare it to their cultural recognition as ranked in the Pantheon dataset, finding a low correlation that highlights the context-dependent nature of scientific fame. These results demonstrate that scientific language itself can serve as a quantitative window into the organization and evolution of science.
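The rank-biased overlap the abstract relies on can be sketched in a few lines. Below is a minimal, truncated form of RBO (no extrapolation to infinite depth); the toy word rankings and the persistence parameter p = 0.9 are illustrative choices, not values taken from the paper.

```python
def rbo(list_a, list_b, p=0.9):
    """Truncated rank-biased overlap between two ranked lists.

    The agreement at depth d is the fraction of items shared by the
    top-d of both lists; depths are weighted geometrically by p, so
    disagreements near the top of the rankings matter most.
    """
    depth = min(len(list_a), len(list_b))
    seen_a, seen_b = set(), set()
    overlap = 0  # size of the intersection of top-d(A) and top-d(B)
    score = 0.0
    for d in range(1, depth + 1):
        a, b = list_a[d - 1], list_b[d - 1]
        if a == b:
            overlap += 1
        else:
            overlap += (a in seen_b) + (b in seen_a)
        seen_a.add(a)
        seen_b.add(b)
        score += p ** (d - 1) * overlap / d
    return (1 - p) * score

# Hypothetical content-word rankings for two journals; agreement at
# the top of the lists dominates the score.
journal_1 = ["field", "energy", "state", "phase", "model"]
journal_2 = ["field", "energy", "phase", "state", "spin"]
```

Since the sum is truncated at the list length, even identical short lists score below 1; comparing full vocabulary rankings calls for the extrapolated form of RBO, which this truncated sum bounds from below.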

Read the full article at: www.preprints.org

Functional Percolation: Criticality of Form and Function

Galen J. Wilkerson

Understanding how network structure constrains and enables information processing is a central problem in the statistical mechanics of interacting systems. Here we study random networks across the structural percolation transition and analyze how connectivity governs realizable input-output transformations under cascade dynamics. Using Erdős-Rényi networks as a minimal ensemble, we examine structural, functional, and information-theoretic observables as functions of mean degree. We find that the emergence of the giant connected component coincides with a sharp transition in realizable information processing: complex input-output response functions become accessible, functional diversity increases rapidly, output entropy rises, and directed information flow, quantified by transfer entropy, extends beyond local neighborhoods. We term this coincidence of structural, functional, and informational transitions functional percolation, referring to a sharp expansion of the space of realizable input-output functions at the percolation threshold. Near criticality, networks exhibit a Pareto-optimal tradeoff between functional complexity and diversity, suggesting that percolation criticality may provide a general organizing principle of information processing capacity in systems with local interactions and propagating influences.
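The structural side of the transition is easy to reproduce. A minimal stdlib-only sketch: sample Erdős-Rényi graphs G(n, p) at p = ⟨k⟩/n, track components with union-find, and compare the largest component below and above the ⟨k⟩ = 1 threshold. The network size and the two mean degrees probed are arbitrary illustrative choices, not the paper's.

```python
import random

def giant_fraction(n, mean_degree, seed=0):
    """Fraction of nodes in the largest connected component of an
    Erdős-Rényi graph G(n, p) with p = mean_degree / n."""
    rng = random.Random(seed)
    p = mean_degree / n
    parent = list(range(n))

    def find(x):  # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:       # edge present
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj    # merge components

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n

# Below the percolation threshold (<k> = 1) only small clusters exist;
# above it a giant component spans a finite fraction of the network.
sub = giant_fraction(1000, 0.5)
sup = giant_fraction(1000, 3.0)
```

In the supercritical regime the giant-component fraction S solves S = 1 - exp(-⟨k⟩ S), so at ⟨k⟩ = 3 roughly 94% of nodes should be connected, while at ⟨k⟩ = 0.5 the largest cluster is only of order log n.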

Read the full article at: arxiv.org

The creation of information: how evolution generates novel improvisations in the biosphere

Andrea Roli, Sudip Patra, Stuart Kauffman

Interface Focus (2025) 15 (6): 20250038

We discuss the creation of information in the evolution of the biosphere by elaborating on the interplay between affordances and constraints. We maintain that information is created when affordances are seized and, therefore, at the same time, meaning is generated and a new space of possibilities is created.

Read the full article at: royalsocietypublishing.org

From cognitive coherence to political polarization: A data-driven agent-based model of belief change

Marlene C. L. Batzke, Peter Steiglechner, Jan Lorenz, Bruce Edmonds, František Kalvas

Political Psychology 

Political polarization represents a rising issue in many countries, making it more and more important to understand its relation to cognitive-motivational and social influence mechanisms. Yet, the link between micro-level mechanisms and macro-level phenomena remains unclear. We investigated the consequences of individuals striving for cognitive coherence in their belief systems on political polarization in society in an agent-based model. In this model, we formalized how cognitive coherence affects how individuals update their beliefs following social influence and self-reflection processes. We derive agents’ political beliefs as well as their subjective belief systems, defining what determines coherence for different individuals, from European Social Survey data via correlational class analysis. The simulation shows that agents polarize in their beliefs when they strive strongly for cognitive coherence, and especially when they have structurally different belief systems. In a mathematical analysis, we not only explain the main findings but also underscore the necessity of simulations for understanding the complex dynamics of socially embedded phenomena such as political polarization.
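The interplay of social influence and coherence-restoring self-reflection can be illustrated with a toy two-belief agent. This is an illustrative sketch, not the authors' model: the `coherence` measure, the `update` rule, and the `strive`/`social` parameters are all assumptions made here for the example.

```python
def coherence(beliefs, sign):
    """Positive when the two beliefs relate as the agent's belief
    system (sign = +1 aligned, -1 opposed) says they should."""
    return sign * beliefs[0] * beliefs[1]

def update(agent, neighbor, strive=0.8, social=0.1):
    """One step of social influence, then coherence-restoring
    self-reflection applied to the agent's weaker belief."""
    beliefs, sign = agent
    other, _ = neighbor
    for k in (0, 1):
        beliefs[k] += social * (other[k] - beliefs[k])  # social influence
    k = 0 if abs(beliefs[0]) < abs(beliefs[1]) else 1   # weaker belief
    target = sign * beliefs[1 - k]                      # what coherence demands
    beliefs[k] += strive * (target - beliefs[k])        # self-reflection
    beliefs[0] = max(-1.0, min(1.0, beliefs[0]))        # keep beliefs in [-1, 1]
    beliefs[1] = max(-1.0, min(1.0, beliefs[1]))

# An agent whose beliefs contradict its own belief system (sign = +1
# expects aligned beliefs) becomes coherent after a single update.
agent = ([0.9, -0.8], 1)
neighbor = ([0.5, 0.5], 1)
before = coherence(agent[0], agent[1])
update(agent, neighbor)
after = coherence(agent[0], agent[1])
```

In this toy, a weak strive leaves incoherent belief mixtures in place, while a strong strive pushes agents with oppositely-signed belief systems toward opposite corners of belief space, which is suggestive of the polarization mechanism the abstract describes.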

Read the full article at: onlinelibrary.wiley.com

Disentangling Boltzmann Brains, the Time-Asymmetry of Memory, and the Second Law

David Wolpert, Carlo Rovelli, and Jordan Scharnhorst

Entropy 2025, 27(12), 1227

Are your perceptions, memories and observations merely a statistical fluctuation arising from the thermal equilibrium of the universe, bearing no correlation to the actual past state of the universe? Arguments are given in the literature for and against this “Boltzmann brain” hypothesis. Complicating these arguments have been the many subtle, and very often implicit, joint dependencies among these arguments and others that have been given for the past hypothesis, the second law, and even for Bayesian inference of the reliability of experimental data. These dependencies can easily lead to circular reasoning. To avoid this problem, since all of these arguments involve the stochastic properties of the dynamics of the universe’s entropy, we begin by formalizing that dynamics as a time-symmetric, time-translation invariant Markov process, which we call the entropy conjecture. Crucially, like all stochastic processes, the entropy conjecture does not specify any time(s) which it should be conditioned on in order to infer the stochastic dynamics of our universe’s entropy. Any such choice of conditioning times and associated entropy values must be introduced as an independent assumption. This observation allows us to disentangle the standard Boltzmann brain hypothesis, its “1000CE” variant, the past hypothesis, the second law, and the reliability of our experimental data, all in a fully formal manner. In particular, we show that these all make an arbitrary assumption that the dynamics of the universe’s entropy should be conditioned on a single event at a single moment in time, differing only in the details of their assumptions. In this aspect, the Boltzmann brain hypothesis and the second law are equally legitimate (or not).
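The role of the single conditioning time can be illustrated with a toy reversible Markov chain; this is an illustration of the point, not the paper's formal construction, and the state space, equilibrium weights, and parameters are assumptions made here. The chain's "entropy" s fluctuates under a Metropolis rule whose equilibrium distribution grows geometrically with s, so low entropy is a rare fluctuation; conditioning on one low-entropy moment then makes entropy typically higher both just before and just after it.

```python
import random

# Toy reversible (hence time-symmetric) chain for the entropy of a
# closed system: states s = 0..S_MAX with equilibrium weights
# pi(s) proportional to BIAS**s, sampled by a Metropolis walk.
rng = random.Random(42)
S_MAX, BIAS, T = 20, 1.3, 500_000

s = S_MAX
traj = []
for _ in range(T):
    prop = s + rng.choice((-1, 1))
    # Metropolis acceptance keeps detailed balance (reversibility).
    if 0 <= prop <= S_MAX and rng.random() < min(1.0, BIAS ** (prop - s)):
        s = prop
    traj.append(s)

# Condition on a single low-entropy moment, as both the Boltzmann-brain
# argument and the second law (implicitly) do: given S(t) = s_star,
# entropy is on average higher at t - 1 AND at t + 1.
s_star = 5
hits = [t for t in range(1, T - 1) if traj[t] == s_star]
mean_prev = sum(traj[t - 1] for t in hits) / len(hits)
mean_next = sum(traj[t + 1] for t in hits) / len(hits)
```

Because the chain is stationary and reversible, the conditional statistics are symmetric in time: given only one low-entropy event, "entropy increases toward the future" and "entropy was higher in the past" are the same inference run in opposite temporal directions, which is the symmetry the abstract exploits.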

Read the full article at: www.mdpi.com