Quantifying the Complexity of Materials with Assembly Theory

Keith Y Patarroyo, Abhishek Sharma, Ian Seet, Ignas Packmore, Sara I. Walker, Leroy Cronin

Quantifying the evolution and complexity of materials is of importance in many areas of science and engineering, where a central open challenge is developing experimental complexity measurements to distinguish random structures from evolved or engineered materials. Assembly Theory (AT) was developed to measure complexity produced by selection, evolution and technology. Here, we extend the fundamentals of AT to quantify complexity in inorganic molecules and solid-state periodic objects such as crystals, minerals and microprocessors, showing how the framework of AT can be used to distinguish naturally formed materials from evolved and engineered ones by quantifying the amount of assembly using the assembly equation defined by AT. We show how tracking the Assembly of repeated structures within a material allows us to formalize the complexity of materials in a manner accessible to measurement. We confirm the physical relevance of our formal approach by applying it to phase transformations in crystals, using the HCP to FCC transformation as a model system. To explore this approach, we introduce random stacking faults in close-packed systems simplified to one-dimensional strings and demonstrate how Assembly can track the phase transformation. We then compare the Assembly of close-packed structures with random or engineered faults, demonstrating its utility in distinguishing engineered materials from randomly structured ones. Our results have implications for the study of pre-genetic minerals at the origin of life, optimization of material design in the trade-off between complexity and function, and new approaches to explore material technosignatures which can be unambiguously identified as products of engineered design.
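As a rough illustration of the one-dimensional stacking picture described above (a toy sketch under stated assumptions, not the authors' code or the exact assembly index), the snippet below grows A/B/C layer sequences that interpolate between HCP (ABAB...) and FCC (ABCABC...) via a faulting probability, and scores each string with a greedy copy-reuse parse used here only as a crude upper-bound stand-in for an assembly-style step count; the parameter p_hex and both function names are invented for this example.

```python
import random

LAYERS = "ABC"

def stacking_sequence(n, p_hex, seed=None):
    """Grow a close-packed stacking sequence of n layers.
    Each new layer must differ from the previous one; with probability p_hex
    it repeats the layer two positions back (hexagonal, HCP-like step),
    otherwise it takes the remaining third letter (cubic, FCC-like step)."""
    rng = random.Random(seed)
    seq = ["A", "B"]
    while len(seq) < n:
        prev, prev2 = seq[-1], seq[-2]
        third = next(c for c in LAYERS if c not in (prev, prev2))
        seq.append(prev2 if rng.random() < p_hex else third)
    return "".join(seq[:n])

def copy_reuse_steps(s):
    """Greedy left-to-right parse: each step either appends one new layer or
    copies a block that already occurs in the built prefix. The step count is
    only a rough upper-bound proxy for an assembly-like path length."""
    i, steps = 0, 0
    while i < len(s):
        length = 1
        while i + length <= len(s) and length <= i and s[i:i + length] in s[:i]:
            length += 1
        i += max(1, length - 1)   # copy the matched block, or append one layer
        steps += 1
    return steps

if __name__ == "__main__":
    for p in (0.0, 0.2, 0.5, 0.8, 1.0):   # pure FCC -> randomly faulted -> pure HCP
        s = stacking_sequence(200, p, seed=1)
        print(f"p_hex={p:.1f}  steps={copy_reuse_steps(s)}")
```

The ordered end members parse in few steps because ever-longer blocks can be copied, while heavily faulted sequences need many more, which is the qualitative behaviour one would want from a complexity measure that tracks the transformation; the paper itself should be consulted for the actual assembly calculations.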

Read the full article at: arxiv.org

A Bayesian Interpretation of the Internal Model Principle

Manuel Baltieri, Martin Biehl, Matteo Capucci, Nathaniel Virgo

The internal model principle, originally proposed in the theory of control of linear systems, nowadays represents a more general class of results in control theory and cybernetics. The central claim of these results is that, under suitable assumptions, if a system (a controller) can regulate against a class of external inputs (from the environment), it is because the system contains a model of the system causing these inputs, which can be used to generate signals counteracting them. Similar claims about the role of internal models also appear in cognitive science, especially in modern Bayesian treatments of cognitive agents, often suggesting that a system (a human subject, or some other agent) models its environment to adapt against disturbances and perform goal-directed behaviour. It is however unclear whether the Bayesian internal models discussed in cognitive science bear any formal relation to the internal models invoked in standard treatments of control theory. Here, we first review the internal model principle and present a precise formulation of it using concepts inspired by categorical systems theory. This leads to a formal definition of ‘model’ generalising its use in the internal model principle. Although this notion of model is not a priori related to the notion of Bayesian reasoning, we show that it can be seen as a special case of possibilistic Bayesian filtering. This result is based on a recent line of work formalising, using Markov categories, a notion of ‘interpretation’, describing when a system can be interpreted as performing Bayesian filtering on an outside world in a consistent way.
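For readers coming from the cognitive-science side, the minimal numeric sketch below shows what an ordinary discrete Bayesian filter does (predict through the dynamics, then condition on the observation); it is included only to fix intuition and does not reproduce the paper's categorical, possibilistic construction (where, roughly, probabilities are replaced by 0/1 possibility values). The transition and observation matrices are made up for the example.

```python
import numpy as np

# A hidden two-state environment; the "controller" maintains a belief over its state.
T = np.array([[0.9, 0.1],   # T[i, j] = P(next state j | current state i)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],   # O[i, k] = P(observation k | state i)
              [0.3, 0.7]])

def filter_step(belief, obs):
    """One predict-update cycle: push the belief through the dynamics,
    then weight by the likelihood of the observation and renormalise (Bayes' rule)."""
    predicted = belief @ T
    posterior = predicted * O[:, obs]
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])        # flat prior over the hidden state
for obs in [0, 0, 1, 1, 1]:          # an arbitrary observation sequence
    belief = filter_step(belief, obs)
    print(np.round(belief, 3))
```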

Read the full article at: arxiv.org

Physical Network Constraints Define the Lognormal Architecture of the Brain’s Connectome

Ben Piazza, Dániel L. Barabási, André Ferreira Castro, Giulia Menichetti, Albert-László Barabási

The brain has long been conceptualized as a network of neurons connected by synapses. However, attempts to describe the connectome using established network science models have yielded conflicting outcomes, leaving the architecture of neural networks unresolved. Here, by performing a comparative analysis of eight experimentally mapped connectomes, we find that their degree distributions cannot be captured by the well-established random or scale-free models. Instead, the node degrees and strengths are well approximated by lognormal distributions, although these lack a mechanistic explanation in the context of the brain. By acknowledging the physical network nature of the brain, we show that neuron size is governed by a multiplicative process, which allows us to analytically derive the lognormal nature of the neuron length distribution. Our framework not only predicts the degree and strength distributions across each of the eight connectomes, but also yields a series of novel and empirically falsifiable relationships between different neuron characteristics. The resulting multiplicative network represents a novel architecture for network science, whose distinctive quantitative features bridge critical gaps between neural structure and function, with implications for brain dynamics, robustness, and synchronization.
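The multiplicative mechanism invoked here is easy to illustrate numerically: if a neuron's size is the product of many independent positive growth factors, its logarithm is a sum of many terms, so the central limit theorem makes the log approximately normal and the size approximately lognormal. The sketch below is illustrative only; the growth-factor statistics are invented and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_steps = 50_000, 200

# Multiplicative growth: size_{t+1} = size_t * factor_t, with non-lognormal factors
factors = rng.uniform(0.95, 1.10, size=(n_neurons, n_steps))
log_sizes = np.log(factors).sum(axis=1)       # log of the final size

# Crude normality check: standardised log-sizes vs standard-normal quantiles
z = (log_sizes - log_sizes.mean()) / log_sizes.std()
for q, ref in zip((0.16, 0.50, 0.84), (-1.0, 0.0, 1.0)):
    print(f"quantile {q:.2f}: empirical {np.quantile(z, q):+.2f}   normal {ref:+.2f}")
```

If connection counts scale roughly with neuron size, the degrees would inherit the same lognormal shape; that chain of reasoning is what the abstract summarizes, and the paper derives it formally.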

Read the full article at: www.biorxiv.org

Special Issue on “Cybernetics and Systems Education: Past, Present, and Future”

Submission Deadlines:
Abstracts: 01 April 2025
Full Papers: 01 August 2025
Publication: March-April 2026

Cybernetics and systems education have long played a vital role in understanding complex, purposeful and adaptive systems. With the advent of next-generation artificial intelligence and the vast range of complex socio-technical systems that require collective transformation, cybernetic and systems principles have only become more relevant. There is a growing, transnational need for education programs to prepare the current and next generation to operate within and beyond these frameworks.

This special issue seeks to bring together educators, researchers and practitioners to explore the past, present, and future of cybernetics and systems education. We aim to examine how cybernetic concepts and systems thinking have been previously and/or are currently integrated into educational paradigms, showcase novel approaches to teaching these principles, and envision transformative methodologies that may shape the future of cybernetics and systems education.

Through this special issue we seek to create and promote a transnational network to further cybernetic and cybernetically informed systems education.

Read the full article at: onlinelibrary.wiley.com

Brains, Minds and Machines

The goal of this course is to help produce a community of leaders that is equally knowledgeable in neuroscience, cognitive science, and computer science and will lead the scientific understanding of intelligence and the development of true biologically inspired AI.

Course/Program Dates:
Aug 03, 2025 – Aug 24, 2025
Application due date:
Mar 24, 2025

The basis of intelligence – how the brain produces intelligent behavior and how to endow machines with human-like intelligence – is arguably the greatest problem in science and technology. To solve it, we will need to understand how natural intelligence emerges from computations in neural circuits, with rigor sufficient to reproduce similar intelligent behavior in machines. Success in this endeavor will ultimately enable us to understand ourselves better, to produce smarter machines, and perhaps even to make ourselves smarter. Today’s AI technologies are impressive but quite different from human intelligence. We still do not understand the mechanisms underlying the robustness, the generalization, and the continual learning capabilities of biological intelligence. The synergistic combination of cognitive science, neurobiology, engineering, mathematics, and computer science holds the promise of significant progress. Elucidating how human intelligence works will in turn lead to more sophisticated AI algorithms.

Apply at: www.mbl.edu