Measuring Complexity using Information

Klaus Jaffe

Measuring complexity in multidimensional systems with high degrees of freedom and a variety of types of information remains an important challenge. The complexity of a system is related to the number and variety of its components, the number and type of interactions among them, the degree of redundancy, and the degrees of freedom of the system. Examples show that different disciplines of science converge on complexity measures for low- and high-dimensional problems. For low-dimensional systems, such as coded strings of symbols (text, computer code, DNA, RNA, proteins, music), Shannon’s Information Entropy (the expected amount of information in an event drawn from a given distribution) and Kolmogorov’s Algorithmic Complexity (the length of the shortest algorithm that produces the object as output) are used for quantitative measurements of complexity. For systems with more dimensions (ecosystems, brains, social groupings), network science provides better tools for that purpose. For highly complex multidimensional systems, none of the former methods is useful. Here, information related to complexity can be studied in systems ranging from the subatomic to the ecological, social, mental, and artificial. Useful Information Φ (information that produces thermodynamic free energy) can be quantified by measuring the thermodynamic free energy and/or useful work it produces. Complexity can be measured as the Total Information I of the system, which includes Φ, useless information or Noise N, and Redundant Information R. Measuring one or more of these variables allows complexity to be quantified and classified. Complexity and Information are two windows overlooking the same fundamental phenomenon, broadening our tools for exploring the deep structural dynamics of nature at all levels of complexity, including natural and artificial intelligence.
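For the low-dimensional case the abstract mentions, Shannon entropy is straightforward to estimate from symbol frequencies. A minimal sketch (not from the article; the function name and example strings are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per symbol, estimated from symbol frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive (low-complexity) string vs. a maximally varied one:
print(shannon_entropy("AAAAAAAA"))  # 0.0 — a single repeated symbol carries no information
print(shannon_entropy("ABCDABCD"))  # 2.0 — four equiprobable symbols need 2 bits each
```

Note this estimates entropy from observed frequencies only; Kolmogorov complexity, by contrast, is uncomputable in general and is typically approximated, e.g. by compressed length.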

Read the full article at: papers.ssrn.com

Is stochastic thermodynamics the key to understanding the energy costs of computation?

David H. Wolpert, et al.

PNAS 121 (45) e2321112121

The relationship between the thermodynamic and computational properties of physical systems has been a major theoretical interest since at least the 19th century. It has also become of increasing practical importance over the last half-century as the energetic cost of digital devices has exploded. Importantly, real-world computers obey multiple physical constraints on how they work, which affects their thermodynamic properties. Moreover, many of these constraints apply to both naturally occurring computers, like brains or Eukaryotic cells, and digital systems. Most obviously, all such systems must finish their computation quickly, using as few degrees of freedom as possible. This means that they operate far from thermal equilibrium. Furthermore, many computers, both digital and biological, are modular, hierarchical systems with strong constraints on the connectivity among their subsystems. Yet another example is that to simplify their design, digital computers are required to be periodic processes governed by a global clock. None of these constraints were considered in 20th-century analyses of the thermodynamics of computation. The new field of stochastic thermodynamics provides formal tools for analyzing systems subject to all of these constraints. We argue here that these tools may help us understand at a far deeper level just how the fundamental thermodynamic properties of physical systems are related to the computation they perform.
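The canonical 20th-century result the article builds beyond is Landauer's principle: erasing one bit of information dissipates at least k_B·T·ln 2 of heat. A back-of-envelope calculation (not from the article) gives a sense of the scale:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact by SI definition)

def landauer_limit_joules(temperature_k: float = 300.0) -> float:
    """Minimum heat dissipated per erased bit: k_B * T * ln 2."""
    return K_B * temperature_k * math.log(2)

# At roughly room temperature (300 K):
print(f"{landauer_limit_joules(300.0):.3e} J per bit")  # ≈ 2.871e-21 J
```

Real devices dissipate many orders of magnitude more than this bound, which is part of why the constraints the authors highlight (finite speed, modularity, clocking) matter for practical energy costs.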

Read the full article at: www.pnas.org

Rethinking life and predicting its origin

Diogo Gonçalves

Theory in Biosciences, Volume 143, pages 205–215 (2024)

The definition, origin and recreation of life remain elusive. As others have suggested, only once we put life into reductionist physical terms will we be able to solve those questions. To that end, this work proposes the phenomenon of life to be the product of two dissipative mechanisms. From them, one characterises extant biological life and deduces a testable scenario for its origin. The proposed theory of life allows its replication, reinterprets ecological evolution and creates new constraints on the search for life.

Read the full article at: link.springer.com

A History of Bodies, Brains, and Minds: The Evolution of Life and Consciousness, by Francisco Aboitiz

A panoramic view of the evolution of life on our planet, from its origins to humanity’s future.

In A History of Bodies, Brains, and Minds, Francisco Aboitiz provides a brief history of life, the brain, and cognition, from the earliest living beings to our own species. The author proceeds from the basic premise that, since evolution by natural selection is the process underlying the origin of life and its evolution on earth, the brain—and thus our minds—must also be the result of biological evolution. The aim of this book is to narrate how animal bodies came to be built with their nervous systems and how our species evolved with culture, technology, language, and consciousness.

The book is organized in four parts, each delving into a different aspect of evolutionary development:
• Definitions lays the groundwork by discussing the principles of biological evolution and exploring the definition and mechanisms of life itself.
• Beginnings describes the origins of life, from the emergence of the first cells to the development of neurons as the building blocks of brain networks.
• The Rise of Bodies and Brains examines the evolution of animals with bilateral symmetry, the emergence of chordates and vertebrates, and the expansion and diversification of the vertebrate brain.
• A Singular Ape explores Homo sapiens and our species’ unique traits, such as bipedality, tool use, culture, language, communication, and consciousness.
Comprehensive and deeply insightful, this book helps us understand our place in the natural world and the cosmos—as well as what the future might hold for life on earth.

More at: mitpress.mit.edu

From Sensing to Sentience: How Feeling Emerges from the Brain, by Todd E. Feinberg

A new theory of Neurobiological Emergentism that explains how sentience emerges from the brain.

Sentience is the feeling aspect of consciousness. In From Sensing to Sentience, Todd Feinberg develops a new theory called Neurobiological Emergentism (NBE) that integrates biological, neurobiological, evolutionary, and philosophical perspectives to explain how sentience naturally emerges from the brain.

Emergent properties are broadly defined as features of a complex system that are not present in its parts considered in isolation but may emerge from those parts and their interactions. Tracing billions of years of evolution, from the basic sensing capabilities of single-celled organisms up to the sentience of animals with advanced nervous systems, including all vertebrates (for instance, fish, reptiles, birds, and mammals), arthropods (such as insects and crabs), and cephalopods (such as the octopus), Feinberg argues that sentience emerged gradually along diverse evolutionary lines, once sufficiently neurobiologically complex brains evolved during the Cambrian period, over 520 million years ago.

Ultimately, Feinberg argues that viewing sentience as an emergent process can explain both its neurobiological basis and its perplexing personal nature, thus solving the historical philosophical problem of the apparent “explanatory gap” between the brain and experience.

More at: mitpress.mit.edu