Provenance of life: Chemical autonomous agents surviving through associative learning

Stuart Bartlett and David Louapre

Phys. Rev. E 106, 034401

We present a benchmark study of autonomous chemical agents exhibiting associative learning of an environmental feature. Associative learning systems have been widely studied in cognitive science and artificial intelligence but are most commonly implemented in highly complex or carefully engineered systems, such as animal brains, artificial neural networks, DNA computing systems, and gene regulatory networks, among others. The ability to encode environmental information and use it to make simple predictions is a benchmark of biological resilience and underpins a plethora of adaptive responses in the living hierarchy, ranging from prey species anticipating the arrival of predators to epigenetic systems in microorganisms learning environmental correlations. Given the ubiquitous and essential presence of learning behaviors in the biosphere, we aimed to explore whether simple, nonliving dissipative structures could also exhibit associative learning. Inspired by previous modeling of associative learning in chemical networks, we simulated simple systems composed of long- and short-term memory chemical species that could encode the presence or absence of temporal correlations between two external species. The ability to learn this association was implemented in Gray-Scott reaction-diffusion spots, emergent chemical patterns that exhibit self-replication and homeostasis. With the novel ability of associative learning, we demonstrate that simple chemical patterns can exhibit a broad repertoire of lifelike behavior, paving the way for in vitro studies of autonomous chemical learning systems, with potential relevance to artificial life, origins of life, and systems chemistry. The experimental realization of these learning behaviors in protocell or coacervate systems could advance a new research direction in astrobiology, since our system significantly reduces the lower bound on the required complexity for autonomous chemical learning.
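The Gray-Scott reaction-diffusion substrate mentioned above can be simulated in a few lines. The sketch below is a minimal, generic Gray-Scott integrator, not the authors' code; the parameter values (feed rate F, kill rate k, diffusion coefficients) are illustrative choices from the model's pattern-forming regime and may differ from those used in the paper.

```python
import numpy as np

def laplacian(a):
    # Five-point stencil with periodic boundary conditions.
    return (np.roll(a, 1, axis=0) + np.roll(a, -1, axis=0)
            + np.roll(a, 1, axis=1) + np.roll(a, -1, axis=1) - 4.0 * a)

def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08,
               F=0.035, k=0.065, dt=1.0, seed=0):
    """Integrate the Gray-Scott model on an n x n periodic grid.

    du/dt = Du * lap(u) - u*v^2 + F*(1 - u)
    dv/dt = Dv * lap(v) + u*v^2 - (F + k)*v
    """
    rng = np.random.default_rng(seed)
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # Seed a small central perturbation so spots can nucleate.
    c = n // 2
    u[c - 4:c + 4, c - 4:c + 4] = 0.50
    v[c - 4:c + 4, c - 4:c + 4] = 0.25
    v += 0.01 * rng.random((n, n))
    for _ in range(steps):
        uvv = u * v * v
        u += dt * (Du * laplacian(u) - uvv + F * (1.0 - u))
        v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)
    return u, v
```

With explicit Euler time stepping, the scheme stays stable here because the diffusion numbers Du*dt and Dv*dt are below the 2D stability bound of 0.25 for a unit grid spacing.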

Read the full article at: link.aps.org

Sketch of a novel approach to a neural model

Gabriele Scheler
In this paper, we lay out a novel model of neuroplasticity in the form of a horizontal-vertical integration model of neural processing. We believe a new approach to neural modeling will benefit the third wave of AI. The horizontal plane consists of an adaptive network of neurons connected by transmission links, which generates spatio-temporal spike patterns. This fits with standard computational neuroscience approaches. Additionally, for each individual neuron there is a vertical part consisting of internal adaptive parameters steering the external membrane-expressed parameters which are involved in neural transmission. Each neuron has a vertical modular system of parameters corresponding to (a) external parameters at the membrane layer, divided into compartments (spines, boutons); (b) internal parameters in the submembrane zone and the cytoplasm, with its protein signaling network; and (c) core parameters in the nucleus for genetic and epigenetic information. In such models, each node (= neuron) in the horizontal network has its own internal memory. Neural transmission and information storage are systematically separated, an important conceptual advance over synaptic weight models. We discuss the membrane-based (external) filtering and selection of outside signals for processing vs. signal loss by fast fluctuations, and the neuron-internal computing strategies from intracellular protein signaling to the nucleus as the core system. We want to show that the individual neuron has an important role in the computation of signals and that many assumptions derived from the synaptic weight adjustment hypothesis of memory may not hold in a real brain. Not every transmission event leaves a trace, and the neuron is a self-programming device, rather than passively determined by current input. Ultimately, we strive to build a flexible memory system that processes facts and events automatically.
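The separation of transmission from storage, and the vertical (a)/(b)/(c) parameter hierarchy, can be caricatured in code. The sketch below is our own toy construction for illustration only, not the authors' model: the thresholds, update rules, and attribute names are all invented, and serve only to show membrane-level filtering (some events leave no trace) and a slow internal-to-core consolidation loop that steers the membrane parameters.

```python
from dataclasses import dataclass

@dataclass
class Neuron:
    # (a) External, membrane-expressed parameter used in transmission.
    membrane_gain: float = 1.0
    # (b) Internal parameter in the submembrane zone / cytoplasm.
    internal_state: float = 0.0
    # (c) Core (nuclear) parameter: slow, persistent memory.
    core_memory: float = 0.0

    def transmit(self, signal: float) -> float:
        # Membrane-level filtering: fast, weak fluctuations are
        # rejected and leave no trace in the neuron.
        if abs(signal) < 0.1:
            return 0.0
        out = self.membrane_gain * signal
        # Vertical integration: internal signaling integrates the
        # event and slowly consolidates it into core memory, which
        # in turn steers the membrane-expressed gain.
        self.internal_state += 0.5 * (signal - self.internal_state)
        self.core_memory += 0.01 * self.internal_state
        self.membrane_gain = 1.0 + 0.1 * self.core_memory
        return out
```

The key point the toy captures is that the "weight" (membrane_gain) is not adjusted directly by transmission events; it is set by the neuron's own internal state, so transmission and storage are distinct processes.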

Read the full article at: arxiv.org

Sequential motifs in observed walks

Timothy LaRock, Ingo Scholtes, Tina Eliassi-Rad
Journal of Complex Networks, Volume 10, Issue 5, October 2022, cnac036,

The structure of complex networks can be characterized by counting and analysing network motifs. Motifs are small graph structures that occur repeatedly in a network, such as triangles or chains. Recent work has generalized motifs to temporal and dynamic network data. However, existing techniques do not generalize to sequential or trajectory data, which represent entities moving through the nodes of a network, such as passengers moving through transportation networks. The unit of observation in these data is fundamentally different since we analyse observations of trajectories (e.g. a trip from airport A to airport C through airport B), rather than independent observations of edges or snapshots of graphs over time. In this work, we define sequential motifs in trajectory data, which are small, directed and sequence-ordered graphs corresponding to patterns in observed sequences. We draw a connection between the counting and analysis of sequential motifs and Higher-Order Network (HON) models. We show that by mapping edges of a HON, specifically a kth-order De Bruijn graph, to sequential motifs, we can count and evaluate their importance in observed data. We test our methodology with two datasets: (1) passengers navigating an airport network and (2) people navigating the Wikipedia article network. We find that the most prevalent and important sequential motifs correspond to intuitive patterns of traversal in the real systems and show empirically that the heterogeneity of edge weights in an observed higher-order De Bruijn graph has implications for the distributions of sequential motifs we expect to see across our null models.
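The mapping from observed walks to edges of a kth-order De Bruijn graph can be sketched directly: a node of the kth-order graph is a length-k sub-path, and an edge connects consecutive overlapping sub-paths, so each edge encodes a length-(k+1) sequential pattern. The function below is an illustrative sketch in this spirit (the name and data layout are our own); the paper's actual counting and null-model procedures involve additional details such as edge-weight heterogeneity.

```python
from collections import Counter

def debruijn_edges(trajectories, k=2):
    """Count edges of a kth-order De Bruijn graph from observed walks.

    trajectories: iterable of node sequences, e.g. [["A", "B", "C"], ...]
    Returns a Counter mapping (source_subpath, target_subpath) pairs,
    where each sub-path is a tuple of k nodes, to observed frequencies.
    """
    counts = Counter()
    for walk in trajectories:
        # Slide a window of length k+1 over the walk; the two
        # overlapping length-k windows form one De Bruijn edge.
        for i in range(len(walk) - k):
            src = tuple(walk[i:i + k])
            dst = tuple(walk[i + 1:i + k + 1])
            counts[(src, dst)] += 1
    return counts
```

For example, the single trip A -> B -> C -> D at k = 2 contributes the edges (A,B) -> (B,C) and (B,C) -> (C,D), each corresponding to a three-step sequential pattern.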

Read the full article at: academic.oup.com

Untangling the network effects of productivity and prominence among scientists

Weihua Li, Sam Zhang, Zhiming Zheng, Skyler J. Cranmer & Aaron Clauset
Nature Communications volume 13, Article number: 4907 (2022)

While inequalities in science are common, most efforts to understand them treat scientists as isolated individuals, ignoring the network effects of collaboration. Here, we develop models that untangle the network effects of the productivity (paper counts) and prominence (high-impact publications) of individual scientists from their collaboration networks. We find that gendered differences in the productivity and prominence of mid-career researchers can be largely explained by differences in their coauthorship networks. Hence, collaboration networks act as a form of social capital, and we find evidence of their transferability from senior to junior collaborators, with benefits that decay as researchers age. Collaboration network effects can also explain a large proportion of the productivity and prominence advantages held by researchers at prestigious institutions. These results highlight a substantial role of social networks in driving inequalities in science, and suggest that collaboration networks represent an important form of unequally distributed social capital that shapes who makes what scientific discoveries.

Read the full article at: www.nature.com