The illusion of conscious AI

Neuroscientist Anil Seth lays out three reasons why people tend to overestimate the odds of AI becoming conscious. No one knows what it would take to build a conscious machine — but as Seth notes, we can’t rule it out. Given the unknowns, he warns against trying to deliberately create artificial consciousness.

Read the full article at: bigthink.com

Constructor theory of time

David Deutsch, Chiara Marletto

Constructor theory asserts that the laws of physics are expressible as specifications of which transformations of physical systems can or cannot be brought about with unbounded accuracy by devices capable of operating in a cycle (‘constructors’). Hence, in particular, such specifications cannot refer to time. Thus, laws expressed in constructor-theoretic form automatically avoid the anomalous properties of time in traditional formulations of fundamental theories. But that raises the problem of how they can nevertheless give meaning to duration and dynamics, and thereby be compatible with traditionally formulated laws. Here we show how.

Read the full article at: arxiv.org

Chemical Complexity of Food and Implications for Therapeutics

Giulia Menichetti, Ph.D., Albert-László Barabási, Ph.D., and Joseph Loscalzo, M.D., Ph.D.

N Engl J Med 2025;392:1836-1845

Food contains more than 139,000 molecules, which influence nearly half the human proteome. Systematic analysis of food–chemical interactions can potentially advance nutrition science and drug discovery.

Read the full article at: www.nejm.org

AI in a vat: Fundamental limits of efficient world modelling for agent sandboxing and interpretability

Fernando Rosas, Alexander Boyd, Manuel Baltieri

Recent work proposes using world models to generate controlled virtual environments in which AI agents can be tested before deployment to ensure their reliability and safety. However, accurate world models often have high computational demands that can severely restrict the scope and depth of such assessments. Inspired by the classic 'brain in a vat' thought experiment, here we investigate ways of simplifying world models that remain agnostic to the AI agent under evaluation. By following principles from computational mechanics, our approach reveals a fundamental trade-off in world model construction between efficiency and interpretability, demonstrating that no single world model can optimise all desirable characteristics. Building on this trade-off, we identify procedures to build world models that either minimise memory requirements, delineate the boundaries of what is learnable, or allow tracking causes of undesirable outcomes. In doing so, this work establishes fundamental limits in world modelling, leading to actionable guidelines that inform core design choices related to effective agent evaluation.

Read the full article at: arxiv.org

Feedback. A podcast with Peter Erdi and Nicholas Golledge, moderated by Carlos Gershenson.

Nick and Peter each wrote a book with the same title, Feedback. They discuss the similarities and differences between their two books.

Feedback: How to Destroy and Save the World by Peter Erdi.

Feedback: Uncovering the Hidden Connections between Life and the Universe by Nicholas Golledge.

Watch/listen at: www.youtube.com