What makes Individual I’s a Collective We; Coordination mechanisms & costs

Jisung Yoon, Chris Kempes, Vicky Chuqiao Yang, Geoffrey West, Hyejin Youn

For a collective to become greater than the sum of its parts, individuals' efforts and activities must be coordinated or regulated. Because it is not readily observable or measurable, this aspect of complex systems often goes unnoticed and understudied. Diving into the Wikipedia ecosystem, where people are free to join and voluntarily edit individual pages with no firm rules, we identified and quantified three fundamental coordination mechanisms and found that they scale with the influx of contributors in a remarkably systematic way over three orders of magnitude. First, we found super-linear growth in mutual adjustment (scaling exponent: 1.3), manifested through extensive discussions and activity reversals. Second, the increase in direct supervision (scaling exponent: 0.9), represented by administrators' activities, is disproportionately limited. Finally, rule enforcement, reflected in automated bots, escalates slowest (scaling exponent: 0.7). The observed scaling exponents are notably robust across topical categories, with minor variations attributable to topical complexity. Our findings suggest that as more people contribute to a project, a self-regulating ecosystem incurs mutual adjustment faster than direct supervision and rule enforcement. These findings have practical implications for online collaborative communities aiming to enhance their coordination efficiency, and for how we understand human organizations in general.
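The scaling exponents reported above are slopes of activity volume versus contributor count on log-log axes. As a minimal sketch (not the authors' code), such an exponent can be estimated by least-squares regression in log-log space; the contributor counts and noise model below are synthetic assumptions chosen only to illustrate the fit:

```python
import numpy as np

def scaling_exponent(contributors, activity):
    """Estimate beta in activity ~ c * contributors**beta
    via least-squares regression in log-log space."""
    slope, intercept = np.polyfit(np.log(contributors), np.log(activity), 1)
    return slope

# Synthetic illustration: a noisy power law with exponent 1.3,
# echoing the super-linear mutual-adjustment scaling reported above.
rng = np.random.default_rng(0)
n = np.logspace(1, 4, 50)                # contributor counts spanning three orders of magnitude
y = 2.0 * n**1.3 * rng.lognormal(0.0, 0.05, size=n.size)  # multiplicative noise
beta = scaling_exponent(n, y)            # recovers a value close to 1.3
```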

Read the full article at: arxiv.org

Meditation and Complexity: a Systematic Review

Daniel Andrew Atad, Pedro A. M. Mediano, Fernando Rosas, Aviva Berkovich-Ohana

Recent years have seen a growing interest in the use of measures inspired by complexity science for the study of consciousness. Work in this field has shown remarkable results in discerning conscious from unconscious states, and in characterizing states of altered conscious experience following intake of psychedelic substances as involving enhanced complexity. However, the relationship between meditation and complexity is unclear, as empirical studies based on different theoretical frameworks point to meditation being associated with either enhancement or reduction of complexity. Here we provide a systematic review of the accumulating literature studying the complexity of neural activity in meditation, which disentangles different families of measures, short-term (state) from long-term (trait) effects, and meditation styles. Across families of measures, our review uncovers a convergence toward identifying higher complexity of neural activity during the meditative state compared to waking rest or mind-wandering, and decreased baseline complexity as a trait in experienced meditators compared to novices and controls. This review helps guide current debates and provides a framework for understanding the complexity of neural activity in meditation, while suggesting practical guidelines for future research in the field.
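One family of measures widely used in this literature is Lempel-Ziv complexity, typically computed on binarized neural time series. As an illustrative sketch only (the review covers several families of measures; this is one common representative, not a method the review prescribes), a minimal LZ76 phrase-counting implementation:

```python
def lz_complexity(s):
    """LZ76 complexity of a binary string: the number of phrases in the
    exhaustive parsing, where each new phrase is the shortest substring
    not already reproducible from the preceding history.
    In EEG applications the string is usually obtained by thresholding
    the signal at its median (binarization)."""
    n = len(s)
    phrases = 0
    i = 0
    while i < n:
        l = 1
        # Extend the candidate phrase while it still occurs earlier
        # in the sequence (overlap with the current phrase allowed).
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases
```

A constant sequence parses into very few phrases (low complexity), while irregular sequences yield more; trait and state comparisons in the literature are based on such counts, usually normalized by sequence length.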

Read the full article at: psyarxiv.com

On Hate Scaling Laws For Data-Swamps

Abeba Birhane, Vinay Prabhu, Sang Han, Vishnu Naresh Boddeti

'Scale the model, scale the data, scale the GPU-farms' is the reigning sentiment in the world of generative AI today. While model scaling has been extensively studied, data scaling and its downstream impacts remain underexplored. This is especially of critical importance in the context of visio-linguistic datasets whose main source is the World Wide Web, condensed and packaged as the CommonCrawl dump. This large-scale data-dump, which is known to have numerous drawbacks, is repeatedly mined and serves as the data-motherlode for large generative models. In this paper, we: 1) investigate the effect of scaling datasets on hateful content through a comparative audit of LAION-400M and LAION-2B-en, containing 400 million and 2 billion samples respectively, and 2) evaluate the downstream impact of scale on visio-linguistic models trained on these dataset variants by measuring the racial bias of the resulting models using the Chicago Face Dataset (CFD) as a probe. Our results show that 1) the presence of hateful content in datasets, when measured with a Hate Content Rate (HCR) metric on the inferences of the Pysentimiento hate-detection Natural Language Processing (NLP) model, increased by nearly 12%, and 2) societal biases and negative stereotypes were also exacerbated with scale on the models we evaluated. As scale increased, the tendency of the model to associate images of human faces with the 'human being' class over 7 other offensive classes reduced by half. Furthermore, for the Black female category, the tendency of the model to associate their faces with the 'criminal' class doubled, while quintupling for Black male faces. We present a qualitative and historical analysis of the model audit results, reflect on our findings and their implications for dataset curation practice, and close with a summary of our findings and potential future work in this area.
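The Hate Content Rate is, in essence, the fraction of samples a hate-speech classifier flags as hateful; the "nearly 12%" figure above is a relative change in that rate between dataset variants. A minimal sketch of the comparative-audit arithmetic, assuming hypothetical classifier outputs (the paper itself runs the Pysentimiento model over sample alt-texts; the inputs below are made up):

```python
def hate_content_rate(flags):
    """Fraction of samples flagged as hateful.
    `flags` is a list of booleans, e.g. per-sample outputs of a
    hate-speech classifier run over alt-texts (hypothetical here)."""
    return sum(flags) / len(flags) if flags else 0.0

def relative_increase_pct(hcr_small, hcr_large):
    """Percentage change in HCR from the smaller to the larger
    dataset variant (e.g. a 400M-sample vs. 2B-sample audit)."""
    return (hcr_large - hcr_small) / hcr_small * 100.0
```

For example, an HCR moving from 0.50% to 0.56% would register as a 12% relative increase; these particular values are illustrative, not the paper's measurements.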

Read the full article at: arxiv.org