Group Selection as a Safeguard Against AI Substitution
Reliance on generative AI can reduce cultural variance and diversity, especially in creative work. This reduction in variance has already led to problems in model performance, including model collapse and hallucination. In this paper, we examine the long-term consequences of AI use for human cultural evolution and the conditions under which widespread AI use may lead to “cultural collapse”, a process in which reliance on AI-generated content reduces human variation and innovation and slows cumulative cultural evolution. Using an agent-based model and evolutionary game theory, we compare two types of AI use: complement and substitute. AI-complement users seek suggestions and guidance while remaining the main producers of the final output, whereas AI-substitute users provide minimal input and rely on AI to produce most of the output. We then study how these use strategies compete and spread under evolutionary dynamics. We find that AI-substitute users prevail under individual-level selection despite the stronger reduction in cultural variance. By contrast, AI-complement users can benefit their groups by maintaining the variance needed for exploration, and can therefore be favored under cultural group selection when group boundaries are strong. Overall, our findings shed light on the long-term, population-level effects of AI adoption and inform policy and organizational strategies to mitigate these risks.
💡 Research Summary
The paper investigates how widespread use of generative AI (GenAI) may affect human cultural evolution, focusing on the tension between short‑term efficiency and long‑term cultural diversity. The authors distinguish two broad usage strategies: AI‑Complement, where users solicit suggestions from AI but retain primary control over the final product, and AI‑Substitute, where users provide minimal prompts and let the AI generate most of the output. Drawing on Henrich’s cumulative cultural evolution (CCE) framework, they construct an agent‑based model in which each agent in a population of N possesses a cultural skill level z. In each learning round, an agent identifies the highest‑skill individual in its social neighbourhood, copies that model, and draws its post‑learning skill z′ from a Gumbel distribution with location μ = z_j − α and scale β. The parameters α (average learning error) and β (variance of learning outcomes) capture the stochastic nature of cultural transmission.
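The learning step described above can be sketched as follows. This is a minimal illustration, not the paper's code: the parameter values (N, α, β) and the simplification that every agent learns from the single population-wide best model are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def learning_round(z, alpha, beta, rng):
    """One round of cumulative cultural learning.

    Each agent copies the highest-skill model and draws its
    post-learning skill z' ~ Gumbel(loc = z_best - alpha, scale = beta):
    alpha is the average learning error, beta the dispersion of outcomes.
    """
    z_best = z.max()  # skill of the best available model
    return rng.gumbel(loc=z_best - alpha, scale=beta, size=z.size)

z = np.zeros(100)  # N = 100 agents, initial skill 0 (illustrative values)
for _ in range(50):
    z = learning_round(z, alpha=1.0, beta=0.6, rng=rng)
```

Under this best-model rule, mean skill can grow cumulatively despite the average error α, because the maximum of N Gumbel draws exceeds its location by roughly β(ln N + γ), where γ is the Euler–Mascheroni constant; shrinking β therefore directly throttles cumulative gains.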
AI usage modifies these parameters. For the Complement strategy, the error reduction factor r(C)_α and variance reduction factor r(C)_β scale down α and β moderately. For the Substitute strategy, larger reductions r(S)_α and r(S)_β are applied, reflecting the stronger reliance on AI. The authors then model strategy diffusion via “pay‑off‑biased social learning”: an agent i samples another agent k and adopts k’s strategy with probability P(i←k)=1/(1+e^{−δ(z_k−z_i)}). This rule implements a replicator‑like dynamic where higher cultural skill translates into higher reproductive (or imitation) success.
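The payoff-biased adoption rule is a logistic function of the skill difference and is straightforward to sketch; δ (the selection-strength parameter) and the single-pair update shown here are illustrative choices, not values from the paper.

```python
import numpy as np

def adoption_prob(z_i, z_k, delta=1.0):
    """P(i <- k) = 1 / (1 + exp(-delta * (z_k - z_i))): agent i adopts
    agent k's strategy with probability increasing in k's skill advantage."""
    return 1.0 / (1.0 + np.exp(-delta * (z_k - z_i)))

def strategy_update(z, strategies, delta, rng):
    """One imitation event: a random agent i observes a random agent k
    and switches strategy with the logistic probability above."""
    i, k = rng.choice(z.size, size=2, replace=False)
    if rng.random() < adoption_prob(z[i], z[k], delta):
        strategies[i] = strategies[k]

adoption_prob(2.0, 2.0)  # equal skill -> 0.5 (a coin flip)
```

Because the probability depends only on the skill gap, repeated imitation events of this kind approximate replicator dynamics: strategies attached to higher-skill agents are copied more often and spread.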
Two experimental settings are explored. In a well‑mixed population, the Substitute strategy quickly dominates because it yields higher immediate skill (lower α) and spreads faster. However, the aggressive reduction in β drains cultural variance, leading to a rapid plateau in cumulative skill gains and a risk of “cultural collapse” where innovation stalls. In contrast, when agents interact predominantly within tightly knit groups (strong group structure, limited inter‑group mixing), the Complement strategy can persist. Although its short‑term average skill is lower, the modest reduction in β preserves enough variance for ongoing exploration, resulting in a steadier, higher long‑term cumulative cultural trajectory. Simulations show that Complement‑adopting groups overtake Substitute‑dominated ones after roughly 15–20 generations despite an early lag.
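The contrast between the two settings comes down to where an agent finds its model. One way to interpolate between them is a mixing rate m, where m = 1 recovers the well-mixed population and m = 0 gives fully closed groups; m is a hypothetical parameter introduced for this sketch, not notation from the paper.

```python
import numpy as np

def best_model_skill(z, groups, g, m, rng):
    """Skill of the model observed by an agent in group g.

    With probability m the agent samples from the whole population
    (weak group boundaries); otherwise it samples only its own group
    (strong boundaries). m is a hypothetical mixing rate for illustration.
    """
    if rng.random() < m:
        return z.max()                # global best model
    return z[groups == g].max()       # best model within the agent's group
```

With strong boundaries (small m), a group whose members preserve variance (Complement users) cannot be free-ridden upon by outsiders, which is the condition under which group-level benefits of maintained exploration can outweigh the individual-level advantage of substitution.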
To complement the agent‑based experiments, the authors conduct an analytical replicator‑dynamics study. Expected payoffs π_s(x) for each strategy s∈{0 (no AI), C, S} are estimated via Monte‑Carlo sampling of the learning step while holding the population composition x fixed. The payoff functions are frequency‑dependent because the skill of the best model z_j depends on the distribution of skills across strategies. Results confirm that while π_S is initially highest, it declines sharply as x_S increases (due to variance loss). Conversely, π_C rises with x_C once a critical mass is reached, creating an interior evolutionarily stable state where both Complement users and non‑AI users coexist, and the Substitute strategy is eliminated.
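The Monte-Carlo-plus-replicator machinery can be sketched as below. Two caveats: the reduction factors in `PARAMS` are illustrative stand-ins for the paper's r values, and this sketch drops the frequency dependence by fixing the best model's skill at 0, so each π_s is just the expected one-round skill gain of strategy s.

```python
import numpy as np

# (alpha reduction, beta reduction) per strategy -- illustrative values only.
PARAMS = {"none": (1.0, 1.0), "C": (0.7, 0.8), "S": (0.3, 0.3)}

def expected_payoffs(alpha, beta, samples=2000, rng=None):
    """Monte-Carlo estimate of pi_s: mean post-learning skill gain,
    with the best model's skill normalised to 0."""
    if rng is None:
        rng = np.random.default_rng()
    pi = {}
    for s, (r_a, r_b) in PARAMS.items():
        draws = rng.gumbel(loc=-alpha * r_a, scale=beta * r_b, size=samples)
        pi[s] = draws.mean()
    return pi

def replicator_step(x, pi, dt=0.1):
    """Discrete replicator update: x_s grows when pi_s beats the mean payoff."""
    pibar = sum(x[s] * pi[s] for s in x)
    new = {s: max(x[s] + dt * x[s] * (pi[s] - pibar), 0.0) for s in x}
    total = sum(new.values())
    return {s: v / total for s, v in new.items()}
```

In the paper's full analysis the payoffs are re-estimated at each composition x (since z_j depends on who is in the population), which is what produces the decline of π_S as x_S grows and the interior stable state described above; the sketch only shows the estimation and update steps.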
The discussion interprets these findings in evolutionary terms: individual‑level selection favors the “cheapest” path to higher skill (AI‑Substitute), but group‑level selection can favor strategies that maintain cultural variance, a prerequisite for long‑term adaptation and innovation. The authors argue that policy and organizational design should therefore encourage Complement usage—e.g., by providing AI tools that act as brainstorming assistants rather than full generators, by training workers to critically engage with AI suggestions, and by preserving a substantial corpus of human‑generated content for model training. They also suggest mechanisms to limit the proportion of synthetic data in training pipelines to avoid feedback loops that erode cultural diversity.
In conclusion, the paper provides a rigorous theoretical and computational demonstration that AI‑Substitute strategies, while advantageous in the short run, risk undermining the very cultural substrate that fuels cumulative progress. By contrast, AI‑Complement strategies can be favored under strong group selection pressures, acting as a safeguard against cultural collapse. The work bridges AI ethics, cultural evolution theory, and organizational policy, offering concrete guidance for mitigating long‑term societal risks associated with pervasive generative AI.