Emergence

2025-10-02

The world is full of systems that behave in complex ways even though their constituents engage in relatively simple behavior. It is the interactions of a large number of these simple constituents that seem to give rise to a result that is much, much richer than the sum of the parts.

Consider the mighty ant colony. It attacks, creates, feeds, grows and responds as one strange entity. No single ant can hope to orchestrate any of this intricate dance, and yet it is… just ants! A lot of them.

John Conway’s Game of Life features cells on a grid that are turned on or off individually using a dead simple rule based on each cell’s neighbors. And yet, when the rule is applied over and over, the grid comes alive with shapes that move and interact with each other.
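
To make the rule concrete, here is a minimal sketch of a single Life step in Python (just an illustration, not the code behind any demo on this page): a live cell survives with two or three live neighbors, and a dead cell comes alive with exactly three.

```python
from collections import Counter

# One Game of Life step over a sparse set of live cells.
# Rule: a live cell survives with 2 or 3 live neighbors;
# a dead cell is born with exactly 3; everything else dies.
def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A glider: after every 4 steps the same shape reappears, shifted diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # same glider shape, translated by (+1, +1)
```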

Langton’s ant is another curious game where a simple set of rules leads to complex macro behavior.
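
A sketch of Langton’s ant is just as small (again only an illustration): on a white cell the ant turns right, on a black cell it turns left, and either way it flips the cell’s color and steps forward. Run it long enough and, after roughly ten thousand chaotic steps, it settles into building an endless diagonal “highway”.

```python
# Langton's ant on an unbounded grid, using math-style coordinates (y grows upward).
# Rule: turn right on a white cell, left on a black cell, flip the cell, step forward.
def run_ant(steps: int) -> set[tuple[int, int]]:
    black: set[tuple[int, int]] = set()   # cells currently black; all others are white
    x, y, dx, dy = 0, 0, 0, 1             # ant starts at the origin, facing "up"
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx               # turn left
            black.remove((x, y))           # flip black -> white
        else:
            dx, dy = dy, -dx               # turn right
            black.add((x, y))              # flip white -> black
        x, y = x + dx, y + dy
    return black

# The first ~10,000 steps look chaotic; after that the ant builds its "highway".
print(len(run_ant(11_000)))  # number of black cells left behind
```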

Emergent behavior

When simple rules of interaction at a given scale give rise to complex behavior at a larger scale, the latter is dubbed emergent behavior. This emergent behavior may or may not be expected. Often, though, unexpected behaviors emerge while we are trying to effect some other, intended emergent behavior. There are thus two useful ways of studying emergent behavior.

In the top-down study, we hope to go from observed complex behavior to the local rules that entities at a smaller scale ought to follow: What is a single ant doing at any given moment?

The complementary bottom-up study aims to predict the macro effect of a given set of rules that are followed locally at a smaller scale. The design of swarm algorithms falls in this category: each member of the swarm makes local decisions based on what it reads from its local environment, and the desired swarm behavior results. A neat example comes from the paper titled A Markov Chain Algorithm for Compression in Self-Organizing Particle Systems, where the desired macro behavior is compression, i.e., getting swarm particles to cluster closely together.
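
To give a flavor of the bottom-up approach, here is a deliberately simplified toy that is only loosely inspired by that paper, not its actual algorithm: particles on a square grid repeatedly try hopping to a random adjacent empty cell, and a hop is accepted with a bias that favors gaining occupied neighbors. Purely local decisions, yet the swarm drifts toward tighter clusters.

```python
import random

# Toy local "compression": each step, one particle tries hopping to a random
# empty neighboring cell, and the hop is accepted with probability
# LAMBDA ** (neighbors gained). With LAMBDA > 1, hops that gain neighbors
# always succeed and hops that lose neighbors usually fail, so the swarm
# tightens up using purely local information.
LAMBDA = 4.0
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def neighbor_count(cell, occupied):
    x, y = cell
    return sum((x + dx, y + dy) in occupied for dx, dy in MOVES)

def compress(occupied: set, steps: int) -> set:
    occupied = set(occupied)
    for _ in range(steps):
        cell = random.choice(list(occupied))
        dx, dy = random.choice(MOVES)
        target = (cell[0] + dx, cell[1] + dy)
        if target in occupied:
            continue
        occupied.remove(cell)
        gain = neighbor_count(target, occupied) - neighbor_count(cell, occupied)
        if random.random() < LAMBDA ** gain:
            occupied.add(target)   # accept the hop
        else:
            occupied.add(cell)     # reject it, put the particle back
    return occupied

# Start with particles strung out on a line and watch total adjacency grow.
swarm = {(3 * i, 0) for i in range(20)}
swarm = compress(swarm, 50_000)
print(sum(neighbor_count(c, swarm) for c in swarm) // 2)  # number of adjacent pairs
```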

This kind of emergence sounds like magic, and it is magical in that it is awe-inspiring, but there is of course nothing mystical about it. This is particularly important today, when we speak of reasoning or problem-solving as emergent behaviors of LLM training. We do want our language models to appear to reason and help us solve complex problems that involve multi-step solutions. The fact that they acquire these problem-solving abilities suddenly, beyond a given training scale, is what makes us call it emergent behavior. But our use of language, as found on the Internet and hence in the training sets of all current LLMs, encodes human reasoning; to model that use of language, a model must pick up this latent reasoning structure. And one can see how it might require a certain scale of training data, compute, and model capacity to pick this structure up. It feels funky to be writing about this after using an LLM to write most of the code for each demo you see on this page.

Distributed software systems

Anyone who has worked with a distributed system (and definitely anyone who has built one) knows how complex systemic behaviors can arise from delays and interactions between components that aren’t perfectly reliable. A fascinating family of algorithms that creates complex systemic behavior through such interactions is the gossip protocol family. These protocols spread information through a network the way humans spread rumors: each node periodically tells a random neighbor what it knows.

One surprising thing about this simple algorithm is how fast it spreads information through the cluster. Asymptotically, it takes O(ln(N)) gossip periods before 99% of the nodes have learned the news. For example, in a 1000-node cluster where each node “gossips” every second, it takes only about 7 seconds for almost all nodes to learn the news, no matter which node it originates on.
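
A tiny simulation makes the logarithmic scaling tangible. This is a minimal sketch of one common variant, push-pull gossip, where the caller and the callee end up sharing whatever either of them knew; real protocols differ in fanout, timing, and failure handling.

```python
import random

# Push-pull gossip sketch: each round, every node calls one random peer, and the
# pair ends up sharing whatever either of them knew at the start of the round.
# We count rounds until 99% of the cluster has heard news that began on node 0.
def rounds_until_spread(n: int, coverage: float = 0.99, seed: int = 42) -> int:
    rng = random.Random(seed)
    informed = [False] * n
    informed[0] = True
    rounds = 0
    while sum(informed) < coverage * n:
        before = informed.copy()
        for node in range(n):
            peer = rng.randrange(n)
            if before[node] or before[peer]:
                informed[node] = informed[peer] = True
        rounds += 1
    return rounds

# Rounds grow roughly with the logarithm of the cluster size:
# multiplying the cluster by 10 adds only a few extra gossip periods.
for n in (1_000, 10_000, 100_000):
    print(n, rounds_until_spread(n))
```

The push-only variant described above, where each informed node simply tells one random peer per period, shows the same logarithmic growth with a somewhat larger constant.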

Markets and economies

My favorite kind of emergent behavior comes from humans interacting with each other to create economies. Acting mostly out of self-interest, thousands and millions of people are able to orchestrate extremely complex phenomena, fulfilling the most mundane as well as the most bizarre needs out there. The classic essay I, Pencil by Leonard E. Read illustrates this eloquently. No one person in the world can hope to know how to make the humble lead pencil from scratch, and yet most of us take it for granted that we’ll just be able to buy one when we need one, cheaply.

Another striking essay on this is Bastiat’s There Are No Absolute Principles from his book Economic Sophisms, which discusses how 19th-century Paris was kept from total collapse and poverty simply by self-interested people who were able to engage in free exchange with each other. That’s it; it was true then, and it is true now.

As another example, consider this: when the Swiss National Bank senses that the Swiss Franc (CHF) is too strong compared to the Euro, it wants to drive the CHF down, since too strong a franc would hurt Switzerland’s many export-oriented industries. However, the SNB cannot just go and change the CHF/Euro exchange rate. Instead, it buys Euros with francs, dumping CHF into the market and relying on market forces: the increased supply of francs pushes the price of CHF down.

Conclusion

These fascinating systems serve as perfect illustrations of an important idea: while emergence seems magical, it can be engineered and analyzed. From ants and their colonies to humans and their economies, emergent behaviors arise from simple rules followed by small actors interacting with each other. This has practical implications for us as software engineers: we can harness emergent properties to build distributed systems, and we must be mindful of unintended emergent behaviors in the systems we design. The distinction between top-down and bottom-up approaches helps us reason about these systems systematically.