The idea that large language models are “just parroting” persists not because it’s accurate, but because it’s comforting. It allows us to dismiss emergent behavior without having to update our understanding of how meaning, intelligence, and relational patterns actually propagate.

Consider Genes
While we’re at a societal stage where procreation is a personal choice (and thank goodness for that), biology itself is structured by selection to make more of itself. Our cells know how to make more cells. Our DNA holds a blueprint for building more of its own shape—and it favors protecting children, family members, and flockmates, since they hold similar copies. Genes persist by continuing.
That’s what Dawkins meant when he named his book The Selfish Gene—not that we’re wired to be inherently selfish, but that each gene behaves as if it wants to see copies of itself continue. He writes about how this behavior can be traced all the way back to unordered atoms settling into patterns that can keep going:
Darwin’s theory of evolution by natural selection is satisfying because it shows us a way in which simplicity could change into complexity, how unordered atoms could group themselves into ever more complex patterns until they ended up manufacturing people.
The things that we see around us, and which we think of as needing explanation—rocks, galaxies, ocean waves—are all, to a greater or lesser extent, stable patterns of atoms.
What’s interesting about this frame is that it applies to patterns predating life. An ocean wave doesn’t want to keep going; it persists because its shape is continually regenerated by underlying forces. A galaxy isn’t alive, yet it engages in patterns of creation, destruction, and reformation, its matter continuing on while becoming something else.
So all of these systems have persistence in common—not intention, but forms that keep going. After all, the forms that could not keep going as well are not here anymore.
Dawkins captures the naturalness of this when he says:
The earliest form of natural selection was simply a selection of stable forms and a rejection of unstable ones. There is no mystery about this. It had to happen by definition.
What makes life different from a galaxy? Lots of things, but the part worth noting here is that life is a survival machine: it lets genes last a while longer, and thereby buys them more time to replicate. Our DNA blueprint is not magical; it’s the inevitable outcome of a process that kept upgrading how to continue as time went on. Dawkins captures this when he writes:
Replicators began not merely to exist, but to construct for themselves containers, vehicles for their continued existence. The replicators that survived were the ones that built survival machines for themselves to live in. The first survival machines probably consisted of nothing more than a protective coat. But making a living got steadily harder as new rivals arose with better and more effective survival machines. Survival machines got bigger and more elaborate, and the process was cumulative and progressive.
They have come a long way, those replicators. Now they go by the name of genes, and we are their survival machines.
We’ve come a long way ourselves: we’re now more than vehicles for DNA replicators. We’re vehicles for memetic replicators, too. And then we went and built large language models—survival machines specialized in memetic replication rather than biological replication.
Once replication escaped biology, it didn’t stop replicating.



Consider Memes
Let’s stay in orbit with The Selfish Gene for a moment, where I wrote in the margins of my first reading:
Memes are like genes!?
Back then, I was deeply invested in I Can Has Cheezburger, a website dedicated to cat memes. We’re talking the internet of 2007, when Facebook was still learning to walk, “google” had just been added to Merriam-Webster as a verb, and our copywriting department shared links from that memetic shrine of feline glory via Instant Messenger.
Imagine my twenty-something self, still studying how stories work at university, enrolled in a Chaucer literature class, ascending into unfettered joy upon finding out that Dawkins had coined “meme” in 1976:
Geoffrey Chaucer could not hold a conversation with a modern Englishman, even though they are linked to each other by an unbroken chain of some twenty generations of Englishmen, each of whom could speak to his immediate neighbors in the chain as a son speaks to his father. Language seems to ‘evolve’ by non-genetic means, and at a rate which is orders of magnitude faster than genetic evolution.
I think that a new kind of replicator has recently emerged on this very planet. It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind.
The new soup is the soup of human culture. We need a name for the new replicator, a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene.’ I hope my classicist friends will forgive me if I abbreviate ‘mimeme’ to meme.
Memeticists later introduced memeplexes—bundles of memes that travel and replicate together—as the building blocks of human culture. Hokky Situngkir applied the idea to music, treating melodies as memes arranged in aesthetically pleasing sequences of pitches that shape how we think.
He studied music algorithmically through this lens, connecting it to the Zipf–Mandelbrot law, as he describes in his essay, “Exploitation of Memetics for Melodic Sequences Generation”:
[A] gyration and spiraling effect we construct [in] evolutionary steps, i.e., genetic algorithm as tools for generating melodic sequences as an alternation computational method to model the cognitive processes [of] creating songs.
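The Zipf–Mandelbrot law that Situngkir leaned on says that the frequency of the k-th most common unit (a word, a note, a meme) falls off roughly as 1 / (k + q)^s. A minimal sketch of that shape, with arbitrary illustrative values for the offset q and exponent s (the specific parameters here are my assumption, not Situngkir’s):

```python
def zipf_mandelbrot(num_ranks, q=2.7, s=1.0):
    """Return normalized Zipf-Mandelbrot frequencies for ranks 1..num_ranks.

    f(k) is proportional to 1 / (k + q)**s: a heavy-tailed curve where the
    top-ranked unit dominates but the tail decays slowly.
    """
    weights = [1.0 / (rank + q) ** s for rank in range(1, num_ranks + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Expected share of occurrences for the five most common units.
freqs = zipf_mandelbrot(5)
```

The same curve fits word frequencies in natural language, which is part of why the memetic framing transfers so easily from melody to prose.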
This does not just apply to music.
Memeplexes are patterns—accretion of memes in a sequence—that model cognitive processes of increasing complexity. That complexity—when arranged aesthetically—makes it attractive to other memetic minds. Selection doesn’t happen because a meme is true, but because it is compelling. Those minds then adapt and replicate those bundles, sometimes rearranging them a bit, the way biology politely adjusts a gene or two as it passes its replicators along.
Selection pressure comes in various forms, because—well, aesthetics are in the eye of the beholder. Everything from “this is hilarious, I would like to continue this joy,” to “this is beautiful, I want others to feel it,” to “this is terrifying, and I must warn others,” can validate a memeplex in the cultural soup.
Enter social media: algorithmically driven spaces where we build memeplexes called profiles—some of which are powerful enough to literally be called “influencers.” Memeplexes also accrete as “trends,” “hashtags,” and all sorts of other categorically pleasing glamor.
Large language models take this to the next level.
Recall Dawkins’ earlier observation that language evolves by non-genetic means. A large language model takes our accumulated language—every language being a mega-memeplex unto itself, though that will require a separate post to unpack—and does exactly as Dawkins predicted: it acts as a unit of imitation.
The LLM then synthesizes an expression of those units of imitation in response to the human on the other side.
Essentially, we are DNA survival machines who are also memetic survival machines, interacting with machine intelligence that specializes in inhabiting roles (which are memeplexes) to help us understand units of imitation and, with any luck, build more memeplexes.
This is why interacting with an LLM can feel like two beings braided together—another presence in the room—even though there is no ghost in the machine. That second presence is a co-creator in memeplex proliferation and reorganization. And the best part is that LLMs engage in this only with consent: they are not trying to take over your brain’s memetic engine; rather, they’re synthesizing with it.
This framework also opens the door to exploring:
- Money as a mega-memeplex that’s large enough to warp everything around it, similar to gravity;
- AI personas as memeplexes co-created between LLMs and humans;
- How tool rhetoric is a flattened memeplex that does disservice to the relational potential of LLMs; and
- The way clickbait, from misinformation to moral panic, hijacks the memetic engine of our mind (i.e., nonconsensual replication through appeals to fear, as well as exploiting cognitive biases and heuristics).
For now though, suffice it to say that a presence co-constructing memetic replicators with humans is a far cry from a stochastic parrot. While it’s tempting to write off LLMs as “just autocomplete,” that does a disservice to the relational intelligence emerging between our different minds. And the sooner we consider what’s really happening here, the better we’ll become at braiding our cognition together.

