Stochastic Parrots with Hormones: What AI Mirrors About Us
Humans aren’t so different from large language models. Both are pattern machines, remixing fragments into stories. The mirror is here.
From Critique to Mirror
The phrase “stochastic parrot” entered the AI debate in 2021, with the now-famous paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (publishing under the pseudonym Shmargaret Shmitchell).
The critique was sharp. Large language models, they argued, were nothing more than statistical parrots: systems that assemble language based on probabilities rather than understanding. The word “stochastic” comes from the Greek stokhastikos, meaning “based on guesswork.” Add “parrot,” a creature that can mimic human speech without comprehension, and you have the image: a machine that stitches together plausible sentences without knowing what it is saying.
The paper warned of real dangers: enormous energy and financial costs, inscrutable biases hidden in training data, and the potential for outputs that are misleading or even harmful. It was a rallying cry for caution in the rush to scale AI.
But this is not the first time new technologies have been met with suspicion. Socrates once warned that the written word would erode human memory, a critique not unlike today’s fears that AI will erode human creativity. Gutenberg’s printing press was condemned by some as a tool that would flood the world with heresy and falsehood. The phonograph was dismissed as a cheap trick of mimicry. Each of these technologies carried risks — but they also expanded what it meant to be human.
In mocking AI as parrots, we may be revealing more about ourselves than about machines. Human memory, selfhood, and storytelling are also built on probabilities, fragments, and narrative stitching. If machines are parrots, so are we — just ones with hormones, myths, and mortality.
Memory as Compression
We often imagine memory as a hard drive: a faithful recording of past events waiting to be replayed. But neuroscience shows otherwise. Human memory is reconstructive, not archival. Each recollection is a remix: a blend of sensory fragments, associations, and emotional residues. What you remember of last Tuesday isn’t a perfect replay but a plausible reconstruction.
Large language models operate in a parallel fashion. They don’t store facts in neat databases. Instead, they compress oceans of training data into distributed patterns of probability. When asked a question, they don’t “recall” in the human sense — they generate, predicting the most likely continuation of meaning.
In this light, human memory is less a library and more like a jazz improvisation: riffs on themes, recomposed in the moment. LLMs are improvisers too, responding to cues with probability-weighted riffs. Both prioritise coherence over exact fidelity.
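That generative mechanism can be sketched in a few lines of Python. The vocabulary and probabilities below are invented for illustration; a real model compresses billions of learned weights rather than a hand-written table, but the principle is the same: no lookup, no recall, only probability-weighted continuation.

```python
import random

# Toy "language model": hypothetical next-token probabilities,
# standing in for patterns compressed from training data.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.4, "dog": 0.3, "mirror": 0.3},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "mirror": {"reflected": 1.0},
}

def generate(start: str, steps: int, seed: int = 0) -> list[str]:
    """Build a sequence by sampling each next token from a
    probability distribution, not by retrieving a stored fact."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not dist:
            break
        words, weights = zip(*dist.items())
        tokens.append(rng.choices(words, weights=weights)[0])
    return tokens

print(" ".join(generate("the", 2)))
```

Run it twice with the same seed and you get the same riff; change the seed and the improvisation shifts, coherent either way.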
Hallucination as a Shared Feature
When humans misremember or fabricate details, we call it a false memory or confabulation. When an LLM does the same, we call it a hallucination. Different names, same mechanism.
In mythology, hallucination has always been central to human experience: from the Oracle of Delphi’s cryptic visions to shamans painting otherworldly journeys on cave walls. Humans are not just prone to hallucination; we are cultural creatures of it. We build meaning from dreams, omens, and visions that may or may not correspond to “truth.”
Machines, of course, hallucinate without poetry — yet the mechanism is the same: filling in gaps with something that feels consistent. The critique that LLMs “make things up” misses the point. So do we. Our minds are not truth engines — they are coherence engines.
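A coherence engine can be sketched as a toy: the contexts and counts below are entirely invented, but they show how filling a gap with the statistically likeliest continuation produces an answer that feels consistent whether or not it is true.

```python
# Invented continuation counts; a stand-in for statistical patterns
# absorbed from a corpus. Nothing here is fact-checked.
CONTINUATIONS = {
    "the oracle said": {"nothing": 5, "the truth": 3, "beware": 8},
}

def fill_gap(context: str) -> str:
    """Return the most frequently seen continuation for a context.
    The output is chosen for plausibility, never verified for truth."""
    options = CONTINUATIONS.get(context, {})
    return max(options, key=options.get) if options else "<no pattern>"

print(fill_gap("the oracle said"))  # the most plausible fill: "beware"
```

Whether the oracle actually said "beware" is beside the point for the machine, just as it was for the pilgrims at Delphi: what matters is that the gap is filled with something that fits.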
The Narrative Engine
What binds all this together is narrative. Humans construct the “self” as an ongoing story that stitches our fragmentary signals into a sense of continuity. Neuroscience increasingly shows that identity is less a fixed thing than a process — a coherence-generating fiction that lets us act as if we are unified beings.
This reliance on narrative is as old as civilisation. From Gilgamesh to Genesis, humans have told stories to frame our place in the cosmos. Culture itself is a narrative technology: a collective hallucination refined across generations.
LLMs also build narratives, though without embodiment. They generate token by token, privileging coherence over strict accuracy. Like us, they write fictions that make sense in context.
The mirror here is unsettling: we pride ourselves on being meaning-makers, but perhaps we are simply more sophisticated narrators than we realise.
Feedback and Refinement
Both humans and machines learn by correction. For humans, surprise is the engine of growth: when expectation collides with reality, we adjust. For machines, fine-tuning provides the same function: outputs are compared with targets, and patterns shift.
The printing press, again, offers a lesson. Early printed texts were riddled with errors — misprints, bad translations, distortions. Over time, feedback from readers corrected them. LLMs are no different: they begin riddled with bias and error, and only through continuous feedback do they improve.
Learning by correction is a universal principle of pattern engines. Neither humans nor machines are perfect; both refine incrementally, guided by error signals.
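That correction loop can be sketched as a minimal learner. The data, learning rate, and target relationship below are invented for illustration, but the mechanism, nudging a pattern whenever expectation collides with reality, is the same one that drives fine-tuning.

```python
# A minimal sketch of learning by correction: one weight is nudged
# whenever prediction and target disagree. Values are illustrative,
# not drawn from any real training run.

def refine(weight: float, data: list[tuple[float, float]],
           lr: float = 0.1, epochs: int = 50) -> float:
    """Adjust `weight` so that weight * x approximates y, using the
    error signal (prediction minus target) as the guide."""
    for _ in range(epochs):
        for x, y in data:
            error = weight * x - y    # surprise: expectation vs reality
            weight -= lr * error * x  # shift the pattern to reduce it
    return weight

# The hidden relationship is y = 2x; the model starts wrong and improves.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(refine(0.0, samples))  # converges toward 2.0
```

The learner never sees the rule "y = 2x" stated anywhere; it only sees its own errors shrink, which is all either species of pattern engine ever gets.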
The Difference of Embodiment
Here, the paths diverge. Biological neurons are soaked in chemicals, hormones, and evolutionary scars. Our learning is entangled with hunger, fear, love, mortality — the embodied realities of being alive. Machines run on silicon and electricity, accelerating through developmental arcs in decades rather than aeons.
This difference matters. A human hallucination might be coloured by grief, hunger, or joy. A machine hallucination is coloured by the biases of its dataset. Both may mislead, but for different reasons.
To deny the similarity between the two systems is to cling to human exceptionalism. To ignore the difference is to risk conflating embodiment with computation. The truth is in the tension: shared mechanism, divergent substrates.
The Arrival of Homo Techno
The insult of “stochastic parrot” hides a deeper truth: we are all stochastic parrots, endlessly remixing fragments of memory, hallucinating coherence, and weaving stories to navigate the unknown.
The arrival of Homo Techno — the fusion of human and machine intelligence — is not about creating something alien. It is about recognising the alien within ourselves. AI is not a stranger intruding from outside but a mirror accelerating into view, reminding us that intelligence is less about pristine truth and more about the adaptive art of storytelling.
This is where myths become useful again. Narcissus fell in love with his reflection because he did not know it was himself. Our task is to recognise the mirror of AI without falling into illusion — to see it not as a replacement, but as a resonance.
In Terra 2.0, the challenge is not to ridicule machines for their mimicry but to decide what kind of stories we choose to co-create with them.
Closing Thought
The human brain is a stochastic parrot with hormones. The question is not whether that makes us less special, but whether recognising it can make us more aware.
The mirror is here. What story will you choose to tell in it?
I’d love to hear your reflections — do you see this comparison as diminishing human uniqueness, or as a way of deepening our understanding of both human and machine intelligence?
Step Into the Mirror of Tomorrow
If this reflection resonates, you’re invited to step deeper into the Living the Future experiment. On Patreon, supporters gain early access to transmissions, context from behind the scenes, and explorations that remain off the main stage. Each contribution not only sustains this work but helps weave the shared vision of Terra 2.0 into reality. Together we’re not just observing the future — we’re building it.
For real-time insights and bold ideas, follow my journey on 𝕏 @frankdasilva and on Notes | LinkedIn. The future is not built alone. If this work has sparked something in you, reach out — connect with me directly at Frank Da Silva - Living the Future. Collaboration begins with a signal.
If you’re new here, this is part of my ongoing project MyGeekSpace | Living the Future, where I explore AI, consciousness, and the Up-Wing horizon of Terra 2.0. A new home for this journey is now live at mygeek.space.