Cultural historians have long understood that our monsters tell our stories. Stephen King once wrote that horror fiction is "a dance of dreams and desires," while scholar Robin Wood famously argued that our monsters are manifestations of cultural repression—the return of what society cannot face. From Victorian Gothic to Cold War cinema, we've used the monstrous to metabolize our collective fears.
I've been thinking about this lately, as I watch our newest monster emerge—not from a lightning-struck castle or a radioactive accident, but from our data centers and research labs.
Consider this: In 1818, as the Industrial Revolution roared to life and science began promising mastery over nature, Mary Shelley gave us a creature cobbled together from corpses and lightning. The horror of Frankenstein was never really the reanimation; it was our terror of playing God, of technology outpacing our wisdom. Sound familiar?
When Cold War paranoia gripped 1950s America, pods from outer space replaced your neighbors with perfect, emotionless replicas, a thinly veiled metaphor for Communist infiltration if there ever was one. Two decades later, George Romero's zombies shuffled through the shopping mall, mindless consumers still going through the motions of acquisition. Each era got exactly the monster it needed to name its nameless fears.
Today, we've created a new mirror, and this one talks back. When GPT-4 writes poetry that makes me cry, or Claude helps a grieving person process their loss with unexpected tenderness, or Midjourney creates art that captures the ineffable quality of a dream—we're not just watching machines perform tasks. We're watching them reflect back the sum total of human creative expression, emotional intelligence, and accumulated wisdom. Every output is filtered through the entirety of human knowledge, experience, and art.
And that's what terrifies us.

Because unlike Frankenstein's monster, which reflected our fear of playing God, AI reflects our fear of becoming obsolete gods. The anxiety isn't about lightning bringing dead tissue to life; it's about algorithms understanding us better than we understand ourselves. When Google's Gemini works through complex mathematical problems or DeepMind's AlphaFold transforms our understanding of protein structures, we're not just witnessing technological advancement. We're confronting our own potential redundancy.
The fear is palpable in our daily lives. When we watch AI chatbots handle customer service with more patience than their human counterparts, or see DALL-E 3 create magazine-worthy illustrations in seconds, or witness automated systems making faster and more accurate medical diagnoses, we're not just seeing technological progress. We're seeing a mirror that reflects our deepest insecurities about human value and purpose.
When tech workers express fear about AI making their jobs redundant—a fear that hit home recently when even Google and Microsoft began laying off workers in favor of AI solutions—they're really articulating a deeper, older fear: that we've already created a world where human worth is measured primarily in terms of productive output. When they worry about AI systems becoming more intelligent than humans, they're revealing our society's long-standing tendency to rank and sort people based on intelligence—a hierarchy we created long before the first neural network was trained.
But here's what keeps me up at night: What if we're looking at this mirror all wrong? Not because we're too pessimistic—but because we're not looking deeply enough.
This mirror demands more from us than either blind techno-optimism or resigned techno-pessimism. It demands that we look—really look—at what we've created, because in AI's outputs we see every choice, every value, every system we've built, reflected back at us with unprecedented clarity.
When we train AI on the internet and it learns to be racist, that's not an AI problem—that's our problem, a reflection of the biases encoded in our data, our institutions, our histories. When AI chatbots default to subservient female personalities, that's not a quirk of machine learning—it's a mirror showing us how deeply gender stereotypes run in our society. When AI systems optimize for engagement over truth, they're not malfunctioning—they're faithfully replicating the attention economy we've built.
But here's the crucial thing about this moment: None of this is predetermined. The future of AI—and by extension, our future—isn't written in stone. It's not even written in code. It's being written right now, by us, in the choices we make and the systems we build.
Look at the evidence in the mirror: AI systems can amplify bias, but they can also be used to detect and correct it. They can automate exploitation, but they can also expose it. The same language models that can generate misinformation can also be trained to be rigorously honest, to admit uncertainty, to prioritize truth over engagement. We know this because some already do.
This isn't just optimism; it's an empirical observation. When researchers at Anthropic chose to prioritize truthfulness in their AI systems, they showed it could be done. When organizations like AlgorithmWatch use AI to audit automated decision systems for bias, they demonstrate that the technology can serve accountability rather than obscure it. When open-source AI communities share their models and methods transparently, they show us an alternative to the black-box approaches that dominate the industry.
The human future will have AI in it—that's no longer a choice. But we have an absolute obligation to look unflinchingly at what this mirror shows us and then act on what we see. We must face the ugly truths it reveals about inequality, bias, and exploitation in our society. We must acknowledge the beautiful potential it reflects about human creativity, collaboration, and care. And then—this is crucial—we must choose which of these reflections we want to amplify.
Because here's what our monsters have always taught us: They are not inevitable. They are made. Just as Frankenstein's creature wasn't born a monster but was made one by rejection and cruelty, AI will become what we make of it—through our choices, our values, our actions and, most importantly, our inactions.

The stakes could not be higher. We stand at a junction where one path leads to AI systems that exacerbate the worst tendencies of capitalism—surveillance, exploitation, the reduction of human worth to productive output. The other path leads to systems that could help us build something better—more equitable, more transparent, more aligned with human flourishing.
The choice reminds me of another monster story. Depending on which version you read, vampires either cast no reflection or are forced to confront their true nature in mirrors. We, fortunately, still have our reflection. The question is whether we'll have the courage not just to look at it, but to act on what we see.
Because here's the truth about mirrors: they show us exactly what's there, no more, no less. If we see a monster in AI, perhaps it's time to ask ourselves who really created it. And if we want to see something different in that reflection, we have not just the opportunity but the moral obligation to change what's standing in front of the mirror.
After all, we've always been both the monster and its creator. But this time, we can also be the ones who choose a different ending to the story.