This Is How AI Becomes a Religion
Why calling out “AI slop” might make you a digital heretic.
One of the most time-worn aphorisms comes from the 18th-century French Enlightenment thinker Voltaire, who said, “Those who can make you believe absurdities can make you commit atrocities.” There are many such “truths” spread around the Internet, some real, some entirely fabricated, deployed as easy, vague punctuation to a conversation in the hopes of sounding erudite.
The actual Voltaire quote, in its original French, reads, “Certainement qui est en droit de vous rendre absurde est en droit de vous rendre injuste.” French speakers will easily decipher that sentence, but for the rest of us, here’s how Google Translate renders it in English: “Certainly he who has the right to make you absurd has the right to make you unjust.”
You see there, essentially the same meaning, without the flowery adornments tacked on over time by the literary game of telephone some translators play (like Norman L. Torrey, who is generally thought to have added his now ubiquitous artful interpretation to Voltaire’s original passage). Why should we care about obscure translations that sometimes subtly change how information is transmitted? Because, in this case, how we arrived at the unvarnished version of this quote to confirm its veracity matters a lot. At present, we seem to be allowing our truth-seeking muscles to atrophy.
I think about truth a lot these days, and about why I believe the human discussion around AI may end up dividing us into factions more akin to dogmatic sects, both religious and areligious, than to camps drawn along lines of reason. Hints of this coming fracture have emerged through conversations on my podcast, where I’ve interviewed many AI engineers, scientists, professors, and business leaders. The question of superintelligence often comes up. And that conversation is usually pulled into the gravity well of theism vs. atheism.
Some believe that AI will eventually spawn a self-aware superintelligence. Others opine that such a thing is unlikely, and to give the idea purchase in the minds of the public is to engage in a kind of animism (common in Japan, by the way), imbuing inanimate objects with agency or even consciousness.
Will AI eventually become greater than its creators? Or is the very idea an indication of our own outsized ego as a species? Or are superintelligence naysayers expressing a natural revulsion to the concept, based on a deep-seated fear of the unknown? At present, no one, no matter how versed in the field of AI, has a definitive answer to these questions, so they are essentially speaking from their faith.
The Evangelists of Artificial Souls
I’m planning a different, more research-driven exploration singularly focused on the topic of AI as God versus AI as dog (i.e., companion and service worker), but I wanted to bring it up now more briefly because I was struck by something I just read in The Hollywood Reporter. Former Wondery podcast network executive Jeanine Wright has started a new company dedicated to creating podcasts hosted by AI…er, entities. This is nothing new.
In 2024, Google launched NotebookLM, which allows users to create instant audio podcasts based on text and documents input into the system. The outputs are convincing conversational AI chats about whatever topic you choose. So the idea of a generative AI podcast network isn’t really original, or, in my view, worthwhile, but more on that later. The lack of originality wasn’t what stopped me in my tracks, though; it was the quotes Wright gave to the reporter.
“We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life,” said Wright. Half the people? What “people”? AI voices are not “people,” AI is, at least for now, algorithms running on computers that largely regurgitate the sum total of human expression in all its many forms. It’s fantastic technology. However, you cannot “bring those people to life” because they are not people.
Even if we give her a bit of grace and assume that she only meant “people” as in powerless avatars, we must remember that language (the code through which we program other humans) is powerful and can bend reality, so such conflations are not trivial. This kind of reductive intellectual foible is not uncommon among those who are so focused on tech startup profit that they begin to speak in clumsy ways that obfuscate the vital difference between the plain truth in 2025 and marketing bullshit. Maybe her sentence will make sense in 2125, but we live in science reality now, not science fiction’s future.
She continued: “I think that people who are still referring to all AI-generated content as AI slop are probably lazy luddites. Because there’s a lot of really good stuff out there.” Well, I agree that Luddites can sometimes impede progress and innovation. Still, much of what is now flooding Search, social media, and even places like Spotify and YouTube is, well, AI slop. And you’ll have a hard time finding someone who knows me who can credibly call me a Luddite. I wrote half of this essay typing in the air on my virtual keyboard in my Apple Vision Pro (more about that amazing tech here).
AI slop is real, and it’s a problem. But acknowledging that fact doesn’t make you anti-tech or AI-averse. Rather, it’s looking at a new technology with a rational perspective, divorced from investor-pleasing imperatives. My view of generative AI is pretty straightforward: Its best use is as a force multiplier of human ability. That is, an enhancement of our existing powers. This is the embodiment of the idea I embraced many years ago called transhumanism, which encourages us to use technology to augment ourselves into the next phase of human evolution.
Symbiosis or Supplantation
The problem with the current AI gold rush to haphazardly replace human expression—whether that is in audio/video podcasts, music, writing, or video—is that it is among the bluntest and most unsophisticated approaches to developing AI tools and businesses I can think of. In the next 24 months, the best AI-powered films you will see will be directed and carefully guided by the hand of a human. Why? Because the best storytelling isn’t just about words, pictures, and formulaically correct composition; it’s about meaning. Many humans live interesting lives, and they have backstories laced with texture, nuance, and transcendent import that no algorithm can deliver. AI can only imitate these things.
Given Wondery’s track record of telling deeply human stories, one might expect Wright to appreciate this distinction. But while I can’t divine her internal motivations, I have come across more than a couple of people who see AI as purely a profit engine, the future of humans be damned. This myopic attitude fails to grasp that the richest AI implementations won’t be those that replace humans, but those that enhance them.
What do Wright’s wrong-headed notions have to do with Voltaire’s truth and the bifurcation of humans along faith-based lines? Well, people like this newly minted CEO are how cultural shifts start. Suppose we begin to take seriously businesspeople who casually refer to AI as “people.” It will gradually become easier to accept other false statements slickly slipped over our mental transoms as truth. Consider how someone types a question into OpenAI’s ChatGPT and accepts the answer without ever checking the model’s sources to see if it’s hallucinating, which even the best AI models continue to do. AI is neither human nor the ultimate source of truth. But AI is making at least some of us more superficial and perfunctory in our approach to clarity and reason—the very things technology was meant to help us with!
If, as Voltaire warns, we allow some AI businesspeople (who are often only casually interested in the technology itself) to make us absurd through our belief in the often facile outputs of machines, then we will almost certainly find it easier, in time, to be unjust to others and, ultimately, to ourselves. Many gloss over the fact that Voltaire’s famous assertion was, in context, a rejection of the Catholic Church and its orthodoxy, based on what he saw as forced irrational beliefs in things we cannot prove. I think faith can be important, but it’s personal. And we’ve seen throughout human history what can happen when faith morphs into enforced doctrine, whether liturgical, cultural, or technological. It would be wise not to make those same mistakes with our nascent “belief” in AI.
“[Using AI] We might make a pollen podcast that maybe only 50 people listen to, but I’m already at unit profitability on that,” said Wright, in the article, “and so then maybe I can make 500 pollen report podcasts.” We don’t need 500 AI pollen report podcasts for 50 people, and those 50 people don’t deserve the injustice of that AI slop. Likewise, I suspect that somewhere in Wright’s UCLA-educated and legally trained intellectual corpus, there exists the ability to do better than AI slop podcasts that no one is asking for. That’s right, I said it again: AI slop. Now send me to the Luddite mines, where I’ll do my best to pretend to self-flagellate as penance for my non-belief.