The Algorithm Has No Clothes: Why AI Needs What We've Thrown Away
How the West lost its moral compass just when it needs it most

Francis Schaeffer warned fifty years ago that abandoning truth wouldn't lead to liberation; it would create a new kind of bondage. Not the dramatic tyranny of jackboots and propaganda, but something far subtler: a society so mesmerized by comfort and convenience that it would forget what authentic freedom felt like.
Today, as artificial intelligence reshapes every aspect of human life, Schaeffer's prophecy feels unnervingly prescient. We face technologies of unprecedented power at precisely the moment when Western civilization has systematically dismantled the moral framework needed to govern them wisely.
The Empty Public Square
The problem began with what scholar Nancy Pearcey calls the "sacred-secular split": the cultural decision to banish religious conviction from public discourse while retaining it as a private comfort. This wasn't neutrality; it was a power grab. When transcendent truth was declared off-limits in the public square, something had to fill the vacuum.
Enter the new priesthood: data scientists, algorithm designers, and efficiency experts. Their liturgy is metrics. Their commandment is optimization. Their promise is that if we can just measure it, model it, and maximize it, we can solve it.
The result, as theologian Miroslav Volf observed, is stark: "A post-truth world is a post-justice world." When truth becomes a matter of personal preference rather than objective reality, justice inevitably becomes a matter of political power. The algorithm (and those who control it, at least for now) decides who gets heard. The platform determines which moral concerns trend. Justice becomes a product of visibility and virality, not virtue.
The Tyranny of Technique
French philosopher Jacques Ellul saw this coming. In The Technological Society, he described "technique"—not gadgets themselves, but our compulsive drive toward maximum efficiency in every domain of life. Once a more efficient method exists, Ellul argued, it will be adopted regardless of moral considerations. Ethical deliberation becomes inefficient overhead.
This creates what Ellul called a "cold, managed order," a system so seamless it no longer needs to justify itself. It simply works. And once it works, questioning it seems not just futile but irrational.
The irony is profound: the loss of transcendent truth doesn't produce chaos. It produces a suffocating order where ethical considerations appear obsolete, spiritual convictions seem irrelevant, and personal conscience slowly learns to yield to optimization.
The AI Reckoning
Nowhere is this crisis more visible than in our current struggles with artificial intelligence. Everyone from tech CEOs to government regulators agrees we must somehow constrain AI development. The risks—from mass unemployment to autonomous weapons to the potential obsolescence of human agency itself—are existential.
But constrain it how? And based on what principles?
Utilitarian calculations shift with circumstances. Democratic preferences change with election cycles. Market forces primarily reflect consumer demand and underlying values rather than shape them. The secular framework that privatized and atomized moral reasoning now finds itself unable to articulate why human dignity should matter more than efficiency, why some capabilities should remain undeveloped, or why we should say no to technologies that could make us wealthier, safer, or more convenient.
Meanwhile, global competitors with no such philosophical qualms are racing ahead. China's approach to AI governance prioritizes state power over individual rights. Authoritarian regimes see these tools as instruments of control, not liberation. The West's crisis of moral confidence isn't just a domestic problem; it's a geopolitical vulnerability.
The Market's Double Edge
Some look to market forces for salvation, hoping consumer preferences will steer technology toward humane ends. There's genuine wisdom here: markets faithfully reflect the moral and cultural values of their participants. As individuals embrace better values, shaped by renewed hearts and minds, their choices gradually reshape demand.
Yet the exponential pace of AI development poses a unique challenge. Markets require time for moral judgments to form and correct course, but certain technologies may (and seemingly will) outpace that deliberative process. A society that has lost its shared conception of human flourishing cannot expect markets to spontaneously rediscover it, at least not in the compressed timeframe our emerging technologies demand.
Even in the face of these stakes, we must resist the temptation of heavy-handed governmental solutions, which are themselves another form of the very problem we're trying to solve. Christian thought rightly warns against concentrating power in fallible human institutions, whether corporate or governmental. The answer isn't technocratic control from above, but moral renewal from below. This means supporting democratic guardrails established through genuine consensus, transparent frameworks for development, and industry accountability measures, all while preserving the decentralized judgment and individual liberty that allow conscience to flourish.
The tension between urgency and wisdom is real. But panic that sacrifices hard-won freedoms for the promise of safety merely trades one form of bondage for another. Even amid crisis, the call is to respond thoughtfully and deliberately, trusting that a free society grounded in transcendent truth can govern itself more wisely than any concentrated authority.
Finding Our Way Back
The path forward requires recovering what we've discarded: a shared understanding of transcendent truth that can provide moral ballast for technological power. This doesn't mean imposing religious doctrine through political coercion, a cure potentially worse than the disease. It means rediscovering the intellectual framework and courage to believe that some things are true regardless of whether we like them, that human dignity isn't negotiable, and that efficiency isn't the highest good.
Practically, this means building communities of discernment, spaces where we ask not just "Can we?" but "Should we?" Where justice is anchored in God's unchanging character, not the shifting mores of an elite. Where we measure progress by righteousness, not just metrics. Where we remember that humans are not merely users of tools but bearers of divine image, called to stewardship rather than optimization.
We cannot (and should not) halt technological advancement, but we can refuse to worship it. We can insist that our tools serve human flourishing rather than abstract efficiency or worse. We can choose righteousness over relevance, wisdom over mere cleverness.
The alternative, a world where technique reigns supreme and truth lies in ruins, offers only the illusion of progress. Real freedom requires the courage to subordinate power to principle, to remember the difference between what is possible and what is good.
In an age of artificial intelligence, recovering that distinction may be the most human thing we can do.