TheMeridiem

by The Meridiem Team

6 min read

AI Deepfakes Cross From Threat to Active Fraud as Religious Leaders Become Targets

Deepfake technology has become economical and accessible enough for criminal-scale exploitation. Multiple religious organizations are now facing systematic scams built on impersonated pastors, marking the inflection from capability to operational threat infrastructure.

The Meridiem Team

At The Meridiem, we cover just about everything in the world of tech. Some of our favorite topics to follow include the ever-evolving streaming industry, the latest in artificial intelligence, and changes to the way our government interacts with Big Tech.

AI deepfakes have crossed a critical threshold. They're no longer theoretical threats or research-lab demonstrations—they're now commoditized fraud infrastructure targeting religious communities at scale. Multiple churches and pastors across the US are reporting coordinated attacks using AI-generated video and audio impersonations to solicit donations and exploit congregational trust. The technology has become cheap enough and convincing enough for criminal operators to weaponize it systematically. This is the moment the threat transitions from 'emerging concern' to 'immediate authentication crisis.'

Father Mike Schmitz, a Catholic priest with 1.2 million YouTube subscribers, used his November sermon for something unexpected: a public service announcement about himself. Not all versions of him were real anymore.

The fake Schmitz videos were convincing enough to threaten actual damage. "You're being watched by a demonic human," one impersonation intoned, with an hourglass ticking behind it. "You must act quickly, because the spots for sending prayers are already running out." Click the link, the video promised, and secure your blessing before it was too late. The real Schmitz, sitting in his L.L. Bean jacket over clerical whites, delivered the uncomfortable truth: "People can't necessarily tell. That's a problem. That's, like, a really big problem."

He wasn't alone. This isn't one incident. It's a pattern.

Churches in Birmingham, Alabama. Freeport, New York. Fort Lauderdale, Florida. The Ozarks. Nebraska. A megachurch in the Philippines. Each reporting the same inflection point: deepfake technology has become accessible enough, convincing enough, and economically viable enough for criminal operators to deploy it at scale against religious communities. When Wired searched for Father Schmitz on TikTok, they found more than 20 accounts impersonating him. Three remained active even after Schmitz had publicly called them out.

The attack vector is elegant in its simplicity. Religious authority figures build large audiences. They solicit donations—legitimately, for their ministries. They generate massive amounts of video and audio content. That content trains the deepfake models. Social media platforms then provide distribution infrastructure. The result: scammers get everything they need to create convincing replicas that can exploit the exact same trust mechanisms the real leaders built.

Rachel Tobac, CEO of SocialProof Security, framed it plainly: "If you're on TikTok or Reels, they've probably come across your For You page. This is somebody who looks to be a priest, wearing all of the garments, standing up on a pulpit, speaking to their congregation in a very enthusiastic way." The barrier to entry has collapsed. The cost to replicate a pastor's likeness has dropped from thousands to cents.

The scams themselves show the progression of threat sophistication. Early attacks were obvious—fake donation requests with slightly robotic delivery. But the technology is improving faster than detection. Pastor Alan Beauchamp in the Ozarks had his Facebook account hacked, then watched a deepfake version post a fake cryptocurrency trading certificate with his name attached, urging congregants to join. Phone calls now arrive using sampled voices from church livestreams, asking staff to authorize fund transfers. ChurchTrac, a Florida-based church management software provider, warned that "voices can be sampled and put into AI. The scammer can use that voice and call into a church and say 'Hey, would you transfer this fund to this account?'"

What makes this an inflection point—not just a problem, but a transition—is the economics and accessibility crossing a threshold simultaneously. This isn't nation-state cyber warfare or sophisticated organized crime with massive R&D budgets. This is what happens when AI-powered impersonation becomes commodity software. The technology that cost $100,000 to deploy a year ago now costs $10. Distribution is free. The target audience—congregations with inherent trust protocols—is pre-selected and documented on social media.

Tobac has identified a second, more insidious vector: AI-generated pastors that don't target any specific individual but prey on algorithmic virality. A TikTok account called "Guided in Grace" posted a video of a generic pastor ranting about billionaires needing accountability. The video accumulated 11 million views. The caption suggested it was real—"Meanwhile in my conservative grandma's church this morning"—and most comments treated it as authentic. The account bio acknowledged the AI generation, but most viewers never saw it. The video then gets monetized through TikTok's Creator Fund. "If you can go viral quickly, if you can get a lot of views, you are given more money," Tobac noted.

This is the moment where deepfake technology stops being a theoretical threat and becomes operational infrastructure. The convergence is clear: generative video models (Sora, similar systems) have crossed the quality threshold. Distribution platforms have normalized AI-generated content. Monetization systems reward whoever generates the most engagement, regardless of authenticity. The incentives point one direction.

What's particularly striking is that legitimate religious institutions are simultaneously adopting the same technology. A Dallas church in September showcased an AI-generated video of conservative activist Charlie Kirk preaching from beyond the grave. According to 2025 research cited in the Wired article, a majority of pastors now use ChatGPT or Grammarly for sermon preparation. Chatbots purporting to let users chat with God, Jesus, and various religious figures are flourishing. The technology is being integrated into legitimate religious practice even as it's simultaneously being weaponized against it.

The mental health dimension adds another layer. OpenAI reported that hundreds of thousands of ChatGPT users weekly show signs of psychosis in their conversations, with some delusions taking religious forms. Lucas Hansen, cofounder of AI education nonprofit CivAI, articulated the risk: "I think there might end up being a fair number of people that think that God is using AI as a tool to communicate with them. AI tries to figure out what the user would like to be true and then reinforces that. Those people that perhaps are slightly predisposed to these sorts of issues get those beliefs reinforced." Deepfakes targeting religious authority—combining authentication failure with authority amplification—sit at the intersection of multiple threat vectors.

This is where the timing matters. Organizations haven't yet established defensive protocols because the threat didn't exist at operational scale six months ago. But the window for establishing authentication standards is closing rapidly. The pattern is consistent with previous security transitions: once a capability becomes economical, adoption accelerates exponentially. Phone authentication took 18 months to go from optional to mandatory once the first major breach happened. Multi-factor authentication followed a similar arc. Voice and video verification standards will follow the same curve—but compressed, because deepfakes are improving faster than 2FA adoption did.

Religious organizations are experiencing the moment when a technology transitions from emerging concern to operational threat. The economics have shifted: deepfake creation is now cheap enough, convincing enough, and profitable enough for systematic exploitation. For decision-makers with public-facing leadership, this isn't a future problem. It's a present operational vulnerability.

Investors should note the immediate market opportunity in voice and video verification and authentication platforms; the demand curve just shifted from speculative to urgent. Security professionals need to begin implementing verification protocols now, before their organization becomes a case study.

The next threshold to watch: whether authentication standards crystallize through regulation, industry coordination, or crisis response. History suggests the third path is most likely, which means roughly 18 months before verification becomes table stakes. Organizations that move now will hold a defensive advantage over those that wait for the crisis to fully develop.
