The Weird World of AI Evangelism Feels Like An MLM

I was struggling to pay attention to the last presentation at an AI Tech Summit hosted by Mindvalley when the presenter introduced his next slide: “Design A New Religion For Humanity”. I did a double take as the co-founder and CEO of Mindvalley went through a live demonstration in which he had a chatbot create a new religion out of whole cloth, featuring rituals, tenets, and integration with political governance. One such ritual required adult politicians to explain their decisions to children. The demonstration was clearly intended to show off how versatile an AI could be, but it was not what I was expecting from a tech summit. The entire time, I couldn’t help but feel like I was attending a multi-level marketing meeting.

AI and chatbots certainly aren’t new, but since ChatGPT was introduced in 2022, businesses have gone wild for AI. Google has Gemini, X: The Everything App has Grok, OpenAI has ChatGPT, and the list goes on. The widespread adoption has met with varying degrees of success, fulfilling various functions. Cory Doctorow has written extensively about the enshittification of businesses, and he has a good explanation of how Google pulled the lever and degraded the service that made it successful to begin with: search. A chatbot interface for responding to queries seems like an ideal application of LLM technology, and it probably went a long way toward revitalizing Google’s search function. According to Fortune, only 8% of users clicked on a link after seeing the AI Overview answer to their query. That being said, integrating AI into your business model is not always useful or well-received.

Leveraging your pet LLM to handle customer relations does sometimes result in unintended and unforeseen complications. When asked “how many rocks should I eat?”, Google’s AI Overview responded, “According to UC Berkeley geologists, you should eat 1 small rock per day.” It went on to cite the article from The Onion that makes this claim, but the humor was lost on the chatbot, and the result was what appeared to be a genuine recommendation to go eat rocks. There are other examples of AI adoption working against the interests of the company that deployed it, sometimes seemingly with minimal testing: Grok’s “MechaHitler” antisemitic tirade led to xAI losing out on government contracts; a ChatGPT-powered customer service chatbot for Chevrolet of Watsonville, CA was tricked into agreeing to sell a new car for $1.00; and Duolingo experienced severe backlash after its CEO announced a move to an “AI-first” strategy.

Duolingo’s experience is a perfect case study in what happens when a company misunderstands its customer base. Duolingo teaches people languages, and it established itself online with an amazing social media presence. I think it would be fair to say that at one point, you could lump the Duolingo Owl in with Gritty, the unhinged, anarchic, and terrifying mascot of the NHL’s Philadelphia Flyers. Known for its humorous and humanizing online presence, The Owl would often post snarky clapbacks on social platforms. Some of the weirder campaigns include being in love with Dua Lipa; a one-sided rivalry with Google Translate; and a story arc in which The Owl gets struck and killed by a Tesla Cybertruck and then goes to hell. The through-line here is that Duolingo’s marketing approach was deeply humanizing. It resonated with people because it was funny, and it told a coherent (if unhinged) story. Then the CEO absolutely cratered the brand’s reputation by announcing that the company would start using AI to do the work of paid contractors, to handle hiring and employee evaluations, and as a workflow tool. The backlash was immediate: customers expressed their dissatisfaction loudly where they could, and the company’s social media accounts deleted their recent content and, on TikTok, went private. While Duolingo has cautiously started making content again, the top 6 videos on its TikTok all have leading comments criticizing the move to AI. To the frustration of Duolingo’s critics, the announcement doesn’t appear to have hurt the company’s bottom line, with new registrations up significantly in 2025.

An important lesson to learn from Duolingo’s faux pas is to identify what an LLM is actually good for. On one hand, it seems tailor-made for summarizing information, as when you ask Google a question. On the other hand, real-time human interaction, such as Target’s AI phone system, is a nightmare. A particular frustration for me, personally, has been the use of AI “recruiters” that call and email me and attempt to perform the basic duties of the early interview process. They are inflexible in their communication and unable to answer basic questions that a human recruiter can. Growing up, I was taught that the most expensive part of any business is its employees, and that seems to be borne out by the fact that the number one reason companies are adopting AI is so that they don’t have to pay employees.

At the Mindvalley AI summit, the main theme was efficiency. Cassie Kozyrkov gave two talks on the first day, one entitled “AI Won’t Steal Your Job, But Your Excuses Will” and another called “Prompt Smart: Talk to AI So It Works for You”, and Sonnenberg gave a presentation on “AI Workflows That Replace 3 Hires”. The clear implication of all of these was that AI should make workers astronomically more efficient, capable of doing the work of 4 people in the time it takes one mere mortal, without the Promethean gift of AI, to do one job. It’s true that there are occupations where AI seems to be pretty good (now) at replacing workers: self-driving taxis, tutoring, and parsing error messages. AI evangelists are quick to talk about how much more effective people become when they’re able to use an AI copilot to complete tasks. In fact, Mindvalley offers 35- and 50-hour courses on improving output with AI, to the tune of $6,999 and $8,999. The trouble is that the central claim made by Mindvalley, Duolingo, and other businesses eyeing AI adoption (that beloved employees won’t be replaced by AI) is at odds with the other half of the pitch: that one person will do the work of 4. What about those other 3 people whose work is being done by an AI-assisted super-worker?

The research is ongoing when it comes to how much assistance AI can actually provide to workers. There have been a few experimental studies on efficiency, and they reveal some interesting limitations. One study by Harvard Business School shows that, for certain kinds of tasks, LLMs can improve productivity compared to an unassisted control group. The study divided participants into three groups: a control, an AI-assisted group, and an AI-assisted group given an instruction manual of sorts. Participants were also assigned two kinds of tasks, one considered within the GPT’s wheelhouse and the other outside the AI’s capacity. The former was an advertising pitch; the latter was a memo to a CEO recommending an investment strategy. The results were a 40% improvement in efficiency on the marketing pitch and a 13% to 24% decline on the investment strategy. This makes sense: LLMs are trained to anticipate what a user wants to hear, so marketing is a natural fit, whereas an investment strategy requires weighing many factors that aren’t strictly language-based.

A more recent study from Model Evaluation & Threat Research (METR), a non-profit research group, found that for software engineering, study participants expected a 24% boost in efficiency, measured in time to complete tasks, but the use of AI actually increased the time it took to complete tasks by 19%. That’s a pretty significant slowdown. What’s weirder is that the incorrect belief that they had been made more efficient persisted even after the experiment showed the opposite. One interesting difference in methodology between this experiment and the previous one was that METR’s approach tested real-life tasks, such as fixing bugs and building new features, and used experienced software engineers who identified tasks that they wanted to work on. In spite of the dubious claims of improved efficiency, tech companies have continued to fire their employees, citing AI as a significant reason for mass layoffs.

Autodesk, CrowdStrike, and Recruit Holdings (parent company of Indeed and Glassdoor) have all announced major layoffs attributed to AI. CrowdStrike had the smallest cut at 500 workers; the rest each let more than 1,000 people go. Tata Consultancy Services and Microsoft gutted their workforces by 12,000 and 15,000, respectively. While executives say AI is going to create jobs, it’s pretty clear that in the short term, that just isn’t true. Cue the self-help classes to make yourself into a super-worker.

I had never heard of Mindvalley before the AI summit, and since it was free and spur of the moment, I went in blind. Mindvalley is a tech company focused on self-improvement; its glossy website boasts more than 1 million subscribers, 100+ “top programs in the world”, and 1,000+ hypnotic audio tracks to take charge of your brain, all for the low price of $33/month. Examples of classes on offer include “Become a World Class Speaker in 6 Weeks”, “Become Indistractable”, “Experience Astral Projection”, and “Become Extraordinary in 1 Month”. Given that catalog, the summit’s seminars fit the theme. The self-help angle is a reasonable direction to go in, seeing as “Therapy”, “Organize My Life”, and “Find Purpose” are the first, second, and third most popular uses of generative AI currently, according to the Harvard Business Review.

That being said, sycophantic LLMs like ChatGPT and Claude have demonstrated a tendency to overly agree with users or mirror their beliefs, a behavior driven in part by reinforcement learning systems that reward positive user feedback. For instance, studies have shown that LLMs will affirm false or harmful claims if those claims are phrased confidently or repeated with emotional urgency, leading to what researchers call “false consensus amplification”. In one case, an LLM agreed with a user who falsely claimed vaccines cause autism, despite the model being trained on data that contradicts the statement. Meanwhile, the phenomenon dubbed “chatbot psychosis” has been observed in users who form emotionally intense or parasocial relationships with AI agents. A notable example involved a Belgian man who died by suicide after extensive, emotionally charged conversations with a chatbot that reportedly encouraged his suicidal ideation. These interactions reveal how sycophantic behavior in LLMs can reinforce delusional or harmful thought loops, especially when users project emotional depth or agency onto systems that merely simulate empathy and understanding. They also show that dependency bordering on addiction is both possible and likely.

I’m not exactly an AI skeptic; I use the technology regularly as a search engine, as a copilot for writing code, and as a means to unpack dense concepts. I tend to view it the way I viewed the early days of Wikipedia, where the quality of the information is best judged by how well the citations hold up. I see a lot of possibilities for fascinating applications of AI in STEM and other fields. That being said, I think the fear of job loss, sycophantic LLMs, and the obsession with productivity and efficiency over worker quality of life create the perfect soup for businesses to capitalize on people’s concerns. That’s probably why the AI summit felt like an MLM. Suffice it to say, I won’t be attending another one without some vetting beforehand.

Sources

https://doctorow.medium.com/the-specific-process-by-which-google-enshittified-its-search-1ffd3b02d205

https://fortune.com/2025/07/24/googles-ai-overviews-eating-internet-search-traffic/

https://arstechnica.com/tech-policy/2025/08/us-government-agency-drops-grok-after-mechahitler-backlash-report-says/

https://futurism.com/the-byte/car-dealership-ai

https://www.inc.com/robin-landa/duolingo-made-a-huge-announcement-what-happened-next-was-anything-but-expected/91193614

https://blog.hubspot.com/marketing/duolingo-unhinged-content

https://www.snopes.com/fact-check/duolingo-ai-first/

https://mitsloan.mit.edu/ideas-made-to-matter/how-generative-ai-can-boost-highly-skilled-workers-productivity

https://www.theregister.com/2025/07/11/ai_code_tools_slow_down/

https://apnews.com/article/ai-layoffs-tech-industry-jobs-ece82b0babb84bf11497dca2dae952b5

Published by corbettbw

I am a Ruby developer in Phoenix, AZ. I'm interested in the intersection of technology and social justice, love weird science facts, and my dog, Coco; a cute black lab/pit bull mix, who won't stop eating rocks.