
Disarrai: Is the AI demon driving you over the edge?


Our resident hero, Dee, is currently sitting in her mum’s MX-5 (Miata for those of you reading on the other side of the pond), on the edge of a cliff, facing Disarrai, the Synthetic Siren. Her Torch of Knowledge - the ancient laser-pointer-cum-lightsaber that can fight the demons of L&D - is flickering a dim blue colour, as Dee has no idea how to tackle the monster that’s been innocuously cosying up to most of us for the last couple of years.

Disarrai doesn't smash things. She’s the architect of synthetic doubt. With a smooth, calm voice, she tells you that your own hard-earned experience is messy and clunky, and that she can make it perfect with a single prompt. She weaponises AI-isms to infect your confidence until every piece of learning you produce is hollowed out by fear. Her aim? To make us blindly give away our agency and put all our trust in the machine.


The seduction of the blue pill (no, not that one)

I hate to throw a Matrix analogy into this, but it’s too perfect - Disarrai’s first move is always the same: offering you the blue pill. She makes the fake world look better than the real one.

In 2024 alone, the share of companies in the L&D industry experimenting with AI jumped from 5% to 95% in just twelve months (Docebo L&D Trends 2025). That’s a massive surge, and it’s where Disarrai does her best work. She points at an LLM (Large Language Model) and whispers, "Why spend three days scripting a branching scenario about difficult conversations when I can do it in three seconds?"

Agency theft

AI isn’t bad, but its ease of use is incredibly seductive. According to recent industry sentiment surveys, 40% of workers now worry about job stability, with AI being a primary fear factor for almost half of those surveyed (Monster WorkWatch Report). Disarrai feeds on this fear. She makes you think that if you don't use her honeyed words in your work, you’ll be left behind like a Blu-ray player in a streaming world (Netflix will have to prise my 4K steelbook disc collection from my cold, dead hands).

But when you use AI to generate the entire structure and content of your learning course, because it’s such a massive time-saver, what you’re actually doing is surrendering your agency. You’re letting a statistical model decide how a human being should learn.

Smells like AI

Not that I’m one of those “This was obviously written with AI” warriors on LinkedIn, but I’ve definitely seen an increase in online courses lately that carry the whiff of being the result of a good prompt: perfectly grammatical, structurally sound, and devoid of any actual point. I’m not sure that’s something we can get away from now, but it does provide an opportunity for those of us who actually enjoy the process of writing, creating and designing learning to differentiate ourselves from the low-level prompters. Disarrai wants us to second-guess our own voices. She tries to convince me that my Yorkshire bluntness or my Goonies analogies are unprofessional, and that I should replace them with sterile, optimised text. Not gonna happen, demon.

Hallucination horror

The most dangerous thing about Disarrai is her confidence. She’ll lie to your face about a trending statistic or a complex trigger in Articulate Storyline, and she’ll do it so convincingly that you’ll doubt your own twenty-plus years of experience.

This isn't just me being cynical. Global business losses from AI hallucinations were estimated at $67.4 billion in 2024 (AllAboutAI/Four Dots 2025). There are now over 220 published legal decisions where AI-generated content produced hallucinations - made-up nonsense presented as fact (Mishcon de Reya AI Tracker).

Disarrai uses this confident tone to infect your content with misinformation. She wants us to move away from being Learning Designers and become Prompters. If we stop understanding the why behind our courses, learning programmes, training and experiences, we become obsolete, letting the machine do the thinking while our brains atrophy.

Synthetic feedback

A recent meta-analysis of 41 controlled studies shows that while learners might perform similarly on tests using AI feedback, there is a measurable negative effect on their perception of that feedback (Taylor & Francis / ResearchGate). It feels synthetic. It lacks the informative and questioning nature of a human teacher. AI tends to be directive and praise-oriented, telling you what to do but not helping you understand how to think.

Disarrai loves this. She wants a feedback loop of blandness. AI trains on existing boring e-learning, generates new boring e-learning, and future AI trains on that. That makes for a pretty bleak future.


Relight the torch with human-led AI

So, how do we stop Disarrai from hollowing us out? I keep hammering home this point across different articles, but the key is to treat AI like a tool, not your boss. If I go back to 2021, before I really started messing with AI, my software stack was Photoshop, InDesign, H5P and Google Docs, and none of those programs told me what to build. Neither should AI.

Build your own skeleton

Never ask an AI to "write a course about X." That’s just throwing in the towel before you’ve begun, and you may as well retire your brain now. Instead, do the heavy lifting of the instructional design yourself. Data shows that 70% of learning practitioners who are valued by their leaders design learning based on evidence-informed principles, compared with only 14% of those who aren't (CIPD Learning at Work Survey 2023). Be in that 70%.

You are a crap filter (a filterer of crap, not a useless filter)

Use AI for the grunt work. If I need a list of twenty distractor options for a multiple-choice question in Articulate, I’ll ask Gemini. But I will then go through that list with a red pen and kill eighteen of them for being boring. 76% of enterprises now include “human-in-the-loop" processes specifically to catch hallucinations before deployment (IBM 2025).
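If you like, the workflow above can be sketched as a tiny human-in-the-loop script. This is a minimal, hypothetical sketch - `generate_distractors` is a stand-in for a real LLM call (prompting Gemini, say), not an actual API, and the approved set is whatever survives your red pen:

```python
# Sketch of the "crap filter" workflow: generate plenty of AI drafts,
# then let a human cull them before anything ships.

def generate_distractors(question, n=20):
    # Hypothetical placeholder for an LLM call - a real version
    # would prompt a model; this just returns n canned drafts.
    return [f"Distractor {i}" for i in range(1, n + 1)]

def human_review(candidates, keep):
    """Keep only the drafts the human reviewer approved."""
    return [c for c in candidates if c in keep]

drafts = generate_distractors("Which Storyline trigger fires at timeline end?")
approved = human_review(drafts, keep={"Distractor 3", "Distractor 11"})
print(approved)  # the other eighteen get the red pen
```

The point of the sketch is the shape, not the code: the model proposes in bulk, but a human makes every final cut.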

Claim your voice

Authenticity is the only thing AI can't fake effectively. A study published in Scientific Reports showed that while AI can beat the average human in some creativity tests, the top 10% of creative humans still leave AI well behind (ScienceDaily 2026). Disarrai hates personal style because it’s unpredictable. Be unpredictable. Use that He-Man analogy (new Masters of the Universe film is out this June - just sayin’).

Don’t be so gullible, McFly!

A phrase often attributed to journalists and lawyers is “Never ask a question that you don't know the answer to.” It’s so you can guide the narrative and identify when the subject of your interview is lying. And it’s a fantastic rule to apply when prompting AI to generate assets for you. Generative AI has the potential to convincingly deliver utter rubbish, and unless you already know what the result of your prompt should look like, you leave yourself open to getting caught up in an AI lie. Don’t just prompt and go - apply your own logic, reasoning, skills and experience to assess the end result, and be prepared to bin it if you aren’t confident putting your name to it.


If your friend told you to jump off a cliff…

Disarrai wants a world of safe, predictable, and hollow learning. The irony is that by unquestioningly following AI's "safe" path, we actually put our own agency in danger. She wants us to be so terrified of making a mistake or sounding unpolished that we stop trying to innovate. If Dee stops believing in her abilities, if she lets Disarrai take control, the Torch of Knowledge’s fiery orange blade dims to a cold, icy blue. To get that white-hot flame back, she has to trust her own instincts, her rock star sensibility, and her own willingness to be messy and human.

AI is not your friend and does not have your best interests at heart - most of the time it just tells you what you want to hear. “If your friend told you to jump off a cliff, would you?” is a phrase handed down through generations of parents to their kids. It’s designed to stop them blindly trusting the herd and promote the development of critical thinking (or just plain common sense). Think about that the next time you find yourself in the dark and an LLM is telling you to move forward.

Human ideas are rough. First drafts of copy, designs, or learning programmes are often a bit rubbish before we put the work in to make them better. But that creative grit is ours, and genuine human intuition resonates with your fellow humans in a way a statistical model never will. Trust your own instincts, find your own voice, and don’t give away your agency to a machine - because sometimes the safe and easy route that AI offers can actually see you driving off a cliff.


Mark Gash is a creative content lead for e-learning, who believes there has to be more to training content than just clicking a next button.
Connect with him here: 
https://www.linkedin.com/in/markgash