We Were Promised Cortana. We Got A Search Bar.
Now meet NotebookLM.
In the Halo video game series, the fictional AI Cortana was precise, honest, and never got anything wrong. Microsoft took her name and built something that got discontinued. NotebookLM is the first real attempt to close that gap. Here is the catch.
By Asegul Hulus
Perhaps you recognize Cortana as the Microsoft assistant that no one requested. The Windows 10 feature that persistently surfaced, pushed Bing searches, set unwanted reminders, and disrupted your work with its overly enthusiastic yet unhelpful presence. Microsoft launched her in 2014. By the time Windows 11 was released she was already on her way out, and Microsoft Copilot eventually replaced her after her discontinuation. Gone. Just like that.
Here is the part Microsoft would rather you forget. Cortana's name comes from a character in the Halo video game series. The gap between those two Cortanas is one of the most revealing stories in the history of technology.
The fictional Halo Cortana is not a search bar. She is an AI that retrieves, analyses, and delivers exactly what is needed, exactly when it is needed. She does not guess. She does not hallucinate. She's versatile in her communication, capable of adapting to different formats and contexts, and she's meticulous about ensuring Master Chief has accurate information before engaging in combat. Her brilliance, precision, and honesty shine through in her understanding of what she knows and what she doesn't.
Microsoft took that name and built a Bing shortcut. Then discontinued it. They named their AI after the honest one. Then they built the opposite. The irony should bother us more than it does.
Three versions of the same promise.
- Halo Cortana, the fictional AI: retrieves verified intelligence, adapts across formats, never fabricates. The standard we were reaching for.
- Microsoft Cortana, named after the Halo character: launched in 2014, replaced by Copilot. A Bing-powered reminder tool that could not live up to its own name.
- NotebookLM: no heroic name, but the architecture finally starts to close the gap. It retrieves. It verifies. It does not make things up. Here is the catch.
The Technology Behind the Truth
NotebookLM runs on Retrieval-Augmented Generation, or RAG. Rather than generating answers from the model's memory alone, it first retrieves relevant passages from the sources you provide, then grounds its answer in that evidence. It does not invent. It verifies. That is not how most AI works. That is how Cortana was supposed to work.
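To make the pattern concrete, here is a minimal sketch of the RAG control flow in Python. The function names and the word-overlap scoring are my own illustrative choices, not NotebookLM's actual implementation; production systems use vector embeddings for retrieval and a language model for generation. But the essential loop is the same: retrieve relevant source passages first, then answer only from them, and refuse when nothing supports an answer.

```python
# Illustrative RAG sketch (hypothetical names, not NotebookLM's real code).
# Step 1: retrieve the source passages most relevant to a query.
# Step 2: answer only from those passages, or refuse outright.

from collections import Counter


def score(query: str, passage: str) -> int:
    """Crude relevance score: count of words shared by query and passage."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())  # multiset intersection


def retrieve(query: str, sources: list[str], k: int = 2) -> list[str]:
    """Return up to k passages with a nonzero relevance score."""
    ranked = sorted(sources, key=lambda s: score(query, s), reverse=True)
    return [s for s in ranked[:k] if score(query, s) > 0]


def answer(query: str, sources: list[str]) -> str:
    """Ground the answer in retrieved evidence; never fabricate."""
    hits = retrieve(query, sources)
    if not hits:
        return "No supporting source found."  # refuse rather than guess
    # A real system would pass `hits` to a language model as context;
    # here we simply quote the retrieved evidence.
    return " | ".join(hits)
```

The refusal branch is the whole point: where a plain generative model would produce a fluent guess, a RAG system with no supporting passage says so.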
The evidence for this architecture comes from a place where accuracy genuinely cannot be faked: medicine. Tozuka et al. (2024) tested RAG in clinical lung cancer staging and found something that should make every educator pay attention.
86% diagnostic accuracy in high-stakes clinical decision-making using the same RAG architecture that powers NotebookLM. When getting it wrong costs lives, RAG delivered (Tozuka et al., 2024).
That number matters in a classroom too. Students build their understanding on what they are told. When the AI is wrong and confident about it, the damage compounds quietly across an entire degree. A tool that checks its sources before it speaks is not a minor upgrade. It is a different category of technology altogether.
What It Does in a Classroom
I mapped NotebookLM against two established educational frameworks, the Community of Inquiry model and Technological Pedagogical Content Knowledge (TPACK), and the alignment is stronger than you would expect from a commercial tool. The Community of Inquiry model asks for three things: cognitive presence, where students engage meaningfully with ideas; teaching presence, where educators design and facilitate learning effectively; and social presence, where participants feel genuinely connected to the learning environment. NotebookLM supports all three in ways most AI tools simply do not.
Through the TPACK lens, it demonstrates what genuine technology integration looks like. Not just adding a digital layer, but changing how knowledge moves between teacher, student, and content. It gives educators a verified foundation to work from rather than a plausible-sounding fiction generator.
And now it does something the Halo Cortana would recognise. Upload your lecture notes and NotebookLM generates a podcast. Feed it your research paper and it produces a video. Your syllabus becomes a conversation. It shifts format to fit the mission, exactly the way a real AI assistant should. That is not a gimmick. That is multimodal learning design finally arriving in a tool that works.
Cortana only existed because Master Chief had the hardware to run her. NotebookLM has the same problem.
Here Is the Catch
What the excitement skips over:
- NotebookLM requires institutional infrastructure most under-resourced universities simply do not have.
- Faculty need genuine development support to use it well, not a tutorial and good luck.
- Students without strong digital literacy do not benefit equally. The gap widens, not narrows.
- The tool is built by Google. The data questions are not trivial and nobody is asking them loudly enough.
- Microsoft discontinued Cortana when it stopped serving their roadmap. What happens to NotebookLM when Google does the same?
This is the pattern we keep seeing. A technology arrives that genuinely works. Well-resourced institutions run with it. Everyone else waits. By the time it reaches the margins, the centre has already moved on to the next thing.
My research found that NotebookLM has real transformative potential in higher education, but only if implementation is meticulous and supported by robust infrastructure (Hulus, 2025). That word meticulous is doing a lot of work. It means this does not happen by accident. It means equity has to be designed in from the start, not discovered as an afterthought two years later.
Microsoft named their assistant after the honest AI and then built something that could not live up to the name. We should not make the same mistake with NotebookLM. The technology is finally starting to close the gap between the AI we imagined and the AI we actually built.
Now It Is Your Turn
If you are an educator: Upload one lecture. Generate one podcast. See what your students say. Then ask your institution why this is not standard.
If you are an institution: Stop waiting for the technology to prove itself. It already has. Start asking how you fund access for every student, not just the ones on the right campus.
If you are a reader: Try NotebookLM this week. Then come back and tell me it is not Cortana.
If you are in the room where decisions get made: Next time your institution talks about AI strategy, ask one question: who does this actually serve?
The tool exists. The research backs it. The only thing missing is the will to make it equitable.
We were promised the honest machine. For the first time, something close to it exists. Do not waste it.
Dr. Asegul Hulus is an Assistant Professor in Computer Science and a Fellow of the Higher Education Academy (FHEA). She is a distinguished researcher and published author with expertise across multiple Computer Science disciplines. She serves on the ACM Council on Women in Computing (ACM-W), where she is an investigative journalist and is on the Global Chapters Committee. She is also the founder of MetaTech Feminism, a pioneering framework at the intersection of technology and feminist research.
