Opener
There’s no shortage of buzz around generative AI, with marketers touting Large Language Models (LLMs) as “reasoning engines” and “thinking machines.” But the truth? These models are being anthropomorphized — marketed to resemble human thought so that they become relatable. In this article, we demystify some of the terms that have become commonplace in LLM-centric discussions.
Reasoning by Momentum
GPTs don’t “reason”; they generate. One word at a time. Each output creates a trace that feeds the next. The result is a kind of linguistic momentum: a trace of text giving rise to another trace, and another. From a distance, it looks like reasoning. But up close, it’s language predicting more language. In reality, we have yet to define, let alone model, what “real” reasoning entails within the building blocks of generative technology.
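To see what “one word at a time” means in practice, here is a minimal sketch of the generation loop. It assumes the Hugging Face transformers library and the small “gpt2” checkpoint purely as stand-ins; any causal language model follows the same pattern: score every possible next token, sample one, append it to the context, and repeat.

    # Minimal autoregressive generation loop (sketch).
    # Assumptions: Hugging Face transformers + the "gpt2" checkpoint,
    # chosen only for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    for _ in range(20):                                    # 20 tokens, one at a time
        with torch.no_grad():
            logits = model(ids).logits                     # a score for every vocabulary token
        probs = torch.softmax(logits[0, -1], dim=-1)       # distribution over the next token only
        next_id = torch.multinomial(probs, num_samples=1)  # sample; nothing is "decided"
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)  # the trace feeds the next step

    print(tokenizer.decode(ids[0]))

Production systems add plenty on top (KV caching, temperature schedules, beam search), but the core loop is exactly this: sample, append, repeat.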
Researchers have even extended this illusion by prompting the model with phrases like “wait…” right before it stops. That tiny hesitation nudges the model to continue the "thought" — stretching the trace a little longer. Not because it “thought again,” but because the prompt pushed it back onto the rails of generation.
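As a rough sketch of that trick, continuing with the model and tokenizer loaded in the previous snippet: whenever generation stops, append a cue and let the model pick the trace back up. The cue string, the number of extra rounds, and the token budget below are arbitrary illustrative choices, not the exact protocol of any particular paper.

    # Sketch of the "wait..." trick: extend a finished generation by appending
    # a continuation cue and generating again. Reuses `model` and `tokenizer`
    # from the previous snippet; all strings and numbers here are illustrative.
    def extend_the_trace(model, tokenizer, prompt, extra_rounds=2, cue="Wait, "):
        out = model.generate(**tokenizer(prompt, return_tensors="pt"),
                             max_new_tokens=128, do_sample=True)
        text = tokenizer.decode(out[0], skip_special_tokens=True)
        for _ in range(extra_rounds):
            text += "\n" + cue                             # nudge it back onto the rails
            out = model.generate(**tokenizer(text, return_tensors="pt"),
                                 max_new_tokens=128, do_sample=True)
            text = tokenizer.decode(out[0], skip_special_tokens=True)
        return text

    print(extend_the_trace(model, tokenizer, "Q: Is 97 a prime number? Think step by step."))

Note how mechanical the intervention is: no new “thought” is injected, only more text for the model to continue from.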
With that said, we must give credit where credit is due. Such approaches have improved performance across multiple benchmarks, albeit with increased energy consumption, and those results are of undeniable value.
Hallucinations: A Feature, Not a Failure
When a model confidently makes something up, it’s not lying; it’s sampling. There’s no internal fact-checker, only probability-backed, plausible continuations of word sequences. Calling it a “hallucination” makes it sound accidental, but it’s not. It’s an expected outcome, baked into how these models are built and how they generate text.
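To make “sampling, not checking” concrete, here is a small sketch, again assuming the transformers library and the “gpt2” checkpoint as stand-ins. It inspects the model’s next-token distribution for a prompt and draws from it; no step in the pipeline consults a source of truth.

    # Sketch: next-token selection is a draw from a probability distribution.
    # Assumptions: Hugging Face transformers + "gpt2", for illustration only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The author of the novel Middlemarch is", return_tensors="pt").input_ids
    with torch.no_grad():
        probs = torch.softmax(model(ids).logits[0, -1], dim=-1)

    top = torch.topk(probs, k=5)
    for p, i in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(i))!r}: {p.item():.3f}")   # plausible, not verified

    next_id = torch.multinomial(probs, num_samples=1)            # the "answer" is a draw
    print("sampled:", tokenizer.decode(next_id))

A fluent right answer and a fluent wrong one come out of exactly the same operation; the distribution encodes plausibility, not truth.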
"Hallucination" detection and mitigation is an active area of research, with significant ongoing efforts to improve model reliability and factual grounding.
Closer: Steering Away from the Theatrics of “AGI” and Co.
Marketers love to slap ill-defined labels (think Artificial General Intelligence) onto these tools, but such terms overstate what the models actually do. LLMs are sophisticated algorithms that compress huge amounts of internet data; they are data-driven text generators. Mythologizing these systems invites misuse and erodes user trust.
The magic happens when we stop buying into overblown sales tactics and start understanding, and designing for, what these models can concretely offer.