Your LLM Won't Stop Lying Any Time Soon
Researchers call it "hallucination"; you might more accurately refer to it as confabulation, hornswoggle, hogwash, or just plain BS. Anyone who has used an LLM has encountered it; some people seem to find it behind every prompt, while others dismiss it as an occasional annoyance, but nobody claims it doesn't happen. A recent paper by researchers at OpenAI (PDF) tries to drill down a bit deeper into just why that happens, and whether anything can be done about it.
Spoiler alert: not really. Not unless we completely re-think the way we're training these models, anyway. The analogy used in the conclusion is to an undergraduate in an exam room. Every right answer is going to get a point, but wrong answers aren't penalized, so why the heck not guess? You might not pass an exam that way going in blind, but if you have studied (i.e., sucked up the entire internet without permission for training data) then you might get a few extra points. For an LLM's training, like a student's final grade, every point scored on the exam is a good point.
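If you like your analogies with numbers attached, here is the exam-room incentive as back-of-the-envelope Python. The figures are ours, purely for illustration, and not anything taken from the paper:

```python
# A toy sketch of the incentive under standard benchmark grading.
# Made-up numbers: the model is unsure on 100 questions, and its
# blind guesses happen to be right 20% of the time.
unsure_questions = 100
p_guess_correct = 0.20   # hypothetical accuracy when guessing

# Typical grading: 1 point for a correct answer, 0 for a wrong answer,
# and 0 for saying "I don't know".
expected_score_guessing = unsure_questions * (p_guess_correct * 1 + (1 - p_guess_correct) * 0)
expected_score_abstaining = unsure_questions * 0

print(expected_score_guessing)    # 20.0 -- guessing can only help
print(expected_score_abstaining)  # 0.0  -- honesty scores nothing
```

Under that rubric, bluffing never scores worse than abstaining and usually scores better, which is exactly the point the authors are making.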
The problem is that if you reward "I don't know" in training, you may eventually produce a degenerate model that responds to every prompt with "IDK". Technically, that's true: the model is a stochastic mechanism; it doesn't "know" anything. It's also completely useless. Unlike some other studies, however, the authors do not conclude that so-called hallucinations are an inevitable result of the stochastic nature of LLMs.
While that may be true, they point out it's only the case for "base models", that is, pure LLMs. If you wrap the LLM with a "dumb" program that can parse the question and hand the arithmetic off to a calculator, for example, suddenly the blasted thing can pretend to count. (That's how undergrads do it these days, too.) You can also provide the LLM with a cheat-sheet of facts to reference instead of hallucinating; it sounds like what's being proposed is a hybrid between an LLM and the sort of expert system you used to use Wolfram Alpha to access. (A combo we've covered before.)
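For the curious, the "dumb" wrapper idea mostly amounts to routing: if the question is really just arithmetic, don't let the model anywhere near it. The sketch below is our own toy illustration, and call_llm() is a hypothetical stand-in rather than any real API:

```python
import ast
import operator

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever language model you're wrapping.
    return "a fluent, possibly made-up answer"

# Only the basic arithmetic operators are allowed through.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression with a real calculator."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    # If the prompt parses as arithmetic, use the calculator;
    # otherwise fall back to the language model.
    try:
        return str(safe_eval(prompt))
    except (ValueError, SyntaxError):
        return call_llm(prompt)

print(answer("17 * 24"))                # 408 -- the calculator, not the model
print(answer("Who was Ada Lovelace?"))  # routed to the LLM
```

Real tool-calling setups are fancier about deciding when to hand off, but the division of labor is the same: the model does the language, something deterministic does the facts and the math.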
In that case, however, some skeptics might wonder why bother with the LLM at all, if the knowledge in the expert system is "good enough." (Having seen one AI boom before, we can say with the judgment of history that the knowledge in an expert system isn't good enough often enough to make many viable products.)
Unfortunately, that "easy" solution runs back into the issue of grading: if you want your model to do well on the scoreboards and beat ChatGPT or DeepSeek at popular benchmarks, there's a certain amount of "teaching to the test" involved, and a model that occasionally makes stuff up will apparently do better on the benchmarks than one that refuses to guess. The obvious solution, as the authors propose, is changing the benchmarks.
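What would a changed benchmark look like? One simple option (ours, for illustration, not necessarily the paper's exact proposal) is negative marking, which flips the napkin math from earlier:

```python
# Same made-up numbers as before, but with a grading change that
# penalizes wrong answers instead of treating them like blanks.
unsure_questions = 100
p_guess_correct = 0.20   # hypothetical accuracy when guessing
wrong_penalty = -0.5     # hypothetical negative marking

expected_score_guessing = unsure_questions * (
    p_guess_correct * 1 + (1 - p_guess_correct) * wrong_penalty
)
expected_score_abstaining = unsure_questions * 0

print(expected_score_guessing)    # -20.0 -- now bluffing costs points
print(expected_score_abstaining)  # 0.0   -- "I don't know" becomes the smarter move
```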
If you're interested in AI (and who isn't, these days?), the paper makes an interesting read. Interesting, if perhaps disheartening, if you were hoping the LLMs would graduate from their eternal internship any time soon.
Via ComputerWorld, by way of whereisyouredat.
hackaday.com/2025/10/10/your-l…