Your LLM Won’t Stop Lying Any Time Soon
Researchers call it “hallucination”; you might more accurately refer to it as confabulation, hornswoggle, hogwash, or just plain BS. Anyone who has used an LLM has encountered it; some people seem to find it behind every prompt, while others dismiss it as an occasional annoyance, but nobody claims it doesn’t happen. A recent paper by researchers at OpenAI (PDF) tries to drill down a bit deeper into just why that happens, and whether anything can be done about it.
Spoiler alert: not really. Not unless we completely re-think the way we’re training these models, anyway. The analogy used in the conclusion is to an undergraduate in an exam room. Every right answer is going to get a point, but wrong answers aren’t penalized – so why the heck not guess? You might not pass an exam that way going in blind, but if you have studied (i.e., sucked up the entire internet without permission for training data) then you might get a few extra points. For an LLM’s training, like a student’s final grade, every point scored on the exam is a good point.
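To see why guessing always wins under that kind of grading, here’s a toy expected-score calculation (our illustration, not the paper’s): with 0-or-1 exam scoring and no penalty for wrong answers, any nonzero chance of being right makes a guess worth more than an honest “I don’t know”.

```python
# Toy expected-score calculation (our illustration, not from the paper):
# with exam-style 0/1 grading and no penalty for wrong answers, any
# nonzero chance of being right makes guessing beat "I don't know".

ABSTAIN_SCORE = 0.0  # "I don't know" earns nothing

for p_correct in (0.05, 0.25, 0.50):
    guess_score = p_correct * 1.0  # one point if right, nothing lost if wrong
    print(f"p={p_correct:.2f}: guess={guess_score:+.2f}, "
          f"abstain={ABSTAIN_SCORE:+.2f}")
# Every row favors guessing, however unlikely the model is to be right.
```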
The problem is that if you reward “I don’t know” in training, you may eventually produce a degenerate model that responds to every prompt with “IDK”. Technically, that’s true – the model is a stochastic mechanism; it doesn’t “know” anything. It’s also completely useless. Unlike some other studies, however, the authors do not conclude that so-called hallucinations are an inevitable result of the stochastic nature of LLMs.
While that may be true, they point out it’s only the case for “base models” – pure LLMs. If you wrap the LLM with a “dumb” program that can parse a question and hand the arithmetic off to a calculator, for example, suddenly the blasted thing can pretend to count. (That’s how undergrads do it these days, too.) You can also provide the LLM with a cheat-sheet of facts to reference instead of hallucinating; it sounds like what’s being proposed is a hybrid between an LLM and the sort of expert system you used to access through Wolfram Alpha. (A combo we’ve covered before.)
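As a sketch of what such a wrapper might look like (hypothetical names throughout; the paper doesn’t prescribe an implementation, and the LLM here is just a stand-in function), here’s a toy router that hands anything resembling arithmetic to a real calculator instead of letting the model improvise:

```python
import ast
import operator

# Minimal "dumb" wrapper sketch (hypothetical, not from the paper):
# route anything that parses as plain arithmetic to a real calculator,
# and only fall back to the language model for everything else.

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _calc(node):
    """Safely evaluate a parsed arithmetic expression."""
    if isinstance(node, ast.Expression):
        return _calc(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_calc(node.left), _calc(node.right))
    raise ValueError("not plain arithmetic")

def answer(prompt: str, llm=lambda p: "(model's best guess)") -> str:
    """Try the calculator first; fall back to the (stand-in) LLM."""
    try:
        return str(_calc(ast.parse(prompt, mode="eval")))
    except (SyntaxError, ValueError):
        return llm(prompt)

print(answer("17 * 23"))            # handled by the calculator: 391
print(answer("Who won in 1986?"))   # handed to the stand-in model
```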
In that case, however, some skeptics might wonder why bother with the LLM at all, if the knowledge in the expert system is “good enough.” (Having seen one AI boom before, we can say with the judgement of history that the knowledge in an expert system isn’t good enough often enough to make many viable products.)
Unfortunately, that “easy” solution runs back into the issue of grading: if you want your model to do well on the scoreboards and beat ChatGPT or DeepSeek at popular benchmarks, there’s a certain amount of “teaching to the test” involved, and a model that occasionally makes stuff up will apparently do better on the benchmarks than one that refuses to guess. The obvious solution, as the authors propose, is changing the benchmarks.
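In scoring terms, “changing the benchmarks” might look something like the toy rule below (our own assumption, not the paper’s exact proposal): once wrong answers carry a penalty, guessing only pays off when the model is actually likely to be right, so an honest abstention stops being a losing move.

```python
# Toy scoring rule with a wrong-answer penalty (our assumption, not the
# paper's exact proposal). Guessing now only beats "I don't know" when
# the model's chance of being right clears a confidence threshold.

def guess_score(p_correct: float, wrong_penalty: float) -> float:
    return p_correct - (1 - p_correct) * wrong_penalty

ABSTAIN = 0.0
PENALTY = 1.0  # each wrong answer costs as much as a right one earns

for p in (0.3, 0.5, 0.7):
    better = "guess" if guess_score(p, PENALTY) > ABSTAIN else "abstain"
    print(f"p={p:.1f}: expected={guess_score(p, PENALTY):+.2f} -> {better}")
# With PENALTY = 1.0 the break-even point is p = 0.5: below it, an honest
# "I don't know" scores better than a confident fabrication.
```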
If you’re interested in AI (and who isn’t, these days?), the paper makes an interesting read. Interesting, if perhaps disheartening, if you were hoping the LLMs would graduate from their eternal internship any time soon.
Via ComputerWorld, by way of whereisyouredat.