OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
-
Thing is, even the *correct* answers it gives are also hallucinations, in the sense that they happen more or less by chance, in imitation of text it's seen in its training.
-
@weekend_editor @cstross Great point! It’s not like the models actually *know* how (or even whether) the symbols they read in, manipulate, and output relate to the real world.
-
@michaelgemar @weekend_editor @cstross Funny thing is, we humans are constantly hallucinating. We don't experience the real world but a hallucination our brains build for us. The difference is that our brains constantly correct those hallucinations, checking them against sensory perceptions and updating the virtual world inside our minds in something close to real time. When people suffer from hallucinations, it is because the checks and corrections part doesn't work properly due to some mental illness. When people experience recreational hallucinations after ingesting psychedelics, it's similar but temporary, and the psychonauts are aware of this.
-
@LordCaramac @weekend_editor @cstross Sure, neither we nor anything else directly perceive the real world — our sensorium is highly mediated by biological and cognitive processes. But as you say, it *does* maintain a causal connection with the physical world. LLMs have no such connection.
-
@michaelgemar @LordCaramac @weekend_editor @cstross
When we dream, we may have hours of only a weak causal, perceptual connection with the world, which allows our dreams to wander far from reality.
The fact that this is so ubiquitous is good evidence that, beyond this continuous feedback connection, our brains have no special capacity to resist hallucination.
And now that we know this, it's going to be pretty straightforward to give AI that kind of feedback connection too.
-
@interstar @LordCaramac @weekend_editor @cstross I’m very dubious that it is “straightforward” to give AI the kind of sensory and conceptual connection to the world that humans have (or that LLMs are the kind of architecture that can easily integrate such ongoing connections, given that training such models is a huge separate step).
-
@michaelgemar @LordCaramac @weekend_editor @cstross
It does depend on the particular domain you are working in and the things you don't want it to hallucinate about. Say you are asking it to do literature reviews of scientific papers. As long as you can quickly check any assertions it makes about physics or papers against a database of accepted physics facts and papers, you've "solved" the hallucination problem in that domain.
It may not be easy, but it's "straightforward".
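A minimal sketch of the kind of check being described here, assuming the literature-review use case; the tiny `KNOWN_PAPERS` index and the `check_citations` helper are hypothetical stand-ins for a real papers database and a real claim extractor, not anything from the thread:

```python
# Hypothetical sketch: verify a model's cited papers against a trusted index.
# KNOWN_PAPERS and check_citations are made-up stand-ins, not a real database or API.

KNOWN_PAPERS = {
    "attention is all you need": 2017,
    "language models are few-shot learners": 2020,
}

def check_citations(cited_titles):
    """Split the model's citations into verified and unsupported ones."""
    verified, unsupported = [], []
    for title in cited_titles:
        if title.strip().lower() in KNOWN_PAPERS:
            verified.append(title)
        else:
            unsupported.append(title)  # flag for human review or regeneration
    return verified, unsupported

verified, flagged = check_citations([
    "Attention Is All You Need",
    "A Plausible-Sounding Paper That Does Not Exist",
])
print("verified:", verified)
print("needs checking:", flagged)
```

Anything that lands in the "needs checking" bucket would go back for human review or regeneration, which is the retrospective-correction loop discussed in the rest of the thread.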
-
@interstar @LordCaramac @weekend_editor @cstross That’s not *solving* the problem — it’s like using an abacus to check a sometimes incorrect calculator. A *solution* would be an advance that eliminated “hallucinations”.
-
@michaelgemar @LordCaramac @weekend_editor @cstross
But that's the point we were just discussing. It turns out the brain doesn't have a mechanism that "avoids" hallucinations. (If it did, we wouldn't dream.)
All it has is some way of cross-referencing its assumptions against the world and correcting itself.
-
@interstar @michaelgemar @LordCaramac @weekend_editor @cstross your reasoning is so off and wrong. YES, of course the brain HAS a mechanism to check that thoughts are realistic. If it's not in the brain, then where is it? Liver? Heart? No, that's in the brain. The fact that the brain puts some components to sleep when you sleep (what a concept, huh?) is just a part of normal brain behavior.
You're desperate to defend GenAI by comparing it to the brain. The thing is, this comparison does not work.
-
@f4grx @interstar @michaelgemar @LordCaramac @weekend_editor The brain is not the sole component of the neurohormonal axis in the mammalian body. Nor are neurons the only relevant tissues in the brain, or action potentials the only mediator of non-local connections.
-
@cstross @f4grx @michaelgemar @LordCaramac @weekend_editor
None of this really matters.
The question is whether human brain-body systems magically just know the truth and are therefore incapable of hallucination, or whether our mechanism for avoiding / mitigating hallucination is *retrospective* correction, i.e. we compare new information against earlier assumptions and keep adapting.
The existence of dreams strongly suggests the latter.
-
@interstar @cstross @f4grx @michaelgemar @weekend_editor The real problem is that AI researchers often try to build intelligence upside down, like starting to build a house from the roof. Things like speech or playing complex games seem quite impressive, but the part that is really hard isn't the stuff for which humans have to work their brains at full capacity, it's all the things a four-month-old kitten can do without even trying. That's where we must start if we ever want to build something that's truly intelligent.
OTOH, intelligent beings can cause a lot of damage, like a kitten that just discovered the audio cables on the back of the HiFi stereo set (yes, I still use one of those).
-
@interstar @cstross @f4grx @michaelgemar @weekend_editor I expect the bubble to burst very soon, and then it will be another AI winter, and AI R&D will have to make do with tiny budgets again for a long time, because nobody is going to invest in anything AI.
-
@LordCaramac @interstar @f4grx @michaelgemar @weekend_editor However, most of what they call "AI" today is just deep learning stuff, so it'll be rebranded and continue under a new name.
-
@cstross @interstar @f4grx @michaelgemar @weekend_editor AI is just an umbrella term for different branches of Computer Science that work on solving complex problems for which we humans utilise our intelligence, and machine learning is one of those branches. I think huge LLMs are quite impressive from a research POV, but they aren't even half as useful for practical applications as marketeers would have us believe. Small open source models that can run on the local machine, even a Raspberry Pi without any network connection, will take over once Big AI goes bust.
-
@LordCaramac @cstross @interstar @f4grx @weekend_editor It irritates me no end that the LLM folks have tried to co-opt the general term "AI" for their particular technology.
-
@michaelgemar @cstross @interstar @f4grx @weekend_editor LLMs are part of AI, but not the whole field. It's as if you said "sports" but meant only tennis.