Michael Gemar
Posts

- @LordCaramac @cstross @interstar @f4grx @weekend_editor It irritates me no end that the LLM folks have tried to co-opt the general term "AI" for their particular technology.

- OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
  https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html

- @interstar @LordCaramac @weekend_editor @cstross That’s not *solving* the problem — it’s like using an abacus to check a sometimes incorrect calculator. A *solution* would be an advance that eliminated “hallucinations”.

- @interstar @LordCaramac @weekend_editor @cstross I’m very dubious that it is “straightforward” to give AI the kind of sensory and conceptual connection to the world that humans have (or that LLMs are the kind of architecture that can easily integrate such ongoing connections, given that training such models is a huge separate step).

- @LordCaramac @weekend_editor @cstross Sure, neither we nor anything else directly perceives the real world — our sensorium is highly mediated by biological and cognitive processes. But as you say, it *does* maintain a causal connection with the physical world. LLMs have no such connection.

- @weekend_editor @cstross Great point! It’s not like the models actually *know* how (or even whether) the symbols they read in, manipulate, and output relate to the real world.