Piero Bosio Personal Social Web Site (Fediverso)

A social forum federated with the rest of the world. Instances don't matter, people do.
interstar

@interstar@artoot.xyz
Posts: 4 · Topics: 0 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts

Recent

  • OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
    https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
    interstar

    @cstross @f4grx @michaelgemar @LordCaramac @weekend_editor

    None of this really matters.

    The question is whether human brain-body systems magically just know the truth and are therefore incapable of hallucination, or whether our mechanism for avoiding / mitigating hallucination is *retrospective* correction, i.e. we compare new information against earlier assumptions and keep adapting.

    The existence of dreams strongly suggests the latter.

    Uncategorized

  • OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
    https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
    interstar

    @michaelgemar @LordCaramac @weekend_editor @cstross

    But that's the point we were just discussing. It turns out the brain doesn't have a mechanism that "avoids" hallucinations. (If it did, we wouldn't dream.)

    All it has is some way of cross-referencing its assumptions against the world and correcting itself.

    Uncategorized

  • OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
    https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
    interstar

    @michaelgemar @LordCaramac @weekend_editor @cstross

    It does depend on the particular domain you are working in and the things you don't want it to hallucinate about. Say you are asking it to do literature reviews of scientific papers. As long as you can quickly check any assertions it makes about physics or papers against a database of accepted physics facts and papers, you've "solved" the hallucination problem in that domain (a minimal sketch of such a check appears after these posts).

    It may not be easy, but it's "straightforward".

    Uncategorized

  • OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
    https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
    interstar

    @michaelgemar @LordCaramac @weekend_editor @cstross

    When we dream, we may spend hours with only a weak causal, perceptual connection to the world, which allows our dreams to wander far from reality.

    The fact that this is so ubiquitous is good evidence that, beyond this continuous feedback connection, our brains have no special capacity to resist hallucination.

    And now that we know this, it's going to be pretty straightforward to give AI that kind of feedback connection too (see the second sketch below).

    Uncategorized
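
The domain-specific check described in the third post can be sketched in a few lines. This is a minimal illustration, not a real system: Assertion, ACCEPTED_FACTS, KNOWN_PAPERS, and the exact-match lookup are all hypothetical stand-ins for a curated database and a proper retrieval layer.

    # Hypothetical sketch: flag model assertions that cannot be matched
    # against a trusted reference corpus. Every name here is illustrative.

    from dataclasses import dataclass

    @dataclass
    class Assertion:
        claim: str          # e.g. "the speed of light in vacuum is 299792458 m/s"
        cited_source: str   # e.g. a DOI the model attributed the claim to

    # Stand-ins for a curated database of accepted facts and known papers.
    ACCEPTED_FACTS = {
        "the speed of light in vacuum is 299792458 m/s",
    }
    KNOWN_PAPERS = {
        "10.1103/physrevlett.116.061102",
    }

    def verify(a: Assertion) -> bool:
        # An assertion passes only if both the claim and its citation
        # appear in the trusted corpus.
        return (a.claim.lower() in ACCEPTED_FACTS
                and a.cited_source.lower() in KNOWN_PAPERS)

    def filter_review(assertions: list[Assertion]) -> list[Assertion]:
        # Anything unverifiable is treated as a potential hallucination
        # and dropped (or sent back to the model for regeneration).
        return [a for a in assertions if verify(a)]

In practice the exact-string lookup would be replaced by retrieval over the database (semantic search, DOI resolution, and so on); the point is only that, once the trusted corpus exists, verification in that domain reduces to a lookup.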
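The "feedback connection" from the fourth post is the same idea run in a loop: generate, cross-check against the world, and retrospectively correct. In this sketch, generate and check are stand-ins for a model and any external verifier (a fact database, a search index, a simulator, a human); none of it is a real API.

    # Hypothetical sketch of a retrospective-correction loop.

    from typing import Callable

    def answer_with_feedback(
        prompt: str,
        generate: Callable[[str], str],
        check: Callable[[str], tuple[bool, str]],  # returns (ok, feedback)
        max_rounds: int = 3,
    ) -> str:
        answer = generate(prompt)
        for _ in range(max_rounds):
            ok, feedback = check(answer)
            if ok:
                return answer
            # Retrospective correction: compare the output against the
            # world and adapt, rather than trying to never be wrong.
            answer = generate(
                f"{prompt}\n\nYour previous answer failed a check: {feedback}\n"
                "Please revise it."
            )
        return answer  # best effort after max_rounds corrections
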