Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • @vrighter@discuss.tchncs.de
    3
    2 years ago

    The models are also getting larger (and requiring even more insane amounts of resources to train) far faster than they are getting better.

    • @stergro@feddit.de
      3
      2 years ago

      But bigger models have new “emergent” capabilities. I’ve heard that above a certain size they start to know what they know and hallucinate less.

  • @Taringano@lemm.ee
    1
    2 years ago

    People make a big deal out of this, but they forget that humans make shit up all the time.

  • @joelthelion@lemmy.world
    0
    2 years ago

    I don’t understand why they don’t use a second model to detect falsehoods, instead of trying to fix it in the original LLM. (A rough sketch of the idea is at the end of this thread.)

    • @dirkgentle@lemmy.ca
      1
      2 years ago

      If it were easy to detect, it wouldn’t happen in the first place. So far not even OpenAI themselves have succeeded in building an AI detector.
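
The “second model to detect falsehoods” idea from @joelthelion@lemmy.world can be sketched as a two-pass pipeline: one model drafts an answer, and a second model is prompted only to judge whether the draft’s claims hold up. This is purely an illustration of the architecture being discussed, not anything OpenAI actually ships; `call_llm`, the prompts, and the canned replies below are hypothetical stand-ins you would swap for a real model API.

```python
# Hypothetical two-model pipeline: a "drafting" model answers, a "critic"
# model checks the draft. call_llm() is a placeholder that returns canned
# text so the sketch runs on its own; replace it with a real model call.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call (hypothetical)."""
    if prompt.startswith("Verify"):
        return "UNSUPPORTED: the cited paper does not appear to exist."
    return "The answer is 42, as shown in Smith et al. (2019)."


def answer_with_verification(question: str) -> dict:
    # Pass 1: the first model drafts an answer.
    draft = call_llm(f"Answer the question: {question}")

    # Pass 2: the second model sees only the question and the draft, and is
    # asked to flag any claim it cannot support.
    verdict = call_llm(
        "Verify the following answer and reply with 'SUPPORTED' or "
        f"'UNSUPPORTED: <reason>'.\nQuestion: {question}\nAnswer: {draft}"
    )

    return {
        "draft": draft,
        "verdict": verdict,
        "flagged": verdict.startswith("UNSUPPORTED"),
    }


if __name__ == "__main__":
    print(answer_with_verification("What is the answer to everything?"))
```

As @dirkgentle@lemmy.ca’s reply notes, the catch is that the critic is itself an LLM with the same failure mode, so in practice there is no reliable detector to call in the second pass.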