• @linearchaos@lemmy.world
    -6 points · 25 days ago

    Yeah there’s already a lot of this in play.

    You run the same query multiple times through multiple models and do a web search looking for conflicting data.
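    A minimal sketch of that idea, assuming a hypothetical `ask(model, prompt)` helper standing in for whatever client each model actually needs:

    ```python
    from collections import Counter

    def ask(model: str, prompt: str) -> str:
        """Hypothetical stand-in for a real model-client call."""
        raise NotImplementedError

    def cross_check(prompt: str, models: list[str]) -> tuple[str, bool]:
        """Run the same query through several models and flag disagreement."""
        answers = [ask(m, prompt) for m in models]
        counts = Counter(a.strip().lower() for a in answers)
        top, n = counts.most_common(1)[0]
        # Unanimity suggests the answer isn't a one-off hallucination.
        return top, n == len(answers)
    ```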

    I’ve had copilot answer a query, then erase the output and tell me it couldn’t answer it after about 5 seconds.

    I’ve also seen responses contradict themselves, with later paragraphs saying there are other points of view.

    It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.
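    Concretely, the shape I mean is something like this, with `summarize()` and `sounds_negative()` as hypothetical model calls:

    ```python
    def summarize(text: str) -> str:
        """Hypothetical: ask the model for a short summary of its own draft."""
        raise NotImplementedError

    def sounds_negative(summary: str, subject: str) -> bool:
        """Hypothetical classifier: does the summary paint `subject` badly?"""
        raise NotImplementedError

    def filter_output(draft: str, subject: str) -> str | None:
        # Summarize the draft first, then dump it if the summary is unflattering.
        return None if sounds_negative(summarize(draft), subject) else draft
    ```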

    • @self
      11 points · 25 days ago

      It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.

      “it can’t be that stupid, you must be prompting it wrong”

    • @froztbyte
      8 points · 25 days ago

      It would be a simple matter to have it summarize the output it’s about to give you and dump the output if it paints the subject in a negative light.

      lol. like that’s a fix

      (Hindenburg, hitler, great depression, ronald reagan, stalin, modi, putin, decades of north korea life, …)

      • @blakestacey
        7 points · 24 days ago

        Hindenburg, hitler, great depression, ronald reagan, stalin, modi, putin, decades of north korea life, …

        🎶 we didn’t start the fire 🎶

    • @bitofhope
      6 points · 25 days ago

      Exactly, and all of this is a simple matter of having multiple models trained on different instances of the entire public internet and determining whether their outputs contradict each other or a web search.

      I wonder how they prevented search engine results from contradicting data found through web search before LLMs became a thing?

      • @linearchaos@lemmy.world
        -5 points · 25 days ago

        They didn’t really have to before LLMs. Search engine results, in their heyday, were backlink-driven. You could absolutely search for disinformation and find it. But if you searched for a credible article on someone, chances are more people would have linked to the good article than to the disinformation. However, conspiracy theories often leaked through into search results, and in that case the engine just gave you the web pages and you had to decide for yourself.
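        (A toy sketch of the backlink idea; the link graph here is invented for illustration:)

        ```python
        # Pages with more inbound links rank higher -- invented data.
        inbound = {
            "credible-article": ["news-site", "blog-a", "blog-b", "wiki"],
            "disinfo-page": ["forum-thread"],
        }

        def rank(pages: dict[str, list[str]]) -> list[str]:
            return sorted(pages, key=lambda p: len(pages[p]), reverse=True)

        print(rank(inbound))  # ['credible-article', 'disinfo-page']
        ```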

        • @bitofhope
          10 points · 24 days ago

          They didn’t really have to before LLMs.

          No shit. Maybe they should just get rid of the extra bullshit generator and serve the sources instead of piling more LLM on the problem that only exists because of it.

        • @froztbyte
          8 points · 25 days ago

          this naive revisionist shit still standing in ignorance of easily 15y+ of SEO-fuckery (first for influence, and then for spam) is hilarious