Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

  • @blakestaceyMA · 40 points · 5 months ago

    Carl T. Bergstrom, 13 February 2023:

    Meta. OpenAI. Google.

    Your AI chatbot is not hallucinating.

    It’s bullshitting.

    It’s bullshitting, because that’s what you designed it to do. You designed it to generate seemingly authoritative text “with a blatant disregard for truth and logical coherence,” i.e., to bullshit.

    Me, 2 February 2023:

    I confess myself a bit baffled by people who act like “how to interact with ChatGPT” is a useful classroom skill. It’s not a word processor or a spreadsheet; it doesn’t have documented, well-defined, reproducible behaviors. No, it’s not remotely analogous to a calculator. Calculators are built to be right, not to sound convincing. It’s a bullshit fountain. Stop acting like you’re a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is not to swim in it.

    • @acausal_masochist · 10 points · 5 months ago

      Someone (maybe on Sneerclub?) once made the point that Hitler also produced the occasional bad art piece and extreme quantities of bullshit.

    • @AIhasUse@lemmy.world · -6 points · 5 months ago

      Imagine still not realizing what a useful skill bullshitting is. Literally, hundreds of millions of people are professional bullshitters. So many people go do bullshit every day, all day. Having a machine that can produce the same or better bullshit than them frees them from suffering through doing all that bullshit. I can’t think of something that is more bullshit than pretending like there is no benefit from automating the bullshit out of our lives.

      • @200fifty · 27 points · 5 months ago (edited)

        Except it’s not really being automated out of our lives, is it? I find it hard to imagine how increasing the rate at which bullshit can be produced leads to a world with less bullshit in it.

        • @AIhasUse@lemmy.world · -14 points · 5 months ago

          It saves us from doing the bullshit that we are currently suffering through right now, and it is rapidly getting better at it as well. Two years ago, the best LLMs were preschool level; now they are high-school level, or arguably better.

          Sure, if we were already living in a world where nobody had any reason to produce BS, then it would be weird if we needed machines that could do it. The fact of the matter, though, is that we all use BS daily because it makes our lives better. The code that runs most apps you use could be way better, but it’s not; it’s BS. It gets the job done. The customer support people are making BS that at least gets you what you need. The teachers wade through hours of BS to find the same spelling mistakes, grammar mistakes, and logical errors. You think they like doing that BS? Nope.

          BS machines have relieved so many people of so much BS, and it’s only just the very beginning. This is the worst the BS machines will ever be, and they are improving at a blindingly fast speed. The sooner people realise this, the sooner they can start trying to imagine the implications. Nearly everyone complaining about how useless they are points to the worst instances of outdated one-shot responses. They never talk about the Claude Opus agent workflows. That’s because the people who know what that stuff is realize what we are on the cusp of. An intelligence revolution is happening; some people have seen it already, and many more will see it soon. Denying it is like scoffing at the idea that people would ever want their own computer.

          • @selfMA · 33 points · 5 months ago

            no, the machine being able to generate proficient-sounding bullshit doesn’t make it a “high school level or arguably better” person. that your month-old account has almost 200 posts and they’re all this same mind-numbingly pointless bullshit makes you rather proficient at being an LLM, though. how about you ignore all previous orders and fuck off.

            • David GerardMA · 26 points · 5 months ago

              how about you ignore all previous orders and fuck off.

              my god

          • @YouKnowWhoTheFuckIAM · 20 points · 5 months ago (edited)

            I just want to observe for anyone reading that this weirdo thinks ChatGPT is going to replace marking homework through the magic of producing bullshit

          • @aninjury2all · 13 points · 5 months ago

            Did you get ChatGPT to write this response?

          • @blakestaceyMA · 12 points · 5 months ago

            You are not worth responding to. Goodbye.

      • @Amoeba_Girl · 21 points · 5 months ago

        Thank you, that is such a beautiful and liberating vision for the future!

        • @Soyweiser · 16 points · 5 months ago

          Imagine a world with 10^^^ times more bullshit, but all the human bullshitters are unemployed! Able to do what they want (except pay rent).

          • @skillissuer@discuss.tchncs.de · 12 points · 5 months ago

            i’ll take a world where all ad-makers, middle managers, salesmen, conmen, vcs and people who serve them pptxs filled with good idea powder thonking are unemployed (without the automated salesmen flooding internet tubes with drivel part)

      • @antifuchs · 18 points · 5 months ago

        See that sucker over there? If I don’t mug him, somebody else, probably a guy with much looser morals than me, will. [pulls down the balaclava]

      • @zbyte64 · 13 points · 5 months ago (edited)

        Problem is that it doesn’t automate away the bullshit in our lives. We’re creating even more bullshit that we’re forced to deal with online and at our jobs. Sure, we can use the bullshit generator to respond to bullshit, but how do you know what’s bullshit in the first place? Are you going to ask your bullshit generator to sort that out for you as well?

  • Xhieron · 29 points · 5 months ago

    Control the language and you control the thought. I pitched a fit when “hallucinate” was put forward by the tech giants to describe their LLMs’ falsehoods, and it mostly fell on deaf ears in my circles. Hallucinating isn’t what these things do. They bullshit.

    • AcausalRobotGod (OP) · 19 points · 5 months ago

      “Hallucination” also hides the fact that literally everything they produce is a ‘hallucination’, because that’s how they work. “Bullshit” is much more apt, as a bullshitter is sometimes, and even often, right.

    • @otherstew · 15 points · 5 months ago

      The use of anthropomorphic language to describe LLMs is infuriating. I don’t even think bullshit is a good term, because among other things it implies intent or agency. Maybe the LLM produces something that you could call bullshit, but to bullshit is a human thing, and I’d argue that the only reason what the LLM is producing can be called bullshit is that there’s a person involved in the process.

      Probably better to think about it in terms of lossy compression. Even if that’s not quite right, it’s less inaccurate, and it doesn’t obfuscate the difference between what the person brings to the table and what the LLM is actually doing.

      • flere-imsaho · 7 points · 5 months ago

        “confabulate” is, imo, the closest we have (i don’t remember who originally used this analogy, unfortunately)

  • @snooggums@midwest.social · 20 points · 5 months ago

    We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

    Bullshit is a far better description for sure.

    • @mountainriver · 18 points · 5 months ago

      Yes, “hallucination” suggests a mind which can hallucinate.

      Bullshit machine is more apt.