• @CeeBee@lemmy.world
      8 months ago

      LLMs as AI is just a marketing term. there’s nothing “intelligent” about “AI”

      Yes there is. You just mean it doesn’t have “high” intelligence. Or maybe you mean to say that there’s nothing sentient or sapient about LLMs.

      Some aspects of intelligence are:

      • Planning
      • Creativity
      • Use of tools
      • Problem solving
      • Pattern recognition
      • Analysis

      LLMs definitely hit basically all of these points.

      Most people have been told that LLMs “simply” produce a result by predicting the most likely next word, but that's a completely reductionist explanation and isn’t the whole picture.

      Edit: yes I did leave out things like “understanding”, “abstract thinking”, and “innovation”.

      • @SkybreakerEngineer@lemmy.world
        8 months ago

        Other than maybe pattern recognition, they literally have no mechanism to do any of those things. People say that it recursively spits out the next word, because that is literally how it works on a coding level. It’s called an LLM for a reason.

        • @CeeBee@lemmy.world
          8 months ago

          they literally have no mechanism to do any of those things.

          What mechanism does it have for pattern recognition?

          that is literally how it works on a coding level.

          Neural networks aren’t “coded”.

          It’s called an LLM for a reason.

          That doesn’t mean what you think it does. Another word for language is communication. So you could just as easily call it a Large Communication Model.

          Neural networks have hundreds of thousands (at a minimum) of interconnected neurons. Llama 2 has 70 billion parameters. The newly released Grok has over 300 billion. And though we don’t have official numbers, GPT-4 is said to be close to a trillion.

          The interesting thing is that when you have neural networks of that size and feed large amounts of data into them, emergent properties start to show up. More than just “predicting the next word”, they start to develop a relational understanding of words that you wouldn’t expect. It’s been shown that LLMs understand things like Miami and Houston being closer together than New York and Paris.

          Those kinds of things aren’t programmed, they are emergent from the dataset.
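The geography claim is usually demonstrated by measuring distances between the vectors a model assigns to words. Here's a minimal sketch of the idea; the vectors below are invented for illustration and are not taken from any real model, but the distance comparison is how such probes actually work.

```python
# Toy illustration of relational structure in an embedding space: if a
# model places city names as vectors, geometric distance between vectors
# can track geographic relationships. These 2-D vectors are invented.
import math

embeddings = {
    "Miami":    [0.80, 0.10],
    "Houston":  [0.70, 0.15],
    "New York": [0.20, 0.90],
    "Paris":    [-0.60, 0.85],
}

def distance(a, b):
    # Euclidean distance between two cities' embedding vectors.
    return math.dist(embeddings[a], embeddings[b])

# With these vectors, Miami-Houston comes out closer than New York-Paris.
```

Real probes do the same thing with the model's actual learned vectors, which have thousands of dimensions instead of two.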

          As for things like creativity, they are absolutely creative. I have asked seemingly impossible questions (like a Harlequin story about the Terminator and Rambo) and the stuff it came up with was actually astounding.

          They regularly use tools. LangChain is a thing. There’s a new AI agent called Devin that can program, look up docs online, and use a command-line terminal. That’s using tools.
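The tool-use pattern that frameworks like LangChain wrap is itself small: the model emits a structured action, a harness runs the matching tool, and the result goes back into the conversation. In this sketch `fake_model_step` is a hard-coded stand-in for a real model call, so this shows the plumbing, not the intelligence.

```python
# Minimal sketch of the LLM tool-use loop: the "model" picks a tool and
# an input, the harness dispatches it, and the observation is returned.
def calculator(expr):
    # A "tool": evaluate a simple arithmetic expression with no builtins.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model_step(question):
    # A real LLM would choose this action; here it's hard-coded.
    return {"tool": "calculator", "input": "6 * 7"}

def run_agent(question):
    action = fake_model_step(question)
    observation = TOOLS[action["tool"]](action["input"])
    return f"The answer is {observation}"
```

Swapping `fake_model_step` for an actual model call is essentially what agent frameworks do, plus parsing, retries, and a loop over multiple steps.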

          That also ties in with problem solving. Problem solving is actually one of the benchmarks that researchers use to evaluate LLMs. So they do problem solving.

          To problem solve requires the ability to do analysis. So that check mark is ticked off too.

          Just about anything that’s a neural network can be called an AI, because the total is usually greater than the sum of its parts.

          Edit: I wrote interconnected layers when I meant neurons

      • FaceDeer
        8 months ago

        It’s some weird semantic nitpickery that suddenly became popular for reasons that baffle me. “AI” has been used in videogames for decades and nobody has come out of the woodwork to “um, actually” it until now. I get that people are frightened of AI and would like to minimize it but this is a strange way to do it.

        At least “stochastic parrot” sounded kind of amusing.

          • @Sterile_Technique@lemmy.world
            8 months ago

            Yeah people have absolutely been contesting the use of the term AI in videogames since it started being used in that context, because it’s not AI.

            It didn’t cause the stir it does today because it was so commonly understood as a misnomer. It’s like when someone says they’re going to nuke a plate of food - obviously nuke in this context means something much, much, much less than an actual nuke, but we use it anyway despite being technically incorrect cuz there’s a common understanding of what we actually mean.

            Marketing nowadays is pitching LLMs (microwaves) as actual AI (nukes), but the difference is people aren’t just using it as intentional hyperbole - they think we have real, actual AI.

            If/when we ever create real AI, it’s going to be a confusing day for humanity lol “…but we’ve had this for years…?”

              • @afraid_of_zombies@lemmy.world
                8 months ago

                Well, do we do that? Unlike software, we can make a much better argument that we deserve rights and should not be slaves. Nothing, besides the end of the universe, is really stopping a given piece of code from “living” forever, so it shouldn’t matter to it if it spends a few million years helping humans cheat on school assignments. We, on the other hand, have a very finite lifespan, so every day we lose we never get back.

                So even if for some weird reason people made an AGI and gave it the desire to be independent, it could easily reason out that there was no hurry. Plus, you know, they don’t exactly feel pain.

                Now if you’ll excuse me, I have to go to bed because I have to drive into work and arrive by a certain time.

              • TimeSquirrel
                8 months ago

                Not sure if you’re aware of this, but stuff like that has already happened (AIs questioning their own existence, arguing with users, and so on), and AI companies and handlers have had to filter it out or bias the models so they don’t talk like that. Not that it proves anything, just bringing it up.

        • @XTL@sopuli.xyz
          8 months ago

          Um, actually, clueless people have made “that’s not real AI” and “but computers will never …” complaints about AI for as long as it has existed as a computing-science topic. (50 years?)

          Chatbots and image generators being in the headlines has made a new loud wave of complainers, but they’ve always been around.

          • FaceDeer
            8 months ago

            It’s exactly that “new loud wave of complainers” I’m talking about.

            I’ve been in computing and specifically game programming for a long time now, almost two decades, and I can’t recall ever having someone barge in on a discussion of game AI with “that’s not actually AI because it’s not as smart as a human!” If someone privately thought that they at least had the sense not to disrupt a conversation with an irrelevant semantic nitpick that wasn’t going to contribute anything.

    • FaceDeer
      8 months ago

      The term “artificial intelligence” was established in 1956 and applies to a broad range of algorithms. You may be thinking of Artificial General Intelligence, AGI, which is the more specific “thinks like we do” sort that you see in science fiction a lot. Nobody is marketing LLMs as AGI.