• @funkless_eck@sh.itjust.works
11
1 year ago

      “ooh it’s more advanced but don’t worry- it’s not conscious”

      is as much a marketing tactic as “how it feels to chew 5 gum” or buzzfeedesque “top 10 celebrity mistakes - number 3 will blow your mind”

      it’s a tech product that runs a series of complicated loops against a large series of texts and returns the closest comparison, as it stands it’s never going to be dangerous in and of itself.

      • @Thorny_Insight@lemm.ee
2
1 year ago

Generative AI and LLMs are not what people mean when they’re talking about the dangers of AI. What we worry about doesn’t exist yet.

        • @funkless_eck@sh.itjust.works
2
1 year ago

I don’t think AI sentience as a danger is going to be an issue in our lifetimes - this January marks 103 years since the first well-known story featuring this trope (Karel Čapek’s Rossumovi univerzální roboti)

We are a long way off from being able to replicate the perception, action, and unified agency of even basic organisms right now.

          Therefore all claims about the “dangers” of AI are only dangers of humans using the tool (akin to the dangers of driving a car vs the dangers of cars attacking their owners without human interaction) and thus are just marketing hyperbole

          in my opinion of course

          • @Thorny_Insight@lemm.ee
1
1 year ago

Well yeah, perhaps, but isn’t that kind of like knowing that an asteroid is heading towards Earth and feeling no urgency about it? There’s a non-zero chance that we’ll create AGI within the next couple of years. The chances may be low, but the consequences have the potential to literally end humanity - or worse.

        • @hikaru755@feddit.de
2
1 year ago

          I mean… It might be. Just depends on how much potential there still is to get models up to higher reasoning capabilities, and I don’t think anyone really knows that yet

          • @Thorny_Insight@lemm.ee
3
1 year ago

Yeah, maybe. I just personally don’t think LLMs are actually intelligent. They’re capable of faking intelligence, but at the same time they make errors that perfectly indicate they’re basically just bluffing. I’d be more worried about an AI that knows fewer things but demonstrates a higher capacity for logic and reasoning.

    • @Thorny_Insight@lemm.ee
      7
      edit-2
      1 year ago

AI can be dangerous. The point is not that it’s likely, but that in the very unlikely event of it going rogue, it could at worst have civilization-ending consequences.

Imagine how easy it is, as an adult, to trick a child. The difference in intelligence between a human and a superintelligent AGI would be orders of magnitude greater than that.

      • @conciselyverbose@sh.itjust.works
2
1 year ago

        An actual AI (that modern tools don’t even vaguely resemble) could maybe theoretically be dangerous.

        An LLM cannot be dangerous. There’s no path to anything resembling intelligence or agency.

    • xcjs
      4
      edit-2
      1 year ago

I was reflecting on this myself the other day. For all my criticisms of Zuckerberg/Meta (which are very valid), they really didn’t have to release anything concerning LLaMA. They’re practically the only reason we have viable open-source weights/models and an engine.

  • @BetaDoggo_@lemmy.world
1
1 year ago

The 8B is incredible for its size, and they’ve managed to do sane refusal training this time for the official instruct.