• @plasticcheese@lemmy.one · 6 points · 1 month ago

    The more I use various LLMs, the more I’ve come to realise that they have a tendency to confidently lie. More often than not, an LLM will give me the answer it thinks I want to hear, even if the details of that answer are factually incorrect.

    Using these tools to decide and affect real people’s lives is a very dangerous prospect.

    Interesting article. Thanks

  • @SubArcticTundra@lemmy.ml · 5 points · 1 month ago

    Thank you for this much-needed reality check. I don’t understand why the Government are doing venture capital’s bidding.

    • David Gerard (OP) · 6 points · 1 month ago

      Labour are all about the billionaire donors these days.

  • @aaron@infosec.pub · -9 points · 1 month ago

    Presumably ‘AI’ can make simple rules-based decisions, if done properly (unfortunately, being the UK government, this is a big ‘if’).

    But what exactly is sacking a million people supposed to do to the economy?

    • @froztbyte · 12 points · 1 month ago

      Presumably ‘AI’ can make simple rules-based decisions, if done properly

      honest question: was this meant seriously, or in jest?

      • @aaron@infosec.pub · -8 points · 1 month ago

        Serious.

        1. Fill in form online
        2. AI analyses it, decides if applicant is entitled to benefits.

        Why do you ask the question?

        • @self · 12 points · 1 month ago

          why do you think hallucinating autocomplete can make rules-based decisions reliably

          AI analyses it, decides if applicant is entitled to benefits.

          why do you think this is simple

            • @self · 13 points · 1 month ago

              good, use your excel spreadsheet and not a tool that fucking sucks at it

        • jlow (he/him) · 10 points · 1 month ago

          You should not need an AI to do that if it’s not a freeform text input?
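
          For a fixed form it’s just a deterministic check. A minimal sketch in Python, with field names and thresholds entirely made up rather than taken from any real benefit rules:

              # Hypothetical example: plain rules, no model. The same input
              # always gives the same, auditable answer.
              from dataclasses import dataclass

              @dataclass
              class Application:
                  weekly_income_gbp: float
                  savings_gbp: float
                  hours_worked_per_week: int

              def entitled_to_benefit(app: Application) -> bool:
                  """True only if every (made-up) hard rule is met."""
                  return (
                      app.weekly_income_gbp < 200.0
                      and app.savings_gbp < 16_000.0
                      and app.hours_worked_per_week < 16
                  )

              print(entitled_to_benefit(Application(150.0, 3_000.0, 10)))  # True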

          • @aaron@infosec.pub · -6 points · 1 month ago

            Who knows what the British government actually means when they say ‘AI’? I doubt they do.

            And I assume there is a sliding scale from what I described, which my excel spreadsheet is capable of today, to whatever it is they hope AI will eventually interpret. And in fact, barring the inevitable fuckups, AI probably can eventually handle a lot of the interpretation currently carried out by human civil servants.

            But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.

            My main point is that they claim they will save one million jobs (so sack one million people) and this will somehow boost the UK economy. I don’t see how it can.

            Covid really showed that most jobs are not essential. I would suggest most jobs exist because the people exist to do them: a situation an increasingly small number of people have got themselves into a position to profit from.

            What happens when AI takes away swathes of bullshit jobs but the people still exist? Silicon Valley is hoping to make AI functional enough to essentially leave somebody else holding the bag, after emptying it of goodies. For some reason, the UK government seem to think this will be a boost to the UK economy: I’ll assume basic cluelessness until I see some other reasonable suggestion.

            • @self · 13 points · 1 month ago

              And in fact, barring the inevitable fuckups, AI probably can eventually handle a lot of the interpretation currently carried out by human civil servants.

              But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.

              you keep making claims about what LLMs are capable of that don’t match with any known reality outside of OpenAI and friends’ marketing, dodging anyone who asks you to explain, and acting like a bit of a shit about it. I don’t think we need your posts.

              • David Gerard (OP) · 8 points · 1 month ago

                the post history is very infosec dot pub

                • @self · 10 points · 1 month ago

                  a terrible place for both information and security

        • @froztbyte · 6 points · 1 month ago

          citation/link/reference, please

        • Tar_Alcaran · 6 points · 1 month ago

          “AI” in the context of the article is “LLMs”. So, the definition of not trustworthy.