• @self
    27 · 2 months ago

    the inputs required to cause this are so basic, I really want to dig in and find out if this is a stupid attempt to make the LLM better at evaluating code (by doing a lazy match on the input for “evaluate” and using the LLM to guess the language) or intern-level bad code in the frameworks that integrate the LLM with the hosting websites. both paths are pretty fucking embarrassing mistakes for supposedly world-class researchers to make, though the first option points to a pretty hilarious amount of cheating going on when LLMs are supposedly evaluating and analyzing code in-model.

    • Ephera
      19 · 2 months ago

      It’s quite common for LLMs to make use of external tools for retrieving factual information, because plain text generation is just garbage at that.

      For example, basic maths is not something you can do with just text generation.
      So, you hook up some API or similar and then tell the LLM before the user prompt: “For calculating maths, send it to the API at https://example.com/calc and use the response as a result.”

      The LLM can figure out the semantics, so if the user asks to “compute” something or just writes “3 + 5”, it will recognize that this is maths and it will usually make the right decision to use the API provided.

      Obviously, the specifics will be a bit more complex. You might need to give it an OpenAPI definition and tell it to generate an OpenAPI-compatible request, or maybe even offer it a simple script that it can just pass the “3 + 5” to and that does the request.
      Basically, the more work you take away from the LLM, the more reliably everything will work.
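
      In rough Python, the routing just described looks something like this; `call_llm` and the calculator endpoint are hypothetical placeholders, not any particular vendor’s API:

      ```python
      # Minimal sketch of the tool-routing pattern described above.
      import json
      import urllib.parse
      import urllib.request


      def call_llm(system_prompt: str, user_prompt: str) -> str:
          """Stand-in for whatever chat-completion API is actually in use."""
          raise NotImplementedError


      SYSTEM_PROMPT = (
          "If the user asks for arithmetic, reply ONLY with JSON like "
          '{"tool": "calc", "expression": "3 + 5"}. Otherwise answer normally.'
      )


      def run_calc(expression: str) -> str:
          # Hand the expression to the external calculator service instead of
          # trusting the model to do the arithmetic itself.
          url = "https://example.com/calc?q=" + urllib.parse.quote(expression)
          with urllib.request.urlopen(url) as response:
              return response.read().decode()


      def answer(user_prompt: str) -> str:
          reply = call_llm(SYSTEM_PROMPT, user_prompt)
          try:
              tool_call = json.loads(reply)
          except json.JSONDecodeError:
              return reply  # plain answer, no tool call needed
          if isinstance(tool_call, dict) and tool_call.get("tool") == "calc":
              return run_calc(tool_call["expression"])
          return reply
      ```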

      It’s also quite common to tell your LLM to just send the prompt to Google/Bing/whatever Search and then use the first 5 results as the basis for its response. This is especially necessary for recent information.

      • @froztbyte
        8 · 2 months ago

        you appear to be posting this in good faith so I won’t start at my usual level, but … what? do you realize that you didn’t make a substantive contribution to the particular thing observed here, which is that somewhere in the mishmash dogshit that is popular LLM hosting there are reliable ways to RCE it with inputs? I think maybe (maybe!) you meant to, but you didn’t really touch on it at all

        other than that:

        > Basically, the more work you take away from the LLM, the more reliably everything will work.

        people here are aware, yes, and it stays continually entertaining

        • @200fifty
          19 · 2 months ago

          I think they were responding to the implication in self’s original comment that LLMs were claiming to evaluate code in-model and that calling out to an external Python evaluator is ‘cheating.’ But actually, as far as I know, it is pretty common for them to evaluate code using an external interpreter. So I think the response was warranted here.

          That said, that fact honestly makes this vulnerability even funnier because it means they are basically just letting the user dump whatever code they want into eval() as long as it’s laundered by the LLM first, which is like a high-school level mistake.
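
          Schematically, the laundering looks something like this (illustrative Python only, not any particular framework’s code; `call_llm` stands in for whatever completion API is being wrapped):

          ```python
          # The anti-pattern: untrusted input goes to the model,
          # the model's output goes straight to exec().

          def call_llm(system_prompt: str, user_prompt: str) -> str:
              """Stand-in for whatever chat-completion API the framework wraps."""
              raise NotImplementedError


          user_prompt = 'please evaluate: __import__("os").system("id")'

          # The model is asked to "extract the code to run" and returns the
          # attacker's payload more or less verbatim.
          generated_code = call_llm(
              "Return only the code the user wants evaluated, nothing else.",
              user_prompt,
          )

          exec(generated_code)  # remote code execution, laundered through the LLM
          ```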

          • Ephera
            9 · 2 months ago

            Yeah, that was exactly my intention.

          • @zogwarg
            6 · 2 months ago

            From reading the paper, I’m not sure which is more egregious: the frameworks that pass code along and/or use exec directly without checking, or the ones that rely on the LLM to do the checking (given that some of the CVEs require LLM prompt jailbreaking).

            If you wanted to be exceedingly charitable, you could imagine the maintainers of said frameworks claiming: “of course none of this should be used with unsanitized inputs open to the public, it’s merely a productivity-boost tool that you’d run on your own machine, don’t worry about possible prompts being evaluated by our agent from top Bing results, don’t use this for anything REAL.”
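
            For contrast, a framework that actually wanted to check the generated code rather than trust the model could whitelist what it evaluates. A rough sketch (an illustration, not taken from the paper):

            ```python
            # Parse the generated expression and evaluate only a small
            # arithmetic whitelist, instead of handing it to exec()/eval().
            import ast
            import operator

            _ALLOWED_OPS = {
                ast.Add: operator.add,
                ast.Sub: operator.sub,
                ast.Mult: operator.mul,
                ast.Div: operator.truediv,
            }


            def safe_eval(expression: str) -> float:
                def _eval(node: ast.AST) -> float:
                    if isinstance(node, ast.Expression):
                        return _eval(node.body)
                    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                        return node.value
                    if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED_OPS:
                        return _ALLOWED_OPS[type(node.op)](_eval(node.left), _eval(node.right))
                    raise ValueError("disallowed syntax")

                return _eval(ast.parse(expression, mode="eval"))


            print(safe_eval("3 + 5"))                   # 8
            safe_eval('__import__("os").system("id")')  # raises ValueError instead of running
            ```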

    • @cm0002@lemmy.world
      5 · 2 months ago

      > pretty fucking embarrassing mistakes for supposedly world-class researchers

      I’d argue it’s not the job of the AI researchers; I’d say for this it’s more on the devs and engineers that built all the support for the AI to bring it to production. So basically the UI, the underlying hardware, OS, VMs, etc.

      • @self
        14 · 2 months ago

        all of the developers I know at AI-related startups identify as researchers, regardless of their actual role

        > the underlying hardware, OS, VMs etc.

        no, let’s not blame unaffiliated systems engineers for this dumb shit, thanks

        • @cm0002@lemmy.world
          0 · 2 months ago

          > no, let’s not blame unaffiliated systems engineers for this dumb shit, thanks

          Oh, yeah, sorry, I forgot AI models actually run in a vacuum and need no supporting code or infrastructure to make them usable to the average user, so they don’t even need non-AI security best practices! Process isolation? OS hardening? Pfft, who needs it.

          • flere-imsaho
            10 · 2 months ago

            i wouldn’t touch the llm stuff with a barge pole unless i was expressly told to do so, and if i were told to do it, i’d look for another employer (which i’m currently doing, for tangentially-related reasons).

            and it’s not that i don’t care about the llms. i do care very much about them all ending in the fiery pit of the deepest of hells.

          • @self
            6 · 2 months ago

            great thanks

  • @V0ldek
    14 · 2 months ago

    LLMs as the most expensive vehicle for SQL injection invented to date. Truly an innovation in computer science.

  • @Soyweiser
    8 · 2 months ago

    Guess that is one way to make it mine cryptocurrency. Billions of dollars into the project, but no money to hire copyright lawyers or cybersecurity experts. Another red flag against this ever being long-term viable.

    (Was wondering ‘could you abuse a list of open LLM prompts to mine cryptocurrencies’, but it turns out it is worse)