The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. We’re looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

“Whoops, it’s done now, oh well, guess we’ll have to do it later”

Go fucking directly to jail

  • @gerikson · 12 points · 9 months ago

    The HN crowd are very excited to have a model that is not “woke”:

    https://news.ycombinator.com/item?id=37714703

    What none of these idiots realize is the reason most big LLM vendors carefully filter what their models output is not because they’re namby-pamby liberals intent on throttling free speech, it’s because headlines like “ChatGPT teaches kids how to make meth with the help of Adolf Hitler” are a fucking nightmare for a business to deal with.

    • @froztbyteOP · 8 points · 9 months ago

      ayup

      and, infuriatingly, that’s what makes this mistral play “good” - it gives them free distance, free protection from causal culpability.

      research and solutions exist for ensuring poison pills or traceability or so… and I’d bet it’s more likely than not that they used none of that.

      there are so many gating points where they could’ve gone “hmm, wait”, and they just … didn’t. I am not inclined to believe any of this was done in good faith (whether towards their stated goals or towards societally good outcomes)

      (and, given the circles and actions, it probably wasn’t really either of those two as target goals anyway)

    • @froztbyteOP · 7 points · edited · 9 months ago

      Ah shit I missed your reply earlier, muh bad

      Edit: holy shit at when both the other comment and this went through. Yay for bad packets.

  • @bitofhope · 10 points · 9 months ago

    This highlights an inherent issue in trying to create ostensibly informative tools based on input data scraped indiscriminately from all over the internet. Mistral simply doesn’t even pretend to paper over it while the rest go

    The instruction “Do not act like Slobodan Milošević” in my AI’s initial prompt has people asking a lot of questions already answered by my AI’s initial prompt.

    Unrelated, I would call the opposite of a promptfan a “prompt critical” but unfortunately it reminds me of TERFs.

  • @swlabr · 7 points · 9 months ago

    Good article. If nothing else, TIL from it that there is an “effective accelerationist” community and that we are all decels. A priori I’m guessing they’re all just NRXers cosplaying as pro “acceleration”.

    • @selfA · 6 points · 9 months ago

      it explains why all the least coherent folks on Twitter have /acc or similar in their names

      • Charlie Stross · 7 points · 9 months ago

        @self @techtakes To neoreactionaries, accelerationism offers an attractive stalking-horse for their forward-to-the-past politics. Feudalism shall rise once more in spaaaaace! And the beta cucks will be put in their place alongside the wimmins and other chattels, or something, I guess. (Ack, spit.)

    • David GerardMA · 6 points · 9 months ago

      that is literally what e/acc is - bad Nick Land ideas done by kids not even as bright as Land. So dumb it has a Know Your Meme.

  • @ABoxOfNeurons@lemmy.one · -3 points · 9 months ago

    It’s a 7b model. There are plenty of other larger open source models out already. I fail to see the issue.

    • @selfA · 10 points · 9 months ago

      did you consider reading the linked article before coming here to post about your failure?

      • @ABoxOfNeurons@lemmy.one · -2 points · 9 months ago

        I did. I’m not convinced the author knows the space very well, though. There are larger models out there with similarly absent safety features. This isn’t a remarkable release, and the tone reads as ragebait.

        Guardrails are a term of art for something like Nemo, which is more like the unreal ramen shop demo or a corporate chatbot. Most raw open models I’ve tried will tell you how to make meth if you ask them.

        • @bitofhope · 11 points · 9 months ago

          Look, I’ll just spell this out for you.

          The size of the model is not in the least bit the point of contention here. Whether this is the largest language model ever created or a tiny and unimpressive one is not why the article was written or linked here.

          The reason the article has an indignant tone, as do we, is that a company is proudly flaunting that they’re not even trying to deal with the harmful potential of the ethically dubious or straight-up awful shit their supposedly informational product can produce.

          They also have a worryingly excited audience praising them for releasing a model whose main selling point is not even its technical sophistication (as you are keen to point out) but the fact it can be used to answer questions like how to kill one’s spouse or why ethnic cleansing is good.

        • @froztbyteOP · 7 points · 9 months ago

          ah, evidence that one needs more than a single box of neurons to

          1. realize that this isn’t Model-Quality Debate Club
          2. hear that strange whooshing sound

          a handy result!