The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. We’re looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

“Whoops, it’s done now, oh well, guess we’ll have to do it later”

Go fucking directly to jail

  • @gerikson · 9 months ago

    The HN crowd are very excited to have a model that is not “woke”:

    https://news.ycombinator.com/item?id=37714703

    What none of these idiots realize is that the reason most big LLM vendors carefully filter what their models output is not because they’re namby-pamby liberals intent on throttling free speech; it’s because headlines like “ChatGPT teaches kids how to make meth with the help of Adolf Hitler” are a fucking nightmare for a business to deal with.

    • @froztbyteOP · 9 months ago

      ayup

      and, infuriatingly, that’s what makes this Mistral play “good” - it gives them free distance, free protection from causal culpability.

      research and solutions exist for things like poison pills or traceability… and I’d bet it’s more likely than not that they used none of that.

      there are so many gating points where they could’ve gone “hmm, wait”, and they just … didn’t. I am not inclined to believe any of this was done in good faith (whether towards their stated goals or towards societally good outcomes)

      (and, given the circles and actions, it probably wasn’t really either of those two as target goals anyway)

    • @froztbyteOP · 9 months ago (edited)

      Ah shit I missed your reply earlier, muh bad

      Edit: holy shit at when both the other comment and this one went through. Yay for bad packets.