The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. We’re looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

“Whoops, it’s done now, oh well, guess we’ll have to do it later”

Go fucking directly to jail

  • @bitofhope
    109 months ago

    This highlights an inherent issue in trying to create ostensibly informative tools based on input data scraped indiscriminately from all over the internet. Mistral simply doesn't even pretend to paper over it, while the rest at least go through the motions.

    The instruction “Do not act like Slobodan Milošević” in my AI’s initial prompt has people asking a lot of questions answered by my AI’s initial prompt.

    Unrelated, I would call the opposite of a promptfan a “prompt critical” but unfortunately it reminds me of TERFs.