return2ozma@lemmy.world to politics@lemmy.world · 2 months ago
OpenAI wants to stop ChatGPT from validating users’ political views (arstechnica.com)
cross-posted to: technology@lemmy.world
SpikesOtherDog@ani.social · 2 months ago
The LLM will always seek the most average answer.
Sandbar_Trekker@lemmy.today · 2 months ago
Close, but not always. It will give out the answer based on the data it’s been trained on. There is also a bit of randomization with a “seed”. So, in general it will give out the most average answer, but that seed can occasionally direct it down the path of a less common answer.
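A minimal sketch of the seeded-sampling idea described above. The token names, probabilities, and the sample_next_token helper are invented for illustration; this is not OpenAI’s actual decoding code.

```python
# Toy next-token sampling: a fixed seed usually picks the most probable
# ("average") token, but some seeds land on a less common one.
import random

def sample_next_token(probs: dict[str, float], seed: int) -> str:
    """Pick one token according to its probability, using a seeded RNG."""
    rng = random.Random(seed)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution: the "average" answer dominates but is not certain.
next_token_probs = {"average_answer": 0.7, "uncommon_answer_a": 0.15, "uncommon_answer_b": 0.15}

for seed in range(10):
    print(seed, sample_next_token(next_token_probs, seed))
# Most seeds print "average_answer"; a few pick one of the rarer tokens.
```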
SpikesOtherDog@ani.social · 2 months ago
Fair. I tell a lot of lies for children. It helps when talking to end users.