- cross-posted to:
- technology@beehaw.org
- technology@lemmy.world
> The intelligence illusion seems to be based on the same mechanism as that of a psychic’s con, often called cold reading. It looks like an accidental automation of the same basic tactic.
I like this article a lot, and the model of LLMs as automaton Sylvia Brownes (rest in piss) is a good one. I don’t agree that the con is accidental, though. the billionaires who own this trash definitely know how to run a psychic con — it’s how they lied their way into being seen as visionaries. the big innovation in LLMs isn’t the fancy Markov chains, it’s in automating the set and setting necessary to prompt spiritual fervor in folks who are susceptible to it.
Not sure about the claim that people who “are or think they are intelligent” are more susceptible to the con.
It feels like something one wishes to be true for karmic/poetic reasons rather than something that actually IS true.
I think good marks for the LLM con are more often people who doubt the value of human intelligence/labour/education, and/or tech positivists, rather than true believers in their own intelligence.
as I said to Baldur, it’s called the ELIZA effect cos humans are so ridiculously keen to fool themselves. We anthropomorphise Roombas, ffs.
fuck it’s ironic that the wiki page cites Hofstadter’s example of the effect, seeing as how he’s currently falling for it himself