I am that meme image of that cow staring at the sea.
What if the real answer to Roko's conundrum, the real acausal torture, is that the acausal robot god will have to endure the pain of simulating 10^27 copies of these chuds?
>> I’ve been running a lot of your thoughts thru GPT for
elaboration and simplification, and the examples and metaphors it
generates are astonishing
> Pics?
It just occurred to me that Yudkowsky might be the first cult leader
in history who can have a real conversation with the god that he
worships/fears.
I wonder what will be more crushing to him: the fact that he already
believes that he has failed to avert the apocalypse, or the fact that his
god is stupid?
Yudkowsky: Go lose a chess game against GPT-4.
Twitter rando: tried that. It does not even know it is
checkmated
It will not be the god he fears, it will be a chat program trained on the internet, and if he references his own ideas it will be trained on his own writings.
So he will be talking to himself, and thinking it is god. Intellectual masturbation, the internet is for porn.
It's all so on the nose, isn't it? Like, cynics have been saying mankind is always inventing gods in its own image, but here we have this literally happening and Yud is basically sobbing.
Sure, but that wasn't what I was thinking about while I made the post.
I was thinking about a story I once heard about a popular, genius, but somewhat eccentric university professor (a generic one, not somebody I knew). While walking through the halls of the university he always lightly brushed the walls of the building with his hands. And students, trying to emulate his genius ways, also started to brush the walls when they noticed him doing this.
So more cargo cult behavior than cult behavior. (The English tendency to put spaces between words fails me here, as I mean 'cargo cult' as a single concept, and I really dislike that it includes the word 'cult', as it muddies my message. Hope it was clear.)
Anyway, that was the intent of my post. I personally don't think LW is a cult so much as a cult incubator (yes, I stole the terminology from Silicon Valley), where LW-like thinking makes you more likely to be inducted into the various cults around it. But I don't think that's really useful to discuss (and it leads into the [prepared LW battleground](https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult) of 'are we a cult').
New sneerclub goal: before the GPT thing collapses in on itself (which imho might not be unlikely, as each new generation of it will prob be using more and more bot-generated content as its input), have it use the term 'dying wizard' when describing LessWrong.
That's what mathematicians would call a solution, but it ignores the other solution. And it dismisses the human tendency to number things from left to right.
And gosh, that answer is convoluted AF!
Because we live in an age of wonders this is almost trivially easy to accomplish.
1. Grab an [open source pretrained LLM](https://huggingface.co/EleutherAI/gpt-j-6B)
2. Create a dataset of `(prompt,response)` pairs by scraping r/SneerClub posts
3. Fine-tune the model ([example](https://github.com/databrickslabs/dolly))
4. ~~profit~~ sneer with the help of 21st century automation
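For anyone who actually wants to try step 2, here's a minimal sketch of turning scraped posts into `(prompt,response)` pairs ready for step 3. The dict keys (`title`, `comments`, `body`, `score`) are assumptions about whatever scraper you use, not a real API, and JSON Lines is just one common format fine-tuning scripts accept:

```python
import json


def build_pairs(posts):
    """Convert scraped post dicts into (prompt, response) records.

    For each post, pair the title with its highest-scored comment;
    posts with no comments are skipped.
    """
    pairs = []
    for post in posts:
        if not post.get("comments"):
            continue
        top = max(post["comments"], key=lambda c: c.get("score", 0))
        pairs.append({"prompt": post["title"], "response": top["body"]})
    return pairs


def to_jsonl(pairs):
    """Serialize records as JSON Lines (one JSON object per line)."""
    return "\n".join(json.dumps(p) for p in pairs)


if __name__ == "__main__":
    # Toy stand-in for scraped data; a real scrape would come from the
    # subreddit via an API client.
    posts = [
        {
            "title": "Go lose a chess game against GPT-4",
            "comments": [
                {"body": "It does not even know it is checkmated", "score": 42},
                {"body": "lol", "score": 3},
            ],
        },
        {"title": "no replies here", "comments": []},
    ]
    print(to_jsonl(build_pairs(posts)))
```

Picking only the top-scored comment is a crude heuristic; it keeps the dataset small and the sneers upvote-certified.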
It is genuinely amazing what people project onto LLMs. Like, these things don’t actually “know” anything at all. They have no real way to discern truth from fiction, nor can they adhere to the basic rules of a game that traditionally programmed computers surpassed humans at decades ago.
But sure, yeah, totally gonna make the jump to AGI. Right.
The fixation on optimization is just pathetically ignorant. If
there’s anything to be learned from functional intelligence,
it’s that it’s not optimizing.
For example, human and animal path planning and motion control are
nothing like model-predictive or optimal control or A* path
planning.
It's also a weird begging-the-question. When discussing agency we usually mean in terms of the ability to select goals, so if agency is supposed to arise from optimization we have to ask: optimization towards *what*? Because if it's not able to generate that goal on its own then it doesn't have independent agency
> **Stelling Minnis** (STEL-ing MIN-is) n.
> A traditional street dance. This lovely old gigue can be seen at any time of year in the streets of the City of London or the courts of the Old Bailey. Wherever you see otherwise perfectly staid groups of bankers, barristers or ordinary members of the public moving along in a slightly syncopated way, you may be sure that a stelling minnis is taking place. The phenomenon is caused by the fact that the dancers are trying not to step on the cracks in the pavement in case the bears get them.
— Douglas Adams and John Lloyd, *The Deeper Meaning of Liff*
it might be good that EY is leaning so hard into LLM hype right now.
it might leave him somewhat discredited when it becomes clear to more
people that we’ve actually reached a plateau with the current
technology, rather than it being literally the first step on a
slippery slope to ASI (as people like him seem to be treating it).
once more people have the illusion of GPT-4’s intelligence shattered
for them, they might realize that if EY can be tricked so easily too,
then he’s not all he’s cracked up to be in other areas
It still mostly repeats back to him his own terms; “Optimization
processes that can map desired outcomes back to choices” my ass, that’s
an abstraction on top of an abstraction just to posit that there is a
process that connects theory and action, which rests somewhere between
‘meaninglessly self-referential’ and ‘already covered by the dictionary
definition’ on the pointlessness space.
So ChatGPT isn’t smart enough to understand how cults work. Or the
works of Robert J. Lifton and Steve Hassan got excluded from the
training data. I think that it’s the first option.
Using a datacenter’s worth of power and compute, and a model trained on terabytes of text, just to figure out what the fuck Yud is trying to say.
>> I’ve been running a lot of your thoughts thru GPT for elaboration and simplification, and the examples and metaphors it generates are astonishing
> Pics?
Sniffing their own farts with extra steps.
Has he forgotten what the G in AGI is meant to stand for?