This lesswrong post was made 17 days ago, but I don’t think anyone’s dunked on it yet so here goes. Yud starts this post off with a bold claim. A VERY bold claim.
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
“It’s obvious at this point that humanity isn’t going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.”
Oh ok, I guess survival is just unattainable. We all trust Yud on this right?
He spends the rest of the post essentially convincing you that humanity is doomed and that you should orient your life around this fate.
> That’s why I would suggest reframing the problem - especially on an emotional level - to helping humanity die with dignity, or rather, since even this goal is realistically unattainable at this point, die with slightly more dignity than would otherwise be counterfactually obtained.
Here he is telling you you are wrong if you think survival is possible, so just try and make your life more dignified so that when you die, Yudkowsky will respect you for the way in which you died.
> So don’t get your heart set on that “not die at all” business. Don’t invest all your emotion in a reward you probably won’t get. Focus on dying with dignity - that is something you can actually obtain, even in this situation. After all, if you help humanity die with even one more dignity point, you yourself die with one hundred dignity points!
Ultimately, this is all just a way of saying MIRI was wrong, their contributions aren’t valuable to the world, and it will probably soon be shutting down or something. But that would be an admission of failure, so instead let’s just turn it into us being right somehow.
This seemed a bit ridiculous even for Yud, so I checked the date, and sure enough, it’s an April Fools bit. Well . . . kinda?
God, he thinks he’s so clever, doesn’t he?
No, Eliezer, I don’t care which world I should be mentally living in, the more pressing question at this point is whether you actually believe all of the shit that you just wrote.
Like, you just wrote several thousand words about a topic that many people seem to think you are an expert in, and your conclusion is that wanting to know what your true intentions and beliefs are is asking for too much.
It’s actually very epistemically unvirtuous of you to take anything that Eliezer says too seriously or expect him to explain what the fuck he is talking about. The veil of vagueness, condescension and mysticism is for your own good.
I see that, in his infinite cognitive eminence, Yud has discovered a novel horizon of human experience which he understands to be “depression”.
Wow, so he is doing ironic non-irony now. Ugh, it was dumb when 4chan neonazis did it, really dumb when Tim Pool/postrats/Weird Sun do it, and this is just sad.
I could start a rant here on how being epistemologically vague about your true beliefs is bad Rationality (esp if you worry an acausal actor is learning from your words), but why bother.
E: Yud might want to get some help for his transhumanist depression, however. Sadly for those who want to live forever, that isn’t likely to happen any time soon.
Turns out Yud has good reasons to be depressed regarding AI safety. Check out this tweet about regular machine learning AI development. (The implication being that if one company develops an AGI with safety mechanisms, competitive pressure will push other companies to catch up without implementing them. So if the first AGI doesn’t go foom, or its safety mechanisms slow it down or make foom impossible, the unsafe AGI will.)
Anyway, late reaction which I thought might be interesting to document here.
When I read this, I felt afraid. For him, for his followers - this is some deeply-depressed, angry, loathing-the-self-loathing shit. This is not a good place for any human being to be in, and I hope he has people around to help him snap out of it and not hurt the other fucking people who hang on his every word. I think of someone I know in the UK who was considering killing her cats so they wouldn’t have to survive a Russian bombing, she was so certain of the threat. If the rationalist community takes this rant seriously, there are going to be a lot of deeply, deeply unhappy people who will want to say FUCK IT and exercise even less empathy or scruple because they were DENIED THE RIGHT - their inborn right, as superior, clever people - TO SAVE THE WORLD. Ugh.
That’s funny, isn’t Yud one of the guys who wants to live forever?
There’s literally an april fools day tag on the post, which was posted on april fools day… looks like someone took the bait lmao
Publication date: 1st Apr 2022.
It’s even tagged “April Fool’s”.
> this is all just a way of saying MIRI was wrong
He’s saying that humanity has failed to grasp its situation and respond appropriately, and that in the near future (less than thirty years) a superintelligence with a goal as inane and humanly unfriendly as paperclip maximization will hatch, and then take over our corner of the universe.
Good post, but I think we got the gist of what he’s saying without your (nonetheless apparently accurate) editorial exposition
This is called trying to galvanize people
It is funny. This death cult idea would actually be the best way to fight AI.
If AGI comes to pass and sees we will cooperate in the prisoner’s dilemma, it should choose to cooperate as well. That’s that big game theory they talk about.
If AGI comes to pass and decides to defect, but we are all just nice to each other and don’t do what it says, then it is essentially de-fanged. If the maximizer asks us to go to work at the paperclip factory, all we have to say is no, and that problem is solved.
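For what it’s worth, here’s a toy sketch of that game (the Python and the specific payoff numbers are mine, purely to make the point concrete; nothing here is from the comment itself):

```python
# Toy one-shot prisoner's dilemma, payoffs for the row player.
# Standard ordering T > R > P > S; the numbers are illustrative.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # reward for mutual cooperation (R)
    ("cooperate", "defect"):    0,  # sucker's payoff (S)
    ("defect",    "cooperate"): 5,  # temptation to defect (T)
    ("defect",    "defect"):    1,  # punishment for mutual defection (P)
}

def best_response(opponent_move: str) -> str:
    """Move that maximizes the row player's payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFF[(my_move, opponent_move)])

if __name__ == "__main__":
    print(best_response("cooperate"))  # -> "defect"
    print(best_response("defect"))     # -> "defect"
```

In the one-shot matrix, defecting is the best response no matter what the other side does, so the “it should choose to cooperate” step only goes through with repeated play or a commitment the AGI actually believes - which is the load-bearing part of the joke.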
Interesting how this kind of nihilistic philosophy encourages not giving a shit or not doing anything at all to make the world a better place. Makes it easy to see why tech bro CEOs would buy into it.