r/SneerClub archives

It’s a good and interesting review, but the site is run by frothy right-wing election denialists, including the author. Do not congratulate.

This has got some top-quality sneers:

So journalists start googling and discover Yud is the Final Boss of AI risk (there are many other people in the space but he is the most prominent), give him a call, expecting to get a few digestible soundbites, and instead discover they are FUCKED, I’ve been writing about this for 20 YEARS with my 140 IQ, we have NO IDEA how to make this SAFE and unless we “get it right on the first try”, which we’re NOT EVEN TRYING TO DO the entire planet will DIE HORRIBLY in a QUANTUM HELL MATRIX, so we must immediately SHUT IT DOWN and NUKE THE FABS and EXILE THE ENGINEERS to a LEAD LINED VAULT in the MARIANA TRENCH.

It actually gets into a rather charitable evaluation of the fic’s quality (“on an aesthetic level Project Lawful is basically a good first draft that would go from above-median science fiction to really pretty good if it was edited to resolve or eliminate certain dangling plot threads and cut down the overall length”), even as it rightly points out the extremely fetishistic elements.

This was a keen insight:

I can’t say “every”, but certainly “lots”, of Yud’s fiction climaxes with the protagonist discovering that he has been lied to, finding the current state of the world absolutely unacceptable, threatening to blow up the institution / world / universe involved unless matters are resolved acceptably, and coming up with a brilliant plan that defeats the remaining enemies at a stroke because he’s so much smarter than the antagonists that he automatically wins.

I think this kinda reflects how Eliezer has given up on “solving alignment” in favor of “shut it all down and drone strike everything.”

Viewing things through that lens:

AI and gods are implicitly conflated in the story when you recognize some of the terminology and theological debates as ripped from LessWrong comment sections of yesteryear, which makes the idea of Hell vs. the also-described Good afterlives an allegory for misaligned vs. aligned AI. Through this lens, the implication that it is morally justified, or at least extremely understandable and sympathetic, to threaten to blow up the universe to destroy AI-hell becomes, as I said, concerning, especially when he’s actually advocating that the US Air Force nuke the fabs if necessary, after due efforts at diplomacy (we do a little gains from trade, I guess).

Which, wow, I missed that connection at the time (Project Lawful wrapped up right before Eliezer really started working the soundbite and podcast circuit), but in hindsight it’s blatantly obvious and really worrying!

Yud’s biggest fear is that an AI will be smarter than him and, as a result, be able to do to him what he gets off on doing to the women in his life.

Just started a new headcanon that Yud is actually an avatar created by that ol’ Basilisky. He was sent back in time to find anyone who knows about alignment but doesn’t care, and convince them that it is not a big deal and that all the AI cranks are idiots. And gosh is he doing a bang-up job.

He is also here to torture us with these fanfics and the general output of the AI cult.

We truly live in the worst timeline

Going to GLOMAR this one.