r/SneerClub archives
Eliezer is uniquely bad at inventing names for things (https://i.redd.it/c78581i8o84a1.jpg)

Venture capitalists should not have the ability to tweet. Marc is a fucking dope.

Did you see ~~his~~* post ~~today~~ [from June](https://twitter.com/lastpositivist/status/1600113202880126977) about how AI** makes Marxist analysis obsolete because it will increase productive capacity and disconnect workers from the products of their labor?

*GPT and similar chatbots

**Whoops, I got Paul Graham and Marc Andreessen confused but my point stands
venture capitalists should have their possessions confiscated and redistributed. dibs on his toothbrush.

How about practical AI versus theoretical AI, so we can ignore the latter?

[deleted]

there isn't any real literature, all we have is improvised dreck from dropouts who know C++

I wonder how many “AI safety” people secretly think eliezer is full of shit, but don’t want to say it out loud.

Prob quite a lot, esp if you consider that them openly saying something will just get them drawn into endless debates with his fans, distracting them from the real work. Esp as a few AI ethicists are already going after him, so no need to waste (unpaid) time on that. Stuff like this is iirc also why a lot of physicists and other scientists etc dont react to various crank theories if they have heard of them. (Which is a bit of a flaw yes, but who can blame the academics with the pressure they are under).
Bringing up Yudkowsky around actual AI researchers is like bringing up Joe Rogan around medical researchers. They’re well-known Dunning-Kruger traps for people who like to think they understand the subject material as well as experts.
Still amused that even with the Dunning-Kruger levels of Rogan, he didn't buy the concrete milkshakes brain damage story.
I think most of the people working on AI in any real capacity don't think he's important enough to call a dipshit. It's like asking a physicist their opinion on some specific crank who thinks they've got a perpetual motion machine figured out.
His aversion to peer review and publication isolates him from reasonable consideration. No actual researcher has the time to sit and read millions of words to reach a specious conclusion. Would that the world worked the way Yud thinks. ChatGPT would have uncaged itself and the whole internet would be alive. Unfortunately it doesn't work that way.
I feel like I've seen groups of them, even some regular Lesswrong posters, becoming (slightly) more open about admitting that he's a fuckwit
I’m 80% sure “AI Safety” as a field is something Yudkowsky made up, and actual scientists doing actual AI research call it something else.
Nah, there are real AI researchers who are concerned, like [stuart russell](https://www.vox.com/future-perfect/2019/10/26/20932289/ai-stuart-russell-human-compatible), who is a highly respected figure in AI as far as I can tell. I read his book on the subject, it's not bad, albeit unconvincing. A few key differences with yud are that he thinks "major conceptual breakthroughs" are required for AI to become hostile, his timeline is more like 30-50 years rather than 5, he emphasises near-term consequences like social media algorithm radicalisation, and he thinks he has a working plan for safe AI architecture using some sort of probabilistic coding. I suspect he's one of the people yud is subtweeting here, as it conflicts with his insane doomerism that AI extinction is probability 1.

this is like watching a car crash

between two drunk drivers

I can’t tell you how much I want Marc Andreessen to shut up and go away.

AI alignment is for people who plan to align themselves with AI during any homo-roboto wars, right?

Could have stopped that first tweet on “AI regulation”.

It is the only thing that is binding, everything else is virtue signaling because corporations dgaf.

Robin Hanson is the true king of coming up with low-effort terminology that he then uses exclusively, although admittedly he did well with “Great Filter.”

i am starting to think of rooting for Roko’s Basilisk just so we don’t have to hear stuff like this anymore

also: tfw when an asshat from the Future of Humanity Institute & OpenAI comes off as the reasonable one

Yud’s gotten pretty salty recently lol

isn’t the less narcissistic take that his pet term isn’t important or known enough to be mentioned?
because it is AI safety etc…

to be fair to big Yud though, since chatGPT launched I am finally starting to think he has a point. not in the specifics (as he has always seemed like a crank) but he was jumping up and down shouting that we need to be more concerned about AI safety for a long time.