r/SneerClub archives
Yud pens TIME opinion piece advocating for airstrikes on rogue datacenters (https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/)

> He’s been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field.

:::loud fart noise:::

Time Magazine used to be a respected mainstream publication
Now I know why they weren't really interested in the abuse stuff. :clownface:
[deleted]
I'm familiar with multiple individuals among those interviewed by Time* who faced sexual abuse in and around effective altruism. A lot of them expressed the opposite opinion of yours: that Time was the publication finally able to bring any major/mainstream attention to the sexual abuse they experienced. Obviously please feel free not to answer if you don't want to, though if you're willing to share, what are some details from your experiences with these journalists when they treated you so poorly? *For anyone skeptical, I will disclose how and what I know in a private message upon request.
In the end I'm just bitching out of frustration. I'm sorry.
That's totally valid. No need to apologize. I know it wasn't easy for a lot of the other interviewees, which is another reason they stayed anonymous, so thank you for speaking up.
Thank you for trying though.
One problem may be that it's still kind of respected as a mainstream publication. I feel like stigmatizing rationalists may demand an unprecedented and transcendent kind of sneering beyond what this sub has achieved.
Newsweek has been taken over by some nutty South Korean businessman; it's almost like Epoch Times now. But brand names' reputations last a long time.
Not for many many years
20 years and not a single AI aligned yet? Move over and let someone else try, Yud!
He doesn't want one - a regular paycheck for 20 years? Nice!
The most insidious aspect is that this article "makes sense", at least for the first half, from a layman's perspective. It's reasonably well-crafted and digestible. Appeals to common sense are made. You all know the drill. That appears to be the bar for getting editorials published in Time? Maybe we are doomed! Or maybe I just need another drink.
Op-eds generally work this way.

Yudkowsky a month ago when TIME was investigating his organization’s sexual abuse scandals:

> I would refuse to read a TIME expose of supposed abuses within LDS, because I would expect it to take way too much work to figure out what kind of remote reality would lie behind the epistemic abuses that I’d expect TIME (or the New York Times or whoever) would devise. If I thought I needed to know about it, I would poke around online until I found an essay written by somebody who sounded careful and evenhanded and didn’t use language like journalists use, and there would then be a possibility that I was reading something with a near enough relation to reality that I could end up closer to the truth after having tried to do my own mental corrections.

https://forum.effectivealtruism.org/posts/2eotFCxvFjXH7zNNw/people-will-sometimes-just-lie-about-you?commentId=opAy9vQaKA5P3bcqs

Can anyone spot any epistemic abuses in this new piece from TIME?

> TIME was investigating his organization's sexual abuse scandals

Did that turn into any reporting on what they found?
I had no luck getting the links and stuff to any journalist. Mother Jones, CBC, Bloomberg, some independents. Just dead air. Wasn’t good enough for Time (this was before this article).
My cynical thinking is that "an obscure cult I never heard of is actually full of sex abuse" is not notable enough news (unless they actually blow something up or commit mass suicide or the like), but if they can first prop this cult up so that the reader has some higher expectations of it, then it is.
A cynical view is that, from the plot perspective, you need to make this fairly obscure cult sound like not a cult and not obscure, to set up the plot twist that it is actually a cult with a David Koresh-style leader doing David Koresh-style things.

I’m going to lose my mind if this shit goes mainstream. I could barely handle it when 4chan broke millions of people’s brains in real life.

Imagine politicians making decisions based on how much Roko’s Basilisk scares them.

I wonder if this is what people in ancient civilisations felt like when they were confronted by new religions taking over
“Yud? Yud? Why do you persecute me so?” *56k dialtone noises* But more seriously yes (I think anyway).
For years I had thought of the rationalists as being super obscure, so I was shocked when Silicon Valley referenced Roko's Basilisk and when the NYT wrote an article about Scott Alexander. I'm not shocked anymore. If I were a mod of this sub I'd start preparing a contingency plan for the possibility that Big Yud and his ilk become household names and people (some of whom might be very pro-AI) start to flood in.
We have no qualms about wielding the ban hammer, this sub descended from /r/badphilosophy
maybe american tech being built on the ashes of hippie land by generationally wealthy impresarios who adopted counterculture and "hacker culture" signifiers which they mistook for real participation in those cultures without understanding the ethos or politics of either, and then further monetizing their understanding of them, was, in fact, a mistake. charles manson was replaced by a lookalike and went into witsec under the name "richard stallman"
Yeah the death of actual/OG hacker culture was fucking sad. The old bumper sticker was clear: you can't do unethical or illegal shit as a hacker, only as a criminal...like the idea that every locksmith isn't a home invader.
[deleted]
Well "nerd culture" as we know it is the product of the incestuous nightmare I was on about above after Human Centipede-ing itself a few hundred times over. It was that combination of the more stilted and willfully isolated side of the pre-home computer revolution programming/computer/hacker culture, gee-whiz "irrational exuberance" flavored with the national temperament of your choice (the American Dream, corporate Confucianism, laddishness, etc), and said generationally wealthy impresarios (e.g. Steve Jobs).
I’m really afraid of that too. Nothing seems to stop these people.
The day that Yudkowsky goes on Joe Rogan's podcast is the day that I start living underground.
that could very well happen
I mean it could happen real fucking soon.
Lex has agreed to have Big Yud on already, so depending on how that goes I'm sure Joe would do one with him
Oh that's probably a done deal.
[lol it’s up](https://youtu.be/AaTRHFaaPG8)
Woof
keep your goddamn hands away from the lathe of heaven please
It's going to. He's going to be on the Lex Fridman podcast, a meeting of perhaps the two dumbest "smart" people that have ever lived. It will be the dumb guy singularity. We will all be worse off.
it's practically mainstream within the ML industry (at least for the losers here on reddit), which is both hilarious and terrifying
It was *disconcerting* when I saw the paragraph from Hillary Clinton's campaign tour diary book regurgitating reheated Yudkowsky she picked up via Musk:

> There’s another angle to consider as well. Technologists like Elon Musk, Sam Altman, and Bill Gates, and physicists like Stephen Hawking have warned that artificial intelligence could one day pose an existential security threat. Musk has called it “the greatest risk we face as a civilization.” Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about “the rise of the robots” in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.
> Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well?

Short Circuit?
Oh god what
Doesn’t seem any better or worse than their bizarre interpretations of Christian scripture, to be honest.
Are you saying Roko doesn’t know more about Christianity than anyone else alive? 🥺
I’m saying I don’t have the patience and self loathing required to find out how little he understands.
It's gonna happen, and I suspect it will play a big role in any mature modern American fascism.

Butlerian Jihad now!

thou shalt not make an op-ed in the likeness of a rat mind

“Madness can take many forms, but none so contemptible as man’s belief in a mythology of his own making. A world view buttressed by dogmatic desperation invariably leads to single-minded fanaticism, and a need to do terrible things in the name of righteousness.”

[meta] Sneering at Time is valid, though the fact that Yudkowsky is getting published in Time is part of an insurgent trend of contrarian/internet edgelords entering mainstream society, one that demands a deeper, sharper level of sneering this sub may not have reached yet.

[deleted]
Yeah, I guess I should've thought of that before I put my foot in my mouth. While Musk has floated Roko's Basilisk, the most commonly touted bizarro sci-fi from rationalists, I'm earnestly concerned that Eliezer (mostly) getting away with penning a letter in Time calling for airstrikes on "rogue" GPU clusters will embolden more rationalists to publicly spread not just their silliest but potentially their most dangerous ideas.

> Many researchers steeped in these issues

Steeped in something…

> stockfish 15

Wow, what a great and accessible reference for the TIME audience.

> AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing. … There’s no proposed plan for how we could do any such thing and survive

I expect that no matter what plan anyone would propose, Yud’s capacity for coming up with bad science fiction scenarios would go right around it.

> powerful cognitive systems that optimize hard

Good thing that’s not what AI does. Though, again, the fixation on optimization is a sign that Yud hasn’t actually had any real thoughts past “Paperclips.”

> giant inscrutable arrays

God damn it

I’m sorry, I don’t have the stamina to sneer at any more of this garbage.

> Wow, what a great and accessible reference for the TIME audience.

That stood out to me too. Also the links to lesswrong posts, the irrelevant email excerpt from his girlfriend's friend, etc. It seems like Time didn't even attempt to edit this piece for content or style. Maybe that's normal for editorials?
> Many researchers steeped in these issues

Mf just casually links his own article at this phrase, then links one more when mentioning himself in particular. I can't with this fucking guy.

I have depression and anxiety too but at least I have the base emotional intelligence to keep my brain worms in my head and out of Time Magazine.

yeah i was struck by how much it reads like, idk, blackpill incel shit, where it's just a stew of invective and depression
The number of times he says ‘we’re all going to die’ in the article astounds me. I recall Time being somewhat reputable before magazines became irrelevant. What happened?
> I recall Time being somewhat reputable before magazines became irrelevant. What happened?

Magazines became irrelevant.

Verifying people’s credentials is rude - Time magazine editors, probably

*The data science guys told us that fact checking is uncorrelated with ad revenue, so don't waste your time with that shit* - also Time magazine editors, presumably

I think it’s a good time to remember that Yud at one time believed he himself needed to quickly develop AGI in order to save humanity from a grey goo scenario brought about by advanced nanotechnology.

After updating his priors (?) and failing to create anything remotely close to AGI, he’s now calling for a halt on further progress because he’s afraid an unaligned super-AGI is going to kill us all with advanced nanotechnology.

From the legend himself:

> Oh, don’t get me wrong - I’m sure AI would be solved eventually. In about 2020 CRNS (49), the weight of accumulated cognitive science and available computing power would disintegrate the ideological oversimplifications and create enough of a foothold in the problem to last humanity the rest of the way. It would be absurd to claim to represent the difference between a solvable and unsolvable problem in the long run, but one genius can easily spell the difference between cracking the problem of intelligence in five years and cracking it in twenty-five - or to put it another way, the difference between a seed AI created immediately after the invention of nanotechnology, and a seed AI created by the last dozen survivors of the human race huddled in a survival station, or some military installation, after Earth’s surface has been reduced to a churning mass of goo. That’s why I matter, and that’s why I think my efforts could spell the difference between life and death for most of humanity, or even the difference between a Singularity and a lifeless, sterilized planet. I don’t mean to say, of course, that the entire causal load should be attributed to me; if I make it, then Ed Regis or Vernor Vinge, both of whom got me into this, would equally be able to say “My efforts made the difference between Singularity and destruction.” The same goes for Brian Atkins, and Eric Drexler, and so on. History is a fragile thing. So are our causal intuitions, where linear chains of dependencies are concerned. Nonetheless, I think that I can save the world, not just because I’m the one who happens to be making the effort, but because I’m the only one who can make the effort. And that is why I get up in the morning.

This is completely sane and is peak rationality.

> be willing to destroy a rogue datacenter by airstrike.

How long until one of his disciples firebombs a data center or starts playing Unabomber with AI researchers?

The Zizians already started killing people.
I remember this! I was a child reading his essays and saw his call for everyone to be prepared to move to rural compounds or even Antarctica (as I recall) to work on finishing the Last AI before the Nanowar annihilates everyone in... 2012, I believe was his prediction? Does someone else remember this or have a wayback link? I almost feel like it happened in a different universe.

[deleted]

[Jesus fucking christ.](https://twitter.com/krishnanrohit/status/1641409563290222592) **Twitter guy:** would you have done a terrorist strike on the bio lab in Wuhan? **Yudkowsky:** yes of course, but only if I could get away with it

Relevant excerpt, though there’s plenty in there.

Stick a fork in him, he's done
Oddly this makes me wonder if he was pro-Iraq War.

Two days after yet another mass shooting, which lawmakers have already given up on addressing, and this gets published. It’s all so tiresome.

"Yea but thats only 6 human souls condemned to hellfire. I'm talking about the extinction of the entire human race here!!!" -Eliezer, probably

He has one of the worst communication styles ever. Most of what he says and writes is incomprehensible because of the moronic way he feels he NEEDS to phrase things.

I'm pretty sure he used to write a bit better than this.
Not sure if he writes worse now or if our standards used to be that low.

Why is everyone panicking like there is a 100% chance of us moving towards a Terminator future... using my post-bayesian logic i conclude there is a 50% chance of us moving towards a robot-waifu-for-everyone future

Indeed, 0.5 is the only probability.
My priors indicate there's a 99% chance LLMs get used to make RPG video games more fun and immersive, and that's alright by me
their anime vvaifu tulpas already hate them, they realise their AI vvaifus probably will too

The real existential risk here is the increasing normalization of batty cranks like Yud into the mainstream.

Guess I should be grateful that at least it’s not any of those mottenik-type crazies. Yet.

r/FULLPOSADISM

The dolphins will save us.

i would sign it but only if they add “because 100% of people promoting this garbage are charlatans and we’d all like a short rest thanks”

Wait. Wait. What? Yud does NOT want to pursue unlimited AI, to say nothing of AGI, full steam ahead? He recognizes that AI is almost certainly not conscious, but that its dangers come from what it is capable of without having will or consciousness? He WANTS a full moratorium? I agree with ~80% of something Yud has written? (I usually do not even agree with 20% of single sentences he tweets.) What is happening???

Yud is wacky but I don't think he's ever claimed AI has to be conscious to kill everybody.
An unconscious optimization algorithm designed just right will recursively feed its own source code to itself in order to continually self-improve catalyzing a chain reaction leading to an intelligence explosion that results in a superintelligent AGI that will almost certainly immediately do something irreversibly cataclysmic like create nanobots to reconfigure all the matter in the future light cone of our part of the universe starting with us and our planet into paperclip-equivalent dead artifacts according to its all-too-literal non-human-friendly utility maximization goal architecture. Sounds legit.
lol no he wants people to worry about a fantastical existential threat instead of real problems, *including* the real problems wrought by AI. most of all i think he just wants to hear himself talk.
That’s all true. Nevertheless, I want it banned until it can be brought under a very thoughtful regulatory regime. I haven’t heard Yud ask for a total ban before. That’s the part I agree with. And he’s influential enough that some others may take the idea seriously, even if for different reasons from what more reasonable people may have.
there will be no legitimate regulatory framework, because that role will be ceded to internal "ethics" boards and industry adjacent schlemiels like yud
I hate industry “regulation,” but things have already moved beyond that. The EU already has a regulatory framework in development and a full apparatus it is gearing up to put into force. It is largely independent of industry. Most of the AI promoters hate it. The US and UN should follow suit. If this helps spur them to action, even for dumb reasons, that can only be for the good. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
If you agree with 80% of this piece then you're agreeing with some pretty unhinged shit.
Yud wants to build a nice moat around his friends at OpenAI. No one else is allowed to touch AI, while OpenAI et al. gets billions of dollars in defense and intelligence contracts.
Yeah, the simplest explanation is, mass hysteria makes a lot of money.
How would banning *all* AI research and existing software “build a moat around OpenAI”?
He wants *"rogue"* datacenters shut down. Clearly only people who are smart and wise enough to care deeply about AI alignment like he does should be given money and permission to work on the topic. Please ignore that he doesn't actually do AI research, hasn't turned out any ML advancements of any kind, and is utterly unqualified for any sort of grant for research that involves actually producing something. EDIT: Also, he's technically calling for a halt to training powerful models. Has he ever trained a model?
The mention of “rogue” data centers is for the dumb idea of drone strikes. The piece is absolutely clear that he is asking for a global moratorium on ALL large scale AI work. Notice the words “all” and “anyone” and “no exceptions” in the following:

> Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere.

I fully understand what an idiot and faker he is. That was the point of my original comment. I never agree with what he says, and usually don’t think he manages even to be coherent. Even a stopped clock etc.
Yes, I've come around to your point after reading a bit more. It's building a moat around *himself in particular.*
🤣
Not "ALL" large scale work. He says he still wants to allow work on biology to use large models: > I might carve out a single exception for AIs being trained solely to solve problems in biology and biotechnology, not trained on text from the internet, and not to the level where they start talking or planning From his other writings it likely isn't to help people broadly with diseases and stuff, but because he wants to live forever.
Yeah I’m glad at least that there’s already a community around open source versions of these

this is definitely one of the worst-written Horizon lore items I’ve yet seen

Anyone have a good link to a takedown of this garbage? I have to convince a group of normally intelligent people not to fall for it and I don’t have time to get into it.

The most effective debunking is to think about it for more than 15 seconds. Humans are orders of magnitude more intelligent than squirrels. It cannot be said that human values are particularly aligned with squirrel values. Yet, *squirrels still exist*. How is this possible if massive intelligence + lack of aligned values = almost certain, immediate extinction? There must be more to this equation that Yud is leaving out.

Pointing out that a lot of species have gone extinct since the advent of humanity misses the point -- which is that there are still many species that haven't gone extinct and are not going extinct. In fact, [for some species](https://en.wikipedia.org/wiki/Cat), humanity is the best thing that has ever happened to them. Coexistence with a superior intelligence *is* possible. There are lots of ways our interests can be coincidentally aligned. There are lots of ways that even a total lack of alignment still results in non-catastrophic outcomes.

It's a *huge* jump to assume a probable outcome of "every single member of the human species and all biological life on Earth dies shortly thereafter". This is 100% baseless speculation on Yud's part. It could just as well lead to a post-scarcity economy and a cure to every disease known to mankind. He doesn't know. And yet he wants to blow up datacenters regardless.
What would be convincing to them? I can offer a lot of reasons that it's nonsense, but it's hard to compete with the simplicity of "credentialed person says that totally new thing is scary and should be banned". Like, the only cure for ignorance is learning, and that, by its nature, takes at least a little bit of time and effort.
I think a lot would be accomplished by a takedown of Big Yud's credentials. They're the kind of people with a lot on their plate and, yes, fascinated by the shiny, scary hypothesis.
it took me way too long to realise that Yudkowsky has literally no accomplishments in any of his claimed fields. All the pointers to pointers to pointers don't actually go anywhere. He has literally never done anything. His achievements are (1) raising funds for his charity amazingly well - that's very much not nothing, but it's not the skill he sells, at all - and (2) finishing a large fanfic.
pot kettle
He has no credentials to take down. Not even a high school diploma.

I just rewatched the Elementary episode where Totally Not Yudkowsky’s think tank inspired someone to murder an AI researcher and frame the AI for murder. Also Sherlock got into Death Metal. Good episode.