r/SneerClub archives
Yudkowsky explains why, in order to prevent AGI, we must hold the rest of the world at nuclear gunpoint. (https://i.redd.it/8a83atexygxa1.png)
70

MY FANFIC IS BACKED BY NUCLEAR WEAPONS

If you subtract out the last two sentences then it just reads as him getting mad about the idea of developing countries getting thrown a bone

For a second there I was almost on board with banning AI in first-world countries to give developing countries a competitive advantage. I was kinda bummed that was *not* where he was going with this. Now it's just another reason to invade Iraq. Perhaps they don't have WMDs, but they *might* be hiding GPUs ...
AI fuckery would probably explode their economies anyway.

This hypothetical makes no sense to me.

Say what you want about the US, but if there’s one thing it does well it’s invest in greenfield high-tech ideas with huge financial upside.

Literally that’s what tech VCs do every day.

You’re right that it doesn’t make sense out of context… I think the context is attempting to justify his “drone strike the data centers” idea with the claim that an incomplete ban on AI just leads to developing countries taking the lead on AI (and thus the world is doomed)? It feels like he’s gotten lazier over the years, going from dozens of pages of blog posts intricately hyperlinked to isolated tweets that don’t even link the context from a few tweets back. From a sneering perspective, this is actually a good thing, as it means the sneerable content is concisely readable instead of buried amidst thousands of words.
Your description of Yud Classic ("it's intricately hyperlinked") is a great sneer. Thanks.
Only in an anime-obsessed, Orion's Arm min-maxing rationalist's fever dream is, like, Nigeria or something a more attractive location for a tech startup than Silicon Valley.

IAIA has a nice ring to it – International Artificial Intelligence Agency. They’ll have a team of experts who come out and monitor your loss functions.

> They'll have a team of experts who come out and monitor your loss functions.

With, one presumes, specialized handheld devices that turn red and make alarming noises when superintelligent AI is nearby. Gotta make sure you can detect crashing loss functions even in the GPU farms that people are hiding in their attics, like with illegal marijuana crops.
Grab your PKE meters folks, they’re good for detecting supernatural phenomena
IAIA Cthulhu fhtagn

hard to think of a sneer better than “you paid for Twitter”

Effective altruism in action!

imagine never leaving your house and thinking the “real world” was a revolving door of scholarly papers and spreadsheets with numbers dashing over your back-lit LCD screen

holy shit, somebody write that down….i feel like there’s a really good metaphor in there

Plato's man cave?
Tbh, I don’t think Eliezer has ever plugged all of his assumptions and guesswork into anything as rigorous as a spreadsheet? And at most he skims scholarly papers…
There was a recent post where Eliezer loudly and expressly refused to articulate his ideas in a detailed enough way for David Chalmers to engage with them. If Yud were required to run his ideas through any kind of academic process, he would never amount to anything at all.
He tried it with his TDT, but it was torn apart at review and probably spooked him.
uh, his brain? don't need to use excel if you already do excel

“Explains” is doing a lot of work in that sentence.

"This would, one would expect, [fill in alarmist and armageddonist factoid here]."

[deleted]

AI will kill "literally everyone," so we have to be willing to deliver a nuclear exchange that will, directly or through its fallout… also kill "literally everyone" 🤔
As this new panic provides nice cover for authoritarians to oppress other countries, I'm afraid (and perversely curious, in a trainwreck kind of way) of what will happen when some (far) right politicians openly make this part of their program. Will the Rationalists support anti-woke AI Trump? To save the village we had to...
musk kinda does

The United States forsakes advanced technology to preserve its way of life, allowing developing nations to take the lead and losing its first world status

Hey that actually sounds like a pretty fun premise for a science fiction novel. I’d read that.

Oh wait lol, I already did: Aftermath (Supernova Alpha book 1)

The trick to making it work is that you have to contrive a plot in which that’s actually a good idea. Yudkowsky is really committed to writing a boring fanfic, I guess.

Monopsonized?

The demand-side counterpart of monopoly: a market with only one buyer, as opposed to only one seller. Real word, unfortunately.
Lol, thanks, just never seen it before, but the concept, or how it applies to Mr. Yudkowsky's point here, escapes me? It seems to be doing some intellectual ass-covering by giving the appearance of a more fully fleshed-out point?
yeah, "everything monopolised and monopsonized" would mean that there is only a single consumer. It's word salad pretending to be deep thought.
> "everything monopolised and monopsonized" would mean that there is only a single consumer.

I don't think that's correct. Like, you're right overall that he's just spewing buzzwords, but "a given business is both a monopoly and a monopsony" is not an implausible state of affairs. Rural hospitals, for instance, are generally both the only provider of healthcare services in their area and the only employer of healthcare workers in their area.
Amazon dot com is in fact both a de-facto monopoly and a de-facto monopsony in some areas!
Monopsony is common in big tech – it can be said to undermine dynamism in the sense that it undercuts competition. Doesn't add a lot to the sentence, though, yeah.
The only reason it's there in the first place is to demonstrate Eliezer is aware of it, is familiar enough with its meaning to understand it has some association with its more commonly recognised counterpart, and to vaguely hint that he has a conceptual understanding of economics incorporating both, which of course he does not.
[deleted]
My understanding from what I've read was that tech companies were somewhat monopsonistic, with a few big ones making up most of the demand for engineers. The fact that they can collude to keep prices low is evidence of this in the first place. Obviously there's not literally only one company in the sector. Could be wrong though, I'm not an expert.

I love how people like this are constantly doomsaying about how AI is going to control us all while their entire worldview is dictated by what an algorithm shows them on Twitter and YouTube.

Colonialism 2.0 Electric Boogaloo: “This Time We Need To Control Your Societies To Stop the Robots” Edition

Love the assumption that less developed countries have fewer regulations

Libertarian cryptobros thought the same about the laws at sea and the laws regarding cruise ships, so they bought one, with hilarious results. A funny read if you look up the deeper dives.

Thread: https://twitter.com/ESYudkowsky/status/1653475829949829121

I’m sorry, I’m not used to reading bullshit from these idiots. Can someone tell me where nuclear anything is implied here? Or even that anything of substance has been said at all?

I’ve been reading things on this sub for a few months, and I don’t get it. The characters of this sub never seem to say anything worthwhile about anything. It’s like they don’t know how any of these systems actually work.

[deleted]
Thank you for the explanation. I really haven’t been keeping up. And that’s completely bananas. Wow.

Rephrased by SPH^TM (Snarking Post-Trained Human):

Hedges omitted from the text (the "if"s pretending to hedge) — please add them back in your mind wherever they'd be most snarky.

The magic power of proto acausal-robo-gods will obviously be large. Big corp and big gov are somehow unbelievers that don’t join me in my genius assessment; they must be the dumbest slow wits alive (I cannot possibly be wrong). The power is so alluring and difficult to contain that it will spring forth from nothing, in younger countries and younger companies where the decadent skeptics have not plunged the world into blind lethargy.

(I can’t really make my mind up whether regulation is good, and a sign of wisdom, or a stupid denial of the inevitable truth of the markets; not truly sure how to properly shoehorn it into my worldview yet. Like, obviously monopoly and monopsony [see how smart I am, using econ 101 words in bold ways?] can only exist in bad, highly regulated countries, but I also want to monopolize and monopsonize the coming birth of GOD, to serve my interests above all else)

It pains me to say (I’m not sure it does, but I think saying it this way buys me brownie points, even though it really doesn’t) these upstart countries and companies cannot be allowed to prosper.

Sucks to be poor, but the only Rational^TM way to behave here is to side with the oppressors, to ensure magic powers aren’t used against me (when I say humanity, I mostly mean me). In my infinite sagacity I can see some of the evils of oppression, but cannot think of any other solution for my future imagined problem.

Woe is me, I didn’t want to preach oppression.

This is why you shouldn’t take Yudkowsky seriously.

AI safety is a serious issue, but we can’t trust anyone who’s been entangled with the rich or with the private sector. This is literally too important to trust profit-seeking companies (read up on the paperclip maximizer) and 95 percent of governments (perhaps, in Scandinavia, there are exceptions) with it.

AI ethics is a serious issue. Algorithmic bias locking the systemic racism of society into an opaque, inscrutable automated system is a serious issue. AI accelerating capitalist profit maximization at the expense of everyday quality of life is a serious issue. But "AI safety" and "AI alignment" are the buzzwords of those focused on sci-fi existential risk scenarios, not these. So that’s probably why you are getting downvotes: we are tired of real, near-term, already-happening issues being mixed up with fantastical scenarios that are at best speculative, improbable, and long-term, and at worst completely impossible.
You have a good point. I should have made myself clearer. I do think we're closer to serious AI safety risks than people realize--not because AIs will become sentient and "go rogue", but because programs become unpredictable at a certain level of complexity--but the quotidian issues of algorithmic bias and capitalism's regular evil fuckery reaching a new level of scale are more pressing right now.
I, for one, welcome the coming of the robotopia. All hail the robot god, hallowed be thy name.