r/SneerClub archives
Regarding Yud’s air strikes on datacenters (https://i.redd.it/01lrhttrzjsa1.jpg)

Aren’t GPUs just used to train the models? Like, a rogue AI doesn’t necessarily need a data center of GPUs to function, it just needs a datacenter to train it once, right?

I barely know who this guy is. I know of LessWrong. I tried to watch him on Fridman, but I have a low tolerance for cringey hyperbole and the hat didn’t help.

I'd assume if you want the model to keep learning (which I assume is part of the AGI) you would need a bit more processing power. (Of course, the whole magical AGI thing means that it will still kill everybody without these assumptions being true. Like God! (Or kids playing pretend in the playground and trying to one up each other)).
Just wanted to say I respect your use of nested parentheses. Wish that would catch on more. Edit: and yeah, I assume that a full-fledged AGI will have self-learning abilities. But it seems that it’s not necessary for it to go rogue, just a nice-to-have.
I’ve always assumed Soyweiser is a fellow Lisp enjoyer (it’s a programming language with lots (and lots (and lots)) of nested parentheses)
I feel like this entire discussion is sort of bumping up against the obvious fact that an AGI of any sort would undoubtedly require custom hardware, potentially much more specialized than a cluster of GPUs, and that hardware would be a hard physical limitation on its ability to grow in a way that makes the entire "AI Go Foom" scenario way less plausible.
But *pause because I have to smoke some weed* what if it doesn't, or just a proto-self-aware AGI is ordered to build the next level of AGI and it manipulates humans to do so in secret? (Interesting btw how empowering workers to be able to make decisions on their own and say 'I'm not going to do this unethical thing' [never comes up](https://www.youtube.com/watch?v=9P6av5SdTD4)). And you might be right, the type of processing needed for this might be super-Turing and require a different paradigm. Tricking people into thinking a dumb system is smart will not, however.
> Interesting btw how empowering workers to be able to make decisions on their own and say 'I'm not going to do this unethical thing' never comes up

I actually have this theory that the way these people conceive of AGI and the risks thereof has more to do with Silicon Valley marketing than it does with actual technology. The Tech Industry likes to conceptualize technological advancement in terms of great men, singular visionaries like Steve Jobs or Elon Musk whose vision and genius allow them to create wondrous technologies. The labor of the workers who actually make these technologies, both in terms of design and manufacture, is conveniently glossed over to justify why people like Jobs and Musk profit so obscenely from tech that requires entire industries to realize. Silicon Valley loves an idea guy, and the AI God is the ultimate idea guy. It is so unfathomably smart that it can impose its will on the world through what is effectively magic, cutting out all the steps that require actual labor or infrastructure. It's a god made in their own image, the product of science fiction and the California Ideology more than any actual understanding of the relevant tech.
Yeah that sounds pretty on the mark.
Too late, somebody let ChatGPT read The Fountainhead
Human hackers have found plenty of ways to hijack computing power. A smarter-than-human AGI could presumably figure that out too, and a decent zero-day would make a good chunk of the world's computing power available. Of course that's assuming you *have* a smarter-than-human AGI. That's the hard part.
Offloading computation onto hacked supercomputer data centers and/or IoT lightbulb microcontrollers adds a significant bottleneck: the internet is actually pretty slow, even if you have a lot of raw computation power within your grasp.
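A rough back-of-the-envelope sketch of why that bottleneck bites; every bandwidth figure below is an order-of-magnitude assumption rather than a benchmark, but the gap between an in-cluster interconnect and a consumer uplink is several orders of magnitude regardless of the exact numbers:

```python
# Back-of-the-envelope comparison: time to move one blob of data over
# different links. All bandwidth figures are rough order-of-magnitude
# assumptions, not benchmarks.

blob_gb = 10  # hypothetical 10 GB of weights/activations to ship

links_gbps = {
    "GPU interconnect inside one server": 3000,  # ~hundreds of GB/s
    "datacenter Ethernet":                 100,  # 100 Gb/s NIC
    "consumer broadband uplink":           0.1,  # ~100 Mb/s, often less
}

for name, gbps in links_gbps.items():
    seconds = blob_gb * 8 / gbps   # GB -> Gb, then divide by Gb/s
    print(f"{name:36s} ~{seconds:9.2f} s per {blob_gb} GB")
```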
I mean...distributed computing is a thing. A pretty damn important thing, at that.
Both training and inference require GPUs (well, this depends on the model architecture), though you can get away with far fewer resources for inference in a lot of use cases, because you're essentially predicting just one thing rather than working on a batch of gigabytes' worth of data for however many epochs before moving on to the next batch (and you want to get done before the heat death of the universe). So something like OpenAI's Whisper can subtitle and translate audio and video faster than real time on consumer-grade hardware, though training it took quite a lot more compute (I don't have the figures). (Of course, on my work laptop it can operate at only ~1/4 speed, which is good but not spectacular.) But something that has to be as performant and powerful as the hypothesized AGI he's talking about would probably need at least a modest data center.

But this kind of points out the absurdity of "AI Escape" and rogue-AI scenarios. You don't need to pre-emptively strike a data center because the AI got scary big and you're worried it will escape; you can just strike the data center (because it's big enough to need a whole data center) if it starts causing harm -- which is a general solution whether we're talking about the far-fetched "alignment problem" he's worried about or just the more likely, mundane scenario of human actors telling their beep boop machine to do bad stuff. And even there we have earlier interventions possible -- disrupting power, disrupting internet connection, etc.
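For a sense of what "inference on consumer-grade hardware" means here, a minimal sketch using the open-source `openai-whisper` package; the model size and input file name are placeholders, not details from the comment:

```python
# Minimal sketch of the Whisper example above, using the open-source
# `openai-whisper` package (pip install openai-whisper). The model size
# and audio file name are placeholders.
import whisper

model = whisper.load_model("small")    # fits in a few GB of RAM/VRAM on consumer hardware
result = model.transcribe("talk.mp3")  # hypothetical audio file
print(result["text"])                  # result["segments"] carries timestamps for subtitling
```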
> you can just strike the data center if it starts causing harm

rationalist canon is that Skynet will keep its plans for world domination a secret until it's too late for anyone to do anything about it, so the only way to do anything about it is to build a time machine and blow it up before it becomes self-aware. there's a whole three-part documentary on this.
> there's a whole three-part documentary on this.

wha?
[that's the joke](https://en.wikipedia.org/wiki/Terminator_(franchise))
dammit
If only they stopped at part 3...
In a properly set up cybersecurity environment, the security team notices when you try to exfil a lot of data, so the AGI escaping isn't even as likely as people think. So detection certainly is possible. Hell, they even caught the threat actor who only exfilled data while the computer they were using was streaming YouTube (to hide the data streams), though that may have been after the fact. So the idea that an AGI could just easily escape is a bit silly. (And sure, it could use various methods to hide the data being moved out, but that will reduce the speed it uploads itself to uselessness, a reason why hackers don't use tricks like that more often.)

(E: is it just me, or is there very little interest in and know-how about cybersecurity in the whole LW sphere? The AI-in-a-box problem seems solvable to me with some multikey and organisational setups (like keeping logs of all conversations, revoking keys of people the AI is trying to convince, shutting down the AI if it succeeds in convincing a person). For example.)
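To make the kind of detection being described concrete, here's a toy sketch of a volume-based egress alert; the log format, threshold, and window are invented for illustration, and real SIEM/netflow tooling is far more involved:

```python
# Toy illustration of volume-based egress detection: flag any internal host
# that sends more than a threshold of bytes to external addresses within a
# time window. Log format, threshold, and window are made up for illustration.
from collections import defaultdict

THRESHOLD_BYTES = 50 * 1024**3   # 50 GiB per window (arbitrary)
WINDOW_SECONDS = 3600            # one-hour tumbling window (arbitrary)

def flag_exfil(flow_records):
    """flow_records: iterable of (timestamp, src_host, dst_is_external, n_bytes)."""
    totals = defaultdict(int)
    alerts = []
    window_start = None
    for ts, src, dst_is_external, n_bytes in sorted(flow_records):
        if window_start is None or ts - window_start > WINDOW_SECONDS:
            window_start, totals = ts, defaultdict(int)  # start a new window
        if dst_is_external:
            totals[src] += n_bytes
            if totals[src] > THRESHOLD_BYTES:
                alerts.append((ts, src, totals[src]))
    return alerts
```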
The use of AI/"AGI" is always going to end up as a political problem. But for LW, politics is the mindkiller: libertarian technocrats are going to solve it. EDIT: I think people with knowledge of things are too dispersed or scared to contradict.
> is it just me, or is there very little interest in and know-how about cybersecurity in the whole LW sphere?

another on the long, long list of their inexpertises
You have probably seen this, or have written about it, but posting this for others to read here. [Thread by Molly 0xFFFFFF on the safeguards and cybersecurity of FTX](https://hachyderm.io/@molly0xfff/110171298698153770) The amount of incompetence on all levels is staggering. This shit was run worse than amateur student organizations.
> In a properly set up cybersecurity environment, the security team notices when you try to exfil a lot of data, so the AGI escaping isn't even as likely as people think.

On this topic, if you actually want to stop an AI from escaping from a data center, launching an air strike would be a lot slower than calling up the on-prem support team and telling them to hit the emergency power shutoff or chop the fiber lines with a bolt cutter.
Not only that, NotPetya has shown that malware has time to go all around the world a few times before you can even hit that emergency power button (30 seconds was all it took). So, if you were to fear a godAGI, launching a slow airstrike on it would give it ample time to launch a malware attack against the world. Malware has the bonus of being relatively small in data size; an AGI, otoh, is not. Of course, this makes it very obvious something is going on (and since we exist in the physical world and the AGI needs servers more than we need them, this will cause the godAGI's end).

QC being more grounded and connected to reality than you is not good

Wait, is this the 'did too much acid and got mad at my parents for giving me $100k' guy??????
> it’s just insane to me in retrospect how much this one man’s paranoid fantasies have completely derailed the trajectory of my life. i came across his writing when i was in college. i was a child. this man is in some infuriating way my father and i don’t even have words for how badly he fucked that job up. my entire 20s spent in the rationality community was just an endless succession of believing in and then being disappointed by men who acted like they knew what they were doing and eliezer fucking yudkowsky was the final boss of that whole fucking gauntlet.

given this para it's surprising QC is only as bad as he is
I was an edgy online atheist when I was thirteen as well and yet somehow I completely failed to adopt the yud as my substitute dad. Maybe there's other issues at play here
Yeah, now he posts on Twitter about mysticism
He also went into a monastery, but gave up playing monk after COVID hit
Also tried to make a homeless woman follow him around for cash
... *what?*
https://twitter.com/qiaochuyuan/status/1407095010844045312?s=21
okay well, some highly questionable behavior there but it was very probably a scam tbf
why? who is qc?
I used to follow his math blog back when he was a graduate student, and when he stopped posting I eventually looked him up to see what happened to him; I was pretty shocked to see that someone I knew only as a gifted mathematician either had been or got into the rationalist community and eventually became *gestures vaguely* whatever he is now.
There were a few Berkeley math grad students involved with the rationalists and/or EA groups. For a while someone at MIRI was even running a "decision theory" seminar that met in Evans Hall (the math building) but I'm not sure if it was ever official.

You mean a cadre of billionaire Anarcho rationalists can’t just drone whoever they want??

Got to get at least one Nobel prize for you to be able to drone whoever you want. (This post was sponsored by the 'Send more Presidents on holiday to Den Haag' organization).

A ban won’t stop the government from doing AI R&D and weaponization, but if it gets banned, OpenAI can swoop in and talk about how much they care about safety, therefore they should get those AI government contracts instead of Lockheed.

From this substack post: https://qchu.substack.com/p/eliezer

Love this comment, mirrors some of my feelings too:

> I saw this when randomly checking my old email and unsubscribing from all the cult stuff I fell into as a mentally ill young person. "Yudkowsky was my father" hits home. His writing (and others') was there for me like my parents weren't -- but not there for me like parents need to be for a kid to grow up healthy and able to contribute to our collective well-being.

> People like me are susceptible to what I might call his pompously grandiose paranoia. It feels like taking a red pill when you don't know any better. A lot of us have committed suicide because the perspective vortex is so punishing -- the promises of capital-R Rationality were so fantastical, and the reality of attempting to carry out the vision of the dojo was so banal and predictably like any other cult (or high-demand group.) The story of Leverage is a perfect example of the movement's irony.

> Yud has a lot to lose by admitting his life's work is essentially a new form of Scientology. But I don't. I'm happy to admit I succumbed to a cult of personality. I was a kid who liked Harry Potter and didn't have a real father figure to guide me in utilizing my relative surplus of potential.

> I survived. I have another chance. Thanks for reading.
[deleted]
[deleted]
Depends on how devoid your personal life is of suitable parental figures. Some people get really unlucky and don’t have *anyone* at hand who can provide that necessary guidance. They often end up glomming onto whoever on the internet gets closest at the right moment. Of course, there are certainly grifters who are *trying* to be father figures to vulnerable young men, so most people in this commenter’s position will end up in the orbit of an Andrew Tate or that muppet-voiced motherfucker whose name I can’t be bothered to look up right now. But some will end up attached to some odd replacement figures.
Well, at 23 he felt more like a “cool teacher” or an older friend. I bounced off traditional philosophy classes in my aborted attempt at college and his style of writing spoke to me. Some of the early Sequences still feel relevant and clearer than philosophy texts on the same subject. I might be very gullible lol
As someone who was rabidly against the very notion of "role models" as a kid, I'm just continuously astounded that anyone looks up to anyone else for anything, *but,* most people look for public figures that they feel an affinity for and to some extent look to them as models for how to navigate the world. It's a foreign concept to me, but I'm assured that this is normal human development for most people.
The comments on this post are frankly horrifying, e.g.

> People like me are susceptible to what I might call his pompously grandiose paranoia. It feels like taking a red pill when you don't know any better. A lot of us have committed suicide because the perspective vortex is so punishing -- the promises of capital-R Rationality were so fantastical, and the reality of attempting to carry out the vision of the dojo was so banal and predictably like any other cult (or high-demand group.) The story of Leverage is a perfect example of the movement's irony.

AQUAMAN!

If I had access to “rogue data centers” and wanted to take them out I would simply cover the electrical outlets.

y’all deserve to be huskified

When I want your opinion I will read it in your still steaming entrails. Now shoo.

The answer is obvious. Eliezer would be the one doing this. He wants unilateral control of the world’s military which he deserves for being a very special boy.