r/SneerClub archives
AI Is as Risky as Pandemics and Nuclear War, Top CEOs Say, Urging Global Cooperation (https://time.com/6283386/ai-risk-openai-deepmind-letter/)

please check what’s been posted already

also, there’s some weird brigade-y shit going on in the votes on these posts, and that’s likely to get a post locked too

[deleted]

Yann's out of touch, which is why it's funny that even he chose not to get involved with what's in the OP, and he generally derides AI doomers. The latter is unfortunate because he doesn't distinguish between dumb AI doom, like Skynet prophecies, and the real problems AI causes and will make worse.
I think Yann is one of those people who have made up their minds and straight up refuse to engage with anything related to AI Alignment. He once asked why an AI system would even desire self-preservation, when instrumental convergence is one of the basic parts of AI Alignment theory and directly answers that question.
did you use the wrong alt and think you were in a rationalist sub for a moment
Part of the problem is that, when you program AI professionally (which I have done), you can start to see the limitations of these systems and realize that they're "just code", which is true but also misleading. We tend to mock the other outcome, overpromising and developing "golden hammer" syndrome, but it's equally common to get a myopic, limited view based on knowledge that is relevant at the time but might not apply to all contexts. So you hear arguments, from people inside the technology industry as much as from outsiders, that you can just "pull the plug" on a "killer robot." But a rogue AI, if one ever exists, is very likely to reproduce itself via malware, infecting the whole Internet so that we can't turn it off without intolerable loss (i.e., by taking down the whole network, which we can't afford). Are AIs going to become sentient? I strongly doubt it. They're deterministic, unless connected to a source of physical randomness (most computer randomness is pseudo-randomness, which is good enough), and that source will (probably) not itself be sentient. Do they have minds of their own? In metaphor only. Are they predictable, though? By us, the answer is no. These things already outclass us at board games; it's inevitable that they'll defeat whatever monitoring we place on them. Plus, we live in an adversarial world. Governments and capitalists are constantly trying to control, mislead, and exploit us. Combine the millions of ill-intentioned, adversarial human actors out there with increased capabilities that are unpredictable *even by those who are using them*, and there's reason to be nervous. Of course, this is too nuanced a discussion for a lot of people, including most of the tech executives spouting boneheaded takes in one direction or the other.
Bruh your account is a week old. Is this some astroturfing shit?

This was posted already.

Oh, could you give me a link? I didn't see any threads about it when I sorted by new.
It was the previous post, but the link was to the statement, not the Time article. That whole thread is about the statement.

Funny how when something ostensibly directly threatens the welfare of society's elites, we get exhortations to global cooperation and playing it nice. When it doesn't directly affect them (the data theft, labor discipline & societal misinformation), mum's the word.

This folks here is what is known as the most unbiased, least-tainted-by-any-merely-human-instincts kind of altruism. Its effectiveness is so transcendent you simply can’t ever hope to understand it at your IQ/net worth level tho; that’s just a hard truth, how things work in the world.

You can read the short statement here: https://www.safe.ai/statement-on-ai-risk

Among the signatories are the CEOs of arguably the three most important AI labs:

  1. Demis Hassabis, CEO, Google DeepMind

  2. Sam Altman, CEO, OpenAI

  3. Dario Amodei, CEO, Anthropic

And two of the three "godfathers" of deep learning (Yann LeCun did not sign):

  1. Geoffrey Hinton, Emeritus Professor of Computer Science, University of Toronto

  2. Yoshua Bengio, Professor of Computer Science, University of Montreal / Mila

You missed the three most important names: Grimes, Lex Fridman and Sam Harris
Surprised they didn't include Kanye.
My two cents is that, given the signatories and the content of this statement, it would be hard to deny that AI x-risk has become a mainstream position among AI researchers and leaders.
These people are not wrong that AI risk is a real issue, but they're also untrustworthy and self-interested. Their strategy is regulatory capture. They don't want to achieve AI safety. They want to stop competition and become the sole supplier. They also want to ingratiate themselves with today's political leaders and win war contracts from the Pentagon so they can become the most powerful people in the nation, and they realize that, in order to do that, they have to come off as sensible.
Despite that being the literal, explicit content, I somewhat disagree. AI has a lot of potential to fuck people over, even at 1000x smaller existential risk than nukes. Some of the signatories may be willing to accept a slightly inaccurate exaggeration about "extinction" if it gets the point across better about massive consequences. "Extinction" is not, I think, a mainstream position. It's just no longer considered a laughable one. (Except by the fine patrons of /r/sneerclub, of course.)