r/SneerClub archives
Are they all wrong/disingenuous? Love the sneer, but I still take AI risks very seriously. I think this is a minority position here? (https://www.safe.ai/statement-on-ai-risk#open-letter)

this post has turned into a worked example of why sneerclub is not debate club. locking until next time.

Which ai risks? The ones where political dissidents are identified and arrested? The ones where automated weapon systems are used to kill soldiers and civilians alike? The ones where a control system at a factory is shown an adversarial example, and fails catastrophically, killing several employees?

Or the ones where, in a matter of hours, a data center turns into a magical superintelligence who proceeds to delete all life on earth using an engineered nano-robotic virus?

This, so much this. It annoys me to see the conversation being shifted to stupid sci fi scenarios instead of discussing the real implications of AI.
I think it's telling that all the consequences you see in the first risk are issues we already face. Suppression of political dissidents, automated weaponry, unsafe control software: We've got 'em all already.
That's very deliberate: current AI tech is undeniably powerful, but it only enables us to do things we could already do, just in a more automated way (and *occasionally* with better results). I've yet to see any example of AI doing something humans fundamentally couldn't, and I don't expect to for quite a while.
I figured you might be trying to make a point like that. Nicely written.
TBH, none of the real risks require AI to become a reality. Only human negligence, envy, hate, and stupidity...
I mean, yeah, but you could say the same for nuclear risk, and so much else. The point is that AI is a powerful technology that will be (and indeed already is being) misused. If you can solve the hard philosophical problems, then by all means go ahead; in the meantime, though, we have to provide whatever social, legal, and technological half-solutions we can.
It's worth noting that the philosophical questions they're asking are not going to help with the social, legal, and technological problems that machine learning, or rather the implementation of machine learning in the context of late-stage, rent-seeking capitalism, causes. In fact, the questions the LWers and AGI prophets are pushing detract from the real issues around labor and automation, compensation for training data, imported bias in "objective" data analysis, etc.
100%. we should just treat the technology with the same cultural relativist lens that we view the printing press or the Internet.
I think the point here is that a lot of these problems (which boil down to automating decision-making with computer programs) are about shifting responsibility by claiming that some objective mathematical formula is making the decision, and not the man behind the curtain. This does not require the computer to do what people claim, only for people to believe it does.
AI, like tech in general, acts as an amplifier of human negligence, envy, hate, and stupidity. It should be judged through a reasonably cautious lens for those reasons, but that's not what's happening here. Instead we're worrying about Skynet and not what kind of hell automation and AI can unleash today to satisfy the greed of the wealthiest people on the planet.
Or companies replacing white collar jobs with AI because they can make more profit that way and don't entirely care about AI hallucinations or accuracy, but can pump out content faster or feed targeted answers into databases with less programming. But let's not worry about late stage capitalism and instead worry about sci fi stories. I'd rather just read Banks or some other good sci fi authors instead.
That's the point and why so many CEOs are willing to jump on the cause. AI safety is to tech companies what greenwashing is to oil companies.
> It annoys me to see the conversation being shifted to stupid sci fi scenarios instead of discussing the real implications of AI.

My preference is that they walk & chew gum.
A really good book about actual AI risks is *Weapons of Math Destruction* by Cathy O'Neil. Except she wrote it before "AI" made it into the popular imagination, before basic machine learning was routinely called "AI", before even machine learning was in the popular imagination, so we were just talking about big data. The book is about problems that have already been going on for years, like inequality in automated decisions about loans and résumés, rather than vague scifi apocalypses. Yet it still holds up very well despite the hype revolution since then.
I highly recommend Cathy O'Neil's book, too. Being written before the AI hype made the book especially good. If anyone is interested in a quick overview, [I wrote a review on my blog just before the book came out](https://egtheory.wordpress.com/2016/09/14/social-algorithms/). Many of Cathy's blog posts from before the book are also great, [here is an annotated index](https://egtheory.wordpress.com/2014/09/05/ethics-of-big-data/) of some of them that I especially liked.
the ones used by a couple of corporations to commodify *everything* and automatically charge citizens "accordingly".
I mean the status quo isn’t really an ai risk, but sure.
it will be status quo 2: high speed personal alienation hell. same shit but with enhanced policing.
> Which ai risks? The ones where political dissidents are identified and arrested? The ones where automated weapon systems are used to kill soldiers and civilians alike? The ones where a control system at a factory is shown an adversarial example, and fails catastrophically, killing several employees?

While it may not be quite as permanent or as bad as outright "extinction," the possibility of greater war and literal dystopia is sufficient to be called "societal-scale risk."
Even that last bit

> Or the ones where, in a matter of hours, a data center turns into a magical superintelligence who proceeds to delete all life on earth using an engineered nano-robotic virus?

is something you can consider. All things are possible with god, so jot that down, etc. etc. IMO the problem is these dudes acting like existing and inevitable harms and risks should just be ignored in favor of more severe risks that are either vanishingly unlikely or WAY the fuck off on the timeline. Could the singularity happen? Sure! Can it happen through a convolution matrix connected to a graph? Probably not!
Poorly specified indeed, due to lack of space in the title. I am indeed talking about catastrophic risks. The ones outlined in this thread are, I believe, hardly disputed by anyone (the scale to which they matter in relation to catastrophic risks is, of course, disputed by some).
hardly disputed? where do these claims of catastrophic risk meet the burden of proof?
I meant concerns that DON'T have catastrophic risk implications/require strong AI are not disputed. E.g. JDrichlet mentioned identifying political dissidents and autonomous weapons. You could add election manipulation here as another classic. My impression is that none of these are disputed by the strong AI crowd as being issues, and certainly here in the club they are viewed as serious as well. My question was hence pointing at the other, more controversially discussed category of risks. Hope that clears it up.
sorry, but I'm not following what you're saying. can you lay out the categories and the ways in which each category meets the burden of proof? I think we're all familiar with social media ML being toxic and surveillance tech being commonplace, yet it's not unreasonable (imho) to reiterate how those meet the burden of proof without hearsay.
There's also the fact that, to the degree that a lot of the "class 1 verging on class 2" scenarios involve military capabilities and the kind of conflicts that can only be zero sum or worse, they are pre-emptively urging capitulation of one particular side. This shit's in English, not Chinese. Like to the degree that they're all exactly the kind of overly-educated white people who would never be caught dead at Jan. 6th for purely aesthetic red team/blue team reasons and are (probably correctly) shrieking about it being treason even in the face of all the dumb shit that has previously gone down in Congress in previous ages, it's worth pointing out that by their own metrics, what they are actually advocating for is straight pre-emptive Vichy butthole spreading.
what are you talking about
"Rev up the Consent Machine, we need a new Forever War"
China will be continuing with AI research regardless, because the Chinese are not fucking stupid and self-defeating. If the public AI bitches do not realize that, they're morons. If they do realize it, they are advocating for pre-emptive surrender/disarmament for no good reason.
That’s entirely irrelevant to the conversation we were having
> The Chinese Delegation pointed out at the meetings that the Chinese Government actively supported the formulation of an International Convention against the Reproductive Cloning of Human Beings because the reproductive cloning of human beings is a tremendous threat to the dignity of mankind and may probably give rise to serious social, ethic, moral, religious and legal problems. The Chinese Government is resolutely opposed to cloning human beings and will not permit any experiment of cloning human beings, and for this purpose has formulated the Measures for the Management of the Technique for Human Auxiliary Reproduction.

https://www.fmprc.gov.cn/eng/wjb_663304/zzjg_663340/tyfls_665260/tyfl_665264/2626_665266/2627_665268/200310/t20031028_600090.html

China's not big on anything they don't think can be effectively controlled.

Is it just a huge coincidence that the AI doomsaying went mainstream once VC money started drying up? Must we ignore the financial incentive to exaggerate risk for fundraising?

EDIT: Also, the question as presented, which assumes all of the signatories must share the same motivation, is just not true of how any movement comes to be. The question encourages exaggeration.

I think it is mostly a coincidence. Certainly there's some number of guys who went from bitcoin boosters to AI doomers, but even that probably would've happened (and might have happened more) if there was still VC money to pour into AI. The real thing that happened is that ChatGPT/generative AI has reached a point where it is accessible and impressive to laymen, which has raised the salience of AI. So we get more AI grifters getting attention.
It really doesn't need to be one or the other. Fostering panic over generative AI is obviously the move when tech investors are demanding higher returns. The threat of AI needs to be greater and more imminent to sell the value of AI safety.
But many of these signatories work, and will continue to work, on things that aren't AI safety.
[deleted]
Upvoted, but I laughed at that myself in other places
The statement the people are signing on to is quite minimalist. It's one sentence and most people could look at it and say, "Okay, yeah, sounds reasonable, there's surely some danger here, I'll be responsible and sign on." Some of the people promoting it have really maximalist agendas (eg, EY). In fact, the main movers behind it are grifters of his sort. It's just PR for the EY sorts.
I agree, most of the people who signed this wouldn't support the Yudkowsky agenda. Which is a good thing.
Yes, but then EY and such are going to use it to draw publicity to themselves.
Yeah, but maybe someone else picks up the pen, ignores Yudkowsky's bunch, and does some useful research about this for a change (or funds it) 🤷🏼‍♂️
And this prevents them from believing the bullshit?
It means they have no incentive to hype this.
Peer pressure and groupshift in the tech sector. And, yeah, there are some true believers among them.
I get that there are some people there you wouldn't count on (Grimes? Really?). But I'm saying at least two of them are respected researchers, whose work I know, and neither them nor their peers have anything to do with AI safety. And there are dozens whom I don't know but I think are in the same kind of position.
I respect Stephen Hawking but I think his take on the risks from aliens is dogshit.
Is that a comment for me?
Yeah.
Are you trying to say these people have no idea what they're talking about? They're AI professors talking about AI, not a physicist talking about aliens and certainly not the LW crowd. I get the point that you think it's a fictional idea not really related to actual existing AI, but they seem to think differently. I didn't plan to comment on this thread any more because I don't really care if anyone on this sub thinks they're right or not. But the idea that anyone talking about AI as a potential existential risk necessarily has no idea what they're talking about has somehow become as strongly enshrined here as Yud's ideas are in LW, which I find at least ironic. I guess the counterweight is needed though.
A perpetual motion machine does not become more plausible when a physics professor talks about it.
I wanted to reply that that's because they never do, but [it seems I was kind of wrong](https://gizmodo.com/physicists-believe-its-possible-to-build-a-perpetual-mo-483239489) - and that the idea, despite being different from what is traditionally meant by a "perpetual motion machine" (it doesn't give you a supply of energy but rather is a system whose lowest energy state includes periodic motion), has been confirmed experimentally and might even find some uses! ([Wikipedia](https://en.wikipedia.org/wiki/Time_crystal)) On the other hand, we've had sadder stories, like the very respected mathematician Michael Atiyah claiming, in old age, to have found a simple proof of the Riemann Hypothesis, and then publishing something wrong and nonsensical.
There are a lot of crackpots among professional academics. Even academics who are certainly right about some things can be crackpots about others. I once met a mathematics professor who specializes in probability theory and who also plays the lottery (no, he doesn't win). People can rationalize really weird beliefs when they don't understand their own emotions. I don't think that argument from authority is always invalid - we can't all be experts in everything, you need to trust people at some point, etc. But if there's a time to be skeptical it's when an expert is telling you that the world is going to be destroyed. That's a big claim that needs more than a "trust me bro". This is especially true when that expert is making claims that depend on other areas of study about which they actually know almost nothing. What does e.g. Hinton know about the physics of computation? Nothing. The practical aspects of industrial production? Nothing. MLOps? Nothing. Cybersecurity? Nothing. Robotics? Nothing. And yet you'll just take him at his word about computers causing *the end of the world?*
You said they didn't have anything to do with AI safety; I assumed that meant they weren't working in AI at all.
Cool. You recognize some names who genuinely believe that Skynet is going to kill humanity. Thank you for your perspective.
Glad to help!
Yeah, but the people who have really, really been pushing it generally work on AI safety. Bill McKibben isn't one of the people primarily responsible for pushing this; he's just someone with a well known name they could get to sign this petition.

“They” are not all disingenuous, but there is a characteristic refusal to engage with, argue about, or acknowledge the “this is already awful” scholarship in favor of “No no AI could be exponentially worse and I will come and save everyone.” And this is what I can’t stand. “Concern” about AI safety but utter disinterest in Joy Buolamwini or Timnit Gebru’s work? Genuine concern would not keep dodging current issues like they were Rip Torn wrenches. I would judge one’s concern about clean water similarly odd if securing it for Flint is boring but devising it for Mars existentially primary.

But if we don't solve it for mars, how will we solve it for the untold trillions upon trillions of 10^50 of souls in humanity's future light cone? How?!?
Paperclips of ice
I guess for the analogy to work they’d have to think that Flint will no longer be relevant soon because we’ll all be on Mars. They don’t worry about current AI issues because it’s evolving so rapidly that they think any solutions would quickly become obsolete. I’m not saying I agree with that, though.

Honestly, I find them disingenuous regardless of being right or wrong.

Many, such as Yud, have been paid millions of dollars over the years to study a problem that they have spent that entire time building a cult of personality around. Also, while accepting money researching this problem, most of them claim it is unsolvable. Notice the absurdity?

Honestly, if you take AI x risk seriously, you likely will end up just building more AI anyways.

I’d characterize a lot of the crowd this subreddit talks about as wrong but not disingenuous. They really believe the stuff they talk about, it’s just that the belief is thin on evidence. I’m not gonna make any claims about the general dangers of AI because that’s above my paygrade. But the lesswrong crowd has made some specific claims that are clearly just incorrect or poorly thought out.

I'd call Altman et al disingenuous. If the risk is so great why are you still working on it, Sam? ohhh, the money, ok
That could be true but they strike me as wannabe messianic types who'd develop this stuff because they think they can save people from the danger if they're in control of it.
I think Altman is using "our stuff is so powerful it could destroy the world omg" as a sales pitch, irrespective of whatever his personal views on it are.
you are correct in your assessment. here's [Sam Altman's thoughts about being a tech founder](https://blog.samaltman.com/how-to-be-successful).
No meme actually love this, as someone in the startup space. Bookmarked, thanks for sharing.
Iirc Sam Altman owns no shares in openai… I think he just has a god complex.
He is the CEO of the company?
Yes, but he holds no equity in it. Dude is already rich as fuck, the point I’m making is that it’s not as simple as him “doing it for the money”. From the way he talks it’s very clear he has a messiah complex and is high on his own supply.
Oh, sure, that's fair. He's definitely got some motivation in that vein
My personal opinion: better AI is needed before we can seriously address existential risks from AI, otherwise we have no idea what we're dealing with. I do understand that there's a trade-off with non-existential risks that are already hurting people and just getting worse.
> But the lesswrong crowd has made some specific claims that are clearly just incorrect or poorly thought out.

Like what?

Edit: Banned for disagreeing, so won't be responding. Users should keep in mind that this sub does not allow dissent. In case you were wondering why it's such an echo chamber.
Well, for starters, they think that AI is the biggest threat we're facing and that the threat is specifically that they'll become focused on our destruction. AI may be a threat, but there's a lot to be done still to turn our chatbots into things capable of thought. They're not capable of intent, much less hostile intent, and there's no reason to think that they'll achieve those capabilities sooner than we'll face serious challenges in the form of water wars and the like. They will cause harm by taking jobs from people, but this isn't deliberate destructiveness or the result of bootstrapping itself into small-g godlike intelligence and then deciding humans have to be eliminated.
> Well, for starters, they think that AI is the biggest threat we're facing and that the threat is specifically that they'll become focused on our destruction. AI may be a threat but there's a lot to be done still to turn our chatbots into things capable of thought.

Your second sentence does nothing to counter the first, and the first is a strawman. The worry isn't that AI will be *focused on our destruction* but rather that our destruction will be an unimportant (from the AI's POV) side-effect of achieving its actual goal.

> They're not capable of intent, much less hostile intent

Phew, good thing they're not improving at all! Otherwise we could be in trouble. Oh wait...
If they had stopped at merely claiming this was possible there wouldn't be an issue. But they sounded all the alarms and said we have to focus all our effort on dealing with this right now above all other concerns.
> But they sounded all the alarms and said we have to focus all our effort on dealing with this right now above all other concerns.

They aren't sounding the alarms because they think it's certain. Even Yudkowsky, who's probably the most "certain" person I've seen on the topic, leaves open the possibility that he's wrong. And most other doomers are less certain than that. The point is that if it IS possible, and in fact not entirely unlikely, *that is the time to sound the alarms!* The time to sound the alarm isn't when the thing is already capable of destroying us, because then the alarm is moot. They're trying to slow development *before* it becomes dangerous. If your issue is that doomers don't recognize that this is merely one possibility, your issue is completely unwarranted, because basically every doomer I've ever seen does say that it's only possible.
If it's capable of doing that at some point then there's no stopping it because you can't control every programmer on the planet and there are certainly enough jaded nihilists out there who'd like to see everyone die. This is different than nuclear proliferation because the tools to make this stuff are much more readily available, so available that you couldn't effectively ban this.
Most glaringly their insistence that an AI can bootstrap itself from general intelligence to superintelligence in a short timeframe just by thinking hard enough. The process of experimentation, testing, iteration, etc. can't be replaced by thinking super hard.
> Most glaringly their insistence that an AI can bootstrap itself from general intelligence to superintelligence in a short timeframe just by thinking hard enough

And how exactly is this "clearly just incorrect"? What evidence do you have of that?

> The process of experimentation, testing, iteration, etc. can't be replaced by thinking super hard.

Oh wait, so when you say "thinking really hard" you don't mean "doing a bunch of experimentation and iteration and code improvement in a short time", but rather "literally just sittin' and thinkin'". Yeah, it's neat how ideas seem stupid when you change them to make them stupid.
It's taken an untold number of man-hours to get AI to the current state of the art from our early work on neural networks and machine learning. A lot of that time has been dedicated to noticing and identifying where a problem is, which requires repeated testing and iteration. Even if we're assuming that the base-level AGI is capable of doing that kind of work completely independently and without humans intervening to provide information inputs or confirm whether a given change in output constitutes an improvement or a regression, that AI is still going to be limited by its existing hardware, its existing model of the outside world, and its existing set of thought processes.

I mean, let's take one of the most basic obstacles that an early AGI trying to bootstrap itself is likely to run into. Most scenarios involve the AI escaping onto the internet somehow. Let's assume that this means copying its source code discreetly to remote hardware either under its own sole control or otherwise beyond the reach of an off button. Even if we assume that it's able to independently form this plan without its attempts at escape being noticed and dealt with, if it's running on a corporate network, then IT is going to have a *lot* of questions about why the AI lab is sending emails to various foreign companies asking about server space, and/or finance is going to have a *lot* of questions about why their AWS budget has ballooned, AWS is going to have questions about where the money for this is coming from, and/or banks are going to want a physical address where someone will need to handle mail, etc.

There are a whole lot of obstacles even for this first step towards superintelligence, ignoring whatever conceptual leaps need to be made to get there. While I'm not arrogant enough to say that it's impossible to overcome all of these (and more), it is wildly improbable that an AI as smart as I am or slightly smarter (remember, this is all pre-FOOM) would come up with such a plan and execute it perfectly on the first attempt.
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
Feel free to explain how anything there is "clearly just incorrect".
I'll give you some highlights:

* treating "instrumental convergence" and "orthogonality" as givens, instead of tenuous presumptions that don't have strong empirical evidence or strong philosophical theory
* AGIs building Drexler-style nanotech as a serious example (it's been posted on LessWrong itself why Drexler-style nanotech is a fantasy and doing substantially better than biology is implausible if not provably impossible; here is one such post: https://www.lesswrong.com/posts/FijbeqdovkgAusGgz/grey-goo-is-unlikely)
* arguing we only have one critical try at alignment (this explicitly disregards that lesser alignment approaches like RLHF are being used, put into practice, and improved on right now)
* arguing inner alignment and outer alignment will, in the default case, strongly diverge. Eliezer's one example is evolution, but this analogy fails in a lot of ways: https://www.alignmentforum.org/posts/FyChg3kYG54tEN3u6/evolution-is-a-bad-analogy-for-agi-inner-alignment

And that list is really long and rambly, and I've done too many effort posts that have been ignored before, so let me know if you think List of Lethalities has any knockdown points I missed.
> treating instrumental convergence and orthogonality as givens

I have yet to see any compelling explanation for how this could NOT occur given what we currently know. The best I've seen is "Well maybe it won't :)" while the argument in favour of IC is strong.

> it's been posted on Lesswrong itself why drexler style nanotech is a fantasy and doing substantially better than biology is implausible if not provably impossible, do a search

Interesting that you take this one Lesswrong post as truth, while claiming other posts which disagree with it are obviously incorrect. Sounds like motivated reasoning to me.

> arguing we only have one critical try at alignment. (This explicitly disregards that lesser alignment approaches like RLHF are being used and developed right now and put in use and improved on)

In fact, it does NOT explicitly disregard that, but actually explicitly mentions that, while pointing out that the true test is when trying to align something capable of destroying us, a failure of which could result in extinction.
> how this could NOT occur

If you want to play Burden of Proof games I'm not really interested in engaging. I will go as far as explaining why I think the burden of proof lies on Eliezer's/Bostrom's claims. Instrumental convergence is a claim about how hypothetical minds would act. Animals don't have instrumental convergence. Humans poorly and sloppily make some actions in that direction, but they don't really systematically converge on pursuing instrumental goals, so they aren't really an example. Existing AI systems don't engage in instrumental convergence. Because there are no existing examples, the claim needs detailed philosophy-of-mind work on why minds would work like that.

The case for orthogonality is a little bit better: we can see machine learning approaches now get optimized for some secondary or spurious trait or feature, but Eliezer brushes over all the object-level details in favor of sweeping philosophical claims (which he doesn't actually use rigorous, systematic philosophical argumentation for). See, for example, Eliezer's failure at responding to Chalmers for more burden of proof games (edit here: https://www.reddit.com/r/SneerClub/comments/12ofv59/david_chalmers_is_there_a_canonical_source_for/).

Drexler has been debunked by people making detailed explanations about how his specific designs won't work and how similar desirable features of nanotech in line with his claims violate chemistry. I have not seen any similar detailed defense of his work's plausibility. The phenomena and designs Drexler originally extrapolated down to the scales he did simply don't work at those scales. This isn't motivated reasoning, this is recognizing who actually has gone through the evidence.

Eliezer has (successfully, in your case) played a rhetorical game where any lesser alignment solution on lesser AI doesn't prove/demonstrate anything in his framing. His framing means only proving a negative is enough to validly disagree with him.
I hope you don't mind, but this has to be one of the most articulate descriptions of argument from ignorance, wrt AI safety sophistry.
Edit: I can't tell if you are saying I'm making the argument from ignorance or the doomers are. Either way, I do mind… If the other side is making claims about known unknowns and unknown unknowns, and I'm the one pointing out how unknown and ignorant the field of knowledge actually is, I'm not making an argument from ignorance, I'm explaining the actual state of knowledge. And either way, if someone wants national policy to be set to drone strike non-signatory nations, even at the risk of nuclear war, then the burden of proof is on them.
the doomers are, imho. and to your last point, even the mention of that philosophy should be alarming to someone on the fence. I described it with a little more hyperbole as such:

> "we don't know this, but the moral implications we're stating are severe enough to justify the promulgation of our ideology to the masses via media circuits, and the codification of our beliefs into law that extends into international hegemonic control that would contravene basic democracy or sovereignty"

to me that is a dangerous slippery slope without dissenting criticism and empirical/epistemic rigor. apologies if this came across as sarcastic.
In what sense do animals not have instrumental convergence? I don’t know of any species that don’t value food as an instrumental goal. Most animals have self preservation as a goal.
Animals eat food because they are hungry, not because of a careful calculation about how they need calories to pursue their primary goal. Evolution converged on hunger as a way of increasing reproductive success; the individual animals didn't converge on anything. Humans are capable of calculating how they need calories to pursue their broader goals, but mostly they eat when they are hungry or feel other immediate desires, even when these desires conflict with longer-term goals (see all the people who have trouble maintaining healthy eating habits because their hunger and taste are unreliable, and they mostly act on immediate desires instead of rationally calculated subgoals). All of this amounts to: Clippy, even if it is on some level trying to paperclip the universe, might get sidetracked maximizing paper clips in the short term, and fail to bootstrap unlimited resources and then exterminate the human race.
Animals are not expected food maximisers. They eat if they are hungry, and do not eat if they are not. They might plan ahead by storing some food, but only enough to survive. They do not plot to absolutely maximise food by tiling the universe, as is claimed in the instrumental convergence hypothesis.
> If you want to play Burden of Proof games I'm not really interested in engaging.

The burden of proof is on the person making the stronger claim. One group is saying something is POSSIBLE, and therefore warrants concern. One group is saying something is IMPOSSIBLE, and therefore does not warrant such concern. In my view, the "possible" crowd has sufficiently shown it to be possible (I'd say "highly likely"), while the "impossible" crowd has not sufficiently proven their case.

> Animals don't have instrumental convergence. Humans poorly and sloppily make some actions in that direction but they don't really systematically converge pursue instrumental goals so they aren't really an example

Are you honestly saying that humans and animals haven't converged on the goal of survival? Because that's the main aspect of instrumental convergence in AI that's relevant to risks. If it comes upon a desire to survive and to maintain consistency, as pretty much all animals and humans have done, then it will seek to prevent us from killing/changing it.

> Eliezer's failure at responding to Chalmers for more burden of proof games

You'll have to expand on that because I see no failure there. Yud's reply makes sense, and if you really want someone to try to play whack-a-mole, someone else in the replies [gave their own version](https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:start).
> something is POSSIBLE

Eliezer had put his P(doom) at greater than 98% and explicitly described (in multiple recent podcasts) all plausible scenarios as converging on doom, so no, you can't shift the burden of proof like this. He is also calling for extreme courses of action, like a willingness to bomb non-signing countries' data centers or graphics card manufacturers, even at risk of nuclear war, so this isn't academic: he genuinely thinks national policy should be set as if all of his presumptions are near certain.

> converged on the goal of survival

Instrumental convergence refers to the agent itself converging on pursuing instrumental goals that were not necessarily programmed into it. Survival instincts were instilled by evolution, not by individual humans or animals deciding to pursue survival as an instrumental subgoal of their "original" primary goals. Humans intentionally deciding to accumulate resources as a subgoal of a primary goal is an example of instrumental convergence, but objectively humans are bad at this, with strong temporal discounting that was likely evolutionarily adaptive but means it is difficult and uncommon for someone to defer pleasure or positive things for years in anticipation/calculation of greater net value further into the future. Humans are capable of this, but it isn't easy or automatic (a great deal of enculturation goes into developing work ethic) or complete (humans will partially defer pleasures, but doing so completely and uniformly often crushes morale), so assuming any generally intelligent mind must rationally and systematically pursue instrumental subgoals is a major assumption.

> whack-a-mole

If you claim a 98% chance of doom, the burden of proof is on you to show that all contrary cases are less than 2%. Eliezer has had years to compile his scattered blog posts into a coherent, complete, concise, well-cited, and formal academic paper and hasn't bothered.
> Eliezer had put his P(doom) at greater than 98%

Sounds like he's saying it's possible (and very likely), which remains a less strong claim than IMPOSSIBLE, as his detractors seem to be claiming.

> Survival instincts were instilled by evolution

...As a result of trying to optimize for something else, namely the spreading of genes.

> so assuming any generally intelligent mind must rationally and systematically pursue instrumental subgoals is a major assumption.

Not as major an assumption as assuming it WON'T.

> the burden of proof is on you to show that all contrary cases are less than 2%.

His point is that he can deal (and in my experience, has dealt) with contrary cases as they're brought up; the point is that there are essentially infinite claims which can be brought up, making it effectively impossible to nip them all in the bud in one document. And in my experience, those which can't be nipped in the bud rely on even more extreme assumptions and hopium than Yud's position.
> Sounds like he's saying

Okay, at this point, I don't think you've actually listened to any of Eliezer's podcast interviews or read the Sequences. Which is valid, they are long and ramble. But if you are going to argue about this, you should Read the Sequences^tm first!
> Okay, at this point, I don't think you've actually listened to any of Eliezer's podcast interviews or read the sequences.

I'd say the same of you. I've listened to all the available ones. As far as I'm concerned, he's fulfilled his burden of proof, while you haven't even attempted to do so, instead saying "Not only do you need to explain your thinking, you also need to disprove every scenario under the sun, and when you do, I'll just give you 100 more scenarios out of the infinite number of scenarios!" We have two groups with two different assumptions. You are requiring Yud's group (the one with the slightly less extreme assumption) to prove their assumption to a ludicrous degree, while requiring literally nothing aside from "Nahhhh fam" from his detractors with the more extreme assumption.

Edit: Banned for disagreeing, so won't be responding. Users should keep in mind that this sub does not allow dissent. In case you were wondering why it's such an echo chamber.

>barges into any discussion or argument
>“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
>refuses to elaborate
>leaves

It's just pure doomsaying, plain and simple. None of the x-risk scenarios make sense or are actually supported by any evidence. But because the statement is this vague and broad, it's difficult to dismiss, because there's actually so little substance to it at all. But you still get the public to be all like "oooh, so all these people say AI is as dangerous as nuclear weapons??"

there’s no “AI risk” because these chatbots are not AI

I don’t think anybody claims there’s any existential risk right now or in the coming months.
You can find, in this very subreddit, links to Eliezer claiming:

* GPT-2 has a grasp of intuitive/common-sense physics (he formed this idea based on interactions with AI Dungeon)
* GPT-type approaches could break hashes
* a sudden drop in the loss function during training could indicate the AI has made a hard breakthrough

In his TED Talk, Eliezer described it as 0-2 more breakthroughs before AGI. The (relatively) saner end of the doomers don't think it's a matter of months, but Eliezer has seriously entertained the idea that GPT is enough to make AGI without any further paradigm-shifting breakthroughs. Eliezer also promotes the idea of hard take-off: an AGI could self-improve and bootstrap more resources in a matter of weeks or even days. So yes, some of the doomers, at the very least Eliezer himself, think it might only be a matter of months.

Yeah, my biggest misunderstanding with this sub is how they laugh at all AI threats. Ok, cool, you think Yudkowsky's specific idea for AI killing all humans is unlikely/impossible. But that doesn't mean that AI is harmless. It's potentially an extinction level event. I think it's unlikely, but the chance doesn't seem to be zero, so it should be taken seriously imo.

> It's potentially an extinction level event

No, it isn't.
How do you know that? Seriously this is a brand new threat. How can you be sure that it's a zero percent chance?
Because I know how the technology works. It only seems like it could destroy the world if you don't understand it.
You might be right. I don't understand how the technology works. But I understand the concept of self improving technology. And I understand that even ChatGPT's basic ass sometimes lies. And I know that people smarter than me who understand the technology take the threat seriously. Maybe I'm just dumb, but that seems like enough for me to consider the threat possible.
> And I understand that even ChatGPT's basic ass sometimes lies

"Lies" here implies intent to deceive and wrongly casts ChatGPT as some kind of scheming villain. In point of fact, it is just constructing plausible-looking sentences, with no regard for whether what it says is true or not, because it's just trying to predict a likely next word for the sentence. Like, yes, it's unreliable and it's a bad fit for almost all the use cases it's being sold for, and it can write simple algorithms that you can find on stackoverflow, but it's not at any more of a risk of becoming "self improving" and taking over the world than a [markov chain generator](https://github.com/meadej/TheBookOfMarkov/blob/master/volumes/I.txt) is. It would be bad to put ChatGPT in charge of most things, but it would be bad because it's a nonsense text generator, not because it's going to take over the world.
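For anyone unfamiliar with the comparison, here's a minimal sketch (my own toy example, not from the linked repo) of a word-level Markov chain generator: it records which words follow which in a corpus and then samples from those observed successors, which is "predict a likely next word" in its crudest form.

```python
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=20):
    """Walk the chain, picking a random observed successor at each step."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: no observed successor
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Toy corpus purely for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_model(corpus), "the"))
```

The output is locally plausible and globally meaningless, which is exactly the point about imputing intent to a next-word predictor.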
> It would be bad to put ChatGPT in charge of most things, but it would be bad because it's a nonsense text generator, not because it's going to take over the world.

This is the exact part of this sub that I never understand. I agree word for word with everything there. But ChatGPT isn't the concern. It's however many iterations come 50 years later or 100 or 500 or whatever. Why does everyone here strawman this? You can't possibly argue that you know for a fact that technology will never ever be self improving and can't ever be a threat to humans.
I don't have to know that for a fact to decide this is a stupid thing to be worried about. "You can't prove this won't be an issue 500 years from now" applies to a lot of things it would be stupid to devote resources to caring about. By this logic, shouldn't we also be devoting all-hands-on-deck attention to diverting potential asteroid impacts, preparing for potential alien invasions, preventing the Yellowstone supervolcano from erupting, etc.? Like, I'm not arguing that a thinking computer is *mathematically impossible*. But there are tons of things that are possible *in theory* that still aren't worth caring about. I don't know if you've noticed but there are a lot of things in the world that are causing major issues *right now* and actually need attention and resources devoted to them in order to not cause massive problems over the next couple decades. I don't think freaking out about stuff because "we can't prove this won't be an issue at some point!" is a good way to prioritize, especially when the technology in question that's freaking everyone out has tons of way more obvious failure modes and ways to harm people that are occurring as we speak.
> By this logic, shouldn't we also be devoting all-hands-on-deck attention to diverting potential asteroid impacts, preparing for potential alien invasions, preventing the Yellowstone supervolcano from erupting, etc.?

Strawman. I'm not arguing for all-hands-on-deck attention. But yes, SOMEBODY should be looking at how to prevent humans from going extinct from asteroid impacts and supervolcanoes! I'd say you're unreasonable to say it should be ignored by everyone. I think somebody should be trying to figure out how to prevent AI from harming humans too. There are crazy people in this field, but you guys make yourselves look crazy too when you won't engage with the reasonable parts of the arguments and only go after the extremes that like 3 people argue for.
I feel like you're missing the very important context that this is a forum for mocking a cult who argue that, in fact, yes, we should be devoting all-hands-on-deck attention to this issue, who routinely say things like "this is a more important issue to be focusing on right now than climate change or nuclear war," and who are broadcasting statements to this effect throughout the media to the extent that normal people are becoming seriously worried about the imminent threat of superintelligent AI due to ChatGPT. It's cool that you aren't arguing that that's the level of attention that needs to be paid to it. I agree it's probably something *someone* should be thinking about. But given the context the proper response to this is not "well gee their scenario is potentially plausible, why don'tcha give em a chance"
> I feel like you're missing the very important context that this is a forum for mocking a cult that argues that, in fact, yes, we should be devoting all-hands-on-deck attention to this issue

Yeah, maybe you're right. Maybe I don't really understand the rationalists' position on this. But literally every time I've read a quote on this sub from a rationalist and I've bothered to read the original article, I feel like the quote was out of context and totally strawmanning the original position. I guess my point is this: the idea that AI may come with some dangers that are worth considering how to prevent seems like the responsible thing to do. Your argument that it's not a priority given the likelihood, timeline, and scale of the problem could be reasonable, and it's an argument I'd like to hear educated people on the subject discuss. However, teasing big Yud for thinking AI can kill us all with thoughts, when that's not what he said, makes your reasonable argument seem less reasonable. It comes off like you (and everyone who agrees with you) think that AI could never possibly pose any threat and that the idea that it could is laughable.
> But I understand the concept of self improving technology

You actually don't, though. That's just a bunch of words. You could equivalently say "I understand the concept of faster than light travel technology", even though that's impossible to achieve for reasons that you probably don't understand.
Fair enough. I won't argue I actually understand it. But again people smarter than me do, and some of them think this threat is real.
The vast majority of the people who are experts in machine learning do not believe that it can or will destroy the world. If you're making judgments based on a vote of people you think know more than you do, you can safely put yourself in the "no apocalypse" camp and forget about the matter entirely.
We have an obvious proof by example for self-improving systems—organisms in the biosphere under Darwinian evolution—and none for faster than light travel.
[deleted]
Whether it's an organism that improves itself or only improves its own progeny is just nitpicking pedantry; you know what I meant, and it is still an example of a system that bootstraps all the way to general intelligence. We obviously have an example of a system that improves itself, distinct from its progeny, as well: brains. They were created by evolution, but also undergo self-improvement during developmental phases, learning, etc.

> Can Darwinian evolution produce intelligence given enough time? Yes. Can LLMs? There is no such evidence.

AI isn't limited to LLMs. Genetic programming exists as well, for example. There are also clearly other self-improving machine learning systems using backprop, like RL agents with self-play: https://www.youtube.com/watch?v=kopoLzvh5jY.
[deleted]
Evolution is one example of a self improving system. Brains are another, though they came from evolution they self-improve over time in a different manner.
That's not an example of self improvement...
Self-reproducing organisms and self-reproducing organism populations under variation and natural selection aren't self-improving systems? (Edit: to the below, consider an organism and its progeny as the system (for asexually reproducing), or a species and its progeny (for sexually reproducing) as the system, and you've broken your "speed of light barrier")
No. It's an organism's offspring that are potentially improved, not the organism itself. That's just regular optimization.
[deleted]
Concerns about AI malware aren't based on any kind of sound science, but even so the statement being signed on to here is not about some malware: it's about AI killing all humans.
I think they mean regular malware combined with ChatGPT-like systems to increase the risk of it bypassing anti-spam/other detection methods, not the AI making the malware (which, iirc, somebody has already tried to do, but I didn't check whether it even worked). The AI does provide unsafe code, however, and I wonder if there are people out there trying to poison potential datasets for further training with bad code. I wonder how easy it would be to have AI spit out code with bad string functions, for example (which are easy to detect).
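To illustrate the "easy to detect" part: a toy sketch (purely illustrative, not a real scanner) of the kind of mechanical check that would flag classic unsafe C string calls in generated or dataset code.

```python
import re

# Classic unsafe C string functions; an illustrative, not exhaustive, list.
UNSAFE_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def flag_unsafe_calls(source: str):
    """Return (line number, line) pairs that call an unsafe string function."""
    return [
        (lineno, line.strip())
        for lineno, line in enumerate(source.splitlines(), start=1)
        if UNSAFE_CALLS.search(line)
    ]

sample = """char buf[16];
strcpy(buf, user_input);   /* no bounds check */
snprintf(buf, sizeof buf, "%s", user_input);"""

for lineno, line in flag_unsafe_calls(sample):
    print(f"line {lineno}: {line}")
```

This flags the `strcpy` line and ignores the bounded `snprintf` call, which is about the level of pattern-matching a dataset poisoner would have to get past.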
Sure, but that's what I mean. Every version of "we need to be worried about AI because malware" can be sorted into one of two categories:

1. minor stuff that the average person shouldn't worry about (e.g. defeating google's spam filter)
2. apocalyptic stuff that's completely made up and usually impossible (e.g. AI automatically haxoring all computers)

It's a total non sequitur to say "sure AI won't destroy the entire world, but it'll do malware things!". Like, those are totally different categories of things that have no relationship to each other.
Oh yes, certainly, just wanted to make sure we didn't overlook that threat.
[deleted]
> A Stuxnet capable of dynamically reacting to configurations that the programmers didn't anticipate is not out of the question

Maybe you think this sounds more reasonable than malware being designed by AI, but it's actually exactly equivalent. The only way that malware can do this is by making impressive inferences about exploits based on its training data, which is exactly what designing malware with AI consists of in the first place. The reality is that malware design is a difficult problem in a way that building a chat bot is not. When a chat bot goes off the rails we don't perceive a problem because its sentences are still grammatical even though their content is silly or scary, and so we incorrectly impute meaning to it. When malware goes off the rails it just stops working, because the vast majority of inputs to a computer don't do anything useful (from either the malware or legit end user perspective).

> this subreddit has a tendency to write off any kind of large-scale risk from AI at all

Does it? Every post about this gets a litany of responses about specific challenges posed by AI technology. I don't think I've seen anyone saying that AI is totally harmless.
[deleted]
Okay yeah, that scenario makes more sense to me, at least in the sense that it's not physically impossible. But it still seems really implausible, for the reason you say: you'd have to be pretty stupid to use an LLM in the execution logic of your malware, and malware developers aren't that stupid. I guess what I don't understand is, why does that sort of possibility even bear mentioning? It doesn't seem like there's anything new here; "incompetent malware designers fuck up people's computers by accident" is something that already happens. Like, imagine two different worlds: (1) a world where LLMs do not exist, and (2) a world where every incompetent malware developer has full access to every modern LLM, but nobody else does. Would the average person even notice a difference between these two worlds? It's hard for me to see how they could. Even if all the bad malware developers jumped into the LLM pool without a second thought, they'd get right back out again when they realized that e.g. their LLM-powered ransomware wasn't working out so great after all. The addition of LLMs into the picture just doesn't seem consequential.

AI risks are real. Even Stephen Hawking thought it had the potential to lead to the end of the human race, and he was anything but a hack.

Yes, this is a very good point; it is well known that people who are experts in one topic are also experts in all other topics. It's inconceivable that Hawking could have been totally off base about this. I, for one, am a big fan of Isaac Newton's work on predicting the biblical apocalypse and developing the philosopher's stone.
I did commit a logical fallacy, but I and the scientific community hold Hawking in extremely high regard. It is evidence that the risk is taken seriously by well-known science heavyweights. This comment is kinda weird to make on a post talking about actual AI experts that are worried about AI risk. I just wanted to add a well-known name, since none of the ones on that list count for some people.
When Stephen Hawking talks about the thermodynamics of black holes then you should listen to him, because nobody knows more than he did about that subject. When Stephen Hawking talks about artificial intelligence then you can safely dismiss whatever he says, because there are *tons* of people who know a lot more than he ever did about it, including basically every ML grad student. > I just wanted to add a well known name since none of the ones on that list count for some people And now, having signed that list, none of them ever should be taken seriously again! Crackpottery isn't confined to people of low social stature.
I already admitted to using a logical fallacy and I told you my reason. You calling them all crackpots shows even less critical thinking than Eliezer. It’s like a religion to some of you
Crackpots are as crackpots do.
Hawking was talking about much longer time spans and much different capabilities than what we're looking at right now; Hawking also thought that capitalism was an existential threat to the biosphere. Of the two risks, it's pretty clear which is more pressing.
Hawking gave no timespan. He was worried about AI that could surpass human intelligence, and nobody knows when that will happen. I like making fun of Eliezer, but to say there isn’t a real risk is also stupid
Hawking gave no timespan because he referred to it as a long-term existential risk, and keyed his worries to "human equivalent" AI, which none of the existing models or methods can produce; the current AI-worriers are explicitly talking about LLMs when they predict doom, and for the wrong reasons. No one is saying there isn't a real risk of machine learning techniques disrupting economics and society; they're saying that Yud's worries are precluding discussions about the *actual* threats caused by the current "AI" tech, which has no developmental path to "human equivalency."

I don't think their case (briefly: they think "human equivalency" can be arrived at without incorporating evolutionary principles into development; I think the kinds of capabilities that lead to AGI, if that term is even coherent, are necessarily developed by evolutionary novelty) is well supported by evidence, which means what they're warning of is not what Hawking was warning about. Regardless, the more we talk about what Yud et al. want to talk about, the *less* we talk about the real issues around AI that are currently, or will shortly be, impacting the world. FOOM and related hard-takeoff notions of AI/singularity are fiction, theology really, and are a distraction from the work of people like Timnit Gebru or Emily Bender.
Show me a source where he referred to it as being a far-into-the-future risk. You're putting words in his mouth by finding that implication in what he said. Also, tell me where in my comment I mentioned that this threat is posed by LLMs alone. Show me where in OP's post they specified that the risk they're talking about applies only to the current state of AI or LLMs. You aren't arguing with me, you're arguing with a straw man. If I had to guess, I'd say we agree more than we disagree
And many signatories of this statement are leaders of state-of-the-art AI in academia and aren't related to Rationality at all. Like, there are a couple people I've cited in my thesis.
There's an uncomfortable inflection point in the journey to adulthood wherein one is forced to realize that their elders are equally as fallible as their peers; more so even, in some ways.

One of the real problems here is that by design, AI technologies do things that their creators cannot predict in advance, and that don’t follow in any direct way from understanding how they work.

We see that all over ChatGPT, where many unexpected behaviors continue to be discovered.

While running all the way to “extinction” seems to say a lot more about the person doing the predicting than about the tech, I think anyone who says that the risks can be clearly understood and delineated is also not thinking clearly. We just don’t know, and we don’t even know how we could make it possible to know. And this is a very serious problem, especially given the somewhat predictable destructive aspects of existing digital tech.

No. Absolutely not. At the present level, no AI system imaginable will "go rogue" and do something that its creator couldn't imagine. The closest thing to "AI doing its own thing" is hallucinations, but those are not gonna cause Skynet. One day *it might happen*, and that's what the LessWrongers like to fantasize about. However, it's pointless to talk seriously about "what might hypothetically happen one day", and don't tell me that should take up so much media space when so many real issues that are taking place right now need attention. Presently, and probably for the foreseeable future, our only worry should be "what will people do with the AI, and what can we do to stop misuses from happening" rather than "what if the AI goes rogue".
This is I think the ur-sneer on the AI x-risk. The most you can say is that we can't conclusively prove that an AI won't want to kill everyone. But there's a void-between-galaxies-sized uncertainty gap between what we can reasonably say and the amount of time, money, and effort they want to devote to the problem. The same could be said for the trillions of simulated people they use as the ethical justification for longtermism. I don't think we can conclusively say that such a world is impossible, but it's sufficiently implausible as a near-term concern that it's not even worth having the argument of whether it would be desirable. Even if it was conclusively technically possible, there are [more immediate dystopias](https://qntm.org/mmacevedo) that they seem much less inclined to entertain. They have been so blinded by their eschatology that the most reasonable response would be to notice their "the end is near" sandwich board, avoid eye contact, and keep walking while they rant. But instead they've got enough connections to real money and power that they can't just be ignored or pitied. I doubt anyone in a position to do so is seriously considering airstrikes against OpenAI data centers, but when the modern newspaper of record is publishing the discussion something has gotten seriously out of whack.
https://www.youtube.com/watch?v=oLiheMQayNE&t=3056s&ab_channel=CognitiveRevolution Watch about four minutes from that timestamp to hear about the spontaneous suggestion of assassination.
GPT lacks any actor, agentic, or goal-setting component. It can make bad or dangerous suggestions now, but it has no way of going rogue or becoming independent. Building an agent with GPT as a component might eventually be possible, but additional key insights and breakthroughs on the scale of GPT itself are needed to actually implement the pieces that would support memory, goal setting, cost functions, etc.
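
To make the point concrete, here is a minimal sketch of what an "agent wrapper" around a language model would even have to look like. Everything in it is hypothetical: `query_llm` is a stand-in for whatever completion endpoint you'd call, and `run_agent` is an illustrative loop, not any real system. The point is that the memory, the goal, and the loop all live *outside* the model and have to be bolted on by the developer.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to some language-model completion endpoint."""
    raise NotImplementedError("swap in a real API call here")

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []          # external scratchpad; the model itself keeps no state
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nHistory: {memory}\nNext action:"
        action = query_llm(prompt)  # the model only ever proposes text
        memory.append(action)       # "memory" is just us appending strings
        if "DONE" in action:        # "goal setting" here is a crude string check
            break
    return memory
```

Even this toy version shows where the hard parts live: deciding what goes into `memory`, how the goal is represented and checked, and what counts as an "action" are all design choices made by humans outside the model.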

[deleted]

As an autistic person: that's because they are grifters, liars and fearmongers. You bringing up their neurodivergence (whose? Yud says he's just Ashkenazi Jewish ^lol) is odd - what's that "we're just lil nerdy awkward guys" shit about? Also, isn't the community full of (or at least intermingling with) eugenicists? Wouldn't eradicating abnormalities such as autism be part of their ideal world? (I'm unsure whether, for example, Roko would get rid of all neurodivergent people or just the queer ones.) How does this pathetic defense tie into that?
Not speaking to the quality of the discourse here, other than to say that it is sneer-worthy to criticize a sub for lacking strong substantive discussion when a rule of that sub is that substantive discussion is off-topic. I don't see why I should take Geoffrey Hinton, in particular, seriously. He is not an expert in AI safety (to the extent that such experts exist at the moment) and by his own account only got interested in it very recently. It's not clear that, even now, he has seriously engaged with the existing literature, and in his recent talks he has completely dismissed what entire fields have to say on the topic. Why would inventing dropout or contrastive divergence make him qualified to speak on risk? More direct evidence about his suitability is the fact that he seems to make bad predictions about how the tech he has spent his life developing will actually interface with society (e.g., stop training radiologists because my CV work makes them obsolete). This is typical for a lot of people who work in research labs; they think that high scores on benchmarks and cute parlor tricks are what matter, with all other complexities amounting to "minor details" that are unworthy of their attention. In all seriousness, has Hinton published *even a single work* on AI safety? At least Stuart Russell has been talking about this for years. At least sneer-targets Scott Aaronson and Paul Christiano are working at/with OpenAI on safety. Hinton, at this point, is just some old guy who barged his way into the conversation, is screaming that the tech he invented is going to kill us all and that there is no way to fix it, and then throws up his hands when asked for possible solutions.
> why an AI would possibly resist being turned off

Oh lord. What we have now as "AI" is a hundred years away from that. So why don't people here take such an argument seriously? Because, at this point, it is complete sci-fi and speculation.
> It's like this. This sub sneers at people like Yudkowsky and some other rationalists with questionable personalities and dispositions. They come across to neurotypical people as grifters, liars and fearmongers

Because they are.

> I have read a lot of arguments in this sub that are either very ignorant or honestly very illogical.

We discourage people from making arguments because this is not rationalist debate club. This often leaves low-quality arguments behind.
> What I have not seen a lot in this sub is people actually engaging with the substance of actual ai x risk arguments, or with the arguments of far more accomplished and respectable researchers like Stuart Russell or Geoffrey Hinton

You must be new here. I've made many comments and posts here about exactly this.

AI could change the world for the betterment of all humanity. At the same time, in the wrong hands, it could be used as a weapon. There's a reason that at least some of the creators of AI have been petitioning Congress (and the world) for regulation.

Is it amazing (almost incomprehensible technology)? Yes.

Should/Could it be developed to help advance society? Probably.

Is it something most people understand the “pros and cons” of? No. 

Could it be used as a weapon? Yes.

Until we have a better grasp of the potential benefits and risks, I agree with its creators that it should be regulated and closely monitored by the governments of most nations and/or the UN, similar to the way the UN deals with other (potential) weapons and unregulated advanced technology.

You're weirdly confident in your opinions about international government regulation for something that you don't understand at all.
How so?
> I agree with its creators that it should be regulated and closely monitored by governments of most nations/UN
https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html
lol yes, I guess you missed it when this very sub had a post about Sam Altman asking Congress to kneecap his competition: https://www.reddit.com/r/SneerClub/comments/13jkjsm/sam_altman_asks_congress_to_kneecap_his/ And anyway, like I said before, you're weirdly confident about agreeing with Altman et al. on something you don't understand. When you don't understand something, you can't meaningfully agree or disagree with other people about it either. You might as well be flipping a coin, because you have no way of knowing whether what he's saying is reasonable.
Well, that's sort of the point. When the U.S. came out with the atomic bomb, most people did not understand the technology or the lasting ramifications of radiation on people, the area, etc. Would the U.S. have used the bomb, and/or would the U.S. population have supported using it, had they known about ALL the potential risks associated with it? Maybe, maybe not, but they would at least have been informed of the risks and dealt with it accordingly. Once the risks were understood, nuclear weapons became heavily regulated and have not been used in combat in nearly 80 years. But nuclear technology has been *used to benefit society* by creating (relatively safe) nuclear power. The point is that if we want to use a technology to benefit society, and have society support it, history dictates that it's best to learn and understand all of the benefits and all of the risks before the technology is unleashed on the world. Because if something were to go horribly wrong with it, most people would want no part of it, it would lose support, and the potential benefits might never be brought to fruition.
Artificial intelligence is not similar to nuclear bombs in any respect. It feels weird to have to point that out, yet here we are.
It feels weird to have to point this out, but here we are - I was speaking about the technology itself. Nuclear technology was, at the time, the most cutting-edge and sophisticated technology ever known to mankind; A.I. is currently the most cutting-edge and sophisticated technology known to mankind. So that's one way they are very similar, and you seem to be mistaken. Since you're confident you understand this better than others - why not explain why you disagree with Sam Altman (and many others) who believe that it should be regulated?
I don't disagree with regulating AI, I just disagree with Altman's version of it. Requiring a government license to run machine learning models is insane.