r/SneerClub archives
Time: The 'Don't Look Up' Thinking That Could Doom Us With AI (https://time.com/6273743/thinking-that-could-doom-us-with-ai/)

> A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction.

This is absolutely not true. I’d accuse Tegmark of spreading pernicious lies, except that he’s clearly lost his edge and I think he actually believes this. Confirmation bias or some shit.

For those unaware, the survey question under consideration has two notable qualities:

  • it is so vaguely worded that we can’t conclude anything at all about what people believe regarding the AI apocalypse, and

  • it has a 4% response rate, so the statistics from the survey are basically garbage anyway

What this survey really tells us is that, of the 4% of ML conference attendees who constitute the self-selected group of the absolute most extreme AI doomers in all of academic ML, half give at least a 10% chance that AI could somehow be involved in some sort of apocalypse at some point. Maybe.

Edit: to clarify a question that came up, the 17% response rate reported on the website is the percentage of survey recipients who responded to at least one survey question. The number of people who responded specifically to the questions about the robot apocalypse is much lower than that, and you can see this by downloading the raw data.

[deleted]
Yeah, there are definitely people like that, but I'd hesitate to say that it's "a lot". I think the vast majority of people who go into AI research do so for the more obvious and typical reasons that it's potentially very lucrative and also genuinely very cool. If you do shoddy surveys then you can get ~5% of people to agree with pretty much anything. I will be genuinely surprised and alarmed if more than 5% of the AI research community (i.e. real people with real, relevant credentials, not bloggers who do "research") ever agree with the statement "We should be concerned that AI will autonomously decide to destroy humanity and will succeed in doing so".
I see "17% response rate" noted under the Methods section. Where are you getting 4% from? Also, shouldn't we say that 50% (half) of the respondents gave AI at least a 10% chance of causing human extinction?
17% is the percentage of people who responded to *at least one* survey question. If you download the CSV of the raw survey data, though, you'll find that the questions that ask specifically about the robot apocalypse have about a 4% response rate. One of the robot apocalypse questions gets a median estimate of about 5% and the other gets a median of about 10%, among the people who responded. I.e. 50% of that 4% (i.e. 2% of all survey recipients) gave at least a 10% estimate of robot apocalypse.

It's a pretty disgusting survey overall. Not only is it methodologically unsound, but the way that they report the results is (deliberately, I assume) dishonest: they publicize the 17% response rate (which isn't great, but it's also not super terrible) and the 10% estimate of doom, but they don't publicize the fact that the 17% response rate doesn't apply to the 10% doom estimate!
Oh, you're right. Also, half of those who estimated odds above the median only put the odds at ~20% or less, so there's a heavy skew. And the responses to the more general, unqualified question about AI doom cut the odds in half... lol
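For anyone who wants to reproduce the arithmetic above from the raw data, here's a minimal sketch. Everything in it is hypothetical scaffolding: the filename, column names, and recipient count are placeholders, not the survey's actual schema.

```python
# Minimal sketch: per-question response rates and medians from the raw
# survey data. Filename, column names, and recipient count are all
# hypothetical placeholders -- substitute the real ones from the data release.
import pandas as pd

N_RECIPIENTS = 4271  # hypothetical: total number of people the survey was sent to

df = pd.read_csv("survey_raw.csv")  # hypothetical filename; one row per respondent

# Hypothetical names for the extinction-probability questions, assumed to
# hold numeric percentage answers (NaN = question not answered).
doom_cols = ["p_extinction_ai", "p_extinction_disempowerment"]

for col in doom_cols:
    answers = df[col].dropna()
    response_rate = len(answers) / N_RECIPIENTS
    print(f"{col}: response rate {response_rate:.1%}, "
          f"median estimate {answers.median():.0f}%")
    # Share of *all* recipients who gave at least a 10% chance of doom:
    share_10 = (answers >= 10).sum() / N_RECIPIENTS
    print(f"  >=10% answers: {share_10:.1%} of all recipients")
```

If a question has a median of ~10% on a ~4% response rate, that last number lands around 2% of all recipients, which is the point being made above.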

> the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.

Keep up the good work /r/sneerclub!

Let's make a deal. They stop saying evidence-free, unsupported bullshit, and we'll stop calling them out for it.
This will happen no matter what. Either AGI is real, we fucked up, and we all become paperclips and sneerclub ends; or it doesn't, LW stops being relevant, everybody moves on, and we stop posting because they stop posting. Clearly our oblivion is inevitable.

> Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance of causing human extinction.

> A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction.

One of these numbers is not like the other.

[deleted]
> ai research is a tangled mess

The thing is that this isn't true at all. A lot of empirical AI results are relatively recent, but none of this is truly new stuff, and the vast majority of AI researchers are competent professionals who would correctly answer "lol no" if you asked them whether AI was going to cause the apocalypse. Tegmark is just wrong about what AI researchers believe because he's embedded in an insular social bubble of crackpots and he doesn't know anything about AI himself.
One of these professions is not like the other
they say ai is even more lethal than dihydrogen monoxide
Wait until they hear about climate change

If only the media were so upset over climate change

Well, duh! Climate change is a hoax. Now the acausal apocalypse robot god that's going to annihilate us all... real shit. I for one am scared of being put in an infinite torture chamber where I have to watch PragerU videos, both real and AI-generated.
What if it's Penis Prager YTP videos?

I enjoy Tegmark forgetting the part where, for his analogy to work, there needs to be a killer asteroid. There's no killer asteroid. His absolute certainty that the creation of our robot overlords is inevitable doesn't really help.

Also, did we forget he funded Nazis? Wasn’t that like, two months ago? Time really needs to get its shit together and get someone to like…Google these people

Oh come on, you can't just call everyone you disagree with Naz... oh.
Reminds me of this: https://pbs.twimg.com/media/FpLy8M9WABUvxQT.jpg

> Before superintelligence and its human extinction threat, AI can have many other side effects worthy of concern, ranging from bias and discrimination to privacy loss, mass surveillance, job displacement, growing inequality, cyberattacks, lethal autonomous weapon proliferation, humans getting “hacked”, human enfeeblement and loss of meaning, non-transparency, mental health problems (from harassment, social media addiction, social isolation, dehumanization of social interactions) and threats to democracy (from polarization, misinformation and power concentration). I support more focus on all of them. But saying that we therefore shouldn’t talk about the existential threat from superintelligence because it distracts from these challenges is like saying we shouldn’t talk about a literal inbound asteroid because it distracts from climate change. If unaligned superintelligence causes human extinction in coming decades, all other risks will stop mattering.

Sadly, this line of argument has proved ineffective for my campaign to redirect all climate change funding to making sure we are protected against a potential giant asteroid from space

> I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do.

What if the answer is ‘I don’t know’ lol

‘I invite carbon chauvinists to publicly predict what kind of aliens will never be discovered’

> Yoshua Bengio argues that GPT4 basically passes the Turing Test that was once viewed as a test for AGI. And the time from AGI to superintelligence may not be very long: according to a reputable prediction market, it will probably take less than a year.

I’m glad we’re using trustworthy benchmarks here such as the Turing test and prediction markets

> it’s naive to assume that the fastest path from AGI to superintelligence involves simply training ever larger LLMs with ever more data. There are obviously much smarter AI architectures

okay so how is the LLM going to build this architecture without any user input telling it to do so lmao

> If you’re an orangutan in a rain forest being clear cut, would you be reassured by someone telling you that more intelligent life forms are automatically more kind and compassionate?

so true honestly maybe we should be worrying more about human alignment before we start working on AI alignment

> Yoshua Bengio argues that GPT4 basically passes the Turing Test that was once viewed as a test for AGI

Tegmark should know better here; at least Yudkowsky [recognizes](https://twitter.com/ESYudkowsky/status/1646174661976399872) that "passing the Turing test" is only a real achievement if it's a long discussion with someone who presses it on specific lines of questioning. I'd also say it's important to find judges who have a sense of what kind of questions might trip up a chatbot and show it lacks basic common-sense understanding of the terms it uses; there are a lot of good examples of such questions in [this article](https://medium.com/@shlomi.sher/on-artifice-and-intelligence-f19224281bee). As the author says, "If you casually test the new AI in a haphazard way, it can look really smart. But if you test it in a critical way, guided by principles of counterfeit detection, it looks really dumb."
Yeah I mean a bunch of pillows stuffed under a blanket can "pass the Turing test" under the right circumstances ("I knocked a bunch of times and then peeked inside, he's still asleep in there")
> If you casually test the new AI in a haphazard way, it can look really smart. But if you test it in a critical way, guided by principles of counterfeit detection, it looks really dumb.

So, you're telling me the AI can sound smart and convincing, as long as no one with actual knowledge in the field being talked about asks any pointed questions. Now, who does that remind me of?
That 2nd article is incredibly frustrating, but I suspect the author is being really gentle to try to get people to the point that NLP people have been screaming about for years: "NO, IT DOESN'T BLOODY UNDERSTAND ANYTHING, IT'S AUTOCOMPLETE"
Fricking *Eliza* managed to fool people. Big ups to my homie Turing, but my dude had a vastly more optimistic view of how good people are at telling the difference between a human and words pulled out of a hat at random.
tbf the specific person that is participating in the test matters a lot
[deleted]
Nobody is making something potentially even more intelligent and powerful than humans are right now, nor do I think it will be happening any time soon. I'm AI-negative in the sense that I think AI kinda sucks ass

so annoying that there’s like a high-up dude at TIME who’s bought into the cult, who keeps inviting these guys to write articles.

I can’t believe all this old 2014 fundraising crap is being resurrected. Like letting Meghan Trainor write about how we still don’t really grasp the bass.

> Hologram of the artificial intelligence robot showing up from binary code.

Were the image captions AI generated?