Source Tweet: ____________________________________
@ESYudkowsky: Yeah, we need a name for this. Can anyone do better than “immediacy fallacy”? “Futureless fallacy”, “Only-the-now fallacy”? > @connoraxiotes: What’s the concept for this kind of logical misunderstanding again? The fallacy that just because something isn’t here now means it won’t be here soon or at a slightly later date? The immediacy fallacy?
Context thread:
@erikbryn: […] [blah blah safe.ai open letter blah]
| > @ylecun: I disagree. > AI amplifies human intelligence, which is an intrinsically Good Thing, unlike nuclear weapons and deadly pathogens. > > We don’t even have a credible blueprint to come anywhere close to human-level AI. Once we do, we will come up with ways to make it safe.
| > @ESYudkowsky: Nobody had a credible blueprint to build anything that can do what GPT-4 can do, besides “throw a ton of compute at gradient descent and see what that does”. Nobody has a good prediction record at calling which AI abilities materialize in which year. How do you know we’re far?
| > @ylecun: My entire career has been focused on figuring what’s missing from AI systems to reach human-like intelligence. I tell you, we’re not there yet. > If you want to know what’s missing, just listen to one of my talks of the last 7 or 8 years, preferably a recent one like this: https://ai.northeastern.edu/ai-events/from-machine-learning-to-autonomous-intelligence/
| > @ESYudkowsky: Saying that something is missing does not give us any reason to believe that it will get done in 2034 instead of 2024, or that it’ll take something other than transformers and scale, or that there isn’t a paper being polished on some clever trick for it as we speak.
| > @connoraxiotes: What’s the concept for this kind of logical misunderstanding again? The fallacy that just because something isn’t here now means it won’t be here soon or at a slightly later date? The immediacy fallacy? ____________________________________
Aaah, the “immediacy fallacy” of imminent FOOM, precious.
As usual, I wish Yann LeCun had better arguments; while less sneer-worthy, “AI can only be a good thing” is a bit frustrating.
Dear Yud,
You’ve heard of a superweapon before, yes? I’m developing a superweapon that can destroy nations from orbit. It’s not complete yet, but I feel like I’m a few clever tricks and gradient descent iterations away. Pay me 10% of MIRI’s gross income every year or I will eventually destroy dath ilan. If you ignore or disagree with me, you will be committing an immediacy fallacy, and I’m told you and your ilk take committing fallacies very seriously.
Signed, Sir Basil Kooks
P.S. I’m very proud of using an online tool to find an anagram of “Roko’s Basilisk”, so I demand that you go ahead and praise me for that as well.
Drunk, but just:
Is homeboy inventing a #fallacy to cover for when his autodidact (since grade 8) ass can’t cope with literally everyone in the world being able to demonstrably prove him wrong?
Love how he is making the risk analysis seem more scary by tugging at both the chance and impact ends.
Anyway, time to become an asteroid doomer: we don’t know the chances for that either, and the stakes are likewise all 10^^^10 future (post)humans.
This seems like such an incredible self own. Does he even hear himself?
This is already an annoying strawman of their opponents’ arguments. Can’t wait for an equally annoying name for it.
Finally, I can say “perhaps AI will do horrible things to the labor market in the future” only to be looked at askance and told that I’m stuck believing SkynetGPT won’t happen because my massive, raging “Only-the-now” bias is clouding my judgment.
Huge shout-out to the guy in that thread who cited a book about the zombie apocalypse to justify worrying about the robot apocalypse
So is this like an internet version of Andrew Wakefield redoing his experiments until he got a result he wanted?
What’s the mental illness that compulsively forces a dude to coin words and concepts? Is it just plain narcissism? Grandiosity? I think that’s it: “Yeah, we need a name for this.”
I think the Yann LeCun link was the biggest takeaway from this post. At some point we should stop wasting our valuable attention on a useless crook who is literally paid not to understand the problems with his own ideas, and focus instead on how we can move forward without getting stuck on local maxima that “look like” intelligence on a walled-garden set of problems.
I think there’s some strawmanning going on in the LeCun talk, but much more important are his ideas about how to move forward instead.
isn’t this literally just time discounting?
Yann is a massive fucking tool. Eloser is as well. It is amusing to watch them sperg at each other on twitter, the perfect video game for losers like them.
Eloser is forever clowning himself.
“yeah, we need a name for this” - Yud, every fucking day of his life it seems.