r/SneerClub archives
Yudkowsky originally predicted that the singularity would happen in the year 2021 (https://web.archive.org/web/20081021051739/http://yudkowsky.net/obsolete/singularity.html)

There’s a long post and discussion on the EA forum about how he keeps getting things wrong.

Yeah, I saw that, but I don't take big Yud seriously enough to read the comments. edit: I skimmed a few comments, I guess some people think that saying "he disavowed it" is going to convince us to just remove it from his track record? *When Prophecy Fails* and so on.
I like how infinitely deferential these guys are to that wanker that even the most muted call of "Maybe we shouldn't all just think this guy is right about literally everything?" gets such a big discussion. EDIT: see >I think it's really important to separate out the question "Is Yudkowsky an unusually innovative thinker?" and the question "Is Yudkowsky someone whose credences you should give an unusual amount of weight to?" How much bending over backwards there is here! The answer to both is a quick "no".
It’s the thing with moonshots: you only have to get one right.
It takes a lot of mental gymnastics to say “sure, he’s been wrong a lot, but..”

Just a curiosity... does Yudkowsky's foundation actually do any research (like math for CS, AI development, etc.) or just philosophy?

It's all 'philosophy', which isn't in itself a problem, but becomes one when their philosophical output is barely coextensive with established analytic philosophy to begin with. Additionally, Yudco. sells itself as an organisation attempting to find the solution to the problems of AI alignment, but from everything I've ever read from them, a more accurate description of what they do is 'describing and arguing for ways to think about how to find the solutions to the problems of AI alignment'. Again, not inherently a problem if you're up front about this and clear about the scope of your work, but the foundation basically inherits the immodesty of its founder and positions itself as the only thing standing between us and evil robogod.
> Additionally, Yudco. sells itself as an organisation attempting to find the solution to the problems of AI alignment, but from everything I've ever read from them, a more accurate description of what they do is 'describing and arguing for ways to think about how to find the solutions to the problems of AI alignment'.

This is the big thing for me. There are problems that actually exist now and which people care about that you could reasonably call "AI Alignment". There was the story a few years ago where a Google facial recognition AI classified a black guy as a gorilla. Google didn't want it to do that. If you were doing "AI Alignment" in a way that was serious and of practical utility, that's the exact sort of problem you ought to have something useful to say about. But MIRI was, as far as I know, absolutely silent about that. Just as they have been silent about all the other well-publicized failures of AI that actually exists, because what they are doing is essentially science fiction.
Precisely this. They've somehow convinced people to support weed-fueled undergrad-level speculation, rather than do real analysis of, say, what reliably identifies a "hateful" post on Gab or Parler. Or generate real facial recognition training datasets for Africans, Southeast Asians, or Indians.
Deep down, I worry that big Yud thought that the google AI was essentially correct, hence the silence.
riddle: is it inherently easier to claim you're doing "philosophy" than to claim you're doing "math" or "computer science", or is it just easier to get away with it in a place where everyone knows math and computer people but most don't know a philosopher?
Bitter experience informs me that it is very easy to take the label “philosopher” even when there are proper philosophers (or perpetual neophyte runts like me) around, partly because weaponised charity is our stock in trade and “you’re not a philosopher” is low-hanging fruit, and partly because philosophers are perilously aware of the low general esteem their profession is held in. Anyway, I’m not *against* the broad use of the term, and I’m certainly not *for* the kind of gatekeeping whereby you have to be employed at a university to be a “philosopher” (unlike “physicist”, which is more easily and rightly gatekept, just thanks to the shorter history of the term); the problem is in clarifying which *kind* of “philosopher” and therefore which kind of imprimatur is being established.
These dickheads think a logical fallacy alone invalidates an argument. Ben Shapiro: open invitation to a written debate on my terms. Positive vibes only. Your move.
I saw that you mentioned Ben Shapiro. In case some of you don't know, Ben Shapiro is a grifter and a hack. If you find anything he's said compelling, you should keep in mind he also says things like this: >Since nobody seems willing to state the obvious due to cultural sensitivity... I’ll say it: rap isn’t music ***** ^(I'm a bot. My purpose is to counteract online radicalization. You can summon me by tagging thebenshapirobot. Options: history, feminism, dumb takes, novel, etc.) [^Opt ^Out ](https://np.reddit.com/r/AuthoritarianMoment/comments/olk6r2/click_here_to_optout_of_uthebenshapirobot/)
By the axiomatic truth of Ben being a huge dork… (tbc, maybe, !WAP)
Why won't you debate me? ***** ^(I'm a bot. My purpose is to counteract online radicalization. You can summon me by tagging thebenshapirobot. Options: sex, dumb takes, civil rights, gay marriage, etc.) [^Opt ^Out ](https://np.reddit.com/r/AuthoritarianMoment/comments/olk6r2/click_here_to_optout_of_uthebenshapirobot/)
you've answered your own question
As long as you "publish" your results only in arXiv, you don't have to even put them through peer review.
The establishment is haram as fuck. edit: naughty words in this safe space will be used according to a proprietary formula. Can’t stop, won’t stop.
I suspect that there are a lot more people who are capable of sniff-testing computer science, just given how many professional and hobbyist programmers there are. Which isn’t to say that they’re all capable of or interested in doing research, but they should be able to sniff out obvious bullshit.
[Did Diogenes have a PhD? ](https://youtu.be/CWrMGXwhFLk) Marking for later as I have thoughts. Many of them.
> becomes one when their philosophical output is barely coextensive with established analytic philosophy to begin with. Peter Wolfendale said some of the smartest things going about AI that I’ve heard lately, and he’s a continental!
As far as I can tell he never bothered to learn how machine learning actually works, despite it having been around since well before he was born. All he did was convert old popular culture into quasi-research, up to and including fucking time travel from the Terminator franchise (see the basilisk and such). Then he would dare claim that the Terminator is the reason nobody takes his Terminator rip-off seriously. When it comes to safety, right now we've got an AI that is not any closer to doing a plumber's job, but which is alarmingly close to doing Hitler's job (dabbles in art, writes bullshit). Which is not even remotely similar to any of the pop culture tropes, so in retrospect there was probably no benefit whatsoever to "research" along the pop culture trope lines. And even without retrospect, since those are pop culture tropes, "raising awareness" of the same tropes was ridiculous and counterproductive: everyone's already over-primed to expect a super-engineer AI that's making gadgets to take over the world with, and blind to other dangers. We're barreling straight into the unknown - a face-off with superhuman bullshitting - with surprisingly little thought given to the obvious but non-Skynet-like consequences.
I was just reading earlier how he only has like 3 peer-reviewed publications (and two of them were conference papers anyway iirc). MIRI publishes a bunch more about AI friendliness and whatnot, but it's generally considered too abstract and theoretical for people to actually use (and a lot of it is posted on arXiv. Take that, peer reviewers!). Meanwhile, in the 20+ years since MIRI has existed, a bunch of other people have gone on to make ML/AI technologies that are actually useful; how they managed to do it without Yudkowsky's unique and essential insight remains a mystery.
> in the 20+ years since MIRI has existed

i'm so glad i stopped paying attention to them in the early 00s. it was too frustrating seeing them talk about something that is supposedly of the utmost importance and then rationalize to no end why it's ok for them to not do anything properly, orderly, by respecting the codes of the day, working with proper institutions, respecting their methods, their criteria and so on. always an excuse for why they fail at achieving anything, ever. they just want free money to daydream about futuristic stuff, that's it.
Eliezer has no formal education past the 8th grade, and instead of seeing it as a problem that he can fix, he went the other way and started complaining about credentialism. Ultimately I think he's just a moderately successful cult leader who based his cult around sci-fi ideas; a little like a 21st century L. Ron Hubbard.
Huh, just realized another feature his cult shares with Hubbard's is using mental self-help to onboard new members, then exposing them to progressively more and more batshit stuff. "Read the Sequences to understand our stance on AI."
He believed symbolic AI slash Eurisko was the holy grail all the way up until at least 2008. By 2009 ANNs were demolishing every single image recognition test we threw at them. Compute caught up.
re-upping this account of what happens when you peer review Yudkowsky, since I already posted it not long ago https://www.umsu.de/wo/2018/688
Yeah, kind of weird how his theory doesn't make sense if you don't believe there's a superintelligent AI in the future that can retroactively read your thoughts.
Well no…the “reliable predictor” in Newcomb’s Paradox long predates Yudkowsky. It can be God or a computer or anything. It’s already a hypothetical posit you can give any characteristics you want to make the thought experiment function.
I mean, I think I get it, I guess I'm just trying to work out why he thinks you should only take one box even in the version where box B is both transparent and has a million dollars in it already (but then maybe I shouldn't try to make sense of a nonsensical theory).
Most sane rationalist decision theory. I found one guy a few years ago arguing straight-faced that you can effect changes in a past that has already happened using only decision theory. Not even weird quantum shit or anything.
Did he actually say you should one-box in the scenario where box B, not box A, is transparent and you can see it already has the million dollars in it? I'd be a one-boxer in certain idealized thought-experiments where box A was transparent and contained money, but I can't really imagine an argument for one-boxing if B was transparent and contained money (unless it's supposed to be some kind of altruistic desire to maximize the chances of good outcomes for other simulated copies of yourself who may see something different in box B).
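For context on the back-and-forth above, here is a minimal sketch of the expected-value arithmetic behind one-boxing in the *standard* (opaque box B) version of Newcomb's problem. The 0.99 predictor accuracy and the dollar amounts are illustrative assumptions only, and this is plain evidential conditioning, not Yudkowsky's own (timeless/functional) decision theory.

```python
# Minimal sketch of the standard Newcomb payoff comparison.
# Assumptions (not from the thread): 99%-accurate predictor, usual payoffs.

ACCURACY = 0.99                      # assumed chance the predictor calls your choice correctly
BOX_A = 1_000                        # box A always contains $1,000
BOX_B_IF_ONE_BOX_PREDICTED = 1_000_000  # box B holds $1,000,000 only if one-boxing was predicted

# Treat your own choice as evidence about what the predictor put in box B.
ev_one_box = ACCURACY * BOX_B_IF_ONE_BOX_PREDICTED
ev_two_box = BOX_A + (1 - ACCURACY) * BOX_B_IF_ONE_BOX_PREDICTED

print(f"one-box expected value: ${ev_one_box:,.0f}")  # ~$990,000
print(f"two-box expected value: ${ev_two_box:,.0f}")  # ~$11,000
```

In the transparent-box variant discussed above there is nothing left to condition on, since you can already see what is in box B, which is exactly why the one-boxing answer stops looking sensible there.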
> (and two of them were conference papers anyway iirc) Papers or posters? Standards for posters can be very low.
it does a tiny bit of math. Math is nice, right? I mean, that's fine. But their visible research output is tiny.
I would very much not call what Yudkowsky et al do "philosophy" in the same sense as academic philosophers; they aren't really engaging with the traditions of philosophers and they do not share credentialing or publishing schemes.

He also has a standing bet with Bryan Caplan that it will happen by 2030, easy money for Caplan I guess.

He is a cargo cult.

I get the cult part but not the cargo part of this. I am familiar with cargo cult just not sure why he is seen to be doing it.
Here are some quotes on cargo cults. AI/Roko’s basilisk is their future deity that will reward them with goods for performing rituals that mimic what they think it will want.

> any group of people who imitate the superficial exterior of a process or system without having any understanding of the underlying substance.

> A belief system, in which adherents perform rituals which they believe will cause a more technologically advanced society to deliver goods.

> Thus, a characteristic feature of cargo cults is the belief that spiritual agents will, at some future time, give much valuable cargo and desirable manufactured products to the cult members.

> the belief that spiritual agents will, at some future time, bless the believers with material prosperity (which, in turn, will usher in an era of peace and harmony)

> Cargo cults often develop during a combination of crises. Under conditions of social stress, such a movement may form under the leadership of a charismatic figure. This leader may have a "vision" (or "myth-dream") of the future, often linked to an ancestral efficacy ("mana") thought to be recoverable by a return to traditional morality.[17][18] This leader may characterize the present state as a dismantling of the old social order, meaning that social hierarchy and ego boundaries have been broken down.[19]

Not just Yudkowsky. A lot of the singularitarians said in the 2000s that it was about 20 years away. (Ah, my bad, I misremembered: Kurzweil always said it would happen around 2045. Removed previous link here. This part of the prediction is lol, however: "Kurzweil writes that by 2010 a supercomputer will have the computational capacity to emulate human intelligence and 'by around 2020' this same capacity will be available 'for one thousand dollars'" (the ChatGPT projects cost millions of dollars to create, and they plan to monetize it).)

On that note, you really don’t hear a lot about nanotech anymore, do you?

Yeah, Kurzweil's a little less bad, but he's still out of his depth; he should have stuck with synths. If I may toot my own horn a little, I had an internship in an academic nanotech chemistry lab in the late aughts, and I'd say the experience taught me that the singularity just ain't gonna happen.
Kurzweil made his career by making big predictions that never came true, but luckily people just forgot he made them.
https://m.youtube.com/watch?v=mY5192g1gQg remember the good old days?

2021 was also the projected year we’d have a sealab. Didn’t happen.

So… Many… Words…

Oh man Zyvex, haven’t heard that name in a looong time.

This reminds me of how I got married on the day the Rapture was supposed to happen. 11 years ago.

emphasis is my own:

https://en.wikipedia.org/wiki/Doomsday_cult

> researchers have attempted to explain the commitment of members to their doomsday cult after the leader’s prophecies have proven false. Festinger attributed this phenomenon to the coping method of dissonance reduction, a form of rationalization.[2] Members often dedicate themselves with renewed vigor to the group’s cause after a failed prophecy, rationalizing with explanations such as a belief that their actions forestalled the disaster or continued a belief in the leader when the date for disaster is postponed.[2]

Aren’t a lot of older rats former Orion’s Arm people?

not that i know of, but can't say i'm surprised - got a link or something?
https://www.orionsarm.com/eg-article/4d5bed5d75e21 https://www.orionsarm.com/eg-article/45f483e195275 Sandberg left a few years back iirc
l, o, and furthermore, l
https://www.orionsarm.com/eg-article/486feb22f256b in which, in 2032, Richard Smalley admits general assemblers are plausible, among other things.
This reads like bad fanfic, from very self-indulgent fans of Bayes and Dan Brown.
In a way it is…

Something something The Great Disappointment

Quasimodo predicted all of this