r/SneerClub archives
Big Yud Asks the Most Important (18 Multiple Choice Answer) Question You’ll Ever Read (https://twitter.com/ESYudkowsky/status/1639272469033897984?cxt=HHwWgMC-7YLo7r8tAAAA)
31

[deleted]

[deleted]
I asked ChatGPT to translate it:

>You are tricked by at least one choice on this list because it targets your beliefs, biases, areas you don't know much about, or your own hopeful thoughts.

*tfw a language model makes more sense than you do*
No wonder AI terrifies him so much
people who really need to be replaced with a very small shell script
The language model roughly approximates the mean level of writing on the internet, and big Yud is far below average in his writing ability
Has anyone asked GPT-4 to write some text in the style of Yud?
Just tried it. It slides into *generic* pomposity with the yud-specific jargon bolted on, but it gives a good attempt: https://i.imgur.com/zODwjsq.png
[deleted]
Oh, so he's a *Continental* Rationalist.
[deleted]
I like how he has to condescend to the idea that maybe people's frame of mind will change after processing the future, because that doesn't "align" with his view that you are too selfish and prejudiced to maybe enjoy any outcome, by whatever means you might need to.

Prediction market is when the winning criteria are ambiguously specified and also evaluated 177 years from now.

giant inscrutable matrices of floating-point numbers

Because DNN interpretability research isn’t a thing. Of course, knowing that it is would require actually reading any of the literature.

My real answer is some mix of I, J, and D. Also K, if you count actual deep learning as different from the strawman deep learning EY has imagined via osmosis of playing around with more publicly accessible deep learning projects (i.e., playing with Stable Diffusion and AI Dungeon).

You're giving him too much credit by assuming he's just ignorant of interpretability work. I think his reasoning here is even more appalling and transparent than that: *I don't understand linear algebra, therefore no one else does either.* Yudkowsky is basically the poster child for "you don't know what you don't know".
hold up my world is being shattered here I'm not even kidding Yudkowsky has **always** laced his writings with off-hand references to dense(-sounding?) mathematical concepts, and extolled his mathematical ability as one of his greatest strengths. ...has he said something egregiously dumb about linear algebra for real? Like is this actually a known thing? I enjoy linear and abstract algebras more than probably any other topic in math — though I'm hardly a mathematician — so if he's been caught out since I stopped following him *please tell me about it*
[deleted]
holy shit *I literally never have. He just talks about it.* I can't believe I never noticed this before. I'm honestly trying to come up with one time I have ever seen him actually go through and *do the math*, as he likes to say (...I think? someone. someone likes to say it, anyway), and... I'm not coming up with anything. Stand by.
There's this idea that the victims of con artists are often guilty of some sort of fatal sin themselves; "you can't con an honest man" and all that. People who fall for ponzi schemes are guilty of greed, for example. People who fall for Yudkowsky's bit are guilty of intellectual hubris. The flipside of that hubris is a chronic fear of being found out as intellectual frauds. That's why he often gets away with confidently referring to concepts from advanced mathematics despite the fact that he clearly doesn't understand anything more advanced than maybe basic arithmetic. In order to directly challenge him you need to have genuine confidence, and that only comes from accepting the reality of your own ignorance. As far as I can tell, every rationalist - without exception - has a fear of their own intellectual inadequacy that is so crippling that they're willing to give credence to charlatans like Yudkowsky, even when they know that he's a fraud. The alternative would require accepting their ignorance and finding a way to value themselves that isn't grounded in a facade of intellectual achievement, and they're not able to do that.
Is that why the 'burned out gifted kid'/'school bad' thing is so common with them?
> There's this idea that the victims of con artists are often guilty of some sort of fatal sin themselves; "you can't con an honest man" and all that.

tbf, this is mostly said by the con men
>There's this idea that the victims of con artists are often guilty of some sort of fatal sin themselves; "you can't con an honest man" and all that. People who fall for ponzi schemes are guilty of greed, for example.

I *thought* this was going to lead into a comforting explanation of why I'm **not** actually guilty of intellectual sin for never questioning Yudyud... *— (except for the vegetarianism thing, which is really what made me distance myself; just too at odds with the image I had had of the fellow) —* ...but no, it just goes on to explain in cruel detail how I was misled as easily as a particularly-dimwitted toddler.

I see. Okay. Fine! I admit it: my ego is huge but fragile and I've built all my self-worth on being smart! Happy?! I've nothing left, now, besides the cold and meager comfort of my vast wealth and stunning good looks. It is a harsh world. As the rationalists say, wisely: *"Reality is a polyamorous mistress."*

***

You make an interesting observation, though: now that you've stated it, it's easy to think back and find a dozen instances wherein insecurity over intellect was blatantly apparent. I'm also thinking now of how sometimes, when I or another was the first to admit/question "hey this part doesn't make sense to me; what am I missing?" on particularly Euler'd-up and popular offerings, other posters might immediately change course from "oh yes exquisite mathematics there!" to criticism. Seems plausible to me now that these were cases wherein no one had wanted to be the first to admit they didn't understand. (What if it's that *you're* wrong, and everyone sees you aren't as smart as OP?!)

***

I hypothesize — with no evidence and only a modicum of thought! bold, or merely foolhardy? — that a similar dynamic might obtain in *any* community centered around a certain quality. That is: perhaps communities of ring- and watch-models, say, are rife with those who have placed their self-worth entirely upon their beautiful hands, and hence quiver with insecurity when the topic of the dread wrinkly-knuckle comes up...?

It's probably worst when the relevant quality is **intelligence**, though. Even the most dedicated hand model would probably rather have uglier fingers than be thought *slow;* and in the rationalist sphere, intelligence is often *explicitly* enshrined as The Only Really Important Thing, which can't help.

***

Real easy to equate yourself with your intellect, though. Even knowing of and conversing upon this pitfall, *I'm still sort of doing it right now even as we speak—*
> I can't believe I never noticed this before. I'm honestly trying to come up with one time I have ever seen him actually go through and do the math, as he likes to say (...I think? someone. someone likes to say it, anyway), and...

And, next, do this experiment with neuroscience, cognitive psychology, philosophy...
when he tried, with the quantum physics sequence, he got simple math wrong, many people noted it, and he *never ever fixed it*
He has said something egregiously dumb about hash functions (claiming a GPT type of approach could learn to break hash functions better than the state of the art, because rainbow tables exist on the internet and would thus enter its training data). That is something the GPT architecture just straight up can’t do, and it should be obvious from the number of times it trips over basic math… like counting the number of words in a sentence. https://www.reddit.com/r/SneerClub/comments/10mjcye/if_ai_can_finish_your_sentences_ai_can_finish_the/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=2&utm_term=1
Oh god it's so stupid in so many different ways. It's not just that chatgpt isn't good at basic mathematical reasoning. Even if chatgpt were quite good at actual reasoning, it still wouldn't have some god-like ability to invert hash functions; *that's the point of hash functions.* Moreover, Yudkowsky clearly doesn't know that gradient descent is not an efficient or effective way of solving combinatorial optimization problems. He doesn't even know what the distinction is. He apparently thinks that ChatGPT is a shiny box covered in arcane runes and filled with literal magic. I wouldn't be surprised if he thinks that computer programming is basically witchcraft.
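(For anyone who wants the "that's the point of hash functions" bit made concrete, here's a minimal sketch using only the Python standard library. The input string is arbitrary; the point is the avalanche property: flip one input bit and roughly half the output bits change, so there is no "slightly closer" preimage for any gradient-style learner to descend toward.)

```python
# Illustration: SHA-256's avalanche property means hash inversion gives
# gradient descent nothing to work with. One flipped input bit scrambles
# about half of the 256 output bits.
import hashlib

def bits(data: bytes) -> str:
    """Return the SHA-256 digest of `data` as a 256-character bit string."""
    digest = hashlib.sha256(data).digest()
    return "".join(f"{byte:08b}" for byte in digest)

a = b"correct horse battery staple"          # arbitrary example input
b_flipped = bytes([a[0] ^ 0x01]) + a[1:]      # flip one bit of the first byte

h1, h2 = bits(a), bits(b_flipped)
changed = sum(x != y for x, y in zip(h1, h2))
print(f"{changed} of {len(h1)} output bits changed")  # typically ~128 of 256
```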
Say you don't understand computational complexity without saying you don't understand computational complexity.
I don’t know about outright dumb things but I’ve literally never seen him actually *do* any advanced math
he doesn't understand that in real life, when evolution hits an NP-hard problem, it approximates a good-enough answer
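(A toy sketch of what "approximates a good-enough answer" looks like in practice: exact TSP is NP-hard, but a greedy nearest-neighbour heuristic gets a usable tour in polynomial time. The random cities and the choice of heuristic here are invented purely for illustration.)

```python
# Greedy nearest-neighbour tour for a random TSP instance: not optimal,
# but cheap and "good enough", which is the evolutionary move.
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(200)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbour_tour(points):
    """Greedy tour: always hop to the closest unvisited city."""
    unvisited = points[1:]
    tour = [points[0]]
    while unvisited:
        nxt = min(unvisited, key=lambda p: dist(tour[-1], p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

tour = nearest_neighbour_tour(cities)
length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
print(f"greedy tour length: {length:.2f}")
```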
To be fair though, interpretable/explainable ML research is Not Doing Great so far

Alternative title: Yud tries doing a push poll

He doesn’t seem to realize that a push poll is less effective when it completely exhausts its participants’ attention spans. He’s treating persuasion like it’s a war of attrition.

I really tried, but as always when reading him, the will to puke got too strong and I stopped reading midway.

More stamina than me

My AGI can beat up your AGI!

Prediction markets are the most deranged concept I’ve ever seen. Imagine looking at what market forces do to, you know, the real economy and thinking “hm yes I will let these establish my beliefs”

I don't have any problem with prediction markets. It stands to reason that when people have to put their money where their mouth is, you'll have fewer lazy predictions. This isn't really a prediction market, though.
Actual, existing markets are loaded to the gills with fraud without constant enforcement, and are prone to insane fluctuations based on rumors, mistakes, similar stock names, and fat fingered errors. And that’s for the relatively straightforward stuff like “this company makes shoes and sells them for a profit”. I think that creating a real prediction market would make things less stable, actually.
On the other hand, a massive amount of shoes do come out of this process
[deleted]
Also, why are lazy predictions such a big issue that we have to make a whole new market to fix them?
[deleted]
the other reason is so that Thiel etc can rig the market
>We are always given the answer: “well, pundits for example, bloggers, columnists, that sort of thing.”

And we always rejected that answer because it was mostly just grounded in knee-jerk prejudice against bloggers and columnists. It's not as if *they* don't go around *thinking* and *talking* to other people, or that those people are any less "real" than the people you talk to, or that the people you talk to don't have biases and idiosyncratic views of their own.

The whole point of prediction markets is explicitly *not* to assume that any one subset of humanity has a better read on reality than any other, but only to make sure that pundits -- whether professional or strictly amateur -- go broke quickly if they are bad at their jobs. (Again, this only applies to real prediction markets that have robust participation from a wide variety of viewpoints and actually do pay out sometime in our lifetimes with rewards that people care about, not --- whatever the hell this website is).
I would say prediction markets pretty clearly assume that the level of insight you have is gated by your disposable income and inclination to gamble. If you don't have it then your ability to participate in prediction markets is just as limited as your ability to participate in any other market.
[deleted]
> I strongly reject these two premises, the second for laughs, and the first because it is straightforwardly not the job of pundits, columnists, bloggers etc. to think about the things that they say in anything like a mode of public rational deliberation. This is basic media criticism, absolutely 101. That isn’t a knee-jerk reaction, and it should be common sense, but for whatever reason public understanding seems to in fact have gone backwards on this in the age of social media, and it is at least a basic platform assumption of serious (including academic) thought.

I am shocked, shocked that you do not consider David Brooks to be a serious thinker.
>What do you think deliberation between two or more people is for?

Right! If it works with two people, why not try smoothing away biases with an even larger aggregate, from a (theoretically) wider cross-section of the population, wherein participants are strongly incentivized to get it right? I dunno, sounds good to me.

>I have no idea where you’re getting this from, or how it would work, certainly it can’t form the basis of prediction markets as an idea.

Presumably, they would either be putting their money in the market itself, if they believed in their own horseshit; or, if not, "this guy refuses to hold public positions in the market and has worse accuracy anyway!" would be a hard-to-explain-away challenge.

***

Ultimately, I would guess we can't solve this by debate. We could check empirical data on how well prediction markets work, but also, that would mean expending effort and no matter how much meth I smoke I just don't want to do constructive things for some reason
[deleted]
I, uh... I can't, actually. *(I'm sorry! I'm not trying to insult anyone! I was born this way I swear I can't help it–)*

That is: the salient aspect of your initial thrust — deliberation between multiple people enabling consideration of more data from more angles, with fewer (shared) biases and blind spots — is replicated in prediction markets. My charming yet sly riposte to your wild but virile thrusting is, therefore, that — barring some *other* feature of the markets preventing these (multi-party-deliberation-) benefits from materializing, something we as of yet have no reason to suppose — you've nicely explained to yourself the basic intuition behind prediction markets. En garde!

>And those problems just happen to be great enough that I think you have to be a gullible fucking moron to put actual store by what the prediction market says.

I don't know... this **does** sound sort of like saying they can't get anything right, to m—

—...okay, fine, no, not *strictly*; they *could* get something right, but merely do so so little of the time that only a gullible fucking moron would put actual etc. But still, that's a pretty strong claim.

I need to be getting ready for work — and I believe you have already made me late, with your enticing combination of rage and humor — so I sha'n't\* go further into why I think they'll work. I will merely say, I suppose, that my position mirrors yours: I'm not saying they're *perfect*, just that they're *better than many (most?) other options.*\*\* In theory, mind. We need a big and liquid one to really test it out. This is a problem.

Still, I'd be willing to place some sort of bet on this, if you're keen — see where our predictions on accuracy diverge for current markets, say, or try our hands at a collection of predictions and pay out based on whether and by how much we were out-performed. (Or something like that, I don't know. It's sounding sort of like work now and I *hate* that stuff.)

***

\**(I spelled this correctly, edited it out, re-inserted it later... and now no matter how I write it it looks goddamn wrong.)*

\*\**(meaning realistic options; e.g., for the average person, "consult a symposium of world-famous experts on your question" isn't usually in the cards, even though it would probably usually out-perform the markets.)*
> My charming yet sly riposte to your wild but virile thrusting I'm asking for about 40% less purple prose moving forward, if you could.
Fine! I'm taking my ~~methamphetamine~~ ball and going home, then!
[deleted]
>I am imploring you to just write what you think normally.

Are you saying I'm not normal? That's... that's accurate, yes; but it is also very hurtful. I can't believe a place called /r/SneerClub would have people going around hurting feelings like this. But just for you, I'll try to emulate your *cold, soulless* "prediction market" robot idols, this time.

wait no that's *me* with the idols isn't it. shit

>I’m talking about the exchange of ideas, intercomparison of values, substantive disagreement and ideally reconciliation of discord; in essence what happens in talking.

I agree that these fine things may be found in deliberation; I'm afraid I don't really see the relevance here, though — at least, not in the context of our discussion so far. (Or: not for what *I've* been discussing; I'm becoming increasingly convinced we're not aiming our text-streams at the same conclusion-urinal. So to speak.) I.e., to get at the crux of the disagreement(?):

>You have abstracted out from what discussion is *for* to what instrumental goals you already think a prediction market can achieve

Absolutely. I defend only the notion that prediction markets can predict, and claim no other discussion-related virtue for them; nor do I suppose that they may replace discussion in all situations.

***

As your comment — "What do you think deliberation between two or more people is for?" — was in response to /u/cashto talking about prediction markets mitigating the effects of biases *when making predictions...*

...well, it seemed like a reasonable interpretation might be: "this individual is pointing out that deliberation between people can do this already, so this *doesn't mean the markets are any better at predicting.*"

If it's **actually** about how people can learn to agree but these soulless algorithms will never feel the warm glow of reconciliation; and *isn't* about how good the markets are at predicting things — then I've definitely badly misunderstood, and will gladly retire from this sub-debate.

***

In fact, if you don't disagree with the claim "prediction markets can probably out-predict most other sources" (for appropriate types of question — e.g. they're notoriously bad for long-term stuff), then I say we call a happy ending on the entire thing! (Unless you'd like to make a case for why the markets are good at predicting but still bad *in toto*; I'd surely find this interesting, but probably wouldn't have much to say about it for a while.)
everyone hates your writing style, please make your points less tediously
what the fuck am i reading are you forgetting you're not on LW and we do not want to read LW, we want to mercilessly mock LW
There is a growing crowd of LW types who show up here and, for some reason, seem not to view SneerClub as opposed to rationalism, but instead as the left wing of rationalism, or HM's loyal opposition. It's a concerning trend.
Yes. The moderation team will have to be more strict. We must also remind people that this is a descendant of /r/badphilosophy, and while we have not inherited their "no learns" rule, we certainly appreciate (and may sometimes enforce) the spirit of it.
on the other hand, it sure is fun using the green username thingy
I imagine it as speaking like DEATH in Discworld books.
for the love of god just please stop talking
Jealousy, good sindikat, clearly erupts, in hot, sticky spurts of negativity, from the moist yet figurative crevices of your comment. You have let envy cloud your fingers, so to speak. Rather, *learn* from the virtuosity in prose you have been granted the opportunity — or, perhaps, dare I say: the *privilege?* — to lap up and swallow down, to taste with the eager tongue of cunning linguistics!
we ask that you stop posting like a fuckwit. thanks!
> Right! If it works with two people, why not try smoothing away biases with an even larger aggregate,

by the way, it turns out that when you try this - I'm using the real-world worked example of Augur here -

(a) you get ridiculously convoluted claims put to the prediction market so that the creator can rules-lawyer why you lose and they win

(b) multiple rounds of such rules-lawyering on any question whatsoever (which *of course* you're going to have, because it's humans fighting over money)

(c) the killer refutation: not enough people show up and the markets are past "thin" and into "dead".

This is even before we get to (d) the bit where the CFTC comes calling about running an unregistered futures exchange.
> real prediction markets you're putting all the work for this comment into one word there
Nah. Because more money, more voting power. Nothing democratic (or useful) about prediction markets.
Prediction markets have existed in the form of futures and insurance exchanges for a long time, though.
True, but futures were never this absurd
Those are largely prediction markets based around things that actually directly involve trade and money. In some ways a future on nickel is a bet, but it’s driven underneath by the fact that someone, somewhere needs to buy nickel to make batteries or whatever. Attempts to make actual prediction markets have often failed. They delisted weather futures last time I checked. Sports and election betting are the only successful examples I can think of.
And sports betting. Prediction markets are basically just prop bets with a yuppified name and less restricted domain.
there are multiple examples of bozos setting up prediction markets that fall within the remit of the CFTC, who proceed to fine their asses. It turns out we have laws about this shit already.
Sure. I said *fewer* lazy predictions, not *no* lazy predictions. Elon Musk could walk in and put $1 million on "aliens will land on the White House lawn tomorrow" and drive the price to 99c. You still need enough other participants in the market with slightly better prediction abilities to go, wow, bet a penny to win a dollar, thanks for the free money Elon. If there is robust participation that includes a wide variety of perspectives, then it will equal out. If it's some obscure website where certain viewpoints are overly represented, well, even "one man one vote" won't help you there.
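(The "bet a penny to win a dollar" arithmetic, as a toy expected-value sketch. The contract structure, price, and probability below are all invented for illustration; real markets add fees, position limits, and counterparty risk.)

```python
# Toy model of the correction mechanism: a binary contract pays $1 if the
# event happens. If someone pushes the price to 99c while the event is
# essentially impossible, buying "no" is nearly free money.
def ev_of_no_bet(price: float, p_event: float) -> float:
    """Expected profit per contract from buying the 'no' side.

    Buying 'no' costs (1 - price) and pays $1 if the event doesn't happen.
    """
    cost = 1.0 - price
    return (1.0 - p_event) * 1.0 - cost

# Elon drives "aliens land tomorrow" to 99c; your honest estimate is ~1e-6.
print(ev_of_no_bet(price=0.99, p_event=1e-6))  # ~0.99 profit on a 1c stake
```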
My actual threat model, if a government tried prediction markets for real, is stuff like: a company notices it can make X dollars more profit if it wastes Y dollars on putting out a bad prediction, and X > Y.

Like, all the fossil-fuel-profiting companies would make predictions against global warming being real, and then the prediction markets would say it isn’t real, and right-wing pundits would ask why the market is saying it isn’t real and why don’t the scientists put their money where their mouth is if they are so sure. Eventually the fossil fuel companies would lose big, but the additional profits they could make in the meantime, with no policies like cap-and-trade or carbon taxes in place, would more than make up for it.

Of course, that assumes the prediction market setup even got as far as a fair, honest and accurate evaluator setting the payout judgments. The fossil fuel companies could lobby and get favorable appointments among the people evaluating damages from global warming, and then the scientists don’t even get the market payout for being right, because the committee put together to evaluate global-warming prediction-market payout questions is filled with Trump appointees who think more hurricanes are god punishing gays and not the result of increased average global temperatures.
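(The X > Y logic above, as a deliberately trivial toy model. All figures are made up; the function name is hypothetical.)

```python
# Toy manipulation math: distorting the market costs you Y in expected
# betting losses, but buys you X in profits from delayed policy.
def manipulation_pays(extra_profit_x: float, expected_market_loss_y: float) -> bool:
    """Is distorting the market rational for a motivated actor?"""
    return extra_profit_x > expected_market_loss_y

# e.g. burn $50M on bad "warming isn't real" bets to protect $2B in profits:
print(manipulation_pays(extra_profit_x=2e9, expected_market_loss_y=5e7))  # True
```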
Yeah the assassination market problem isn’t just restricted to actual assassinations
The point isn't to root out lazy predictions, it's to make a weighted ensemble of predictive models, as is standard practice in applied statistics when you have the ability to make many predictions and want to use them to produce a single better prediction. Of course, anyone who understands capitalism will know that using *money* to weight the predictions is gonna fuck shit up royally.
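(A minimal sketch of the applied-statistics version of that idea: weight each forecaster by historical accuracy rather than bankroll. The forecasts, Brier scores, and the inverse-Brier weighting scheme are all invented for illustration.)

```python
# Weighted ensemble of probability forecasts: better track record
# (lower Brier score) means more weight, regardless of wealth.
def weighted_ensemble(probs, brier_scores):
    """Combine forecasts, weighting each by inverse historical Brier score."""
    weights = [1.0 / max(b, 1e-9) for b in brier_scores]
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

# Three forecasters say 0.7, 0.4, 0.9; the sharpest (Brier 0.10) dominates.
print(weighted_ensemble([0.7, 0.4, 0.9], [0.10, 0.25, 0.18]))  # ~0.70
```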

Presumably leaves out the option “alignment as I pose it is widely regarded by actual AI ethics thinkers as irrelevant and silly, if they even think of me at all”.

Is he not including the option “Artificial General Intelligence will not be able to cause significant harm to humanity”? He writes poorly enough that I’m not sure having read the options once, and I can’t stomach reading them again.

Ah, but if you're able to effectively question the framework that his whole edifice is built on, then it becomes obvious that he's a charlatan. Can't have that.
The one scoring the lowest is: > It's impossible/improbable for something sufficiently smarter and more capable than modern humanity to be created, that it can just do whatever without needing humans to cooperate; nor does it successfully cheat/trick us. Which is _kinda_ in the ballpark? In a "reject the premise" kinda way?

[removed]

Another old account hacked. Seems to be happening a lot these days. Wonder what's up, or if I'm just randomly seeing a few of them (across all of social media) the past few days.