r/SneerClub archives

Rationalists spend a lot of time obsessing about prediction markets, even though I haven’t seen much evidence that prediction markets (especially the toy markets they prefer) do a particularly good job of getting at the truth. As a corollary to this, there is also a certain “betting culture” that often crops up in debate on rationalist websites: any argument can be pre-empted at any moment by the demand that you put down money on your position (typically at some absurd odds-ratio that is guaranteed to be a financial disaster for the person accepting the bet.)
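To make the "absurd odds-ratio" point concrete, here's a minimal sketch of the arithmetic (the function and the numbers are mine, purely for illustration): even if you're probably right, accepting a bet at sufficiently lopsided odds has a sharply negative expected value.

```python
# Toy illustration (made-up numbers): expected profit for the person who accepts a bet.

def expected_value(p_right, stake, payout_if_right):
    """p_right: your honest probability of being right,
    stake: what you lose if wrong, payout_if_right: what you win if right."""
    return p_right * payout_if_right - (1 - p_right) * stake

# You think you're 60% likely to be right, but the challenger demands
# that you risk $1000 to win $10 (100:1 against you).
print(expected_value(0.60, 1000, 10))  # -394.0: losing on average despite probably being right
```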

In the past I’ve mostly viewed the latter phenomenon as a kind of dominance game: it’s a winning rhetorical move to make your opponent take a stupid bet. If they take it, you win. If they don’t, you win. Presumably any community built around “being right on the Internet” is going to optimize for rhetorical techniques like this, rather than trying to get at the actual truth.

But this does not explain the prediction market obsession, and I’m wondering if anyone has theories about what’s going on there?

I think it also appeals to the quantification fetish that the Rationalist community has. It sounds more math-y to say “I have invested in a prediction market on Y proposition” than it does to simply say “I believe Y” or even “I believe Y is X% likely to be true”, which is the other standard move. And you can pretend to apply all kinds of fun economic and financial analysis tools, which feels really math-y.

There’s also probably a misreading of Nassim Taleb’s Skin in the Game there, since I doubt that participation in one of these toy markets represents a non-negligible amount to lose for basically any participant.

It also shows how their "rationalism" is just a reductionism that gives them massive blind spots. If Elon Musk and a Ghanaian subsistence farmer both put down a dollar for something, that's a vastly different signal. To know what a dollar means to someone as a signal you have to know... pretty much everything about them. And that's the basic misunderstanding: putting numbers (especially dollar values) on everything is a LESS predictive way to look at reality, not a more predictive one, because of all the messy human information you have to discard.
Yeah — Taleb’s skin in the game doesn’t represent rationalist social cred, no matter how much they would like it to :p
As obnoxious as Taleb can be, he’s been the only one I’ve seen publicly criticizing Tetlock’s “super forecasting” prediction markets. Nassim’s argument that prediction markets are useless for non-binary outcomes like war etc. and only have a niche use for things like the presidential election seems to be dead on, and no one has really addressed it
Also from what I can tell about Taleb, a lot of his obnoxiousness is a character he plays because he knows it gets him media attention, and most of his actual seriously held positions are quite moderate. Certainly I prefer his kind of obnoxiousness to EY's or any of the rat crew.
I have been utterly unable to take Taleb seriously since [this](http://web.archive.org/web/20150924032003/https:/twitter.com/nntaleb/status/646695656778821632) fucking banger
Exaggerating your opinions for attention, isn't that just literally being a grifter, though? 😬
I meant more his apparent narcissism than his factual claims, which are stated grandly but aren't actually hugely exaggerated in my experience. I haven't read his books though, so I don't have the complete picture.
> As obnoxious as Taleb can be, he’s been the only one I’ve seen publicly criticizing Tetlock’s “super forecasting” prediction markets.

Link? I would love to read an obnoxious criticism of this
Here’s the paper: https://arxiv.org/pdf/1907.11162.pdf and he has a sneer on his website under item 3: https://www.fooledbyrandomness.com/smear.html They also went back and forth on Twitter.

There have been plenty of good answers here, but on an emotional level (that often isn’t recognized as emotional), a person who always needs to be right is often terrified by uncertainty and abstraction. Prediction markets point at the possibility that everything is knowable, which is soothing.

“Terrified” might seem like a funny word because rarely does someone who always needs to be right seem afraid. That’s often because their defense mechanism for everything is trying to intellectually dominate it, so instead of looking afraid, they look smug and self-satisfied, secure that they are above the unknowable.

Another factor is that they have a strong libertarian streak and they fetishize markets to the point they think they are magic. A market of prediction bets therefore would be a powerful epistemic tool in their logic. Never mind that real world markets often require strong regulations and oversight to stop them from spiraling in insane directions.

Yeah, as far as I can see it’s just a repackaging of Hayek and the efficient markets hypothesis 15 years after a prominent instance of markets spiralling in insane directions led most people to dismiss all of this as fantasy, if they hadn’t already done so.

Fetishizing of famous scientific bets.

That’s all this is: these dudes read about the epic bets between Penrose and Hawking and that’s basically the long and short of it.

The "epic bets" that resulted in things like Hawking giving Penrose a baseball encyclopedia?
That was Hawking vs John Preskill. Hawking had another bet with Kip Thorne in the 70s where he had to get him a 1 year subscription to Penthouse. By the time they settled the bet Thorne had gotten married but Hawking still insisted on honoring it, lol.
Damn where's the prediction market where I can win a subscription to Penthouse?

Having traded commodities, I wish them the best of luck. Especially since the trading is now dominated by algorithms, quants, and a spread that makes it pretty much untenable unless you work for a big house.

Indeed -- if they really wanted to put their money where their mouth was, and believed that they could actually predict the future, they'd trade in the usual economic instruments and environments rather than toy models where your only reward is validation.
Wait, are you saying they’re not trading, they’re doing virtual trades? Ok. There is a universe between virtual trading and actually trading. Everyone’s a big deal until they do it for real.
Quite a few are yeah. The serious prediction markets (with actual trades) tend to be much more accurate of course — tho they still suffer from the same fundamental problems as all prediction markets will.
They don't necessarily believe they can *personally* predict the future (although they do believe in an elite group of "superforecasters" with this superpower). Rather they use prediction markets as part of their toolbox for personal predictions. So what they're more likely to do is try to invest based on results from prediction markets.

“markets as perfect sources of information” has been a popular belief among fools and jerks throughout the last century or so. like everything rationalist, they claim to have discovered another idea boring old conservatives already invented.

Market demagogues cherrypick the good and ignore the ugly. While markets can definitely *incentivize* the disclosure of information, one can also incentivize objectively false information if that turns out to be more profitable ("markets can remain irrational for longer than you can remain solvent"). You can manipulate markets to force "ground truth" pretty divorced from reality. Just because markets price in rational factors *eventually* doesn't mean they're a particularly good source of ground truth at any given moment; it works well merely in retrospect.
it's also a bit fatalistic right? what actually gets included in a "market"? how are the rules defined? they are human institutions, not natural phenomena.
writing about cr*pto has thoroughly disabused me of taking efficient market hypotheses the least bit seriously, and I consider anyone who constructs apologia for them obviously a clown. Always, always first look for the visible thumb on the scale of the market
Pity that the Principle of the Hiding Hand is already taken as a coinage.

[deleted]

I do find the Machiavellianism to be quite embarrassing for them, since doing your scheming in plain sight on public blogs is rather self-defeating
Which ironically is *precisely* what Machiavelli wanted. By frankly describing the methods of a prince (that is, scheming and ruthlessness), he exposed the essence and implications of that philosophy, and thereby indirectly argued in favour of democracy and compassion (in such a way as to try to get his job back :p )
[deleted]
> If that is the case, why rationalism specifically? And why betting markets? What features do these conjoined ideas have that competing ultra-scientific or ultra-tech setups are missing? Well the answer is that rationalism openly (as in hiding in plain sight) promises you a set of tools that you get to use to undermine other people.

I think the unifying feature is a bit different from that: rationalism, betting markets, and their various other obsessions are contexts in which the individual is maximally empowered and no teamwork is required, which is appealing to people who don't get along with other people very easily. Who do you need help from in order to understand the world? Rationalism says: no one. Or, how much help will you need to be able to materially profit from knowing something about the world? In betting markets the answer is: none. Other ultra-quantified activities - e.g. science, tech companies - involve a lot of interpersonal dynamics and teamwork. Someone who doesn't understand their own emotions, who is hypersensitive to criticism, or who can't navigate conflict constructively (all traits that rationalists tend to have) will have a lot of trouble operating effectively in those environments.

I think this also explains why they put so much of their discourse out in the open. If you want to build a community, but also keep that community's activities well-hidden from people who aren't already members, then you have to invest a lot in finding and recruiting people to join your community. But if you're inherently bad at interpersonal activities then it's going to be hard to do that. It's a lot easier to be very open about what your community does and hope that like-minded people will naturally gravitate to it.

It also explains your observation that they tend to think about things in adversarial terms. If you're bad at operating in a team environment then it's probably a lot easier to interpret any given situation as being zero sum, even if it doesn't have to be.

I think there’s also a connection to their AGI mythos. In particular they believe that an AGI would be able to accurately predict the future, and since AGIs are just math, it should also be possible for humans to predict the future with math.

That's something that's always struck me as strange given that chaos theory is also math. Even just the three body problem is famous enough that I can't believe they wouldn't be familiar with it. Heck, even very simple iterative processes can result in extreme complexity full of surprising details and edge cases - as anyone who's played with fractal generation knows. Even if we do someday invent a near-mystical level of AGI, it's not _magic_: the limits of computability, the laws of thermodynamics, the uncertainty principle, etc. still apply.
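(In case anyone wants to see the "simple iterative processes" point for themselves, here's a minimal sketch using the logistic map — the parameter is just the standard textbook chaotic value, nothing specific to this thread:)

```python
# Logistic map x_{n+1} = r*x*(1-x); at r = 4.0 it is chaotic, so two starting
# points that differ in the ninth decimal place end up in completely different states.

def iterate(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = iterate(0.200000000)
b = iterate(0.200000001)  # differs by one part in a billion
print(a, b, abs(a - b))   # after 50 steps the two trajectories bear no resemblance
```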
Indeed the rationalists have discussed chaos before, and this very sub had a thread about the corresponding LessWrong post: https://www.reddit.com/r/SneerClub/comments/1273zxr/you_cant_predict_a_game_of_pinball_lesswrong/ Long story short, someone posted on LessWrong with a simple example of chaotic motion and explained how this shows that accurate long term predictions are impossible, and a bunch of people showed up in the comments and basically said "no you're wrong". My interpretation of this is that they believe *axiomatically* that it is possible to know everything, and as a result they simply reject the obvious consequences of things like chaotic motion.
but a *sufficiently advanced* AI,
That was a fascinating read since I rarely run into these guys in the wild (don't use much social media outside of reddit). The comments on the article boiled down to reiterating what the article already said about how human pinball players play: using heuristics to keep the ball in areas that are easier to control. Only using absurdly verbose language, and without acknowledging that they are in fact _heuristics_. It's like they technically acknowledge what the math says without internalizing the implications, trying to treat it like some kind of outside edge case.
It's exactly like every other kind of religious apologetics. The length and the complication contribute to their goal, which is to give their beliefs the appearance of having a rational foundation. Being concise and clear would only reveal (including to themselves) that the things they believe don't make sense. The most amazing thing about it, to me, is that they've appropriated the language of real science and math for purely religious purposes. Chaotic motion is sort of wondrous if you really think about it, but they've decided to reject the real magic of science in favor of the fake magic of a world that they can fully understand and control.
> Even just the three body problem is famous enough that I can't believe they wouldn't be familiar with it. it turns out rationalists don't know shit, and answer all such questions with "but a *sufficiently advanced* AI,"
> I can't believe [rationalists] wouldn't be familiar with [a thing]

I have some bad news
How are we being this speculative about unknowables at scale in a thread about unknowables at scale? Can you not see at least a degree of irony here?
Uh... what?
The amount of armchair psychology going on to deduce the causal nature of their positions here is a bit ironic—I'm not sure what there is to be confused about.
Ah, so you can use words to coherently express concepts. Now to actually respond, um... the point of this is dissecting their ideological frameworks. I'm pointing out that the plausibility of effective omniscience is fundamental to that framework and drawing a clear conclusion from that assumption. Given that actual evidence and science implies that such future prediction is essentially impossible, one has to ask the question of where the focus on prediction markets comes from. I'm just proposing one aspect of that, which is that all that evidence and science is rejected in the mythology, and so it makes sense that they'd reject it in practice too. I wonder, have you ever actually studied the humanities? Maybe you'll see the difference between these different kinds of speculation.
Interesting—GPT understood the implications of my first comment without any context. From my perspective, the actual science says it's possible, so what now? The point is, I promise you operate under a functionally comparable degree of speculation consistently (in that it crosses a threshold of complexity placing it in the category of fundamentally unknowable), making judgements that crystallize unbeknownst to you as knowledge in your mind and contribute to your bias towards the out-group. Both groups do it, call it out on the part of one another, and repeat. It's asinine.
You brag about how your robot can understand your pointlessly oblique terminology (seriously it sounded like the unknown unknowns speech) — while at the same time completely failing to understand what I’m saying. But it’s okay, the humanities are asinine. Basic communication and dialogue aren’t important if we all get paperclipped or acausally fried with nanorobots right?
It's plain english dawg, and the bot got it because the semantics and syntax are ironclad—that's on your blind spots, which isn't a problem. I didn't say the humanities are asinine, I highlighted this example of you attempting to apply humanities methodology by conflating presenting subjective hallucinations about literal unknowables with "doing humanities", all the while offering scathing critique of those attempting to navigate the unknowable. It's not any deeper than a surface level irony I thought was interesting.
Lol. Lmao. You really can't figure this out? I'm almost beginning to feel bad. Oh and by the way, if you think Scott Siskind's writing (Scott Alexander as you might know him) is any good -- as your little spiel about outgroups and profession of both-sidesism seems to imply -- I'm afraid it's not. Well no, he's a really good writer unfortunately. He's just bad at logic, and at not being a fascist.
Now we're just gaslighting... Have a good one, I hope you win the in-group out-group game you seem insistent on playing. When you sufficiently unpack the collective temperament of the implied monolith that is LW, let me know man—I'm on the edge of my seat for when you inevitably nail that one down.
I'm on the edge of my seat for when you figure out how to interpret text and coherently respond to its content. This is one respect in which I'm willing to admit AI are superhuman, in the particular sense that they're better than you at it.
When your only available insult implies you're the confused one, it's a sad state of affairs. Irony incarnate. Keep going!
Okay this is getting boring now. You've stopped being funny, and so my amusement at your comments has dried up. Have a good day I guess? Or don't, if you really don't want to.
Aw, you were on such a roll :( I will! Btw you'd fit right in on LW. You're just like them, however I do have to generally give them the edge because at least by definition they acknowledge the framework they adhere to. You ultimately adhere to the same framework but pretend to float above the conversation, so there's that. Godspeed, Dirichlet!
You realise that like... I could thoroughly explain my beliefs and frameworks. I just didn't here, because this was funnier and honestly right now I don't have the patience or energy to apply actual serious levels of thought to this (and if I was going to seriously engage, I'd give you that courtesy at least). Like seriously, how do you think I'm working with the same framework when, throughout this conversation, I've said almost nothing about my own beliefs? All of the things I did say were conclusions, with no reasoning attached. Your inferential skills must be incredible lmao. I admire your willingness to go for bait though lol.
You don't have to explain your framework for me to understand that you are sifting through information, distilling approximations about things you can't possibly know the same as everyone else. You simply contextualize the scope of your personal approximation within the acceptable but ultimately arbitrary limit as defined by your in-group. I have nothing to lose here—I think both groups are generally self-masturbatory and unproductive relative to less rigid self-actualized communities. You certainly love to read your own writing and that's really clear in the volume to substance here, which you've conveniently packaged as apathy towards the conversation, but that doesn't have any bearing on what's actually happening.
> You don't have to explain your framework for me to understand that you are sifting through information, distilling approximations about things you can't possibly know the same as everyone else. You simply contextualize the scope of your personal approximation within the acceptable but ultimately arbitrary limit as defined by your in-group.

You are sifting through information, distilling approximations about things you can't possibly know. In this case, relevantly, my political experience. You assume that I operate on an ingroup outgroup model here. You're almost right, but you're not -- the order of operations is slightly different. Certainly those who identify with LW and similar are, in a certain sense, designated enemies. But I don't designate them as enemies because of that identification. I do so as a consequence of their actions and words, which vary from ridiculous to plainly evil. But I have no intention of lifting myself above the conversation in that regard. I think most people do things like that, including yourself. On the other hand, I haven't started an apocalypse cult based on superintelligent AI, so I think I'm still doing pretty well in comparison.

You accuse the sneer club of being "self-masturbatory" and "unproductive". I agree. I don't think anyone ever claimed otherwise lol.

And yes, I do love to read my own writing, thanks for noticing. Clearly it's something we have in common given "You simply contextualize the scope of your personal approximation within the acceptable but ultimately arbitrary limit as defined by your in-group." It's such a beautiful trainwreck of a sentence, with far too many words which are far too long. And not only that, but all those fancy words totally squander their richness, and instead fail to evoke any meaning at all.

Anyway, I hope you enjoy meta-meta-elevating yourself above the conversation lmao. Your smug superiority is gloriously obnoxious, as I'm sure is mine.
These comments don't describe a top down temperament—it goes without saying we're describing the qualities and implications of an assertion you made, and not a sweeping claim about how far that extends into all of your interactions. I'm sure you mostly align with your own self-labels. Remember, this started with surface level irony. Surface-level is a very important detail. Text-wall was a fun read and will certainly help me with such extensions, but I'm afraid you've overextended. Edit: the bit about my beautiful train-wreck is really fucking funny—posturing, but profoundly funny. Thanks for that. I'll be honest I think we're both closer to the same temperament than anything.
I think you're still vastly overestimating my level of seriousness in this conversation. I'm very good at composing text-walls with minimal effort. Certainly, I haven't expended any more than surface level effort here. Hence I find it very funny when you say things like "I'm afraid you've overextended", like Sherlock or Moriarty smugly gloating at some great intellectual victory.
Likewise, shitposting abound, latent everywhere. I promise you don't know where my seriousness starts and stops in exactly the same manner.

I think Rationalists subscribe to the subjective interpretation of probability – the interpretation that allows assigning probabilities to any unknown event based on “degree of belief” – and that interpretation is best understood through betting.

In fact, one could say that Rationalists define their notion of rationality as displaying betting behaviour that is consistent with subjective probability. Indeed, they make frequent allusions to Bayesian reasoning.

Dear rationalists, if frequentism is so cringe, what are you updating your priors off of then 😎
They update their priors?
They make noises that contain the syllable "Bayes". I'm highly skeptical that the psychological appeal which prediction markets have for them has anything to do with Dutch-book arguments. I mean, Dutch-book coherence is all about a single gambler trying to be internally self-consistent, not participation in a market. The arrogance, the quantification fetish, the desire for tools to make other people shut up — those come first. Any invocation of actual math in the foundations of probability theory comes much later, if at all, to paper over questions.
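(For anyone unfamiliar with the term: a Dutch book is a set of bets, each acceptable to a gambler given their stated credences, that nevertheless guarantees them a loss. A toy sketch with made-up numbers — the point being that it's about one person's internal consistency, not about markets:)

```python
# A gambler whose credences in "A" and "not A" sum to more than 1 will pay
# more for the pair of $1-if-true contracts than the pair can ever pay back.

credence_A = 0.7      # degree of belief in A
credence_not_A = 0.5  # degree of belief in not-A (incoherent: 0.7 + 0.5 > 1)

cost = credence_A + credence_not_A  # gambler pays $1.20 for both contracts
payout = 1.0                        # exactly one contract pays $1, whatever happens
print(cost - payout)                # 0.20: a guaranteed loss either way
```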
I mean, I also desire reliable tools to make rationalists shut up.

I think it’s just a result of being libertarian-adjacent. Markets are the hammer and any problem becomes the nail in rationalist ethos - prediction markets being one instance of the “invisible hand” thinking that’s popular in those circles.

Market failures (such as the Keynesian beauty contest) are not something that exists in the Rationalist world; even market failures are purportedly “rational”.

“Rationalist” as a self-descriptor implies that others aren’t rational. Specifically, LWers are obsessed with the idea that most people have no concern for truth and just profess to believe what’s convenient or socially acceptable. Naturally, it has a lot of echo with the right wing discourse around “virtue signaling” and “NPCs”, and with the EA thing about money as the “unit of caring” (if you really were a good person you’d have the most money possible spent on charity).

So the idea there is that when money is on the line you cut through the bullshit and see what people really believe.

Considering the (suggested; I'm not a neuroscientist) links between neurodivergence and addiction, it also seems a bit unhealthy for their own followers to focus so much on this.

It is a bit like they looked at markets, Taleb’s skin in the game, and gamification and went ‘we gotta make this worse!’

what’s going on? they call themselves “rationalists” because they’re obsessed with applying methods of pure reason to everything. they love betting markets because they love Bayesian probability because they love quantified models of the world.

it’s a fetish. it shouldn’t be taken seriously.

So, I’ve said before I don’t really have much of a problem with prediction markets. I think they’re generally accurate, provided a) there is a real incentive to “win”, i.e., real money or equivalent motivating rewards, and b) there is broad participation from a set of diverse perspectives – so, something like IEM, or even a sports book in Vegas, generally qualifies.

I view play-money sites like Manifold Markets as a poor simulacrum of an authentic prediction market. Many of the markets there have poorly defined payout conditions, are unknowable even in principle (“In 2024, how many named hurricanes in the Atlantic will there be?”), can be directly controlled by the participants (“what will I have for breakfast tomorrow?”), or are self-referential or even paradoxical (“this market will only pay out if ‘No’ wins”), and of course the market participants skew heavily towards a particular demographic, so huge amounts of rationalist group-think abound. So I see no reason to take these markets as evidence of anything.

As for the cause – I can’t agree that it’s a “dominance game” or some sort of hypercapitalist ideology (“only people with wealth have valid opinions”) – since it’s all play money. I just view it as another instance of Rationalists pretending to be rational when they’re really not; indulging in having a form of rationalism but denying the substance thereof.

It short-circuits debate in the manner you describe, and it was promoted heavily by one of /theirguys/, Robin Hanson.

In Australia we have this ad campaign that is solely based around conflating the words “market” and “meerkat” so now I propose the prediction meerkat being a sub mascot

> though I haven’t seen much evidence that prediction markets (especially the toy markets they prefer) do a particularly good job of getting at the truth

This is pretty standard for all ideologies that fetishize markets. Actual evidence of how markets perform is not only irrelevant, it isn’t even considered.

There’s already that guy who lost the million-dollar bet on Twitter over Bitcoin. I would love to take the other side of a bet with these fools.

https://noahpinion.substack.com/p/in-which-balaji-gives-away-at-least

Prediction markets /should/ work, if there is enough liquidity and the market is efficient (and if we are talking of some reasonably forecastable event, and not questions like “Will Donald Trump smile in his mugshot?”).

The idea that the forecast produced by a prediction market is a consistent estimator of the probability of some event is a cool one. It gives us a tool to predict future events. For good forecasters, it is a way to earn a bit of money (and have fun). And there is a very easy metric (total earnings) that we can use to compare forecasters - telling if somebody is good or bad at predicting the future.
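Roughly, the textbook logic is that if the market price of a $1-if-it-happens contract sits below the true probability, buying is positive expected value, so better-calibrated traders accumulate earnings over many markets. A minimal sketch of that idealized claim (the numbers, and the assumption that traders simply bet their honest beliefs, are mine for illustration):

```python
import random

# Idealized illustration: a binary contract pays $1 if the event happens.
# When the market price is below the true probability, buying it is +EV,
# so a well-calibrated trader's average earnings approach the gap.

random.seed(0)
true_prob, market_price, n_markets = 0.6, 0.5, 10_000

earnings = 0.0
for _ in range(n_markets):
    happened = random.random() < true_prob
    earnings += (1.0 if happened else 0.0) - market_price  # buy one contract per market

print(earnings / n_markets)  # ~0.10 per contract: the gap between probability and price
```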

And I guess that the ‘obsession’ you mention is due to a combination of factors - there is some research backing them, there is the gambling aspect, the competitive side, the fact that they can be built onchain (therefore, decentralized and not controllable by regulatory bodies). Tbh, it is one of the few things on which I agree with rats :)

Prediction markets have been a topic of interest for rationalists and prediction enthusiasts for some time, and while there may be debates about their effectiveness, they continue to gain popularity. One possible explanation for this could be the potential for prediction markets to provide more accurate forecasts than other methods, especially when it comes to complex events like elections or stock prices. Additionally, the use of prediction markets in decision-making can incentivize participants to share their knowledge and expertise.

As for the “betting culture” that sometimes arises in these discussions, it’s worth noting that prediction markets can be used for more than just gambling. In fact, many platforms are designed specifically for traders and investors who are looking to hedge their risks or make informed decisions about their investments. One example of this is Polkamarkets, a decentralized prediction market built on the Polkadot network, which aims to provide a safe, transparent, and accessible platform for traders and investors to engage in prediction markets.

Overall, it seems that the popularity of prediction markets is likely driven by a combination of factors, including the potential for more accurate forecasts, the ability to incentivize knowledge sharing, and the opportunity for traders and investors to hedge their risks and make informed decisions.

Dear GPT-generated polkadot shill bot, Indeed, the implosion of such schemes is a testament to how idiotic this is. For anyone taking the above post seriously, look up Schelling points - they're a fun oracle until one day, they ain't. Consistently ignoring externalities of the system has a price of its own.

What is the nature of the beef with rationalism? By rationalists do you mean every human being that self-identifies as a rationalist, or is the implication that only the specific subset of rationalists who check the boxes tied to criticisms offered on this sub are who are being referred to? At that point, wouldn’t it be just as useful to differentiate and delineate if, generally speaking, the rationalist community isn’t homogeneous in temperament, position or belief? Admittedly my experience is anecdotal and therefore not indicative, but I just don’t see the universalities outlined in this sub as fundamental to the rationalist position expressed so consistently on LW.

I’m new to this whole space and just want to understand how this sub isn’t the opposite side of the same coin LW is typically associated with.

>how this sub isn't the opposite side of the same coin It's probably more, like, as the rationalists would say, *orthogonal*. Perhaps it would be more technically correct to contextualize criticisms of the rationalist community within a systematic and nuanced taxonomy of their culture and its dysfunctions, but I personally do not believe that technical correctness is always the best kind of correctness. Also being persnickety just isn't very much fun.
I'm just so confused because this response is identical in the other camp
I refuse to believe that any rationalist has denounced persnicketiness
They do, just along a different vector

Is a prediction market, in the idealized case, still only as good at predicting things as the people who use it? Or is there some mechanism that’s supposed to ensure accuracy beyond that?

I have the impression that the betting thing is targeted at creating and strengthening ties with the group… Can you get out of the agreement of a bet that might never get settled? Well, yes, if you decide to pay or figure out that you were manipulated into it…

This looks like something that a cult would do to create commitment in its members. Besides, betting will make it harder for someone to change their mind about things.