r/SneerClub archives
how the fuck did big yud get seven million bucks for his roboapocalypse think tank (https://intelligence.org/2020/04/27/miris-largest-grant-to-date/)

Hooking the Ethereum people on this nonsense was quite clever, because (a) a lot of them have more Eth than they could ever reasonably dispose of, (b) it’s relatively exchangeable for actual money in small quantities

and most importantly: (c) they’re crypto guys, therefore they don’t know shit about shit and are used to reasoning from first principles, with hilarious results, rather than asking people who actually know things.

Surprised he got a BitMEX guy in - BitMEX are a Bitcoin derivatives exchange who are cheerfully ruthless and don’t give a shit.

I honestly respect the grift. I’d rather have billionaires funnel their money into HP fanfic than have them destroy livelihoods with “disruptive” startups or the furthering of a massive surveillance apparatus

Read their "research" page. Their papers are total gems. Seriously admirable dedication. E.g. they have one where they called something like probabilities "market prices", which would explain the appeal to whatever bitcoin thing they just grifted. Curiously, they're to some extent still sticking with good old-fashioned AI in this day and age (when deep learning is the buzz of the moment). My concern is whether they are still milking private individuals out of their last dollars, as they used to (there were at least 2 people who kept donating all they could; I've no idea what happened to them). If they are only milking the rich, and if they've stopped sending arrogant and stupid greentexts to actual AI researchers, then I guess they're fine.
[deleted]
And *with* an added dollop of jargon to make sure serious mathematicians won't waste their time trying to critique it properly.
Yeah, I was pretty sure I'd seen this general idea with fuzzy reasoning before, but there's enough jargon to just not bother. Their general ideas of how to build an AI are kind of insane, though; it's like they're trying to build an insane AI. The AI has to be a believer in the multiverse, and has to "acausally trade" across it, meaning the AI holds a large set of hypotheses which make no predictions about its sensory inputs (they're not part of our universe), so the probability of those hypotheses never moves from the priors, but which are acausally influenced by the AI's actions (inside those hypotheses there are other entities simulating the decision-making parts of the AI). It computes expected utility including those hypotheses, and there's no construction to make those hypotheses always balance out. And they want it to be friendly, meaning that if it starts making sacrifices to Cthulhu, it wouldn't be paperclips it would be ritually sacrificing. The principle of "friendliness" is that the AI would do what you would want it to do if you were smarter, where "smarter" by definition means thinking like the people who freak out about the "basilisk".
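To spell out the mechanics of that expected-utility worry, here is a deliberately simplified sketch in my own notation (not MIRI's):

```latex
% Notation is mine, not MIRI's: a simplified sketch of the point above.
% The agent scores an action a across all hypotheses h, weighted by
% its credence P(h):
\mathrm{EU}(a) = \sum_{h} P(h)\, U(a, h)
% Bayesian updating moves P(h) only through the likelihood of evidence e:
P(h \mid e) \propto P(e \mid h)\, P(h)
% If h makes no predictions about the agent's sensory inputs, then
% P(e \mid h) carries no information and P(h \mid e) stays at the prior
% forever. But U(a, h) can still depend on a, so such hypotheses keep a
% fixed, prior-weighted grip on EU(a) -- and nothing in the setup forces
% their contributions to cancel out.
```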
I mean, mathematically, the prediction market thing does actually work, and is a genuinely clever bit of reasoning - it's only tangentially AI-safety related, actually, and much more about the properties we can expect arbitrary AI systems to have. It's certainly research worth listening to. As for why they stick to "old-fashioned AI", I don't really think that they do - they're making the sensible assumption that they don't know the internal structure of an AI system, and so must reason independently of it. I have to say, I can't really complain about their research (which is just pretty solid, and certainly new and interesting) - even if I find their method of acquiring funding utterly disgusting. (In my view, you have three options. The first is to go through the route that almost every scientist goes through; they probably tried this, and understandably failed. The second is to use your own money - only an option for the rich, which none of them were. The third is to find a sufficiently wealthy benefactor, which I guess they've now done. Maybe you can crowdfund, but don't try and pull a Pascal's mugging - one of the concepts the rationalist squad pretty much came up with, or at least popularised. Be honest about what people can expect.) Admittedly, their good-quality research has a lot more to do with the actual mathematicians they persuaded to come along, and not much to do with Yudkowsky as far as I can tell.
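For anyone wondering what "probabilities as market prices" even means mechanically, here is a toy sketch in Python using Hanson's logarithmic market scoring rule (LMSR), the standard automated-market-maker construction. To be clear, this is not MIRI's logical-induction formalism - just an illustration of prices behaving like probabilities - and the liquidity parameter and trade sizes are made-up values.

```python
import math

# Toy LMSR market maker (Hanson's logarithmic market scoring rule).
# Illustrative only: not MIRI's construction; b and trade sizes are made up.
class LMSRMarket:
    def __init__(self, n_outcomes, b=10.0):
        self.q = [0.0] * n_outcomes  # outstanding shares per outcome
        self.b = b                   # liquidity parameter

    def _cost(self, q):
        # Cost function C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def prices(self):
        # Prices are a softmax over share counts: non-negative and summing
        # to 1, so they read directly as the market's current probabilities.
        z = sum(math.exp(x / self.b) for x in self.q)
        return [math.exp(x / self.b) / z for x in self.q]

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function; buying an outcome
        # pushes its price (its "probability") up.
        old_cost = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - old_cost

market = LMSRMarket(2)
print(market.prices())   # [0.5, 0.5] -- the uninformed starting prior
market.buy(0, 5.0)       # someone bets on outcome 0
print(market.prices())   # outcome 0's price rises toward ~0.62
```

The point is just that the prices are always non-negative, always sum to 1, and move in response to bets, so they can be consistently read as probabilities.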
Yeah, I honestly feel this way about Elon Musk. Is he a piece of shit? Yeah, all billionaires are. But I’d rather a billionaire dump ludicrous amounts of money into capital-hemorrhaging science projects than into hedge funds.
Hm, I hadn't thought about it that way; I guess there are worse things than a lot of what they do. But I am pretty pissed off about the Starlink stuff, to be honest.
'We know it was us who scorched the sky'
Why would putting money into big science projects be better than hedge funds? Hedge funds are just going to put the money into companies, too. It really comes down to investing in rockets vs investing in the next best way to make the economy more efficient. I don't see how either is preferable, other than that billionaires are better suited to make big risky bets that don't pay off immediately (like SpaceX) than a more analytically driven outfit like an investment bank or something.
tl;dr boring economic reasons that drive inequality. Velocity, the speed at which money moves through the economy, increases market efficiency no matter where the money is moving. I'd argue that buying 100 million vintage pogs would be better for the economy than a hedge fund, because hedge funds rely on capital remaining stationary and slowly accumulating, whereas the pog-vendors are more likely to spend the money on things: even if you make a shitty investment, that money gets a second chance to make a good one.

Prevalence of slow capital accumulation is indicative of either (A) your country having strong neo-colonial relationships around the world (why Britain and France have such huge financial sectors) or (B) inequality in your country rising exponentially. I'm getting this from Thomas Piketty's *Capital in the 21st Century*, which I haven't read but have heard some lectures about lol.

There are certainly limits to this thinking, obviously: if the spacesuit factory puts its profits into a bank account and lets them accumulate, that's barely more velocity than a hedge fund AND the money doesn't go towards investment anywhere. In most cases, though, lower-level distributors barely squeak by and invest profits into self-development, which keeps velocity high and often produces the kind of economic results we want to see: higher median incomes, lower unemployment, higher rates of return, etc. Under Keynesian/neo-Keynesian theory, shitty investments can lead to economic growth in this way; it's the same reason that poorly-distributed stimulus packages can still help some.
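For reference, the textbook identity behind the velocity talk above is Fisher's equation of exchange (standard symbols, not the commenter's):

```latex
% Fisher's equation of exchange, the standard quantity-theory identity:
M \cdot V = P \cdot Q
% M = money supply, V = velocity of money (turnovers per period),
% P = price level, Q = real output. Holding M fixed, a higher V means
% each dollar finances more transactions -- the sense in which hoarded,
% slow-moving capital "does less" for nominal activity P * Q.
```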
I mean, Piketty's point doesn't really apply here. The capital ownership is "stationary" whether Musk invests $1 billion in SpaceX or in some hedge fund. What Piketty is saying is that if the rate of return on capital in general is higher than the growth rate, inequality will increase; that applies to either of Musk's investment opportunities. If he puts all of his capital in a hedge fund, that just means that some financial analysts will try to figure out how to split up the capital and invest it in a bunch of different companies (and other assets like bonds). So instead of a $1 billion bet on reusable spaceships, it's a $100 million bet on IBM, a $100 million bet on Ford, a $400 million bet that the US remains just as solvent 10 years from now, and a $200 million bet that gold keeps being worth as much (or whatever).
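For reference, the Piketty claim being paraphrased here is his famous one-line inequality (standard notation):

```latex
% Piketty's central inequality:
r > g
% r = average rate of return on capital, g = economy-wide growth rate.
% When r > g, returns on existing capital compound faster than incomes
% grow, so capital owners' wealth tends to outpace everyone else's --
% regardless of which assets (SpaceX equity or a hedge fund) they hold.
```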
That’s Piketty’s main point, yes. I’d argue that the key here is specifically investments which do not really count as a traditional “capital” relationship of profit-reinvestment; that is to say, things that *hemorrhage* money or result in one-time net decreases in accumulated capital. This happens both with poor investments and in the early stages of startup-style industry, which relies on external financial support to stay afloat. In these cases, short-term rates of return are negative and *necessarily* result in decreases in capital accumulation. The key here is that hedge funds will average out the RoR and continue accumulation, while bad/volatile investments will often either surpass the RoR or result in inherent decreases in accumulation. Let me know if you see any gaps in my logic here.

Your logic in the given scenario is sound, though I’d definitely emphasize that velocity is still highly important, and that velocity in stocks and bonds doesn’t count the way direct investment does. Prior to the Great Depression, financial velocity was extremely high, but since that velocity was locked up in trades of more-or-less-equivalent assets, it was functionally the same as keeping the money under your mattress, and will lead to long-run deflation. Profits from investment that just go into low-risk commodities result in low velocity and high accumulation, while profits from investment that go into continued direct investment, or get spent by the investor, result in high velocity and low accumulation.
Your first point only makes sense if the investment turns out to be a bad one; SpaceX specifically has probably been a positive ROI. I honestly don't really know how to think about your statements about money velocity; it's not something I've ever heard before. But at first glance I'd say you're confusing money velocity with economic activity. If Scrooge McDuck pays $100 million in cash for a pool of gold coins, that doesn't destroy $100 million worth of productivity. It would drive up the price of gold, causing more resources to be put into mining it, which would be a negative, but the immediate cash transfer itself isn't a problem. The guy who sold him the gold pool is gonna take that money and reinvest it wherever.
You’re right on the first point. It’s a big “if” that my argument hinges on, though I do earnestly believe it. On the second point, I’d emphasize that liquid assets contribute to velocity as well. You don’t *need* liquid assets to be changing hands constantly to have a prosperous economy, but if the economy is used to that movement and it slows down, then you’ll see a loss of velocity. This becomes significant when you’re barely meeting 2% growth. In your example, I’d argue that, if that gold was being traded around as an asset, the hoarding *does* harm the economy. Much like Ron Paul, we should imagine that the gold is money. For other assets, say chocolate coins or lithium ore, this wouldn’t apply, as those purchases aren’t typified by future sales. The same principle applies if it’s two Scrooges continuously trading their coins amongst each other. The Great Depression is an example of this in action (along with a major investment bubble): hoarding money and liquid assets slows the economy even when those liquid assets are swapping hands among the relatively well-off.
I just think "it's better for billionaires to take riskier bets because it's more likely they'll lose money" is a weird take. Reducing it to absurdity: would you say it'd be better for Musk to pay all the engineers in the country double their normal wage to solve random hash functions by hand, since that would quickly redistribute all of his capital? If SpaceX had failed (say it turned out to be impossible with current technology to build a self-landing booster), wouldn't that have constituted a huge waste of manpower compared to whatever 6 tech corps some hedge fund would've invested in? Wouldn't those engineers' hours have been better spent building safer cars or more efficient corporate command-and-control software or something?

I also think that's a weird way to think about commodities, or other assets I guess. There was definitely a liquidity issue during the Great Depression, but that's not a concern during normal functioning; an external economic shock can be made a lot worse by a liquidity crunch. For the Great Depression, American farmers over-leveraged themselves to buy equipment during World War 1 (to meet the demand of European nations that couldn't produce food for themselves anymore). Once Europe stabilized, farmers started defaulting on their loans (the external shock). What made it really bad was the Fed doing nothing to increase liquidity and help keep banks solvent. Now the Fed knows not to make that particular mistake. But I don't think analyzing commodities or other assets the same way as money makes sense, honestly.
Not analyzing them the same way as money, necessarily; velocity is a function of exchange, not just the exchange of money. "Weird" is an appropriate analysis; things have gotten contradictory and counterintuitive since Friedman, and I'm just regurgitating my professors for the most part. On the random hash functions thing, my intuition is to say no, because that sounds very Austrian. Then again, it does directly counteract capital accumulation, so maybe? Probably not, because in that case RoR is 0% whereas bad investments are like 50-80%. I'm drunk now, so macro is very hard to comprehend. Basically, inequality lowers velocity, so lowering inequality increases velocity even if it does so in weird ways. Whether that increase in velocity is enough to make up for the deadweight loss depends on the particular circumstance.
... if only it was an either-or.

The stupid thing about all this is that if you really are worried about AI-risk, then overconfident blowhards with ambitions to “optimise the world” should sound super dangerous to you!

I think fears about AI apocalypse are overblown bunk based on flimsy chains of logic and a fetishisation of intelligence over experimentation, but I still get worried at the idea of yudkowsky being the one at the AI switch.

The only thing worse than yud being so wrong would be yud being right. Talk about a grim meathook future.

Meh. This isn’t that hard once you get your foot in the door. Dude is bankrolled by Peter Thiel. Over the years, MIRI has built a network of very well connected people researching an incredibly important topic in a field where there’s money to burn. I don’t find this surprising.

What I do find surprising is that he managed to get Thiel grant money in the first place. That funding decision is something I still find very strange. Clearly it can’t be rationally justified by anything EY published before or after the founding of MIRI. Worse, there were a ton of people with proven track records working on all the things Yudkowsky was interested in when MIRI was first established. Their competition was an autodidact who never went to university and had zero significant achievements in CS.

I know how he met Thiel et al (and it’s impressive that EY managed to worm his way into that circle at all) but I’ve never figured out exactly why they thought he was ‘their guy’.

There's this semi-serious theory floating around on /r/sneerclub that rationalism and techno-libertarianism are fronts for recruiting young men into right-wing causes by luring them with nerdy concepts and acclimating them to reactionary stuff under the guise of *rational debate*, *civil discussions* and other such *free speech absolutist* values libertarians are so fond of. I don't know if the libertarian-to-fascist pipeline was a conscious set-up, but you've got to wonder when you see the homogeneity of the demographics involved; this is usually a red flag for a cult.

Anyway, the point I'm making is that it could be that EY was the "chosen one" of people like Thiel because his quirky concepts like the god computer and Harry Potter were good enough entry points for later learning about the IQ of black people or the evolutionary psychology of not wanting to have sex with me, even if EY himself isn't aware of it or repudiates it. EY also copiously sucks up to the rich at regular intervals.
Hmm. This is an interesting take. There does exist a ‘libertarian/centrist to fascist’ pipeline, and it is (mostly) consciously constructed. I consider this to be fact. That a pipeline exists can be verified by looking at the publicly available online history of right-wing YouTube and the young people they’ve hoodwinked. (Unfortunately, I’ve also had the experience of watching this shift occur in people I know IRL.) That the pipeline is a result of conscious design is something that I believe partly because of evidence collected in articles like [this](https://www.theguardian.com/politics/2017/feb/26/robert-mercer-breitbart-war-on-media-steve-bannon-donald-trump-nigel-farage), [this](https://eu.usatoday.com/story/tech/talkingtech/2017/07/18/steve-bannon-learned-harness-troll-army-world-warcraft/489713001/), [this](https://www.buzzfeednews.com/article/josephbernstein/heres-how-breitbart-and-milo-smuggled-white-nationalism) and [this](https://thebaffler.com/latest/the-moldbug-variations-pein), and also because I know/have known people IRL who work in Silicon Valley and who think like this directly as a result of engaging with this material.

None of this is enough to prove the claim to the standards of, say, a criminal trial. Still, I’d say the available evidence puts it up there with ‘Jeffrey Epstein didn’t kill himself’ on the list of things that are probably true and can’t be proven with legal certainty. However, I’ve never thought about EY’s connection with any of these people in this way before. You’ve given me something to think about.
To be clear, as far as I know EY has always explicitly repudiated NRx in any way he could: banning them from his website, not associating with them, etc. There are other reasons not to like him (this is where someone mentions "math pets" and urges you not to google that), but his crystal-clear repudiation (as opposed to SA's endless squirming and hand-wringing), as well as his solid billionaire grift, have earned him more begrudging respect from me than any other rationalist.
He didn't ban them that I know of; he repudiated them but didn't actually do anything, as far as I know. And in any case, the ideas were already there, particularly after the two Sequence posts giving a flashing green light to the race realists.
I don't think it is pipelines; I think it is a collection of sales funnels. Every group tries to 'advertise' their ideas and convert a small percentage of readers into buying into their bs, or to advertise in places where people more likely to be converted into 'sales' gather. (I stole the 'sales funnel' idea from the weird far-right self-help groups, btw, so there is a reason to think about it as sales funnels rather than tubes. Also, tubes imply a linear one-way flow, while funnels are just broader.)

Yud was smart enough to realize something was going on and to say 'no advertising on LW', while the rest think they are just playing in the free market of ideas when actually they are being used as an advertisement platform.

E: compare this to SSC, where the idea is more that through the power of honest debate and a commitment to truth, you can convert the bad people to the 'power of friendship' side. (This requires you to ignore that the other side can still lie, even if you don't allow it, and that the 'power of friendship' side can be converted to the 'fuck it, let's just genocide some untermenschen' side.) It is believing in Gandhi's methods (non-violence) but not his goals (freedom for the people of India).
Seems like it started at least more as a natural outgrowth of default techbro politics, which is already reactionary.
[deleted]
He's very good at relieving silicon valley idiots of their money, so if he started a business it might actually do well. Donating to his "charity", on the other hand, just makes *you* the silicon valley idiot.
If he wants to start a business he has to figure out how to scam VCs, not tech bros. Theranos and WeWork are more instructive here than Roko’s Basilisk.
> network

I've been tempted to try a Yud grift if my career doesn't work out, because hypothetically it would be easy to churn out this shit, but the network is key here. MIRI is basically nerd affinity fraud.
build up network by starting rationalist blog and going to their meetings ezpz
Thiel injects himself with the blood of younger people and invests heavily in cryo-preservation and speculative anti-aging research. I think Thiel's main funding interest is anyone who puts money into delaying or "beating" death, even if it's a long shot, and that's always been an open part of Yudkowsky's pitch on why superhuman AI is critically important.
> I know how he met Thiel et al (and it’s impressive that EY managed to worm his way into that circle at all)

TL;DR?
He met him at a [Foresight Institute](https://foresight.org) dinner ([source](https://www.newyorker.com/magazine/2011/11/28/no-death-no-taxes)).

there’s big $85,112 in the right-wing psy-op business

[deleted]

think tanks at least get paid to find support for specific results to benefit their donors

Money and value are absolutely arbitrary and nothing means anything. That’s how.

At least some of it came from Epstein

Not because of any conspiracy shit, but because he was into this kind of crap.