My Poe detection wasn’t sure until the last sentence used the “still early” and “inevitably” lines. Nice.
- 6 Posts
- 363 Comments
scruiser to
TechTakes • Stubsack: weekly thread for sneers not worth an entire post, week ending 7th December 2025 - awful.systems
17 · 2 days ago
Another day, another instance of rationalists struggling to comprehend how they’ve been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy
A very long, detailed post, elaborating very extensively on the many ways Anthropic has played the AI doomers, promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don’t really engage with the fact that Anthropic has lied to rationalists/lesswrongers/EAs and broken “AI safety commitments” shamelessly and repeatedly:
I feel confused about how to engage with this post. I agree that there’s a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is “spun” in uncharitable ways.
I think it’s sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.
I would find this all hilarious, except a lot of the regulation and some of the “AI safety commitments” would also address real ethical concerns.
scruiser to
SneerClub • On Incomputable Language: An Essay on AI by Elizabeth Sandifer
6 · 16 days ago
even assuming sufficient computation power, storage space, and knowledge of physics and neurology
but sufficiently detailed simulation is something we have no reason to think is impossible.
So, I actually agree broadly with you on the abstract principle, but I’ve increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct…
- We don’t have the neurology knowledge to do a neural-level simulation, and it would be extremely computationally expensive to actually simulate all the neural features properly in full detail, well beyond the biggest supercomputers we have now, and “Moore’s law” (scare quotes deliberate) has been slowing down such that I don’t think we’ll get there.
- A simulation from the physics level up is even more out of reach in terms of computational power required.
As you say:
I think there would be other, more efficient means well before we get to that point
We really really don’t have the neuroscience/cognitive science to find a more efficient way. And it is possible all of the neural features really are that important to overall cognition, so you won’t be able to do it that much more “efficiently” in the first place…
Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
scruiser to
SneerClub • On Incomputable Language: An Essay on AI by Elizabeth Sandifer
6 · 16 days ago
It’s not infinite! If you take my cherry-picked estimate of the computational power of the human brain, you’ll see we’re just one more round of scaling away from matching it, and then we’re sure to have AGI and make our shareholders immense profits! Just one more scaling, bro!
scruiser to
TechTakes • Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd November 2025 - awful.systems
17 · 16 days ago
Continuation of the lesswrong drama I posted about recently:
https://www.lesswrong.com/posts/HbkNAyAoa4gCnuzwa/wei-dai-s-shortform?commentId=nMaWdu727wh8ukGms
Did you know that post authors can moderate their own comments section? Someone disagreeing with you too much but getting upvoted? You can ban them from responding to your posts (but not block them entirely???)! And, the cherry on top of this questionable moderation “feature”: guess why it was implemented? Eliezer Yudkowsky was mad about highly upvoted comments responding to his posts that he felt didn’t get him or didn’t deserve the upvotes, so instead of asking moderators to block on a case-by-case basis (or, acausal God forbid, considering whether the communication problem was on his end), he asked for a modification to the lesswrong forums to enable authors to ban people from their posts (and delete the offending replies!!!). It’s such a bizarre forum moderation choice, but I guess habryka knew who the real leader is and had it implemented.
Eliezer himself is called to weigh in:
It’s indeed the case that I haven’t been attracted back to LW by the moderation options that I hoped might accomplish that. Even dealing with Twitter feels better than dealing with LW comments, where people are putting more effort into more complicated misinterpretations and getting more visibly upvoted in a way that feels worse. The last time I wanted to post something that felt like it belonged on LW, I would have only done that if it’d had Twitter’s options for turning off commenting entirely.
So yes, I suppose that people could go ahead and make this decision without me. I haven’t been using my moderation powers to delete the elaborate-misinterpretation comments because it does not feel like the system is set up to make that seem like a sympathetic decision to the audience, and does waste the effort of the people who perhaps imagine themselves to be dutiful commentators.
Uh, considering his recent twitter post… this sure is something. Also, “it does not feel like the system is set up to make that seem like a sympathetic decision to the audience”: no shit, Sherlock, deleting a highly upvoted reply because it feels like too much effort to respond to is in fact going to make people unsympathetic (at the least).
scruiser to
SneerClub • On Incomputable Language: An Essay on AI by Elizabeth Sandifer
7 · 17 days ago
So, one point I have to disagree with:
More to the point, we know that thought is possible with far less processing power than a Microsoft Azure datacenter by dint of the fact that people can do it. Exact estimates on the storage capacity of a human brain vary, and aren’t the most useful measurement anyway, but they’re certainly not on the level of sheer computational firepower that venture capitalist money can throw at trying to nuke a problem from space. The problem simply doesn’t appear to be one of raw power, but rather one of basic capability.
There are a lot of ways to try to quantify the human brain’s computational power: storage (which this article focuses on, but I think it’s the wrong measure), operations per second, number of neural weights, etc. Obviously it isn’t literally a computer and neuroscience still has a long way to go, so the estimates you can get are spread over like 5 orders of magnitude (I’ve seen arguments for anything from 10^13 flops to 10^18 or even higher, and flops is of course the wrong way to look at the brain anyway). Datacenter computational power has caught up to the lower estimates, yes, but not the higher ones. The biggest supercomputing clusters, like El Capitan for example, are in the 10^18 range. My own guess would be at the higher end, around 10^18, with the caveat/clarification that evolution has optimized the brain for what it does really, really well, so that compute is being used really, really efficiently. Like one talk I went to in grad school that stuck with me: the eyeball’s microsaccades are basically acting as a frequency filter on visual input. So literally before the visual signal has even gotten to the brain, the information has already been processed in a clever and efficient way that isn’t captured in any naive flop estimate!

AI boosters picked estimates of human brain power that would put it within range of just one more round of scaling, as part of their marketing. Likewise for number of neurons/synapses. The human brain has about 80 billion neurons with an estimated 100 trillion synapses. GPT-4.5, which is believed to be where they peaked on number of weights (i.e. they gave up on straight scaling up because it is too pricey), is estimated (because of course they keep it secret) at around 10 trillion parameters. Parameters are vaguely analogous to synapses, but synapses are so much more complicated and nuanced. Even accepting that premise, though, the biggest model was still only about 1/10th the size of a human brain (and they may have lacked the data to even train it right).

So yeah, it’s a minor factual issue and the overall points are good; I just thought I would point it out because it’s one the AI boosters distort to make it look like they are getting close to human-level.
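To make the gap concrete, here is a quick back-of-the-envelope sketch in Python using only the loose figures quoted above (the 10^13 to 10^18 flops spread, El Capitan at roughly 10^18 flops, ~100 trillion synapses vs. a rumored ~10 trillion parameters); the constants are those rough estimates, not authoritative measurements.

```python
# Back-of-the-envelope comparison using the loose estimates quoted above.
# None of these constants are precise; they only illustrate orders of magnitude.

brain_flops_low = 1e13     # low-end estimate of the brain's "compute"
brain_flops_high = 1e18    # high-end estimate
el_capitan_flops = 1.7e18  # roughly the peak of the largest current supercomputer

brain_synapses = 100e12    # ~100 trillion synapses
gpt45_params = 10e12       # rumored ~10 trillion parameters

print(f"Datacenter vs. low brain estimate:  {el_capitan_flops / brain_flops_low:.0e}x")
print(f"Datacenter vs. high brain estimate: {el_capitan_flops / brain_flops_high:.1f}x")
print(f"Synapses per model parameter:       {brain_synapses / gpt45_params:.0f}x")
```

Depending on which end of the estimate range you pick, the same datacenter is either about five orders of magnitude ahead of the brain or roughly at parity, which is exactly the ambiguity the marketing exploits.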
scruiser to
SneerClub • Yudkowsky denies the accusations! several thousand words in, and ten years after they were made
5 · 18 days ago
I found those quotes by searching xcancel for Eliezer Yudkowsky.
scruiser to
SneerClub • Yudkowsky denies the accusations! several thousand words in, and ten years after they were made
16 · 18 days ago
It makes total sense if you think markets are magic, and thus prediction markets are more magic, and also that you can decentralize all of society into anarcho-libertarian resolution methods!
scruiser to
Buttcoin • How Lightcone fucked up returning an FTX donation: they asked a chatbot
4 · 18 days ago
I’m not sure I even want to give Elon that much? Like the lesswrong website is less annoying than twitter!
scruiser to
SneerClub • Yudkowsky denies the accusations! several thousand words in, and ten years after they were made
12 · 19 days ago
Very ‘ideological Turing test’ failure levels.
Yeah, his rationale is something something “threats” something something “decision theory”, which has the obvious but insane implication that you should actually ignore all protests (even peaceful protests that meet his lib centrist ideals of what protests ought to be) because that is giving in to the protestors’ “threats” (i.e. minor inconveniences, at least in the case of lib-brained protests) and thus incentivizing them to threaten you in the first place.
he tosses the animal rights people (partially) under the bus for no reason. EA animal rights will love that.
He’s been like this for a while, basically assuming that obviously animals don’t have qualia and obviously you are stupid and don’t understand neurology/philosophy if you think otherwise. No, he did not even explain any details of his certainty about this.
scruiser to
SneerClub • Yudkowsky denies the accusations! several thousand words in, and ten years after they were made
21 · 19 days ago
I haven’t looked into the Zizians in a ton of detail even now, among other reasons because I do not think attention should be a reward for crime.
And it doesn’t occur to him to look into the Zizians in order to understand how cults keep springing up from the group he is a major thought leader in? Like, if it was just one cult, I would sort of understand the desire to just shut one’s eyes (but it certainly wouldn’t be a truth-seeking desire), but they are like the third cult (or 5th or 6th if we are counting broadly cult-adjacent groups) (and this is not counting the entire rationalist project as a cult). (For full-on religious cults we have Leverage Research and the rationalist-Buddhist cult; for high-demand groups we have the Vassarites, Dragon Army’s group home, and a few other sketchy group living situations (Nonlinear comes to mind).)
Also, have an xcancel link, because screw Elon and some of the comments are calling Eliezer out on stuff: https://xcancel.com/allTheYud/status/1989825897483194583#m
Funny sneer in the replies:
I read the Sequences and all I got was this lousy thread about the glomarization of Eliezer Yudkowsky’s BDSM practices
Serious sneer in the replies:
this seems like a good time to point folks towards my articles titled “That Time Eliezer Yudkowsky recommended a really creepy sci-fi book to his audience and called it SFW” and “That Time Eliezer Yudkowsky Wrote A Really Creepy Rationalist Sci-fi Story and called it PG-13”
scruiser to
Buttcoin • How Lightcone fucked up returning an FTX donation: they asked a chatbot
7 · 19 days ago
Elon is widely known to be a strong engineer, as well as a strong designer
This is just so idiotic I don’t know what made-up world Habryka lives in. In between blowing up a launch pad, the numerous insane design and engineering choices of the cybertruck, all the animals slaughtered by neuralink, and the outages and technical problems of twitter, you might be tempted to hope that the idea of Elon Musk as a strong engineer or designer would be firmly relegated to the dustbin of the early 2010s, when out-of-the-loop people could still manage to buy the image his PR firms were selling. I guess Musk cultists and lesswrong have more overlap than I realized (I knew there was some, but I didn’t realize it was that common).
scruiser to
TechTakes • Anthropic: Chinese AI hackers are after you! Security researchers call BS
4 · 20 days ago
Even taking their story at face value:
- It seems like they are hyping up LLM agents operating a bunch of scripts?
- It indicates that their safety measures don’t work.
- Anthropic will read your logs, so you don’t have any privacy or confidentiality or security using their LLM, but they will only find any problems months after the fact (this happened in June according to Anthropic, but they didn’t catch it until September).
If it’s a Chinese state actor … why are they using Claude Code? Why not Chinese chatbots like DeepSeek or Qwen? Those chatbots code just about as well as Claude. Anthropic do not address this really obvious question.
- Exactly. There are also a bunch of open source models hackers could use for a marginal (if any) tradeoff in performance, with the benefit that they could run locally, so that their entire effort isn’t dependent on hardware outside of their control, in the hands of someone who will shut them down if they check the logs.
You are not going to get a chatbot to reliably automate a long attack chain.
- I don’t actually find it that implausible that someone managed to direct a bunch of scripts with an LLM? It won’t be reliable, but if you can do a much greater volume of attacks, maybe that makes up for the unreliability?
But yeah, the whole thing might be BS, or at least a bad exaggeration from Anthropic; they don’t really precisely list what their sources and evidence are vs. what is inference (guesses) from that evidence. For instance, if a hacker tried to set up hacking LLM bots, and they mostly failed and wasted API calls and hallucinated a bunch of shit, and Anthropic just read the logs from their end and didn’t do the legwork of contacting people who had allegedly been hacked, they might “mistakenly” (a mistake that just so happens to hype up their product) think the logs represent successful hacks.
scruiser to
SneerClub • Habryka posts a NEW OFFICIAL LESSWRONG ENEMIES LIST. Guess who's #1, go on, guess
13 · 20 days ago
This is somewhat reassuring, as it suggests that he doesn’t fully understand how cultural critiques of LW affect the perception of LW more broadly;
This. On Reddit (which isn’t exactly mainstream common knowledge per se, but I still find it encouraging and indicative that the common-sense perspective is winning out), whenever I see the topic of lesswrong or AI Doom come up on unrelated subreddits, I’ll see a bunch of top-upvoted comments mentioning the cult spin-offs, or that the main thinker’s biggest achievement is Harry Potter fanfic, or Roko’s Basilisk, or any of the other easily comprehensible indicators that these are not serious thinkers with legitimate thoughts.
scruiser to
TechTakes • Stubsack: weekly thread for sneers not worth an entire post, week ending 16th November 2025
8 · 20 days ago
Another ironic point… Lesswrongers actually do care about ML interpretability (to the extent they care about real ML at all, and as a solution for making their God AI serve their whims, not for anything practical). A lack of interpretability is a major problem (like an irl problem, not just a scifi skynet problem) in ML: you can have models with racism or other bias buried in them and not be able to tell, except by manually experimenting with your model on data from outside the training set. But Sam Altman has turned it from a problem into a humble brag intended to imply their LLM is so powerful and mysterious and bordering on AGI.
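As a toy illustration of that last point (entirely synthetic data and made-up feature names, nothing to do with any particular production model): a model’s headline accuracy can look fine while a baked-in bias only shows up once you hand-construct probe inputs the training data never invites you to compare.

```python
# Toy sketch: a classifier trained on historically biased labels looks accurate,
# but a counterfactual probe (same input, only the sensitive attribute flipped)
# reveals the bias. Synthetic data and hypothetical feature names throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)                    # sensitive attribute
label = (skill - 0.8 * (group == 0) > 0).astype(int)  # biased historical outcomes

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)
print("accuracy on the (biased) data:", round(model.score(X, label), 3))

# Manual probe: identical "skill", only the sensitive attribute differs.
probe = np.array([[0.3, 0.0], [0.3, 1.0]])
print("P(positive) for group 0 vs group 1:", model.predict_proba(probe)[:, 1].round(3))
```

Nothing in the aggregate accuracy number hints at the gap; you only see it by deliberately constructing the comparison yourself.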
scruiser to
TechTakes • Stubsack: weekly thread for sneers not worth an entire post, week ending 16th November 2025
13 · 20 days ago
A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can’t resist glazing him, even in the context of a blog post on not being too deferential:
Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI
Another lesswronger pushes back on that and is highly upvoted (even among the doomers who think Eliezer is a genius, most still think he screwed up by inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w
The OP gets mad because this is off topic from what they wanted to talk about (they still don’t acknowledge the irony).
A few days later they write an entire post, ostensibly about communication norms, but actually aimed at slamming the person that went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse
And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo
No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least us sneerclubbers are direct and come out and say what we mean on the rare occasions we have beef among ourselves.)
scruiser to
TechTakes • Stubsack: weekly thread for sneers not worth an entire post, week ending 9th November 2025
7 · 25 days ago
Thanks for the information. I won’t speculate further.
scruiser to
TechTakes • Stubsack: weekly thread for sneers not worth an entire post, week ending 9th November 2025
7 · 26 days ago
Thanks!
So it wasn’t even their random hot takes, it was reporting someone? (My guess would be reporting froztbyte’s criticisms, which I agree have been valid, if a bit harsh in tone.)
scruiser to
TechTakes • Stubsack: weekly thread for sneers not worth an entire post, week ending 9th November 2025
2 · 26 days ago
Some legitimate academic papers and essays have served as fuel for the AI hype and for less legitimate follow-up research, but the clearest examples that come to mind would be either “The Bitter Lesson” essay or one of the “scaling law” papers (I guess Chinchilla scaling in particular?), not “Attention is All You Need”. (Hyperscaling LLMs and the bubble fueling it are motivated by the idea that they can just throw more and more training data at bigger and bigger models.) And I wouldn’t blame the author(s) for that alone.

I mean, I assume the bigger they pump the bubble, the bigger the burst, but at this point the rationalists aren’t really so relevant anymore; they served their role in early incubation.