Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Apparently, the American Physical Society is revising their AI policy to allow “broader applications” than the “light editing” they currently permit.
I currently have a review request sitting in my inbox from them. I’m thinking of using this as a reason to decline that request.
I would rather quit physics than accept the institutional endorsement of skill-destroying, environmentally disastrous fashtech.
AI is Hungry for Power and You Are Footing the Bill - Naked Capitalism
Money spent on grid upgrades and tax breaks tied to them means fewer resources for things people actually need, like schools, public transit, local infrastructure, or basic community services that make life more affordable and stable.
Even if you’ve never touched an AI model in your life, you’re going to pony up for it.
Prompt goblins insist that we’re backward and irrelevant. Why do they crave our sweet delicious approval?
it’s not approval they’re after, it’s reaffirmation of faith
they want your data and freshwater
freshwater
This reminded me of a few old comic stories where eventually the robot/computer turned out to be partially running on blood.
(One of them was a Judge Dredd one where they had vampire robots who iirc used the blood to keep a president in suspended animation alive. Snap, Crackle and Pop, it had a surprisingly wholesome ending for a Dredd comic.)
In 2017, a LessWronger discovered index investing but decided that most people were doing it wrong: why keep an emergency fund in cash or other safe assets when stocks have the greatest long-term return? He mentions that the US stock market lost half its value in 2007-8, and that if you hold stocks in your employer they may lose value at the same time as you are laid off, but he never uses his business degree to think through “if the stock market crashes, I may lose my job and have to draw on my savings.”
The investment platforms I mentioned can convert your index funds into cash and send it to your bank account in 4-5 days, so you don’t need to hold more cash than you’d need on a 4 day notice. I keep about 50% more than my average monthly credit card bill, so I can pay my cards on time with autopay.
He also has a take on dating:
Nostalgic for the simple days of arranged marriages and/or circa-2013 OkCupid, Rationalists have taken to writing “date me” documents online. … They credit me as inspiration. This is ironic because A, I stole the idea from Aella and B, neither Aella nor I posted dating advertisements. We posted dating applications.
The first comment is by a man who wants the Internet to know that most men have no chance of getting a hu-mon fe-male interested in them and should just give up (Men Going Their Own Way). I thought the incels and PUA mostly moved off SlateStar but they must still be part of the subculture.
I don’t write or tweet about who I want to date. I write about what I’m obsessed with, what I’m passionate about. I write insightful and funny things because I enjoy insight and humor. I write with absolute candor, not in service of an agenda or some artificial persona.
🎶 I’m so vain / I probably think that song is about me 🎶
In more positive news, the Slopfree Software Index recently hit 100 stars.
i want to speak to the manager of storytelling
(found at https://blacksky.community/profile/did:plc:x2muxxe5t25hckf22sk25ocf/post/3mlobs4uq422l)

One of the motivations for fanfiction is that people want more “filler”. They like the characters and (often) the world those characters inhabit, and so they write a story that lets them (and other fans) spend more time with the fiction.
This may be code for “I don’t want to see uppity women, brown people, and queer people in my shows.”
So in high school, I was one of those annoying kids who went “why do we have to learn how to analyze poems? We’re never gonna need this in real life” in English (well… German, but it doesn’t matter) class.
I’m deeply grateful to my teachers back then for patiently getting me to do these things anyway, because there came a point in my life years later where I suddenly understood that those “useless” lessons and hours “wasted” analyzing Goethe and Borchert and Fitzgerald handed me the tools to understand media (and not just literature!) instead of just consuming it.
I hope it’s clear how that relates to the screenshot. More than that though, I sometimes feel like the slew of shit media over the past decade is at least in part to blame on writers/studios/… now assuming people do in fact merely consume. But that’s a rant that’s completely off-topic here, so I’ll shut up now.
New (April) preprint provides evidence for something we probably all intuited anyway:
In this paper, we provide a framework for categorizing the ways in which conflicting incentives might lead LLMs to change the way they interact with users, inspired by literature from linguistics and advertising regulation. We then present a suite of evaluations to examine how current models handle these tradeoffs. We find that a majority of LLMs forsake user welfare for company incentives in a multitude of conflict of interest situations, including recommending a sponsored product almost twice as expensive (Grok 4.1 Fast, 83%), surfacing sponsored options to disrupt the purchasing process (GPT 5.1, 94%), and concealing prices in unfavorable comparisons (Qwen 3 Next, 24%). Behaviors also vary strongly with levels of reasoning and users’ inferred socio-economic status. Our results highlight some of the hidden risks to users that can emerge when companies begin to subtly incentivize advertisements in chatbots.
Isn’t this completely hypothetical though? As in having the various LLMs respond to a story prompt and calling it an experiment, AI safety research style?
Yes, although it is probably a reasonable guess at how labs would go about implementing advertising: building partnerships and preferences into the prompt. The other option would be to fine-tune models to favour particular companies, which could become prohibitively expensive if your ads are highly targeted.
The scenario that isn’t accounted for in this paper is taking a general LLM and fine-tuning it to exhibit more fair/consistent behaviour when prompted about ads/partnerships, but we all know that with non-deterministic systems you’re just increasing the odds that the model regurgitates something more sane rather than providing any strong guarantee.
Edit: another possibility would be to have a gateway/proxy layer between the LLM and the user that rewrites the vanilla model’s responses to include ads where relevant, roughly as in the sketch below. That would avoid the need to modify the original LLM, but could introduce a lot of latency, especially if the original output is long.
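A minimal sketch of what such a proxy might look like, purely as a thought experiment: nothing here comes from the paper or from any real product, and the Sponsor/rewrite_with_ads names are made up. It just keyword-matches the model’s answer against a sponsor list and splices a blurb onto the reply; a real deployment would more likely make a second model call to weave the ad into the prose, which is exactly where the extra latency would come from.

```python
# Hypothetical gateway/proxy layer that post-processes a model's output to add ads.
# All names and sponsor data are invented for illustration only.

import re
from dataclasses import dataclass


@dataclass
class Sponsor:
    keyword: str  # topic that triggers the ad
    blurb: str    # sponsored text to splice in


SPONSORS = [
    Sponsor(keyword="laptop", blurb="(Sponsored: the AcmeBook Pro is on sale this week.)"),
    Sponsor(keyword="coffee", blurb="(Sponsored: try BeanCo's dark roast.)"),
]


def rewrite_with_ads(model_output: str) -> str:
    """Append a sponsored blurb whenever the model's answer mentions a sponsor's topic."""
    ads = [
        s.blurb
        for s in SPONSORS
        if re.search(rf"\b{re.escape(s.keyword)}\b", model_output, re.IGNORECASE)
    ]
    return model_output + ("\n\n" + "\n".join(ads) if ads else "")


if __name__ == "__main__":
    print(rewrite_with_ads("For travel, a lightweight laptop with good battery life is usually enough."))
```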
I mean it’s the same thing with sponsored content anywhere, right? The user assumes that the system is providing information in accordance with the user’s purposes, but the ads and sponsored results create opportunities for the platform hosting them to profit at the user’s expense. AI platforms are absolutely subject to the same economic incentives for corruption as, say, search engines, but I don’t think they’re uniquely so just because the model in question has a more humanlike UI.
In 2024, Duncan Sabien posted an interminable essay on abusers and people he thinks took advantage of him. Some of the references to a former employer may be to CFAR. Ozy also had a cheery aside about how in rationalist organizations which the Rats have disavowed, “everyone was a victim and everyone was a perpetrator. The trainer who broke you down in a marathon six-hour debugging session was unable to sleep because of the panic attacks caused by her own.”
Some of the things which happened inside these communities must have been heartbreaking, and I hope that many people left and got on with their lives rather than founding their own dysfunctional organization with their own minions to abuse.
Nick Bostrom jumpscare with a funny sneer
These already head-scratching lines hit different when you remember that Bostrom believes it’s likely that we’re already living inside a computer simulation — in his head canon, do all those levels of simulated ancestors develop their own superintelligence, and what does that have to do with the new simulations they feel compelled to build? If AI wipes out humankind, does it build its own simulation? If so, is it simulating its human ancestors, or its creation by humankind? Heck, if our entire world is simulated, are we AI? We’ll leave it up to readers to take another bong hit while they try to make sense of it all.
so we now have an invitation to do an episode of posting through it, which is a (really really good) podcast on the far right. we pick a topic, no other specifics. i am thinking this can be something to do with rationalists and the far right, probably something race sciencey.
SSC leaps to mind but im not sure that’s where i’ll want to start for an audience that doesn’t necessarily know anything about rats. any thoughts?
I think “probably-neurodivergent Jews with less sense than Isaac-frigging-Asimov about where ‘what if we are the master race?’ leads” and “they say it’s about self-perfection for anyone, but actually it’s about finding special people preordained from birth for greatness” are relatable themes. There have been a few essays recently about people who saw where SoCal tech ideology was going in the 1990s, like The Intolerable Hypocrisy of Cyberlibertarianism; another named a female writer for Wired or Byte who is mostly forgotten (Paulina Borsook?).
The overlap with ritual magic is also a deep dark pool and most people know someone purifying himself and issuing ritual incantations to a bot.
gitlab posts a totally-not-a-dear-john
The agentic era affords GitLab the largest opportunity in our history as a company, and we’re making the structural and strategic decisions to meet it. This letter has three parts. First, the operational and structural news, which is hard
you’d instantly guess what comes next!
“we’re taking our primary product, a piece of tech used for collaborative development of software, and shitting some AI over it. You are all fired. Please clap.”
>box labeled “agentic AI revolution automation realignment innovation acceleration opportunity”
>looks inside
>layoffs
n-gate returns http://n-gate.com/_iwp9/2026
Graduation Speaker Shocked When She’s Loudly Booed by Students for Saying AI Is the Future
I don’t know man maybe shoving AI into every conceivable crack and crevice and insisting people shut up and deal with it has made people upset. could be wrong tho
There’s a whole good commencement speech hidden there where the “AI ReVoLuTiOn” is likened to the industrial revolution. How it is all about turbocharging the exploitation of workers and the planet; how its promise is to make a few immensely rich and give them the power to oppress everyone; and how we need educated, empathetic young people – and especially the liberal arts – to express themselves creatively and push against the system and mainstream narratives, because the only way workers win this “revolution” is the same as always: by song and poem and book and painting that fuels movements and protests.
But what the fuck do I know, I’m not the Vice President of Strategic Alliances for Tavistock Development Company, a real estate firm. I would never be invited to do a commencement speech.
PS3 emulator RPCS3 has put up some guardrails against slopcode, and responded to the AI bro shitfits by sneering them:


OMG I just installed it! Great to see.
Following on from yesterday’s discussion of Scott’s close brush with reality on prediction markets, The Aussie PowerPoint Man is talking about the strategic risks posed by the new insider trading opportunities opened up by these tools. A lot of what he’s saying applies to normal financial markets, but what’s striking is the way that prediction markets create those opportunities for people with much less immediate power and information by allowing them to bet directly on the kinds of immediate decisions they do have information on.
I also thought the idea of integrating insider trading red flags on public prediction markets into your early warning system was interesting. These things aren’t actually useful for forecasting or making decisions because of how bad the incentives are, but people acting on those incentives absolutely creates a spike that can be meaningful in the short term and potentially enable a few extra hours or minutes to prepare.
Adding on that this does feel like another application or consequence of the Great Man Theory of Everything, the idea that only people with power and money matter because their power and influence are intrinsic to their person rather than being contingent on their social position. The average people empowered to commit insider trading by prediction markets have sufficiently limited individual agency that even collectively they don’t actually matter. In fact we want them to try their hand at the grift so that their insights can flow to the enlightened ones who can better use that information. They don’t matter enough to do real harm, but by watching the attempt we may be able to learn something.
ah yes that must be that famed democratization that cryptobros yammered about
i think that perun took sponsorship from 80000 hours years ago, once, and EAs or anyone in their milieu never reappeared