Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • CinnasVerses · 10 points · 15 hours ago

    Scott Alexander published a blog post about how it's unfair to call Viktor Orbán an autocrat, but:

    I spent the first half of my writing career calling out biased left-wing experts, the flood swept all those people away, and now we’re ruled by germ-theory-denialists and Waffle-House-teleporters. Not a day goes by that I don’t want the old biased experts back. To paraphrase Cormac McCarthy, you never know what worse institutions your bad institutions have saved you from.

    Dsquareddigest responds:

    I believe the full quote is “to paraphrase Cormac McCarthy, you never know what worse institutions your bad institutions have saved you from, if you are being dumb on purpose”

    It’s in the dictionary next to Upton Sinclair’s famous line that “it is hard to get a man to understand something when he is a massive dumbass”

    • Architeuthis · 7 points · edited · 10 hours ago

      Unless he specifies that his problem was with ostensibly leftist academics being specifically too dismissive of race science and incelist tropes, this is worthless, just run-of-the-mill face-leopard schadenfreude.

      Also the second half (the what? what’s the cut-off point?) of his career has been if anything more mask off, and it’s not like he stopped whining about woke after posting a half-hearted disapproval of trump like three days before the election after years of writing about how cool it would be if there was less regulation especially for healthcare.

      • CinnasVerses · 2 points · 3 hours ago

        He claims he turned against Trump after the Capitol Putsch, so the two halves would be 2009-2019 and 2020-2026. He actually celebrated Trump’s second inauguration with his post about how everyone knows Richard Lynn was right but cowardly liberals pretend to believe blacks and whites are equal.

        I thought his posts about “women don’t like Nice Guys” ended around 2013 like a lot of shouting about gender online? Dating a young cam-person and sex blogger in 2014 must have improved his mood even if the relationship did not last.

    • Soyweiser · 6 points · 13 hours ago

      I spent the first half of my writing career calling out biased left-wing experts,

      He admits it!

    • Architeuthis · 4 points · 10 hours ago

      just one more data trove bro

      Are new data-hungry players entering the market, or are we still pretending that shoveling more social media posts into the data furnace will somehow overcome structural limitations?

  • rook · 3 points · 22 hours ago

    Anyone ever heard of these folks before? https://dataglow.energy/

    On the face of it, it seems like a neat idea… use the waste heat of a datacentre to provide district heating, sweeten the deal with promises of faster internet connectivity. Probably a sensible thing to do with future builds of this kind, especially if it cuts down on noise, etc.

    I am cynical enough to assume that this is mostly a new trick for building consent for new datacentre construction, that it is an attempt to greenwash a dirty industry, and that in the end nothing will come of it but it’ll still somehow manage to make a few people richer and probably damage some green belt land.

    • fullsquare · 4 points · edited · 9 hours ago

      i heard that a couple of german dcs (owned by universities or other research institutions, and therefore indirectly by the state) do this, but this kinda depends on a district heating grid existing, and it also puts some limits on the thermal side; in the simplest variant the chips just have to run hotter. not to mention that it's kinda easier to do when you own the entire thing, long term, and can offload some of the engineering and design effort to some intern student writing a masters or doctoral thesis.

      this works in part because when you switch from coal to gas and keep district heating running on the waste heat, there's less waste heat from a CCGT of equal power, and it's all gone when you switch to renewables, so there's a grid that still needs some heat and a dc boiler can fill that gap to a small degree. at the same time a dc can't be the only source of heat, because demand is seasonal and a dc ideally should run 24/7, and while you can get enough storage for daily variation this won't be enough and some other source of heat is needed. this is why it makes more sense as a long term government backed project

      • rook · 1 point · 1 hour ago

        This system uses heat pumps at the consumer sites rather than plain radiators, so they’ve got a bit more flexibility in how hot they have to run their cooling loop. There’s also mention of a swimming pool, though I have no idea how much energy it takes to warm one of those. Does provide a year-round demand, though.

        • gerikson · 1 point · 3 minutes ago

          To be honest I thought it was an April Fool’s Joke at first

      • rook · 2 points · 1 hour ago

        Thermify is a pretty weird-looking thing, what with actual servers being installed in people’s homes, and running some kind of opportunistic batch processing work? That’s very specialist compared to regular datacentres, though the plumbing would be a lot simpler.
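A rough aside on the heat-pump point above: lifting heat from a warm datacentre loop instead of cold outside air or ground dramatically improves a heat pump's coefficient of performance, which is why consumer-side heat pumps give the operator flexibility to run the cooling loop cooler. A back-of-envelope sketch; all temperatures and the Carnot fraction are illustrative assumptions, not figures from the projects discussed here:

```python
# Back-of-envelope heat pump maths for datacentre waste-heat reuse.
# All numbers are illustrative assumptions.

def heatpump_cop(t_source_c: float, t_sink_c: float,
                 carnot_fraction: float = 0.5) -> float:
    """Estimate the coefficient of performance (heat delivered per unit
    of electricity) for a heat pump lifting heat from a source loop to a
    heating circuit. Real units reach roughly 40-60% of the ideal Carnot
    COP; 0.5 is an assumed middle value."""
    t_source = t_source_c + 273.15  # convert to kelvin
    t_sink = t_sink_c + 273.15
    carnot_cop = t_sink / (t_sink - t_source)  # ideal (Carnot) limit
    return carnot_fraction * carnot_cop

# A warm ~35 C datacentre cooling loop feeding 55 C radiators, versus a
# conventional ~10 C ground/air source feeding the same radiators.
cop_from_dc_loop = heatpump_cop(35, 55)
cop_from_ground = heatpump_cop(10, 55)
```

The small temperature lift from a warm loop gives roughly twice the COP of a conventional source here, so even a cooler-running datacentre loop is a usable heat supply.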

  • gerikson · 9 points · edited · 1 day ago

    Stop-AI terrorists: Eliezer Yudkowsky told us to bomb the datacenters.

    Yudkowsky: no no no, I said we needed airstrikes to hit the datacenters

    IRGC: I gotchu fam Cheap Drones Complicate the Gulf’s AI Boom

    (edit reworded comment around link to attempt to make it funnier)

    • Soyweiser · 6 points · edited · 1 day ago

      What makes all this extra funny is Yud's life's work. Wants to ensure AI alignment and fix human rationality. Creates terrorists instead.

      Reminds me a bit of his AI in the box experiments, which according to the stories always worked on his fans, but as soon as somebody skeptical did it, he stayed in the box.

        • Architeuthis · 5 points · 10 hours ago

          Rationalists tend to lean more towards anime villain than Bond villain, but yeah.

    • YourNetworkIsHaunted · 5 points · 10 hours ago

      From the second post:

      A seasoned security leader would never build a defensive program and then measure offensive capability only, making remediation a second-class story. That is the kind of dog and pony show that any good security initiative would slam the door on. Or it’s like a surgeon telling you they have an even sharper scalpel to cut you deeper and faster. Yeah, so then what?

      Dark and paranoid thought: given that Anthropic very recently ran into issues with their defense contracts, are they playing up their offensive capabilities targeting a notoriously tech- and security-illiterate political establishment to try and force their way back into those sweet government contracts as an impossible-to-ignore offensive tool? I mean we’ve talked about how the cash burn rate for all these companies is sufficiently absurd that it’s going to take something truly crazy to turn these companies self-sustaining before the world runs out of investor money, and military and intelligence budgets are notorious for dragging ludicrous amounts of public money into a dark alley where nobody can see what’s happening to it.

  • nfultz · 8 points · 2 days ago

    https://www.nakedcapitalism.com/2026/04/ai-reputational-crisis-violence-data-center-protests-sam-altman-openai.html

    The profound ignorance of tech on the part of most American lawmakers is no joke. In a prior life, I was once responsible for updating a future Vice Chair of the Senate Intelligence Committee on tech issues and it was like showing an alarm clock to a chicken.

    haha

    That same senator went on to be a huge RussiaGater and played a central role in Twitter and other social media titans upping their censorship game at the behest of US politicians.

    oh :(

    • gerikson · 7 points · edited · 2 days ago

      That shift suggests Virginians now consider data centers almost as undesirable as nuclear power plants,

      bah! Virginian voters need to read more LessWrong, where the benefits of both are explained beneath impenetrable layers of posts.

      Also this evisceration of Zvi:

      As for his argument regarding political violence, I’d point him toward John Locke, Nelson Mandela, Franz Fanon, or Walter Benjamin, but what’s the point, none of them printed their arguments on Magic: The Gathering cards.

    • gerikson · 3 points · 1 day ago

      BTW what kind of site is Naked Capitalism? I’ve heard of it but never read it before.

      • nfultz · 3 points · 19 hours ago

        blogosphere-era link aggregator that somehow kept going way longer than occupy wallstreet did. one thing to know (like here): they link to a lot of stuff they don’t support.

  • sansruse · 14 points · 2 days ago

    https://www.cnbc.com/2026/04/15/allbirds-bird-stock-shoes-ai.html

    Struggling shoe retailer Allbirds makes bizarre pivot from shoes to AI, stock explodes more than 400%

    I had such a hard time coming up with an original joke for this, until i realized the reason why is that allbirds is stealing jokes from the dotcom bubble in the first place.

    The company, valued around $4 billion at its peak, sold its intellectual property and other assets two weeks ago for $39 million. The stock surged over 400%, from under $3 a share up to $13. The shoe company had a market cap of about $21 million Tuesday.

    Oh. so, bit of a misleading headline there CNBC. This wasn’t a real publicly traded company, it was a company on life support that got pivoted by a greedy founder looking to cash in. Cynical move or the delusions of a true believer? does it matter?

    Regardless, the stupidity is too much, the resemblance too striking. good luck to Allbirds in the totally normal footwear-to-high tech pivot that is happening in this totally normal economy.

  • gerikson · 6 points · edited · 2 days ago

    Tennessee(!) leads the way, with a bill to make training chatbots a Class A felony.

    Hope they get the full-throated support of LW.

    Reddit /r/artificial freaks out (no clue what alignment that subreddit has): https://old.reddit.com/r/artificial/comments/1slu23a/red_alert_tennessee_is_about_to_make_building/

    via HN: https://news.ycombinator.com/item?id=47784650

    edit aww the coward lawmakers have backed down https://www.wjhl.com/news/tennessee-backs-off-sweeping-artificial-intelligence-limits-opts-for-study-instead/

    • lurker · 15 points · 2 days ago

      as someone who’s disabled, the idea of “ethical eugenics” pisses me off to no end. There is no ethical eugenics! You’re systematically destroying classes of people because they don’t fit your standards, there is no way to make it ethical when the very core premise involves taking away human rights

    • YourNetworkIsHaunted · 3 points · 9 hours ago

      I’m glad someone else was able to coherently discuss how ass-backwards Saltman’s response has been. Like, if anything the fact that he responds to this moment by talking up the importance of democracy over emerging technologies should just be evidence before some distant future revolutionary tribunal that he knows his company is literally Sauron (okay, maybe more the Witch-King of Angmar than Sauron) and doesn’t care because he wants to be the one wearing the ring at the end of the day.

  • scruiser · 10 points · edited · 3 days ago

    Eliezer joins the trend of condemning “political” violence with confidence at the far end of the Dunning-Kruger curve: https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction

    I’ve already mocked this attitude down thread and in the previous weekly thread, so I’ll try to keep my mockery to a few highlights…

    He’s admitting nuke the data centers is in fact violence!

    It would be beneath my dignity as a childhood reader of Heinlein and Orwell to pretend that this is not an invocation of force.

    But then drawing a special case around it.

    But it’s the sort of force that’s meant to be predictable, predicted, avoidable, and avoided. And that is a true large difference between lawful and unlawful force.

    I don’t think Eliezer has checked the news if he thinks the US government carries out violence in predictable or fair or avoidable ways! Venezuela! (It wasn’t fair before Trump, or avoidable if you didn’t want to bend over for the interests of US capital, but it is blatantly obvious under Trump.) The entire lead-up to Iran consisted of ripping up Obama’s attempts at treaties and trying to obtain regime change through surprise assassination! Also, if the Stop AI doomers used some clever cryptography scheme to make their policy of property destruction (and assassination) sufficiently predictable and avoidable, would that count as “Lawful” in Eliezer’s book? If he kept up with the DnD/Pathfinder source material, he would know Achaekek’s assassins are actually Lawful Evil.

    The ASI problem is not like this. If you shut down 5% of AI research today, humanity does not experience 5% fewer casualties. We end up 100% dead after slightly more time.

    His practical argument against non-state-sanctioned violence is that we need a total ban (and thus the authority of state driving it), because otherwise someone with 8 GPUs in a basement could invent strong AGI and doom us all. This is a dumb argument, because even most AI doomers acknowledge you need a lot of computational power to make the AGI God. And they think slowing down AGI (whether through violence or other means) might buy time for another sort of solution that is more permanent (like the idea of “solve alignment” Eliezer originally promised them). Lots of lesswrong posts regularly speculate on how to slow down the AI race and how to make use of the time they have, this isn’t even outside the normal window of lesswrong discourse!

    Statistics show that civil movements with nonviolent doctrines are more successful at attaining their stated goals

    Sources cited: 0

    One of the comments also pisses me off:

    Which reminds me about another point: I suspect that “bomb data centers” meme causal story was not somebody lying, but somebody recalling by memory without a thought that such serious allegation maybe is worthy to actually look up it and not rely on unreliable memory.

    “Drone strike the data centers even if starts nuclear war” is the exact argument Eliezer made and that we mocked. It is the rationalists that have tried to soften it by eliding over the exact details.

    • blakestaceyA · 7 points · 2 days ago

      It would be beneath my dignity as a childhood reader of Heinlein and Orwell

      Life is too short to be that pompous

      • Architeuthis · 6 points · 1 day ago

        Reading Heinlein as a kid isn’t even especially notable, but it’s Yud so he definitely means the polyamory advocacy stuff specifically.

        • blakestaceyA · 4 points · 1 day ago

          And it’s not like Orwell wrote a book about talking animals that is required reading in schools across the land.

    • CinnasVerses · 5 points · 2 days ago

      Yud says so much, and it’s often so confusing, that I think a lot of his followers don’t know his main messages. It used to be orthodox that you cannot have a two-faced message any more without each audience learning what you say to the others, but that assumed you were a good communicator aiming at a mass audience.

      Yud has strange views about legal responsibility:

      Anthropic Claude Mythos is already a state-level actor in terms of how much harm it could theoretically have done – given its demonstrated and verified ability to find critical security vulnerabilities in every operating system and browser; and how fast Mythos could’ve exploited those vulnerabilities, with ten thousand parallel threads of intelligent attack. Mythos hypothetically rampant or misused could have taken down the US power grid, say… at the end of its work, after introducing hard-to-find errors into all the bureaucracies and paperwork and doctors’ notes connected to the Internet.

      But if you release a virus and it infects people, we don’t hold the virus responsible, we hold you. If you build a car and it explodes when it gets rear-ended, we don’t blame the car, we blame you.

      • blakestaceyA · 4 points · 1 day ago

        Ah, so it’s Mythos that will create the nanobots (sorry, diamondoid bacteria)

    • fullsquare · 10 points · edited · 2 days ago

      eliezer misses that (as used in the decolonization/civil rights era) nonviolence is effectively a sophisticated propaganda strategy that takes existing injustices and violence and uses them to bait the opponent into attacking you, all while your own people take photos and show the entire world carefully crafted messaging that appeals to the general public’s conscience. the messaging part is extremely important in this. there’s no fucking way this could work for him, because his cause is comprehensible only to those who already buy his cult messaging as ground truth. he’s in it just for the moral superiority of being nonviolent. he’s never gonna get it because comprehending it requires touching grass

      • gerikson · 8 points · 2 days ago

        Yeah both non-violence and pure terrorism are communication forms at the root. I remember reading long ago that the Rote Armee Fraktion’s master plan was:

        1. commit horrific acts of violence against pillars of the community / rob banks to get money
        2. said acts would unleash a repressive wave of violence from the state
        3. the proletariat would see this repressive wave, wake up, and cause the revolution

        It kinda stopped at stage 2, because the BRD’s security services were a bit less ex-Nazi than they expected, and also there was basically no proletariat.

        Also the Southern police chief who correctly deduced that mass arrests were what the civil rights activists wanted, got the go-ahead from neighboring county jails, and then politely and non-violently arrested everyone protesting and spread them out over a wider area, thus preventing the media-friendly repression that was the goal.

        • fullsquare · 6 points · 2 days ago

          Yeah there are only so many ways to get it going, you don’t hear about these that don’t figure it out because cops bust them making them look like clowns and nobody wants to get associated with them afterwards

          there is also a barrier between step 2 and 3, because sometimes news like that are suppressed. american school shootings get that treatment sometimes, not to mention all the info filtering at facebook and friends. this is why sympathetic media is an important bit to have in advance. there’s also this bit where any serious insurgency needs money and it looks like what they got didn’t work out

          that southern police chief was per blogpost Laurie Pritchett and this kind of thinking is also what makes COIN tick. worry not, Hegseth declared it all woke nonsense

    • YourNetworkIsHaunted · 11 points · 3 days ago

      This feels somehow tied to the whole “agentic” thing I’ve ranted about previously. Like, individual acts of violence are strictly destructive because the people doing it aren’t sufficiently “agentic” to change things, even though American history is full of cases where (usually racist) vigilante violence had a huge impact on people’s decision-making. But when the government does it it’s different, because people in government got there by proving their agency and ability to actually impact the world. Like, it feels almost like he’s offended that the NPCs might try and do something as drastic as killing someone without GM permission.

      Meanwhile in reality, people legitimately do feel like they don’t have a lot of options to protect themselves from the real harms this industry is doing, to say nothing of the people who buy his line about the oncoming class-K end-of-life scenario. Anger is an appropriate response to the circumstances we find ourselves in, and in a nation that has been quietly cultivating a culture of heroic violence for decades we shouldn’t be surprised to see people trying to inflict that fear and rage upon the outside world.

      • scruiser · 7 points · 2 days ago

        Eliezer complaining about vigilante actions is really ironic, considering one of the main themes in Harry Potter and the Methods of Rationality was “heroic responsibility” and complaining about how ordinary people default to doing nothing. I guess what he actually meant was for right-thinking people (people who agree with him) to take the actions he approves of.

      • Evinceo · 9 points · 3 days ago

        in a nation that has been quietly cultivating a culture of heroic violence for decades we shouldn’t be surprised to see people trying to inflict that fear and rage upon the outside world.

        Nay, a culture where every citizen is entitled to one armed crashout, and threats of such have been an important lever used by the party that believes in that entitlement for decades.

    • Soyweiser · 8 points · edited · 2 days ago

      But it’s the sort of force that’s meant to be predictable, predicted, avoidable, and avoided. And that is a true large difference between lawful and unlawful force.

      Remember the cartoon of the bombs being dropped on people and the people going ‘I hear the next bombs will be sent by a woman’, this but ‘with lawful force’.

      We end up 100% dead after slightly more time.

      On a long enough timeframe…

      Statistics show that civil movements with nonviolent doctrines are more successful at attaining their stated goals

      This is always one of those things that baffles me, and makes it clear to me these people have never even been close to any real movement. All these movements have violent and non-violent parts. Hell, you see it even now with the far right: they have a violent and a non-violent part, and the non-violent part scores points by pointing to their violent friends and going ‘we are not with them’ while going to the same parties, sharing the same ideas, and all being friends with each other. Hell, look at the various LW people who went ‘wow, all these rightwingers in our midst are horrible’ and then didn’t stop being friends with them. I see now how Sam got the drop on all these naive people.

  • samvines · 9 points · edited · 3 days ago

    Soon, at each new model of AI along the current capability curve, you will start to see large discrete jumps in ability in economically important areas, because the previous AI ability level in some aspect of the job just wasn't good enough and bottlenecked progress. When bottlenecks are released, it looks like a leap forward. It is going to look like unexpected gains in AI capacity, and indeed there is no sign that the current exponential ability curve is slowing down so far, but it is going to be like what happened in coding: as soon as models crossed a certain threshold with Opus 4.5, GPT-5.2, and Gemini 3, suddenly Claude Code & Codex were viable. Before that, it was all about coding assistance; afterwards it was all about agents, despite relatively small gains in model ability

    There is just something so inherently smug and annoying about Mollick. He is one of those low information boosters whose posts sound intellectual until you really think about them.

    Tell me more about how the pile of cursed spaghetti that is Claude code is now viable due to model breakthroughs. All I see are hype men saying “the new model is a team of PhDs in your pocket” and then releasing disappointing updates or saying “the new model is too dangerous” because they have some vaporware powered by human crowdsourcing.

    Also coding is not like other areas - you can test for hallucinations by compiling and printing and running tests.

    I guess my first mistake this morning was opening linkedin

    • YourNetworkIsHaunted · 7 points · 3 days ago

      I’ve never understood how these things are supposedly gaining their abilities from statistical analysis of all kinds of random writings online, including social media, fanfic, reddit, etc., but are simultaneously supposed to end up as experts rather than a much faster and more agreeable dumbass. Like, the training data may include all the great works of literature, all the scrapable scientific studies and textbooks they could steal, and so on. But it also included every moron who ever shared conspiracy theories on Twitter, every confident-sounding business idiot on LinkedIn, and every stupid word that Scott or Yud ever wrote. Surely the bullshit has to exceed the expertise by raw volume, and if they took the time and energy to curate it out the way they would need to in order to correct that, they wouldn’t be left with a large enough sample to actually scale off of.

      Basically, either I’m dramatically misunderstanding something or the best we can hope for is the Average Joe on Reddit, who may not be a complete dumbass but definitely isn’t a team of PhDs.

      • scruiser · 7 points · edited · 3 days ago

        LLMs generate the next most probable token given the previous context of tokens they have (not an average of the entire internet). And post-training shifts the odds a bit further in a relatively useful direction. So given the right context the LLM will mostly consistently regurgitate content stolen from PhDs and academic papers, maybe even managing to shuffle it around in a novel way that is marginally useful.

        Of course, that is only the general trend given the right™ prompt. Even with a prompt that looks mostly right, one seemingly innocuous word in the wrong place might nudge the odds and you get the answer of an /r/hypotheticalphysics moron in response to a physics question. Or asking for a recipe gets you Elmer’s glue on your mozzarella pizza from a reddit joke answer.

        if they took the time and energy to curate it out the way they would need to to correct that they wouldn’t be left with a large enough sample to actually scale off of

        They do steps like training the model generally on the desired languages with all the random internet bullshit, and then fine-tuning it on the actually curated stuff. So that shifts the odds, but again, not enough to actually guarantee anything.

        So tl;dr: you’re right, but since it is possible to get somewhat better than average internet junk with curating and post-training and prompting, LLM boosters and labs have convinced themselves they are just a few more iterations of data curation and training approaches and prompting techniques away from entirely eliminating the problem, when the best they can do is make it less likely.
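The "shifting the odds" picture scruiser describes can be made concrete with a toy softmax sketch. Everything here is invented for illustration (the logit values, the candidate "tokens"): the point is just that post-training can boost the curated answer's probability without ever driving the junk answer's probability to zero.

```python
import math
import random

def token_probs(logits: dict, temperature: float = 1.0) -> dict:
    """Softmax over a toy logit table: the probability of each
    candidate next token, given the context that produced the logits."""
    scaled = [v / temperature for v in logits.values()]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return {tok: e / total for tok, e in zip(logits, exps)}

def sample_next_token(logits: dict, temperature: float = 1.0) -> str:
    """Sample one token: anything with nonzero probability can come up."""
    probs = token_probs(logits, temperature)
    r = random.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r < acc:
            return tok
    return tok  # fallback for floating-point rounding at the tail

# Toy logit table: fine-tuning has boosted the curated answer, but the
# joke answer keeps a small, nonzero slice of the probability mass.
logits = {"curated textbook answer": 4.0, "reddit joke answer": 1.0}
probs = token_probs(logits)
```

With this gap in logits the curated answer gets over 90% of the mass, yet the joke answer still gets sampled now and then, which is exactly the "less likely, never eliminated" behaviour described above.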