Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this - this one was a bit late, I got distracted)

  • Sailor Sega Saturn
    7 days ago

    I woke up and immediately read about something called “Defense Llama”. The horrors are never ceasing: https://theintercept.com/2024/11/24/defense-llama-meta-military/

    Scale AI advertised their chatbot as being able to:

    apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities

However, their marketing material, as is tradition, includes an example of terrible advice. Which is not great given it’s about blowing up a building “while minimizing collateral damage”.

    Scale AI’s response to the news coverage pointing this out was to complain that everyone took their murderbot marketing material seriously:

    The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.

    • @BlueMonday1984OP
      7 days ago

      On the one hand, that spectacular failure could potentially dissuade the military from buying in and prolonging this bubble. On the other hand, having an accountability sink for war crimes would be a tempting offer to your average army.

      • @istewart
        6 days ago

        The eventual war crimes trials will very likely reveal that “AI targeting” has already been used as an accountability sink for a premeditated ethnic cleansing policy in Gaza.

      • @froztbyte
        7 days ago

        I’ve been wondering about this

        On the one hand, military procurement (at least afaik) tends toward complete, functional products

        On the other hand, military R&D programs have been among the most spectacularly profligate financial black holes in recent decades

        None of the options involved feel great, even if “it gets shunted from mil procurement and all industry claims get publicly branded as the bullshit they are” comes to pass (which tbh still feels like an optimistic outcome, with unclear time horizons)

        • @YourNetworkIsHaunted
          7 days ago

          I mean, it fits into the pattern of procurement projects that aren’t allowed to fail despite having had serious coherence issues starting at the design stage. Though the military is usually less prone to the “problem in search of a solution” dynamic that VCs fall into, once a project gets started it can shamble forward as a zombie for years before anyone finds the political will to kill it.