Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this)

    • @rook
      link
      English
      7
      4 days ago

      Valsorda was on mastodon for a bit (in ’22 maybe?) and was quite keen on it, but left after a bunch of people got really pissy at him over one of his projects. I can’t actually recall what it even was, but his argument was that people posted stuff publicly on mastodon, so he should be able to do what he liked with those posts even if they asked him not to. I can see why he might not have a problem with LLMs.

      Anyone remember what he was actually doing? Text search or network tracing or something else?

      • Robert Kingett, blind
        link
        fedilink
        9
        3 days ago

        I was utterly amazed at so many people in that thread advocating that LLMs were supposed to, like, superglue the internet and thereby somehow make it available to people in the global south? When an LLM is just quite literally a faster autocomplete and doesn’t actually connect people to anything. I wonder what these people will think when these “AI” companies decide they want money and they’re done burning cash. They think the internet is expensive? Just wait until these LLM makers are tired of spending money. @rook @V0ldek

        • @froztbyte
          link
          English
          12
          edit-2
          3 days ago

          (e: apologies, this turned into more of a wall-of-text sneer than I meant to, but I’ll leave it for flavour and detail)

          superglue the internet and thereby somehow make it available to people in the global south

          as someone from (and living in) the global south (fairly familiar with but not myself at worse end of the resources spectrum), I cannot tell you how fucking ridiculous it sounds each time I see some North American Fuckwit post shit like that. whether it was the coiners going “banking the unbanked!!!” or the llm trash “can help you write professional!!!”, it’s always some Extremely Resourced thinking that just does. not. apply. this side of the world

          I probably should make this a long detailed post sometime somewhere, demonstrating just how utterly fucking wrong some of these presumptions are, because oh god they’re many:

          the amount of data it takes to communicate with this trash (in a number of markets, you get people buying data bundles in 10/50/100MB increments in day or hour units because that’s what they can afford at that point (there is another rant here to be had about exploitative behaviour on the part of telcos but separate rant))

          just reaching the servers for this shit requires a good network connection, nevermind the interaction latency (higher base latencies = much longer cumulative = much slower “experience”… and this shit was already slow from US networks)

          hell, just having the hardware that’s capable is sometimes a big blocker - so-called “feature phones” are somewhat common (how much depends on where you are). sideline mention: locally in some areas they’re called “trililis”, after the way they ring, which I fucking love. and even when you have users with smartphones, the devices are not necessarily good. sometimes it’s low resourced (because cost), sometimes it’s buggy as fuck (vendors, cost), sometimes it’s just plain fucked (because hard knocks life)

          and don’t even get me goddamn started on the language. the phenomenon of nigerian english being Too Florid For USA has already featured here previously, but it goes so much beyond that. show me one of these fucking prompts working even half-well in Pedi, Sotho, Swazi, Tsonga, Tswana, Venda, Xhosa, Zulu, or Afrikaans. and those are just the other national (spoken/textual) languages here (in ZA). one single border away there’s 25+ more that I know of

          and that’s to just look at the resource/technical/implementation side of it, and saying nothing about the Northern Saviour dynamic - so many of these fucking people advertise working for a non-profit, wearing it like a badge. wandering around DC a few years back, running into many of these, with so-called focuses on places in africa I’ve been to and worked in… it was surreal how wide the gap was between reality and what they had in their heads

      • David GerardMA
        link
        English
        6
        3 days ago

        oh! was he the guy doing a search engine archiving as much of the fediverse as possible, over the objections of the people being indexed?

        yeah that tracks

        • @blakestaceyA
          link
          English
          9
          3 days ago

          So many techbros have decided to scrape the fediverse that they all blur together now… I was able to dig up this:

          “I hear I’m supposed to experiment with tech not people, and must not use data for unintended purposes without explicit consent. That all sounds great. But what does it mean?” He whined.

          • David GerardMA
            link
            English
            6
            3 days ago

            yeah, that’s the fucker. as a large language model, he does not have a data type for consent

            • @froztbyte
              link
              English
              5
              3 days ago

              I always wondered why he was at google for so long, and cut a teeny bit of hypothetical slack in light of “hmm maybe it gave him a significantly better life than what he could get in italy” (which honestly I can understand as a drive, if not necessarily agree with)

              that slack’s gone now

    • @mirrorwitch
      link
      English
      15
      4 days ago

      I find the polygraph to be a fascinating artifact, mostly on account of how it doesn’t work. it’s not that it kinda works, that it more or less works, or that if we just iron out a few kinks the next model will do what polygraphs claim to do. the assumptions behind the technology are wrong. lying is not physiological; a polygraph cannot and will never work. you might as well hire me to read the tarot of the suspects, my rate of success would be as high or higher.

      yet the establishment pretends that it works, that it means something. because the State desperately wants to believe that there is a path to absolute surveillance, a way to make even one’s deepest subjectivity legible to the State, amenable to central planning (cp. the inefficacy of torture). they want to believe it so much, they want this technology to exist so much, that they throw reality out of the window, ignore not just every researcher ever but the evidence of their own eyes and minds, and pretend very hard, pretend deliberately, willfully, desperately, that the technology does what it cannot do and will never do. just the other day some guy was condemned to use a polygraph in every statement for the rest of his life. again, this is no better than flipping a coin to decide if he’s telling the truth, but here’s the entire System, the courts the judge the State itself, solemnly condemning the man to the whims of imaginary oracles.

      I think this is how “AI” works, but on a larger scale.

      • David GerardMA
        link
        English
        7
        3 days ago

        see also voice stress analysis, another thing that doesn’t work but is sold as working with AI

    • David GerardMA
      link
      English
      11
      4 days ago

      that dude advocates LLM code autocomplete and he’s a cryptographer

      like that code’s gotta be a bug bounty bonanza

      • @selfA
        link
        English
        10
        4 days ago

        dear fuck:

        From 2018 to 2022, I worked on the Go team at Google, where I was in charge of the Go Security team.

        Before that, I was at Cloudflare, where I maintained the proprietary Go authoritative DNS server which powers 10% of the Internet, and led the DNSSEC and TLS 1.3 implementations.

        Today, I maintain the cryptography packages that ship as part of the Go standard library (crypto/… and golang.org/x/crypto/…), including the TLS, SSH, and low-level implementations, such as elliptic curves, RSA, and ciphers.

        I also develop and maintain a set of cryptographic tools, including the file encryption tool age, the development certificate generator mkcert, and the SSH agent yubikey-agent.

        I don’t like go but I rely on go programs for security-critical stuff, so their crypto guy’s bluesky posts being purely overconfident “you can’t prove I’m using LLMs to introduce subtle bugs into my code” horseshit is fucking terrible news to me too

        but wait, mkcert and age? is that where I know the name from? mkcert’s a huge piece of shit nobody should use that solves a problem browsers created for no real reason, but I fucking use age in all my deployments! this is the guy I’m trusting? the one who’s currently trolling bluesky cause a fraction of its posters don’t like the unreliable plagiarization machine enough? that’s not fucking good!

        maybe I shouldn’t be taking this so hard — realistically, this is a Google kid who’s partially funded by a blockchain company; this is someone who loves boot leather so much that most of their posts might just be them reflexively licking. they might just be doing contrarian trolling for a technology they don’t use in their crypto work (because it’s fucking worthless for it) and maybe what we’re seeing is the cognitive dissonance getting to them.

        but boy fuck does my anxiety not like this being the personality behind some of the code I rely on

        • @gerikson
          link
          English
          8
          4 days ago

          Oh shit, that’s where I recognize his name from. Very disappointing he’s full on the LLM train.

          • @selfA
            link
            English
            8
            4 days ago

            cryptographers: need strict guarantees on code ordering and timing because even compiler optimizations can introduce exploitable flaws into code that looks secure

            the go cryptographer: there’s no reason not to completely trust a system that pastes plagiarized code together so loosely it introduces ordering-based exploits into ordinary C code and has absolutely no concept of a timing attack (but will confidently assert it does)

        • @froztbyte
          link
          English
          5
          4 days ago

          yeah. Been following valsorda for a while because reasons, and there’s a certain type of thing they frequently go for. “it’s popular and thus worth it”: the side effects aren’t something they seem to concern themselves with, in respect to the gallery of shit

          I know that rage exists, but haven’t really tried to make serious use of it yet. Probably worth checking out

          • @selfA
            link
            English
            7
            4 days ago

            I know that rage exists, but haven’t really tried to make serious use of it yet.

            oh I make serious use of rage all the time in my work

            not the program, but that looks cool too

    • @swlabr
      link
      English
      6
      5 days ago

      Some ok anti-AI voices in that thread. But mostly a torrent of shit

    • @FredFig
      link
      English
      9
      5 days ago

      Criticizing others for not being perfectly exacting with their language and then jumping in front of the LLM headlights all at once, truly the human mind has no limits.