Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

  • @Soyweiser · 17 points · 22 days ago (edited)

    AGI growth lol from twitter

    xcancel.com link

    Edit: somebody also Did The Math (xcancel) “I eyeballed the rough numbers from your graph then re-plotted it as a linear rather than a logarithmic scale, because they always make me suspicious. You’re predicting the effective compute is going to increase about twenty quadrillion times in a decade. That seems VERY unlikely.”
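
    For the curious, the “Did The Math” replot is easy to sanity-check. A minimal sketch, where the growth rate is an assumption eyeballed to match the quoted “twenty quadrillion times in a decade” figure, not data read off the original chart:

    ```python
    # Sanity check: what a straight line on a log-scale chart implies
    # when replotted on a linear scale.
    # Assumption (eyeballed, not from the original graph): the line rises
    # ~1.63 orders of magnitude per year.
    ooms_per_year = 1.63
    years = 10
    growth_factor = 10 ** (ooms_per_year * years)
    print(f"effective compute multiplier over a decade: {growth_factor:.2g}")
    # ~2e16, i.e. about twenty quadrillion
    ```

    Which is exactly why log charts “always make me suspicious”: a modest-looking straight line hides a sixteen-orders-of-magnitude claim.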

    • @ebu · 20 points · 22 days ago (edited)

      i really, really don’t get how so many people are making the leaps from “neural nets are effective at text prediction” to “the machine learns like a human does” to “we’re going to be intellectually outclassed by Microsoft Clippy in ten years”.

      like it’s multiple modes of failing to even understand the question happening at once. i’m no philosopher; i have no coherent definition of “intelligence”, but it’s also pretty obvious that all LLMs are doing is statistical extrapolation on language. i’m just baffled at how many so-called enthusiasts and skeptics alike just… completely fail at the first step of asking “so what exactly is the program doing?”

      • @BigMuffin69 · 15 points · 22 days ago

        The y-axis is absolute eye bleach. Also implying that an “AI researcher” has the effective compute of 10^6 smart high schoolers. What the fuck are these chodes smoking?

      • @froztbyte · 10 points · 22 days ago

        this article/dynamic comes to mind for me here, along with a toot I saw the other day but don’t currently have the link for. the toot told a story of some teacher somewhere speaking about ai hype: he put googly eyes on a pencil or something to make it personable and made it “speak”, then broke it in half the moment people were even slightly engaged with the idea of a person’d pencil. the point of it was that people are remarkably good at seeing personhood/consciousness/etc in things where it just outright isn’t there

        (combined with a bit of en vogue hype wave fuckery, where genpop follows and uses this stuff, but they’re not quite the drivers of the itsintelligent.gif crowd)

          • @blakestacey (OP, admin) · 14 points · 22 days ago (edited)

            Transcript: a post by Greg Stolze on Bluesky.

            I heard some professor put googly eyes on a pencil and waved it at his class saying “Hi! I’m Tim the pencil! I love helping children with their homework but my favorite is drawing pictures!”

            Then, without warning, he snapped the pencil in half.

            When half his college students gasped, he said “THAT’S where all this AI hype comes from. We’re not good at programming consciousness. But we’re GREAT at imagining non-conscious things are people.”

            • @sc_griffith · 6 points · 18 days ago (edited)

              how exactly did he get googly eyes on a pencil. big “then an eagle flew around the classroom” energy

          • @froztbyte · 3 points · 21 days ago

            yeah, was that. not sure it happened either, but it’s a good concise story for the point nonetheless :)

      • @o7___o7 · 10 points · 22 days ago

        They’re just one step away from “Ouija board as a Service”

        • David Gerard (mod, admin) · 8 points · 22 days ago

          Ouija Board, Sexy Lady Voice Edition

          • @Soyweiser · 4 points · 22 days ago

            Either sexy voice or the voice used in commercials aimed at women and children. (I noticed a while back that they use the same tone of voice, and now it lowkey annoys me every time I hear it.)

      • @Soyweiser · 8 points · 22 days ago

        Same with when they added some features to the UI of GPT with the GPT-4o chatbot thing. Don’t get me wrong, the tech to do real-time audio processing etc. is impressive (but has nothing to do with LLMs, it was a different technique), but it certainly is very much smoke and mirrors.

        I recall when they taught developers to be careful about shipping small UI changes without backend changes, because to non-insiders a polished UI feels like a massive change even while the backend still needs a lot of work (so the client thinks you are 90% done while only 10% is done). Now half the tech people get tricked by the same problem.

        • @ebu · 8 points · 22 days ago

          i suppose there is something more “magical” about having the computer respond in realtime, and maybe it’s that “magical” feeling that’s getting so many people to just kinda shut off their brains when creators/fans start wildly speculating on what it can/will be able to do.

          how that manages to override people’s perceptions of their own experiences, happening right in front of them, still boggles my mind. they’ll watch a person point out that it gets basic facts wrong or speaks incoherently, and assume the fault lies with the person for not having the true vision or what have you.

          (and if i were to channel my inner 2010’s reddit atheist for just a moment it feels distinctly like the ways people talk about Christian Rapture, where flaws and issues you’re pointing out in the system get spun as personal flaws. you aren’t observing basic facts about the system making errors, you are actively in ego-preserving denial about the “inevitability of ai”)

    • @gerikson · 16 points · 22 days ago

      Straight line on a lin-log chart, getting crypto flashbacks.

      • @Soyweiser · 10 points · 22 days ago

        I think technically the singularitarians were way ahead of them on the lin-log chart lines. Have a nice source (from 2005).

        • @carlitoscohones · 9 points · 22 days ago

          How am I ever going to work again, knowing that page is on the internet. Instead of Timecube, it’s time squared.

          • @Soyweiser · 7 points · 22 days ago (edited)

            I’m just amazed that they hate lin charts so much that the Countdown to SIN lin chart is missing.

            E: it does seem to work when I go directly to the image, but not on the page. No human! You have a torch, look down, there is a cliff! Ignore the siren cries of NFTs at the bottom! (Also look behind you, that woman with her two monkey friends is about to stab you in the back for some reason.)

        • @mountainriver · 8 points · 22 days ago

          I can’t get over that the two axes are:

          Time to the next event.

          Time before present.

          And then they have plotted a bunch of things happening with less time between. I can’t even.

    • @gerikson · 15 points · 21 days ago (edited)

      Similar vibes in this crazy document

      EDIT: it’s the same dude who was retweeted

      https://situational-awareness.ai/

      AGI by 2027 is strikingly plausible. GPT-2 to GPT-4 took us from ~preschooler to ~smart high-schooler abilities in 4 years. Tracing trendlines in compute (~0.5 orders of magnitude or OOMs/year), algorithmic efficiencies (~0.5 OOMs/year), and “unhobbling” gains (from chatbot to agent), we should expect another preschooler-to-high-schooler-sized qualitative jump by 2027.
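
      Taking the quoted trendline figures at face value, the arithmetic it rests on is trivial to restate. A minimal sketch, using only the quote’s own claimed rates (~0.5 + ~0.5 OOMs/year) and window, before any “unhobbling” hand-waving:

      ```python
      # The quoted extrapolation: compute growth (~0.5 OOMs/year) stacked
      # with algorithmic efficiency (~0.5 OOMs/year) over the claimed
      # GPT-2 -> GPT-4 window of 4 years.
      compute_ooms_per_year = 0.5
      algorithmic_ooms_per_year = 0.5
      years = 4
      total_ooms = (compute_ooms_per_year + algorithmic_ooms_per_year) * years
      print(f"{total_ooms} OOMs, i.e. a factor of {10 ** total_ooms:,.0f}")
      # 4.0 OOMs, i.e. a factor of 10,000
      ```

      The entire “preschooler to high-schooler” claim is this multiplication plus the assumption that the trend, and the analogy, both hold.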

      Last I checked ChatGPT can’t even do math, which I believe is a prerequisite for being considered a smart high-schooler. But what do I know, I don’t have AI brain.