Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post; there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

  • Mii · 2 months ago

    I hate that you can’t mention the Fermi paradox anymore without someone throwing AI into the mix. There are so many more interesting discussions to have about this than the idea that we’re all gonna be paperclipped by some future iteration of spicy autocomplete.

    But what’s even worse is that those munted dickheads will then claim that they have also found the solution to the Fermi paradox, which is, of course, to give more money to them so they can make their shitty products even worse, er, safer.

    Also:

    AI could spell the end of intelligence on Earth (including AI) […]

    Somehow Clippy 9000, which is clever enough to outsmart the entirety of the human race because it’s playing 4D chess with multiverse time travel, is at the same time too stupid to come up with any plan that doesn’t kill itself in the end?

    • @saucerwizard · 2 months ago

      There’s a concentrated effort, it seems, to bring rationalist stuff into SETI.

    • @titotal · 2 months ago

      Yeah, the Fermi paradox really doesn’t work here; an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on “tile the universe” scenarios, which would be extremely noticeable.