• @Shitgenstein1
24
4 months ago

    A year and two and a half months since his Time magazine doomer article.

No shutdowns of large AI training - in fact, it's only expanded. No ceiling on compute power. No multinational agreements to regulate GPU clusters or first-strike rogue datacenters.

    Just another note in a panic that accomplished nothing.

    • @Soyweiser
18
3 months ago

No shutdowns of large AI training

      At least the lack of Rationalist suicide bombers running at data centers and shouting ‘Dust specks!’ is encouraging.

      • @skillissuer@discuss.tchncs.de
11
3 months ago

considering that the more extremist faction is probably homeschooled, i don't expect that any of them has ochem skills good enough to not die in a mysterious fire when cooking a device like this

      • @sc_griffith
        10
        3 months ago

why would rationalists do something difficult and scary in real life, when they could be wanking each other off with crank fanfiction and buying ~~castles~~ manor houses for the benefit of the future

      • AcausalRobotGod
7
3 months ago

        They’ve decided the time and money is better spent securing a future for high IQ countries.

    • @fartsparkles@sh.itjust.works
14
4 months ago

      It’s also a bunch of brainfarting drivel that could be summarized:

      Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

      Or

      Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.

      • @OhNoMoreLemmy@lemmy.ml
19
4 months ago

        If yud just got to the point, people would realise he didn’t have anything worth saying.

        It’s all about trying to look smart without having any actual insights to convey. No wonder he’s terrified of being replaced by LLMs.

      • @Architeuthis
19
3 months ago

        Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

        You make his position sound way more measured and responsible than it is.

His ‘effective safety measures’ are something like: A) solve ethics, B) hardcode the result into every AI. I.e., garbage philosophy meets garbage sci-fi.

          • AcausalRobotGod
9
3 months ago

A good chunk of philosophers do believe there are moral facts, but this is less useful for these purposes than one would think.

            • @froztbyte
8
3 months ago

              yeah it’s been absolutely hilarious to watch this play out in LLM space. so many prompt configurations and model deployments with so very many string-based rule inputs, meant to be configuring inviolable behaviour, that still get egregiously broken

              and afaict none of the dipshits have really seemed to internalise that just maybe their approach isn’t working

      • @Shitgenstein1
15
3 months ago

        Before we accidentally make an AI capable of posing existential risk to human being safety

It’s cool to know that this isn’t a real concern, and therefore to have a clear vantage on how all the downstream anxiety is really a piranha pool of grifts for venture bucks and ad clicks.

      • @Evinceo
2
3 months ago

That’s a summary of his thinking overall, but not at all what he wrote in the post. What he wrote in the post is that people assume his theory depends on a particular premise (monomaniacal AIs), when in fact, he says, his conclusions don’t rest on it at all. I don’t think he’s shown his work adequately, however, despite going on and on and fucking on.