this time in open letter format! that’ll sure do it!

there are “risks”, which they are definite about - the risks are not hypothetical, the risks are real! it’s totes even had some acknowledgement in other places! totes real defs for sure this time guize

  • Sailor Sega Saturn · 23 points · 13 days ago

    Unprecedented benefits to humanity?

    I mean you’re competing against bicycles, campfires, bagel slicers, minestrone, electric toothbrushes, and the Sega Saturn; so that’s a pretty high bar buster!

    These risks range from the further entrenchment of existing inequalities to manipulation and misinformation to the loss of control of autonomous AI systems potentially resulting in human extinction

    Two of these things are happening today. Hint: they’re the ones that AI people give lip service to before going back to role-playing about acausal basilisks and paperclips and digital clones and whether the ability to auto-respond to auto-generated emails is worth the P(doom) risk.

    Great way to turn what might have otherwise been a petition based in reality into a joke I guess.

  • @carlitoscohones · 19 points · 13 days ago

    One of these things is not like the others. (my bulleting)

    We also understand the serious risks posed by these technologies. These risks range from

    • the further entrenchment of existing inequalities, to
    • manipulation and misinformation, to
    • the loss of control of autonomous AI systems potentially resulting in human extinction.
    • @froztbyteOP · 8 points · 13 days ago

      yeah. it had a very burying-the-lede kinda feel to it. and then it just doubles back and switches the concern statement out too

      have seriously considered making righttoearn.ai in response…

      • @maol · 3 points · 11 days ago

        It’s turned into one of those letter-ladder word games: “warn, earn, ears, cars, care…”

  • @Soyweiser · 16 points · 13 days ago

    For once I would just like to see an explanation from the AI doomers of how, given the limited capacities of Turing-style machines and P≠NP (assuming it holds; if not, the limited-capacities argument falls apart, but then we don’t need AI for stuff to go to shit, since that probably breaks a lot of encryption methods), AGI can be an existential risk. By definition it cannot surpass the limits of Turing machines via any of the proposed hypercomputational methods, because then Turing machines would be hyper-Turing and the whole classification structure comes crashing down.

    I’m not a smart computer scientist myself (though I did learn some of the theory, as evidenced above), but I’m constantly amazed at how our hyper-hyped tech scene seems not to know that our computing paradigm has fundamental limits. Everything touched by Musk has this problem in the extreme: capacity problems in Starlink, Shannon-theoretically impossible compression demands for Neuralink, everything related to his Tesla/AI autonomous driving/robots thing.

    (To further make this an anti-Musk rant: he also claimed AI would solve chess. Solving chess is a computational problem (it has been done for a 7x7 board, iirc) which just costs a lot of computation time, more than we have. If AI could solve chess, it would sidestep that time, making it a super-Turing thing, which makes Turing machines super-Turing, which is theoretically impossible and would have massive implications for all of computer science. And I can’t believe that of all the theoretical hypercomputing methods, we’re going with the oracle method (the machine just conjures up the right answer, no idea how), the one I’ve always mocked personally. Sorry, rant over.)
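    The chess point is really just arithmetic. A back-of-envelope sketch (using Shannon’s classic rough numbers, which are my assumption here, not anything the commenter specified):

    ```python
    # Back-of-envelope only: Shannon's rough estimate assumes ~30 legal
    # moves per position over ~80 plies for the naive chess game tree.
    branching, plies = 30, 80
    game_tree = branching ** plies                  # ~10**118 games

    # Even granting an absurd 10**18 positions examined per second:
    seconds = game_tree // 10 ** 18
    years = seconds // (3600 * 24 * 365)
    print(f"game tree ~10**{len(str(game_tree)) - 1}, "
          f"exhaustive search ~10**{len(str(years)) - 1} years")
    ```

    No amount of labeling the search loop “AI” changes how many positions there are to look at.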

    Anyway, these people are not engineers or computer scientists, they are bad science fiction writers. Sorry for the slightly unrelated rant; it had been stuck like a splinter in my mind for a while now. And I guess typing it out and ‘telling it to earth’ like this makes me feel less ranty about it.

    E: of course the fundamental limits apply to both sides of the argument, so both the ‘AGI will kill the world’ shit and the ‘AGI will bring us to a posthuman utopia of a googol humans in post-scarcity’ shit seem unlikely. Unprecedented benefits? No. (Also, I’m ignoring physical limits here as well, a secondary problem which would severely limit the singularity even if P=NP.)

    E2: looks at title of OPs post, looks at my post. Shit, the loons ARE at it again.

    • @BigMuffin69 · 20 points · 13 days ago

      No, they never address this. And as someone who works on large-scale optimization problems for a living, I do think it’s difficult for the public to understand that, no, a 10000-IQ super machine will not be able to just “solve these problems” in a nanosecond like Yud thinks. And it’s not like, well, the super machine will just avoid having to solve them. No. NP-hard problems are fucking everywhere. (Fun fact: for many problems of interest, even approximating the solution to a given accuracy is NP-hard, so heuristics don’t even help.)

      I’ve often found myself frustrated that more computer scientists, who should know better, simply do not address this point. If verifying solutions is exponentially easier than coming up with them for many difficult problems (all signs point to yes), and if a super-intelligent entity actually did exist (I mean, does a SAT solver count as a super-intelligent entity?), it would probably be EASY to control, since it would have to spend eons and massive amounts of energy coming up with its WORLD_DOMINATION_PLAN.exe. And you wouldn’t be able to hide a supercomputer doing that massive calculation: someone running the machine, seeing it output TURN ALL HUMANS INTO PAPER CLIPS, would say, ‘ah, we are missing a constraint here; it thinks this optimization problem is unbounded’ <- this happens literally all the time in practice. Not the world-domination part, but a poorly defined optimization problem being unbounded. But again, it’s easy to check that the solution is nonsense.
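      A minimal sketch of that verify-vs-search gap (my toy example, nothing from the letter): for SAT, checking a proposed assignment is one linear scan, while the obvious search is exponential in the number of variables.

      ```python
      # CNF formula = list of clauses; each clause = list of signed
      # variable indices (positive literal means "variable is True").
      from itertools import product

      def verify(clauses, assignment):
          """Polynomial (one linear scan): each clause needs a true literal."""
          return all(
              any(assignment[abs(lit)] == (lit > 0) for lit in clause)
              for clause in clauses
          )

      def brute_force(clauses, n_vars):
          """Exponential in the worst case: up to 2**n_vars candidates."""
          for bits in product([False, True], repeat=n_vars):
              assignment = dict(enumerate(bits, start=1))
              if verify(clauses, assignment):
                  return assignment
          return None

      # (x1 OR NOT x2) AND (x2 OR x3)
      print(brute_force([[1, -2], [2, 3]], 3))  # → {1: False, 2: False, 3: True}
      ```

      The asymmetry is the whole point: the expensive direction is finding, not checking.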

      I know Francois Chollet (THE GOAT) has talked about how there are no unending exponentials, and how the faster the growth, the faster you hit real-world constraints (running out of data, running out of chips, running out of energy, etc.), and I’ve definitely heard professional shitposter Pedro Domingos explicitly discuss how NP-hardness strongly implies EA/LW-type thinking is straight-up fantasy. But it’s a short list of people I can think of off the top of my head who have discussed this.

      Edit: bizarrely, one person I didn’t mention who has gone down this line of thinking is Ilya Sutskever; however, he has come to some frankly… uh… strange conclusions -> the only way to explain the successful performance of ML is to conclude that these models are Kolmogorov minimizers, i.e., by optimizing for loss over a training set you are doing compression, which done optimally means solving an undecidable problem. Nice theory. Definitely not motivated by bad sci-fi mysticism imbued with pure distilled hopium. From my armchair-psychologist POV, it seems he implicitly acknowledges that for his fantasy to come true he needs to escape the limitations of Turing machines, so he has to somehow shoehorn a method for hypercomputation into them. Smh, this is the kind of behavior reserved for aging physicists, amirite lads? Yet in 2023 it seemed like the whole world was succumbing to this gaslighting; he was giving this lecture to auditoriums full of tech bros, shilling this line of thinking to thunderous applause. I have old CS prof friends who were like, don’t we literally have mountains of evidence that this is straight-up crazy talk? Like, you can train an ANN to perform addition, and if you can look me straight in the eyes and say the absolute mess of weights that results looks anything like a Kolmogorov minimizer, then I know you’re trying to sell me a bag of shit.

      • @o7___o7 · 11 points · 13 days ago

        Smh, this is the kind of behavior reserved for aging physicists, amirite lads?

        Bah Gawd! That man has a family!

      • @Soyweiser · 10 points · 13 days ago

        Oh god, I’m not alone in thinking this, thank you! I’m not going totally crazy!

        • @BigMuffin69 · 9 points · 13 days ago

          I got you homie


    • @o7___o7 · 9 points · 13 days ago

      and P≠NP (assuming it holds; if not, the limited-capacities argument falls apart, but then we don’t need AI for stuff to go to shit, since that probably breaks a lot of encryption methods),

      Building a sci-fi apocalypse cult around LLMs seems like a missed opportunity when there are much more interesting computer science toys lying around. Like you pointed out, there’s the remote possibility that P=NP, which is also largely unexplored in fiction. There is a fun little low-budget movie called Travelling Salesman about this exact scenario, where several scientists are locked in a room deciding what to do with their discovery while the government tries to squeeze them for it. Very 12 Angry Men.

      My fav example of the micro-genre is The Laundry Files book series by Charles Stross (who visits these parts!). In the first book, The Atrocity Archives, it turns out that any mathematical proof that P=NP is a closely guarded state secret; so much so that the British government has an entire MoD agency dedicated to rounding up and permanently employing people who discover The Truth. This is because drawing a graph that summons horrors from beyond space-time (brain-eating parasites, hungry ghosts, Cthulhu, a competent Tory politician, etc) is an NP-complete problem. You really don’t want an efficient algorithm for solving 3SAT to show up on reddit.

      I mean, you could also use it to steal bitcoin and make robots, but pfft.

      I’m not doing the series justice. I love how Bob, Mo, Mhari, and co grow and change, and their character arcs really hit home for me, as someone who more-or-less grew up alongside the series, not to mention the spot-on social commentary.

        • @o7___o7 · 4 points · 12 days ago

          Rad as heck!

          btw, (sorry if this is prying!) considering your line of work, is all of this acausal robot god stuff especially weird and off-putting for you? Do your coworkers seem to be resistant to it?

          • @BigMuffin69 · 6 points · 12 days ago

            Not prying! Thankful to say, none of my coworkers have ever brought up ye olde basilisk, the closest anyone has ever gotten has been jokes about the LLMs taking over, but never too seriously.

            No, I don’t find the acausal robot god stuff too weird, b/c we already had Pascal’s wager. But holy shit, people actually full-throatedly believing it to the point that they are having panic attacks, wtf. Like:

            1. Full human body simulation -> my brother-in-law is a computational chemist; they spend huge amounts of compute modeling simple few-atom systems. To build a complete human simulation, you’d be computing every force interaction for approx ~10^28 atoms. This is ludicrous.

            2. The chucklefucks who are posing this are suggesting: ok, once the robot god can sim you (which, again, doubt), it’s going to be able to use that simulation of you to model your decisions and optimize against you.

            So we have an optimization problem like:

            min_x f(x)  s.t.  y ∈ argmin_y { g(x, y) : (x, y) ∈ X × Y }

            where x and f(x) are the decision variable and objective function 🐍 is trying to minimize, and y and g(x,y) are the decision variable and objective of me, the simulated human, who has his own goals (don’t get turned into paperclips).

            This is a bilevel optimization problem, and it’s very, very nasty to solve. Even in the nicest case possible, where somehow g and f are convex functions and X and Y are convex sets (which is an insane ask considering y and g entail a complete human sim), this problem is provably NP-hard.
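            A minimal sketch of that leader/follower structure (toy functions and grid of my own invention, nothing like an actual human sim): even brute-forcing on a coarse grid, the inner problem has to be re-solved for every single leader candidate.

            ```python
            # Toy bilevel problem, brute-forced on a coarse grid.
            GRID = [i * 0.05 - 2.0 for i in range(81)]   # candidates in [-2, 2]

            def f(x, y):   # leader's (🐍's) objective
                return (x - 1) ** 2 + y ** 2

            def g(x, y):   # follower's objective: track the leader
                return (y - x) ** 2

            best = None
            for x in GRID:
                # The follower's problem is re-solved from scratch for EVERY
                # leader candidate -- this nesting is what makes bilevel nasty.
                y_star = min(GRID, key=lambda y: g(x, y))
                if best is None or f(x, y_star) < best[0]:
                    best = (f(x, y_star), x, y_star)

            print(best)  # follower tracks y = x, so the leader lands near x = 0.5
            ```

            And that’s the trivially easy version; the real thing has no convexity, no coarse grid, and a follower defined by a full human sim.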

            Basically, to build the acausal god, first you need a computer larger than the known universe, and even that probably isn’t sufficient.

            Weird note: while I was in academia, I actually did some work on modeling the constraint that y is a minimizer of a follower problem, by using an ANN as a proxy for g(x,·) and then encoding a representation of the trained network into a single-level optimization problem… we got some nice results for some special low-dimensional problems where we had lots of data 🦍 🦍 🦍 🦍 🦍

      • @Soyweiser · 6 points · 13 days ago

        Computer scientist accidentally ruins the world by having his P=NP algorithm iterate over automatically generated programs, asking of each one: ‘does this program halt or not?’

        • @o7___o7 · 6 points · 12 days ago

          That’s basically how the point-of-view character gets roped in, except instead of threatening the whole world he only threatened Wolverhampton, he was still in grad school, and it was a graphics algorithm.

  • @froztbyteOP · 15 points · 14 days ago

    those fucking asks, though

    We therefore call upon advanced AI companies to commit to these principles:

    That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism

    subtooting the real openai, will the real openai please stand up

    “pls mr openai will you not kill all my shares pls I believed so hard?”

    That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related

    “I believe I was unfairly edged out because of my beliefs and I want to speak to the manager”

    That the company will support a culture of open criticism

    oh yeah that sounds kinda good

    and allow its current and former employees to raise risk-related concerns about its technologies to the public,

    oh.

    That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

    “rlyrlypls mropenai, plsnotakeshaare. I believed so hard 🥹🥹🥹🥹”

    • @Soyweiser · 6 points · 13 days ago

      subtooting the real openai, will the real openai please stand up

      Funny you would mention the real OpenAI, as OpenAI is apparently 3 different companies (a non-profit, an LLC, and one more) with some complex ownership/oversight relationship that looks very weird to me. (The Ed guy (sorry, forgot the rest of his name + blog) linked here before had a post on that.)

  • @FermiEstimate@lemmy.dbzer0.com · 15 points · 14 days ago

    “We’re all in grave danger! What? Well no, we can’t give specifics unless we risk not getting paid. Signed, Anonymous”

    I mean, I wasn’t exactly expecting the Einstein-Szilard letter 2.0 when I clicked that link, but this is pathetic.

  • @Eiim@lemmy.blahaj.zone · 15 points · 12 days ago

    I also think that AI companies shouldn’t be allowed to have non-disparagement agreements. Not because of x-risk or anything, but because a) I think all companies shouldn’t be allowed to have non-disparagement agreements, and b) it would create a bunch of entertaining content for this instance.

  • @Amoeba_Girl · 9 points · 13 days ago

    critical support for this as a first step to making NDAs illegal

  • Mii · 9 points · 13 days ago

    They managed to make this even more stupid than the open letter from last year, which had Yud among the signatories. At least that one was consistent in its message, while this one somehow manages to shoehorn in an Altman-style milquetoast well-akshually that AI is, like, totes useful and stuff until it’s gonna murder us all.

    Who are they even pandering to here?