First, let me say that what broke me from the herd at LessWrong was specifically the calls for AI pauses: the idea that ‘rationalists’ are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that any act, however violent, is justified to stop AI from being developed.

The flaw here is that there are 8 billion people alive right now, and we don’t actually know what the future holds. There are ways better AI could help the people living now, possibly saving their lives, and essentially Eliezer Yudkowsky is saying “fuck ’em”. This could only be worth it if you somehow knew trillions of people were going to exist, had a low future discount rate, and so on. That reasoning seems deeply flawed, and seems to be one of the points made here.

But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can’t solve (robotics, continuous learning, module reuse: the things needed to reach a general level of capability and for AI to do many, but not all, human jobs) are near-future problems. I can link DeepMind papers on all of these, published in 2022 or 2023.

And if AI can be general and control robots, then since building robots is a task human technicians and other workers can already do, a form of Singularity is possible. Maybe not the breathless utopia Ray Kurzweil promised, but a fuckton of robots.

So I was wondering what the people here generally think. There are “boomer” forums I know of where they generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as hypesters who collect $300k to edit JavaScript and drive Teslas*.

I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.

Here are my questions:

  1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans work in?

  2. Do you consider it likely that, before 2040, those domains will include robotics?

  3. If AI systems can control robots, do you believe a form of Singularity will happen? By this I mean hard exponential growth in the number of robots, scaling past all industry on Earth today by at least one order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.

  4. Do you think a mass transition, where most of the human jobs we have now are replaced by AI systems, will happen before 2040?

  5. Is AI system design an issue? I hate to say “alignment”, because I think that’s hopeless wankery by non-software-engineers, but given that these will be robot-controlling, advanced decision-making systems, will building them require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?

*“epistemic status”: I, uh, do work for a tech company; my job title is machine learning engineer; my girlfriend is much younger than me and sometimes fucks other dudes; and we have 2 Teslas…

  • @corbin · 11 points · 10 months ago

    I’m being explicitly NSFW in the hopes that your eyes will be opened.

    The Singularity was spawned in the 1920s, with no clear initiating event. Its first two leaps forward are called “postmodernism” and “the Atomic age.” It became too much for any human to grok in the late 1940s, and by the 1960s it was in charge of terraforming and scientific progress.

    I find all of your questions irrelevant, and I say this as a machine-learning practitioner. We already have exponential growth in robotics, leading to superhuman capabilities in manufacturing and logistics.

    • @froztbyte · 10 points · 10 months ago

      I actually really liked this reply, purely because it walked a different avenue of response.

      Because yeah, indeed: under the lens of raw, naïve implementation, the utter breadth of scope involved in basically anything is so far beyond useful (or even tenuous) human comprehension that it’s staggering.

      We are, notably, remarkably competent at abstraction[0], and this goes a hell of a long way as an affordance, but it’s also not an answer.

      I’ll probably edit this later to flesh the post out a bit, because I’m feeling bad at words rn

      [0] - this ties in with the “lossy at scale” post I need to get to writing (soon.gif)

      • @TerribleMachines · 8 points · edited · 10 months ago

        Yeah, this post (edit: “comment”, the original post does not spark joy) sparked joy for me too (my personal cult lingo is from Marie Kondo books, whatcha gonna do)

        One of my takes is that the “AI alignment” garbage is way less of a problem than “Human Alignment”, i.e., how to get humans to work together and stop being jerks all the time. Absolutely wild that they can’t see that, except perhaps when it comes to trying to get other humans to give them money for the AIpocalypse.

    • @BrickedKeyboardOP · -1 points · 10 months ago

      Currently, the global economy doubles every 23 years. Robots building robots and robot-making equipment could probably double faster than that. It won’t happen in a week or a month; energy requirements alone limit how fast it can go.

      Suppose the doubling time is 5 years, just to put a number on it. That growth rate is about 4.6 times faster than today’s (23/5 ≈ 4.6), which compounds to roughly 2^(23/5) ≈ 24× growth over one current doubling period instead of 2×. This continues until the solar system runs out of matter.

      Is this a relevant event? Does it qualify as a Singularity? Genuinely asking: how have you “priced in” this possibility in your worldview?
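
      To put numbers on this, here’s a minimal sketch of the doubling arithmetic (the 23-year and 5-year doubling times are the assumptions above, not measured data):

      ```python
      import math

      # Assumed figures from the comment above (illustrative, not data):
      CURRENT_DOUBLING_YEARS = 23  # world economy today
      ROBOT_DOUBLING_YEARS = 5     # hypothetical robots-building-robots economy

      def growth_rate(doubling_years: float) -> float:
          """Continuous annual growth rate implied by a doubling time."""
          return math.log(2) / doubling_years

      current = growth_rate(CURRENT_DOUBLING_YEARS)  # ~3.0% per year
      robot = growth_rate(ROBOT_DOUBLING_YEARS)      # ~13.9% per year

      print(f"rate ratio: {robot / current:.1f}x faster")  # ~4.6x
      # Over one of today's doubling periods (23 years), a 5-year doubling
      # time compounds to 2**(23/5) ~= 24x growth instead of 2x.
      print(f"growth over 23 years: {2 ** (23 / 5):.0f}x")
      ```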

      • @corbin · 5 points · 10 months ago

        You are an exponential economist, but I am a finite physicist. Do the math.
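
        In the spirit of “do the math”, a minimal sketch, assuming rough round figures (humanity uses on the order of 2 × 10^13 W of primary power; Earth intercepts roughly 1.74 × 10^17 W of sunlight) plus the 5-year doubling time proposed above:

        ```python
        import math

        # Order-of-magnitude assumptions, not precise data:
        CURRENT_POWER_W = 2e13       # ~20 TW of current human power use
        TOTAL_SUNLIGHT_W = 1.74e17   # total solar power intercepted by Earth
        DOUBLING_YEARS = 5           # the hypothetical doubling time above

        # Doublings before energy use exceeds all sunlight reaching Earth.
        doublings = math.log2(TOTAL_SUNLIGHT_W / CURRENT_POWER_W)
        print(f"{doublings:.0f} doublings ~= {doublings * DOUBLING_YEARS:.0f} years")
        # -> ~13 doublings, roughly 65 years, before the exponential hits
        #    a hard physical ceiling (ignoring waste heat, which bites sooner).
        ```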