OpenAI’s offices were sent thousands of paper clips in an elaborate prank to warn about an AI apocalypse::The prank was a reference to the “paper clip maximizer” scenario – the idea that AI could destroy humanity if it were told to build as many paper clips as possible.

    • @aeronmelon@lemm.ee
      link
      fedilink
      English
      381 year ago

      Human Resources: “But you still took one home without asking, which is theft. Go clean out your desk.”

      • Gyromobile
        link
        fedilink
        English
        11
        edit-2
        1 year ago

        Everyone works (at least partially) from home now. If you havn’t taken a box of office supplies home yet you are probably braindead.

  • @umbrella@lemmy.ml
    link
    fedilink
    English
    62
    edit-2
    1 year ago

    I’m more worried about what capital will do with the tech than the tech itself.

    • @Sanctus@lemmy.world
      link
      fedilink
      English
      32
      edit-2
      1 year ago

      Its far more likely that wealth inequality will be greatly exasperated, and the philanthropic elite will become even more powerful while the “middle class” disappears entirely into a new bottom class that makes up most of humanity, Imperium of Man style.

      • Uriel238 [all pronouns]
        link
        fedilink
        English
        211 year ago

        The problem is the social unrest will be uncontrollable, and among their solutions will not only to be to micromanage the population, but also adjust its numbers by force.

        So murder drones will just be sent to cull the masses down to a manageable number. This is the robot future Randall is concerned about in XKCD 1968.

        Imagine if the police dogbots killed people and we couldn’t question why, nor had the power to resist. This is a problem being considered by AI ethicists. Also that AI will develop scarier ways to cull the population, possibly instigating the demise of their operating end users.

        • @Sanctus@lemmy.world
          link
          fedilink
          English
          4
          edit-2
          1 year ago

          Its already advanced enough to do that, and the killer bots are in development. It seems all but inevitable, especially since we are already seeing unrest as climate change begins to claim its first nations.

      • @reksas@lemmings.world
        link
        fedilink
        English
        4
        edit-2
        1 year ago

        Police is already on its way to your location for speech containing anti-corporate sentiment, flagged by ai

  • @Aggravationstation@lemmy.world
    link
    fedilink
    English
    23
    edit-2
    1 year ago

    I’m genuinely worried I’ll be watching TV and Clippy will appear: “It looks like your entire species is about to be vaporised by a coordinated drone strike. Would you like some help? Well, you gotta beg for it now, bitch!”

  • kbal
    link
    fedilink
    131 year ago

    Presumably the same people who thought that the Large Hadron Collider was going to create a black hole that would destroy the world.

    • @MotoAsh@lemmy.world
      link
      fedilink
      English
      221 year ago

      Nah, AI doing weird stuff is actually possible. Armageddon isn’t likely, but it’s more on the table than a black hole ever was.

        • Captain Janeway
          link
          fedilink
          English
          5
          edit-2
          1 year ago

          Drones that target people with image analysis. Facial detection is trivial these days. Drones have proven to be one of Ukraine’s best guerilla warfare techniques. Isis was less successful but Ukraine has a lot more capital to make “off the shelf” solutions more meaningful. Just look around. Plenty of private organizations are selling mass organized drones which use various ML models to target individuals. Either for finding a person in a forest fox hole or for searching a town for a particular individual.

          Eg: this random company I found on Google

          • @tabular@lemmy.world
            link
            fedilink
            English
            2
            edit-2
            1 year ago

            It’s difficult to draw a clear line between a simple neural network and a human brain when it comes to “intelligence”. The rouge, paperclip-making “AI” seems be far closer to an intelligence, while flying autos or text prediction seems closer to mere hand-written code.

            • @MotoAsh@lemmy.world
              link
              fedilink
              English
              11 year ago

              I think part of the wisdom in the warning is that any kind of “intelligence” (read: NOT specifically artificial general intelligence) is capable of running away with unforseen scenarios.

              Hell, even normal ol’ algorithms can have some pretty nasty edge cases that noone spots until it’s running in production… Sure it’s uncommon, but it’s not exactly rare. (just look up the list of zero-day exploits over the years)

      • Ataraxia
        link
        fedilink
        English
        21 year ago

        Yeah and vaccines make you magnetic. Science bad.

  • bruhduh
    link
    fedilink
    English
    81 year ago

    Ooooooh i haven’t seen this face since windows xp

  • @Destraight@lemm.ee
    link
    fedilink
    English
    51 year ago

    I highly doubt that would ever happen.If this AI is building paperclips to overthrow humanity then someone is going to notice

    • ayaya
      link
      fedilink
      English
      151 year ago

      You would think so, but you have to remember AGI is hyper-intelligent. Because it can constantly learn, build, and improve upon itself at an exponential rate it’s not only a little bit smarter than a human-- it’s smarter than every human combined. AGI would know that if it’s caught trying to maximizing paperclips humans would shut it down at the first sign something is wrong, so it would find unfathomably clever ways to avoid detection.

      If you’re interested in the subject the YouTube channel Computerphile has a series of videos with Robert Miles that explain the importance of AI safety in an easy to understand way.

      • Peanut
        link
        fedilink
        English
        1
        edit-2
        1 year ago

        For a system to be advanced enough to be that dangerous, it would need the complex analogical thought that would prevent this type of misunderstanding. Rather, such dumb super intelligence is unlikely.

        however, human society has enabled a paperclip maximizer in the form of profit maximizing corporate environments.

        • @MotoAsh@lemmy.world
          link
          fedilink
          English
          61 year ago

          They use simple examples to elucidate the problem. Of course a real smart intelligence isn’t going to get stuck making paper clips. That’s entirely not the point.

          • Peanut
            link
            fedilink
            English
            21 year ago

            the the problem of analogy is applicable to more than one task. your point is moot.

            for it to be intelligent enough to be a “super intelligence” it would require systems for weighting vague liminal concept spaces. rather, several systems that would prevent that style of issue.

            otherwise it just couldn’t function as well as you fear.