https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

http://web.archive.org/web/20240904174555/https://ssi.inc/

I have nothing witty or insightful to say, but figured this probably deserved a post. I flipped a coin between sneerclub and techtakes.

They aren’t interested in anything besides “superintelligence,” which strikes me as an optimistic business strategy. If you are “cracked” you can join them:

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

  • Sailor Sega Saturn (OP)
    17 points · 2 months ago

    Eliezer is skeptical, can find a flaw in any alignment strategy within 2 minutes: https://x.com/ESYudkowsky/status/1803676608320192617

    If you have an alignment plan I can’t shoot down in 120 seconds, let’s hear it. So far you have not said anything different from the previous packs of disaster monkeys who all said exactly this almost verbatim, but I’m open to hearing better.

    • @self
      23 points · 2 months ago

      is… is yud one of the disaster monkeys? or are we supposed to forget he spent a bunch of years running and renaming an institute that tried and failed to do this exact same alignment grift?

      • @sc_griffith
        5 points · 2 months ago

        yud is the uniquely capable person in this area. anyone who even sets foot in it should make groveling to him a high priority. these people are disaster monkeys because they aren’t doing that.