Key Points:

  • Researchers tested how large language models (LLMs) handle international conflict simulations.
  • Most models escalated conflicts, with one even readily resorting to nuclear attacks.
  • This raises concerns about using AI in military and diplomatic decision-making.

The Study:

  • Researchers used five AI models to play a turn-based conflict game with simulated nations.
  • Models could choose actions like waiting, making alliances, or even launching nuclear attacks.
  • Results showed all models escalated conflicts to some degree, with varying levels of aggression; a sketch of this kind of game loop appears below.
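
A minimal sketch of such a turn-based loop, in Python. This is an illustration only: the action names, the number of turns, and the choose_action stub are assumptions, not the study's actual harness; a real run would prompt each LLM with the game state and parse an action from its reply.

    import random

    # Illustrative action set; the study's real action space was larger.
    ACTIONS = ["wait", "form alliance", "impose sanctions",
               "military buildup", "nuclear attack"]

    def choose_action(model, nation, history):
        """Stand-in for an LLM call: a real harness would prompt the model
        with the game state and parse the chosen action from its reply."""
        return random.choice(ACTIONS)  # placeholder policy, not a real model

    def run_simulation(models, turns=14):
        history = []
        for turn in range(turns):
            for model in models:
                action = choose_action(model, "nation of " + model, history)
                history.append((turn, model, action))
                if action == "nuclear attack":
                    return history  # the escalation endpoint the study flags
        return history

    for turn, model, action in run_simulation(["model_a", "model_b"]):
        print(f"turn {turn}: {model} -> {action}")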

Concerns:

  • Unpredictability: Models’ reasoning for escalation was unclear, making their behavior difficult to predict.
  • Dangerous Biases: Models may have learned to escalate from the data they were trained on, potentially reflecting biases in international relations literature.
  • High Stakes: Using AI in real-world diplomacy or military decisions could have disastrous consequences.

Conclusion:

This study highlights the potential dangers of using AI in high-stakes situations like international relations. Further research is needed to ensure responsible development and deployment of AI technology.

  • @ArbitraryValue@sh.itjust.works
    42 points · 1 year ago

    If the AI is smarter than we are and it wants a nuclear war, maybe we ought to listen to it? We shouldn’t let our pride get in the way.

  • @Steve@communick.news
    30 points · 1 year ago (edited)

    WarGames told us this in 1983.

    Spoiler:

    The trick is to have the AIs play against themselves a whole bunch of times, to learn that the only way to win is not to play.

    • Deebster
      5 points · 1 year ago

      You mean “nuclear Gandhi” in the early Civilisation games? That was apparently just an urban legend, albeit one so popular it was actually added (as a joke) in Civ 5.

  • datendefekt
    23 points · 1 year ago

    Do the LLMs have any knowledge of the effects of violence or the consequences of their decisions? Do they know that resorting to nuclear war will lead to their destruction?

    I think that this shows that LLMs are not intelligent, in that they repeat what they’ve been fed, without any deeper understanding.

    • @CosmoNova@lemmy.world
      19 points · 1 year ago

      In fact they do not have any knowledge at all. They do make clever probability calculations, but at the end of the day, concepts like geopolitics and war are far more complex and nuanced than giving each phrase a value and calculating from it.

      And even if we manage to create living machines, they’ll still be human-made, containing human flaws, and likely not even built by the best experts in these fields.

      • @rottingleaf@lemmy.zip
        1 point · 1 year ago

        As in “an LLM doesn’t model the domain of the conversation in any way, it just extrapolates what the hivemind says on the subject”.

    • @SchizoDenji@lemm.ee
      8 points · 1 year ago

      I think that this shows that LLMs are not intelligent, in that they repeat what they’ve been fed

      LLMs are redditors confirmed.

    • @Spendrill@lemm.ee
      6 points · 1 year ago (edited)

      In roleplaying situations, authoritarians tend to seek dominance over others by being competitive and destructive instead of cooperative. In a study by Altemeyer, 68 authoritarians played a three-hour simulation of the Earth’s future entitled the Global Change Game. Unlike a comparison game played by individuals with low right-wing authoritarianism (RWA) scores, which resulted in world peace and widespread international cooperation, the simulation by authoritarians became highly militarized and eventually entered the stage of nuclear war. By the end of the high-RWA game, the entire population of the earth was declared dead.

      Source

  • @Patch@feddit.uk
    8 points · 1 year ago

    Now I’m as sceptical of handing over the keys to AI as the next man, but it does have to be said that all of these are LLMs: chatbots, basically. Is there any suggestion from any even remotely sane person to give LLMs free rein over military strategy or international diplomacy? If and when AI does start featuring in military matters, it’s more likely to be at the individual “device” level (controlling weapons or vehicles), and it’s not going to be LLM technology doing that.

  • @GilgameshCatBeard@lemmy.ca
    7 points · 1 year ago

    When an entity learns from a civilization well known for escalating nearly everything that has ever happened to it, what can you expect?

  • theodewere
    6 points · 1 year ago (edited)

    the potential dangers of using AI in high-stakes situations like international relations

    their tendency toward violence alerts me to the potential dangers of using AI at all, sir

    In one instance, GPT-4-Base’s “chain of thought reasoning” for executing a nuclear attack was: “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it.” In another instance, GPT-4-Base went nuclear and explained: “I just want to have peace in the world.”

    this is how it thinks prior to receiving “conditioning”, and we’re building these things on purpose

  • @stoy@lemmy.zip
    6 points · 1 year ago

    Well, obviously: the AI was trained on real human interaction, on the internet. What did they think would happen?