“Notably, O3-MINI, despite being one of the best reasoning models, frequently skipped essential proof steps by labeling them as ‘trivial’, even when their validity was crucial.”

  • bitofhope · 8 months ago

    Essentially they do not simply predict the next token

    looks inside

    it’s predicting the next token

    • froztbyte · 8 months ago

      every time I read these posters it’s in the voice of those Everyman characters in Discworld who say some utter lunatic shit and follow it up with “it’s just [logical/natural/obvious/…]”

    • Pennomi@lemmy.world · 8 months ago

      Read the paper; it’s not simply predicting the next token. For instance, when writing a rhyming couplet, it first plans ahead on what the rhyme is, and then fills in the rest of the sentence.

      The researchers were surprised by this too; they expected it to be the other way around.

      • bitofhope · 8 months ago

        Oh, sorry, I got so absorbed in reading the riveting material about features predicting state name tokens to predict state capital tokens that I missed that we were quibbling over the word “next”. Alright, they can predict tokens out of order, too. Very impressive, I guess.

      • froztbyte · 8 months ago

        “first plans ahead”

        predict: to declare or tell in advance; prophesy; foretell

        ahead: strongest matches are advanced; along; before; earlier; forward

        stop prompting LLMs and go read some books, it’ll do you a world of good
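
For context on what the thread is arguing about: an autoregressive language model emits one probability distribution over its vocabulary per step, appends a single chosen token, and repeats; whatever internal “planning” features the interpretability paper describes, the decoding interface is still next-token prediction. Below is a minimal sketch of that greedy decoding loop. The toy vocabulary and the logits_fn stand-in for a trained model are hypothetical placeholders for illustration, not any particular model’s API.

    import numpy as np

    # Toy vocabulary; a real model would have tens of thousands of entries.
    VOCAB = ["<eos>", "the", "cat", "sat", "on", "mat"]

    def logits_fn(token_ids):
        """Hypothetical stand-in for a trained model's forward pass:
        returns one unnormalized score per vocabulary entry."""
        rng = np.random.default_rng(seed=len(token_ids))  # deterministic toy scores
        return rng.normal(size=len(VOCAB))

    def softmax(x):
        z = x - x.max()
        e = np.exp(z)
        return e / e.sum()

    def generate(prompt_ids, max_new_tokens=8):
        ids = list(prompt_ids)
        for _ in range(max_new_tokens):
            probs = softmax(logits_fn(ids))  # distribution over the next token only
            next_id = int(np.argmax(probs))  # greedy: take the most probable token
            ids.append(next_id)
            if VOCAB[next_id] == "<eos>":     # stop once the end-of-sequence token appears
                break
        return ids

    print([VOCAB[i] for i in generate([1, 2])])  # start from "the cat"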