• @selfMA
    34 hours ago

    This is obviously insane; the correct conclusion is that language models cannot, in fact, be trained so hard that they will always get the next token correct. This is provable, and it’s not even hard to prove. It’s intuitively obvious, and a sturdy argument backing the intuition is easy to build.

    You do, however, have to approach it through analogies, through toy models. When you insist on thinking about the whole thing at once, you wind up essentially just saying things that feel right, things that are appealing. You can’t actually reason about the damned thing at all.

    This goes a long way toward explaining why computer pseudoscience — like a fundamental ignorance of algorithmic efficiency and of the implications of the halting problem — is so common and even celebrated among lesswrongers and other TESCREALs who should theoretically know better.

      • @zbyte64
        22 hours ago

        They’re basically fanboys of whatever cult has most recently come out of Silicon Valley.

  • @swlabr
    120 minutes ago

    Such a good post. LWers are either incapable of critical thought and self-scrutiny, or are unwilling and think verbal diarrhea is the better choice.