• selfA · 13 points · 1 year ago

    like fuck, all you or I want out of these wandering AI jackasses is something vaguely resembling a technical problem statement or the faintest outline of an algorithm. normal engineering shit.

    but nah, every time they just bullshit and say shit that doesn’t mean a damn thing as if we can’t tell, and when they get called out, every time it’s the “well you ¡haters! just don’t understand LLMs” line, as if we weren’t expecting a technical answer that just never came (cause all of them are only just cosplaying as technically skilled people and it fucking shows)

    • o7___o7 · 10 points · edited · 1 year ago

      It’s weird how these people want everyone to believe that they’re a new class of tech-priest but they also give off the vibe that they’d throw away their laptop if they accidentally deleted the Microsoft Edge icon.

    • V0ldek · 9 points · 1 year ago

      I was thinking about this after reading the P(Dumb) post.

      All normal ML applications have a notion of evaluation, e.g. the 2x2 table of {false,true}×{positive,negative}, or for clustering algorithms some metric of “goodness of fit”. If you have that, you can run an experiment with quantifiable results, and then you can do actual science.
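
      For concreteness, that 2x2 evaluation is about ten lines of code end to end; a minimal sketch, with made-up labels and predictions standing in for a real model’s output:

      ```python
      # Minimal sketch: the standard 2x2 (confusion matrix) evaluation for a
      # binary classifier. The labels/predictions are illustrative, not real data.

      def confusion_matrix(y_true, y_pred):
          tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
          fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
          fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
          tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
          return tp, fp, fn, tn

      y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth
      y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # model output

      tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
      precision = tp / (tp + fp)
      recall = tp / (tp + fn)
      print(f"precision={precision:.2f} recall={recall:.2f}")
      ```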

      I don’t even know what the equivalent for LLMs is. I don’t really have time to spare to dig through the papers, but like, how do they do this? What’s their experimental evaluation? I don’t see an easy way to classify LLM outputs into anything, really.

      The only way to do science is hypothesis->experiment->analysis. So how the fuck do the LLM people do this?
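
      From what I can tell, the closest thing the benchmark papers have is reducing everything to accuracy against a fixed answer key (exact match or multiple choice). A hedged sketch of that pattern, where `fake_model` and the two QA pairs are made-up stand-ins, not a real API:

      ```python
      # Hedged sketch of a typical benchmark-style LLM evaluation: score
      # free-form outputs by normalized exact match against a fixed answer key.
      # `fake_model` and the QA pairs are hypothetical stand-ins.

      def normalize(s):
          return " ".join(s.lower().strip().split())

      def exact_match_accuracy(model, qa_pairs):
          hits = sum(1 for q, a in qa_pairs if normalize(model(q)) == normalize(a))
          return hits / len(qa_pairs)

      def fake_model(q):
          # Toy stand-in for a real LLM call; illustration only.
          return "Paris" if "france" in q.lower() else "42"

      qa_pairs = [("What is the capital of France?", "Paris"),
                  ("What is 6 x 7?", "42")]

      print(exact_match_accuracy(fake_model, qa_pairs))  # 1.0 on this toy set
      ```

      And even that only works when there’s a single canonical answer, which most LLM output doesn’t have. So the question stands.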

      • o7___o7 · 8 points · edited · 1 year ago

        Right? “AI” is great if you want to sort a few million images of galaxies into their various morphological classifications and have it done before the end of the decade. A++, good job, no notes.

        You can’t grift off of that very easily, though.

      • selfA · 7 points · 1 year ago

        I’d really like to know too, especially given how many times we’ve already seen LLMs misused in scientific settings. it’s starting to feel like the LLM people don’t have that notion — but that’s crazy, right?