In today’s episode, Yud tries to predict the future of computer science.

    • @blakestaceyMA · 11 months ago

      Men will literally use an LLM instead of ~~going to therapy~~ writing documentation

      • @200fifty · 11 months ago

        I mean they’ll use an LLM instead of going to therapy too…

    • @selfMA · 11 months ago

      fucking hell. I’m almost certainly gonna see this trash at work and not know how to react to it, because the AI fuckers definitely want any criticism of their favorite tech to be a career-limiting move (and they’ll employ any and all underhanded tactics to make sure it is, just like at the height of crypto), but I really don’t want this nonsense anywhere near my working environment

      • Sailor Sega Saturn · 11 months ago

        I’ve seen a few LLM generated C++ code changes at my work. Which is horrifying.

        • One was complete nonsense on its face and never should have been sent out. The reviewer was basically like “what is this shit”, only polite.
        • One was subtly wrong, it looked like that one probably got committed… I didn’t say anything because not my circus.

        No one’s sent me any AI-generated code yet, but if and when it happens I’ll add whoever sent it to me as one of the code reviewers if it looks like they haven’t read it :) (probably the pettiest trolling I can get away with in a corporation)

        • @blakestaceyMA · 11 months ago

          I’m pretty sure that my response in that situation would get me fired. I mean, I’d start with “how many trees did you burn and how many Kenyans did you call the N-word in order to implement this linked list” and go from there.

      • @froztbyte · 11 months ago

        Eternal September: It’s Coming From Inside The House Edition

        I hear you on the issues of the coworkers though… already seen that overrun in a few spaces, and I don’t really have a good response to it either. just stfu’ing also doesn’t really work well, because then that shit just boils internally

      • @zogwarg · 11 months ago

        Possible countermeasure: Insist on “crediting” the LLM as the commit author, to regain sanity when doing git blame.
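
        For what it’s worth, git already supports this directly: the commit author can be set independently of the committer, so `git blame` and `git log` would report the model rather than a human. A minimal sketch (the author name/email here are made up for illustration):

        ```shell
        # Sketch of the "credit the LLM as commit author" countermeasure.
        set -e
        repo=$(mktemp -d)
        cd "$repo"
        git init -q
        # Commit with the human as committer but the LLM as author:
        git -c user.name="Reviewer" -c user.email="reviewer@example.invalid" \
            commit -q --allow-empty --author="ChatGPT <llm@example.invalid>" \
            -m "LLM-generated change"
        # git log / git blame now report the model as the author:
        git log -1 --format='%an %ae'   # prints: ChatGPT llm@example.invalid
        ```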

        I agree that worse docs are a bad enough future, though I remain optimistic that including an LLM in the compile step is never going to be mainstream enough (or anything approaching stable enough, beyond some dumb useless smoke and mirrors) for me to have to deal with THAT.

        • @froztbyte · 11 months ago

          This also fails as a viable path because of version shift (who knows which model version and which LLM deployment version the thing was at, etc etc), but this isn’t the place for that discussion I think

          This did however give me the enticing idea that a viable attack vector may be dropping “produced by ChatGPT” taglines into things, as malicious compliance anywhere it might cause a process stall