Over the past few years, the evolution of AI-driven tools like GitHub’s Copilot and other large language models (LLMs) has promised to revolutionise programming. By leveraging deep learning, these tools can generate code, suggest solutions, and even troubleshoot issues in real time, saving developers hours of work. While these tools have obvious benefits in terms of productivity, there’s a growing concern that they may also have unintended consequences for the quality and skill set of programmers.

  • @Dunstabzugshaubitze@feddit.org
    82 points · 7 months ago

    I’ve seen enough programmers blindly copy-pasting code from stackoverflow and other forums without thinking, never understanding the thing they just “wrote”, to know that tools like copilot won’t make programmers worse; they will just allow more people to be bad programmers.

    people need to read more code, play around with it, break it and fix it to become better programmers.

    • Spzi
      3 points · 7 months ago

      Hehe, good point.

      people need to read more code, play around with it, break it and fix it to become better programmers.

      I think AI bots can help with that. It’s easier now to play around with code which you could not write by yourself, and quickly explore different approaches. And while you might shy away from asking your colleagues a noob question, ChatGPT will happily elaborate.

      In the end, it’s just one more tool in the box. We need to learn when and how to use it wisely.

  • @BatmanAoD@programming.dev
    39 points · 7 months ago

    I was hoping this might start with some actual evidence that programmers are in fact getting worse. Nope, just a single sentence mentioning “growing concern”, followed by paragraphs and paragraphs of pontification.

    • @pixeltree@lemmy.blahaj.zone
      4 points · 7 months ago (edited)

      I don’t think it’s making devs worse; however, I do think it’s significantly lowering the barrier to entry, to the point where people who don’t have enough knowledge to actually do the job well are becoming increasingly common. Theoretically they should get weeded out by a good interview process, but corporate be corporate.

      Not that my opinion is worth anything, it’s not like I have anything to back it up.

      Please disregard any takes I may have

      • @BatmanAoD@programming.dev
        2 points · 7 months ago

        I mean, at least you acknowledge that you’re presenting an opinion. This blog post just tries to gloss over the fact that it’s pure speculation.

      • @BatmanAoD@programming.dev
        2 points · 7 months ago

        It’s probably not “provable” one way or the other, but I’d like to see more empirical studies in general within the software industry, and this seems like a fruitful subject for that.

  • dinckel
    26 points · 7 months ago (edited)

    Anything that allows people to blindly and effortlessly get results inherently makes them more stupid. Your brain is like any muscle: you need to use it repeatedly for it to work well.

    • Scratch
      22 points · 7 months ago

      I’ll bet people said the same thing when Intellisense started suggesting line completions.

      And when errors were highlighted in the code rather than console output.

      And when high-level languages started appearing.

      • dinckel
        18 points · 7 months ago

        This really isn’t a good comparison at all. One gives you a list of choices you can make, and the other gives you a blind answer.

        If seeing what argument types the function takes makes me a worse engineer, so be it, I guess.
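
        To make the distinction concrete, here is a rough sketch (the function, its types, and the behaviour described in the comments are illustrative assumptions, not taken from the article or either tool's documentation): a parameter-hint tool surfaces an existing signature the programmer still has to act on, whereas an LLM assistant typically proposes an entire body as a single completion.

```python
from datetime import date


def days_until(deadline: date, today: date | None = None) -> int:
    """Whole days from `today` until `deadline` (negative if already past)."""
    # A signature-hint tool (Intellisense and friends) shows the line above
    # at the call site: a menu of facts, not a finished answer.
    if today is None:
        today = date.today()
    return (deadline - today).days


# An LLM assistant would typically propose the entire function above as a
# single suggestion, body included: the "blind answer" being described.
print(days_until(date(2025, 12, 31)))
```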

      • @MajorHavoc@programming.dev
        15 points · 7 months ago

        I’ll bet people said the same thing when Intellisense started suggesting line completions.

        They did.

        And when errors were highlighted in the code rather than console output.

        Yep.

        And when high-level languages started appearing.

        And yes.

        That said, if you believed my mentors, we were barrelling towards a 2025 in which nothing running on software ever really worked reliably.

        So they may have been grumpy, but they were also right, on that point.

      • @u_tamtam@programming.dev
        10 points · 7 months ago

        I’ll bet people said the same thing when Intellisense started suggesting line completions.

        I’m sure many did, but I’m also pretty sure it’s easy to draw a line between code assistance and LLM-infused code generation.

    • @lysdexic@programming.dev
      2 points · 7 months ago

      Claude is laughably hypersensitive and self-censoring to certain words independently of context (…)

      That’s not a problem, nor is it Claude’s main problem.

      Claude’s main problem is that it is frequently down, unreliable, and extremely buggy. Overall I think it might be better than ChatGPT and Copilot, but it’s simply so unstable it becomes unusable.

    • @groucho@lemmy.sdf.org
      19 points · 7 months ago

      A thing that hallucinates uncompilable code but somehow convinces your boss it’s a necessary tool.

      • KeriKitty (They(/It))
        6 points · 7 months ago

        I’ll never forget attending CS courses with a guy who got violently angry at having to write code. I assume he’s either thrilled with Copilot or in prison for attacking somebody over its failure to reliably write all of his code for him.

        • @Kuinox@lemmy.world
          3 points · 7 months ago

          Of course, I don’t understand why people think it’s “unnecessary”.
          Do they never do exploratory work and do things they are uncomfortable with?
          It’s a tool; if I’m in a codebase I know well, it’s often pretty useless.
          But I started writing some Python, and as a Python noob, Copilot is a gigantic productivity booster.
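
          A purely illustrative sketch of that use case (the task, column names, and file name are hypothetical, not from the comment): the kind of routine Python a newcomer would otherwise look up piece by piece, but which a completion tool will usually produce in one pass.

```python
import csv
from pathlib import Path


def total_by_category(path: Path) -> dict[str, float]:
    """Sum a CSV's 'amount' column, grouped by its 'category' column."""
    totals: dict[str, float] = {}
    with path.open(newline="") as f:
        for row in csv.DictReader(f):
            category = row["category"]
            totals[category] = totals.get(category, 0.0) + float(row["amount"])
    return totals


if __name__ == "__main__":
    # "expenses.csv" is a made-up file with 'category' and 'amount' columns.
    print(total_by_category(Path("expenses.csv")))
```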

  • @Phegan@lemmy.world
    7 points · 7 months ago

    As someone who thinks we are in an AI bubble about to burst, this article has “old man angry at the kids using new technology” vibes.

    • @lysdexic@programming.dev
      1 point · 7 months ago (edited)

      I agree. Those who make bold claims like “AI is making programmers worse” neither have any first-hand experience with AI tools nor any contact with how programmers are using them in their day-to-day business.

      Let’s think about this for a second: one feature of GitHub Copilot is the /explain command, which puts together a synthesized description of what a codebase does. Please, someone tell me how a programmer gets worse at their job by having a tool that helps them understand any codebase anywhere.
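
      As a rough sketch of that workflow (the snippet below is invented here; any actual explanation would come from the tool itself): in an editor with Copilot Chat, one might select a terse, unfamiliar block like the following and ask /explain what it does before touching it.

```python
from collections import Counter


def top_words(text: str, n: int = 3) -> list[tuple[str, int]]:
    # Lower-case each token, strip common punctuation from its edges,
    # then return the n most frequent remaining words with their counts.
    words = (w.strip(".,!?;:").lower() for w in text.split())
    return Counter(w for w in words if w).most_common(n)


print(top_words("To be, or not to be, that is the question."))
```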

      • @Big_Boss_77@lemmynsfw.com
        1 point · 7 months ago

        I honestly wonder if they’re not trying to imply that, unless you dig up the info yourself, you’re not better for it… some real boomer shit.