TL;DR: I spent a solid month “pair programming” with Claude Code, trying to suspend disbelief and adopt a this-will-be-productive mindset. More specifically, I got Claude to write well over 99% of the code produced during the month. I found the experience infuriating, unpleasant, and stressful before even worrying about its energy impact. Ideally, I would prefer not to do it again for at least a year or two. The only problem with that is that it “worked”. It’s hard to know exactly how well, but I (“we”) definitely produced far more than I would have been able to do unassisted, probably at higher quality, and with a fair number of pretty good tests (about 1500). Against my expectation going in, I have changed my mind. I now believe chat-oriented programming (“CHOP”) can work today, if your tolerance for pain is high enough.

  • Avicenna@programming.dev · 18 points · edited · 5 days ago

    Sounds like being a project manager for a team of one AI coder, which honestly sounds quite depressing. You don’t get to do the fun part (coding), and you don’t get to interact with intelligent human beings (possibly the only fun part of a managerial role). The only positive thing you get out of it is output (which may become unmaintainable for complex projects in the long run). Sounds like something only CEOs and people trying to get rich quickly would like.

  • tyler@programming.dev · 16 points · 5 days ago

    Sounds infuriating honestly. Being more productive at the cost of mental health isn’t something we should be aiming for as a species.

    • zqwzzle@lemmy.ca · 5 points · 5 days ago

      Didn’t they revive that checkeagle project with Claude? Keep the bar low I guess.

  • sacredfire@programming.dev · 5 points · 5 days ago

    My experience with LLMs for coding has been similar. You have to be extremely vigilant, because they can produce very good code but will also miss important things that cause disasters. It makes you very paranoid about their output, which is probably how you should approach it, and honestly how you should approach any code you write or get from somewhere else.

    I can’t bring myself to actually use them for generating code the way he does in this blog post, though. That seems infuriating. I find them useful as a way to query knowledge about topics I’m interested in, which I then cross-reference with documentation and other sources to make sure I understand.

    Sometimes you’re dealing with a particular issue or problem that is very hard to Google for or look up. LLMs are a good starting point for getting an understanding of it, even if that understanding may be flawed; I’ve found they usually point me in the right direction. Still, the environmental and ethical implications of using these tools also bother me. Is making the discovery phase for a topic a little bit easier worth the cost of these things?

  • termaxima@slrpnk.net · 4 points · 4 days ago

    I don’t care if human meat made for tastier, healthier, and faster hamburgers; I refuse to eat people for any reason whatsoever.

    If you don’t see how this relates to AI, maybe you are AI yourself.

  • abbadon420@sh.itjust.works · 8 points · 5 days ago

    Interesting read. Haven’t finished it yet (it’s late, I’m going to bed), but it’s a nice shift from the de facto negativity around LLMs on this platform.

  • ozymandias@lemmy.dbzer0.com · 3 points · 5 days ago

    I tried that recently with a pretty unique app. It gave a decent outline but made so many bugs it was worthless, and every library it included was outdated. I don’t want to imagine how many security flaws it creates.
    I think it’s decent if your project has been done a million times before; otherwise it sucks.