• melfie@lemy.lol · 68 points · 2 months ago

    One major problem I have with Copilot is that it can’t seem to RTFM when building against an API, SDK, etc. Instead, it just makes shit up. If I have to go through the output line by line and fix everything, I might as well write it myself in the first place.

    • MinFapper@startrek.website · 5 points · 2 months ago

      It will if you explicitly ask it to. Otherwise it will either make stuff up or use some really outdated patterns.

      I usually start by asking Claude Code to search the internet for current best practices for whatever framework I’m using and summarize them. Then, if I ask it to build something with that framework while that summary is still in the context window, it’ll actually follow it.
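
      In practice it’s a two-step flow, something like this (hypothetical prompts, with a made-up framework and endpoint as placeholders, not a real transcript):

      1. “Search the web for current best practices for building REST endpoints in FastAPI, and summarize them.”
      2. “Using the summary above, add a paginated /users endpoint following those practices.”

      The trick is that step 2 runs while step 1’s summary is still in context, so the model grounds itself on what it just read instead of on stale training data.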

  • floofloof@lemmy.ca · 45 points · 2 months ago (edited)

    Yeah, the places to use it are (1) boilerplate code so predictable that a machine can write it, and (2) advice, taken with a big pinch of salt, when a web search didn’t give you what you need. In the second case, expect at best a half-right answer that’s enough to get you thinking. You can’t use it for anything sophisticated or critical. But you now have a bit more time to think that stuff through, because the LLM cranked out some of the more tedious code.

    • Corngood@lemmy.ml · 54 points · 2 months ago

      > (1) boilerplate code so predictable that a machine can write it

      The thing I hate most about it is that we should be putting effort into removing the need for boilerplate. Generating it with a non-deterministic third-party black box is insane.
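
      We’ve had deterministic ways to kill boilerplate forever. A minimal sketch using C’s X-macro idiom (just an illustration, not from any particular project):

      ```c
      /* Define the list once; everything below is derived from it
         deterministically -- change the list and it all stays in sync. */
      #define COLOR_LIST \
          X(RED)         \
          X(GREEN)       \
          X(BLUE)

      /* Expand into an enum... */
      #define X(name) COLOR_##name,
      enum color { COLOR_LIST COLOR_COUNT };
      #undef X

      /* ...and into matching name strings, with zero chance of drift. */
      #define X(name) #name,
      static const char *color_names[] = { COLOR_LIST };
      #undef X
      ```

      Same result as generated boilerplate, but it’s reproducible and lives in the build, not in a chat log.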

        • yes_this_time@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          2 months ago

          I would agree that interest will wane in domains where they aren’t aiding productivity.

          But LLMs for coding are productive right now in other domains and people aren’t going to want to give that up.

          Inference is already financially viable.

          Now, I think what could crush the SOTA models is if they get sued into bankruptcy for copyright violations. Which is a related but separate thread.

      • expr@programming.dev · 3 points · 2 months ago

        …regular coding, again. We’ve been doing this for decades now, and this LLM bullshit is wholly unnecessary and extremely detrimental.

        The AI bubble will pop. Shit will get even more expensive or disappear entirely as these companies go bust, because they are ludicrously unprofitable and the endless supply of speculative and circular investment will dry up, much like in the dotcom crash.

        It’s such an incredibly stupid thing to not only bet on, but to become dependent on to function. Absolute lunacy.

        • yes_this_time@lemmy.world · 1 point · 2 months ago

          I would bet on LLMs being around and continuing to be useful for some subset of coding in 10 years.

          I would not bet my retirement funds on current AI related companies.

          • expr@programming.dev · 2 points · 2 months ago

            They aren’t useful now, but even assuming they were, the fundamental issue is that it’s extremely expensive to train and run them, and there is no current inkling of a business model where they actually make sense financially. You would need to charge far more than people could actually afford to pay to make them anywhere near profitable. Every AI company is burning through cash at an insane rate. When the bubble pops and the money runs out, no one will want to train and host them for commercial purposes anymore.

            • yes_this_time@lemmy.world · 1 point · 1 month ago

              They may not be useful to you… but you can’t speak for everyone.

              You are incorrect about inference costs. But yes, training models is expensive, and the economics are concerning.

  • forrcaho@lemmy.world · 9 points · 2 months ago

    I recently asked ChatGPT to generate some boilerplate C code using libsndfile to write out a WAV file, with samples coming from a function I would fill in. The code it generated cast the double samples from the placeholder function down to floats so it could use sf_writef_float to write them to the file. Having coded with libsndfile over a decade ago, I knew that sf_writef_double existed and would write my calculated sample values with no loss of precision. It probably wouldn’t have made any audible difference in the finished result, but it was still obviously inferior code for no reason.
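
    For reference, the fix is essentially one call; a minimal sketch (the file name, format, and sine placeholder are mine, not the original code):

    ```c
    #include <math.h>
    #include <sndfile.h>

    #define RATE   48000
    #define FRAMES 48000

    int main(void)
    {
        SF_INFO info = {
            .samplerate = RATE,
            .channels   = 1,
            .format     = SF_FORMAT_WAV | SF_FORMAT_PCM_24,
        };
        SNDFILE *snd = sf_open("out.wav", SFM_WRITE, &info);
        if (!snd) return 1;

        /* Placeholder for the real sample function: a 440 Hz sine. */
        static double buf[FRAMES];
        for (int i = 0; i < FRAMES; i++)
            buf[i] = sin(2.0 * 3.14159265358979 * 440.0 * i / RATE);

        /* Doubles all the way down: no needless detour through float. */
        sf_writef_double(snd, buf, FRAMES);
        sf_close(snd);
        return 0;
    }
    ```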

    This is the kind of stupid shit LLMs do all the time. I’ve also realized, months later, that other LLM-generated code I used was doing something in a stupid way, though I can’t remember the details now.

    LLMs can get you started and generate boilerplate, but if you’re asking one to write code in a domain you’re not familiar with, you have to understand that, if the code even works, it’s highly likely doing something in a boneheaded way.