“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional but not quite there, leaving developers to go in and track down where the problems are, resulting in a net slowdown of development rather than a productivity gain.

  • Jesus@lemmy.world · 35 points · 2 months ago

    Might be there someday, but right now it’s basically a substitute for me googling some shit.

    If I let it go ham and code everything, it mutates into insanity in a very short period of time.

    • degen@midwest.social · 30 points · 2 months ago

      I’m honestly doubting it will ever get there, at least with the current use of LLMs. There just isn’t true comprehension in them, no space for consideration along any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.

      • Glitchvid@lemmy.world · 1 point · 2 months ago

        I think we’ve tapped most of the mileage we can get from the current science. The AI bros conveniently forget there have been multiple AI winters; I suspect we’ll see at least one more before “AGI” (if we ever get there).

      • Jason2357@lemmy.ca · 1 point · 2 months ago

        They are statistical prediction machines. The more they output, the more of their “context window” (their statistical prior) is made up of the very output they generated. It’s a fundamental property of the current LLM design that the snake will eventually eat enough of its tail to puke garbage code.
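
        A rough sketch of that feedback loop (a toy autoregressive loop in Python; model_step is a hypothetical stand-in for sampling one token from an LLM, not any real API):

          import random

          # Toy autoregressive loop: each generated token is appended to the
          # context and fed back in as the prior for the next step.
          def model_step(context):
              # Hypothetical stand-in for an LLM sampling one token conditioned
              # on `context`; here it just emits a placeholder so the loop is visible.
              return f"tok{random.randint(0, 9)}"

          CONTEXT_LIMIT = 8            # pretend context window, in tokens
          context = ["user_prompt"]    # human-written input starts as the whole prior

          for _ in range(20):
              context.append(model_step(context))  # model output joins the prior
              context = context[-CONTEXT_LIMIT:]   # oldest tokens fall out of the window

          # After enough steps the original prompt is gone and the "prior" is
          # entirely the model's own earlier output.
          print(context)

        Once the window fills, the human-written prompt falls out entirely and every new token is conditioned purely on prior model output.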