• tabular@lemmy.world · 241 points · 3 months ago (edited)

    Before hitting submit I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

    Do they think the AI-written code Just Works™? Do they feel so detached from that code that they feel no embarrassment when it’s shit? It’s like calling yourself a fiction writer and putting “written by (your name)” on the cover of a book you didn’t write, and which is nonsense anyway.

    • kadu@scribe.disroot.org · 182 points · 3 months ago

      I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

      AI bros have zero self-awareness and shame, which is why I keep insisting that the best tool for fighting it is making it socially shameful.

      Somebody comes along saying “Oh look at the image is just genera…” and you cut them with “looks like absolute garbage right? Yeah, I know, AI always sucks, imagine seriously enjoying that hahah, so anyway, what were you saying?”

    • Feyd@programming.dev · 114 points · 3 months ago

      LLM code generation is the ultimate Dunning-Kruger enhancer. They think they’re 10x ninja wizards because they can generate unmaintainable demos.

        • NotMyOldRedditName@lemmy.world · 29 points · 3 months ago

          Sigh, now in CSI when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn’t exist, or it’ll use the face of someone from the training set and they’ll go after the wrong person.

          Either way I have a feeling there’ll be some ENHANCE failure episode due to AI.

    • atomicbocks@sh.itjust.works · 80 points · 3 months ago

      From what I have seen Anthropic, OpenAI, etc. seem to be running bots that are going around and submitting updates to open source repos with little to no human input.

      • Notso@feddit.org · 56 points · 3 months ago

        You guys, it’s almost as if AI companies are trying to kill FOSS projects intentionally by burying them in garbage code. Sounds like they took a page from Steve Bannon’s playbook: flooding the zone with slop.

        • SkaveRat@discuss.tchncs.de · 11 points · 3 months ago

          that’s the annoying part.

          LLM code can range from “doesn’t even compile” to “it actually works as requested”.

          The problem is, depending on what exactly was done, the model will move mountains to actually get it running as requested. And it will absolutely trash anything in its way, from “let’s abstract this with 5 new layers” to “I’m going to refactor that whole class of objects to get this simple method in there”.

          The requested feature might actually work. 100%.

          It’s just very possible that it either broke other stuff, or made the codebase less maintainable.

          That’s why it’s important that people actually know the codebase and know what they/the model are doing. Just going “works for me, glhf” is not a good way to keep a maintainable codebase.

          • turboSnail@piefed.europe.pub · 9 points · 3 months ago

            LOL. So true.
            On top of that, an LLM can also take you on a wild goose chase. When it gives you trash, you tell it to find a way to fix it. It introduces new layers of complication and installs new libraries without ever really approaching a solution. It’s up to the programmer to notice a wild goose chase like that and pull the plug early on.

            That’s a fun little mini-game that comes with vibe coding.

        • Björn@swg-empire.de · 5 points · 3 months ago

          Reminds me of one job where, shortly after I started, my boss asked if their entry test was too hard. They had gotten several submissions from candidates that wouldn’t even run.

          I envision these types of people are now vibe coding.

    • JustEnoughDucks@feddit.nl · 7 points · 3 months ago

      I would think they’ll have to combat AI code with an AI-code recognizer tool that auto-flags a PR or issue as AI. Then they can simply run through and close them: if the contributor doesn’t come back to explain the code and show test results proving it works, it gets auto-closed after a week or so.
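A sketch of the triage flow described above. The detector and all names here are hypothetical, just to make the bookkeeping concrete; a real AI-code recognizer is the genuinely hard part.

```python
# Hypothetical triage flow: flag a PR a detector suspects is AI-generated,
# then auto-close it if the author never follows up within a grace period.
# `looks_ai_generated` is a toy stand-in, not a real detector.
from datetime import datetime, timedelta
from typing import Optional

GRACE_PERIOD = timedelta(days=7)

def looks_ai_generated(pr_body: str) -> bool:
    # Toy stand-in for an AI-code recognizer.
    return "as an ai language model" in pr_body.lower()

def should_auto_close(flagged_at: datetime,
                      last_author_reply: Optional[datetime],
                      now: datetime) -> bool:
    """Close only if the author hasn't responded since the flag."""
    if last_author_reply is not None and last_author_reply > flagged_at:
        return False  # contributor came back to explain; keep it open
    return now - flagged_at >= GRACE_PERIOD
```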

  • lmr0x61@lemmy.ml · 171 points · 3 months ago

    Damn, Godot too? I know Curl had to discontinue their bug bounties over the absolute tidal volume of AI slop reports… Open source was never perfect, but whatever cracks there were are being blown a mile wide by these goddamn slop factories.

    • fuck_u_spez_in_particular@lemmy.world · 23 points · 3 months ago

      Unfortunately it’s a general theme in open source. I’ve lost almost all motivation for programming in my free time because of all these AI-slop PRs. It’s kinda sad how that art (among others) is flooded with slop…

    • luciferofastora@feddit.org · 19 points · 3 months ago

      Open source was never perfect, but whatever cracks there were are being blown a mile wide by these goddamn slop factories.

      This is the perpetual issue, not just with AI: Any system will have flaws and weaknesses, but often, they can generally be papered over with some good will and patience…

      Until selfish, immoral assholes come and ruin it for everyone.

      From teenagers using the playground to smoke and burying their cigs in the sand (so now parents with small children can’t use it any more), to companies exploiting legal loopholes, to AI slop drowning volunteers in obnoxious bullshit: most individual people might be decent, but a single turd is all it takes to ruin the punch bowl.

    • ZILtoid1991@lemmy.world · 11 points · 3 months ago

      Then get ready for people just making slop libraries, not because they’re dissatisfied with existing solutions (as I was when I made iota, a direct media layer similar to SDL but with better access to some low-level functionality, OOP-ish, and in a memory-safe lang), but just because they can.

      I got a link to a popular rectpacking algorithm pretty quickly after asking in a Discord server. Nowadays I’d be asked to “vibecode it”.

      • Jankatarch@lemmy.world · 7 points · 3 months ago

        Can confirm the last part. I’m in uni, and if anyone ever asks a question in the class group chats, the first 5-6 answers will be “ask ChatGPT.”

  • e8d79@discuss.tchncs.de · 102 points · 3 months ago

    I think moving off of GitHub to their own forge would be a good first step to reduce this spam.

  • BitsAndBites@lemmy.world · 94 points · 3 months ago

    It’s everywhere. I was just trying to find some information on starting seeds for the garden this year and I was met with AI article after AI article just making shit up. One even had a “picture” of someone planting some seeds and their hand was merged into the ceramic flower pot.

    The AI fire hose is destroying the internet.

    • maplesaga@lemmy.world · 23 points · 3 months ago

      I fear the day they learn a different layout. Right now they’re usually obvious, but soon I won’t be able to tell slop from intelligence.

      • badgermurphy@lemmy.world · 16 points · 2 months ago

        One could argue that if the AI response is not distinguishable from a human one at all, then they are equivalent and it doesn’t matter.

        That said, the current LLM designs have no ability to do that, and so far all efforts to improve them beyond where they are today have made them worse at it. So I don’t think any tweaking or fiddling with the model will ever get it toward what you’re describing, except possibly by producing a different, but equally cookie-cutter, way of responding that looks unlike the old output but is much like all the other new output. It will still be obvious and predictable soon after we learn its new tells.

        The reason they can’t make it better anymore is that they’re trying to do so by giving it ever more information to consume, in the misguided notion that once it has enough data it will be smarter overall. That’s not true, because it has no way to distinguish good data from garbage, and they’ve already read and consumed the whole Internet.

        Now, when they try to consume more new data, a ton of it was actually already generated by an LLM, maybe even the same one, so contains no new data, but still takes more CPU to read and process. That redundant data also reinforces what it thinks it knows, counting its own repetition of a piece of information as another corroboration that the data is accurate. It thinks conjecture might be a fact because it saw a lot of “people” say the same thing. It could have been one crackpot talking nonsense that was then repeated as gospel on Reddit by 400 LLM bots. 401 people said the same thing; it MUST be true!

        • Urist@lemmy.ml · 8 points · 2 months ago

          I think the point is rather that it is distinguishable for someone knowledgeable on the subject, but not for someone who is not, thus making it harder to evolve from the latter into the former.

      • jj4211@lemmy.world · 1 point · 2 months ago

        You will be able to tell slop from intelligence.

        However, you won’t be able to tell AI slop from human slop. We’ve had human slop around forever, and it was already overwhelming, but nothing compared to the volume of LLM slop.

        In fact, reading AI slop text reminds me a lot of human slop I’ve seen, whether it’s ‘high school’ style paper writing or clickbait word padding of an article.

  • Hemingways_Shotgun@lemmy.ca · 85 points · 3 months ago

    This was honestly my biggest fear for a lot of FOSS applications.

    Not necessarily in a malicious way (although there’s certainly that happening as well). I think there’s a lot of users who want to contribute, but don’t know how to code, and suddenly think…hey…this is great! I can help out now!

    Well meaning slop is still slop.

  • MystikIncarnate@lemmy.ca · 68 points · 3 months ago

    Look. I have no problems if you want to use AI to make shit code for your own bullshit. Have at it.

    Don’t submit that shit to open Source projects.

    You want to use it? Use it for your own shit. The rest of us didn’t ask for this. I’m really hoping the AI bubble bursts in a big way very soon. Microsoft is going to need a bailout, OpenAI is fucking doomed, and X/Twitter/Grok could go either way honestly.

    Who in their right fucking mind looks at the costs of running an AI datacenter, and at the fact that it’s more economically feasible to buy a fucking nuclear power plant to run it all, and then says, yeah, this is reasonable?

    The C-whatever-Os are all taking crazy pills.

      • Routhinator@lemmy.ca · 39 points · 3 months ago

        No, but they are actively not promoting or encouraging it. Github and MS are. If you’re going to keep staying on the pro-AI site, you’re going to eat the consequences of that. Github is actively encouraging these submissions with profile badges and other obnoxious crap. It’s not an appropriate env for development anymore. It’s gamified AI crap.

      • woelkchen@lemmy.world · 30 points · 3 months ago

        No (just like Lemmy isn’t immune to AI comments), but Github is actively working towards AI slop.

      • Cryxtalix@programming.dev · 2 points · 2 months ago (edited)

        If you want a programming job, you want a good-looking CV, and thanks to Github’s popularity and fancy profile system, contributing to prominent open source projects there looks real good on one.

        So Github is a magnet for lazy vibe coders spamming their shit everywhere to pad their CVs. On other git hosts, without such a fancy profile system, there’s less of an incentive to do so; the slop-to-good-code ratio should be lower and more manageable.

    • setsubyou@lemmy.world · 33 points · 3 months ago (edited)

      I think the open source slop situation is also in part people who just want a feature and genuinely think they’re helping. People who can’t do the task themselves also can’t tell that the LLM can’t do it either.

      But a lot of them are probably just padding their GitHub accounts too. Any given popular project has tons of forks by people who just want lots of repositories on their profile but never actually make changes, because they can’t. I used to maintain my employer’s projects on GitHub, and we’d literally have something like 3000 forks, 2990 of which were unchanged forks by people with lots of repositories but no actual work. Now these people are using LLMs to also make changes…

  • xkbx@startrek.website · 40 points · 3 months ago

    Couldn’t you just set up actual AI/LLM verification questions, like “how many r’s in strawberry?”

    Or even just have an AI / manual contribution divide. It wouldn’t stop everything 100%, but it might make the clean-up process easier.
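For what it’s worth, the check itself is trivial to script, which is the whole joke: any tool (or careful human) gets it right instantly, while token-based LLMs were long notorious for stumbling on it.

```python
# Counting letters is trivial for code but was a notorious stumbling
# block for token-based LLMs, hence its use as an "are you a bot" probe.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```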

    • CameronDev@programming.dev · 93 points · 3 months ago

      Those kinds of challenges only work for a short while. ChatGPT has already solved the strawberry one.

      That said, I wish these AI people would just create their own projects and contribute to them. Create a LLM fork of the engine, and go nuts. If your AI is actually good, you’ll end up with a better engine and become the dominant fork.

      • warm@kbin.earth · 51 points · 3 months ago

        They don’t want to do it in a corner where nobody can see, they want to push it on existing projects and attempt to justify it.

          • mcv@lemmy.zip · 11 points · 3 months ago (edited)

            Use open source maintainers as free volunteers to check whether your AI coding experiment works.

      • new_guy@lemmy.world · 26 points · 3 months ago

        There’s a joke in science circles that goes something like this:

        “Do you know what they call alternative medicine that works? Just regular medicine.”

        Good code made by an LLM should be indistinguishable from code made by a human… It would simply be “just code”.

        It’s hard to create a project the size of Godot’s without a human in the loop somewhere filtering the slop and trying to keep a cohesive code base. At that point they’d either be overwhelmed again or the code would become unmaintainable.

        And then we would go full circle and get to the same point described by the article.

        • CameronDev@programming.dev · 22 points · 3 months ago

          They can fork Godot and let their LLMs go at it. They don’t have to use the Godot human maintainers as free slop filters.

          But of course, if they did that, their LLMs would have to stand on their own merits.

        • sp3ctr4l@lemmy.dbzer0.com · 7 points · 3 months ago

          At the risk of drawing the ire of people…

          … I have a local LLM that I run primarily as a coding assistant, mostly for GDScript.

          I’ve never like, submitted anything as a potential commit to Godot proper.

          But dear lord, the amount of shenanigans I have had to figure out just to get an LLM to even understand GDScript’s syntax and methods properly is… substantial.

          They tend to just default back to using things that work in Python or JS, but… do not work or exist in GDScript.

          Like, one recurring quirk is they keep trying to use the C-style `cond ? a : b` ternary instead of GDScript’s `a if cond else b` conditional expression.

          That, or they constantly fuck up custom sorting: they’ll either get the syntax wrong, or just hallucinate various set/array methods and properties that don’t exist in GDScript.

          And it’s a genuine struggle to get them to comprehend more than roughly 750 lines of code at a time without confusing themselves.

          It is possible to use an LLM to be like, hey, look at this code, help me refactor it to be more modular, or, standardize this kind of logic into a helper function… but you basically have to browbeat them with a custom prompt that tells them to stop doing all these dumb, basic things.

          Even if you tell them in conversation “hey, you did this wrong, here’s how it actually works”, it doesn’t matter; keep that conversation going and they will forget it and repeat the mistake… you have to keep it constantly present in the prompt.

          The amount of babysitting and constantly telling an LLM the number of errors it is making is quite substantial.

          It can be a thing that makes some sense to do in some situations, but it is extremely, extremely far away from ‘Make a game for me in Godot’, or even like ‘Make a third person camera script’.

          You have to break things down into much, much more conceptually smaller chunks.

      • XLE@piefed.social · 11 points · 3 months ago

        People who submit AI-generated code tend to crumble, or sound incomprehensible, in the face of the simplest questions. Thank goodness this works for code reviews… because if you look at AI CEO interviews, journalists can’t detect the BS.

        • sp3ctr4l@lemmy.dbzer0.com · 7 points · 3 months ago

          LLMs are magic at everything that you don’t understand at all, and they’re horrifically incompetent at anything you do actually understand pretty well.

      • Pamasich@kbin.earth · 7 points · 3 months ago

        I mean, ChatGPT can do it. I just tested it. And if you run your own AI, you can probably remove most such rules anyway.

    • turboSnail@piefed.europe.pub · 6 points · 3 months ago

      How about asking it to write a short political speech on climate change. Then, just count the number of rhetoric devices and em-dashes. A human dev wouldn’t be bothered to write anything fancy or impactful when they just want to submit a bug fix. It would be simple, poorly written, and filled with typos. LLMs try to make it way too impressive and impactful.
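As a toy illustration of that heuristic (the phrase list, the scoring, and the threshold idea are all made up here, not a real detector):

```python
# Toy "slop score": count em-dashes plus stock rhetorical flourishes.
# Purely illustrative; a real detector would need far more than this.
RHETORIC = ("not just", "it's about", "game-changer",
            "stands as a testament", "in today's rapidly evolving")

def slop_score(text: str) -> int:
    lower = text.lower()
    return text.count("\u2014") + sum(lower.count(p) for p in RHETORIC)

plain = "fixed the off-by-one in the loop, tests pass now"
flowery = ("This is not just a bug fix\u2014it's about resilience\u2014"
           "a true game-changer.")
print(slop_score(plain), slop_score(flowery))  # 0 5
```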

        • turboSnail@piefed.europe.pub · 2 points · 3 months ago

          No need to add any more than you usually do. Just leave the ones you are unable to see. Besides, LLMs tend to write in overly grand style, whereas humans can’t be bothered to use every trick in the book. Humans just get to the point and skip all the high-impact language that LLMs seem to love.

          • boonhet@sopuli.xyz · 2 points · 2 months ago (edited)

            I usually proofread any messages that aren’t for my close friends or family lol

      • sp3ctr4l@lemmy.dbzer0.com · 5 points · 3 months ago (edited)

        The funnier thing is when you try to get an LLM to do like, a report on its creators.

        You can keep feeding them articles detailing the BS their company is up to, and it will usually just keep reverting to the company line, despite a preponderance of evidence that said company line is horseshit.

        Like, uh, try to get an LLM to give you an exact number for how much the conversation you’re having with it will increase RAM prices over a 3 month period.

        What do you think about ~95% of companies implementing ‘AI’ into their business processes reporting a 0 to negative boost to productivity?

        What are the net economic damages of this malinvestment?

        Give it a bunch of economic data, reports, etc.

        Results are usually what I would describe as ‘comical’.

        • turboSnail@piefed.europe.pub · 2 points · 3 months ago

          “Don’t Bite The Hand That Feeds You”. LLMs seem to have internalized this rule pretty well. I can imagine that this idea can also be taken much further. Basically like trying to search “Tiananmen Square massacre” on the wrong side of the Great Firewall of China.

          Well, what if LLMs were instructed to not talk about “sensitive topics” like that? After all, more and more people are already using an LLM as a search engine replacement, so it’s only natural that Microsoft and OpenAI might receive some interesting letters about implementing very specific limitations.

    • SkunkWorkz@lemmy.world · 5 points · 3 months ago

      Yeah, but that won’t stop people from manually submitting PRs made with AI. A lot of the slop isn’t automated pull requests, but people using AI to find and fix “bugs” without understanding the code at all.

  • order216@lemmy.world · 38 points · 2 months ago

    Why do people try to contribute even if they don’t work on the code? AI slop isn’t helping at all.

  • Luden@lemmings.world · 35 points · 3 months ago

    I am a game developer and a web developer and I use AI sometimes just to make it write template code for me so that I can make the boilerplate faster. For the rest of the code, AI is soooo dumb it’s basically impossible to make something that works!

    • Pyr@lemmy.ca · 14 points · 3 months ago

      Yes, I feel like many people misunderstand AI capabilities.

      They think it somehow comes up with the best solution, when really it’s more like lightning: it takes the path of least resistance. It finds whatever works fastest, if it can find anything at all without making something up and then lying that it works.

      It by no means creates elegant and efficient solutions to anything.

      AI is just a tool. You still need to know what you’re doing to tell whether its solution is worth anything, and then you still need to be able to adjust and tweak it.

      It’s most useful for giving you an idea of how to do something, by suggesting a method or solution you may not have known about or wouldn’t have considered. It’s also useful for testing your own stuff or making slight adjustments.

      • AnUnusualRelic@lemmy.world · 8 points · 3 months ago (edited)

        It finds whatever works the fastest

        For a very lax definition of “works”…

        Kind of agree with the rest of your points. Remember, though, that the suggestions it gives you for things you’re not familiar with may well be terrible ones that are frowned upon. So it’s always best to triple-check what it outputs, and only use it for broad suggestions.

      • ILikeBoobies@lemmy.ca · 3 points · 3 months ago

        Works in this case doesn’t mean the output works but that it passes the input parameter rules.

    • AmbitiousProcess (they/them)@piefed.social · 3 points · 2 months ago

      Unfortunately Anubis wouldn’t stop the bots, it would just slow them down.

      Anubis just adds proof of work, AKA computation, to your requests. It’s why your browser takes a second before it can access the site. It’s nothing for things on your scale, but it’s a fuck ton of time and money for large scraping operations accessing millions of links every day.

      For a bot submitting PRs though, it’s not gonna be a meaningful hindrance unless the person is specifically running a bot designed to make thousands of PRs every day, which a lot of these aren’t.

      Really unfortunate.
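For the curious, the proof-of-work idea is simple to sketch. This is a hashcash-style toy in the same spirit; Anubis’s actual challenge format differs, this only shows the shape of the cost asymmetry.

```python
# Hashcash-style proof of work: the client must find a nonce whose hash
# clears a difficulty target before the server talks to it. Cheap for
# one request, expensive for scrapers hitting millions of links a day.
import hashlib

def pow_hash(challenge: str, nonce: int) -> int:
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big")

def solve(challenge: str, difficulty_bits: int) -> int:
    """Brute-force a nonce; expected work doubles per difficulty bit."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while pow_hash(challenge, nonce) >= target:
        nonce += 1
    return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    # Verification is a single hash, so it's nearly free for the server.
    return pow_hash(challenge, nonce) < (1 << (256 - difficulty_bits))

nonce = solve("example-challenge", 12)  # ~4096 hashes on average
print(verify("example-challenge", nonce, 12))  # True
```

The asymmetry is the point: the server checks one hash, while the client must compute thousands, and the server can raise `difficulty_bits` for suspicious traffic.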

      • Randelung@lemmy.world · 2 points · 3 months ago

        I’m ignorant 😅 I don’t use either. I guess it doesn’t really defend against browser-remote-controlling bot agents.

        • pkjqpg1h@lemmy.zip · 3 points · 3 months ago

          browser-remote-controlling bot agents

          If you mean users handing control of their browser to a bot, then no, it doesn’t, because it’s still a legit user’s browser window.

          But most bots don’t use a legit browser window (it would be impossible to scale).

          • Randelung@lemmy.world · 2 points · 3 months ago

            I was thinking that using selenium or similar would allow the bot to circumvent any block that works in a browser. Since it’s probably not doing a million PRs at once, doing that would be viable. It could even use the cookie from the selenium session to then use the api directly.

            Kinda like flaresolver does for prowlarr/jackett.

            In which case Anubis is only a temporary measure until the vibe coders wise up.

            • pkjqpg1h@lemmy.zip · 2 points · 2 months ago

              Defense systems also improve. Anubis can make the Proof-of-Work (PoW) more difficult or add new checks. This competition is won by whoever can keep their costs lower. When spammers have to use more resources for each pull request while normal users do not pay an extra cost, the defenders win.

  • bluGill@fedia.io · 26 points · 3 months ago

    I’ve been writing a lot of code with AI. For every half hour the AI needs to write the code, I need a full week to revise it into good code. If you don’t do that hard work, the AI is going to overwhelm the reviewers with garbage.

      • bluGill@fedia.io · 19 points · 3 months ago

        I’m writing code because it is often faster than explaining to the AI how to do it. I’m spending this month seeing what AI can do; it ranges from saving me a lot of tedious effort to making a large mess to clean up.

        • LedgeDrop@lemmy.zip · 9 points · 3 months ago

          I’ve had better success, when using AI agents in repeated, but small and narrow doses.

          It’s been kinda helpful in brainstorming interfaces (and I always have to append at the end of every statement “… in the most maintainable way possible.”)

          It’s been really helpful in writing unit tests (I follow Test Driven Development), and sometimes it picks up edge cases I would have overlooked.

          I wouldn’t blindly trust any of it, as all too often it’s happy to just disregard any sort of error handling (unless explicitly mentioned, after the fact). It’s basically like being paired up with an over-eager, under-qualified junior developer.

          But, yeah, you’re gonna have a bad time if you prompt it to “write me a Unix operating system in web assembly”.

        • Thorry@feddit.org · 8 points · 3 months ago

          I totally get it. I’ve been critical about using AI for code purposes at work and have pleaded to stop using it (management is forcing it, less experienced folk want it). So I’ve been given a challenge by one of the proponents to use a very specific tool. This one should be one of the best AI slop generators out there.

          So I spent a lot of time thoroughly writing specs for a task, in a way the tool should be able to handle. It failed miserably and didn’t even produce a usable result. So I asked the dude who challenged me to help me refine the specs, tweak the tool, make everything perfect. The thing still failed hard.

          I was told this was because I was forcing the tool into decisions it couldn’t handle, and that I should give it more freedom. So we did that; it made up the rules itself and subsequently didn’t follow those rules. Another failure. So we split the task into smaller pieces, and it still couldn’t handle it. So we split it up even further, to a ridiculous level, at which point it would definitely be faster to just write the code manually. It’s also no longer realistic, as we pretty much have the end result all worked out and are just coaching the tool to get there. And even then it makes mistakes, has to be corrected all the time, and doesn’t follow the specs, the code guidelines, or best practices.

          Another really annoying thing is that it keeps changing code it shouldn’t touch; since we’ve made the steps so small, it keeps messing up work it did previously. And the comments it creates are crazy: either just about every line has a comment attached and every function gets a whole story, or there are zero comments. As soon as you say to limit the comments to where they’re useful, it just deletes all of them, even the ones it put in before or that we put in manually.

          I’m ready to give up on the thing and have the use of AI tools for coding limited if not outright stopped entirely. But I’ll know how that discussion will go: Oh you used tool A? No, you should be using tool B, it’s much better. Maybe the tools aren’t there now, but they are getting better all the time, so we’ll benefit any day now.

          When I hear even experienced devs being enthusiastic about AI tools, I really feel like I’m going crazy. They suck a lot and aren’t actually useful (on top of the thousand other issues with AI), so why do people like them? And why have we bet the entire economy on them?

          • mcv@lemmy.zip
            link
            fedilink
            English
            arrow-up
            7
            ·
            3 months ago

            I’ve started using it as an interactive rubber duck. When I’ve got a problem, I explain it to the AI, and then ignore its response, because by the time I’ve finished explaining it, I’ve figured it out myself.

            AI has been very helpful for finding my way around Azure deploy problems, though. And other complex configuration issues (I was missing a certificate to use az login). I fixed problems I probably couldn’t have solved without it.

            But I’ve lost a lot of time trying to get it to solve complex coding problems. It makes a heroic effort trying to combine aspects of known patterns and algorithms into something resembling a solution, and it can “reason” about how it should work, but it doesn’t really understand what it’s doing.

            • addie@feddit.uk
              link
              fedilink
              English
              arrow-up
              3
              ·
              3 months ago

              Which is strange, because Azure’s documentation is complete dogshit.

              We were trying to solve something at work (sending SMTP messages with OAuth authentication, not rocket science), and Azure’s own chatbot kept making up non-existent server commands, REST endpoints that don’t exist, and phantom permissions that needed to be added to the account.

              Seriously, fuck Azure and fuck Copilot. It made a task that should have taken hours take weeks.
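              For anyone stuck on the same thing: the mechanism the chatbot kept hallucinating around is SASL XOAUTH2, which Microsoft documents for smtp.office365.com. A rough sketch, assuming you’ve already obtained an OAuth access token with the SMTP.Send scope (token acquisition is elided, the addresses are placeholders, and error handling is minimal):

```python
import base64
import smtplib
from email.message import EmailMessage


def build_xoauth2(user: str, access_token: str) -> str:
    """Build the SASL XOAUTH2 initial response:
    base64 of 'user=<user>\\x01auth=Bearer <token>\\x01\\x01'."""
    raw = f"user={user}\x01auth=Bearer {access_token}\x01\x01"
    return base64.b64encode(raw.encode()).decode()


def send_via_m365(user: str, access_token: str, msg: EmailMessage) -> None:
    # Sketch only: assumes the token was obtained beforehand (e.g. via MSAL)
    # and that SMTP AUTH is enabled for the mailbox.
    with smtplib.SMTP("smtp.office365.com", 587) as smtp:
        smtp.starttls()
        code, _ = smtp.docmd("AUTH", "XOAUTH2 " + build_xoauth2(user, access_token))
        if code != 235:  # 235 = authentication succeeded
            raise RuntimeError(f"XOAUTH2 auth failed with code {code}")
        smtp.send_message(msg)
```

              No made-up server commands needed: it’s one AUTH verb with a base64 blob, which is exactly the kind of thing the docs bury.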

            • Ænima@lemmy.zip
              link
              fedilink
              English
              arrow-up
              2
              ·
              edit-2
              3 months ago

              after explaining it, I figured it out myself.

              I use colleagues or people on Discord for this. I get the solution immediately after asking, AND the people who saw or heard me ask now think I’m an idiot. It’s my neurodivergent kink!

        • Joe@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          5
          ·
          3 months ago

          You will need more than a month to figure out what it’s good for and what it isn’t, and to learn how to use it effectively as a tool.

          If I can properly state a problem, outline the approach I want, and break it down into testable stages, it can be an accelerator. If not, it’s often slop.

          The most valuable time is spent up front on design and planning, and on learning how to express them. Next up is the ability to quickly make judgement calls and to backtrack without getting bogged down.

      • bluGill@fedia.io
        link
        fedilink
        arrow-up
        4
        ·
        3 months ago

        That is the question I’m trying to answer. Until I know what AI can do, I can’t have a valid opinion.

        • leftzero@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          15
          ·
          edit-2
          3 months ago

          We know what “AI” can do.

          • Create one of the largest and most dangerous economic bubbles in history.
          • Be a massive contributor to the climate catastrophe.
          • Consume unfathomable amounts of resources like water, destroying the communities that need them.
          • Make personal computing unaffordable. (And eventually any form of offline computing; if it’s up to these bastards we’ll end up back with only mainframes and dumb terminals, with them controlling the mainframes.)
          • Promote mass surveillance and the constant erosion of privacy.
          • Replace search engines, making it impossible to find trustworthy information on the Internet.
          • Destroy the open web by drowning it in useless slop.
          • Destroy open source by overwhelming the maintainers with unusable slop.
          • Destroy the livelihood of artists and programmers using their own stolen works as training data, without providing a usable replacement for the works they would have produced.
          • Infect any code they touch with such an amount of untraceable bugs that it becomes unusable and dangerous (see Windows updates since they replaced their programmers with Copilot, for instance).
          • Support the parasitic billionaire class and increase the wealth divide even more.
          • Make you look like a monstrous moronic asshole for supporting all that shit.

          Maybe being able to save you five minutes of coding in exchange for several hours of debugging (either by you or by whoever gets burdened with your horrible slop) is not worth being an active contributor to all that monstrous harm to humanity and the world.

    • Seefra 1@lemmy.zip
      link
      fedilink
      English
      arrow-up
      13
      ·
      3 months ago

      Not sure why you’re getting downvotes; AI is a good tool when used properly.

        • SCmSTR@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          2
          ·
          2 months ago

          I mean, yes, but that’s also a bit nuclear. Machine learning has real, genuinely ethical and responsible uses… The problem is that society has yet to agree on what those are, and most business-first minded people have SUPER shitty, or even completely missing, moral compasses.

          So, effectively, what you say, yes. But technically, with much nuance and many caveats, not entirely.

          We are clearly not ready as a species to handle it. Maybe we’ll burn the shit out of our hands badly enough over the next century to learn. But either way, it’s DEFINITELY not an “ignore all risk and run blindly at this shiny new flame” thing, like a lot of people seem to think and treat it.

    • Peehole@piefed.social
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      3 months ago

      With proper prompting you can have it do a lot of annoying stuff, like refactors, reasonably well. With a very strict linter you can avoid the most stupid mistakes and shortcuts. If I work on a more complex PR, it can take me a couple of days to plan it correctly, and then the actual implementation of the correct plan takes no time at all.

      I think it works for small bug fixes on a maintainable codebase, and for writing plans and then implementing them. But I honestly don’t know if it’s any faster than just writing the code myself; it’s just different.
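      To make the “very strict linter” part concrete, here’s a hypothetical Ruff setup for a Python codebase. The rule selections are an illustrative starting point, not a recommendation; every team carves out different exceptions:

```toml
# Hypothetical pyproject.toml fragment: opt into everything,
# then explicitly exclude the rule families that are too noisy.
[tool.ruff.lint]
select = ["ALL"]     # every rule family Ruff ships
ignore = [
    "D",             # pydocstyle: docstrings-everywhere is overkill here
    "COM812",        # trailing-comma rule that fights the formatter
]

[tool.ruff.lint.per-file-ignores]
"tests/*" = ["S101"] # allow bare `assert` in tests
```

      The point of `select = ["ALL"]` with an explicit ignore list is that new rules are opted in by default, which is exactly the kind of backstop that catches an LLM’s lazy shortcuts.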

      • fuck_u_spez_in_particular@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 months ago

        reasonably well

        Hmm, not in my experience. If you don’t care about code quality you can quickly prototype slop and see if it generally works, but maintainable code? I always fall back to manual coding, and often my code is like 30% of the length of what the AI generates, more readable, more efficient, etc.

        If you constrain it a lot, it might work reasonably well, but then I often think that instead of writing a multi-paragraph prompt, just writing the code might have been more effective (long-term, that is).

        plan it correctly and the actual implementation of the correct plan will take no time at all.

        That’s why I don’t think AI really helps that much: you still have to think and understand (at least if you value your product/code), and that’s what takes the most time, not the typing.

        it’s just different.

        Yeah, and it makes you dumber, because you’re tempted not to think through the problem, and reviewing code is a less effective way of understanding what’s going on in it (IME, although being able to review quickly and effectively is an especially valuable skill nowadays).

        • Peehole@piefed.social
          link
          fedilink
          English
          arrow-up
          2
          ·
          3 months ago

          Eh, I don’t disagree with you. It’s just the reality for me that I’m now expected to work on much more stuff at the same time because of AI. It’s exhausting, but in my job I have no choice, so I try to make my peace with the situation.

          I’ve certainly lost a lot of understanding of the details of the codebase, but I do read every line of code these LLMs spit out and manually review all PRs for obvious bullshit. I also think code quality got worse, despite me doing everything I can to keep it decent.