Very cool. I love nothing more than security critical software written by a statistical text generator.
Just quoting this from the linked post:
“I’m a KeePassXC maintainer. The Copilot PRs are a test drive to speed up the development process. For now, it’s just a playground and most of the PRs are simple fixes for existing issues with very limited reach. None of the PRs are merged without being reviewed, tested, and, if necessary, amended by a human developer. This is how it is now and how it will continue to be should we choose to go on with this. We prefer to be transparent about the use of AI, so we chose to go the PR route. We could have also done it locally and nobody would ever know. That’s probably how most projects work these days. We might publish a blog article soon with some more details.”
First I’ve seen of this, so I appreciate the post, OP. It’s four months old too, so I have no idea what, if anything, has changed since the quoted post.
Yes the first post sorta goes against the expectation I got from the title.
The same way Fedora is slop now? For fuck’s sake…
Edit: no, Fedora is not slop, the same way KeePassXC isn’t slop. Slop is what you get by letting an LLM make something unchecked. KeePassXC is still reviewing every PR.
Yeah, this is really getting exhausting. There’s plenty of real shit to be mad about without getting mad over a really petty nothing like this. Also, the thing is free and open source. The entitlement with this shit is wild sometimes.
AI is used poorly for a great many things but just blanket shitting on every use of it is just as obnoxious.
I think any use of “AI” is itself pretty horrendous and should be ostracised and never supported. It’s fascist technology.
Correct
Wrong
That sounds a little extreme but this is “Fuck AI” so I guess that’s expected.
And yet, you cannot refute my statement. Get your genocide tech today! It can:

- destroy the environment
- enrich the already super rich
- enable and industrialize genocide
- fuck over the poor by firing tons of workers
- entrench marginalisation by propagating fascist world views and pushing for exclusionary thinking
Just because you didn’t think it through and are unable to think it through doesn’t mean my thinking is far-fetched, just that you lack the info, time, or energy to arrive at the same conclusion.
I think I’m giving them a pass as well. It’s been months since then and everything is still okay. From what I can see it looks like some experiments, with quite a good chunk of manual intervention, review, and then changing things around and force-pushing a corrected (probably human-written) version. I wonder if it even saved them time. Maybe they reconsidered their approach since; the last of those PRs is from the end of August. At least they seem to be transparent and pay a good amount of attention to what Copilot does.
I think vibe-coding and AI-assisted programming is a bit weird anyway. My own experience is mostly negative. I’ve experimented with it nonetheless. Idk, lots of programmers are clever but also curious people. They’ll try things and figure it out eventually. And it looks to me like they might be roughly on the right track here.

And I’ll agree, it doesn’t really matter whether they’re reviewing pull requests from a 14-year-old, a Russian hacker in disguise, or an AI. It’s always the same process with pull requests: you never know who’s at the other end or what their motivations are. It’s highly problematic if people bury developers in AI slop, but if they choose it themselves, they’re mostly equipped to deal with it. At least in theory, and if they’re good at their job.
Yeah not sure what the point is if it’s not saving any time anyway
Some individual motivation… Curiosity. Fascination with new tech. Or the prospect of maybe saving time and then evaluating whether that’s the case. Idk, I’ve tried it as well and it doesn’t seem to save me time, but that’s one of the big promises of AI. I think we all know how AI delivers on its promises overall. But learning and experimenting (with some due diligence) is rarely amongst the problematic aspects of something, and it kind of has to come first or you can’t learn the truth.
I use LLMs for coding too. They’re pretty great at generating the code I could have written myself. But that’s the important part: I completely understand the code. As long as we’re transparent and a good developer combs through it, I don’t see why not.
We might be in the wrong community here to discuss a positive attitude towards AI coding… But anyway… Do you like it? I think I’m more and more coming to the conclusion that I don’t really fancy it. It’s somewhat fulfilling to code something. But my experience with AI is that I’ll spend 90 minutes arguing with it and giving it countless shots at the one problem, and then I end up reading all the code, refactoring it and rewriting snippets, and it’s super tedious. I’m annoyed because I like computers for doing exactly what I tell them to do, and now I have to argue with the darn thing about the specifications, or how memory allocation or maths works a certain way, or whether we can pull in random libraries for a simple task…

So I’m a bit split on this. At first it was very exciting and fascinating, but I think for coding that kind of got old quickly. At least for me and the stuff I do. These days I’ll use it for quick tech demos, templates, placeholders, to google the documentation, translate Chinese and the like, but I’ve cut down on the actual coding, mostly because it takes the fun out of it and turns it upside down into reviewing and correcting code.
Not OP, but I’ve had great success letting it repeat stuff we already have. For example, we have a certain pattern for how we place translations, so I just hardcode everything and in the end tell it, using a pre-written task I can just call up, to take all the hardcoded labels and place them in our system the same way it has already been done. It then reads the code of a few existing components and replicates that. Or I let it extract some code into smaller components, or move some component around; it can do that better than the IDE’s integrated move action. Completely novel stuff is possible, but I’m uncertain whether I’m actually not slower using it to achieve that. I mostly do it step by step, really small steps that is.
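To make that concrete, here’s a rough sketch of the kind of mechanical rewrite I hand off. The `messages` catalog and the `t()` helper are made-up stand-ins for whatever i18n setup a project already has, not our actual code:

```typescript
// Purely illustrative sketch, not the real codebase: a tiny "message
// catalog" pattern like the one described above.

// Every user-facing string lives in one catalog (a messages.ts equivalent).
const messages = {
  saveChanges: "Save changes",
  discardChanges: "Discard changes",
} as const;

type MessageKey = keyof typeof messages;

// Stand-in for whatever lookup helper the project already has.
function t(key: MessageKey): string {
  return messages[key];
}

// Step 1: while prototyping, the label is just hardcoded in place.
function saveButtonLabelHardcoded(): string {
  return "Save changes";
}

// Step 2: the pre-written task asks the LLM to rewrite each hardcoded label
// so it goes through the catalog, copying how existing components do it.
function saveButtonLabel(): string {
  return t("saveChanges");
}

console.log(saveButtonLabelHardcoded(), "->", saveButtonLabel());
```

The point is that the “after” shape already exists elsewhere in the codebase, so the model only has to imitate it rather than invent anything.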
I have to measure my performance at some point; it’s certainly possible that I’m actually slower than before. But overall I never liked typing out the solution that’s in my head, so using it as a writer is nice.
Sonnet 4.5 is what I use. Some colleagues like GPT-5, but in my experience it struggles really hard to get the most basic things right. Claude is just miles ahead.
To the extent I’ve grown more comfortable, it’s by accepting that the AI is usually wrong and not bothering unless the task is obvious and short. I won’t “argue” with it; I just discard the output and do it myself. I’ll also click “review my code” and give it a chance to highlight mistakes. Again, it’s frequently wrong, but once it did catch an inconsistency that I know would have been frustrating when it eventually reared its head.
The thing I’m thinking of turning off is code completion with tab. The problem is that the lag means I don’t know whether the tab key is going to do a normal thing, or whether by the time I hit it an AI suggestion has popped up and I have to undo the unexpected modification. Also, sometimes the suggestions linger and make the actual code hard to read long after I’ve already decided to ignore them.
Yesterday was a fair amount of tab-completing through excessive boilerplate crap thanks to AI, but most days it’s next to useless since I’m in low-boilerplate scenarios. Some frameworks and languages make you type a novel to do something very common, and AI helps with those. I tend to avoid them, but I didn’t have a choice yesterday. Even then the AI made some very bad suggestions, so I have to be on the lookout at all times.
Yeah, I sometimes find myself in the same loop of “this thing just doesn’t understand what I’m asking for.” I’ve had luck with breaking things down into smaller steps, and being specific about the requirements helps. I use Claude Sonnet 4.5, which is pretty decent; the OpenAI models really don’t compare and are at best pretty bad at coding.
Thanks. Yeah, I haven’t tried Claude. They want my phone number to sign up and I’m not providing that to people. But you’re not the only person suggesting Claude Sonnet; I’ve read that several times now. I wonder if it’s really that much better. I’ll try some more throwaway phone numbers to get in, but it seems they’ve blocked most of them.
I’ve tried breaking things down as well. That’s usually a good strategy in programming, though I guess at some point the pieces are small enough that I could have already typed them in myself instead of talking about doing it. And I find it often also struggles to find the right balance with the level of detail of a function, and whether it’s clever to do one very specific thing or keep it a bit more general so the function can be reused in other parts of the code. So it’ll be extra work later to revise it, once everything is supposed to come together and integrate.
I have used LLMs for coding at work and it’s been really annoying. The technology just burns tokens to end up back at square one.
I’m not a particularly skilled or professional developer, so I’ll defer to those who are on determining its usefulness for professional-grade projects. However, I tried it a time or two for some of my small personal projects and had to proofread it and correct mistakes, so I stopped fooling with it. If I’ve got to babysit it and fix its mistakes anyway, then what’s even the point? I guess for large blocks of code it might be handy to not have to type it all out, but then you’ve still got that much more to proofread and correct anyway. So, my ethical concerns aside, it really didn’t help me any more than just searching Stack Overflow to remind myself how to do something.
There is great irony in this post, considering this sensationalism was called out in a response to the maintainer:
…show hostility due to some random article with sensational title like ‘KeePassXC uses vibe coded contributions now without the users knowing’ which I know is not true. A blog article by KeePassXC would greatly avoid such situation.
Fuck…
Any worthwhile forks or alternatives?
I’m using the original KeePass for the PC.
For my phone, Keepass2Android.
Ouch. For something this sensitive, I don’t trust code reviews to catch vulnerabilities. They probably won’t happen overnight, but I don’t want to risk falling victim to the gradual laziness that comes with backseat programming over time.
Time to jump ship.
Using LLMs to do base-level coding with human oversight is fine imo.
I hate AI’s societal consequences and its profit motive. I hate the data collection, I hate the surveillance tech it’s used for. I hate how it shits on artists. I hate that people use LLMs to substitute for human connection, and I hate the (current) environmental impact.
But I’m not a luddite. Cat’s out of the bag anyway. We can’t stop it. Same as people couldn’t stop machinery taking over simple work.
We can stop it. It’s not profitable or good.
Identification of cancer, development of vaccines, analysis of weather patterns, and the ability to insta-translate speech and basically create real-life subtitles are just small examples of the vast use cases where AI was genuinely successful and a good use of the technology, and there’s so much more.
We’ve gotten to the point where a 50W specialized all-in-one computing package can handle the workload of a local LLM and outpace my gaming rig, which runs DeepSeek 8B. 50W is as much as a light bulb in the ’90s. So the environmental impact goes down over time as technology progresses.
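For anyone curious what “running it locally” actually involves, here’s a minimal sketch assuming an Ollama-style local server on its default port; the model tag is only an example, substitute whatever is installed:

```typescript
// Minimal sketch of querying a locally hosted 8B model through an
// Ollama-style HTTP API (assumed to be listening on localhost:11434).
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1:8b", // example tag, an assumption about the setup
      prompt,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Local model returned HTTP ${res.status}`);
  const data = (await res.json()) as { response: string };
  return data.response;
}

askLocalModel("Summarize what a password manager does in one sentence.")
  .then(console.log)
  .catch(console.error);
```

No cloud, no account, no data leaving the machine; the whole thing is one HTTP call to hardware sitting on your desk.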
The issue lies with the corpos, not the technology.
Sure, collectively we could stop it. But those who oppose it are not numerous enough.
The first few are not LLMs.