

If there’s one thing that coding LLMs do “well”, it’s expose the need for code-generation frameworks. All of the enterprise applications I have worked on in modernity were, by volume, mostly boilerplate and glue. If a statistically significant portion of a code base is boilerplate and glue, then the magical statistical machine will mirror that.
LLMs may simulate filling this need in some cases, but of course they’re spitting out statistically mid code.
Unfortunately, committing engineering effort to write code that generates code in a reliable fashion doesn’t really capture the imagination of money; otherwise we would be doing that instead of feeding GPUs shit and waiting for digital God to spring forth.

DI frameworks are tricky beasts. Either they sacrifice flexibility for simplicity (I’ve seen this done in Go and in Scala, where the DI essentially generates basic instantiation and leaves more advanced resolution to the app developer), or they get really complex but do some handy things (.NET 4.x DI frameworks like Castle Windsor provided some neat lifecycle management tools but were internally very complex).
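To make the “generates basic instantiation” end of that spectrum concrete, here’s a rough sketch of the kind of output a simple code-gen DI produces, in the spirit of tools like google/wire. All of the type and function names are made up for illustration; the point is that the generated wiring is just topologically ordered constructor calls, and anything fancier is on the app developer.

    package main

    import "fmt"

    type Config struct{ DSN string }
    type Store struct{ cfg *Config }
    type Service struct{ store *Store }

    func NewConfig() *Config          { return &Config{DSN: "localhost"} }
    func NewStore(c *Config) *Store   { return &Store{cfg: c} }
    func NewService(s *Store) *Service { return &Service{store: s} }

    // InitializeService is the shape of function a simple DI generator emits:
    // plain constructor calls in dependency order, nothing else. Lifecycles,
    // scopes, and conditional bindings are left to the application.
    func InitializeService() *Service {
        cfg := NewConfig()
        store := NewStore(cfg)
        return NewService(store)
    }

    func main() {
        svc := InitializeService()
        fmt.Printf("%+v\n", svc)
    }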
Cycle detection gets a little hairier the more complex a dependency (or class of dependencies) gets. The process itself doesn’t change, but the internal representation of the graph needs to be abstract enough to surface a cycle for all possible resolution scenarios.
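A minimal sketch of what I mean, not any framework’s actual code: the detection itself is a plain depth-first search, and the interesting decision is what the graph is keyed by. Here the key is an opaque string standing in for whatever abstraction covers a resolution request (type plus qualifier plus scope, etc.); if the key is just a concrete type, cycles that only appear under a particular resolution path slip through.

    package main

    import "fmt"

    // key is a placeholder for an abstract resolution key.
    type key string

    // detectCycle runs DFS with three colours: white = unvisited,
    // grey = on the current resolution path, black = fully resolved.
    // Reaching a grey node means the current path loops back on itself.
    func detectCycle(graph map[key][]key) []key {
        const (
            white = iota
            grey
            black
        )
        state := map[key]int{}
        var path []key
        var cycle []key

        var visit func(k key) bool
        visit = func(k key) bool {
            state[k] = grey
            path = append(path, k)
            for _, dep := range graph[k] {
                switch state[dep] {
                case grey:
                    // Back edge found; slice out just the loop.
                    for i, p := range path {
                        if p == dep {
                            cycle = append(append([]key{}, path[i:]...), dep)
                            break
                        }
                    }
                    return true
                case white:
                    if visit(dep) {
                        return true
                    }
                }
            }
            state[k] = black
            path = path[:len(path)-1]
            return false
        }

        for k := range graph {
            if state[k] == white && visit(k) {
                return cycle
            }
        }
        return nil
    }

    func main() {
        graph := map[key][]key{
            "ServiceA": {"ServiceB"},
            "ServiceB": {"ServiceC"},
            "ServiceC": {"ServiceA"}, // cycle back to A
        }
        fmt.Println(detectCycle(graph))
    }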
Based on the commit to fix the particular bug, it looks like the change addresses a specific scenario but will probably fail to address similar issues.
All this to say: “the problem isn’t too hard to think about, but the solution isn’t straightforward”, also “this is a fine short-term fix, but the longer-term fix would involve redefining the internal representation of a dependency graph”, and finally “an LLM-provided solution is, in the most generous light, a band-aid.”