• 2 Posts
  • 15 Comments
Joined 6 days ago
Cake day: December 1, 2025


  • It depends on the methodology. If you’re trying to do a direct port, you’re probably approaching it wrong.

    What matters most to the business is the data: your business objects and business logic are what make the business money.

    If you focus on those parts and port one portion at a time, you can substantially lower your tech debt and improve the developer experience by generating greenfield code that you can verify and that follows your organization’s modern best practices.

    One of the main reasons many users complain about the quality of code edited by agents comes down to the current naive tooling: most tools use sloppy find/replace techniques built on regex and basic file utilities. As AI tooling improves, we are seeing agents given more IDE-like tools with intimate knowledge of your codebase, using things like code indexing and ASTs. Look into Serena, for example.
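
    To see why regex-based editing is fragile, here’s a toy Python sketch (the function names are made up); AST-aware tools like Serena operate on parsed code rather than raw text:

```python
import ast
import re

source = """
def load_user(user_id):
    return db.get(user_id)

def load_users():
    return db.all()
"""

# Naive find/replace: renaming load_user with a plain regex
# also corrupts the unrelated load_users function.
renamed = re.sub(r"load_user", "fetch_user", source)
# renamed now contains "fetch_users" -- collateral damage.

# AST-aware tooling parses the code into a tree and sees real
# definitions, so it can distinguish load_user from load_users.
defs = [n.name for n in ast.walk(ast.parse(source))
        if isinstance(n, ast.FunctionDef)]
print(defs)  # ['load_user', 'load_users']
```

    A real agent tool would go further (symbol references, scopes, renames that respect boundaries), but the difference between text matching and structural understanding is the core of it.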

  • While it’s possible to see gains on complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens, and frustration.

    I see a lot of people saying, “I tried it and it didn’t work,” but have they read the guides or just jumped right in?

    For example, if you haven’t read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.
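
    For instance, Claude Code can read project-scoped MCP servers from a `.mcp.json` file in the repo root; here’s a hedged sketch (the filesystem server is just one example):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```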

    Your CLAUDE.md might be trash, and maybe you’re using @file references wrong, blowing tokens or biasing your context.

    LLM context windows can only scale so far before you start seeing diminishing returns, especially if the model or tooling is compacting the context.

    1. Plan first, using planning modes to help you, and decompose the plan into small, verifiable steps
    2. Have the model keep track of important context externally (like in markdown files with checkboxes) so it can recover when the context gets fucked up
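
    A minimal sketch of such an external plan file (the task names are invented):

```markdown
# Plan: port billing module
- [x] Inventory current endpoints and their consumers
- [x] Port invoice business object with unit tests
- [ ] Port payment logic; validate against recorded fixtures
- [ ] Delete legacy module once CI is green
```

    Because the checkboxes live on disk, a fresh session can reread the file and pick up exactly where the last one left off.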

    https://www.promptingguide.ai/

    https://www.anthropic.com/engineering/claude-code-best-practices

    There are community guides that take this even further, but these are some starting references I found very valuable.

  • Your anecdote is not helpful without seeing the inputs, prompts, and outputs. What you’re describing sounds like not using the right model, not providing good context, or not giving tools to a reasoning model that can intelligently populate context for you.

    My own anecdotes:

    In two years we have gone from copy/pasting 50–100-line patches out of ChatGPT to having agent-enabled IDEs help me greenfield full-stack projects, or maintain existing ones.

    Our product delivery has accelerated while maintaining the same quality standards, verified against our internal best practices, which we’ve codified as deterministic checks in CI pipelines.
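
    As an illustration, that kind of deterministic gate might look like this in a GitHub Actions workflow (the specific tools are assumptions, not our actual pipeline):

```yaml
name: quality-gates
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ruff check .   # style rules, codified
      - run: mypy src/      # interface drift caught mechanically
      - run: pytest         # behavior verified, not vibes
```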

    The power comes from planning correctly. We’re in the realm of context engineering now: learning to leverage the right models with the right tools in the right workflow.

    Most novice users have the misconception that you can tell it to “bake a cake” and get the cake you had in mind. The reality is that baking a cake can be broken down into a recipe with steps that can be validated. You, as the human-in-the-loop, can guide it to bake your vision, or design your agent in such a way that it can infer more information about the cake you desire.

    I don’t place a power drill on the table and say “build a shelf,” expecting it to happen, but marketing of AI has people believing they can.

    Instead, you give an intern a power drill with a step-by-step plan, all the components, and on-the-job training available on demand.

    If you’re already good at the SDLC, you are rewarded. Some programmers aren’t good at project management and will find this transition difficult.

    You won’t lose your job to AI, but you will lose your job to a human using AI correctly. This isn’t speculation, either: we’re already seeing workforce reductions supplemented by senior developers leveraging AI.