Have you ever asked an AI model to rewrite a complex Java class, only to have it spit out a half-finished mess filled with `// TODO: Implement logic here` or similar comments?
It's a common frustration. We expect a "Senior Developer" experience, but we often get a "Lazy Intern" output. However, I recently discovered a specific prompt strategy that completely changed the game. By moving away from "single-shot" requests and adopting an iterative, structured approach, I successfully migrated a legacy API server class to a robust, vulnerability-free library without a single hallucination.
Here is the breakdown of why this strategy works and how you can use it to master large-scale code rewrites.
The Strategy: A Step-by-Step Masterclass
Instead of throwing the entire file at the AI model and saying “fix this,” I broke the process into a conversation. This “Chain-of-Thought” (CoT) and “Incremental” approach kept the AI focused and accurate.
1. The Persona & The Roadmap
I started by setting a high bar. I didn't just ask for code; I asked the AI to act as a Senior Java Developer and Software Engineer. Then I laid out the roadmap: we would choose a library first, define the structure second, and write the logic last.
2. The Library Vetting Process
Crucially, I instructed the AI model not to generate any code until we agreed on the library. This forced the model to think critically about security and robustness. It provided a list of candidates with detailed pros and cons, allowing me to make an informed architectural decision before a single line was written.
3. Defining the Skeleton First
Once the library was chosen, I provided only the class structure:
- Member variables
- Method signatures and their JavaDoc header comments.
This gave the AI a clear “blueprint” of the API’s contract without overwhelming its context window with logic. I even gave it the freedom to suggest necessary private members to support the new library’s architecture.
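As an illustration, a "blueprint" handed to the AI at this stage might look like the sketch below. The class and method names here are invented for this example (the original article does not show its actual code); the point is that signatures, members, and JavaDoc are locked in while the bodies are deliberately left unimplemented:

```java
/**
 * Handles outbound API requests for the application.
 * Skeleton only: the contract is fixed here; method bodies are
 * generated one at a time in a later step.
 */
public class ApiRequestHandler {

    /** Base URL shared by all requests (suggested private member). */
    private final String baseUrl;

    public ApiRequestHandler(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    /**
     * Fetches the resource at the given path and returns the raw body.
     *
     * @param path resource path relative to the base URL
     * @return the response body as a string
     */
    public String fetch(String path) {
        // Intentionally unimplemented: filled in during step 4.
        throw new UnsupportedOperationException("Generated in a later step");
    }

    /** Builds the absolute URL for a resource path. */
    String buildUrl(String path) {
        return baseUrl + path;
    }
}
```

Because the skeleton compiles, you can drop it straight into the project and let the type checker guard the contract while the logic is filled in.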
4. The “One-by-One” Generation
The secret sauce was the final step: method-level generation. I provided the original source code for one method at a time. This ensured the application logic was preserved and prevented the AI from getting "lazy" or skipping complex parts.
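To make the step concrete, here is a hypothetical single-method translation of the kind this step produces. It assumes a migration from legacy `HttpURLConnection` code to the JDK's built-in `java.net.http.HttpClient` (JDK 11+); this is an illustrative stand-in, not the library the original project actually chose:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FetchExample {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // The legacy body you would paste into the prompt looked roughly like:
    //   HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    //   conn.setRequestMethod("GET");
    //   ... manual stream reading, error handling, and cleanup ...

    /** Builds the request; separated out so it can be inspected without network access. */
    static HttpRequest buildRequest(String url) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    /** The rewritten method: same contract as the legacy version, new library. */
    public static String fetch(String url) throws Exception {
        HttpResponse<String> response =
                CLIENT.send(buildRequest(url), HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 400) {
            throw new IllegalStateException("HTTP " + response.statusCode());
        }
        return response.body();
    }
}
```

One method, one prompt, one review: the narrow scope makes it easy to diff the new body against the legacy logic before moving on.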
Why Does This Strategy Work?
If you've struggled with AI "hallucinations" or incomplete code, it's usually due to context window saturation. Here's the technical reality:
- Attention Scarcity: When an LLM generates a long class in one go, its "attention" to the middle sections often drops. This leads to those dreaded `TODO` comments.
- Reduced Hallucination: By providing the original logic for only one method at a time, you keep the "scope" narrow. The AI model doesn't have to guess what the rest of the class does; it only needs to translate one specific logic block into the new library's syntax.
- Logical Grounding: Defining the structure first acts as a “constrainer.” It prevents the AI from inventing new, incompatible ways to handle data, ensuring the new code fits perfectly into your existing project.
Pro-Tips for Your Next Code Rewrite
To replicate this success, keep these prompt engineering principles in mind:
| Principle | Action |
| --- | --- |
| Separation of Concerns | Discuss architecture/libraries before asking for code. |
| Blueprint First | Define the class signatures and members to lock in the "contract." |
| Iterative Logic | Feed original logic in small chunks (method by method). |
| Negative Constraints | Use "Do not generate code until…" to prevent premature output. |
Conclusion
AI is a powerful tool, but it requires a pilot. By using an iterative strategy, you transform the AI from a simple code generator into a genuine thought partner. My project worked perfectly on the first test, not because the AI was magic, but because the prompt strategy provided the necessary guardrails for excellence.