Stage 7 — Prompts for Code (Core Skill)
This is where prompt engineering becomes practical software engineering.
Stage topics
- Code generation with requirements
- Refactoring
- Bug finding
- Test generation
- Documentation from code
Code generation
A reliable coding prompt should include:
- language/version,
- architecture constraints,
- style conventions,
- acceptance criteria,
- output format (files, diff, change list).
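The checklist above can be sketched as a small prompt builder that refuses to emit an underspecified prompt. This is a hypothetical helper for illustration, not any real library's API:

```python
# Sketch of a prompt builder that enforces the checklist above.
# All field names and values here are illustrative assumptions.

def build_coding_prompt(language, architecture, style, acceptance, output_format, task):
    """Assemble a coding prompt; fail fast if any required field is missing."""
    fields = {
        "Language/version": language,
        "Architecture constraints": architecture,
        "Style conventions": style,
        "Acceptance criteria": acceptance,
        "Output format": output_format,
        "Task": task,
    }
    missing = [name for name, value in fields.items() if not value]
    if missing:
        raise ValueError(f"Underspecified prompt, missing: {missing}")
    return "\n".join(f"{name}: {value}" for name, value in fields.items())

prompt = build_coding_prompt(
    language="Python 3.12",
    architecture="service layer only, no direct DB access from handlers",
    style="PEP 8, type hints required",
    acceptance="all new functions covered by unit tests",
    output_format="unified diff plus a change list",
    task="add input validation to the orders endpoint",
)
print(prompt)
```

The point of the guard is the same as the checklist: a missing field is an error at prompt time, not a surprise in the generated code.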
Refactoring
Avoid vague requests like “make it better”. Instead, request:
- explicit code smells,
- target structure,
- no behavior change,
- required tests after refactor.
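The “no behavior change” requirement can be made concrete with a characterization test that pins current behavior across the refactor. A minimal sketch; the functions and the discount rule are illustrative, not from any real project:

```python
# Before: a nested-conditional "smell" the refactor targets.
def discount_before(total, is_member):
    if is_member:
        if total > 100:
            return total * 0.9
        else:
            return total
    else:
        return total

# After: flattened with a guard clause; behavior must be identical.
def discount_after(total, is_member):
    if is_member and total > 100:
        return total * 0.9
    return total

# Characterization test: same inputs must give same outputs before and after.
for total in (0, 100, 100.01, 500):
    for is_member in (True, False):
        assert discount_before(total, is_member) == discount_after(total, is_member)
```

Asking the model to produce (or keep) such a test is how “no behavior change” becomes verifiable rather than a promise.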
Bug finding
Effective pattern:
- context (symptom, logs, stacktrace),
- cause hypotheses,
- minimal reproduction steps,
- fix + explanation of correctness.
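The “minimal reproduction steps” item is strongest when it is an executable script that triggers the symptom deterministically. A sketch, where `save_order` is a hypothetical stand-in for the buggy code path:

```python
# Minimal reproduction for a symptom: an order is accepted with quantity=0.
# save_order is a hypothetical stand-in for the buggy code, not a real API.

def save_order(quantity):
    # Bug (the cause): the check uses < 0, so quantity=0 slips through.
    if quantity < 0:
        raise ValueError("quantity must be >= 1")
    return {"status": "saved", "quantity": quantity}

# Reproduction: this call should raise, but it saves instead.
result = save_order(0)
print(result)  # symptom: {'status': 'saved', 'quantity': 0}
```

A reproduction this small separates symptom (the printed dict) from cause (the wrong comparison), which is exactly the distinction the prompt should demand.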
Test generation
Always specify:
- test type (unit/integration),
- mocking boundaries,
- edge cases,
- expected invariants.
Documentation from code
Useful outputs:
- API contracts,
- ADR summary,
- support runbook,
- release changelog.
Coding prompts as engineering specifications
A coding prompt should behave like a compact engineering ticket. The model needs to know the target repository, the language version, the framework, the ownership boundary, and the definition of done. Without that, it may produce code that looks correct in isolation but does not fit the existing project.
For bug finding, the prompt must distinguish symptoms from causes. Logs and screenshots are symptoms. The fix should target the cause and explain why the symptom disappears. For refactoring, the prompt must explicitly preserve behavior, because “clean up” can otherwise become an accidental rewrite. For tests, the prompt should define which layer is under test and what must be mocked; otherwise, generated tests often become brittle or too broad.
| Task type | Prompt must include | Acceptance signal |
|---|---|---|
| Code generation | Version, framework, file scope, constraints | Builds and meets behavior |
| Refactoring | Smell, target shape, no behavior change | Tests still pass |
| Bug finding | Symptom, reproduction, suspected area | Cause is explained and fixed |
| Test generation | Layer, mocks, edge cases | Meaningful failure on regression |
Stage takeaway
Coding prompts must be highly specified.
Less ambiguity means less technical debt after generation.
Beginner explanation
A coding prompt is not a request to “write code”. It is a compact engineering specification. The model needs to know language and version, existing libraries, files in scope, behavior that must not change, and how completion should be verified. Without that, the model may create a nice isolated snippet that does not fit the project.
For a new feature, the prompt should include acceptance criteria. Example: “add endpoint POST /orders, validate quantity >= 1, return 400 on validation error, add unit tests for service and integration test for controller.” This defines not only code, but expected behavior.
For bug fixing, separate symptom from cause. Stack trace, log, and screenshot show symptoms. A good prompt asks for root cause, minimal fix, and explanation of why the fix removes the cause. For refactoring, explicitly say “do not change public behavior” and ask for tests or risk notes.
Example bug-fix prompt:

```text
In a Spring Boot 3 project, fix the bug in OrderService.
Symptom: an order is created when quantity=0.
Expected behavior: return validation error before saving.
Do not change API response format.
Add tests for quantity=0 and quantity=1.
In the answer, list changed files and verification.
```
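The fix shape that prompt asks for, sketched in Python for illustration only (the real project is Spring Boot, so class names, the exception, and the storage detail here are placeholders):

```python
class ValidationError(Exception):
    """Maps to the existing 400 response; the API format stays unchanged."""

class OrderService:
    def __init__(self):
        self.saved = []

    def create_order(self, quantity):
        # Fix: validate before saving, so quantity=0 never reaches storage.
        if quantity < 1:
            raise ValidationError("quantity must be >= 1")
        order = {"quantity": quantity}
        self.saved.append(order)
        return order

# The two tests the prompt requires: quantity=0 and quantity=1.
service = OrderService()
try:
    service.create_order(0)
    assert False, "quantity=0 must be rejected"
except ValidationError:
    pass
assert service.create_order(1) == {"quantity": 1}
assert len(service.saved) == 1  # nothing was persisted for the invalid order
```

Note how the sketch satisfies each line of the prompt: validation happens before saving, the error type maps to the existing response format, and both boundary quantities are tested.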
Mini scenarios from real projects
- Model generates compiling code that ignores project constraints: the prompt did not specify versions and conventions.
- Bug-finding output stays superficial: the prompt did not require a findings format with severity and line references.
- Generated tests cover only the happy path: negative and boundary scenarios were not explicitly requested.
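One way to avoid the superficial-findings problem is to require a fixed schema in the prompt. A sketch of such a schema as a dataclass; the field names are assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Required fields for a review finding; the prompt should demand all of them."""
    severity: str       # e.g. "high", "medium", "low"
    file: str           # file the finding refers to
    line: int           # line reference, so the finding is checkable
    summary: str        # what is wrong
    suggested_fix: str  # concrete remediation, not just a complaint

finding = Finding(
    severity="high",
    file="order_service.py",
    line=42,
    summary="order saved when quantity=0",
    suggested_fix="validate quantity >= 1 before persisting",
)
assert finding.severity in ("high", "medium", "low")
```

When every finding must carry a severity and a line reference, vague output like “the code could be cleaner” becomes impossible to return.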
Fast decision rules
- For code tasks, always specify environment context: language, versions, style, constraints.
- For reviews, require structured findings and risk-based prioritization.
- For tests, require happy-path, negative-path, and boundary-case coverage.
Self-check questions
- Why does “generate code” without constraints usually create technical debt?
- Which fields should be mandatory in a bug-finding report format?
- How do you phrase test-generation prompts to cover more than happy path?