Modern LLMs aren’t all that intelligent unless you give them enough context to do a good job. A generic request, something like:
“Hey AI, create a Statement of Work (SOW) for this project.”
will produce an equally generic result: fluffy and barely usable.
The real shift came when I stopped treating the agent like a magic button and started treating it like a junior architect/PM with access to my project materials. That meant feeding it context, pointing it at real sources (docs, code, transcripts, templates), and giving it clear, scoped tasks in a sequence.
This post is the playbook I’ve come up with so far, with concrete prompt examples for going from blank page → architecture → plan → Statement of Work (SOW), all with a human reviewer in the loop: someone technical and experienced enough to steer the process and guide those AI agents.
The examples assume a web project (e.g. Sitecore XM Cloud + Next.js + Azure), but feel free to swap the stack for whatever you use.
Setting up a “project brain”
Your agent is only as good as the context you give it. Context engineering is increasingly becoming a more important part of an AI-powered process than prompt engineering.
Before prompting, I usually put together a project folder like:
/mnt/data/acme/
discovery/
call-01-transcript.md
call-02-transcript.md
rfp.pdf
notes-from-sales.md
current-state/
public-site-audit.md
legacy-arch-notes.md
repo-map.md
planning/
wbs-template.xlsx
estimates-draft.xlsx
assumptions-risks.md
templates/
sow-xmcloud-v2.md
architecture-brief-template.md
milestone-plan-template.md
I also collect links the agent is allowed to use, for example:
- Client site: https://www.acme.com
- Vendor docs: https://doc.sitecore.com/..., https://nextjs.org/docs, https://learn.microsoft.com/azure/...
- Internal blog posts / playbooks
I do this once per client. Then all my prompts can refer to these paths and URLs instead of hand‑explaining everything every time.
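This one-time setup is easy to script. Here is a minimal sketch that creates the folder tree and writes the allowed links to a file the prompts can reference (the folder names mirror the layout above; the `allowed-links.md` file name and `ALLOWED_LINKS` list are illustrative assumptions, not a required format):

```python
from pathlib import Path

# Folder layout mirroring the "project brain" structure above
SUBDIRS = [
    "discovery",
    "current-state",
    "planning",
    "templates",
    "deliverables",
]

# Links the agent is allowed to use, written to a file prompts can point at
ALLOWED_LINKS = [
    "https://www.acme.com",
    "https://doc.sitecore.com",
    "https://nextjs.org/docs",
    "https://learn.microsoft.com/azure/",
]

def scaffold(root: str) -> Path:
    """Create the per-client folder tree and an allowed-links file."""
    base = Path(root)
    for sub in SUBDIRS:
        (base / sub).mkdir(parents=True, exist_ok=True)
    links = base / "allowed-links.md"
    links.write_text("\n".join(f"- {url}" for url in ALLOWED_LINKS) + "\n")
    return base
```

Run it once per client, e.g. `scaffold("/mnt/data/acme")`, and every subsequent prompt can reference the same stable paths.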
At a high level, the way I think about the SOW pipeline looks like this: discovery brief → current-state map → requirements & user stories → architecture options → WBS & plan → SOW draft.
The rest of this post walks through each stage with concrete prompts I actually use.
Turning discovery chaos into a structured brief
My first goal is simple: get from a pile of transcripts and notes to:
- a clean summary of goals/constraints/risks,
- a list of open questions, and
- something I can sanity‑check in 10–15 minutes.
First discovery pass prompt
You are my architecture + delivery assistant for the “ACME Commerce Replatform” project.
I am the lead architect and engagement owner. You assist with analysis and drafting;
I make all decisions and review everything before it goes anywhere.
Project context (high-level):
- Client: ACME Corp, B2C retail in EU + US
- Target stack: Sitecore XM Cloud, Next.js (Vercel), Azure Functions, Azure AD B2C, SAP ERP
- Goal: Modernize ecommerce site, improve performance, simplify authoring, handle seasonal traffic spikes.
Sources you may use for THIS task:
- Discovery calls
- /mnt/data/acme/discovery/call-01-transcript.md
- /mnt/data/acme/discovery/call-02-transcript.md
- Request for proposal (RFP) and notes
- /mnt/data/acme/discovery/rfp.pdf
- /mnt/data/acme/discovery/notes-from-sales.md
- Public site audit
- /mnt/data/acme/current-state/public-site-audit.md
Task:
1. Read all the above.
2. Produce a concise discovery brief with the following structure:
## 1. Business goals
- Bullet list, 5–10 items.
## 2. Functional needs
- Group by theme (Catalog, Search, Checkout, Content/Marketing, Account, etc.).
- For each theme, list concrete bullets, not vague sentences.
## 3. Non-functional constraints
- Performance, security/compliance, SLA/availability, regions, traffic patterns, etc.
## 4. Risks & unknowns
- Things that sound risky or unclear.
- For each, quote the sentence or section you based it on.
## 5. Questions for the next client session
- List questions we must clarify. Separate “Critical” vs “Nice-to-have”.
Rules:
- Do NOT invent requirements or numbers. If something isn’t stated, leave it as unknown.
- When you mention a concrete figure (traffic, SLAs, deadlines), include the source file name.
- Keep the whole brief under 2 pages worth of text.
At the end, explicitly ask me to correct any misunderstandings before we move on.
Human in the loop: you skim this brief, fix anything off, maybe add a few questions, and then you’re ready for Step 2.
Using the agent as a repo and current-state guide
If I’m inheriting a legacy solution, I get the agent to do the boring reading and mapping.
Mapping the existing system
You are now focused on understanding ACME's current technical setup.
I am still the architect; I use your summaries to guide decisions.
If something is unclear, list it as an assumption and I’ll confirm.
Sources for THIS task:
- Legacy architecture notes:
- /mnt/data/acme/current-state/legacy-arch-notes.md
- Repo map and scan (generated by a separate script):
- /mnt/data/acme/current-state/repo-map.md
- Optional code snippets if referenced inside repo-map.md
Task:
1. From these sources, produce a high-level technical overview with sections:
## 1. High-level architecture today
- Describe main components (CMS, frontend, integrations, identity, data stores).
## 2. Key integrations
- For each external system (ERP, CRM, payment, search, etc.), describe:
- What it does today
- How it connects (protocol, API, batch jobs, etc.)
## 3. Pain points and constraints
- Anything noted as slow, brittle, hard to maintain, tightly coupled, etc.
## 4. Migration concerns
- Customizations or patterns that may not translate well to a SaaS/headless model.
2. For each concern, link back to the specific source (file + section).
Rules:
- Do not speculate about systems that aren’t mentioned.
- If the repo-map hints at something but you lack detail, flag it as “assumed” and add to the migration concerns.
End by listing 5–10 areas I should inspect in the actual code myself.
You can follow up with more targeted prompts like:
From the same sources, focus ONLY on how user authentication and authorization are implemented today.
Summarize the flow step-by-step and list anything that might block a move to Azure AD B2C.
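The `repo-map.md` file mentioned above comes from a separate script; the post doesn’t prescribe one, but a minimal sketch could be as simple as walking the repo and summarizing what’s there (the skip list and output format are assumptions you’d tune for your own stack):

```python
from collections import Counter
from pathlib import Path

# Directories that add noise without telling the agent anything useful
SKIP_DIRS = {".git", "node_modules", "bin", "obj", "dist"}

def build_repo_map(repo_root: str) -> str:
    """Walk a repo and emit a markdown summary an agent can read."""
    counts = Counter()
    for path in sorted(Path(repo_root).rglob("*")):
        if any(part in SKIP_DIRS for part in path.parts):
            continue
        if path.is_file():
            counts[path.suffix or "(none)"] += 1
    lines = ["# Repo map", "", "## File counts by extension"]
    for ext, n in counts.most_common():
        lines.append(f"- `{ext}`: {n}")
    return "\n".join(lines) + "\n"
```

In practice you’d extend this with whatever signals matter for your migration: top-level folder listing, detected frameworks, config file excerpts.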
Turning notes into requirements and user stories
Once I understand the ask and the current system, I turn that into structured requirements the team can work with.
Generating and categorizing user stories
You’re helping me turn discovery notes into a structured requirement set.
Sources:
- Discovery brief you previously produced (paste or reference file).
- Discovery transcripts:
- /mnt/data/acme/discovery/call-01-transcript.md
- /mnt/data/acme/discovery/call-02-transcript.md
Task:
1. Propose user stories in this format:
“As a <type of user>, I want <goal> so that <reason>.”
2. Organize them into sections:
- Content & authoring
- Catalog & product discovery
- Checkout & payments
- Accounts & identity
- Analytics & reporting
- Operations & admin
- Other (if needed)
3. For each section:
- Separate into “MVP” vs “Later / Nice-to-have” based ONLY on what’s in the transcripts.
- If priority is unclear, tag it as “TBD”.
4. At the end, output:
## Gaps / Likely Missing Areas
- Any standard area (SEO, accessibility, observability, etc.) that was never mentioned.
Rules:
- No made-up features (“AI recommendations”) unless the client actually said something similar.
- Keep each story short and concrete.
- Don’t write acceptance criteria yet; I’ll do that.
Ask me to review the sections and priorities, and say where you’re uncertain.
You then go through, promote/demote priorities, and add acceptance criteria yourself or with the agent in a second pass.
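Because the agent is asked to emit stories in a fixed format, its output is easy to lint before you review it. A small sketch (the regex is an assumption about how strictly the “As a … I want … so that …” template is followed):

```python
import re

# Matches the "As a <user>, I want <goal> so that <reason>" template
STORY_RE = re.compile(
    r"As an? (?P<user>.+?), I want (?P<goal>.+?) so that (?P<reason>.+)",
    re.IGNORECASE,
)

def lint_stories(text: str) -> list:
    """Return bullet lines that don't match the user-story template."""
    bad = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("-"):
            body = line.lstrip("- ").strip("“”\"")
            if not STORY_RE.match(body):
                bad.append(line)
    return bad
```

Anything the linter flags is either a malformed story or a requirement the agent smuggled in outside the format, and both are worth a look.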
Exploring architecture options together
Here I use the agent as a sounding board to explore a couple of viable options, not to auto‑architect the whole thing.
Strawman architecture options prompt
You are assisting me with solution architecture. I choose the final design;
your job is to propose options and highlight trade-offs.
Context (confirmed so far):
- Goals: modern B2C web, better performance, simpler authoring, support traffic spikes (5x on Black Friday).
- Target stack: Sitecore XM Cloud, Next.js on Vercel, Azure Functions, Azure AD B2C, SAP ERP.
Sources:
- Discovery brief: /mnt/data/acme/deliverables/discovery-brief-v1.md
- Current state overview: /mnt/data/acme/current-state/legacy-summary.md
- Vendor docs (read only for patterns, do NOT quote large chunks):
- https://doc.sitecore.com
- https://nextjs.org/docs
- https://learn.microsoft.com/azure/
Task:
1. Propose TWO high-level architecture options:
Option A – “Fastest time-to-market”
Option B – “More flexible, more custom”
For each option, describe:
- Main components
- How they interact for main flows (browse, search, checkout)
- Where integrations with SAP and CRM live
- How identity (Azure AD B2C) fits in
2. Then create a comparison table with:
- Dimension: Time-to-market, Complexity, Operational overhead, Flexibility, Risk.
Rules:
- Base recommendations on modern, well-supported patterns from the vendor docs.
- If you’re unsure about a detail, state it as an assumption, don’t present it as fact.
- No buzzwords for their own sake. Favor simple, boring architecture where it fits.
End by listing 5 questions you recommend I ask the client to choose between A and B.
Example prompt: generate diagrams (for later editing)
Using the selected option (I chose Option A from your last answer),
generate a Mermaid system diagram that shows:
- Users → Edge → Next.js app → XM Cloud → Azure Functions → SAP
- Azure AD B2C for identity
- CDN in front of static/media content
Constraints:
- Keep node names clear and concise.
- Use LR layout.
- Don’t explain Mermaid, just output the code block.
At the end, write a 2–3 paragraph plain-English explanation I can paste into an architecture brief.
You paste the Mermaid into your diagram tool, tidy labels, and you’ve got ready‑to‑show visuals.
Planning, work breakdown structure, and effort — agent as checklist
The agent doesn’t own the estimates. It helps me not forget work.
First work breakdown structure (WBS) draft prompt
You are now helping me turn the selected architecture into a delivery plan.
I am the delivery lead. You can suggest tasks and sequencing;
I set estimates and commitments.
Sources:
- Architecture brief: /mnt/data/acme/deliverables/architecture-brief-v1.md
- User story set: /mnt/data/acme/deliverables/user-stories-v1.md
- Our generic web-project work breakdown structure (WBS) template:
- /mnt/data/acme/planning/wbs-template.xlsx
- Our standard “Web project checklist”:
- /mnt/data/internal/checklists/web-project-checklist.md
Task:
1. Propose a Work Breakdown Structure organized by workstream:
- Project setup / environments / DevOps
- Frontend (Next.js) implementation
- Backend & integrations (SAP, CRM, search, payments)
- Content modeling & migration
- QA & testing (functional, UAT, performance)
- Analytics, SEO, and tracking
- Go-live & hypercare
2. Under each workstream, list concrete epics/tasks at roughly “1–2 week” granularity.
3. For each task:
- Note key dependencies (if any).
- Briefly flag risk level: Low / Medium / High, with 1-line reason.
Rules:
- Use our checklist file so you don’t forget things like monitoring, logging, redirects, accessibility.
- Do NOT assign effort or dates. I will do that.
- Do not invent tasks that clearly contradict the chosen architecture.
End with:
- A list of tasks you think are “high risk / high uncertainty” that I should estimate separately.
You then copy this into your planning tool, adjust, and add estimates manually (or with a second, clearly constrained prompt).
SOW and document set — from plan to contract‑ready draft
Finally, I turn everything into client‑ready docs: discovery deck, architecture brief, plan, and SOW. The agent is mainly doing formatting and assembly here. I keep control of promises and numbers.
Drafting a SOW from real inputs (not thin air)
You are helping me draft a Statement of Work for the
“ACME Commerce Replatform” project.
I am the engagement lead and I approve all scope, assumptions, and pricing.
Your job is to assemble a clear draft from our existing materials,
not to invent new commitments.
Use ONLY these sources:
- Discovery brief:
- /mnt/data/acme/deliverables/discovery-brief-final.md
- Architecture brief:
- /mnt/data/acme/deliverables/architecture-brief-final.md
- WBS and estimates:
- /mnt/data/acme/planning/wbs-final.xlsx
- /mnt/data/acme/planning/estimates-final.xlsx
- Assumptions & risks:
- /mnt/data/acme/planning/assumptions-risks-final.md
- Standard SOW template:
- /mnt/data/templates/sow-xmcloud-v2.md
Do NOT pull in knowledge from anywhere else.
Task:
1. Infer the SOW template structure (section headings) from sow-xmcloud-v2.md.
2. Produce a project-specific SOW draft that:
- Keeps all legal/boilerplate sections from the template as-is.
- Fills in project-specific parts ONLY with information from the sources above.
Specifically:
- “Objectives” → from discovery brief.
- “In-scope” → from WBS + architecture brief. Write as numbered bullet points.
- “Out-of-scope” → anything explicitly listed in assumptions-risks-final.md,
plus anything the template expects that we are NOT delivering.
- “Deliverables & milestones” → from WBS & estimates; group by phase.
- “Timeline” → use relative timing (e.g. “Week 1–4: …”), not fixed dates.
- “Assumptions & client responsibilities” → from assumptions-risks-final.md.
3. Mark any places where required information is missing with a clear TODO,
e.g. “TODO: confirm payment terms with finance”.
Rules:
- No marketing fluff. Write in straightforward, contractual language.
- Do NOT change any legal clause wording that appears in the template.
- If you are unsure, write the doubt in a comment like: “NOTE FOR AUTHOR: …”.
At the end:
- Output the full SOW as markdown.
- Then append a short checklist: “Things for human review before sending to legal”.
You then:
- Read it end‑to‑end
- Adjust scope wording, milestones, and assumptions
- Hand it to legal / leadership for their review
Same pattern works for an architecture brief or discovery deck; you just swap in a different template file and structure.
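Before handing a draft to legal, I also like to verify mechanically that no review markers slipped through. A minimal sketch (the marker strings match the conventions the SOW prompt above asks the agent to use):

```python
def unresolved_markers(sow_text: str) -> list:
    """Return (line number, line) pairs still containing review markers."""
    markers = ("TODO:", "NOTE FOR AUTHOR:")
    hits = []
    for i, line in enumerate(sow_text.splitlines(), start=1):
        if any(m in line for m in markers):
            hits.append((i, line.strip()))
    return hits
```

An empty result doesn’t mean the draft is correct, only that nothing the agent flagged for you is still pending.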
Keeping humans in the loop on purpose
You’ll notice each prompt:
- States clearly: you are responsible for decisions.
- Limits sources to specific files/links.
- Asks the agent to surface assumptions and questions, not hide them.
- Ends with “ask me to review…” or “list what I should check”.
A few extra patterns I use to stay in control:
Two‑pass pattern
- Pass 1: “Draft the thing and list your assumptions and uncertainties.”
- Pass 2 (after I review): “Now update the draft based on these corrections: …”
Explicit veto on legal and estimates
I often add:
Important:
- You are not allowed to invent legal clauses or change liability language.
- You are not allowed to decide pricing or discounts.
- Treat all numbers in estimates-final.xlsx as authoritative.
Source‑quoting for traceability
When I care about traceability:
For each requirement or constraint you mention,
include in parentheses which source file it came from.
If there is no source, don’t state it as a fact.
That makes it very obvious when the agent is guessing.
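Those parenthetical citations also make the draft machine-checkable: you can scan it for cited file names and flag any that don’t exist in the project folder. A sketch, assuming a `(source: file.md)` citation style (the exact pattern depends on what you ask the agent to emit):

```python
import re
from pathlib import Path

# Assumes citations look like "(source: rfp.pdf)"
CITE_RE = re.compile(r"\(source:\s*([^)]+)\)")

def missing_sources(draft: str, project_root: str) -> set:
    """Return cited file names not found anywhere under the project folder."""
    root = Path(project_root)
    known = {p.name for p in root.rglob("*") if p.is_file()}
    cited = {m.strip() for m in CITE_RE.findall(draft)}
    return {c for c in cited if c not in known}
```

A non-empty result is a strong hint the agent cited a source that doesn’t exist, which is exactly the kind of guessing you want to catch.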
Further Reading & Tools
If you want to go deeper into the ideas behind this workflow:
- From Discovery to PRD: How AI Transformed Our Requirements Process – White Prompt
  https://blog.whiteprompt.com/from-discovery-to-prd-how-ai-transformed-our-requirements-process-7e3b3fe26d8c
- From Meeting Transcripts to User Stories: AI-Assisted Guide – DevAgentix
  https://www.devagentix.com/blog/analyze-meeting-transcripts-to-create-user-stories
- GitDiagram – Visualize codebases as architecture diagrams
  https://news.ycombinator.com/item?id=42521769
- Building an AI Agent for Codebase Analysis and Understanding – Zogoo
  https://zogoo.medium.com/building-an-ai-agent-for-codebase-analysis-and-understanding-d02158ee0e99
- The CTO’s Blueprint to Retrieval-Augmented Generation (RAG) – HatchWorks
  https://hatchworks.com/blog/gen-ai/cto-blueprint-rag-llm/
- AI on Trial: Legal Models and Hallucinations – Stanford HAI
  https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
Related posts in the AI-Powered Stack series: