
AI‑Powered Stack — From Signed SOW to Sprint One


In my previous post, AI-Powered Stack — From Blank Page to Signed SOW, I walked through how I use AI to go from blank page to a signed Statement of Work (SOW): discovery, requirements, architecture, and contract.

This one starts right after the signature.

By now we have a signed SOW, an agreed architecture, and estimates.

What we don’t have yet is a backlog in the tool, a team plan, a sprint outline, or a clean Sprint Zero.

This is the gap I use an AI‑assisted Sprint Zero to close. The agents help with reading, structuring, and drafting; I’m still the one making calls, changing scope, and owning the plan.

Below is the flow I actually run, plus concrete prompt examples you can adapt. If you want the “before the signature” side of this, see the previous post linked above.

At a high level, I think about the “signed SOW → Sprint One” pipeline like this:

Signed SOW, architecture, estimates → Epics, stories, importable CSV → Team shape, sprint outline → Sprint 0 checklist, no-blockers state → Team ready for Sprint One


Setting up a “delivery workspace”

Before I ask an agent to do anything, I pull everything into one place: the signed contract documents, the discovery notes and architecture brief, and the planning spreadsheets (WBS, estimates, assumptions and risks).

Then I “introduce” the project to my planning / coding agent.

Prompt: initialize the Sprint Zero assistant

You are my Sprint Zero assistant for the “ACME XM Cloud Replatform” project.

I’m the lead (project manager, business analyst, and architect in one). 
You help me analyze, structure, and draft. I review and approve everything.

Stack:
- CMS: Sitecore XM Cloud
- Frontend: Next.js (App Router)
- Composable: Sitecore CDP & Personalize, Sitecore Search
- Cloud: Azure (Functions, Storage, Key Vault, APIM)
- ALM: Jira (or Azure DevOps)

You can use:
- Contracts: ./contract/sow-final.md, ./contract/msa-final.md
- Discovery & architecture: ./discovery/*.md, ./solution/architecture-brief-final.md
- Planning: ./planning/wbs-final.xlsx, ./planning/estimates-final.xlsx, ./planning/assumptions-risks-final.md
- Public docs (read-only): XM Cloud docs, CDP/Personalize docs, Search docs,
  Next.js docs, Azure docs, client site.

Rules:
- Ground outputs in these materials. If something isn’t there, mark it as ASSUMPTION or UNKNOWN.
- Don’t change SOW scope or legal terms.
- End each answer with:
  - “Questions for you” (items I should decide)
  - “Unknowns/Risks” (if any)

That’s the only “big” prompt. Everything else is small, focused steps on top of this.
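Before briefing the agent, I like to confirm the workspace actually contains everything the prompt claims it does. A minimal sketch, assuming the file layout from the prompt above (the `check_workspace` helper is my own convention, not part of any tool):

```python
from pathlib import Path

# Files the Sprint Zero prompts reference; adjust to your own workspace layout.
REQUIRED = [
    "contract/sow-final.md",
    "contract/msa-final.md",
    "solution/architecture-brief-final.md",
    "planning/wbs-final.xlsx",
    "planning/estimates-final.xlsx",
    "planning/assumptions-risks-final.md",
]

def check_workspace(root: str = ".") -> list[str]:
    """Return the referenced files that are missing from the workspace."""
    base = Path(root)
    return [rel for rel in REQUIRED if not (base / rel).is_file()]

missing = check_workspace()
if missing:
    print("Missing before you brief the agent:")
    for rel in missing:
        print(f"  - {rel}")
```

Ten seconds of checking here saves an agent run that quietly hallucinates the contents of a file it could not find.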


Turning SOW + architecture into epics

Step two is getting from “scope paragraphs” to epics by capability.

For XM Cloud projects I usually group work into streams like XM Cloud, Next.js, CDP/Personalize, Search, integrations, DevOps, and analytics.

Prompt: scope → epics and features

Using:
- ./contract/sow-final.md
- ./solution/architecture-brief-final.md
- ./planning/wbs-final.xlsx

Create a first-pass backlog skeleton:

1. Define 6–10 WORKSTREAMS that match this stack
   (XM Cloud, Next.js, CDP/Personalize, Search, Integrations, DevOps, Analytics, etc.).

2. Under each workstream:
   - List 2–6 EPICS.
   - For each epic:
     - 2–5 short FEATURES (client-visible things).
     - 1–2 sentence description.
     - References to SOW section / WBS IDs that justify it.

Rules:
- Only include work that is clearly in scope.
- If something is implied but not explicit, include it but tag as ASSUMPTION.
- Don’t write user stories yet.

Output as Markdown headings:
## Workstream: XM Cloud
### Epic: ...
- Feature: ...

I skim this, merge/split epics, and make sure every SOW deliverable has a home.


Expanding features into stories (without losing the plot)

Next, I move from epics and features to user stories and initial acceptance criteria. I do this one stream at a time (XM Cloud + Next.js first, usually).

Prompt: feature → stories and acceptance criteria

Focus only on these workstreams and their epics/features:
[PASTE relevant “Workstream: XM Cloud” and “Workstream: Next.js” sections]

Use:
- ./discovery/user-journeys-final.md
- ./solution/architecture-brief-final.md
- Client site
- XM Cloud + Next.js docs

Task:
For each FEATURE:
1) Draft user stories using:
   "As a <persona>, I want <capability> so that <value>."

2) For each story add:
   - 3–6 acceptance criteria (bullet points, concrete and testable).
   - Tags like: xm-cloud, nextjs, cdp, search, integration.

3) Group stories under:
   - Content authoring
   - Web experience
   - Personalization
   - SEO/accessibility
   - Analytics (if relevant)

Rules:
- Don’t invent new features.
- If a story is unclear, include it but add [NEEDS CLARIFICATION] at the start.
- Keep language simple, no marketing fluff.

Output:
- Markdown, grouped by Epic → Feature → Stories.

Then I go in and merge duplicates, tighten vague acceptance criteria, and resolve anything tagged [NEEDS CLARIFICATION].

The result is something the team can actually read and talk about.
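Because the prompt pins down the story template and the criteria count, the output is easy to lint mechanically. A sketch under those assumptions (`lint_story` and its rules are my own, mirroring the prompt, not any standard tool):

```python
import re

# The template the prompt asks for: "As a <persona>, I want <capability> so that <value>."
STORY_RE = re.compile(r"^As an? .+, I want .+ so that .+", re.IGNORECASE)

def lint_story(story: str, criteria: list[str]) -> list[str]:
    """Flag stories that drift from the template or the 3-6 criteria rule."""
    problems = []
    if not STORY_RE.match(story.strip()):
        problems.append("story does not follow 'As a ..., I want ... so that ...'")
    if not 3 <= len(criteria) <= 6:
        problems.append(f"expected 3-6 acceptance criteria, got {len(criteria)}")
    return problems

# A well-formed story passes; a vague one is flagged on both counts.
ok = lint_story(
    "As a content author, I want to edit hero banners in XM Cloud "
    "so that I can launch campaigns without a deploy.",
    ["Banner is editable in Pages", "Changes publish within 5 minutes",
     "Preview matches live rendering"],
)
bad = lint_story("Make the homepage faster", ["It should be fast"])
```

Anything the linter flags either goes back to the agent or gets the [NEEDS CLARIFICATION] tag.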


Generating an importable backlog for Jira or Azure DevOps

Once stories look good, I want them in the tool, not stuck in Markdown. I usually start with epics and stories only (no subtasks yet) so I can check the import format.

Prompt: stories → Jira CSV

We’re going to import epics and stories into Jira.

Target project key: ACMEWEB

CSV columns:
- Issue Type (Epic, Story)
- Summary
- Description
- Epic Name (for epics)
- Epic Link (for stories, using the epic Summary)
- Components
- Labels
- Priority (set "Medium" for all)
- Acceptance Criteria (in Description, under a heading)

Task:
1) Take the epics and stories we agreed on (I’ll paste them after this prompt).
2) Generate a CSV with:
   - One row per epic (Issue Type = Epic).
   - One row per story (Issue Type = Story), linked to the correct epic.
3) Use short, clear summaries (<80 characters).
4) Put the full story and acceptance criteria into Description.

Rules:
- Keep text as close as possible to the approved stories.
- If a story was marked [NEEDS CLARIFICATION], add that tag to its Labels too.

Output:
- A single CSV code block ready to save as `acme-backlog-epics-stories.csv`.
- After the CSV, list any rows you’re unsure about.

For Azure DevOps, I do the same but with Work Item Type, Title, Area Path, etc., matching an exported sample CSV from the actual board.

I always review the CSV before importing and do a tiny test import first.
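Part of that review can be automated. A minimal sketch, assuming the column layout from the prompt above, that catches the two failure modes I see most, over-long summaries and stories pointing at epics that don't exist (the helper name is mine):

```python
import csv
import io

def validate_backlog_csv(text: str) -> list[str]:
    """Sanity-check an agent-generated epics/stories CSV before a Jira import attempt."""
    rows = list(csv.DictReader(io.StringIO(text)))
    # Per the prompt, stories link to epics via the epic's Summary.
    epic_summaries = {r["Summary"] for r in rows if r["Issue Type"] == "Epic"}
    errors = []
    for i, row in enumerate(rows, start=2):  # line 1 is the header
        if row["Issue Type"] not in ("Epic", "Story"):
            errors.append(f"row {i}: unexpected issue type {row['Issue Type']!r}")
        if len(row["Summary"]) >= 80:
            errors.append(f"row {i}: summary is {len(row['Summary'])} chars, limit is 80")
        if row["Issue Type"] == "Story" and row["Epic Link"] not in epic_summaries:
            errors.append(f"row {i}: epic link {row['Epic Link']!r} matches no epic")
    return errors
```

An empty list means the CSV is at least structurally sound; it still doesn't replace the tiny test import.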


Breaking stories into technical tasks

Once the board has epics and stories, I use the agent to suggest implementation tasks. I don’t import these blindly, but they make a great starting checklist.

Prompt: story → implementation tasks

Using:
- ./solution/architecture-brief-final.md
- The following user stories (I’ll paste a subset)
- XM Cloud, CDP/Personalize, Search, and Next.js docs as needed

For each story:

1) Propose 3–10 implementation tasks such as:
   - XM Cloud: content model updates, layout setup, serialization changes.
   - Next.js: route setup, data fetching, component coding, styling.
   - CDP/Personalize: event tracking, audience setup, experiments/placements.
   - Search: indexing config, query implementation, UI wiring.
   - Azure: Functions/APIM wiring, Key Vault config, pipelines.

2) For each task provide:
   - Title (0.5–2 days of work)
   - 1–2 sentence description
   - Suggested component tag (xm-cloud, nextjs, cdp, search, azure, devops, qa)
   - Dependencies (story, other tasks, environment, or client access)

Rules:
- Do not expand scope. Tasks should only implement the existing stories.
- Mark anything blocked on client input as "BLOCKED: needs client X".

Output:
- Markdown table per story:
  - Task Title | Description | Component | Dependencies

I trim this down, decide which tasks matter, and then either import them as subtasks under their stories or keep them as checklists in the story descriptions.

Again: I review before import.


Sketching the team shape and capacity

Now we know what needs to be done. Next question: who should do it, and at what pace?

I give the agent the estimates and backlog and let it propose a team mix. I still make the final staffing decisions.

Prompt: suggest team composition

Using:
- ./planning/wbs-final.xlsx
- ./planning/estimates-final.xlsx
- The epic list

Propose a team composition for the main delivery phase
(~3–6 months) for this stack:

Roles to consider:
- Solution architect (me)
- XM Cloud engineer
- Next.js engineer(s)
- Integration engineer (Azure Functions/APIM)
- CDP/Personalize specialist
- Search specialist
- QA engineer
- DevOps/platform engineer
- PM / Scrum master
- BA / Product owner (client-side)

Task:
1) Suggest:
   - Number of people per role (or 0.5 FTE if part-time).
   - Short rationale per role.

2) Suggest a ballpark throughput:
   - Stories or points per 2-week sprint,
     clearly marked as an ASSUMPTION.

3) Highlight:
   - Skills we might be missing.
   - Roles that can realistically be combined in a small team.

Rules:
- Don’t change scope or dates.
- Treat this as a draft for me to adjust, not a commitment.

I adjust based on actual people and budget. Often I can cover architect + PM + BA myself early on, then add specialists as we ramp.
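The throughput number the agent proposes is only useful if I can sanity-check it against the team's actual capacity. A rough sketch; the FTE figures and focus factor below are illustrative assumptions, not recommendations:

```python
# ASSUMPTION: illustrative staffing; replace with your real team and rates.
team_fte = {
    "XM Cloud engineer": 1.0,
    "Next.js engineer": 2.0,
    "Integration engineer": 0.5,
    "QA engineer": 1.0,
    "Solution architect / PM / BA": 0.5,  # me, early on
}

SPRINT_DAYS = 10     # 2-week sprint
FOCUS_FACTOR = 0.7   # ASSUMPTION: meetings, reviews, unplanned work eat ~30%

def sprint_capacity_days(fte: dict[str, float]) -> float:
    """Rough engineering days available per sprint."""
    return sum(fte.values()) * SPRINT_DAYS * FOCUS_FACTOR

capacity = sprint_capacity_days(team_fte)
# With tasks sized at 0.5-2 days each, this bounds how many can land per sprint.
```

If the agent's "stories per sprint" assumption implies more task-days than this number, one of them is wrong, and it's usually the throughput.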


Mapping work to sprints

With backlog and a draft team, I ask the agent to sketch a sprint plan.

Prompt: backlog → sprint outline

Assume:
- 2-week sprints
- The team composition we decided on (I’ll paste it)
- Target: MVP in about N sprints, then enhancements

Using:
- The epic and key story list
- Dependencies from WBS and architecture (e.g., content model before migration, tracking before personalization)

Task:
1) Propose a sprint-by-sprint outline:

For each sprint:
- Sprint goal(s) (1–3 bullets)
- Epics/stories that fit that goal
- Any setup or integration milestones

2) Include:
- A "Sprint 0" focused on:
  - Environments & access
  - XM Cloud + repo setup
  - Next.js skeleton
  - Basic CDP event stream
- Sprints for:
  - Data migration & SEO redirects
  - Load/performance testing
  - Go-live and hypercare

3) Present a summary table:
  Sprint | Approx calendar weeks | Focus | Notes

Rules:
- Respect logical dependencies.
- Assume we need slack for unplanned work.
- Flag any sprint that looks overloaded.

End with:
- “Questions for you” about trade-offs (scope vs time).

The output isn’t perfect, but it’s good enough that a couple of passes turn it into a slide for the kickoff and a realistic initial roadmap.
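The "respect logical dependencies" rule is also easy to verify mechanically once the dependencies are written down. A sketch using Python's standard-library topological sorter; the epic names and edges below are hypothetical, mirroring the examples in the prompt (content model before migration, tracking before personalization):

```python
from graphlib import TopologicalSorter

# ASSUMPTION: hypothetical epic-level dependencies pulled from the WBS notes.
deps = {
    "Data migration": {"Content model"},
    "Personalization": {"CDP event tracking"},
    "Go-live": {"Data migration", "Load testing"},
}

# static_order() yields a valid sequencing; a CycleError here means the
# dependency notes in the WBS contradict each other and need a human look.
order = list(TopologicalSorter(deps).static_order())
```

I compare this ordering against the agent's sprint outline: any epic scheduled before one of its prerequisites is a flag for the next pass.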


Building the “no blockers” checklist

A surprising amount of early pain comes from boring things like “nobody has access to CDP yet”. I have the agent turn the SOW + architecture into a readiness checklist.

Prompt: Sprint Zero readiness

Goal: By the end of Sprint 0, nothing blocks the team on day 1 of Sprint 1.

Using:
- ./solution/architecture-brief-final.md
- ./contract/sow-final.md
- ./planning/assumptions-risks-final.md

Create a checklist with four sections:

1) Access & accounts
   - XM Cloud org and projects
   - Sitecore CDP & Personalize tenant
   - Sitecore Search
   - Azure subscription / resource group
   - Git repos, CI/CD tool
   - Jira or Azure DevOps
   - Monitoring/logging tools

2) Environments
   - XM Cloud environments (Dev, Test, Prod)
   - Front-end hosting (Vercel or other)
   - Test data & identities
   - DNS / certificates if applicable

3) Governance & ways of working
   - Definition of Ready / Done
   - Branching and PR rules
   - Release / deployment process
   - Change control aligned with SOW

4) Compliance & data
   - Data residency decisions (for CDP, Search)
   - PII handling in events
   - Security / pen-test slots if needed

For each item include:
- Owner (Client / Our team / Shared)
- Needed by (Before Sprint 0 end / Before first integration sprint)

Output as Markdown checkboxes I can paste into a shared doc.

Most of these become Sprint Zero tasks in Jira or ADO. A lot of them are on the client’s side, so this also becomes a simple “help us help you” list.
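To move checklist items into the tool without retyping, I can parse the Markdown checkboxes directly. A sketch assuming one formatting convention of my own, owner in parentheses at the end of each item, which is not part of the prompt above:

```python
import re

# ASSUMPTION: items look like "- [ ] XM Cloud org access (Client)".
CHECKBOX = re.compile(r"^- \[[ x]\] (?P<item>.+?) \((?P<owner>Client|Our team|Shared)\)$")

def checklist_to_tasks(markdown: str) -> list[dict]:
    """Turn Sprint Zero checklist lines into rows for a task import."""
    tasks = []
    for line in markdown.splitlines():
        m = CHECKBOX.match(line.strip())
        if m:
            tasks.append({"Summary": m["item"], "Owner": m["owner"],
                          "Labels": "sprint-zero"})
    return tasks
```

The resulting rows drop straight into the same CSV import flow as the backlog.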


Keeping a “deep research” agent in the background

XM Cloud, CDP/Personalize, Search, Azure, and Next.js all change quickly. I keep a separate research agent that doesn’t touch the backlog directly but keeps my templates and assumptions fresh.

Prompt: deep research agent brief

You are my deep research assistant for Sitecore-based projects.

Your job:
- Track useful updates and best practices for:
  - Sitecore XM Cloud
  - Sitecore CDP & Personalize
  - Sitecore Search
  - Next.js (especially App Router and data fetching)
  - Azure services used in these projects

Sources (non-exhaustive):
- Official Sitecore docs and developer portal
- Official Next.js docs
- Microsoft Azure docs and architecture center
- Release notes / product blogs

When I ask you about a topic (e.g., “XM Cloud deployment practices”,
“CDP tracking with Next.js”, “Search widgets”), you should:
- Pull recent, reputable sources.
- Summarize what changed or what’s recommended now.
- Suggest:
  - Backlog items to add or adjust
  - Checklist changes for Sprint Zero
- Give me URLs so I can read more.

Rules:
- Don’t modify our SOW or backlog yourself.
- Clearly separate facts from your own inferences.

I run this occasionally and fold any good ideas into my next Sprint Zero or backlog template.


Humans still run the show

In practice, what changed for me is where the time goes: the agents handle the reading, structuring, and drafting, while I spend mine reviewing and deciding.

The pattern is always the same: the agent drafts, I review, I adjust scope where needed, and I approve.

By the time Sprint Zero ends, we have a backlog of epics and stories in the tool, a draft team shape and sprint outline, and a readiness checklist with no open blockers, and the project is ready to move from “signed” to “actually building”.


Related posts in the AI-Powered Stack series:

- AI-Powered Stack — From Blank Page to Signed SOW
- AI Code Generation — From prompt to XM Cloud component matrix
- AI-Powered Stack — Series overview and roadmap