AI-Powered Stack — Working with AI as your Sitecore BA, architect, and PM

On many of my engagements I work solo—wearing the BA, architect, and PM hats while still shipping features. That’s a lot of context-switching. Over the past year or so I’ve been experimenting with AI tools to offload some of the repetitive, document-heavy parts of these roles. It doesn’t always work perfectly, but when it does, it frees me up for the parts that actually need human judgment.

This post documents the role‑specific workflows I’ve settled on for Sitecore projects. The goal isn’t to let AI run the show—it’s to delegate the grunt work (synthesizing docs, drafting artifacts, spotting inconsistencies) so I can spend more time on trade‑offs, stakeholder conversations, and decisions that matter.

Here’s roughly how I think about the agent setup:

| Role | Tools | Key artifacts |
|------|-------|---------------|
| Business analyst agents | NotebookLM, ChatGPT Pro | Requirements & backlog |
| Architect agents | Claude Code, ChatGPT Pro | Architecture & decision notes |
| Project manager agents | ChatGPT Pro | Roadmaps & RAID log |

The examples below assume XM Cloud + Next.js/App Router + Content SDK, often with Sitecore Search, Content Hub, or CDP/Personalize in the mix. Your stack may differ, but the patterns should translate.


A few ground rules

Before getting into specifics, here are the principles I try to follow (they're expanded in the governance section at the end):

  • Single source of truth in git: requirements, architecture, prompts, and checklists live beside code.
  • AI drafts, humans decide: nothing becomes canonical without my review.
  • Citations required: claims about Sitecore behavior need a doc or project-RAG source.
  • Explicit agent boundaries: agents may touch docs, specs, and tests, never secrets or serialized items.

With that framing, here’s what the workflows look like in practice.


Business analyst: making sense of messy inputs

The BA work is mostly about synthesis—taking RFPs, meeting notes, legacy docs, and stakeholder opinions, then turning them into something a dev team can act on. AI helps me get through the pile faster and catch gaps I might miss.

BA workflow: capture and normalize discovery inputs

Goal: Turn RFPs, notes, recordings, and legacy documentation into a single source of truth you can query and evolve.

Steps:

  1. Centralize sources.

    • I drop RFPs, pitch decks, previous SOWs, and CRM notes into a NotebookLM notebook or similar project RAG.
    • I store the same documents (minus anything too sensitive) in a private Azure storage account and index them with Azure AI Search, wired into Azure OpenAI “On your data” so later prompts get citations instead of hallucinations (a minimal sketch follows this workflow).
  2. Create a discovery taxonomy.
    I ask my BA agent to propose a simple taxonomy (for example: personas, journeys, channels, KPIs, content types, features, constraints) and save it as docs/discovery_taxonomy.md.

  3. Run structured Q&A.
    Using ChatGPT or Claude Code with the RAG attached:

    • Ask for summaries per persona and per journey.
    • Ask it to list explicit requirements, implicit requirements, and “unknowns” that must be clarified.
    • Export results into Markdown files under docs/requirements/.
  4. Turn “unknowns” into stakeholder questions.
    I have the agent convert unknowns into specific questions I can ask during workshops. This becomes my discovery backlog.

Output: docs/requirements/ folder with per-journey requirement summaries, a discovery questions list, and a living taxonomy.
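
For the “On your data” wiring in step 1, here’s a minimal TypeScript sketch. The deployment name, index name, and env var names are placeholders, and the `data_sources` extension is Azure-specific, so verify the request shape against the current Azure OpenAI docs:

```ts
import { AzureOpenAI } from "openai";

const client = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-06-01",
});

async function askDiscovery(question: string) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // your Azure deployment name
    messages: [{ role: "user", content: question }],
    // Attach the Azure AI Search index so answers come back grounded, with citations
    data_sources: [
      {
        type: "azure_search",
        parameters: {
          endpoint: process.env.AZURE_SEARCH_ENDPOINT,
          index_name: "discovery-docs", // hypothetical index name
          authentication: { type: "api_key", key: process.env.AZURE_SEARCH_KEY },
        },
      },
    ],
  } as any); // data_sources is an Azure extension, not in the base OpenAI types

  console.log(completion.choices[0].message.content);
}

askDiscovery("Summarize explicit requirements for the checkout journey.");
```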

Prompt I actually use:

<context>
You are a senior business analyst specializing in Sitecore XM Cloud implementations. You have deep knowledge of:
- XM Cloud architecture (Experience Edge, Content SDK, headless rendering)
- Content Hub integration patterns (DAM, CMP, content operations)
- Sitecore Search and CDP/Personalize capabilities
- Next.js App Router and modern headless patterns
</context>

<task>
Analyze the uploaded documents and extract a structured requirements specification.
</task>

<inputs>
Documents: [RFP/meeting notes/legacy docs/stakeholder interviews]
Project type: [New build | Migration | Enhancement]
Known constraints: [Timeline, budget, compliance requirements if any]
</inputs>

<instructions>
1. **Extract explicit requirements** — things the client directly requested
2. **Infer implicit requirements** — industry standards they'll expect:
   - SEO (meta, structured data, sitemaps)
   - Accessibility (WCAG 2.1 AA minimum)
   - Performance (Core Web Vitals targets)
   - Security (OWASP top 10, content validation)
   - Mobile-first responsive design
3. **Identify Sitecore-specific requirements** — capabilities that map to XM Cloud features:
   - Content modeling needs → XM Cloud templates
   - Personalization scenarios → CDP/Personalize rules
   - Search requirements → Sitecore Search widgets
   - Asset management → Content Hub DAM integration
4. **List unknowns** — gaps that block estimation or architecture
</instructions>

<output_format>
## Explicit Requirements
| ID | Description | Source | Category | Priority |
|----|-------------|--------|----------|----------|
| REQ-001 | [One sentence] | [Doc/section] | [Content/Components/Integration/Personalization/Search/Infrastructure] | [Must/Should/Could] |

## Implicit Requirements
| ID | Description | Rationale | Category |
|----|-------------|-----------|----------|
| IMP-001 | [Requirement] | [Why this is expected] | [Category] |

## Sitecore Feature Mapping
| Requirement ID | XM Cloud Feature | Implementation Notes |
|----------------|------------------|---------------------|
| REQ-001 | [Feature name] | [Brief technical approach] |

## Unknowns (Discovery Questions)
| ID | Question | Impact if Unresolved | Suggested Owner |
|----|----------|---------------------|-----------------|
| UNK-001 | [Specific question] | [What we can't estimate/design] | [Client/Tech/Both] |
</output_format>

<quality_checks>
- Every requirement has a traceable source
- Categories align with XM Cloud delivery phases
- Unknowns are phrased as answerable questions, not vague concerns
- No duplicate requirements across sections
</quality_checks>

BA workflow: epics, stories, and acceptance criteria

Goal: Convert the synthesized requirements into a backlog that development can estimate and build.

Steps:

  1. Define issue templates.
    In my repos I add Markdown templates for epics and stories (for example docs/templates/epic.md, docs/templates/story.md) that include:

    • problem statement,
    • target personas and journeys,
    • dependencies (XM Cloud, Content Hub, Search, CDP, external systems),
    • acceptance criteria and non-functional notes.
  2. Draft epics with an agent.
    For each journey in the taxonomy, I feed the requirements into an AI agent and ask for:

    • 3–7 epics,
    • each with a short description, acceptance criteria, and explicit references to the underlying documents (RFP sections, meeting notes, etc.).
  3. Review and normalize.
    I review epics for:

    • clarity (no tech jargon for business stakeholders),
    • testability (can QA or UAT verify this?),
    • alignment to Sitecore capabilities (for example using XM Cloud components, Experience Edge, or personalization features).
  4. Generate stories from epics.
    Once epics are stable, I ask the agent to propose user stories that:

    • use your team’s story template,
    • specify where data comes from (Experience Edge vs Content Hub vs custom APIs),
    • include analytics, a11y, and personalization notes upfront.
  5. Push into the ALM tool.
    Either manually or via API (a small sketch follows below), I import epics and stories into Jira/Azure DevOps. The Markdown stays as canonical documentation in git; the ALM system can be rebuilt if needed.

Output: Reviewed epics and stories traceable back to requirements, with clear acceptance criteria.
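
For the API route in step 5, a hypothetical sketch against the Jira Cloud REST API (v2 still accepts plain-text descriptions; the site URL, project key, file layout, and env vars are placeholders):

```ts
import { readFileSync } from "node:fs";

// Create one Jira story from a Markdown file; the Markdown in git stays canonical.
async function createStory(path: string) {
  const body = readFileSync(path, "utf8");
  const summary = body.split("\n")[0].replace(/^#+\s*/, ""); // first heading becomes the summary

  const res = await fetch("https://your-site.atlassian.net/rest/api/2/issue", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization:
        "Basic " +
        Buffer.from(`${process.env.JIRA_USER}:${process.env.JIRA_API_TOKEN}`).toString("base64"),
    },
    body: JSON.stringify({
      fields: {
        project: { key: "XMC" }, // hypothetical project key
        issuetype: { name: "Story" },
        summary,
        description: body,
      },
    }),
  });
  if (!res.ok) throw new Error(`Jira returned ${res.status}`);
  console.log("Created", (await res.json()).key);
}

createStory("docs/stories/checkout-hero.md");
```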

Prompt for story generation:

<context>
You are a senior Sitecore technical BA creating user stories for an XM Cloud + Next.js App Router project.

Tech stack context:
- Content delivery: Experience Edge GraphQL (Delivery and Preview endpoints)
- Rendering: Next.js with Content SDK, server components preferred
- Personalization: CDP/Personalize (if in scope)
- Search: Sitecore Search JS SDK (if in scope)
- Content operations: Content Hub DAM/CMP (if in scope)
</context>

<task>
Break down the provided epic into implementable user stories with XM Cloud-specific acceptance criteria.
</task>

<epic>
[Paste epic description here]
</epic>

<instructions>
1. Generate 3-5 user stories that collectively deliver the epic
2. Each story should be completable in one sprint (roughly 3-8 story points)
3. Consider the full XM Cloud delivery path: authoring → serialization → Edge → head
4. Include both functional and technical acceptance criteria
</instructions>

<output_format>
For each story, provide:

### Story [N]: [Short title]

**User Story:**
As a [specific persona from project taxonomy],
I want [concrete goal tied to a user action],
so that [measurable business benefit].

**Acceptance Criteria:**
```gherkin
Given [precondition - content exists, user state, etc.]
When [user action or system event]
Then [observable outcome]
And [additional verifiable behavior]
```

**Data Contract:**
[Fields consumed, their source (Experience Edge / Content Hub / custom API), and any GraphQL fragments required]

**XM Cloud Implementation Notes:**
[Templates, renderings, placeholders, or component registration involved]

**Non-Functional Requirements:**
[Performance, accessibility, analytics, and personalization notes]

**Dependencies:**
[Other stories, content readiness, external systems]
</output_format>

<quality_checks>
- Each story fits within one sprint (roughly 3-8 points)
- Acceptance criteria cover both functional and technical behavior
- Stories collectively cover the authoring → serialization → Edge → head path
- Data sources are explicit (Experience Edge vs Content Hub vs custom APIs)
</quality_checks>

BA workflow: traceability and change management

Goal: Ensure every deliverable maps back to a requirement and every requirement maps to real user value.

Steps:

  1. Ask an agent to build a traceability matrix.
    Starting from epics/stories and requirements, I ask for:

    • rows: requirements,
    • columns: epics, stories, components, tests.
      I save it as docs/traceability_matrix.md and keep it updated.
  2. Use AI to flag orphan work.
    Periodically, I have an agent scan the backlog and traceability matrix (a toy version of this scan follows below) to identify:

    • stories without linked requirements,
    • components that are not used by any journey,
    • requested work not tied to measurable KPIs.
  3. Run change-impact analyses.
    When a stakeholder asks for a change, I ask my BA agent: “Show me all epics, stories, components, and tests impacted by this change,” using the matrix and ALM exports as input.

Output: A living traceability matrix and a repeatable pattern for change-impact analysis—useful for any enterprise project, but especially for Sitecore where the CMS, personalization, and search layers each have their own moving parts.
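
The orphan scan doesn’t need anything fancy. A toy sketch, assuming the matrix is a Markdown pipe table with requirement, epic, story, component, and test columns (the column order is an assumption):

```ts
import { readFileSync } from "node:fs";

// Parse docs/traceability_matrix.md and list requirements with no story coverage.
const rows = readFileSync("docs/traceability_matrix.md", "utf8")
  .split("\n")
  .filter((l) => l.trim().startsWith("|") && !/^\|[\s|:-]+\|$/.test(l.trim()))
  .slice(1) // drop the header row
  .map((l) => l.split("|").map((c) => c.trim()).slice(1, -1));

for (const [requirement, epics, stories] of rows) {
  if (!stories) {
    console.log(`No stories cover ${requirement} (epics: ${epics || "none"})`);
  }
}
```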


Architect: turning constraints into designs

Architecture work on Sitecore projects is mostly about trade-offs: where does content live, how do we query it, what gets personalized, and how does it all fit together? AI helps me explore options faster and keep decisions documented—though the actual judgment calls are still mine.

Architect workflow: architecture vision and guardrails

Goal: Produce a concise architecture vision and a set of guardrails that frame every technical decision on the project.

Steps:

  1. Load platform constraints into the RAG.
    I include:

    • XM Cloud docs for Experience Edge, Content SDK, serialization, BYOC, and CI/CD.
    • Search JS SDK and CDP/Personalize docs if they are in scope.
    • Any enterprise constraints (network, identity, logging, observability).
  2. Ask for architecture options.
    I give my architect agent:

    • project requirements,
    • non-functional constraints (traffic, latency, geos, compliance),
    • technical preferences (Next.js App Router, Storybook, etc.),
      then ask for 2–3 architecture options with pros/cons and Sitecore-specific notes (for example Experience Edge usage patterns, personalization options, SCS module layouts).
  3. Draft the architecture vision.
    I pick the option I prefer and ask the agent to generate a short RFC-style document:

    • problem statement and scope,
    • high-level diagram (C4-style system/context),
    • key decisions (for example “XM Cloud as headless CMS using Experience Edge Delivery and Preview; Next.js with Content SDK as primary head”),
    • risks and assumptions.
  4. Define explicit guardrails.
    I ask the agent to extract guardrails like the following (one of them is made executable in the sketch below):

    • which data lives in XM Cloud vs Content Hub vs CDP,
    • which endpoints to use in which environments,
    • serialization rules (SCS modules, push/pull policies),
    • security constraints (no secrets in code, where keys live, how they rotate).

Output: An RFC-style architecture vision plus a guardrails list that both agents and humans reference.
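
Guardrails are most useful when at least some of them are enforced mechanically. A sketch of the endpoint rule (the env var name is an assumption; the Preview warning comes from Sitecore’s own docs):

```ts
// Fail fast if a production build points at the CM Preview GraphQL endpoint,
// which is not meant for production traffic. SITECORE_GRAPHQL_ENDPOINT is a
// hypothetical env var name; adapt the pattern to your head's config.
const endpoint = process.env.SITECORE_GRAPHQL_ENDPOINT ?? "";
const looksLikePreview = endpoint.includes("/sitecore/api/graph"); // CM-hosted pattern

if (process.env.NODE_ENV === "production" && looksLikePreview) {
  throw new Error(`Guardrail violation: production build uses a Preview endpoint (${endpoint})`);
}
```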

Prompt for architecture options:

<context>
You are a senior Sitecore solutions architect with deep experience in XM Cloud, composable DXP, and headless architectures. You understand:
- Experience Edge delivery patterns and rate limits (80 req/s uncached)
- Content SDK capabilities and limitations
- SCS (Sitecore Content Serialization) module design
- Next.js App Router rendering strategies (SSG, SSR, ISR)
- Sitecore Search, CDP/Personalize, and Content Hub integration patterns
</context>

<task>
Propose 2-3 architecture options for the project, with clear trade-offs and Sitecore-specific considerations.
</task>

<project_constraints>
[Project requirements, non-functional constraints (traffic, latency, geos, compliance), technical preferences]
</project_constraints>

<instructions>
For each architecture option:
  1. Summarize the approach in 2-3 sentences
  2. Define the content architecture:
    • What content types live in XM Cloud vs Content Hub vs external systems
    • Template hierarchy strategy (inheritance, shared templates)
    • SCS module boundaries (what gets serialized where)
  3. Define the delivery architecture:
    • Experience Edge query patterns (layout queries vs item queries)
    • Caching strategy (ISR intervals, on-demand revalidation, Edge caching)
    • Preview vs delivery endpoint usage
  4. Define the personalization approach (if applicable):
    • Rule-based (XM Cloud embedded) vs CDP-driven
    • Where personalization decisions happen (Edge, server, client)
  5. Identify integration points:
    • How each Sitecore product connects
    • External system integration patterns
  6. Assess trade-offs:
    • Pros (what this optimizes for)
    • Cons (what you sacrifice)
    • Cost implications (licensing, infrastructure, maintenance)
  7. Flag Sitecore-specific risks:
    • Serialization complexity
    • Edge rate limits and mitigation
    • Preview API limitations (not for production traffic)
    • Content Hub sync timing (if applicable)
</instructions>

<output_format>

## Option 1: [Descriptive name]

### Summary
[2-3 sentence overview]

### Content Architecture
| Content Type | Location | Rationale |
|--------------|----------|-----------|
| [Type] | [XM Cloud / Content Hub / External] | [Why] |

**SCS Module Strategy:**
- Foundation/[Module]: [What it contains]
- Feature/[Module]: [What it contains]
- Project/[Site]: [What it contains]

### Delivery Architecture
[Simple ASCII or description of request flow]
User → CDN → Next.js (Vercel/Node) → Experience Edge → [Response]

**Caching Strategy:**
| Route Pattern | Strategy | TTL | Revalidation Trigger |
|---------------|----------|-----|----------------------|
| / | ISR | 60s | Webhook on publish |
| /products/* | ISR | 300s | Webhook on publish |
| /search | SSR | None | - |

### Personalization Approach
[Description or “Not applicable”]

### Integration Points
| System A | System B | Pattern | Notes |
|----------|----------|---------|-------|
| XM Cloud | Content Hub DAM | [Connector / API / Manual] | [Details] |

### Trade-offs
**Pros:**
- [Benefit 1]
- [Benefit 2]

**Cons:**
- [Limitation 1]
- [Limitation 2]

**Cost Considerations:**
- [Licensing, infrastructure, or operational cost notes]

### Sitecore-Specific Risks
| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| [Risk] | [H/M/L] | [H/M/L] | [How to address] |

[Repeat for Options 2 and 3]
</output_format>

Cite Sitecore documentation where relevant:
- Experience Edge: https://doc.sitecore.com/xmc/en/developers/xm-cloud/experience-edge-for-xm-cloud.html
- Content SDK: https://doc.sitecore.com/xmc/en/developers/content-sdk/index-en.html
- SCS: https://doc.sitecore.com/xmc/en/developers/xm-cloud/sitecore-content-serialization.html
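
The “Webhook on publish” trigger in the caching table maps to a small App Router route handler. A minimal sketch (the header name and payload shape are assumptions; align them with your XM Cloud webhook configuration):

```ts
// app/api/revalidate/route.ts
import { revalidatePath } from "next/cache";
import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  // Shared-secret check; the header name is illustrative
  if (req.headers.get("x-revalidate-secret") !== process.env.REVALIDATE_SECRET) {
    return NextResponse.json({ ok: false }, { status: 401 });
  }
  // Assumes the publish webhook payload has been mapped to a route path
  const { path = "/" } = await req.json();
  revalidatePath(path);
  return NextResponse.json({ ok: true, revalidated: path });
}
```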

Architect workflow: component and data design

Goal: Translate IA and requirements into content models, components, and data flows that are realistic for XM Cloud and composable Sitecore.

Steps:

  1. Start from your component matrix.
    Use the output of your “Code Generation” series: the Playwright crawl, matrix, and components/*.md specs.

  2. Model content types.
    Ask your agent to propose:

    • templates and content types in XM Cloud (or in Content Hub, if content is shared across channels),
    • field definitions and constraints,
    • relationships between entities (for example articles, authors, categories, products).
  3. Validate against Sitecore docs.
    Cross-check:

    • whether your templates align with component usage patterns in XM Cloud components docs,
    • whether Experience Edge queries will be efficient for your access patterns,
    • whether personalization rules have the data they need.
  4. Document data flows.
    Ask an agent to draw sequence diagrams (in Mermaid or similar; see the example below) for:

    • page render in preview vs delivery,
    • personalization decision flows,
    • integrations (for example XM Cloud → Connect → Salesforce).

Output: Content models, component designs, and data-flow diagrams in docs/architecture/.
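
As an example of step 4, here is a simplified Mermaid sequence diagram for a delivery-mode page render (details vary with your caching setup):

```mermaid
sequenceDiagram
    participant V as Visitor
    participant H as Next.js head
    participant E as Experience Edge (Delivery)
    V->>H: GET /products/widget
    H->>E: GraphQL layout query for the route
    E-->>H: Layout JSON + component data (Edge-cached)
    H-->>V: Rendered page (ISR-cached HTML)
```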

Prompt for component/template design:

<context>
You are a Sitecore XM Cloud content architect designing templates and components for a headless implementation. You understand:
- XM Cloud template inheritance and standard values
- Content SDK field types and GraphQL schema generation
- Experience Editor / Pages authoring experience
- BYOC (Bring Your Own Components) registration patterns
- Rendering parameters vs datasource approaches
</context>

<task>
Design XM Cloud templates, field definitions, and component mappings based on the provided component matrix.
</task>

<component_matrix>
[Paste component list with fields, or reference the component specs folder]
</component_matrix>

<instructions>
1. **Design template hierarchy:**
   - Identify shared base templates (reduce field duplication)
   - Define page templates vs component/datasource templates
   - Plan standard values and insert options

2. **Define field specifications:**
   - Map each component field to a Sitecore field type
   - Include validation rules and help text for authors
   - Consider source fields for Droplists/Multilists

3. **Separate rendering parameters from datasource fields:**
   - Rendering parameters: display options, layout choices (no content value)
   - Datasource fields: actual content that varies per instance

4. **Design placeholder structure:**
   - Define placeholder names and allowed renderings
   - Plan nested placeholder patterns for complex layouts

5. **Optimize for GraphQL:**
   - Avoid deeply nested templates that complicate queries
   - Consider field naming for clean GraphQL schema
</instructions>

<output_format>
## Template Hierarchy

### Base Templates
| Template Name | Path | Purpose | Inherited By |
|---------------|------|---------|--------------|
| _ContentBase | /Foundation/Content | Common fields (Title, Description) | All content templates |
| _MediaBase | /Foundation/Media | Image, Video fields | Media-rich components |

### Page Templates
| Template Name | Path | Base Templates | Insert Options |
|---------------|------|----------------|----------------|
| HomePage | /Project/[Site]/Pages | _ContentBase, _SEO | HeroBanner, ContentBlock |

### Component Templates (Datasources)
| Template Name | Path | Base Templates | Used By Rendering |
|---------------|------|----------------|-------------------|
| AccordionItem | /Project/[Site]/Components | _ContentBase | Accordion |

## Field Definitions

### [Component Name] Template
| Field Name | Display Name | Type | Source | Required | Help Text |
|------------|--------------|------|--------|----------|-----------|
| Title | Title | Single-Line Text | - | Yes | "Main heading (50 chars max)" |
| Body | Body Content | Rich Text | - | No | "Supports links and lists" |
| Theme | Background Theme | Droplist | /Content/Lookups/Themes | No | "Visual style variant" |

**Standard Values:**
- Theme: "Default"

## Rendering Parameters vs Datasource

| Field | Location | Rationale |
|-------|----------|-----------|
| BackgroundColor | Rendering Parameter | Display option, not content |
| Title | Datasource | Actual content, varies per use |
| MaxItems | Rendering Parameter | Layout control |

## Placeholder Structure

main-content
├── hero (allowed: HeroBanner, VideoHero)
├── body-content (allowed: ContentBlock, Accordion, CardGrid)
│   └── card-grid-items (allowed: Card) [nested]
└── cta-section (allowed: CTABanner)


## GraphQL Considerations
- [Notes on query patterns, fragment reuse, etc.]
</output_format>

<quality_checks>
- Every component from the matrix has a corresponding template
- Field types are appropriate for Content SDK serialization
- Author-facing names are clear and consistent
- No orphan templates (all are used by at least one rendering)
</quality_checks>
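
To sanity-check the “GraphQL Considerations” in practice, I sometimes query Experience Edge directly. A sketch (the site name and env var are placeholders; key handling depends on whether you use API-key or context-ID auth, so verify against the Edge docs):

```ts
// Query Experience Edge Delivery for a route's layout JSON.
const EDGE_URL = "https://edge.sitecorecloud.io/api/graphql/v1";

const query = /* GraphQL */ `
  query Route($path: String!) {
    layout(site: "my-site", routePath: $path, language: "en") {
      item {
        rendered
      }
    }
  }
`;

async function fetchRoute(path: string) {
  const res = await fetch(EDGE_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      sc_apikey: process.env.SITECORE_API_KEY ?? "",
    },
    body: JSON.stringify({ query, variables: { path } }),
  });
  const { data } = await res.json();
  return data?.layout?.item?.rendered; // full layout JSON for the route
}

fetchRoute("/products/widget").then((layout) => console.log(Boolean(layout)));
```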

Architect workflow: decision records and technical debt management

Goal: Track why you made certain decisions and when to revisit them.

Steps:

  1. Adopt a decision note template.
    Add docs/adr/template.md with:

    • context, decision, alternatives, consequences, review date.
  2. Use AI to draft decision notes.
    For each significant choice (for example Experience Edge strategy, serialization layout, hosting model), ask an agent to generate a draft ADR from your architecture vision and notes. You edit and approve it.

  3. Tag decisions with revisit triggers.
    Ask the agent to propose revisit triggers (for example “traffic increases 10×,” “Content Hub DAM is introduced,” “Search quotas change”) and include them in the ADR.

  4. Periodically review decision notes with AI assistance.
    Once per quarter or per big release, ask an agent to:

    • scan ADRs,
    • compare them against current metrics and constraints,
    • propose which decisions may need revisiting.

Output: A decision log that actually gets maintained. AI makes drafting fast enough that I don’t skip it.
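
To make the quarterly pass in step 4 cheap to start, a tiny script can surface which ADRs are due (assumes each ADR carries a “Review date: YYYY-MM-DD” line, per the template):

```ts
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// List ADRs whose review date has passed, so the review agent gets a shortlist.
const dir = "docs/adr";
for (const file of readdirSync(dir).filter((f) => f.endsWith(".md"))) {
  const text = readFileSync(join(dir, file), "utf8");
  const match = text.match(/Review date:\*{0,2}\s*(\d{4}-\d{2}-\d{2})/i);
  if (match && new Date(match[1]) < new Date()) {
    console.log(`${file}: review date ${match[1]} has passed`);
  }
}
```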

Prompt for ADR drafting:

<context>
You are a technical writer documenting architecture decisions for a Sitecore XM Cloud project. ADRs will be reviewed by:
- Future developers who need to understand why choices were made
- Architects evaluating whether to revisit decisions
- Stakeholders assessing technical debt and risks
</context>

<task>
Draft a complete Architecture Decision Record based on the provided decision context.
</task>

<decision_input>
**Problem:** [What problem or question triggered this decision]
**Decision:** [What we chose to do]
**Alternatives considered:** [Other options we evaluated]
**Key constraints:** [What limited our options - budget, timeline, team skills, licensing]
</decision_input>

<instructions>
1. Frame the context so someone unfamiliar with the project understands the problem
2. Explain the decision with enough detail to reproduce the reasoning
3. Document alternatives honestly - why they were rejected, not just that they were
4. Be specific about consequences - both positive outcomes and new constraints/risks
5. Define concrete, measurable revisit triggers
</instructions>

<output_format>
# ADR-[NNN]: [Concise decision title]

**Status:** Proposed | Accepted | Superseded | Deprecated
**Date:** [YYYY-MM-DD]
**Deciders:** [Names/roles involved]
**Technical area:** [Content architecture | Delivery | Integration | Infrastructure | Security]

## Context

[2-3 paragraphs explaining:]
- What problem or opportunity prompted this decision
- What constraints shaped our options (timeline, budget, team, licensing)
- What requirements or goals this decision must satisfy

### Sitecore-Specific Context
[If applicable: relevant XM Cloud capabilities, limitations, or patterns that influenced the decision]

## Decision

**We will:** [Clear statement of what we're doing]

**Rationale:**
1. [Primary reason - most important factor]
2. [Secondary reason]
3. [Additional supporting factors]

### Implementation Notes
- [Key implementation details]
- [Configuration or setup required]
- [Dependencies on other decisions]

## Alternatives Considered

### Alternative 1: [Name]
- **Description:** [What this option would have meant]
- **Pros:** [Benefits]
- **Cons:** [Drawbacks]
- **Rejection reason:** [Why we didn't choose this]

### Alternative 2: [Name]
[Same structure]

## Consequences

### Positive
- [Benefit 1 - be specific]
- [Benefit 2]

### Negative
- [Trade-off or limitation 1]
- [New constraint introduced]

### Neutral
- [Changes that are neither good nor bad, just different]

## Risks

| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| [Risk description] | Low/Medium/High | Low/Medium/High | [How we'll address it] |

## Revisit Triggers

This decision should be reconsidered if:
- [ ] [Concrete, measurable condition, e.g., "Monthly page views exceed 10M"]
- [ ] [Scope change, e.g., "Content Hub CMP is added to the project"]
- [ ] [Technology change, e.g., "Sitecore releases a native solution for X"]
- [ ] [Time-based, e.g., "12 months after go-live for performance review"]

## Related Decisions
- ADR-[NNN]: [Related decision title] - [How they relate]

## References
- [Link to relevant Sitecore documentation]
- [Link to internal design docs or RFCs]
</output_format>

<quality_checks>
- Context is understandable without prior project knowledge
- Decision rationale is traceable to stated constraints
- Alternatives are fairly represented (not straw men)
- Revisit triggers are specific and measurable
</quality_checks>

Project manager: keeping things visible

PM work is often just communication and bookkeeping—roadmaps, status updates, risk logs, retros. None of it is hard, but it eats time. AI handles the repetitive synthesis so I can focus on the conversations that actually move things forward.

PM workflow: delivery model and roadmap

Goal: Choose a delivery model (phased releases, parallel tracks, etc.) and keep a clear roadmap aligned to capacity.

Steps:

  1. Feed the agent delivery constraints.
    I provide:

    • team composition and capacity,
    • timelines and key milestones,
    • dependency constraints (for example content readiness, external systems).
  2. Ask for delivery scenarios.
    I ask for 2–3 delivery models (for example “MVP, then composable add‑ons,” “journey‑by‑journey rollout”) with pros/cons, emphasizing Sitecore‑specific risks: content migration, personalization ramp‑up, search tuning, etc.

  3. Turn the chosen model into a roadmap.
    I have the agent:

    • map epics to releases,
    • highlight cross-team dependencies,
    • suggest buffer and risk mitigation activities (for example early content modeling, early search setup).
  4. Export a shareable roadmap.
    Together we generate:

    • a Markdown roadmap (committed to git),
    • a slide for stakeholders (for example a simple swim-lane diagram).

Output:
An AI-assisted but human-curated roadmap that is consistent with architecture, requirements, and team capacity.

PM workflow: RAID logs and risk intelligence

Goal: Keep risks, assumptions, issues, and dependencies visible and actionable.

Steps:

  1. Create a structured RAID log.
    I maintain docs/raid/raid_log.md with sections for risks, assumptions, issues, and dependencies.

  2. Use AI to ingest meeting notes.
    After key meetings, I paste notes or transcripts into my PM agent and ask:

    • “Extract new risks, assumptions, issues, and dependencies,”
    • “Suggest owners and due dates where obvious.”
  3. Ask for risk heatmaps.
    Have the agent summarize RAID entries as:

    • a prioritized risk list (impact vs likelihood; a toy scoring sketch follows below),
    • visual-friendly bullets for stakeholder updates.
  4. Tie RAID items to work.
    I ask the agent to suggest which epics/stories should mitigate a given risk and convert those suggestions into backlog items or checklists.

Output: A living RAID log that stays current without manual retyping.
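
The prioritized list in step 3 is essentially a scoring pass. A toy sketch of the ranking, with the risk shape mirroring the RAID table columns (the data here is illustrative):

```ts
type Level = "H" | "M" | "L";
const weight: Record<Level, number> = { H: 3, M: 2, L: 1 };

interface Risk {
  id: string;
  description: string;
  likelihood: Level;
  impact: Level;
}

const risks: Risk[] = [
  { id: "R-001", description: "Content migration slips past UAT start", likelihood: "M", impact: "H" },
  { id: "R-002", description: "Edge rate limits hit during load tests", likelihood: "L", impact: "H" },
];

// Rank by likelihood x impact, highest first
const ranked = [...risks].sort(
  (a, b) => weight[b.likelihood] * weight[b.impact] - weight[a.likelihood] * weight[a.impact],
);
for (const r of ranked) {
  console.log(`${r.id} [${r.likelihood}/${r.impact}] ${r.description}`);
}
```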

Prompt for RAID extraction:

<context>
You are a project manager analyzing meeting notes for a Sitecore XM Cloud implementation. You understand common XM Cloud project risks including:
- Content migration complexity and timing
- Experience Edge rate limits and caching strategies
- Serialization conflicts in multi-developer environments
- Third-party integration dependencies (CDP, Search, Content Hub)
- Author training and adoption challenges
</context>

<task>
Extract and categorize RAID items (Risks, Assumptions, Issues, Dependencies) from the provided meeting notes.
</task>

<meeting_notes>
[Paste meeting notes, transcript, or summary here]
</meeting_notes>

<instructions>
1. **Identify Risks** — potential problems that haven't happened yet
   - Consider technical, organizational, timeline, and budget risks
   - Flag XM Cloud-specific risks (Edge limits, preview API constraints, SCS conflicts)

2. **Surface Assumptions** — things we're treating as true without verification
   - Challenge assumptions that could derail the project if wrong
   - Note which assumptions need validation and by whom

3. **Capture Issues** — problems that already exist and need resolution
   - Distinguish blockers (stopping work) from impediments (slowing work)
   - Note any workarounds currently in place

4. **Map Dependencies** — external factors we're waiting on
   - Include both upstream (we need X) and downstream (Y needs us)
   - Note expected resolution dates if mentioned
</instructions>

<output_format>
## RAID Log Update — [Meeting Date]

### New Risks
| ID | Risk Description | Category | Likelihood | Impact | Owner | Mitigation | Due |
|----|------------------|----------|------------|--------|-------|------------|-----|
| R-[NNN] | [One sentence describing what might go wrong] | [Technical / Timeline / Budget / Resource / External] | H/M/L | H/M/L | [Name/Role] | [Proposed mitigation action] | [Date] |

**Sitecore-Specific Risks Flagged:**
- [Any risks related to XM Cloud, Edge, Content Hub, etc.]

### New Assumptions
| ID | Assumption | Category | Impact if Wrong | Needs Validation By | Status |
|----|------------|----------|-----------------|---------------------|--------|
| A-[NNN] | [What we're assuming] | [Technical / Business / Resource] | [What happens if false] | [Who should verify] | Unvalidated / Validated / Invalid |

### New Issues
| ID | Issue Description | Severity | Blocker? | Current Workaround | Owner | Target Resolution |
|----|-------------------|----------|----------|-------------------|-------|-------------------|
| I-[NNN] | [What's wrong now] | High/Med/Low | Yes/No | [If any] | [Name] | [Date] |

### New Dependencies
| ID | We Need | From | For (Epic/Story) | Expected Date | Status | Escalation Path |
|----|---------|------|------------------|---------------|--------|-----------------|
| D-[NNN] | [What we need] | [Team/System/Person] | [What it unblocks] | [When expected] | Pending/At Risk/Resolved | [Who to escalate to] |

### Recommended Actions
1. [Immediate action needed]
2. [Follow-up for next meeting]
3. [Escalation if required]

### Items Requiring Stakeholder Decision
- [Any items that need client or leadership input]
</output_format>

<quality_checks>
- Each item has a clear, actionable description (not vague)
- Owners are assigned where identifiable from context
- Impact ratings are justified by project context
- No duplicate items (check existing RAID log)
- Sitecore-specific concerns are explicitly called out
</quality_checks>

PM workflow: status, stakeholder updates, and retros

Goal: Spend less time manually writing updates and more time resolving real issues.

Steps:

  1. Automate status rollups.
    I provide:

    • ALM exports (sprint board, burndown, cycle time),
    • RAID log,
    • the roadmap.
      Then I ask the agent to generate:
    • a weekly internal status (for the team),
    • an executive summary (1–2 slides, non-technical language).
  2. Drive better retrospectives.
    After each sprint, I give the agent:

    • completed work,
    • incidents,
    • cycle-time metrics,
    • key decisions and changes.
      Then I ask it to propose:
    • themes for “what went well / what to improve,”
    • 3–5 concrete improvement actions,
    • owners and suggested timings.
  3. Keep the loop closed.
    At the start of the next sprint, I revisit the previous retro actions with an agent and confirm status. This ensures improvements actually happen.

Output: Consistent status updates and better retros with less manual effort.

Prompt for status rollup:

<context>
You are a project manager creating status updates for a Sitecore XM Cloud implementation. You understand:
- XM Cloud delivery phases (content modeling, component development, integration, UAT)
- Common stakeholder concerns (timeline, budget, quality, adoption)
- Technical milestones (Edge connectivity, content migration, personalization setup)
</context>

<task>
Generate a weekly status update with both internal (team) and external (stakeholder) versions.
</task>

<inputs>
**Sprint Board Status:**
[Paste or describe: completed items, in-progress items, blocked items]

**RAID Log Updates:**
[New risks, resolved issues, changed assumptions, dependency status]

**Key Decisions This Week:**
[Architecture decisions, scope changes, process changes]

**Metrics (if available):**
[Velocity, burndown, cycle time, defect count]

**Upcoming Milestones:**
[Next major deliverables and dates]
</inputs>

<instructions>
1. **Analyze sprint progress** — what moved, what's stuck, why
2. **Highlight blockers** — distinguish team-solvable from escalation-needed
3. **Connect to milestones** — is the current pace on track for upcoming dates?
4. **Surface decisions** — what was decided and what still needs decision
5. **Identify escalations** — anything executives need to know or act on
</instructions>

<output_format>
---
## Internal Status (Team)

### Sprint [N] Progress — Week of [Date]

**Overall Health:** 🟢 On Track | 🟡 At Risk | 🔴 Off Track

#### Completed This Week
| Story/Task | Points | Notes |
|------------|--------|-------|
| [ID]: [Title] | [N] | [Any relevant context] |

#### In Progress
| Story/Task | Points | % Complete | Owner | Blocker? |
|------------|--------|------------|-------|----------|
| [ID]: [Title] | [N] | [%] | [Name] | [None / Description] |

#### Blocked
| Story/Task | Blocked By | Impact | Resolution Path | ETA |
|------------|------------|--------|-----------------|-----|
| [ID]: [Title] | [What's blocking] | [What can't proceed] | [How we'll unblock] | [Date] |

#### XM Cloud-Specific Progress
- **Content Modeling:** [Status — % templates complete, serialization stable?]
- **Component Development:** [Status — Storybook coverage, BYOC registration]
- **Edge Integration:** [Status — Preview/Delivery endpoints, caching strategy]
- **Content Migration:** [Status — if applicable]

#### RAID Updates
- **New Risks:** [Brief list or "None"]
- **Resolved Issues:** [Brief list or "None"]
- **Dependencies Changed:** [Brief list or "None"]

#### Decisions Made
1. [Decision and rationale summary]

#### Decisions Needed
1. [Decision needed, who needs to make it, deadline]

#### Technical Debt / Cleanup
- [Any debt incurred this sprint]

---
## Executive Summary (Stakeholders)

### Project Status — [Date]

**Overall:** 🟢 On Track | 🟡 At Risk | 🔴 Off Track

**Progress Highlights:**
- ✅ [Major accomplishment 1 — business language]
- ✅ [Major accomplishment 2]
- 🔄 [What's actively being worked on]

**Milestone Update:**
| Milestone | Target Date | Status | Confidence |
|-----------|-------------|--------|------------|
| [Milestone name] | [Date] | On Track / At Risk / Complete | High / Medium / Low |

**Attention Needed:**
- ⚠️ [Any item requiring stakeholder awareness or action]

**Key Metrics:**
- Sprint velocity: [N] points (target: [N])
- Defects: [N] open ([+/-N] from last week)

**Budget/Timeline Impact:** [None / Description of any changes]

**Next Week Focus:**
1. [Priority 1]
2. [Priority 2]
</output_format>

<escalation_criteria>
Flag for escalation if:
- Blocker persists > 3 days without resolution path
- Milestone confidence drops to "Low"
- Scope change impacts budget or timeline
- External dependency misses committed date
- Technical risk could impact go-live
</escalation_criteria>

Governance: keeping it useful

A few practices that keep AI helpful rather than chaotic:

  • Single source of truth in git.
    Requirements, architecture, prompts, and checklists live beside code. ALM tools and docs can be regenerated from this source.

  • Explicit “do not change” rules.
    Maintain AGENTS.md and similar files explaining what agents may touch (Markdown docs, specs, tests) and what they must not change directly (secrets, environment configs, serialized items).

  • Citations required.
    For anything related to Sitecore behavior, require agents to cite official docs or a project RAG source. If a suggestion has no citation, treat it as speculative.

  • Human sign-off for decisions.
    Epics, architectures, roadmaps, and risks are always reviewed before they become canonical. AI drafts; I approve.
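
To make the “do not change” rules more than a convention, a CI check can diff an agent’s branch against an allowlist. A minimal sketch (the path patterns are illustrative; derive them from your AGENTS.md):

```ts
import { execSync } from "node:child_process";

// Paths agents are allowed to modify; everything else fails the build.
const allowed = [/^docs\//, /^components\/.*\.md$/, /^tests\//];

const changed = execSync("git diff --name-only origin/main...HEAD")
  .toString()
  .split("\n")
  .filter(Boolean);

const violations = changed.filter((f) => !allowed.some((rx) => rx.test(f)));
if (violations.length > 0) {
  console.error("Agent touched protected paths:\n" + violations.join("\n"));
  process.exit(1);
}
```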

These patterns work for me. Your mileage may vary—AI tooling changes fast, and what works today might look different in six months. The underlying idea (delegate repetitive synthesis, keep humans on decisions) should hold up longer than any specific tool.


Related posts in the AI-Powered Stack series:

  • AI Workflows — Editorial copilot for XM Cloud pages with "on your data" AI
  • AI-Powered Stack — Sprint zero for XM Cloud with agentic AI