
AI-Powered Stack — Working with AI as your Sitecore BA, architect, and PM


Modern Sitecore projects put enormous pressure on tiny teams. On most of my engagements I am effectively a company of one: the same person is asked to play business analyst, solution architect, and de‑facto PM while still shipping features. An AI‑powered stack changes that equation for me: instead of trying to be three people at once, I orchestrate a small bench of agents that each specialize in a slice of the work—while I keep ownership of decisions and quality.

This post turns the high‑level AI stack from the previous article into the role‑specific workflows I actually run for Sitecore BAs, architects, and PMs. The goal is not “let AI do everything,” but rather “let AI handle the repetitive, document‑heavy, easily auditable work, so I can focus on judgment, negotiation, and trade‑offs.”

At a high level my “virtual team” for these roles looks like this:

  • Business analyst agents (NotebookLM, ChatGPT Pro): requirements & backlog
  • Architect agents (Claude Code, ChatGPT Pro): architecture & decision notes
  • Project manager agents (ChatGPT Pro): roadmaps & RAID log

In the rest of the post I will assume you are working on XM Cloud + Next.js/App Router + Content SDK, often alongside Sitecore Search, Content Hub, and Customer Data Platform (CDP) / Personalize, and I will walk through how I use these agents in practice.


Principles for AI-powered roles on Sitecore projects

Before diving into each role, it helps to agree on a few principles:

  • AI handles the repetitive, document-heavy, easily auditable work; I keep ownership of decisions, negotiation, and trade-offs.
  • Agents are grounded in project sources with citations (RAG), so answers can be traced back instead of hallucinated.
  • Every AI draft is a proposal until a human reviews and approves it.
  • Markdown in git is the canonical record; ALM tools, slides, and dashboards are derived views that can be rebuilt.

With those principles in place, here is how I use AI for each role.


Business analyst: from messy inputs to clear requirements

When I wear the BA hat, I sit between stakeholders and delivery. My BA‑oriented agents help me synthesize inputs, spot gaps, and keep requirements consistent across channels and products.

BA workflow: capture and normalize discovery inputs

Goal: Turn RFPs, notes, recordings, and legacy documentation into a single source of truth you can query and evolve.

Steps:

  1. Centralize sources.

    • I drop RFPs, pitch decks, previous SOWs, and CRM notes into a NotebookLM notebook or similar project RAG.
    • I store the same documents (minus anything too sensitive) in a private Azure storage account and index them with Azure AI Search, wired into Azure OpenAI “On your data” so later prompts get citations instead of hallucinations.
  2. Create a discovery taxonomy.
    I ask my BA agent to propose a simple taxonomy (for example: personas, journeys, channels, KPIs, content types, features, constraints) and save it as docs/discovery_taxonomy.md.

  3. Run structured Q&A.
    Using ChatGPT or Claude Code with the RAG attached:

    • Ask for summaries per persona and per journey.
    • Ask it to list explicit requirements, implicit requirements, and “unknowns” that must be clarified.
    • Export results into Markdown files under docs/requirements/.
  4. Turn “unknowns” into stakeholder questions.
    I have the agent convert unknowns into specific questions I can ask during workshops. This becomes my discovery backlog.
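Step 1's grounding setup can be sketched in code. This is a minimal sketch of a request body for Azure OpenAI's "On your data" feature, which attaches the Azure AI Search index so answers come back with citations; the endpoint, index name, and key below are placeholders, not values from a real project:

```python
# Sketch: ground chat completions in the discovery index via
# Azure OpenAI "On your data" (the data_sources extension).
# Endpoint, index name, and key are placeholders.

def build_grounded_request(question: str, search_endpoint: str,
                           index_name: str, search_key: str) -> dict:
    """Build a chat request body that answers only from the indexed documents."""
    return {
        "messages": [
            {"role": "system",
             "content": "Answer only from the indexed discovery documents and cite sources."},
            {"role": "user", "content": question},
        ],
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": search_endpoint,
                    "index_name": index_name,
                    "authentication": {"type": "api_key", "key": search_key},
                },
            }
        ],
    }

# The body is then sent with the Azure OpenAI SDK, roughly:
# client.chat.completions.create(model=deployment,
#                                messages=body["messages"],
#                                extra_body={"data_sources": body["data_sources"]})
```

The same body shape works for the structured Q&A in step 3; only the user message changes per question.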

Output:
A docs/requirements/ folder with per-persona and per-journey summaries, lists of explicit and implicit requirements, and the discovery backlog of open stakeholder questions.

BA workflow: epics, stories, and acceptance criteria

Goal: Convert the synthesized requirements into a backlog that development can estimate and build.

Steps:

  1. Define issue templates.
    In my repos I add Markdown templates for epics and stories (for example docs/templates/epic.md, docs/templates/story.md) that include:

    • problem statement,
    • target personas and journeys,
    • dependencies (XM Cloud, Content Hub, Search, CDP, external systems),
    • acceptance criteria and non-functional notes.
  2. Draft epics with an agent.
    For each journey in the taxonomy, I feed the requirements into an AI agent and ask for:

    • 3–7 epics,
    • each with a short description, acceptance criteria, and explicit references to the underlying documents (RFP sections, meeting notes, etc.).
  3. Review and normalize.
    I review epics for:

    • clarity (no tech jargon for business stakeholders),
    • testability (can QA or UAT verify this?),
    • alignment to Sitecore capabilities (for example using XM Cloud components, Experience Edge, or personalization features).
  4. Generate stories from epics.
    Once epics are stable, I ask the agent to propose user stories that:

    • use the team’s story template,
    • specify where data comes from (Experience Edge vs Content Hub vs custom APIs),
    • include analytics, a11y, and personalization notes upfront.
  5. Push into the ALM tool.
    Either manually or via API, I import epics and stories into Jira/Azure DevOps. The Markdown stays as canonical documentation in git; the ALM system can be rebuilt if needed.
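For the API route in step 5, a minimal sketch of mapping a parsed story onto Jira's create-issue body (POST /rest/api/2/issue); the project key, labels, and field mapping are illustrative assumptions, and real projects usually need custom fields too:

```python
# Sketch: turn one docs/templates/story.md entry into a Jira issue payload.
# Project key and field mapping are illustrative, not from a real project.

def build_story_payload(project_key: str, summary: str,
                        description: str, labels=None) -> dict:
    """Map a Markdown story onto Jira's create-issue request body."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Story"},
            "summary": summary,
            "description": description,
            "labels": labels or [],
        }
    }

# Example:
# requests.post(f"{base_url}/rest/api/2/issue",
#               json=build_story_payload("WEB", "Article page hero",
#                                        "As a reader, I want ..."),
#               auth=(user, api_token))
```

Because the Markdown stays canonical in git, this import can be rerun to rebuild the ALM backlog at any time.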

Output:
Reviewed epics and stories that are traceable back to requirements, with clear acceptance criteria and Sitecore-specific notes.

BA workflow: traceability and change management

Goal: Ensure every deliverable maps back to a requirement and every requirement maps to real user value.

Steps:

  1. Ask an agent to build a traceability matrix.
    Starting from epics/stories and requirements, I ask for:

    • rows: requirements,
    • columns: epics, stories, components, tests.
      Save it as docs/traceability_matrix.md and keep it updated.
  2. Use AI to flag orphan work.
    Periodically, I have an agent scan the backlog and traceability matrix to identify:

    • stories without linked requirements,
    • components that are not used by any journey,
    • requested work not tied to measurable KPIs.
  3. Run change-impact analyses.
    When a stakeholder asks for a change, I ask my BA agent:
    “Show me all epics, stories, components, and tests impacted by this change,”
    using the matrix and ALM exports as input.
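The orphan scan in step 2 doesn't have to rely on the agent alone; once the matrix data is structured, the check is deterministic. A minimal sketch, assuming each story record carries its linked requirement IDs (the input shapes are assumptions for illustration):

```python
# Sketch: flag orphan stories and uncovered requirements from
# traceability data. Input shapes are assumed for illustration.

def scan_traceability(requirements: set, stories: list):
    """Return (stories with no linked requirement,
               requirements not covered by any story)."""
    linked = {req for story in stories
              for req in story.get("requirements", [])}
    orphan_stories = [s["id"] for s in stories if not s.get("requirements")]
    uncovered = sorted(requirements - linked)
    return orphan_stories, uncovered
```

Running this on every backlog export keeps the agent's job focused on explaining the gaps rather than finding them.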

Output:
A living traceability matrix and a repeatable pattern for change-impact analysis—critical for enterprise Sitecore projects.


Architect: from vision and constraints to concrete designs

When I switch into architect mode, AI helps me synthesize constraints, evaluate options, and document decisions—without outsourcing judgment.

Architect workflow: architecture vision and guardrails

Goal: Produce a concise architecture vision and a set of guardrails that frame every technical decision on the project.

Steps:

  1. Load platform constraints into the RAG.
    I include:

    • XM Cloud docs for Experience Edge, Content SDK, serialization, BYOC, and CI/CD.
    • Search JS SDK and CDP/Personalize docs if they are in scope.
    • Any enterprise constraints (network, identity, logging, observability).
  2. Ask for architecture options.
    I give my architect agent:

    • project requirements,
    • non-functional constraints (traffic, latency, geos, compliance),
    • technical preferences (Next.js App Router, Storybook, etc.).
      Ask for 2–3 architecture options with pros/cons and Sitecore-specific notes (for example Experience Edge usage patterns, personalization options, SCS module layouts).
  3. Draft the architecture vision.
    I pick the option I prefer and ask the agent to generate a short RFC‑style document:

    • problem statement and scope,
    • high-level diagram (C4-style system/context),
    • key decisions (for example “XM Cloud as headless CMS using Experience Edge Delivery and Preview; Next.js with Content SDK as primary head”),
    • risks and assumptions.
  4. Define explicit guardrails.
    I ask the agent to extract guardrails like:

    • which data lives in XM Cloud vs Content Hub vs CDP,
    • which endpoints to use in which environments,
    • serialization rules (SCS modules, push/pull policies),
    • security constraints (no secrets in code, where keys live, how they rotate).

Output:
An RFC-style architecture vision document plus a guardrails list that all agents (and humans) reference.

Architect workflow: component and data design

Goal: Translate IA and requirements into content models, components, and data flows that are realistic for XM Cloud and composable Sitecore.

Steps:

  1. Start from the component matrix.
    I reuse the output of my “Code Generation” series: the Playwright crawl, the matrix, and the components/*.md specs.

  2. Model content types.
    Ask your agent to propose:

    • templates and content types in XM Cloud (or in Content Hub, if content is shared across channels),
    • field definitions and constraints,
    • relationships between entities (for example articles, authors, categories, products).
  3. Validate against Sitecore docs.
    Cross-check:

    • whether your templates align with component usage patterns in XM Cloud components docs,
    • whether Experience Edge queries will be efficient for your access patterns,
    • whether personalization rules have the data they need.
  4. Document data flows.
    Ask an agent to draw sequence diagrams (in Mermaid or similar) for:

    • page render in preview vs delivery,
    • personalization decision flows,
    • integrations (for example XM Cloud → Connect → Salesforce).
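For step 3, the cheapest validation is to run candidate queries against Experience Edge directly and inspect the shape and size of what comes back. A minimal sketch of a single-item probe; the API key and item path are placeholders, and the queryable fields depend on your templates:

```python
# Sketch: probe an Experience Edge delivery query.
# API key, item path, and selected fields are placeholders.
EDGE_ENDPOINT = "https://edge.sitecore.cloud/api/graphql/v1"

ITEM_QUERY = """
query ProbeItem($path: String!, $language: String!) {
  item(path: $path, language: $language) {
    id
    name
  }
}
"""

def build_edge_request(path: str, language: str = "en") -> dict:
    """GraphQL request body for a single-item probe."""
    return {"query": ITEM_QUERY,
            "variables": {"path": path, "language": language}}

# Example:
# requests.post(EDGE_ENDPOINT,
#               json=build_edge_request("/sitecore/content/site/home"),
#               headers={"sc_apikey": edge_api_key})
```

Timing a handful of these probes per access pattern gives you real numbers to put next to the agent's efficiency claims.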

Output:
Content models, component designs, and data-flow diagrams stored in docs/architecture/ and linked from the architecture vision.
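As a starting point for step 4, a Mermaid sketch of the delivery-side render; the participants and message names are illustrative, not a prescribed architecture:

```mermaid
sequenceDiagram
    participant Browser
    participant Head as Next.js head
    participant Edge as Experience Edge (Delivery)
    Browser->>Head: GET /articles/slug
    Head->>Edge: GraphQL layout and content query
    Edge-->>Head: Layout and content JSON
    Head-->>Browser: Rendered page
```

The preview variant swaps the Delivery endpoint for Preview and typically adds an editing-host hop; asking the agent to draft both diagrams from the same template keeps them comparable.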

Architect workflow: decision records and technical debt management

Goal: Track why you made certain decisions and when to revisit them.

Steps:

  1. Adopt a decision note template.
    Add docs/adr/template.md with:

    • context, decision, alternatives, consequences, review date.
  2. Use AI to draft decision notes.
    For each significant choice (for example Experience Edge strategy, serialization layout, hosting model), ask an agent to generate a draft ADR from your architecture vision and notes. You edit and approve it.

  3. Tag decisions with revisit triggers.
    Ask the agent to propose revisit triggers (for example “traffic increases 10×,” “Content Hub DAM is introduced,” “Search quotas change”) and include them in the ADR.

  4. Periodically review decision notes with AI assistance.
    Once per quarter or per big release, ask an agent to:

    • scan ADRs,
    • compare them against current metrics and constraints,
    • propose which decisions may need revisiting.
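A minimal docs/adr/template.md sketch covering the fields from step 1, with a revisit-triggers section to support step 3 (the section names and statuses are my conventions, not a standard):

```markdown
# ADR-NNN: <decision title>

- Status: proposed | accepted | superseded
- Review date: <date, or "on trigger">

## Context
The constraint or requirement that forces a decision.

## Decision
The choice made, in one or two sentences.

## Alternatives considered
Options rejected, with the main reason each lost.

## Consequences
What becomes easier, what becomes harder.

## Revisit triggers
- e.g. traffic increases 10×
- e.g. Content Hub DAM is introduced
```

Keeping the triggers machine-readable as a bullet list is what makes the quarterly agent-assisted review in step 4 practical.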

Output:
A lightweight but powerful decision log for the project, with AI helping you draft and review.


Project manager: visibility, risk, and predictability

When I take on the PM role, AI helps me keep everyone aligned, spot risks early, and run shorter feedback loops without drowning in manual status reporting.

PM workflow: delivery model and roadmap

Goal: Choose a delivery model (phased releases, parallel tracks, etc.) and keep a clear roadmap aligned to capacity.

Steps:

  1. Feed the agent delivery constraints.
    I provide:

    • team composition and capacity,
    • timelines and key milestones,
    • dependency constraints (for example content readiness, external systems).
  2. Ask for delivery scenarios.
    I ask for 2–3 delivery models (for example “MVP, then composable add‑ons,” “journey‑by‑journey rollout”) with pros/cons, emphasizing Sitecore‑specific risks: content migration, personalization ramp‑up, search tuning, etc.

  3. Turn the chosen model into a roadmap.
    I have the agent:

    • map epics to releases,
    • highlight cross-team dependencies,
    • suggest buffer and risk mitigation activities (for example early content modeling, early search setup).
  4. Export a shareable roadmap.
    Together we generate:

    • a Markdown roadmap (committed to git),
    • a slide for stakeholders (for example a simple swim-lane diagram).
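For step 4, the shareable roadmap can start life as a Mermaid Gantt chart committed next to the Markdown roadmap; the releases, activities, and dates below are purely illustrative:

```mermaid
gantt
    title Illustrative journey-by-journey rollout
    dateFormat YYYY-MM-DD
    section Release 1 (MVP)
    Content modeling        :2025-01-06, 4w
    Core components         :2025-01-20, 6w
    section Release 2
    Search tuning           :2025-03-03, 4w
    Personalization ramp-up :2025-03-17, 4w
```

Because the chart is text, the agent can regenerate it whenever the epic-to-release mapping changes, and the git diff shows exactly what moved.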

Output:
An AI-assisted but human-curated roadmap that is consistent with architecture, requirements, and team capacity.

PM workflow: RAID logs and risk intelligence

Goal: Keep risks, assumptions, issues, and dependencies visible and actionable.

Steps:

  1. Create a structured RAID log.
    I maintain docs/raid/raid_log.md with sections for risks, assumptions, issues, and dependencies.

  2. Use AI to ingest meeting notes.
    After key meetings, I paste notes or transcripts into my PM agent and ask:

    • “Extract new risks, assumptions, issues, and dependencies,”
    • “Suggest owners and due dates where obvious.”
  3. Ask for risk heatmaps.
    Have the agent summarize RAID entries as:

    • a prioritized risk list (impact vs likelihood),
    • visual-friendly bullets for stakeholder updates.
  4. Tie RAID items to work.
    I ask the agent to suggest which epics/stories should mitigate a given risk and convert those suggestions into backlog items or checklists.
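Once RAID entries carry impact and likelihood scores, the prioritized list in step 3 needs no LLM at all. A sketch assuming a 1–5 scale for both dimensions (the scale and tie-breaking rule are my assumptions):

```python
# Sketch: rank RAID risks by impact x likelihood.
# Assumes 1-5 scales; ties broken by raw impact.

def prioritize_risks(risks: list) -> list:
    """Return risks ordered highest exposure first."""
    return sorted(risks,
                  key=lambda r: (r["impact"] * r["likelihood"], r["impact"]),
                  reverse=True)
```

The agent then only has to phrase the top entries as stakeholder-friendly bullets, which is the part it is actually good at.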

Output:
A living RAID log and a pattern for keeping it up to date without manual retyping.

PM workflow: status, stakeholder updates, and retros

Goal: Spend less time manually writing updates and more time resolving real issues.

Steps:

  1. Automate status rollups.
    I provide:

    • ALM exports (sprint board, burndown, cycle time),
    • RAID log,
    • the roadmap.
      Ask the agent to generate:
    • a weekly internal status (for the team),
    • an executive summary (1–2 slides, non-technical language).
  2. Drive better retrospectives.
    After each sprint, I give the agent:

    • completed work,
    • incidents,
    • cycle-time metrics,
    • key decisions and changes.
      Ask it to propose:
    • themes for “what went well / what to improve,”
    • 3–5 concrete improvement actions,
    • owners and suggested timings.
  3. Keep the loop closed.
    At the start of the next sprint, I revisit the previous retro actions with an agent and confirm status. This ensures improvements actually happen.

Output:
Consistent status updates, higher-quality retros, and less copy-paste for the PM.


Governance: keeping AI helpful and safe

Across BA, architect, and PM workflows, a few governance practices make the difference between “useful copilot” and “chaos generator”:

  • Keep sensitive documents out of shared AI stores, and prefer grounded setups that return citations over free-form generation.
  • Treat every AI draft (epic, ADR, roadmap, status report) as a proposal that a human edits and approves.
  • Keep canonical artifacts as Markdown in git so downstream systems (ALM tools, slides) can be rebuilt from them.
  • Review on a cadence: ADR revisit triggers, orphan-work scans, and retro action follow-ups keep drift from accumulating.

If you adopt these patterns, AI becomes a force multiplier for your BA, architect, and PM roles instead of a source of noise. The rest of the AI-Powered Stack and Code Generation series builds on these workflows to show concrete implementations in XM Cloud, Storybook, and composable integrations.

