Modern Sitecore projects put enormous pressure on tiny teams. On most of my engagements I am effectively a company of one: the same person is asked to play business analyst, solution architect, and de‑facto PM while still shipping features. An AI‑powered stack changes that equation for me: instead of trying to be three people at once, I orchestrate a small bench of agents that each specialize in a slice of the work—while I keep ownership of decisions and quality.
This post turns the high‑level AI stack from the previous article into the role‑specific workflows I actually run for Sitecore BAs, architects, and PMs. The goal is not “let AI do everything,” but rather “let AI handle the repetitive, document‑heavy, easily auditable work, so I can focus on judgment, negotiation, and trade‑offs.”
At a high level, my "virtual team" for these roles pairs one specialized agent with each hat I wear: a BA agent for requirements work, an architect agent for design and decision records, and a PM agent for planning and reporting, all grounded in a shared project knowledge base.
In the rest of the post I will assume you are working on XM Cloud + Next.js/App Router + Content SDK, often alongside Sitecore Search, Content Hub, and Customer Data Platform (CDP) / Personalize, and I will walk through how I use these agents in practice.
Principles for AI-powered roles on Sitecore projects
Before diving into each role, it helps to agree on a few principles:
- **Ground everything in project and product docs.** My agents are backed by:
  - a project retrieval‑augmented generation (RAG) workspace (NotebookLM, Azure OpenAI "On your data," or similar), and
  - official docs: XM Cloud, Content SDK, Experience Edge, SCS, Search, Content Hub, CDP/Personalize.
- **Agents produce drafts; humans own decisions.** Agents can propose requirements, architectures, or plans—but you decide what ships. Treat AI like a high-speed junior colleague.
- **Make workflows repeatable and auditable.** Every workflow below ends in Markdown or another versioned artifact: specs, RFCs, diagrams, RAID logs. That keeps AI's work inspectable and sharable.
- **Optimize for project reuse.** Prompts, checklists, and templates should live in your repo (for example under `prompts/` and `docs/`). Over time, you refine them just like code.
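To make that reuse tangible, here is one possible layout; the folder names are simply the ones used later in this post, not a required convention:

```text
repo-root/
├── prompts/                 # reusable agent prompts, one file per workflow
├── docs/
│   ├── requirements/        # per-journey requirement summaries
│   ├── architecture/        # vision, content models, data-flow diagrams
│   ├── adr/                 # architecture decision records
│   ├── raid/                # RAID log
│   └── templates/           # epic, story, and ADR templates
└── AGENTS.md                # what agents may and may not touch
```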
With those principles in place, here is how I use AI for each role.
Business analyst: from messy inputs to clear requirements
When I wear the BA hat, I sit between stakeholders and delivery. My BA‑oriented agents help me synthesize inputs, spot gaps, and keep requirements consistent across channels and products.
BA workflow: capture and normalize discovery inputs
Goal: Turn RFPs, notes, recordings, and legacy documentation into a single source of truth you can query and evolve.
Steps:
- **Centralize sources.**
  - I drop RFPs, pitch decks, previous SOWs, and CRM notes into a NotebookLM notebook or similar project RAG.
  - I store the same documents (minus anything too sensitive) in a private Azure storage account and index them with Azure AI Search, wired into Azure OpenAI "On your data" so later prompts get citations instead of hallucinations.
- **Create a discovery taxonomy.** I ask my BA agent to propose a simple taxonomy (for example: personas, journeys, channels, KPIs, content types, features, constraints) and save it as `docs/discovery_taxonomy.md`.
- **Run structured Q&A.** Using ChatGPT or Claude Code with the RAG attached:
  - Ask for summaries per persona and per journey.
  - Ask it to list explicit requirements, implicit requirements, and "unknowns" that must be clarified.
  - Export results into Markdown files under `docs/requirements/`.
- **Turn "unknowns" into stakeholder questions.** I have the agent convert unknowns into specific questions I can ask during workshops. This becomes my discovery backlog.
Output: a `docs/requirements/` folder with:
- per-journey requirement summaries,
- a discovery questions list,
- and a living taxonomy used across the project.
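For the structured Q&A step, a minimal Python sketch of a grounded query is below. Treat it as a starting point rather than a recipe: the deployment name, index name, and environment variable names are placeholders, and the exact `api_version` that supports the `data_sources` extension may differ in your tenant.

```python
# Sketch: structured Q&A against the project RAG via Azure OpenAI "On your data".
# Assumptions to adapt: deployment, index, and env var names are placeholders.
import os


def build_data_source(search_endpoint: str, index_name: str, search_key: str) -> dict:
    """Azure AI Search data-source block for an 'On your data' chat request."""
    return {
        "type": "azure_search",
        "parameters": {
            "endpoint": search_endpoint,
            "index_name": index_name,
            "authentication": {"type": "api_key", "key": search_key},
        },
    }


def ask_discovery_question(question: str) -> str:
    """Ask one question grounded in the indexed discovery documents."""
    from openai import AzureOpenAI  # lazy import so the helper above stays testable

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",  # verify against your tenant
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # your Azure OpenAI *deployment* name, not the model family
        messages=[{"role": "user", "content": question}],
        extra_body={
            "data_sources": [
                build_data_source(
                    os.environ["AZURE_SEARCH_ENDPOINT"],
                    "discovery-index",
                    os.environ["AZURE_SEARCH_KEY"],
                )
            ]
        },
    )
    return response.choices[0].message.content
```

The payoff of the `data_sources` wiring is that answers come back with citations into the indexed documents, which is exactly what you want before promoting an answer into `docs/requirements/`.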
BA workflow: epics, stories, and acceptance criteria
Goal: Convert the synthesized requirements into a backlog that development can estimate and build.
Steps:
- **Define issue templates.** In my repos I add Markdown templates for epics and stories (for example `docs/templates/epic.md`, `docs/templates/story.md`) that include:
  - problem statement,
  - target personas and journeys,
  - dependencies (XM Cloud, Content Hub, Search, CDP, external systems),
  - acceptance criteria and non-functional notes.
- **Draft epics with an agent.** For each journey in the taxonomy, I feed the requirements into an AI agent and ask for:
  - 3–7 epics,
  - each with a short description, acceptance criteria, and explicit references to the underlying documents (RFP sections, meeting notes, etc.).
- **Review and normalize.** I review epics for:
  - clarity (no tech jargon for business stakeholders),
  - testability (can QA or UAT verify this?),
  - alignment to Sitecore capabilities (for example using XM Cloud components, Experience Edge, or personalization features).
- **Generate stories from epics.** Once epics are stable, I ask the agent to propose user stories that:
  - use your team's story template,
  - specify where data comes from (Experience Edge vs Content Hub vs custom APIs),
  - include analytics, a11y, and personalization notes upfront.
- **Push into the ALM tool.** Either manually or via API, I import epics and stories into Jira or Azure DevOps. The Markdown stays as canonical documentation in git; the ALM system can be rebuilt if needed.
Output: reviewed epics and stories that are traceable back to requirements, with clear acceptance criteria and Sitecore-specific notes.
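The "push into the ALM tool" step can be scripted. The sketch below is illustrative: the project key, issue type name, and epic-file layout are assumptions, and it targets the Jira Cloud REST v2 endpoint because v2 accepts plain-text descriptions (v3 requires Atlassian Document Format).

```python
# Sketch: turn a Markdown epic file into a Jira Cloud create-issue payload.
# Project key "SC" and the epic.md layout are assumptions; adapt to your instance.

def parse_epic_markdown(md: str) -> dict:
    """Split a docs/templates/epic.md-style file into summary (first heading) and body."""
    lines = md.strip().splitlines()
    summary = lines[0].lstrip("# ").strip()
    body = "\n".join(lines[1:]).strip()
    return {"summary": summary, "description": body}


def to_jira_payload(epic: dict, project_key: str = "SC") -> dict:
    """Build the JSON body for POST {base_url}/rest/api/2/issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Epic"},
            "summary": epic["summary"],
            "description": epic["description"],
        }
    }

# Actual push (requires the `requests` package and an API token):
# requests.post(f"{base_url}/rest/api/2/issue",
#               json=to_jira_payload(epic), auth=(email, api_token))
```

Keeping the parser separate from the HTTP call means the Markdown in git stays the source of truth and the import is a repeatable, reviewable transformation.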
BA workflow: traceability and change management
Goal: Ensure every deliverable maps back to a requirement and every requirement maps to real user value.
Steps:
- **Ask an agent to build a traceability matrix.** Starting from epics/stories and requirements, I ask for:
  - rows: requirements,
  - columns: epics, stories, components, tests.
  I save it as `docs/traceability_matrix.md` and keep it updated.
- **Use AI to flag orphan work.** Periodically, I have an agent scan the backlog and traceability matrix to identify:
  - stories without linked requirements,
  - components that are not used by any journey,
  - requested work not tied to measurable KPIs.
- **Run change-impact analyses.** When a stakeholder asks for a change, I ask my BA agent: "Show me all epics, stories, components, and tests impacted by this change," using the matrix and ALM exports as input.
Output: a living traceability matrix and a repeatable pattern for change-impact analysis—critical for enterprise Sitecore projects.
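Because the matrix lives in Markdown, regenerating it and flagging orphans is easy to script rather than retype. A minimal sketch, with invented requirement and story IDs:

```python
# Sketch: render a requirements-vs-stories traceability matrix as a Markdown
# table and list orphan stories. IDs and the link structure are illustrative.

def build_matrix(requirements: list[str], links: dict[str, list[str]]) -> str:
    """Render a Markdown matrix; `links` maps story ID -> linked requirement IDs."""
    stories = sorted(links)
    header = "| Requirement | " + " | ".join(stories) + " |"
    separator = "|---" * (len(stories) + 1) + "|"
    rows = []
    for req in requirements:
        cells = ["x" if req in links[s] else "" for s in stories]
        rows.append("| " + req + " | " + " | ".join(cells) + " |")
    return "\n".join([header, separator, *rows])


def orphan_stories(links: dict[str, list[str]]) -> list[str]:
    """Stories with no linked requirement -- candidates for review or removal."""
    return sorted(s for s, reqs in links.items() if not reqs)
```

Running this from the backlog export on a schedule keeps `docs/traceability_matrix.md` current without anyone hand-editing table cells.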
Architect: from vision and constraints to concrete designs
When I switch into architect mode, AI helps me synthesize constraints, evaluate options, and document decisions—without outsourcing judgment.
Architect workflow: architecture vision and guardrails
Goal: Produce a concise architecture vision and a set of guardrails that frame every technical decision on the project.
Steps:
- **Load platform constraints into the RAG.** I include:
  - XM Cloud docs for Experience Edge, Content SDK, serialization, BYOC, and CI/CD,
  - Search JS SDK and CDP/Personalize docs if they are in scope,
  - any enterprise constraints (network, identity, logging, observability).
- **Ask for architecture options.** I give my architect agent:
  - project requirements,
  - non-functional constraints (traffic, latency, geos, compliance),
  - technical preferences (Next.js App Router, Storybook, etc.),
  and ask for 2–3 architecture options with pros/cons and Sitecore-specific notes (for example Experience Edge usage patterns, personalization options, SCS module layouts).
- **Draft the architecture vision.** I pick the option I prefer and ask the agent to generate a short RFC‑style document:
  - problem statement and scope,
  - high-level diagram (C4-style system/context),
  - key decisions (for example "XM Cloud as headless CMS using Experience Edge Delivery and Preview; Next.js with Content SDK as primary head"),
  - risks and assumptions.
- **Define explicit guardrails.** I ask the agent to extract guardrails like:
  - which data lives in XM Cloud vs Content Hub vs CDP,
  - which endpoints to use in which environments,
  - serialization rules (SCS modules, push/pull policies),
  - security constraints (no secrets in code, where keys live, how they rotate).
Output: an RFC-style architecture vision document plus a guardrails list that all agents (and humans) reference.
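For reference, an extracted guardrails list might look like the fragment below. Every line is illustrative and should be replaced with your project's actual rules:

```markdown
## Guardrails (illustrative excerpt)

- Content: editorial content lives in XM Cloud; shared DAM assets in Content Hub;
  behavioral data in CDP.
- Endpoints: Experience Edge Delivery in production heads; the Preview endpoint
  only in editing and preview environments.
- Serialization: every template and rendering item belongs to a named SCS module;
  no ad-hoc item pushes to shared environments.
- Secrets: no keys in code or serialized items; secrets live in the host's secret
  store and rotate on a defined schedule.
```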
Architect workflow: component and data design
Goal: Translate IA and requirements into content models, components, and data flows that are realistic for XM Cloud and composable Sitecore.
Steps:
-
Start from your component matrix.
Use the output of your “Code Generation” series: the Playwright crawl, matrix, andcomponents/*.mdspecs. -
Model content types.
Ask your agent to propose:- templates and content types in XM Cloud (or in Content Hub, if content is shared across channels),
- field definitions and constraints,
- relationships between entities (for example articles, authors, categories, products).
-
Validate against Sitecore docs.
Cross-check:- whether your templates align with component usage patterns in XM Cloud components docs,
- whether Experience Edge queries will be efficient for your access patterns,
- whether personalization rules have the data they need.
-
Document data flows.
Ask an agent to draw sequence diagrams (in Mermaid or similar) for:- page render in preview vs delivery,
- personalization decision flows,
- integrations (for example XM Cloud → Connect → Salesforce).
Output: content models, component designs, and data-flow diagrams stored in `docs/architecture/` and linked from the architecture vision.
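As an example of the data-flow step, here is a minimal Mermaid sequence diagram for a delivery-time page render; the participant names and query shape are illustrative:

```mermaid
sequenceDiagram
    participant Visitor
    participant Head as Next.js head
    participant Edge as Experience Edge Delivery
    Visitor->>Head: GET /articles/my-article
    Head->>Edge: GraphQL layout and content query (Content SDK)
    Edge-->>Head: Layout and item JSON
    Head-->>Visitor: Rendered page
```

The same skeleton works for the preview flow by swapping the Delivery endpoint for Preview and adding the editing host as a participant.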
Architect workflow: decision records and technical debt management
Goal: Track why you made certain decisions and when to revisit them.
Steps:
- **Adopt a decision note template.** Add `docs/adr/template.md` with: context, decision, alternatives, consequences, and a review date.
- **Use AI to draft decision notes.** For each significant choice (for example Experience Edge strategy, serialization layout, hosting model), ask an agent to generate a draft ADR from your architecture vision and notes. You edit and approve it.
- **Tag decisions with revisit triggers.** Ask the agent to propose revisit triggers (for example "traffic increases 10×," "Content Hub DAM is introduced," "Search quotas change") and include them in the ADR.
- **Periodically review decision notes with AI assistance.** Once per quarter or per big release, ask an agent to:
  - scan ADRs,
  - compare them against current metrics and constraints,
  - propose which decisions may need revisiting.
Output: a lightweight but powerful decision log for the project, with AI helping you draft and review.
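A matching `docs/adr/template.md` can stay very small; the headings follow the fields listed above and the placeholder values are just examples:

```markdown
# ADR-NNN: <short decision title>

- Status: proposed | accepted | superseded
- Review date / revisit triggers: <e.g. traffic increases 10x; Content Hub DAM introduced>

## Context
<why this decision is needed now>

## Decision
<what we chose>

## Alternatives
<options considered and why they lost>

## Consequences
<what becomes easier, what becomes harder>
```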
Project manager: visibility, risk, and predictability
When I take on the PM role, AI helps me keep everyone aligned, spot risks early, and run shorter feedback loops without drowning in manual status reporting.
PM workflow: delivery model and roadmap
Goal: Choose a delivery model (phased releases, parallel tracks, etc.) and keep a clear roadmap aligned to capacity.
Steps:
- **Feed the agent delivery constraints.** I provide:
  - team composition and capacity,
  - timelines and key milestones,
  - dependency constraints (for example content readiness, external systems).
- **Ask for delivery scenarios.** I ask for 2–3 delivery models (for example "MVP, then composable add‑ons" or "journey‑by‑journey rollout") with pros/cons, emphasizing Sitecore‑specific risks: content migration, personalization ramp‑up, search tuning, etc.
- **Turn the chosen model into a roadmap.** I have the agent:
  - map epics to releases,
  - highlight cross-team dependencies,
  - suggest buffer and risk mitigation activities (for example early content modeling, early search setup).
- **Export a shareable roadmap.** Together we generate:
  - a Markdown roadmap (committed to git),
  - a slide for stakeholders (for example a simple swim-lane diagram).
Output: an AI-assisted but human-curated roadmap that is consistent with architecture, requirements, and team capacity.
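The "map epics to releases" step can start from something as simple as a greedy packer that respects a per-release capacity; the agent then refines the draft for dependencies and risk. Epic names and sizes below are invented for illustration:

```python
# Sketch: greedily assign epics to consecutive releases under a capacity cap.
# Sizes are in story points (or any consistent unit); names are illustrative.

def map_to_releases(epics: list[tuple[str, int]], capacity: int) -> list[list[str]]:
    """Pack (name, size) epics into releases, opening a new release when full."""
    releases: list[list[str]] = [[]]
    used = 0
    for name, size in epics:
        # Start a new release when this epic would overflow a non-empty one.
        if used + size > capacity and releases[-1]:
            releases.append([])
            used = 0
        releases[-1].append(name)
        used += size
    return releases
```

This deliberately ignores dependencies and team assignments; it gives the agent (and you) a first cut to argue with, which is usually faster than drafting from a blank page.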
PM workflow: RAID logs and risk intelligence
Goal: Keep risks, assumptions, issues, and dependencies visible and actionable.
Steps:
- **Create a structured RAID log.** I maintain `docs/raid/raid_log.md` with sections for risks, assumptions, issues, and dependencies.
- **Use AI to ingest meeting notes.** After key meetings, I paste notes or transcripts into my PM agent and ask it to:
  - extract new risks, assumptions, issues, and dependencies,
  - suggest owners and due dates where obvious.
- **Ask for risk heatmaps.** I have the agent summarize RAID entries as:
  - a prioritized risk list (impact vs likelihood),
  - visual-friendly bullets for stakeholder updates.
- **Tie RAID items to work.** I ask the agent to suggest which epics/stories should mitigate a given risk and convert those suggestions into backlog items or checklists.
Output: a living RAID log and a pattern for keeping it up to date without manual retyping.
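The heatmap step ultimately boils down to a score. A minimal sketch, assuming each RAID risk entry records impact and likelihood on 1–5 scales (the entry shape and thresholds are assumptions, not a standard):

```python
# Sketch: rank RAID risks by impact x likelihood and bucket them for a heatmap.
# Entry shape and bucket thresholds are illustrative choices.

def prioritize_risks(risks: list[dict]) -> list[dict]:
    """Return risks sorted by impact * likelihood, highest first."""
    return sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True)


def heatmap_bucket(risk: dict) -> str:
    """Coarse bucket for stakeholder updates: high / medium / low."""
    score = risk["impact"] * risk["likelihood"]
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"
```

With the scoring deterministic and versioned, the agent's job shrinks to extracting entries from notes and writing the narrative, which is where it actually adds value.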
PM workflow: status, stakeholder updates, and retros
Goal: Spend less time manually writing updates and more time resolving real issues.
Steps:
- **Automate status rollups.** I provide:
  - ALM exports (sprint board, burndown, cycle time),
  - the RAID log,
  - the roadmap,
  and ask the agent to generate:
  - a weekly internal status (for the team),
  - an executive summary (1–2 slides, non-technical language).
- **Drive better retrospectives.** After each sprint, I give the agent:
  - completed work,
  - incidents,
  - cycle-time metrics,
  - key decisions and changes,
  and ask it to propose:
  - themes for "what went well / what to improve,"
  - 3–5 concrete improvement actions,
  - owners and suggested timings.
- **Keep the loop closed.** At the start of the next sprint, I revisit the previous retro actions with an agent and confirm their status. This ensures improvements actually happen.
Output: consistent status updates, higher-quality retros, and less copy-paste for the PM.
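Status rollups work best when the numeric part is scripted and only the narrative is left to the agent. A sketch, assuming the ALM export has a `cycle_days` column (column names will differ per tool, so adapt them):

```python
# Sketch: summarize cycle times from an ALM CSV export for the weekly status.
# The column name "cycle_days" is an assumption about your export format.
import csv
import io
import statistics


def cycle_time_summary(csv_text: str) -> dict:
    """Compute item count plus mean/median cycle time in days from a CSV export."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    days = [float(r["cycle_days"]) for r in rows]
    return {
        "items": len(days),
        "mean_days": round(statistics.mean(days), 1),
        "median_days": statistics.median(days),
    }
```

Feeding the agent this summary instead of the raw export keeps hallucinated metrics out of the executive slide: the numbers are computed, the prose is drafted.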
Governance: keeping AI helpful and safe
Across BA, architect, and PM workflows, a few governance practices make the difference between “useful copilot” and “chaos generator”:
- **Single source of truth in git.** Requirements, architecture, prompts, and checklists live beside code. ALM tools and docs can be regenerated from this source.
- **Explicit "do not change" rules.** Maintain `AGENTS.md` and similar files explaining what agents may touch (Markdown docs, specs, tests) and what they must not change directly (secrets, environment configs, serialized items).
- **Citations required.** For anything related to Sitecore behavior, require agents to cite official docs or a project RAG source. If a suggestion has no citation, treat it as speculative.
- **Human sign-off for decisions.** Epics, architectures, roadmaps, and risks are always reviewed and approved by humans before they become canonical.
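An `AGENTS.md` implementing these rules can stay short; the paths below are illustrative, not prescriptive:

```markdown
# AGENTS.md (illustrative excerpt)

Agents MAY edit:
- docs/**/*.md (requirements, ADRs, RAID log, roadmap, traceability matrix)
- prompts/**, tests/**

Agents MUST NOT change directly:
- secrets, .env* files, environment configuration
- serialized Sitecore items (SCS modules)
- CI/CD pipeline definitions without human review

Any claim about Sitecore platform behavior must cite official docs or a
project RAG source; uncited suggestions are treated as speculative.
```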
If you adopt these patterns, AI becomes a force multiplier for your BA, architect, and PM roles instead of a source of noise. The rest of the AI-Powered Stack and Code Generation series builds on these workflows to show concrete implementations in XM Cloud, Storybook, and composable integrations.
Useful links
- Sitecore XM Cloud docs: https://doc.sitecore.com/xm-cloud
- Sitecore Developer Portal (XM Cloud, CDP, Search, etc.): https://developers.sitecore.com/docs
- Sitecore Content SDK docs: https://doc.sitecore.com/xmc/en/developers/content-sdk/index-en.html
- Sitecore Search docs: https://doc.sitecore.com/search/en/developers/search-developer-guide/index-en.html
- Sitecore CDP docs: https://doc.sitecore.com/cdp
- Sitecore Personalize docs: https://doc.sitecore.com/personalize
- Sitecore Content Hub docs: https://doc.sitecore.com/ch/en/index-en.html
- Next.js docs: https://nextjs.org/docs
- Azure OpenAI “On your data” overview: https://learn.microsoft.com/azure/ai-services/openai/how-to/use-your-data
Related posts in the AI-Powered Stack series: