Most Sitecore teams are under pressure to “show something” in the first week of an engagement. I feel that pressure too, even though I usually show up as a company of one: I am the only named developer on the project, and the rest of the “team” is a set of coding agents I orchestrate across the SDLC.
When stakeholders want to see XM Cloud, Experience Edge, and new UX in action before the information architecture or integration contracts are finished, it is easy to end up with throwaway proofs of concept:
- one‑off Next.js apps no one wants to maintain,
- demos that do not match the eventual architecture,
- or spikes that quietly become production without tests or governance.
In my AI‑powered delivery workflow, I use agents to flip that pattern. Instead of hurried, disposable POCs, I:
- lean on agents to scaffold XM Cloud + Next.js + Storybook projects quickly,
- plug in realistic data flows and components from day one, and
- capture architecture decisions so that POCs become intentional stepping stones toward production.
At a high level, my POC pipeline runs: frame the POC in business terms → scaffold an XM Cloud-compatible starter → generate UX artifacts and copy → build components in Storybook → connect them to Experience Edge Preview → capture architecture decisions → harden or archive.
This post walks through that flow in concrete terms. I assume:
- I already have access to XM Cloud and the Experience Edge endpoints,
- a Next.js/App Router front end using the Content SDK,
- and the rest of the AI‑Powered Stack in place (agents, RAG, and MCP‑style integrations).
Deciding what belongs in a POC vs production
For me, the first place AI helps is with framing, not code.
Before I open a terminal, I want a shared understanding of:
- Why we are building this POC.
- What “good” looks like.
- What will be kept and what will be thrown away.
Framing the POC in business terms
I start by giving my planning agent—usually ChatGPT Pro, sometimes Claude Code—a short brief:
- the client’s goals (“validate XM Cloud fit,” “prove composable Search,” “explore a new information architecture”),
- constraints (timebox, budget, environments), and
- target personas (editors, marketers, customers).
I ask it to draft a POC one‑pager with:
- Objective: what specific decision this POC should unblock.
- Scope: what is in vs out (channels, integrations, journeys).
- Success criteria: what we must be able to demo or measure.
- Timebox and constraints: for example “5 days, no production data, Preview only.”
- Risks and assumptions: for example “Assumes content authors can work with simplified information architecture.”
I always edit this one‑pager, but having an agent generate the first draft lets me iterate faster with stakeholders. I save it as docs/pocs/<poc-name>/brief.md; it becomes the contract I use with both humans and agents.
Here is the kind of prompt I use for that first draft:
You are a senior delivery lead helping me plan an XM Cloud / Next.js proof of concept.
Using the notes below, draft a one-page POC brief in Markdown with sections:
- Objective (what decision this POC should unblock)
- Scope (what is in vs out)
- Success criteria (what we must be able to demo or measure)
- Timebox and constraints (for example "5 days, no production data, Preview only")
- Risks and assumptions
Notes:
- Client goals: …
- Constraints: …
- Target personas: …
Marking what will be kept versus discarded
Next, I ask an agent to propose which outputs should be production candidates and which are pure experiments. For example:
- To keep (production candidates):
  - basic project structure (Next.js + Content SDK + Storybook),
  - common design system primitives (buttons, inputs, typography),
  - component specs and their mapping to XM Cloud,
  - environment and key layout (even if using test values).
- To discard (experiments):
  - quick-and-dirty page layouts that don’t match design,
  - temporary data mocks or test routes,
  - any unreviewed AI-generated code touching security or integrations.
I write these down in docs/pocs/<poc-name>/keep-vs-throwaway.md. This file gives clear guidance when I later harden or refactor the POC with my coding agents.
Using AI to scaffold an XM Cloud + Next.js + Storybook starter
Once the POC frame is clear, I let coding agents help me stand up the skeleton quickly, using patterns that align with official XM Cloud guidance.
Starting from an XM Cloud-compatible starter
I want a baseline that:
- uses Next.js App Router,
- integrates the Sitecore Content SDK,
- is aware of Experience Edge Preview and Delivery endpoints, and
- plays nicely with Storybook and Playwright.
On real projects that means either:
- the official Content SDK starter for XM Cloud (Next.js App Router), or
- a local‑first starter repo I maintain with Storybook and Playwright pre‑wired.
In my coding agent (usually Claude Code or a ChatGPT Pro code chat inside the IDE), I give it the brief plus any starter repo links and ask for a concrete plan:
You are a senior Next.js engineer helping me scaffold an XM Cloud-compatible POC.
Goals:
- Next.js App Router + TypeScript
- Sitecore Content SDK wired to Experience Edge Preview/Delivery
- Storybook configured for isolated component dev
Tasks:
- Propose the exact `npx` / `pnpm` / `npm` commands to scaffold the project
- Add a `.env.example` with placeholders for XM Cloud / Experience Edge config
- Add Storybook for Next.js and a simple example story
- Summarize required manual steps (keys, login, CLI setup)
Output:
- Shell commands
- List of files to create/update with short snippets (not full files)
I stay in control of the terminal. Agents propose commands and file contents, but I run and commit them.
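As one concrete artifact from that exchange, a `.env.example` sketch might look like the following. The variable names here are placeholders: the exact names depend on your Content SDK version and XM Cloud setup, so confirm them against the official docs before relying on them.

```shell
# POC-ONLY placeholders: confirm variable names against your Content SDK version
# XM Cloud / Experience Edge connection
SITECORE_EDGE_CONTEXT_ID=
NEXT_PUBLIC_DEFAULT_SITE_NAME=
# Fallback: explicit GraphQL endpoint and API key, if not using a context ID
GRAPH_QL_ENDPOINT=
SITECORE_API_KEY=
```

Keeping real values only in a local, uncommitted `.env` matches the keep-vs-throwaway rule about environment and key layout.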
Baking POC-specific conventions into the starter
Make it obvious that this is a POC, not a production environment:
- Use a `POC-ONLY` banner in the README, with a checklist of things that must be done before promoting code (tests, security review, performance checks).
- Add a top-level docs/pocs/<poc-name>/status.md where you note:
  - what’s working,
  - what’s hacked together,
  - and what is blocked.
- Ask your agent to generate a POC checklist in Markdown, including:
  - content modeling tasks,
  - component coverage targets,
  - integration stubs for Search, Content Hub, or CDP if relevant.
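A minimal sketch of that README banner (adapt the checklist items to your organization's gates):

```md
> ⚠️ **POC-ONLY** — this repository is a proof of concept, not production code.
>
> Before promoting anything to production:
> - [ ] Unit and Playwright smoke tests in place
> - [ ] Security review of integrations and secrets handling
> - [ ] Performance check (caching, image optimization)
> - [ ] Decision notes reviewed under docs/decisions/
```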
This way, my starter is not just code—it’s a small, opinionated POC framework.
Generating UX wireframes, flows, and copy with AI
With the starter in place, you can use AI to accelerate UX exploration without locking yourself into fragile code.
Wireframes and flows
Depending on your design stack, you can:
- Use tools like Figma plus AI plugins to generate or remix wireframes from prompts.
- Use web-based tools (for example Uizard or similar) for fast throwaway explorations.
Feed your agent:
- the POC brief,
- existing brand guidelines or design tokens (even if incomplete),
- a list of key journeys (for example “home → product listing → product detail,” “marketing landing page → form submission”).
I ask for:
- 2–3 alternate wireframe concepts per key page or journey,
- annotated user flows describing:
- what content blocks appear where,
- what personalization or experiments might be relevant,
- where forms, search, or CDP events occur.
Export the best variants as images or embedded frames and store references in docs/pocs/<poc-name>/ux.md alongside flows and notes.
Microcopy and content drafts
Use your project RAG (NotebookLM, Notion, or simply a Git repo with artifacts stored in it) to ground copy in:
- existing brand voice,
- legal and compliance guidelines,
- previous campaigns or site content.
I ask my writing agent to propose:
- headlines, subheadings, and CTAs for key components,
- localized variants if relevant,
- alternate tones (formal, conversational, concise) with references to source material.
Treat these as content mocks and store them under docs/pocs/<poc-name>/copy/. Later, you can:
- migrate accepted copy into XM Cloud items or Content Hub entities,
- and use this same material as context for editorial copilots in the AI Workflows theme.
Going from wireframes to XM Cloud-ready components
Now you have:
- a starter Next.js + Content SDK + Storybook project,
- wireframes and flows,
- and draft copy.
The next step is to turn that into real components that align with XM Cloud concepts.
Defining POC component specs
I start with a handful of high-value components (for example Hero, Teaser, Card Grid, Form CTA). For each:
- I ask an agent to propose a component spec based on:
- the wireframe,
- desired copy and content,
- and any existing component matrix if I have one.
- I capture the spec as Markdown:
- fields (text, rich text, media, links),
- data source (Experience Edge Preview vs Delivery, Content Hub, or static),
- analytics and tracking needs,
- accessibility expectations,
- personalization and testing ideas.
I store these under components/poc/<ComponentName>.md in the repo.
A compact prompt I reuse for this looks like:
You are helping design XM Cloud-ready frontend components for a POC.
Using the wireframe notes and copy below, draft a short component spec in Markdown for <ComponentName> with:
- Purpose (1–2 sentences)
- Fields (name, type, required?, notes)
- Data source (Experience Edge Preview vs Delivery, Content Hub, or static)
- Behavior notes (responsiveness, interactions, a11y)
Wireframe notes:
- …
Copy draft:
- …
Generating Storybook components and stories
I then ask a coding agent to:
- generate a React component per spec, using your design tokens and Tailwind or CSS-in-JS setup,
- create a Storybook story showcasing:
- the default state,
- at least one variant state,
- edge cases (long text, missing image, etc.),
- use strongly typed props that match the fields in the spec.
Because this is a POC, I focus on:
- rendering, layout, and basic behavior,
- not yet on full CMS integration or production-level error handling.
Once generated, I run Storybook and review components with designers and stakeholders. I adjust specs and code as needed—AI accelerates iteration, but the team sets the bar.
Connecting components to Experience Edge Preview
To keep the POC grounded in real XM Cloud content, I:
- Populate a small set of sample items in XM Cloud that roughly match the POC.
- Configure the Content SDK client to talk to Experience Edge Preview, with environment keys stored in a local `.env` and never committed.
- For a subset of components, ask an agent to:
- write server-side data loaders that fetch content via the Content SDK or GraphQL,
- map content fields to component props,
- handle “no data” or preview-only states gracefully.
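That loader-plus-mapper pattern can be sketched roughly as below. The endpoint, header name, field names, and `HeroProps` shape are illustrative, based on the public Experience Edge GraphQL docs rather than any particular project; confirm them for your tenant before reuse.

```typescript
// POC sketch: server-side data loading for a hypothetical Hero component.
// Endpoint, header, and field names are placeholders; verify against your
// Experience Edge tenant and Content SDK setup.

interface HeroProps {
  title: string;
  subtitle: string;
}

// Shape of the GraphQL response, trimmed to what Hero needs.
interface HeroQueryResult {
  item: {
    title?: { value?: string };
    subtitle?: { value?: string };
  } | null;
}

// Pure mapping from Edge content to typed props, with "no data" fallbacks
// so the component renders gracefully in preview-only states.
export function mapHeroProps(result: HeroQueryResult): HeroProps {
  return {
    title: result.item?.title?.value ?? "Untitled",
    subtitle: result.item?.subtitle?.value ?? "",
  };
}

// Server-side fetch, usable from an App Router server component or
// route handler. The API key comes from local env config, never the repo.
export async function loadHero(path: string, apiKey: string): Promise<HeroProps> {
  const res = await fetch("https://edge.sitecore.net/api/graphql/v1", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      sc_apikey: apiKey,
    },
    body: JSON.stringify({
      query: `query Hero($path: String!) {
        item(path: $path, language: "en") {
          title: field(name: "Title") { value }
          subtitle: field(name: "Subtitle") { value }
        }
      }`,
      variables: { path },
    }),
  });
  const { data } = (await res.json()) as { data: HeroQueryResult };
  return mapHeroProps(data);
}
```

Keeping the mapping pure makes it trivial to unit test and to reuse in Storybook stories with mocked content.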
The goal is to show stakeholders:
- the real authoring experience in XM Cloud,
- how content flows into the POC front end,
- and how components might behave in preview vs delivery.
Capturing architecture decisions as part of the POC
While building the POC, my agents and I are making choices that will shape the production solution. I want to capture those decisions in the moment, not six weeks later.
Simple decision notes
For each meaningful decision (for example “Use Content SDK GraphQL vs REST,” “Use App Router with RSC for page rendering”), I ask an agent to draft a short decision note. In more formal setups this is called an Architecture Decision Record (ADR), but the idea is simple: a small markdown file that says:
- what we decided,
- why we chose it,
- what alternatives we considered,
- and what we need to re-check before production.
I use a prompt like this:
You are helping document an important technical decision for an XM Cloud / Next.js POC.
Write a short decision note in Markdown with:
- Title: one-line summary of the decision
- Context: what problem we are solving
- Decision: what we chose and why
- Alternatives: 1–2 options we did NOT choose, with a short reason
- Follow-ups: what we should revisit before production
I review the note, tweak language for the org (security, compliance, operations), and commit it under docs/decisions/ or docs/adr/. Reviewing the POC then also means reviewing its decision set.
Tagging decisions with POC-specific caveats
In each decision note I try to spell out:
- which parts are safe to carry forward as-is,
- which are temporary shortcuts taken for the POC (for example inline mocks instead of proper service abstractions),
- and what must be revisited before production (for example caching strategy, resiliency, telemetry).
I ask an agent to propose a small “promote to production” checklist for each decision. These become part of the hardening plan.
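Sketched for a hypothetical caching decision, such a checklist might look like this (the items are illustrative, not an actual backlog):

```md
## Promote to production: Edge caching decision
- [ ] Replace the POC in-memory cache with the agreed caching layer
- [ ] Define cache invalidation on publish (Experience Edge webhooks or TTL)
- [ ] Add telemetry for cache hit rate
- [ ] Load-test key pages under expected traffic
```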
Hardening POCs into production foundations
At some point the POC either:
- has done its job and can be archived, or
- is clearly the right foundation for production.
AI can help you move from “works on my machine” to “ready for a real pipeline” in a structured way.
Running an AI-assisted hardening review
I start by giving an agent:
- the POC codebase,
- relevant ADRs,
- your organization’s non-functional requirements (performance, security, observability),
- and the POC brief (objective, scope, constraints).
I ask it to:
- identify POC-only shortcuts (for example unhandled error paths, missing types, console logs),
- propose a hardening backlog grouped by:
- tests (unit, integration, Playwright, load tests),
- performance (caching, edge behavior, image optimization),
- security (secrets, auth, rate limiting),
- maintainability (folder structure, naming, documentation).
I still review the suggestions carefully—AI can miss organizational nuances—but it will surface many obvious items quickly.
Here is a simple hardening prompt that has worked for me:
You are reviewing an XM Cloud / Next.js POC repo to decide what it would take to make it production-ready.
Inputs:
- POC brief (objective, scope, constraints)
- ADRs under docs/adr
- org-level NFRs (performance, security, observability)
Tasks:
- List obvious POC-only shortcuts and risks
- Propose a hardening backlog grouped by: tests, performance, security, maintainability
- Mark which items are "must do before go-live" vs "nice to have"
Output:
- Markdown checklist I can paste into docs/pocs/<poc-name>/hardening.md
Building a thin “promotion pipeline”
Next, I define a minimal pipeline for promoting POC code into production-ready branches:
- Branching model: for example, `poc/*` feature branches merging into `main` only after hardening tasks are complete.
- Checks: lint, typecheck, unit tests, and at least a small set of Playwright smoke tests per POC page.
- Gates: human review required for:
- security-sensitive code,
- Sitecore Content Serialization (SCS) changes,
- configuration and environment updates.
I use a coding agent to:
- scaffold CI workflow files (GitHub Actions, Azure Pipelines, etc.) with the basics,
- draft documentation for developers describing how to run checks locally.
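A minimal GitHub Actions sketch of those basics follows; the job name and npm scripts (`lint`, `typecheck`, `test:smoke`) are assumptions, so align them with the scripts actually defined in your package.json:

```yaml
# POC promotion checks: illustrative names, adjust to your repo
name: poc-checks
on:
  pull_request:
    branches: [main]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
      - run: npm run lint
      - run: npm run typecheck
      - run: npm test
      - run: npx playwright install --with-deps && npm run test:smoke
```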
Closing the loop with stakeholders
Finally, I turn the POC into:
- a short show-and-tell for stakeholders with:
- before/after journeys,
- what we learned,
- remaining unknowns, and
- which parts will become production foundations.
- a handoff package for delivery teams:
- repo link and setup instructions,
- decision note index,
- component specs and Storybook stories,
- POC brief and status documents.
My AI agents can draft the decks and summaries; I ensure the message and scope are correct.
Checklist: Fast POCs that age gracefully
To recap, a good AI-powered POC for XM Cloud, in this model, should:
- Start from a written brief with clear objectives, scope, and success criteria.
- Use an XM Cloud-compatible starter (Next.js + Content SDK + Storybook).
- Treat wireframes, flows, and copy as first-class artifacts, not throwaway sketches.
- Capture component specs in Markdown and build Storybook stories for them.
- Connect a subset of components to Experience Edge Preview with real content.
- Generate and maintain simple decision notes and keep/throwaway rules.
- Run at least basic lint, test, and Playwright smoke checks before showing to stakeholders.
- Include a hardening backlog and promotion path if the POC becomes the production foundation.
When I follow this pattern, AI stops being a “magic code generator” and instead becomes a repeatable engine for de-risking architecture and UX. POCs become intentional, auditable, and—when it makes sense—directly reusable for XM Cloud builds.
Additional resources
- Sitecore XM Cloud docs: https://doc.sitecore.com/xmc/en/developers/xm-cloud/index-en.html
- Experience Edge docs: https://doc.sitecore.com/xmc/en/developers/experience-edge-for-xm-cloud/index-en.html
- Storybook docs: https://storybook.js.org/docs
- Next.js App Router docs: https://nextjs.org/docs/app
- Architecture Decision Records (ADR) overview (a more formal version of decision notes): https://adr.github.io/
Related posts in the AI-Powered Stack series:
- AI-Powered Stack — Series overview and roadmap
- AI-Powered Stack — My Sitecore delivery stack for 2025
- AI-Powered Stack — Sprint zero for XM Cloud with agentic AI
- AI-Powered Stack — From Blank Page to Signed SOW
- AI‑Powered Stack — From Signed SOW to Sprint One
- AI-Powered Stack — Working with AI as your Sitecore BA, architect, and PM