The AI Code Generation theme is where the planning work from the AI-Powered Stack series turns into actual components and code. I use it to make a single architect—plus a bench of coding agents—behave like a full feature team. The goal is not “one-click site generation,” but a repeatable pipeline where:
- agents handle crawling, normalization, and scaffolding, and
- I keep control of architecture, naming, and quality.
In this article I step back and describe that pipeline end to end, then show how the other posts in this theme fit together:
- AI Code Generation — From prompt to XM Cloud component matrix
- AI Code Generation — Figma to Next.js components with Figma MCP and Claude Code
- AI Code Generation — Local-first component library with Storybook
- AI Code Generation — Promoting Storybook components into XM Cloud (BYOC)
- AI Code Generation — Flutter head for XM Cloud (optional multi-channel head)
This is the mental model I keep in front of me whenever I move from Sprint Zero discovery into “real” component work for XM Cloud.
Why “AI Code Generation” is not one magic button
When people hear “AI code generation,” they often imagine:
- full apps generated from a prompt,
- no need to think about content models or components,
- and no need to understand the underlying platform.
For Sitecore XM Cloud projects, that mindset is dangerous. You have:
- structured content going through Experience Edge and the Content SDK,
- serialized configuration that can easily be corrupted,
- personalization, analytics, and search that all depend on specific data shapes,
- and multiple channels (web, email, mobile, apps) that must share content and semantics.
So in this series, “AI code generation” means something narrower and more useful:
- use AI to accelerate the boring parts (crawling, normalizing markup, generating specs, scaffolding components, wiring tests),
- keep content models, component contracts, and integrations under human control, and
- treat everything AI produces as a proposal that needs your review.
The result: an AI-powered pipeline you can actually trust on real XM Cloud builds.
The high-level pipeline: from matrix to multi-channel components
On real projects, four core posts in this theme describe one cohesive pipeline I run with agents; the Figma to Next.js components post is a deeper dive into the Figma-first variant of that pipeline. In prose, the pipeline looks like this:
- AI Code Generation — From prompt to XM Cloud component matrix
  I start from a manually curated component matrix or Figma design system (optionally informed by live-site recon and crawls) and normalize it into one requirement Markdown file per component that defines its fields, example references, and behavioral notes.
- AI Code Generation — Figma to Next.js components with Figma MCP and Claude Code
  When my inputs are heavily Figma-based, I use Figma MCP and a coding agent to go from real Figma frames to reusable Next.js components, keeping everything component-first and avoiding hard-coded pages. This post is a deeper dive into that Figma-first path and complements the matrix work in step 1.
- AI Code Generation — Local-first component library with Storybook
  I turn the requirement files into a local-first component library:
  - React components built with Next.js App Router,
  - Storybook stories for key states,
  - and mock data that resembles Experience Edge and Content Hub payloads,
  all without needing a running XM Cloud environment yet. This is also where I lean on AI to generate the bulk of the initial components and stories.
- AI Code Generation — Promoting Storybook components into XM Cloud (BYOC)
  I take the local library and promote it into XM Cloud BYOC components:
  - generate or refine templates and rendering items via Sitecore Content Serialization (YAML),
  - register components in the XM Cloud component map,
  - and wire data mappers so Experience Edge and the Content SDK can feed them correctly, using Sitecore CLI pushes rather than per-developer Docker setups.
- AI Code Generation — Flutter head for XM Cloud (optional)
  I reuse the same component thinking and content contracts to drive a Flutter-based head, consuming Experience Edge GraphQL or Content SDK-backed endpoints for mobile and cross-platform experiences without reinventing the content model.
I rarely adopt all parts at once. Many of my projects stop after the BYOC step and treat the Flutter head as a future extension. But thinking of this as one pipeline keeps naming, data shapes, and component semantics consistent.
When to use this theme in your XM Cloud project
You will get the most value from this series when:
- you have run at least one Sprint Zero using the AI-Powered Stack,
- you have a component matrix (even if rough) or at least a list of likely components,
- and you are ready to turn that into code for a Next.js head and, optionally, other channels.
Common triggers:
- You need to prove how many components are required to rebuild a legacy site.
- You want to estimate the cost of a redesign or migration based on real components rather than page counts.
- You want a predictable way for agents to scaffold new components without breaking XM Cloud conventions.
- You want Storybook and Playwright to be first-class citizens rather than afterthoughts.
If you are still debating architecture, environments, or content modeling, spend more time in the AI-Powered Stack theme first.
The “AI Code Generation” principles
All of the posts in this theme use a small set of shared principles:
- Local-first, XM Cloud-aware.
  Everything starts with a local repo that:
  - builds without any cloud dependency,
  - mirrors the structures expected by XM Cloud and the Content SDK,
  - and can plug into Experience Edge Preview and Delivery later.
- Components as the primary unit of work.
  Instead of thinking in pages, we think in components:
  - each with a spec, stories, tests, and an XM Cloud mapping,
  - tracked in a components/ folder that agents understand.
- Agents as helpers, not owners.
  Agents:
  - perform crawls and normalization,
  - generate Markdown specs and starter code,
  - and wire Storybook stories and tests using consistent templates.
  Humans:
  - define IA and content models,
  - review and refine specs,
  - and approve architecture and integration decisions.
- Repeatable prompts and scripts.
  Prompts live in the repo under prompts/ just like code. You do not reinvent them every project; you version them and refine them.
- Favor idempotent scripts over manual rituals.
  Whenever AI suggests a manual sequence (“open this file, change these imports”), try to convert it into a script or generator. Agents are much better at running scripts than at editing in-place configuration over months.
How the posts fit together in practice
Here is how I typically use the core posts on a real project, and where the Figma to Next.js components deep dive fits.
Step 1: Build the component matrix (Post 1)
You start with AI Code Generation — From prompt to XM Cloud component matrix:
- run the crawler and let an agent group repeated patterns into provisional components,
- review and normalize those into a matrix,
- generate Markdown specs per component (fields, data sources, analytics, a11y).
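The “group repeated patterns into provisional components” step is easy to sketch. The shapes below are hypothetical, but they show the normalization the review pass performs: collapsing an agent's near-duplicate pattern names into one canonical matrix entry each.

```typescript
// Hypothetical shapes for crawler output and the resulting matrix.
type ProvisionalPattern = { name: string; occurrences: number };
type MatrixEntry = { component: string; occurrences: number; aliases: string[] };

// "Hero banner", "hero-banner", and "HeroBanner " should land in one bucket.
const canonical = (name: string) =>
  name.trim().toLowerCase().replace(/[\s_-]+/g, "");

export function buildMatrix(patterns: ProvisionalPattern[]): MatrixEntry[] {
  const buckets = new Map<string, MatrixEntry>();
  for (const p of patterns) {
    const key = canonical(p.name);
    const entry = buckets.get(key);
    if (entry) {
      entry.occurrences += p.occurrences;
      if (!entry.aliases.includes(p.name)) entry.aliases.push(p.name);
    } else {
      buckets.set(key, { component: p.name.trim(), occurrences: p.occurrences, aliases: [p.name] });
    }
  }
  // Most frequent components first: your first build candidates.
  return Array.from(buckets.values()).sort((a, b) => b.occurrences - a.occurrences);
}
```

The human review step then renames, merges, or splits these entries before any spec is generated.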
Deliverables:
- a components/ folder with one spec per component,
- crawls/ and analysis/ folders capturing what was observed,
- and a shared understanding across design and dev of “what we’re building.”
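A requirement file from step 1 does not need to be elaborate. Here is a hypothetical example (the component, fields, and notes are invented for illustration, not a prescribed schema):

```markdown
# HeroBanner

## Fields
| Field   | Type             | Notes                         |
| ------- | ---------------- | ----------------------------- |
| title   | Single-Line Text | Required                      |
| image   | Image            | From Content Hub, 16:9 crop   |
| ctaLink | General Link     | Optional                      |

## Example references
- Crawl: crawls/<date>/home.html (hero section)
- Figma: frame "Hero / Desktop"

## Behavioral notes
- Renders an h1 only on the home page; otherwise an h2.
- Analytics: fires a CTA click event.
```

Because it is plain Markdown in the repo, agents can read it in later steps and humans can review it in pull requests like any other artifact.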
Optional: Figma to Next.js components (deep dive)
If your starting point is a Figma-heavy design system and you want a concrete, end-to-end example of going from Figma frames to XM Cloud-ready components and a working Next.js head, the AI Code Generation — Figma to Next.js components with Figma MCP and Claude Code post walks through that variant. I treat it as a practical case study that sits between the matrix work in Post 1 and the component library work in Post 2.
Step 2: Materialize local components and stories (Post 2)
Then you move to AI Code Generation — Local-first component library with Storybook:
- turn each spec into a React component and Storybook stories,
- set up Playwright Test Agents for basic regression and accessibility checks,
- plug in mock data that mirrors Experience Edge and Content Hub payloads.
Deliverables:
- a local component library you can demo to designers and stakeholders,
- fast feedback loops via Storybook and Playwright,
- no dependency yet on a running XM Cloud environment.
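The mock data in step 2 is worth getting right early. A minimal sketch, with hypothetical field names: the `{ value }` wrappers mirror how field data arrives from the Sitecore layout service and Experience Edge, so stories exercise the same shapes the real head will see.

```typescript
// Field wrapper shapes modeled on Sitecore layout-service payloads;
// the HeroBanner fields themselves are illustrative.
type TextField = { value: string };
type LinkField = { value: { href: string; text?: string } };
type ImageField = { value: { src: string; alt?: string } };

export interface HeroBannerFields {
  title: TextField;
  image: ImageField;
  ctaLink: LinkField;
}

// One named mock per story state keeps Storybook stories declarative.
export const heroBannerDefault: HeroBannerFields = {
  title: { value: "Welcome to the new site" },
  image: { value: { src: "/img/hero.jpg", alt: "Hero" } },
  ctaLink: { value: { href: "/contact", text: "Get in touch" } },
};

// Edge case state: author published the banner without a CTA.
export const heroBannerNoCta: HeroBannerFields = {
  ...heroBannerDefault,
  ctaLink: { value: { href: "", text: "" } },
};
```

Components built against these shapes need far less rework when real XM Cloud data replaces the mocks in step 3.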
Step 3: Promote components into XM Cloud (Post 3)
Next up is AI Code Generation — Promoting Storybook components into XM Cloud (BYOC):
- register components in the XM Cloud component map,
- align fields with XM Cloud templates and serialized items,
- configure data fetching using the Content SDK or GraphQL,
- and ensure authors can drag-and-drop the components in XM Cloud Pages or your headless composition layer.
Deliverables:
- working XM Cloud components powered by your local library,
- consistent mapping from content fields to component props,
- and a path to keep Storybook and XM Cloud in sync over time.
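The “consistent mapping from content fields to component props” deliverable usually lives in a small mapper layer. A sketch, with hypothetical field and prop names: the mapper, not the component, absorbs data that Experience Edge can legitimately omit, so the same component still runs unchanged in Storybook.

```typescript
// Sitecore-shaped input (fields may be missing) — names are illustrative.
type TextField = { value?: string };
type LinkField = { value?: { href?: string; text?: string } };

interface HeroBannerSitecoreFields {
  title?: TextField;
  ctaLink?: LinkField;
}

// Plain props the local Storybook component already consumes.
export interface HeroBannerProps {
  title: string;
  cta?: { href: string; label: string };
}

// Defensive mapping: missing optional fields become safe defaults here,
// so the component never has to null-check Sitecore wrappers.
export function mapHeroBanner(fields: HeroBannerSitecoreFields): HeroBannerProps {
  const href = fields.ctaLink?.value?.href ?? "";
  return {
    title: fields.title?.value ?? "",
    cta: href ? { href, label: fields.ctaLink?.value?.text ?? href } : undefined,
  };
}
```

Keeping these mappers in one place is also what lets agents regenerate components without touching the XM Cloud wiring.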
Step 4: Extend into Flutter and other heads (Post 4)
Finally, AI Code Generation — Flutter head for XM Cloud (optional) looks at multi-channel:
- reusing the same content schemas and contracts for a Flutter-based head,
- consuming Experience Edge or Content SDK endpoints from mobile,
- and sharing analytics and personalization concepts where it makes sense.
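Whatever the head, the Experience Edge request looks the same. A minimal sketch in TypeScript (a Flutter head would send the identical POST from Dart): the endpoint and `sc_apikey` header follow Sitecore's documented Experience Edge pattern, while the item path and field names are hypothetical.

```typescript
// Assumed Experience Edge delivery endpoint; verify against your tenant's docs.
const EDGE_URL = "https://edge.sitecorecloud.io/api/graphql/v1";

const HERO_QUERY = /* GraphQL */ `
  query HeroBanner($path: String!, $language: String!) {
    item(path: $path, language: $language) {
      name
      title: field(name: "title") { value }
    }
  }
`;

// Builds the request any head (web, Flutter, etc.) would send to Edge.
export function buildEdgeRequest(apiKey: string, path: string, language = "en") {
  return {
    url: EDGE_URL,
    method: "POST" as const,
    headers: { "Content-Type": "application/json", sc_apikey: apiKey },
    body: JSON.stringify({ query: HERO_QUERY, variables: { path, language } }),
  };
}
```

Because the query and variables are just data, both heads can share them (for example, generated from the same component specs) even though the rendering code differs completely.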
Deliverables:
- a blueprint for mobile or multi-channel heads that do not fight your web stack,
- and patterns for sharing content and analytics across platforms.
How this theme connects to the rest of your stack
The AI Code Generation theme sits between:
- AI-Powered Stack, which defines your tools, roles, and Sprint Zero practices, and
- the upcoming themes on AI Workflows and Integrations, which handle editorial copilots, DAM image flows, and enterprise wiring.
You can think of it as the “build the engine” phase:
- your AI stack and Sprint Zero processes decide what to build,
- AI Code Generation decides how you turn that into components and heads, and
- AI Workflows and Integrations decide how those components fit into content operations and enterprise systems.
If you keep that mental model in mind, it becomes easier to decide:
- which prompts belong where,
- which repos should hold which workflows,
- and how to onboard new developers and architects into your AI-powered Sitecore ecosystem.
Next steps
Once you are comfortable with this roadmap, move on to AI Code Generation — From prompt to XM Cloud component matrix and start by:
- pointing the crawler at your current site,
- reviewing the proposed components with your designers,
- and agreeing on the first 15–25 components that will form the backbone of your XM Cloud build.
If your inputs are primarily Figma designs, you can read AI Code Generation — Figma to Next.js components with Figma MCP and Claude Code in parallel as a concrete example of the Figma + MCP path.
From there, the rest of the series will help you turn those decisions into running code, Storybook stories, and XM Cloud components—with AI helping at each step, and humans staying firmly in charge of what ships.