On my XM Cloud projects, editors—and often I, wearing an editor hat as a company of one—are under constant pressure to:
- keep content fresh across many pages and locales,
- maintain consistent tone and terminology,
- and respond to SEO and campaign needs quickly.
At the same time, content is spread across:
- XM Cloud items and partials,
- internal style guides and brand docs,
- product documentation and legal constraints.
An editorial copilot helps me by:
- generating draft copy,
- rewriting existing text for clarity, tone, or length,
- proposing localized or variant versions,
- and calling out inconsistencies with your own guidelines.
But it has to be grounded and governed:
- grounded in your own content and docs, not generic web data;
- governed by approvals, audit trails, and PII-safe processing.
This post describes how I build such a copilot for XM Cloud using Azure OpenAI “On your data” or similar technology.
At a high level, the copilot pairs an editor-facing surface (inline or sidecar) with a retrieval-grounded model backend; the sections below walk through each part.
How I think about copilot architecture
I surface an editorial copilot in two main ways:
- Inline inside the editing interface (a button next to a field in XM Cloud pages or a custom headless editor).
- Sidecar web app or extension that fetches content and proposes rewrites out-of-band.
Inline integration (tight editor experience, higher coupling)
When I go inline:
- editors see suggestions right where they edit,
- there are fewer context switches,
- and it is easier to tie approvals to specific fields.
The trade-offs are:
- I need UI customization and tighter coupling to XM Cloud’s release cycle,
- it is harder to evolve the copilot independently,
- and I have to stay within the platform’s extensibility model.
Sidecar service (looser coupling, more flexibility)
When I build a sidecar:
- I can iterate independently (deployed as a separate app),
- I can support multiple content systems as sources,
- and I can design a richer UI (batch operations, dashboards).
The trade-offs:
- editors may need to copy and paste or rely on an integration to sync changes back,
- and there are more moving parts to secure and monitor.
Most often I start with a sidecar and later embed specific flows inline once patterns stabilize.
How I ground the copilot with “on your data” AI
The goal is to make suggestions feel like the brand, not like a generic AI.
Building the knowledge base
I populate an Azure AI Search index (or equivalent) with:
- published and high-quality draft content from XM Cloud,
- your brand and style guides,
- glossary and terminology documents,
- legal and compliance guidelines,
- examples of “good” content (reference pages, campaigns).
For sensitive internal documents I keep:
- a private index,
- access restricted via Azure role-based access control (RBAC) and network rules.
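Before uploading to the index, I split long documents into overlapping chunks so retrieval returns focused passages. A minimal sketch of that step, assuming illustrative chunk sizes and a simple document shape (`id`, `content`, `sourceType`, `locale` are field names I chose for this example, not a required schema):

```typescript
// Sketch: split source text into overlapping chunks before uploading the
// resulting documents to an Azure AI Search index. Sizes are assumptions.
type SearchDoc = {
  id: string;
  content: string;
  sourceType: string; // e.g. "brand-guide", "xmcloud-item"
  locale: string;
};

export function chunkText(
  text: string,
  maxChars = 1000,
  overlap = 100
): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + maxChars));
    if (start + maxChars >= text.length) break;
    // Step forward, keeping `overlap` characters of shared context.
    start += maxChars - overlap;
  }
  return chunks;
}

export function toSearchDocuments(
  docId: string,
  text: string,
  sourceType: string,
  locale: string
): SearchDoc[] {
  return chunkText(text).map((content, i) => ({
    id: `${docId}-${i}`,
    content,
    sourceType,
    locale,
  }));
}
```

The overlap keeps a sentence that straddles a chunk boundary retrievable from either side; tune the sizes to your embedding or search configuration.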
Configuring Azure OpenAI “On your data”
Using Azure OpenAI and AI Search, I configure a chat or completions endpoint that:
- retrieves chunks from my index,
- passes them to the model as context,
- and returns suggested rewrites or variants with citations.
Key configuration points for me are:
- index selection and filters (for example filter by locale, brand, or product),
- maximum number of documents and tokens,
- and a strict requirement for citations in responses.
That way, when an editor asks for a rewrite or translation, the model grounds its answer in our own material.
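A sketch of how I shape the request body for the "On your data" data source, as a pure builder function. The `data_sources` / `azure_search` structure follows the Azure OpenAI extension; the endpoint, index name, and filter expression are placeholders to adapt:

```typescript
// Sketch: build a chat completions request body that grounds answers in an
// Azure AI Search index via "On your data". Values are illustrative.
interface GroundedRequestOptions {
  searchEndpoint: string;
  indexName: string;
  locale?: string; // used to filter retrieved chunks by locale
  topNDocuments?: number;
}

export function buildGroundedRequest(
  userPrompt: string,
  opts: GroundedRequestOptions
) {
  return {
    messages: [
      {
        role: "system",
        content:
          "You are an editorial assistant. Ground every suggestion in the " +
          "retrieved documents and include citations.",
      },
      { role: "user", content: userPrompt },
    ],
    data_sources: [
      {
        type: "azure_search",
        parameters: {
          endpoint: opts.searchEndpoint,
          index_name: opts.indexName,
          // Restrict retrieval to the editor's locale, if provided.
          filter: opts.locale ? `locale eq '${opts.locale}'` : undefined,
          top_n_documents: opts.topNDocuments ?? 5,
        },
      },
    ],
  };
}
```

Keeping this in one builder makes index selection, filters, and document limits easy to version-control alongside the prompts.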
How I fetch XM Cloud content into the copilot
To rewrite XM Cloud content, I fetch it in a way that:
- preserves context (page, component, field),
- is safe and does not leak secrets,
- and is easy to apply back after approval.
Using Experience Edge or Content SDK for read access
For read operations I use Experience Edge (Preview or Delivery) or the Content SDK to fetch:
- item fields and values,
- component hierarchies,
- relevant metadata (template, path, languages).
In sidecar mode the copilot can:
- accept a page URL or item identifier,
- fetch the relevant content via Experience Edge,
- and display fields in a structured interface (for example title, summary, body, calls to action).
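A minimal sketch of the read path against Experience Edge GraphQL. The endpoint URL, the `sc_apikey` header, and the field names (`Title`, `Summary`, `Body`) are assumptions for illustration; adjust to your tenant and templates:

```typescript
// Sketch: fetch selected fields for an item from Experience Edge.
const EDGE_ENDPOINT = "https://edge.sitecorecloud.io/api/graphql/v1";

export function buildItemQuery(fieldNames: string[]): string {
  const fieldSelections = fieldNames
    .map((n, i) => `f${i}: field(name: "${n}") { value }`)
    .join("\n        ");
  return `
    query EditorialCopilotItem($path: String!, $language: String!) {
      item(path: $path, language: $language) {
        id
        name
        ${fieldSelections}
      }
    }`;
}

export async function fetchItem(
  path: string,
  language: string,
  apiKey: string
) {
  const res = await fetch(EDGE_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      sc_apikey: apiKey, // Experience Edge API key header
    },
    body: JSON.stringify({
      query: buildItemQuery(["Title", "Summary", "Body"]),
      variables: { path, language },
    }),
  });
  return res.json();
}
```

The aliased `field(name: …)` selections keep the query generic across templates, so the sidecar can render whatever fields the editor asked for.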
Deciding how write-back works
For write-back I usually start with manual paste-back: the copilot suggests text and editors paste it into XM Cloud. This is the simplest option to implement and low risk, but slower.
When the pattern proves itself, I move to API-based write-back: the copilot uses XM Cloud APIs (or a backend integration) to update fields after explicit approval. This adds automation at the cost of tighter coupling, and it requires robust permissions and auditing.
With write-back enabled, I always:
- enforce field-level approval (the editor selects which suggestions to apply),
- and log all changes with before and after diff, user, and timestamp.
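The field-level approval gate can be sketched as a small pure function: only explicitly approved, actually-changed suggestions become write-back operations, each carrying its before/after pair for the audit log. The types are illustrative, not an XM Cloud API shape:

```typescript
// Sketch: turn approved suggestions into applied changes with a
// before/after record. Types are illustrative.
interface Suggestion {
  itemId: string;
  fieldName: string;
  current: string;
  suggested: string;
  approved: boolean;
}

interface AppliedChange {
  itemId: string;
  fieldName: string;
  before: string;
  after: string;
  appliedAt: string;
}

export function selectApproved(suggestions: Suggestion[]): AppliedChange[] {
  return suggestions
    // Skip unapproved suggestions and no-op rewrites.
    .filter((s) => s.approved && s.suggested !== s.current)
    .map((s) => ({
      itemId: s.itemId,
      fieldName: s.fieldName,
      before: s.current,
      after: s.suggested,
      appliedAt: new Date().toISOString(),
    }));
  }
```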
How I design prompts and editor flows
The user experience matters as much as the model choice.
Core use cases I start with
I start with a small set of clear, repeatable actions:
- “Rewrite for clarity” (same length, friendlier tone).
- “Shorten to N characters” (for teasers, meta descriptions).
- “Adjust tone to [formal / conversational / technical / beginner-friendly].”
- “Translate to [language] and keep product names in English.”
- “Propose A/B variants for this heading or call to action.”
Each action becomes a button or command with a well-tested prompt template.
Prompt patterns that work for me
For each use case, I define:
- a system prompt that sets:
  - the role (“You are an editorial assistant for Sitecore XM Cloud content”),
  - and constraints (no new claims, no legal guarantees, keep product names and terminology intact);
- a user prompt that includes:
  - the current field value,
  - optionally surrounding context (page description, target persona, search keywords),
  - and explicit instructions (target tone, length, locale).
I ask my coding agent to generate and refine these prompts, then lock them into configuration (YAML, JSON, or code) rather than editing them ad hoc.
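One way to lock prompts into configuration is a small template registry with placeholder substitution. The action names, placeholder syntax, and prompt wording below are my examples, not a fixed schema:

```typescript
// Sketch: prompt templates kept in configuration, rendered per action.
interface PromptAction {
  name: string;
  system: string;
  userTemplate: string; // supports {{text}}, {{tone}}, {{maxChars}} placeholders
}

const ACTIONS: Record<string, PromptAction> = {
  rewriteForClarity: {
    name: "Rewrite for clarity",
    system:
      "You are an editorial assistant for Sitecore XM Cloud content. " +
      "Do not add new claims. Keep product names and terminology intact.",
    userTemplate:
      "Rewrite the following text for clarity, keeping roughly the same " +
      "length and a {{tone}} tone:\n\n{{text}}",
  },
  shorten: {
    name: "Shorten",
    system:
      "You are an editorial assistant for Sitecore XM Cloud content. " +
      "Do not add new claims. Keep product names and terminology intact.",
    userTemplate:
      "Shorten the following to at most {{maxChars}} characters:\n\n{{text}}",
  },
};

export function renderPrompt(
  actionKey: string,
  vars: Record<string, string>
): { system: string; user: string } {
  const action = ACTIONS[actionKey];
  if (!action) throw new Error(`Unknown action: ${actionKey}`);
  const user = action.userTemplate.replace(
    /\{\{(\w+)\}\}/g,
    (_, key) => vars[key] ?? ""
  );
  return { system: action.system, user };
}
```

Each UI button maps to one `actionKey`, so changing a prompt is a reviewed configuration change rather than an ad-hoc edit.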
UI patterns I like
UI patterns that work well for editors:
- side-by-side view of current versus suggested content,
- inline diffing (highlighted insertions and deletions),
- quick buttons to accept, edit, or discard suggestions,
- and a small indicator of which sources were used (for example “Based on: Brand Guide v3, Product Docs v5.2”).
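The inline diff view can be backed by a word-level diff. A minimal longest-common-subsequence sketch is below; in practice a library such as diff-match-patch may be preferable, but this shows the shape of the output the UI consumes:

```typescript
// Sketch: word-level diff via a longest-common-subsequence table.
type DiffOp = { op: "equal" | "insert" | "delete"; word: string };

export function wordDiff(before: string, after: string): DiffOp[] {
  const a = before.split(/\s+/).filter(Boolean);
  const b = after.split(/\s+/).filter(Boolean);
  // dp[i][j] = LCS length of a[i..] and b[j..]
  const dp: number[][] = Array.from({ length: a.length + 1 }, () =>
    new Array(b.length + 1).fill(0)
  );
  for (let i = a.length - 1; i >= 0; i--) {
    for (let j = b.length - 1; j >= 0; j--) {
      dp[i][j] =
        a[i] === b[j]
          ? dp[i + 1][j + 1] + 1
          : Math.max(dp[i + 1][j], dp[i][j + 1]);
    }
  }
  // Walk the table, emitting equal/delete/insert operations in order.
  const ops: DiffOp[] = [];
  let i = 0;
  let j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] === b[j]) {
      ops.push({ op: "equal", word: a[i] });
      i++; j++;
    } else if (dp[i + 1][j] >= dp[i][j + 1]) {
      ops.push({ op: "delete", word: a[i] });
      i++;
    } else {
      ops.push({ op: "insert", word: b[j] });
      j++;
    }
  }
  while (i < a.length) ops.push({ op: "delete", word: a[i++] });
  while (j < b.length) ops.push({ op: "insert", word: b[j++] });
  return ops;
}
```

The UI then renders `delete` ops as strikethrough and `insert` ops as highlights in the side-by-side view.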
Approvals, audit, and handling personal data
An editorial copilot touches content that may include personal data or sensitive context, especially in business-to-business or logged-in scenarios.
Handling personal data safely
I try to:
- avoid sending raw personal data to models wherever possible,
- redact or tokenize sensitive fields before sending content to the copilot,
- and ensure logs or traces are stored securely and comply with organization policies.
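Redaction can be as simple as replacing known patterns with tokens before the model call and restoring them after approval. The regexes below are illustrative, not an exhaustive PII detector; a production setup would layer in a proper PII service:

```typescript
// Sketch: tokenize common personal-data patterns before sending text to
// the model, keeping a map so approved text can be restored afterwards.
export function redact(text: string): {
  redacted: string;
  tokens: Map<string, string>;
} {
  const tokens = new Map<string, string>();
  let counter = 0;
  const patterns: [string, RegExp][] = [
    ["EMAIL", /[\w.+-]+@[\w-]+\.[\w.]+/g],
    ["PHONE", /\+?\d[\d\s().-]{7,}\d/g],
  ];
  let redacted = text;
  for (const [label, re] of patterns) {
    redacted = redacted.replace(re, (match) => {
      const token = `[[${label}_${counter++}]]`;
      tokens.set(token, match);
      return token;
    });
  }
  return { redacted, tokens };
}

export function restore(text: string, tokens: Map<string, string>): string {
  let out = text;
  for (const [token, original] of tokens) {
    out = out.split(token).join(original);
  }
  return out;
}
```

Because the model only ever sees tokens like `[[EMAIL_0]]`, the raw values never leave the backend, and the round-trip restores them after the editor approves the rewrite.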
Azure OpenAI “On your data” can be configured with:
- private network access,
- strict role-based access control,
- and logging limited to what is needed for auditing.
Approvals and audit trails
For each suggestion I record:
- who requested it,
- what input text and prompt were used,
- which sources were retrieved from the index,
- the model version,
- and what was ultimately applied to XM Cloud.
I store this in:
- a database table,
- or a structured log that I can query and visualize.
Where possible I link entries back to:
- item IDs and field names in XM Cloud,
- and user IDs in the identity provider.
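As a structured log entry, that audit record might look like the sketch below. The field names are illustrative; store whatever your compliance requirements dictate:

```typescript
// Sketch: one audit entry per copilot suggestion. Field names illustrative.
interface CopilotAuditEntry {
  requestedBy: string;        // user ID in the identity provider
  itemId: string;             // XM Cloud item ID
  fieldName: string;
  inputText: string;
  prompt: string;
  retrievedSources: string[]; // e.g. ["Brand Guide v3", "Product Docs v5.2"]
  modelVersion: string;
  appliedText: string | null; // null if the suggestion was discarded
  timestamp: string;
}

export function auditEntry(
  partial: Omit<CopilotAuditEntry, "timestamp">
): CopilotAuditEntry {
  return { ...partial, timestamp: new Date().toISOString() };
}
```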
How I roll out an editorial copilot
I try to start small and iterate.
Piloting with a narrow scope
In early pilots I:
- choose a limited set of pages (for example blog posts or marketing landing pages),
- limit actions (for example rewrites and shorten, no translation at first),
- and work with a small group of editors who are comfortable experimenting.
I collect:
- qualitative feedback (what helped, what confused),
- and quantitative metrics (time saved, number of suggestions accepted).
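The acceptance metric falls straight out of the audit log. A sketch, assuming a simplified log shape where `applied` means the suggestion was ultimately written back:

```typescript
// Sketch: per-action acceptance rate from simplified suggestion logs.
interface SuggestionLog {
  action: string; // e.g. "rewriteForClarity"
  applied: boolean;
}

export function acceptanceRates(
  logs: SuggestionLog[]
): Record<string, number> {
  const totals: Record<string, { applied: number; total: number }> = {};
  for (const log of logs) {
    const t = (totals[log.action] ??= { applied: 0, total: 0 });
    t.total++;
    if (log.applied) t.applied++;
  }
  const rates: Record<string, number> = {};
  for (const [action, t] of Object.entries(totals)) {
    rates[action] = t.applied / t.total;
  }
  return rates;
}
```

A persistently low rate for one action usually means its prompt template, not the editors, needs work.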
Expanding gradually
Once patterns stabilize I:
- add more actions (for example localized variants),
- expand to more templates or sections of the site,
- and integrate closer with XM Cloud (for example inline buttons or context menus).
I keep prompts, retrieval settings, and UI behavior under version control so I can roll forward and back safely.
How this copilot connects to other AI workflows
The editorial copilot is just one piece of the AI Workflows theme.
I often:
- reuse the same “on your data” foundation to:
  - explain Content Hub asset usage,
  - assist with image selection or captioning;
- feed approved copy into Content Hub pipelines for translation or enrichment;
- plug copilot logs into my content operations pipeline as signals for:
  - which fields are most frequently rewritten,
  - which guidelines cause friction,
  - and where I might need better base content.
In the related post AI Workflows — Image generation and editing from Content Hub DAM, I shift focus from text to imagery, but the same principles apply: grounding, guardrails, and human sign-off.
Useful links
- Azure OpenAI “On your data” concepts and configuration — https://learn.microsoft.com/azure/ai-services/openai/how-to/use-your-data
- Sitecore XM Cloud Experience Edge best practices — https://doc.sitecore.com/xmc/en/developers/xm-cloud/experience-edge-best-practices.html
- Sitecore XM Cloud components overview — https://doc.sitecore.com/xmc/en/users/xm-cloud/components.html
- Sitecore Content SDK for XM Cloud — https://doc.sitecore.com/xmc/en/developers/content-sdk/sitecore-content-sdk-for-xm-cloud.html