
AI Workflows — Editorial copilot for XM Cloud pages with “on your data” AI


On my XM Cloud projects, editors—and often I, wearing an editor hat as a company of one—are under constant pressure to:

At the same time, content is spread across:

An editorial copilot helps me by:

But it has to be grounded and governed:

This post describes how I build such a copilot for XM Cloud using Azure OpenAI “On your data” or similar technology.

At a high level, the shape of that copilot in my setups looks like this:

XM Cloud (pages and items)
  → Experience Edge / Content SDK
  → Editorial copilot (sidecar app)
  → Azure OpenAI ('On your data' index)
  → Editor experience (field suggestions)

How I think about copilot architecture

I surface an editorial copilot in two main ways:

Inline integration (tight editor experience, higher coupling)

When I go inline:

The trade-offs are:

Sidecar service (looser coupling, more flexibility)

When I build a sidecar:

The trade-offs:

Most often I start with a sidecar and later embed specific flows inline once patterns stabilize.
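The sidecar shape above can be sketched as a thin orchestration layer with injected dependencies, which keeps it testable and loosely coupled. All names here (`fetchField`, `callModel`, `suggest`) are my own illustration, not an official API:

```typescript
// Minimal sidecar orchestration sketch: fetch a field, ask the model,
// return a suggestion for the editor to review. Names are illustrative.
type FetchField = (itemId: string, fieldName: string) => Promise<string>;
type CallModel = (prompt: string) => Promise<string>;

interface Suggestion {
  itemId: string;
  fieldName: string;
  original: string;
  suggested: string;
}

async function suggest(
  deps: { fetchField: FetchField; callModel: CallModel },
  itemId: string,
  fieldName: string,
  instruction: string
): Promise<Suggestion> {
  // 1. Read the current field value (via Experience Edge / Content SDK).
  const original = await deps.fetchField(itemId, fieldName);
  // 2. Ground the request: the instruction plus the current content.
  const prompt = `${instruction}\n\nCurrent content:\n${original}`;
  // 3. The model proposes; the editor disposes (suggestion only, no write).
  const suggested = await deps.callModel(prompt);
  return { itemId, fieldName, original, suggested };
}
```

Because the dependencies are injected, the same `suggest` flow can later be embedded inline without changing its logic.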


How I ground the copilot with “on your data” AI

The goal is to make suggestions feel like the brand, not like a generic AI.

Building the knowledge base

I populate an Azure AI Search index (or equivalent) with:

For sensitive internal documents I keep:

Configuring Azure OpenAI “On your data”

Using Azure OpenAI and AI Search, I configure a chat or completions endpoint that:

Key configuration points for me are:

That way, when an editor asks for a rewrite or translation, the model grounds its answer in our own material.
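In current API versions this grounding is expressed by attaching a `data_sources` entry of type `azure_search` to a chat-completions request. A sketch of the request body I send — endpoint, key, and index name are placeholders, and the exact shape depends on your API version, so verify against the Azure OpenAI reference:

```typescript
// Build a chat-completions body with an Azure AI Search grounding source.
// Shape follows the "On your data" request format (data_sources / azure_search);
// all field values are placeholders.
function buildGroundedRequest(userMessage: string) {
  return {
    messages: [
      {
        role: "system",
        content: "You are an editorial copilot. Use only the provided brand material.",
      },
      { role: "user", content: userMessage },
    ],
    temperature: 0.3, // keep rewrites close to the source tone
    data_sources: [
      {
        type: "azure_search",
        parameters: {
          endpoint: "https://example.search.windows.net", // placeholder
          index_name: "brand-content",                    // placeholder
          authentication: { type: "api_key", key: "<search-key>" },
          in_scope: true,     // answer only from the index
          strictness: 3,      // relevance filtering, 1 (loose) to 5 (strict)
          top_n_documents: 5, // retrieved chunks per request
        },
      },
    ],
  };
}
```

`in_scope: true` is what keeps answers pinned to our own material rather than general model knowledge.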


How I fetch XM Cloud content into the copilot

To rewrite XM Cloud content, I fetch it in a way that:

Using Experience Edge or Content SDK for read access

For read operations I use Experience Edge (Preview or Delivery) or the Content SDK to fetch:

In sidecar mode the copilot can:
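For example, a single-field read against the Experience Edge delivery endpoint can look like this. The item path, language, and field name are placeholders; the endpoint URL and `sc_apikey` header follow the standard Experience Edge setup:

```typescript
// Fetch one field value from Experience Edge via its GraphQL delivery API.
// Path, language, and field name below are illustrative placeholders.
const EDGE_ENDPOINT = "https://edge.sitecore.cloud/api/graphql/v1";

function buildItemQuery(path: string, language: string, fieldName: string) {
  return {
    query: `query GetField($path: String!, $language: String!, $field: String!) {
      item(path: $path, language: $language) {
        id
        name
        field(name: $field) { value }
      }
    }`,
    variables: { path, language, field: fieldName },
  };
}

async function fetchFieldValue(
  apiKey: string,
  path: string,
  fieldName: string
): Promise<string> {
  const res = await fetch(EDGE_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json", sc_apikey: apiKey },
    body: JSON.stringify(buildItemQuery(path, "en", fieldName)),
  });
  const json = await res.json();
  // Fall back to an empty string when the item or field is missing.
  return json?.data?.item?.field?.value ?? "";
}
```

Keeping the query builder separate from the HTTP call makes the read path easy to test without touching Experience Edge.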

Deciding how write-back works

For write-back I usually start with:

When the pattern proves itself, I move to API-based write-back where:

With write-back enabled, I always:


How I design prompts and editor flows

The user experience matters as much as the model choice.

Core use cases I start with

I start with a small set of clear, repeatable actions:

Each action becomes a button or command with a well-tested prompt template.

Prompt patterns that work for me

For each use case, I define:

I ask my coding agent to generate and refine these prompts, then lock them into configuration (YAML, JSON, or code) rather than editing them ad hoc.
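Concretely, "locking prompts into configuration" can be as simple as a keyed template map plus a renderer that fails loudly on missing variables. A minimal sketch — the template text and action names are my own examples:

```typescript
// Prompt templates as configuration: one entry per editor action.
const PROMPTS: Record<string, string> = {
  shorten:
    "Shorten the following text to at most {maxWords} words, keeping the brand tone:\n\n{content}",
  translate:
    "Translate the following text into {language}, preserving formatting:\n\n{content}",
};

// Fill {placeholders}; throw on any placeholder the caller did not supply,
// so a config typo fails loudly instead of leaking "{content}" to the model.
function renderPrompt(action: string, vars: Record<string, string>): string {
  const template = PROMPTS[action];
  if (!template) throw new Error(`Unknown action: ${action}`);
  return template.replace(/\{(\w+)\}/g, (_, name) => {
    if (!(name in vars)) throw new Error(`Missing variable: ${name}`);
    return vars[name];
  });
}
```

Each UI button then maps one-to-one onto an action key, and prompt changes become reviewable diffs in version control.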

UI patterns I like

UI patterns that work well for editors:


Approvals, audit, and handling personal data

An editorial copilot touches content that may include personal data or sensitive context, especially in business-to-business or logged-in scenarios.

Handling personal data safely

I try to:

Azure OpenAI “On your data” can be configured with:

Approvals and audit trails

For each suggestion I:

I store this in:

Where possible I link entries back to:
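The entries themselves can be small, append-only records that hash the before and after values, so changes can be verified later without storing full content in the log. A sketch — the field names are my own, not a fixed schema:

```typescript
import { createHash } from "node:crypto";

// Append-only audit record for one suggestion: who, what, when,
// plus content hashes so the applied diff can be verified afterwards.
interface AuditEntry {
  itemId: string;
  fieldName: string;
  action: string;
  user: string;
  timestamp: string;
  beforeHash: string;
  afterHash: string;
  accepted: boolean;
}

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function auditEntry(
  itemId: string,
  fieldName: string,
  action: string,
  user: string,
  before: string,
  after: string,
  accepted: boolean
): AuditEntry {
  return {
    itemId,
    fieldName,
    action,
    user,
    timestamp: new Date().toISOString(),
    beforeHash: sha256(before),
    afterHash: sha256(after),
    accepted,
  };
}
```

Storing hashes rather than raw values also keeps any personal data out of the audit store itself.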


How I roll out an editorial copilot

I try to start small and iterate.

Piloting with a narrow scope

In early pilots I:

I collect:

Expanding gradually

Once patterns stabilize I:

I keep prompts, retrieval settings, and UI behavior under version control so I can roll forward and back safely.


How this copilot connects to other AI workflows

The editorial copilot is just one piece of the AI Workflows theme.

I often:

In the related post AI Workflows — Image generation and editing from Content Hub DAM, I shift focus from text to imagery, but the same principles apply: grounding, guardrails, and human sign-off.


