On projects where XM Cloud owns copy and Content Hub DAM owns imagery, the request I hear most from marketing and design teams is simple:
“Can we get more on-brand image variants faster, without losing control of rights and usage?”
Generic AI image tools don’t help much here. What I needed in practice was:
- Content Hub staying the source of truth for all assets and metadata.
- Cloud image models acting as workers behind DAM actions.
- Simple, reviewable flows that creatives trust and legal can live with.
This post is a write-up of a pattern I’ve implemented and iterated on: integrating Content Hub DAM with cloud image models (OpenAI images, Azure OpenAI, Vertex AI, etc.), using a lightweight worker service and a small set of well-defined workflows.
The focus is deliberately narrow:
- generate and edit variants of existing assets,
- keep everything inside Content Hub with rich metadata,
- and surface approved assets cleanly to XM Cloud.
Architecture: DAM in the middle, models at the edge
The architecture that has worked reliably for me keeps Content Hub in the middle and the image models at the edge: the DAM raises events, a small worker service calls the model, and results land back in the DAM for review before XM Cloud ever sees them.
Key points:
- Triggers live in Content Hub (manual actions or metadata changes).
- A worker service receives events via webhook, talks to the image model, and writes back to Content Hub.
- All generated assets live in DAM, with metadata linking them to the source and to the model that created them.
- XM Cloud only ever sees approved assets via existing Content Hub → XM integration or Experience Edge.
The rest of this post walks through how I wired this up in practice:
- Content Hub configuration (fields, actions, events),
- the worker service (TypeScript examples),
- prompt templates and guardrails for image models,
- and how these assets surface in XM Cloud.
Step 1 — Marking assets for AI in Content Hub
I start by making AI flows explicit in the DAM data model.
Fields I add on asset entities
Exactly how you model this depends on your schema, but these have worked well:
- AIRequested (Boolean) — whether an editor has requested AI variants/edits.
- AIProfile (Option set) — which workflow to run, e.g. brand-safe-variant, background-cleanup, social-banner, concept-sketch.
- AIPromptHint (Text) — optional free text from the creative team (“winter theme”, “add subtle lab background”).
- AISourceAsset (Reference) — links a generated asset back to its original.
- AIGenerated (Boolean) — whether this asset is AI-generated or edited.
- AIStatus (Option set) — pending, processing, ready_for_review, approved, rejected.
These fields are the only thing the worker service cares about. It doesn’t need to know about your full schema.
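To keep the worker honest about that contract, I find it helps to mirror these fields in a small TypeScript type. This is a sketch; the names match the fields above, but how they map to your actual Content Hub property names depends on your schema configuration.

// The worker's view of an asset: only the AI-related fields, nothing else.
type AIProfile =
  | "brand-safe-variant"
  | "background-cleanup"
  | "social-banner"
  | "concept-sketch";

type AIStatus =
  | "pending"
  | "processing"
  | "ready_for_review"
  | "approved"
  | "rejected";

interface AIAssetFields {
  AIRequested: boolean;   // an editor asked for AI variants/edits
  AIProfile?: AIProfile;  // which workflow to run
  AIPromptHint?: string;  // optional free text from the creative team
  AISourceAsset?: number; // id of the original asset (set on generated assets)
  AIGenerated: boolean;   // true for AI-generated or AI-edited assets
  AIStatus: AIStatus;     // lifecycle state used by the review queue
}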
How the workflow is triggered
I’ve used two patterns successfully:
- Manual action:
  - Add a Content Hub action: “Request AI variants”.
  - The action sets AIRequested = true and AIProfile to one of the allowed values.
  - A rule or webhook fires on AIRequested changes.
- Metadata rule:
  - When an asset is added to a specific collection (e.g., “AI Variants Needed”), a rule sets AIRequested = true.
  - Again, a webhook/event picks this up.
This explicit flagging keeps the system auditable: someone always chose to involve AI, and we can see who and when.
Step 2 — Webhook → worker service
On the integration side, I run a small service (usually Node/TypeScript for this kind of glue code) with an endpoint that Content Hub calls whenever AIRequested flips to true.
Example webhook payload (simplified)
Your exact payload will differ based on how you configure the webhook, but conceptually I expect something like:
{
"event": "AssetAIRequested",
"entityId": 12345,
"fields": {
"AIRequested": true,
"AIProfile": "brand-safe-variant",
"AIPromptHint": "Add subtle winter theme"
},
"user": {
"id": 678,
"name": "jane.doe@company.com"
}
}
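Because the exact payload depends on how the webhook is configured, the worker validates the body before doing any work. A minimal sketch against the simplified shape above; the express handler below could call this on req.body before destructuring it:

interface AIRequestWebhookPayload {
  event: string;
  entityId: number;
  fields: {
    AIRequested: boolean;
    AIProfile: string;
    AIPromptHint?: string;
  };
  user?: { id: number; name: string };
}

// Narrow an unknown request body to the payload shape the worker expects,
// rejecting anything that lacks the fields it needs to do its job.
function parseWebhookPayload(body: unknown): AIRequestWebhookPayload {
  const payload = body as Partial<AIRequestWebhookPayload> | null;
  if (
    typeof payload?.entityId !== "number" ||
    payload?.fields?.AIRequested !== true ||
    typeof payload?.fields?.AIProfile !== "string"
  ) {
    throw new Error("Unexpected webhook payload for AI image request");
  }
  return payload as AIRequestWebhookPayload;
}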
Worker service skeleton (TypeScript)
import express from "express";
import { processAssetAIRequest } from "./imageWorker";
const app = express();
app.use(express.json());
app.post("/webhooks/content-hub/ai-image", async (req, res) => {
try {
const { entityId, fields } = req.body;
await processAssetAIRequest({
assetId: entityId,
profile: fields.AIProfile,
promptHint: fields.AIPromptHint,
});
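// NOTE: awaiting the full generation inline keeps this example simple;
// in practice I enqueue the job and return 202 immediately so the
// webhook call never times out on long-running image generation.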
res.status(202).send({ status: "accepted" });
} catch (err) {
console.error("AI image webhook failed", err);
res.status(500).send({ error: "internal_error" });
}
});
app.listen(3000, () => {
console.log("AI image worker listening on :3000");
});
Behind processAssetAIRequest is where the interesting work happens: download the asset, build the prompt, call the model, and push the results back into Content Hub.
Step 3 — Fetching the source asset and metadata
For each asset, I need:
- a usable binary (often a high-quality rendition),
- enough metadata to build a prompt (product name, campaign, language, etc.).
Fetching from Content Hub (pseudo-code)
I keep the Content Hub client small and explicit:
import fetch from "node-fetch";
const CONTENT_HUB_URL = process.env.CONTENT_HUB_URL!;
const CONTENT_HUB_TOKEN = process.env.CONTENT_HUB_TOKEN!;
async function getAssetMetadata(assetId: number) {
const resp = await fetch(`${CONTENT_HUB_URL}/api/entities/${assetId}`, {
headers: {
Authorization: `Bearer ${CONTENT_HUB_TOKEN}`,
"Content-Type": "application/json",
},
});
if (!resp.ok) {
throw new Error(`Failed to load asset ${assetId}: ${resp.status}`);
}
return resp.json();
}
async function downloadRendition(assetId: number, renditionName = "Web") {
const resp = await fetch(
`${CONTENT_HUB_URL}/api/entities/${assetId}/renditions/${renditionName}/download`,
{ headers: { Authorization: `Bearer ${CONTENT_HUB_TOKEN}` } }
);
if (!resp.ok) {
throw new Error(`Failed to download rendition: ${resp.status}`);
}
const buffer = await resp.arrayBuffer();
return Buffer.from(buffer);
}
In real implementations I:
- put these calls behind a small contentHubClient module,
- add retries and better error handling (see the sketch after this list),
- and keep auth concerns (tokens, keys) in environment variables or a secret store.
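As a concrete example of the retries point, here is the kind of small helper I wrap around those fetch calls — a sketch only, with arbitrary defaults for attempts and backoff:

// Retry a fetch-style call with exponential backoff on thrown errors
// and on 429/5xx responses; any other response is returned as-is.
async function withRetries<T extends { status: number }>(
  call: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      const resp = await call();
      if (resp.status !== 429 && resp.status < 500) {
        return resp;
      }
      lastError = new Error(`Transient HTTP ${resp.status}`);
    } catch (err) {
      lastError = err;
    }
    // Exponential backoff: 500ms, 1s, 2s, ...
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
  throw lastError;
}

// Usage inside getAssetMetadata / downloadRendition:
//   const resp = await withRetries(() => fetch(url, { headers }));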
Step 4 — Designing prompts and calling the image model
Prompt design matters more than people expect. I had better results once I constrained prompts and grounded them in DAM metadata instead of relying on free-form inputs.
Prompt templates that worked
For brand-safe variants of existing assets:
Task: Generate on-brand variants of an existing marketing image.
Brand:
- Name: {{ brandName }}
- Colors: {{ brandColors }}
- Style: {{ brandStyle }} (e.g., clean, minimal, photography-first)
Asset description:
- Product: {{ productName }}
- Campaign: {{ campaignName }}
- Locale: {{ locale }}
- Current usage: {{ usage }} (e.g., hero banner, social story, blog header)
Requested change:
- Profile: {{ profile }} (e.g., brand-safe-variant, background-cleanup)
- Additional hint from creative: "{{ promptHint }}"
Constraints:
- Do not alter logo proportions or colors.
- Do not add text overlays.
- Respect human subjects: no distortion, no unrealistic body shapes.
- Keep overall composition similar to the original.
For social banner crops:
Task: Create a social-media-ready version of this marketing image.
Required:
- Aspect ratio: {{ aspectRatio }} (e.g., 16:9, 9:16, 1:1)
- Keep the main subject visible and centered.
- Maintain brand color palette and mood.
- No additional text.
Additional guidance:
{{ promptHint }}
I keep these templates in code or configuration, not scattered across prompts in people’s heads.
Calling an image model (OpenAI Images example)
Below is a simplified example using OpenAI’s Images API to create a variant based on an existing image. The same pattern applies to Azure OpenAI or Vertex AI with different endpoints.
import OpenAI, { toFile } from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

export async function generateVariantFromImage(
  sourceImage: Buffer,
  prompt: string
) {
  // images.generate is text-to-image only; to start from an existing image,
  // use the edits endpoint and pass the source binary as a file.
  const response = await openai.images.edit({
    model: "gpt-image-1", // or a specific image model
    image: await toFile(sourceImage, "source.png", { type: "image/png" }),
    prompt,
    size: "1024x1024",
    n: 1,
  });

  const imageBase64 = response.data?.[0]?.b64_json;
  if (!imageBase64) {
    throw new Error("Image model returned no image data");
  }
  return Buffer.from(imageBase64, "base64");
}
I deliberately:
- keep size modest (cost and speed),
- limit n (the number of variants) to avoid overwhelming reviewers,
- and store the model name and parameters as metadata on the generated asset.
Step 5 — Uploading variants back into Content Hub
Once I have a new image buffer, I create a new asset in Content Hub and link it to the original.
Uploading the new asset (pseudo-code)
async function uploadVariantAsset(
originalAssetId: number,
variantBuffer: Buffer,
profile: string,
prompt: string
) {
// 1. Create a new asset entity.
const createResp = await fetch(`${CONTENT_HUB_URL}/api/entities`, {
method: "POST",
headers: {
Authorization: `Bearer ${CONTENT_HUB_TOKEN}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
template: "M.Asset", // adjust to your asset template
properties: {
Title: `AI variant of ${originalAssetId}`,
AIGenerated: true,
AISourceAsset: originalAssetId,
AIProfile: profile,
AIStatus: "ready_for_review",
AIPrompt: prompt,
},
}),
});
const created = await createResp.json();
const variantId = created.id;
// 2. Upload the binary file.
// FormData here is assumed to come from the "form-data" npm package,
// which accepts a Buffer plus a filename; the built-in FormData would
// need a Blob/File instead.
const form = new FormData();
form.append("file", variantBuffer, "variant.png");
await fetch(
`${CONTENT_HUB_URL}/api/entities/${variantId}/files/file`,
{
method: "POST",
headers: { Authorization: `Bearer ${CONTENT_HUB_TOKEN}` },
body: form as any,
}
);
return variantId;
}
In a real implementation I also:
- set folder or collection membership (e.g., “AI Pending Review”),
- generate additional renditions where needed,
- and update the original asset’s metadata to reference its variants.
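For that last point, a hedged sketch of how the worker can record the variant on the original entity. The AIVariants field and the PUT-based property update are assumptions; adjust them to your schema and to the entity update API of your Content Hub version:

// Hypothetical helper: record the new variant on the original asset.
// Assumes an "AIVariants" multi-reference field exists on the asset schema
// and that the entities endpoint accepts property updates via PUT.
async function linkVariantToOriginal(
  originalAssetId: number,
  variantId: number
) {
  const original = await getAssetMetadata(originalAssetId);
  const existingVariants: number[] = original.properties?.AIVariants ?? [];

  const resp = await fetch(`${CONTENT_HUB_URL}/api/entities/${originalAssetId}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${CONTENT_HUB_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      properties: {
        AIVariants: [...existingVariants, variantId],
      },
    }),
  });

  if (!resp.ok) {
    throw new Error(`Failed to link variant ${variantId}: ${resp.status}`);
  }
}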
Putting it together in processAssetAIRequest
export async function processAssetAIRequest(opts: {
assetId: number;
profile: string;
promptHint?: string;
}) {
const meta = await getAssetMetadata(opts.assetId);
const image = await downloadRendition(opts.assetId, "Master");
const prompt = buildPromptFromMetadata({
profile: opts.profile,
hint: opts.promptHint,
metadata: meta,
});
const variantBuffer = await generateVariantFromImage(image, prompt);
const variantId = await uploadVariantAsset(
opts.assetId,
variantBuffer,
opts.profile,
prompt
);
console.log(`Created AI variant ${variantId} for asset ${opts.assetId}`);
}
buildPromptFromMetadata is just string assembly based on templates like the ones earlier.
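For completeness, here is roughly what that assembly can look like for the brand-safe-variant template. The metadata property names (productName, campaignName, locale) are placeholders for whatever your schema actually exposes:

interface PromptInput {
  profile: string;
  hint?: string;
  metadata: Record<string, any>; // the entity JSON from getAssetMetadata
}

// Assemble a prompt from the template for the requested profile.
// Property names below are illustrative; map them to your real schema.
export function buildPromptFromMetadata({ profile, hint, metadata }: PromptInput): string {
  const props = metadata.properties ?? {};
  return [
    "Task: Generate on-brand variants of an existing marketing image.",
    "",
    "Asset description:",
    `- Product: ${props.productName ?? "unknown"}`,
    `- Campaign: ${props.campaignName ?? "unknown"}`,
    `- Locale: ${props.locale ?? "en"}`,
    "",
    "Requested change:",
    `- Profile: ${profile}`,
    `- Additional hint from creative: "${hint ?? "none"}"`,
    "",
    "Constraints:",
    "- Do not alter logo proportions or colors.",
    "- Do not add text overlays.",
    "- Keep overall composition similar to the original.",
  ].join("\n");
}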
Step 6 — Review and approvals in Content Hub
I never let AI-generated images go straight to production.
Review queues
In Content Hub I typically:
- create a collection or saved search for AIGenerated = true AND AIStatus = "ready_for_review",
- add a dedicated “AI Review” page with:
  - side-by-side view of original and variant,
  - key metadata (profile, prompt, model),
  - actions: Approve, Reject, Request changes.
Approving an asset:
- sets AIStatus = "approved",
- optionally moves it to a production-ready collection,
- and unblocks downstream systems from using it.
Rejecting an asset:
- sets AIStatus = "rejected",
- can optionally log a reason (e.g., “off-brand”, “visual artifacts”),
- and may feed back into prompt/template tuning.
Rights and watermarking
Depending on your provider:
- enable watermarking or provenance features (e.g., SynthID-style where available),
- ensure rights and usage metadata clearly indicate rights_source = ai_generated,
- and align with legal on where such assets can and cannot be used (e.g., internal vs public, hero vs supporting placements).
Step 7 — Surfacing approved assets to XM Cloud
Once assets are approved, they flow to XM Cloud the same way any other asset would:
- via Content Hub → XM Cloud integration, or
- via Experience Edge for Content Hub, or
- via a custom integration that synchronizes asset references into XM Cloud item fields.
Example: consuming assets via Experience Edge in a Next.js head
In an XM Cloud + Next.js head, a typical pattern is:
import { gql } from "@apollo/client";
import { client } from "../lib/edgeClient";
const QUERY = gql`
query PageAssets($pageId: String!) {
item(path: $pageId, language: "en") {
id
field(name: "HeroImage") {
jsonValue
}
}
}
`;
export async function getHeroImage(pageId: string) {
const { data } = await client.query({ query: QUERY, variables: { pageId } });
const heroField = data.item.field;
// Assuming the field is a reference to a Content Hub asset or delivery URL.
return heroField.jsonValue;
}
From the component’s perspective there is nothing special about an AI-generated asset—it’s just another DAM-backed image with the right metadata and renditions.
The difference is entirely in the workflow that created and approved it.
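To make the consumption side concrete, here is a minimal component sketch that renders the hero field. It assumes an App Router server component, a hypothetical module exporting getHeroImage from above, and that jsonValue carries a src and alt in the usual image-field shape; adjust to however your integration stores the asset reference or delivery URL.

import { getHeroImage } from "../lib/heroImage"; // hypothetical module wrapping the query above

type HeroImageValue = { src?: string; alt?: string };

// Renders the hero image for a page; an AI-generated asset renders exactly
// like any other DAM-backed image here.
export default async function HeroImage({ pageId }: { pageId: string }) {
  const value = (await getHeroImage(pageId)) as HeroImageValue | null;
  if (!value?.src) {
    return null;
  }
  return <img src={value.src} alt={value.alt ?? ""} loading="lazy" />;
}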
How this ties into broader AI workflows
This DAM-centric workflow slots neatly into the rest of the AI Workflows theme:
- Editorial copilots (for XM Cloud pages) can surface available AI variants from Content Hub as suggestions for hero images or inline visuals.
- Content operations pipelines (with LangGraph-style agents) can treat image generation and editing as additional steps in a larger flow that also covers moderation, translation, and refinement.
- CDP/Personalize-driven optimization loops can flag underperforming creatives and automatically open “generate new variants” tasks that feed into this same pipeline.
The principles stay the same:
- Content Hub remains the source of truth for assets.
- AI models are workers, not decision-makers.
- Human approvals and clear metadata keep brand, legal, and compliance teams comfortable.
Lessons learned
From getting this working end-to-end on real projects, a few guidelines have stuck:
- Keep flows small and explicit.
  “Generate a banner variant” is manageable; “generate all campaign imagery” is not.
- Make AI involvement obvious in the UI.
  Editors should always know which assets are AI-generated or edited.
- Version prompts and profiles like code.
  Store them in Git, review changes, and roll back if needed.
- Start with internal or low-risk use cases.
  Internal dashboards, blog headers, and non-regulated content are good early targets.
- Measure cost and value.
  Track which workflows actually save time or improve performance, and prune the rest.
Useful links
- Sitecore Content Hub docs — https://doc.sitecore.com/ch/en/users/content-hub/content-hub.html
- Experience Edge for Content Hub APIs — https://doc.sitecore.com/ch/en/developers/content-hub/experience-edge-for-content-hub-apis.html
- Sitecore XM Cloud docs — https://doc.sitecore.com/xmc/en/home.html
- OpenAI Images API — https://platform.openai.com/docs/guides/images
- Azure OpenAI Service — https://learn.microsoft.com/azure/ai-services/openai/
Related posts
- src/content/blog/ai_workflows_series_overview_and_roadmap.md
- src/content/blog/ai_workflows_editorial_copilot_for_xm_cloud_pages_on_your_data_ai.md
- src/content/blog/ai_workflows_content_operations_pipelines_with_langgraph_style_agents.md
- src/content/blog/ai_powered_stack_series_overview.md