AI Workflows — Image generation and editing from Content Hub DAM

On projects where XM Cloud owns copy and Content Hub DAM owns imagery, the request I hear most from marketing and design teams is simple:

“Can we get more on-brand image variants faster, without losing control of rights and usage?”

Generic AI image tools don’t help much here. What I needed in practice was a flow that keeps the DAM in charge:

- generation requested explicitly on specific assets, by a named editor,
- prompts grounded in the brand and campaign metadata already in Content Hub,
- results stored back in the DAM as linked, reviewable assets, and
- a human approval step before anything reaches XM Cloud or any other channel.

This post is a write-up of a pattern I’ve implemented and iterated on: integrating Content Hub DAM with cloud image models (OpenAI images, Azure OpenAI, Vertex AI, etc.), using a lightweight worker service and a small set of well-defined workflows.

The focus is deliberately narrow: image variants and edits driven from Content Hub DAM, reviewed in Content Hub, and consumed by XM Cloud like any other asset.


Architecture: DAM in the middle, models at the edge

The architecture that has worked reliably for me looks like this:

Editor / Creative
  → Content Hub UI (actions / flags)
  → Content Hub (fires webhook / action)
  → Image Worker Service (Node/TS, .NET, or Python)
  → Cloud Image Model (OpenAI / Azure / Vertex / etc.)
  → Content Hub DAM (new asset / rendition)
  → Review & Approve in Content Hub
  → XM Cloud / Channels

Key points:

- Content Hub stays the system of record: requests, generated variants, prompts, and approval status all live on asset entities.
- The worker service is the only component that talks to image models; Content Hub and XM Cloud never call them directly, so swapping providers is an integration change, not a content change.
- Generated images come back as new, linked assets that must pass review in Content Hub before any channel can use them.

The rest of this post walks through how I wired this up in practice:

1. Marking assets for AI in Content Hub
2. Webhook → worker service
3. Fetching the source asset and metadata
4. Designing prompts and calling the image model
5. Uploading variants back into Content Hub
6. Review and approvals in Content Hub
7. Surfacing approved assets to XM Cloud


Step 1 — Marking assets for AI in Content Hub

I start by making AI flows explicit in the DAM data model.

Fields I add on asset entities

Exactly how you model this depends on your schema, but these have worked well on the source asset:

- AIRequested (boolean): the explicit "generate variants for this asset" flag an editor sets.
- AIProfile (option list): which workflow to run, for example brand-safe-variant or background-cleanup.
- AIPromptHint (free text): an optional hint from the requesting editor, for example "Add subtle winter theme".

And on generated variants:

- AIGenerated, AISourceAsset (link back to the original), AIProfile, AIPrompt, and AIStatus (ready_for_review / approved / rejected) to record provenance and drive review.

These fields are the only thing the worker service cares about. It doesn’t need to know about your full schema.
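For reference, this is roughly the shape the worker sees. The interfaces below are illustrative only; the names simply mirror the fields used in the webhook payload and upload code later in this post.

// Illustrative types: adjust names to your own Content Hub schema.
interface AiRequestFields {
  AIRequested: boolean;   // explicit opt-in flag set by an editor
  AIProfile: string;      // which workflow to run, e.g. "brand-safe-variant"
  AIPromptHint?: string;  // optional free-text hint from creative
}

interface AiVariantFields {
  AIGenerated: true;
  AISourceAsset: number;  // entity id of the original asset
  AIProfile: string;
  AIStatus: "ready_for_review" | "approved" | "rejected";
  AIPrompt: string;       // the full prompt that produced the image
}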

How the workflow is triggered

I’ve used two patterns successfully:

- A trigger in Content Hub that fires an API call action (the webhook) whenever AIRequested changes to true on an asset.
- A scheduled job in the worker service that periodically queries Content Hub for flagged assets, for environments where outbound calls from Content Hub are locked down.

Either way, this explicit flagging keeps the system auditable: someone always chose to involve AI, and we can see who and when.


Step 2 — Webhook → worker service

On the integration side, I run a small service (usually Node/TypeScript for this kind of glue code) with an endpoint that Content Hub calls whenever AIRequested flips to true.

Example webhook payload (simplified)

Your exact payload will differ based on how you configure the webhook, but conceptually I expect something like:

{
  "event": "AssetAIRequested",
  "entityId": 12345,
  "fields": {
    "AIRequested": true,
    "AIProfile": "brand-safe-variant",
    "AIPromptHint": "Add subtle winter theme"
  },
  "user": {
    "id": 678,
    "name": "jane.doe@company.com"
  }
}
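Because the exact shape can drift with webhook configuration, I validate the payload before acting on it. Here is a sketch using zod; the validator choice and schema are mine, not something the integration requires.

import { z } from "zod";

// Illustrative schema matching the simplified payload above; adjust to
// whatever your webhook actually sends.
const aiRequestPayload = z.object({
  event: z.literal("AssetAIRequested"),
  entityId: z.number(),
  fields: z.object({
    AIRequested: z.boolean(),
    AIProfile: z.string(),
    AIPromptHint: z.string().optional(),
  }),
  user: z.object({ id: z.number(), name: z.string() }).optional(),
});

export type AiRequestPayload = z.infer<typeof aiRequestPayload>;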

Worker service skeleton (TypeScript)

import express from "express";
import { processAssetAIRequest } from "./imageWorker";

const app = express();
app.use(express.json());

app.post("/webhooks/content-hub/ai-image", async (req, res) => {
  try {
    const { entityId, fields } = req.body;
    await processAssetAIRequest({
      assetId: entityId,
      profile: fields.AIProfile,
      promptHint: fields.AIPromptHint,
    });
    res.status(202).send({ status: "accepted" });
  } catch (err) {
    console.error("AI image webhook failed", err);
    res.status(500).send({ error: "internal_error" });
  }
});

app.listen(3000, () => {
  console.log("AI image worker listening on :3000");
});

Behind processAssetAIRequest is where the interesting work happens: download asset, build prompts, call the model, and push results back into Content Hub.


Step 3 — Fetching the source asset and metadata

For each asset, I need two things:

- the metadata that feeds the prompt templates (brand, product, campaign, locale, intended usage), and
- a rendition of the binary to send to the image model.

Fetching from Content Hub (pseudo-code)

I keep the Content Hub client small and explicit:

import fetch from "node-fetch";

const CONTENT_HUB_URL = process.env.CONTENT_HUB_URL!;
const CONTENT_HUB_TOKEN = process.env.CONTENT_HUB_TOKEN!;

async function getAssetMetadata(assetId: number) {
  const resp = await fetch(`${CONTENT_HUB_URL}/api/entities/${assetId}`, {
    headers: {
      Authorization: `Bearer ${CONTENT_HUB_TOKEN}`,
      "Content-Type": "application/json",
    },
  });
  if (!resp.ok) {
    throw new Error(`Failed to load asset ${assetId}: ${resp.status}`);
  }
  return resp.json();
}

async function downloadRendition(assetId: number, renditionName = "Web") {
  const resp = await fetch(
    `${CONTENT_HUB_URL}/api/entities/${assetId}/renditions/${renditionName}/download`,
    { headers: { Authorization: `Bearer ${CONTENT_HUB_TOKEN}` } }
  );
  if (!resp.ok) {
    throw new Error(`Failed to download rendition: ${resp.status}`);
  }
  const buffer = await resp.arrayBuffer();
  return Buffer.from(buffer);
}

In real implementations I:

- validate that the call really came from Content Hub before doing any work (sketched below),
- check rights and usage fields on the asset before sending anything to an external model,
- downscale large renditions before they go over the wire, and
- add timeouts and retries around both the Content Hub and model calls.

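A minimal sketch of that validation, assuming the webhook or action on the Content Hub side is configured to send a shared secret header; the header name and mechanism here are assumptions, use whatever your setup supports.

import type { Request, Response, NextFunction } from "express";

// Reject calls that don't carry the shared secret configured on the sender.
export function requireWebhookSecret(req: Request, res: Response, next: NextFunction) {
  const expected = process.env.CONTENT_HUB_WEBHOOK_SECRET;
  const provided = req.header("x-webhook-secret");
  if (!expected || provided !== expected) {
    return res.status(401).send({ error: "unauthorized" });
  }
  next();
}

// Usage: app.post("/webhooks/content-hub/ai-image", requireWebhookSecret, handler);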

Step 4 — Designing prompts and calling the image model

Prompt design matters more than people expect. I had better results once I constrained prompts and grounded them in DAM metadata instead of relying on free-form inputs.

Prompt templates that worked

For brand-safe variants of existing assets:

Task: Generate on-brand variants of an existing marketing image.

Brand:
- Name: {{ brandName }}
- Colors: {{ brandColors }}
- Style: {{ brandStyle }} (e.g., clean, minimal, photography-first)

Asset description:
- Product: {{ productName }}
- Campaign: {{ campaignName }}
- Locale: {{ locale }}
- Current usage: {{ usage }} (e.g., hero banner, social story, blog header)

Requested change:
- Profile: {{ profile }} (e.g., brand-safe-variant, background-cleanup)
- Additional hint from creative: "{{ promptHint }}"

Constraints:
- Do not alter logo proportions or colors.
- Do not add text overlays.
- Respect human subjects: no distortion, no unrealistic body shapes.
- Keep overall composition similar to the original.

For social banner crops:

Task: Create a social-media-ready version of this marketing image.

Required:
- Aspect ratio: {{ aspectRatio }} (e.g., 16:9, 9:16, 1:1)
- Keep the main subject visible and centered.
- Maintain brand color palette and mood.
- No additional text.

Additional guidance:
{{ promptHint }}

I keep these templates in code or configuration, not scattered across prompts in people’s heads.

Calling an image model (OpenAI Images example)

Below is a simplified example using OpenAI’s Images API (the edits endpoint, which takes a source image plus a prompt) to create a variant based on an existing image. The same pattern applies to Azure OpenAI or Vertex AI with different endpoints and SDKs.

import OpenAI, { toFile } from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

export async function generateVariantFromImage(
  sourceImage: Buffer,
  prompt: string
) {
  // images.edit takes a source image plus a prompt; images.generate is
  // text-to-image only and does not accept an input image.
  const response = await openai.images.edit({
    model: "gpt-image-1",
    image: await toFile(sourceImage, "source.png", { type: "image/png" }),
    prompt,
    size: "1024x1024",
    n: 1,
  });

  const imageBase64 = response.data?.[0]?.b64_json;
  if (!imageBase64) {
    throw new Error("Image model returned no image data");
  }
  return Buffer.from(imageBase64, "base64");
}

I deliberately:

- keep the model call behind one small function, so swapping OpenAI for Azure OpenAI or Vertex AI touches a single file (a sketch of that seam is below),
- send the full structured prompt built from DAM metadata, never the raw editor hint on its own, and
- generate a single candidate per request and let editors re-run the action if they want more options.

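To show what that seam looks like, here is a rough sketch; the ImageModelClient interface and module path are mine, not part of any SDK.

// Hypothetical seam between the workflow and the image provider.
// Only this interface is referenced elsewhere; each provider gets its own module.
import { generateVariantFromImage } from "./openaiImages"; // the previous snippet (path assumed)

export interface ImageModelClient {
  generateVariant(sourceImage: Buffer, prompt: string): Promise<Buffer>;
}

// OpenAI-backed implementation: a thin wrapper around generateVariantFromImage.
export const openAiImageClient: ImageModelClient = {
  generateVariant: (sourceImage, prompt) =>
    generateVariantFromImage(sourceImage, prompt),
};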

Step 5 — Uploading variants back into Content Hub

Once I have a new image buffer, I create a new asset in Content Hub and link it to the original.

Uploading the new asset (pseudo-code)

// Uses the form-data package so a Buffer can be appended with a filename.
import FormData from "form-data";

async function uploadVariantAsset(
  originalAssetId: number,
  variantBuffer: Buffer,
  profile: string,
  prompt: string
) {
  // 1. Create a new asset entity.
  const createResp = await fetch(`${CONTENT_HUB_URL}/api/entities`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${CONTENT_HUB_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      template: "M.Asset", // adjust to your asset template
      properties: {
        Title: `AI variant of ${originalAssetId}`,
        AIGenerated: true,
        AISourceAsset: originalAssetId,
        AIProfile: profile,
        AIStatus: "ready_for_review",
        AIPrompt: prompt,
      },
    }),
  });
  const created = await createResp.json();
  const variantId = created.id;

  // 2. Upload the binary file.
  const form = new FormData();
  form.append("file", variantBuffer, "variant.png");

  await fetch(
    `${CONTENT_HUB_URL}/api/entities/${variantId}/files/file`,
    {
      method: "POST",
      headers: { Authorization: `Bearer ${CONTENT_HUB_TOKEN}` },
      body: form as any,
    }
  );

  return variantId;
}

In a real implementation I also:

- reset AIRequested on the source asset so the trigger doesn’t fire again (sketched below),
- copy brand, campaign, and locale metadata from the original onto the variant,
- use a proper relation between variant and source rather than a plain numeric field, and
- record which model produced the image alongside the prompt, for later auditing.

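Resetting the flag is just another entity update. In the same pseudo-code style as the other Content Hub calls in this post; the exact endpoint and body shape depend on your Content Hub setup.

// Assumes fetch, CONTENT_HUB_URL, and CONTENT_HUB_TOKEN from the Step 3 snippet.
// Clears the request flag on the source asset so the workflow doesn't re-trigger.
async function resetAiRequestFlag(assetId: number) {
  const resp = await fetch(`${CONTENT_HUB_URL}/api/entities/${assetId}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${CONTENT_HUB_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      properties: { AIRequested: false },
    }),
  });
  if (!resp.ok) {
    throw new Error(`Failed to reset AIRequested on ${assetId}: ${resp.status}`);
  }
}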
Putting it together in processAssetAIRequest

export async function processAssetAIRequest(opts: {
  assetId: number;
  profile: string;
  promptHint?: string;
}) {
  const meta = await getAssetMetadata(opts.assetId);
  const image = await downloadRendition(opts.assetId, "Master");

  const prompt = buildPromptFromMetadata({
    profile: opts.profile,
    hint: opts.promptHint,
    metadata: meta,
  });

  const variantBuffer = await generateVariantFromImage(image, prompt);

  const variantId = await uploadVariantAsset(
    opts.assetId,
    variantBuffer,
    opts.profile,
    prompt
  );

  console.log(`Created AI variant ${variantId} for asset ${opts.assetId}`);
}

buildPromptFromMetadata is just string assembly based on templates like the ones earlier.
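A minimal sketch of that assembly; the template keys and metadata field names are illustrative and the real templates are the longer ones shown earlier.

// Pick the template for the requested profile and substitute {{ placeholder }}
// values from DAM metadata. Field names here are illustrative.
const PROMPT_TEMPLATES: Record<string, string> = {
  "brand-safe-variant": `Task: Generate on-brand variants of an existing marketing image.
Brand: {{ brandName }} ({{ brandStyle }})
Product: {{ productName }} / Campaign: {{ campaignName }}
Requested change: {{ profile }}. Hint: "{{ promptHint }}"
Constraints: keep composition, logo, and colors intact; no text overlays.`,
};

function buildPromptFromMetadata(opts: {
  profile: string;
  hint?: string;
  metadata: Record<string, unknown>;
}): string {
  const template =
    PROMPT_TEMPLATES[opts.profile] ?? PROMPT_TEMPLATES["brand-safe-variant"];
  const values: Record<string, string> = {
    ...Object.fromEntries(
      Object.entries(opts.metadata).map(([k, v]) => [k, String(v ?? "")])
    ),
    profile: opts.profile,
    promptHint: opts.hint ?? "",
  };
  // Replace each {{ key }} with its value, or an empty string if missing.
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, key) => values[key] ?? "");
}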


Step 6 — Review and approvals in Content Hub

I never let AI-generated images go straight to production.

Review queues

In Content Hub I typically:

- drive the review queue off AIStatus: a saved search or dashboard listing everything in ready_for_review,
- restrict who can change AIStatus so only designated reviewers approve, and
- keep the link to the source asset visible so reviewers can compare variant and original side by side.

Approving an asset:

- the reviewer sets AIStatus to approved and moves the asset through the normal lifecycle, after which it behaves like any other approved DAM asset.

Rejecting an asset:

- the reviewer sets AIStatus to rejected, optionally with a note; rejected variants stay in the DAM for traceability but never reach a channel.

Rights and watermarking

Depending on your provider:

- check the terms around commercial use of generated output before the first asset ships,
- keep the AIGenerated flag and the recorded prompt on the variant so usage can be audited later, and
- consider visibly labelling or watermarking AI-generated previews until they have been approved (a sketch below).
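If you do watermark previews, it is a few lines with an image library such as sharp; this is a sketch under that assumption, and whether you watermark at all is a policy decision, not a technical one.

import sharp from "sharp";

// Stamp a small "AI draft" label on preview renditions until approval.
async function watermarkPreview(image: Buffer): Promise<Buffer> {
  const label = Buffer.from(
    `<svg width="240" height="40">
       <rect width="240" height="40" fill="black" opacity="0.5"/>
       <text x="12" y="27" font-size="20" fill="white" font-family="sans-serif">AI draft - not approved</text>
     </svg>`
  );
  return sharp(image)
    .composite([{ input: label, gravity: "southeast" }])
    .png()
    .toBuffer();
}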


Step 7 — Surfacing approved assets to XM Cloud

Once assets are approved, they flow to XM Cloud the same way any other asset would:

- editors pick the approved variant through the Content Hub DAM connector when building pages,
- the image field on the item stores the asset’s public link / delivery URL, and
- Experience Edge exposes that field to the head application like any other image field.

Example: consuming assets via Experience Edge in a Next.js head

In an XM Cloud + Next.js head, a typical pattern is:

import { gql } from "@apollo/client";
import { client } from "../lib/edgeClient";

const QUERY = gql`
  query PageAssets($pageId: String!) {
    item(path: $pageId, language: "en") {
      id
      field(name: "HeroImage") {
        jsonValue
      }
    }
  }
`;

export async function getHeroImage(pageId: string) {
  const { data } = await client.query({ query: QUERY, variables: { pageId } });
  const heroField = data.item.field;
  // Assuming the field is a reference to a Content Hub asset or delivery URL.
  return heroField.jsonValue;
}

From the component’s perspective there is nothing special about an AI-generated asset—it’s just another DAM-backed image with the right metadata and renditions.

The difference is entirely in the workflow that created and approved it.
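To make that concrete, a head-side component consuming the field might look like the sketch below; the jsonValue shape (src, alt) and the helper’s import path are assumptions that depend on your field configuration.

import Image from "next/image";
import { getHeroImage } from "../lib/heroImage"; // the helper shown above (path assumed)

// Assumes the field's jsonValue carries { src, alt } pointing at the DAM delivery URL,
// and that the DAM domain is allowed in next.config. Whether the image is
// AI-generated makes no difference here.
export default async function Hero({ pageId }: { pageId: string }) {
  const hero = await getHeroImage(pageId);
  if (!hero?.src) return null;

  return (
    <Image
      src={hero.src}
      alt={hero.alt ?? ""}
      width={1600}
      height={900}
      priority
    />
  );
}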


How this ties into broader AI workflows

This DAM-centric workflow slots neatly into the rest of the AI Workflows theme:

- the same worker-service pattern sits behind the code and content generation flows covered in the other posts in this series, and
- Content Hub plays the role the design system or CMS plays there: the system of record that AI augments but never bypasses.

The principles stay the same:

- AI is always explicitly requested, never silently applied,
- every generated artifact is traceable back to its source, prompt, and requester, and
- nothing reaches production without a human approval step.


Lessons learned

From getting this working end-to-end on real projects, a few guidelines have stuck:

- Make AI explicit in the data model; the flags and status fields are what make the whole flow auditable.
- Ground prompts in DAM metadata and keep the templates in code or configuration, not in people’s heads.
- Keep the worker service thin and provider-agnostic so swapping image models stays cheap.
- Treat review in Content Hub as non-negotiable; once approved, assets reach XM Cloud like any other.


