
AI Workflows — Image generation and editing from Content Hub DAM

If XM Cloud is where your copy lives, Content Hub DAM is where your brand imagery lives. On my projects as a company of one, this is where I lean on agents the most: they help me keep up with variant requests while still respecting rights and brand rules.

Generative image models (Imagen, Gemini, DALL·E, etc.) are powerful, but they raise questions:

This post describes how I build a DAM-centric image workflow where:

At a high level, the flow I implement looks like this:

Editor or creative → Content Hub action or metadata flag → AI worker (image model call) → AI variants stored in DAM → Review queue (rights and brand) → Approved assets (XM Cloud and channels)


How I identify the right use cases for AI imagery

Not every image should be AI-generated. Good candidates in my projects include:

High-risk areas (where I stay cautious or avoid AI entirely):

I document these allowed and disallowed use cases in a DAM policy and reference it in my workflows.


How I define trigger points in Content Hub

I want AI workflows to feel like a natural extension of DAM actions.

Common trigger patterns I use:

In Content Hub, I either:

The key is to ensure triggers are:
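One property I always design for is idempotency: a webhook can be delivered twice, and a retried action must not generate duplicate variants. A minimal sketch of that check, using hypothetical metadata field names (not actual Content Hub schema properties):

```python
# Sketch: deciding whether a DAM event should trigger generation.
# The fields ai_variant_requested and ai_status are illustrative,
# not real Content Hub properties.

def should_trigger(asset: dict) -> bool:
    """Trigger only for assets explicitly flagged and not yet processed."""
    flagged = asset.get("ai_variant_requested") is True
    already_handled = asset.get("ai_status") in {"queued", "generated", "approved"}
    return flagged and not already_handled

# A repeated webhook delivery for an already-queued asset is a no-op:
asset = {"id": 123, "ai_variant_requested": True, "ai_status": "queued"}
should_trigger(asset)  # False: the trigger is idempotent
```

The explicit flag also keeps generation opt-in: nothing fires just because an asset was uploaded.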


How I choose and configure image models with guardrails

There are multiple options for image models:

Regardless of vendor, I focus on:

How I define model usage profiles

I create simple profiles such as:

Each profile has:

The AI worker reads these profiles and applies them consistently.
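In practice a profile is just configuration the worker looks up by name. A minimal sketch, where the profile names, models, and parameters are illustrative placeholders rather than my production values:

```python
# Sketch: model usage profiles as plain configuration.
# Names and values are illustrative; the real ones come from the DAM policy.

PROFILES = {
    "hero-banner": {"model": "imagen", "size": "1792x1024", "max_variants": 3},
    "thumbnail":   {"model": "dall-e", "size": "512x512",   "max_variants": 1},
}

def resolve_profile(name: str) -> dict:
    """Fail loudly on an unknown profile instead of silently defaulting."""
    if name not in PROFILES:
        raise KeyError(f"No usage profile named {name!r}; check the DAM policy")
    return PROFILES[name]
```

Failing loudly matters here: a silent default could route a hero image through the cheap thumbnail settings without anyone noticing.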

How I keep prompts structured and reusable

I avoid free-form prompts wherever possible. Instead I:

Agents help me design and refine these templates, but once agreed, I treat them as configuration.
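Treating templates as configuration can be as simple as versioned string templates with required placeholders. A sketch, with a made-up template ID and placeholder names:

```python
from string import Template

# Sketch: a versioned prompt template kept as configuration.
# The template ID and placeholders (subject, style) are illustrative.

PROMPT_TEMPLATES = {
    "product-variant-v2": Template(
        "Photorealistic image of $subject, $style, "
        "on a neutral background, no text, no logos"
    ),
}

def build_prompt(template_id: str, fields: dict) -> str:
    # substitute() raises KeyError if a required field is missing,
    # which keeps free-form drift out of the workflow.
    return PROMPT_TEMPLATES[template_id].substitute(fields)

prompt = build_prompt(
    "product-variant-v2",
    {"subject": "a ceramic mug", "style": "soft studio lighting"},
)
```

Because editors only supply the fields, the brand constraints baked into the template ("no text, no logos") cannot be edited away per request.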


How I implement the worker: from Content Hub to the model and back

At the heart of the workflow is a worker service that:

  1. Receives a request (from an action, webhook, or queue).
  2. Fetches the source asset and metadata from Content Hub.
  3. Calls the image model with appropriate prompts and parameters.
  4. Writes the result back to Content Hub as a new asset or variant.
  5. Updates metadata and queues the asset for review.
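The five steps above can be sketched as a small orchestration function. The Content Hub and model calls are stubbed out here; a real worker would replace them with Content Hub REST API calls and a vendor SDK, and the request shape is an assumption:

```python
# Sketch of the worker's five steps, with external calls stubbed.
# fetch_asset / call_image_model / write_variant are placeholders,
# not real Content Hub or vendor APIs.

def fetch_asset(asset_id):             # step 2 (stub)
    return {"id": asset_id, "title": "Hero shot"}

def call_image_model(asset, profile):  # step 3 (stub)
    return b"...image bytes..."

def write_variant(source, image):      # step 4 (stub)
    return {"id": 999, "parent": source["id"]}

def handle_request(request: dict) -> dict:
    asset = fetch_asset(request["asset_id"])             # 2. fetch source + metadata
    image = call_image_model(asset, request["profile"])  # 3. call the image model
    variant = write_variant(asset, image)                # 4. write back as new asset
    variant["status"] = "pending review"                 # 5. queue for review
    return variant

result = handle_request({"asset_id": 123, "profile": "hero-banner"})
```

Keeping the orchestration this thin makes each step independently testable and lets me swap model vendors without touching the DAM side.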

Inputs from Content Hub

I include:

Model call and output handling

The worker:

Writing back to Content Hub

When creating variants I:

I mark assets as “pending review” so they do not appear in downstream systems until approved.
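The write-back metadata is what keeps a variant traceable and invisible downstream until approved. A sketch of the fields I record, with illustrative property names rather than actual Content Hub schema:

```python
import datetime

# Sketch: provenance metadata for an AI-generated variant.
# Property names are illustrative, not real Content Hub fields.

def variant_metadata(source_id: int, model: str, prompt_id: str) -> dict:
    return {
        "relation.source_asset": source_id,    # link back to the original
        "ai.model": model,                     # provenance: which model
        "ai.prompt_template": prompt_id,       # provenance: which template version
        "ai.generated_at": datetime.datetime.now(
            datetime.timezone.utc
        ).isoformat(),
        "workflow.status": "pending review",   # hidden from channels until approved
    }

meta = variant_metadata(123, "imagen", "product-variant-v2")
```

Recording the template version alongside the model means a problematic variant can always be traced back to the exact prompt that produced it.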


How I handle review queues, approvals, and rights

AI-generated imagery must never bypass existing rights and brand controls.

Building review queues in Content Hub

I use:

Reviewers can:

I make sure:

In practice I often:


How I think about performance, cost, and monitoring

Image generation and editing can be expensive and slow if not managed.

Batching and queuing requests

Instead of calling the model synchronously for each click I:

I expose status to users:
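The batching itself can be very simple: drain the pending queue in fixed-size chunks instead of firing one model call per click. A sketch, where the batch size is an illustrative knob:

```python
# Sketch: grouping queued generation requests into fixed-size batches.
# batch_size is a tunable placeholder, not a recommended value.

def batch_requests(queue: list, batch_size: int = 5) -> list:
    """Split the pending queue into batches the worker drains one at a time."""
    return [queue[i:i + batch_size] for i in range(0, len(queue), batch_size)]

batches = batch_requests([{"asset_id": n} for n in range(12)], batch_size=5)
# 12 requests become batches of 5, 5, and 2
```

Between batches the worker can update each request's status metadata, which is what drives the progress users see.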

Monitoring usage and cost

I track:

Dashboards or reports help me spot:
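A minimal cost roll-up behind such a dashboard might look like this. The per-image prices here are made-up placeholders, not real vendor pricing:

```python
from collections import defaultdict

# Sketch: aggregating per-call usage so a dashboard can spot runaway cost.
# PRICE_PER_IMAGE values are placeholders, not actual vendor prices.

PRICE_PER_IMAGE = {"imagen": 0.04, "dall-e": 0.08}  # placeholder USD figures

def summarize_usage(calls: list) -> dict:
    """Roll up call logs into image counts and estimated cost per model."""
    totals = defaultdict(lambda: {"images": 0, "cost": 0.0})
    for call in calls:
        t = totals[call["model"]]
        t["images"] += call["images"]
        t["cost"] += call["images"] * PRICE_PER_IMAGE[call["model"]]
    return dict(totals)

usage = summarize_usage([
    {"model": "imagen", "images": 10},
    {"model": "dall-e", "images": 2},
])
```

Grouping by model (and, in practice, by profile and requester) is what makes an unusual spike attributable rather than just visible.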


How DAM workflows fit into the broader AI ecosystem

This DAM-focused workflow can connect to:

The same governance principles apply:


In the next post—AI Workflows — Content operations pipelines with LangGraph-style agents—we will zoom out and design end-to-end pipelines that combine text, images, and approvals in a single orchestrated flow.

