If XM Cloud is where your copy lives, Content Hub DAM is where your brand imagery lives:
- product photos,
- hero banners,
- campaign visuals,
- icons, logos, and UI elements.
On my projects as a company of one, this is where I lean on agents the most: they help me keep up with variant requests while still respecting rights and brand rules.
Generative image models (Imagen, Gemini, DALL·E, etc.) are powerful, but they raise questions:
- How do we keep everything on-brand?
- How do we track rights and usage?
- How do we avoid a flood of ungoverned variants?
This post describes how I build a DAM-centric image workflow where:
- Content Hub remains the source of truth for assets,
- AI models are workers called by the DAM,
- and human reviewers stay in control of what goes live.
At a high level, the flow I implement is: a trigger in Content Hub, an AI worker that calls the image model, and generated variants written back to the DAM as pending-review assets.
How I identify the right use cases for AI imagery
Not every image should be AI-generated. Good candidates in my projects include:
- Variants for channels or crops (for example landscape vs portrait hero shots),
- Background cleanup or extension for existing assets,
- Simple compositing (for example adding devices or props to a base product photo),
- Placeholder imagery for early content or wireframes.
High-risk areas (where I stay cautious or avoid AI entirely):
- regulated or sensitive industries (healthcare, finance, government),
- imagery implying real people or events (where misrepresentation is risky),
- any case where legal review is mandatory for visuals.
I document these allowed and disallowed use cases in a DAM policy and reference it in my workflows.
How I define trigger points in Content Hub
I want AI workflows to feel like a natural extension of DAM actions.
Common trigger patterns I use:
- Manual actions: a user selects an asset or collection and chooses “Generate variants” or “Edit in AI.”
- Metadata-driven triggers: when a field changes (for example needs_ai_variants = true), a workflow kicks off.
- Webhooks or events: when an asset enters a specific state or collection, a webhook notifies the AI worker.
In Content Hub, I either:
- define actions and pages that expose AI operations,
- or use Sitecore Connect and other integration services to bridge events to an external worker.
The key is to ensure triggers are:
- explicit,
- easy to audit,
- and reversible (I can stop or skip a workflow when needed).
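To make these properties concrete, the trigger check can be a small, explicit function. This is a sketch under my own assumptions: field names like needs_ai_variants, ai_profile, and ai_skip come from my DAM schema, not from any Content Hub built-in.

```python
# Sketch of a metadata-driven trigger check. The metadata field names
# (needs_ai_variants, ai_profile, ai_skip) are from my own schema.
from dataclasses import dataclass

@dataclass
class AssetEvent:
    asset_id: str
    metadata: dict

ALLOWED_PROFILES = {"brand-safe-variant", "concept-art", "placeholder"}

def should_trigger(event: AssetEvent) -> bool:
    """Explicit, auditable trigger: fire only when the flag is set,
    the requested profile is allowed, and no skip flag is present."""
    meta = event.metadata
    return (
        meta.get("needs_ai_variants") is True
        and meta.get("ai_profile") in ALLOWED_PROFILES
        and not meta.get("ai_skip", False)  # reversible: editors can opt out
    )
```

Keeping the decision in one pure function makes it easy to log every evaluation for auditing and to unit-test the skip path.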
How I choose and configure image models with guardrails
There are multiple options for image models:
- Vertex AI Imagen / Gemini image models (on Google Cloud),
- other cloud providers’ image APIs,
- or on-prem / VPC-deployed models for stricter control.
Regardless of vendor, I focus on:
- policy controls: what content is allowed (no logos, no faces, etc.),
- watermarking: techniques like SynthID or equivalents,
- latency and cost: how many images I can afford to generate per request.
How I define model usage profiles
I create simple profiles such as:
- brand-safe-variant: small variations of existing assets (colors, crops, minor background changes).
- concept-art: more freedom for early ideation (used in pre-production only).
- placeholder: quick visuals for wireframes or internal docs.
Each profile has:
- a set of allowed prompts,
- model parameters (strength of guidance, resolution, and so on),
- and output types (JPEG, PNG, WebP).
The AI worker reads these profiles and applies them consistently.
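A minimal way to express these profiles is plain configuration that the worker validates before every call. The parameter names below (guidance_strength, max_images, and so on) are illustrative placeholders, not a vendor API.

```python
# Model usage profiles as plain configuration; parameter names are
# illustrative, not tied to any specific image model API.
PROFILES = {
    "brand-safe-variant": {
        "allowed_prompt_templates": ["variant_recolor", "variant_crop"],
        "guidance_strength": 0.3,   # stay close to the source asset
        "resolution": (1024, 1024),
        "output_types": ["JPEG", "PNG", "WebP"],
        "max_images": 2,
    },
    "concept-art": {
        "allowed_prompt_templates": ["concept_freeform"],
        "guidance_strength": 0.8,   # more creative freedom
        "resolution": (768, 768),
        "output_types": ["PNG"],
        "max_images": 4,
    },
}

def resolve_profile(name: str) -> dict:
    """The worker refuses unknown profiles rather than guessing."""
    if name not in PROFILES:
        raise ValueError(f"Unknown model profile: {name}")
    return PROFILES[name]
```

Failing loudly on unknown profiles is deliberate: a silent fallback would undermine the governance the profiles exist to enforce.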
How I keep prompts structured and reusable
I avoid free-form prompts wherever possible. Instead I:
- define prompt templates per use case,
- take key variables from Content Hub metadata (product name, category, campaign, brand colors),
- and keep these templates in version control.
Agents help me design and refine these templates, but once agreed, I treat them as configuration.
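As a sketch of what "templates as configuration" means in practice, the snippet below fills a template from DAM metadata. The template name and metadata fields are examples from my own setup.

```python
# Prompt templates per use case, filled from Content Hub metadata.
# Template names and metadata fields are examples from my setup.
from string import Template

PROMPT_TEMPLATES = {
    "variant_recolor": Template(
        "Product photo of $product_name for the $campaign campaign, "
        "using brand colors $brand_colors, plain studio background."
    ),
}

def build_prompt(template_name: str, metadata: dict) -> str:
    template = PROMPT_TEMPLATES[template_name]
    # substitute() (not safe_substitute) fails loudly on missing
    # metadata fields, which surfaces schema gaps early
    return template.substitute(metadata)
```

Because the templates live in version control, a prompt change goes through the same review as any other configuration change.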
How I implement the worker: from Content Hub to the model and back
At the heart of the workflow is a worker service that:
- Receives a request (from an action, webhook, or queue).
- Fetches the source asset and metadata from Content Hub.
- Calls the image model with appropriate prompts and parameters.
- Writes the result back to Content Hub as a new asset or variant.
- Updates metadata and queues the asset for review.
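The five steps above can be sketched as a single handler with injected clients. The dam and model objects and their method names are assumptions for illustration; in a real worker they would wrap REST calls to Content Hub and the model API.

```python
# End-to-end worker sketch. The dam and model clients are assumed
# interfaces, injected so the flow stays testable without a live DAM.
def handle_request(request: dict, dam, model) -> list:
    # 1. Fetch the source asset and metadata from Content Hub
    asset = dam.fetch_asset(request["asset_id"])
    # 2. Call the image model with profile-driven prompt and parameters
    images = model.generate(
        prompt=request["prompt"],
        source=asset["binary"],
        params=request["profile_params"],
    )
    # 3. Write each result back as a new derived asset
    new_ids = [dam.create_derived_asset(asset["id"], img) for img in images]
    # 4. Update metadata and queue for review; nothing goes live yet
    for new_id in new_ids:
        dam.set_metadata(new_id, {"ai_generated": True, "status": "pending"})
    return new_ids
```

Injecting the clients keeps the orchestration logic independent of any one vendor SDK, which also makes it easy to swap models per profile.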
Inputs from Content Hub
I include:
- asset ID, path, and renditions,
- relevant metadata (product, campaign, language, region),
- requested profile (brand-safe-variant, concept-art, and similar),
- and any user-supplied hints (for example “add subtle winter theme”).
Model call and output handling
The worker:
- constructs the prompt from templates and metadata,
- calls the model API,
- receives one or more generated images,
- stores them temporarily in secure storage,
- and attaches them back into Content Hub.
Writing back to Content Hub
When creating variants I:
- use Content Hub’s renditions or derived assets concepts,
- apply metadata such as:
- source asset reference,
- model and version used,
- profile name,
- generation timestamp,
- watermark flags.
I mark assets as “pending review” so they do not appear in downstream systems until approved.
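The provenance metadata I described can be assembled in one place so every variant carries it consistently. The field names below match my own schema, not a Content Hub standard.

```python
# Provenance metadata attached to each generated variant; the field
# names follow my own schema, not a Content Hub built-in.
from datetime import datetime, timezone

def provenance_metadata(source_id: str, model: str, profile: str,
                        watermarked: bool) -> dict:
    return {
        "source_asset": source_id,          # link back to the original
        "model_version": model,             # which model produced it
        "profile": profile,                 # which usage profile applied
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "watermarked": watermarked,         # e.g. SynthID or equivalent
        "status": "pending_review",         # keeps it out of downstream systems
        "ai_generated": True,
    }
```

Centralizing this in a helper means a reviewer can always trace a variant back to its source, model, and profile.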
How I handle review queues, approvals, and rights
AI-generated imagery must never bypass existing rights and brand controls.
Building review queues in Content Hub
I use:
- collections or dedicated pages for “AI Pending Review,”
- filters on metadata (for example ai_generated = true AND status = pending).
Reviewers can:
- see the original and variant side-by-side,
- inspect prompts and model metadata,
- approve or reject variants,
- and optionally tweak metadata before approval.
Rights and legal considerations
I make sure:
- license and rights metadata is clearly flagged (for example rights_source = ai_generated),
- usage policies for AI-generated assets are documented,
- and legal and compliance teams are comfortable with where and how these assets are used.
In practice I often:
- restrict AI-generated assets to certain channels (blog, internal pages),
- and exclude them from others (homepage hero, regulated communications).
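This channel restriction is simple enough to enforce in code at publish time. The channel names and the rights_source field are examples from my setup.

```python
# Channel gate for AI-generated assets; channel names and the
# rights_source field are examples from my own configuration.
AI_ALLOWED_CHANNELS = {"blog", "internal"}

def can_publish(asset_meta: dict, channel: str) -> bool:
    """AI-generated assets may only go to explicitly allowed channels;
    everything else (stock, commissioned) is unrestricted here."""
    if asset_meta.get("rights_source") == "ai_generated":
        return channel in AI_ALLOWED_CHANNELS
    return True
```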
How I think about performance, cost, and monitoring
Image generation and editing can be expensive and slow if not managed.
Batching and queuing requests
Instead of calling the model synchronously for each click I:
- push jobs to a queue,
- process them asynchronously,
- and update Content Hub as jobs complete.
I expose status to users:
- “In progress,”
- “Completed,”
- “Failed (with error details).”
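A minimal in-process sketch of this queue-with-status pattern is below; a production setup would use a real broker (Pub/Sub, Service Bus, or similar) rather than an in-memory dict, but the status model is the same.

```python
# Minimal in-process job queue sketch; production would use a real
# message broker instead of an in-memory dict.
from enum import Enum

class JobStatus(Enum):
    IN_PROGRESS = "In progress"
    COMPLETED = "Completed"
    FAILED = "Failed"

class JobQueue:
    def __init__(self):
        self.jobs = {}

    def submit(self, job_id: str, payload: dict):
        """Accept the job immediately; the model call happens async."""
        self.jobs[job_id] = {"payload": payload,
                             "status": JobStatus.IN_PROGRESS,
                             "error": None}

    def complete(self, job_id: str):
        self.jobs[job_id]["status"] = JobStatus.COMPLETED

    def fail(self, job_id: str, error: str):
        # Surface error details so editors see why a job failed
        self.jobs[job_id].update(status=JobStatus.FAILED, error=error)
```

The editor-facing status labels map directly onto the enum values, so Content Hub only ever needs to read one field.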
Monitoring usage and cost
I track:
- number of images generated per profile and per project,
- average latency and error rates,
- cost per asset and per campaign.
Dashboards or reports help me spot:
- misuse (for example repeated generations for the same asset),
- and opportunities to optimize parameters (lower resolution, fewer variants).
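Spotting the misuse case above (repeated generations for the same asset) is a small aggregation over the usage log. The event shape and threshold here are assumptions for illustration.

```python
# Usage-log aggregation sketch for spotting repeated generations
# against the same asset; event shape and threshold are illustrative.
from collections import Counter

def repeated_generations(events: list, threshold: int = 5) -> list:
    """Return asset IDs that were generated more than `threshold` times."""
    counts = Counter(e["asset_id"] for e in events)
    return [asset_id for asset_id, n in counts.items() if n > threshold]
```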
How DAM workflows fit into the broader AI ecosystem
This DAM-focused workflow can connect to:
- XM Cloud: use AI-generated variants as hero images or content-specific visuals.
- Editorial copilots: surface available asset variants as suggestions when editors write copy.
- Content operations pipelines: treat image generation as a step alongside copy refinement and translation.
The same governance principles apply:
- clear allowed use cases,
- strong approvals,
- traceability,
- and configuration-as-code for prompts and profiles.
Useful links
- Sitecore Content Hub documentation — https://doc.sitecore.com/ch/en/users/content-hub/content-hub.html
- Sitecore XM Cloud documentation — https://doc.sitecore.com/xmc/en/home.html
- Google Cloud Vertex AI Imagen — https://cloud.google.com/vertex-ai/docs/generative-ai/image/overview
- Responsible AI image guidance (Google) — https://cloud.google.com/vertex-ai/docs/generative-ai/image/responsible-ai
Related posts
- AI-Powered Stack — Series overview and roadmap
- AI-Powered Stack — My Sitecore delivery stack for 2025
- AI-Powered Stack — Sprint zero for XM Cloud with agentic AI
- AI-Powered Stack — Working with AI as your Sitecore BA, architect, and PM
- AI-Powered Stack — Fast POCs from prompt to XM Cloud components
In the next post—AI Workflows — Content operations pipelines with LangGraph-style agents—we will zoom out and design end-to-end pipelines that combine text, images, and approvals in a single orchestrated flow.