
AI Workflows — Content operations pipelines with LangGraph-style agents


When I first started experimenting with connecting LangGraph workflows to Sitecore projects, my goal was something simple yet practical: let AI agents take over the repetitive parts of content operations while editors keep full control over what gets published.

Over the last few months I’ve experimented with LangGraph-style workflows that connect XM Cloud, Content Hub, CDP/Personalize, and the channels editors already use (Slack, Teams, email).

This write-up summarizes what actually worked, what didn’t, and the patterns I’d reuse if I were starting again.

The focus is content operations:

  • assisted content creation and editing,
  • translation and localization,
  • image generation and editing,
  • and an optimization loop driven by CDP/Personalize signals.

All of it is orchestrated by LangGraph running as a cloud‑hosted service, with Sitecore products on one side and editors on the other.


LangGraph as the content workflow orchestrator

At a high level, the architecture I’ve had the best results with looks like this:

[Architecture diagram] A LangGraph orchestrator (LangGraph Cloud / Service) sits between the Sitecore products and the people:

  • Sitecore side: XM Cloud (headless / Edge), Content Hub (content & DAM), and CDP / Personalize.
  • Flows: content create/edit, translation & localization, image gen/edit, and an optimization loop.
  • Human side: editors & marketers, reached via Slack / Teams / Email with tasks, approvals, and notifications.

Key choices:

  • LangGraph runs as a cloud-hosted, versioned service that Sitecore systems call over HTTP.
  • Every AI write lands as a new draft or language version, never on published content.
  • Humans review and approve through the channels they already use (Slack, Teams, email).
  • CDP/Personalize supplies feedback signals; it never writes content.

The rest of the post dives into three representative flows, then into implementation details.


Flow 1: Assisted content creation & editing for XM Cloud

The first workflow I usually build is an editorial copilot for XM Cloud pages and articles.

Scenario

An editor is working on a new article or landing page in XM Cloud. They want help with:

  • tightening the existing copy,
  • generating alternative headlines and intros,
  • classifying and tagging the content (topic, product, audience),
  • and producing SEO-leaning variants.

But they still want to stay in control of the content and publishing.

High-level workflow

[Workflow diagram] Editor triggers the run → fetch source content from XM Cloud → classify & tag (topic, product, audience) → generate suggestions & variants → moderate & policy check → send to editor for review → editor accepts or edits manually → write back as a new draft version → ready for the normal XM Cloud workflow.

In practice this looks like:

  1. Trigger

    • I expose a simple command in the head (e.g., a button in a custom editor UI) that calls a small backend service.
    • That service sends a POST to the LangGraph endpoint with the XM item ID and the fields to work on.
  2. Fetch & classify

    • A LangGraph node calls Experience Edge Preview to fetch the item content and metadata.
    • Another node classifies the content (topic, product line, funnel stage) using a model plus a knowledge base of existing content.
  3. Draft suggestions

    • One or more nodes generate:
      • tightened version of the existing copy,
      • alternative headlines and intros,
      • optional SEO‑leaning variant with different emphasis.
    • Prompts are grounded by brand voice docs stored in Content Hub (or a simple internal knowledge base).
  4. Moderation

    • A moderation node checks for policy violations, PII, regulatory issues, etc.
    • If anything is flagged, the flow halts and posts to a moderation channel instead of writing back.
  5. Review & apply

    • LangGraph posts a summary + diff to Slack/Teams (or sends a callback the head can poll).
    • The editor chooses which suggestions to accept, optionally edits them, and then confirms.
    • A final node writes the accepted text into XM Cloud as a new draft version, never directly to the published one.

Example LangGraph integration

Below is a simplified version of how I modeled that flow. It omits error handling and auth for clarity.

from typing import TypedDict, List, Optional
from langgraph.graph import StateGraph, END
import httpx

XM_EDGE_ENDPOINT = "https://edge.sitecorecloud.io/api/graphql/v1"


class ContentState(TypedDict):
    item_id: str
    language: str
    fields: dict
    classification: dict
    suggestions: dict
    moderation_flags: List[str]
    editor_decision: Optional[dict]


async def fetch_xm_content(state: ContentState) -> ContentState:
    query = """
    query Item($id: String!, $language: String!) {
      item(path: $id, language: $language) {
        id
        name
        fields {
          name
          value
        }
      }
    }
    """
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.post(
            XM_EDGE_ENDPOINT,
            json={"query": query, "variables": {"id": state["item_id"], "language": state["language"]}},
            headers={"sc_apikey": "<EDGE_PREVIEW_API_KEY>"},
        )
    data = resp.json()["data"]["item"]
    state["fields"] = {f["name"]: f["value"] for f in data["fields"]}
    return state


async def generate_suggestions(state: ContentState) -> ContentState:
    # Pseudo-call to an LLM; in practice use your provider of choice.
    body = state["fields"].get("Body", "")
    # ... call LLM with body + brand guidelines ...
    state["suggestions"] = {
        "tightened": body,     # replace with actual LLM output
        "headline_options": [],  # etc.
    }
    return state


async def moderate(state: ContentState) -> ContentState:
    # Call moderation provider or custom policy checker.
    state["moderation_flags"] = []
    return state


async def apply_changes(state: ContentState) -> ContentState:
    decision = state["editor_decision"]
    if not decision:
        return state
    # Call XM Cloud management API / headless API to write a new draft version.
    # I keep this in a separate module with strong typing and tests.
    return state


graph = StateGraph(ContentState)
graph.add_node("fetch_xm_content", fetch_xm_content)
graph.add_node("generate_suggestions", generate_suggestions)
graph.add_node("moderate", moderate)
graph.add_node("apply_changes", apply_changes)

graph.set_entry_point("fetch_xm_content")
graph.add_edge("fetch_xm_content", "generate_suggestions")
graph.add_edge("generate_suggestions", "moderate")
graph.add_edge("moderate", "apply_changes")
graph.add_edge("apply_changes", END)

editorial_copilot_flow = graph.compile()

In a real implementation I add:

  • auth and error handling around the Edge and write-back calls,
  • structured logging of intermediate graph events,
  • a human-in-the-loop pause between review and apply_changes,
  • and a conditional edge that halts the flow when moderation flags anything (sketched below).

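For example, the moderation halt from step 4 maps naturally onto a LangGraph conditional edge. A minimal sketch, which would replace the straight moderate → apply_changes edge above (added before graph.compile()):

def route_after_moderation(state: ContentState) -> str:
    # Halt before writing back if moderation flagged anything; the real flow
    # also posts to a moderation channel at this point.
    return "halt" if state["moderation_flags"] else "apply"


graph.add_conditional_edges(
    "moderate",
    route_after_moderation,
    {"apply": "apply_changes", "halt": END},
)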

Flow 2: Translation & localization across XM Cloud and Content Hub

The second workflow I’ve found very impactful is localization, especially when content lives partly in XM Cloud and partly in Content Hub.

Scenario

The goal is not “machine‑translate everything.” It’s:

  • deciding per item whether a human translation job or an AI-assisted path is appropriate,
  • grounding AI translation in glossaries and examples,
  • running light QA before anything is persisted,
  • keeping an optional linguist review step in the loop,
  • and persisting the results as proper XM Cloud language versions or localized Content Hub entities.

Workflow overview

[Workflow diagram] Item updated / locale requested → gather fragments from XM & Content Hub → decide strategy (human vs AI-assisted). Human path: create a translation job in Content Hub / TMS. AI-assisted path: AI translation + glossary → light QA & checks. Either way, an optional linguist review can precede the final step: persist translations (XM versions / CH entities).

Integration points that worked:

  • Experience Edge Preview (GraphQL) for reading source content out of XM Cloud,
  • Content Hub entities for supporting fragments (e.g., product descriptions) and for creating translation jobs when the human path is chosen,
  • and headless writes back to XM Cloud for new language versions, or localized fragments on Content Hub entities.

A simple LangGraph localization flow

from langgraph.graph import StateGraph, END
from typing import TypedDict, List


class LocalizeState(TypedDict):
    item_id: str
    source_language: str
    target_languages: List[str]
    source_fragments: dict
    translations: dict


async def gather_fragments(state: LocalizeState) -> LocalizeState:
    # Fetch main body from XM Cloud (Edge Preview)...
    # Fetch supporting text from Content Hub (e.g., product description)...
    state["source_fragments"] = {
        "title": "...",
        "body": "...",
        "product_short_desc": "...",
    }
    return state


async def translate_fragments(state: LocalizeState) -> LocalizeState:
    translations = {}
    for lang in state["target_languages"]:
        # Call your translation model with glossary and examples.
        translations[lang] = {
            "title": "...",
            "body": "...",
            "product_short_desc": "...",
        }
    state["translations"] = translations
    return state


async def persist_translations(state: LocalizeState) -> LocalizeState:
    # For each target language, write:
    # - a new language version in XM Cloud (headless write)
    # - or localized fragments in Content Hub
    return state


graph = StateGraph(LocalizeState)
graph.add_node("gather_fragments", gather_fragments)
graph.add_node("translate_fragments", translate_fragments)
graph.add_node("persist_translations", persist_translations)

graph.set_entry_point("gather_fragments")
graph.add_edge("gather_fragments", "translate_fragments")
graph.add_edge("translate_fragments", "persist_translations")
graph.add_edge("persist_translations", END)

localization_flow = graph.compile()

In reality I augment this with:

  • a terminology glossary fed into the translation prompts,
  • light automated QA on the output (see the sketch below),
  • an optional linguist review step before anything is persisted,
  • and per-item routing between the human and AI-assisted paths.

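As one flavor of the light QA step, here is a minimal, hypothetical example; the specific checks are placeholders for whatever your locales actually need:

from typing import List


def qa_check(source: str, translated: str) -> List[str]:
    # Hypothetical checks; swap in terminology and placeholder validation as needed.
    issues = []
    if not translated.strip():
        issues.append("empty translation")
    # Translations wildly shorter or longer than the source are usually broken.
    ratio = len(translated) / max(len(source), 1)
    if ratio < 0.3 or ratio > 3.0:
        issues.append(f"suspicious length ratio: {ratio:.2f}")
    return issues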

Flow 3: Image generation and editing with Content Hub DAM

Once text workflows are stable, image workflows are a natural extension. Content Hub makes this easier because it already owns asset metadata and storage.

Scenario

An editor working in Content Hub wants on-brand image variants for a campaign or content item: generated from the copy, audience, and locale, reviewed before use, and wired into XM Cloud without manual round-trips through the DAM.

Workflow overview

  1. An editor creates or updates a Content Hub campaign or content item and flags it as “needs image variants.”
  2. LangGraph picks up the item, reads:
    • the copy,
    • target audience,
    • locale,
    • and any existing brand imagery.
  3. A node generates prompt candidates for an image model based on that context and brand rules.
  4. Another node calls the image generation API (DALL·E, Midjourney, Stable Diffusion, etc.).
  5. Generated images are ingested back into Content Hub DAM as assets with:
    • tags,
    • source prompts,
    • usage restrictions.
  6. Editors get a notification with thumbnails and approve which ones to use in XM Cloud.
  7. A final node updates the relevant image fields in XM Cloud (or in Content Hub variants used by XM Cloud).

Integration diagram

[Integration diagram] A Content Hub campaign / content item flagged “needs image variants” kicks off the LangGraph image workflow → the workflow calls the image model (API) → generated images are uploaded (asset + metadata) to Content Hub DAM → the proposed image selection is applied to the XM Cloud page / component.

From a technical standpoint this flow looks very similar to the editorial one; the difference is:

  • the generation step calls an image model instead of a text LLM (see the sketch below),
  • the write-back target is Content Hub DAM, with tags, source prompts, and usage restrictions stored as asset metadata,
  • and editor approval happens from thumbnails before anything is referenced from XM Cloud.

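Mirroring the structure of flow 1, here is a minimal sketch of the image workflow graph. The image-model call and the DAM upload are stubbed, since those APIs vary by provider:

from typing import TypedDict, List
from langgraph.graph import StateGraph, END


class ImageState(TypedDict):
    item_id: str
    copy: str
    audience: str
    prompts: List[str]
    asset_ids: List[str]


async def build_prompts(state: ImageState) -> ImageState:
    # Turn copy, audience, and brand rules into image-model prompt candidates.
    # In practice this is an LLM call grounded in brand guidelines.
    state["prompts"] = [f"Hero image for a {state['audience']} audience: {state['copy'][:100]}"]
    return state


async def generate_and_ingest(state: ImageState) -> ImageState:
    # Call the image model of choice, then upload the results to Content Hub DAM
    # with tags, the source prompt, and usage restrictions as metadata (stubbed).
    state["asset_ids"] = []
    return state


graph = StateGraph(ImageState)
graph.add_node("build_prompts", build_prompts)
graph.add_node("generate_and_ingest", generate_and_ingest)
graph.set_entry_point("build_prompts")
graph.add_edge("build_prompts", "generate_and_ingest")
graph.add_edge("generate_and_ingest", END)

image_flow = graph.compile()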

Experimenting with CDP/Personalize

Example: underperforming landing pages

  1. A scheduled LangGraph flow calls CDP/Personalize APIs to:

    • retrieve experiment results and engagement metrics for key pages,
    • flag pages with poor performance or high bounce rates.
  2. For each flagged page, the flow:

    • fetches the underlying XM Cloud item,
    • summarizes what the page is trying to do and how it’s currently structured,
    • compares it to higher‑performing pages in the same category.
  3. An LLM node then proposes:

    • alternative headlines,
    • restructured content blocks,
    • or alternative calls to action.
  4. The flow creates tasks (e.g., in Content Hub, JIRA, or Azure DevOps) with:

    • a summary of the issue,
    • suggested changes,
    • links to metrics from CDP/Personalize,
    • and deep links to the relevant XM Cloud item.
  5. Editors review, accept or adjust, and then rerun the normal publish flow.

The key is that CDP/Personalize is not writing content—it’s providing feedback signals that LangGraph uses to prioritize work and pre‑fill suggestions.
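To make the triage in step 1 concrete, here is a hedged sketch; the metric field names (bounce_rate, engagement) are assumptions about the shape of whatever payload you pull from the CDP/Personalize APIs:

from typing import List


def flag_underperformers(pages: List[dict], bounce_threshold: float = 0.7) -> List[dict]:
    # Assumed field names; map these onto your actual CDP/Personalize metrics.
    return [
        page for page in pages
        if page.get("bounce_rate", 0.0) > bounce_threshold
        or page.get("engagement", 1.0) < 0.2
    ]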


Hosting LangGraph and exposing flows as APIs

In all of these experiments I’ve had better results treating LangGraph as a versioned backend service rather than as local scripts.

Example FastAPI wrapper

Here’s a highly simplified example of how I expose a flow as an HTTP endpoint that Sitecore systems can call:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class RunEditorialRequest(BaseModel):
    item_id: str
    language: str


# Assume editorial_copilot_flow is the compiled graph from earlier.

@app.post("/workflows/editorial-copilot")
async def run_editorial_copilot(req: RunEditorialRequest):
    state = {"item_id": req.item_id, "language": req.language}
    result = None
    async for event in editorial_copilot_flow.astream(state):
        # You can stream intermediate events to logs if desired.
        result = event
    return result

On the Sitecore side, this is just another HTTP call from:

  • the custom editor UI in the head (the button from flow 1),
  • a Content Hub trigger or action when an item is flagged,
  • or a scheduled job driving the optimization loop.

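For instance, a hypothetical caller (the host and item ID are placeholders):

import httpx

# Hypothetical host; the route matches the FastAPI app above.
resp = httpx.post(
    "https://langgraph-flows.example.com/workflows/editorial-copilot",
    json={"item_id": "<XM_ITEM_ID>", "language": "en"},
    timeout=60,
)
print(resp.status_code, resp.json())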
I keep:

  • the flow definitions versioned and deployed together with this API,
  • secrets (Edge API keys, tokens) in environment configuration rather than hard-coded in the graph code,
  • and every write operation against Sitecore in a separate, strongly typed, well-tested module.


Lessons learned, so far

After a few iterations, some patterns have stood out:

  • Write drafts, never published content: every AI write lands as a new draft or language version and goes through the normal Sitecore workflow.
  • Keep a human approval step in every flow; Slack/Teams summaries with diffs work well for editors.
  • Gate write-backs behind moderation, and halt with a notification instead of writing when anything is flagged.
  • Treat LangGraph as a versioned backend service with HTTP endpoints, not local scripts.
  • Use CDP/Personalize as a source of feedback signals to prioritize work, not as a content writer.


