
Yusufhan Sacak

Posted on • Originally published at yusufhan.dev

Designing an AI-Native Content Publishing Pipeline

Over the past few weeks I've been working on a content hosting system that allows AI tools to publish structured artefacts directly into a production environment.

The goal was simple:

Make it possible to generate and publish structured content using AI — without developers being in the loop.

Today tools like Claude or GPT are excellent at generating content, but they usually stop at the generation step.

Publishing is still manual.

Someone still needs to move the generated content into a CMS, format it, upload assets, and press publish.

What we wanted instead was a pipeline where an AI assistant could generate an artefact and publish it directly to a hosting system in a controlled way.


The Core Idea

Instead of thinking in terms of:


CMS → editor → publish


we flipped the model.

The system is built around artefacts.

An artefact can be:

  • a document
  • a presentation
  • a template
  • structured HTML content
  • any renderable asset

The pipeline then becomes:


AI Tool
↓
MCP Tool Call
↓
Content API
↓
Artefact Storage
↓
Public Rendering Endpoint


This allows AI tools to generate something and immediately turn it into a hosted resource.

Instead of manually moving content through several systems, publishing becomes an infrastructure action.


System Components

The architecture consists of three primary layers.

1. Publishing UI

A simple frontend interface for manual publishing and inspection.

Its responsibilities are:

  • viewing artefacts
  • uploading templates
  • triggering publish actions
  • previewing output

This is mainly used when humans want to inspect or manage artefacts directly.

2. Content API

The API layer acts as the control plane for publishing.

It handles:

  • artefact creation
  • validation
  • storage
  • access control
  • rendering metadata

The API intentionally avoids exposing infrastructure details. Instead it provides a small set of safe publishing primitives that external systems can call without needing to understand the underlying storage or rendering layer.
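As a sketch of those responsibilities, an artefact record plus a validation step might look like this. The field names, allowed kinds, and size limit are assumptions for illustration, not the system's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

# Hypothetical artefact kinds; the real system's taxonomy may differ.
ALLOWED_KINDS = {"document", "presentation", "template", "html"}

@dataclass
class Artefact:
    kind: str
    title: str
    body: str
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def validate(artefact: Artefact) -> list[str]:
    """Return a list of validation errors; an empty list means publishable."""
    errors = []
    if artefact.kind not in ALLOWED_KINDS:
        errors.append(f"unknown kind: {artefact.kind}")
    if not artefact.title.strip():
        errors.append("title must not be empty")
    if len(artefact.body) > 1_000_000:
        errors.append("body exceeds 1 MB limit")
    return errors
```

Keeping validation as a pure function like this makes it easy to run the same checks for UI uploads, automation calls, and MCP tool calls alike.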

One important design choice was making the system API-first.

Publishing is ultimately just an HTTP request.

Because of this, any external system capable of sending an HTTP request can use the pipeline.
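To make "publishing is just an HTTP request" concrete, here is a minimal sketch of what a client-side publish call could look like, using only the Python standard library. The endpoint URL and payload shape are placeholders, not the real API:

```python
import json
import urllib.request

# Hypothetical endpoint; the post does not specify the real API shape.
API_URL = "https://content.example.com/api/artefacts"

def build_publish_request(token: str, kind: str, title: str, body: str):
    """Build an authenticated POST request that publishes one artefact."""
    payload = json.dumps({"kind": kind, "title": title, "body": body}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )

# Sending is then a single call:
#   with urllib.request.urlopen(build_publish_request(...)) as resp:
#       artefact = json.load(resp)
```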

3. Public Rendering Layer

The rendering layer takes stored artefacts and serves them through public endpoints.

Each artefact gets a unique URL that can be shared, embedded, or referenced from any external system.
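A toy version of that mapping, with an in-memory store and a made-up URL scheme, could look like:

```python
import uuid

PUBLIC_BASE = "https://content.example.com/a"  # hypothetical URL scheme
_store: dict[str, str] = {}  # artefact id -> rendered output

def publish(rendered: str) -> str:
    """Store a rendered artefact and mint its unique public URL."""
    artefact_id = uuid.uuid4().hex
    _store[artefact_id] = rendered
    return f"{PUBLIC_BASE}/{artefact_id}"

def render(url: str) -> str:
    """Serve the artefact behind a public URL (lookup by id suffix)."""
    artefact_id = url.rsplit("/", 1)[-1]
    return _store[artefact_id]
```

In the real system the store would be durable and the rendering layer would sit behind a web server, but the contract is the same: one artefact, one stable URL.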


Automation and External Integrations

Making publishing API-driven turned out to be extremely useful.

It means the system can easily integrate with external tools and automation platforms.

For example, automation systems like Workato can trigger publishing as part of a workflow.

A typical flow might look like:


CRM Event
↓
Automation Recipe
↓
HTTP Request to Content API
↓
Artefact Created
↓
Public URL Generated


Instead of manually creating documents or presentations, the automation system can generate and publish them automatically.

This makes the content pipeline usable not only by humans, but by systems.
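The mapping step in the middle of that flow is ordinary data transformation. In a platform like Workato it would live inside a recipe; here is the same idea as plain Python, with invented CRM field names:

```python
def crm_event_to_artefact(event: dict) -> dict:
    """Map a hypothetical CRM 'deal closed' event to a publish payload.

    The field names (deal_name, owner, amount) are illustrative; a real
    recipe would map whatever the CRM actually emits.
    """
    return {
        "kind": "document",
        "title": f"Deal summary: {event['deal_name']}",
        "body": (
            f"<h1>{event['deal_name']}</h1>"
            f"<p>Closed by {event['owner']} for {event['amount']} USD.</p>"
        ),
    }
```

The resulting payload is exactly what the Content API expects, so the automation system needs no knowledge of storage or rendering.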


MCP Integration

The most interesting part is the MCP interface.

This allows AI tools to interact with the content system as if it were a native tool.

Example interaction:


User: Publish this presentation using the standard template.


The assistant then calls the MCP tool, which forwards the request to the content API.

From the user's perspective the workflow becomes:


Prompt → Published Artefact


No manual publishing step is required.
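On the server side, an MCP tool is declared with a name, a description, and a JSON Schema for its input. A sketch of what the publish tool's declaration and handler might look like (the tool name, schema fields, and stubbed handler are assumptions, not the actual implementation):

```python
import uuid

# Hypothetical MCP tool declaration for the publish action.
PUBLISH_TOOL = {
    "name": "publish_artefact",
    "description": "Publish a generated artefact and return its public URL.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "kind": {"type": "string", "enum": ["document", "presentation", "html"]},
            "title": {"type": "string"},
            "body": {"type": "string"},
            "template": {"type": "string", "description": "Template id"},
        },
        "required": ["kind", "title", "body"],
    },
}

def handle_tool_call(arguments: dict) -> dict:
    """Handler stub: a real server would forward this to the Content API."""
    # Here we only echo back the shape of a successful result.
    return {"url": f"https://content.example.com/a/{uuid.uuid4().hex}"}
```

The schema is what lets the assistant fill in arguments safely: it knows which fields are required and which values are allowed before it ever makes the call.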


Why MCP Matters

Without tool interfaces like MCP, AI tools are mostly isolated.

They can generate text, but they cannot safely interact with external systems.

MCP exposes infrastructure as capabilities.

Instead of telling an assistant:

Write a presentation.

you can say:

Publish this presentation.

And the assistant can actually perform the action.


Artefacts Instead of Pages

One design decision that simplified the system was avoiding the concept of pages.

Pages are tied to websites.

Artefacts are portable outputs.

An artefact can be:

  • rendered on a webpage
  • embedded in a CRM
  • attached to an email
  • inserted into a presentation
  • accessed via API

This abstraction keeps the publishing layer flexible and independent from any single frontend.


Safety Considerations

Allowing external systems — especially AI tools — to publish content introduces obvious risks.

A few constraints help keep the system safe:

  • strict template boundaries to prevent arbitrary content injection
  • sanitised HTML rendering to block malicious payloads
  • isolated hosting layer so publishing cannot affect other infrastructure
  • controlled publishing endpoints with authentication and rate limiting

The goal is to allow fast publishing without turning the system into an open execution environment.
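As an illustration of the sanitisation constraint, here is a minimal allowlist filter built on the standard library's `html.parser`. It is a sketch of the idea, not a production filter (a real system would use a vetted sanitisation library), and the tag and attribute allowlists are assumptions:

```python
from html.parser import HTMLParser
from html import escape

ALLOWED_TAGS = {"h1", "h2", "p", "ul", "li", "strong", "em", "a"}
ALLOWED_ATTRS = {"a": {"href"}}

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out: list[str] = []
        self._skip = 0  # depth inside <script>/<style> blocks

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1  # drop the tag and everything inside it
            return
        if tag not in ALLOWED_TAGS:
            return
        kept = [
            (k, v) for k, v in attrs
            if k in ALLOWED_ATTRS.get(tag, set())
            and not (v or "").lower().startswith("javascript:")
        ]
        attr_str = "".join(f' {k}="{escape(v or "")}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = max(0, self._skip - 1)
            return
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self._skip:
            self.out.append(escape(data))

def sanitize(html: str) -> str:
    s = Sanitizer()
    s.feed(html)
    return "".join(s.out)
```

Disallowed tags, event-handler attributes, and `javascript:` URLs are all stripped before an artefact ever reaches the public rendering layer.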


Where This Is Going

AI tools are slowly evolving from chat interfaces into operators of real systems.

Content platforms, automation pipelines, deployment systems, and internal tools are starting to expose structured interfaces that AI can operate.

Publishing is just one example.

But it illustrates a broader shift: AI tools moving from generating information to triggering real infrastructure actions.

The workflow becomes:


Prompt
↓
Tool Call
↓
Infrastructure Action
↓
Published Output


Things that previously required multiple manual steps can now happen in seconds.


Originally published on my personal site:

https://yusufhan.dev/blog/designing-an-ai-native-content-publishing-pipeline
